From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova <khanicov@redhat.com>
To: libvir-list@redhat.com
Subject: [PATCH 01/17] qemu & hypervisor: move qemuDomainObjBeginJobInternal() into hypervisor
Date: Wed, 24 Aug 2022 15:43:24 +0200
Message-Id: <9120590183e15f176ea9a6a4791f6dea0e001f3a.1661348243.git.khanicov@redhat.com>

This patch moves qemuDomainObjBeginJobInternal() into the hypervisor
layer as virDomainObjBeginJobInternal() so that it can be used by
other hypervisor drivers in the following patches.

Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
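[Editor's note, for illustration only — not part of the patch: with the
helper in the shared layer, any driver whose private data embeds a
virDomainJobObj can acquire a job through it. A minimal sketch, assuming
a hypothetical "foo" driver (fooDomainObjPrivate and its "job" member
are made up here); it mirrors the qemu wrapper rewritten at the end of
this patch:

    /* Sketch: the domain object must be locked by the caller, as
     * documented on virDomainObjBeginJobInternal(). Returns 0 on
     * success, -2 on timeout or max_queued overflow, -1 otherwise. */
    static int
    fooDomainObjBeginJob(virDomainObj *vm,
                         virDomainJob job)
    {
        fooDomainObjPrivate *priv = vm->privateData;

        return virDomainObjBeginJobInternal(vm, &priv->job, job,
                                            VIR_AGENT_JOB_NONE,
                                            VIR_ASYNC_JOB_NONE, false);
    }
]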
---
 po/POTFILES                 |   1 +
 src/hypervisor/domain_job.c | 250 ++++++++++++++++++++++++++++++++
 src/hypervisor/domain_job.h |   8 +
 src/libvirt_private.syms    |   1 +
 src/qemu/qemu_domainjob.c   | 284 +++----------------------------------
 5 files changed, 284 insertions(+), 260 deletions(-)

diff --git a/po/POTFILES b/po/POTFILES
index 9621efb0d3..e3a1824834 100644
--- a/po/POTFILES
+++ b/po/POTFILES
@@ -90,6 +90,7 @@ src/hyperv/hyperv_util.c
 src/hyperv/hyperv_wmi.c
 src/hypervisor/domain_cgroup.c
 src/hypervisor/domain_driver.c
+src/hypervisor/domain_job.c
 src/hypervisor/virclosecallbacks.c
 src/hypervisor/virhostdev.c
 src/interface/interface_backend_netcf.c
diff --git a/src/hypervisor/domain_job.c b/src/hypervisor/domain_job.c
index 77110d2a23..ef3bee0248 100644
--- a/src/hypervisor/domain_job.c
+++ b/src/hypervisor/domain_job.c
@@ -10,6 +10,13 @@
 
 #include "domain_job.h"
 #include "viralloc.h"
+#include "virthreadjob.h"
+#include "virlog.h"
+#include "virtime.h"
+
+#define VIR_FROM_THIS VIR_FROM_HYPERV
+
+VIR_LOG_INIT("hypervisor.domain_job");
 
 
 VIR_ENUM_IMPL(virDomainJob,
@@ -247,3 +254,246 @@ virDomainObjCanSetJob(virDomainJobObj *job,
             (newAgentJob == VIR_AGENT_JOB_NONE ||
              job->agentActive == VIR_AGENT_JOB_NONE));
 }
+
+/* Give up waiting for mutex after 30 seconds */
+#define VIR_JOB_WAIT_TIME (1000ull * 30)
+
+/**
+ * virDomainObjBeginJobInternal:
+ * @obj: virDomainObj = domain object
+ * @jobObj: virDomainJobObj = domain job object
+ * @job: virDomainJob to start
+ * @agentJob: virDomainAgentJob to start
+ * @asyncJob: virDomainAsyncJob to start
+ * @nowait: don't wait trying to acquire @job
+ *
+ * Acquires a job for a domain object which must be locked before
+ * calling. If there's already a job running, waits up to
+ * VIR_JOB_WAIT_TIME, after which the function fails reporting
+ * an error unless @nowait is set.
+ *
+ * If @nowait is true this function tries to acquire the job and if
+ * it fails, then it returns immediately without waiting. No
+ * error is reported in this case.
+ *
+ * Returns: 0 on success,
+ *         -2 if unable to start the job because of a timeout or
+ *            the maxQueuedJobs limit,
+ *         -1 otherwise.
+ */
+int
+virDomainObjBeginJobInternal(virDomainObj *obj,
+                             virDomainJobObj *jobObj,
+                             virDomainJob job,
+                             virDomainAgentJob agentJob,
+                             virDomainAsyncJob asyncJob,
+                             bool nowait)
+{
+    unsigned long long now = 0;
+    unsigned long long then = 0;
+    bool nested = job == VIR_JOB_ASYNC_NESTED;
+    const char *blocker = NULL;
+    const char *agentBlocker = NULL;
+    int ret = -1;
+    unsigned long long duration = 0;
+    unsigned long long agentDuration = 0;
+    unsigned long long asyncDuration = 0;
+    const char *currentAPI = virThreadJobGet();
+
+    VIR_DEBUG("Starting job: API=%s job=%s agentJob=%s asyncJob=%s "
+              "(vm=%p name=%s, current job=%s agentJob=%s async=%s)",
+              NULLSTR(currentAPI),
+              virDomainJobTypeToString(job),
+              virDomainAgentJobTypeToString(agentJob),
+              virDomainAsyncJobTypeToString(asyncJob),
+              obj, obj->def->name,
+              virDomainJobTypeToString(jobObj->active),
+              virDomainAgentJobTypeToString(jobObj->agentActive),
+              virDomainAsyncJobTypeToString(jobObj->asyncJob));
+
+    if (virTimeMillisNow(&now) < 0)
+        return -1;
+
+    jobObj->jobsQueued++;
+    then = now + VIR_JOB_WAIT_TIME;
+
+ retry:
+    if (job != VIR_JOB_ASYNC &&
+        job != VIR_JOB_DESTROY &&
+        jobObj->maxQueuedJobs &&
+        jobObj->jobsQueued > jobObj->maxQueuedJobs) {
+        goto error;
+    }
+
+    while (!nested && !virDomainNestedJobAllowed(jobObj, job)) {
+        if (nowait)
+            goto cleanup;
+
+        VIR_DEBUG("Waiting for async job (vm=%p name=%s)", obj, obj->def->name);
+        if (virCondWaitUntil(&jobObj->asyncCond, &obj->parent.lock, then) < 0)
+            goto error;
+    }
+
+    while (!virDomainObjCanSetJob(jobObj, job, agentJob)) {
+        if (nowait)
+            goto cleanup;
+
+        VIR_DEBUG("Waiting for job (vm=%p name=%s)", obj, obj->def->name);
+        if (virCondWaitUntil(&jobObj->cond, &obj->parent.lock, then) < 0)
+            goto error;
+    }
+
+    /* No job is active but a new async job could have been started while obj
+     * was unlocked, so we need to recheck it. */
+    if (!nested && !virDomainNestedJobAllowed(jobObj, job))
+        goto retry;
+
+    if (obj->removing) {
+        char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+        virUUIDFormat(obj->def->uuid, uuidstr);
+        virReportError(VIR_ERR_NO_DOMAIN,
+                       _("no domain with matching uuid '%s' (%s)"),
+                       uuidstr, obj->def->name);
+        goto cleanup;
+    }
+
+    ignore_value(virTimeMillisNow(&now));
+
+    if (job) {
+        virDomainObjResetJob(jobObj);
+
+        if (job != VIR_JOB_ASYNC) {
+            VIR_DEBUG("Started job: %s (async=%s vm=%p name=%s)",
+                      virDomainJobTypeToString(job),
+                      virDomainAsyncJobTypeToString(jobObj->asyncJob),
+                      obj, obj->def->name);
+            jobObj->active = job;
+            jobObj->owner = virThreadSelfID();
+            jobObj->ownerAPI = g_strdup(virThreadJobGet());
+            jobObj->started = now;
+        } else {
+            VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
+                      virDomainAsyncJobTypeToString(asyncJob),
+                      obj, obj->def->name);
+            virDomainObjResetAsyncJob(jobObj);
+            jobObj->current = virDomainJobDataInit(jobObj->jobDataPrivateCb);
+            jobObj->current->status = VIR_DOMAIN_JOB_STATUS_ACTIVE;
+            jobObj->asyncJob = asyncJob;
+            jobObj->asyncOwner = virThreadSelfID();
+            jobObj->asyncOwnerAPI = g_strdup(virThreadJobGet());
+            jobObj->asyncStarted = now;
+            jobObj->current->started = now;
+        }
+    }
+
+    if (agentJob) {
+        virDomainObjResetAgentJob(jobObj);
+        VIR_DEBUG("Started agent job: %s (vm=%p name=%s job=%s async=%s)",
+                  virDomainAgentJobTypeToString(agentJob),
+                  obj, obj->def->name,
+                  virDomainJobTypeToString(jobObj->active),
+                  virDomainAsyncJobTypeToString(jobObj->asyncJob));
+        jobObj->agentActive = agentJob;
+        jobObj->agentOwner = virThreadSelfID();
+        jobObj->agentOwnerAPI = g_strdup(virThreadJobGet());
+        jobObj->agentStarted = now;
+    }
+
+    if (virDomainTrackJob(job) && jobObj->cb &&
+        jobObj->cb->saveStatusPrivate)
+        jobObj->cb->saveStatusPrivate(obj);
+
+    return 0;
+
+ error:
+    ignore_value(virTimeMillisNow(&now));
+    if (jobObj->active && jobObj->started)
+        duration = now - jobObj->started;
+    if (jobObj->agentActive && jobObj->agentStarted)
+        agentDuration = now - jobObj->agentStarted;
+    if (jobObj->asyncJob && jobObj->asyncStarted)
+        asyncDuration = now - jobObj->asyncStarted;
+
+    VIR_WARN("Cannot start job (%s, %s, %s) for domain %s; "
+             "current job is (%s, %s, %s) "
+             "owned by (%llu %s, %llu %s, %llu %s (flags=0x%lx)) "
+             "for (%llus, %llus, %llus)",
+             virDomainJobTypeToString(job),
+             virDomainAgentJobTypeToString(agentJob),
+             virDomainAsyncJobTypeToString(asyncJob),
+             obj->def->name,
+             virDomainJobTypeToString(jobObj->active),
+             virDomainAgentJobTypeToString(jobObj->agentActive),
+             virDomainAsyncJobTypeToString(jobObj->asyncJob),
+             jobObj->owner, NULLSTR(jobObj->ownerAPI),
+             jobObj->agentOwner, NULLSTR(jobObj->agentOwnerAPI),
+             jobObj->asyncOwner, NULLSTR(jobObj->asyncOwnerAPI),
+             jobObj->apiFlags,
+             duration / 1000, agentDuration / 1000, asyncDuration / 1000);
+
+    if (job) {
+        if (nested || virDomainNestedJobAllowed(jobObj, job))
+            blocker = jobObj->ownerAPI;
+        else
+            blocker = jobObj->asyncOwnerAPI;
+    }
+
+    if (agentJob)
+        agentBlocker = jobObj->agentOwnerAPI;
+
+    if (errno == ETIMEDOUT) {
+        if (blocker && agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s agent=%s)"),
+                           blocker, agentBlocker);
+        } else if (blocker) {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s)"),
+                           blocker);
+        } else if (agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT,
_("cannot acquire state change " + "lock (held by agent=3D%s)"), + agentBlocker); + } else { + virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s", + _("cannot acquire state change lock")); + } + ret =3D -2; + } else if (jobObj->maxQueuedJobs && + jobObj->jobsQueued > jobObj->maxQueuedJobs) { + if (blocker && agentBlocker) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("cannot acquire state change " + "lock (held by monitor=3D%s agent=3D%s) " + "due to max_queued limit"), + blocker, agentBlocker); + } else if (blocker) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("cannot acquire state change " + "lock (held by monitor=3D%s) " + "due to max_queued limit"), + blocker); + } else if (agentBlocker) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("cannot acquire state change " + "lock (held by agent=3D%s) " + "due to max_queued limit"), + agentBlocker); + } else { + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("cannot acquire state change lock " + "due to max_queued limit")); + } + ret =3D -2; + } else { + virReportSystemError(errno, "%s", _("cannot acquire job mutex")); + } + + cleanup: + jobObj->jobsQueued--; + return ret; +} diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h index 334b59c465..d7409c05f0 100644 --- a/src/hypervisor/domain_job.h +++ b/src/hypervisor/domain_job.h @@ -234,3 +234,11 @@ bool virDomainNestedJobAllowed(virDomainJobObj *jobs, = virDomainJob newJob); bool virDomainObjCanSetJob(virDomainJobObj *job, virDomainJob newJob, virDomainAgentJob newAgentJob); + +int virDomainObjBeginJobInternal(virDomainObj *obj, + virDomainJobObj *jobObj, + virDomainJob job, + virDomainAgentJob agentJob, + virDomainAsyncJob asyncJob, + bool nowait) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3); diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index ac2802095e..51efd64ff2 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -1596,6 +1596,7 @@ virDomainJobStatusToType; virDomainJobTypeFromString; virDomainJobTypeToString; virDomainNestedJobAllowed; +virDomainObjBeginJobInternal; virDomainObjCanSetJob; virDomainObjClearJob; virDomainObjInitJob; diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 66a91a3e4f..a6ea7b2f58 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -697,248 +697,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj) priv->job.asyncOwner =3D 0; } =20 -/* Give up waiting for mutex after 30 seconds */ -#define QEMU_JOB_WAIT_TIME (1000ull * 30) - -/** - * qemuDomainObjBeginJobInternal: - * @obj: domain object - * @job: virDomainJob to start - * @asyncJob: virDomainAsyncJob to start - * @nowait: don't wait trying to acquire @job - * - * Acquires job for a domain object which must be locked before - * calling. If there's already a job running waits up to - * QEMU_JOB_WAIT_TIME after which the functions fails reporting - * an error unless @nowait is set. - * - * If @nowait is true this function tries to acquire job and if - * it fails, then it returns immediately without waiting. No - * error is reported in this case. - * - * Returns: 0 on success, - * -2 if unable to start job because of timeout or - * maxQueuedJobs limit, - * -1 otherwise. 
- */
-static int ATTRIBUTE_NONNULL(1)
-qemuDomainObjBeginJobInternal(virDomainObj *obj,
-                              virDomainJob job,
-                              virDomainAgentJob agentJob,
-                              virDomainAsyncJob asyncJob,
-                              bool nowait)
-{
-    qemuDomainObjPrivate *priv = obj->privateData;
-    unsigned long long now;
-    unsigned long long then;
-    bool nested = job == VIR_JOB_ASYNC_NESTED;
-    const char *blocker = NULL;
-    const char *agentBlocker = NULL;
-    int ret = -1;
-    unsigned long long duration = 0;
-    unsigned long long agentDuration = 0;
-    unsigned long long asyncDuration = 0;
-    const char *currentAPI = virThreadJobGet();
-
-    VIR_DEBUG("Starting job: API=%s job=%s agentJob=%s asyncJob=%s "
-              "(vm=%p name=%s, current job=%s agentJob=%s async=%s)",
-              NULLSTR(currentAPI),
-              virDomainJobTypeToString(job),
-              virDomainAgentJobTypeToString(agentJob),
-              virDomainAsyncJobTypeToString(asyncJob),
-              obj, obj->def->name,
-              virDomainJobTypeToString(priv->job.active),
-              virDomainAgentJobTypeToString(priv->job.agentActive),
-              virDomainAsyncJobTypeToString(priv->job.asyncJob));
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-
-    priv->job.jobsQueued++;
-    then = now + QEMU_JOB_WAIT_TIME;
-
- retry:
-    if (job != VIR_JOB_ASYNC &&
-        job != VIR_JOB_DESTROY &&
-        priv->job.maxQueuedJobs &&
-        priv->job.jobsQueued > priv->job.maxQueuedJobs) {
-        goto error;
-    }
-
-    while (!nested && !virDomainNestedJobAllowed(&priv->job, job)) {
-        if (nowait)
-            goto cleanup;
-
-        VIR_DEBUG("Waiting for async job (vm=%p name=%s)", obj, obj->def->name);
-        if (virCondWaitUntil(&priv->job.asyncCond, &obj->parent.lock, then) < 0)
-            goto error;
-    }
-
-    while (!virDomainObjCanSetJob(&priv->job, job, agentJob)) {
-        if (nowait)
-            goto cleanup;
-
-        VIR_DEBUG("Waiting for job (vm=%p name=%s)", obj, obj->def->name);
-        if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0)
-            goto error;
-    }
-
-    /* No job is active but a new async job could have been started while obj
-     * was unlocked, so we need to recheck it. */
-    if (!nested && !virDomainNestedJobAllowed(&priv->job, job))
-        goto retry;
-
-    if (obj->removing) {
-        char uuidstr[VIR_UUID_STRING_BUFLEN];
-
-        virUUIDFormat(obj->def->uuid, uuidstr);
-        virReportError(VIR_ERR_NO_DOMAIN,
-                       _("no domain with matching uuid '%s' (%s)"),
-                       uuidstr, obj->def->name);
-        goto cleanup;
-    }
-
-    ignore_value(virTimeMillisNow(&now));
-
-    if (job) {
-        virDomainObjResetJob(&priv->job);
-
-        if (job != VIR_JOB_ASYNC) {
-            VIR_DEBUG("Started job: %s (async=%s vm=%p name=%s)",
-                      virDomainJobTypeToString(job),
-                      virDomainAsyncJobTypeToString(priv->job.asyncJob),
-                      obj, obj->def->name);
-            priv->job.active = job;
-            priv->job.owner = virThreadSelfID();
-            priv->job.ownerAPI = g_strdup(virThreadJobGet());
-            priv->job.started = now;
-        } else {
-            VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
-                      virDomainAsyncJobTypeToString(asyncJob),
-                      obj, obj->def->name);
-            virDomainObjResetAsyncJob(&priv->job);
-            priv->job.current = virDomainJobDataInit(priv->job.jobDataPrivateCb);
-            priv->job.current->status = VIR_DOMAIN_JOB_STATUS_ACTIVE;
-            priv->job.asyncJob = asyncJob;
-            priv->job.asyncOwner = virThreadSelfID();
-            priv->job.asyncOwnerAPI = g_strdup(virThreadJobGet());
-            priv->job.asyncStarted = now;
-            priv->job.current->started = now;
-        }
-    }
-
-    if (agentJob) {
-        virDomainObjResetAgentJob(&priv->job);
-
-        VIR_DEBUG("Started agent job: %s (vm=%p name=%s job=%s async=%s)",
-                  virDomainAgentJobTypeToString(agentJob),
-                  obj, obj->def->name,
-                  virDomainJobTypeToString(priv->job.active),
-                  virDomainAsyncJobTypeToString(priv->job.asyncJob));
-        priv->job.agentActive = agentJob;
-        priv->job.agentOwner = virThreadSelfID();
-        priv->job.agentOwnerAPI = g_strdup(virThreadJobGet());
-        priv->job.agentStarted = now;
-    }
-
-    if (virDomainTrackJob(job) && priv->job.cb)
-        priv->job.cb->saveStatusPrivate(obj);
-
-    return 0;
-
- error:
-    ignore_value(virTimeMillisNow(&now));
-    if (priv->job.active && priv->job.started)
-        duration = now - priv->job.started;
-    if (priv->job.agentActive && priv->job.agentStarted)
-        agentDuration = now - priv->job.agentStarted;
-    if (priv->job.asyncJob && priv->job.asyncStarted)
-        asyncDuration = now - priv->job.asyncStarted;
-
-    VIR_WARN("Cannot start job (%s, %s, %s) in API %s for domain %s; "
-             "current job is (%s, %s, %s) "
-             "owned by (%llu %s, %llu %s, %llu %s (flags=0x%lx)) "
-             "for (%llus, %llus, %llus)",
-             virDomainJobTypeToString(job),
-             virDomainAgentJobTypeToString(agentJob),
-             virDomainAsyncJobTypeToString(asyncJob),
-             NULLSTR(currentAPI),
-             obj->def->name,
-             virDomainJobTypeToString(priv->job.active),
-             virDomainAgentJobTypeToString(priv->job.agentActive),
-             virDomainAsyncJobTypeToString(priv->job.asyncJob),
-             priv->job.owner, NULLSTR(priv->job.ownerAPI),
-             priv->job.agentOwner, NULLSTR(priv->job.agentOwnerAPI),
-             priv->job.asyncOwner, NULLSTR(priv->job.asyncOwnerAPI),
-             priv->job.apiFlags,
-             duration / 1000, agentDuration / 1000, asyncDuration / 1000);
-
-    if (job) {
-        if (nested || virDomainNestedJobAllowed(&priv->job, job))
-            blocker = priv->job.ownerAPI;
-        else
-            blocker = priv->job.asyncOwnerAPI;
-    }
-
-    if (agentJob)
-        agentBlocker = priv->job.agentOwnerAPI;
-
-    if (errno == ETIMEDOUT) {
-        if (blocker && agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s agent=%s)"),
-                           blocker, agentBlocker);
-        } else if (blocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
monitor=3D%s)"), - blocker); - } else if (agentBlocker) { - virReportError(VIR_ERR_OPERATION_TIMEOUT, - _("cannot acquire state change " - "lock (held by agent=3D%s)"), - agentBlocker); - } else { - virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s", - _("cannot acquire state change lock")); - } - ret =3D -2; - } else if (priv->job.maxQueuedJobs && - priv->job.jobsQueued > priv->job.maxQueuedJobs) { - if (blocker && agentBlocker) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("cannot acquire state change " - "lock (held by monitor=3D%s agent=3D%s) " - "due to max_queued limit"), - blocker, agentBlocker); - } else if (blocker) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("cannot acquire state change " - "lock (held by monitor=3D%s) " - "due to max_queued limit"), - blocker); - } else if (agentBlocker) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("cannot acquire state change " - "lock (held by agent=3D%s) " - "due to max_queued limit"), - agentBlocker); - } else { - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("cannot acquire state change lock " - "due to max_queued limit")); - } - ret =3D -2; - } else { - virReportSystemError(errno, "%s", _("cannot acquire job mutex")); - } - - cleanup: - priv->job.jobsQueued--; - return ret; -} - /* * obj must be locked before calling * @@ -950,9 +708,11 @@ qemuDomainObjBeginJobInternal(virDomainObj *obj, int qemuDomainObjBeginJob(virDomainObj *obj, virDomainJob job) { - if (qemuDomainObjBeginJobInternal(obj, job, - VIR_AGENT_JOB_NONE, - VIR_ASYNC_JOB_NONE, false) < 0) + qemuDomainObjPrivate *priv =3D obj->privateData; + + if (virDomainObjBeginJobInternal(obj, &priv->job, job, + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, false) < 0) return -1; return 0; } @@ -968,9 +728,11 @@ int qemuDomainObjBeginAgentJob(virDomainObj *obj, virDomainAgentJob agentJob) { - return qemuDomainObjBeginJobInternal(obj, VIR_JOB_NONE, - agentJob, - VIR_ASYNC_JOB_NONE, false); + qemuDomainObjPrivate *priv =3D obj->privateData; + + return virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_NONE, + agentJob, + VIR_ASYNC_JOB_NONE, false); } =20 int qemuDomainObjBeginAsyncJob(virDomainObj *obj, @@ -978,11 +740,11 @@ int qemuDomainObjBeginAsyncJob(virDomainObj *obj, virDomainJobOperation operation, unsigned long apiFlags) { - qemuDomainObjPrivate *priv; + qemuDomainObjPrivate *priv =3D obj->privateData; =20 - if (qemuDomainObjBeginJobInternal(obj, VIR_JOB_ASYNC, - VIR_AGENT_JOB_NONE, - asyncJob, false) < 0) + if (virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_ASYNC, + VIR_AGENT_JOB_NONE, + asyncJob, false) < 0) return -1; =20 priv =3D obj->privateData; @@ -1009,11 +771,11 @@ qemuDomainObjBeginNestedJob(virDomainObj *obj, priv->job.asyncOwner); } =20 - return qemuDomainObjBeginJobInternal(obj, - VIR_JOB_ASYNC_NESTED, - VIR_AGENT_JOB_NONE, - VIR_ASYNC_JOB_NONE, - false); + return virDomainObjBeginJobInternal(obj, &priv->job, + VIR_JOB_ASYNC_NESTED, + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, + false); } =20 /** @@ -1032,9 +794,11 @@ int qemuDomainObjBeginJobNowait(virDomainObj *obj, virDomainJob job) { - return qemuDomainObjBeginJobInternal(obj, job, - VIR_AGENT_JOB_NONE, - VIR_ASYNC_JOB_NONE, true); + qemuDomainObjPrivate *priv =3D obj->privateData; + + return virDomainObjBeginJobInternal(obj, &priv->job, job, + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, true); } =20 /* --=20 2.37.1 From nobody Fri May 17 05:26:15 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of redhat.com designates 170.10.129.124 as permitted sender) 
From: Kristina Hanicova <khanicov@redhat.com>
To: libvir-list@redhat.com
Subject: [PATCH 02/17] libxl: remove usage of virDomainJobData
Date: Wed, 24 Aug 2022 15:43:25 +0200

Struct virDomainJobData is meant for statistics of async jobs. In
libxl it was used to keep track of only two attributes: one that is
also present in the generalized virDomainJobObj ("started"), and one
that is always set to the same value whenever any job is active
("jobType").

This patch removes the usage & allocation of the virDomainJobData
structure and rewrites libxlDomainJobUpdateTime() into the more
suitable libxlDomainJobGetTimeElapsed().

Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko
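[Editor's note, for illustration only — not part of the patch: callers
now receive the elapsed time through an out-parameter instead of reading
it from priv->job.current. A minimal sketch of the new calling pattern,
following the libxlDomainGetJobInfo() hunk below:

    /* Sketch: priv->job holds the generalized virDomainJobObj. */
    unsigned long long timeElapsed = 0;

    if (libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0)
        goto cleanup;

    /* libxl has no completion estimate, so an active job is always
     * reported as VIR_DOMAIN_JOB_UNBOUNDED. */
    info->type = VIR_DOMAIN_JOB_UNBOUNDED;
    info->timeElapsed = timeElapsed;
]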
---
 src/libxl/libxl_domain.c | 16 ++++++----------
 src/libxl/libxl_domain.h |  4 ++--
 src/libxl/libxl_driver.c | 16 ++++++++--------
 3 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 52e0aa1e60..6695ec670e 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -81,8 +81,7 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
     VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
     priv->job.active = job;
     priv->job.owner = virThreadSelfID();
-    priv->job.current->started = now;
-    priv->job.current->jobType = VIR_DOMAIN_JOB_UNBOUNDED;
+    priv->job.started = now;
 
     return 0;
 
@@ -129,23 +128,22 @@ libxlDomainObjEndJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
 }
 
 int
-libxlDomainJobUpdateTime(virDomainJobObj *job)
+libxlDomainJobGetTimeElapsed(virDomainJobObj *job, unsigned long long *timeElapsed)
 {
-    virDomainJobData *jobData = job->current;
     unsigned long long now;
 
-    if (!jobData->started)
+    if (!job->started)
         return 0;
 
     if (virTimeMillisNow(&now) < 0)
         return -1;
 
-    if (now < jobData->started) {
-        jobData->started = 0;
+    if (now < job->started) {
+        job->started = 0;
         return 0;
     }
 
-    jobData->timeElapsed = now - jobData->started;
+    *timeElapsed = now - job->started;
     return 0;
 }
 
@@ -167,8 +165,6 @@ libxlDomainObjPrivateAlloc(void *opaque G_GNUC_UNUSED)
         return NULL;
     }
 
-    priv->job.current = virDomainJobDataInit(NULL);
-
     return priv;
 }
 
diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h
index 5843a4921f..8ad56f1e88 100644
--- a/src/libxl/libxl_domain.h
+++ b/src/libxl/libxl_domain.h
@@ -62,8 +62,8 @@ libxlDomainObjEndJob(libxlDriverPrivate *driver,
                      virDomainObj *obj);
 
 int
-libxlDomainJobUpdateTime(virDomainJobObj *job)
-    G_GNUC_WARN_UNUSED_RESULT;
+libxlDomainJobGetTimeElapsed(virDomainJobObj *job,
+                             unsigned long long *timeElapsed);
 
 char *
 libxlDomainManagedSavePath(libxlDriverPrivate *driver,
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index f980039165..ff6c2f6506 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -5207,6 +5207,7 @@ libxlDomainGetJobInfo(virDomainPtr dom,
     libxlDomainObjPrivate *priv;
     virDomainObj *vm;
     int ret = -1;
+    unsigned long long timeElapsed = 0;
 
     if (!(vm = libxlDomObjFromDomain(dom)))
         goto cleanup;
@@ -5225,14 +5226,14 @@ libxlDomainGetJobInfo(virDomainPtr dom,
     /* In libxl we don't have an estimated completion time
      * thus we always set to unbounded and update time
      * for the active job. */
-    if (libxlDomainJobUpdateTime(&priv->job) < 0)
+    if (libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0)
         goto cleanup;
 
     /* setting only these two attributes is enough because libxl never sets
      * anything else */
     memset(info, 0, sizeof(*info));
-    info->type = priv->job.current->jobType;
-    info->timeElapsed = priv->job.current->timeElapsed;
+    info->type = VIR_DOMAIN_JOB_UNBOUNDED;
+    info->timeElapsed = timeElapsed;
     ret = 0;
 
 cleanup:
@@ -5249,9 +5250,9 @@ libxlDomainGetJobStats(virDomainPtr dom,
 {
     libxlDomainObjPrivate *priv;
     virDomainObj *vm;
-    virDomainJobData *jobData;
     int ret = -1;
     int maxparams = 0;
+    unsigned long long timeElapsed = 0;
 
     /* VIR_DOMAIN_JOB_STATS_COMPLETED not supported yet */
     virCheckFlags(0, -1);
@@ -5263,7 +5264,6 @@ libxlDomainGetJobStats(virDomainPtr dom,
         goto cleanup;
 
     priv = vm->privateData;
-    jobData = priv->job.current;
     if (!priv->job.active) {
         *type = VIR_DOMAIN_JOB_NONE;
         *params = NULL;
@@ -5275,15 +5275,15 @@ libxlDomainGetJobStats(virDomainPtr dom,
     /* In libxl we don't have an estimated completion time
      * thus we always set to unbounded and update time
     * for the active job. */
-    if (libxlDomainJobUpdateTime(&priv->job) < 0)
+    if (libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0)
         goto cleanup;
 
     if (virTypedParamsAddULLong(params, nparams, &maxparams,
                                 VIR_DOMAIN_JOB_TIME_ELAPSED,
-                                jobData->timeElapsed) < 0)
+                                timeElapsed) < 0)
         goto cleanup;
 
-    *type = jobData->jobType;
+    *type = VIR_DOMAIN_JOB_UNBOUNDED;
     ret = 0;
 
 cleanup:
-- 
2.37.1
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova <khanicov@redhat.com>
To: libvir-list@redhat.com
Subject: [PATCH 03/17] move files: hypervisor/domain_job -> conf/virdomainjob
Date: Wed, 24 Aug 2022 15:43:26 +0200

The following patches move the job object as a member into the domain
object. Because of this, domain_conf (where the domain object is
defined) needs to import the file with the job object.
It makes sense to move the jobs to the same level as domain_conf:
into src/conf/.

Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
---
 po/POTFILES                               |  2 +-
 src/ch/ch_domain.h                        |  2 +-
 src/conf/meson.build                      |  1 +
 .../domain_job.c => conf/virdomainjob.c}  |  6 +--
 .../domain_job.h => conf/virdomainjob.h}  |  2 +-
 src/hypervisor/meson.build                |  1 -
 src/libvirt_private.syms                  | 44 +++++++++----------
 src/libxl/libxl_domain.c                  |  1 -
 src/libxl/libxl_domain.h                  |  2 +-
 src/lxc/lxc_domain.c                      |  1 -
 src/lxc/lxc_domain.h                      |  2 +-
 src/qemu/qemu_domainjob.h                 |  2 +-
 12 files changed, 32 insertions(+), 34 deletions(-)
 rename src/{hypervisor/domain_job.c => conf/virdomainjob.c} (99%)
 rename src/{hypervisor/domain_job.h => conf/virdomainjob.h} (99%)

diff --git a/po/POTFILES b/po/POTFILES
index e3a1824834..372e14e031 100644
--- a/po/POTFILES
+++ b/po/POTFILES
@@ -53,6 +53,7 @@ src/conf/storage_conf.c
 src/conf/storage_encryption_conf.c
 src/conf/storage_source_conf.c
 src/conf/virchrdev.c
+src/conf/virdomainjob.c
 src/conf/virdomainmomentobjlist.c
 src/conf/virdomainobjlist.c
 src/conf/virnetworkobj.c
@@ -90,7 +91,6 @@ src/hyperv/hyperv_util.c
 src/hyperv/hyperv_wmi.c
 src/hypervisor/domain_cgroup.c
 src/hypervisor/domain_driver.c
-src/hypervisor/domain_job.c
 src/hypervisor/virclosecallbacks.c
 src/hypervisor/virhostdev.c
 src/interface/interface_backend_netcf.c
diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h
index b3bebd6b9a..27efe2feed 100644
--- a/src/ch/ch_domain.h
+++ b/src/ch/ch_domain.h
@@ -24,7 +24,7 @@
 #include "ch_monitor.h"
 #include "virchrdev.h"
 #include "vircgroup.h"
-#include "domain_job.h"
+#include "virdomainjob.h"
 
 /* Give up waiting for mutex after 30 seconds */
 #define CH_JOB_WAIT_TIME (1000ull * 30)
diff --git a/src/conf/meson.build b/src/conf/meson.build
index 5ef494c3ba..5116c23fe3 100644
--- a/src/conf/meson.build
+++ b/src/conf/meson.build
@@ -20,6 +20,7 @@ domain_conf_sources = [
   'numa_conf.c',
   'snapshot_conf.c',
   'virdomaincheckpointobjlist.c',
+  'virdomainjob.c',
   'virdomainmomentobjlist.c',
   'virdomainobjlist.c',
   'virdomainsnapshotobjlist.c',
diff --git a/src/hypervisor/domain_job.c b/src/conf/virdomainjob.c
similarity index 99%
rename from src/hypervisor/domain_job.c
rename to src/conf/virdomainjob.c
index ef3bee0248..80c92f7939 100644
--- a/src/hypervisor/domain_job.c
+++ b/src/conf/virdomainjob.c
@@ -1,5 +1,5 @@
 /*
- * domain_job.c: job functions shared between hypervisor drivers
+ * virdomainjob.c: job functions shared between hypervisor drivers
  *
  * Copyright (C) 2022 Red Hat, Inc.
  * SPDX-License-Identifier: LGPL-2.1-or-later
@@ -8,7 +8,7 @@
 #include <config.h>
 #include <unistd.h>
 
-#include "domain_job.h"
+#include "virdomainjob.h"
 #include "viralloc.h"
 #include "virthreadjob.h"
 #include "virlog.h"
@@ -16,7 +16,7 @@
 
 #define VIR_FROM_THIS VIR_FROM_HYPERV
 
-VIR_LOG_INIT("hypervisor.domain_job");
+VIR_LOG_INIT("conf.virdomainjob");
 
 
 VIR_ENUM_IMPL(virDomainJob,
diff --git a/src/hypervisor/domain_job.h b/src/conf/virdomainjob.h
similarity index 99%
rename from src/hypervisor/domain_job.h
rename to src/conf/virdomainjob.h
index d7409c05f0..bdfdc91935 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/conf/virdomainjob.h
@@ -1,5 +1,5 @@
 /*
- * domain_job.h: job functions shared between hypervisor drivers
+ * virdomainjob.h: job functions shared between hypervisor drivers
  *
  * Copyright (C) 2022 Red Hat, Inc.
  * SPDX-License-Identifier: LGPL-2.1-or-later
diff --git a/src/hypervisor/meson.build b/src/hypervisor/meson.build
index 7532f30ee2..f35565b16b 100644
--- a/src/hypervisor/meson.build
+++ b/src/hypervisor/meson.build
@@ -3,7 +3,6 @@ hypervisor_sources = [
   'domain_driver.c',
   'virclosecallbacks.c',
   'virhostdev.c',
-  'domain_job.c',
 ]
 
 stateful_driver_source_files += files(hypervisor_sources)
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 51efd64ff2..f406fa39ae 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1175,6 +1175,28 @@ virDomainCheckpointUpdateRelations;
 virDomainListCheckpoints;
 
 
+#conf/virdomainjob.h
+virDomainAgentJobTypeToString;
+virDomainAsyncJobTypeFromString;
+virDomainAsyncJobTypeToString;
+virDomainJobDataCopy;
+virDomainJobDataFree;
+virDomainJobDataInit;
+virDomainJobStatusToType;
+virDomainJobTypeFromString;
+virDomainJobTypeToString;
+virDomainNestedJobAllowed;
+virDomainObjBeginJobInternal;
+virDomainObjCanSetJob;
+virDomainObjClearJob;
+virDomainObjInitJob;
+virDomainObjPreserveJob;
+virDomainObjResetAgentJob;
+virDomainObjResetAsyncJob;
+virDomainObjResetJob;
+virDomainTrackJob;
+
+
 # conf/virdomainmomentobjlist.h
 virDomainMomentDropChildren;
 virDomainMomentDropParent;
@@ -1585,28 +1607,6 @@ virDomainDriverParseBlkioDeviceStr;
 virDomainDriverSetupPersistentDefBlkioParams;
 
 
-# hypervisor/domain_job.h
-virDomainAgentJobTypeToString;
-virDomainAsyncJobTypeFromString;
-virDomainAsyncJobTypeToString;
-virDomainJobDataCopy;
-virDomainJobDataFree;
-virDomainJobDataInit;
-virDomainJobStatusToType;
-virDomainJobTypeFromString;
-virDomainJobTypeToString;
-virDomainNestedJobAllowed;
-virDomainObjBeginJobInternal;
-virDomainObjCanSetJob;
-virDomainObjClearJob;
-virDomainObjInitJob;
-virDomainObjPreserveJob;
-virDomainObjResetAgentJob;
-virDomainObjResetAsyncJob;
-virDomainObjResetJob;
-virDomainTrackJob;
-
-
 # hypervisor/virclosecallbacks.h
 virCloseCallbacksGet;
 virCloseCallbacksNew;
diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 6695ec670e..aadb13f461 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -37,7 +37,6 @@
 #include "xen_common.h"
 #include "driver.h"
 #include "domain_validate.h"
-#include "domain_job.h"
 
 #define VIR_FROM_THIS VIR_FROM_LIBXL
 
diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h
index 8ad56f1e88..451e76e311 100644
--- a/src/libxl/libxl_domain.h
+++ b/src/libxl/libxl_domain.h
@@ -24,7 +24,7 @@
 
 #include "libxl_conf.h"
 #include "virchrdev.h"
-#include "domain_job.h"
+#include "virdomainjob.h"
 
 
 typedef struct _libxlDomainObjPrivate libxlDomainObjPrivate;
diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c
index 61e59ec726..f234aaf39c 100644
--- a/src/lxc/lxc_domain.c
+++ b/src/lxc/lxc_domain.c
@@ -29,7 +29,6 @@
 #include "virsystemd.h"
 #include "virinitctl.h"
 #include "domain_driver.h"
-#include "domain_job.h"
 
 #define VIR_FROM_THIS VIR_FROM_LXC
 
diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h
index 82c36eb940..db622acc86 100644
--- a/src/lxc/lxc_domain.h
+++ b/src/lxc/lxc_domain.h
@@ -25,7 +25,7 @@
 #include "lxc_conf.h"
 #include "lxc_monitor.h"
 #include "virenum.h"
-#include "domain_job.h"
+#include "virdomainjob.h"
 
 
 typedef enum {
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index bb3c7ede14..23eadc26a7 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -20,7 +20,7 @@
 
 #include <glib-object.h>
 #include "qemu_monitor.h"
-#include "domain_job.h"
"virdomainjob.h" =20 =20 typedef enum { --=20 2.37.1 From nobody Fri May 17 05:26:15 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of redhat.com designates 170.10.133.124 as permitted sender) client-ip=170.10.133.124; envelope-from=libvir-list-bounces@redhat.com; helo=us-smtp-delivery-124.mimecast.com; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=libvir-list-bounces@redhat.com; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1661351173; cv=none; d=zohomail.com; s=zohoarc; b=d4VMYIJnpyAtFG5btbXRbLYYYq/RxcvtJ6w0DTojc30FQC7cIvsop+nINR5V857GOrvKZRyVpoO0RjeFyq6F2F7kb+foZ2L2ApWfX7NmzQemHcAxvk/77EDViXUjTq9g0t9zMx4YdRb84osJY1egijKwBxv5kjyKlZy33Oc6+sI= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1661351173; h=Content-Type:Content-Transfer-Encoding:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=KF4S9KA8gWsKPDQnLrnkWk4SUB+6PAihKfTvQ6ijylk=; b=IWigR5TvxsUDxdM1yz55QkRjhVecVufwLBDKlBZpAZCPgrZqCc2Jl1g2c7GN46fGZpINzpaH0DMUJ99MXxqpFsH0aJtUP/ai/NSEc5hEo2mMyLKOx5iF4xP2Oq/bsGxhbWgvwqBlCA7jxpvlR4LHyBbM4bNk0Y9UorLx4y3VBHA= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=libvir-list-bounces@redhat.com; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mx.zohomail.com with SMTPS id 1661351173248206.14180919746354; Wed, 24 Aug 2022 07:26:13 -0700 (PDT) Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-640-IOCXJdcwMFuYh2gCkipwIg-1; Wed, 24 Aug 2022 10:26:09 -0400 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4CE4F811130; Wed, 24 Aug 2022 14:26:06 +0000 (UTC) Received: from mm-prod-listman-01.mail-001.prod.us-east-1.aws.redhat.com (unknown [10.30.29.100]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3856F2026D64; Wed, 24 Aug 2022 14:26:06 +0000 (UTC) Received: from mm-prod-listman-01.mail-001.prod.us-east-1.aws.redhat.com (localhost [IPv6:::1]) by mm-prod-listman-01.mail-001.prod.us-east-1.aws.redhat.com (Postfix) with ESMTP id 085A81946A43; Wed, 24 Aug 2022 14:26:06 +0000 (UTC) Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) by mm-prod-listman-01.mail-001.prod.us-east-1.aws.redhat.com (Postfix) with ESMTP id 60AC51946A42 for ; Wed, 24 Aug 2022 13:43:48 +0000 (UTC) Received: by smtp.corp.redhat.com (Postfix) id 5398318ECC; Wed, 24 Aug 2022 13:43:48 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.43.2.166]) by smtp.corp.redhat.com (Postfix) with ESMTP id F34A8945D7 for ; Wed, 24 Aug 2022 13:43:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1661351172; h=from:from:sender:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:mime-version:mime-version: content-type:content-type: 
From: Kristina Hanicova <khanicov@redhat.com>
To: libvir-list@redhat.com
Subject: [PATCH 04/17] virdomainjob: add check for callbacks
Date: Wed, 24 Aug 2022 15:43:27 +0200
Message-Id: <2482ebba6e26df542c7101c7eb8c21f424d2300f.1661348244.git.khanicov@redhat.com>

There may be a case where the callback structure exists with no
callbacks set (see the following patches). This patch adds checks
for the specific callbacks before using them.

Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko
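[Editor's note, for illustration only — not part of the patch: these
checks make it legal to register a callback table that fills in only
some members. A sketch with hypothetical "foo" names, where only
allocJobPrivate is provided:

    /* Sketch: with the checks added below, unset members are simply
     * skipped instead of being called through a NULL pointer. */
    static virDomainObjPrivateJobCallbacks fooJobCallbacks = {
        .allocJobPrivate = fooJobAllocPrivate,
        /* .freeJobPrivate, .resetJobPrivate and .saveStatusPrivate
         * are deliberately left NULL. */
    };
]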
---
 src/conf/virdomainjob.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index 80c92f7939..0e246cbb93 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -138,7 +138,7 @@ virDomainObjInitJob(virDomainJobObj *job,
         return -1;
     }
 
-    if (job->cb &&
+    if (job->cb && job->cb->allocJobPrivate &&
         !(job->privateData = job->cb->allocJobPrivate())) {
         virCondDestroy(&job->cond);
         virCondDestroy(&job->asyncCond);
@@ -180,7 +180,7 @@ virDomainObjResetAsyncJob(virDomainJobObj *job)
     g_clear_pointer(&job->current, virDomainJobDataFree);
     job->apiFlags = 0;
 
-    if (job->cb)
+    if (job->cb && job->cb->resetJobPrivate)
         job->cb->resetJobPrivate(job->privateData);
 }
 
@@ -206,7 +206,7 @@ virDomainObjPreserveJob(virDomainJobObj *currJob,
     job->privateData = g_steal_pointer(&currJob->privateData);
     job->apiFlags = currJob->apiFlags;
 
-    if (currJob->cb &&
+    if (currJob->cb && currJob->cb->allocJobPrivate &&
         !(currJob->privateData = currJob->cb->allocJobPrivate()))
         return -1;
     job->cb = currJob->cb;
@@ -226,7 +226,7 @@ virDomainObjClearJob(virDomainJobObj *job)
     virCondDestroy(&job->cond);
     virCondDestroy(&job->asyncCond);
 
-    if (job->cb)
+    if (job->cb && job->cb->freeJobPrivate)
         g_clear_pointer(&job->privateData, job->cb->freeJobPrivate);
 }
-- 
2.37.1
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova <khanicov@redhat.com>
To: libvir-list@redhat.com
Subject: [PATCH 05/17] conf: extend xmlopt with job config & add job object into domain object
Date: Wed, 24 Aug 2022 15:43:28 +0200
This patch adds the generalized job object into the domain object so
that it can be used by all drivers without the need to extract it
from the private data.

Because of this, the job object needs to be created and set during
the creation of the domain object. This patch also extends xmlopt
with an optional job config containing virDomainJobObj callbacks,
its private data callbacks, and one variable (maxQueuedJobs).

This patch includes:
* addition of virDomainJobObj into virDomainObj (used in the
  following patches)
* extending xmlopt with the job config structure
* a new function for freeing the virDomainJobObj

Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko
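[Editor's note, for illustration only — not part of the patch: a driver
that wants the embedded job object wired up with its own callbacks can
now pass a job config when building its xmlopt. A sketch with
hypothetical "foo" names; per the domain_conf.c hunks below, the config
is copied into xmlopt->jobObjConfig and consumed by virDomainObjNew()
when it calls virDomainObjInitJob() on the embedded job object:

    /* Sketch: maxQueuedJobs == 0 disables the queue-length limit
     * enforced in virDomainObjBeginJobInternal(). */
    static virDomainJobObjConfig fooJobConfig = {
        .cb = { .allocJobPrivate = fooJobAllocPrivate },
        .maxQueuedJobs = 0,
    };

    xmlopt = virDomainXMLOptionNew(&fooDomainDefParserConfig,
                                   &fooPrivateDataCallbacks,
                                   NULL,   /* xmlns */
                                   NULL,   /* abi */
                                   NULL,   /* saveCookie */
                                   &fooJobConfig);
]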
(jobConfig) + xmlopt->jobObjConfig =3D *jobConfig; + /* Technically this forbids to use one of Xerox's MAC address prefixes= in * our hypervisor drivers. This shouldn't ever be a problem. * @@ -3857,6 +3861,7 @@ static void virDomainObjDispose(void *obj) virDomainObjDeprecationFree(dom); virDomainSnapshotObjListFree(dom->snapshots); virDomainCheckpointObjListFree(dom->checkpoints); + virDomainJobObjFree(dom->job); } =20 virDomainObj * @@ -3889,6 +3894,12 @@ virDomainObjNew(virDomainXMLOption *xmlopt) if (!(domain->checkpoints =3D virDomainCheckpointObjListNew())) goto error; =20 + domain->job =3D g_new0(virDomainJobObj, 1); + if (virDomainObjInitJob(domain->job, + &xmlopt->jobObjConfig.cb, + &xmlopt->jobObjConfig.jobDataPrivateCb) < 0) + goto error; + virObjectLock(domain); virDomainObjSetState(domain, VIR_DOMAIN_SHUTOFF, VIR_DOMAIN_SHUTOFF_UNKNOWN); diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index a1f6cf7a6f..67405f2fcd 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -56,6 +56,7 @@ #include "virsavecookie.h" #include "virresctrl.h" #include "virenum.h" +#include "virdomainjob.h" =20 /* Flags for the 'type' field in virDomainDeviceDef */ typedef enum { @@ -3093,6 +3094,8 @@ struct _virDomainObj { virObjectLockable parent; virCond cond; =20 + virDomainJobObj *job; + pid_t pid; /* 0 for no PID, avoid negative values like -1 */ virDomainStateReason state; =20 @@ -3277,11 +3280,19 @@ struct _virDomainABIStability { virDomainABIStabilityDomain domain; }; =20 + +struct _virDomainJobObjConfig { + virDomainObjPrivateJobCallbacks cb; + virDomainJobDataPrivateDataCallbacks jobDataPrivateCb; + unsigned int maxQueuedJobs; +}; + virDomainXMLOption *virDomainXMLOptionNew(virDomainDefParserConfig *config, virDomainXMLPrivateDataCallbacks= *priv, virXMLNamespace *xmlns, virDomainABIStability *abi, - virSaveCookieCallbacks *saveCook= ie); + virSaveCookieCallbacks *saveCook= ie, + virDomainJobObjConfig *jobConfig= ); =20 virSaveCookieCallbacks * virDomainXMLOptionGetSaveCookie(virDomainXMLOption *xmlopt); @@ -3321,6 +3332,9 @@ struct _virDomainXMLOption { =20 /* Snapshot postparse callbacks */ virDomainMomentPostParseCallback momentPostParse; + + /* virDomainJobObj callbacks, private data callbacks and defaults */ + virDomainJobObjConfig jobObjConfig; }; G_DEFINE_AUTOPTR_CLEANUP_FUNC(virDomainXMLOption, virObjectUnref); =20 diff --git a/src/conf/virconftypes.h b/src/conf/virconftypes.h index c3f1c5fa01..154805091a 100644 --- a/src/conf/virconftypes.h +++ b/src/conf/virconftypes.h @@ -150,6 +150,8 @@ typedef struct _virDomainIdMapEntry virDomainIdMapEntry; =20 typedef struct _virDomainInputDef virDomainInputDef; =20 +typedef struct _virDomainJobObjConfig virDomainJobObjConfig; + typedef struct _virDomainKeyWrapDef virDomainKeyWrapDef; =20 typedef struct _virDomainLeaseDef virDomainLeaseDef; diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c index 0e246cbb93..53861cb153 100644 --- a/src/conf/virdomainjob.c +++ b/src/conf/virdomainjob.c @@ -13,6 +13,7 @@ #include "virthreadjob.h" #include "virlog.h" #include "virtime.h" +#include "domain_conf.h" =20 #define VIR_FROM_THIS VIR_FROM_HYPERV =20 @@ -230,6 +231,16 @@ virDomainObjClearJob(virDomainJobObj *job) g_clear_pointer(&job->privateData, job->cb->freeJobPrivate); } =20 +void +virDomainJobObjFree(virDomainJobObj *job) +{ + if (!job) + return; + + virDomainObjClearJob(job); + g_free(job); +} + bool virDomainTrackJob(virDomainJob job) { diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h index 
bdfdc91935..091d951aa6 100644 --- a/src/conf/virdomainjob.h +++ b/src/conf/virdomainjob.h @@ -11,7 +11,8 @@ #include "virenum.h" #include "virthread.h" #include "virbuffer.h" -#include "domain_conf.h" +#include "virconftypes.h" +#include "virxml.h" =20 #define JOB_MASK(job) (job =3D=3D 0 ? 0 : 1 << (job - 1)) #define VIR_JOB_DEFAULT_MASK \ @@ -227,6 +228,8 @@ int virDomainObjPreserveJob(virDomainJobObj *currJob, void virDomainObjClearJob(virDomainJobObj *job); G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(virDomainJobObj, virDomainObjClearJob); =20 +void virDomainJobObjFree(virDomainJobObj *job); + bool virDomainTrackJob(virDomainJob job); =20 bool virDomainNestedJobAllowed(virDomainJobObj *jobs, virDomainJob newJob); diff --git a/src/hyperv/hyperv_driver.c b/src/hyperv/hyperv_driver.c index 288c01ad14..3929e27e09 100644 --- a/src/hyperv/hyperv_driver.c +++ b/src/hyperv/hyperv_driver.c @@ -1766,7 +1766,7 @@ hypervConnectOpen(virConnectPtr conn, virConnectAuthP= tr auth, goto cleanup; =20 /* init xmlopt for domain XML */ - priv->xmlopt =3D virDomainXMLOptionNew(&hypervDomainDefParserConfig, N= ULL, NULL, NULL, NULL); + priv->xmlopt =3D virDomainXMLOptionNew(&hypervDomainDefParserConfig, N= ULL, NULL, NULL, NULL, NULL); =20 if (hypervGetOperatingSystem(priv, &os) < 0) goto cleanup; diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index f406fa39ae..71ed240cbf 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -1182,6 +1182,7 @@ virDomainAsyncJobTypeToString; virDomainJobDataCopy; virDomainJobDataFree; virDomainJobDataInit; +virDomainJobObjFree; virDomainJobStatusToType; virDomainJobTypeFromString; virDomainJobTypeToString; diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c index 6d7a6c5853..ab2dab358c 100644 --- a/src/libxl/libxl_conf.c +++ b/src/libxl/libxl_conf.c @@ -2486,5 +2486,5 @@ libxlCreateXMLConf(libxlDriverPrivate *driver) return virDomainXMLOptionNew(&libxlDomainDefParserConfig, &libxlDomainXMLPrivateDataCallbacks, &libxlDriverDomainXMLNamespace, - NULL, NULL); + NULL, NULL, NULL); } diff --git a/src/lxc/lxc_conf.c b/src/lxc/lxc_conf.c index 7834801fb5..fefe63bf20 100644 --- a/src/lxc/lxc_conf.c +++ b/src/lxc/lxc_conf.c @@ -189,7 +189,7 @@ lxcDomainXMLConfInit(virLXCDriver *driver, const char *= defsecmodel) return virDomainXMLOptionNew(&virLXCDriverDomainDefParserConfig, &virLXCDriverPrivateDataCallbacks, &virLXCDriverDomainXMLNamespace, - NULL, NULL); + NULL, NULL, NULL); } =20 =20 diff --git a/src/openvz/openvz_conf.c b/src/openvz/openvz_conf.c index c94f9b8577..c28d0e9f43 100644 --- a/src/openvz/openvz_conf.c +++ b/src/openvz/openvz_conf.c @@ -1062,5 +1062,5 @@ virDomainXMLOption *openvzXMLOption(struct openvz_dri= ver *driver) { openvzDomainDefParserConfig.priv =3D driver; return virDomainXMLOptionNew(&openvzDomainDefParserConfig, - NULL, NULL, NULL, NULL); + NULL, NULL, NULL, NULL, NULL); } diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c index 3b75cdeb95..2809ae53b9 100644 --- a/src/qemu/qemu_conf.c +++ b/src/qemu/qemu_conf.c @@ -1278,7 +1278,8 @@ virQEMUDriverCreateXMLConf(virQEMUDriver *driver, &virQEMUDriverPrivateDataCallbacks, &virQEMUDriverDomainXMLNamespace, &virQEMUDriverDomainABIStability, - &virQEMUDriverDomainSaveCookie); + &virQEMUDriverDomainSaveCookie, + NULL); } =20 =20 diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 5c8413a6b6..e872553cfc 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -9299,7 +9299,7 @@ qemuProcessQMPConnectMonitor(qemuProcessQMP *proc) 
monConfig.data.nix.path =3D proc->monpath; monConfig.data.nix.listen =3D false; =20 - if (!(xmlopt =3D virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL)) = || + if (!(xmlopt =3D virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL, N= ULL)) || !(proc->vm =3D virDomainObjNew(xmlopt)) || !(proc->vm->def =3D virDomainDefNew(xmlopt))) return -1; diff --git a/src/security/virt-aa-helper.c b/src/security/virt-aa-helper.c index 2d0bc99c73..f338488da3 100644 --- a/src/security/virt-aa-helper.c +++ b/src/security/virt-aa-helper.c @@ -632,7 +632,7 @@ get_definition(vahControl * ctl, const char *xmlStr) } =20 if (!(ctl->xmlopt =3D virDomainXMLOptionNew(&virAAHelperDomainDefParse= rConfig, - NULL, NULL, NULL, NULL))) { + NULL, NULL, NULL, NULL, NULL= ))) { vah_error(ctl, 0, _("Failed to create XML config object")); return -1; } diff --git a/src/test/test_driver.c b/src/test/test_driver.c index 24ff6e8967..ea5f690d48 100644 --- a/src/test/test_driver.c +++ b/src/test/test_driver.c @@ -460,7 +460,7 @@ testDriverNew(void) if (!(ret =3D virObjectLockableNew(testDriverClass))) return NULL; =20 - if (!(ret->xmlopt =3D virDomainXMLOptionNew(&config, &privatecb, &ns, = NULL, NULL)) || + if (!(ret->xmlopt =3D virDomainXMLOptionNew(&config, &privatecb, &ns, = NULL, NULL, NULL)) || !(ret->eventState =3D virObjectEventStateNew()) || !(ret->ifaces =3D virInterfaceObjListNew()) || !(ret->domains =3D virDomainObjListNew()) || diff --git a/src/vbox/vbox_common.c b/src/vbox/vbox_common.c index e249980195..bd77641d39 100644 --- a/src/vbox/vbox_common.c +++ b/src/vbox/vbox_common.c @@ -143,7 +143,7 @@ vboxDriverObjNew(void) =20 if (!(driver->caps =3D vboxCapsInit()) || !(driver->xmlopt =3D virDomainXMLOptionNew(&vboxDomainDefParserCon= fig, - NULL, NULL, NULL, NULL))) + NULL, NULL, NULL, NULL, N= ULL))) goto cleanup; =20 return driver; diff --git a/src/vmware/vmware_driver.c b/src/vmware/vmware_driver.c index 2a18d73988..3c578434f3 100644 --- a/src/vmware/vmware_driver.c +++ b/src/vmware/vmware_driver.c @@ -139,7 +139,7 @@ vmwareDomainXMLConfigInit(struct vmware_driver *driver) .free =3D vmwareDataFreeFunc= }; vmwareDomainDefParserConfig.priv =3D driver; return virDomainXMLOptionNew(&vmwareDomainDefParserConfig, &priv, - NULL, NULL, NULL); + NULL, NULL, NULL, NULL); } =20 static virDrvOpenStatus diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c index 191f0b5e83..64817080ab 100644 --- a/src/vmx/vmx.c +++ b/src/vmx/vmx.c @@ -696,7 +696,7 @@ virVMXDomainXMLConfInit(virCaps *caps) { virVMXDomainDefParserConfig.priv =3D caps; return virDomainXMLOptionNew(&virVMXDomainDefParserConfig, NULL, - &virVMXDomainXMLNamespace, NULL, NULL); + &virVMXDomainXMLNamespace, NULL, NULL, NU= LL); } =20 char * diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c index 017c084ede..571d895167 100644 --- a/src/vz/vz_driver.c +++ b/src/vz/vz_driver.c @@ -331,7 +331,7 @@ vzDriverObjNew(void) if (!(driver->caps =3D vzBuildCapabilities()) || !(driver->xmlopt =3D virDomainXMLOptionNew(&vzDomainDefParserConfi= g, &vzDomainXMLPrivateDataCa= llbacksPtr, - NULL, NULL, NULL)) || + NULL, NULL, NULL, NULL)) = || !(driver->domains =3D virDomainObjListNew()) || !(driver->domainEventState =3D virObjectEventStateNew()) || (vzInitVersion(driver) < 0) || diff --git a/tests/bhyveargv2xmltest.c b/tests/bhyveargv2xmltest.c index 2ccc1379b9..92189a2e58 100644 --- a/tests/bhyveargv2xmltest.c +++ b/tests/bhyveargv2xmltest.c @@ -113,7 +113,7 @@ mymain(void) if ((driver.caps =3D virBhyveCapsBuild()) =3D=3D NULL) return EXIT_FAILURE; =20 - if ((driver.xmlopt =3D 
virDomainXMLOptionNew(NULL, NULL, + if ((driver.xmlopt =3D virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL, NULL)) =3D=3D N= ULL) return EXIT_FAILURE; =20 diff --git a/tests/testutils.c b/tests/testutils.c index 30f91dcd28..8fe624f029 100644 --- a/tests/testutils.c +++ b/tests/testutils.c @@ -989,7 +989,7 @@ static virDomainDefParserConfig virTestGenericDomainDef= ParserConfig =3D { virDomainXMLOption *virTestGenericDomainXMLConfInit(void) { return virDomainXMLOptionNew(&virTestGenericDomainDefParserConfig, - NULL, NULL, NULL, NULL); + NULL, NULL, NULL, NULL, NULL); } =20 =20 --=20 2.37.1
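With the extra parameter in place, a driver opts in to the shared job machinery entirely through its xmlopt: virDomainObjNew() now allocates dom->job and initializes it from xmlopt->jobObjConfig. As a minimal sketch of that wiring — not part of this series; the "foo" driver and all of its names are invented — it could look like this:

    #include "domain_conf.h"

    /* Empty parser config, only so the sketch is self-contained. */
    static virDomainDefParserConfig fooDomainDefParserConfig = { 0 };

    /* A driver with no private job state leaves both callback structs
     * zeroed and keeps the default queue limit; compare the fully
     * populated virQEMUDriverDomainJobConfig in a later patch of this
     * series. */
    static virDomainJobObjConfig fooDomainJobConfig = {
        .maxQueuedJobs = 0,
    };

    static virDomainXMLOption *
    fooDomainXMLConfInit(void)
    {
        return virDomainXMLOptionNew(&fooDomainDefParserConfig,
                                     NULL,  /* private data callbacks */
                                     NULL,  /* XML namespace */
                                     NULL,  /* ABI stability */
                                     NULL,  /* save cookie */
                                     &fooDomainJobConfig);
    }

Drivers that pass NULL for the job config still get a dom->job; it simply carries no private-data callbacks.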
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 06/17] virdomainjob: make drivers use job object in the domain object
Date: Wed, 24 Aug 2022 15:43:29 +0200
Message-Id: <1dbca9d96cf565b5b9ec85fa3e0f946b617bbc7e.1661348244.git.khanicov@redhat.com>

This patch makes the drivers use the job object in the domain object directly and removes the job object from the private data of all drivers that used it, along with the code that initialized and freed it there.
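The conversion itself is mechanical: the per-driver virDomainJobObj copy goes away, every priv->job.X access becomes obj->job->X, and the &priv->job address-of pattern disappears. Schematically, condensed from the ch driver's begin-job hunk below (error handling and the timeout setup omitted):

    /* before: the job lives in the driver's private data */
    virCHDomainObjPrivate *priv = obj->privateData;
    while (priv->job.active)
        virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then);
    virDomainObjResetJob(&priv->job);
    priv->job.active = job;
    priv->job.owner = virThreadSelfID();

    /* after: the job is embedded in virDomainObj itself */
    while (obj->job->active)
        virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then);
    virDomainObjResetJob(obj->job);
    obj->job->active = job;
    obj->job->owner = virThreadSelfID();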
Signed-off-by: Kristina Hanicova Reviewed-by: J=C3=A1n Tomko --- src/ch/ch_domain.c | 29 ++-- src/ch/ch_domain.h | 2 - src/conf/domain_conf.c | 1 + src/libxl/libxl_domain.c | 31 ++-- src/libxl/libxl_domain.h | 2 - src/libxl/libxl_driver.c | 12 +- src/lxc/lxc_domain.c | 28 ++-- src/lxc/lxc_domain.h | 2 - src/qemu/qemu_backup.c | 18 +-- src/qemu/qemu_conf.c | 6 +- src/qemu/qemu_domain.c | 69 +++++---- src/qemu/qemu_domain.h | 3 +- src/qemu/qemu_domainjob.c | 223 +++++++++++------------------ src/qemu/qemu_domainjob.h | 2 - src/qemu/qemu_driver.c | 72 +++++----- src/qemu/qemu_migration.c | 193 ++++++++++++------------- src/qemu/qemu_migration_cookie.c | 17 ++- src/qemu/qemu_migration_cookie.h | 3 +- src/qemu/qemu_migration_params.c | 8 +- src/qemu/qemu_process.c | 73 +++++----- src/qemu/qemu_snapshot.c | 2 +- tests/qemumigrationcookiexmltest.c | 3 +- 22 files changed, 347 insertions(+), 452 deletions(-) diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c index 89b494b388..9ddf9a8584 100644 --- a/src/ch/ch_domain.c +++ b/src/ch/ch_domain.c @@ -44,7 +44,6 @@ VIR_LOG_INIT("ch.ch_domain"); int virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job) { - virCHDomainObjPrivate *priv =3D obj->privateData; unsigned long long now; unsigned long long then; =20 @@ -52,16 +51,16 @@ virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob = job) return -1; then =3D now + CH_JOB_WAIT_TIME; =20 - while (priv->job.active) { + while (obj->job->active) { VIR_DEBUG("Wait normal job condition for starting job: %s", virDomainJobTypeToString(job)); - if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0= ) { + if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0= ) { VIR_WARN("Cannot start job (%s) for domain %s;" " current job is (%s) owned by (%llu)", virDomainJobTypeToString(job), obj->def->name, - virDomainJobTypeToString(priv->job.active), - priv->job.owner); + virDomainJobTypeToString(obj->job->active), + obj->job->owner); =20 if (errno =3D=3D ETIMEDOUT) virReportError(VIR_ERR_OPERATION_TIMEOUT, @@ -73,11 +72,11 @@ virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob = job) } } =20 - virDomainObjResetJob(&priv->job); + virDomainObjResetJob(obj->job); =20 VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job)); - priv->job.active =3D job; - priv->job.owner =3D virThreadSelfID(); + obj->job->active =3D job; + obj->job->owner =3D virThreadSelfID(); =20 return 0; } @@ -91,14 +90,13 @@ virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob = job) void virCHDomainObjEndJob(virDomainObj *obj) { - virCHDomainObjPrivate *priv =3D obj->privateData; - virDomainJob job =3D priv->job.active; + virDomainJob job =3D obj->job->active; =20 VIR_DEBUG("Stopping job: %s", virDomainJobTypeToString(job)); =20 - virDomainObjResetJob(&priv->job); - virCondSignal(&priv->job.cond); + virDomainObjResetJob(obj->job); + virCondSignal(&obj->job->cond); } =20 void @@ -117,13 +115,7 @@ virCHDomainObjPrivateAlloc(void *opaque) =20 priv =3D g_new0(virCHDomainObjPrivate, 1); =20 - if (virDomainObjInitJob(&priv->job, NULL, NULL) < 0) { - g_free(priv); - return NULL; - } - if (!(priv->chrdevs =3D virChrdevAlloc())) { - virDomainObjClearJob(&priv->job); g_free(priv); return NULL; } @@ -138,7 +130,6 @@ virCHDomainObjPrivateFree(void *data) virCHDomainObjPrivate *priv =3D data; =20 virChrdevFree(priv->chrdevs); - virDomainObjClearJob(&priv->job); g_free(priv->machineName); g_free(priv); } diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h index 27efe2feed..c7dfde601e 100644 --- a/src/ch/ch_domain.h +++ 
b/src/ch/ch_domain.h @@ -32,8 +32,6 @@ =20 typedef struct _virCHDomainObjPrivate virCHDomainObjPrivate; struct _virCHDomainObjPrivate { - virDomainJobObj job; - virChrdevs *chrdevs; virCHDriver *driver; virCHMonitor *monitor; diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 221e89eb7e..b3671a614b 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -60,6 +60,7 @@ #include "virdomainsnapshotobjlist.h" #include "virdomaincheckpointobjlist.h" #include "virutil.h" +#include "virdomainjob.h" =20 #define VIR_FROM_THIS VIR_FROM_DOMAIN =20 diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c index aadb13f461..80a362df46 100644 --- a/src/libxl/libxl_domain.c +++ b/src/libxl/libxl_domain.c @@ -60,7 +60,6 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_= UNUSED, virDomainObj *obj, virDomainJob job) { - libxlDomainObjPrivate *priv =3D obj->privateData; unsigned long long now; unsigned long long then; =20 @@ -68,19 +67,19 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNU= C_UNUSED, return -1; then =3D now + LIBXL_JOB_WAIT_TIME; =20 - while (priv->job.active) { + while (obj->job->active) { VIR_DEBUG("Wait normal job condition for starting job: %s", virDomainJobTypeToString(job)); - if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0) + if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0) goto error; } =20 - virDomainObjResetJob(&priv->job); + virDomainObjResetJob(obj->job); =20 VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job)); - priv->job.active =3D job; - priv->job.owner =3D virThreadSelfID(); - priv->job.started =3D now; + obj->job->active =3D job; + obj->job->owner =3D virThreadSelfID(); + obj->job->started =3D now; =20 return 0; =20 @@ -89,8 +88,8 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_= UNUSED, " current job is (%s) owned by (%llu)", virDomainJobTypeToString(job), obj->def->name, - virDomainJobTypeToString(priv->job.active), - priv->job.owner); + virDomainJobTypeToString(obj->job->active), + obj->job->owner); =20 if (errno =3D=3D ETIMEDOUT) virReportError(VIR_ERR_OPERATION_TIMEOUT, @@ -116,14 +115,13 @@ void libxlDomainObjEndJob(libxlDriverPrivate *driver G_GNUC_UNUSED, virDomainObj *obj) { - libxlDomainObjPrivate *priv =3D obj->privateData; - virDomainJob job =3D priv->job.active; + virDomainJob job =3D obj->job->active; =20 VIR_DEBUG("Stopping job: %s", virDomainJobTypeToString(job)); =20 - virDomainObjResetJob(&priv->job); - virCondSignal(&priv->job.cond); + virDomainObjResetJob(obj->job); + virCondSignal(&obj->job->cond); } =20 int @@ -158,12 +156,6 @@ libxlDomainObjPrivateAlloc(void *opaque G_GNUC_UNUSED) return NULL; } =20 - if (virDomainObjInitJob(&priv->job, NULL, NULL) < 0) { - virChrdevFree(priv->devs); - g_free(priv); - return NULL; - } - return priv; } =20 @@ -173,7 +165,6 @@ libxlDomainObjPrivateFree(void *data) libxlDomainObjPrivate *priv =3D data; =20 g_free(priv->lockState); - virDomainObjClearJob(&priv->job); virChrdevFree(priv->devs); g_free(priv); } diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h index 451e76e311..9e8804f747 100644 --- a/src/libxl/libxl_domain.h +++ b/src/libxl/libxl_domain.h @@ -37,8 +37,6 @@ struct _libxlDomainObjPrivate { char *lockState; bool lockProcessRunning; =20 - virDomainJobObj job; - bool hookRun; /* true if there was a hook run over this domain */ }; =20 diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index ff6c2f6506..0ae1ee95c4 100644 --- a/src/libxl/libxl_driver.c +++ 
b/src/libxl/libxl_driver.c @@ -5204,7 +5204,6 @@ static int libxlDomainGetJobInfo(virDomainPtr dom, virDomainJobInfoPtr info) { - libxlDomainObjPrivate *priv; virDomainObj *vm; int ret =3D -1; unsigned long long timeElapsed =3D 0; @@ -5215,8 +5214,7 @@ libxlDomainGetJobInfo(virDomainPtr dom, if (virDomainGetJobInfoEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - priv =3D vm->privateData; - if (!priv->job.active) { + if (!vm->job->active) { memset(info, 0, sizeof(*info)); info->type =3D VIR_DOMAIN_JOB_NONE; ret =3D 0; @@ -5226,7 +5224,7 @@ libxlDomainGetJobInfo(virDomainPtr dom, /* In libxl we don't have an estimated completion time * thus we always set to unbounded and update time * for the active job. */ - if (libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0) + if (libxlDomainJobGetTimeElapsed(vm->job, &timeElapsed) < 0) goto cleanup; =20 /* setting only these two attributes is enough because libxl never sets @@ -5248,7 +5246,6 @@ libxlDomainGetJobStats(virDomainPtr dom, int *nparams, unsigned int flags) { - libxlDomainObjPrivate *priv; virDomainObj *vm; int ret =3D -1; int maxparams =3D 0; @@ -5263,8 +5260,7 @@ libxlDomainGetJobStats(virDomainPtr dom, if (virDomainGetJobStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - priv =3D vm->privateData; - if (!priv->job.active) { + if (!vm->job->active) { *type =3D VIR_DOMAIN_JOB_NONE; *params =3D NULL; *nparams =3D 0; @@ -5275,7 +5271,7 @@ libxlDomainGetJobStats(virDomainPtr dom, /* In libxl we don't have an estimated completion time * thus we always set to unbounded and update time * for the active job. */ - if (libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0) + if (libxlDomainJobGetTimeElapsed(vm->job, &timeElapsed) < 0) goto cleanup; =20 if (virTypedParamsAddULLong(params, nparams, &maxparams, diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c index f234aaf39c..3dddc0a7d4 100644 --- a/src/lxc/lxc_domain.c +++ b/src/lxc/lxc_domain.c @@ -52,7 +52,6 @@ virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSE= D, virDomainObj *obj, virDomainJob job) { - virLXCDomainObjPrivate *priv =3D obj->privateData; unsigned long long now; unsigned long long then; =20 @@ -60,18 +59,18 @@ virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNU= SED, return -1; then =3D now + LXC_JOB_WAIT_TIME; =20 - while (priv->job.active) { + while (obj->job->active) { VIR_DEBUG("Wait normal job condition for starting job: %s", virDomainJobTypeToString(job)); - if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0) + if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0) goto error; } =20 - virDomainObjResetJob(&priv->job); + virDomainObjResetJob(obj->job); =20 VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job)); - priv->job.active =3D job; - priv->job.owner =3D virThreadSelfID(); + obj->job->active =3D job; + obj->job->owner =3D virThreadSelfID(); =20 return 0; =20 @@ -80,8 +79,8 @@ virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSE= D, " current job is (%s) owned by (%llu)", virDomainJobTypeToString(job), obj->def->name, - virDomainJobTypeToString(priv->job.active), - priv->job.owner); + virDomainJobTypeToString(obj->job->active), + obj->job->owner); =20 if (errno =3D=3D ETIMEDOUT) virReportError(VIR_ERR_OPERATION_TIMEOUT, @@ -103,14 +102,13 @@ void virLXCDomainObjEndJob(virLXCDriver *driver G_GNUC_UNUSED, virDomainObj *obj) { - virLXCDomainObjPrivate *priv =3D obj->privateData; - virDomainJob job =3D priv->job.active; + virDomainJob job =3D obj->job->active; =20 VIR_DEBUG("Stopping job: 
%s", virDomainJobTypeToString(job)); =20 - virDomainObjResetJob(&priv->job); - virCondSignal(&priv->job.cond); + virDomainObjResetJob(obj->job); + virCondSignal(&obj->job->cond); } =20 =20 @@ -119,11 +117,6 @@ virLXCDomainObjPrivateAlloc(void *opaque) { virLXCDomainObjPrivate *priv =3D g_new0(virLXCDomainObjPrivate, 1); =20 - if (virDomainObjInitJob(&priv->job, NULL, NULL) < 0) { - g_free(priv); - return NULL; - } - priv->driver =3D opaque; =20 return priv; @@ -136,7 +129,6 @@ virLXCDomainObjPrivateFree(void *data) virLXCDomainObjPrivate *priv =3D data; =20 virCgroupFree(priv->cgroup); - virDomainObjClearJob(&priv->job); g_free(priv); } =20 diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h index db622acc86..8cbcc0818c 100644 --- a/src/lxc/lxc_domain.h +++ b/src/lxc/lxc_domain.h @@ -66,8 +66,6 @@ struct _virLXCDomainObjPrivate { =20 virCgroup *cgroup; char *machineName; - - virDomainJobObj job; }; =20 extern virXMLNamespace virLXCDriverDomainXMLNamespace; diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index 5280186970..2da520dbc7 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -594,30 +594,30 @@ qemuBackupJobTerminate(virDomainObj *vm, } } =20 - if (priv->job.current) { + if (vm->job->current) { qemuDomainJobDataPrivate *privData =3D NULL; =20 - qemuDomainJobDataUpdateTime(priv->job.current); + qemuDomainJobDataUpdateTime(vm->job->current); =20 - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); - priv->job.completed =3D virDomainJobDataCopy(priv->job.current); + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); + vm->job->completed =3D virDomainJobDataCopy(vm->job->current); =20 - privData =3D priv->job.completed->privateData; + privData =3D vm->job->completed->privateData; =20 privData->stats.backup.total =3D priv->backup->push_total; privData->stats.backup.transferred =3D priv->backup->push_transfer= red; privData->stats.backup.tmp_used =3D priv->backup->pull_tmp_used; privData->stats.backup.tmp_total =3D priv->backup->pull_tmp_total; =20 - priv->job.completed->status =3D jobstatus; - priv->job.completed->errmsg =3D g_strdup(priv->backup->errmsg); + vm->job->completed->status =3D jobstatus; + vm->job->completed->errmsg =3D g_strdup(priv->backup->errmsg); =20 qemuDomainEventEmitJobCompleted(priv->driver, vm); } =20 g_clear_pointer(&priv->backup, virDomainBackupDefFree); =20 - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_BACKUP) + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_BACKUP) qemuDomainObjEndAsyncJob(vm); } =20 @@ -793,7 +793,7 @@ qemuBackupBegin(virDomainObj *vm, qemuDomainObjSetAsyncJobMask(vm, (VIR_JOB_DEFAULT_MASK | JOB_MASK(VIR_JOB_SUSPEND) | JOB_MASK(VIR_JOB_MODIFY))); - qemuDomainJobSetStatsType(priv->job.current, + qemuDomainJobSetStatsType(vm->job->current, QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP); =20 if (!virDomainObjIsActive(vm)) { diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c index 2809ae53b9..4f59e5fb07 100644 --- a/src/qemu/qemu_conf.c +++ b/src/qemu/qemu_conf.c @@ -1272,14 +1272,18 @@ virDomainXMLOption * virQEMUDriverCreateXMLConf(virQEMUDriver *driver, const char *defsecmodel) { + g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); + virQEMUDriverDomainDefParserConfig.priv =3D driver; virQEMUDriverDomainDefParserConfig.defSecModel =3D defsecmodel; + virQEMUDriverDomainJobConfig.maxQueuedJobs =3D cfg->maxQueuedJobs; + return virDomainXMLOptionNew(&virQEMUDriverDomainDefParserConfig, &virQEMUDriverPrivateDataCallbacks, &virQEMUDriverDomainXMLNamespace, &virQEMUDriverDomainABIStability, 
&virQEMUDriverDomainSaveCookie, - NULL); + &virQEMUDriverDomainJobConfig); } =20 =20 diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index d5fef76211..de8de2fd0e 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -278,7 +278,7 @@ qemuDomainObjPrivateXMLParseJobNBD(virDomainObj *vm, return -1; =20 if (n > 0) { - if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + if (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { VIR_WARN("Found disks marked for migration but we were not " "migrating"); n =3D 0; @@ -359,14 +359,46 @@ qemuDomainParseJobPrivate(xmlXPathContextPtr ctxt, return 0; } =20 +static void * +qemuJobDataAllocPrivateData(void) +{ + return g_new0(qemuDomainJobDataPrivate, 1); +} + + +static void * +qemuJobDataCopyPrivateData(void *data) +{ + qemuDomainJobDataPrivate *ret =3D g_new0(qemuDomainJobDataPrivate, 1); + + memcpy(ret, data, sizeof(qemuDomainJobDataPrivate)); + + return ret; +} + =20 -static virDomainObjPrivateJobCallbacks qemuPrivateJobCallbacks =3D { - .allocJobPrivate =3D qemuJobAllocPrivate, - .freeJobPrivate =3D qemuJobFreePrivate, - .resetJobPrivate =3D qemuJobResetPrivate, - .formatJobPrivate =3D qemuDomainFormatJobPrivate, - .parseJobPrivate =3D qemuDomainParseJobPrivate, - .saveStatusPrivate =3D qemuDomainSaveStatus, +static void +qemuJobDataFreePrivateData(void *data) +{ + g_free(data); +} + + +virDomainJobObjConfig virQEMUDriverDomainJobConfig =3D { + .cb =3D { + .allocJobPrivate =3D qemuJobAllocPrivate, + .freeJobPrivate =3D qemuJobFreePrivate, + .resetJobPrivate =3D qemuJobResetPrivate, + .formatJobPrivate =3D qemuDomainFormatJobPrivate, + .parseJobPrivate =3D qemuDomainParseJobPrivate, + .saveStatusPrivate =3D qemuDomainSaveStatus, + }, + .jobDataPrivateCb =3D { + .allocPrivateData =3D qemuJobDataAllocPrivateData, + .copyPrivateData =3D qemuJobDataCopyPrivateData, + .freePrivateData =3D qemuJobDataFreePrivateData, + }, + .maxQueuedJobs =3D 0, }; =20 /** @@ -1719,7 +1751,6 @@ qemuDomainObjPrivateFree(void *data) qemuDomainObjPrivateDataClear(priv); =20 virObjectUnref(priv->monConfig); - virDomainObjClearJob(&priv->job); g_free(priv->lockState); g_free(priv->origname); =20 @@ -1757,22 +1788,12 @@ static void * qemuDomainObjPrivateAlloc(void *opaque) { g_autoptr(qemuDomainObjPrivate) priv =3D g_new0(qemuDomainObjPrivate, = 1); - g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(opaque); - - if (virDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks, - &qemuJobDataPrivateDataCallbacks) < 0) { - virReportSystemError(errno, "%s", - _("Unable to init qemu driver mutexes")); - return NULL; - } =20 if (!(priv->devs =3D virChrdevAlloc())) return NULL; =20 priv->blockjobs =3D virHashNew(virObjectUnref); =20 - priv->job.maxQueuedJobs =3D cfg->maxQueuedJobs; - /* agent commands block by default, user can choose different behavior= */ priv->agentTimeout =3D VIR_DOMAIN_AGENT_RESPONSE_TIMEOUT_BLOCK; priv->migMaxBandwidth =3D QEMU_DOMAIN_MIG_BANDWIDTH_MAX; @@ -5955,14 +5976,14 @@ qemuDomainObjEnterMonitorInternal(virDomainObj *obj, qemuDomainObjEndJob(obj); return -1; } - } else if (priv->job.asyncOwner =3D=3D virThreadSelfID()) { + } else if (obj->job->asyncOwner =3D=3D virThreadSelfID()) { VIR_WARN("This thread seems to be the async job owner; entering" " monitor without asking for a nested job is dangerous"); - } else if (priv->job.owner !=3D virThreadSelfID()) { + } else if (obj->job->owner !=3D virThreadSelfID()) { VIR_WARN("Entering a monitor without owning a job. 
" "Job %s owner %s (%llu)", - virDomainJobTypeToString(priv->job.active), - priv->job.ownerAPI, priv->job.owner); + virDomainJobTypeToString(obj->job->active), + obj->job->ownerAPI, obj->job->owner); } =20 VIR_DEBUG("Entering monitor (mon=3D%p vm=3D%p name=3D%s)", @@ -6001,7 +6022,7 @@ qemuDomainObjExitMonitor(virDomainObj *obj) if (!hasRefs) priv->mon =3D NULL; =20 - if (priv->job.active =3D=3D VIR_JOB_ASYNC_NESTED) + if (obj->job->active =3D=3D VIR_JOB_ASYNC_NESTED) qemuDomainObjEndJob(obj); } =20 diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 592ee9805b..ef149b9fa9 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -99,8 +99,6 @@ typedef struct _qemuDomainObjPrivate qemuDomainObjPrivate; struct _qemuDomainObjPrivate { virQEMUDriver *driver; =20 - virDomainJobObj job; - virBitmap *namespaces; =20 virEventThread *eventThread; @@ -775,6 +773,7 @@ extern virXMLNamespace virQEMUDriverDomainXMLNamespace; extern virDomainDefParserConfig virQEMUDriverDomainDefParserConfig; extern virDomainABIStability virQEMUDriverDomainABIStability; extern virSaveCookieCallbacks virQEMUDriverDomainSaveCookie; +extern virDomainJobObjConfig virQEMUDriverDomainJobConfig; =20 int qemuDomainUpdateDeviceList(virDomainObj *vm, int asyncJob); =20 diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index a6ea7b2f58..0ecadddbc7 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -31,38 +31,6 @@ =20 VIR_LOG_INIT("qemu.qemu_domainjob"); =20 -static void * -qemuJobDataAllocPrivateData(void) -{ - return g_new0(qemuDomainJobDataPrivate, 1); -} - - -static void * -qemuJobDataCopyPrivateData(void *data) -{ - qemuDomainJobDataPrivate *ret =3D g_new0(qemuDomainJobDataPrivate, 1); - - memcpy(ret, data, sizeof(qemuDomainJobDataPrivate)); - - return ret; -} - - -static void -qemuJobDataFreePrivateData(void *data) -{ - g_free(data); -} - - -virDomainJobDataPrivateDataCallbacks qemuJobDataPrivateDataCallbacks =3D { - .allocPrivateData =3D qemuJobDataAllocPrivateData, - .copyPrivateData =3D qemuJobDataCopyPrivateData, - .freePrivateData =3D qemuJobDataFreePrivateData, -}; - - void qemuDomainJobSetStatsType(virDomainJobData *jobData, qemuDomainJobStatsType type) @@ -130,16 +98,15 @@ void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver, virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; virObjectEvent *event; virTypedParameterPtr params =3D NULL; int nparams =3D 0; int type; =20 - if (!priv->job.completed) + if (!vm->job->completed) return; =20 - if (qemuDomainJobDataToParams(priv->job.completed, &type, + if (qemuDomainJobDataToParams(vm->job->completed, &type, ¶ms, &nparams) < 0) { VIR_WARN("Could not get stats for completed job; domain %s", vm->def->name); @@ -160,8 +127,7 @@ qemuDomainObjRestoreAsyncJob(virDomainObj *vm, virDomainJobStatus status, unsigned long long allowedJobs) { - qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobObj *job =3D &priv->job; + virDomainJobObj *job =3D vm->job; =20 VIR_DEBUG("Restoring %s async job for domain %s", virDomainAsyncJobTypeToString(asyncJob), vm->def->name); @@ -177,8 +143,8 @@ qemuDomainObjRestoreAsyncJob(virDomainObj *vm, =20 qemuDomainObjSetAsyncJobMask(vm, allowedJobs); =20 - job->current =3D virDomainJobDataInit(&qemuJobDataPrivateDataCallbacks= ); - qemuDomainJobSetStatsType(priv->job.current, statsType); + job->current =3D virDomainJobDataInit(&virQEMUDriverDomainJobConfig.jo= bDataPrivateCb); + qemuDomainJobSetStatsType(vm->job->current, statsType); job->current->operation 
=3D operation; job->current->status =3D status; job->current->started =3D started; @@ -603,25 +569,24 @@ void qemuDomainObjSetJobPhase(virDomainObj *obj, int phase) { - qemuDomainObjPrivate *priv =3D obj->privateData; unsigned long long me =3D virThreadSelfID(); =20 - if (!priv->job.asyncJob) + if (!obj->job->asyncJob) return; =20 VIR_DEBUG("Setting '%s' phase to '%s'", - virDomainAsyncJobTypeToString(priv->job.asyncJob), - qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, phase)); + virDomainAsyncJobTypeToString(obj->job->asyncJob), + qemuDomainAsyncJobPhaseToString(obj->job->asyncJob, phase)); =20 - if (priv->job.asyncOwner !=3D 0 && - priv->job.asyncOwner !=3D me) { + if (obj->job->asyncOwner !=3D 0 && + obj->job->asyncOwner !=3D me) { VIR_WARN("'%s' async job is owned by thread %llu, API '%s'", - virDomainAsyncJobTypeToString(priv->job.asyncJob), - priv->job.asyncOwner, - NULLSTR(priv->job.asyncOwnerAPI)); + virDomainAsyncJobTypeToString(obj->job->asyncJob), + obj->job->asyncOwner, + NULLSTR(obj->job->asyncOwnerAPI)); } =20 - priv->job.phase =3D phase; + obj->job->phase =3D phase; qemuDomainSaveStatus(obj); } =20 @@ -634,26 +599,25 @@ void qemuDomainObjStartJobPhase(virDomainObj *obj, int phase) { - qemuDomainObjPrivate *priv =3D obj->privateData; unsigned long long me =3D virThreadSelfID(); =20 - if (!priv->job.asyncJob) + if (!obj->job->asyncJob) return; =20 VIR_DEBUG("Starting phase '%s' of '%s' job", - qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, phase), - virDomainAsyncJobTypeToString(priv->job.asyncJob)); + qemuDomainAsyncJobPhaseToString(obj->job->asyncJob, phase), + virDomainAsyncJobTypeToString(obj->job->asyncJob)); =20 - if (priv->job.asyncOwner =3D=3D 0) { - priv->job.asyncOwnerAPI =3D g_strdup(virThreadJobGet()); - } else if (me !=3D priv->job.asyncOwner) { + if (obj->job->asyncOwner =3D=3D 0) { + obj->job->asyncOwnerAPI =3D g_strdup(virThreadJobGet()); + } else if (me !=3D obj->job->asyncOwner) { VIR_WARN("'%s' async job is owned by thread %llu, API '%s'", - virDomainAsyncJobTypeToString(priv->job.asyncJob), - priv->job.asyncOwner, - NULLSTR(priv->job.asyncOwnerAPI)); + virDomainAsyncJobTypeToString(obj->job->asyncJob), + obj->job->asyncOwner, + NULLSTR(obj->job->asyncOwnerAPI)); } =20 - priv->job.asyncOwner =3D me; + obj->job->asyncOwner =3D me; qemuDomainObjSetJobPhase(obj, phase); } =20 @@ -662,39 +626,33 @@ void qemuDomainObjSetAsyncJobMask(virDomainObj *obj, unsigned long long allowedJobs) { - qemuDomainObjPrivate *priv =3D obj->privateData; - - if (!priv->job.asyncJob) + if (!obj->job->asyncJob) return; =20 - priv->job.mask =3D allowedJobs | JOB_MASK(VIR_JOB_DESTROY); + obj->job->mask =3D allowedJobs | JOB_MASK(VIR_JOB_DESTROY); } =20 void qemuDomainObjDiscardAsyncJob(virDomainObj *obj) { - qemuDomainObjPrivate *priv =3D obj->privateData; - - if (priv->job.active =3D=3D VIR_JOB_ASYNC_NESTED) - virDomainObjResetJob(&priv->job); - virDomainObjResetAsyncJob(&priv->job); + if (obj->job->active =3D=3D VIR_JOB_ASYNC_NESTED) + virDomainObjResetJob(obj->job); + virDomainObjResetAsyncJob(obj->job); qemuDomainSaveStatus(obj); } =20 void qemuDomainObjReleaseAsyncJob(virDomainObj *obj) { - qemuDomainObjPrivate *priv =3D obj->privateData; - VIR_DEBUG("Releasing ownership of '%s' async job", - virDomainAsyncJobTypeToString(priv->job.asyncJob)); + virDomainAsyncJobTypeToString(obj->job->asyncJob)); =20 - if (priv->job.asyncOwner !=3D virThreadSelfID()) { + if (obj->job->asyncOwner !=3D virThreadSelfID()) { VIR_WARN("'%s' async job is owned by thread %llu", - 
virDomainAsyncJobTypeToString(priv->job.asyncJob), - priv->job.asyncOwner); + virDomainAsyncJobTypeToString(obj->job->asyncJob), + obj->job->asyncOwner); } - priv->job.asyncOwner =3D 0; + obj->job->asyncOwner =3D 0; } =20 /* @@ -708,9 +666,7 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj) int qemuDomainObjBeginJob(virDomainObj *obj, virDomainJob job) { - qemuDomainObjPrivate *priv =3D obj->privateData; - - if (virDomainObjBeginJobInternal(obj, &priv->job, job, + if (virDomainObjBeginJobInternal(obj, obj->job, job, VIR_AGENT_JOB_NONE, VIR_ASYNC_JOB_NONE, false) < 0) return -1; @@ -728,9 +684,7 @@ int qemuDomainObjBeginAgentJob(virDomainObj *obj, virDomainAgentJob agentJob) { - qemuDomainObjPrivate *priv =3D obj->privateData; - - return virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_NONE, + return virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_NONE, agentJob, VIR_ASYNC_JOB_NONE, false); } @@ -740,16 +694,13 @@ int qemuDomainObjBeginAsyncJob(virDomainObj *obj, virDomainJobOperation operation, unsigned long apiFlags) { - qemuDomainObjPrivate *priv =3D obj->privateData; - - if (virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_ASYNC, + if (virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_ASYNC, VIR_AGENT_JOB_NONE, asyncJob, false) < 0) return -1; =20 - priv =3D obj->privateData; - priv->job.current->operation =3D operation; - priv->job.apiFlags =3D apiFlags; + obj->job->current->operation =3D operation; + obj->job->apiFlags =3D apiFlags; return 0; } =20 @@ -757,21 +708,19 @@ int qemuDomainObjBeginNestedJob(virDomainObj *obj, virDomainAsyncJob asyncJob) { - qemuDomainObjPrivate *priv =3D obj->privateData; - - if (asyncJob !=3D priv->job.asyncJob) { + if (asyncJob !=3D obj->job->asyncJob) { virReportError(VIR_ERR_INTERNAL_ERROR, _("unexpected async job %d type expected %d"), - asyncJob, priv->job.asyncJob); + asyncJob, obj->job->asyncJob); return -1; } =20 - if (priv->job.asyncOwner !=3D virThreadSelfID()) { + if (obj->job->asyncOwner !=3D virThreadSelfID()) { VIR_WARN("This thread doesn't seem to be the async job owner: %llu= ", - priv->job.asyncOwner); + obj->job->asyncOwner); } =20 - return virDomainObjBeginJobInternal(obj, &priv->job, + return virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_ASYNC_NESTED, VIR_AGENT_JOB_NONE, VIR_ASYNC_JOB_NONE, @@ -794,9 +743,7 @@ int qemuDomainObjBeginJobNowait(virDomainObj *obj, virDomainJob job) { - qemuDomainObjPrivate *priv =3D obj->privateData; - - return virDomainObjBeginJobInternal(obj, &priv->job, job, + return virDomainObjBeginJobInternal(obj, obj->job, job, VIR_AGENT_JOB_NONE, VIR_ASYNC_JOB_NONE, true); } @@ -810,69 +757,63 @@ qemuDomainObjBeginJobNowait(virDomainObj *obj, void qemuDomainObjEndJob(virDomainObj *obj) { - qemuDomainObjPrivate *priv =3D obj->privateData; - virDomainJob job =3D priv->job.active; + virDomainJob job =3D obj->job->active; =20 - priv->job.jobsQueued--; + obj->job->jobsQueued--; =20 VIR_DEBUG("Stopping job: %s (async=3D%s vm=3D%p name=3D%s)", virDomainJobTypeToString(job), - virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(obj->job->asyncJob), obj, obj->def->name); =20 - virDomainObjResetJob(&priv->job); + virDomainObjResetJob(obj->job); if (virDomainTrackJob(job)) qemuDomainSaveStatus(obj); /* We indeed need to wake up ALL threads waiting because * grabbing a job requires checking more variables. 
*/ - virCondBroadcast(&priv->job.cond); + virCondBroadcast(&obj->job->cond); } =20 void qemuDomainObjEndAgentJob(virDomainObj *obj) { - qemuDomainObjPrivate *priv =3D obj->privateData; - virDomainAgentJob agentJob =3D priv->job.agentActive; + virDomainAgentJob agentJob =3D obj->job->agentActive; =20 - priv->job.jobsQueued--; + obj->job->jobsQueued--; =20 VIR_DEBUG("Stopping agent job: %s (async=3D%s vm=3D%p name=3D%s)", virDomainAgentJobTypeToString(agentJob), - virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(obj->job->asyncJob), obj, obj->def->name); =20 - virDomainObjResetAgentJob(&priv->job); + virDomainObjResetAgentJob(obj->job); /* We indeed need to wake up ALL threads waiting because * grabbing a job requires checking more variables. */ - virCondBroadcast(&priv->job.cond); + virCondBroadcast(&obj->job->cond); } =20 void qemuDomainObjEndAsyncJob(virDomainObj *obj) { - qemuDomainObjPrivate *priv =3D obj->privateData; - - priv->job.jobsQueued--; + obj->job->jobsQueued--; =20 VIR_DEBUG("Stopping async job: %s (vm=3D%p name=3D%s)", - virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(obj->job->asyncJob), obj, obj->def->name); =20 - virDomainObjResetAsyncJob(&priv->job); + virDomainObjResetAsyncJob(obj->job); qemuDomainSaveStatus(obj); - virCondBroadcast(&priv->job.asyncCond); + virCondBroadcast(&obj->job->asyncCond); } =20 void qemuDomainObjAbortAsyncJob(virDomainObj *obj) { - qemuDomainObjPrivate *priv =3D obj->privateData; - VIR_DEBUG("Requesting abort of async job: %s (vm=3D%p name=3D%s)", - virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(obj->job->asyncJob), obj, obj->def->name); =20 - priv->job.abortJob =3D true; + obj->job->abortJob =3D true; virDomainObjBroadcast(obj); } =20 @@ -880,35 +821,34 @@ int qemuDomainObjPrivateXMLFormatJob(virBuffer *buf, virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); - virDomainJob job =3D priv->job.active; + virDomainJob job =3D vm->job->active; =20 if (!virDomainTrackJob(job)) job =3D VIR_JOB_NONE; =20 if (job =3D=3D VIR_JOB_NONE && - priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) + vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_NONE) return 0; =20 virBufferAsprintf(&attrBuf, " type=3D'%s' async=3D'%s'", virDomainJobTypeToString(job), - virDomainAsyncJobTypeToString(priv->job.asyncJob)); + virDomainAsyncJobTypeToString(vm->job->asyncJob)); =20 - if (priv->job.phase) { + if (vm->job->phase) { virBufferAsprintf(&attrBuf, " phase=3D'%s'", - qemuDomainAsyncJobPhaseToString(priv->job.asyncJ= ob, - priv->job.phase)= ); + qemuDomainAsyncJobPhaseToString(vm->job->asyncJo= b, + vm->job->phase)); } =20 - if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_NONE) { - virBufferAsprintf(&attrBuf, " flags=3D'0x%lx'", priv->job.apiFlags= ); - virBufferAsprintf(&attrBuf, " asyncStarted=3D'%llu'", priv->job.as= yncStarted); + if (vm->job->asyncJob !=3D VIR_ASYNC_JOB_NONE) { + virBufferAsprintf(&attrBuf, " flags=3D'0x%lx'", vm->job->apiFlags); + virBufferAsprintf(&attrBuf, " asyncStarted=3D'%llu'", vm->job->asy= ncStarted); } =20 - if (priv->job.cb && - priv->job.cb->formatJobPrivate(&childBuf, &priv->job, vm) < 0) + if (vm->job->cb && + vm->job->cb->formatJobPrivate(&childBuf, vm->job, vm) < 0) return -1; =20 virXMLFormatElement(buf, "job", &attrBuf, &childBuf); @@ -921,8 +861,7 @@ int qemuDomainObjPrivateXMLParseJob(virDomainObj *vm, xmlXPathContextPtr 
ctxt) { - qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobObj *job =3D &priv->job; + virDomainJobObj *job =3D vm->job; VIR_XPATH_NODE_AUTORESTORE(ctxt) g_autofree char *tmp =3D NULL; =20 @@ -938,7 +877,7 @@ qemuDomainObjPrivateXMLParseJob(virDomainObj *vm, return -1; } VIR_FREE(tmp); - priv->job.active =3D type; + vm->job->active =3D type; } =20 if ((tmp =3D virXPathString("string(@async)", ctxt))) { @@ -950,11 +889,11 @@ qemuDomainObjPrivateXMLParseJob(virDomainObj *vm, return -1; } VIR_FREE(tmp); - priv->job.asyncJob =3D async; + vm->job->asyncJob =3D async; =20 if ((tmp =3D virXPathString("string(@phase)", ctxt))) { - priv->job.phase =3D qemuDomainAsyncJobPhaseFromString(async, t= mp); - if (priv->job.phase < 0) { + vm->job->phase =3D qemuDomainAsyncJobPhaseFromString(async, tm= p); + if (vm->job->phase < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, _("Unknown job phase %s"), tmp); return -1; @@ -963,20 +902,20 @@ qemuDomainObjPrivateXMLParseJob(virDomainObj *vm, } =20 if (virXPathULongLong("string(@asyncStarted)", ctxt, - &priv->job.asyncStarted) =3D=3D -2) { + &vm->job->asyncStarted) =3D=3D -2) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid async job start")); return -1; } } =20 - if (virXPathULongHex("string(@flags)", ctxt, &priv->job.apiFlags) =3D= =3D -2) { + if (virXPathULongHex("string(@flags)", ctxt, &vm->job->apiFlags) =3D= =3D -2) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid job flags"= )); return -1; } =20 - if (priv->job.cb && - priv->job.cb->parseJobPrivate(ctxt, job, vm) < 0) + if (vm->job->cb && + vm->job->cb->parseJobPrivate(ctxt, job, vm) < 0) return -1; =20 return 0; diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 23eadc26a7..201d7857a8 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -58,8 +58,6 @@ struct _qemuDomainJobDataPrivate { qemuDomainMirrorStats mirrorStats; }; =20 -extern virDomainJobDataPrivateDataCallbacks qemuJobDataPrivateDataCallback= s; - void qemuDomainJobSetStatsType(virDomainJobData *jobData, qemuDomainJobStatsType type); =20 diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 707f4cc1bb..ce54890d44 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -1670,7 +1670,6 @@ static int qemuDomainSuspend(virDomainPtr dom) virQEMUDriver *driver =3D dom->conn->privateData; virDomainObj *vm; int ret =3D -1; - qemuDomainObjPrivate *priv; virDomainPausedReason reason; int state; =20 @@ -1680,17 +1679,15 @@ static int qemuDomainSuspend(virDomainPtr dom) if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - priv =3D vm->privateData; - if (qemuDomainObjBeginJob(vm, VIR_JOB_SUSPEND) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) goto endjob; =20 - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) reason =3D VIR_DOMAIN_PAUSED_MIGRATION; - else if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_SNAPSHOT) + else if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_SNAPSHOT) reason =3D VIR_DOMAIN_PAUSED_SNAPSHOT; else reason =3D VIR_DOMAIN_PAUSED_USER; @@ -2101,7 +2098,7 @@ qemuDomainDestroyFlags(virDomainPtr dom, =20 qemuDomainSetFakeReboot(vm, false); =20 - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) stopFlags |=3D VIR_QEMU_PROCESS_STOP_MIGRATED; =20 qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_DESTROYED, @@ -2587,12 +2584,12 @@ qemuDomainGetControlInfo(virDomainPtr dom, if (priv->monError) { 
info->state =3D VIR_DOMAIN_CONTROL_ERROR; info->details =3D VIR_DOMAIN_CONTROL_ERROR_REASON_MONITOR; - } else if (priv->job.active) { + } else if (vm->job->active) { if (virTimeMillisNow(&info->stateTime) < 0) goto cleanup; - if (priv->job.current) { + if (vm->job->current) { info->state =3D VIR_DOMAIN_CONTROL_JOB; - info->stateTime -=3D priv->job.current->started; + info->stateTime -=3D vm->job->current->started; } else { if (priv->monStart > 0) { info->state =3D VIR_DOMAIN_CONTROL_OCCUPIED; @@ -2651,7 +2648,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, goto endjob; } =20 - qemuDomainJobSetStatsType(priv->job.current, + qemuDomainJobSetStatsType(vm->job->current, QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP); =20 /* Pause */ @@ -3014,28 +3011,27 @@ qemuDomainManagedSaveRemove(virDomainPtr dom, unsig= ned int flags) static int qemuDumpWaitForCompletion(virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; - qemuDomainJobDataPrivate *privJobCurrent =3D priv->job.current->privat= eData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; + qemuDomainJobDataPrivate *privJobCurrent =3D vm->job->current->private= Data; =20 VIR_DEBUG("Waiting for dump completion"); - while (!jobPriv->dumpCompleted && !priv->job.abortJob) { + while (!jobPriv->dumpCompleted && !vm->job->abortJob) { if (qemuDomainObjWait(vm) < 0) return -1; } =20 if (privJobCurrent->stats.dump.status =3D=3D QEMU_MONITOR_DUMP_STATUS_= FAILED) { - if (priv->job.error) + if (vm->job->error) virReportError(VIR_ERR_OPERATION_FAILED, _("memory-only dump failed: %s"), - priv->job.error); + vm->job->error); else virReportError(VIR_ERR_OPERATION_FAILED, "%s", _("memory-only dump failed for unknown reason")= ); =20 return -1; } - qemuDomainJobDataUpdateTime(priv->job.current); + qemuDomainJobDataUpdateTime(vm->job->current); =20 return 0; } @@ -3058,10 +3054,10 @@ qemuDumpToFd(virQEMUDriver *driver, return -1; =20 if (detach) { - qemuDomainJobSetStatsType(priv->job.current, + qemuDomainJobSetStatsType(vm->job->current, QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP); } else { - g_clear_pointer(&priv->job.current, virDomainJobDataFree); + g_clear_pointer(&vm->job->current, virDomainJobDataFree); } =20 if (qemuDomainObjEnterMonitorAsync(vm, asyncJob) < 0) @@ -3220,7 +3216,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, goto endjob; =20 priv =3D vm->privateData; - qemuDomainJobSetStatsType(priv->job.current, + qemuDomainJobSetStatsType(vm->job->current, QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP); =20 /* Migrate will always stop the VM, so the resume condition is @@ -4056,7 +4052,7 @@ processMonitorEOFEvent(virQEMUDriver *driver, auditReason =3D "failed"; } =20 - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { stopFlags |=3D VIR_QEMU_PROCESS_STOP_MIGRATED; qemuMigrationDstErrorSave(driver, vm->def->name, qemuMonitorLastError(priv->mon)); @@ -6505,7 +6501,6 @@ qemuDomainObjStart(virConnectPtr conn, bool force_boot =3D (flags & VIR_DOMAIN_START_FORCE_BOOT) !=3D 0; bool reset_nvram =3D (flags & VIR_DOMAIN_START_RESET_NVRAM) !=3D 0; unsigned int start_flags =3D VIR_QEMU_PROCESS_START_COLD; - qemuDomainObjPrivate *priv =3D vm->privateData; =20 start_flags |=3D start_paused ? VIR_QEMU_PROCESS_START_PAUSED : 0; start_flags |=3D autodestroy ? 
VIR_QEMU_PROCESS_START_AUTODESTROY : 0; @@ -6527,8 +6522,8 @@ qemuDomainObjStart(virConnectPtr conn, } vm->hasManagedSave =3D false; } else { - virDomainJobOperation op =3D priv->job.current->operation; - priv->job.current->operation =3D VIR_DOMAIN_JOB_OPERATION_REST= ORE; + virDomainJobOperation op =3D vm->job->current->operation; + vm->job->current->operation =3D VIR_DOMAIN_JOB_OPERATION_RESTO= RE; =20 ret =3D qemuDomainObjRestore(conn, driver, vm, managed_save, start_paused, bypass_cache, @@ -6547,7 +6542,7 @@ qemuDomainObjStart(virConnectPtr conn, return ret; } else { VIR_WARN("Ignoring incomplete managed state %s", managed_s= ave); - priv->job.current->operation =3D op; + vm->job->current->operation =3D op; vm->hasManagedSave =3D false; } } @@ -12663,20 +12658,19 @@ qemuDomainGetJobStatsInternal(virDomainObj *vm, bool completed, virDomainJobData **jobData) { - qemuDomainObjPrivate *priv =3D vm->privateData; qemuDomainJobDataPrivate *privStats =3D NULL; int ret =3D -1; =20 *jobData =3D NULL; =20 if (completed) { - if (priv->job.completed && !priv->job.current) - *jobData =3D virDomainJobDataCopy(priv->job.completed); + if (vm->job->completed && !vm->job->current) + *jobData =3D virDomainJobDataCopy(vm->job->completed); =20 return 0; } =20 - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", _("migration statistics are available only on " "the source host")); @@ -12689,11 +12683,11 @@ qemuDomainGetJobStatsInternal(virDomainObj *vm, if (virDomainObjCheckActive(vm) < 0) goto cleanup; =20 - if (!priv->job.current) { + if (!vm->job->current) { ret =3D 0; goto cleanup; } - *jobData =3D virDomainJobDataCopy(priv->job.current); + *jobData =3D virDomainJobDataCopy(vm->job->current); =20 privStats =3D (*jobData)->privateData; =20 @@ -12767,7 +12761,6 @@ qemuDomainGetJobStats(virDomainPtr dom, unsigned int flags) { virDomainObj *vm; - qemuDomainObjPrivate *priv; g_autoptr(virDomainJobData) jobData =3D NULL; bool completed =3D !!(flags & VIR_DOMAIN_JOB_STATS_COMPLETED); int ret =3D -1; @@ -12781,7 +12774,6 @@ qemuDomainGetJobStats(virDomainPtr dom, if (virDomainGetJobStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - priv =3D vm->privateData; if (qemuDomainGetJobStatsInternal(vm, completed, &jobData) < 0) goto cleanup; =20 @@ -12797,7 +12789,7 @@ qemuDomainGetJobStats(virDomainPtr dom, ret =3D qemuDomainJobDataToParams(jobData, type, params, nparams); =20 if (completed && ret =3D=3D 0 && !(flags & VIR_DOMAIN_JOB_STATS_KEEP_C= OMPLETED)) - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); =20 cleanup: virDomainObjEndAPI(&vm); @@ -12873,14 +12865,14 @@ qemuDomainAbortJobFlags(virDomainPtr dom, priv =3D vm->privateData; =20 if (flags & VIR_DOMAIN_ABORT_JOB_POSTCOPY && - (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT || + (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT || !virDomainObjIsPostcopy(vm, VIR_DOMAIN_JOB_OPERATION_MIGRATION_OU= T))) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("current job is not outgoing migration in post-co= py mode")); goto endjob; } =20 - switch (priv->job.asyncJob) { + switch (vm->job->asyncJob) { case VIR_ASYNC_JOB_NONE: virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("no job is active on the domain")); @@ -12910,7 +12902,7 @@ qemuDomainAbortJobFlags(virDomainPtr dom, break; =20 case VIR_ASYNC_JOB_DUMP: - if (priv->job.apiFlags & 
VIR_DUMP_MEMORY_ONLY) { + if (vm->job->apiFlags & VIR_DUMP_MEMORY_ONLY) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("cannot abort memory-only dump")); goto endjob; @@ -12930,7 +12922,7 @@ qemuDomainAbortJobFlags(virDomainPtr dom, =20 case VIR_ASYNC_JOB_LAST: default: - virReportEnumRangeError(virDomainAsyncJob, priv->job.asyncJob); + virReportEnumRangeError(virDomainAsyncJob, vm->job->asyncJob); break; } =20 @@ -13339,14 +13331,14 @@ qemuDomainMigrateStartPostCopy(virDomainPtr dom, =20 priv =3D vm->privateData; =20 - if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + if (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("post-copy can only be started while " "outgoing migration is in progress")); goto endjob; } =20 - if (!(priv->job.apiFlags & VIR_MIGRATE_POSTCOPY)) { + if (!(vm->job->apiFlags & VIR_MIGRATE_POSTCOPY)) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("switching to post-copy requires migration to be " "started with VIR_MIGRATE_POSTCOPY flag")); diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index b3b25d78b4..2572f47385 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -86,10 +86,8 @@ VIR_ENUM_IMPL(qemuMigrationJobPhase, static bool ATTRIBUTE_NONNULL(1) qemuMigrationJobIsAllowed(virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; - - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN || - priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN || + vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) { virReportError(VIR_ERR_OPERATION_INVALID, _("another migration job is already running for dom= ain '%s'"), vm->def->name); @@ -105,7 +103,6 @@ qemuMigrationJobStart(virDomainObj *vm, virDomainAsyncJob job, unsigned long apiFlags) { - qemuDomainObjPrivate *priv =3D vm->privateData; virDomainJobOperation op; unsigned long long mask; =20 @@ -126,7 +123,7 @@ qemuMigrationJobStart(virDomainObj *vm, if (qemuDomainObjBeginAsyncJob(vm, job, op, apiFlags) < 0) return -1; =20 - qemuDomainJobSetStatsType(priv->job.current, + qemuDomainJobSetStatsType(vm->job->current, QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION); =20 qemuDomainObjSetAsyncJobMask(vm, mask); @@ -138,13 +135,11 @@ static int qemuMigrationCheckPhase(virDomainObj *vm, qemuMigrationJobPhase phase) { - qemuDomainObjPrivate *priv =3D vm->privateData; - if (phase < QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && - phase < priv->job.phase) { + phase < vm->job->phase) { virReportError(VIR_ERR_INTERNAL_ERROR, _("migration protocol going backwards %s =3D> %s"), - qemuMigrationJobPhaseTypeToString(priv->job.phase), + qemuMigrationJobPhaseTypeToString(vm->job->phase), qemuMigrationJobPhaseTypeToString(phase)); return -1; } @@ -190,9 +185,7 @@ static bool ATTRIBUTE_NONNULL(1) qemuMigrationJobIsActive(virDomainObj *vm, virDomainAsyncJob job) { - qemuDomainObjPrivate *priv =3D vm->privateData; - - if (priv->job.asyncJob !=3D job) { + if (vm->job->asyncJob !=3D job) { const char *msg; =20 if (job =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) @@ -956,7 +949,7 @@ qemuMigrationSrcCancelRemoveTempBitmaps(virDomainObj *v= m, virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; GSList *next; =20 for (next =3D jobPriv->migTempBitmaps; next; next =3D next->next) { @@ -1236,10 +1229,10 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver 
*drive= r, if (rv < 0) return -1; =20 - if (priv->job.abortJob) { - priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; + if (vm->job->abortJob) { + vm->job->current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), - virDomainAsyncJobTypeToString(priv->job.asyncJo= b), + virDomainAsyncJobTypeToString(vm->job->asyncJob= ), _("canceled by client")); return -1; } @@ -1255,7 +1248,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *driver, } =20 qemuMigrationSrcFetchMirrorStats(vm, VIR_ASYNC_JOB_MIGRATION_OUT, - priv->job.current); + vm->job->current); return 0; } =20 @@ -1718,14 +1711,13 @@ qemuMigrationDstPostcopyFailed(virDomainObj *vm) static void qemuMigrationSrcWaitForSpice(virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; =20 if (!jobPriv->spiceMigration) return; =20 VIR_DEBUG("Waiting for SPICE to finish migration"); - while (!jobPriv->spiceMigrated && !priv->job.abortJob) { + while (!jobPriv->spiceMigrated && !vm->job->abortJob) { if (qemuDomainObjWait(vm) < 0) return; } @@ -1810,9 +1802,7 @@ qemuMigrationAnyFetchStats(virDomainObj *vm, static const char * qemuMigrationJobName(virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; - - switch (priv->job.asyncJob) { + switch (vm->job->asyncJob) { case VIR_ASYNC_JOB_MIGRATION_OUT: return _("migration out"); case VIR_ASYNC_JOB_SAVE: @@ -1839,8 +1829,7 @@ static int qemuMigrationJobCheckStatus(virDomainObj *vm, virDomainAsyncJob asyncJob) { - qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobData *jobData =3D priv->job.current; + virDomainJobData *jobData =3D vm->job->current; qemuDomainJobDataPrivate *privJob =3D jobData->privateData; g_autofree char *error =3D NULL; =20 @@ -1916,8 +1905,7 @@ qemuMigrationAnyCompleted(virDomainObj *vm, virConnectPtr dconn, unsigned int flags) { - qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobData *jobData =3D priv->job.current; + virDomainJobData *jobData =3D vm->job->current; int pauseReason; =20 if (qemuMigrationJobCheckStatus(vm, asyncJob) < 0) @@ -2009,7 +1997,7 @@ qemuMigrationSrcWaitForCompletion(virDomainObj *vm, unsigned int flags) { qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobData *jobData =3D priv->job.current; + virDomainJobData *jobData =3D vm->job->current; int rv; =20 jobData->status =3D VIR_DOMAIN_JOB_STATUS_MIGRATING; @@ -2029,9 +2017,9 @@ qemuMigrationSrcWaitForCompletion(virDomainObj *vm, =20 qemuDomainJobDataUpdateTime(jobData); qemuDomainJobDataUpdateDowntime(jobData); - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); - priv->job.completed =3D virDomainJobDataCopy(jobData); - priv->job.completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); + vm->job->completed =3D virDomainJobDataCopy(jobData); + vm->job->completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; =20 if (asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT && jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED) @@ -2143,7 +2131,7 @@ qemuMigrationSrcGraphicsRelocate(virDomainObj *vm, return 0; =20 if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_MIGRATION_OUT) = =3D=3D 0) { - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; =20 rc =3D qemuMonitorGraphicsRelocate(priv->mon, type, listenAddress, port, tlsPort, tlsSubject); @@ 
-2276,16 +2264,15 @@ static void qemuMigrationAnyConnectionClosed(virDomainObj *vm, virConnectPtr conn) { - qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; bool postcopy =3D false; int phase; =20 VIR_DEBUG("vm=3D%s, conn=3D%p, asyncJob=3D%s, phase=3D%s", vm->def->name, conn, - virDomainAsyncJobTypeToString(priv->job.asyncJob), - qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, - priv->job.phase)); + virDomainAsyncJobTypeToString(vm->job->asyncJob), + qemuDomainAsyncJobPhaseToString(vm->job->asyncJob, + vm->job->phase)); =20 if (!qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_IN) && !qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_OUT)) @@ -2294,7 +2281,7 @@ qemuMigrationAnyConnectionClosed(virDomainObj *vm, VIR_WARN("The connection which controls migration of domain %s was clo= sed", vm->def->name); =20 - switch ((qemuMigrationJobPhase) priv->job.phase) { + switch ((qemuMigrationJobPhase) vm->job->phase) { case QEMU_MIGRATION_PHASE_BEGIN3: VIR_DEBUG("Aborting outgoing migration after Begin phase"); break; @@ -2346,14 +2333,14 @@ qemuMigrationAnyConnectionClosed(virDomainObj *vm, ignore_value(qemuMigrationJobStartPhase(vm, phase)); =20 if (postcopy) { - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) qemuMigrationSrcPostcopyFailed(vm); else qemuMigrationDstPostcopyFailed(vm); qemuMigrationJobContinue(vm, qemuProcessCleanupMigrationJob); } else { - qemuMigrationParamsReset(vm, priv->job.asyncJob, - jobPriv->migParams, priv->job.apiFlags); + qemuMigrationParamsReset(vm, vm->job->asyncJob, + jobPriv->migParams, vm->job->apiFlags); qemuMigrationJobFinish(vm); } } @@ -2377,12 +2364,11 @@ qemuMigrationSrcBeginPhaseBlockDirtyBitmaps(qemuMig= rationCookie *mig, =20 { GSList *disks =3D NULL; - qemuDomainObjPrivate *priv =3D vm->privateData; size_t i; =20 g_autoptr(GHashTable) blockNamedNodeData =3D NULL; =20 - if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, priv->job.a= syncJob))) + if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, vm->job->as= yncJob))) return -1; =20 for (i =3D 0; i < vm->def->ndisks; i++) { @@ -2452,7 +2438,7 @@ qemuMigrationAnyRefreshStatus(virDomainObj *vm, g_autoptr(virDomainJobData) jobData =3D NULL; qemuDomainJobDataPrivate *priv; =20 - jobData =3D virDomainJobDataInit(&qemuJobDataPrivateDataCallbacks); + jobData =3D virDomainJobDataInit(&virQEMUDriverDomainJobConfig.jobData= PrivateCb); priv =3D jobData->privateData; =20 if (qemuMigrationAnyFetchStats(vm, asyncJob, jobData, NULL) < 0) @@ -2549,11 +2535,11 @@ qemuMigrationSrcBeginPhase(virQEMUDriver *driver, * Otherwise we will start the async job later in the perform phase lo= sing * change protection. 
*/ - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && qemuMigrationJobStartPhase(vm, QEMU_MIGRATION_PHASE_BEGIN3) < 0) return NULL; =20 - if (!qemuMigrationSrcIsAllowed(driver, vm, true, priv->job.asyncJob, f= lags)) + if (!qemuMigrationSrcIsAllowed(driver, vm, true, vm->job->asyncJob, fl= ags)) return NULL; =20 if (!(flags & (VIR_MIGRATE_UNSAFE | VIR_MIGRATE_OFFLINE)) && @@ -2656,8 +2642,6 @@ qemuMigrationAnyCanResume(virDomainObj *vm, unsigned long flags, qemuMigrationJobPhase expectedPhase) { - qemuDomainObjPrivate *priv =3D vm->privateData; - VIR_DEBUG("vm=3D%p, job=3D%s, flags=3D0x%lx, expectedPhase=3D%s", vm, virDomainAsyncJobTypeToString(job), flags, qemuDomainAsyncJobPhaseToString(VIR_ASYNC_JOB_MIGRATION_OUT, @@ -2684,22 +2668,22 @@ qemuMigrationAnyCanResume(virDomainObj *vm, if (!qemuMigrationJobIsActive(vm, job)) return false; =20 - if (priv->job.asyncOwner !=3D 0 && - priv->job.asyncOwner !=3D virThreadSelfID()) { + if (vm->job->asyncOwner !=3D 0 && + vm->job->asyncOwner !=3D virThreadSelfID()) { virReportError(VIR_ERR_OPERATION_INVALID, _("migration of domain %s is being actively monitor= ed by another thread"), vm->def->name); return false; } =20 - if (!virDomainObjIsPostcopy(vm, priv->job.current->operation)) { + if (!virDomainObjIsPostcopy(vm, vm->job->current->operation)) { virReportError(VIR_ERR_OPERATION_INVALID, _("migration of domain %s is not in post-copy phase= "), vm->def->name); return false; } =20 - if (priv->job.phase < QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && + if (vm->job->phase < QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && !virDomainObjIsFailedPostcopy(vm)) { virReportError(VIR_ERR_OPERATION_INVALID, _("post-copy migration of domain %s has not failed"= ), @@ -2707,7 +2691,7 @@ qemuMigrationAnyCanResume(virDomainObj *vm, return false; } =20 - if (priv->job.phase > expectedPhase) { + if (vm->job->phase > expectedPhase) { virReportError(VIR_ERR_OPERATION_INVALID, _("resuming failed post-copy migration of domain %s= already in progress"), vm->def->name); @@ -2881,8 +2865,8 @@ qemuMigrationDstPrepareCleanup(virQEMUDriver *driver, VIR_DEBUG("driver=3D%p, vm=3D%s, job=3D%s, asyncJob=3D%s", driver, vm->def->name, - virDomainJobTypeToString(priv->job.active), - virDomainAsyncJobTypeToString(priv->job.asyncJob)); + virDomainJobTypeToString(vm->job->active), + virDomainAsyncJobTypeToString(vm->job->asyncJob)); =20 virPortAllocatorRelease(priv->migrationPort); priv->migrationPort =3D 0; @@ -3061,7 +3045,7 @@ qemuMigrationDstPrepareActive(virQEMUDriver *driver, unsigned long flags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; qemuProcessIncomingDef *incoming =3D NULL; g_autofree char *tlsAlias =3D NULL; virObjectEvent *event =3D NULL; @@ -3219,7 +3203,7 @@ qemuMigrationDstPrepareActive(virQEMUDriver *driver, error: virErrorPreserveLast(&origErr); qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_IN, - jobPriv->migParams, priv->job.apiFlags); + jobPriv->migParams, vm->job->apiFlags); =20 if (stopProcess) { unsigned int stopFlags =3D VIR_QEMU_PROCESS_STOP_MIGRATED; @@ -3333,7 +3317,8 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver, QEMU_MIGRATION_COOKIE_CPU_HOTPLUG= | QEMU_MIGRATION_COOKIE_CPU | QEMU_MIGRATION_COOKIE_CAPS | - QEMU_MIGRATION_COOKIE_BLOCK_DIRTY= _BITMAPS))) + QEMU_MIGRATION_COOKIE_BLOCK_DIRTY= _BITMAPS, + NULL))) goto cleanup; =20 if (!(vm =3D 
virDomainObjListAdd(driver->domains, def, @@ -3477,7 +3462,7 @@ qemuMigrationDstPrepareResume(virQEMUDriver *driver, =20 if (!(mig =3D qemuMigrationCookieParse(driver, def, origname, NULL, cookiein, cookieinlen, - QEMU_MIGRATION_COOKIE_CAPS))) + QEMU_MIGRATION_COOKIE_CAPS, vm))) goto cleanup; =20 priv->origname =3D g_strdup(origname); @@ -3858,13 +3843,13 @@ qemuMigrationSrcComplete(virQEMUDriver *driver, virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobData *jobData =3D priv->job.completed; + virDomainJobData *jobData =3D vm->job->completed; virObjectEvent *event; int reason; =20 if (!jobData) { - priv->job.completed =3D virDomainJobDataCopy(priv->job.current); - jobData =3D priv->job.completed; + vm->job->completed =3D virDomainJobDataCopy(vm->job->current); + jobData =3D vm->job->completed; jobData->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; } =20 @@ -3909,7 +3894,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, { g_autoptr(qemuMigrationCookie) mig =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; virDomainJobData *jobData =3D NULL; qemuMigrationJobPhase phase; =20 @@ -3927,7 +3912,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, * job will stay active even though migration API finishes with an * error. */ - phase =3D priv->job.phase; + phase =3D vm->job->phase; } else if (retcode =3D=3D 0) { phase =3D QEMU_MIGRATION_PHASE_CONFIRM3; } else { @@ -3939,13 +3924,13 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, =20 if (!(mig =3D qemuMigrationCookieParse(driver, vm->def, priv->origname= , priv, cookiein, cookieinlen, - QEMU_MIGRATION_COOKIE_STATS))) + QEMU_MIGRATION_COOKIE_STATS, vm))) return -1; =20 if (retcode =3D=3D 0) - jobData =3D priv->job.completed; + jobData =3D vm->job->completed; else - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); =20 /* Update times with the values sent by the destination daemon */ if (mig->jobData && jobData) { @@ -3985,7 +3970,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, qemuMigrationSrcRestoreDomainState(driver, vm); =20 qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_OUT, - jobPriv->migParams, priv->job.apiFlag= s); + jobPriv->migParams, vm->job->apiFlags= ); qemuDomainSetMaxMemLock(vm, 0, &priv->preMigrationMemlock); } =20 @@ -4005,7 +3990,6 @@ qemuMigrationSrcConfirm(virQEMUDriver *driver, { qemuMigrationJobPhase phase; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); - qemuDomainObjPrivate *priv =3D vm->privateData; int ret =3D -1; =20 VIR_DEBUG("vm=3D%p, flags=3D0x%x, cancelled=3D%d", vm, flags, cancelle= d); @@ -4024,7 +4008,7 @@ qemuMigrationSrcConfirm(virQEMUDriver *driver, * error. 
*/ if (virDomainObjIsFailedPostcopy(vm)) - phase =3D priv->job.phase; + phase =3D vm->job->phase; else if (cancelled) phase =3D QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED; else @@ -4416,7 +4400,7 @@ qemuMigrationSrcRunPrepareBlockDirtyBitmapsMerge(virD= omainObj *vm, { g_autoslist(qemuDomainJobPrivateMigrateTempBitmap) tmpbitmaps =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; g_autoptr(virJSONValue) actions =3D virJSONValueNewArray(); g_autoptr(GHashTable) blockNamedNodeData =3D NULL; GSList *nextdisk; @@ -4703,7 +4687,8 @@ qemuMigrationSrcRun(virQEMUDriver *driver, cookieFlags | QEMU_MIGRATION_COOKIE_GRAPHICS | QEMU_MIGRATION_COOKIE_CAPS | - QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMA= PS); + QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMA= PS, + NULL); if (!mig) goto error; =20 @@ -4814,13 +4799,13 @@ qemuMigrationSrcRun(virQEMUDriver *driver, if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_MIGRATION_OUT) < = 0) goto error; =20 - if (priv->job.abortJob) { + if (vm->job->abortJob) { /* explicitly do this *after* we entered the monitor, * as this is a critical section so we are guaranteed - * priv->job.abortJob will not change */ - priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; + * vm->job->abortJob will not change */ + vm->job->current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), - virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(vm->job->asyncJob), _("canceled by client")); goto exit_monitor; } @@ -4884,7 +4869,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, * resume it now once we finished all block jobs and wait for the real * end of the migration. 
*/ - if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_PAUSED) { + if (vm->job->current->status =3D=3D VIR_DOMAIN_JOB_STATUS_PAUSED) { if (qemuMigrationSrcContinue(vm, QEMU_MONITOR_MIGRATION_STATUS_PRE_SWI= TCHOVER, VIR_ASYNC_JOB_MIGRATION_OUT) < 0) @@ -4913,11 +4898,11 @@ qemuMigrationSrcRun(virQEMUDriver *driver, goto error; } =20 - if (priv->job.completed) { - priv->job.completed->stopped =3D priv->job.current->stopped; - qemuDomainJobDataUpdateTime(priv->job.completed); - qemuDomainJobDataUpdateDowntime(priv->job.completed); - ignore_value(virTimeMillisNow(&priv->job.completed->sent)); + if (vm->job->completed) { + vm->job->completed->stopped =3D vm->job->current->stopped; + qemuDomainJobDataUpdateTime(vm->job->completed); + qemuDomainJobDataUpdateDowntime(vm->job->completed); + ignore_value(virTimeMillisNow(&vm->job->completed->sent)); } =20 cookieFlags |=3D QEMU_MIGRATION_COOKIE_NETWORK | @@ -4952,7 +4937,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, } =20 if (cancel && - priv->job.current->status !=3D VIR_DOMAIN_JOB_STATUS_HYPERVISO= R_COMPLETED && + vm->job->current->status !=3D VIR_DOMAIN_JOB_STATUS_HYPERVISOR= _COMPLETED && qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_MIGRATION_OUT= ) =3D=3D 0) { qemuMonitorMigrateCancel(priv->mon); qemuDomainObjExitMonitor(vm); @@ -4966,8 +4951,8 @@ qemuMigrationSrcRun(virQEMUDriver *driver, =20 qemuMigrationSrcCancelRemoveTempBitmaps(vm, VIR_ASYNC_JOB_MIGRATIO= N_OUT); =20 - if (priv->job.current->status !=3D VIR_DOMAIN_JOB_STATUS_CANCELED) - priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; + if (vm->job->current->status !=3D VIR_DOMAIN_JOB_STATUS_CANCELED) + vm->job->current->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; } =20 if (iothread) @@ -5002,7 +4987,7 @@ qemuMigrationSrcResume(virDomainObj *vm, =20 mig =3D qemuMigrationCookieParse(driver, vm->def, priv->origname, priv, cookiein, cookieinlen, - QEMU_MIGRATION_COOKIE_CAPS); + QEMU_MIGRATION_COOKIE_CAPS, vm); if (!mig) return -1; =20 @@ -5913,7 +5898,7 @@ qemuMigrationSrcPerformJob(virQEMUDriver *driver, virErrorPtr orig_err =3D NULL; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; =20 if (flags & VIR_MIGRATE_POSTCOPY_RESUME) { if (!qemuMigrationAnyCanResume(vm, VIR_ASYNC_JOB_MIGRATION_OUT, fl= ags, @@ -5991,7 +5976,7 @@ qemuMigrationSrcPerformJob(virQEMUDriver *driver, */ if (!v3proto && ret < 0) qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_OUT, - jobPriv->migParams, priv->job.apiFlag= s); + jobPriv->migParams, vm->job->apiFlags= ); =20 qemuMigrationSrcRestoreDomainState(driver, vm); =20 @@ -6080,7 +6065,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriver *driver, const char *nbdURI) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; int ret =3D -1; =20 if (flags & VIR_MIGRATE_POSTCOPY_RESUME) { @@ -6120,7 +6105,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriver *driver, if (ret < 0 && !virDomainObjIsFailedPostcopy(vm)) { qemuMigrationSrcRestoreDomainState(driver, vm); qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_OUT, - jobPriv->migParams, priv->job.apiFlags); + jobPriv->migParams, vm->job->apiFlags); qemuDomainSetMaxMemLock(vm, 0, &priv->preMigrationMemlock); qemuMigrationJobFinish(vm); } else { @@ -6399,7 +6384,7 @@ 
qemuMigrationDstFinishOffline(virQEMUDriver *driver, g_autoptr(qemuMigrationCookie) mig =3D NULL; =20 if (!(mig =3D qemuMigrationCookieParse(driver, vm->def, priv->origname= , priv, - cookiein, cookieinlen, cookie_fla= gs))) + cookiein, cookieinlen, cookie_fla= gs, NULL))) return NULL; =20 if (qemuMigrationDstPersist(driver, vm, mig, false) < 0) @@ -6432,7 +6417,6 @@ qemuMigrationDstFinishFresh(virQEMUDriver *driver, bool *doKill, bool *inPostCopy) { - qemuDomainObjPrivate *priv =3D vm->privateData; g_autoptr(virDomainJobData) jobData =3D NULL; =20 if (qemuMigrationDstVPAssociatePortProfiles(vm->def) < 0) @@ -6492,7 +6476,7 @@ qemuMigrationDstFinishFresh(virQEMUDriver *driver, return -1; } =20 - if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POSTCOPY) + if (vm->job->current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POSTCOPY) *inPostCopy =3D true; =20 if (!(flags & VIR_MIGRATE_PAUSED)) { @@ -6542,9 +6526,9 @@ qemuMigrationDstFinishFresh(virQEMUDriver *driver, } =20 if (jobData) { - priv->job.completed =3D g_steal_pointer(&jobData); - priv->job.completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; - qemuDomainJobSetStatsType(priv->job.completed, + vm->job->completed =3D g_steal_pointer(&jobData); + vm->job->completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; + qemuDomainJobSetStatsType(vm->job->completed, QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION); } =20 @@ -6586,17 +6570,17 @@ qemuMigrationDstFinishActive(virQEMUDriver *driver, virDomainPtr dom =3D NULL; g_autoptr(qemuMigrationCookie) mig =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; virObjectEvent *event; bool inPostCopy =3D false; - bool doKill =3D priv->job.phase !=3D QEMU_MIGRATION_PHASE_FINISH_RESUM= E; + bool doKill =3D vm->job->phase !=3D QEMU_MIGRATION_PHASE_FINISH_RESUME; int rc; =20 VIR_DEBUG("vm=3D%p, flags=3D0x%lx, retcode=3D%d", vm, flags, retcode); =20 if (!(mig =3D qemuMigrationCookieParse(driver, vm->def, priv->origname= , priv, - cookiein, cookieinlen, cookie_fla= gs))) + cookiein, cookieinlen, cookie_fla= gs, NULL))) goto error; =20 if (retcode !=3D 0) { @@ -6633,7 +6617,7 @@ qemuMigrationDstFinishActive(virQEMUDriver *driver, VIR_WARN("Unable to encode migration cookie"); =20 qemuMigrationDstComplete(driver, vm, inPostCopy, - VIR_ASYNC_JOB_MIGRATION_IN, &priv->job); + VIR_ASYNC_JOB_MIGRATION_IN, vm->job); =20 return dom; =20 @@ -6663,7 +6647,7 @@ qemuMigrationDstFinishActive(virQEMUDriver *driver, *finishJob =3D false; } else { qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_IN, - jobPriv->migParams, priv->job.apiFlags); + jobPriv->migParams, vm->job->apiFlags); } =20 if (!virDomainObjIsActive(vm)) @@ -6725,7 +6709,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, } else { qemuDomainCleanupRemove(vm, qemuMigrationDstPrepareCleanup); } - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); =20 cookie_flags =3D QEMU_MIGRATION_COOKIE_NETWORK | QEMU_MIGRATION_COOKIE_STATS | @@ -6778,7 +6762,6 @@ qemuMigrationProcessUnattended(virQEMUDriver *driver, virDomainAsyncJob job, qemuMonitorMigrationStatus status) { - qemuDomainObjPrivate *priv =3D vm->privateData; qemuMigrationJobPhase phase; =20 if (!qemuMigrationJobIsActive(vm, job) || @@ -6798,7 +6781,7 @@ qemuMigrationProcessUnattended(virQEMUDriver *driver, return; =20 if (job =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) - qemuMigrationDstComplete(driver, vm, true, job, 
&priv->job); + qemuMigrationDstComplete(driver, vm, true, job, vm->job); else qemuMigrationSrcComplete(driver, vm, job); =20 diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_coo= kie.c index bd939a12be..a4e018e204 100644 --- a/src/qemu/qemu_migration_cookie.c +++ b/src/qemu/qemu_migration_cookie.c @@ -495,7 +495,7 @@ qemuMigrationCookieAddNBD(qemuMigrationCookie *mig, mig->nbd->disks =3D g_new0(struct qemuMigrationCookieNBDDisk, vm->def-= >ndisks); mig->nbd->ndisks =3D 0; =20 - if (qemuDomainObjEnterMonitorAsync(vm, priv->job.asyncJob) < 0) + if (qemuDomainObjEnterMonitorAsync(vm, vm->job->asyncJob) < 0) return -1; =20 rc =3D qemuMonitorBlockStatsUpdateCapacityBlockdev(priv->mon, stats); @@ -525,13 +525,11 @@ static int qemuMigrationCookieAddStatistics(qemuMigrationCookie *mig, virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; - - if (!priv->job.completed) + if (!vm->job->completed) return 0; =20 g_clear_pointer(&mig->jobData, virDomainJobDataFree); - mig->jobData =3D virDomainJobDataCopy(priv->job.completed); + mig->jobData =3D virDomainJobDataCopy(vm->job->completed); =20 mig->flags |=3D QEMU_MIGRATION_COOKIE_STATS; =20 @@ -1042,7 +1040,7 @@ qemuMigrationCookieStatisticsXMLParse(xmlXPathContext= Ptr ctxt) if (!(ctxt->node =3D virXPathNode("./statistics", ctxt))) return NULL; =20 - jobData =3D virDomainJobDataInit(&qemuJobDataPrivateDataCallbacks); + jobData =3D virDomainJobDataInit(&virQEMUDriverDomainJobConfig.jobData= PrivateCb); priv =3D jobData->privateData; stats =3D &priv->stats.mig; jobData->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; @@ -1497,7 +1495,8 @@ qemuMigrationCookieParse(virQEMUDriver *driver, qemuDomainObjPrivate *priv, const char *cookiein, int cookieinlen, - unsigned int flags) + unsigned int flags, + virDomainObj *vm) { g_autoptr(qemuMigrationCookie) mig =3D NULL; =20 @@ -1547,8 +1546,8 @@ qemuMigrationCookieParse(virQEMUDriver *driver, } } =20 - if (flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobData && priv->job.c= urrent) - mig->jobData->operation =3D priv->job.current->operation; + if (vm && flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobData && vm->j= ob->current) + mig->jobData->operation =3D vm->job->current->operation; =20 return g_steal_pointer(&mig); } diff --git a/src/qemu/qemu_migration_cookie.h b/src/qemu/qemu_migration_coo= kie.h index 2f0cdcf7b6..07776aaa8b 100644 --- a/src/qemu/qemu_migration_cookie.h +++ b/src/qemu/qemu_migration_cookie.h @@ -194,7 +194,8 @@ qemuMigrationCookieParse(virQEMUDriver *driver, qemuDomainObjPrivate *priv, const char *cookiein, int cookieinlen, - unsigned int flags); + unsigned int flags, + virDomainObj *vm); =20 void qemuMigrationCookieFree(qemuMigrationCookie *mig); diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_par= ams.c index c667be8520..7a023b36c8 100644 --- a/src/qemu/qemu_migration_params.c +++ b/src/qemu/qemu_migration_params.c @@ -1005,7 +1005,7 @@ qemuMigrationParamsEnableTLS(virQEMUDriver *driver, qemuMigrationParams *migParams) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; g_autoptr(virJSONValue) tlsProps =3D NULL; g_autoptr(virJSONValue) secProps =3D NULL; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); @@ -1080,8 +1080,7 @@ int qemuMigrationParamsDisableTLS(virDomainObj *vm, qemuMigrationParams *migParams) { - qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D 
priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; =20 if (!jobPriv->migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set) return 0; @@ -1213,8 +1212,7 @@ qemuMigrationParamsCheck(virDomainObj *vm, qemuMigrationParams *migParams, virBitmap *remoteCaps) { - qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; qemuMigrationCapability cap; qemuMigrationParty party; size_t i; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index e872553cfc..ec86bcb2b2 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -648,8 +648,8 @@ qemuProcessHandleStop(qemuMonitor *mon G_GNUC_UNUSED, * reveal it in domain state nor sent events */ if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING && !priv->pausedShutdown) { - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) { - if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POS= TCOPY) + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + if (vm->job->current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POST= COPY) reason =3D VIR_DOMAIN_PAUSED_POSTCOPY; else reason =3D VIR_DOMAIN_PAUSED_MIGRATION; @@ -661,8 +661,8 @@ qemuProcessHandleStop(qemuMonitor *mon G_GNUC_UNUSED, vm->def->name, virDomainPausedReasonTypeToString(reason), detail); =20 - if (priv->job.current) - ignore_value(virTimeMillisNow(&priv->job.current->stopped)); + if (vm->job->current) + ignore_value(virTimeMillisNow(&vm->job->current->stopped)); =20 if (priv->signalStop) virDomainObjBroadcast(vm); @@ -1390,7 +1390,6 @@ static void qemuProcessHandleSpiceMigrated(qemuMonitor *mon G_GNUC_UNUSED, virDomainObj *vm) { - qemuDomainObjPrivate *priv; qemuDomainJobPrivate *jobPriv; =20 virObjectLock(vm); @@ -1398,9 +1397,8 @@ qemuProcessHandleSpiceMigrated(qemuMonitor *mon G_GNU= C_UNUSED, VIR_DEBUG("Spice migration completed for domain %p %s", vm, vm->def->name); =20 - priv =3D vm->privateData; - jobPriv =3D priv->job.privateData; - if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + jobPriv =3D vm->job->privateData; + if (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { VIR_DEBUG("got SPICE_MIGRATE_COMPLETED event without a migration j= ob"); goto cleanup; } @@ -1434,12 +1432,12 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G= _GNUC_UNUSED, priv =3D vm->privateData; driver =3D priv->driver; =20 - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION event without a migration job"); goto cleanup; } =20 - privJob =3D priv->job.current->privateData; + privJob =3D vm->job->current->privateData; =20 privJob->stats.mig.status =3D status; virDomainObjBroadcast(vm); @@ -1448,7 +1446,7 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, =20 switch ((qemuMonitorMigrationStatus) status) { case QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY: - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && state =3D=3D VIR_DOMAIN_PAUSED && reason =3D=3D VIR_DOMAIN_PAUSED_MIGRATION) { VIR_DEBUG("Correcting paused state reason for domain %s to %s", @@ -1464,7 +1462,7 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, break; =20 case QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY_PAUSED: - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && state =3D=3D VIR_DOMAIN_PAUSED) { /* At 
this point no thread is watching the migration progress = on * the source as it is just waiting for the Finish phase to en= d. @@ -1505,11 +1503,11 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G= _GNUC_UNUSED, * watching it in any thread. Let's make sure the migration is pro= perly * finished in case we get a "completed" event. */ - if (virDomainObjIsPostcopy(vm, priv->job.current->operation) && - priv->job.phase =3D=3D QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && - priv->job.asyncOwner =3D=3D 0) { + if (virDomainObjIsPostcopy(vm, vm->job->current->operation) && + vm->job->phase =3D=3D QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && + vm->job->asyncOwner =3D=3D 0) { qemuProcessEventSubmit(vm, QEMU_PROCESS_EVENT_UNATTENDED_MIGRA= TION, - priv->job.asyncJob, status, NULL); + vm->job->asyncJob, status, NULL); } break; =20 @@ -1546,7 +1544,7 @@ qemuProcessHandleMigrationPass(qemuMonitor *mon G_GNU= C_UNUSED, vm, vm->def->name, pass); =20 priv =3D vm->privateData; - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION_PASS event without a migration job"); goto cleanup; } @@ -1566,7 +1564,6 @@ qemuProcessHandleDumpCompleted(qemuMonitor *mon G_GNU= C_UNUSED, qemuMonitorDumpStats *stats, const char *error) { - qemuDomainObjPrivate *priv; qemuDomainJobPrivate *jobPriv; qemuDomainJobDataPrivate *privJobCurrent =3D NULL; =20 @@ -1575,20 +1572,19 @@ qemuProcessHandleDumpCompleted(qemuMonitor *mon G_G= NUC_UNUSED, VIR_DEBUG("Dump completed for domain %p %s with stats=3D%p error=3D'%s= '", vm, vm->def->name, stats, NULLSTR(error)); =20 - priv =3D vm->privateData; - jobPriv =3D priv->job.privateData; - privJobCurrent =3D priv->job.current->privateData; - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { + jobPriv =3D vm->job->privateData; + privJobCurrent =3D vm->job->current->privateData; + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { VIR_DEBUG("got DUMP_COMPLETED event without a dump_completed job"); goto cleanup; } jobPriv->dumpCompleted =3D true; privJobCurrent->stats.dump =3D *stats; - priv->job.error =3D g_strdup(error); + vm->job->error =3D g_strdup(error); =20 /* Force error if extracting the DUMP_COMPLETED status failed */ if (!error && status < 0) { - priv->job.error =3D g_strdup(virGetLastErrorMessage()); + vm->job->error =3D g_strdup(virGetLastErrorMessage()); privJobCurrent->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_FAI= LED; } =20 @@ -3209,8 +3205,8 @@ int qemuProcessStopCPUs(virQEMUDriver *driver, /* de-activate netdevs after stopping CPUs */ ignore_value(qemuInterfaceStopDevices(vm->def)); =20 - if (priv->job.current) - ignore_value(virTimeMillisNow(&priv->job.current->stopped)); + if (vm->job->current) + ignore_value(virTimeMillisNow(&vm->job->current->stopped)); =20 /* The STOP event handler will change the domain state with the reason * saved in priv->pausedReason and it will also emit corresponding dom= ain @@ -3375,12 +3371,12 @@ qemuProcessCleanupMigrationJob(virQEMUDriver *drive= r, =20 VIR_DEBUG("driver=3D%p, vm=3D%s, asyncJob=3D%s, state=3D%s, reason=3D%= s", driver, vm->def->name, - virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(vm->job->asyncJob), virDomainStateTypeToString(state), virDomainStateReasonToString(state, reason)); =20 - if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_IN && - priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) + if (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_IN && + vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) 
        return;
 
     virPortAllocatorRelease(priv->migrationPort);
@@ -3393,7 +3389,6 @@ static void
 qemuProcessRestoreMigrationJob(virDomainObj *vm,
                                virDomainJobObj *job)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     qemuDomainJobPrivate *jobPriv = job->privateData;
     virDomainJobOperation op;
     unsigned long long allowedJobs;
@@ -3413,9 +3408,9 @@ qemuProcessRestoreMigrationJob(virDomainObj *vm,
                             VIR_DOMAIN_JOB_STATUS_PAUSED,
                             allowedJobs);
 
-    job->privateData = g_steal_pointer(&priv->job.privateData);
-    priv->job.privateData = jobPriv;
-    priv->job.apiFlags = job->apiFlags;
+    job->privateData = g_steal_pointer(&vm->job->privateData);
+    vm->job->privateData = jobPriv;
+    vm->job->apiFlags = job->apiFlags;
 
     qemuDomainCleanupAdd(vm, qemuProcessCleanupMigrationJob);
 }
@@ -8113,9 +8108,9 @@ void qemuProcessStop(virQEMUDriver *driver,
     if (asyncJob != VIR_ASYNC_JOB_NONE) {
         if (qemuDomainObjBeginNestedJob(vm, asyncJob) < 0)
             goto cleanup;
-    } else if (priv->job.asyncJob != VIR_ASYNC_JOB_NONE &&
-               priv->job.asyncOwner == virThreadSelfID() &&
-               priv->job.active != VIR_JOB_ASYNC_NESTED) {
+    } else if (vm->job->asyncJob != VIR_ASYNC_JOB_NONE &&
+               vm->job->asyncOwner == virThreadSelfID() &&
+               vm->job->active != VIR_JOB_ASYNC_NESTED) {
         VIR_WARN("qemuProcessStop called without a nested job (async=%s)",
                  virDomainAsyncJobTypeToString(asyncJob));
     }
@@ -8438,10 +8433,10 @@ qemuProcessAutoDestroy(virDomainObj *dom,
 
     VIR_DEBUG("vm=%s, conn=%p", dom->def->name, conn);
 
-    if (priv->job.asyncJob == VIR_ASYNC_JOB_MIGRATION_IN)
+    if (dom->job->asyncJob == VIR_ASYNC_JOB_MIGRATION_IN)
         stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED;
 
-    if (priv->job.asyncJob) {
+    if (dom->job->asyncJob) {
         VIR_DEBUG("vm=%s has long-term job active, cancelling",
                   dom->def->name);
         qemuDomainObjDiscardAsyncJob(dom);
@@ -8705,7 +8700,7 @@ qemuProcessReconnect(void *opaque)
     cfg = virQEMUDriverGetConfig(driver);
     priv = obj->privateData;
 
-    virDomainObjPreserveJob(&priv->job, &oldjob);
+    virDomainObjPreserveJob(obj->job, &oldjob);
     if (oldjob.asyncJob == VIR_ASYNC_JOB_MIGRATION_IN)
         stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED;
     if (oldjob.asyncJob == VIR_ASYNC_JOB_BACKUP && priv->backup)
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 6033deafed..243c18193b 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1334,7 +1334,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
         if (!qemuMigrationSrcIsAllowed(driver, vm, false, VIR_ASYNC_JOB_SNAPSHOT, 0))
             goto cleanup;
 
-        qemuDomainJobSetStatsType(priv->job.current,
+        qemuDomainJobSetStatsType(vm->job->current,
                                   QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP);
 
         /* allow the migration job to be cancelled or the domain to be paused */
diff --git a/tests/qemumigrationcookiexmltest.c b/tests/qemumigrationcookiexmltest.c
index 9731348b53..6f650213e2 100644
--- a/tests/qemumigrationcookiexmltest.c
+++ b/tests/qemumigrationcookiexmltest.c
@@ -146,7 +146,8 @@ testQemuMigrationCookieParse(const void *opaque)
                                                priv,
                                                data->xmlstr,
                                                data->xmlstrlen,
-                                               data->cookieParseFlags))) {
+                                               data->cookieParseFlags,
+                                               data->vm))) {
         VIR_TEST_DEBUG("\nfailed to parse qemu migration cookie:\n%s\n", data->xmlstr);
         return -1;
     }
--
2.37.1
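For orientation between patches: the hunks above have already switched the job
object from priv->job to vm->job, so a driver thread would drive the relocated
helper roughly as sketched below. This is illustrative only, assuming the
virDomainObjBeginJobInternal() signature introduced earlier in the series;
exampleDriverEndJob() is a hypothetical stand-in for the driver's own EndJob
counterpart, which is not generalized at this point.

    /* Sketch: driver-side use of the relocated job helper.
     * vm must be locked by the caller, per the function docs. */
    static int
    exampleDriverDomainSuspend(virDomainObj *vm)
    {
        if (virDomainObjBeginJobInternal(vm, vm->job, VIR_JOB_SUSPEND,
                                         VIR_AGENT_JOB_NONE,
                                         VIR_ASYNC_JOB_NONE,
                                         false /* nowait */) < 0)
            return -1;  /* waited up to VIR_JOB_WAIT_TIME, then gave up */

        /* ... change VM state or enter the hypervisor monitor ... */

        exampleDriverEndJob(vm);  /* hypothetical: pair every successful Begin */
        return 0;
    }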
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 07/17] qemu: use virDomainObjBeginJob()
Date: Wed, 24 Aug 2022 15:43:30 +0200
Message-Id: <1efd70dd1f66b2c5a7cdf200aa58ff2843d26d31.1661348244.git.khanicov@redhat.com>

This patch moves qemuDomainObjBeginJob() into src/conf/virdomainjob.c as
the universal virDomainObjBeginJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 docs/kbase/internals/qemu-threads.rst |   8 +-
 src/conf/virdomainjob.c               |  18 +++
 src/conf/virdomainjob.h               |   4 +
 src/libvirt_private.syms              |   1 +
 src/qemu/qemu_checkpoint.c            |   6 +-
 src/qemu/qemu_domain.c                |   4 +-
 src/qemu/qemu_domainjob.c             |  18 ---
 src/qemu/qemu_domainjob.h             |   3 -
 src/qemu/qemu_driver.c                | 152 +++++++++++++-------------
 src/qemu/qemu_migration.c             |   2 +-
 src/qemu/qemu_process.c               |   6 +-
 src/qemu/qemu_snapshot.c              |   2 +-
 12 files changed, 113 insertions(+), 111 deletions(-)

diff --git a/docs/kbase/internals/qemu-threads.rst b/docs/kbase/internals/qemu-threads.rst
index c68512d1b3..00340bb732 100644
--- a/docs/kbase/internals/qemu-threads.rst
+++ b/docs/kbase/internals/qemu-threads.rst
@@ -62,7 +62,7 @@ There are a number of locks on various objects
 
     Agent job condition is then used when thread wishes to talk to qemu
     agent monitor. It is possible to acquire just agent job
-    (``qemuDomainObjBeginAgentJob``), or only normal job (``qemuDomainObjBeginJob``)
+    (``qemuDomainObjBeginAgentJob``), or only normal job (``virDomainObjBeginJob``)
     but not both at the same time. Holding an agent job and a normal job would
     allow an unresponsive or malicious agent to block normal libvirt API and
     potentially result in a denial of service. Which type of job to grab
@@ -114,7 +114,7 @@ To lock the ``virDomainObj``
 
 To acquire the normal job condition
 
-  ``qemuDomainObjBeginJob()``
+  ``virDomainObjBeginJob()``
   - Waits until the job is compatible with current async job or no
     async job is running
   - Waits for ``job.cond`` condition ``job.active != 0`` using ``virDomainObj``
@@ -214,7 +214,7 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginJob(obj, VIR_JOB_TYPE);
+     virDomainObjBeginJob(obj, VIR_JOB_TYPE);
 
      ...do work...
 
@@ -230,7 +230,7 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginJob(obj, VIR_JOB_TYPE);
+     virDomainObjBeginJob(obj, VIR_JOB_TYPE);
 
      ...do prep work...
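For readers skimming the diff, the design-pattern hunks above show only the
renamed Begin call. A fuller version of the documented pattern, written in the
rst doc's own pseudocode style and assuming the existing qemu helpers for the
steps these hunks do not show, looks roughly like:

     obj = qemuDomObjFromDomain(dom);      /* locks obj */

     if (virDomainObjBeginJob(obj, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     ...do work that changes VM state or uses the monitor...

     qemuDomainObjEndJob(obj);             /* release the job condition */

  cleanup:
     virDomainObjEndAPI(&obj);             /* unlock and unref obj */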
=20 diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c index 53861cb153..81bd886c4c 100644 --- a/src/conf/virdomainjob.c +++ b/src/conf/virdomainjob.c @@ -508,3 +508,21 @@ virDomainObjBeginJobInternal(virDomainObj *obj, jobObj->jobsQueued--; return ret; } + +/* + * obj must be locked before calling + * + * This must be called by anything that will change the VM state + * in any way, or anything that will use the Hypervisor monitor. + * + * Successful calls must be followed by EndJob eventually + */ +int virDomainObjBeginJob(virDomainObj *obj, + virDomainJob job) +{ + if (virDomainObjBeginJobInternal(obj, obj->job, job, + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, false) < 0) + return -1; + return 0; +} diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h index 091d951aa6..adee679e72 100644 --- a/src/conf/virdomainjob.h +++ b/src/conf/virdomainjob.h @@ -245,3 +245,7 @@ int virDomainObjBeginJobInternal(virDomainObj *obj, virDomainAsyncJob asyncJob, bool nowait) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3); + +int virDomainObjBeginJob(virDomainObj *obj, + virDomainJob job) + G_GNUC_WARN_UNUSED_RESULT; diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index 71ed240cbf..89d8bd5da6 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -1187,6 +1187,7 @@ virDomainJobStatusToType; virDomainJobTypeFromString; virDomainJobTypeToString; virDomainNestedJobAllowed; +virDomainObjBeginJob; virDomainObjBeginJobInternal; virDomainObjCanSetJob; virDomainObjClearJob; diff --git a/src/qemu/qemu_checkpoint.c b/src/qemu/qemu_checkpoint.c index ed236eaace..c6fb3d1f97 100644 --- a/src/qemu/qemu_checkpoint.c +++ b/src/qemu/qemu_checkpoint.c @@ -604,7 +604,7 @@ qemuCheckpointCreateXML(virDomainPtr domain, /* Unlike snapshots, the RNG schema already ensured a sane filename. */ =20 /* We are going to modify the domain below. */ - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return NULL; =20 if (redefine) { @@ -654,7 +654,7 @@ qemuCheckpointGetXMLDescUpdateSize(virDomainObj *vm, size_t i; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -848,7 +848,7 @@ qemuCheckpointDelete(virDomainObj *vm, VIR_DOMAIN_CHECKPOINT_DELETE_METADATA_ONLY | VIR_DOMAIN_CHECKPOINT_DELETE_CHILDREN_ONLY, -1); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (!metadata_only) { diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index de8de2fd0e..c596a70a72 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5954,7 +5954,7 @@ qemuDomainSaveConfig(virDomainObj *obj) * obj must be locked before calling * * To be called immediately before any QEMU monitor API call - * Must have already called qemuDomainObjBeginJob() and checked + * Must have already called virDomainObjBeginJob() and checked * that the VM is still active; may not be used for nested async * jobs. * @@ -6035,7 +6035,7 @@ void qemuDomainObjEnterMonitor(virDomainObj *obj) * obj must be locked before calling * * To be called immediately before any QEMU monitor API call. 
- * Must have already either called qemuDomainObjBeginJob() + * Must have already either called virDomainObjBeginJob() * and checked that the VM is still active, with asyncJob of * VIR_ASYNC_JOB_NONE; or already called qemuDomainObjBeginAsyncJob, * with the same asyncJob. diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 0ecadddbc7..13e1f332dc 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -655,24 +655,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj) obj->job->asyncOwner =3D 0; } =20 -/* - * obj must be locked before calling - * - * This must be called by anything that will change the VM state - * in any way, or anything that will use the QEMU monitor. - * - * Successful calls must be followed by EndJob eventually - */ -int qemuDomainObjBeginJob(virDomainObj *obj, - virDomainJob job) -{ - if (virDomainObjBeginJobInternal(obj, obj->job, job, - VIR_AGENT_JOB_NONE, - VIR_ASYNC_JOB_NONE, false) < 0) - return -1; - return 0; -} - /** * qemuDomainObjBeginAgentJob: * diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 201d7857a8..66f18483fe 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -69,9 +69,6 @@ int qemuDomainAsyncJobPhaseFromString(virDomainAsyncJob j= ob, void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver, virDomainObj *vm); =20 -int qemuDomainObjBeginJob(virDomainObj *obj, - virDomainJob job) - G_GNUC_WARN_UNUSED_RESULT; int qemuDomainObjBeginAgentJob(virDomainObj *obj, virDomainAgentJob agentJob) G_GNUC_WARN_UNUSED_RESULT; diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index ce54890d44..52f0a6e45b 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -1679,7 +1679,7 @@ static int qemuDomainSuspend(virDomainPtr dom) if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_SUSPEND) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_SUSPEND) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1729,7 +1729,7 @@ static int qemuDomainResume(virDomainPtr dom) if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1812,7 +1812,7 @@ qemuDomainShutdownFlagsMonitor(virDomainObj *vm, =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjGetState(vm, NULL) !=3D VIR_DOMAIN_RUNNING) { @@ -1935,7 +1935,7 @@ qemuDomainRebootMonitor(virDomainObj *vm, qemuDomainObjPrivate *priv =3D vm->privateData; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2022,7 +2022,7 @@ qemuDomainReset(virDomainPtr dom, unsigned int flags) if (virDomainResetEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2191,7 +2191,7 @@ static int qemuDomainSetMemoryFlags(virDomainPtr dom,= unsigned long newmem, if (virDomainSetMemoryFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, 
&persistentDef) < 0) @@ -2334,7 +2334,7 @@ static int qemuDomainSetMemoryStatsPeriod(virDomainPt= r dom, int period, if (virDomainSetMemoryStatsPeriodEnsureACL(dom->conn, vm->def, flags) = < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -2401,7 +2401,7 @@ static int qemuDomainInjectNMI(virDomainPtr domain, u= nsigned int flags) =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2459,7 +2459,7 @@ static int qemuDomainSendKey(virDomainPtr domain, if (virDomainSendKeyEnsureACL(domain->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -3321,7 +3321,7 @@ qemuDomainScreenshot(virDomainPtr dom, if (virDomainScreenshotEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -3601,7 +3601,7 @@ processDeviceDeletedEvent(virQEMUDriver *driver, VIR_DEBUG("Removing device %s from domain %p %s", devAlias, vm, vm->def->name); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -3837,7 +3837,7 @@ processNicRxFilterChangedEvent(virDomainObj *vm, "from domain %p %s", devAlias, vm, vm->def->name); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!virDomainObjIsActive(vm)) { @@ -3963,7 +3963,7 @@ processSerialChangedEvent(virQEMUDriver *driver, memset(&dev, 0, sizeof(dev)); } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY_MIGRATION_SAFE) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY_MIGRATION_SAFE) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -4009,7 +4009,7 @@ static void processJobStatusChangeEvent(virDomainObj *vm, qemuBlockJobData *job) { - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -4151,7 +4151,7 @@ processMemoryDeviceSizeChange(virQEMUDriver *driver, virObjectEvent *event =3D NULL; unsigned long long balloon; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -4369,7 +4369,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0) goto cleanup; } else { - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; } =20 @@ -4506,7 +4506,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom, if (virDomainPinVcpuFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -4632,7 +4632,7 @@ qemuDomainPinEmulator(virDomainPtr dom, if (virDomainPinEmulatorEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, 
VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -4890,7 +4890,7 @@ qemuDomainGetIOThreadsLive(virDomainObj *vm, size_t i; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (!virDomainObjIsActive(vm)) { @@ -5018,7 +5018,7 @@ qemuDomainPinIOThread(virDomainPtr dom, if (virDomainPinIOThreadEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -5526,7 +5526,7 @@ qemuDomainChgIOThread(virQEMUDriver *driver, =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -6735,7 +6735,7 @@ qemuDomainUndefineFlags(virDomainPtr dom, if (virDomainUndefineFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!vm->persistent) { @@ -7992,7 +7992,7 @@ qemuDomainAttachDeviceFlags(virDomainPtr dom, if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8047,7 +8047,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr d= om, if (virDomainUpdateDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8277,7 +8277,7 @@ qemuDomainDetachDeviceFlags(virDomainPtr dom, if (virDomainDetachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8312,7 +8312,7 @@ qemuDomainDetachDeviceAlias(virDomainPtr dom, if (virDomainDetachDeviceAliasEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8385,7 +8385,7 @@ static int qemuDomainSetAutostart(virDomainPtr dom, autostart =3D (autostart !=3D 0); =20 if (vm->autostart !=3D autostart) { - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!(configFile =3D virDomainConfigFile(cfg->configDir, vm->def->= name))) @@ -8531,7 +8531,7 @@ qemuDomainSetBlkioParameters(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -8705,7 +8705,7 @@ qemuDomainSetMemoryParameters(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 /* QEMU and LXC implementation are identical */ @@ -8948,7 +8948,7 @@ 
qemuDomainSetNumaParameters(virDomainPtr dom, } } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -9165,7 +9165,7 @@ qemuDomainSetPerfEvents(virDomainPtr dom, if (virDomainSetPerfEventsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -9239,7 +9239,7 @@ qemuDomainGetPerfEvents(virDomainPtr dom, if (virDomainGetPerfEventsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (!(def =3D virDomainObjGetOneDefState(vm, flags, &live))) @@ -9416,7 +9416,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr do= m, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -9918,7 +9918,7 @@ qemuDomainBlockResize(virDomainPtr dom, if (virDomainBlockResizeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -10118,7 +10118,7 @@ qemuDomainBlockStats(virDomainPtr dom, if (virDomainBlockStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -10175,7 +10175,7 @@ qemuDomainBlockStatsFlags(virDomainPtr dom, if (virDomainBlockStatsFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -10337,7 +10337,7 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom, if (virDomainSetInterfaceParametersEnsureACL(dom->conn, vm->def, flags= ) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -10700,7 +10700,7 @@ qemuDomainMemoryStats(virDomainPtr dom, if (virDomainMemoryStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 ret =3D qemuDomainMemoryStatsInternal(vm, stats, nr_stats); @@ -10810,7 +10810,7 @@ qemuDomainMemoryPeek(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -11087,7 +11087,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom, if (virDomainGetBlockInfoEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (!(disk =3D virDomainDiskByName(vm->def, path, false))) { @@ -12677,7 +12677,7 @@ qemuDomainGetJobStatsInternal(virDomainObj *vm, return -1; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) return -1; =20 if 
(virDomainObjCheckActive(vm) < 0) @@ -12856,7 +12856,7 @@ qemuDomainAbortJobFlags(virDomainPtr dom, if (virDomainAbortJobFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_ABORT) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_ABORT) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -12959,7 +12959,7 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom, if (virDomainMigrateSetMaxDowntimeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13008,7 +13008,7 @@ qemuDomainMigrateGetMaxDowntime(virDomainPtr dom, if (virDomainMigrateGetMaxDowntimeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13058,7 +13058,7 @@ qemuDomainMigrateGetCompressionCache(virDomainPtr d= om, if (virDomainMigrateGetCompressionCacheEnsureACL(dom->conn, vm->def) <= 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13107,7 +13107,7 @@ qemuDomainMigrateSetCompressionCache(virDomainPtr d= om, if (virDomainMigrateSetCompressionCacheEnsureACL(dom->conn, vm->def) <= 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13183,7 +13183,7 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13231,7 +13231,7 @@ qemuDomainMigrationGetPostcopyBandwidth(virDomainOb= j *vm, int rc; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13323,7 +13323,7 @@ qemuDomainMigrateStartPostCopy(virDomainPtr dom, if (virDomainMigrateStartPostCopyEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14068,7 +14068,7 @@ qemuDomainQemuMonitorCommandWithFiles(virDomainPtr = domain, if (virDomainQemuMonitorCommandWithFilesEnsureACL(domain->conn, vm->de= f) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14399,7 +14399,7 @@ qemuDomainBlockPullCommon(virDomainObj *vm, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14507,7 +14507,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, if (virDomainBlockJobAbortEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14668,7 +14668,7 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, goto cleanup; =20 =20 - if (qemuDomainObjBeginJob(vm, 
VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14737,7 +14737,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom, if (virDomainBlockJobSetSpeedEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14929,7 +14929,7 @@ qemuDomainBlockCopyCommon(virDomainObj *vm, return -1; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -15393,7 +15393,7 @@ qemuDomainBlockCommit(virDomainPtr dom, if (virDomainBlockCommitEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -15593,7 +15593,7 @@ qemuDomainOpenGraphics(virDomainPtr dom, if (virDomainOpenGraphicsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -15711,7 +15711,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom, if (qemuSecurityClearSocketLabel(driver->securityManager, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; qemuDomainObjEnterMonitor(vm); ret =3D qemuMonitorOpenGraphics(priv->mon, protocol, pair[1], "graphic= sfd", @@ -15966,7 +15966,7 @@ qemuDomainSetBlockIoTune(virDomainPtr dom, =20 cfg =3D virQEMUDriverGetConfig(driver); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 priv =3D vm->privateData; @@ -16244,7 +16244,7 @@ qemuDomainGetBlockIoTune(virDomainPtr dom, if (virDomainGetBlockIoTuneEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 /* the API check guarantees that only one of the definitions will be s= et */ @@ -16384,7 +16384,7 @@ qemuDomainGetDiskErrors(virDomainPtr dom, if (virDomainGetDiskErrorsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -16459,7 +16459,7 @@ qemuDomainSetMetadata(virDomainPtr dom, if (virDomainSetMetadataEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 ret =3D virDomainObjSetMetadata(vm, type, metadata, key, uri, @@ -16580,7 +16580,7 @@ qemuDomainQueryWakeupSuspendSupport(virDomainObj *v= m, if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_QUERY_CURRENT_MACHINE)) return -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if ((ret =3D virDomainObjCheckActive(vm)) < 0) @@ -16711,7 +16711,7 @@ qemuDomainPMWakeup(virDomainPtr dom, if (virDomainPMWakeupEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17062,7 
+17062,7 @@ qemuDomainGetHostnameLease(virDomainObj *vm, size_t i, j; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17264,7 +17264,7 @@ qemuDomainSetTime(virDomainPtr dom, if (qemuDomainSetTimeAgent(vm, seconds, nseconds, rtcSync) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -18810,7 +18810,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, if (flags & VIR_CONNECT_GET_ALL_DOMAINS_STATS_NOWAIT) rv =3D qemuDomainObjBeginJobNowait(vm, VIR_JOB_QUERY); else - rv =3D qemuDomainObjBeginJob(vm, VIR_JOB_QUERY); + rv =3D virDomainObjBeginJob(vm, VIR_JOB_QUERY); =20 if (rv =3D=3D 0) domflags |=3D QEMU_DOMAIN_STATS_HAVE_JOB; @@ -18989,7 +18989,7 @@ qemuDomainGetFSInfo(virDomainPtr dom, if ((nfs =3D qemuDomainGetFSInfoAgent(vm, &agentinfo)) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -19277,7 +19277,7 @@ static int qemuDomainRename(virDomainPtr dom, if (virDomainRenameEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjIsActive(vm)) { @@ -19543,7 +19543,7 @@ qemuDomainSetVcpu(virDomainPtr dom, if (virDomainSetVcpuEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -19601,7 +19601,7 @@ qemuDomainSetBlockThreshold(virDomainPtr dom, if (virDomainSetBlockThresholdEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -19773,7 +19773,7 @@ qemuDomainSetLifecycleAction(virDomainPtr dom, if (virDomainSetLifecycleActionEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -19925,7 +19925,7 @@ qemuDomainGetSEVInfo(virDomainObj *vm, =20 virCheckFlags(VIR_TYPED_PARAM_STRING_OKAY, -1); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) { @@ -20068,7 +20068,7 @@ qemuDomainSetLaunchSecurityState(virDomainPtr domai= n, else if (rc =3D=3D 1) hasSetaddr =3D true; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -20473,7 +20473,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom, qemuDomainObjEndAgentJob(vm); =20 if (nfs > 0 || ndisks > 0) { - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -20736,7 +20736,7 @@ qemuDomainStartDirtyRateCalc(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if 
(virDomainObjCheckActive(vm) < 0) {
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 2572f47385..ca1f9071bc 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2803,7 +2803,7 @@ qemuMigrationSrcBegin(virConnectPtr conn,
         if (!qemuMigrationJobIsAllowed(vm))
             goto cleanup;
 
-        if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+        if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
             goto cleanup;
         asyncJob = VIR_ASYNC_JOB_NONE;
     }
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index ec86bcb2b2..d84d113829 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -459,7 +459,7 @@ qemuProcessFakeReboot(void *opaque)
 
     VIR_DEBUG("vm=%p", vm);
     virObjectLock(vm);
-    if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (!virDomainObjIsActive(vm)) {
@@ -8065,7 +8065,7 @@ qemuProcessBeginStopJob(virDomainObj *vm,
     VIR_DEBUG("waking up all jobs waiting on the domain condition");
     virDomainObjBroadcast(vm);
 
-    if (qemuDomainObjBeginJob(vm, job) < 0)
+    if (virDomainObjBeginJob(vm, job) < 0)
         goto cleanup;
 
     ret = 0;
@@ -8706,7 +8706,7 @@ qemuProcessReconnect(void *opaque)
     if (oldjob.asyncJob == VIR_ASYNC_JOB_BACKUP && priv->backup)
         priv->backup->apiFlags = oldjob.apiFlags;
 
-    if (qemuDomainObjBeginJob(obj, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(obj, VIR_JOB_MODIFY) < 0)
         goto error;
     jobStarted = true;
 
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 243c18193b..c50e9f3846 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2299,7 +2299,7 @@ qemuSnapshotDelete(virDomainObj *vm,
                   VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY |
                   VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN_ONLY, -1);
 
-    if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         return -1;
 
     if (!(snap = qemuSnapObjFromSnapshot(vm, snapshot)))
-- 
2.37.1
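Before the series moves on to the other drivers, note the one qemu call site above that keeps a driver-specific helper: stats collection may be asked not to wait for the job at all. A condensed sketch of that branch follows; the wrapper function is hypothetical, while the names inside it are taken from the qemuConnectGetAllDomainStats() hunk above.

    /* Condensed from the qemuConnectGetAllDomainStats() hunk: with
     * VIR_CONNECT_GET_ALL_DOMAINS_STATS_NOWAIT the caller prefers
     * partial stats over blocking up to 30s on a busy domain. */
    static void
    exampleCollectStats(virDomainObj *vm, unsigned int flags,
                        unsigned int *domflags)
    {
        int rv;

        if (flags & VIR_CONNECT_GET_ALL_DOMAINS_STATS_NOWAIT)
            rv = qemuDomainObjBeginJobNowait(vm, VIR_JOB_QUERY); /* fail fast */
        else
            rv = virDomainObjBeginJob(vm, VIR_JOB_QUERY);        /* may wait */

        if (rv == 0)
            *domflags |= QEMU_DOMAIN_STATS_HAVE_JOB; /* job-guarded stats OK */

        /* ...gather stats; fields needing HAVE_JOB stay empty if rv < 0... */
    }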
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 08/17] libxl: use virDomainObjBeginJob()
Date: Wed, 24 Aug 2022 15:43:31 +0200

This patch removes libxlDomainObjBeginJob() and replaces it with the
generalized virDomainObjBeginJob().
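This conversion (and the LXC and CH ones that follow) reduces every API entry point to the same shape. A minimal sketch of that shape, assuming a hypothetical entry point with ACL checks and flag handling elided; virDomainObjEndJob() only arrives in patch 11 at the end of this series:

    static int
    exampleDomainDoModify(virDomainObj *vm)
    {
        int ret = -1;

        /* serialize against other threads changing this domain */
        if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
            return -1;

        /* the domain may have died while we waited for the job */
        if (virDomainObjCheckActive(vm) < 0)
            goto endjob;

        /* ...perform the state-changing work... */
        ret = 0;

     endjob:
        virDomainObjEndJob(vm);  /* every successful BeginJob needs this */
        return ret;
    }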
Signed-off-by: Kristina Hanicova Reviewed-by: J=C3=A1n Tomko --- src/libxl/libxl_domain.c | 62 ++----------------------------------- src/libxl/libxl_domain.h | 6 ---- src/libxl/libxl_driver.c | 48 ++++++++++++++-------------- src/libxl/libxl_migration.c | 6 ++-- 4 files changed, 29 insertions(+), 93 deletions(-) diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c index 80a362df46..3a1b371054 100644 --- a/src/libxl/libxl_domain.c +++ b/src/libxl/libxl_domain.c @@ -43,64 +43,6 @@ VIR_LOG_INIT("libxl.libxl_domain"); =20 =20 -/* Give up waiting for mutex after 30 seconds */ -#define LIBXL_JOB_WAIT_TIME (1000ull * 30) - -/* - * obj must be locked before calling, libxlDriverPrivate *must NOT be lock= ed - * - * This must be called by anything that will change the VM state - * in any way - * - * Upon successful return, the object will have its ref count increased, - * successful calls must be followed by EndJob eventually - */ -int -libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED, - virDomainObj *obj, - virDomainJob job) -{ - unsigned long long now; - unsigned long long then; - - if (virTimeMillisNow(&now) < 0) - return -1; - then =3D now + LIBXL_JOB_WAIT_TIME; - - while (obj->job->active) { - VIR_DEBUG("Wait normal job condition for starting job: %s", - virDomainJobTypeToString(job)); - if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0) - goto error; - } - - virDomainObjResetJob(obj->job); - - VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job)); - obj->job->active =3D job; - obj->job->owner =3D virThreadSelfID(); - obj->job->started =3D now; - - return 0; - - error: - VIR_WARN("Cannot start job (%s) for domain %s;" - " current job is (%s) owned by (%llu)", - virDomainJobTypeToString(job), - obj->def->name, - virDomainJobTypeToString(obj->job->active), - obj->job->owner); - - if (errno =3D=3D ETIMEDOUT) - virReportError(VIR_ERR_OPERATION_TIMEOUT, - "%s", _("cannot acquire state change lock")); - else - virReportSystemError(errno, - "%s", _("cannot acquire job mutex")); - - return -1; -} - /* * obj must be locked before calling * @@ -460,7 +402,7 @@ libxlDomainShutdownThread(void *opaque) goto cleanup; } =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (xl_reason =3D=3D LIBXL_SHUTDOWN_REASON_POWEROFF) { @@ -589,7 +531,7 @@ libxlDomainDeathThread(void *opaque) goto cleanup; } =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF, VIR_DOMAIN_SHUTOFF_DESTRO= YED); diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h index 9e8804f747..b80552e30a 100644 --- a/src/libxl/libxl_domain.h +++ b/src/libxl/libxl_domain.h @@ -49,12 +49,6 @@ extern const struct libxl_event_hooks ev_hooks; int libxlDomainObjPrivateInitCtx(virDomainObj *vm); =20 -int -libxlDomainObjBeginJob(libxlDriverPrivate *driver, - virDomainObj *obj, - virDomainJob job) - G_GNUC_WARN_UNUSED_RESULT; - void libxlDomainObjEndJob(libxlDriverPrivate *driver, virDomainObj *obj); diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index 0ae1ee95c4..d94430708a 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -326,7 +326,7 @@ libxlAutostartDomain(virDomainObj *vm, virObjectLock(vm); virResetLastError(); =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 
if (vm->autostart && !virDomainObjIsActive(vm) && @@ -1049,7 +1049,7 @@ libxlDomainCreateXML(virConnectPtr conn, const char *= xml, NULL))) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) { + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) { if (!vm->persistent) virDomainObjListRemove(driver->domains, vm); goto cleanup; @@ -1159,7 +1159,7 @@ libxlDomainSuspend(virDomainPtr dom) if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1212,7 +1212,7 @@ libxlDomainResume(virDomainPtr dom) if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1373,7 +1373,7 @@ libxlDomainDestroyFlags(virDomainPtr dom, if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1446,7 +1446,7 @@ libxlDomainPMSuspendForDuration(virDomainPtr dom, if (virDomainPMSuspendForDurationEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1499,7 +1499,7 @@ libxlDomainPMWakeup(virDomainPtr dom, unsigned int fl= ags) if (virDomainPMWakeupEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetState(vm, NULL) !=3D VIR_DOMAIN_PMSUSPENDED) { @@ -1633,7 +1633,7 @@ libxlDomainSetMemoryFlags(virDomainPtr dom, unsigned = long newmem, if (virDomainSetMemoryFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainLiveConfigHelperMethod(cfg->caps, driver->xmlopt, vm, &fl= ags, @@ -1902,7 +1902,7 @@ libxlDomainSaveFlags(virDomainPtr dom, const char *to= , const char *dxml, if (virDomainSaveFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1967,7 +1967,7 @@ libxlDomainRestoreFlags(virConnectPtr conn, const cha= r *from, NULL))) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) { + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) { if (!vm->persistent) virDomainObjListRemove(driver->domains, vm); goto cleanup; @@ -2014,7 +2014,7 @@ libxlDomainCoreDump(virDomainPtr dom, const char *to,= unsigned int flags) if (virDomainCoreDumpEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2103,7 +2103,7 @@ libxlDomainManagedSave(virDomainPtr dom, unsigned int= flags) if (virDomainManagedSaveEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if 
(virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2248,7 +2248,7 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned i= nt nvcpus, if (virDomainSetVcpusFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!virDomainObjIsActive(vm) && (flags & VIR_DOMAIN_VCPU_LIVE)) { @@ -2446,7 +2446,7 @@ libxlDomainPinVcpuFlags(virDomainPtr dom, unsigned in= t vcpu, if (virDomainPinVcpuFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainLiveConfigHelperMethod(cfg->caps, driver->xmlopt, vm, @@ -2777,7 +2777,7 @@ libxlDomainCreateWithFlags(virDomainPtr dom, if (virDomainCreateWithFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjIsActive(vm)) { @@ -4095,7 +4095,7 @@ libxlDomainAttachDeviceFlags(virDomainPtr dom, const = char *xml, if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -4189,7 +4189,7 @@ libxlDomainDetachDeviceFlags(virDomainPtr dom, const = char *xml, if (virDomainDetachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -4477,7 +4477,7 @@ libxlDomainSetAutostart(virDomainPtr dom, int autosta= rt) if (virDomainSetAutostartEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!vm->persistent) { @@ -4683,7 +4683,7 @@ libxlDomainSetSchedulerParametersFlags(virDomainPtr d= om, if (virDomainSetSchedulerParametersFlagsEnsureACL(dom->conn, vm->def, = flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -5000,7 +5000,7 @@ libxlDomainInterfaceStats(virDomainPtr dom, if (virDomainInterfaceStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -5168,7 +5168,7 @@ libxlDomainMemoryStats(virDomainPtr dom, if (virDomainMemoryStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -5522,7 +5522,7 @@ libxlDomainBlockStats(virDomainPtr dom, if (virDomainBlockStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -5572,7 +5572,7 @@ libxlDomainBlockStatsFlags(virDomainPtr dom, if 
(virDomainBlockStatsFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -6368,7 +6368,7 @@ libxlDomainSetMetadata(virDomainPtr dom,
     if (virDomainSetMetadataEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     ret = virDomainObjSetMetadata(vm, type, metadata, key, uri,
diff --git a/src/libxl/libxl_migration.c b/src/libxl/libxl_migration.c
index 800a6b0365..90cf12ae00 100644
--- a/src/libxl/libxl_migration.c
+++ b/src/libxl/libxl_migration.c
@@ -383,7 +383,7 @@ libxlDomainMigrationSrcBegin(virConnectPtr conn,
      * terminated in the confirm phase. Errors in the begin or perform
      * phase will also terminate the job.
      */
-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (!(mig = libxlMigrationCookieNew(vm)))
@@ -553,7 +553,7 @@ libxlDomainMigrationDstPrepareTunnel3(virConnectPtr dconn,
      * Unless an error is encountered in this function, the job will
      * be terminated in the finish phase.
      */
-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto error;
 
     priv = vm->privateData;
@@ -662,7 +662,7 @@ libxlDomainMigrationDstPrepare(virConnectPtr dconn,
      * Unless an error is encountered in this function, the job will
      * be terminated in the finish phase.
      */
-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto error;
 
     priv = vm->privateData;
-- 
2.37.1
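One variant of the pattern worth calling out appears in the domain-creation paths of these diffs (libxlDomainCreateXML(), libxlDomainRestoreFlags(), and the LXC equivalent below): if the job cannot be acquired for a freshly added transient domain, the half-created object has to be dropped from the domain list again. A condensed sketch, with names as they appear in the hunks and 'driver' standing in for the per-driver state:

    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) {
        if (!vm->persistent)                        /* transient domain */
            virDomainObjListRemove(driver->domains, vm);
        goto cleanup;
    }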
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 09/17] LXC: use virDomainObjBeginJob()
Date: Wed, 24 Aug 2022 15:43:32 +0200
Message-Id: <8a51ca2223bdab1c46393f5dfbd4272ef459d751.1661348244.git.khanicov@redhat.com>

This patch removes virLXCDomainObjBeginJob() and replaces it with a call
to the generalized virDomainObjBeginJob().
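The libxl, LXC and CH implementations deleted across these three patches are near line-for-line duplicates of one idea: wait up to 30 seconds for the current job holder to finish, then claim the job. Below is a self-contained model of that loop using plain POSIX threads; all names are invented for illustration, and libvirt's real code uses virCondWaitUntil() on the per-object lock rather than raw pthreads.

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define JOB_WAIT_MS (1000ull * 30)   /* give up after 30 seconds */

    typedef struct {
        pthread_mutex_t lock;   /* protects 'active'; the per-object lock */
        pthread_cond_t cond;    /* signalled by the job owner on EndJob */
        int active;             /* non-zero while a job holds the object */
    } DemoJob;

    /* Returns 0 on success, -1 on timeout; mirrors the removed copies. */
    static int
    demoBeginJob(DemoJob *job)
    {
        struct timespec then;

        clock_gettime(CLOCK_REALTIME, &then);
        then.tv_sec += JOB_WAIT_MS / 1000;

        pthread_mutex_lock(&job->lock);
        while (job->active) {
            /* equivalent of virCondWaitUntil(): sleep until signalled
             * or until the 30s deadline passes */
            if (pthread_cond_timedwait(&job->cond, &job->lock, &then) != 0) {
                pthread_mutex_unlock(&job->lock);
                fprintf(stderr, "cannot acquire state change lock\n");
                return -1; /* ETIMEDOUT maps to VIR_ERR_OPERATION_TIMEOUT */
            }
        }
        job->active = 1;    /* we now own the job */
        pthread_mutex_unlock(&job->lock);
        return 0;
    }

    static void
    demoEndJob(DemoJob *job)
    {
        pthread_mutex_lock(&job->lock);
        job->active = 0;
        /* wake ALL waiters: claiming a job re-checks several variables */
        pthread_cond_broadcast(&job->cond);
        pthread_mutex_unlock(&job->lock);
    }

    int
    main(void)
    {
        DemoJob job = { PTHREAD_MUTEX_INITIALIZER,
                        PTHREAD_COND_INITIALIZER, 0 };

        if (demoBeginJob(&job) == 0) {
            /* ...do the state-changing work... */
            demoEndJob(&job);
        }
        return 0;
    }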
Signed-off-by: Kristina Hanicova Reviewed-by: J=C3=A1n Tomko --- src/lxc/lxc_domain.c | 57 -------------------------------------------- src/lxc/lxc_domain.h | 6 ----- src/lxc/lxc_driver.c | 46 +++++++++++++++++------------------ 3 files changed, 23 insertions(+), 86 deletions(-) diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c index 3dddc0a7d4..aad9dae694 100644 --- a/src/lxc/lxc_domain.c +++ b/src/lxc/lxc_domain.c @@ -35,63 +35,6 @@ VIR_LOG_INIT("lxc.lxc_domain"); =20 =20 -/* Give up waiting for mutex after 30 seconds */ -#define LXC_JOB_WAIT_TIME (1000ull * 30) - -/* - * obj must be locked before calling, virLXCDriver *must NOT be locked - * - * This must be called by anything that will change the VM state - * in any way - * - * Upon successful return, the object will have its ref count increased. - * Successful calls must be followed by EndJob eventually. - */ -int -virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSED, - virDomainObj *obj, - virDomainJob job) -{ - unsigned long long now; - unsigned long long then; - - if (virTimeMillisNow(&now) < 0) - return -1; - then =3D now + LXC_JOB_WAIT_TIME; - - while (obj->job->active) { - VIR_DEBUG("Wait normal job condition for starting job: %s", - virDomainJobTypeToString(job)); - if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0) - goto error; - } - - virDomainObjResetJob(obj->job); - - VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job)); - obj->job->active =3D job; - obj->job->owner =3D virThreadSelfID(); - - return 0; - - error: - VIR_WARN("Cannot start job (%s) for domain %s;" - " current job is (%s) owned by (%llu)", - virDomainJobTypeToString(job), - obj->def->name, - virDomainJobTypeToString(obj->job->active), - obj->job->owner); - - if (errno =3D=3D ETIMEDOUT) - virReportError(VIR_ERR_OPERATION_TIMEOUT, - "%s", _("cannot acquire state change lock")); - else - virReportSystemError(errno, - "%s", _("cannot acquire job mutex")); - return -1; -} - - /* * obj must be locked and have a reference before calling * diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h index 8cbcc0818c..e7b19fb2ff 100644 --- a/src/lxc/lxc_domain.h +++ b/src/lxc/lxc_domain.h @@ -72,12 +72,6 @@ extern virXMLNamespace virLXCDriverDomainXMLNamespace; extern virDomainXMLPrivateDataCallbacks virLXCDriverPrivateDataCallbacks; extern virDomainDefParserConfig virLXCDriverDomainDefParserConfig; =20 -int -virLXCDomainObjBeginJob(virLXCDriver *driver, - virDomainObj *obj, - virDomainJob job) - G_GNUC_WARN_UNUSED_RESULT; - void virLXCDomainObjEndJob(virLXCDriver *driver, virDomainObj *obj); diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c index 8a7135ff47..7aa13673a6 100644 --- a/src/lxc/lxc_driver.c +++ b/src/lxc/lxc_driver.c @@ -648,7 +648,7 @@ static int lxcDomainSetMemoryFlags(virDomainPtr dom, un= signed long newmem, if (virDomainSetMemoryFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -762,7 +762,7 @@ lxcDomainSetMemoryParameters(virDomainPtr dom, if (virDomainSetMemoryParametersEnsureACL(dom->conn, vm->def, flags) <= 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 /* QEMU and LXC implementation are identical */ @@ -983,7 +983,7 @@ static int lxcDomainCreateWithFiles(virDomainPtr dom, goto cleanup; } =20 
- if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjIsActive(vm)) { @@ -1105,7 +1105,7 @@ lxcDomainCreateXMLWithFiles(virConnectPtr conn, NULL))) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) { + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) { if (!vm->persistent) virDomainObjListRemove(driver->domains, vm); goto cleanup; @@ -1350,7 +1350,7 @@ lxcDomainDestroyFlags(virDomainPtr dom, if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_DESTROY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_DESTROY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1816,7 +1816,7 @@ lxcDomainSetSchedulerParametersFlags(virDomainPtr dom, if (!(caps =3D virLXCDriverGetCapabilities(driver, false))) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -2035,7 +2035,7 @@ lxcDomainBlockStats(virDomainPtr dom, if (virDomainBlockStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2118,7 +2118,7 @@ lxcDomainBlockStatsFlags(virDomainPtr dom, if (virDomainBlockStatsFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2254,7 +2254,7 @@ lxcDomainSetBlkioParameters(virDomainPtr dom, if (virDomainSetBlkioParametersEnsureACL(dom->conn, vm->def, flags) < = 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -2396,7 +2396,7 @@ lxcDomainInterfaceStats(virDomainPtr dom, if (virDomainInterfaceStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2456,7 +2456,7 @@ static int lxcDomainSetAutostart(virDomainPtr dom, if (virDomainSetAutostartEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!vm->persistent) { @@ -2607,7 +2607,7 @@ static int lxcDomainSuspend(virDomainPtr dom) if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2657,7 +2657,7 @@ static int lxcDomainResume(virDomainPtr dom) if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2786,7 +2786,7 @@ lxcDomainSendProcessSignal(virDomainPtr dom, if (virDomainSendProcessSignalEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if 
(virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -2871,7 +2871,7 @@ lxcDomainShutdownFlags(virDomainPtr dom,
     if (virDomainShutdownFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -2947,7 +2947,7 @@ lxcDomainReboot(virDomainPtr dom,
     if (virDomainRebootEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -4278,7 +4278,7 @@ static int lxcDomainAttachDeviceFlags(virDomainPtr dom,
     if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4383,7 +4383,7 @@ static int lxcDomainUpdateDeviceFlags(virDomainPtr dom,
     if (virDomainUpdateDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4446,7 +4446,7 @@ static int lxcDomainDetachDeviceFlags(virDomainPtr dom,
     if (virDomainDetachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4546,7 +4546,7 @@ static int lxcDomainLxcOpenNamespace(virDomainPtr dom,
     if (virDomainLxcOpenNamespaceEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -4629,7 +4629,7 @@ lxcDomainMemoryStats(virDomainPtr dom,
     if (virDomainMemoryStatsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -4799,7 +4799,7 @@ lxcDomainSetMetadata(virDomainPtr dom,
     if (virDomainSetMetadataEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     ret = virDomainObjSetMetadata(vm, type, metadata, key, uri,
@@ -4905,7 +4905,7 @@ lxcDomainGetHostname(virDomainPtr dom,
     if (virDomainGetHostnameEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
-- 
2.37.1
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 10/17] CH: use virDomainObjBeginJob()
Date: Wed, 24 Aug 2022 15:43:33 +0200
Message-Id: <6447d867d1ef7aca9fd9e5d16f3a720836fb62e6.1661348244.git.khanicov@redhat.com>

This patch removes virCHDomainObjBeginJob() and replaces it with a call
to the generalized virDomainObjBeginJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/ch/ch_domain.c | 51 +----------------------------------------------
 src/ch/ch_domain.h |  4 ----
 src/ch/ch_driver.c | 20 +++++++++----------
 3 files changed, 11 insertions(+), 64 deletions(-)

diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
index 9ddf9a8584..c592c6ffbb 100644
--- a/src/ch/ch_domain.c
+++ b/src/ch/ch_domain.c
@@ -32,60 +32,11 @@
 
 VIR_LOG_INIT("ch.ch_domain");
 
-/*
- * obj must be locked before calling, virCHDriver must NOT be locked
- *
- * This must be called by anything that will change the VM state
- * in any way
- *
- * Upon successful return, the object will have its ref count increased.
- * Successful calls must be followed by EndJob eventually.
- */
-int
-virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
-{
-    unsigned long long now;
-    unsigned long long then;
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-    then = now + CH_JOB_WAIT_TIME;
-
-    while (obj->job->active) {
-        VIR_DEBUG("Wait normal job condition for starting job: %s",
-                  virDomainJobTypeToString(job));
-        if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0) {
-            VIR_WARN("Cannot start job (%s) for domain %s;"
-                     " current job is (%s) owned by (%llu)",
-                     virDomainJobTypeToString(job),
-                     obj->def->name,
-                     virDomainJobTypeToString(obj->job->active),
-                     obj->job->owner);
-
-            if (errno == ETIMEDOUT)
-                virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                               "%s", _("cannot acquire state change lock"));
-            else
-                virReportSystemError(errno,
-                                     "%s", _("cannot acquire job mutex"));
-            return -1;
-        }
-    }
-
-    virDomainObjResetJob(obj->job);
-
-    VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
-    obj->job->active = job;
-    obj->job->owner = virThreadSelfID();
-
-    return 0;
-}
-
 /*
  * obj must be locked and have a reference before calling
  *
  * To be called after completing the work associated with the
- * earlier virCHDomainBeginJob() call
+ * earlier virDomainObjBeginJob() call
  */
 void
 virCHDomainObjEndJob(virDomainObj *obj)
diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h
index c7dfde601e..076043f772 100644
--- a/src/ch/ch_domain.h
+++ b/src/ch/ch_domain.h
@@ -60,10 +60,6 @@ struct _virCHDomainVcpuPrivate {
 extern virDomainXMLPrivateDataCallbacks virCHDriverPrivateDataCallbacks;
 extern virDomainDefParserConfig virCHDriverDomainDefParserConfig;
 
-int
-virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
-    G_GNUC_WARN_UNUSED_RESULT;
-
 void
 virCHDomainObjEndJob(virDomainObj *obj);
 
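Seen side by side, the wrappers removed in patches 08-10 differ only in an unused driver argument, which is what makes the consolidation mechanical. A schematic comparison of the signatures, taken from the diffs (declarations only, not compilable in isolation; the driver parameters were already marked G_GNUC_UNUSED in the libxl and LXC copies):

    /* before: one copy per driver */
    int libxlDomainObjBeginJob(libxlDriverPrivate *driver, /* unused */
                               virDomainObj *obj, virDomainJob job);
    int virLXCDomainObjBeginJob(virLXCDriver *driver,      /* unused */
                                virDomainObj *obj, virDomainJob job);
    int virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job);

    /* after: one shared implementation in src/conf/virdomainjob */
    int virDomainObjBeginJob(virDomainObj *obj, virDomainJob job);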
diff --git a/src/ch/ch_driver.c b/src/ch/ch_driver.c
index e7c172c894..b089a7c9c7 100644
--- a/src/ch/ch_driver.c
+++ b/src/ch/ch_driver.c
@@ -217,7 +217,7 @@ chDomainCreateXML(virConnectPtr conn,
                                        NULL)))
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED) < 0)
@@ -251,7 +251,7 @@ chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
     if (virDomainCreateWithFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     ret = virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED);
@@ -390,7 +390,7 @@ chDomainShutdownFlags(virDomainPtr dom,
     if (virDomainShutdownFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -446,7 +446,7 @@ chDomainReboot(virDomainPtr dom, unsigned int flags)
     if (virDomainRebootEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -495,7 +495,7 @@ chDomainSuspend(virDomainPtr dom)
     if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -540,7 +540,7 @@ chDomainResume(virDomainPtr dom)
     if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -594,7 +594,7 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
     if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_DESTROY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_DESTROY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -1217,7 +1217,7 @@ chDomainPinVcpuFlags(virDomainPtr dom,
     if (virDomainPinVcpuFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
@@ -1354,7 +1354,7 @@ chDomainPinEmulator(virDomainPtr dom,
     if (virDomainPinEmulatorEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
@@ -1625,7 +1625,7 @@ chDomainSetNumaParameters(virDomainPtr dom,
         }
     }
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
-- 
2.37.1
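The final message below completes the pairing by generalizing EndJob as well. One design point visible in its diff: the generic code in src/conf/virdomainjob reaches driver-specific behaviour only through a callback table hanging off the job object. A sketch of that shape follows; only the obj->job->cb->saveStatusPrivate access path appears in the diff, so the struct name and layout here are assumed for illustration.

    /* Assumed shape of the job callback table (illustrative): it lets
     * the generic virDomainObjEndJob() persist driver status without
     * linking against any particular driver. */
    typedef struct _virDomainObjJobCallbacks {
        /* invoked from virDomainObjEndJob() when virDomainTrackJob()
         * says the finished job type must survive a daemon restart */
        void (*saveStatusPrivate)(virDomainObj *obj);
        /* ...other driver hooks follow the same pattern... */
    } virDomainObjJobCallbacks;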
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 11/17] qemu: use virDomainObjEndJob()
Date: Wed, 24 Aug 2022 15:43:34 +0200
Message-Id: <8efd7f25e0bcc6ea355905291a588031ff9fddb2.1661348244.git.khanicov@redhat.com>

This patch moves qemuDomainObjEndJob() into src/conf/virdomainjob as
the universal virDomainObjEndJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 docs/kbase/internals/qemu-threads.rst |   6 +-
 src/conf/virdomainjob.c               |  28 +++++
 src/conf/virdomainjob.h               |   2 +
 src/libvirt_private.syms              |   1 +
 src/qemu/qemu_checkpoint.c            |   6 +-
 src/qemu/qemu_domain.c                |   4 +-
 src/qemu/qemu_domainjob.c             |  26 -----
 src/qemu/qemu_domainjob.h             |   1 -
 src/qemu/qemu_driver.c                | 156 +++++++++++++-------------
 src/qemu/qemu_migration.c             |   2 +-
 src/qemu/qemu_process.c               |   8 +-
 src/qemu/qemu_snapshot.c              |   2 +-
 12 files changed, 123 insertions(+), 119 deletions(-)

diff --git a/docs/kbase/internals/qemu-threads.rst b/docs/kbase/internals/qemu-threads.rst
index 00340bb732..22d12f61a5 100644
--- a/docs/kbase/internals/qemu-threads.rst
+++ b/docs/kbase/internals/qemu-threads.rst
@@ -123,7 +123,7 @@ To acquire the normal job condition
     isn't
   - Sets ``job.active`` to the job type

-  ``qemuDomainObjEndJob()``
+  ``virDomainObjEndJob()``
     - Sets job.active to 0
     - Signals on job.cond condition

@@ -218,7 +218,7 @@ Design patterns

        ...do work...

-       qemuDomainObjEndJob(obj);
+       virDomainObjEndJob(obj);

        virDomainObjEndAPI(&obj);

@@ -242,7 +242,7 @@ Design patterns

        ...do final work...

-       qemuDomainObjEndJob(obj);
+       virDomainObjEndJob(obj);
        virDomainObjEndAPI(&obj);


diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index 81bd886c4c..0f239c680c 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -526,3 +526,31 @@ int virDomainObjBeginJob(virDomainObj *obj,
         return -1;
     return 0;
 }
+
+/*
+ * obj must be locked and have a reference before calling
+ *
+ * To be called after completing the work associated with the
+ * earlier virDomainBeginJob() call
+ */
+void
+virDomainObjEndJob(virDomainObj *obj)
+{
+    virDomainJob job = obj->job->active;
+
+    obj->job->jobsQueued--;
+
+    VIR_DEBUG("Stopping job: %s (async=%s vm=%p name=%s)",
+              virDomainJobTypeToString(job),
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
+              obj, obj->def->name);
+
+    virDomainObjResetJob(obj->job);
+
+    if (virDomainTrackJob(job) &&
+        obj->job->cb->saveStatusPrivate)
+        obj->job->cb->saveStatusPrivate(obj);
+    /* We indeed need to wake up ALL threads waiting because
+     * grabbing a job requires checking more variables. */
+    virCondBroadcast(&obj->job->cond);
+}
diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h
index adee679e72..7a06c384f3 100644
--- a/src/conf/virdomainjob.h
+++ b/src/conf/virdomainjob.h
@@ -249,3 +249,5 @@ int virDomainObjBeginJobInternal(virDomainObj *obj,
 int virDomainObjBeginJob(virDomainObj *obj,
                          virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;
+
+void virDomainObjEndJob(virDomainObj *obj);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 89d8bd5da6..77da4a1d01 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1191,6 +1191,7 @@ virDomainObjBeginJob;
 virDomainObjBeginJobInternal;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
+virDomainObjEndJob;
 virDomainObjInitJob;
 virDomainObjPreserveJob;
 virDomainObjResetAgentJob;
diff --git a/src/qemu/qemu_checkpoint.c b/src/qemu/qemu_checkpoint.c
index c6fb3d1f97..8580158d66 100644
--- a/src/qemu/qemu_checkpoint.c
+++ b/src/qemu/qemu_checkpoint.c
@@ -626,7 +626,7 @@ qemuCheckpointCreateXML(virDomainPtr domain,
         checkpoint = virGetDomainCheckpoint(domain, chk->def->name);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

     return checkpoint;
 }
@@ -764,7 +764,7 @@ qemuCheckpointGetXMLDescUpdateSize(virDomainObj *vm,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }

@@ -917,6 +917,6 @@ qemuCheckpointDelete(virDomainObj *vm,
     }

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index c596a70a72..3514f104f1 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -5973,7 +5973,7 @@ qemuDomainObjEnterMonitorInternal(virDomainObj *obj,
         if (!virDomainObjIsActive(obj)) {
             virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                            _("domain is no longer running"));
-            qemuDomainObjEndJob(obj);
+            virDomainObjEndJob(obj);
             return -1;
         }
     } else if (obj->job->asyncOwner == virThreadSelfID()) {
@@ -6023,7 +6023,7 @@ qemuDomainObjExitMonitor(virDomainObj *obj)
     priv->mon = NULL;

     if (obj->job->active == VIR_JOB_ASYNC_NESTED)
-        qemuDomainObjEndJob(obj);
+        virDomainObjEndJob(obj);
 }

 void qemuDomainObjEnterMonitor(virDomainObj *obj)
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 13e1f332dc..a8baf096e5 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -730,32 +730,6 @@ qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                         VIR_ASYNC_JOB_NONE, true);
 }

-/*
- * obj must be locked and have a reference before calling
- *
- * To be called after completing the work associated with the
- * earlier qemuDomainBeginJob() call
- */
-void
-qemuDomainObjEndJob(virDomainObj *obj)
-{
-    virDomainJob job = obj->job->active;
-
-    obj->job->jobsQueued--;
-
-    VIR_DEBUG("Stopping job: %s (async=%s vm=%p name=%s)",
-              virDomainJobTypeToString(job),
-              virDomainAsyncJobTypeToString(obj->job->asyncJob),
-              obj, obj->def->name);
-
-    virDomainObjResetJob(obj->job);
-    if (virDomainTrackJob(job))
-        qemuDomainSaveStatus(obj);
-    /* We indeed need to wake up ALL threads waiting because
-     * grabbing a job requires checking more variables. */
-    virCondBroadcast(&obj->job->cond);
-}
-
 void
 qemuDomainObjEndAgentJob(virDomainObj *obj)
 {
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 66f18483fe..918b74748b 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -84,7 +84,6 @@ int qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                 virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;

-void qemuDomainObjEndJob(virDomainObj *obj);
 void qemuDomainObjEndAgentJob(virDomainObj *obj);
 void qemuDomainObjEndAsyncJob(virDomainObj *obj);
 void qemuDomainObjAbortAsyncJob(virDomainObj *obj);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 52f0a6e45b..666e8dd228 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1706,7 +1706,7 @@ static int qemuDomainSuspend(virDomainPtr dom)
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1762,7 +1762,7 @@ static int qemuDomainResume(virDomainPtr dom)
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1827,7 +1827,7 @@ qemuDomainShutdownFlagsMonitor(virDomainObj *vm,
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }

@@ -1947,7 +1947,7 @@ qemuDomainRebootMonitor(virDomainObj *vm,
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }

@@ -2040,7 +2040,7 @@ qemuDomainReset(virDomainPtr dom, unsigned int flags)
         virDomainObjSetState(vm, VIR_DOMAIN_PAUSED, VIR_DOMAIN_PAUSED_CRASHED);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2117,7 +2117,7 @@ qemuDomainDestroyFlags(virDomainPtr dom,
 endjob:
     if (ret == 0)
         qemuDomainRemoveInactive(driver, vm);
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2295,7 +2295,7 @@ static int qemuDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,

     ret = 0;
 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2378,7 +2378,7 @@ static int qemuDomainSetMemoryStatsPeriod(virDomainPtr dom, int period,

     ret = 0;
 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2412,7 +2412,7 @@ static int qemuDomainInjectNMI(virDomainPtr domain, unsigned int flags)
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2470,7 +2470,7 @@ static int qemuDomainSendKey(virDomainPtr domain,
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -3398,7 +3398,7 @@ qemuDomainScreenshot(virDomainPtr dom,
         unlink(tmp);
     }

-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -3622,7 +3622,7 @@ processDeviceDeletedEvent(virQEMUDriver *driver,
     qemuDomainSaveStatus(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 }


@@ -3918,7 +3918,7 @@ processNicRxFilterChangedEvent(virDomainObj *vm,
     }

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virNetDevRxFilterFree(hostFilter);
@@ -4001,7 +4001,7 @@ processSerialChangedEvent(virQEMUDriver *driver,
     }

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 }


@@ -4020,7 +4020,7 @@ processJobStatusChangeEvent(virDomainObj *vm,
     qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 }


@@ -4066,7 +4066,7 @@ processMonitorEOFEvent(virQEMUDriver *driver,

 endjob:
     qemuDomainRemoveInactive(driver, vm);
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 }


@@ -4180,7 +4180,7 @@ processMemoryDeviceSizeChange(virQEMUDriver *driver,
                                                    mem->currentsize);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     virObjectEventStateQueue(driver->domainEventState, event);
 }

@@ -4388,7 +4388,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
     if (useAgent)
         qemuDomainObjEndAgentJob(vm);
     else
-        qemuDomainObjEndJob(vm);
+        virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4544,7 +4544,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4692,7 +4692,7 @@ qemuDomainPinEmulator(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virObjectEventStateQueue(driver->domainEventState, event);
@@ -4934,7 +4934,7 @@ qemuDomainGetIOThreadsLive(virDomainObj *vm,
     ret = niothreads;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     if (info_ret) {
@@ -5098,7 +5098,7 @@ qemuDomainPinIOThread(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virObjectEventStateQueue(driver->domainEventState, event);
@@ -5642,7 +5642,7 @@ qemuDomainChgIOThread(virQEMUDriver *driver,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

     return ret;
 }
@@ -6829,7 +6829,7 @@ qemuDomainUndefineFlags(virDomainPtr dom,

     ret = 0;
 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -8004,7 +8004,7 @@ qemuDomainAttachDeviceFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -8107,7 +8107,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr dom,
     }

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     if (dev != dev_copy)
@@ -8289,7 +8289,7 @@ qemuDomainDetachDeviceFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -8324,7 +8324,7 @@ qemuDomainDetachDeviceAlias(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -8423,7 +8423,7 @@ static int qemuDomainSetAutostart(virDomainPtr dom,
         vm->autostart = autostart;

 endjob:
-        qemuDomainObjEndJob(vm);
+        virDomainObjEndJob(vm);
     }
     ret = 0;

@@ -8563,7 +8563,7 @@ qemuDomainSetBlkioParameters(virDomainPtr dom,
     }

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -8735,7 +8735,7 @@ qemuDomainSetMemoryParameters(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -9016,7 +9016,7 @@ qemuDomainSetNumaParameters(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -9206,7 +9206,7 @@ qemuDomainSetPerfEvents(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -9267,7 +9267,7 @@ qemuDomainGetPerfEvents(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -9647,7 +9647,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -9967,7 +9967,7 @@ qemuDomainBlockResize(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -10141,7 +10141,7 @@ qemuDomainBlockStats(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -10228,7 +10228,7 @@ qemuDomainBlockStatsFlags(virDomainPtr dom,
     *nparams = nstats;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     VIR_FREE(blockstats);
@@ -10531,7 +10531,7 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -10705,7 +10705,7 @@ qemuDomainMemoryStats(virDomainPtr dom,

     ret = qemuDomainMemoryStatsInternal(vm, stats, nr_stats);

-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -10852,7 +10852,7 @@ qemuDomainMemoryPeek(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     VIR_FORCE_CLOSE(fd);
@@ -11165,7 +11165,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 cleanup:
     VIR_FREE(entry);
     virDomainObjEndAPI(&vm);
@@ -12715,7 +12715,7 @@ qemuDomainGetJobStatsInternal(virDomainObj *vm,
     ret = 0;

 cleanup:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }

@@ -12927,7 +12927,7 @@ qemuDomainAbortJobFlags(virDomainPtr dom,
     }

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -12982,7 +12982,7 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -13033,7 +13033,7 @@ qemuDomainMigrateGetMaxDowntime(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -13083,7 +13083,7 @@ qemuDomainMigrateGetCompressionCache(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -13136,7 +13136,7 @@ qemuDomainMigrateSetCompressionCache(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -13214,7 +13214,7 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -13267,7 +13267,7 @@ qemuDomainMigrationGetPostcopyBandwidth(virDomainObj *vm,
     ret = 0;

 cleanup:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }

@@ -13351,7 +13351,7 @@ qemuDomainMigrateStartPostCopy(virDomainPtr dom,
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -14085,7 +14085,7 @@ qemuDomainQemuMonitorCommandWithFiles(virDomainPtr domain,
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -14476,7 +14476,7 @@ qemuDomainBlockPullCommon(virDomainObj *vm,
     qemuBlockJobStarted(job, vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     qemuBlockJobStartupFinalize(vm, job);
@@ -14583,7 +14583,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom,
 endjob:
     if (job && !async)
         qemuBlockJobSyncEnd(vm, job, VIR_ASYNC_JOB_NONE);
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -14698,7 +14698,7 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom,
     ret = 1;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -14759,7 +14759,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom,
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -15181,7 +15181,7 @@ qemuDomainBlockCopyCommon(virDomainObj *vm,
     if (need_unlink && virStorageSourceUnlink(mirror) < 0)
         VIR_WARN("%s", _("unable to remove just-created copy target"));
     virStorageSourceDeinit(mirror);
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     qemuBlockJobStartupFinalize(vm, job);

     return ret;
@@ -15566,7 +15566,7 @@ qemuDomainBlockCommit(virDomainPtr dom,
         virErrorRestore(&orig_err);
     }
     qemuBlockJobStartupFinalize(vm, job);
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -15640,7 +15640,7 @@ qemuDomainOpenGraphics(virDomainPtr dom,
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -15717,7 +15717,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom,
     ret = qemuMonitorOpenGraphics(priv->mon, protocol, pair[1], "graphicsfd",
                                   (flags & VIR_DOMAIN_OPEN_GRAPHICS_SKIPAUTH));
     qemuDomainObjExitMonitor(vm);
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     if (ret < 0)
         goto cleanup;

@@ -16200,7 +16200,7 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,

     ret = 0;
 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     VIR_FREE(info.group_name);
@@ -16353,7 +16353,7 @@ qemuDomainGetBlockIoTune(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     VIR_FREE(reply.group_name);
@@ -16424,7 +16424,7 @@ qemuDomainGetDiskErrors(virDomainPtr dom,
     ret = n;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -16472,7 +16472,7 @@ qemuDomainSetMetadata(virDomainPtr dom,
         virObjectEventStateQueue(driver->domainEventState, ev);
     }

-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -16589,7 +16589,7 @@ qemuDomainQueryWakeupSuspendSupport(virDomainObj *vm,
     ret = qemuDomainProbeQMPCurrentMachine(vm, wakeupSupported);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }

@@ -16724,7 +16724,7 @@ qemuDomainPMWakeup(virDomainPtr dom,
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -17104,7 +17104,7 @@ qemuDomainGetHostnameLease(virDomainObj *vm,

     ret = 0;
 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }

@@ -17283,7 +17283,7 @@ qemuDomainSetTime(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -18820,7 +18820,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
         rc = qemuDomainGetStats(conn, vm, requestedStats, &tmp, domflags);

         if (HAVE_JOB(domflags))
-            qemuDomainObjEndJob(vm);
+            virDomainObjEndJob(vm);

         virObjectUnlock(vm);

@@ -18998,7 +18998,7 @@ qemuDomainGetFSInfo(virDomainPtr dom,
     ret = virDomainFSInfoFormat(agentinfo, nfs, vm->def, info);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     if (agentinfo) {
@@ -19312,7 +19312,7 @@ static int qemuDomainRename(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -19570,7 +19570,7 @@ qemuDomainSetVcpu(virDomainPtr dom,
     ret = qemuDomainSetVcpuInternal(driver, vm, def, persistentDef, map, !!state);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -19631,7 +19631,7 @@ qemuDomainSetBlockThreshold(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -19813,7 +19813,7 @@ qemuDomainSetLifecycleAction(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -19972,7 +19972,7 @@ qemuDomainGetSEVInfo(virDomainObj *vm,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }

@@ -20097,7 +20097,7 @@ qemuDomainSetLaunchSecurityState(virDomainPtr domain,
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -20488,7 +20488,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom,
             qemuAgentDiskInfoFormatParams(agentdiskinfo, ndisks, vm->def,
                                           params, nparams, &maxparams);

 endjob:
-        qemuDomainObjEndJob(vm);
+        virDomainObjEndJob(vm);
     }

     if (nifaces > 0) {
@@ -20751,7 +20751,7 @@ qemuDomainStartDirtyRateCalc(virDomainPtr dom,
     qemuDomainObjExitMonitor(vm);

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index ca1f9071bc..38c2f6fdfd 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2844,7 +2844,7 @@ qemuMigrationSrcBegin(virConnectPtr conn,
         else
             qemuMigrationJobFinish(vm);
     } else {
-        qemuDomainObjEndJob(vm);
+        virDomainObjEndJob(vm);
     }

 cleanup:
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index d84d113829..16651341fc 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -492,7 +492,7 @@ qemuProcessFakeReboot(void *opaque)
     ret = 0;

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     priv->pausedShutdown = false;
@@ -8415,7 +8415,7 @@ void qemuProcessStop(virQEMUDriver *driver,

 endjob:
     if (asyncJob != VIR_ASYNC_JOB_NONE)
-        qemuDomainObjEndJob(vm);
+        virDomainObjEndJob(vm);

 cleanup:
     virErrorRestore(&orig_err);
@@ -8457,7 +8457,7 @@ qemuProcessAutoDestroy(virDomainObj *dom,

     qemuDomainRemoveInactive(driver, dom);

-    qemuDomainObjEndJob(dom);
+    virDomainObjEndJob(dom);

     virObjectEventStateQueue(driver->domainEventState, event);
 }
@@ -8919,7 +8919,7 @@ qemuProcessReconnect(void *opaque)

 cleanup:
     if (jobStarted)
-        qemuDomainObjEndJob(obj);
+        virDomainObjEndJob(obj);
     if (!virDomainObjIsActive(obj))
         qemuDomainRemoveInactive(driver, obj);
     virDomainObjEndAPI(&obj);
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index c50e9f3846..afed0f0e28 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2373,7 +2373,7 @@ qemuSnapshotDelete(virDomainObj *vm,
     }

 endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

     return ret;
 }
--
2.37.1
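The comment carried into virDomainObjEndJob() ("wake up ALL threads
waiting") deserves a concrete illustration. Below is a minimal sketch,
again with plain pthreads and hypothetical Demo* stand-ins rather than
libvirt's real API, of why ending a job broadcasts instead of
signalling when waiters share one condition variable but test
different predicates (normal vs. agent job slots in libvirt's case):

/* Illustrative only: lost-wakeup hazard with a shared condvar.
 * Hypothetical stand-ins; compile with -pthread. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;    /* shared by all kinds of job waiters */
    int normalJob;          /* stand-in for job.active */
    int agentJob;           /* stand-in for job.agentActive */
} DemoJobObj;

static void
demoEndNormalJob(DemoJobObj *obj)
{
    pthread_mutex_lock(&obj->lock);
    obj->normalJob = 0;
    /* With pthread_cond_signal() the single woken thread might be one
     * waiting for the agent job; it re-checks its predicate, finds it
     * still false, and sleeps again, while the thread waiting for the
     * normal job is never woken. Broadcasting lets every waiter
     * re-evaluate its own predicate under the lock. */
    pthread_cond_broadcast(&obj->cond);
    pthread_mutex_unlock(&obj->lock);
}

This is the standard rule of thumb: signal is only safe when all
waiters wait for the same predicate; once they differ, broadcast.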
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 12/17] libxl: use virDomainObjEndJob()
Date: Wed, 24 Aug 2022 15:43:35 +0200
Message-Id: <059efdba1220857d058c08f57cb284c4b2a77788.1661348244.git.khanicov@redhat.com>

This patch removes libxlDomainObjEndJob() and replaces it with a call
to the generalized virDomainObjEndJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/libxl/libxl_domain.c    | 27 ++-------------------
 src/libxl/libxl_domain.h    |  4 ----
 src/libxl/libxl_driver.c    | 51 +++++++++++++++++--------------------
 src/libxl/libxl_migration.c | 14 +++++------
 4 files changed, 33 insertions(+), 63 deletions(-)

diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 3a1b371054..2d53250895 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -43,29 +43,6 @@
 VIR_LOG_INIT("libxl.libxl_domain");


-/*
- * obj must be locked before calling
- *
- * To be called after completing the work associated with the
- * earlier libxlDomainBeginJob() call
- *
- * Returns true if the remaining reference count on obj is
- * non-zero, false if the reference count has dropped to zero
- * and obj is disposed.
- */
-void
-libxlDomainObjEndJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
-                     virDomainObj *obj)
-{
-    virDomainJob job = obj->job->active;
-
-    VIR_DEBUG("Stopping job: %s",
-              virDomainJobTypeToString(job));
-
-    virDomainObjResetJob(obj->job);
-    virCondSignal(&obj->job->cond);
-}
-
 int
 libxlDomainJobGetTimeElapsed(virDomainJobObj *job, unsigned long long *timeElapsed)
 {
@@ -505,7 +482,7 @@ libxlDomainShutdownThread(void *opaque)
     }

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -541,7 +518,7 @@ libxlDomainDeathThread(void *opaque)
     libxlDomainCleanup(driver, vm);
     if (!vm->persistent)
         virDomainObjListRemove(driver->domains, vm);
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
     virObjectEventStateQueue(driver->domainEventState, dom_event);

 cleanup:
diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h
index b80552e30a..94b693e477 100644
--- a/src/libxl/libxl_domain.h
+++ b/src/libxl/libxl_domain.h
@@ -49,10 +49,6 @@ extern const struct libxl_event_hooks ev_hooks;
 int
 libxlDomainObjPrivateInitCtx(virDomainObj *vm);

-void
-libxlDomainObjEndJob(libxlDriverPrivate *driver,
-                     virDomainObj *obj);
-
 int
 libxlDomainJobGetTimeElapsed(virDomainJobObj *job,
                              unsigned long long *timeElapsed);
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index d94430708a..79af2f4441 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -340,7 +340,7 @@ libxlAutostartDomain(virDomainObj *vm,
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 cleanup:
     virDomainObjEndAPI(&vm);

@@ -1065,7 +1065,7 @@ libxlDomainCreateXML(virConnectPtr conn, const char *xml,
     dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1185,7 +1185,7 @@ libxlDomainSuspend(virDomainPtr dom)
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1239,7 +1239,7 @@ libxlDomainResume(virDomainPtr dom)
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1398,7 +1398,7 @@ libxlDomainDestroyFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1472,7 +1472,7 @@ libxlDomainPMSuspendForDuration(virDomainPtr dom,
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1537,7 +1537,7 @@ libxlDomainPMWakeup(virDomainPtr dom, unsigned int flags)
     libxlDomainCleanup(driver, vm);

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1696,7 +1696,7 @@ libxlDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1917,7 +1917,7 @@ libxlDomainSaveFlags(virDomainPtr dom, const char *to, const char *dxml,
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1979,7 +1979,7 @@ libxlDomainRestoreFlags(virConnectPtr conn, const char *from,
     if (ret < 0 && !vm->persistent)
         virDomainObjListRemove(driver->domains, vm);

-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     if (VIR_CLOSE(fd) < 0)
@@ -2076,7 +2076,7 @@ libxlDomainCoreDump(virDomainPtr dom, const char *to, unsigned int flags)
     }

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2129,7 +2129,7 @@ libxlDomainManagedSave(virDomainPtr dom, unsigned int flags)
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2344,7 +2344,7 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
     }

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     VIR_FREE(bitmask);
@@ -2489,7 +2489,7 @@ libxlDomainPinVcpuFlags(virDomainPtr dom, unsigned int vcpu,
     }

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2793,7 +2793,7 @@ libxlDomainCreateWithFlags(virDomainPtr dom,
     dom->id = vm->def->id;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4152,7 +4152,7 @@ libxlDomainAttachDeviceFlags(virDomainPtr dom, const char *xml,
     }

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainDeviceDefFree(devConf);
@@ -4240,7 +4240,7 @@ libxlDomainDetachDeviceFlags(virDomainPtr dom, const char *xml,
     }

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainDeviceDefFree(dev);
@@ -4522,7 +4522,7 @@ libxlDomainSetAutostart(virDomainPtr dom, int autostart)
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4723,7 +4723,7 @@ libxlDomainSetSchedulerParametersFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4989,7 +4989,6 @@ libxlDomainInterfaceStats(virDomainPtr dom,
                           const char *device,
                           virDomainInterfaceStatsPtr stats)
 {
-    libxlDriverPrivate *driver = dom->conn->privateData;
     virDomainObj *vm;
     virDomainNetDef *net = NULL;
     int ret = -1;
@@ -5016,7 +5015,7 @@ libxlDomainInterfaceStats(virDomainPtr dom,
     ret = 0;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -5189,7 +5188,7 @@ libxlDomainMemoryStats(virDomainPtr dom,
     ret = i;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     libxl_dominfo_dispose(&d_info);
@@ -5511,7 +5510,6 @@ libxlDomainBlockStats(virDomainPtr dom,
                       const char *path,
                       virDomainBlockStatsPtr stats)
 {
-    libxlDriverPrivate *driver = dom->conn->privateData;
     virDomainObj *vm;
     libxlBlockStats blkstats;
     int ret = -1;
@@ -5542,7 +5540,7 @@ libxlDomainBlockStats(virDomainPtr dom,
         stats->errs = -1;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -5556,7 +5554,6 @@ libxlDomainBlockStatsFlags(virDomainPtr dom,
                            int *nparams,
                            unsigned int flags)
 {
-    libxlDriverPrivate *driver = dom->conn->privateData;
     virDomainObj *vm;
     libxlBlockStats blkstats;
     int nstats;
@@ -5615,7 +5612,7 @@ libxlDomainBlockStatsFlags(virDomainPtr dom,
 #undef LIBXL_BLKSTAT_ASSIGN_PARAM

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -6381,7 +6378,7 @@ libxlDomainSetMetadata(virDomainPtr dom,
         virObjectEventStateQueue(driver->domainEventState, ev);
     }

-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
diff --git a/src/libxl/libxl_migration.c b/src/libxl/libxl_migration.c
index 90cf12ae00..6048540334 100644
--- a/src/libxl/libxl_migration.c
+++ b/src/libxl/libxl_migration.c
@@ -417,7 +417,7 @@ libxlDomainMigrationSrcBegin(virConnectPtr conn,
     goto cleanup;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     libxlMigrationCookieFree(mig);
@@ -604,7 +604,7 @@ libxlDomainMigrationDstPrepareTunnel3(virConnectPtr dconn,
     goto done;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 error:
     libxlMigrationCookieFree(mig);
@@ -774,7 +774,7 @@ libxlDomainMigrationDstPrepare(virConnectPtr dconn,
     goto done;

 endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 error:
     for (i = 0; i < nsocks; i++) {
@@ -1155,7 +1155,7 @@ libxlDomainMigrationSrcPerformP2P(libxlDriverPrivate *driver,
          * Confirm phase will not be executed if perform fails. End the
          * job started in begin phase.
          */
-        libxlDomainObjEndJob(driver, vm);
+        virDomainObjEndJob(vm);
     }

 cleanup:
@@ -1226,7 +1226,7 @@ libxlDomainMigrationSrcPerform(libxlDriverPrivate *driver,
          * Confirm phase will not be executed if perform fails. End the
          * job started in begin phase.
          */
-        libxlDomainObjEndJob(driver, vm);
+        virDomainObjEndJob(vm);
     }

     return ret;
@@ -1327,7 +1327,7 @@ libxlDomainMigrationDstFinish(virConnectPtr dconn,
     }

     /* EndJob for corresponding BeginJob in prepare phase */
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
     virObjectEventStateQueue(driver->domainEventState, event);
     virObjectUnref(cfg);
     return dom;
@@ -1384,7 +1384,7 @@ libxlDomainMigrationSrcConfirm(libxlDriverPrivate *driver,

 cleanup:
     /* EndJob for corresponding BeginJob in begin phase */
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
     virObjectEventStateQueue(driver->domainEventState, event);
     virObjectUnref(cfg);
     return ret;
--
2.37.1
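The libxl migration hunks above show a job whose lifetime spans
several functions: begun in the begin/prepare phase and ended in
confirm/finish. A minimal sketch of that invariant, with hypothetical
Demo* names rather than libvirt's real migration API:

/* Illustrative only: a job held across call paths. The invariant is
 * that every successful begin has exactly one matching end, even when
 * they live in different functions. Hypothetical stand-ins. */
typedef struct {
    int jobActive;      /* stand-in for the job slot */
} DemoVM;

static int
demoMigrationBegin(DemoVM *vm)
{
    if (vm->jobActive)
        return -1;
    vm->jobActive = 1;  /* deliberately NOT released before returning */
    return 0;
}

static void
demoMigrationConfirm(DemoVM *vm)
{
    /* ... final migration work ... */
    vm->jobActive = 0;  /* EndJob for the BeginJob in the begin phase */
}

This is why the converted call sites keep comments like "EndJob for
corresponding BeginJob in prepare phase": once begin and end are
separated, the pairing has to be documented rather than enforced by
scope.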
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 13/17] LXC: use virDomainObjEndJob()
Date: Wed, 24 Aug 2022 15:43:36 +0200
Message-Id: <158fd0227f7cff364691f3f09b1096a9bc0ccabe.1661348244.git.khanicov@redhat.com>

This patch removes virLXCDomainObjEndJob() and replaces it with a call
to the generalized virDomainObjEndJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/lxc/lxc_domain.c | 20 ----------------
 src/lxc/lxc_domain.h |  4 ----
 src/lxc/lxc_driver.c | 57 +++++++++++++++++++-------------------------
 3 files changed, 24 insertions(+), 57 deletions(-)

diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c
index aad9dae694..1a39129f82 100644
--- a/src/lxc/lxc_domain.c
+++ b/src/lxc/lxc_domain.c
@@ -35,26 +35,6 @@
 VIR_LOG_INIT("lxc.lxc_domain");


-/*
- * obj must be locked and have a reference before calling
- *
- * To be called after completing the work associated with the
- * earlier virLXCDomainBeginJob() call
- */
-void
-virLXCDomainObjEndJob(virLXCDriver *driver G_GNUC_UNUSED,
-                      virDomainObj *obj)
-{
-    virDomainJob job = obj->job->active;
-
-    VIR_DEBUG("Stopping job: %s",
-              virDomainJobTypeToString(job));
-
-    virDomainObjResetJob(obj->job);
-    virCondSignal(&obj->job->cond);
-}
-
-
 static void *
 virLXCDomainObjPrivateAlloc(void *opaque)
 {
diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h
index e7b19fb2ff..d22c2ea153 100644
--- a/src/lxc/lxc_domain.h
+++ b/src/lxc/lxc_domain.h
@@ -72,10 +72,6 @@ extern virXMLNamespace virLXCDriverDomainXMLNamespace;
 extern virDomainXMLPrivateDataCallbacks virLXCDriverPrivateDataCallbacks;
 extern virDomainDefParserConfig virLXCDriverDomainDefParserConfig;

-void
-virLXCDomainObjEndJob(virLXCDriver *driver,
-                      virDomainObj *obj);
-

 char *
 virLXCDomainGetMachineName(virDomainDef *def, pid_t pid);
diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c
index 7aa13673a6..19861aed95 100644
--- a/src/lxc/lxc_driver.c
+++ b/src/lxc/lxc_driver.c
@@ -709,7 +709,7 @@ static int lxcDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -793,7 +793,7 @@ lxcDomainSetMemoryParameters(virDomainPtr dom,
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1005,7 +1005,7 @@ static int lxcDomainCreateWithFiles(virDomainPtr dom,
     }

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1114,7 +1114,7 @@ lxcDomainCreateXMLWithFiles(virConnectPtr conn,
     if (virLXCProcessStart(driver, vm, nfiles, files, autoDestroyConn,
                            VIR_DOMAIN_RUNNING_BOOTED) < 0) {
         virDomainAuditStart(vm, "booted", false);
-        virLXCDomainObjEndJob(driver, vm);
+        virDomainObjEndJob(vm);
         if (!vm->persistent)
             virDomainObjListRemove(driver->domains, vm);
         goto cleanup;
@@ -1127,7 +1127,7 @@ lxcDomainCreateXMLWithFiles(virConnectPtr conn,

     dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);

-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1365,7 +1365,7 @@ lxcDomainDestroyFlags(virDomainPtr dom,
     virDomainAuditStop(vm, "destroyed");

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
     if (!vm->persistent)
         virDomainObjListRemove(driver->domains, vm);

@@ -1896,7 +1896,7 @@ lxcDomainSetSchedulerParametersFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2021,7 +2021,6 @@ lxcDomainBlockStats(virDomainPtr dom,
                     const char *path,
                     virDomainBlockStatsPtr stats)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     int ret = -1;
     virDomainObj *vm;
     virDomainDiskDef *disk = NULL;
@@ -2077,7 +2076,7 @@ lxcDomainBlockStats(virDomainPtr dom,
                                   &stats->wr_req);

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2092,7 +2091,6 @@ lxcDomainBlockStatsFlags(virDomainPtr dom,
                          int * nparams,
                          unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     int tmp, ret = -1;
     virDomainObj *vm;
     virDomainDiskDef *disk = NULL;
@@ -2205,7 +2203,7 @@ lxcDomainBlockStatsFlags(virDomainPtr dom,
     *nparams = tmp;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2285,7 +2283,7 @@ lxcDomainSetBlkioParameters(virDomainPtr dom,
     }

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2387,7 +2385,6 @@ lxcDomainInterfaceStats(virDomainPtr dom,
 {
     virDomainObj *vm;
     int ret = -1;
-    virLXCDriver *driver = dom->conn->privateData;
     virDomainNetDef *net = NULL;

@@ -2412,7 +2409,7 @@ lxcDomainInterfaceStats(virDomainPtr dom,
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2508,7 +2505,7 @@ static int lxcDomainSetAutostart(virDomainPtr dom,
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2631,7 +2628,7 @@ static int lxcDomainSuspend(virDomainPtr dom)
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virObjectEventStateQueue(driver->domainEventState, event);
@@ -2687,7 +2684,7 @@ static int lxcDomainResume(virDomainPtr dom)
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virObjectEventStateQueue(driver->domainEventState, event);
@@ -2763,7 +2760,6 @@ lxcDomainSendProcessSignal(virDomainPtr dom,
                            unsigned int signum,
                            unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virDomainObj *vm = NULL;
     virLXCDomainObjPrivate *priv;
     pid_t victim;
@@ -2825,7 +2821,7 @@ lxcDomainSendProcessSignal(virDomainPtr dom,
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2854,7 +2850,6 @@ static int
 lxcDomainShutdownFlags(virDomainPtr dom,
                        unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virLXCDomainObjPrivate *priv;
     virDomainObj *vm;
     int ret = -1;
@@ -2912,7 +2907,7 @@ lxcDomainShutdownFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2930,7 +2925,6 @@ static int
 lxcDomainReboot(virDomainPtr dom,
                 unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virLXCDomainObjPrivate *priv;
     virDomainObj *vm;
     int ret = -1;
@@ -2988,7 +2982,7 @@ lxcDomainReboot(virDomainPtr dom,
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4344,7 +4338,7 @@ static int lxcDomainAttachDeviceFlags(virDomainPtr dom,
     }

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     if (dev != dev_copy)
@@ -4415,7 +4409,7 @@ static int lxcDomainUpdateDeviceFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainDeviceDefFree(dev);
@@ -4506,7 +4500,7 @@ static int lxcDomainDetachDeviceFlags(virDomainPtr dom,
     }

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     if (dev != dev_copy)
@@ -4529,7 +4523,6 @@ static int lxcDomainLxcOpenNamespace(virDomainPtr dom,
                                      int **fdlist,
                                      unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virDomainObj *vm;
     virLXCDomainObjPrivate *priv;
     int ret = -1;
@@ -4564,7 +4557,7 @@ static int lxcDomainLxcOpenNamespace(virDomainPtr dom,
     ret = nfds;

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4617,7 +4610,6 @@ lxcDomainMemoryStats(virDomainPtr dom,
     virLXCDomainObjPrivate *priv;
     unsigned long long swap_usage;
     unsigned long mem_usage;
-    virLXCDriver *driver = dom->conn->privateData;

     virCheckFlags(0, -1);

@@ -4659,7 +4651,7 @@ lxcDomainMemoryStats(virDomainPtr dom,
     }

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4812,7 +4804,7 @@ lxcDomainSetMetadata(virDomainPtr dom,
         virObjectEventStateQueue(driver->domainEventState, ev);
     }

-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4890,7 +4882,6 @@ static char *
 lxcDomainGetHostname(virDomainPtr dom,
                      unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virDomainObj *vm = NULL;
     char macaddr[VIR_MAC_STRING_BUFLEN];
     g_autoptr(virConnect) conn = NULL;
@@ -4954,7 +4945,7 @@ lxcDomainGetHostname(virDomainPtr dom,
     }

 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
--
2.37.1
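Every converted entry point above follows the same endjob:/cleanup:
goto ladder. A minimal, self-contained sketch of that layout, with
hypothetical Demo* stand-ins rather than libvirt's real helpers:

/* Illustrative only: the endjob:/cleanup: error-handling shape.
 * Every path taken after a successful begin passes through endjob:,
 * and every path at all passes through cleanup:, so the job and the
 * object reference are each released exactly once. */
typedef struct {
    int jobActive;
    int refs;
} DemoVM;

static int demoBeginJob(DemoVM *vm)
{
    if (vm->jobActive)
        return -1;
    vm->jobActive = 1;
    return 0;
}

static void demoEndJob(DemoVM *vm) { vm->jobActive = 0; }
static int demoDoWork(DemoVM *vm) { (void)vm; return 0; }
static void demoReleaseRef(DemoVM *vm) { if (vm->refs > 0) vm->refs--; }

static int
demoDriverEntryPoint(DemoVM *vm)
{
    int ret = -1;

    if (demoBeginJob(vm) < 0)
        goto cleanup;           /* no job acquired: skip endjob */

    if (demoDoWork(vm) < 0)
        goto endjob;            /* job held: must end it */

    ret = 0;

 endjob:
    demoEndJob(vm);

 cleanup:
    demoReleaseRef(vm);
    return ret;
}

The design payoff of the generalized virDomainObjEndJob() is visible
in the diffs: once the driver argument is gone, the end call depends
only on the domain object, so the same two labels work identically in
the qemu, libxl, LXC, and CH drivers.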
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 14/17] CH: use virDomainObjEndJob()
Date: Wed, 24 Aug 2022 15:43:37 +0200

This patch removes virCHDomainObjEndJob() and replaces it with a call
to the generalized virDomainObjEndJob().
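For illustration, every converted CH entry point now follows roughly the
sketch below (chDomainExample is a made-up function name and the error
paths are trimmed; the real callers in ch_driver.c have the same shape):

    #include "domain_conf.h"     /* virDomainObj */
    #include "virdomainjob.h"    /* virDomainObjBeginJob / virDomainObjEndJob */

    /* Hypothetical CH driver entry point. */
    static int
    chDomainExample(virDomainObj *vm)
    {
        int ret = -1;

        if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
            return -1;

        /* ... perform the API work on @vm ... */
        ret = 0;

        /* The generalized helper resets vm->job and wakes up waiters,
         * so the CH driver pointer is no longer needed here. */
        virDomainObjEndJob(vm);
        return ret;
    }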
Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/ch/ch_domain.c | 18 ------------------
 src/ch/ch_domain.h |  3 ---
 src/ch/ch_driver.c | 20 ++++++++++----------
 3 files changed, 10 insertions(+), 31 deletions(-)

diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
index c592c6ffbb..dc666243a4 100644
--- a/src/ch/ch_domain.c
+++ b/src/ch/ch_domain.c
@@ -32,24 +32,6 @@

 VIR_LOG_INIT("ch.ch_domain");

-/*
- * obj must be locked and have a reference before calling
- *
- * To be called after completing the work associated with the
- * earlier virDomainObjBeginJob() call
- */
-void
-virCHDomainObjEndJob(virDomainObj *obj)
-{
-    virDomainJob job = obj->job->active;
-
-    VIR_DEBUG("Stopping job: %s",
-              virDomainJobTypeToString(job));
-
-    virDomainObjResetJob(obj->job);
-    virCondSignal(&obj->job->cond);
-}
-
 void
 virCHDomainRemoveInactive(virCHDriver *driver,
                           virDomainObj *vm)
diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h
index 076043f772..88e27d50b1 100644
--- a/src/ch/ch_domain.h
+++ b/src/ch/ch_domain.h
@@ -60,9 +60,6 @@ struct _virCHDomainVcpuPrivate {
 extern virDomainXMLPrivateDataCallbacks virCHDriverPrivateDataCallbacks;
 extern virDomainDefParserConfig virCHDriverDomainDefParserConfig;

-void
-virCHDomainObjEndJob(virDomainObj *obj);
-
 void
 virCHDomainRemoveInactive(virCHDriver *driver,
                           virDomainObj *vm);
diff --git a/src/ch/ch_driver.c b/src/ch/ch_driver.c
index b089a7c9c7..43d6396af7 100644
--- a/src/ch/ch_driver.c
+++ b/src/ch/ch_driver.c
@@ -226,7 +226,7 @@ chDomainCreateXML(virConnectPtr conn,
     dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);

 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     if (vm && !dom) {
@@ -256,7 +256,7 @@ chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)

     ret = virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED);

-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -414,7 +414,7 @@ chDomainShutdownFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -473,7 +473,7 @@ chDomainReboot(virDomainPtr dom, unsigned int flags)
     ret = 0;

 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -518,7 +518,7 @@ chDomainSuspend(virDomainPtr dom)
     ret = 0;

 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -563,7 +563,7 @@ chDomainResume(virDomainPtr dom)
     ret = 0;

 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -607,7 +607,7 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
     ret = 0;

 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1253,7 +1253,7 @@ chDomainPinVcpuFlags(virDomainPtr dom,
     ret = 0;

 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1417,7 +1417,7 @@ chDomainPinEmulator(virDomainPtr dom,
     ret = 0;

 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1679,7 +1679,7 @@ chDomainSetNumaParameters(virDomainPtr dom,
     ret = 0;

 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
-- 
2.37.1
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 15/17] qemu & conf: move BeginAgentJob & EndAgentJob into src/conf/virdomainjob
Date: Wed, 24 Aug 2022 15:43:38 +0200
Message-Id: <9c6bac0f56631160bda5574a988847db1596e657.1661348244.git.khanicov@redhat.com>

Although these functions, and those moved in the following two patches,
are for now used only by the qemu driver, it makes sense to keep all of
the begin-job functions in the same file.
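As a rough sketch of how a caller pairs the moved helpers (mirroring the
design pattern already documented in qemu-threads.rst; the function name
and the elided agent calls are placeholders):

    #include "virdomainjob.h"

    static int
    qemuDomainAgentExample(virDomainObj *vm)
    {
        int ret = -1;

        /* Agent jobs interlock only with other agent jobs; holding an
         * agent job and a normal job at once is deliberately impossible. */
        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
            return -1;

        /* ... enter the agent, talk to it, exit the agent ... */
        ret = 0;

        /* Resets job.agentActive and wakes all waiters on job.cond. */
        virDomainObjEndAgentJob(vm);
        return ret;
    }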
Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 docs/kbase/internals/qemu-threads.rst | 10 ++--
 src/conf/virdomainjob.c               | 34 +++++++++++
 src/conf/virdomainjob.h               |  4 ++
 src/libvirt_private.syms              |  2 +
 src/qemu/qemu_domain.c                |  2 +-
 src/qemu/qemu_domainjob.c             | 34 ----------
 src/qemu/qemu_domainjob.h             |  4 --
 src/qemu/qemu_driver.c                | 82 +++++++++++++--------------
 src/qemu/qemu_snapshot.c              | 10 ++--
 9 files changed, 92 insertions(+), 90 deletions(-)

diff --git a/docs/kbase/internals/qemu-threads.rst b/docs/kbase/internals/qemu-threads.rst
index 22d12f61a5..afdf9e61cc 100644
--- a/docs/kbase/internals/qemu-threads.rst
+++ b/docs/kbase/internals/qemu-threads.rst
@@ -62,7 +62,7 @@ There are a number of locks on various objects

    Agent job condition is then used when thread wishes to talk to qemu
    agent monitor. It is possible to acquire just agent job
-   (``qemuDomainObjBeginAgentJob``), or only normal job (``virDomainObjBeginJob``)
+   (``virDomainObjBeginAgentJob``), or only normal job (``virDomainObjBeginJob``)
    but not both at the same time. Holding an agent job and a normal job would
    allow an unresponsive or malicious agent to block normal libvirt API and
    potentially result in a denial of service. Which type of job to grab
@@ -130,11 +130,11 @@ To acquire the normal job condition

 To acquire the agent job condition

-  ``qemuDomainObjBeginAgentJob()``
+  ``virDomainObjBeginAgentJob()``
     - Waits until there is no other agent job set
     - Sets ``job.agentActive`` to the job type

-  ``qemuDomainObjEndAgentJob()``
+  ``virDomainObjEndAgentJob()``
     - Sets ``job.agentActive`` to 0
     - Signals on ``job.cond`` condition

@@ -253,7 +253,7 @@ Design patterns

      obj = qemuDomObjFromDomain(dom);

-     qemuDomainObjBeginAgentJob(obj, VIR_AGENT_JOB_TYPE);
+     virDomainObjBeginAgentJob(obj, VIR_AGENT_JOB_TYPE);

      ...do prep work...

@@ -266,7 +266,7 @@ Design patterns

      ...do final work...

-     qemuDomainObjEndAgentJob(obj);
+     virDomainObjEndAgentJob(obj);
      virDomainObjEndAPI(&obj);

diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index 0f239c680c..f91a606d84 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -527,6 +527,22 @@ int virDomainObjBeginJob(virDomainObj *obj,
     return 0;
 }

+/**
+ * virDomainObjBeginAgentJob:
+ *
+ * Grabs agent type of job. Use if caller talks to guest agent only.
+ *
+ * To end job call virDomainObjEndAgentJob.
+ */
+int
+virDomainObjBeginAgentJob(virDomainObj *obj,
+                          virDomainAgentJob agentJob)
+{
+    return virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_NONE,
+                                        agentJob,
+                                        VIR_ASYNC_JOB_NONE, false);
+}
+
 /*
  * obj must be locked and have a reference before calling
  *
@@ -554,3 +570,21 @@ virDomainObjEndJob(virDomainObj *obj)
      * grabbing a job requires checking more variables. */
     virCondBroadcast(&obj->job->cond);
 }
+
+void
+virDomainObjEndAgentJob(virDomainObj *obj)
+{
+    virDomainAgentJob agentJob = obj->job->agentActive;
+
+    obj->job->jobsQueued--;
+
+    VIR_DEBUG("Stopping agent job: %s (async=%s vm=%p name=%s)",
+              virDomainAgentJobTypeToString(agentJob),
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
+              obj, obj->def->name);
+
+    virDomainObjResetAgentJob(obj->job);
+    /* We indeed need to wake up ALL threads waiting because
+     * grabbing a job requires checking more variables. */
+    virCondBroadcast(&obj->job->cond);
+}
diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h
index 7a06c384f3..6cec322af1 100644
--- a/src/conf/virdomainjob.h
+++ b/src/conf/virdomainjob.h
@@ -249,5 +249,9 @@ int virDomainObjBeginJobInternal(virDomainObj *obj,
 int virDomainObjBeginJob(virDomainObj *obj,
                          virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;
+int virDomainObjBeginAgentJob(virDomainObj *obj,
+                              virDomainAgentJob agentJob)
+    G_GNUC_WARN_UNUSED_RESULT;

 void virDomainObjEndJob(virDomainObj *obj);
+void virDomainObjEndAgentJob(virDomainObj *obj);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 77da4a1d01..05b27d6725 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1187,10 +1187,12 @@ virDomainJobStatusToType;
 virDomainJobTypeFromString;
 virDomainJobTypeToString;
 virDomainNestedJobAllowed;
+virDomainObjBeginAgentJob;
 virDomainObjBeginJob;
 virDomainObjBeginJobInternal;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
+virDomainObjEndAgentJob;
 virDomainObjEndJob;
 virDomainObjInitJob;
 virDomainObjPreserveJob;
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 3514f104f1..197be6a4d1 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -6057,7 +6057,7 @@ qemuDomainObjEnterMonitorAsync(virDomainObj *obj,
  * obj must be locked before calling
  *
  * To be called immediately before any QEMU agent API call.
- * Must have already called qemuDomainObjBeginAgentJob() and
+ * Must have already called virDomainObjBeginAgentJob() and
  * checked that the VM is still active.
  *
  * To be followed with qemuDomainObjExitAgent() once complete
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index a8baf096e5..0775e04add 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -655,22 +655,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
     obj->job->asyncOwner = 0;
 }

-/**
- * qemuDomainObjBeginAgentJob:
- *
- * Grabs agent type of job. Use if caller talks to guest agent only.
- *
- * To end job call qemuDomainObjEndAgentJob.
- */
-int
-qemuDomainObjBeginAgentJob(virDomainObj *obj,
-                           virDomainAgentJob agentJob)
-{
-    return virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_NONE,
-                                        agentJob,
-                                        VIR_ASYNC_JOB_NONE, false);
-}
-
 int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
                                virDomainAsyncJob asyncJob,
                                virDomainJobOperation operation,
@@ -730,24 +714,6 @@ qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                         VIR_ASYNC_JOB_NONE, true);
 }

-void
-qemuDomainObjEndAgentJob(virDomainObj *obj)
-{
-    virDomainAgentJob agentJob = obj->job->agentActive;
-
-    obj->job->jobsQueued--;
-
-    VIR_DEBUG("Stopping agent job: %s (async=%s vm=%p name=%s)",
-              virDomainAgentJobTypeToString(agentJob),
-              virDomainAsyncJobTypeToString(obj->job->asyncJob),
-              obj, obj->def->name);
-
-    virDomainObjResetAgentJob(obj->job);
-    /* We indeed need to wake up ALL threads waiting because
-     * grabbing a job requires checking more variables. */
-    virCondBroadcast(&obj->job->cond);
-}
-
 void
 qemuDomainObjEndAsyncJob(virDomainObj *obj)
 {
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 918b74748b..0cc4dc44f3 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -69,9 +69,6 @@ int qemuDomainAsyncJobPhaseFromString(virDomainAsyncJob job,
 void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
                                      virDomainObj *vm);

-int qemuDomainObjBeginAgentJob(virDomainObj *obj,
-                               virDomainAgentJob agentJob)
-    G_GNUC_WARN_UNUSED_RESULT;
 int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
                                virDomainAsyncJob asyncJob,
                                virDomainJobOperation operation,
@@ -84,7 +81,6 @@ int qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                 virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;

-void qemuDomainObjEndAgentJob(virDomainObj *obj);
 void qemuDomainObjEndAsyncJob(virDomainObj *obj);
 void qemuDomainObjAbortAsyncJob(virDomainObj *obj);
 void qemuDomainObjSetJobPhase(virDomainObj *obj,
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 666e8dd228..443b4f679a 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1780,7 +1780,7 @@ qemuDomainShutdownFlagsAgent(virDomainObj *vm,
     int agentFlag = isReboot ? QEMU_AGENT_SHUTDOWN_REBOOT :
         QEMU_AGENT_SHUTDOWN_POWERDOWN;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         return -1;

     if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_RUNNING) {
@@ -1798,7 +1798,7 @@ qemuDomainShutdownFlagsAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }

@@ -1908,7 +1908,7 @@ qemuDomainRebootAgent(virDomainObj *vm,
     if (!isReboot)
         agentFlag = QEMU_AGENT_SHUTDOWN_POWERDOWN;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         return -1;

     if (!qemuDomainAgentAvailable(vm, agentForced))
@@ -1923,7 +1923,7 @@ qemuDomainRebootAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }

@@ -4366,7 +4366,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,


     if (useAgent) {
-        if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
             goto cleanup;
     } else {
         if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
@@ -4386,7 +4386,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,

 endjob:
     if (useAgent)
-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);
     else
         virDomainObjEndJob(vm);

@@ -4807,7 +4807,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
         goto cleanup;

     if (flags & VIR_DOMAIN_VCPU_GUEST) {
-        if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
             goto cleanup;

         if (!virDomainObjIsActive(vm)) {
@@ -4825,7 +4825,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
         qemuDomainObjExitAgent(vm, agent);

 endjob:
-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);

         if (ncpuinfo < 0)
             goto cleanup;
@@ -16601,7 +16601,7 @@ qemuDomainPMSuspendAgent(virDomainObj *vm,
     qemuAgent *agent;
     int ret = -1;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         return -1;

     if (virDomainObjCheckActive(vm) < 0)
@@ -16615,7 +16615,7 @@ qemuDomainPMSuspendAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }

@@ -16767,7 +16767,7 @@ qemuDomainQemuAgentCommand(virDomainPtr domain,
     if (virDomainQemuAgentCommandEnsureACL(domain->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -16785,7 +16785,7 @@ qemuDomainQemuAgentCommand(virDomainPtr domain,
         VIR_FREE(result);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -16861,7 +16861,7 @@ qemuDomainFSTrim(virDomainPtr dom,
     if (virDomainFSTrimEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;

     if (!qemuDomainAgentAvailable(vm, true))
@@ -16875,7 +16875,7 @@ qemuDomainFSTrim(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -17031,7 +17031,7 @@ qemuDomainGetHostnameAgent(virDomainObj *vm,
     qemuAgent *agent;
     int ret = -1;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         return -1;

     if (virDomainObjCheckActive(vm) < 0)
@@ -17046,7 +17046,7 @@ qemuDomainGetHostnameAgent(virDomainObj *vm,

     ret = 0;
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }

@@ -17172,7 +17172,7 @@ qemuDomainGetTime(virDomainPtr dom,
     if (virDomainGetTimeEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -17191,7 +17191,7 @@ qemuDomainGetTime(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -17208,7 +17208,7 @@ qemuDomainSetTimeAgent(virDomainObj *vm,
     qemuAgent *agent;
     int ret = -1;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         return -1;

     if (virDomainObjCheckActive(vm) < 0)
@@ -17222,7 +17222,7 @@ qemuDomainSetTimeAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }

@@ -17308,7 +17308,7 @@ qemuDomainFSFreeze(virDomainPtr dom,
     if (virDomainFSFreezeEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -17317,7 +17317,7 @@ qemuDomainFSFreeze(virDomainPtr dom,
     ret = qemuSnapshotFSFreeze(vm, mountpoints, nmountpoints);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -17348,7 +17348,7 @@ qemuDomainFSThaw(virDomainPtr dom,
     if (virDomainFSThawEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -17357,7 +17357,7 @@ qemuDomainFSThaw(virDomainPtr dom,
     ret = qemuSnapshotFSThaw(vm, true);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -18880,7 +18880,7 @@ qemuDomainGetFSInfoAgent(virDomainObj *vm,
     int ret = -1;
     qemuAgent *agent;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         return ret;

     if (virDomainObjCheckActive(vm) < 0)
@@ -18894,7 +18894,7 @@ qemuDomainGetFSInfoAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }

@@ -19039,7 +19039,7 @@ qemuDomainInterfaceAddresses(virDomainPtr dom,
         break;

     case VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_AGENT:
-        if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
             goto cleanup;

         if (!qemuDomainAgentAvailable(vm, true))
@@ -19050,7 +19050,7 @@ qemuDomainInterfaceAddresses(virDomainPtr dom,
         qemuDomainObjExitAgent(vm, agent);

 endjob:
-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);

         break;

@@ -19090,7 +19090,7 @@ qemuDomainSetUserPassword(virDomainPtr dom,
     if (virDomainSetUserPasswordEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -19110,7 +19110,7 @@ qemuDomainSetUserPassword(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);

 cleanup:
     virDomainObjEndAPI(&vm);
@@ -19393,7 +19393,7 @@ qemuDomainGetGuestVcpus(virDomainPtr dom,
     if (virDomainGetGuestVcpusEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;

     if (!qemuDomainAgentAvailable(vm, true))
@@ -19412,7 +19412,7 @@ qemuDomainGetGuestVcpus(virDomainPtr dom,
     ret = 0;

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);

 cleanup:
     VIR_FREE(info);
@@ -19451,7 +19451,7 @@ qemuDomainSetGuestVcpus(virDomainPtr dom,
     if (virDomainSetGuestVcpusEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;

     if (!qemuDomainAgentAvailable(vm, true))
@@ -19497,7 +19497,7 @@ qemuDomainSetGuestVcpus(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);

 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);

 cleanup:
     VIR_FREE(info);
@@ -20412,7 +20412,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom,
     if (virDomainGetGuestInfoEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;

     if (!qemuDomainAgentAvailable(vm, true))
@@ -20470,7 +20470,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom,
     }

     qemuDomainObjExitAgent(vm, agent);
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);

     if (nfs > 0 || ndisks > 0) {
         if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
@@ -20517,7 +20517,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);

 endagentjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     goto cleanup;
 }

@@ -20588,7 +20588,7 @@ qemuDomainAuthorizedSSHKeysGet(virDomainPtr dom,
     if (virDomainAuthorizedSshKeysGetEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;

     if (!qemuDomainAgentAvailable(vm, true))
@@ -20599,7 +20599,7 @@ qemuDomainAuthorizedSSHKeysGet(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);

 endagentjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 cleanup:
     virDomainObjEndAPI(&vm);
     return rv;
@@ -20628,7 +20628,7 @@ qemuDomainAuthorizedSSHKeysSet(virDomainPtr dom,
     if (virDomainAuthorizedSshKeysSetEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;

     if (!qemuDomainAgentAvailable(vm, true))
@@ -20642,7 +20642,7 @@ qemuDomainAuthorizedSSHKeysSet(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);

 endagentjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 cleanup:
     virDomainObjEndAPI(&vm);
     return rv;
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index afed0f0e28..c5e5e3ed5b 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1277,16 +1277,16 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
     if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE) {
         int frozen;

-        if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
             goto cleanup;

         if (virDomainObjCheckActive(vm) < 0) {
-            qemuDomainObjEndAgentJob(vm);
+            virDomainObjEndAgentJob(vm);
             goto cleanup;
         }

         frozen = qemuSnapshotFSFreeze(vm, NULL, 0);
-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);

         if (frozen < 0)
             goto cleanup;
@@ -1422,13 +1422,13 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
     }

     if (thaw &&
-        qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) >= 0 &&
+        virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) >= 0 &&
         virDomainObjIsActive(vm)) {
         /* report error only on an otherwise successful snapshot */
         if (qemuSnapshotFSThaw(vm, ret == 0) < 0)
             ret = -1;

-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);
     }

     virQEMUSaveDataFree(data);
-- 
2.37.1
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 16/17] qemu & conf: move BeginAsyncJob & EndAsyncJob into src/conf
Date: Wed, 24 Aug 2022 15:43:39 +0200
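A minimal sketch of how a caller drives the moved async-job helpers (the
surrounding function and flag plumbing are hypothetical; compare
qemuDomainSaveInternal() in the diff below):

    #include "virdomainjob.h"

    static int
    exampleAsyncDump(virDomainObj *vm, unsigned long apiFlags)
    {
        /* Grabs VIR_JOB_ASYNC and records the operation and API flags
         * on vm->job, as virDomainObjBeginAsyncJob() now does in
         * src/conf. */
        if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
                                      VIR_DOMAIN_JOB_OPERATION_DUMP,
                                      apiFlags) < 0)
            return -1;

        /* ... long-running work, possibly several monitor calls ... */

        /* Resets the async job, saves the domain status through the
         * job callbacks' saveStatusPrivate hook if one is registered,
         * and broadcasts on job.asyncCond. */
        virDomainObjEndAsyncJob(vm);
        return 0;
    }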
Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 docs/kbase/internals/qemu-threads.rst | 12 +++++------
 src/conf/virdomainjob.c               | 30 +++++++++++++++++++++++++++
 src/conf/virdomainjob.h               |  6 ++++++
 src/libvirt_private.syms              |  2 ++
 src/qemu/qemu_backup.c                |  6 +++---
 src/qemu/qemu_domain.c                |  2 +-
 src/qemu/qemu_domainjob.c             | 29 --------------------------
 src/qemu/qemu_domainjob.h             |  6 ------
 src/qemu/qemu_driver.c                | 18 ++++++++--------
 src/qemu/qemu_migration.c             |  4 ++--
 src/qemu/qemu_process.c               |  4 ++--
 src/qemu/qemu_snapshot.c              |  4 ++--
 12 files changed, 63 insertions(+), 60 deletions(-)

diff --git a/docs/kbase/internals/qemu-threads.rst b/docs/kbase/internals/qemu-threads.rst
index afdf9e61cc..95681d1b9d 100644
--- a/docs/kbase/internals/qemu-threads.rst
+++ b/docs/kbase/internals/qemu-threads.rst
@@ -141,7 +141,7 @@ To acquire the agent job condition

 To acquire the asynchronous job condition

-  ``qemuDomainObjBeginAsyncJob()``
+  ``virDomainObjBeginAsyncJob()``
     - Waits until no async job is running
     - Waits for ``job.cond`` condition ``job.active != 0`` using ``virDomainObj``
      mutex and repeats waiting in that case
     - Sets ``job.asyncJob`` to the asynchronous job type

-  ``qemuDomainObjEndAsyncJob()``
+  ``virDomainObjEndAsyncJob()``
     - Sets ``job.asyncJob`` to 0
     - Broadcasts on ``job.asyncCond`` condition

@@ -277,7 +277,7 @@ Design patterns

      obj = qemuDomObjFromDomain(dom);

-     qemuDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
+     virDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
      qemuDomainObjSetAsyncJobMask(obj, allowedJobs);

      ...do prep work...

@@ -306,7 +306,7 @@ Design patterns

      ...do final work...

-     qemuDomainObjEndAsyncJob(obj);
+     virDomainObjEndAsyncJob(obj);
      virDomainObjEndAPI(&obj);


@@ -317,7 +317,7 @@ Design patterns

      obj = qemuDomObjFromDomain(dom);

-     qemuDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
+     virDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);

      ...do prep work...

@@ -334,5 +334,5 @@ Design patterns

      ...do final work...

-     qemuDomainObjEndAsyncJob(obj);
+     virDomainObjEndAsyncJob(obj);
      virDomainObjEndAPI(&obj);
diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index f91a606d84..763fbfba00 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -543,6 +543,21 @@ virDomainObjBeginAgentJob(virDomainObj *obj,
                                         VIR_ASYNC_JOB_NONE, false);
 }

+int virDomainObjBeginAsyncJob(virDomainObj *obj,
+                              virDomainAsyncJob asyncJob,
+                              virDomainJobOperation operation,
+                              unsigned long apiFlags)
+{
+    if (virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_ASYNC,
+                                     VIR_AGENT_JOB_NONE,
+                                     asyncJob, false) < 0)
+        return -1;
+
+    obj->job->current->operation = operation;
+    obj->job->apiFlags = apiFlags;
+    return 0;
+}
+
 /*
  * obj must be locked and have a reference before calling
  *
@@ -588,3 +603,18 @@ virDomainObjEndAgentJob(virDomainObj *obj)
      * grabbing a job requires checking more variables. */
     virCondBroadcast(&obj->job->cond);
 }
+
+void
+virDomainObjEndAsyncJob(virDomainObj *obj)
+{
+    obj->job->jobsQueued--;
+
+    VIR_DEBUG("Stopping async job: %s (vm=%p name=%s)",
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
+              obj, obj->def->name);
+
+    virDomainObjResetAsyncJob(obj->job);
+    if (obj->job->cb->saveStatusPrivate)
+        obj->job->cb->saveStatusPrivate(obj);
+    virCondBroadcast(&obj->job->asyncCond);
+}
diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h
index 6cec322af1..3cd02ef4ae 100644
--- a/src/conf/virdomainjob.h
+++ b/src/conf/virdomainjob.h
@@ -252,6 +252,12 @@ int virDomainObjBeginJob(virDomainObj *obj,
 int virDomainObjBeginAgentJob(virDomainObj *obj,
                               virDomainAgentJob agentJob)
     G_GNUC_WARN_UNUSED_RESULT;
+int virDomainObjBeginAsyncJob(virDomainObj *obj,
+                              virDomainAsyncJob asyncJob,
+                              virDomainJobOperation operation,
+                              unsigned long apiFlags)
+    G_GNUC_WARN_UNUSED_RESULT;

 void virDomainObjEndJob(virDomainObj *obj);
 void virDomainObjEndAgentJob(virDomainObj *obj);
+void virDomainObjEndAsyncJob(virDomainObj *obj);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 05b27d6725..8befe7ebe0 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1188,11 +1188,13 @@ virDomainJobTypeFromString;
 virDomainJobTypeToString;
 virDomainNestedJobAllowed;
 virDomainObjBeginAgentJob;
+virDomainObjBeginAsyncJob;
 virDomainObjBeginJob;
 virDomainObjBeginJobInternal;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
 virDomainObjEndAgentJob;
+virDomainObjEndAsyncJob;
 virDomainObjEndJob;
 virDomainObjInitJob;
 virDomainObjPreserveJob;
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index 2da520dbc7..c7721812a5 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -618,7 +618,7 @@ qemuBackupJobTerminate(virDomainObj *vm,
     g_clear_pointer(&priv->backup, virDomainBackupDefFree);

     if (vm->job->asyncJob == VIR_ASYNC_JOB_BACKUP)
-        qemuDomainObjEndAsyncJob(vm);
+        virDomainObjEndAsyncJob(vm);
 }


@@ -786,7 +786,7 @@ qemuBackupBegin(virDomainObj *vm,
      * infrastructure for async jobs. We'll allow standard modify-type jobs
      * as the interlocking of conflicting operations is handled on the block
      * job level */
-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_BACKUP,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_BACKUP,
                                    VIR_DOMAIN_JOB_OPERATION_BACKUP, flags) < 0)
         return -1;

@@ -937,7 +937,7 @@ qemuBackupBegin(virDomainObj *vm,
     if (ret == 0)
         qemuDomainObjReleaseAsyncJob(vm);
     else
-        qemuDomainObjEndAsyncJob(vm);
+        virDomainObjEndAsyncJob(vm);

     return ret;
 }
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 197be6a4d1..c494d75790 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -6037,7 +6037,7 @@ void qemuDomainObjEnterMonitor(virDomainObj *obj)
  * To be called immediately before any QEMU monitor API call.
  * Must have already either called virDomainObjBeginJob()
  * and checked that the VM is still active, with asyncJob of
- * VIR_ASYNC_JOB_NONE; or already called qemuDomainObjBeginAsyncJob,
+ * VIR_ASYNC_JOB_NONE; or already called virDomainObjBeginAsyncJob,
  * with the same asyncJob.
  *
  * Returns 0 if job was started, in which case this must be followed with
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 0775e04add..99dcdb49b9 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -655,21 +655,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
     obj->job->asyncOwner = 0;
 }

-int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
-                               virDomainAsyncJob asyncJob,
-                               virDomainJobOperation operation,
-                               unsigned long apiFlags)
-{
-    if (virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_ASYNC,
-                                     VIR_AGENT_JOB_NONE,
-                                     asyncJob, false) < 0)
-        return -1;
-
-    obj->job->current->operation = operation;
-    obj->job->apiFlags = apiFlags;
-    return 0;
-}
-
 int
 qemuDomainObjBeginNestedJob(virDomainObj *obj,
                             virDomainAsyncJob asyncJob)
@@ -714,20 +699,6 @@ qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                         VIR_ASYNC_JOB_NONE, true);
 }

-void
-qemuDomainObjEndAsyncJob(virDomainObj *obj)
-{
-    obj->job->jobsQueued--;
-
-    VIR_DEBUG("Stopping async job: %s (vm=%p name=%s)",
-              virDomainAsyncJobTypeToString(obj->job->asyncJob),
-              obj, obj->def->name);
-
-    virDomainObjResetAsyncJob(obj->job);
-    qemuDomainSaveStatus(obj);
-    virCondBroadcast(&obj->job->asyncCond);
-}
-
 void
 qemuDomainObjAbortAsyncJob(virDomainObj *obj)
 {
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 0cc4dc44f3..1cf9fcc113 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -69,11 +69,6 @@ int qemuDomainAsyncJobPhaseFromString(virDomainAsyncJob job,
 void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
                                      virDomainObj *vm);

-int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
-                               virDomainAsyncJob asyncJob,
-                               virDomainJobOperation operation,
-                               unsigned long apiFlags)
-    G_GNUC_WARN_UNUSED_RESULT;
 int qemuDomainObjBeginNestedJob(virDomainObj *obj,
                                 virDomainAsyncJob asyncJob)
     G_GNUC_WARN_UNUSED_RESULT;
@@ -84,7 +76,6 @@ int qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                 virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;

-void qemuDomainObjEndAsyncJob(virDomainObj *obj);
 void qemuDomainObjAbortAsyncJob(virDomainObj *obj);
 void qemuDomainObjSetJobPhase(virDomainObj *obj,
                               int phase);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 443b4f679a..e61a4e329b 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -2635,8 +2635,8 @@ qemuDomainSaveInternal(virQEMUDriver *driver,
     virQEMUSaveData *data = NULL;
     g_autoptr(qemuDomainSaveCookie) cookie = NULL;

-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_SAVE,
-                                   VIR_DOMAIN_JOB_OPERATION_SAVE, flags) < 0)
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_SAVE,
+                                  VIR_DOMAIN_JOB_OPERATION_SAVE, flags) < 0)
         goto cleanup;

     if (!qemuMigrationSrcIsAllowed(driver, vm, false, VIR_ASYNC_JOB_SAVE, 0))
@@ -2733,7 +2733,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver,
             virErrorRestore(&save_err);
         }
     }
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
     if (ret == 0)
         qemuDomainRemoveInactive(driver, vm);

@@ -3207,7 +3207,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
     if (virDomainCoreDumpWithFormatEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
                                    VIR_DOMAIN_JOB_OPERATION_DUMP,
                                    flags) < 0)
         goto cleanup;
@@ -3273,7 +3273,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
         }
     }

-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
     if (ret == 0 && flags & VIR_DUMP_CRASH)
         qemuDomainRemoveInactive(driver, vm);

@@ -3445,7 +3445,7 @@ processWatchdogEvent(virQEMUDriver *driver,

     switch (action) {
     case VIR_DOMAIN_WATCHDOG_ACTION_DUMP:
-        if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
+        if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
                                        VIR_DOMAIN_JOB_OPERATION_DUMP,
                                        flags) < 0) {
             return;
@@ -3473,7 +3473,7 @@ processWatchdogEvent(virQEMUDriver *driver,
     }

 endjob:
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
 }

 static int
@@ -3521,7 +3521,7 @@ processGuestPanicEvent(virQEMUDriver *driver,
     bool removeInactive = false;
     unsigned long flags = VIR_DUMP_MEMORY_ONLY;

-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
                                   VIR_DOMAIN_JOB_OPERATION_DUMP, flags) < 0)
         return;

@@ -3585,7 +3585,7 @@ processGuestPanicEvent(virQEMUDriver *driver,
     }

 endjob:
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
     if (removeInactive)
         qemuDomainRemoveInactive(driver, vm);
 }
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 38c2f6fdfd..d00de9cc32 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -120,7 +120,7 @@ qemuMigrationJobStart(virDomainObj *vm,
     }
     mask |= JOB_MASK(VIR_JOB_MODIFY_MIGRATION_SAFE);

-    if (qemuDomainObjBeginAsyncJob(vm, job, op, apiFlags) < 0)
+    if (virDomainObjBeginAsyncJob(vm, job, op, apiFlags) < 0)
         return -1;

     qemuDomainJobSetStatsType(vm->job->current,
@@ -203,7 +203,7 @@ qemuMigrationJobIsActive(virDomainObj *vm,
 static void ATTRIBUTE_NONNULL(1)
 qemuMigrationJobFinish(virDomainObj *vm)
 {
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
 }


diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 16651341fc..68a9efc7df 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4678,7 +4678,7 @@ qemuProcessBeginJob(virDomainObj *vm,
                     virDomainJobOperation operation,
                     unsigned long apiFlags)
 {
-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_START,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_START,
                                    operation, apiFlags) < 0)
         return -1;

@@ -4690,7 +4690,7 @@ qemuProcessBeginJob(virDomainObj *vm,
 void
 qemuProcessEndJob(virDomainObj *vm)
 {
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
 }


diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index c5e5e3ed5b..d2835ab1a8 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1794,7 +1794,7 @@ qemuSnapshotCreateXML(virDomainPtr domain,
      * a regular job, so we need to set the job mask to disallow query as
      * 'savevm' blocks the monitor. External snapshot will then modify the
      * job mask appropriately. */
-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_SNAPSHOT,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_SNAPSHOT,
                                    VIR_DOMAIN_JOB_OPERATION_SNAPSHOT, flags) < 0)
         return NULL;

@@ -1806,7 +1806,7 @@ qemuSnapshotCreateXML(virDomainPtr domain,
         snapshot = qemuSnapshotCreate(vm, domain, def, driver, cfg, flags);
     }

-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);

     return snapshot;
 }
-- 
2.37.1
From nobody Fri May 17 05:26:15 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 17/17] qemu & conf: move BeginNestedJob & BeginJobNowait into src/conf
Date: Wed, 24 Aug 2022 15:43:40 +0200
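Roughly how the nowait variant being moved here is used; the sketch
condenses the NOWAIT branch of qemuConnectGetAllDomainStats() from the
diff below (the surrounding function name is hypothetical):

    #include "virdomainjob.h"

    static int
    exampleCollectStats(virDomainObj *vm, bool nowait)
    {
        int rv;

        /* The nowait variant returns immediately, without reporting an
         * error, if another thread already owns a job on @vm. */
        if (nowait)
            rv = virDomainObjBeginJobNowait(vm, VIR_JOB_QUERY);
        else
            rv = virDomainObjBeginJob(vm, VIR_JOB_QUERY);

        if (rv < 0)
            return 0;    /* busy (or error): skip this domain */

        /* ... collect the stats ... */

        virDomainObjEndJob(vm);
        return 0;
    }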
Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/conf/virdomainjob.c   | 44 +++++++++++++++++++++++++++++++++++++++
 src/conf/virdomainjob.h   |  6 ++++++
 src/libvirt_private.syms  |  2 ++
 src/qemu/qemu_domain.c    |  2 +-
 src/qemu/qemu_domainjob.c | 44 ---------------------------------------
 src/qemu/qemu_domainjob.h |  7 -------
 src/qemu/qemu_driver.c    |  2 +-
 src/qemu/qemu_process.c   |  2 +-
 8 files changed, 55 insertions(+), 54 deletions(-)

diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index 763fbfba00..fe418bcb30 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -558,6 +558,50 @@ int virDomainObjBeginAsyncJob(virDomainObj *obj,
     return 0;
 }

+int
+virDomainObjBeginNestedJob(virDomainObj *obj,
+                           virDomainAsyncJob asyncJob)
+{
+    if (asyncJob != obj->job->asyncJob) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("unexpected async job %d type expected %d"),
+                       asyncJob, obj->job->asyncJob);
+        return -1;
+    }
+
+    if (obj->job->asyncOwner != virThreadSelfID()) {
+        VIR_WARN("This thread doesn't seem to be the async job owner: %llu",
+                 obj->job->asyncOwner);
+    }
+
+    return virDomainObjBeginJobInternal(obj, obj->job,
+                                        VIR_JOB_ASYNC_NESTED,
+                                        VIR_AGENT_JOB_NONE,
+                                        VIR_ASYNC_JOB_NONE,
+                                        false);
+}
+
+/**
+ * virDomainObjBeginJobNowait:
+ *
+ * @obj: domain object
+ * @job: virDomainJob to start
+ *
+ * Acquires job for a domain object which must be locked before
+ * calling. If there's already a job running it returns
+ * immediately without any error reported.
+ *
+ * Returns: see virDomainObjBeginJobInternal
+ */
+int
+virDomainObjBeginJobNowait(virDomainObj *obj,
+                           virDomainJob job)
+{
+    return virDomainObjBeginJobInternal(obj, obj->job, job,
+                                        VIR_AGENT_JOB_NONE,
+                                        VIR_ASYNC_JOB_NONE, true);
+}
+
 /*
  * obj must be locked and have a reference before calling
  *
diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h
index 3cd02ef4ae..c101334596 100644
--- a/src/conf/virdomainjob.h
+++ b/src/conf/virdomainjob.h
@@ -257,6 +257,12 @@ int virDomainObjBeginAsyncJob(virDomainObj *obj,
                               virDomainJobOperation operation,
                               unsigned long apiFlags)
     G_GNUC_WARN_UNUSED_RESULT;
+int virDomainObjBeginNestedJob(virDomainObj *obj,
+                               virDomainAsyncJob asyncJob)
+    G_GNUC_WARN_UNUSED_RESULT;
+int virDomainObjBeginJobNowait(virDomainObj *obj,
+                               virDomainJob job)
+    G_GNUC_WARN_UNUSED_RESULT;

 void virDomainObjEndJob(virDomainObj *obj);
 void virDomainObjEndAgentJob(virDomainObj *obj);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 8befe7ebe0..03e186fe50 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1191,6 +1191,8 @@ virDomainObjBeginAgentJob;
 virDomainObjBeginAsyncJob;
 virDomainObjBeginJob;
 virDomainObjBeginJobInternal;
+virDomainObjBeginJobNowait;
+virDomainObjBeginNestedJob;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
 virDomainObjEndAgentJob;
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index c494d75790..c526773633 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -5968,7 +5968,7 @@ qemuDomainObjEnterMonitorInternal(virDomainObj *obj,

     if (asyncJob != VIR_ASYNC_JOB_NONE) {
         int ret;
-        if ((ret = qemuDomainObjBeginNestedJob(obj, asyncJob)) < 0)
+        if ((ret = virDomainObjBeginNestedJob(obj, asyncJob)) < 0)
             return ret;
         if (!virDomainObjIsActive(obj)) {
             virReportError(VIR_ERR_OPERATION_FAILED, "%s",
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 99dcdb49b9..a170fdd97d 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -655,50 +655,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
     obj->job->asyncOwner = 0;
 }

-int
-qemuDomainObjBeginNestedJob(virDomainObj *obj,
-                            virDomainAsyncJob asyncJob)
-{
-    if (asyncJob != obj->job->asyncJob) {
-        virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("unexpected async job %d type expected %d"),
-                       asyncJob, obj->job->asyncJob);
-        return -1;
-    }
-
-    if (obj->job->asyncOwner != virThreadSelfID()) {
-        VIR_WARN("This thread doesn't seem to be the async job owner: %llu",
-                 obj->job->asyncOwner);
-    }
-
-    return virDomainObjBeginJobInternal(obj, obj->job,
-                                        VIR_JOB_ASYNC_NESTED,
-                                        VIR_AGENT_JOB_NONE,
-                                        VIR_ASYNC_JOB_NONE,
-                                        false);
-}
-
-/**
- * qemuDomainObjBeginJobNowait:
- *
- * @obj: domain object
- * @job: virDomainJob to start
- *
- * Acquires job for a domain object which must be locked before
- * calling. If there's already a job running it returns
- * immediately without any error reported.
- *
- * Returns: see qemuDomainObjBeginJobInternal
- */
-int
-qemuDomainObjBeginJobNowait(virDomainObj *obj,
-                            virDomainJob job)
-{
-    return virDomainObjBeginJobInternal(obj, obj->job, job,
-                                        VIR_AGENT_JOB_NONE,
-                                        VIR_ASYNC_JOB_NONE, true);
-}
-
 void
 qemuDomainObjAbortAsyncJob(virDomainObj *obj)
 {
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 1cf9fcc113..c3de401aa5 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -69,13 +69,6 @@ int qemuDomainAsyncJobPhaseFromString(virDomainAsyncJob job,
 void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
                                      virDomainObj *vm);

-int qemuDomainObjBeginNestedJob(virDomainObj *obj,
-                                virDomainAsyncJob asyncJob)
-    G_GNUC_WARN_UNUSED_RESULT;
-int qemuDomainObjBeginJobNowait(virDomainObj *obj,
-                                virDomainJob job)
-    G_GNUC_WARN_UNUSED_RESULT;
-
 void qemuDomainObjAbortAsyncJob(virDomainObj *obj);
 void qemuDomainObjSetJobPhase(virDomainObj *obj,
                               int phase);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e61a4e329b..a7ad1b1139 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -18808,7 +18808,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
         int rv;

         if (flags & VIR_CONNECT_GET_ALL_DOMAINS_STATS_NOWAIT)
-            rv = qemuDomainObjBeginJobNowait(vm, VIR_JOB_QUERY);
+            rv = virDomainObjBeginJobNowait(vm, VIR_JOB_QUERY);
         else
             rv = virDomainObjBeginJob(vm, VIR_JOB_QUERY);

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 68a9efc7df..3efeeb64c5 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8106,7 +8106,7 @@ void qemuProcessStop(virQEMUDriver *driver,
     virErrorPreserveLast(&orig_err);

     if (asyncJob != VIR_ASYNC_JOB_NONE) {
-        if (qemuDomainObjBeginNestedJob(vm, asyncJob) < 0)
+        if (virDomainObjBeginNestedJob(vm, asyncJob) < 0)
             goto cleanup;
     } else if (vm->job->asyncJob != VIR_ASYNC_JOB_NONE &&
                vm->job->asyncOwner == virThreadSelfID() &&
-- 
2.37.1