From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH 01/17] qemu & hypervisor: move qemuDomainObjBeginJobInternal() into hypervisor
Date: Wed, 24 Aug 2022 15:43:24 +0200
Message-Id: <9120590183e15f176ea9a6a4791f6dea0e001f3a.1661348243.git.khanicov@redhat.com>

This patch moves qemuDomainObjBeginJobInternal() into the hypervisor
code as virDomainObjBeginJobInternal() so that it can be used by other
hypervisor drivers in the following patches.
Signed-off-by: Kristina Hanicova
---
 po/POTFILES                 |   1 +
 src/hypervisor/domain_job.c | 250 +++++++++++++++++++++++++++++++
 src/hypervisor/domain_job.h |   8 +
 src/libvirt_private.syms    |   1 +
 src/qemu/qemu_domainjob.c   | 284 +++---------------------------
 5 files changed, 284 insertions(+), 260 deletions(-)

diff --git a/po/POTFILES b/po/POTFILES
index 9621efb0d3..e3a1824834 100644
--- a/po/POTFILES
+++ b/po/POTFILES
@@ -90,6 +90,7 @@ src/hyperv/hyperv_util.c
 src/hyperv/hyperv_wmi.c
 src/hypervisor/domain_cgroup.c
 src/hypervisor/domain_driver.c
+src/hypervisor/domain_job.c
 src/hypervisor/virclosecallbacks.c
 src/hypervisor/virhostdev.c
 src/interface/interface_backend_netcf.c
diff --git a/src/hypervisor/domain_job.c b/src/hypervisor/domain_job.c
index 77110d2a23..ef3bee0248 100644
--- a/src/hypervisor/domain_job.c
+++ b/src/hypervisor/domain_job.c
@@ -10,6 +10,13 @@
 
 #include "domain_job.h"
 #include "viralloc.h"
+#include "virthreadjob.h"
+#include "virlog.h"
+#include "virtime.h"
+
+#define VIR_FROM_THIS VIR_FROM_HYPERV
+
+VIR_LOG_INIT("hypervisor.domain_job");
 
 
 VIR_ENUM_IMPL(virDomainJob,
@@ -247,3 +254,246 @@ virDomainObjCanSetJob(virDomainJobObj *job,
             (newAgentJob == VIR_AGENT_JOB_NONE ||
              job->agentActive == VIR_AGENT_JOB_NONE));
 }
+
+/* Give up waiting for mutex after 30 seconds */
+#define VIR_JOB_WAIT_TIME (1000ull * 30)
+
+/**
+ * virDomainObjBeginJobInternal:
+ * @obj: virDomainObj = domain object
+ * @jobObj: virDomainJobObj = domain job object
+ * @job: virDomainJob to start
+ * @agentJob: virDomainAgentJob to start
+ * @asyncJob: virDomainAsyncJob to start
+ * @nowait: don't wait trying to acquire @job
+ *
+ * Acquires job for a domain object which must be locked before
+ * calling. If there's already a job running waits up to
+ * VIR_JOB_WAIT_TIME after which the function fails reporting
+ * an error unless @nowait is set.
+ *
+ * If @nowait is true this function tries to acquire job and if
+ * it fails, then it returns immediately without waiting. No
+ * error is reported in this case.
+ *
+ * Returns: 0 on success,
+ *         -2 if unable to start job because of timeout or
+ *             maxQueuedJobs limit,
+ *         -1 otherwise.
+ */
+int
+virDomainObjBeginJobInternal(virDomainObj *obj,
+                             virDomainJobObj *jobObj,
+                             virDomainJob job,
+                             virDomainAgentJob agentJob,
+                             virDomainAsyncJob asyncJob,
+                             bool nowait)
+{
+    unsigned long long now = 0;
+    unsigned long long then = 0;
+    bool nested = job == VIR_JOB_ASYNC_NESTED;
+    const char *blocker = NULL;
+    const char *agentBlocker = NULL;
+    int ret = -1;
+    unsigned long long duration = 0;
+    unsigned long long agentDuration = 0;
+    unsigned long long asyncDuration = 0;
+    const char *currentAPI = virThreadJobGet();
+
+    VIR_DEBUG("Starting job: API=%s job=%s agentJob=%s asyncJob=%s "
+              "(vm=%p name=%s, current job=%s agentJob=%s async=%s)",
+              NULLSTR(currentAPI),
+              virDomainJobTypeToString(job),
+              virDomainAgentJobTypeToString(agentJob),
+              virDomainAsyncJobTypeToString(asyncJob),
+              obj, obj->def->name,
+              virDomainJobTypeToString(jobObj->active),
+              virDomainAgentJobTypeToString(jobObj->agentActive),
+              virDomainAsyncJobTypeToString(jobObj->asyncJob));
+
+    if (virTimeMillisNow(&now) < 0)
+        return -1;
+
+    jobObj->jobsQueued++;
+    then = now + VIR_JOB_WAIT_TIME;
+
+ retry:
+    if (job != VIR_JOB_ASYNC &&
+        job != VIR_JOB_DESTROY &&
+        jobObj->maxQueuedJobs &&
+        jobObj->jobsQueued > jobObj->maxQueuedJobs) {
+        goto error;
+    }
+
+    while (!nested && !virDomainNestedJobAllowed(jobObj, job)) {
+        if (nowait)
+            goto cleanup;
+
+        VIR_DEBUG("Waiting for async job (vm=%p name=%s)", obj, obj->def->name);
+        if (virCondWaitUntil(&jobObj->asyncCond, &obj->parent.lock, then) < 0)
+            goto error;
+    }
+
+    while (!virDomainObjCanSetJob(jobObj, job, agentJob)) {
+        if (nowait)
+            goto cleanup;
+
+        VIR_DEBUG("Waiting for job (vm=%p name=%s)", obj, obj->def->name);
+        if (virCondWaitUntil(&jobObj->cond, &obj->parent.lock, then) < 0)
+            goto error;
+    }
+
+    /* No job is active but a new async job could have been started while obj
+     * was unlocked, so we need to recheck it. */
+    if (!nested && !virDomainNestedJobAllowed(jobObj, job))
+        goto retry;
+
+    if (obj->removing) {
+        char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+        virUUIDFormat(obj->def->uuid, uuidstr);
+        virReportError(VIR_ERR_NO_DOMAIN,
+                       _("no domain with matching uuid '%s' (%s)"),
+                       uuidstr, obj->def->name);
+        goto cleanup;
+    }
+
+    ignore_value(virTimeMillisNow(&now));
+
+    if (job) {
+        virDomainObjResetJob(jobObj);
+
+        if (job != VIR_JOB_ASYNC) {
+            VIR_DEBUG("Started job: %s (async=%s vm=%p name=%s)",
+                      virDomainJobTypeToString(job),
+                      virDomainAsyncJobTypeToString(jobObj->asyncJob),
+                      obj, obj->def->name);
+            jobObj->active = job;
+            jobObj->owner = virThreadSelfID();
+            jobObj->ownerAPI = g_strdup(virThreadJobGet());
+            jobObj->started = now;
+        } else {
+            VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
+                      virDomainAsyncJobTypeToString(asyncJob),
+                      obj, obj->def->name);
+            virDomainObjResetAsyncJob(jobObj);
+            jobObj->current = virDomainJobDataInit(jobObj->jobDataPrivateCb);
+            jobObj->current->status = VIR_DOMAIN_JOB_STATUS_ACTIVE;
+            jobObj->asyncJob = asyncJob;
+            jobObj->asyncOwner = virThreadSelfID();
+            jobObj->asyncOwnerAPI = g_strdup(virThreadJobGet());
+            jobObj->asyncStarted = now;
+            jobObj->current->started = now;
+        }
+    }
+
+    if (agentJob) {
+        virDomainObjResetAgentJob(jobObj);
+        VIR_DEBUG("Started agent job: %s (vm=%p name=%s job=%s async=%s)",
+                  virDomainAgentJobTypeToString(agentJob),
+                  obj, obj->def->name,
+                  virDomainJobTypeToString(jobObj->active),
+                  virDomainAsyncJobTypeToString(jobObj->asyncJob));
+        jobObj->agentActive = agentJob;
+        jobObj->agentOwner = virThreadSelfID();
+        jobObj->agentOwnerAPI = g_strdup(virThreadJobGet());
+        jobObj->agentStarted = now;
+    }
+
+    if (virDomainTrackJob(job) && jobObj->cb &&
+        jobObj->cb->saveStatusPrivate)
+        jobObj->cb->saveStatusPrivate(obj);
+
+    return 0;
+
+ error:
+    ignore_value(virTimeMillisNow(&now));
+    if (jobObj->active && jobObj->started)
+        duration = now - jobObj->started;
+    if (jobObj->agentActive && jobObj->agentStarted)
+        agentDuration = now - jobObj->agentStarted;
+    if (jobObj->asyncJob && jobObj->asyncStarted)
+        asyncDuration = now - jobObj->asyncStarted;
+
+    VIR_WARN("Cannot start job (%s, %s, %s) for domain %s; "
+             "current job is (%s, %s, %s) "
+             "owned by (%llu %s, %llu %s, %llu %s (flags=0x%lx)) "
+             "for (%llus, %llus, %llus)",
+             virDomainJobTypeToString(job),
+             virDomainAgentJobTypeToString(agentJob),
+             virDomainAsyncJobTypeToString(asyncJob),
+             obj->def->name,
+             virDomainJobTypeToString(jobObj->active),
+             virDomainAgentJobTypeToString(jobObj->agentActive),
+             virDomainAsyncJobTypeToString(jobObj->asyncJob),
+             jobObj->owner, NULLSTR(jobObj->ownerAPI),
+             jobObj->agentOwner, NULLSTR(jobObj->agentOwnerAPI),
+             jobObj->asyncOwner, NULLSTR(jobObj->asyncOwnerAPI),
+             jobObj->apiFlags,
+             duration / 1000, agentDuration / 1000, asyncDuration / 1000);
+
+    if (job) {
+        if (nested || virDomainNestedJobAllowed(jobObj, job))
+            blocker = jobObj->ownerAPI;
+        else
+            blocker = jobObj->asyncOwnerAPI;
+    }
+
+    if (agentJob)
+        agentBlocker = jobObj->agentOwnerAPI;
+
+    if (errno == ETIMEDOUT) {
+        if (blocker && agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s agent=%s)"),
+                           blocker, agentBlocker);
+        } else if (blocker) {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s)"),
+                           blocker);
+        } else if (agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT,
+                           _("cannot acquire state change "
+                             "lock (held by agent=%s)"),
+                           agentBlocker);
+        } else {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s",
+                           _("cannot acquire state change lock"));
+        }
+        ret = -2;
+    } else if (jobObj->maxQueuedJobs &&
+               jobObj->jobsQueued > jobObj->maxQueuedJobs) {
+        if (blocker && agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_FAILED,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s agent=%s) "
+                             "due to max_queued limit"),
+                           blocker, agentBlocker);
+        } else if (blocker) {
+            virReportError(VIR_ERR_OPERATION_FAILED,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s) "
+                             "due to max_queued limit"),
+                           blocker);
+        } else if (agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_FAILED,
+                           _("cannot acquire state change "
+                             "lock (held by agent=%s) "
+                             "due to max_queued limit"),
+                           agentBlocker);
+        } else {
+            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
+                           _("cannot acquire state change lock "
+                             "due to max_queued limit"));
+        }
+        ret = -2;
+    } else {
+        virReportSystemError(errno, "%s", _("cannot acquire job mutex"));
+    }
+
+ cleanup:
+    jobObj->jobsQueued--;
+    return ret;
+}
diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
index 334b59c465..d7409c05f0 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/hypervisor/domain_job.h
@@ -234,3 +234,11 @@ bool virDomainNestedJobAllowed(virDomainJobObj *jobs, virDomainJob newJob);
 bool virDomainObjCanSetJob(virDomainJobObj *job,
                            virDomainJob newJob,
                            virDomainAgentJob newAgentJob);
+
+int virDomainObjBeginJobInternal(virDomainObj *obj,
+                                 virDomainJobObj *jobObj,
+                                 virDomainJob job,
+                                 virDomainAgentJob agentJob,
+                                 virDomainAsyncJob asyncJob,
+                                 bool nowait)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index ac2802095e..51efd64ff2 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1596,6 +1596,7 @@ virDomainJobStatusToType;
 virDomainJobTypeFromString;
 virDomainJobTypeToString;
 virDomainNestedJobAllowed;
+virDomainObjBeginJobInternal;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
 virDomainObjInitJob;
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 66a91a3e4f..a6ea7b2f58 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -697,248 +697,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
     priv->job.asyncOwner = 0;
 }
 
-/* Give up waiting for mutex after 30 seconds */
-#define QEMU_JOB_WAIT_TIME (1000ull * 30)
-
-/**
- * qemuDomainObjBeginJobInternal:
- * @obj: domain object
- * @job: virDomainJob to start
- * @asyncJob: virDomainAsyncJob to start
- * @nowait: don't wait trying to acquire @job
- *
- * Acquires job for a domain object which must be locked before
- * calling. If there's already a job running waits up to
- * QEMU_JOB_WAIT_TIME after which the functions fails reporting
- * an error unless @nowait is set.
- *
- * If @nowait is true this function tries to acquire job and if
- * it fails, then it returns immediately without waiting. No
- * error is reported in this case.
- *
- * Returns: 0 on success,
- *         -2 if unable to start job because of timeout or
- *             maxQueuedJobs limit,
- *         -1 otherwise.
- */
-static int ATTRIBUTE_NONNULL(1)
-qemuDomainObjBeginJobInternal(virDomainObj *obj,
-                              virDomainJob job,
-                              virDomainAgentJob agentJob,
-                              virDomainAsyncJob asyncJob,
-                              bool nowait)
-{
-    qemuDomainObjPrivate *priv = obj->privateData;
-    unsigned long long now;
-    unsigned long long then;
-    bool nested = job == VIR_JOB_ASYNC_NESTED;
-    const char *blocker = NULL;
-    const char *agentBlocker = NULL;
-    int ret = -1;
-    unsigned long long duration = 0;
-    unsigned long long agentDuration = 0;
-    unsigned long long asyncDuration = 0;
-    const char *currentAPI = virThreadJobGet();
-
-    VIR_DEBUG("Starting job: API=%s job=%s agentJob=%s asyncJob=%s "
-              "(vm=%p name=%s, current job=%s agentJob=%s async=%s)",
-              NULLSTR(currentAPI),
-              virDomainJobTypeToString(job),
-              virDomainAgentJobTypeToString(agentJob),
-              virDomainAsyncJobTypeToString(asyncJob),
-              obj, obj->def->name,
-              virDomainJobTypeToString(priv->job.active),
-              virDomainAgentJobTypeToString(priv->job.agentActive),
-              virDomainAsyncJobTypeToString(priv->job.asyncJob));
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-
-    priv->job.jobsQueued++;
-    then = now + QEMU_JOB_WAIT_TIME;
-
- retry:
-    if (job != VIR_JOB_ASYNC &&
-        job != VIR_JOB_DESTROY &&
-        priv->job.maxQueuedJobs &&
-        priv->job.jobsQueued > priv->job.maxQueuedJobs) {
-        goto error;
-    }
-
-    while (!nested && !virDomainNestedJobAllowed(&priv->job, job)) {
-        if (nowait)
-            goto cleanup;
-
-        VIR_DEBUG("Waiting for async job (vm=%p name=%s)", obj, obj->def->name);
-        if (virCondWaitUntil(&priv->job.asyncCond, &obj->parent.lock, then) < 0)
-            goto error;
-    }
-
-    while (!virDomainObjCanSetJob(&priv->job, job, agentJob)) {
-        if (nowait)
-            goto cleanup;
-
-        VIR_DEBUG("Waiting for job (vm=%p name=%s)", obj, obj->def->name);
-        if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0)
-            goto error;
-    }
-
-    /* No job is active but a new async job could have been started while obj
-     * was unlocked, so we need to recheck it. */
-    if (!nested && !virDomainNestedJobAllowed(&priv->job, job))
-        goto retry;
-
-    if (obj->removing) {
-        char uuidstr[VIR_UUID_STRING_BUFLEN];
-
-        virUUIDFormat(obj->def->uuid, uuidstr);
-        virReportError(VIR_ERR_NO_DOMAIN,
-                       _("no domain with matching uuid '%s' (%s)"),
-                       uuidstr, obj->def->name);
-        goto cleanup;
-    }
-
-    ignore_value(virTimeMillisNow(&now));
-
-    if (job) {
-        virDomainObjResetJob(&priv->job);
-
-        if (job != VIR_JOB_ASYNC) {
-            VIR_DEBUG("Started job: %s (async=%s vm=%p name=%s)",
-                      virDomainJobTypeToString(job),
-                      virDomainAsyncJobTypeToString(priv->job.asyncJob),
-                      obj, obj->def->name);
-            priv->job.active = job;
-            priv->job.owner = virThreadSelfID();
-            priv->job.ownerAPI = g_strdup(virThreadJobGet());
-            priv->job.started = now;
-        } else {
-            VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
-                      virDomainAsyncJobTypeToString(asyncJob),
-                      obj, obj->def->name);
-            virDomainObjResetAsyncJob(&priv->job);
-            priv->job.current = virDomainJobDataInit(priv->job.jobDataPrivateCb);
-            priv->job.current->status = VIR_DOMAIN_JOB_STATUS_ACTIVE;
-            priv->job.asyncJob = asyncJob;
-            priv->job.asyncOwner = virThreadSelfID();
-            priv->job.asyncOwnerAPI = g_strdup(virThreadJobGet());
-            priv->job.asyncStarted = now;
-            priv->job.current->started = now;
-        }
-    }
-
-    if (agentJob) {
-        virDomainObjResetAgentJob(&priv->job);
-
-        VIR_DEBUG("Started agent job: %s (vm=%p name=%s job=%s async=%s)",
-                  virDomainAgentJobTypeToString(agentJob),
-                  obj, obj->def->name,
-                  virDomainJobTypeToString(priv->job.active),
-                  virDomainAsyncJobTypeToString(priv->job.asyncJob));
-        priv->job.agentActive = agentJob;
-        priv->job.agentOwner = virThreadSelfID();
-        priv->job.agentOwnerAPI = g_strdup(virThreadJobGet());
-        priv->job.agentStarted = now;
-    }
-
-    if (virDomainTrackJob(job) && priv->job.cb)
-        priv->job.cb->saveStatusPrivate(obj);
-
-    return 0;
-
- error:
-    ignore_value(virTimeMillisNow(&now));
-    if (priv->job.active && priv->job.started)
-        duration = now - priv->job.started;
-    if (priv->job.agentActive && priv->job.agentStarted)
-        agentDuration = now - priv->job.agentStarted;
-    if (priv->job.asyncJob && priv->job.asyncStarted)
-        asyncDuration = now - priv->job.asyncStarted;
-
-    VIR_WARN("Cannot start job (%s, %s, %s) in API %s for domain %s; "
-             "current job is (%s, %s, %s) "
-             "owned by (%llu %s, %llu %s, %llu %s (flags=0x%lx)) "
-             "for (%llus, %llus, %llus)",
-             virDomainJobTypeToString(job),
-             virDomainAgentJobTypeToString(agentJob),
-             virDomainAsyncJobTypeToString(asyncJob),
-             NULLSTR(currentAPI),
-             obj->def->name,
-             virDomainJobTypeToString(priv->job.active),
-             virDomainAgentJobTypeToString(priv->job.agentActive),
-             virDomainAsyncJobTypeToString(priv->job.asyncJob),
-             priv->job.owner, NULLSTR(priv->job.ownerAPI),
-             priv->job.agentOwner, NULLSTR(priv->job.agentOwnerAPI),
-             priv->job.asyncOwner, NULLSTR(priv->job.asyncOwnerAPI),
-             priv->job.apiFlags,
-             duration / 1000, agentDuration / 1000, asyncDuration / 1000);
-
-    if (job) {
-        if (nested || virDomainNestedJobAllowed(&priv->job, job))
-            blocker = priv->job.ownerAPI;
-        else
-            blocker = priv->job.asyncOwnerAPI;
-    }
-
-    if (agentJob)
-        agentBlocker = priv->job.agentOwnerAPI;
-
-    if (errno == ETIMEDOUT) {
-        if (blocker && agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s agent=%s)"),
-                           blocker, agentBlocker);
-        } else if (blocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s)"),
-                           blocker);
-        } else if (agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
-                             "lock (held by agent=%s)"),
-                           agentBlocker);
-        } else {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s",
-                           _("cannot acquire state change lock"));
-        }
-        ret = -2;
-    } else if (priv->job.maxQueuedJobs &&
-               priv->job.jobsQueued > priv->job.maxQueuedJobs) {
-        if (blocker && agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_FAILED,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s agent=%s) "
-                             "due to max_queued limit"),
-                           blocker, agentBlocker);
-        } else if (blocker) {
-            virReportError(VIR_ERR_OPERATION_FAILED,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s) "
-                             "due to max_queued limit"),
-                           blocker);
-        } else if (agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_FAILED,
-                           _("cannot acquire state change "
-                             "lock (held by agent=%s) "
-                             "due to max_queued limit"),
-                           agentBlocker);
-        } else {
-            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
-                           _("cannot acquire state change lock "
-                             "due to max_queued limit"));
-        }
-        ret = -2;
-    } else {
-        virReportSystemError(errno, "%s", _("cannot acquire job mutex"));
-    }
-
- cleanup:
-    priv->job.jobsQueued--;
-    return ret;
-}
-
 /*
  * obj must be locked before calling
  *
@@ -950,9 +708,11 @@ qemuDomainObjBeginJobInternal(virDomainObj *obj,
 int qemuDomainObjBeginJob(virDomainObj *obj,
                           virDomainJob job)
 {
-    if (qemuDomainObjBeginJobInternal(obj, job,
-                                      VIR_AGENT_JOB_NONE,
-                                      VIR_ASYNC_JOB_NONE, false) < 0)
+    qemuDomainObjPrivate *priv = obj->privateData;
+
+    if (virDomainObjBeginJobInternal(obj, &priv->job, job,
+                                     VIR_AGENT_JOB_NONE,
+                                     VIR_ASYNC_JOB_NONE, false) < 0)
         return -1;
     return 0;
 }
@@ -968,9 +728,11 @@ int
 qemuDomainObjBeginAgentJob(virDomainObj *obj,
                            virDomainAgentJob agentJob)
 {
-    return qemuDomainObjBeginJobInternal(obj, VIR_JOB_NONE,
-                                         agentJob,
-                                         VIR_ASYNC_JOB_NONE, false);
+    qemuDomainObjPrivate *priv = obj->privateData;
+
+    return virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_NONE,
+                                        agentJob,
+                                        VIR_ASYNC_JOB_NONE, false);
 }
 
 int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
@@ -978,11 +740,11 @@ int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
                                virDomainJobOperation operation,
                                unsigned long apiFlags)
 {
-    qemuDomainObjPrivate *priv;
+    qemuDomainObjPrivate *priv = obj->privateData;
 
-    if (qemuDomainObjBeginJobInternal(obj, VIR_JOB_ASYNC,
-                                      VIR_AGENT_JOB_NONE,
-                                      asyncJob, false) < 0)
+    if (virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_ASYNC,
+                                     VIR_AGENT_JOB_NONE,
+                                     asyncJob, false) < 0)
         return -1;
 
     priv = obj->privateData;
@@ -1009,11 +771,11 @@ qemuDomainObjBeginNestedJob(virDomainObj *obj,
                             priv->job.asyncOwner);
     }
 
-    return qemuDomainObjBeginJobInternal(obj,
-                                         VIR_JOB_ASYNC_NESTED,
-                                         VIR_AGENT_JOB_NONE,
-                                         VIR_ASYNC_JOB_NONE,
-                                         false);
+    return virDomainObjBeginJobInternal(obj, &priv->job,
+                                        VIR_JOB_ASYNC_NESTED,
+                                        VIR_AGENT_JOB_NONE,
+                                        VIR_ASYNC_JOB_NONE,
+                                        false);
 }
 
 /**
@@ -1032,9 +794,11 @@ int
 qemuDomainObjBeginJobNowait(virDomainObj *obj,
                             virDomainJob job)
 {
-    return qemuDomainObjBeginJobInternal(obj, job,
-                                         VIR_AGENT_JOB_NONE,
-                                         VIR_ASYNC_JOB_NONE, true);
+    qemuDomainObjPrivate *priv = obj->privateData;
+
+    return virDomainObjBeginJobInternal(obj, &priv->job, job,
+                                        VIR_AGENT_JOB_NONE,
+                                        VIR_ASYNC_JOB_NONE, true);
 }
 
 /*
-- 
2.37.1