From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 01/17] qemu & hypervisor: move qemuDomainObjBeginJobInternal() into hypervisor
Date: Mon, 5 Sep 2022 15:56:59 +0200

This patch moves qemuDomainObjBeginJobInternal() into the hypervisor
layer as virDomainObjBeginJobInternal() so that it can be used by
other hypervisor drivers in the following patches.

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 po/POTFILES                 |   1 +
 src/hypervisor/domain_job.c | 251 +++++++++++++++++++++++++++++++
 src/hypervisor/domain_job.h |   8 +
 src/libvirt_private.syms    |   1 +
 src/qemu/qemu_domainjob.c   | 284 +++-----------------------------
 5 files changed, 285 insertions(+), 260 deletions(-)

diff --git a/po/POTFILES b/po/POTFILES
index 856c8e69b3..b9577e840d 100644
--- a/po/POTFILES
+++ b/po/POTFILES
@@ -90,6 +90,7 @@ src/hyperv/hyperv_util.c
 src/hyperv/hyperv_wmi.c
 src/hypervisor/domain_cgroup.c
 src/hypervisor/domain_driver.c
+src/hypervisor/domain_job.c
 src/hypervisor/virclosecallbacks.c
 src/hypervisor/virhostdev.c
 src/interface/interface_backend_netcf.c
diff --git a/src/hypervisor/domain_job.c b/src/hypervisor/domain_job.c
index 77110d2a23..07ee5b4a3d 100644
--- a/src/hypervisor/domain_job.c
+++ b/src/hypervisor/domain_job.c
@@ -10,6 +10,13 @@
 
 #include "domain_job.h"
 #include "viralloc.h"
+#include "virthreadjob.h"
+#include "virlog.h"
+#include "virtime.h"
+
+#define VIR_FROM_THIS VIR_FROM_NONE
+
+VIR_LOG_INIT("hypervisor.domain_job");
 
 
 VIR_ENUM_IMPL(virDomainJob,
@@ -247,3 +254,247 @@ virDomainObjCanSetJob(virDomainJobObj *job,
             (newAgentJob == VIR_AGENT_JOB_NONE ||
              job->agentActive == VIR_AGENT_JOB_NONE));
 }
+
+/* Give up waiting for mutex after 30 seconds */
+#define VIR_JOB_WAIT_TIME (1000ull * 30)
+
+/**
+ * virDomainObjBeginJobInternal:
+ * @obj: virDomainObj = domain object
+ * @jobObj: virDomainJobObj = domain job object
+ * @job: virDomainJob to start
+ * @agentJob: virDomainAgentJob to start
+ * @asyncJob: virDomainAsyncJob to start
+ * @nowait: don't wait trying to acquire @job
+ *
+ * Acquires job for a domain object which must be locked before
+ * calling. If there's already a job running waits up to
+ * VIR_JOB_WAIT_TIME after which the functions fails reporting
+ * an error unless @nowait is set.
+ *
+ * If @nowait is true this function tries to acquire job and if
+ * it fails, then it returns immediately without waiting. No
+ * error is reported in this case.
+ *
+ * Returns: 0 on success,
+ *           -2 if unable to start job because of timeout or
+ *              maxQueuedJobs limit,
+ *           -1 otherwise.
+ */
+int
+virDomainObjBeginJobInternal(virDomainObj *obj,
+                             virDomainJobObj *jobObj,
+                             virDomainJob job,
+                             virDomainAgentJob agentJob,
+                             virDomainAsyncJob asyncJob,
+                             bool nowait)
+{
+    unsigned long long now = 0;
+    unsigned long long then = 0;
+    bool nested = job == VIR_JOB_ASYNC_NESTED;
+    const char *blocker = NULL;
+    const char *agentBlocker = NULL;
+    int ret = -1;
+    unsigned long long duration = 0;
+    unsigned long long agentDuration = 0;
+    unsigned long long asyncDuration = 0;
+    const char *currentAPI = virThreadJobGet();
+
+    VIR_DEBUG("Starting job: API=%s job=%s agentJob=%s asyncJob=%s "
+              "(vm=%p name=%s, current job=%s agentJob=%s async=%s)",
+              NULLSTR(currentAPI),
+              virDomainJobTypeToString(job),
+              virDomainAgentJobTypeToString(agentJob),
+              virDomainAsyncJobTypeToString(asyncJob),
+              obj, obj->def->name,
+              virDomainJobTypeToString(jobObj->active),
+              virDomainAgentJobTypeToString(jobObj->agentActive),
+              virDomainAsyncJobTypeToString(jobObj->asyncJob));
+
+    if (virTimeMillisNow(&now) < 0)
+        return -1;
+
+    jobObj->jobsQueued++;
+    then = now + VIR_JOB_WAIT_TIME;
+
+ retry:
+    if (job != VIR_JOB_ASYNC &&
+        job != VIR_JOB_DESTROY &&
+        jobObj->maxQueuedJobs &&
+        jobObj->jobsQueued > jobObj->maxQueuedJobs) {
+        goto error;
+    }
+
+    while (!nested && !virDomainNestedJobAllowed(jobObj, job)) {
+        if (nowait)
+            goto cleanup;
+
+        VIR_DEBUG("Waiting for async job (vm=%p name=%s)", obj, obj->def->name);
+        if (virCondWaitUntil(&jobObj->asyncCond, &obj->parent.lock, then) < 0)
+            goto error;
+    }
+
+    while (!virDomainObjCanSetJob(jobObj, job, agentJob)) {
+        if (nowait)
+            goto cleanup;
+
+        VIR_DEBUG("Waiting for job (vm=%p name=%s)", obj, obj->def->name);
+        if (virCondWaitUntil(&jobObj->cond, &obj->parent.lock, then) < 0)
+            goto error;
+    }
+
+    /* No job is active but a new async job could have been started while obj
+     * was unlocked, so we need to recheck it.
+     */
+    if (!nested && !virDomainNestedJobAllowed(jobObj, job))
+        goto retry;
+
+    if (obj->removing) {
+        char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+        virUUIDFormat(obj->def->uuid, uuidstr);
+        virReportError(VIR_ERR_NO_DOMAIN,
+                       _("no domain with matching uuid '%s' (%s)"),
+                       uuidstr, obj->def->name);
+        goto cleanup;
+    }
+
+    ignore_value(virTimeMillisNow(&now));
+
+    if (job) {
+        virDomainObjResetJob(jobObj);
+
+        if (job != VIR_JOB_ASYNC) {
+            VIR_DEBUG("Started job: %s (async=%s vm=%p name=%s)",
+                      virDomainJobTypeToString(job),
+                      virDomainAsyncJobTypeToString(jobObj->asyncJob),
+                      obj, obj->def->name);
+            jobObj->active = job;
+            jobObj->owner = virThreadSelfID();
+            jobObj->ownerAPI = g_strdup(virThreadJobGet());
+            jobObj->started = now;
+        } else {
+            VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
+                      virDomainAsyncJobTypeToString(asyncJob),
+                      obj, obj->def->name);
+            virDomainObjResetAsyncJob(jobObj);
+            jobObj->current = virDomainJobDataInit(jobObj->jobDataPrivateCb);
+            jobObj->current->status = VIR_DOMAIN_JOB_STATUS_ACTIVE;
+            jobObj->asyncJob = asyncJob;
+            jobObj->asyncOwner = virThreadSelfID();
+            jobObj->asyncOwnerAPI = g_strdup(virThreadJobGet());
+            jobObj->asyncStarted = now;
+            jobObj->current->started = now;
+        }
+    }
+
+    if (agentJob) {
+        virDomainObjResetAgentJob(jobObj);
+        VIR_DEBUG("Started agent job: %s (vm=%p name=%s job=%s async=%s)",
+                  virDomainAgentJobTypeToString(agentJob),
+                  obj, obj->def->name,
+                  virDomainJobTypeToString(jobObj->active),
+                  virDomainAsyncJobTypeToString(jobObj->asyncJob));
+        jobObj->agentActive = agentJob;
+        jobObj->agentOwner = virThreadSelfID();
+        jobObj->agentOwnerAPI = g_strdup(virThreadJobGet());
+        jobObj->agentStarted = now;
+    }
+
+    if (virDomainTrackJob(job) && jobObj->cb &&
+        jobObj->cb->saveStatusPrivate)
+        jobObj->cb->saveStatusPrivate(obj);
+
+    return 0;
+
+ error:
+    ignore_value(virTimeMillisNow(&now));
+    if (jobObj->active && jobObj->started)
+        duration = now - jobObj->started;
+    if (jobObj->agentActive && jobObj->agentStarted)
+        agentDuration = now - jobObj->agentStarted;
+    if (jobObj->asyncJob && jobObj->asyncStarted)
+        asyncDuration = now - jobObj->asyncStarted;
+
+    VIR_WARN("Cannot start job (%s, %s, %s) in API %s for domain %s; "
+             "current job is (%s, %s, %s) "
+             "owned by (%llu %s, %llu %s, %llu %s (flags=0x%lx)) "
+             "for (%llus, %llus, %llus)",
+             virDomainJobTypeToString(job),
+             virDomainAgentJobTypeToString(agentJob),
+             virDomainAsyncJobTypeToString(asyncJob),
+             NULLSTR(currentAPI),
+             obj->def->name,
+             virDomainJobTypeToString(jobObj->active),
+             virDomainAgentJobTypeToString(jobObj->agentActive),
+             virDomainAsyncJobTypeToString(jobObj->asyncJob),
+             jobObj->owner, NULLSTR(jobObj->ownerAPI),
+             jobObj->agentOwner, NULLSTR(jobObj->agentOwnerAPI),
+             jobObj->asyncOwner, NULLSTR(jobObj->asyncOwnerAPI),
+             jobObj->apiFlags,
+             duration / 1000, agentDuration / 1000, asyncDuration / 1000);
+
+    if (job) {
+        if (nested || virDomainNestedJobAllowed(jobObj, job))
+            blocker = jobObj->ownerAPI;
+        else
+            blocker = jobObj->asyncOwnerAPI;
+    }
+
+    if (agentJob)
+        agentBlocker = jobObj->agentOwnerAPI;
+
+    if (errno == ETIMEDOUT) {
+        if (blocker && agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s agent=%s)"),
+                           blocker, agentBlocker);
+        } else if (blocker) {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s)"),
+                           blocker);
+        } else if (agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT,
+                           _("cannot acquire state change "
+                             "lock (held by agent=%s)"),
+                           agentBlocker);
+        } else {
+            virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s",
+                           _("cannot acquire state change lock"));
+        }
+        ret = -2;
+    } else if (jobObj->maxQueuedJobs &&
+               jobObj->jobsQueued > jobObj->maxQueuedJobs) {
+        if (blocker && agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_FAILED,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s agent=%s) "
+                             "due to max_queued limit"),
+                           blocker, agentBlocker);
+        } else if (blocker) {
+            virReportError(VIR_ERR_OPERATION_FAILED,
+                           _("cannot acquire state change "
+                             "lock (held by monitor=%s) "
+                             "due to max_queued limit"),
+                           blocker);
+        } else if (agentBlocker) {
+            virReportError(VIR_ERR_OPERATION_FAILED,
+                           _("cannot acquire state change "
+                             "lock (held by agent=%s) "
+                             "due to max_queued limit"),
+                           agentBlocker);
+        } else {
+            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
+                           _("cannot acquire state change lock "
+                             "due to max_queued limit"));
+        }
+        ret = -2;
+    } else {
+        virReportSystemError(errno, "%s", _("cannot acquire job mutex"));
+    }
+
+ cleanup:
+    jobObj->jobsQueued--;
+    return ret;
+}
diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
index 334b59c465..d7409c05f0 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/hypervisor/domain_job.h
@@ -234,3 +234,11 @@ bool virDomainNestedJobAllowed(virDomainJobObj *jobs, virDomainJob newJob);
 bool virDomainObjCanSetJob(virDomainJobObj *job,
                            virDomainJob newJob,
                            virDomainAgentJob newAgentJob);
+
+int virDomainObjBeginJobInternal(virDomainObj *obj,
+                                 virDomainJobObj *jobObj,
+                                 virDomainJob job,
+                                 virDomainAgentJob agentJob,
+                                 virDomainAsyncJob asyncJob,
+                                 bool nowait)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index f739259375..08571cd4b4 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1596,6 +1596,7 @@ virDomainJobStatusToType;
 virDomainJobTypeFromString;
 virDomainJobTypeToString;
 virDomainNestedJobAllowed;
+virDomainObjBeginJobInternal;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
 virDomainObjInitJob;
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 66a91a3e4f..a6ea7b2f58 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -697,248 +697,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
     priv->job.asyncOwner = 0;
 }
 
-/* Give up waiting for mutex after 30 seconds */
-#define QEMU_JOB_WAIT_TIME (1000ull * 30)
-
-/**
- * qemuDomainObjBeginJobInternal:
- * @obj: domain object
- * @job: virDomainJob to start
- * @asyncJob: virDomainAsyncJob to start
- * @nowait: don't wait trying to acquire @job
- *
- * Acquires job for a domain object which must be locked before
- * calling. If there's already a job running waits up to
- * QEMU_JOB_WAIT_TIME after which the functions fails reporting
- * an error unless @nowait is set.
- *
- * If @nowait is true this function tries to acquire job and if
- * it fails, then it returns immediately without waiting. No
- * error is reported in this case.
- *
- * Returns: 0 on success,
- *           -2 if unable to start job because of timeout or
- *              maxQueuedJobs limit,
- *           -1 otherwise.
- */
-static int ATTRIBUTE_NONNULL(1)
-qemuDomainObjBeginJobInternal(virDomainObj *obj,
-                              virDomainJob job,
-                              virDomainAgentJob agentJob,
-                              virDomainAsyncJob asyncJob,
-                              bool nowait)
-{
-    qemuDomainObjPrivate *priv = obj->privateData;
-    unsigned long long now;
-    unsigned long long then;
-    bool nested = job == VIR_JOB_ASYNC_NESTED;
-    const char *blocker = NULL;
-    const char *agentBlocker = NULL;
-    int ret = -1;
-    unsigned long long duration = 0;
-    unsigned long long agentDuration = 0;
-    unsigned long long asyncDuration = 0;
-    const char *currentAPI = virThreadJobGet();
-
-    VIR_DEBUG("Starting job: API=%s job=%s agentJob=%s asyncJob=%s "
-              "(vm=%p name=%s, current job=%s agentJob=%s async=%s)",
-              NULLSTR(currentAPI),
-              virDomainJobTypeToString(job),
-              virDomainAgentJobTypeToString(agentJob),
-              virDomainAsyncJobTypeToString(asyncJob),
-              obj, obj->def->name,
-              virDomainJobTypeToString(priv->job.active),
-              virDomainAgentJobTypeToString(priv->job.agentActive),
-              virDomainAsyncJobTypeToString(priv->job.asyncJob));
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-
-    priv->job.jobsQueued++;
-    then = now + QEMU_JOB_WAIT_TIME;
-
- retry:
-    if (job != VIR_JOB_ASYNC &&
-        job != VIR_JOB_DESTROY &&
-        priv->job.maxQueuedJobs &&
-        priv->job.jobsQueued > priv->job.maxQueuedJobs) {
-        goto error;
-    }
-
-    while (!nested && !virDomainNestedJobAllowed(&priv->job, job)) {
-        if (nowait)
-            goto cleanup;
-
-        VIR_DEBUG("Waiting for async job (vm=%p name=%s)", obj, obj->def->name);
-        if (virCondWaitUntil(&priv->job.asyncCond, &obj->parent.lock, then) < 0)
-            goto error;
-    }
-
-    while (!virDomainObjCanSetJob(&priv->job, job, agentJob)) {
-        if (nowait)
-            goto cleanup;
-
-        VIR_DEBUG("Waiting for job (vm=%p name=%s)", obj, obj->def->name);
-        if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0)
-            goto error;
-    }
-
-    /* No job is active but a new async job could have been started while obj
-     * was unlocked, so we need to recheck it.
-     */
-    if (!nested && !virDomainNestedJobAllowed(&priv->job, job))
-        goto retry;
-
-    if (obj->removing) {
-        char uuidstr[VIR_UUID_STRING_BUFLEN];
-
-        virUUIDFormat(obj->def->uuid, uuidstr);
-        virReportError(VIR_ERR_NO_DOMAIN,
-                       _("no domain with matching uuid '%s' (%s)"),
-                       uuidstr, obj->def->name);
-        goto cleanup;
-    }
-
-    ignore_value(virTimeMillisNow(&now));
-
-    if (job) {
-        virDomainObjResetJob(&priv->job);
-
-        if (job != VIR_JOB_ASYNC) {
-            VIR_DEBUG("Started job: %s (async=%s vm=%p name=%s)",
-                      virDomainJobTypeToString(job),
-                      virDomainAsyncJobTypeToString(priv->job.asyncJob),
-                      obj, obj->def->name);
-            priv->job.active = job;
-            priv->job.owner = virThreadSelfID();
-            priv->job.ownerAPI = g_strdup(virThreadJobGet());
-            priv->job.started = now;
-        } else {
-            VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
-                      virDomainAsyncJobTypeToString(asyncJob),
-                      obj, obj->def->name);
-            virDomainObjResetAsyncJob(&priv->job);
-            priv->job.current = virDomainJobDataInit(priv->job.jobDataPrivateCb);
-            priv->job.current->status = VIR_DOMAIN_JOB_STATUS_ACTIVE;
-            priv->job.asyncJob = asyncJob;
-            priv->job.asyncOwner = virThreadSelfID();
-            priv->job.asyncOwnerAPI = g_strdup(virThreadJobGet());
-            priv->job.asyncStarted = now;
-            priv->job.current->started = now;
-        }
-    }
-
-    if (agentJob) {
-        virDomainObjResetAgentJob(&priv->job);
-
-        VIR_DEBUG("Started agent job: %s (vm=%p name=%s job=%s async=%s)",
-                  virDomainAgentJobTypeToString(agentJob),
-                  obj, obj->def->name,
-                  virDomainJobTypeToString(priv->job.active),
-                  virDomainAsyncJobTypeToString(priv->job.asyncJob));
-        priv->job.agentActive = agentJob;
-        priv->job.agentOwner = virThreadSelfID();
-        priv->job.agentOwnerAPI = g_strdup(virThreadJobGet());
-        priv->job.agentStarted = now;
-    }
-
-    if (virDomainTrackJob(job) && priv->job.cb)
-        priv->job.cb->saveStatusPrivate(obj);
-
-    return 0;
-
- error:
-    ignore_value(virTimeMillisNow(&now));
-    if (priv->job.active && priv->job.started)
-        duration = now - priv->job.started;
-    if (priv->job.agentActive && priv->job.agentStarted)
-        agentDuration = now - priv->job.agentStarted;
-    if (priv->job.asyncJob && priv->job.asyncStarted)
-        asyncDuration = now - priv->job.asyncStarted;
-
-    VIR_WARN("Cannot start job (%s, %s, %s) in API %s for domain %s; "
-             "current job is (%s, %s, %s) "
-             "owned by (%llu %s, %llu %s, %llu %s (flags=0x%lx)) "
-             "for (%llus, %llus, %llus)",
-             virDomainJobTypeToString(job),
-             virDomainAgentJobTypeToString(agentJob),
-             virDomainAsyncJobTypeToString(asyncJob),
-             NULLSTR(currentAPI),
-             obj->def->name,
-             virDomainJobTypeToString(priv->job.active),
-             virDomainAgentJobTypeToString(priv->job.agentActive),
-             virDomainAsyncJobTypeToString(priv->job.asyncJob),
-             priv->job.owner, NULLSTR(priv->job.ownerAPI),
-             priv->job.agentOwner, NULLSTR(priv->job.agentOwnerAPI),
-             priv->job.asyncOwner, NULLSTR(priv->job.asyncOwnerAPI),
-             priv->job.apiFlags,
-             duration / 1000, agentDuration / 1000, asyncDuration / 1000);
-
-    if (job) {
-        if (nested || virDomainNestedJobAllowed(&priv->job, job))
-            blocker = priv->job.ownerAPI;
-        else
-            blocker = priv->job.asyncOwnerAPI;
-    }
-
-    if (agentJob)
-        agentBlocker = priv->job.agentOwnerAPI;
-
-    if (errno == ETIMEDOUT) {
-        if (blocker && agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s agent=%s)"),
-                           blocker, agentBlocker);
-        } else if (blocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s)"),
monitor=3D%s)"), - blocker); - } else if (agentBlocker) { - virReportError(VIR_ERR_OPERATION_TIMEOUT, - _("cannot acquire state change " - "lock (held by agent=3D%s)"), - agentBlocker); - } else { - virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s", - _("cannot acquire state change lock")); - } - ret =3D -2; - } else if (priv->job.maxQueuedJobs && - priv->job.jobsQueued > priv->job.maxQueuedJobs) { - if (blocker && agentBlocker) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("cannot acquire state change " - "lock (held by monitor=3D%s agent=3D%s) " - "due to max_queued limit"), - blocker, agentBlocker); - } else if (blocker) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("cannot acquire state change " - "lock (held by monitor=3D%s) " - "due to max_queued limit"), - blocker); - } else if (agentBlocker) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("cannot acquire state change " - "lock (held by agent=3D%s) " - "due to max_queued limit"), - agentBlocker); - } else { - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("cannot acquire state change lock " - "due to max_queued limit")); - } - ret =3D -2; - } else { - virReportSystemError(errno, "%s", _("cannot acquire job mutex")); - } - - cleanup: - priv->job.jobsQueued--; - return ret; -} - /* * obj must be locked before calling * @@ -950,9 +708,11 @@ qemuDomainObjBeginJobInternal(virDomainObj *obj, int qemuDomainObjBeginJob(virDomainObj *obj, virDomainJob job) { - if (qemuDomainObjBeginJobInternal(obj, job, - VIR_AGENT_JOB_NONE, - VIR_ASYNC_JOB_NONE, false) < 0) + qemuDomainObjPrivate *priv =3D obj->privateData; + + if (virDomainObjBeginJobInternal(obj, &priv->job, job, + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, false) < 0) return -1; return 0; } @@ -968,9 +728,11 @@ int qemuDomainObjBeginAgentJob(virDomainObj *obj, virDomainAgentJob agentJob) { - return qemuDomainObjBeginJobInternal(obj, VIR_JOB_NONE, - agentJob, - VIR_ASYNC_JOB_NONE, false); + qemuDomainObjPrivate *priv =3D obj->privateData; + + return virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_NONE, + agentJob, + VIR_ASYNC_JOB_NONE, false); } =20 int qemuDomainObjBeginAsyncJob(virDomainObj *obj, @@ -978,11 +740,11 @@ int qemuDomainObjBeginAsyncJob(virDomainObj *obj, virDomainJobOperation operation, unsigned long apiFlags) { - qemuDomainObjPrivate *priv; + qemuDomainObjPrivate *priv =3D obj->privateData; =20 - if (qemuDomainObjBeginJobInternal(obj, VIR_JOB_ASYNC, - VIR_AGENT_JOB_NONE, - asyncJob, false) < 0) + if (virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_ASYNC, + VIR_AGENT_JOB_NONE, + asyncJob, false) < 0) return -1; =20 priv =3D obj->privateData; @@ -1009,11 +771,11 @@ qemuDomainObjBeginNestedJob(virDomainObj *obj, priv->job.asyncOwner); } =20 - return qemuDomainObjBeginJobInternal(obj, - VIR_JOB_ASYNC_NESTED, - VIR_AGENT_JOB_NONE, - VIR_ASYNC_JOB_NONE, - false); + return virDomainObjBeginJobInternal(obj, &priv->job, + VIR_JOB_ASYNC_NESTED, + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, + false); } =20 /** @@ -1032,9 +794,11 @@ int qemuDomainObjBeginJobNowait(virDomainObj *obj, virDomainJob job) { - return qemuDomainObjBeginJobInternal(obj, job, - VIR_AGENT_JOB_NONE, - VIR_ASYNC_JOB_NONE, true); + qemuDomainObjPrivate *priv =3D obj->privateData; + + return virDomainObjBeginJobInternal(obj, &priv->job, job, + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, true); } =20 /* --=20 2.37.2 From nobody Fri May 17 05:50:09 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of redhat.com designates 170.10.129.124 as permitted sender) 
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 02/17] libxl: remove usage of virDomainJobData
Date: Mon, 5 Sep 2022 15:57:00 +0200

Struct virDomainJobData is meant for statistics of async jobs. Here
it was used to keep track of only two attributes, one of which also
exists in the generalized virDomainJobObj ("started") and one which
is always set to the same value whenever any job is active
("jobType").

This patch removes the usage & allocation of the virDomainJobData
structure and rewrites libxlDomainJobUpdateTime() into the more
suitable libxlDomainJobGetTimeElapsed().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/libxl/libxl_domain.c | 16 ++++++----------
 src/libxl/libxl_domain.h |  4 ++--
 src/libxl/libxl_driver.c | 16 ++++++++--------
 3 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 52e0aa1e60..6695ec670e 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -81,8 +81,7 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
     VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
     priv->job.active = job;
     priv->job.owner = virThreadSelfID();
-    priv->job.current->started = now;
-    priv->job.current->jobType = VIR_DOMAIN_JOB_UNBOUNDED;
+    priv->job.started = now;
 
     return 0;
 
@@ -129,23 +128,22 @@ libxlDomainObjEndJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
 }
 
 int
-libxlDomainJobUpdateTime(virDomainJobObj *job)
+libxlDomainJobGetTimeElapsed(virDomainJobObj *job, unsigned long long *timeElapsed)
 {
-    virDomainJobData *jobData = job->current;
     unsigned long long now;
 
-    if (!jobData->started)
+    if (!job->started)
         return 0;
 
     if (virTimeMillisNow(&now) < 0)
         return -1;
 
-    if (now < jobData->started) {
-        jobData->started = 0;
+    if (now < job->started) {
+        job->started = 0;
         return 0;
     }
 
-    jobData->timeElapsed = now - jobData->started;
+    *timeElapsed = now - job->started;
     return 0;
 }
 
@@ -167,8 +165,6 @@ libxlDomainObjPrivateAlloc(void *opaque G_GNUC_UNUSED)
         return NULL;
     }
 
-    priv->job.current = virDomainJobDataInit(NULL);
-
     return priv;
 }
 
diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h
index 5843a4921f..8ad56f1e88 100644
--- a/src/libxl/libxl_domain.h
+++ b/src/libxl/libxl_domain.h
@@ -62,8 +62,8 @@ libxlDomainObjEndJob(libxlDriverPrivate *driver,
                      virDomainObj *obj);
 
 int
-libxlDomainJobUpdateTime(virDomainJobObj *job)
-    G_GNUC_WARN_UNUSED_RESULT;
+libxlDomainJobGetTimeElapsed(virDomainJobObj *job,
+                             unsigned long long *timeElapsed);
 
 char *
 libxlDomainManagedSavePath(libxlDriverPrivate *driver,
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index f980039165..ff6c2f6506 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -5207,6 +5207,7 @@ libxlDomainGetJobInfo(virDomainPtr dom,
     libxlDomainObjPrivate *priv;
     virDomainObj *vm;
     int ret = -1;
+    unsigned long long timeElapsed = 0;
 
     if (!(vm = libxlDomObjFromDomain(dom)))
         goto cleanup;
@@ -5225,14 +5226,14 @@ libxlDomainGetJobInfo(virDomainPtr dom,
     /* In libxl we don't have an estimated completion time
      * thus we always set to unbounded and update time
      * for the active job. */
-    if (libxlDomainJobUpdateTime(&priv->job) < 0)
+    if (libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0)
         goto cleanup;
 
     /* setting only these two attributes is enough because libxl never sets
      * anything else */
     memset(info, 0, sizeof(*info));
-    info->type = priv->job.current->jobType;
-    info->timeElapsed = priv->job.current->timeElapsed;
+    info->type = VIR_DOMAIN_JOB_UNBOUNDED;
+    info->timeElapsed = timeElapsed;
     ret = 0;
 
 cleanup:
@@ -5249,9 +5250,9 @@ libxlDomainGetJobStats(virDomainPtr dom,
 {
     libxlDomainObjPrivate *priv;
     virDomainObj *vm;
-    virDomainJobData *jobData;
     int ret = -1;
     int maxparams = 0;
+    unsigned long long timeElapsed = 0;
 
     /* VIR_DOMAIN_JOB_STATS_COMPLETED not supported yet */
     virCheckFlags(0, -1);
@@ -5263,7 +5264,6 @@ libxlDomainGetJobStats(virDomainPtr dom,
         goto cleanup;
 
     priv = vm->privateData;
-    jobData = priv->job.current;
     if (!priv->job.active) {
         *type = VIR_DOMAIN_JOB_NONE;
         *params = NULL;
@@ -5275,15 +5275,15 @@ libxlDomainGetJobStats(virDomainPtr dom,
     /* In libxl we don't have an estimated completion time
      * thus we always set to unbounded and update time
      * for the active job.
      */
-    if (libxlDomainJobUpdateTime(&priv->job) < 0)
+    if (libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0)
         goto cleanup;
 
     if (virTypedParamsAddULLong(params, nparams, &maxparams,
                                 VIR_DOMAIN_JOB_TIME_ELAPSED,
-                                jobData->timeElapsed) < 0)
+                                timeElapsed) < 0)
         goto cleanup;
 
-    *type = jobData->jobType;
+    *type = VIR_DOMAIN_JOB_UNBOUNDED;
     ret = 0;
 
 cleanup:
-- 
2.37.2
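A caller-side sketch, not part of the series: with the helper above,
job-time reporting reduces to plain millisecond arithmetic on the
generalized virDomainJobObj. The surrounding function is assumed;
only libxlDomainJobGetTimeElapsed() and the VIR_DOMAIN_JOB_UNBOUNDED
convention come from the patch.

    unsigned long long timeElapsed = 0;

    /* priv->job.started was stamped by libxlDomainObjBeginJob();
     * the helper computes elapsed = now - started, in milliseconds */
    if (priv->job.active &&
        libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0)
        return -1;

    /* libxl has no completion estimate, so an active job is always
     * reported as VIR_DOMAIN_JOB_UNBOUNDED with this elapsed time */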
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 03/17] move files: hypervisor/domain_job -> conf/virdomainjob
Date: Mon, 5 Sep 2022 15:57:01 +0200
Message-Id: <0686f159f7b0bb7908ae65c8c8f4120327279054.1662385930.git.khanicov@redhat.com>

The following patches move the job object as a member into the
domain object. Because of this, domain_conf (where the domain object
is defined) needs to import the file with the job object.
It makes sense to move the jobs to the same level as domain_conf:
into src/conf/.

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 po/POTFILES                                  |  2 +-
 src/ch/ch_domain.h                           |  2 +-
 src/conf/meson.build                         |  1 +
 .../domain_job.c => conf/virdomainjob.c}     |  6 +--
 .../domain_job.h => conf/virdomainjob.h}     |  2 +-
 src/hypervisor/meson.build                   |  1 -
 src/libvirt_private.syms                     | 44 +++++++++----------
 src/libxl/libxl_domain.c                     |  1 -
 src/libxl/libxl_domain.h                     |  2 +-
 src/lxc/lxc_domain.c                         |  1 -
 src/lxc/lxc_domain.h                         |  2 +-
 src/qemu/qemu_domainjob.h                    |  2 +-
 12 files changed, 32 insertions(+), 34 deletions(-)
 rename src/{hypervisor/domain_job.c => conf/virdomainjob.c} (99%)
 rename src/{hypervisor/domain_job.h => conf/virdomainjob.h} (99%)

diff --git a/po/POTFILES b/po/POTFILES
index b9577e840d..169e2a41dc 100644
--- a/po/POTFILES
+++ b/po/POTFILES
@@ -53,6 +53,7 @@ src/conf/storage_conf.c
 src/conf/storage_encryption_conf.c
 src/conf/storage_source_conf.c
 src/conf/virchrdev.c
+src/conf/virdomainjob.c
 src/conf/virdomainmomentobjlist.c
 src/conf/virdomainobjlist.c
 src/conf/virnetworkobj.c
@@ -90,7 +91,6 @@ src/hyperv/hyperv_util.c
 src/hyperv/hyperv_wmi.c
 src/hypervisor/domain_cgroup.c
 src/hypervisor/domain_driver.c
-src/hypervisor/domain_job.c
 src/hypervisor/virclosecallbacks.c
 src/hypervisor/virhostdev.c
 src/interface/interface_backend_netcf.c
diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h
index b3bebd6b9a..27efe2feed 100644
--- a/src/ch/ch_domain.h
+++ b/src/ch/ch_domain.h
@@ -24,7 +24,7 @@
 #include "ch_monitor.h"
 #include "virchrdev.h"
 #include "vircgroup.h"
-#include "domain_job.h"
+#include "virdomainjob.h"
 
 /* Give up waiting for mutex after 30 seconds */
 #define CH_JOB_WAIT_TIME (1000ull * 30)
diff --git a/src/conf/meson.build b/src/conf/meson.build
index 5ef494c3ba..5116c23fe3 100644
--- a/src/conf/meson.build
+++ b/src/conf/meson.build
@@ -20,6 +20,7 @@ domain_conf_sources = [
   'numa_conf.c',
   'snapshot_conf.c',
   'virdomaincheckpointobjlist.c',
+  'virdomainjob.c',
   'virdomainmomentobjlist.c',
   'virdomainobjlist.c',
   'virdomainsnapshotobjlist.c',
diff --git a/src/hypervisor/domain_job.c b/src/conf/virdomainjob.c
similarity index 99%
rename from src/hypervisor/domain_job.c
rename to src/conf/virdomainjob.c
index 07ee5b4a3d..0515e1d507 100644
--- a/src/hypervisor/domain_job.c
+++ b/src/conf/virdomainjob.c
@@ -1,5 +1,5 @@
 /*
- * domain_job.c: job functions shared between hypervisor drivers
+ * virdomainjob.c: job functions shared between hypervisor drivers
  *
  * Copyright (C) 2022 Red Hat, Inc.
  * SPDX-License-Identifier: LGPL-2.1-or-later
@@ -8,7 +8,7 @@
 #include <config.h>
 #include <string.h>
 
-#include "domain_job.h"
+#include "virdomainjob.h"
 #include "viralloc.h"
 #include "virthreadjob.h"
 #include "virlog.h"
@@ -16,7 +16,7 @@
 
 #define VIR_FROM_THIS VIR_FROM_NONE
 
-VIR_LOG_INIT("hypervisor.domain_job");
+VIR_LOG_INIT("conf.virdomainjob");
 
 
 VIR_ENUM_IMPL(virDomainJob,
diff --git a/src/hypervisor/domain_job.h b/src/conf/virdomainjob.h
similarity index 99%
rename from src/hypervisor/domain_job.h
rename to src/conf/virdomainjob.h
index d7409c05f0..bdfdc91935 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/conf/virdomainjob.h
@@ -1,5 +1,5 @@
 /*
- * domain_job.h: job functions shared between hypervisor drivers
+ * virdomainjob.h: job functions shared between hypervisor drivers
  *
  * Copyright (C) 2022 Red Hat, Inc.
  * SPDX-License-Identifier: LGPL-2.1-or-later
diff --git a/src/hypervisor/meson.build b/src/hypervisor/meson.build
index 7532f30ee2..f35565b16b 100644
--- a/src/hypervisor/meson.build
+++ b/src/hypervisor/meson.build
@@ -3,7 +3,6 @@ hypervisor_sources = [
   'domain_driver.c',
   'virclosecallbacks.c',
   'virhostdev.c',
-  'domain_job.c',
 ]
 
 stateful_driver_source_files += files(hypervisor_sources)
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 08571cd4b4..5077db9c6b 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1175,6 +1175,28 @@ virDomainCheckpointUpdateRelations;
 virDomainListCheckpoints;
 
 
+#conf/virdomainjob.h
+virDomainAgentJobTypeToString;
+virDomainAsyncJobTypeFromString;
+virDomainAsyncJobTypeToString;
+virDomainJobDataCopy;
+virDomainJobDataFree;
+virDomainJobDataInit;
+virDomainJobStatusToType;
+virDomainJobTypeFromString;
+virDomainJobTypeToString;
+virDomainNestedJobAllowed;
+virDomainObjBeginJobInternal;
+virDomainObjCanSetJob;
+virDomainObjClearJob;
+virDomainObjInitJob;
+virDomainObjPreserveJob;
+virDomainObjResetAgentJob;
+virDomainObjResetAsyncJob;
+virDomainObjResetJob;
+virDomainTrackJob;
+
+
 # conf/virdomainmomentobjlist.h
 virDomainMomentDropChildren;
 virDomainMomentDropParent;
@@ -1585,28 +1607,6 @@ virDomainDriverParseBlkioDeviceStr;
 virDomainDriverSetupPersistentDefBlkioParams;
 
 
-# hypervisor/domain_job.h
-virDomainAgentJobTypeToString;
-virDomainAsyncJobTypeFromString;
-virDomainAsyncJobTypeToString;
-virDomainJobDataCopy;
-virDomainJobDataFree;
-virDomainJobDataInit;
-virDomainJobStatusToType;
-virDomainJobTypeFromString;
-virDomainJobTypeToString;
-virDomainNestedJobAllowed;
-virDomainObjBeginJobInternal;
-virDomainObjCanSetJob;
-virDomainObjClearJob;
-virDomainObjInitJob;
-virDomainObjPreserveJob;
-virDomainObjResetAgentJob;
-virDomainObjResetAsyncJob;
-virDomainObjResetJob;
-virDomainTrackJob;
-
-
 # hypervisor/virclosecallbacks.h
 virCloseCallbacksGet;
 virCloseCallbacksNew;
diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 6695ec670e..aadb13f461 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -37,7 +37,6 @@
 #include "xen_common.h"
 #include "driver.h"
 #include "domain_validate.h"
-#include "domain_job.h"
 
 #define VIR_FROM_THIS VIR_FROM_LIBXL
 
diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h
index 8ad56f1e88..451e76e311 100644
--- a/src/libxl/libxl_domain.h
+++ b/src/libxl/libxl_domain.h
@@ -24,7 +24,7 @@
 
 #include "libxl_conf.h"
 #include "virchrdev.h"
-#include "domain_job.h"
+#include "virdomainjob.h"
 
 
 typedef struct _libxlDomainObjPrivate libxlDomainObjPrivate;
diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c
index 61e59ec726..f234aaf39c 100644
--- a/src/lxc/lxc_domain.c
+++ b/src/lxc/lxc_domain.c
@@ -29,7 +29,6 @@
 #include "virsystemd.h"
 #include "virinitctl.h"
 #include "domain_driver.h"
-#include "domain_job.h"
 
 #define VIR_FROM_THIS VIR_FROM_LXC
 
diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h
index 82c36eb940..db622acc86 100644
--- a/src/lxc/lxc_domain.h
+++ b/src/lxc/lxc_domain.h
@@ -25,7 +25,7 @@
 #include "lxc_conf.h"
 #include "lxc_monitor.h"
 #include "virenum.h"
-#include "domain_job.h"
+#include "virdomainjob.h"
 
 
 typedef enum {
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index bb3c7ede14..23eadc26a7 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -20,7 +20,7 @@
 
 #include <glib-object.h>
 #include "qemu_monitor.h"
-#include "domain_job.h"
"virdomainjob.h" =20 =20 typedef enum { --=20 2.37.2 From nobody Fri May 17 05:50:09 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of redhat.com designates 170.10.133.124 as permitted sender) client-ip=170.10.133.124; envelope-from=libvir-list-bounces@redhat.com; helo=us-smtp-delivery-124.mimecast.com; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=libvir-list-bounces@redhat.com; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1662386287; cv=none; d=zohomail.com; s=zohoarc; b=gvySgsFjEo2eWETEQb9MSUUJIqSdc/DKja09RFTFqkXOGNiuZhL4eIReTFnZRlm7qQZaath/9jozneliIpzWpN0c2GH78a/xO+yWO/PVvmypNHMnYiiFjer40wtPlJc7H5w82xCGp1s8Gz0MsVSrL/YOAYV01DFMfiDZPFGrHEs= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1662386287; h=Content-Type:Content-Transfer-Encoding:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=AUFETE7i2IpHwyXF7yvGtVKGyjT1R+QisZSw/xl0G9Q=; b=goZ4nW+ezimNx9/ZeOLKqgOb7npxFAdT+gQSkoSpZD/qoWgpZHXdQ8inD8fXvSn2+cL3futs/ulQ/44BJu8ZAuvfRpVV6UEdQiItAiezc8f7ii459fH5cV9V/sVbCtXOhGpoT3bGOwFtaTpQhhs9acutiRZX64bSChQRT65z6Co= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=libvir-list-bounces@redhat.com; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mx.zohomail.com with SMTPS id 1662386287853499.50946265249434; Mon, 5 Sep 2022 06:58:07 -0700 (PDT) Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-297-va_LI5TLN0-6F-KTGSzfNQ-1; Mon, 05 Sep 2022 09:57:28 -0400 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5A1111824637; Mon, 5 Sep 2022 13:57:25 +0000 (UTC) Received: from mm-prod-listman-01.mail-001.prod.us-east-1.aws.redhat.com (unknown [10.30.29.100]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3F84E1415137; Mon, 5 Sep 2022 13:57:25 +0000 (UTC) Received: from mm-prod-listman-01.mail-001.prod.us-east-1.aws.redhat.com (localhost [IPv6:::1]) by mm-prod-listman-01.mail-001.prod.us-east-1.aws.redhat.com (Postfix) with ESMTP id D4287194E01A; Mon, 5 Sep 2022 13:57:24 +0000 (UTC) Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) by mm-prod-listman-01.mail-001.prod.us-east-1.aws.redhat.com (Postfix) with ESMTP id CA1A1194E113 for ; Mon, 5 Sep 2022 13:57:23 +0000 (UTC) Received: by smtp.corp.redhat.com (Postfix) id B12942026D64; Mon, 5 Sep 2022 13:57:23 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.43.2.166]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5CD092026D4C for ; Mon, 5 Sep 2022 13:57:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1662386285; h=from:from:sender:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:mime-version:mime-version: content-type:content-type: 
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 04/17] virdomainjob: add check for callbacks
Date: Mon, 5 Sep 2022 15:57:02 +0200
Message-Id: <18b764283b32956c460ce119ee11de990ae624e3.1662385930.git.khanicov@redhat.com>

There may be cases where the callback structure exists but contains
no callbacks (see the following patches). This patch adds checks for
the specific callbacks before using them.

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/conf/virdomainjob.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index 0515e1d507..5ab4bdc18b 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -138,7 +138,7 @@ virDomainObjInitJob(virDomainJobObj *job,
         return -1;
     }
 
-    if (job->cb &&
+    if (job->cb && job->cb->allocJobPrivate &&
         !(job->privateData = job->cb->allocJobPrivate())) {
         virCondDestroy(&job->cond);
         virCondDestroy(&job->asyncCond);
@@ -180,7 +180,7 @@ virDomainObjResetAsyncJob(virDomainJobObj *job)
     g_clear_pointer(&job->current, virDomainJobDataFree);
     job->apiFlags = 0;
 
-    if (job->cb)
+    if (job->cb && job->cb->resetJobPrivate)
         job->cb->resetJobPrivate(job->privateData);
 }
 
@@ -206,7 +206,7 @@ virDomainObjPreserveJob(virDomainJobObj *currJob,
     job->privateData = g_steal_pointer(&currJob->privateData);
     job->apiFlags = currJob->apiFlags;
 
-    if (currJob->cb &&
+    if (currJob->cb && currJob->cb->allocJobPrivate &&
         !(currJob->privateData = currJob->cb->allocJobPrivate()))
         return -1;
     job->cb = currJob->cb;
@@ -226,7 +226,7 @@ virDomainObjClearJob(virDomainJobObj *job)
     virCondDestroy(&job->cond);
     virCondDestroy(&job->asyncCond);
 
-    if (job->cb)
+    if (job->cb && job->cb->freeJobPrivate)
         g_clear_pointer(&job->privateData, job->cb->freeJobPrivate);
 }
 
-- 
2.37.2
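The practical effect of the patch above is that a driver-supplied
callback table may now be sparse. A hypothetical sketch of the
pattern — all 'example*' names are invented; the member names are
those of virDomainObjPrivateJobCallbacks as used in this series:

    static void
    exampleSaveStatus(virDomainObj *obj G_GNUC_UNUSED)
    {
        /* e.g. persist job state into the domain status XML */
    }

    /* allocJobPrivate, freeJobPrivate and resetJobPrivate stay NULL:
     * this driver keeps no per-job private data, and after this
     * patch the job code tolerates the unset members. */
    static virDomainObjPrivateJobCallbacks exampleJobCbs = {
        .saveStatusPrivate = exampleSaveStatus,
    };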
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 05/17] conf: extend xmlopt with job config & add job object into domain object
Date: Mon, 5 Sep 2022 15:57:03 +0200
Message-Id: <21507e56e221d0687c00fccb0eb476a8b525bae0.1662385930.git.khanicov@redhat.com>

This patch adds the generalized job object into the domain object so
that it can be used by all drivers without having to be extracted from
each driver's private data. Because of this, the job object needs to be
created and set during the creation of the domain object.

It also extends xmlopt with an optional job config that carries the
virDomainJobObj callbacks, their private-data callbacks and one variable
(maxQueuedJobs).

This patch includes:
* addition of virDomainJobObj into virDomainObj (used in the following
  patches)
* extension of xmlopt with the job config structure
* a new function for freeing the virDomainJobObj

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
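Before the diff below, a hedged sketch of how a driver might wire up the
extended virDomainXMLOptionNew() signature. The foo* names are
placeholders invented for the example, not libvirt symbols; the types
and the new sixth parameter are the ones this patch adds.

typedef struct {
    int dummy;                 /* whatever per-job state a driver keeps */
} fooDomainJobPrivate;

static void *
fooAllocJobPrivate(void)
{
    return g_new0(fooDomainJobPrivate, 1);
}

static void
fooFreeJobPrivate(void *data)
{
    g_free(data);
}

static virDomainDefParserConfig fooDomainDefParserConfig;

static virDomainJobObjConfig fooDomainJobConfig = {
    .cb = {
        .allocJobPrivate = fooAllocJobPrivate,
        .freeJobPrivate = fooFreeJobPrivate,
    },
    .maxQueuedJobs = 0,        /* left at the default */
};

static virDomainXMLOption *
fooDriverCreateXMLConf(void)
{
    /* Drivers without job-private state simply pass NULL as the new
     * sixth argument, which is what most call sites in this patch do. */
    return virDomainXMLOptionNew(&fooDomainDefParserConfig,
                                 NULL, NULL, NULL, NULL,
                                 &fooDomainJobConfig);
}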
---
 src/bhyve/bhyve_domain.c      |  2 +-
 src/ch/ch_conf.c              |  2 +-
 src/conf/domain_conf.c        | 13 ++++++++++++-
 src/conf/domain_conf.h        | 16 +++++++++++++++-
 src/conf/virconftypes.h       |  2 ++
 src/conf/virdomainjob.c       | 11 +++++++++++
 src/conf/virdomainjob.h       |  5 ++++-
 src/hyperv/hyperv_driver.c    |  2 +-
 src/libvirt_private.syms      |  1 +
 src/libxl/libxl_conf.c        |  2 +-
 src/lxc/lxc_conf.c            |  2 +-
 src/openvz/openvz_conf.c      |  2 +-
 src/qemu/qemu_conf.c          |  3 ++-
 src/qemu/qemu_process.c       |  2 +-
 src/security/virt-aa-helper.c |  2 +-
 src/test/test_driver.c        |  2 +-
 src/vbox/vbox_common.c        |  2 +-
 src/vmware/vmware_driver.c    |  2 +-
 src/vmx/vmx.c                 |  2 +-
 src/vz/vz_driver.c            |  2 +-
 tests/bhyveargv2xmltest.c     |  2 +-
 tests/testutils.c             |  2 +-
 22 files changed, 62 insertions(+), 19 deletions(-)

diff --git a/src/bhyve/bhyve_domain.c b/src/bhyve/bhyve_domain.c
index 69555a3efc..b7b2db57b8 100644
--- a/src/bhyve/bhyve_domain.c
+++ b/src/bhyve/bhyve_domain.c
@@ -221,7 +221,7 @@ virBhyveDriverCreateXMLConf(struct _bhyveConn *driver)
     return virDomainXMLOptionNew(&virBhyveDriverDomainDefParserConfig,
                                  &virBhyveDriverPrivateDataCallbacks,
                                  &virBhyveDriverDomainXMLNamespace,
-                                 NULL, NULL);
+                                 NULL, NULL, NULL);
 }
 
 
diff --git a/src/ch/ch_conf.c b/src/ch/ch_conf.c
index 775596e9f5..0d07fa270c 100644
--- a/src/ch/ch_conf.c
+++ b/src/ch/ch_conf.c
@@ -110,7 +110,7 @@ chDomainXMLConfInit(virCHDriver *driver)
     virCHDriverDomainDefParserConfig.priv = driver;
     return virDomainXMLOptionNew(&virCHDriverDomainDefParserConfig,
                                  &virCHDriverPrivateDataCallbacks,
-                                 NULL, NULL, NULL);
+                                 NULL, NULL, NULL, NULL);
 }
 
 virCHDriverConfig *
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 237f1d6835..35ac2bb1c2 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1604,7 +1604,8 @@ virDomainXMLOptionNew(virDomainDefParserConfig *config,
                       virDomainXMLPrivateDataCallbacks *priv,
                       virXMLNamespace *xmlns,
                       virDomainABIStability *abi,
-                      virSaveCookieCallbacks *saveCookie)
+                      virSaveCookieCallbacks *saveCookie,
+                      virDomainJobObjConfig *jobConfig)
 {
     virDomainXMLOption *xmlopt;
 
@@ -1629,6 +1630,9 @@ virDomainXMLOptionNew(virDomainDefParserConfig *config,
     if (saveCookie)
         xmlopt->saveCookie = *saveCookie;
 
+    if (jobConfig)
+        xmlopt->jobObjConfig = *jobConfig;
+
     /* Technically this forbids to use one of Xerox's MAC address prefixes in
      * our hypervisor drivers. This shouldn't ever be a problem.
      *
@@ -3857,6 +3861,7 @@ static void virDomainObjDispose(void *obj)
     virDomainObjDeprecationFree(dom);
     virDomainSnapshotObjListFree(dom->snapshots);
     virDomainCheckpointObjListFree(dom->checkpoints);
+    virDomainJobObjFree(dom->job);
 }
 
 virDomainObj *
@@ -3889,6 +3894,12 @@ virDomainObjNew(virDomainXMLOption *xmlopt)
     if (!(domain->checkpoints = virDomainCheckpointObjListNew()))
         goto error;
 
+    domain->job = g_new0(virDomainJobObj, 1);
+    if (virDomainObjInitJob(domain->job,
+                            &xmlopt->jobObjConfig.cb,
+                            &xmlopt->jobObjConfig.jobDataPrivateCb) < 0)
+        goto error;
+
     virObjectLock(domain);
     virDomainObjSetState(domain, VIR_DOMAIN_SHUTOFF,
                          VIR_DOMAIN_SHUTOFF_UNKNOWN);
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 2b1497d78d..42fa2a4400 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -56,6 +56,7 @@
 #include "virsavecookie.h"
 #include "virresctrl.h"
 #include "virenum.h"
+#include "virdomainjob.h"
 
 /* Flags for the 'type' field in virDomainDeviceDef */
 typedef enum {
@@ -3093,6 +3094,8 @@ struct _virDomainObj {
     virObjectLockable parent;
     virCond cond;
 
+    virDomainJobObj *job;
+
     pid_t pid; /* 0 for no PID, avoid negative values like -1 */
     virDomainStateReason state;
 
@@ -3277,11 +3280,19 @@ struct _virDomainABIStability {
     virDomainABIStabilityDomain domain;
 };
 
+
+struct _virDomainJobObjConfig {
+    virDomainObjPrivateJobCallbacks cb;
+    virDomainJobDataPrivateDataCallbacks jobDataPrivateCb;
+    unsigned int maxQueuedJobs;
+};
+
 virDomainXMLOption *virDomainXMLOptionNew(virDomainDefParserConfig *config,
                                           virDomainXMLPrivateDataCallbacks *priv,
                                           virXMLNamespace *xmlns,
                                           virDomainABIStability *abi,
-                                          virSaveCookieCallbacks *saveCookie);
+                                          virSaveCookieCallbacks *saveCookie,
+                                          virDomainJobObjConfig *jobConfig);
 
 virSaveCookieCallbacks *
 virDomainXMLOptionGetSaveCookie(virDomainXMLOption *xmlopt);
@@ -3321,6 +3332,9 @@ struct _virDomainXMLOption {
 
     /* Snapshot postparse callbacks */
     virDomainMomentPostParseCallback momentPostParse;
+
+    /* virDomainJobObj callbacks, private data callbacks and defaults */
+    virDomainJobObjConfig jobObjConfig;
 };
 G_DEFINE_AUTOPTR_CLEANUP_FUNC(virDomainXMLOption, virObjectUnref);
 
diff --git a/src/conf/virconftypes.h b/src/conf/virconftypes.h
index c3f1c5fa01..154805091a 100644
--- a/src/conf/virconftypes.h
+++ b/src/conf/virconftypes.h
@@ -150,6 +150,8 @@ typedef struct _virDomainIdMapEntry virDomainIdMapEntry;
 
 typedef struct _virDomainInputDef virDomainInputDef;
 
+typedef struct _virDomainJobObjConfig virDomainJobObjConfig;
+
 typedef struct _virDomainKeyWrapDef virDomainKeyWrapDef;
 
 typedef struct _virDomainLeaseDef virDomainLeaseDef;
diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index 5ab4bdc18b..a073ff08cd 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -13,6 +13,7 @@
 #include "virthreadjob.h"
 #include "virlog.h"
 #include "virtime.h"
+#include "domain_conf.h"
 
 #define VIR_FROM_THIS VIR_FROM_NONE
 
@@ -230,6 +231,16 @@ virDomainObjClearJob(virDomainJobObj *job)
         g_clear_pointer(&job->privateData, job->cb->freeJobPrivate);
 }
 
+void
+virDomainJobObjFree(virDomainJobObj *job)
+{
+    if (!job)
+        return;
+
+    virDomainObjClearJob(job);
+    g_free(job);
+}
+
 bool
 virDomainTrackJob(virDomainJob job)
 {
diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h
index bdfdc91935..091d951aa6 100644
--- a/src/conf/virdomainjob.h
+++ b/src/conf/virdomainjob.h
@@ -11,7 +11,8 @@
 #include "virenum.h"
 #include "virthread.h"
 #include "virbuffer.h"
-#include "domain_conf.h"
+#include "virconftypes.h"
+#include "virxml.h"
 
 #define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))
 #define VIR_JOB_DEFAULT_MASK \
@@ -227,6 +228,8 @@ int virDomainObjPreserveJob(virDomainJobObj *currJob,
 void virDomainObjClearJob(virDomainJobObj *job);
 G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(virDomainJobObj, virDomainObjClearJob);
 
+void virDomainJobObjFree(virDomainJobObj *job);
+
 bool virDomainTrackJob(virDomainJob job);
 
 bool virDomainNestedJobAllowed(virDomainJobObj *jobs, virDomainJob newJob);
diff --git a/src/hyperv/hyperv_driver.c b/src/hyperv/hyperv_driver.c
index 288c01ad14..3929e27e09 100644
--- a/src/hyperv/hyperv_driver.c
+++ b/src/hyperv/hyperv_driver.c
@@ -1766,7 +1766,7 @@ hypervConnectOpen(virConnectPtr conn, virConnectAuthPtr auth,
         goto cleanup;
 
     /* init xmlopt for domain XML */
-    priv->xmlopt = virDomainXMLOptionNew(&hypervDomainDefParserConfig, NULL, NULL, NULL, NULL);
+    priv->xmlopt = virDomainXMLOptionNew(&hypervDomainDefParserConfig, NULL, NULL, NULL, NULL, NULL);
 
     if (hypervGetOperatingSystem(priv, &os) < 0)
         goto cleanup;
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 5077db9c6b..cd0b94297c 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1182,6 +1182,7 @@ virDomainAsyncJobTypeToString;
 virDomainJobDataCopy;
 virDomainJobDataFree;
 virDomainJobDataInit;
+virDomainJobObjFree;
 virDomainJobStatusToType;
 virDomainJobTypeFromString;
 virDomainJobTypeToString;
diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c
index 06f338ef90..918303c8d0 100644
--- a/src/libxl/libxl_conf.c
+++ b/src/libxl/libxl_conf.c
@@ -2486,5 +2486,5 @@ libxlCreateXMLConf(libxlDriverPrivate *driver)
     return virDomainXMLOptionNew(&libxlDomainDefParserConfig,
                                  &libxlDomainXMLPrivateDataCallbacks,
                                  &libxlDriverDomainXMLNamespace,
-                                 NULL, NULL);
+                                 NULL, NULL, NULL);
 }
diff --git a/src/lxc/lxc_conf.c b/src/lxc/lxc_conf.c
index 7834801fb5..fefe63bf20 100644
--- a/src/lxc/lxc_conf.c
+++ b/src/lxc/lxc_conf.c
@@ -189,7 +189,7 @@ lxcDomainXMLConfInit(virLXCDriver *driver, const char *defsecmodel)
     return virDomainXMLOptionNew(&virLXCDriverDomainDefParserConfig,
                                  &virLXCDriverPrivateDataCallbacks,
                                  &virLXCDriverDomainXMLNamespace,
-                                 NULL, NULL);
+                                 NULL, NULL, NULL);
 }
 
 
diff --git a/src/openvz/openvz_conf.c b/src/openvz/openvz_conf.c
index c94f9b8577..c28d0e9f43 100644
--- a/src/openvz/openvz_conf.c
+++ b/src/openvz/openvz_conf.c
@@ -1062,5 +1062,5 @@ virDomainXMLOption *openvzXMLOption(struct openvz_driver *driver)
 {
     openvzDomainDefParserConfig.priv = driver;
     return virDomainXMLOptionNew(&openvzDomainDefParserConfig,
-                                 NULL, NULL, NULL, NULL);
+                                 NULL, NULL, NULL, NULL, NULL);
 }
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 3b75cdeb95..2809ae53b9 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -1278,7 +1278,8 @@ virQEMUDriverCreateXMLConf(virQEMUDriver *driver,
                                  &virQEMUDriverPrivateDataCallbacks,
                                  &virQEMUDriverDomainXMLNamespace,
                                  &virQEMUDriverDomainABIStability,
-                                 &virQEMUDriverDomainSaveCookie);
+                                 &virQEMUDriverDomainSaveCookie,
+                                 NULL);
 }
 
 
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 32f03ff79a..251bca4bdf 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -9295,7 +9295,7 @@ qemuProcessQMPConnectMonitor(qemuProcessQMP *proc)
     monConfig.data.nix.path = proc->monpath;
     monConfig.data.nix.listen = false;
 
-    if (!(xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL)) ||
+    if (!(xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL, NULL)) ||
         !(proc->vm = virDomainObjNew(xmlopt)) ||
         !(proc->vm->def = virDomainDefNew(xmlopt)))
         return -1;
diff --git a/src/security/virt-aa-helper.c b/src/security/virt-aa-helper.c
index 2d0bc99c73..f338488da3 100644
--- a/src/security/virt-aa-helper.c
+++ b/src/security/virt-aa-helper.c
@@ -632,7 +632,7 @@ get_definition(vahControl * ctl, const char *xmlStr)
     }
 
     if (!(ctl->xmlopt = virDomainXMLOptionNew(&virAAHelperDomainDefParserConfig,
-                                              NULL, NULL, NULL, NULL))) {
+                                              NULL, NULL, NULL, NULL, NULL))) {
         vah_error(ctl, 0, _("Failed to create XML config object"));
         return -1;
     }
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 641a141b6a..686ff051a8 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -460,7 +460,7 @@ testDriverNew(void)
     if (!(ret = virObjectLockableNew(testDriverClass)))
         return NULL;
 
-    if (!(ret->xmlopt = virDomainXMLOptionNew(&config, &privatecb, &ns, NULL, NULL)) ||
+    if (!(ret->xmlopt = virDomainXMLOptionNew(&config, &privatecb, &ns, NULL, NULL, NULL)) ||
         !(ret->eventState = virObjectEventStateNew()) ||
         !(ret->ifaces = virInterfaceObjListNew()) ||
         !(ret->domains = virDomainObjListNew()) ||
diff --git a/src/vbox/vbox_common.c b/src/vbox/vbox_common.c
index e249980195..bd77641d39 100644
--- a/src/vbox/vbox_common.c
+++ b/src/vbox/vbox_common.c
@@ -143,7 +143,7 @@ vboxDriverObjNew(void)
 
     if (!(driver->caps = vboxCapsInit()) ||
         !(driver->xmlopt = virDomainXMLOptionNew(&vboxDomainDefParserConfig,
-                                                 NULL, NULL, NULL, NULL)))
+                                                 NULL, NULL, NULL, NULL, NULL)))
         goto cleanup;
 
     return driver;
diff --git a/src/vmware/vmware_driver.c b/src/vmware/vmware_driver.c
index 2a18d73988..3c578434f3 100644
--- a/src/vmware/vmware_driver.c
+++ b/src/vmware/vmware_driver.c
@@ -139,7 +139,7 @@ vmwareDomainXMLConfigInit(struct vmware_driver *driver)
                                                   .free = vmwareDataFreeFunc };
     vmwareDomainDefParserConfig.priv = driver;
     return virDomainXMLOptionNew(&vmwareDomainDefParserConfig, &priv,
-                                 NULL, NULL, NULL);
+                                 NULL, NULL, NULL, NULL);
 }
 
 static virDrvOpenStatus
diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c
index fa5cc56146..bf0dba17d8 100644
--- a/src/vmx/vmx.c
+++ b/src/vmx/vmx.c
@@ -696,7 +696,7 @@ virVMXDomainXMLConfInit(virCaps *caps)
 {
     virVMXDomainDefParserConfig.priv = caps;
     return virDomainXMLOptionNew(&virVMXDomainDefParserConfig, NULL,
-                                 &virVMXDomainXMLNamespace, NULL, NULL);
+                                 &virVMXDomainXMLNamespace, NULL, NULL, NULL);
 }
 
 char *
diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 017c084ede..571d895167 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -331,7 +331,7 @@ vzDriverObjNew(void)
     if (!(driver->caps = vzBuildCapabilities()) ||
         !(driver->xmlopt = virDomainXMLOptionNew(&vzDomainDefParserConfig,
                                                  &vzDomainXMLPrivateDataCallbacksPtr,
-                                                 NULL, NULL, NULL)) ||
+                                                 NULL, NULL, NULL, NULL)) ||
         !(driver->domains = virDomainObjListNew()) ||
         !(driver->domainEventState = virObjectEventStateNew()) ||
         (vzInitVersion(driver) < 0) ||
diff --git a/tests/bhyveargv2xmltest.c b/tests/bhyveargv2xmltest.c
index 2ccc1379b9..92189a2e58 100644
--- a/tests/bhyveargv2xmltest.c
+++ b/tests/bhyveargv2xmltest.c
@@ -113,7 +113,7 @@ mymain(void)
     if ((driver.caps = virBhyveCapsBuild()) == NULL)
         return EXIT_FAILURE;
 
-    if ((driver.xmlopt = virDomainXMLOptionNew(NULL, NULL,
+    if ((driver.xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL,
                                                NULL, NULL, NULL)) == NULL)
         return EXIT_FAILURE;
 
diff --git a/tests/testutils.c b/tests/testutils.c
index 30f91dcd28..8fe624f029 100644
--- a/tests/testutils.c
+++ b/tests/testutils.c
@@ -989,7 +989,7 @@ static virDomainDefParserConfig virTestGenericDomainDefParserConfig = {
 virDomainXMLOption *virTestGenericDomainXMLConfInit(void)
 {
     return virDomainXMLOptionNew(&virTestGenericDomainDefParserConfig,
-                                 NULL, NULL, NULL, NULL);
+                                 NULL, NULL, NULL, NULL, NULL);
 }
 
 
-- 
2.37.2
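The new virDomainJobObjFree() in the patch above follows libvirt's usual
NULL-tolerant free convention: virDomainObjNew() allocates the job,
virDomainObjDispose() frees it, and error paths may hand in a pointer
that was never set. A minimal generic rendering of the same pattern
(demo* names are illustrative, not libvirt symbols):

#include <stdlib.h>

typedef struct {
    void *privateData;
} demoJob;

static void
demoJobFree(demoJob *job)
{
    if (!job)                  /* safe on error paths before allocation */
        return;
    free(job->privateData);    /* clear the contents first */
    free(job);                 /* then the object itself */
}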
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 06/17] virdomainjob: make drivers use job object in the domain object
Date: Mon, 5 Sep 2022 15:57:04 +0200

This patch makes the drivers use the job object directly in the domain
object and removes the job object from the private data of every driver
that used one, together with the now-redundant code that initialized and
freed that private copy.
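The shape of the conversion repeated throughout the diff below,
condensed into a sketch (illustrative only: fooDomainObjPrivate stands
in for any driver's private-data struct and VIR_JOB_MODIFY is just an
example job type; virDomainObjBeginJobInternal() and obj->job are the
real symbols this series uses).

typedef struct {
    virDomainJobObj job;       /* the per-driver copy being removed */
} fooDomainObjPrivate;

/* Before: the job lived in driver private data, so generic code
 * could not reach it without driver knowledge. */
static int
fooBeginJobOld(virDomainObj *obj)
{
    fooDomainObjPrivate *priv = obj->privateData;

    return virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_MODIFY,
                                        VIR_AGENT_JOB_NONE,
                                        VIR_ASYNC_JOB_NONE, false);
}

/* After: the job is embedded in virDomainObj itself. */
static int
fooBeginJobNew(virDomainObj *obj)
{
    return virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_MODIFY,
                                        VIR_AGENT_JOB_NONE,
                                        VIR_ASYNC_JOB_NONE, false);
}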
Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/ch/ch_domain.c                 |  29 ++--
 src/ch/ch_domain.h                 |   2 -
 src/conf/domain_conf.c             |   1 +
 src/libxl/libxl_domain.c           |  31 ++--
 src/libxl/libxl_domain.h           |   2 -
 src/libxl/libxl_driver.c           |  12 +-
 src/lxc/lxc_domain.c               |  28 ++--
 src/lxc/lxc_domain.h               |   2 -
 src/qemu/qemu_backup.c             |  18 +--
 src/qemu/qemu_conf.c               |   6 +-
 src/qemu/qemu_domain.c             |  69 +++++----
 src/qemu/qemu_domain.h             |   3 +-
 src/qemu/qemu_domainjob.c          | 223 +++++++++++------------------
 src/qemu/qemu_domainjob.h          |   2 -
 src/qemu/qemu_driver.c             |  72 +++++-----
 src/qemu/qemu_migration.c          | 193 ++++++++++++-------------
 src/qemu/qemu_migration_cookie.c   |  17 ++-
 src/qemu/qemu_migration_cookie.h   |   3 +-
 src/qemu/qemu_migration_params.c   |   8 +-
 src/qemu/qemu_process.c            |  73 +++++-----
 src/qemu/qemu_snapshot.c           |   2 +-
 tests/qemumigrationcookiexmltest.c |   3 +-
 22 files changed, 347 insertions(+), 452 deletions(-)

diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
index 89b494b388..9ddf9a8584 100644
--- a/src/ch/ch_domain.c
+++ b/src/ch/ch_domain.c
@@ -44,7 +44,6 @@ VIR_LOG_INIT("ch.ch_domain");
 int
 virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
 {
-    virCHDomainObjPrivate *priv = obj->privateData;
     unsigned long long now;
     unsigned long long then;
 
@@ -52,16 +51,16 @@ virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
         return -1;
     then = now + CH_JOB_WAIT_TIME;
 
-    while (priv->job.active) {
+    while (obj->job->active) {
         VIR_DEBUG("Wait normal job condition for starting job: %s",
                   virDomainJobTypeToString(job));
-        if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0) {
+        if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0) {
             VIR_WARN("Cannot start job (%s) for domain %s;"
                      " current job is (%s) owned by (%llu)",
                      virDomainJobTypeToString(job),
                      obj->def->name,
-                     virDomainJobTypeToString(priv->job.active),
-                     priv->job.owner);
+                     virDomainJobTypeToString(obj->job->active),
+                     obj->job->owner);
 
             if (errno == ETIMEDOUT)
                 virReportError(VIR_ERR_OPERATION_TIMEOUT,
@@ -73,11 +72,11 @@ virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
         }
     }
 
-    virDomainObjResetJob(&priv->job);
+    virDomainObjResetJob(obj->job);
 
     VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
-    priv->job.active = job;
-    priv->job.owner = virThreadSelfID();
+    obj->job->active = job;
+    obj->job->owner = virThreadSelfID();
 
     return 0;
 }
@@ -91,14 +90,13 @@ virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
 void
 virCHDomainObjEndJob(virDomainObj *obj)
 {
-    virCHDomainObjPrivate *priv = obj->privateData;
-    virDomainJob job = priv->job.active;
+    virDomainJob job = obj->job->active;
 
     VIR_DEBUG("Stopping job: %s",
               virDomainJobTypeToString(job));
 
-    virDomainObjResetJob(&priv->job);
-    virCondSignal(&priv->job.cond);
+    virDomainObjResetJob(obj->job);
+    virCondSignal(&obj->job->cond);
 }
 
 void
@@ -117,13 +115,7 @@ virCHDomainObjPrivateAlloc(void *opaque)
 
     priv = g_new0(virCHDomainObjPrivate, 1);
 
-    if (virDomainObjInitJob(&priv->job, NULL, NULL) < 0) {
-        g_free(priv);
-        return NULL;
-    }
-
     if (!(priv->chrdevs = virChrdevAlloc())) {
-        virDomainObjClearJob(&priv->job);
         g_free(priv);
         return NULL;
     }
@@ -138,7 +130,6 @@ virCHDomainObjPrivateFree(void *data)
     virCHDomainObjPrivate *priv = data;
 
     virChrdevFree(priv->chrdevs);
-    virDomainObjClearJob(&priv->job);
     g_free(priv->machineName);
     g_free(priv);
 }
diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h
index 27efe2feed..c7dfde601e 100644
--- a/src/ch/ch_domain.h
+++ b/src/ch/ch_domain.h
@@ -32,8 +32,6 @@
 
 typedef struct _virCHDomainObjPrivate virCHDomainObjPrivate;
 struct _virCHDomainObjPrivate {
-    virDomainJobObj job;
-
     virChrdevs *chrdevs;
     virCHDriver *driver;
     virCHMonitor *monitor;
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 35ac2bb1c2..4c2cee3052 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -60,6 +60,7 @@
 #include "virdomainsnapshotobjlist.h"
 #include "virdomaincheckpointobjlist.h"
 #include "virutil.h"
+#include "virdomainjob.h"
 
 #define VIR_FROM_THIS VIR_FROM_DOMAIN
 
diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index aadb13f461..80a362df46 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -60,7 +60,6 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
                        virDomainObj *obj,
                        virDomainJob job)
 {
-    libxlDomainObjPrivate *priv = obj->privateData;
     unsigned long long now;
     unsigned long long then;
 
@@ -68,19 +67,19 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
         return -1;
     then = now + LIBXL_JOB_WAIT_TIME;
 
-    while (priv->job.active) {
+    while (obj->job->active) {
         VIR_DEBUG("Wait normal job condition for starting job: %s",
                   virDomainJobTypeToString(job));
-        if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0)
+        if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0)
             goto error;
     }
 
-    virDomainObjResetJob(&priv->job);
+    virDomainObjResetJob(obj->job);
 
     VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
-    priv->job.active = job;
-    priv->job.owner = virThreadSelfID();
-    priv->job.started = now;
+    obj->job->active = job;
+    obj->job->owner = virThreadSelfID();
+    obj->job->started = now;
 
     return 0;
 
@@ -89,8 +88,8 @@ libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
              " current job is (%s) owned by (%llu)",
              virDomainJobTypeToString(job),
              obj->def->name,
-             virDomainJobTypeToString(priv->job.active),
-             priv->job.owner);
+             virDomainJobTypeToString(obj->job->active),
+             obj->job->owner);
 
     if (errno == ETIMEDOUT)
         virReportError(VIR_ERR_OPERATION_TIMEOUT,
@@ -116,14 +115,13 @@ void
 libxlDomainObjEndJob(libxlDriverPrivate *driver G_GNUC_UNUSED,
                      virDomainObj *obj)
 {
-    libxlDomainObjPrivate *priv = obj->privateData;
-    virDomainJob job = priv->job.active;
+    virDomainJob job = obj->job->active;
 
     VIR_DEBUG("Stopping job: %s",
               virDomainJobTypeToString(job));
 
-    virDomainObjResetJob(&priv->job);
-    virCondSignal(&priv->job.cond);
+    virDomainObjResetJob(obj->job);
+    virCondSignal(&obj->job->cond);
 }
 
 int
@@ -158,12 +156,6 @@ libxlDomainObjPrivateAlloc(void *opaque G_GNUC_UNUSED)
         return NULL;
     }
 
-    if (virDomainObjInitJob(&priv->job, NULL, NULL) < 0) {
-        virChrdevFree(priv->devs);
-        g_free(priv);
-        return NULL;
-    }
-
     return priv;
 }
 
@@ -173,7 +165,6 @@ libxlDomainObjPrivateFree(void *data)
     libxlDomainObjPrivate *priv = data;
 
     g_free(priv->lockState);
-    virDomainObjClearJob(&priv->job);
     virChrdevFree(priv->devs);
     g_free(priv);
 }
diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h
index 451e76e311..9e8804f747 100644
--- a/src/libxl/libxl_domain.h
+++ b/src/libxl/libxl_domain.h
@@ -37,8 +37,6 @@ struct _libxlDomainObjPrivate {
     char *lockState;
     bool lockProcessRunning;
 
-    virDomainJobObj job;
-
     bool hookRun; /* true if there was a hook run over this domain */
 };
 
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index ff6c2f6506..0ae1ee95c4 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -5204,7 +5204,6 @@ static int
 libxlDomainGetJobInfo(virDomainPtr dom,
                       virDomainJobInfoPtr info)
 {
-    libxlDomainObjPrivate *priv;
     virDomainObj *vm;
     int ret = -1;
     unsigned long long timeElapsed = 0;
@@ -5215,8 +5214,7 @@ libxlDomainGetJobInfo(virDomainPtr dom,
     if (virDomainGetJobInfoEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    priv = vm->privateData;
-    if (!priv->job.active) {
+    if (!vm->job->active) {
         memset(info, 0, sizeof(*info));
         info->type = VIR_DOMAIN_JOB_NONE;
         ret = 0;
@@ -5226,7 +5224,7 @@ libxlDomainGetJobInfo(virDomainPtr dom,
     /* In libxl we don't have an estimated completion time
      * thus we always set to unbounded and update time
      * for the active job. */
-    if (libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0)
+    if (libxlDomainJobGetTimeElapsed(vm->job, &timeElapsed) < 0)
         goto cleanup;
 
     /* setting only these two attributes is enough because libxl never sets
@@ -5248,7 +5246,6 @@ libxlDomainGetJobStats(virDomainPtr dom,
                        int *nparams,
                        unsigned int flags)
 {
-    libxlDomainObjPrivate *priv;
     virDomainObj *vm;
     int ret = -1;
     int maxparams = 0;
@@ -5263,8 +5260,7 @@ libxlDomainGetJobStats(virDomainPtr dom,
     if (virDomainGetJobStatsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    priv = vm->privateData;
-    if (!priv->job.active) {
+    if (!vm->job->active) {
         *type = VIR_DOMAIN_JOB_NONE;
         *params = NULL;
         *nparams = 0;
@@ -5275,7 +5271,7 @@ libxlDomainGetJobStats(virDomainPtr dom,
     /* In libxl we don't have an estimated completion time
      * thus we always set to unbounded and update time
      * for the active job. */
-    if (libxlDomainJobGetTimeElapsed(&priv->job, &timeElapsed) < 0)
+    if (libxlDomainJobGetTimeElapsed(vm->job, &timeElapsed) < 0)
         goto cleanup;
 
     if (virTypedParamsAddULLong(params, nparams, &maxparams,
diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c
index f234aaf39c..3dddc0a7d4 100644
--- a/src/lxc/lxc_domain.c
+++ b/src/lxc/lxc_domain.c
@@ -52,7 +52,6 @@ virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSED,
                         virDomainObj *obj,
                         virDomainJob job)
 {
-    virLXCDomainObjPrivate *priv = obj->privateData;
     unsigned long long now;
     unsigned long long then;
 
@@ -60,18 +59,18 @@ virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSED,
         return -1;
     then = now + LXC_JOB_WAIT_TIME;
 
-    while (priv->job.active) {
+    while (obj->job->active) {
         VIR_DEBUG("Wait normal job condition for starting job: %s",
                   virDomainJobTypeToString(job));
-        if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0)
+        if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0)
             goto error;
     }
 
-    virDomainObjResetJob(&priv->job);
+    virDomainObjResetJob(obj->job);
 
     VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
-    priv->job.active = job;
-    priv->job.owner = virThreadSelfID();
+    obj->job->active = job;
+    obj->job->owner = virThreadSelfID();
 
     return 0;
 
@@ -80,8 +79,8 @@ virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSED,
              " current job is (%s) owned by (%llu)",
              virDomainJobTypeToString(job),
              obj->def->name,
-             virDomainJobTypeToString(priv->job.active),
-             priv->job.owner);
+             virDomainJobTypeToString(obj->job->active),
+             obj->job->owner);
 
     if (errno == ETIMEDOUT)
         virReportError(VIR_ERR_OPERATION_TIMEOUT,
@@ -103,14 +102,13 @@ void
 virLXCDomainObjEndJob(virLXCDriver *driver G_GNUC_UNUSED,
                       virDomainObj *obj)
 {
-    virLXCDomainObjPrivate *priv = obj->privateData;
-    virDomainJob job = priv->job.active;
+    virDomainJob job = obj->job->active;
 
     VIR_DEBUG("Stopping job: %s",
%s", virDomainJobTypeToString(job)); =20 - virDomainObjResetJob(&priv->job); - virCondSignal(&priv->job.cond); + virDomainObjResetJob(obj->job); + virCondSignal(&obj->job->cond); } =20 =20 @@ -119,11 +117,6 @@ virLXCDomainObjPrivateAlloc(void *opaque) { virLXCDomainObjPrivate *priv =3D g_new0(virLXCDomainObjPrivate, 1); =20 - if (virDomainObjInitJob(&priv->job, NULL, NULL) < 0) { - g_free(priv); - return NULL; - } - priv->driver =3D opaque; =20 return priv; @@ -136,7 +129,6 @@ virLXCDomainObjPrivateFree(void *data) virLXCDomainObjPrivate *priv =3D data; =20 virCgroupFree(priv->cgroup); - virDomainObjClearJob(&priv->job); g_free(priv); } =20 diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h index db622acc86..8cbcc0818c 100644 --- a/src/lxc/lxc_domain.h +++ b/src/lxc/lxc_domain.h @@ -66,8 +66,6 @@ struct _virLXCDomainObjPrivate { =20 virCgroup *cgroup; char *machineName; - - virDomainJobObj job; }; =20 extern virXMLNamespace virLXCDriverDomainXMLNamespace; diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index 5280186970..2da520dbc7 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -594,30 +594,30 @@ qemuBackupJobTerminate(virDomainObj *vm, } } =20 - if (priv->job.current) { + if (vm->job->current) { qemuDomainJobDataPrivate *privData =3D NULL; =20 - qemuDomainJobDataUpdateTime(priv->job.current); + qemuDomainJobDataUpdateTime(vm->job->current); =20 - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); - priv->job.completed =3D virDomainJobDataCopy(priv->job.current); + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); + vm->job->completed =3D virDomainJobDataCopy(vm->job->current); =20 - privData =3D priv->job.completed->privateData; + privData =3D vm->job->completed->privateData; =20 privData->stats.backup.total =3D priv->backup->push_total; privData->stats.backup.transferred =3D priv->backup->push_transfer= red; privData->stats.backup.tmp_used =3D priv->backup->pull_tmp_used; privData->stats.backup.tmp_total =3D priv->backup->pull_tmp_total; =20 - priv->job.completed->status =3D jobstatus; - priv->job.completed->errmsg =3D g_strdup(priv->backup->errmsg); + vm->job->completed->status =3D jobstatus; + vm->job->completed->errmsg =3D g_strdup(priv->backup->errmsg); =20 qemuDomainEventEmitJobCompleted(priv->driver, vm); } =20 g_clear_pointer(&priv->backup, virDomainBackupDefFree); =20 - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_BACKUP) + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_BACKUP) qemuDomainObjEndAsyncJob(vm); } =20 @@ -793,7 +793,7 @@ qemuBackupBegin(virDomainObj *vm, qemuDomainObjSetAsyncJobMask(vm, (VIR_JOB_DEFAULT_MASK | JOB_MASK(VIR_JOB_SUSPEND) | JOB_MASK(VIR_JOB_MODIFY))); - qemuDomainJobSetStatsType(priv->job.current, + qemuDomainJobSetStatsType(vm->job->current, QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP); =20 if (!virDomainObjIsActive(vm)) { diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c index 2809ae53b9..4f59e5fb07 100644 --- a/src/qemu/qemu_conf.c +++ b/src/qemu/qemu_conf.c @@ -1272,14 +1272,18 @@ virDomainXMLOption * virQEMUDriverCreateXMLConf(virQEMUDriver *driver, const char *defsecmodel) { + g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); + virQEMUDriverDomainDefParserConfig.priv =3D driver; virQEMUDriverDomainDefParserConfig.defSecModel =3D defsecmodel; + virQEMUDriverDomainJobConfig.maxQueuedJobs =3D cfg->maxQueuedJobs; + return virDomainXMLOptionNew(&virQEMUDriverDomainDefParserConfig, &virQEMUDriverPrivateDataCallbacks, &virQEMUDriverDomainXMLNamespace, &virQEMUDriverDomainABIStability, 
&virQEMUDriverDomainSaveCookie, - NULL); + &virQEMUDriverDomainJobConfig); } =20 =20 diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index fe3ce023a4..ebd3c7478f 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -278,7 +278,7 @@ qemuDomainObjPrivateXMLParseJobNBD(virDomainObj *vm, return -1; =20 if (n > 0) { - if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + if (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { VIR_WARN("Found disks marked for migration but we were not " "migrating"); n =3D 0; @@ -359,14 +359,46 @@ qemuDomainParseJobPrivate(xmlXPathContextPtr ctxt, return 0; } =20 +static void * +qemuJobDataAllocPrivateData(void) +{ + return g_new0(qemuDomainJobDataPrivate, 1); +} + + +static void * +qemuJobDataCopyPrivateData(void *data) +{ + qemuDomainJobDataPrivate *ret =3D g_new0(qemuDomainJobDataPrivate, 1); + + memcpy(ret, data, sizeof(qemuDomainJobDataPrivate)); + + return ret; +} + =20 -static virDomainObjPrivateJobCallbacks qemuPrivateJobCallbacks =3D { - .allocJobPrivate =3D qemuJobAllocPrivate, - .freeJobPrivate =3D qemuJobFreePrivate, - .resetJobPrivate =3D qemuJobResetPrivate, - .formatJobPrivate =3D qemuDomainFormatJobPrivate, - .parseJobPrivate =3D qemuDomainParseJobPrivate, - .saveStatusPrivate =3D qemuDomainSaveStatus, +static void +qemuJobDataFreePrivateData(void *data) +{ + g_free(data); +} + + +virDomainJobObjConfig virQEMUDriverDomainJobConfig =3D { + .cb =3D { + .allocJobPrivate =3D qemuJobAllocPrivate, + .freeJobPrivate =3D qemuJobFreePrivate, + .resetJobPrivate =3D qemuJobResetPrivate, + .formatJobPrivate =3D qemuDomainFormatJobPrivate, + .parseJobPrivate =3D qemuDomainParseJobPrivate, + .saveStatusPrivate =3D qemuDomainSaveStatus, + }, + .jobDataPrivateCb =3D { + .allocPrivateData =3D qemuJobDataAllocPrivateData, + .copyPrivateData =3D qemuJobDataCopyPrivateData, + .freePrivateData =3D qemuJobDataFreePrivateData, + }, + .maxQueuedJobs =3D 0, }; =20 /** @@ -1719,7 +1751,6 @@ qemuDomainObjPrivateFree(void *data) qemuDomainObjPrivateDataClear(priv); =20 virObjectUnref(priv->monConfig); - virDomainObjClearJob(&priv->job); g_free(priv->lockState); g_free(priv->origname); =20 @@ -1757,22 +1788,12 @@ static void * qemuDomainObjPrivateAlloc(void *opaque) { g_autoptr(qemuDomainObjPrivate) priv =3D g_new0(qemuDomainObjPrivate, = 1); - g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(opaque); - - if (virDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks, - &qemuJobDataPrivateDataCallbacks) < 0) { - virReportSystemError(errno, "%s", - _("Unable to init qemu driver mutexes")); - return NULL; - } =20 if (!(priv->devs =3D virChrdevAlloc())) return NULL; =20 priv->blockjobs =3D virHashNew(virObjectUnref); =20 - priv->job.maxQueuedJobs =3D cfg->maxQueuedJobs; - /* agent commands block by default, user can choose different behavior= */ priv->agentTimeout =3D VIR_DOMAIN_AGENT_RESPONSE_TIMEOUT_BLOCK; priv->migMaxBandwidth =3D QEMU_DOMAIN_MIG_BANDWIDTH_MAX; @@ -5955,14 +5976,14 @@ qemuDomainObjEnterMonitorInternal(virDomainObj *obj, qemuDomainObjEndJob(obj); return -1; } - } else if (priv->job.asyncOwner =3D=3D virThreadSelfID()) { + } else if (obj->job->asyncOwner =3D=3D virThreadSelfID()) { VIR_WARN("This thread seems to be the async job owner; entering" " monitor without asking for a nested job is dangerous"); - } else if (priv->job.owner !=3D virThreadSelfID()) { + } else if (obj->job->owner !=3D virThreadSelfID()) { VIR_WARN("Entering a monitor without owning a job. 
" "Job %s owner %s (%llu)", - virDomainJobTypeToString(priv->job.active), - priv->job.ownerAPI, priv->job.owner); + virDomainJobTypeToString(obj->job->active), + obj->job->ownerAPI, obj->job->owner); } =20 VIR_DEBUG("Entering monitor (mon=3D%p vm=3D%p name=3D%s)", @@ -6001,7 +6022,7 @@ qemuDomainObjExitMonitor(virDomainObj *obj) if (!hasRefs) priv->mon =3D NULL; =20 - if (priv->job.active =3D=3D VIR_JOB_ASYNC_NESTED) + if (obj->job->active =3D=3D VIR_JOB_ASYNC_NESTED) qemuDomainObjEndJob(obj); } =20 diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 592ee9805b..ef149b9fa9 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -99,8 +99,6 @@ typedef struct _qemuDomainObjPrivate qemuDomainObjPrivate; struct _qemuDomainObjPrivate { virQEMUDriver *driver; =20 - virDomainJobObj job; - virBitmap *namespaces; =20 virEventThread *eventThread; @@ -775,6 +773,7 @@ extern virXMLNamespace virQEMUDriverDomainXMLNamespace; extern virDomainDefParserConfig virQEMUDriverDomainDefParserConfig; extern virDomainABIStability virQEMUDriverDomainABIStability; extern virSaveCookieCallbacks virQEMUDriverDomainSaveCookie; +extern virDomainJobObjConfig virQEMUDriverDomainJobConfig; =20 int qemuDomainUpdateDeviceList(virDomainObj *vm, int asyncJob); =20 diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index a6ea7b2f58..0ecadddbc7 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -31,38 +31,6 @@ =20 VIR_LOG_INIT("qemu.qemu_domainjob"); =20 -static void * -qemuJobDataAllocPrivateData(void) -{ - return g_new0(qemuDomainJobDataPrivate, 1); -} - - -static void * -qemuJobDataCopyPrivateData(void *data) -{ - qemuDomainJobDataPrivate *ret =3D g_new0(qemuDomainJobDataPrivate, 1); - - memcpy(ret, data, sizeof(qemuDomainJobDataPrivate)); - - return ret; -} - - -static void -qemuJobDataFreePrivateData(void *data) -{ - g_free(data); -} - - -virDomainJobDataPrivateDataCallbacks qemuJobDataPrivateDataCallbacks =3D { - .allocPrivateData =3D qemuJobDataAllocPrivateData, - .copyPrivateData =3D qemuJobDataCopyPrivateData, - .freePrivateData =3D qemuJobDataFreePrivateData, -}; - - void qemuDomainJobSetStatsType(virDomainJobData *jobData, qemuDomainJobStatsType type) @@ -130,16 +98,15 @@ void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver, virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; virObjectEvent *event; virTypedParameterPtr params =3D NULL; int nparams =3D 0; int type; =20 - if (!priv->job.completed) + if (!vm->job->completed) return; =20 - if (qemuDomainJobDataToParams(priv->job.completed, &type, + if (qemuDomainJobDataToParams(vm->job->completed, &type, ¶ms, &nparams) < 0) { VIR_WARN("Could not get stats for completed job; domain %s", vm->def->name); @@ -160,8 +127,7 @@ qemuDomainObjRestoreAsyncJob(virDomainObj *vm, virDomainJobStatus status, unsigned long long allowedJobs) { - qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobObj *job =3D &priv->job; + virDomainJobObj *job =3D vm->job; =20 VIR_DEBUG("Restoring %s async job for domain %s", virDomainAsyncJobTypeToString(asyncJob), vm->def->name); @@ -177,8 +143,8 @@ qemuDomainObjRestoreAsyncJob(virDomainObj *vm, =20 qemuDomainObjSetAsyncJobMask(vm, allowedJobs); =20 - job->current =3D virDomainJobDataInit(&qemuJobDataPrivateDataCallbacks= ); - qemuDomainJobSetStatsType(priv->job.current, statsType); + job->current =3D virDomainJobDataInit(&virQEMUDriverDomainJobConfig.jo= bDataPrivateCb); + qemuDomainJobSetStatsType(vm->job->current, statsType); job->current->operation 
     job->current->operation = operation;
     job->current->status = status;
     job->current->started = started;
@@ -603,25 +569,24 @@ void
 qemuDomainObjSetJobPhase(virDomainObj *obj,
                          int phase)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
     unsigned long long me = virThreadSelfID();
 
-    if (!priv->job.asyncJob)
+    if (!obj->job->asyncJob)
         return;
 
     VIR_DEBUG("Setting '%s' phase to '%s'",
-              virDomainAsyncJobTypeToString(priv->job.asyncJob),
-              qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, phase));
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
+              qemuDomainAsyncJobPhaseToString(obj->job->asyncJob, phase));
 
-    if (priv->job.asyncOwner != 0 &&
-        priv->job.asyncOwner != me) {
+    if (obj->job->asyncOwner != 0 &&
+        obj->job->asyncOwner != me) {
         VIR_WARN("'%s' async job is owned by thread %llu, API '%s'",
-                 virDomainAsyncJobTypeToString(priv->job.asyncJob),
-                 priv->job.asyncOwner,
-                 NULLSTR(priv->job.asyncOwnerAPI));
+                 virDomainAsyncJobTypeToString(obj->job->asyncJob),
+                 obj->job->asyncOwner,
+                 NULLSTR(obj->job->asyncOwnerAPI));
     }
 
-    priv->job.phase = phase;
+    obj->job->phase = phase;
     qemuDomainSaveStatus(obj);
 }
 
@@ -634,26 +599,25 @@ void
 qemuDomainObjStartJobPhase(virDomainObj *obj,
                            int phase)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
     unsigned long long me = virThreadSelfID();
 
-    if (!priv->job.asyncJob)
+    if (!obj->job->asyncJob)
         return;
 
     VIR_DEBUG("Starting phase '%s' of '%s' job",
-              qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, phase),
-              virDomainAsyncJobTypeToString(priv->job.asyncJob));
+              qemuDomainAsyncJobPhaseToString(obj->job->asyncJob, phase),
+              virDomainAsyncJobTypeToString(obj->job->asyncJob));
 
-    if (priv->job.asyncOwner == 0) {
-        priv->job.asyncOwnerAPI = g_strdup(virThreadJobGet());
-    } else if (me != priv->job.asyncOwner) {
+    if (obj->job->asyncOwner == 0) {
+        obj->job->asyncOwnerAPI = g_strdup(virThreadJobGet());
+    } else if (me != obj->job->asyncOwner) {
         VIR_WARN("'%s' async job is owned by thread %llu, API '%s'",
-                 virDomainAsyncJobTypeToString(priv->job.asyncJob),
-                 priv->job.asyncOwner,
-                 NULLSTR(priv->job.asyncOwnerAPI));
+                 virDomainAsyncJobTypeToString(obj->job->asyncJob),
+                 obj->job->asyncOwner,
+                 NULLSTR(obj->job->asyncOwnerAPI));
     }
 
-    priv->job.asyncOwner = me;
+    obj->job->asyncOwner = me;
     qemuDomainObjSetJobPhase(obj, phase);
 }
 
@@ -662,39 +626,33 @@ void
 qemuDomainObjSetAsyncJobMask(virDomainObj *obj,
                              unsigned long long allowedJobs)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
-    if (!priv->job.asyncJob)
+    if (!obj->job->asyncJob)
         return;
 
-    priv->job.mask = allowedJobs | JOB_MASK(VIR_JOB_DESTROY);
+    obj->job->mask = allowedJobs | JOB_MASK(VIR_JOB_DESTROY);
 }
 
 void
 qemuDomainObjDiscardAsyncJob(virDomainObj *obj)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
-    if (priv->job.active == VIR_JOB_ASYNC_NESTED)
-        virDomainObjResetJob(&priv->job);
-    virDomainObjResetAsyncJob(&priv->job);
+    if (obj->job->active == VIR_JOB_ASYNC_NESTED)
+        virDomainObjResetJob(obj->job);
+    virDomainObjResetAsyncJob(obj->job);
     qemuDomainSaveStatus(obj);
 }
 
 void
 qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
     VIR_DEBUG("Releasing ownership of '%s' async job",
-              virDomainAsyncJobTypeToString(priv->job.asyncJob));
+              virDomainAsyncJobTypeToString(obj->job->asyncJob));
 
-    if (priv->job.asyncOwner != virThreadSelfID()) {
+    if (obj->job->asyncOwner != virThreadSelfID()) {
         VIR_WARN("'%s' async job is owned by thread %llu",
-                 virDomainAsyncJobTypeToString(priv->job.asyncJob),
-                 priv->job.asyncOwner);
+                 virDomainAsyncJobTypeToString(obj->job->asyncJob),
+                 obj->job->asyncOwner);
     }
-    priv->job.asyncOwner = 0;
+    obj->job->asyncOwner = 0;
 }
 
 /*
@@ -708,9 +666,7 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
 int qemuDomainObjBeginJob(virDomainObj *obj,
                           virDomainJob job)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
-    if (virDomainObjBeginJobInternal(obj, &priv->job, job,
+    if (virDomainObjBeginJobInternal(obj, obj->job, job,
                                      VIR_AGENT_JOB_NONE,
                                      VIR_ASYNC_JOB_NONE, false) < 0)
         return -1;
@@ -728,9 +684,7 @@ int
 qemuDomainObjBeginAgentJob(virDomainObj *obj,
                            virDomainAgentJob agentJob)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
-    return virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_NONE,
+    return virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_NONE,
                                         agentJob,
                                         VIR_ASYNC_JOB_NONE, false);
 }
@@ -740,16 +694,13 @@ int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
                                virDomainJobOperation operation,
                                unsigned long apiFlags)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
-    if (virDomainObjBeginJobInternal(obj, &priv->job, VIR_JOB_ASYNC,
+    if (virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_ASYNC,
                                      VIR_AGENT_JOB_NONE,
                                      asyncJob, false) < 0)
         return -1;
 
-    priv = obj->privateData;
-    priv->job.current->operation = operation;
-    priv->job.apiFlags = apiFlags;
+    obj->job->current->operation = operation;
+    obj->job->apiFlags = apiFlags;
     return 0;
 }
 
@@ -757,21 +708,19 @@ int
 qemuDomainObjBeginNestedJob(virDomainObj *obj,
                             virDomainAsyncJob asyncJob)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
-    if (asyncJob != priv->job.asyncJob) {
+    if (asyncJob != obj->job->asyncJob) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
                        _("unexpected async job %d type expected %d"),
-                       asyncJob, priv->job.asyncJob);
+                       asyncJob, obj->job->asyncJob);
         return -1;
     }
 
-    if (priv->job.asyncOwner != virThreadSelfID()) {
+    if (obj->job->asyncOwner != virThreadSelfID()) {
         VIR_WARN("This thread doesn't seem to be the async job owner: %llu",
-                 priv->job.asyncOwner);
+                 obj->job->asyncOwner);
     }
 
-    return virDomainObjBeginJobInternal(obj, &priv->job,
+    return virDomainObjBeginJobInternal(obj, obj->job,
                                         VIR_JOB_ASYNC_NESTED,
                                         VIR_AGENT_JOB_NONE,
                                         VIR_ASYNC_JOB_NONE,
@@ -794,9 +743,7 @@ int
 qemuDomainObjBeginJobNowait(virDomainObj *obj,
                             virDomainJob job)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
-    return virDomainObjBeginJobInternal(obj, &priv->job, job,
+    return virDomainObjBeginJobInternal(obj, obj->job, job,
                                         VIR_AGENT_JOB_NONE,
                                         VIR_ASYNC_JOB_NONE, true);
 }
@@ -810,69 +757,63 @@ qemuDomainObjBeginJobNowait(virDomainObj *obj,
 void
 qemuDomainObjEndJob(virDomainObj *obj)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-    virDomainJob job = priv->job.active;
+    virDomainJob job = obj->job->active;
 
-    priv->job.jobsQueued--;
+    obj->job->jobsQueued--;
 
     VIR_DEBUG("Stopping job: %s (async=%s vm=%p name=%s)",
               virDomainJobTypeToString(job),
-              virDomainAsyncJobTypeToString(priv->job.asyncJob),
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
              obj, obj->def->name);
 
-    virDomainObjResetJob(&priv->job);
+    virDomainObjResetJob(obj->job);
     if (virDomainTrackJob(job))
         qemuDomainSaveStatus(obj);
     /* We indeed need to wake up ALL threads waiting because
      * grabbing a job requires checking more variables. */
-    virCondBroadcast(&priv->job.cond);
+    virCondBroadcast(&obj->job->cond);
 }
 
 void
 qemuDomainObjEndAgentJob(virDomainObj *obj)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-    virDomainAgentJob agentJob = priv->job.agentActive;
+    virDomainAgentJob agentJob = obj->job->agentActive;
 
-    priv->job.jobsQueued--;
+    obj->job->jobsQueued--;
 
     VIR_DEBUG("Stopping agent job: %s (async=%s vm=%p name=%s)",
               virDomainAgentJobTypeToString(agentJob),
-              virDomainAsyncJobTypeToString(priv->job.asyncJob),
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
               obj, obj->def->name);
 
-    virDomainObjResetAgentJob(&priv->job);
+    virDomainObjResetAgentJob(obj->job);
     /* We indeed need to wake up ALL threads waiting because
      * grabbing a job requires checking more variables. */
-    virCondBroadcast(&priv->job.cond);
+    virCondBroadcast(&obj->job->cond);
 }
 
 void
 qemuDomainObjEndAsyncJob(virDomainObj *obj)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
-    priv->job.jobsQueued--;
+    obj->job->jobsQueued--;
 
     VIR_DEBUG("Stopping async job: %s (vm=%p name=%s)",
-              virDomainAsyncJobTypeToString(priv->job.asyncJob),
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
               obj, obj->def->name);
 
-    virDomainObjResetAsyncJob(&priv->job);
+    virDomainObjResetAsyncJob(obj->job);
     qemuDomainSaveStatus(obj);
-    virCondBroadcast(&priv->job.asyncCond);
+    virCondBroadcast(&obj->job->asyncCond);
 }
 
 void
 qemuDomainObjAbortAsyncJob(virDomainObj *obj)
 {
-    qemuDomainObjPrivate *priv = obj->privateData;
-
     VIR_DEBUG("Requesting abort of async job: %s (vm=%p name=%s)",
-              virDomainAsyncJobTypeToString(priv->job.asyncJob),
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
               obj, obj->def->name);
 
-    priv->job.abortJob = true;
+    obj->job->abortJob = true;
     virDomainObjBroadcast(obj);
 }
 
@@ -880,35 +821,34 @@ int
 qemuDomainObjPrivateXMLFormatJob(virBuffer *buf,
                                  virDomainObj *vm)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER;
     g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf);
-    virDomainJob job = priv->job.active;
+    virDomainJob job = vm->job->active;
 
     if (!virDomainTrackJob(job))
         job = VIR_JOB_NONE;
 
     if (job == VIR_JOB_NONE &&
-        priv->job.asyncJob == VIR_ASYNC_JOB_NONE)
+        vm->job->asyncJob == VIR_ASYNC_JOB_NONE)
         return 0;
 
     virBufferAsprintf(&attrBuf, " type='%s' async='%s'",
                       virDomainJobTypeToString(job),
-                      virDomainAsyncJobTypeToString(priv->job.asyncJob));
+                      virDomainAsyncJobTypeToString(vm->job->asyncJob));
 
-    if (priv->job.phase) {
+    if (vm->job->phase) {
         virBufferAsprintf(&attrBuf, " phase='%s'",
-                          qemuDomainAsyncJobPhaseToString(priv->job.asyncJob,
-                                                          priv->job.phase));
+                          qemuDomainAsyncJobPhaseToString(vm->job->asyncJob,
+                                                          vm->job->phase));
     }
 
-    if (priv->job.asyncJob != VIR_ASYNC_JOB_NONE) {
-        virBufferAsprintf(&attrBuf, " flags='0x%lx'", priv->job.apiFlags);
-        virBufferAsprintf(&attrBuf, " asyncStarted='%llu'", priv->job.asyncStarted);
+    if (vm->job->asyncJob != VIR_ASYNC_JOB_NONE) {
+        virBufferAsprintf(&attrBuf, " flags='0x%lx'", vm->job->apiFlags);
+        virBufferAsprintf(&attrBuf, " asyncStarted='%llu'", vm->job->asyncStarted);
    }
 
-    if (priv->job.cb &&
-        priv->job.cb->formatJobPrivate(&childBuf, &priv->job, vm) < 0)
+    if (vm->job->cb &&
+        vm->job->cb->formatJobPrivate(&childBuf, vm->job, vm) < 0)
         return -1;
 
     virXMLFormatElement(buf, "job", &attrBuf, &childBuf);
@@ -921,8 +861,7 @@ int
 qemuDomainObjPrivateXMLParseJob(virDomainObj *vm,
                                 xmlXPathContextPtr ctxt)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
-    virDomainJobObj *job = &priv->job;
+    virDomainJobObj *job = vm->job;
     VIR_XPATH_NODE_AUTORESTORE(ctxt)
     g_autofree char *tmp = NULL;
 
@@ -938,7 +877,7 @@ qemuDomainObjPrivateXMLParseJob(virDomainObj *vm,
             return -1;
         }
         VIR_FREE(tmp);
-        priv->job.active = type;
+        vm->job->active = type;
     }
 
     if ((tmp = virXPathString("string(@async)", ctxt))) {
@@ -950,11 +889,11 @@ qemuDomainObjPrivateXMLParseJob(virDomainObj *vm,
             return -1;
         }
         VIR_FREE(tmp);
-        priv->job.asyncJob = async;
+        vm->job->asyncJob = async;
 
         if ((tmp = virXPathString("string(@phase)", ctxt))) {
-            priv->job.phase = qemuDomainAsyncJobPhaseFromString(async, tmp);
-            if (priv->job.phase < 0) {
+            vm->job->phase = qemuDomainAsyncJobPhaseFromString(async, tmp);
+            if (vm->job->phase < 0) {
                 virReportError(VIR_ERR_INTERNAL_ERROR,
                                _("Unknown job phase %s"), tmp);
                 return -1;
@@ -963,20 +902,20 @@ qemuDomainObjPrivateXMLParseJob(virDomainObj *vm,
         }
 
         if (virXPathULongLong("string(@asyncStarted)", ctxt,
-                              &priv->job.asyncStarted) == -2) {
+                              &vm->job->asyncStarted) == -2) {
             virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                            _("Invalid async job start"));
             return -1;
         }
     }
 
-    if (virXPathULongHex("string(@flags)", ctxt, &priv->job.apiFlags) == -2) {
+    if (virXPathULongHex("string(@flags)", ctxt, &vm->job->apiFlags) == -2) {
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid job flags"));
         return -1;
     }
 
-    if (priv->job.cb &&
-        priv->job.cb->parseJobPrivate(ctxt, job, vm) < 0)
+    if (vm->job->cb &&
+        vm->job->cb->parseJobPrivate(ctxt, job, vm) < 0)
         return -1;
 
     return 0;
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 23eadc26a7..201d7857a8 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -58,8 +58,6 @@ struct _qemuDomainJobDataPrivate {
     qemuDomainMirrorStats mirrorStats;
 };
 
-extern virDomainJobDataPrivateDataCallbacks qemuJobDataPrivateDataCallbacks;
-
 void qemuDomainJobSetStatsType(virDomainJobData *jobData,
                                qemuDomainJobStatsType type);
 
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index c7cca64001..383c88f784 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1671,7 +1671,6 @@ static int qemuDomainSuspend(virDomainPtr dom)
     virQEMUDriver *driver = dom->conn->privateData;
     virDomainObj *vm;
     int ret = -1;
-    qemuDomainObjPrivate *priv;
     virDomainPausedReason reason;
     int state;
 
@@ -1681,17 +1680,15 @@ static int qemuDomainSuspend(virDomainPtr dom)
     if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    priv = vm->privateData;
-
     if (qemuDomainObjBeginJob(vm, VIR_JOB_SUSPEND) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
         goto endjob;
 
-    if (priv->job.asyncJob == VIR_ASYNC_JOB_MIGRATION_OUT)
+    if (vm->job->asyncJob == VIR_ASYNC_JOB_MIGRATION_OUT)
         reason = VIR_DOMAIN_PAUSED_MIGRATION;
-    else if (priv->job.asyncJob == VIR_ASYNC_JOB_SNAPSHOT)
+    else if (vm->job->asyncJob == VIR_ASYNC_JOB_SNAPSHOT)
         reason = VIR_DOMAIN_PAUSED_SNAPSHOT;
     else
         reason = VIR_DOMAIN_PAUSED_USER;
@@ -2102,7 +2099,7 @@ qemuDomainDestroyFlags(virDomainPtr dom,
 
     qemuDomainSetFakeReboot(vm, false);
 
-    if (priv->job.asyncJob == VIR_ASYNC_JOB_MIGRATION_IN)
+    if (vm->job->asyncJob == VIR_ASYNC_JOB_MIGRATION_IN)
         stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED;
 
     qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_DESTROYED,
@@ -2589,12 +2586,12 @@ qemuDomainGetControlInfo(virDomainPtr dom,
     if (priv->monError) {
         info->state = VIR_DOMAIN_CONTROL_ERROR;
         info->details = VIR_DOMAIN_CONTROL_ERROR_REASON_MONITOR;
-    } else if (priv->job.active) {
+    } else if (vm->job->active) {
         if (virTimeMillisNow(&info->stateTime) < 0)
             goto cleanup;
-        if (priv->job.current) {
+        if (vm->job->current) {
             info->state = VIR_DOMAIN_CONTROL_JOB;
-            info->stateTime -= priv->job.current->started;
+            info->stateTime -= vm->job->current->started;
         } else {
             if (priv->monStart > 0) {
                 info->state = VIR_DOMAIN_CONTROL_OCCUPIED;
@@ -2653,7 +2650,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver,
         goto endjob;
     }
 
-    qemuDomainJobSetStatsType(priv->job.current,
+    qemuDomainJobSetStatsType(vm->job->current,
                               QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP);
 
     /* Pause */
@@ -3016,28 +3013,27 @@ qemuDomainManagedSaveRemove(virDomainPtr dom, unsigned int flags)
 static int
 qemuDumpWaitForCompletion(virDomainObj *vm)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
-    qemuDomainJobPrivate *jobPriv = priv->job.privateData;
-    qemuDomainJobDataPrivate *privJobCurrent = priv->job.current->privateData;
+    qemuDomainJobPrivate *jobPriv = vm->job->privateData;
+    qemuDomainJobDataPrivate *privJobCurrent = vm->job->current->privateData;
 
     VIR_DEBUG("Waiting for dump completion");
-    while (!jobPriv->dumpCompleted && !priv->job.abortJob) {
+    while (!jobPriv->dumpCompleted && !vm->job->abortJob) {
         if (qemuDomainObjWait(vm) < 0)
             return -1;
     }
 
     if (privJobCurrent->stats.dump.status == QEMU_MONITOR_DUMP_STATUS_FAILED) {
-        if (priv->job.error)
+        if (vm->job->error)
             virReportError(VIR_ERR_OPERATION_FAILED,
                            _("memory-only dump failed: %s"),
-                           priv->job.error);
+                           vm->job->error);
         else
             virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                            _("memory-only dump failed for unknown reason"));
 
         return -1;
     }
-    qemuDomainJobDataUpdateTime(priv->job.current);
+    qemuDomainJobDataUpdateTime(vm->job->current);
 
     return 0;
 }
@@ -3060,10 +3056,10 @@ qemuDumpToFd(virQEMUDriver *driver,
         return -1;
 
     if (detach) {
-        qemuDomainJobSetStatsType(priv->job.current,
+        qemuDomainJobSetStatsType(vm->job->current,
                                   QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP);
     } else {
-        g_clear_pointer(&priv->job.current, virDomainJobDataFree);
+        g_clear_pointer(&vm->job->current, virDomainJobDataFree);
     }
 
     if (qemuDomainObjEnterMonitorAsync(vm, asyncJob) < 0)
@@ -3222,7 +3218,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
         goto endjob;
 
     priv = vm->privateData;
-    qemuDomainJobSetStatsType(priv->job.current,
+    qemuDomainJobSetStatsType(vm->job->current,
                               QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP);
 
     /* Migrate will always stop the VM, so the resume condition is
@@ -4058,7 +4054,7 @@ processMonitorEOFEvent(virQEMUDriver *driver,
         auditReason = "failed";
     }
 
-    if (priv->job.asyncJob == VIR_ASYNC_JOB_MIGRATION_IN) {
+    if (vm->job->asyncJob == VIR_ASYNC_JOB_MIGRATION_IN) {
         stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED;
         qemuMigrationDstErrorSave(driver, vm->def->name,
                                   qemuMonitorLastError(priv->mon));
@@ -6507,7 +6503,6 @@ qemuDomainObjStart(virConnectPtr conn,
     bool force_boot = (flags & VIR_DOMAIN_START_FORCE_BOOT) != 0;
     bool reset_nvram = (flags & VIR_DOMAIN_START_RESET_NVRAM) != 0;
     unsigned int start_flags = VIR_QEMU_PROCESS_START_COLD;
-    qemuDomainObjPrivate *priv = vm->privateData;
 
     start_flags |= start_paused ? VIR_QEMU_PROCESS_START_PAUSED : 0;
     start_flags |= autodestroy ? VIR_QEMU_PROCESS_START_AUTODESTROY : 0;
VIR_QEMU_PROCESS_START_AUTODESTROY : 0; @@ -6529,8 +6524,8 @@ qemuDomainObjStart(virConnectPtr conn, } vm->hasManagedSave =3D false; } else { - virDomainJobOperation op =3D priv->job.current->operation; - priv->job.current->operation =3D VIR_DOMAIN_JOB_OPERATION_REST= ORE; + virDomainJobOperation op =3D vm->job->current->operation; + vm->job->current->operation =3D VIR_DOMAIN_JOB_OPERATION_RESTO= RE; =20 ret =3D qemuDomainObjRestore(conn, driver, vm, managed_save, start_paused, bypass_cache, @@ -6549,7 +6544,7 @@ qemuDomainObjStart(virConnectPtr conn, return ret; } else { VIR_WARN("Ignoring incomplete managed state %s", managed_s= ave); - priv->job.current->operation =3D op; + vm->job->current->operation =3D op; vm->hasManagedSave =3D false; } } @@ -12665,20 +12660,19 @@ qemuDomainGetJobStatsInternal(virDomainObj *vm, bool completed, virDomainJobData **jobData) { - qemuDomainObjPrivate *priv =3D vm->privateData; qemuDomainJobDataPrivate *privStats =3D NULL; int ret =3D -1; =20 *jobData =3D NULL; =20 if (completed) { - if (priv->job.completed && !priv->job.current) - *jobData =3D virDomainJobDataCopy(priv->job.completed); + if (vm->job->completed && !vm->job->current) + *jobData =3D virDomainJobDataCopy(vm->job->completed); =20 return 0; } =20 - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", _("migration statistics are available only on " "the source host")); @@ -12691,11 +12685,11 @@ qemuDomainGetJobStatsInternal(virDomainObj *vm, if (virDomainObjCheckActive(vm) < 0) goto cleanup; =20 - if (!priv->job.current) { + if (!vm->job->current) { ret =3D 0; goto cleanup; } - *jobData =3D virDomainJobDataCopy(priv->job.current); + *jobData =3D virDomainJobDataCopy(vm->job->current); =20 privStats =3D (*jobData)->privateData; =20 @@ -12769,7 +12763,6 @@ qemuDomainGetJobStats(virDomainPtr dom, unsigned int flags) { virDomainObj *vm; - qemuDomainObjPrivate *priv; g_autoptr(virDomainJobData) jobData =3D NULL; bool completed =3D !!(flags & VIR_DOMAIN_JOB_STATS_COMPLETED); int ret =3D -1; @@ -12783,7 +12776,6 @@ qemuDomainGetJobStats(virDomainPtr dom, if (virDomainGetJobStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - priv =3D vm->privateData; if (qemuDomainGetJobStatsInternal(vm, completed, &jobData) < 0) goto cleanup; =20 @@ -12799,7 +12791,7 @@ qemuDomainGetJobStats(virDomainPtr dom, ret =3D qemuDomainJobDataToParams(jobData, type, params, nparams); =20 if (completed && ret =3D=3D 0 && !(flags & VIR_DOMAIN_JOB_STATS_KEEP_C= OMPLETED)) - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); =20 cleanup: virDomainObjEndAPI(&vm); @@ -12875,14 +12867,14 @@ qemuDomainAbortJobFlags(virDomainPtr dom, priv =3D vm->privateData; =20 if (flags & VIR_DOMAIN_ABORT_JOB_POSTCOPY && - (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT || + (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT || !virDomainObjIsPostcopy(vm, VIR_DOMAIN_JOB_OPERATION_MIGRATION_OU= T))) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("current job is not outgoing migration in post-co= py mode")); goto endjob; } =20 - switch (priv->job.asyncJob) { + switch (vm->job->asyncJob) { case VIR_ASYNC_JOB_NONE: virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("no job is active on the domain")); @@ -12912,7 +12904,7 @@ qemuDomainAbortJobFlags(virDomainPtr dom, break; =20 case VIR_ASYNC_JOB_DUMP: - if (priv->job.apiFlags & 
VIR_DUMP_MEMORY_ONLY) { + if (vm->job->apiFlags & VIR_DUMP_MEMORY_ONLY) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("cannot abort memory-only dump")); goto endjob; @@ -12932,7 +12924,7 @@ qemuDomainAbortJobFlags(virDomainPtr dom, =20 case VIR_ASYNC_JOB_LAST: default: - virReportEnumRangeError(virDomainAsyncJob, priv->job.asyncJob); + virReportEnumRangeError(virDomainAsyncJob, vm->job->asyncJob); break; } =20 @@ -13341,14 +13333,14 @@ qemuDomainMigrateStartPostCopy(virDomainPtr dom, =20 priv =3D vm->privateData; =20 - if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + if (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("post-copy can only be started while " "outgoing migration is in progress")); goto endjob; } =20 - if (!(priv->job.apiFlags & VIR_MIGRATE_POSTCOPY)) { + if (!(vm->job->apiFlags & VIR_MIGRATE_POSTCOPY)) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("switching to post-copy requires migration to be " "started with VIR_MIGRATE_POSTCOPY flag")); diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index b3b25d78b4..2572f47385 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -86,10 +86,8 @@ VIR_ENUM_IMPL(qemuMigrationJobPhase, static bool ATTRIBUTE_NONNULL(1) qemuMigrationJobIsAllowed(virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; - - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN || - priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_IN || + vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) { virReportError(VIR_ERR_OPERATION_INVALID, _("another migration job is already running for dom= ain '%s'"), vm->def->name); @@ -105,7 +103,6 @@ qemuMigrationJobStart(virDomainObj *vm, virDomainAsyncJob job, unsigned long apiFlags) { - qemuDomainObjPrivate *priv =3D vm->privateData; virDomainJobOperation op; unsigned long long mask; =20 @@ -126,7 +123,7 @@ qemuMigrationJobStart(virDomainObj *vm, if (qemuDomainObjBeginAsyncJob(vm, job, op, apiFlags) < 0) return -1; =20 - qemuDomainJobSetStatsType(priv->job.current, + qemuDomainJobSetStatsType(vm->job->current, QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION); =20 qemuDomainObjSetAsyncJobMask(vm, mask); @@ -138,13 +135,11 @@ static int qemuMigrationCheckPhase(virDomainObj *vm, qemuMigrationJobPhase phase) { - qemuDomainObjPrivate *priv =3D vm->privateData; - if (phase < QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && - phase < priv->job.phase) { + phase < vm->job->phase) { virReportError(VIR_ERR_INTERNAL_ERROR, _("migration protocol going backwards %s =3D> %s"), - qemuMigrationJobPhaseTypeToString(priv->job.phase), + qemuMigrationJobPhaseTypeToString(vm->job->phase), qemuMigrationJobPhaseTypeToString(phase)); return -1; } @@ -190,9 +185,7 @@ static bool ATTRIBUTE_NONNULL(1) qemuMigrationJobIsActive(virDomainObj *vm, virDomainAsyncJob job) { - qemuDomainObjPrivate *priv =3D vm->privateData; - - if (priv->job.asyncJob !=3D job) { + if (vm->job->asyncJob !=3D job) { const char *msg; =20 if (job =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) @@ -956,7 +949,7 @@ qemuMigrationSrcCancelRemoveTempBitmaps(virDomainObj *v= m, virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; GSList *next; =20 for (next =3D jobPriv->migTempBitmaps; next; next =3D next->next) { @@ -1236,10 +1229,10 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver 
*drive= r, if (rv < 0) return -1; =20 - if (priv->job.abortJob) { - priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; + if (vm->job->abortJob) { + vm->job->current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), - virDomainAsyncJobTypeToString(priv->job.asyncJo= b), + virDomainAsyncJobTypeToString(vm->job->asyncJob= ), _("canceled by client")); return -1; } @@ -1255,7 +1248,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *driver, } =20 qemuMigrationSrcFetchMirrorStats(vm, VIR_ASYNC_JOB_MIGRATION_OUT, - priv->job.current); + vm->job->current); return 0; } =20 @@ -1718,14 +1711,13 @@ qemuMigrationDstPostcopyFailed(virDomainObj *vm) static void qemuMigrationSrcWaitForSpice(virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; =20 if (!jobPriv->spiceMigration) return; =20 VIR_DEBUG("Waiting for SPICE to finish migration"); - while (!jobPriv->spiceMigrated && !priv->job.abortJob) { + while (!jobPriv->spiceMigrated && !vm->job->abortJob) { if (qemuDomainObjWait(vm) < 0) return; } @@ -1810,9 +1802,7 @@ qemuMigrationAnyFetchStats(virDomainObj *vm, static const char * qemuMigrationJobName(virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; - - switch (priv->job.asyncJob) { + switch (vm->job->asyncJob) { case VIR_ASYNC_JOB_MIGRATION_OUT: return _("migration out"); case VIR_ASYNC_JOB_SAVE: @@ -1839,8 +1829,7 @@ static int qemuMigrationJobCheckStatus(virDomainObj *vm, virDomainAsyncJob asyncJob) { - qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobData *jobData =3D priv->job.current; + virDomainJobData *jobData =3D vm->job->current; qemuDomainJobDataPrivate *privJob =3D jobData->privateData; g_autofree char *error =3D NULL; =20 @@ -1916,8 +1905,7 @@ qemuMigrationAnyCompleted(virDomainObj *vm, virConnectPtr dconn, unsigned int flags) { - qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobData *jobData =3D priv->job.current; + virDomainJobData *jobData =3D vm->job->current; int pauseReason; =20 if (qemuMigrationJobCheckStatus(vm, asyncJob) < 0) @@ -2009,7 +1997,7 @@ qemuMigrationSrcWaitForCompletion(virDomainObj *vm, unsigned int flags) { qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobData *jobData =3D priv->job.current; + virDomainJobData *jobData =3D vm->job->current; int rv; =20 jobData->status =3D VIR_DOMAIN_JOB_STATUS_MIGRATING; @@ -2029,9 +2017,9 @@ qemuMigrationSrcWaitForCompletion(virDomainObj *vm, =20 qemuDomainJobDataUpdateTime(jobData); qemuDomainJobDataUpdateDowntime(jobData); - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); - priv->job.completed =3D virDomainJobDataCopy(jobData); - priv->job.completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); + vm->job->completed =3D virDomainJobDataCopy(jobData); + vm->job->completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; =20 if (asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT && jobData->status =3D=3D VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED) @@ -2143,7 +2131,7 @@ qemuMigrationSrcGraphicsRelocate(virDomainObj *vm, return 0; =20 if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_MIGRATION_OUT) = =3D=3D 0) { - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; =20 rc =3D qemuMonitorGraphicsRelocate(priv->mon, type, listenAddress, port, tlsPort, tlsSubject); @@ 
-2276,16 +2264,15 @@ static void qemuMigrationAnyConnectionClosed(virDomainObj *vm, virConnectPtr conn) { - qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; bool postcopy =3D false; int phase; =20 VIR_DEBUG("vm=3D%s, conn=3D%p, asyncJob=3D%s, phase=3D%s", vm->def->name, conn, - virDomainAsyncJobTypeToString(priv->job.asyncJob), - qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, - priv->job.phase)); + virDomainAsyncJobTypeToString(vm->job->asyncJob), + qemuDomainAsyncJobPhaseToString(vm->job->asyncJob, + vm->job->phase)); =20 if (!qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_IN) && !qemuMigrationJobIsActive(vm, VIR_ASYNC_JOB_MIGRATION_OUT)) @@ -2294,7 +2281,7 @@ qemuMigrationAnyConnectionClosed(virDomainObj *vm, VIR_WARN("The connection which controls migration of domain %s was clo= sed", vm->def->name); =20 - switch ((qemuMigrationJobPhase) priv->job.phase) { + switch ((qemuMigrationJobPhase) vm->job->phase) { case QEMU_MIGRATION_PHASE_BEGIN3: VIR_DEBUG("Aborting outgoing migration after Begin phase"); break; @@ -2346,14 +2333,14 @@ qemuMigrationAnyConnectionClosed(virDomainObj *vm, ignore_value(qemuMigrationJobStartPhase(vm, phase)); =20 if (postcopy) { - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) qemuMigrationSrcPostcopyFailed(vm); else qemuMigrationDstPostcopyFailed(vm); qemuMigrationJobContinue(vm, qemuProcessCleanupMigrationJob); } else { - qemuMigrationParamsReset(vm, priv->job.asyncJob, - jobPriv->migParams, priv->job.apiFlags); + qemuMigrationParamsReset(vm, vm->job->asyncJob, + jobPriv->migParams, vm->job->apiFlags); qemuMigrationJobFinish(vm); } } @@ -2377,12 +2364,11 @@ qemuMigrationSrcBeginPhaseBlockDirtyBitmaps(qemuMig= rationCookie *mig, =20 { GSList *disks =3D NULL; - qemuDomainObjPrivate *priv =3D vm->privateData; size_t i; =20 g_autoptr(GHashTable) blockNamedNodeData =3D NULL; =20 - if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, priv->job.a= syncJob))) + if (!(blockNamedNodeData =3D qemuBlockGetNamedNodeData(vm, vm->job->as= yncJob))) return -1; =20 for (i =3D 0; i < vm->def->ndisks; i++) { @@ -2452,7 +2438,7 @@ qemuMigrationAnyRefreshStatus(virDomainObj *vm, g_autoptr(virDomainJobData) jobData =3D NULL; qemuDomainJobDataPrivate *priv; =20 - jobData =3D virDomainJobDataInit(&qemuJobDataPrivateDataCallbacks); + jobData =3D virDomainJobDataInit(&virQEMUDriverDomainJobConfig.jobData= PrivateCb); priv =3D jobData->privateData; =20 if (qemuMigrationAnyFetchStats(vm, asyncJob, jobData, NULL) < 0) @@ -2549,11 +2535,11 @@ qemuMigrationSrcBeginPhase(virQEMUDriver *driver, * Otherwise we will start the async job later in the perform phase lo= sing * change protection. 
*/ - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && qemuMigrationJobStartPhase(vm, QEMU_MIGRATION_PHASE_BEGIN3) < 0) return NULL; =20 - if (!qemuMigrationSrcIsAllowed(driver, vm, true, priv->job.asyncJob, f= lags)) + if (!qemuMigrationSrcIsAllowed(driver, vm, true, vm->job->asyncJob, fl= ags)) return NULL; =20 if (!(flags & (VIR_MIGRATE_UNSAFE | VIR_MIGRATE_OFFLINE)) && @@ -2656,8 +2642,6 @@ qemuMigrationAnyCanResume(virDomainObj *vm, unsigned long flags, qemuMigrationJobPhase expectedPhase) { - qemuDomainObjPrivate *priv =3D vm->privateData; - VIR_DEBUG("vm=3D%p, job=3D%s, flags=3D0x%lx, expectedPhase=3D%s", vm, virDomainAsyncJobTypeToString(job), flags, qemuDomainAsyncJobPhaseToString(VIR_ASYNC_JOB_MIGRATION_OUT, @@ -2684,22 +2668,22 @@ qemuMigrationAnyCanResume(virDomainObj *vm, if (!qemuMigrationJobIsActive(vm, job)) return false; =20 - if (priv->job.asyncOwner !=3D 0 && - priv->job.asyncOwner !=3D virThreadSelfID()) { + if (vm->job->asyncOwner !=3D 0 && + vm->job->asyncOwner !=3D virThreadSelfID()) { virReportError(VIR_ERR_OPERATION_INVALID, _("migration of domain %s is being actively monitor= ed by another thread"), vm->def->name); return false; } =20 - if (!virDomainObjIsPostcopy(vm, priv->job.current->operation)) { + if (!virDomainObjIsPostcopy(vm, vm->job->current->operation)) { virReportError(VIR_ERR_OPERATION_INVALID, _("migration of domain %s is not in post-copy phase= "), vm->def->name); return false; } =20 - if (priv->job.phase < QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && + if (vm->job->phase < QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && !virDomainObjIsFailedPostcopy(vm)) { virReportError(VIR_ERR_OPERATION_INVALID, _("post-copy migration of domain %s has not failed"= ), @@ -2707,7 +2691,7 @@ qemuMigrationAnyCanResume(virDomainObj *vm, return false; } =20 - if (priv->job.phase > expectedPhase) { + if (vm->job->phase > expectedPhase) { virReportError(VIR_ERR_OPERATION_INVALID, _("resuming failed post-copy migration of domain %s= already in progress"), vm->def->name); @@ -2881,8 +2865,8 @@ qemuMigrationDstPrepareCleanup(virQEMUDriver *driver, VIR_DEBUG("driver=3D%p, vm=3D%s, job=3D%s, asyncJob=3D%s", driver, vm->def->name, - virDomainJobTypeToString(priv->job.active), - virDomainAsyncJobTypeToString(priv->job.asyncJob)); + virDomainJobTypeToString(vm->job->active), + virDomainAsyncJobTypeToString(vm->job->asyncJob)); =20 virPortAllocatorRelease(priv->migrationPort); priv->migrationPort =3D 0; @@ -3061,7 +3045,7 @@ qemuMigrationDstPrepareActive(virQEMUDriver *driver, unsigned long flags) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; qemuProcessIncomingDef *incoming =3D NULL; g_autofree char *tlsAlias =3D NULL; virObjectEvent *event =3D NULL; @@ -3219,7 +3203,7 @@ qemuMigrationDstPrepareActive(virQEMUDriver *driver, error: virErrorPreserveLast(&origErr); qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_IN, - jobPriv->migParams, priv->job.apiFlags); + jobPriv->migParams, vm->job->apiFlags); =20 if (stopProcess) { unsigned int stopFlags =3D VIR_QEMU_PROCESS_STOP_MIGRATED; @@ -3333,7 +3317,8 @@ qemuMigrationDstPrepareFresh(virQEMUDriver *driver, QEMU_MIGRATION_COOKIE_CPU_HOTPLUG= | QEMU_MIGRATION_COOKIE_CPU | QEMU_MIGRATION_COOKIE_CAPS | - QEMU_MIGRATION_COOKIE_BLOCK_DIRTY= _BITMAPS))) + QEMU_MIGRATION_COOKIE_BLOCK_DIRTY= _BITMAPS, + NULL))) goto cleanup; =20 if (!(vm =3D 
virDomainObjListAdd(driver->domains, def, @@ -3477,7 +3462,7 @@ qemuMigrationDstPrepareResume(virQEMUDriver *driver, =20 if (!(mig =3D qemuMigrationCookieParse(driver, def, origname, NULL, cookiein, cookieinlen, - QEMU_MIGRATION_COOKIE_CAPS))) + QEMU_MIGRATION_COOKIE_CAPS, vm))) goto cleanup; =20 priv->origname =3D g_strdup(origname); @@ -3858,13 +3843,13 @@ qemuMigrationSrcComplete(virQEMUDriver *driver, virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv =3D vm->privateData; - virDomainJobData *jobData =3D priv->job.completed; + virDomainJobData *jobData =3D vm->job->completed; virObjectEvent *event; int reason; =20 if (!jobData) { - priv->job.completed =3D virDomainJobDataCopy(priv->job.current); - jobData =3D priv->job.completed; + vm->job->completed =3D virDomainJobDataCopy(vm->job->current); + jobData =3D vm->job->completed; jobData->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; } =20 @@ -3909,7 +3894,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, { g_autoptr(qemuMigrationCookie) mig =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; virDomainJobData *jobData =3D NULL; qemuMigrationJobPhase phase; =20 @@ -3927,7 +3912,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, * job will stay active even though migration API finishes with an * error. */ - phase =3D priv->job.phase; + phase =3D vm->job->phase; } else if (retcode =3D=3D 0) { phase =3D QEMU_MIGRATION_PHASE_CONFIRM3; } else { @@ -3939,13 +3924,13 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, =20 if (!(mig =3D qemuMigrationCookieParse(driver, vm->def, priv->origname= , priv, cookiein, cookieinlen, - QEMU_MIGRATION_COOKIE_STATS))) + QEMU_MIGRATION_COOKIE_STATS, vm))) return -1; =20 if (retcode =3D=3D 0) - jobData =3D priv->job.completed; + jobData =3D vm->job->completed; else - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); =20 /* Update times with the values sent by the destination daemon */ if (mig->jobData && jobData) { @@ -3985,7 +3970,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriver *driver, qemuMigrationSrcRestoreDomainState(driver, vm); =20 qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_OUT, - jobPriv->migParams, priv->job.apiFlag= s); + jobPriv->migParams, vm->job->apiFlags= ); qemuDomainSetMaxMemLock(vm, 0, &priv->preMigrationMemlock); } =20 @@ -4005,7 +3990,6 @@ qemuMigrationSrcConfirm(virQEMUDriver *driver, { qemuMigrationJobPhase phase; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); - qemuDomainObjPrivate *priv =3D vm->privateData; int ret =3D -1; =20 VIR_DEBUG("vm=3D%p, flags=3D0x%x, cancelled=3D%d", vm, flags, cancelle= d); @@ -4024,7 +4008,7 @@ qemuMigrationSrcConfirm(virQEMUDriver *driver, * error. 
*/ if (virDomainObjIsFailedPostcopy(vm)) - phase =3D priv->job.phase; + phase =3D vm->job->phase; else if (cancelled) phase =3D QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED; else @@ -4416,7 +4400,7 @@ qemuMigrationSrcRunPrepareBlockDirtyBitmapsMerge(virD= omainObj *vm, { g_autoslist(qemuDomainJobPrivateMigrateTempBitmap) tmpbitmaps =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; g_autoptr(virJSONValue) actions =3D virJSONValueNewArray(); g_autoptr(GHashTable) blockNamedNodeData =3D NULL; GSList *nextdisk; @@ -4703,7 +4687,8 @@ qemuMigrationSrcRun(virQEMUDriver *driver, cookieFlags | QEMU_MIGRATION_COOKIE_GRAPHICS | QEMU_MIGRATION_COOKIE_CAPS | - QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMA= PS); + QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMA= PS, + NULL); if (!mig) goto error; =20 @@ -4814,13 +4799,13 @@ qemuMigrationSrcRun(virQEMUDriver *driver, if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_MIGRATION_OUT) < = 0) goto error; =20 - if (priv->job.abortJob) { + if (vm->job->abortJob) { /* explicitly do this *after* we entered the monitor, * as this is a critical section so we are guaranteed - * priv->job.abortJob will not change */ - priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; + * vm->job->abortJob will not change */ + vm->job->current->status =3D VIR_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), - virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(vm->job->asyncJob), _("canceled by client")); goto exit_monitor; } @@ -4884,7 +4869,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, * resume it now once we finished all block jobs and wait for the real * end of the migration. 
*/ - if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_PAUSED) { + if (vm->job->current->status =3D=3D VIR_DOMAIN_JOB_STATUS_PAUSED) { if (qemuMigrationSrcContinue(vm, QEMU_MONITOR_MIGRATION_STATUS_PRE_SWI= TCHOVER, VIR_ASYNC_JOB_MIGRATION_OUT) < 0) @@ -4913,11 +4898,11 @@ qemuMigrationSrcRun(virQEMUDriver *driver, goto error; } =20 - if (priv->job.completed) { - priv->job.completed->stopped =3D priv->job.current->stopped; - qemuDomainJobDataUpdateTime(priv->job.completed); - qemuDomainJobDataUpdateDowntime(priv->job.completed); - ignore_value(virTimeMillisNow(&priv->job.completed->sent)); + if (vm->job->completed) { + vm->job->completed->stopped =3D vm->job->current->stopped; + qemuDomainJobDataUpdateTime(vm->job->completed); + qemuDomainJobDataUpdateDowntime(vm->job->completed); + ignore_value(virTimeMillisNow(&vm->job->completed->sent)); } =20 cookieFlags |=3D QEMU_MIGRATION_COOKIE_NETWORK | @@ -4952,7 +4937,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver, } =20 if (cancel && - priv->job.current->status !=3D VIR_DOMAIN_JOB_STATUS_HYPERVISO= R_COMPLETED && + vm->job->current->status !=3D VIR_DOMAIN_JOB_STATUS_HYPERVISOR= _COMPLETED && qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_MIGRATION_OUT= ) =3D=3D 0) { qemuMonitorMigrateCancel(priv->mon); qemuDomainObjExitMonitor(vm); @@ -4966,8 +4951,8 @@ qemuMigrationSrcRun(virQEMUDriver *driver, =20 qemuMigrationSrcCancelRemoveTempBitmaps(vm, VIR_ASYNC_JOB_MIGRATIO= N_OUT); =20 - if (priv->job.current->status !=3D VIR_DOMAIN_JOB_STATUS_CANCELED) - priv->job.current->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; + if (vm->job->current->status !=3D VIR_DOMAIN_JOB_STATUS_CANCELED) + vm->job->current->status =3D VIR_DOMAIN_JOB_STATUS_FAILED; } =20 if (iothread) @@ -5002,7 +4987,7 @@ qemuMigrationSrcResume(virDomainObj *vm, =20 mig =3D qemuMigrationCookieParse(driver, vm->def, priv->origname, priv, cookiein, cookieinlen, - QEMU_MIGRATION_COOKIE_CAPS); + QEMU_MIGRATION_COOKIE_CAPS, vm); if (!mig) return -1; =20 @@ -5913,7 +5898,7 @@ qemuMigrationSrcPerformJob(virQEMUDriver *driver, virErrorPtr orig_err =3D NULL; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; =20 if (flags & VIR_MIGRATE_POSTCOPY_RESUME) { if (!qemuMigrationAnyCanResume(vm, VIR_ASYNC_JOB_MIGRATION_OUT, fl= ags, @@ -5991,7 +5976,7 @@ qemuMigrationSrcPerformJob(virQEMUDriver *driver, */ if (!v3proto && ret < 0) qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_OUT, - jobPriv->migParams, priv->job.apiFlag= s); + jobPriv->migParams, vm->job->apiFlags= ); =20 qemuMigrationSrcRestoreDomainState(driver, vm); =20 @@ -6080,7 +6065,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriver *driver, const char *nbdURI) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; int ret =3D -1; =20 if (flags & VIR_MIGRATE_POSTCOPY_RESUME) { @@ -6120,7 +6105,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriver *driver, if (ret < 0 && !virDomainObjIsFailedPostcopy(vm)) { qemuMigrationSrcRestoreDomainState(driver, vm); qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_OUT, - jobPriv->migParams, priv->job.apiFlags); + jobPriv->migParams, vm->job->apiFlags); qemuDomainSetMaxMemLock(vm, 0, &priv->preMigrationMemlock); qemuMigrationJobFinish(vm); } else { @@ -6399,7 +6384,7 @@ 
qemuMigrationDstFinishOffline(virQEMUDriver *driver, g_autoptr(qemuMigrationCookie) mig =3D NULL; =20 if (!(mig =3D qemuMigrationCookieParse(driver, vm->def, priv->origname= , priv, - cookiein, cookieinlen, cookie_fla= gs))) + cookiein, cookieinlen, cookie_fla= gs, NULL))) return NULL; =20 if (qemuMigrationDstPersist(driver, vm, mig, false) < 0) @@ -6432,7 +6417,6 @@ qemuMigrationDstFinishFresh(virQEMUDriver *driver, bool *doKill, bool *inPostCopy) { - qemuDomainObjPrivate *priv =3D vm->privateData; g_autoptr(virDomainJobData) jobData =3D NULL; =20 if (qemuMigrationDstVPAssociatePortProfiles(vm->def) < 0) @@ -6492,7 +6476,7 @@ qemuMigrationDstFinishFresh(virQEMUDriver *driver, return -1; } =20 - if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POSTCOPY) + if (vm->job->current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POSTCOPY) *inPostCopy =3D true; =20 if (!(flags & VIR_MIGRATE_PAUSED)) { @@ -6542,9 +6526,9 @@ qemuMigrationDstFinishFresh(virQEMUDriver *driver, } =20 if (jobData) { - priv->job.completed =3D g_steal_pointer(&jobData); - priv->job.completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; - qemuDomainJobSetStatsType(priv->job.completed, + vm->job->completed =3D g_steal_pointer(&jobData); + vm->job->completed->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; + qemuDomainJobSetStatsType(vm->job->completed, QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION); } =20 @@ -6586,17 +6570,17 @@ qemuMigrationDstFinishActive(virQEMUDriver *driver, virDomainPtr dom =3D NULL; g_autoptr(qemuMigrationCookie) mig =3D NULL; qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; virObjectEvent *event; bool inPostCopy =3D false; - bool doKill =3D priv->job.phase !=3D QEMU_MIGRATION_PHASE_FINISH_RESUM= E; + bool doKill =3D vm->job->phase !=3D QEMU_MIGRATION_PHASE_FINISH_RESUME; int rc; =20 VIR_DEBUG("vm=3D%p, flags=3D0x%lx, retcode=3D%d", vm, flags, retcode); =20 if (!(mig =3D qemuMigrationCookieParse(driver, vm->def, priv->origname= , priv, - cookiein, cookieinlen, cookie_fla= gs))) + cookiein, cookieinlen, cookie_fla= gs, NULL))) goto error; =20 if (retcode !=3D 0) { @@ -6633,7 +6617,7 @@ qemuMigrationDstFinishActive(virQEMUDriver *driver, VIR_WARN("Unable to encode migration cookie"); =20 qemuMigrationDstComplete(driver, vm, inPostCopy, - VIR_ASYNC_JOB_MIGRATION_IN, &priv->job); + VIR_ASYNC_JOB_MIGRATION_IN, vm->job); =20 return dom; =20 @@ -6663,7 +6647,7 @@ qemuMigrationDstFinishActive(virQEMUDriver *driver, *finishJob =3D false; } else { qemuMigrationParamsReset(vm, VIR_ASYNC_JOB_MIGRATION_IN, - jobPriv->migParams, priv->job.apiFlags); + jobPriv->migParams, vm->job->apiFlags); } =20 if (!virDomainObjIsActive(vm)) @@ -6725,7 +6709,7 @@ qemuMigrationDstFinish(virQEMUDriver *driver, } else { qemuDomainCleanupRemove(vm, qemuMigrationDstPrepareCleanup); } - g_clear_pointer(&priv->job.completed, virDomainJobDataFree); + g_clear_pointer(&vm->job->completed, virDomainJobDataFree); =20 cookie_flags =3D QEMU_MIGRATION_COOKIE_NETWORK | QEMU_MIGRATION_COOKIE_STATS | @@ -6778,7 +6762,6 @@ qemuMigrationProcessUnattended(virQEMUDriver *driver, virDomainAsyncJob job, qemuMonitorMigrationStatus status) { - qemuDomainObjPrivate *priv =3D vm->privateData; qemuMigrationJobPhase phase; =20 if (!qemuMigrationJobIsActive(vm, job) || @@ -6798,7 +6781,7 @@ qemuMigrationProcessUnattended(virQEMUDriver *driver, return; =20 if (job =3D=3D VIR_ASYNC_JOB_MIGRATION_IN) - qemuMigrationDstComplete(driver, vm, true, job, 
&priv->job); + qemuMigrationDstComplete(driver, vm, true, job, vm->job); else qemuMigrationSrcComplete(driver, vm, job); =20 diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_coo= kie.c index bd939a12be..a4e018e204 100644 --- a/src/qemu/qemu_migration_cookie.c +++ b/src/qemu/qemu_migration_cookie.c @@ -495,7 +495,7 @@ qemuMigrationCookieAddNBD(qemuMigrationCookie *mig, mig->nbd->disks =3D g_new0(struct qemuMigrationCookieNBDDisk, vm->def-= >ndisks); mig->nbd->ndisks =3D 0; =20 - if (qemuDomainObjEnterMonitorAsync(vm, priv->job.asyncJob) < 0) + if (qemuDomainObjEnterMonitorAsync(vm, vm->job->asyncJob) < 0) return -1; =20 rc =3D qemuMonitorBlockStatsUpdateCapacityBlockdev(priv->mon, stats); @@ -525,13 +525,11 @@ static int qemuMigrationCookieAddStatistics(qemuMigrationCookie *mig, virDomainObj *vm) { - qemuDomainObjPrivate *priv =3D vm->privateData; - - if (!priv->job.completed) + if (!vm->job->completed) return 0; =20 g_clear_pointer(&mig->jobData, virDomainJobDataFree); - mig->jobData =3D virDomainJobDataCopy(priv->job.completed); + mig->jobData =3D virDomainJobDataCopy(vm->job->completed); =20 mig->flags |=3D QEMU_MIGRATION_COOKIE_STATS; =20 @@ -1042,7 +1040,7 @@ qemuMigrationCookieStatisticsXMLParse(xmlXPathContext= Ptr ctxt) if (!(ctxt->node =3D virXPathNode("./statistics", ctxt))) return NULL; =20 - jobData =3D virDomainJobDataInit(&qemuJobDataPrivateDataCallbacks); + jobData =3D virDomainJobDataInit(&virQEMUDriverDomainJobConfig.jobData= PrivateCb); priv =3D jobData->privateData; stats =3D &priv->stats.mig; jobData->status =3D VIR_DOMAIN_JOB_STATUS_COMPLETED; @@ -1497,7 +1495,8 @@ qemuMigrationCookieParse(virQEMUDriver *driver, qemuDomainObjPrivate *priv, const char *cookiein, int cookieinlen, - unsigned int flags) + unsigned int flags, + virDomainObj *vm) { g_autoptr(qemuMigrationCookie) mig =3D NULL; =20 @@ -1547,8 +1546,8 @@ qemuMigrationCookieParse(virQEMUDriver *driver, } } =20 - if (flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobData && priv->job.c= urrent) - mig->jobData->operation =3D priv->job.current->operation; + if (vm && flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobData && vm->j= ob->current) + mig->jobData->operation =3D vm->job->current->operation; =20 return g_steal_pointer(&mig); } diff --git a/src/qemu/qemu_migration_cookie.h b/src/qemu/qemu_migration_coo= kie.h index 2f0cdcf7b6..07776aaa8b 100644 --- a/src/qemu/qemu_migration_cookie.h +++ b/src/qemu/qemu_migration_cookie.h @@ -194,7 +194,8 @@ qemuMigrationCookieParse(virQEMUDriver *driver, qemuDomainObjPrivate *priv, const char *cookiein, int cookieinlen, - unsigned int flags); + unsigned int flags, + virDomainObj *vm); =20 void qemuMigrationCookieFree(qemuMigrationCookie *mig); diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_par= ams.c index c667be8520..7a023b36c8 100644 --- a/src/qemu/qemu_migration_params.c +++ b/src/qemu/qemu_migration_params.c @@ -1005,7 +1005,7 @@ qemuMigrationParamsEnableTLS(virQEMUDriver *driver, qemuMigrationParams *migParams) { qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; g_autoptr(virJSONValue) tlsProps =3D NULL; g_autoptr(virJSONValue) secProps =3D NULL; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); @@ -1080,8 +1080,7 @@ int qemuMigrationParamsDisableTLS(virDomainObj *vm, qemuMigrationParams *migParams) { - qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D 
priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; =20 if (!jobPriv->migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set) return 0; @@ -1213,8 +1212,7 @@ qemuMigrationParamsCheck(virDomainObj *vm, qemuMigrationParams *migParams, virBitmap *remoteCaps) { - qemuDomainObjPrivate *priv =3D vm->privateData; - qemuDomainJobPrivate *jobPriv =3D priv->job.privateData; + qemuDomainJobPrivate *jobPriv =3D vm->job->privateData; qemuMigrationCapability cap; qemuMigrationParty party; size_t i; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 251bca4bdf..71b476a32f 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -648,8 +648,8 @@ qemuProcessHandleStop(qemuMonitor *mon G_GNUC_UNUSED, * reveal it in domain state nor sent events */ if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING && !priv->pausedShutdown) { - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) { - if (priv->job.current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POS= TCOPY) + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + if (vm->job->current->status =3D=3D VIR_DOMAIN_JOB_STATUS_POST= COPY) reason =3D VIR_DOMAIN_PAUSED_POSTCOPY; else reason =3D VIR_DOMAIN_PAUSED_MIGRATION; @@ -661,8 +661,8 @@ qemuProcessHandleStop(qemuMonitor *mon G_GNUC_UNUSED, vm->def->name, virDomainPausedReasonTypeToString(reason), detail); =20 - if (priv->job.current) - ignore_value(virTimeMillisNow(&priv->job.current->stopped)); + if (vm->job->current) + ignore_value(virTimeMillisNow(&vm->job->current->stopped)); =20 if (priv->signalStop) virDomainObjBroadcast(vm); @@ -1390,7 +1390,6 @@ static void qemuProcessHandleSpiceMigrated(qemuMonitor *mon G_GNUC_UNUSED, virDomainObj *vm) { - qemuDomainObjPrivate *priv; qemuDomainJobPrivate *jobPriv; =20 virObjectLock(vm); @@ -1398,9 +1397,8 @@ qemuProcessHandleSpiceMigrated(qemuMonitor *mon G_GNU= C_UNUSED, VIR_DEBUG("Spice migration completed for domain %p %s", vm, vm->def->name); =20 - priv =3D vm->privateData; - jobPriv =3D priv->job.privateData; - if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { + jobPriv =3D vm->job->privateData; + if (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) { VIR_DEBUG("got SPICE_MIGRATE_COMPLETED event without a migration j= ob"); goto cleanup; } @@ -1434,12 +1432,12 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G= _GNUC_UNUSED, priv =3D vm->privateData; driver =3D priv->driver; =20 - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION event without a migration job"); goto cleanup; } =20 - privJob =3D priv->job.current->privateData; + privJob =3D vm->job->current->privateData; =20 privJob->stats.mig.status =3D status; virDomainObjBroadcast(vm); @@ -1448,7 +1446,7 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, =20 switch ((qemuMonitorMigrationStatus) status) { case QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY: - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && state =3D=3D VIR_DOMAIN_PAUSED && reason =3D=3D VIR_DOMAIN_PAUSED_MIGRATION) { VIR_DEBUG("Correcting paused state reason for domain %s to %s", @@ -1464,7 +1462,7 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G_G= NUC_UNUSED, break; =20 case QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY_PAUSED: - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_MIGRATION_OUT && state =3D=3D VIR_DOMAIN_PAUSED) { /* At 
this point no thread is watching the migration progress = on * the source as it is just waiting for the Finish phase to en= d. @@ -1505,11 +1503,11 @@ qemuProcessHandleMigrationStatus(qemuMonitor *mon G= _GNUC_UNUSED, * watching it in any thread. Let's make sure the migration is pro= perly * finished in case we get a "completed" event. */ - if (virDomainObjIsPostcopy(vm, priv->job.current->operation) && - priv->job.phase =3D=3D QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && - priv->job.asyncOwner =3D=3D 0) { + if (virDomainObjIsPostcopy(vm, vm->job->current->operation) && + vm->job->phase =3D=3D QEMU_MIGRATION_PHASE_POSTCOPY_FAILED && + vm->job->asyncOwner =3D=3D 0) { qemuProcessEventSubmit(vm, QEMU_PROCESS_EVENT_UNATTENDED_MIGRA= TION, - priv->job.asyncJob, status, NULL); + vm->job->asyncJob, status, NULL); } break; =20 @@ -1546,7 +1544,7 @@ qemuProcessHandleMigrationPass(qemuMonitor *mon G_GNU= C_UNUSED, vm, vm->def->name, pass); =20 priv =3D vm->privateData; - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION_PASS event without a migration job"); goto cleanup; } @@ -1566,7 +1564,6 @@ qemuProcessHandleDumpCompleted(qemuMonitor *mon G_GNU= C_UNUSED, qemuMonitorDumpStats *stats, const char *error) { - qemuDomainObjPrivate *priv; qemuDomainJobPrivate *jobPriv; qemuDomainJobDataPrivate *privJobCurrent =3D NULL; =20 @@ -1575,20 +1572,19 @@ qemuProcessHandleDumpCompleted(qemuMonitor *mon G_G= NUC_UNUSED, VIR_DEBUG("Dump completed for domain %p %s with stats=3D%p error=3D'%s= '", vm, vm->def->name, stats, NULLSTR(error)); =20 - priv =3D vm->privateData; - jobPriv =3D priv->job.privateData; - privJobCurrent =3D priv->job.current->privateData; - if (priv->job.asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { + jobPriv =3D vm->job->privateData; + privJobCurrent =3D vm->job->current->privateData; + if (vm->job->asyncJob =3D=3D VIR_ASYNC_JOB_NONE) { VIR_DEBUG("got DUMP_COMPLETED event without a dump_completed job"); goto cleanup; } jobPriv->dumpCompleted =3D true; privJobCurrent->stats.dump =3D *stats; - priv->job.error =3D g_strdup(error); + vm->job->error =3D g_strdup(error); =20 /* Force error if extracting the DUMP_COMPLETED status failed */ if (!error && status < 0) { - priv->job.error =3D g_strdup(virGetLastErrorMessage()); + vm->job->error =3D g_strdup(virGetLastErrorMessage()); privJobCurrent->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_FAI= LED; } =20 @@ -3209,8 +3205,8 @@ int qemuProcessStopCPUs(virQEMUDriver *driver, /* de-activate netdevs after stopping CPUs */ ignore_value(qemuInterfaceStopDevices(vm->def)); =20 - if (priv->job.current) - ignore_value(virTimeMillisNow(&priv->job.current->stopped)); + if (vm->job->current) + ignore_value(virTimeMillisNow(&vm->job->current->stopped)); =20 /* The STOP event handler will change the domain state with the reason * saved in priv->pausedReason and it will also emit corresponding dom= ain @@ -3375,12 +3371,12 @@ qemuProcessCleanupMigrationJob(virQEMUDriver *drive= r, =20 VIR_DEBUG("driver=3D%p, vm=3D%s, asyncJob=3D%s, state=3D%s, reason=3D%= s", driver, vm->def->name, - virDomainAsyncJobTypeToString(priv->job.asyncJob), + virDomainAsyncJobTypeToString(vm->job->asyncJob), virDomainStateTypeToString(state), virDomainStateReasonToString(state, reason)); =20 - if (priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_IN && - priv->job.asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) + if (vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_IN && + vm->job->asyncJob !=3D VIR_ASYNC_JOB_MIGRATION_OUT) 
        return;

    virPortAllocatorRelease(priv->migrationPort);
@@ -3393,7 +3389,6 @@ static void
 qemuProcessRestoreMigrationJob(virDomainObj *vm,
                                virDomainJobObj *job)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     qemuDomainJobPrivate *jobPriv = job->privateData;
     virDomainJobOperation op;
     unsigned long long allowedJobs;
@@ -3413,9 +3408,9 @@ qemuProcessRestoreMigrationJob(virDomainObj *vm,
                                    VIR_DOMAIN_JOB_STATUS_PAUSED,
                                    allowedJobs);

-    job->privateData = g_steal_pointer(&priv->job.privateData);
-    priv->job.privateData = jobPriv;
-    priv->job.apiFlags = job->apiFlags;
+    job->privateData = g_steal_pointer(&vm->job->privateData);
+    vm->job->privateData = jobPriv;
+    vm->job->apiFlags = job->apiFlags;

     qemuDomainCleanupAdd(vm, qemuProcessCleanupMigrationJob);
 }
@@ -8109,9 +8104,9 @@ void qemuProcessStop(virQEMUDriver *driver,
     if (asyncJob != VIR_ASYNC_JOB_NONE) {
         if (qemuDomainObjBeginNestedJob(vm, asyncJob) < 0)
             goto cleanup;
-    } else if (priv->job.asyncJob != VIR_ASYNC_JOB_NONE &&
-               priv->job.asyncOwner == virThreadSelfID() &&
-               priv->job.active != VIR_JOB_ASYNC_NESTED) {
+    } else if (vm->job->asyncJob != VIR_ASYNC_JOB_NONE &&
+               vm->job->asyncOwner == virThreadSelfID() &&
+               vm->job->active != VIR_JOB_ASYNC_NESTED) {
         VIR_WARN("qemuProcessStop called without a nested job (async=%s)",
                  virDomainAsyncJobTypeToString(asyncJob));
     }
@@ -8434,10 +8429,10 @@ qemuProcessAutoDestroy(virDomainObj *dom,

     VIR_DEBUG("vm=%s, conn=%p", dom->def->name, conn);

-    if (priv->job.asyncJob == VIR_ASYNC_JOB_MIGRATION_IN)
+    if (dom->job->asyncJob == VIR_ASYNC_JOB_MIGRATION_IN)
         stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED;

-    if (priv->job.asyncJob) {
+    if (dom->job->asyncJob) {
         VIR_DEBUG("vm=%s has long-term job active, cancelling",
                   dom->def->name);
         qemuDomainObjDiscardAsyncJob(dom);
@@ -8701,7 +8696,7 @@ qemuProcessReconnect(void *opaque)
     cfg = virQEMUDriverGetConfig(driver);
     priv = obj->privateData;

-    virDomainObjPreserveJob(&priv->job, &oldjob);
+    virDomainObjPreserveJob(obj->job, &oldjob);
     if (oldjob.asyncJob == VIR_ASYNC_JOB_MIGRATION_IN)
         stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED;
     if (oldjob.asyncJob == VIR_ASYNC_JOB_BACKUP && priv->backup)
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 6033deafed..243c18193b 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1334,7 +1334,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
         if (!qemuMigrationSrcIsAllowed(driver, vm, false, VIR_ASYNC_JOB_SNAPSHOT, 0))
             goto cleanup;

-        qemuDomainJobSetStatsType(priv->job.current,
+        qemuDomainJobSetStatsType(vm->job->current,
                                   QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP);

         /* allow the migration job to be cancelled or the domain to be paused */
diff --git a/tests/qemumigrationcookiexmltest.c b/tests/qemumigrationcookiexmltest.c
index 9731348b53..6f650213e2 100644
--- a/tests/qemumigrationcookiexmltest.c
+++ b/tests/qemumigrationcookiexmltest.c
@@ -146,7 +146,8 @@ testQemuMigrationCookieParse(const void *opaque)
                                               priv,
                                               data->xmlstr,
                                               data->xmlstrlen,
-                                              data->cookieParseFlags))) {
+                                              data->cookieParseFlags,
+                                              data->vm))) {
         VIR_TEST_DEBUG("\nfailed to parse qemu migration cookie:\n%s\n", data->xmlstr);
         return -1;
     }
-- 
2.37.2
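The hunks above are mostly one mechanical substitution: every access that used
to reach job state through the QEMU driver's private data (priv->job.field)
now goes through the job object hanging directly off the domain object
(vm->job->field). The following stand-alone sketch models that ownership
change; all type and function names in it are invented for illustration and
are not libvirt's real definitions:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-ins only, not libvirt's real types. */
typedef struct {
    int active;              /* non-zero while a job owns the domain */
} JobObj;

typedef struct {
    JobObj *job;             /* job state now hangs off the domain object */
    void *privateData;       /* driver-private data no longer embeds it */
} DomainObj;

/* Mirrors the shape of the converted callers above: the driver reads
 * vm->job->... instead of reaching through its private data. */
static int domainSuspend(DomainObj *vm)
{
    if (vm->job->active) {
        fprintf(stderr, "another job is already running\n");
        return -1;
    }
    vm->job->active = 1;
    printf("suspending domain...\n");
    vm->job->active = 0;
    return 0;
}

int main(void)
{
    JobObj job = { 0 };
    DomainObj vm = { .job = &job, .privateData = NULL };

    return domainSuspend(&vm) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}

Judging from the later patches in the series, keeping the job object on the
generic domain object is what allows the begin/end-job logic to live in
common code instead of being duplicated per hypervisor driver.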
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 07/17] qemu: use virDomainObjBeginJob()
Date: Mon, 5 Sep 2022 15:57:05 +0200
Message-Id: <8b00f2cdf9155172e83ba7f4fce326bb5e4fd72b.1662385930.git.khanicov@redhat.com>
In-Reply-To:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"; x-default="true"

This patch moves qemuDomainObjBeginJob() into src/conf/virdomainjob as
a universal virDomainObjBeginJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 docs/kbase/internals/qemu-threads.rst |   8 +-
 src/conf/virdomainjob.c               |  18 +++
 src/conf/virdomainjob.h               |   4 +
 src/libvirt_private.syms              |   1 +
 src/qemu/qemu_checkpoint.c            |   6 +-
 src/qemu/qemu_domain.c                |   4 +-
 src/qemu/qemu_domainjob.c             |  18 ---
 src/qemu/qemu_domainjob.h             |   3 -
 src/qemu/qemu_driver.c                | 152 +++++++++++++-------------
 src/qemu/qemu_migration.c             |   2 +-
 src/qemu/qemu_process.c               |   6 +-
 src/qemu/qemu_snapshot.c              |   2 +-
 12 files changed, 113 insertions(+), 111 deletions(-)

diff --git a/docs/kbase/internals/qemu-threads.rst b/docs/kbase/internals/qemu-threads.rst
index c68512d1b3..00340bb732 100644
--- a/docs/kbase/internals/qemu-threads.rst
+++ b/docs/kbase/internals/qemu-threads.rst
@@ -62,7 +62,7 @@ There are a number of locks on various objects

     Agent job condition is then used when thread wishes to talk to qemu
     agent monitor. It is possible to acquire just agent job
-    (``qemuDomainObjBeginAgentJob``), or only normal job (``qemuDomainObjBeginJob``)
+    (``qemuDomainObjBeginAgentJob``), or only normal job (``virDomainObjBeginJob``)
     but not both at the same time. Holding an agent job and a normal job would
     allow an unresponsive or malicious agent to block normal libvirt API and
     potentially result in a denial of service. Which type of job to grab
@@ -114,7 +114,7 @@ To lock the ``virDomainObj``

 To acquire the normal job condition

-  ``qemuDomainObjBeginJob()``
+  ``virDomainObjBeginJob()``
     - Waits until the job is compatible with current async job or no
       async job is running
     - Waits for ``job.cond`` condition ``job.active != 0`` using ``virDomainObj``
@@ -214,7 +214,7 @@ Design patterns

      obj = qemuDomObjFromDomain(dom);

-     qemuDomainObjBeginJob(obj, VIR_JOB_TYPE);
+     virDomainObjBeginJob(obj, VIR_JOB_TYPE);

      ...do work...

@@ -230,7 +230,7 @@ Design patterns

      obj = qemuDomObjFromDomain(dom);

-     qemuDomainObjBeginJob(obj, VIR_JOB_TYPE);
+     virDomainObjBeginJob(obj, VIR_JOB_TYPE);

      ...do prep work...
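The design pattern documented in the hunk above is, at its core, a
condition-variable handshake on the locked domain object: wait until
job.active allows a new job, mark the job taken, do the work, then wake
waiters. Below is a rough, self-contained model of that handshake; the names
and types are invented stand-ins, not libvirt's actual virDomainObj or job
APIs, and real code also bounds the wait with a timeout:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative model of a "job condition"; not libvirt's real types. */
typedef struct {
    pthread_mutex_t lock;    /* the object lock, held around all access */
    pthread_cond_t cond;     /* signalled whenever a job ends */
    bool active;             /* true while some thread owns the job */
} DomainObj;

static DomainObj obj = {
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .cond = PTHREAD_COND_INITIALIZER,
    .active = false,
};

/* Call with dom->lock held: sleep on the condition until the previous
 * job ends, then take ownership.  A real implementation would bound the
 * wait and report a timeout error instead of waiting forever. */
static int domainBeginJob(DomainObj *dom)
{
    while (dom->active) {
        if (pthread_cond_wait(&dom->cond, &dom->lock) != 0)
            return -1;
    }
    dom->active = true;
    return 0;
}

/* Call with dom->lock held: release the job and wake one waiter. */
static void domainEndJob(DomainObj *dom)
{
    dom->active = false;
    pthread_cond_signal(&dom->cond);
}

int main(void)
{
    pthread_mutex_lock(&obj.lock);
    if (domainBeginJob(&obj) == 0) {
        printf("...do work...\n");    /* modify domain state here */
        domainEndJob(&obj);
    }
    pthread_mutex_unlock(&obj.lock);
    return 0;
}

Build with a pthread-enabled compiler invocation; the point is only the
BeginJob/EndJob bracketing that the converted callers in this patch follow.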
=20 diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c index a073ff08cd..033ea5c517 100644 --- a/src/conf/virdomainjob.c +++ b/src/conf/virdomainjob.c @@ -509,3 +509,21 @@ virDomainObjBeginJobInternal(virDomainObj *obj, jobObj->jobsQueued--; return ret; } + +/* + * obj must be locked before calling + * + * This must be called by anything that will change the VM state + * in any way, or anything that will use the Hypervisor monitor. + * + * Successful calls must be followed by EndJob eventually + */ +int virDomainObjBeginJob(virDomainObj *obj, + virDomainJob job) +{ + if (virDomainObjBeginJobInternal(obj, obj->job, job, + VIR_AGENT_JOB_NONE, + VIR_ASYNC_JOB_NONE, false) < 0) + return -1; + return 0; +} diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h index 091d951aa6..adee679e72 100644 --- a/src/conf/virdomainjob.h +++ b/src/conf/virdomainjob.h @@ -245,3 +245,7 @@ int virDomainObjBeginJobInternal(virDomainObj *obj, virDomainAsyncJob asyncJob, bool nowait) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3); + +int virDomainObjBeginJob(virDomainObj *obj, + virDomainJob job) + G_GNUC_WARN_UNUSED_RESULT; diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index cd0b94297c..4fc2ec2cca 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -1187,6 +1187,7 @@ virDomainJobStatusToType; virDomainJobTypeFromString; virDomainJobTypeToString; virDomainNestedJobAllowed; +virDomainObjBeginJob; virDomainObjBeginJobInternal; virDomainObjCanSetJob; virDomainObjClearJob; diff --git a/src/qemu/qemu_checkpoint.c b/src/qemu/qemu_checkpoint.c index ed236eaace..c6fb3d1f97 100644 --- a/src/qemu/qemu_checkpoint.c +++ b/src/qemu/qemu_checkpoint.c @@ -604,7 +604,7 @@ qemuCheckpointCreateXML(virDomainPtr domain, /* Unlike snapshots, the RNG schema already ensured a sane filename. */ =20 /* We are going to modify the domain below. */ - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return NULL; =20 if (redefine) { @@ -654,7 +654,7 @@ qemuCheckpointGetXMLDescUpdateSize(virDomainObj *vm, size_t i; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -848,7 +848,7 @@ qemuCheckpointDelete(virDomainObj *vm, VIR_DOMAIN_CHECKPOINT_DELETE_METADATA_ONLY | VIR_DOMAIN_CHECKPOINT_DELETE_CHILDREN_ONLY, -1); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (!metadata_only) { diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index ebd3c7478f..a4640c5c13 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5954,7 +5954,7 @@ qemuDomainSaveConfig(virDomainObj *obj) * obj must be locked before calling * * To be called immediately before any QEMU monitor API call - * Must have already called qemuDomainObjBeginJob() and checked + * Must have already called virDomainObjBeginJob() and checked * that the VM is still active; may not be used for nested async * jobs. * @@ -6035,7 +6035,7 @@ void qemuDomainObjEnterMonitor(virDomainObj *obj) * obj must be locked before calling * * To be called immediately before any QEMU monitor API call. 
- * Must have already either called qemuDomainObjBeginJob() + * Must have already either called virDomainObjBeginJob() * and checked that the VM is still active, with asyncJob of * VIR_ASYNC_JOB_NONE; or already called qemuDomainObjBeginAsyncJob, * with the same asyncJob. diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 0ecadddbc7..13e1f332dc 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -655,24 +655,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj) obj->job->asyncOwner =3D 0; } =20 -/* - * obj must be locked before calling - * - * This must be called by anything that will change the VM state - * in any way, or anything that will use the QEMU monitor. - * - * Successful calls must be followed by EndJob eventually - */ -int qemuDomainObjBeginJob(virDomainObj *obj, - virDomainJob job) -{ - if (virDomainObjBeginJobInternal(obj, obj->job, job, - VIR_AGENT_JOB_NONE, - VIR_ASYNC_JOB_NONE, false) < 0) - return -1; - return 0; -} - /** * qemuDomainObjBeginAgentJob: * diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 201d7857a8..66f18483fe 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -69,9 +69,6 @@ int qemuDomainAsyncJobPhaseFromString(virDomainAsyncJob j= ob, void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver, virDomainObj *vm); =20 -int qemuDomainObjBeginJob(virDomainObj *obj, - virDomainJob job) - G_GNUC_WARN_UNUSED_RESULT; int qemuDomainObjBeginAgentJob(virDomainObj *obj, virDomainAgentJob agentJob) G_GNUC_WARN_UNUSED_RESULT; diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 383c88f784..5f36113a9c 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -1680,7 +1680,7 @@ static int qemuDomainSuspend(virDomainPtr dom) if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_SUSPEND) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_SUSPEND) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1730,7 +1730,7 @@ static int qemuDomainResume(virDomainPtr dom) if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1813,7 +1813,7 @@ qemuDomainShutdownFlagsMonitor(virDomainObj *vm, =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjGetState(vm, NULL) !=3D VIR_DOMAIN_RUNNING) { @@ -1936,7 +1936,7 @@ qemuDomainRebootMonitor(virDomainObj *vm, qemuDomainObjPrivate *priv =3D vm->privateData; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2023,7 +2023,7 @@ qemuDomainReset(virDomainPtr dom, unsigned int flags) if (virDomainResetEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2192,7 +2192,7 @@ static int qemuDomainSetMemoryFlags(virDomainPtr dom,= unsigned long newmem, if (virDomainSetMemoryFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, 
&persistentDef) < 0) @@ -2335,7 +2335,7 @@ static int qemuDomainSetMemoryStatsPeriod(virDomainPt= r dom, int period, if (virDomainSetMemoryStatsPeriodEnsureACL(dom->conn, vm->def, flags) = < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -2402,7 +2402,7 @@ static int qemuDomainInjectNMI(virDomainPtr domain, u= nsigned int flags) =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2460,7 +2460,7 @@ static int qemuDomainSendKey(virDomainPtr domain, if (virDomainSendKeyEnsureACL(domain->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -3323,7 +3323,7 @@ qemuDomainScreenshot(virDomainPtr dom, if (virDomainScreenshotEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -3603,7 +3603,7 @@ processDeviceDeletedEvent(virQEMUDriver *driver, VIR_DEBUG("Removing device %s from domain %p %s", devAlias, vm, vm->def->name); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -3839,7 +3839,7 @@ processNicRxFilterChangedEvent(virDomainObj *vm, "from domain %p %s", devAlias, vm, vm->def->name); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!virDomainObjIsActive(vm)) { @@ -3965,7 +3965,7 @@ processSerialChangedEvent(virQEMUDriver *driver, memset(&dev, 0, sizeof(dev)); } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY_MIGRATION_SAFE) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY_MIGRATION_SAFE) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -4011,7 +4011,7 @@ static void processJobStatusChangeEvent(virDomainObj *vm, qemuBlockJobData *job) { - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -4153,7 +4153,7 @@ processMemoryDeviceSizeChange(virQEMUDriver *driver, virObjectEvent *event =3D NULL; unsigned long long balloon; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return; =20 if (!virDomainObjIsActive(vm)) { @@ -4371,7 +4371,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0) goto cleanup; } else { - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; } =20 @@ -4508,7 +4508,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom, if (virDomainPinVcpuFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -4634,7 +4634,7 @@ qemuDomainPinEmulator(virDomainPtr dom, if (virDomainPinEmulatorEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, 
VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -4892,7 +4892,7 @@ qemuDomainGetIOThreadsLive(virDomainObj *vm, size_t i; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (!virDomainObjIsActive(vm)) { @@ -5020,7 +5020,7 @@ qemuDomainPinIOThread(virDomainPtr dom, if (virDomainPinIOThreadEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -5528,7 +5528,7 @@ qemuDomainChgIOThread(virQEMUDriver *driver, =20 priv =3D vm->privateData; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -6737,7 +6737,7 @@ qemuDomainUndefineFlags(virDomainPtr dom, if (virDomainUndefineFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!vm->persistent) { @@ -7994,7 +7994,7 @@ qemuDomainAttachDeviceFlags(virDomainPtr dom, if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8049,7 +8049,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr d= om, if (virDomainUpdateDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8279,7 +8279,7 @@ qemuDomainDetachDeviceFlags(virDomainPtr dom, if (virDomainDetachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8314,7 +8314,7 @@ qemuDomainDetachDeviceAlias(virDomainPtr dom, if (virDomainDetachDeviceAliasEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -8387,7 +8387,7 @@ static int qemuDomainSetAutostart(virDomainPtr dom, autostart =3D (autostart !=3D 0); =20 if (vm->autostart !=3D autostart) { - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!(configFile =3D virDomainConfigFile(cfg->configDir, vm->def->= name))) @@ -8533,7 +8533,7 @@ qemuDomainSetBlkioParameters(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -8707,7 +8707,7 @@ qemuDomainSetMemoryParameters(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 /* QEMU and LXC implementation are identical */ @@ -8950,7 +8950,7 @@ 
qemuDomainSetNumaParameters(virDomainPtr dom, } } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -9167,7 +9167,7 @@ qemuDomainSetPerfEvents(virDomainPtr dom, if (virDomainSetPerfEventsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -9241,7 +9241,7 @@ qemuDomainGetPerfEvents(virDomainPtr dom, if (virDomainGetPerfEventsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (!(def =3D virDomainObjGetOneDefState(vm, flags, &live))) @@ -9418,7 +9418,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr do= m, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -9920,7 +9920,7 @@ qemuDomainBlockResize(virDomainPtr dom, if (virDomainBlockResizeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -10120,7 +10120,7 @@ qemuDomainBlockStats(virDomainPtr dom, if (virDomainBlockStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -10177,7 +10177,7 @@ qemuDomainBlockStatsFlags(virDomainPtr dom, if (virDomainBlockStatsFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -10339,7 +10339,7 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom, if (virDomainSetInterfaceParametersEnsureACL(dom->conn, vm->def, flags= ) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -10702,7 +10702,7 @@ qemuDomainMemoryStats(virDomainPtr dom, if (virDomainMemoryStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 ret =3D qemuDomainMemoryStatsInternal(vm, stats, nr_stats); @@ -10812,7 +10812,7 @@ qemuDomainMemoryPeek(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -11089,7 +11089,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom, if (virDomainGetBlockInfoEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (!(disk =3D virDomainDiskByName(vm->def, path, false))) { @@ -12679,7 +12679,7 @@ qemuDomainGetJobStatsInternal(virDomainObj *vm, return -1; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) return -1; =20 if 
(virDomainObjCheckActive(vm) < 0) @@ -12858,7 +12858,7 @@ qemuDomainAbortJobFlags(virDomainPtr dom, if (virDomainAbortJobFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_ABORT) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_ABORT) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -12961,7 +12961,7 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom, if (virDomainMigrateSetMaxDowntimeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13010,7 +13010,7 @@ qemuDomainMigrateGetMaxDowntime(virDomainPtr dom, if (virDomainMigrateGetMaxDowntimeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13060,7 +13060,7 @@ qemuDomainMigrateGetCompressionCache(virDomainPtr d= om, if (virDomainMigrateGetCompressionCacheEnsureACL(dom->conn, vm->def) <= 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13109,7 +13109,7 @@ qemuDomainMigrateSetCompressionCache(virDomainPtr d= om, if (virDomainMigrateSetCompressionCacheEnsureACL(dom->conn, vm->def) <= 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13185,7 +13185,7 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13233,7 +13233,7 @@ qemuDomainMigrationGetPostcopyBandwidth(virDomainOb= j *vm, int rc; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -13325,7 +13325,7 @@ qemuDomainMigrateStartPostCopy(virDomainPtr dom, if (virDomainMigrateStartPostCopyEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MIGRATION_OP) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14070,7 +14070,7 @@ qemuDomainQemuMonitorCommandWithFiles(virDomainPtr = domain, if (virDomainQemuMonitorCommandWithFilesEnsureACL(domain->conn, vm->de= f) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14401,7 +14401,7 @@ qemuDomainBlockPullCommon(virDomainObj *vm, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14509,7 +14509,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, if (virDomainBlockJobAbortEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14670,7 +14670,7 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, goto cleanup; =20 =20 - if (qemuDomainObjBeginJob(vm, 
VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14739,7 +14739,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom, if (virDomainBlockJobSetSpeedEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -14931,7 +14931,7 @@ qemuDomainBlockCopyCommon(virDomainObj *vm, return -1; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -15395,7 +15395,7 @@ qemuDomainBlockCommit(virDomainPtr dom, if (virDomainBlockCommitEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -15595,7 +15595,7 @@ qemuDomainOpenGraphics(virDomainPtr dom, if (virDomainOpenGraphicsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -15713,7 +15713,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom, if (qemuSecurityClearSocketLabel(driver->securityManager, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; qemuDomainObjEnterMonitor(vm); ret =3D qemuMonitorOpenGraphics(priv->mon, protocol, pair[1], "graphic= sfd", @@ -15968,7 +15968,7 @@ qemuDomainSetBlockIoTune(virDomainPtr dom, =20 cfg =3D virQEMUDriverGetConfig(driver); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 priv =3D vm->privateData; @@ -16246,7 +16246,7 @@ qemuDomainGetBlockIoTune(virDomainPtr dom, if (virDomainGetBlockIoTuneEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 /* the API check guarantees that only one of the definitions will be s= et */ @@ -16386,7 +16386,7 @@ qemuDomainGetDiskErrors(virDomainPtr dom, if (virDomainGetDiskErrorsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -16461,7 +16461,7 @@ qemuDomainSetMetadata(virDomainPtr dom, if (virDomainSetMetadataEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 ret =3D virDomainObjSetMetadata(vm, type, metadata, key, uri, @@ -16582,7 +16582,7 @@ qemuDomainQueryWakeupSuspendSupport(virDomainObj *v= m, if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_QUERY_CURRENT_MACHINE)) return -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) return -1; =20 if ((ret =3D virDomainObjCheckActive(vm)) < 0) @@ -16713,7 +16713,7 @@ qemuDomainPMWakeup(virDomainPtr dom, if (virDomainPMWakeupEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17064,7 
+17064,7 @@ qemuDomainGetHostnameLease(virDomainObj *vm, size_t i, j; int ret =3D -1; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) @@ -17266,7 +17266,7 @@ qemuDomainSetTime(virDomainPtr dom, if (qemuDomainSetTimeAgent(vm, seconds, nseconds, rtcSync) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -18843,7 +18843,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, if (flags & VIR_CONNECT_GET_ALL_DOMAINS_STATS_NOWAIT) rv =3D qemuDomainObjBeginJobNowait(vm, VIR_JOB_QUERY); else - rv =3D qemuDomainObjBeginJob(vm, VIR_JOB_QUERY); + rv =3D virDomainObjBeginJob(vm, VIR_JOB_QUERY); =20 if (rv =3D=3D 0) domflags |=3D QEMU_DOMAIN_STATS_HAVE_JOB; @@ -19022,7 +19022,7 @@ qemuDomainGetFSInfo(virDomainPtr dom, if ((nfs =3D qemuDomainGetFSInfoAgent(vm, &agentinfo)) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -19310,7 +19310,7 @@ static int qemuDomainRename(virDomainPtr dom, if (virDomainRenameEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjIsActive(vm)) { @@ -19576,7 +19576,7 @@ qemuDomainSetVcpu(virDomainPtr dom, if (virDomainSetVcpuEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -19634,7 +19634,7 @@ qemuDomainSetBlockThreshold(virDomainPtr dom, if (virDomainSetBlockThresholdEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -19806,7 +19806,7 @@ qemuDomainSetLifecycleAction(virDomainPtr dom, if (virDomainSetLifecycleActionEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -19958,7 +19958,7 @@ qemuDomainGetSEVInfo(virDomainObj *vm, =20 virCheckFlags(VIR_TYPED_PARAM_STRING_OKAY, -1); =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) return -1; =20 if (virDomainObjCheckActive(vm) < 0) { @@ -20101,7 +20101,7 @@ qemuDomainSetLaunchSecurityState(virDomainPtr domai= n, else if (rc =3D=3D 1) hasSetaddr =3D true; =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -20506,7 +20506,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom, qemuDomainObjEndAgentJob(vm); =20 if (nfs > 0 || ndisks > 0) { - if (qemuDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -20769,7 +20769,7 @@ qemuDomainStartDirtyRateCalc(virDomainPtr dom, goto cleanup; } =20 - if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if 
(virDomainObjCheckActive(vm) < 0) {
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 2572f47385..ca1f9071bc 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2803,7 +2803,7 @@ qemuMigrationSrcBegin(virConnectPtr conn,
         if (!qemuMigrationJobIsAllowed(vm))
             goto cleanup;

-        if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+        if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
             goto cleanup;
         asyncJob = VIR_ASYNC_JOB_NONE;
     }
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 71b476a32f..6c0181b3ae 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -459,7 +459,7 @@ qemuProcessFakeReboot(void *opaque)

     VIR_DEBUG("vm=%p", vm);
     virObjectLock(vm);
-    if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (!virDomainObjIsActive(vm)) {
@@ -8061,7 +8061,7 @@ qemuProcessBeginStopJob(virDomainObj *vm,
     VIR_DEBUG("waking up all jobs waiting on the domain condition");
     virDomainObjBroadcast(vm);

-    if (qemuDomainObjBeginJob(vm, job) < 0)
+    if (virDomainObjBeginJob(vm, job) < 0)
         goto cleanup;

     ret = 0;
@@ -8702,7 +8702,7 @@ qemuProcessReconnect(void *opaque)
     if (oldjob.asyncJob == VIR_ASYNC_JOB_BACKUP && priv->backup)
         priv->backup->apiFlags = oldjob.apiFlags;

-    if (qemuDomainObjBeginJob(obj, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(obj, VIR_JOB_MODIFY) < 0)
         goto error;
     jobStarted = true;

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 243c18193b..c50e9f3846 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2299,7 +2299,7 @@ qemuSnapshotDelete(virDomainObj *vm,
                   VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY |
                   VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN_ONLY, -1);

-    if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         return -1;

     if (!(snap = qemuSnapObjFromSnapshot(vm, snapshot)))
-- 
2.37.2
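The rename above is mechanical, but it is worth spelling out the discipline every converted call site keeps. The sketch below condenses that pattern into one illustrative handler; virDomainObjBeginJob() and the VIR_JOB_* constants are the series' shared API, while the handler name and the qemuDomainObjEndJob() pairing are stand-ins for whatever entry point and per-driver End helper apply:

/*
 * Minimal sketch of the job discipline shared by the call sites above.
 * exampleDomainOp is illustrative; error paths are abbreviated.
 */
static int
exampleDomainOp(virDomainObj *vm)
{
    int ret = -1;

    /* Serializes against other threads touching the same domain; waits
     * up to VIR_JOB_WAIT_TIME (30 s) and then reports
     * VIR_ERR_OPERATION_TIMEOUT ("cannot acquire state change lock"). */
    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
        return -1;

    if (virDomainObjCheckActive(vm) < 0)
        goto endjob;

    /* ... change VM state or talk to the QEMU monitor ... */
    ret = 0;

 endjob:
    qemuDomainObjEndJob(vm);   /* End side is still per-driver at this point */
    return ret;
}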
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 08/17] libxl: use virDomainObjBeginJob()
Date: Mon, 5 Sep 2022 15:57:06 +0200
Message-Id: <7890c5464b1930cc255b9f73f69cfa5b42c2d7de.1662385930.git.khanicov@redhat.com>

This patch removes libxlDomainObjBeginJob() and replaces it with the
generalized virDomainObjBeginJob().
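The function deleted below is, apart from the logging prefix and one line of bookkeeping, the same code that the LXC and CH patches later in this series delete from their drivers; all three reduce to one wait-and-claim loop that virDomainObjBeginJob() now provides centrally. Condensed from the removed bodies (error reporting elided):

    unsigned long long now;
    unsigned long long then;

    if (virTimeMillisNow(&now) < 0)
        return -1;
    then = now + 1000ull * 30;               /* 30 second job-wait timeout */

    while (obj->job->active) {               /* another thread owns the job */
        if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0)
            goto error;                      /* ETIMEDOUT => operation timeout */
    }

    virDomainObjResetJob(obj->job);
    obj->job->active = job;                  /* claim the job for this thread */
    obj->job->owner = virThreadSelfID();
    obj->job->started = now;                 /* libxl-only bookkeeping */

    return 0;

Centralizing this loop means the timeout value and the error messages can no longer drift apart between drivers.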
Signed-off-by: Kristina Hanicova Reviewed-by: J=C3=A1n Tomko --- src/libxl/libxl_domain.c | 62 ++----------------------------------- src/libxl/libxl_domain.h | 6 ---- src/libxl/libxl_driver.c | 48 ++++++++++++++-------------- src/libxl/libxl_migration.c | 6 ++-- 4 files changed, 29 insertions(+), 93 deletions(-) diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c index 80a362df46..3a1b371054 100644 --- a/src/libxl/libxl_domain.c +++ b/src/libxl/libxl_domain.c @@ -43,64 +43,6 @@ VIR_LOG_INIT("libxl.libxl_domain"); =20 =20 -/* Give up waiting for mutex after 30 seconds */ -#define LIBXL_JOB_WAIT_TIME (1000ull * 30) - -/* - * obj must be locked before calling, libxlDriverPrivate *must NOT be lock= ed - * - * This must be called by anything that will change the VM state - * in any way - * - * Upon successful return, the object will have its ref count increased, - * successful calls must be followed by EndJob eventually - */ -int -libxlDomainObjBeginJob(libxlDriverPrivate *driver G_GNUC_UNUSED, - virDomainObj *obj, - virDomainJob job) -{ - unsigned long long now; - unsigned long long then; - - if (virTimeMillisNow(&now) < 0) - return -1; - then =3D now + LIBXL_JOB_WAIT_TIME; - - while (obj->job->active) { - VIR_DEBUG("Wait normal job condition for starting job: %s", - virDomainJobTypeToString(job)); - if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0) - goto error; - } - - virDomainObjResetJob(obj->job); - - VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job)); - obj->job->active =3D job; - obj->job->owner =3D virThreadSelfID(); - obj->job->started =3D now; - - return 0; - - error: - VIR_WARN("Cannot start job (%s) for domain %s;" - " current job is (%s) owned by (%llu)", - virDomainJobTypeToString(job), - obj->def->name, - virDomainJobTypeToString(obj->job->active), - obj->job->owner); - - if (errno =3D=3D ETIMEDOUT) - virReportError(VIR_ERR_OPERATION_TIMEOUT, - "%s", _("cannot acquire state change lock")); - else - virReportSystemError(errno, - "%s", _("cannot acquire job mutex")); - - return -1; -} - /* * obj must be locked before calling * @@ -460,7 +402,7 @@ libxlDomainShutdownThread(void *opaque) goto cleanup; } =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (xl_reason =3D=3D LIBXL_SHUTDOWN_REASON_POWEROFF) { @@ -589,7 +531,7 @@ libxlDomainDeathThread(void *opaque) goto cleanup; } =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF, VIR_DOMAIN_SHUTOFF_DESTRO= YED); diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h index 9e8804f747..b80552e30a 100644 --- a/src/libxl/libxl_domain.h +++ b/src/libxl/libxl_domain.h @@ -49,12 +49,6 @@ extern const struct libxl_event_hooks ev_hooks; int libxlDomainObjPrivateInitCtx(virDomainObj *vm); =20 -int -libxlDomainObjBeginJob(libxlDriverPrivate *driver, - virDomainObj *obj, - virDomainJob job) - G_GNUC_WARN_UNUSED_RESULT; - void libxlDomainObjEndJob(libxlDriverPrivate *driver, virDomainObj *obj); diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index 0ae1ee95c4..d94430708a 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -326,7 +326,7 @@ libxlAutostartDomain(virDomainObj *vm, virObjectLock(vm); virResetLastError(); =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 
if (vm->autostart && !virDomainObjIsActive(vm) && @@ -1049,7 +1049,7 @@ libxlDomainCreateXML(virConnectPtr conn, const char *= xml, NULL))) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) { + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) { if (!vm->persistent) virDomainObjListRemove(driver->domains, vm); goto cleanup; @@ -1159,7 +1159,7 @@ libxlDomainSuspend(virDomainPtr dom) if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1212,7 +1212,7 @@ libxlDomainResume(virDomainPtr dom) if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1373,7 +1373,7 @@ libxlDomainDestroyFlags(virDomainPtr dom, if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1446,7 +1446,7 @@ libxlDomainPMSuspendForDuration(virDomainPtr dom, if (virDomainPMSuspendForDurationEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1499,7 +1499,7 @@ libxlDomainPMWakeup(virDomainPtr dom, unsigned int fl= ags) if (virDomainPMWakeupEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetState(vm, NULL) !=3D VIR_DOMAIN_PMSUSPENDED) { @@ -1633,7 +1633,7 @@ libxlDomainSetMemoryFlags(virDomainPtr dom, unsigned = long newmem, if (virDomainSetMemoryFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainLiveConfigHelperMethod(cfg->caps, driver->xmlopt, vm, &fl= ags, @@ -1902,7 +1902,7 @@ libxlDomainSaveFlags(virDomainPtr dom, const char *to= , const char *dxml, if (virDomainSaveFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1967,7 +1967,7 @@ libxlDomainRestoreFlags(virConnectPtr conn, const cha= r *from, NULL))) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) { + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) { if (!vm->persistent) virDomainObjListRemove(driver->domains, vm); goto cleanup; @@ -2014,7 +2014,7 @@ libxlDomainCoreDump(virDomainPtr dom, const char *to,= unsigned int flags) if (virDomainCoreDumpEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2103,7 +2103,7 @@ libxlDomainManagedSave(virDomainPtr dom, unsigned int= flags) if (virDomainManagedSaveEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if 
(virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2248,7 +2248,7 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned i= nt nvcpus, if (virDomainSetVcpusFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!virDomainObjIsActive(vm) && (flags & VIR_DOMAIN_VCPU_LIVE)) { @@ -2446,7 +2446,7 @@ libxlDomainPinVcpuFlags(virDomainPtr dom, unsigned in= t vcpu, if (virDomainPinVcpuFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainLiveConfigHelperMethod(cfg->caps, driver->xmlopt, vm, @@ -2777,7 +2777,7 @@ libxlDomainCreateWithFlags(virDomainPtr dom, if (virDomainCreateWithFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjIsActive(vm)) { @@ -4095,7 +4095,7 @@ libxlDomainAttachDeviceFlags(virDomainPtr dom, const = char *xml, if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -4189,7 +4189,7 @@ libxlDomainDetachDeviceFlags(virDomainPtr dom, const = char *xml, if (virDomainDetachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjUpdateModificationImpact(vm, &flags) < 0) @@ -4477,7 +4477,7 @@ libxlDomainSetAutostart(virDomainPtr dom, int autosta= rt) if (virDomainSetAutostartEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!vm->persistent) { @@ -4683,7 +4683,7 @@ libxlDomainSetSchedulerParametersFlags(virDomainPtr d= om, if (virDomainSetSchedulerParametersFlagsEnsureACL(dom->conn, vm->def, = flags) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -5000,7 +5000,7 @@ libxlDomainInterfaceStats(virDomainPtr dom, if (virDomainInterfaceStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -5168,7 +5168,7 @@ libxlDomainMemoryStats(virDomainPtr dom, if (virDomainMemoryStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -5522,7 +5522,7 @@ libxlDomainBlockStats(virDomainPtr dom, if (virDomainBlockStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -5572,7 +5572,7 @@ libxlDomainBlockStatsFlags(virDomainPtr dom, if 
(virDomainBlockStatsFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -6368,7 +6368,7 @@ libxlDomainSetMetadata(virDomainPtr dom,
     if (virDomainSetMetadataEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     ret = virDomainObjSetMetadata(vm, type, metadata, key, uri,
diff --git a/src/libxl/libxl_migration.c b/src/libxl/libxl_migration.c
index 800a6b0365..90cf12ae00 100644
--- a/src/libxl/libxl_migration.c
+++ b/src/libxl/libxl_migration.c
@@ -383,7 +383,7 @@ libxlDomainMigrationSrcBegin(virConnectPtr conn,
      * terminated in the confirm phase. Errors in the begin or perform
      * phase will also terminate the job.
      */
-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (!(mig = libxlMigrationCookieNew(vm)))
@@ -553,7 +553,7 @@ libxlDomainMigrationDstPrepareTunnel3(virConnectPtr dconn,
      * Unless an error is encountered in this function, the job will
      * be terminated in the finish phase.
      */
-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto error;

     priv = vm->privateData;
@@ -662,7 +662,7 @@ libxlDomainMigrationDstPrepare(virConnectPtr dconn,
      * Unless an error is encountered in this function, the job will
      * be terminated in the finish phase.
      */
-    if (libxlDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto error;

     priv = vm->privateData;
-- 
2.37.2
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 09/17] LXC: use virDomainObjBeginJob()
Date: Mon, 5 Sep 2022 15:57:07 +0200

This patch removes virLXCDomainObjBeginJob() and replaces it with a call
to the generalized virDomainObjBeginJob().
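As in the libxl patch, the conversion is mechanical at each call site: the per-driver wrapper took a virLXCDriver pointer it never used (it is marked G_GNUC_UNUSED in the code removed below), so dropping that argument is the only visible change. For example:

    /* before */
    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
        goto cleanup;

    /* after: the generalized helper only needs the locked domain object */
    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
        goto cleanup;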
Signed-off-by: Kristina Hanicova Reviewed-by: J=C3=A1n Tomko --- src/lxc/lxc_domain.c | 57 -------------------------------------------- src/lxc/lxc_domain.h | 6 ----- src/lxc/lxc_driver.c | 46 +++++++++++++++++------------------ 3 files changed, 23 insertions(+), 86 deletions(-) diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c index 3dddc0a7d4..aad9dae694 100644 --- a/src/lxc/lxc_domain.c +++ b/src/lxc/lxc_domain.c @@ -35,63 +35,6 @@ VIR_LOG_INIT("lxc.lxc_domain"); =20 =20 -/* Give up waiting for mutex after 30 seconds */ -#define LXC_JOB_WAIT_TIME (1000ull * 30) - -/* - * obj must be locked before calling, virLXCDriver *must NOT be locked - * - * This must be called by anything that will change the VM state - * in any way - * - * Upon successful return, the object will have its ref count increased. - * Successful calls must be followed by EndJob eventually. - */ -int -virLXCDomainObjBeginJob(virLXCDriver *driver G_GNUC_UNUSED, - virDomainObj *obj, - virDomainJob job) -{ - unsigned long long now; - unsigned long long then; - - if (virTimeMillisNow(&now) < 0) - return -1; - then =3D now + LXC_JOB_WAIT_TIME; - - while (obj->job->active) { - VIR_DEBUG("Wait normal job condition for starting job: %s", - virDomainJobTypeToString(job)); - if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0) - goto error; - } - - virDomainObjResetJob(obj->job); - - VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job)); - obj->job->active =3D job; - obj->job->owner =3D virThreadSelfID(); - - return 0; - - error: - VIR_WARN("Cannot start job (%s) for domain %s;" - " current job is (%s) owned by (%llu)", - virDomainJobTypeToString(job), - obj->def->name, - virDomainJobTypeToString(obj->job->active), - obj->job->owner); - - if (errno =3D=3D ETIMEDOUT) - virReportError(VIR_ERR_OPERATION_TIMEOUT, - "%s", _("cannot acquire state change lock")); - else - virReportSystemError(errno, - "%s", _("cannot acquire job mutex")); - return -1; -} - - /* * obj must be locked and have a reference before calling * diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h index 8cbcc0818c..e7b19fb2ff 100644 --- a/src/lxc/lxc_domain.h +++ b/src/lxc/lxc_domain.h @@ -72,12 +72,6 @@ extern virXMLNamespace virLXCDriverDomainXMLNamespace; extern virDomainXMLPrivateDataCallbacks virLXCDriverPrivateDataCallbacks; extern virDomainDefParserConfig virLXCDriverDomainDefParserConfig; =20 -int -virLXCDomainObjBeginJob(virLXCDriver *driver, - virDomainObj *obj, - virDomainJob job) - G_GNUC_WARN_UNUSED_RESULT; - void virLXCDomainObjEndJob(virLXCDriver *driver, virDomainObj *obj); diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c index f8e7010210..a115313b3f 100644 --- a/src/lxc/lxc_driver.c +++ b/src/lxc/lxc_driver.c @@ -648,7 +648,7 @@ static int lxcDomainSetMemoryFlags(virDomainPtr dom, un= signed long newmem, if (virDomainSetMemoryFlagsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -762,7 +762,7 @@ lxcDomainSetMemoryParameters(virDomainPtr dom, if (virDomainSetMemoryParametersEnsureACL(dom->conn, vm->def, flags) <= 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 /* QEMU and LXC implementation are identical */ @@ -983,7 +983,7 @@ static int lxcDomainCreateWithFiles(virDomainPtr dom, goto cleanup; } =20 
- if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjIsActive(vm)) { @@ -1105,7 +1105,7 @@ lxcDomainCreateXMLWithFiles(virConnectPtr conn, NULL))) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) { + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) { if (!vm->persistent) virDomainObjListRemove(driver->domains, vm); goto cleanup; @@ -1350,7 +1350,7 @@ lxcDomainDestroyFlags(virDomainPtr dom, if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_DESTROY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_DESTROY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -1816,7 +1816,7 @@ lxcDomainSetSchedulerParametersFlags(virDomainPtr dom, if (!(caps =3D virLXCDriverGetCapabilities(driver, false))) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -2035,7 +2035,7 @@ lxcDomainBlockStats(virDomainPtr dom, if (virDomainBlockStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2118,7 +2118,7 @@ lxcDomainBlockStatsFlags(virDomainPtr dom, if (virDomainBlockStatsFlagsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2254,7 +2254,7 @@ lxcDomainSetBlkioParameters(virDomainPtr dom, if (virDomainSetBlkioParametersEnsureACL(dom->conn, vm->def, flags) < = 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) @@ -2396,7 +2396,7 @@ lxcDomainInterfaceStats(virDomainPtr dom, if (virDomainInterfaceStatsEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2456,7 +2456,7 @@ static int lxcDomainSetAutostart(virDomainPtr dom, if (virDomainSetAutostartEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (!vm->persistent) { @@ -2607,7 +2607,7 @@ static int lxcDomainSuspend(virDomainPtr dom) if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2657,7 +2657,7 @@ static int lxcDomainResume(virDomainPtr dom) if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0) goto cleanup; =20 if (virDomainObjCheckActive(vm) < 0) @@ -2786,7 +2786,7 @@ lxcDomainSendProcessSignal(virDomainPtr dom, if (virDomainSendProcessSignalEnsureACL(dom->conn, vm->def) < 0) goto cleanup; =20 - if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0) + if 
(virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2871,7 +2871,7 @@ lxcDomainShutdownFlags(virDomainPtr dom,
     if (virDomainShutdownFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -2947,7 +2947,7 @@ lxcDomainReboot(virDomainPtr dom,
     if (virDomainRebootEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -4278,7 +4278,7 @@ static int lxcDomainAttachDeviceFlags(virDomainPtr dom,
     if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4383,7 +4383,7 @@ static int lxcDomainUpdateDeviceFlags(virDomainPtr dom,
     if (virDomainUpdateDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4446,7 +4446,7 @@ static int lxcDomainDetachDeviceFlags(virDomainPtr dom,
     if (virDomainDetachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjUpdateModificationImpact(vm, &flags) < 0)
@@ -4546,7 +4546,7 @@ static int lxcDomainLxcOpenNamespace(virDomainPtr dom,
     if (virDomainLxcOpenNamespaceEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -4629,7 +4629,7 @@ lxcDomainMemoryStats(virDomainPtr dom,
     if (virDomainMemoryStatsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
@@ -4799,7 +4799,7 @@ lxcDomainSetMetadata(virDomainPtr dom,
     if (virDomainSetMetadataEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;

     ret = virDomainObjSetMetadata(vm, type, metadata, key, uri,
@@ -4905,7 +4905,7 @@ lxcDomainGetHostname(virDomainPtr dom,
     if (virDomainGetHostnameEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;

-    if (virLXCDomainObjBeginJob(driver, vm, VIR_JOB_QUERY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjCheckActive(vm) < 0)
-- 
2.37.2
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 10/17] CH: use virDomainObjBeginJob()
Date: Mon, 5 Sep 2022 15:57:08 +0200

This patch removes virCHDomainObjBeginJob() and replaces it with a call
to the generalized virDomainObjBeginJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/ch/ch_domain.c | 51 +---------------------------------------------
 src/ch/ch_domain.h |  4 ----
 src/ch/ch_driver.c | 20 +++++++++---------
 3 files changed, 11 insertions(+), 64 deletions(-)

diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
index 9ddf9a8584..c592c6ffbb 100644
--- a/src/ch/ch_domain.c
+++ b/src/ch/ch_domain.c
@@ -32,60 +32,11 @@

 VIR_LOG_INIT("ch.ch_domain");

-/*
- * obj must be locked before calling, virCHDriver must NOT be locked
- *
- * This must be called by anything that will change the VM state
- * in any way
- *
- * Upon successful return, the object will have its ref count increased.
- * Successful calls must be followed by EndJob eventually.
- */
-int
-virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
-{
-    unsigned long long now;
-    unsigned long long then;
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-    then = now + CH_JOB_WAIT_TIME;
-
-    while (obj->job->active) {
-        VIR_DEBUG("Wait normal job condition for starting job: %s",
-                  virDomainJobTypeToString(job));
-        if (virCondWaitUntil(&obj->job->cond, &obj->parent.lock, then) < 0) {
-            VIR_WARN("Cannot start job (%s) for domain %s;"
-                     " current job is (%s) owned by (%llu)",
-                     virDomainJobTypeToString(job),
-                     obj->def->name,
-                     virDomainJobTypeToString(obj->job->active),
-                     obj->job->owner);
-
-            if (errno == ETIMEDOUT)
-                virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                               "%s", _("cannot acquire state change lock"));
-            else
-                virReportSystemError(errno,
-                                     "%s", _("cannot acquire job mutex"));
-            return -1;
-        }
-    }
-
-    virDomainObjResetJob(obj->job);
-
-    VIR_DEBUG("Starting job: %s", virDomainJobTypeToString(job));
-    obj->job->active = job;
-    obj->job->owner = virThreadSelfID();
-
-    return 0;
-}
-
 /*
  * obj must be locked and have a reference before calling
  *
  * To be called after completing the work associated with the
- * earlier virCHDomainBeginJob() call
+ * earlier virDomainObjBeginJob() call
  */
 void
 virCHDomainObjEndJob(virDomainObj *obj)
diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h
index c7dfde601e..076043f772 100644
--- a/src/ch/ch_domain.h
+++ b/src/ch/ch_domain.h
@@ -60,10 +60,6 @@ struct _virCHDomainVcpuPrivate {
 extern virDomainXMLPrivateDataCallbacks virCHDriverPrivateDataCallbacks;
 extern virDomainDefParserConfig virCHDriverDomainDefParserConfig;

-int
-virCHDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
-    G_GNUC_WARN_UNUSED_RESULT;
-
 void
 virCHDomainObjEndJob(virDomainObj *obj);

diff --git a/src/ch/ch_driver.c b/src/ch/ch_driver.c
index bde148075d..d81bddcc23 100644
--- a/src/ch/ch_driver.c
+++ b/src/ch/ch_driver.c
+++ b/src/ch/ch_driver.c
@@ -217,7 +217,7 @@ chDomainCreateXML(virConnectPtr conn,
                                        NULL)))
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED) < 0)
@@ -251,7 +251,7 @@ chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
     if (virDomainCreateWithFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     ret = virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED);
@@ -390,7 +390,7 @@ chDomainShutdownFlags(virDomainPtr dom,
     if (virDomainShutdownFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -446,7 +446,7 @@ chDomainReboot(virDomainPtr dom, unsigned int flags)
     if (virDomainRebootEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -495,7 +495,7 @@ chDomainSuspend(virDomainPtr dom)
     if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -540,7 +540,7 @@ chDomainResume(virDomainPtr dom)
     if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -594,7 +594,7 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
     if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_DESTROY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_DESTROY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -1218,7 +1218,7 @@ chDomainPinVcpuFlags(virDomainPtr dom,
     if (virDomainPinVcpuFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
@@ -1355,7 +1355,7 @@ chDomainPinEmulator(virDomainPtr dom,
     if (virDomainPinEmulatorEnsureACL(dom->conn, vm->def, flags) < 0)
         goto cleanup;
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
@@ -1626,7 +1626,7 @@ chDomainSetNumaParameters(virDomainPtr dom,
         }
     }
 
-    if (virCHDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
-- 
2.37.2
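The hunks above are mechanical, so the net effect is easiest to see in one
piece. The sketch below is not part of the patch; chDomainExampleOp and its
body are hypothetical, and only the two job calls are taken from the diff.
At this point in the series the begin side is the shared helper, while the
end side is still the driver-specific virCHDomainObjEndJob():

static int
chDomainExampleOp(virDomainObj *vm)
{
    int ret = -1;

    /* Waits for any job already running on @vm and reports the usual
     * "cannot acquire state change lock" error once the wait times out. */
    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
        return -1;

    if (virDomainObjCheckActive(vm) < 0)
        goto endjob;

    /* ... perform the actual state change on the domain ... */
    ret = 0;

 endjob:
    virCHDomainObjEndJob(vm);    /* still per-driver until patch 14/17 */
    return ret;
}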
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To:
libvir-list@redhat.com
Subject: [PATCH v2 11/17] qemu: use virDomainObjEndJob()
Date: Mon, 5 Sep 2022 15:57:09 +0200
Message-Id: <7bf769c1141f9a985a9901bd4f89b4d2ce58400d.1662385930.git.khanicov@redhat.com>
In-Reply-To: References: MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This patch moves qemuDomainObjEndJob() into src/conf/virdomainjob as
the universal virDomainObjEndJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 docs/kbase/internals/qemu-threads.rst |   6 +-
 src/conf/virdomainjob.c               |  28 +++++
 src/conf/virdomainjob.h               |   2 +
 src/libvirt_private.syms              |   1 +
 src/qemu/qemu_checkpoint.c            |   6 +-
 src/qemu/qemu_domain.c                |   4 +-
 src/qemu/qemu_domainjob.c             |  26 -----
 src/qemu/qemu_domainjob.h             |   1 -
 src/qemu/qemu_driver.c                | 156 +++++++++++++-------------
 src/qemu/qemu_migration.c             |   2 +-
 src/qemu/qemu_process.c               |   8 +-
 src/qemu/qemu_snapshot.c              |   2 +-
 12 files changed, 123 insertions(+), 119 deletions(-)

diff --git a/docs/kbase/internals/qemu-threads.rst b/docs/kbase/internals/qemu-threads.rst
index 00340bb732..22d12f61a5 100644
--- a/docs/kbase/internals/qemu-threads.rst
+++ b/docs/kbase/internals/qemu-threads.rst
@@ -123,7 +123,7 @@ To acquire the normal job condition
     isn't
   - Sets ``job.active`` to the job type
 
-  ``qemuDomainObjEndJob()``
+  ``virDomainObjEndJob()``
   - Sets job.active to 0
   - Signals on job.cond condition
 
@@ -218,7 +218,7 @@ Design patterns
 
      ...do work...
 
-     qemuDomainObjEndJob(obj);
+     virDomainObjEndJob(obj);
 
      virDomainObjEndAPI(&obj);
 
@@ -242,7 +242,7 @@ Design patterns
 
      ...do final work...
 
-     qemuDomainObjEndJob(obj);
+     virDomainObjEndJob(obj);
      virDomainObjEndAPI(&obj);
 
 
diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index 033ea5c517..85486f612a 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -527,3 +527,31 @@ int virDomainObjBeginJob(virDomainObj *obj,
         return -1;
     return 0;
 }
+
+/*
+ * obj must be locked and have a reference before calling
+ *
+ * To be called after completing the work associated with the
+ * earlier virDomainBeginJob() call
+ */
+void
+virDomainObjEndJob(virDomainObj *obj)
+{
+    virDomainJob job = obj->job->active;
+
+    obj->job->jobsQueued--;
+
+    VIR_DEBUG("Stopping job: %s (async=%s vm=%p name=%s)",
+              virDomainJobTypeToString(job),
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
+              obj, obj->def->name);
+
+    virDomainObjResetJob(obj->job);
+
+    if (virDomainTrackJob(job) &&
+        obj->job->cb->saveStatusPrivate)
+        obj->job->cb->saveStatusPrivate(obj);
+    /* We indeed need to wake up ALL threads waiting because
+     * grabbing a job requires checking more variables.
+     */
+    virCondBroadcast(&obj->job->cond);
+}
diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h
index adee679e72..7a06c384f3 100644
--- a/src/conf/virdomainjob.h
+++ b/src/conf/virdomainjob.h
@@ -249,3 +249,5 @@ int virDomainObjBeginJobInternal(virDomainObj *obj,
 int virDomainObjBeginJob(virDomainObj *obj, virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;
+
+void virDomainObjEndJob(virDomainObj *obj);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 4fc2ec2cca..9d2b26bfa2 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1191,6 +1191,7 @@ virDomainObjBeginJob;
 virDomainObjBeginJobInternal;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
+virDomainObjEndJob;
 virDomainObjInitJob;
 virDomainObjPreserveJob;
 virDomainObjResetAgentJob;
diff --git a/src/qemu/qemu_checkpoint.c b/src/qemu/qemu_checkpoint.c
index c6fb3d1f97..8580158d66 100644
--- a/src/qemu/qemu_checkpoint.c
+++ b/src/qemu/qemu_checkpoint.c
@@ -626,7 +626,7 @@ qemuCheckpointCreateXML(virDomainPtr domain,
     checkpoint = virGetDomainCheckpoint(domain, chk->def->name);
 
  endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
     return checkpoint;
 }
@@ -764,7 +764,7 @@ qemuCheckpointGetXMLDescUpdateSize(virDomainObj *vm,
     ret = 0;
 
  endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }
 
@@ -917,6 +917,6 @@ qemuCheckpointDelete(virDomainObj *vm,
     }
 
  endjob:
-    qemuDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
     return ret;
 }
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index a4640c5c13..4dfb978599 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -5973,7 +5973,7 @@ qemuDomainObjEnterMonitorInternal(virDomainObj *obj,
         if (!virDomainObjIsActive(obj)) {
             virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                            _("domain is no longer running"));
-            qemuDomainObjEndJob(obj);
+            virDomainObjEndJob(obj);
             return -1;
         }
     } else if (obj->job->asyncOwner == virThreadSelfID()) {
@@ -6023,7 +6023,7 @@ qemuDomainObjExitMonitor(virDomainObj *obj)
     priv->mon = NULL;
 
     if (obj->job->active == VIR_JOB_ASYNC_NESTED)
-        qemuDomainObjEndJob(obj);
+        virDomainObjEndJob(obj);
 }
 
 void qemuDomainObjEnterMonitor(virDomainObj *obj)
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 13e1f332dc..a8baf096e5 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -730,32 +730,6 @@ qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                         VIR_ASYNC_JOB_NONE, true);
 }
 
-/*
- * obj must be locked and have a reference before calling
- *
- * To be called after completing the work associated with the
- * earlier qemuDomainBeginJob() call
- */
-void
-qemuDomainObjEndJob(virDomainObj *obj)
-{
-    virDomainJob job = obj->job->active;
-
-    obj->job->jobsQueued--;
-
-    VIR_DEBUG("Stopping job: %s (async=%s vm=%p name=%s)",
-              virDomainJobTypeToString(job),
-              virDomainAsyncJobTypeToString(obj->job->asyncJob),
-              obj, obj->def->name);
-
-    virDomainObjResetJob(obj->job);
-    if (virDomainTrackJob(job))
-        qemuDomainSaveStatus(obj);
-    /* We indeed need to wake up ALL threads waiting because
-     * grabbing a job requires checking more variables.
*/ - virCondBroadcast(&obj->job->cond); -} - void qemuDomainObjEndAgentJob(virDomainObj *obj) { diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 66f18483fe..918b74748b 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -84,7 +84,6 @@ int qemuDomainObjBeginJobNowait(virDomainObj *obj, virDomainJob job) G_GNUC_WARN_UNUSED_RESULT; =20 -void qemuDomainObjEndJob(virDomainObj *obj); void qemuDomainObjEndAgentJob(virDomainObj *obj); void qemuDomainObjEndAsyncJob(virDomainObj *obj); void qemuDomainObjAbortAsyncJob(virDomainObj *obj); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 5f36113a9c..65f9ce6bca 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -1707,7 +1707,7 @@ static int qemuDomainSuspend(virDomainPtr dom) ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1763,7 +1763,7 @@ static int qemuDomainResume(virDomainPtr dom) ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1828,7 +1828,7 @@ qemuDomainShutdownFlagsMonitor(virDomainObj *vm, qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); return ret; } =20 @@ -1948,7 +1948,7 @@ qemuDomainRebootMonitor(virDomainObj *vm, qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); return ret; } =20 @@ -2041,7 +2041,7 @@ qemuDomainReset(virDomainPtr dom, unsigned int flags) virDomainObjSetState(vm, VIR_DOMAIN_PAUSED, VIR_DOMAIN_PAUSED_CRAS= HED); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -2118,7 +2118,7 @@ qemuDomainDestroyFlags(virDomainPtr dom, endjob: if (ret =3D=3D 0) qemuDomainRemoveInactive(driver, vm); - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -2296,7 +2296,7 @@ static int qemuDomainSetMemoryFlags(virDomainPtr dom,= unsigned long newmem, =20 ret =3D 0; endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -2379,7 +2379,7 @@ static int qemuDomainSetMemoryStatsPeriod(virDomainPt= r dom, int period, =20 ret =3D 0; endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -2413,7 +2413,7 @@ static int qemuDomainInjectNMI(virDomainPtr domain, u= nsigned int flags) qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -2471,7 +2471,7 @@ static int qemuDomainSendKey(virDomainPtr domain, qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -3400,7 +3400,7 @@ qemuDomainScreenshot(virDomainPtr dom, unlink(tmp); } =20 - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -3624,7 +3624,7 @@ processDeviceDeletedEvent(virQEMUDriver *driver, qemuDomainSaveStatus(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); } =20 =20 @@ -3920,7 +3920,7 @@ processNicRxFilterChangedEvent(virDomainObj *vm, } =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virNetDevRxFilterFree(hostFilter); @@ -4003,7 +4003,7 @@ processSerialChangedEvent(virQEMUDriver *driver, } =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); } =20 =20 @@ -4022,7 +4022,7 @@ processJobStatusChangeEvent(virDomainObj *vm, 
qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); } =20 =20 @@ -4068,7 +4068,7 @@ processMonitorEOFEvent(virQEMUDriver *driver, =20 endjob: qemuDomainRemoveInactive(driver, vm); - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); } =20 =20 @@ -4182,7 +4182,7 @@ processMemoryDeviceSizeChange(virQEMUDriver *driver, mem->currentsiz= e); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); virObjectEventStateQueue(driver->domainEventState, event); } =20 @@ -4390,7 +4390,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, if (useAgent) qemuDomainObjEndAgentJob(vm); else - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -4546,7 +4546,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -4694,7 +4694,7 @@ qemuDomainPinEmulator(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virObjectEventStateQueue(driver->domainEventState, event); @@ -4936,7 +4936,7 @@ qemuDomainGetIOThreadsLive(virDomainObj *vm, ret =3D niothreads; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: if (info_ret) { @@ -5100,7 +5100,7 @@ qemuDomainPinIOThread(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virObjectEventStateQueue(driver->domainEventState, event); @@ -5644,7 +5644,7 @@ qemuDomainChgIOThread(virQEMUDriver *driver, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 return ret; } @@ -6831,7 +6831,7 @@ qemuDomainUndefineFlags(virDomainPtr dom, =20 ret =3D 0; endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -8006,7 +8006,7 @@ qemuDomainAttachDeviceFlags(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -8109,7 +8109,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr d= om, } =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: if (dev !=3D dev_copy) @@ -8291,7 +8291,7 @@ qemuDomainDetachDeviceFlags(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -8326,7 +8326,7 @@ qemuDomainDetachDeviceAlias(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -8425,7 +8425,7 @@ static int qemuDomainSetAutostart(virDomainPtr dom, vm->autostart =3D autostart; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); } ret =3D 0; =20 @@ -8565,7 +8565,7 @@ qemuDomainSetBlkioParameters(virDomainPtr dom, } =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -8737,7 +8737,7 @@ qemuDomainSetMemoryParameters(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -9018,7 +9018,7 @@ qemuDomainSetNumaParameters(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -9208,7 +9208,7 @@ qemuDomainSetPerfEvents(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -9269,7 +9269,7 @@ qemuDomainGetPerfEvents(virDomainPtr dom, ret 
=3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -9649,7 +9649,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr do= m, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -9969,7 +9969,7 @@ qemuDomainBlockResize(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -10143,7 +10143,7 @@ qemuDomainBlockStats(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -10230,7 +10230,7 @@ qemuDomainBlockStatsFlags(virDomainPtr dom, *nparams =3D nstats; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: VIR_FREE(blockstats); @@ -10533,7 +10533,7 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -10707,7 +10707,7 @@ qemuDomainMemoryStats(virDomainPtr dom, =20 ret =3D qemuDomainMemoryStatsInternal(vm, stats, nr_stats); =20 - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -10854,7 +10854,7 @@ qemuDomainMemoryPeek(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: VIR_FORCE_CLOSE(fd); @@ -11167,7 +11167,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); cleanup: VIR_FREE(entry); virDomainObjEndAPI(&vm); @@ -12717,7 +12717,7 @@ qemuDomainGetJobStatsInternal(virDomainObj *vm, ret =3D 0; =20 cleanup: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); return ret; } =20 @@ -12929,7 +12929,7 @@ qemuDomainAbortJobFlags(virDomainPtr dom, } =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -12984,7 +12984,7 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -13035,7 +13035,7 @@ qemuDomainMigrateGetMaxDowntime(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -13085,7 +13085,7 @@ qemuDomainMigrateGetCompressionCache(virDomainPtr d= om, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -13138,7 +13138,7 @@ qemuDomainMigrateSetCompressionCache(virDomainPtr d= om, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -13216,7 +13216,7 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -13269,7 +13269,7 @@ qemuDomainMigrationGetPostcopyBandwidth(virDomainOb= j *vm, ret =3D 0; =20 cleanup: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); return ret; } =20 @@ -13353,7 +13353,7 @@ qemuDomainMigrateStartPostCopy(virDomainPtr dom, qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -14087,7 +14087,7 @@ qemuDomainQemuMonitorCommandWithFiles(virDomainPtr = domain, qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -14478,7 +14478,7 @@ 
qemuDomainBlockPullCommon(virDomainObj *vm, qemuBlockJobStarted(job, vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: qemuBlockJobStartupFinalize(vm, job); @@ -14585,7 +14585,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, endjob: if (job && !async) qemuBlockJobSyncEnd(vm, job, VIR_ASYNC_JOB_NONE); - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -14700,7 +14700,7 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, ret =3D 1; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -14761,7 +14761,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom, qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -15183,7 +15183,7 @@ qemuDomainBlockCopyCommon(virDomainObj *vm, if (need_unlink && virStorageSourceUnlink(mirror) < 0) VIR_WARN("%s", _("unable to remove just-created copy target")); virStorageSourceDeinit(mirror); - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); qemuBlockJobStartupFinalize(vm, job); =20 return ret; @@ -15568,7 +15568,7 @@ qemuDomainBlockCommit(virDomainPtr dom, virErrorRestore(&orig_err); } qemuBlockJobStartupFinalize(vm, job); - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -15642,7 +15642,7 @@ qemuDomainOpenGraphics(virDomainPtr dom, qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -15719,7 +15719,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom, ret =3D qemuMonitorOpenGraphics(priv->mon, protocol, pair[1], "graphic= sfd", (flags & VIR_DOMAIN_OPEN_GRAPHICS_SKIPAU= TH)); qemuDomainObjExitMonitor(vm); - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); if (ret < 0) goto cleanup; =20 @@ -16202,7 +16202,7 @@ qemuDomainSetBlockIoTune(virDomainPtr dom, =20 ret =3D 0; endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: VIR_FREE(info.group_name); @@ -16355,7 +16355,7 @@ qemuDomainGetBlockIoTune(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: VIR_FREE(reply.group_name); @@ -16426,7 +16426,7 @@ qemuDomainGetDiskErrors(virDomainPtr dom, ret =3D n; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -16474,7 +16474,7 @@ qemuDomainSetMetadata(virDomainPtr dom, virObjectEventStateQueue(driver->domainEventState, ev); } =20 - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -16591,7 +16591,7 @@ qemuDomainQueryWakeupSuspendSupport(virDomainObj *v= m, ret =3D qemuDomainProbeQMPCurrentMachine(vm, wakeupSupported); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); return ret; } =20 @@ -16726,7 +16726,7 @@ qemuDomainPMWakeup(virDomainPtr dom, qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -17106,7 +17106,7 @@ qemuDomainGetHostnameLease(virDomainObj *vm, =20 ret =3D 0; endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); return ret; } =20 @@ -17285,7 +17285,7 @@ qemuDomainSetTime(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -18853,7 +18853,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, rc =3D qemuDomainGetStats(conn, vm, requestedStats, &tmp, domflags= ); =20 if (HAVE_JOB(domflags)) 
- qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 virObjectUnlock(vm); =20 @@ -19031,7 +19031,7 @@ qemuDomainGetFSInfo(virDomainPtr dom, ret =3D virDomainFSInfoFormat(agentinfo, nfs, vm->def, info); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: if (agentinfo) { @@ -19345,7 +19345,7 @@ static int qemuDomainRename(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -19603,7 +19603,7 @@ qemuDomainSetVcpu(virDomainPtr dom, ret =3D qemuDomainSetVcpuInternal(driver, vm, def, persistentDef, map,= !!state); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -19664,7 +19664,7 @@ qemuDomainSetBlockThreshold(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -19846,7 +19846,7 @@ qemuDomainSetLifecycleAction(virDomainPtr dom, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -20005,7 +20005,7 @@ qemuDomainGetSEVInfo(virDomainObj *vm, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); return ret; } =20 @@ -20130,7 +20130,7 @@ qemuDomainSetLaunchSecurityState(virDomainPtr domai= n, ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -20521,7 +20521,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom, qemuAgentDiskInfoFormatParams(agentdiskinfo, ndisks, vm->def, = params, nparams, &maxparams); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); } =20 if (nifaces > 0) { @@ -20784,7 +20784,7 @@ qemuDomainStartDirtyRateCalc(virDomainPtr dom, qemuDomainObjExitMonitor(vm); =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index ca1f9071bc..38c2f6fdfd 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -2844,7 +2844,7 @@ qemuMigrationSrcBegin(virConnectPtr conn, else qemuMigrationJobFinish(vm); } else { - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); } =20 cleanup: diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 6c0181b3ae..94429bf3f8 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -492,7 +492,7 @@ qemuProcessFakeReboot(void *opaque) ret =3D 0; =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: priv->pausedShutdown =3D false; @@ -8411,7 +8411,7 @@ void qemuProcessStop(virQEMUDriver *driver, =20 endjob: if (asyncJob !=3D VIR_ASYNC_JOB_NONE) - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 cleanup: virErrorRestore(&orig_err); @@ -8453,7 +8453,7 @@ qemuProcessAutoDestroy(virDomainObj *dom, =20 qemuDomainRemoveInactive(driver, dom); =20 - qemuDomainObjEndJob(dom); + virDomainObjEndJob(dom); =20 virObjectEventStateQueue(driver->domainEventState, event); } @@ -8915,7 +8915,7 @@ qemuProcessReconnect(void *opaque) =20 cleanup: if (jobStarted) - qemuDomainObjEndJob(obj); + virDomainObjEndJob(obj); if (!virDomainObjIsActive(obj)) qemuDomainRemoveInactive(driver, obj); virDomainObjEndAPI(&obj); diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index c50e9f3846..afed0f0e28 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c @@ -2373,7 +2373,7 @@ qemuSnapshotDelete(virDomainObj *vm, } =20 endjob: - qemuDomainObjEndJob(vm); + virDomainObjEndJob(vm); =20 
     return ret;
 }
-- 
2.37.2
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 12/17] libxl: use virDomainObjEndJob()
Date: Mon, 5 Sep 2022 15:57:10 +0200
Message-Id: In-Reply-To: References: MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This patch removes libxlDomainObjEndJob() and replaces it with a call
to the generalized virDomainObjEndJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/libxl/libxl_domain.c    | 27 ++------------------
 src/libxl/libxl_domain.h    |  4 ---
 src/libxl/libxl_driver.c    | 51 +++++++++++++++++--------------------
 src/libxl/libxl_migration.c | 14 +++++-----
 4 files changed, 33 insertions(+), 63 deletions(-)

diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 3a1b371054..2d53250895 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -43,29 +43,6 @@ VIR_LOG_INIT("libxl.libxl_domain");
 
 
-/*
- * obj must be locked before calling
- *
- * To be called after completing the work associated with the
- * earlier libxlDomainBeginJob() call
- *
- * Returns true if the remaining reference count on obj is
- * non-zero, false if the reference count has dropped to zero
- * and obj is disposed.
- */ -void -libxlDomainObjEndJob(libxlDriverPrivate *driver G_GNUC_UNUSED, - virDomainObj *obj) -{ - virDomainJob job =3D obj->job->active; - - VIR_DEBUG("Stopping job: %s", - virDomainJobTypeToString(job)); - - virDomainObjResetJob(obj->job); - virCondSignal(&obj->job->cond); -} - int libxlDomainJobGetTimeElapsed(virDomainJobObj *job, unsigned long long *tim= eElapsed) { @@ -505,7 +482,7 @@ libxlDomainShutdownThread(void *opaque) } =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -541,7 +518,7 @@ libxlDomainDeathThread(void *opaque) libxlDomainCleanup(driver, vm); if (!vm->persistent) virDomainObjListRemove(driver->domains, vm); - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); virObjectEventStateQueue(driver->domainEventState, dom_event); =20 cleanup: diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h index b80552e30a..94b693e477 100644 --- a/src/libxl/libxl_domain.h +++ b/src/libxl/libxl_domain.h @@ -49,10 +49,6 @@ extern const struct libxl_event_hooks ev_hooks; int libxlDomainObjPrivateInitCtx(virDomainObj *vm); =20 -void -libxlDomainObjEndJob(libxlDriverPrivate *driver, - virDomainObj *obj); - int libxlDomainJobGetTimeElapsed(virDomainJobObj *job, unsigned long long *timeElapsed); diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index d94430708a..79af2f4441 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -340,7 +340,7 @@ libxlAutostartDomain(virDomainObj *vm, ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); cleanup: virDomainObjEndAPI(&vm); =20 @@ -1065,7 +1065,7 @@ libxlDomainCreateXML(virConnectPtr conn, const char *= xml, dom =3D virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id); =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1185,7 +1185,7 @@ libxlDomainSuspend(virDomainPtr dom) ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1239,7 +1239,7 @@ libxlDomainResume(virDomainPtr dom) ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1398,7 +1398,7 @@ libxlDomainDestroyFlags(virDomainPtr dom, ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1472,7 +1472,7 @@ libxlDomainPMSuspendForDuration(virDomainPtr dom, ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1537,7 +1537,7 @@ libxlDomainPMWakeup(virDomainPtr dom, unsigned int fl= ags) libxlDomainCleanup(driver, vm); =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1696,7 +1696,7 @@ libxlDomainSetMemoryFlags(virDomainPtr dom, unsigned = long newmem, ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1917,7 +1917,7 @@ libxlDomainSaveFlags(virDomainPtr dom, const char *to= , const char *dxml, ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1979,7 +1979,7 @@ libxlDomainRestoreFlags(virConnectPtr conn, const cha= r *from, if (ret < 0 && !vm->persistent) virDomainObjListRemove(driver->domains, vm); =20 - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: 
if (VIR_CLOSE(fd) < 0) @@ -2076,7 +2076,7 @@ libxlDomainCoreDump(virDomainPtr dom, const char *to,= unsigned int flags) } =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -2129,7 +2129,7 @@ libxlDomainManagedSave(virDomainPtr dom, unsigned int= flags) ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -2344,7 +2344,7 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned i= nt nvcpus, } =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: VIR_FREE(bitmask); @@ -2489,7 +2489,7 @@ libxlDomainPinVcpuFlags(virDomainPtr dom, unsigned in= t vcpu, } =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -2793,7 +2793,7 @@ libxlDomainCreateWithFlags(virDomainPtr dom, dom->id =3D vm->def->id; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -4152,7 +4152,7 @@ libxlDomainAttachDeviceFlags(virDomainPtr dom, const = char *xml, } =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainDeviceDefFree(devConf); @@ -4240,7 +4240,7 @@ libxlDomainDetachDeviceFlags(virDomainPtr dom, const = char *xml, } =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainDeviceDefFree(dev); @@ -4522,7 +4522,7 @@ libxlDomainSetAutostart(virDomainPtr dom, int autosta= rt) ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -4723,7 +4723,7 @@ libxlDomainSetSchedulerParametersFlags(virDomainPtr d= om, ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -4989,7 +4989,6 @@ libxlDomainInterfaceStats(virDomainPtr dom, const char *device, virDomainInterfaceStatsPtr stats) { - libxlDriverPrivate *driver =3D dom->conn->privateData; virDomainObj *vm; virDomainNetDef *net =3D NULL; int ret =3D -1; @@ -5016,7 +5015,7 @@ libxlDomainInterfaceStats(virDomainPtr dom, ret =3D 0; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -5189,7 +5188,7 @@ libxlDomainMemoryStats(virDomainPtr dom, ret =3D i; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: libxl_dominfo_dispose(&d_info); @@ -5511,7 +5510,6 @@ libxlDomainBlockStats(virDomainPtr dom, const char *path, virDomainBlockStatsPtr stats) { - libxlDriverPrivate *driver =3D dom->conn->privateData; virDomainObj *vm; libxlBlockStats blkstats; int ret =3D -1; @@ -5542,7 +5540,7 @@ libxlDomainBlockStats(virDomainPtr dom, stats->errs =3D -1; =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -5556,7 +5554,6 @@ libxlDomainBlockStatsFlags(virDomainPtr dom, int *nparams, unsigned int flags) { - libxlDriverPrivate *driver =3D dom->conn->privateData; virDomainObj *vm; libxlBlockStats blkstats; int nstats; @@ -5615,7 +5612,7 @@ libxlDomainBlockStatsFlags(virDomainPtr dom, #undef LIBXL_BLKSTAT_ASSIGN_PARAM =20 endjob: - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -6381,7 +6378,7 @@ libxlDomainSetMetadata(virDomainPtr dom, virObjectEventStateQueue(driver->domainEventState, ev); } =20 - libxlDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: 
     virDomainObjEndAPI(&vm);
diff --git a/src/libxl/libxl_migration.c b/src/libxl/libxl_migration.c
index 90cf12ae00..6048540334 100644
--- a/src/libxl/libxl_migration.c
+++ b/src/libxl/libxl_migration.c
@@ -417,7 +417,7 @@ libxlDomainMigrationSrcBegin(virConnectPtr conn,
         goto cleanup;
 
  endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     libxlMigrationCookieFree(mig);
@@ -604,7 +604,7 @@ libxlDomainMigrationDstPrepareTunnel3(virConnectPtr dconn,
     goto done;
 
  endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 error:
     libxlMigrationCookieFree(mig);
@@ -774,7 +774,7 @@ libxlDomainMigrationDstPrepare(virConnectPtr dconn,
     goto done;
 
  endjob:
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 error:
     for (i = 0; i < nsocks; i++) {
@@ -1155,7 +1155,7 @@ libxlDomainMigrationSrcPerformP2P(libxlDriverPrivate *driver,
          * Confirm phase will not be executed if perform fails. End the
          * job started in begin phase.
          */
-        libxlDomainObjEndJob(driver, vm);
+        virDomainObjEndJob(vm);
     }
 
 cleanup:
@@ -1226,7 +1226,7 @@ libxlDomainMigrationSrcPerform(libxlDriverPrivate *driver,
          * Confirm phase will not be executed if perform fails. End the
          * job started in begin phase.
          */
-        libxlDomainObjEndJob(driver, vm);
+        virDomainObjEndJob(vm);
     }
 
     return ret;
@@ -1327,7 +1327,7 @@ libxlDomainMigrationDstFinish(virConnectPtr dconn,
     }
 
     /* EndJob for corresponding BeginJob in prepare phase */
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
     virObjectEventStateQueue(driver->domainEventState, event);
     virObjectUnref(cfg);
     return dom;
@@ -1384,7 +1384,7 @@ libxlDomainMigrationSrcConfirm(libxlDriverPrivate *driver,
 
 cleanup:
     /* EndJob for corresponding BeginJob in begin phase */
-    libxlDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
     virObjectEventStateQueue(driver->domainEventState, event);
     virObjectUnref(cfg);
     return ret;
-- 
2.37.2
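One detail of the libxl conversion worth spelling out: the removed
libxlDomainObjEndJob() already ignored its driver argument (it was marked
G_GNUC_UNUSED in the deleted definition), which is what makes the shared
helper a drop-in replacement and lets several functions above drop their
now-unused driver locals. A rough sketch of an end-job call site after this
patch; libxlDomainExampleCleanup is hypothetical, the calls are from the
hunks above:

static void
libxlDomainExampleCleanup(virDomainObj *vm)
{
    /* Before: libxlDomainObjEndJob(driver, vm); the driver handle is
     * no longer threaded through just to end a job. */
    virDomainObjEndJob(vm);

    /* Callers still release their object reference separately. */
    virDomainObjEndAPI(&vm);
}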
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 13/17] LXC: use virDomainObjEndJob()
Date: Mon, 5 Sep 2022 15:57:11 +0200
Message-Id: In-Reply-To: References: MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This patch removes virLXCDomainObjEndJob() and replaces it with a call
to the generalized virDomainObjEndJob().
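One behavioural detail of this conversion: the removed virLXCDomainObjEndJob()
(like the libxl copy) woke waiters with virCondSignal(), while the generalized
virDomainObjEndJob() uses virCondBroadcast(). A condensed sketch of why the
shared helper broadcasts; exampleEndJob is a stand-in for the real helper,
not a verbatim copy of it:

static void
exampleEndJob(virDomainObj *obj)
{
    obj->job->jobsQueued--;
    virDomainObjResetJob(obj->job);

    /* virCondSignal() wakes a single waiter; if that one waiter then
     * finds it still cannot take the job, the others keep sleeping.
     * The shared helper broadcasts because, as its comment says,
     * grabbing a job requires checking more variables, so every
     * waiter must get a chance to re-evaluate. */
    virCondBroadcast(&obj->job->cond);
}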
Signed-off-by: Kristina Hanicova Reviewed-by: J=C3=A1n Tomko --- src/lxc/lxc_domain.c | 20 ---------------- src/lxc/lxc_domain.h | 4 ---- src/lxc/lxc_driver.c | 57 +++++++++++++++++++------------------------- 3 files changed, 24 insertions(+), 57 deletions(-) diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c index aad9dae694..1a39129f82 100644 --- a/src/lxc/lxc_domain.c +++ b/src/lxc/lxc_domain.c @@ -35,26 +35,6 @@ VIR_LOG_INIT("lxc.lxc_domain"); =20 =20 -/* - * obj must be locked and have a reference before calling - * - * To be called after completing the work associated with the - * earlier virLXCDomainBeginJob() call - */ -void -virLXCDomainObjEndJob(virLXCDriver *driver G_GNUC_UNUSED, - virDomainObj *obj) -{ - virDomainJob job =3D obj->job->active; - - VIR_DEBUG("Stopping job: %s", - virDomainJobTypeToString(job)); - - virDomainObjResetJob(obj->job); - virCondSignal(&obj->job->cond); -} - - static void * virLXCDomainObjPrivateAlloc(void *opaque) { diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h index e7b19fb2ff..d22c2ea153 100644 --- a/src/lxc/lxc_domain.h +++ b/src/lxc/lxc_domain.h @@ -72,10 +72,6 @@ extern virXMLNamespace virLXCDriverDomainXMLNamespace; extern virDomainXMLPrivateDataCallbacks virLXCDriverPrivateDataCallbacks; extern virDomainDefParserConfig virLXCDriverDomainDefParserConfig; =20 -void -virLXCDomainObjEndJob(virLXCDriver *driver, - virDomainObj *obj); - =20 char * virLXCDomainGetMachineName(virDomainDef *def, pid_t pid); diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c index a115313b3f..e7c6c4fbc4 100644 --- a/src/lxc/lxc_driver.c +++ b/src/lxc/lxc_driver.c @@ -709,7 +709,7 @@ static int lxcDomainSetMemoryFlags(virDomainPtr dom, un= signed long newmem, ret =3D 0; =20 endjob: - virLXCDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -793,7 +793,7 @@ lxcDomainSetMemoryParameters(virDomainPtr dom, ret =3D 0; =20 endjob: - virLXCDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1005,7 +1005,7 @@ static int lxcDomainCreateWithFiles(virDomainPtr dom, } =20 endjob: - virLXCDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1114,7 +1114,7 @@ lxcDomainCreateXMLWithFiles(virConnectPtr conn, if (virLXCProcessStart(driver, vm, nfiles, files, autoDestroyConn, VIR_DOMAIN_RUNNING_BOOTED) < 0) { virDomainAuditStart(vm, "booted", false); - virLXCDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); if (!vm->persistent) virDomainObjListRemove(driver->domains, vm); goto cleanup; @@ -1127,7 +1127,7 @@ lxcDomainCreateXMLWithFiles(virConnectPtr conn, =20 dom =3D virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id); =20 - virLXCDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -1365,7 +1365,7 @@ lxcDomainDestroyFlags(virDomainPtr dom, virDomainAuditStop(vm, "destroyed"); =20 endjob: - virLXCDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); if (!vm->persistent) virDomainObjListRemove(driver->domains, vm); =20 @@ -1896,7 +1896,7 @@ lxcDomainSetSchedulerParametersFlags(virDomainPtr dom, ret =3D 0; =20 endjob: - virLXCDomainObjEndJob(driver, vm); + virDomainObjEndJob(vm); =20 cleanup: virDomainObjEndAPI(&vm); @@ -2021,7 +2021,6 @@ lxcDomainBlockStats(virDomainPtr dom, const char *path, virDomainBlockStatsPtr stats) { - virLXCDriver *driver =3D dom->conn->privateData; int ret =3D -1; virDomainObj *vm; virDomainDiskDef *disk =3D NULL; @@ -2077,7 +2076,7 @@ 
diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c
index aad9dae694..1a39129f82 100644
--- a/src/lxc/lxc_domain.c
+++ b/src/lxc/lxc_domain.c
@@ -35,26 +35,6 @@ VIR_LOG_INIT("lxc.lxc_domain");
 
 
-/*
- * obj must be locked and have a reference before calling
- *
- * To be called after completing the work associated with the
- * earlier virLXCDomainBeginJob() call
- */
-void
-virLXCDomainObjEndJob(virLXCDriver *driver G_GNUC_UNUSED,
-                      virDomainObj *obj)
-{
-    virDomainJob job = obj->job->active;
-
-    VIR_DEBUG("Stopping job: %s",
-              virDomainJobTypeToString(job));
-
-    virDomainObjResetJob(obj->job);
-    virCondSignal(&obj->job->cond);
-}
-
-
 static void *
 virLXCDomainObjPrivateAlloc(void *opaque)
 {
diff --git a/src/lxc/lxc_domain.h b/src/lxc/lxc_domain.h
index e7b19fb2ff..d22c2ea153 100644
--- a/src/lxc/lxc_domain.h
+++ b/src/lxc/lxc_domain.h
@@ -72,10 +72,6 @@ extern virXMLNamespace virLXCDriverDomainXMLNamespace;
 extern virDomainXMLPrivateDataCallbacks virLXCDriverPrivateDataCallbacks;
 extern virDomainDefParserConfig virLXCDriverDomainDefParserConfig;
 
-void
-virLXCDomainObjEndJob(virLXCDriver *driver,
-                      virDomainObj *obj);
-
 
 char *
 virLXCDomainGetMachineName(virDomainDef *def, pid_t pid);
diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c
index a115313b3f..e7c6c4fbc4 100644
--- a/src/lxc/lxc_driver.c
+++ b/src/lxc/lxc_driver.c
@@ -709,7 +709,7 @@ static int lxcDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -793,7 +793,7 @@ lxcDomainSetMemoryParameters(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1005,7 +1005,7 @@ static int lxcDomainCreateWithFiles(virDomainPtr dom,
     }
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1114,7 +1114,7 @@ lxcDomainCreateXMLWithFiles(virConnectPtr conn,
     if (virLXCProcessStart(driver, vm, nfiles, files, autoDestroyConn,
                            VIR_DOMAIN_RUNNING_BOOTED) < 0) {
         virDomainAuditStart(vm, "booted", false);
-        virLXCDomainObjEndJob(driver, vm);
+        virDomainObjEndJob(vm);
         if (!vm->persistent)
             virDomainObjListRemove(driver->domains, vm);
         goto cleanup;
@@ -1127,7 +1127,7 @@ lxcDomainCreateXMLWithFiles(virConnectPtr conn,
 
     dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);
 
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1365,7 +1365,7 @@ lxcDomainDestroyFlags(virDomainPtr dom,
     virDomainAuditStop(vm, "destroyed");
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
     if (!vm->persistent)
         virDomainObjListRemove(driver->domains, vm);
 
@@ -1896,7 +1896,7 @@ lxcDomainSetSchedulerParametersFlags(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2021,7 +2021,6 @@ lxcDomainBlockStats(virDomainPtr dom,
                     const char *path,
                     virDomainBlockStatsPtr stats)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     int ret = -1;
     virDomainObj *vm;
     virDomainDiskDef *disk = NULL;
@@ -2077,7 +2076,7 @@ lxcDomainBlockStats(virDomainPtr dom,
                            &stats->wr_req);
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2092,7 +2091,6 @@ lxcDomainBlockStatsFlags(virDomainPtr dom,
                          int * nparams,
                          unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     int tmp, ret = -1;
     virDomainObj *vm;
     virDomainDiskDef *disk = NULL;
@@ -2205,7 +2203,7 @@ lxcDomainBlockStatsFlags(virDomainPtr dom,
     *nparams = tmp;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2285,7 +2283,7 @@ lxcDomainSetBlkioParameters(virDomainPtr dom,
     }
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2387,7 +2385,6 @@ lxcDomainInterfaceStats(virDomainPtr dom,
 {
     virDomainObj *vm;
     int ret = -1;
-    virLXCDriver *driver = dom->conn->privateData;
     virDomainNetDef *net = NULL;
 
     if (!(vm = lxcDomObjFromDomain(dom)))
@@ -2412,7 +2409,7 @@ lxcDomainInterfaceStats(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2508,7 +2505,7 @@ static int lxcDomainSetAutostart(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2631,7 +2628,7 @@ static int lxcDomainSuspend(virDomainPtr dom)
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virObjectEventStateQueue(driver->domainEventState, event);
@@ -2687,7 +2684,7 @@ static int lxcDomainResume(virDomainPtr dom)
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virObjectEventStateQueue(driver->domainEventState, event);
@@ -2763,7 +2760,6 @@ lxcDomainSendProcessSignal(virDomainPtr dom,
                            unsigned int signum,
                            unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virDomainObj *vm = NULL;
     virLXCDomainObjPrivate *priv;
     pid_t victim;
@@ -2825,7 +2821,7 @@ lxcDomainSendProcessSignal(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2854,7 +2850,6 @@ static int
 lxcDomainShutdownFlags(virDomainPtr dom,
                        unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virLXCDomainObjPrivate *priv;
     virDomainObj *vm;
     int ret = -1;
@@ -2912,7 +2907,7 @@ lxcDomainShutdownFlags(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -2930,7 +2925,6 @@ static int
 lxcDomainReboot(virDomainPtr dom,
                 unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virLXCDomainObjPrivate *priv;
     virDomainObj *vm;
     int ret = -1;
@@ -2988,7 +2982,7 @@ lxcDomainReboot(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4344,7 +4338,7 @@ static int lxcDomainAttachDeviceFlags(virDomainPtr dom,
     }
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     if (dev != dev_copy)
@@ -4415,7 +4409,7 @@ static int lxcDomainUpdateDeviceFlags(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainDeviceDefFree(dev);
@@ -4506,7 +4500,7 @@ static int lxcDomainDetachDeviceFlags(virDomainPtr dom,
     }
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     if (dev != dev_copy)
@@ -4529,7 +4523,6 @@ static int lxcDomainLxcOpenNamespace(virDomainPtr dom,
                                      int **fdlist,
                                      unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virDomainObj *vm;
     virLXCDomainObjPrivate *priv;
     int ret = -1;
@@ -4564,7 +4557,7 @@ static int lxcDomainLxcOpenNamespace(virDomainPtr dom,
     ret = nfds;
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4617,7 +4610,6 @@ lxcDomainMemoryStats(virDomainPtr dom,
     virLXCDomainObjPrivate *priv;
     unsigned long long swap_usage;
     unsigned long mem_usage;
-    virLXCDriver *driver = dom->conn->privateData;
 
     virCheckFlags(0, -1);
 
@@ -4659,7 +4651,7 @@ lxcDomainMemoryStats(virDomainPtr dom,
     }
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4812,7 +4804,7 @@ lxcDomainSetMetadata(virDomainPtr dom,
         virObjectEventStateQueue(driver->domainEventState, ev);
     }
 
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -4890,7 +4882,6 @@ static char *
 lxcDomainGetHostname(virDomainPtr dom,
                      unsigned int flags)
 {
-    virLXCDriver *driver = dom->conn->privateData;
     virDomainObj *vm = NULL;
     char macaddr[VIR_MAC_STRING_BUFLEN];
     g_autoptr(virConnect) conn = NULL;
@@ -4954,7 +4945,7 @@ lxcDomainGetHostname(virDomainPtr dom,
     }
 
 endjob:
-    virLXCDomainObjEndJob(driver, vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
--
2.37.2
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 14/17] CH: use virDomainObjEndJob()
Date: Mon, 5 Sep 2022 15:57:12 +0200

This patch removes virCHDomainObjEndJob() and replaces it with a call
to the generalized virDomainObjEndJob().

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/ch/ch_domain.c | 18 ------------------
 src/ch/ch_domain.h |  3 ---
 src/ch/ch_driver.c | 20 ++++++++++----------
 3 files changed, 10 insertions(+), 31 deletions(-)
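The resulting pattern is the same as in the LXC conversion; the one
behavioral nuance worth noting is that the removed virCHDomainObjEndJob()
signalled a single waiter on job->cond with virCondSignal(), while the
generic virDomainObjEndJob() broadcasts, so that normal, agent and async
job candidates can each re-check their own wait conditions. A minimal
sketch, assuming the libvirt-internal headers; exampleChWork() is a
hypothetical placeholder:

    static int
    chDomainExampleAPI(virDomainObj *vm)
    {
        int ret = -1;

        if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
            return -1;

        ret = exampleChWork(vm);    /* hypothetical driver work */

        /* resets the job and broadcasts on job->cond */
        virDomainObjEndJob(vm);
        return ret;
    }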
diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
index c592c6ffbb..dc666243a4 100644
--- a/src/ch/ch_domain.c
+++ b/src/ch/ch_domain.c
@@ -32,24 +32,6 @@
 
 VIR_LOG_INIT("ch.ch_domain");
 
-/*
- * obj must be locked and have a reference before calling
- *
- * To be called after completing the work associated with the
- * earlier virDomainObjBeginJob() call
- */
-void
-virCHDomainObjEndJob(virDomainObj *obj)
-{
-    virDomainJob job = obj->job->active;
-
-    VIR_DEBUG("Stopping job: %s",
-              virDomainJobTypeToString(job));
-
-    virDomainObjResetJob(obj->job);
-    virCondSignal(&obj->job->cond);
-}
-
 void
 virCHDomainRemoveInactive(virCHDriver *driver,
                           virDomainObj *vm)
diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h
index 076043f772..88e27d50b1 100644
--- a/src/ch/ch_domain.h
+++ b/src/ch/ch_domain.h
@@ -60,9 +60,6 @@ struct _virCHDomainVcpuPrivate {
 extern virDomainXMLPrivateDataCallbacks virCHDriverPrivateDataCallbacks;
 extern virDomainDefParserConfig virCHDriverDomainDefParserConfig;
 
-void
-virCHDomainObjEndJob(virDomainObj *obj);
-
 void
 virCHDomainRemoveInactive(virCHDriver *driver,
                           virDomainObj *vm);
diff --git a/src/ch/ch_driver.c b/src/ch/ch_driver.c
index d81bddcc23..c6e92efb2c 100644
--- a/src/ch/ch_driver.c
+++ b/src/ch/ch_driver.c
@@ -226,7 +226,7 @@ chDomainCreateXML(virConnectPtr conn,
     dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);
 
 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     if (vm && !dom) {
@@ -256,7 +256,7 @@ chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
 
     ret = virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED);
 
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -414,7 +414,7 @@ chDomainShutdownFlags(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -473,7 +473,7 @@ chDomainReboot(virDomainPtr dom, unsigned int flags)
     ret = 0;
 
 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -518,7 +518,7 @@ chDomainSuspend(virDomainPtr dom)
     ret = 0;
 
 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -563,7 +563,7 @@ chDomainResume(virDomainPtr dom)
     ret = 0;
 
 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -607,7 +607,7 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
     ret = 0;
 
 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1254,7 +1254,7 @@ chDomainPinVcpuFlags(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1418,7 +1418,7 @@ chDomainPinEmulator(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -1680,7 +1680,7 @@ chDomainSetNumaParameters(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    virCHDomainObjEndJob(vm);
+    virDomainObjEndJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
--
2.37.2
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 15/17] qemu & conf: move BeginAgentJob & EndAgentJob into src/conf/virdomainjob
Date: Mon, 5 Sep 2022 15:57:13 +0200
Message-Id: <3317c9f761f890196e0d6a6a95c8993f3163600c.1662385930.git.khanicov@redhat.com>

Although these functions, and the ones moved in the following two
patches, are currently used only by the qemu driver, it makes sense
to have all begin-job functions in the same file.

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 docs/kbase/internals/qemu-threads.rst | 10 ++--
 src/conf/virdomainjob.c | 34 +++++++++++
 src/conf/virdomainjob.h | 4 ++
 src/libvirt_private.syms | 2 +
 src/qemu/qemu_domain.c | 2 +-
 src/qemu/qemu_domainjob.c | 34 -----------
 src/qemu/qemu_domainjob.h | 4 --
 src/qemu/qemu_driver.c | 82 +++++++++++++--------------
 src/qemu/qemu_snapshot.c | 10 ++--
 9 files changed, 92 insertions(+), 90 deletions(-)
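A minimal sketch of the agent-job bracket being moved here, modelled on
the qemuDomainGetHostnameAgent() hunk below (libvirt-internal headers
assumed; exampleAgentCall() is a hypothetical agent RPC):

    static int
    qemuDomainExampleAgentQuery(virDomainObj *vm)
    {
        qemuAgent *agent;
        int ret = -1;

        /* agent job only: the monitor job stays available to others */
        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
            return -1;

        if (virDomainObjCheckActive(vm) < 0)
            goto endjob;

        if (!qemuDomainAgentAvailable(vm, true))
            goto endjob;

        agent = qemuDomainObjEnterAgent(vm);
        ret = exampleAgentCall(agent);    /* hypothetical agent RPC */
        qemuDomainObjExitAgent(vm, agent);

     endjob:
        virDomainObjEndAgentJob(vm);
        return ret;
    }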
diff --git a/docs/kbase/internals/qemu-threads.rst b/docs/kbase/internals/qemu-threads.rst
index 22d12f61a5..afdf9e61cc 100644
--- a/docs/kbase/internals/qemu-threads.rst
+++ b/docs/kbase/internals/qemu-threads.rst
@@ -62,7 +62,7 @@ There are a number of locks on various objects
 
    Agent job condition is then used when thread wishes to talk to qemu
    agent monitor. It is possible to acquire just agent job
-   (``qemuDomainObjBeginAgentJob``), or only normal job (``virDomainObjBeginJob``)
+   (``virDomainObjBeginAgentJob``), or only normal job (``virDomainObjBeginJob``)
    but not both at the same time. Holding an agent job and a normal job would
    allow an unresponsive or malicious agent to block normal libvirt API and
    potentially result in a denial of service. Which type of job to grab
@@ -130,11 +130,11 @@ To acquire the normal job condition
 
 To acquire the agent job condition
 
-  ``qemuDomainObjBeginAgentJob()``
+  ``virDomainObjBeginAgentJob()``
    - Waits until there is no other agent job set
    - Sets ``job.agentActive`` to the job type
 
-  ``qemuDomainObjEndAgentJob()``
+  ``virDomainObjEndAgentJob()``
    - Sets ``job.agentActive`` to 0
    - Signals on ``job.cond`` condition
 
@@ -253,7 +253,7 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginAgentJob(obj, VIR_AGENT_JOB_TYPE);
+     virDomainObjBeginAgentJob(obj, VIR_AGENT_JOB_TYPE);
 
      ...do prep work...
 
@@ -266,7 +266,7 @@ Design patterns
 
      ...do final work...
 
-     qemuDomainObjEndAgentJob(obj);
+     virDomainObjEndAgentJob(obj);
      virDomainObjEndAPI(&obj);
 
 
diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index 85486f612a..dfb28df6f9 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -528,6 +528,22 @@ int virDomainObjBeginJob(virDomainObj *obj,
     return 0;
 }
 
+/**
+ * virDomainObjBeginAgentJob:
+ *
+ * Grabs agent type of job. Use if caller talks to guest agent only.
+ *
+ * To end job call virDomainObjEndAgentJob.
+ */
+int
+virDomainObjBeginAgentJob(virDomainObj *obj,
+                          virDomainAgentJob agentJob)
+{
+    return virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_NONE,
+                                        agentJob,
+                                        VIR_ASYNC_JOB_NONE, false);
+}
+
 /*
  * obj must be locked and have a reference before calling
  *
@@ -555,3 +571,21 @@ virDomainObjEndJob(virDomainObj *obj)
      * grabbing a job requires checking more variables. */
     virCondBroadcast(&obj->job->cond);
 }
+
+void
+virDomainObjEndAgentJob(virDomainObj *obj)
+{
+    virDomainAgentJob agentJob = obj->job->agentActive;
+
+    obj->job->jobsQueued--;
+
+    VIR_DEBUG("Stopping agent job: %s (async=%s vm=%p name=%s)",
+              virDomainAgentJobTypeToString(agentJob),
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
+              obj, obj->def->name);
+
+    virDomainObjResetAgentJob(obj->job);
+    /* We indeed need to wake up ALL threads waiting because
+     * grabbing a job requires checking more variables. */
+    virCondBroadcast(&obj->job->cond);
+}
diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h
index 7a06c384f3..6cec322af1 100644
--- a/src/conf/virdomainjob.h
+++ b/src/conf/virdomainjob.h
@@ -249,5 +249,9 @@ int virDomainObjBeginJobInternal(virDomainObj *obj,
 int virDomainObjBeginJob(virDomainObj *obj,
                          virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;
+int virDomainObjBeginAgentJob(virDomainObj *obj,
+                              virDomainAgentJob agentJob)
+    G_GNUC_WARN_UNUSED_RESULT;
 
 void virDomainObjEndJob(virDomainObj *obj);
+void virDomainObjEndAgentJob(virDomainObj *obj);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 9d2b26bfa2..74988dec34 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1187,10 +1187,12 @@ virDomainJobStatusToType;
 virDomainJobTypeFromString;
 virDomainJobTypeToString;
 virDomainNestedJobAllowed;
+virDomainObjBeginAgentJob;
 virDomainObjBeginJob;
 virDomainObjBeginJobInternal;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
+virDomainObjEndAgentJob;
 virDomainObjEndJob;
 virDomainObjInitJob;
 virDomainObjPreserveJob;
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 4dfb978599..ab05bed456 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -6057,7 +6057,7 @@ qemuDomainObjEnterMonitorAsync(virDomainObj *obj,
  * obj must be locked before calling
  *
  * To be called immediately before any QEMU agent API call.
- * Must have already called qemuDomainObjBeginAgentJob() and
+ * Must have already called virDomainObjBeginAgentJob() and
  * checked that the VM is still active.
  *
  * To be followed with qemuDomainObjExitAgent() once complete
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index a8baf096e5..0775e04add 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -655,22 +655,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
     obj->job->asyncOwner = 0;
 }
 
-/**
- * qemuDomainObjBeginAgentJob:
- *
- * Grabs agent type of job. Use if caller talks to guest agent only.
- *
- * To end job call qemuDomainObjEndAgentJob.
- */
-int
-qemuDomainObjBeginAgentJob(virDomainObj *obj,
-                           virDomainAgentJob agentJob)
-{
-    return virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_NONE,
-                                        agentJob,
-                                        VIR_ASYNC_JOB_NONE, false);
-}
-
 int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
                                virDomainAsyncJob asyncJob,
                                virDomainJobOperation operation,
@@ -730,24 +714,6 @@ qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                         VIR_ASYNC_JOB_NONE, true);
 }
 
-void
-qemuDomainObjEndAgentJob(virDomainObj *obj)
-{
-    virDomainAgentJob agentJob = obj->job->agentActive;
-
-    obj->job->jobsQueued--;
-
-    VIR_DEBUG("Stopping agent job: %s (async=%s vm=%p name=%s)",
-              virDomainAgentJobTypeToString(agentJob),
-              virDomainAsyncJobTypeToString(obj->job->asyncJob),
-              obj, obj->def->name);
-
-    virDomainObjResetAgentJob(obj->job);
-    /* We indeed need to wake up ALL threads waiting because
-     * grabbing a job requires checking more variables. */
-    virCondBroadcast(&obj->job->cond);
-}
-
 void
 qemuDomainObjEndAsyncJob(virDomainObj *obj)
 {
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 918b74748b..0cc4dc44f3 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -69,9 +69,6 @@ int qemuDomainAsyncJobPhaseFromString(virDomainAsyncJob job,
 void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
                                      virDomainObj *vm);
 
-int qemuDomainObjBeginAgentJob(virDomainObj *obj,
-                               virDomainAgentJob agentJob)
-    G_GNUC_WARN_UNUSED_RESULT;
 int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
                                virDomainAsyncJob asyncJob,
                                virDomainJobOperation operation,
@@ -84,7 +81,6 @@ int qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                 virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;
 
-void qemuDomainObjEndAgentJob(virDomainObj *obj);
 void qemuDomainObjEndAsyncJob(virDomainObj *obj);
 void qemuDomainObjAbortAsyncJob(virDomainObj *obj);
 void qemuDomainObjSetJobPhase(virDomainObj *obj,
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 65f9ce6bca..7cabd32b64 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1781,7 +1781,7 @@ qemuDomainShutdownFlagsAgent(virDomainObj *vm,
     int agentFlag = isReboot ? QEMU_AGENT_SHUTDOWN_REBOOT :
         QEMU_AGENT_SHUTDOWN_POWERDOWN;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         return -1;
 
     if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_RUNNING) {
@@ -1799,7 +1799,7 @@ qemuDomainShutdownFlagsAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }
 
@@ -1909,7 +1909,7 @@ qemuDomainRebootAgent(virDomainObj *vm,
     if (!isReboot)
         agentFlag = QEMU_AGENT_SHUTDOWN_POWERDOWN;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         return -1;
 
     if (!qemuDomainAgentAvailable(vm, agentForced))
@@ -1924,7 +1924,7 @@ qemuDomainRebootAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }
 
@@ -4368,7 +4368,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
 
 
     if (useAgent) {
-        if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
             goto cleanup;
     } else {
         if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
@@ -4388,7 +4388,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
 
 endjob:
     if (useAgent)
-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);
     else
         virDomainObjEndJob(vm);
 
@@ -4809,7 +4809,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
         goto cleanup;
 
     if (flags & VIR_DOMAIN_VCPU_GUEST) {
-        if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
             goto cleanup;
 
         if (!virDomainObjIsActive(vm)) {
@@ -4827,7 +4827,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
         qemuDomainObjExitAgent(vm, agent);
 
 endjob:
-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);
 
         if (ncpuinfo < 0)
             goto cleanup;
@@ -16603,7 +16603,7 @@ qemuDomainPMSuspendAgent(virDomainObj *vm,
     qemuAgent *agent;
     int ret = -1;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         return -1;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -16617,7 +16617,7 @@ qemuDomainPMSuspendAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }
 
@@ -16769,7 +16769,7 @@ qemuDomainQemuAgentCommand(virDomainPtr domain,
     if (virDomainQemuAgentCommandEnsureACL(domain->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -16787,7 +16787,7 @@ qemuDomainQemuAgentCommand(virDomainPtr domain,
     VIR_FREE(result);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -16863,7 +16863,7 @@ qemuDomainFSTrim(virDomainPtr dom,
     if (virDomainFSTrimEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (!qemuDomainAgentAvailable(vm, true))
@@ -16877,7 +16877,7 @@ qemuDomainFSTrim(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -17033,7 +17033,7 @@ qemuDomainGetHostnameAgent(virDomainObj *vm,
     qemuAgent *agent;
     int ret = -1;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         return -1;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -17048,7 +17048,7 @@ qemuDomainGetHostnameAgent(virDomainObj *vm,
 
     ret = 0;
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }
 
@@ -17174,7 +17174,7 @@ qemuDomainGetTime(virDomainPtr dom,
     if (virDomainGetTimeEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -17193,7 +17193,7 @@ qemuDomainGetTime(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -17210,7 +17210,7 @@ qemuDomainSetTimeAgent(virDomainObj *vm,
     qemuAgent *agent;
     int ret = -1;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         return -1;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -17224,7 +17224,7 @@ qemuDomainSetTimeAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }
 
@@ -17310,7 +17310,7 @@ qemuDomainFSFreeze(virDomainPtr dom,
     if (virDomainFSFreezeEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -17319,7 +17319,7 @@ qemuDomainFSFreeze(virDomainPtr dom,
     ret = qemuSnapshotFSFreeze(vm, mountpoints, nmountpoints);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -17350,7 +17350,7 @@ qemuDomainFSThaw(virDomainPtr dom,
     if (virDomainFSThawEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -17359,7 +17359,7 @@ qemuDomainFSThaw(virDomainPtr dom,
     ret = qemuSnapshotFSThaw(vm, true);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -18913,7 +18913,7 @@ qemuDomainGetFSInfoAgent(virDomainObj *vm,
     int ret = -1;
     qemuAgent *agent;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         return ret;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -18927,7 +18927,7 @@ qemuDomainGetFSInfoAgent(virDomainObj *vm,
     qemuDomainObjExitAgent(vm, agent);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     return ret;
 }
 
@@ -19072,7 +19072,7 @@ qemuDomainInterfaceAddresses(virDomainPtr dom,
         break;
 
     case VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_AGENT:
-        if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
            goto cleanup;
 
        if (!qemuDomainAgentAvailable(vm, true))
@@ -19083,7 +19083,7 @@ qemuDomainInterfaceAddresses(virDomainPtr dom,
         qemuDomainObjExitAgent(vm, agent);
 
 endjob:
-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);
 
         break;
 
@@ -19123,7 +19123,7 @@ qemuDomainSetUserPassword(virDomainPtr dom,
     if (virDomainSetUserPasswordEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (virDomainObjCheckActive(vm) < 0)
@@ -19143,7 +19143,7 @@ qemuDomainSetUserPassword(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 
 cleanup:
     virDomainObjEndAPI(&vm);
@@ -19426,7 +19426,7 @@ qemuDomainGetGuestVcpus(virDomainPtr dom,
     if (virDomainGetGuestVcpusEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;
 
     if (!qemuDomainAgentAvailable(vm, true))
@@ -19445,7 +19445,7 @@ qemuDomainGetGuestVcpus(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 
 cleanup:
     VIR_FREE(info);
@@ -19484,7 +19484,7 @@ qemuDomainSetGuestVcpus(virDomainPtr dom,
     if (virDomainSetGuestVcpusEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (!qemuDomainAgentAvailable(vm, true))
@@ -19530,7 +19530,7 @@ qemuDomainSetGuestVcpus(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);
 
 endjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 
 cleanup:
     VIR_FREE(info);
@@ -20445,7 +20445,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom,
     if (virDomainGetGuestInfoEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;
 
     if (!qemuDomainAgentAvailable(vm, true))
@@ -20503,7 +20503,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom,
     }
 
     qemuDomainObjExitAgent(vm, agent);
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 
     if (nfs > 0 || ndisks > 0) {
         if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
@@ -20550,7 +20550,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);
 
 endagentjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
     goto cleanup;
 }
 
@@ -20621,7 +20621,7 @@ qemuDomainAuthorizedSSHKeysGet(virDomainPtr dom,
     if (virDomainAuthorizedSshKeysGetEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;
 
     if (!qemuDomainAgentAvailable(vm, true))
@@ -20632,7 +20632,7 @@ qemuDomainAuthorizedSSHKeysGet(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);
 
 endagentjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 cleanup:
     virDomainObjEndAPI(&vm);
     return rv;
@@ -20661,7 +20661,7 @@ qemuDomainAuthorizedSSHKeysSet(virDomainPtr dom,
     if (virDomainAuthorizedSshKeysSetEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
+    if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_QUERY) < 0)
         goto cleanup;
 
     if (!qemuDomainAgentAvailable(vm, true))
@@ -20675,7 +20675,7 @@ qemuDomainAuthorizedSSHKeysSet(virDomainPtr dom,
     qemuDomainObjExitAgent(vm, agent);
 
 endagentjob:
-    qemuDomainObjEndAgentJob(vm);
+    virDomainObjEndAgentJob(vm);
 cleanup:
     virDomainObjEndAPI(&vm);
     return rv;
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index afed0f0e28..c5e5e3ed5b 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1277,16 +1277,16 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
     if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE) {
         int frozen;
 
-        if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
+        if (virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
             goto cleanup;
 
         if (virDomainObjCheckActive(vm) < 0) {
-            qemuDomainObjEndAgentJob(vm);
+            virDomainObjEndAgentJob(vm);
             goto cleanup;
         }
 
         frozen = qemuSnapshotFSFreeze(vm, NULL, 0);
-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);
 
         if (frozen < 0)
             goto cleanup;
@@ -1422,13 +1422,13 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
     }
 
     if (thaw &&
-        qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) >= 0 &&
+        virDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) >= 0 &&
         virDomainObjIsActive(vm)) {
         /* report error only on an otherwise successful snapshot */
         if (qemuSnapshotFSThaw(vm, ret == 0) < 0)
             ret = -1;
 
-        qemuDomainObjEndAgentJob(vm);
+        virDomainObjEndAgentJob(vm);
     }
 
     virQEMUSaveDataFree(data);
--
2.37.2
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 16/17] qemu & conf: move BeginAsyncJob & EndAsyncJob into src/conf
Date: Mon, 5 Sep 2022 15:57:14 +0200

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 docs/kbase/internals/qemu-threads.rst | 12 +++----
 src/conf/virdomainjob.c | 30 +++++++++++++++
 src/conf/virdomainjob.h | 6 ++++
 src/libvirt_private.syms | 2 ++
 src/qemu/qemu_backup.c | 6 +++---
 src/qemu/qemu_domain.c | 2 +-
 src/qemu/qemu_domainjob.c | 29 --------------------
 src/qemu/qemu_domainjob.h | 6 ------
 src/qemu/qemu_driver.c | 18 ++++++++--------
 src/qemu/qemu_migration.c | 4 ++--
 src/qemu/qemu_process.c | 4 ++--
 src/qemu/qemu_snapshot.c | 4 ++--
 12 files changed, 63 insertions(+), 60 deletions(-)
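A minimal sketch of the async-job bracket this patch generalizes,
following the processWatchdogEvent() hunk below (libvirt-internal
headers assumed; exampleLongRunningWork() is hypothetical):

    static void
    qemuDomainExampleAsyncOp(virDomainObj *vm, unsigned long flags)
    {
        if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
                                      VIR_DOMAIN_JOB_OPERATION_DUMP,
                                      flags) < 0)
            return;

        if (virDomainObjCheckActive(vm) < 0)
            goto endjob;

        exampleLongRunningWork(vm);    /* hypothetical; may enter and
                                        * exit the monitor repeatedly */

     endjob:
        /* resets async state, saves status via the saveStatusPrivate
         * callback, and broadcasts on job->asyncCond */
        virDomainObjEndAsyncJob(vm);
    }

Note the design change visible in the diff: the qemu-specific
qemuDomainSaveStatus() call is replaced by the driver-registered
saveStatusPrivate callback, which is what lets this helper live in the
hypervisor-agnostic src/conf layer.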
diff --git a/docs/kbase/internals/qemu-threads.rst b/docs/kbase/internals/qemu-threads.rst
index afdf9e61cc..95681d1b9d 100644
--- a/docs/kbase/internals/qemu-threads.rst
+++ b/docs/kbase/internals/qemu-threads.rst
@@ -141,7 +141,7 @@ To acquire the agent job condition
 
 To acquire the asynchronous job condition
 
-  ``qemuDomainObjBeginAsyncJob()``
+  ``virDomainObjBeginAsyncJob()``
    - Waits until no async job is running
    - Waits for ``job.cond`` condition ``job.active != 0`` using ``virDomainObj``
      mutex
@@ -149,7 +149,7 @@ To acquire the asynchronous job condition
      and repeats waiting in that case
    - Sets ``job.asyncJob`` to the asynchronous job type
 
-  ``qemuDomainObjEndAsyncJob()``
+  ``virDomainObjEndAsyncJob()``
    - Sets ``job.asyncJob`` to 0
    - Broadcasts on ``job.asyncCond`` condition
 
@@ -277,7 +277,7 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
+     virDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
      qemuDomainObjSetAsyncJobMask(obj, allowedJobs);
 
      ...do prep work...
@@ -306,7 +306,7 @@ Design patterns
 
      ...do final work...
 
-     qemuDomainObjEndAsyncJob(obj);
+     virDomainObjEndAsyncJob(obj);
      virDomainObjEndAPI(&obj);
 
 
@@ -317,7 +317,7 @@ Design patterns
 
      obj = qemuDomObjFromDomain(dom);
 
-     qemuDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
+     virDomainObjBeginAsyncJob(obj, VIR_ASYNC_JOB_TYPE);
 
      ...do prep work...
 
@@ -334,5 +334,5 @@ Design patterns
 
      ...do final work...
 
-     qemuDomainObjEndAsyncJob(obj);
+     virDomainObjEndAsyncJob(obj);
      virDomainObjEndAPI(&obj);
diff --git a/src/conf/virdomainjob.c b/src/conf/virdomainjob.c
index dfb28df6f9..feb8c0e971 100644
--- a/src/conf/virdomainjob.c
+++ b/src/conf/virdomainjob.c
@@ -544,6 +544,21 @@ virDomainObjBeginAgentJob(virDomainObj *obj,
                                         VIR_ASYNC_JOB_NONE, false);
 }
 
+int virDomainObjBeginAsyncJob(virDomainObj *obj,
+                              virDomainAsyncJob asyncJob,
+                              virDomainJobOperation operation,
+                              unsigned long apiFlags)
+{
+    if (virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_ASYNC,
+                                     VIR_AGENT_JOB_NONE,
+                                     asyncJob, false) < 0)
+        return -1;
+
+    obj->job->current->operation = operation;
+    obj->job->apiFlags = apiFlags;
+    return 0;
+}
+
 /*
  * obj must be locked and have a reference before calling
  *
@@ -589,3 +604,18 @@ virDomainObjEndAgentJob(virDomainObj *obj)
      * grabbing a job requires checking more variables. */
     virCondBroadcast(&obj->job->cond);
 }
+
+void
+virDomainObjEndAsyncJob(virDomainObj *obj)
+{
+    obj->job->jobsQueued--;
+
+    VIR_DEBUG("Stopping async job: %s (vm=%p name=%s)",
+              virDomainAsyncJobTypeToString(obj->job->asyncJob),
+              obj, obj->def->name);
+
+    virDomainObjResetAsyncJob(obj->job);
+    if (obj->job->cb->saveStatusPrivate)
+        obj->job->cb->saveStatusPrivate(obj);
+    virCondBroadcast(&obj->job->asyncCond);
+}
diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h
index 6cec322af1..3cd02ef4ae 100644
--- a/src/conf/virdomainjob.h
+++ b/src/conf/virdomainjob.h
@@ -252,6 +252,12 @@ int virDomainObjBeginJob(virDomainObj *obj,
 int virDomainObjBeginAgentJob(virDomainObj *obj,
                               virDomainAgentJob agentJob)
     G_GNUC_WARN_UNUSED_RESULT;
+int virDomainObjBeginAsyncJob(virDomainObj *obj,
+                              virDomainAsyncJob asyncJob,
+                              virDomainJobOperation operation,
+                              unsigned long apiFlags)
+    G_GNUC_WARN_UNUSED_RESULT;
 
 void virDomainObjEndJob(virDomainObj *obj);
 void virDomainObjEndAgentJob(virDomainObj *obj);
+void virDomainObjEndAsyncJob(virDomainObj *obj);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 74988dec34..254abd5084 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1188,11 +1188,13 @@ virDomainJobTypeFromString;
 virDomainJobTypeToString;
 virDomainNestedJobAllowed;
 virDomainObjBeginAgentJob;
+virDomainObjBeginAsyncJob;
 virDomainObjBeginJob;
 virDomainObjBeginJobInternal;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
 virDomainObjEndAgentJob;
+virDomainObjEndAsyncJob;
 virDomainObjEndJob;
 virDomainObjInitJob;
 virDomainObjPreserveJob;
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index 2da520dbc7..c7721812a5 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -618,7 +618,7 @@ qemuBackupJobTerminate(virDomainObj *vm,
     g_clear_pointer(&priv->backup, virDomainBackupDefFree);
 
     if (vm->job->asyncJob == VIR_ASYNC_JOB_BACKUP)
-        qemuDomainObjEndAsyncJob(vm);
+        virDomainObjEndAsyncJob(vm);
 }
 
 
@@ -786,7 +786,7 @@ qemuBackupBegin(virDomainObj *vm,
      * infrastructure for async jobs. We'll allow standard modify-type jobs
      * as the interlocking of conflicting operations is handled on the block
      * job level */
-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_BACKUP,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_BACKUP,
                                   VIR_DOMAIN_JOB_OPERATION_BACKUP, flags) < 0)
         return -1;
 
@@ -937,7 +937,7 @@ qemuBackupBegin(virDomainObj *vm,
     if (ret == 0)
         qemuDomainObjReleaseAsyncJob(vm);
     else
-        qemuDomainObjEndAsyncJob(vm);
+        virDomainObjEndAsyncJob(vm);
 
     return ret;
 }
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index ab05bed456..83d78900a6 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -6037,7 +6037,7 @@ void qemuDomainObjEnterMonitor(virDomainObj *obj)
  * To be called immediately before any QEMU monitor API call.
  * Must have already either called virDomainObjBeginJob()
  * and checked that the VM is still active, with asyncJob of
- * VIR_ASYNC_JOB_NONE; or already called qemuDomainObjBeginAsyncJob,
+ * VIR_ASYNC_JOB_NONE; or already called virDomainObjBeginAsyncJob,
  * with the same asyncJob.
 *
  * Returns 0 if job was started, in which case this must be followed with
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 0775e04add..99dcdb49b9 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -655,21 +655,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
     obj->job->asyncOwner = 0;
 }
 
-int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
-                               virDomainAsyncJob asyncJob,
-                               virDomainJobOperation operation,
-                               unsigned long apiFlags)
-{
-    if (virDomainObjBeginJobInternal(obj, obj->job, VIR_JOB_ASYNC,
-                                     VIR_AGENT_JOB_NONE,
-                                     asyncJob, false) < 0)
-        return -1;
-
-    obj->job->current->operation = operation;
-    obj->job->apiFlags = apiFlags;
-    return 0;
-}
-
 int
 qemuDomainObjBeginNestedJob(virDomainObj *obj,
                             virDomainAsyncJob asyncJob)
@@ -714,20 +699,6 @@ qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                         VIR_ASYNC_JOB_NONE, true);
 }
 
-void
-qemuDomainObjEndAsyncJob(virDomainObj *obj)
-{
-    obj->job->jobsQueued--;
-
-    VIR_DEBUG("Stopping async job: %s (vm=%p name=%s)",
-              virDomainAsyncJobTypeToString(obj->job->asyncJob),
-              obj, obj->def->name);
-
-    virDomainObjResetAsyncJob(obj->job);
-    qemuDomainSaveStatus(obj);
-    virCondBroadcast(&obj->job->asyncCond);
-}
-
 void
 qemuDomainObjAbortAsyncJob(virDomainObj *obj)
 {
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 0cc4dc44f3..1cf9fcc113 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -69,11 +69,6 @@ int qemuDomainAsyncJobPhaseFromString(virDomainAsyncJob job,
 void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
                                      virDomainObj *vm);
 
-int qemuDomainObjBeginAsyncJob(virDomainObj *obj,
-                               virDomainAsyncJob asyncJob,
-                               virDomainJobOperation operation,
-                               unsigned long apiFlags)
-    G_GNUC_WARN_UNUSED_RESULT;
 int qemuDomainObjBeginNestedJob(virDomainObj *obj,
                                 virDomainAsyncJob asyncJob)
     G_GNUC_WARN_UNUSED_RESULT;
@@ -84,7 +81,6 @@ int qemuDomainObjBeginJobNowait(virDomainObj *obj,
                                 virDomainJob job)
     G_GNUC_WARN_UNUSED_RESULT;
 
-void qemuDomainObjEndAsyncJob(virDomainObj *obj);
 void qemuDomainObjAbortAsyncJob(virDomainObj *obj);
 void qemuDomainObjSetJobPhase(virDomainObj *obj,
                               int phase);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7cabd32b64..379ec95dc5 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -2637,8 +2637,8 @@ qemuDomainSaveInternal(virQEMUDriver *driver,
     virQEMUSaveData *data = NULL;
     g_autoptr(qemuDomainSaveCookie) cookie = NULL;
 
-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_SAVE,
-                                   VIR_DOMAIN_JOB_OPERATION_SAVE, flags) < 0)
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_SAVE,
+                                  VIR_DOMAIN_JOB_OPERATION_SAVE, flags) < 0)
         goto cleanup;
 
     if (!qemuMigrationSrcIsAllowed(driver, vm, false, VIR_ASYNC_JOB_SAVE, 0))
@@ -2735,7 +2735,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver,
             virErrorRestore(&save_err);
         }
     }
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
     if (ret == 0)
         qemuDomainRemoveInactive(driver, vm);
 
@@ -3209,7 +3209,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
     if (virDomainCoreDumpWithFormatEnsureACL(dom->conn, vm->def) < 0)
         goto cleanup;
 
-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
                                   VIR_DOMAIN_JOB_OPERATION_DUMP,
                                   flags) < 0)
         goto cleanup;
@@ -3275,7 +3275,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
         }
     }
 
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
     if (ret == 0 && flags & VIR_DUMP_CRASH)
         qemuDomainRemoveInactive(driver, vm);
 
@@ -3447,7 +3447,7 @@ processWatchdogEvent(virQEMUDriver *driver,
 
     switch (action) {
     case VIR_DOMAIN_WATCHDOG_ACTION_DUMP:
-        if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
+        if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
                                       VIR_DOMAIN_JOB_OPERATION_DUMP,
                                       flags) < 0) {
             return;
@@ -3475,7 +3475,7 @@ processWatchdogEvent(virQEMUDriver *driver,
     }
 
 endjob:
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
 }
 
 static int
@@ -3523,7 +3523,7 @@ processGuestPanicEvent(virQEMUDriver *driver,
     bool removeInactive = false;
     unsigned long flags = VIR_DUMP_MEMORY_ONLY;
 
-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
                                   VIR_DOMAIN_JOB_OPERATION_DUMP, flags) < 0)
         return;
 
@@ -3587,7 +3587,7 @@ processGuestPanicEvent(virQEMUDriver *driver,
     }
 
 endjob:
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
     if (removeInactive)
         qemuDomainRemoveInactive(driver, vm);
 }
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 38c2f6fdfd..d00de9cc32 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -120,7 +120,7 @@ qemuMigrationJobStart(virDomainObj *vm,
     }
     mask |= JOB_MASK(VIR_JOB_MODIFY_MIGRATION_SAFE);
 
-    if (qemuDomainObjBeginAsyncJob(vm, job, op, apiFlags) < 0)
+    if (virDomainObjBeginAsyncJob(vm, job, op, apiFlags) < 0)
         return -1;
 
     qemuDomainJobSetStatsType(vm->job->current,
@@ -203,7 +203,7 @@ qemuMigrationJobIsActive(virDomainObj *vm,
 static void ATTRIBUTE_NONNULL(1)
 qemuMigrationJobFinish(virDomainObj *vm)
 {
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
 }
 
 
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 94429bf3f8..c0338d25d2 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4678,7 +4678,7 @@ qemuProcessBeginJob(virDomainObj *vm,
                     virDomainJobOperation operation,
                     unsigned long apiFlags)
 {
-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_START,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_START,
                                   operation, apiFlags) < 0)
         return -1;
 
@@ -4690,7 +4690,7 @@ qemuProcessBeginJob(virDomainObj *vm,
 void
 qemuProcessEndJob(virDomainObj *vm)
 {
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
 }
 
 
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index c5e5e3ed5b..d2835ab1a8 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1794,7 +1794,7 @@ qemuSnapshotCreateXML(virDomainPtr domain,
      * a regular job, so we need to set the job mask to disallow query as
      * 'savevm' blocks the monitor. External snapshot will then modify the
      * job mask appropriately. */
-    if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_SNAPSHOT,
+    if (virDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_SNAPSHOT,
                                   VIR_DOMAIN_JOB_OPERATION_SNAPSHOT, flags) < 0)
         return NULL;
 
@@ -1806,7 +1806,7 @@ qemuSnapshotCreateXML(virDomainPtr domain,
         snapshot = qemuSnapshotCreate(vm, domain, def, driver, cfg, flags);
     }
 
-    qemuDomainObjEndAsyncJob(vm);
+    virDomainObjEndAsyncJob(vm);
 
     return snapshot;
 }
--
2.37.2
From nobody Fri May 17 05:50:09 2024
From: Kristina Hanicova
To: libvir-list@redhat.com
Subject: [PATCH v2 17/17] qemu & conf: move BeginNestedJob & BeginJobNowait into src/conf
Date: Mon, 5 Sep 2022 15:57:15 +0200

Signed-off-by: Kristina Hanicova
Reviewed-by: Ján Tomko
---
 src/conf/virdomainjob.c | 44 +++++++++++++++++++++++++++
 src/conf/virdomainjob.h | 6 ++++
 src/libvirt_private.syms | 2 +
 src/qemu/qemu_domain.c | 2 +-
 src/qemu/qemu_domainjob.c | 44 ---------------------------
 src/qemu/qemu_domainjob.h | 7 -----
 src/qemu/qemu_driver.c | 2 +-
 src/qemu/qemu_process.c | 2 +-
 8 files changed, 55 insertions(+), 54 deletions(-)
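A minimal sketch of the "nowait" variant moved by this patch, modelled
on the qemuConnectGetAllDomainStats() hunk below (libvirt-internal
headers assumed; exampleCollectStats() is hypothetical). A busy domain
is skipped without waiting and without reporting an error — the nested
variant, by contrast, is used from inside an already-running async job,
as in the qemuProcessStop() hunk:

    static int
    qemuDomainExampleStats(virDomainObj *vm, bool nowait)
    {
        int rv;

        if (nowait)
            rv = virDomainObjBeginJobNowait(vm, VIR_JOB_QUERY);
        else
            rv = virDomainObjBeginJob(vm, VIR_JOB_QUERY);

        if (rv < 0)
            return 0;    /* nowait: domain busy, skip it silently */

        rv = exampleCollectStats(vm);    /* hypothetical */

        virDomainObjEndJob(vm);
        return rv;
    }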
+ *
+ * Returns: see virDomainObjBeginJobInternal
+ */
+int
+virDomainObjBeginJobNowait(virDomainObj *obj,
+                           virDomainJob job)
+{
+    return virDomainObjBeginJobInternal(obj, obj->job, job,
+                                        VIR_AGENT_JOB_NONE,
+                                        VIR_ASYNC_JOB_NONE, true);
+}
+
 /*
  * obj must be locked and have a reference before calling
  *
diff --git a/src/conf/virdomainjob.h b/src/conf/virdomainjob.h
index 3cd02ef4ae..c101334596 100644
--- a/src/conf/virdomainjob.h
+++ b/src/conf/virdomainjob.h
@@ -257,6 +257,12 @@ int virDomainObjBeginAsyncJob(virDomainObj *obj,
                               virDomainJobOperation operation,
                               unsigned long apiFlags)
     G_GNUC_WARN_UNUSED_RESULT;
+int virDomainObjBeginNestedJob(virDomainObj *obj,
+                               virDomainAsyncJob asyncJob)
+    G_GNUC_WARN_UNUSED_RESULT;
+int virDomainObjBeginJobNowait(virDomainObj *obj,
+                               virDomainJob job)
+    G_GNUC_WARN_UNUSED_RESULT;

 void virDomainObjEndJob(virDomainObj *obj);
 void virDomainObjEndAgentJob(virDomainObj *obj);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 254abd5084..25794bc2f4 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1191,6 +1191,8 @@ virDomainObjBeginAgentJob;
 virDomainObjBeginAsyncJob;
 virDomainObjBeginJob;
 virDomainObjBeginJobInternal;
+virDomainObjBeginJobNowait;
+virDomainObjBeginNestedJob;
 virDomainObjCanSetJob;
 virDomainObjClearJob;
 virDomainObjEndAgentJob;
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 83d78900a6..d7ef23f493 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -5968,7 +5968,7 @@ qemuDomainObjEnterMonitorInternal(virDomainObj *obj,

     if (asyncJob != VIR_ASYNC_JOB_NONE) {
         int ret;
-        if ((ret = qemuDomainObjBeginNestedJob(obj, asyncJob)) < 0)
+        if ((ret = virDomainObjBeginNestedJob(obj, asyncJob)) < 0)
             return ret;
         if (!virDomainObjIsActive(obj)) {
             virReportError(VIR_ERR_OPERATION_FAILED, "%s",
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 99dcdb49b9..a170fdd97d 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -655,50 +655,6 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
     obj->job->asyncOwner = 0;
 }

-int
-qemuDomainObjBeginNestedJob(virDomainObj *obj,
-                            virDomainAsyncJob asyncJob)
-{
-    if (asyncJob != obj->job->asyncJob) {
-        virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("unexpected async job %d type expected %d"),
-                       asyncJob, obj->job->asyncJob);
-        return -1;
-    }
-
-    if (obj->job->asyncOwner != virThreadSelfID()) {
-        VIR_WARN("This thread doesn't seem to be the async job owner: %llu",
-                 obj->job->asyncOwner);
-    }
-
-    return virDomainObjBeginJobInternal(obj, obj->job,
-                                        VIR_JOB_ASYNC_NESTED,
-                                        VIR_AGENT_JOB_NONE,
-                                        VIR_ASYNC_JOB_NONE,
-                                        false);
-}
-
-/**
- * qemuDomainObjBeginJobNowait:
- *
- * @obj: domain object
- * @job: virDomainJob to start
- *
- * Acquires job for a domain object which must be locked before
- * calling. If there's already a job running it returns
- * immediately without any error reported.
- *
- * Returns: see qemuDomainObjBeginJobInternal
- */
-int
-qemuDomainObjBeginJobNowait(virDomainObj *obj,
-                            virDomainJob job)
-{
-    return virDomainObjBeginJobInternal(obj, obj->job, job,
-                                        VIR_AGENT_JOB_NONE,
-                                        VIR_ASYNC_JOB_NONE, true);
-}
-
 void
 qemuDomainObjAbortAsyncJob(virDomainObj *obj)
 {
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 1cf9fcc113..c3de401aa5 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -69,13 +69,6 @@ int qemuDomainAsyncJobPhaseFromString(virDomainAsyncJob job,
 void qemuDomainEventEmitJobCompleted(virQEMUDriver *driver,
                                      virDomainObj *vm);

-int qemuDomainObjBeginNestedJob(virDomainObj *obj,
-                                virDomainAsyncJob asyncJob)
-    G_GNUC_WARN_UNUSED_RESULT;
-int qemuDomainObjBeginJobNowait(virDomainObj *obj,
-                                virDomainJob job)
-    G_GNUC_WARN_UNUSED_RESULT;
-
 void qemuDomainObjAbortAsyncJob(virDomainObj *obj);
 void qemuDomainObjSetJobPhase(virDomainObj *obj,
                               int phase);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 379ec95dc5..838ce8e9bd 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -18841,7 +18841,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
             int rv;

             if (flags & VIR_CONNECT_GET_ALL_DOMAINS_STATS_NOWAIT)
-                rv = qemuDomainObjBeginJobNowait(vm, VIR_JOB_QUERY);
+                rv = virDomainObjBeginJobNowait(vm, VIR_JOB_QUERY);
             else
                 rv = virDomainObjBeginJob(vm, VIR_JOB_QUERY);

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index c0338d25d2..0d11cabf2d 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8102,7 +8102,7 @@ void qemuProcessStop(virQEMUDriver *driver,
     virErrorPreserveLast(&orig_err);

     if (asyncJob != VIR_ASYNC_JOB_NONE) {
-        if (qemuDomainObjBeginNestedJob(vm, asyncJob) < 0)
+        if (virDomainObjBeginNestedJob(vm, asyncJob) < 0)
             goto cleanup;
     } else if (vm->job->asyncJob != VIR_ASYNC_JOB_NONE &&
               vm->job->asyncOwner == virThreadSelfID() &&
--
2.37.2
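
A usage sketch for readers of this archive: the pattern below shows how a
driver is expected to consume the job helpers this series moves into
src/conf. It is not code from the patches; the function name and the
"gather data" step are illustrative assumptions, while the helper calls
and their semantics come from src/conf/virdomainjob.h and the doc comments
above.

    #include "virdomainjob.h"

    /* Sketch of a query-style driver API built on the moved helpers.
     * The caller must hold the lock on @vm, as the helpers require. */
    static int
    exampleDomainCollectStats(virDomainObj *vm, bool nowait)
    {
        int rv;

        if (nowait)
            /* Fails immediately if another job holds @vm; no error
             * is reported in that case. */
            rv = virDomainObjBeginJobNowait(vm, VIR_JOB_QUERY);
        else
            /* Waits up to VIR_JOB_WAIT_TIME for the current job to
             * end, then fails with an error reported. */
            rv = virDomainObjBeginJob(vm, VIR_JOB_QUERY);

        if (rv < 0)
            return -1;

        /* ... gather data while holding the VIR_JOB_QUERY job ... */

        virDomainObjEndJob(vm);
        return 0;
    }

This mirrors qemuConnectGetAllDomainStats() in the last patch: with
VIR_CONNECT_GET_ALL_DOMAINS_STATS_NOWAIT, a domain that is busy with
another job is simply skipped instead of stalling the whole stats run.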