From nobody Thu Mar 28 13:55:34 2024
From: Prathamesh Chavan
To: libvir-list@redhat.com
Cc: Prathamesh Chavan
Subject: [GSoC][PATCH 1/2] qemu_domain: Avoid using qemuDomainObjPrivatePtr as parameter
Date: Wed, 24 Jun 2020 16:15:22 +0530
Message-Id: <20200624104523.4770-2-pc44800@gmail.com>
In-Reply-To: <20200624104523.4770-1-pc44800@gmail.com>
References: <20200624104523.4770-1-pc44800@gmail.com>
In the functions `qemuDomainObjInitJob`, `qemuDomainObjResetJob`,
`qemuDomainObjResetAgentJob`, `qemuDomainObjResetAsyncJob`,
`qemuDomainObjFreeJob`, `qemuDomainJobAllowed` and
`qemuDomainNestedJobAllowed` we avoid passing the complete
qemuDomainObjPrivatePtr as a parameter and instead pass only the
qemuDomainJobObjPtr. This is done in an effort to split the qemu-job
APIs into a separate file.
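The shape of this refactor can be sketched with a toy example (a minimal sketch using made-up stand-in types such as `DemoJobObj`, not the real libvirt definitions): helpers that previously took the whole private object take only the embedded job object, so callers pass `&priv->job` and the helpers no longer depend on the private-data type.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-ins for qemuDomainJobObj / qemuDomainObjPrivate. */
typedef struct {
    int active;        /* currently running sync job, 0 == none */
    int asyncJob;      /* currently running async job, 0 == none */
    unsigned int mask; /* job types still allowed while the async job runs */
} DemoJobObj;

typedef struct {
    DemoJobObj job;
    /* ... many other per-domain fields the job code never touches ... */
} DemoDomainPrivate;

/* The helpers only see the job object, so they could live in a separate
 * file that knows nothing about DemoDomainPrivate. */
void demoJobReset(DemoJobObj *job)
{
    memset(job, 0, sizeof(*job));
}

bool demoJobAllowed(DemoJobObj *job, int newJob)
{
    /* Allowed when no sync job runs and the async job (if any)
     * does not mask out this job type. */
    return job->active == 0 &&
           (!job->asyncJob || (job->mask & (1u << newJob)) != 0);
}

/* Callers keep the private object and simply pass the sub-object. */
bool demoPrivateJobAllowed(DemoDomainPrivate *priv, int newJob)
{
    return demoJobAllowed(&priv->job, newJob);
}
```

Because the job helpers now touch only the sub-object, extracting them (as patch 2/2 does) no longer drags the whole private-data definition along.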
Signed-off-by: Prathamesh Chavan
Reviewed-by: Michal Privoznik
---
 src/qemu/qemu_domain.c | 94 ++++++++++++++++++++----------------
 src/qemu/qemu_domain.h |  4 +-
 2 files changed, 46 insertions(+), 52 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 2375b0de35..1ddaa922c5 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -350,15 +350,15 @@ qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver,
 
 
 static int
-qemuDomainObjInitJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjInitJob(qemuDomainJobObjPtr job)
 {
-    memset(&priv->job, 0, sizeof(priv->job));
+    memset(job, 0, sizeof(*job));
 
-    if (virCondInit(&priv->job.cond) < 0)
+    if (virCondInit(&job->cond) < 0)
         return -1;
 
-    if (virCondInit(&priv->job.asyncCond) < 0) {
-        virCondDestroy(&priv->job.cond);
+    if (virCondInit(&job->asyncCond) < 0) {
+        virCondDestroy(&job->cond);
         return -1;
     }
 
@@ -366,10 +366,8 @@ qemuDomainObjInitJob(qemuDomainObjPrivatePtr priv)
 }
 
 static void
-qemuDomainObjResetJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjResetJob(qemuDomainJobObjPtr job)
 {
-    qemuDomainJobObjPtr job = &priv->job;
-
     job->active = QEMU_JOB_NONE;
     job->owner = 0;
     job->ownerAPI = NULL;
@@ -378,10 +376,8 @@ qemuDomainObjResetJob(qemuDomainObjPrivatePtr priv)
 
 
 static void
-qemuDomainObjResetAgentJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjResetAgentJob(qemuDomainJobObjPtr job)
 {
-    qemuDomainJobObjPtr job = &priv->job;
-
     job->agentActive = QEMU_AGENT_JOB_NONE;
     job->agentOwner = 0;
     job->agentOwnerAPI = NULL;
@@ -390,10 +386,8 @@ qemuDomainObjResetAgentJob(qemuDomainObjPrivatePtr priv)
 
 
 static void
-qemuDomainObjResetAsyncJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjResetAsyncJob(qemuDomainJobObjPtr job)
 {
-    qemuDomainJobObjPtr job = &priv->job;
-
     job->asyncJob = QEMU_ASYNC_JOB_NONE;
     job->asyncOwner = 0;
     job->asyncOwnerAPI = NULL;
@@ -426,19 +420,19 @@ qemuDomainObjRestoreJob(virDomainObjPtr obj,
     job->migParams = g_steal_pointer(&priv->job.migParams);
     job->apiFlags = priv->job.apiFlags;
 
-    qemuDomainObjResetJob(priv);
-    qemuDomainObjResetAsyncJob(priv);
+    qemuDomainObjResetJob(&priv->job);
+    qemuDomainObjResetAsyncJob(&priv->job);
 }
 
 static void
-qemuDomainObjFreeJob(qemuDomainObjPrivatePtr priv)
+qemuDomainObjFreeJob(qemuDomainJobObjPtr job)
 {
-    qemuDomainObjResetJob(priv);
-    qemuDomainObjResetAsyncJob(priv);
-    g_clear_pointer(&priv->job.current, qemuDomainJobInfoFree);
-    g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree);
-    virCondDestroy(&priv->job.cond);
-    virCondDestroy(&priv->job.asyncCond);
+    qemuDomainObjResetJob(job);
+    qemuDomainObjResetAsyncJob(job);
+    g_clear_pointer(&job->current, qemuDomainJobInfoFree);
+    g_clear_pointer(&job->completed, qemuDomainJobInfoFree);
+    virCondDestroy(&job->cond);
+    virCondDestroy(&job->asyncCond);
 }
 
 static bool
@@ -2231,7 +2225,7 @@ qemuDomainObjPrivateAlloc(void *opaque)
     if (VIR_ALLOC(priv) < 0)
         return NULL;
 
-    if (qemuDomainObjInitJob(priv) < 0) {
+    if (qemuDomainObjInitJob(&priv->job) < 0) {
         virReportSystemError(errno, "%s",
                              _("Unable to init qemu driver mutexes"));
         goto error;
@@ -2342,7 +2336,7 @@ qemuDomainObjPrivateFree(void *data)
     qemuDomainObjPrivateDataClear(priv);
 
     virObjectUnref(priv->monConfig);
-    qemuDomainObjFreeJob(priv);
+    qemuDomainObjFreeJob(&priv->job);
     VIR_FREE(priv->lockState);
     VIR_FREE(priv->origname);
 
@@ -6215,8 +6209,8 @@ qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
     qemuDomainObjPrivatePtr priv = obj->privateData;
 
     if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
-        qemuDomainObjResetJob(priv);
-    qemuDomainObjResetAsyncJob(priv);
+        qemuDomainObjResetJob(&priv->job);
+    qemuDomainObjResetAsyncJob(&priv->job);
     qemuDomainObjSaveStatus(driver, obj);
 }
 
@@ -6237,28 +6231,28 @@ qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj)
 }
 
 static bool
-qemuDomainNestedJobAllowed(qemuDomainObjPrivatePtr priv, qemuDomainJob job)
+qemuDomainNestedJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob)
 {
-    return !priv->job.asyncJob ||
-           job == QEMU_JOB_NONE ||
-           (priv->job.mask & JOB_MASK(job)) != 0;
+    return !jobs->asyncJob ||
+           newJob == QEMU_JOB_NONE ||
+           (jobs->mask & JOB_MASK(newJob)) != 0;
 }
 
 bool
-qemuDomainJobAllowed(qemuDomainObjPrivatePtr priv, qemuDomainJob job)
+qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob)
 {
-    return !priv->job.active && qemuDomainNestedJobAllowed(priv, job);
+    return !jobs->active && qemuDomainNestedJobAllowed(jobs, newJob);
 }
 
 static bool
-qemuDomainObjCanSetJob(qemuDomainObjPrivatePtr priv,
-                       qemuDomainJob job,
-                       qemuDomainAgentJob agentJob)
+qemuDomainObjCanSetJob(qemuDomainJobObjPtr job,
+                       qemuDomainJob newJob,
+                       qemuDomainAgentJob newAgentJob)
 {
-    return ((job == QEMU_JOB_NONE ||
-             priv->job.active == QEMU_JOB_NONE) &&
-            (agentJob == QEMU_AGENT_JOB_NONE ||
-             priv->job.agentActive == QEMU_AGENT_JOB_NONE));
+    return ((newJob == QEMU_JOB_NONE ||
+             job->active == QEMU_JOB_NONE) &&
+            (newAgentJob == QEMU_AGENT_JOB_NONE ||
+             job->agentActive == QEMU_AGENT_JOB_NONE));
 }
 
 /* Give up waiting for mutex after 30 seconds */
@@ -6330,7 +6324,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
         goto error;
     }
 
-    while (!nested && !qemuDomainNestedJobAllowed(priv, job)) {
+    while (!nested && !qemuDomainNestedJobAllowed(&priv->job, job)) {
         if (nowait)
             goto cleanup;
 
@@ -6339,7 +6333,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
         goto error;
     }
 
-    while (!qemuDomainObjCanSetJob(priv, job, agentJob)) {
+    while (!qemuDomainObjCanSetJob(&priv->job, job, agentJob)) {
         if (nowait)
             goto cleanup;
 
@@ -6350,13 +6344,13 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
 
     /* No job is active but a new async job could have been started while obj
      * was unlocked, so we need to recheck it. */
-    if (!nested && !qemuDomainNestedJobAllowed(priv, job))
+    if (!nested && !qemuDomainNestedJobAllowed(&priv->job, job))
         goto retry;
 
     ignore_value(virTimeMillisNow(&now));
 
     if (job) {
-        qemuDomainObjResetJob(priv);
+        qemuDomainObjResetJob(&priv->job);
 
         if (job != QEMU_JOB_ASYNC) {
             VIR_DEBUG("Started job: %s (async=%s vm=%p name=%s)",
@@ -6371,7 +6365,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
             VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
                       qemuDomainAsyncJobTypeToString(asyncJob),
                       obj, obj->def->name);
-            qemuDomainObjResetAsyncJob(priv);
+            qemuDomainObjResetAsyncJob(&priv->job);
             priv->job.current = g_new0(qemuDomainJobInfo, 1);
             priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE;
             priv->job.asyncJob = asyncJob;
@@ -6383,7 +6377,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
     }
 
     if (agentJob) {
-        qemuDomainObjResetAgentJob(priv);
+        qemuDomainObjResetAgentJob(&priv->job);
 
         VIR_DEBUG("Started agent job: %s (vm=%p name=%s job=%s async=%s)",
                   qemuDomainAgentJobTypeToString(agentJob),
@@ -6428,7 +6422,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
               duration / 1000, agentDuration / 1000, asyncDuration / 1000);
 
     if (job) {
-        if (nested || qemuDomainNestedJobAllowed(priv, job))
+        if (nested || qemuDomainNestedJobAllowed(&priv->job, job))
             blocker = priv->job.ownerAPI;
         else
             blocker = priv->job.asyncOwnerAPI;
@@ -6617,7 +6611,7 @@ qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
              qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
              obj, obj->def->name);
 
-    qemuDomainObjResetJob(priv);
+    qemuDomainObjResetJob(&priv->job);
    if (qemuDomainTrackJob(job))
        qemuDomainObjSaveStatus(driver, obj);
    /* We indeed need to wake up ALL threads waiting because
@@ -6638,7 +6632,7 @@ qemuDomainObjEndAgentJob(virDomainObjPtr obj)
              qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
              obj, obj->def->name);
 
-    qemuDomainObjResetAgentJob(priv);
+    qemuDomainObjResetAgentJob(&priv->job);
    /* We indeed need to wake up ALL threads waiting because
     * grabbing a job requires checking more variables. */
    virCondBroadcast(&priv->job.cond);
@@ -6655,7 +6649,7 @@ qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
              qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
              obj, obj->def->name);
 
-    qemuDomainObjResetAsyncJob(priv);
+    qemuDomainObjResetAsyncJob(&priv->job);
    qemuDomainObjSaveStatus(driver, obj);
    virCondBroadcast(&priv->job.asyncCond);
 }
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index e78a2b935d..19e80fef2b 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -860,8 +860,8 @@ void qemuDomainSetFakeReboot(virQEMUDriverPtr driver,
                              virDomainObjPtr vm,
                              bool value);
 
-bool qemuDomainJobAllowed(qemuDomainObjPrivatePtr priv,
-                          qemuDomainJob job);
+bool qemuDomainJobAllowed(qemuDomainJobObjPtr jobs,
+                          qemuDomainJob newJob);
 
 int qemuDomainCheckDiskStartupPolicy(virQEMUDriverPtr driver,
                                      virDomainObjPtr vm,
-- 
2.17.1

From nobody Thu Mar 28 13:55:34 2024
From: Prathamesh Chavan
To: libvir-list@redhat.com
Cc: Prathamesh Chavan
Subject: [GSoC][PATCH 2/2] qemu_domainjob: moved domain job APIs to a separate file
Date: Wed, 24 Jun 2020 16:15:23 +0530
Message-Id: <20200624104523.4770-3-pc44800@gmail.com>
In-Reply-To: <20200624104523.4770-1-pc44800@gmail.com>
References: <20200624104523.4770-1-pc44800@gmail.com>

All the domain job related APIs were present in `qemu_domain.c` along
with the other domain APIs. In this patch, we move all the qemu domain
job APIs into a separate file.
Also, in this process, `qemuDomainTrackJob()`, `qemuDomainFreeJob()`,
`qemuDomainInitJob()` and `qemuDomainObjSaveStatus()` were converted to
non-static functions and exposed via `qemu_domain.h`.

Signed-off-by: Prathamesh Chavan
Reviewed-by: Michal Privoznik
---
 po/POTFILES.in            |    1 +
 src/qemu/Makefile.inc.am  |    2 +
 src/qemu/qemu_domain.c    | 1162 +----------------------------------
 src/qemu/qemu_domain.h    |  247 +-------
 src/qemu/qemu_domainjob.c | 1192 +++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_domainjob.h |  269 +++++++++
 6 files changed, 1470 insertions(+), 1403 deletions(-)
 create mode 100644 src/qemu/qemu_domainjob.c
 create mode 100644 src/qemu/qemu_domainjob.h

diff --git a/po/POTFILES.in b/po/POTFILES.in
index 6607e298f2..af52054aa4 100644
--- a/po/POTFILES.in
+++ b/po/POTFILES.in
@@ -152,6 +152,7 @@
 @SRCDIR@/src/qemu/qemu_conf.c
 @SRCDIR@/src/qemu/qemu_dbus.c
 @SRCDIR@/src/qemu/qemu_domain.c
+@SRCDIR@/src/qemu/qemu_domainjob.c
 @SRCDIR@/src/qemu/qemu_domain_address.c
 @SRCDIR@/src/qemu/qemu_driver.c
 @SRCDIR@/src/qemu/qemu_extdevice.c
diff --git a/src/qemu/Makefile.inc.am b/src/qemu/Makefile.inc.am
index 6a7fc0822b..f83a675ba2 100644
--- a/src/qemu/Makefile.inc.am
+++ b/src/qemu/Makefile.inc.am
@@ -17,6 +17,8 @@ QEMU_DRIVER_SOURCES = \
 	qemu/qemu_dbus.h \
 	qemu/qemu_domain.c \
 	qemu/qemu_domain.h \
+	qemu/qemu_domainjob.c \
+	qemu/qemu_domainjob.h \
 	qemu/qemu_domain_address.c \
 	qemu/qemu_domain_address.h \
 	qemu/qemu_cgroup.c \
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 1ddaa922c5..cb8d00c30b 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -83,44 +83,6 @@
 
 VIR_LOG_INIT("qemu.qemu_domain");
 
-VIR_ENUM_IMPL(qemuDomainJob,
-              QEMU_JOB_LAST,
-              "none",
-              "query",
-              "destroy",
-              "suspend",
-              "modify",
-              "abort",
-              "migration operation",
-              "none",   /* async job is never stored in job.active */
-              "async nested",
-);
-
-VIR_ENUM_IMPL(qemuDomainAgentJob,
-              QEMU_AGENT_JOB_LAST,
-              "none",
-              "query",
-              "modify",
-);
-
-VIR_ENUM_IMPL(qemuDomainAsyncJob,
-              QEMU_ASYNC_JOB_LAST,
-              "none",
-              "migration out",
-              "migration in",
-              "save",
-              "dump",
-              "snapshot",
-              "start",
-              "backup",
-);
-
-VIR_ENUM_IMPL(qemuDomainNamespace,
-              QEMU_DOMAIN_NS_LAST,
-              "mount",
-);
-
-
 /**
  * qemuDomainObjFromDomain:
  * @domain: Domain pointer that has to be looked up
@@ -204,58 +166,6 @@ qemuDomainLogContextFinalize(GObject *object)
     G_OBJECT_CLASS(qemu_domain_log_context_parent_class)->finalize(object);
 }
 
-const char *
-qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job,
-                                int phase G_GNUC_UNUSED)
-{
-    switch (job) {
-    case QEMU_ASYNC_JOB_MIGRATION_OUT:
-    case QEMU_ASYNC_JOB_MIGRATION_IN:
-        return qemuMigrationJobPhaseTypeToString(phase);
-
-    case QEMU_ASYNC_JOB_SAVE:
-    case QEMU_ASYNC_JOB_DUMP:
-    case QEMU_ASYNC_JOB_SNAPSHOT:
-    case QEMU_ASYNC_JOB_START:
-    case QEMU_ASYNC_JOB_NONE:
-    case QEMU_ASYNC_JOB_BACKUP:
-        G_GNUC_FALLTHROUGH;
-    case QEMU_ASYNC_JOB_LAST:
-        break;
-    }
-
-    return "none";
-}
-
-int
-qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job,
-                                  const char *phase)
-{
-    if (!phase)
-        return 0;
-
-    switch (job) {
-    case QEMU_ASYNC_JOB_MIGRATION_OUT:
-    case QEMU_ASYNC_JOB_MIGRATION_IN:
-        return qemuMigrationJobPhaseTypeFromString(phase);
-
-    case QEMU_ASYNC_JOB_SAVE:
-    case QEMU_ASYNC_JOB_DUMP:
-    case QEMU_ASYNC_JOB_SNAPSHOT:
-    case QEMU_ASYNC_JOB_START:
-    case QEMU_ASYNC_JOB_NONE:
-    case QEMU_ASYNC_JOB_BACKUP:
-        G_GNUC_FALLTHROUGH;
-    case QEMU_ASYNC_JOB_LAST:
-        break;
-    }
-
-    if (STREQ(phase, "none"))
-        return 0;
-    else
-        return -1;
-}
-
 
 bool
 qemuDomainNamespaceEnabled(virDomainObjPtr vm,
@@ -304,573 +214,6 @@ qemuDomainDisableNamespace(virDomainObjPtr vm,
     }
 }
 
-
-void
-qemuDomainJobInfoFree(qemuDomainJobInfoPtr info)
-{
-    g_free(info->errmsg);
-    g_free(info);
-}
-
-
-qemuDomainJobInfoPtr
-qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info)
-{
-    qemuDomainJobInfoPtr ret = g_new0(qemuDomainJobInfo, 1);
-
-    memcpy(ret, info, sizeof(*info));
-
-    ret->errmsg = g_strdup(info->errmsg);
-
-    return ret;
-}
-
-void
-qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver,
-                                virDomainObjPtr vm)
-{
-    qemuDomainObjPrivatePtr priv = vm->privateData;
-    virObjectEventPtr event;
-    virTypedParameterPtr params = NULL;
-    int nparams = 0;
-    int type;
-
-    if (!priv->job.completed)
-        return;
-
-    if (qemuDomainJobInfoToParams(priv->job.completed, &type,
-                                  &params, &nparams) < 0) {
-        VIR_WARN("Could not get stats for completed job; domain %s",
-                 vm->def->name);
-    }
-
-    event = virDomainEventJobCompletedNewFromObj(vm, params, nparams);
-    virObjectEventStateQueue(driver->domainEventState, event);
-}
-
-
-static int
-qemuDomainObjInitJob(qemuDomainJobObjPtr job)
-{
-    memset(job, 0, sizeof(*job));
-
-    if (virCondInit(&job->cond) < 0)
-        return -1;
-
-    if (virCondInit(&job->asyncCond) < 0) {
-        virCondDestroy(&job->cond);
-        return -1;
-    }
-
-    return 0;
-}
-
-static void
-qemuDomainObjResetJob(qemuDomainJobObjPtr job)
-{
-    job->active = QEMU_JOB_NONE;
-    job->owner = 0;
-    job->ownerAPI = NULL;
-    job->started = 0;
-}
-
-
-static void
-qemuDomainObjResetAgentJob(qemuDomainJobObjPtr job)
-{
-    job->agentActive = QEMU_AGENT_JOB_NONE;
-    job->agentOwner = 0;
-    job->agentOwnerAPI = NULL;
-    job->agentStarted = 0;
-}
-
-
-static void
-qemuDomainObjResetAsyncJob(qemuDomainJobObjPtr job)
-{
-    job->asyncJob = QEMU_ASYNC_JOB_NONE;
-    job->asyncOwner = 0;
-    job->asyncOwnerAPI = NULL;
-    job->asyncStarted = 0;
-    job->phase = 0;
-    job->mask = QEMU_JOB_DEFAULT_MASK;
-    job->abortJob = false;
-    job->spiceMigration = false;
-    job->spiceMigrated = false;
-    job->dumpCompleted = false;
-    VIR_FREE(job->error);
-    g_clear_pointer(&job->current, qemuDomainJobInfoFree);
-    qemuMigrationParamsFree(job->migParams);
-    job->migParams = NULL;
-    job->apiFlags = 0;
-}
-
-void
-qemuDomainObjRestoreJob(virDomainObjPtr obj,
-                        qemuDomainJobObjPtr job)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-
-    memset(job, 0, sizeof(*job));
-    job->active = priv->job.active;
-    job->owner = priv->job.owner;
-    job->asyncJob = priv->job.asyncJob;
-    job->asyncOwner = priv->job.asyncOwner;
-    job->phase = priv->job.phase;
-    job->migParams = g_steal_pointer(&priv->job.migParams);
-    job->apiFlags = priv->job.apiFlags;
-
-    qemuDomainObjResetJob(&priv->job);
-    qemuDomainObjResetAsyncJob(&priv->job);
-}
-
-static void
-qemuDomainObjFreeJob(qemuDomainJobObjPtr job)
-{
-    qemuDomainObjResetJob(job);
-    qemuDomainObjResetAsyncJob(job);
-    g_clear_pointer(&job->current, qemuDomainJobInfoFree);
-    g_clear_pointer(&job->completed, qemuDomainJobInfoFree);
-    virCondDestroy(&job->cond);
-    virCondDestroy(&job->asyncCond);
-}
-
-static bool
-qemuDomainTrackJob(qemuDomainJob job)
-{
-    return (QEMU_DOMAIN_TRACK_JOBS & JOB_MASK(job)) != 0;
-}
-
-
-int
-qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo)
-{
-    unsigned long long now;
-
-    if (!jobInfo->started)
-        return 0;
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-
-    if (now < jobInfo->started) {
-        VIR_WARN("Async job starts in the future");
-        jobInfo->started = 0;
-        return 0;
-    }
-
-    jobInfo->timeElapsed = now - jobInfo->started;
-    return 0;
-}
-
-int
-qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo)
-{
-    unsigned long long now;
-
-    if (!jobInfo->stopped)
-        return 0;
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-
-    if (now < jobInfo->stopped) {
-        VIR_WARN("Guest's CPUs stopped in the future");
-        jobInfo->stopped = 0;
-        return 0;
-    }
-
-    jobInfo->stats.mig.downtime = now - jobInfo->stopped;
-    jobInfo->stats.mig.downtime_set = true;
-    return 0;
-}
-
-static virDomainJobType
-qemuDomainJobStatusToType(qemuDomainJobStatus status)
-{
-    switch (status) {
-    case QEMU_DOMAIN_JOB_STATUS_NONE:
-        break;
-
-    case QEMU_DOMAIN_JOB_STATUS_ACTIVE:
-    case QEMU_DOMAIN_JOB_STATUS_MIGRATING:
-    case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED:
-    case QEMU_DOMAIN_JOB_STATUS_POSTCOPY:
-    case QEMU_DOMAIN_JOB_STATUS_PAUSED:
-        return VIR_DOMAIN_JOB_UNBOUNDED;
-
-    case QEMU_DOMAIN_JOB_STATUS_COMPLETED:
-        return VIR_DOMAIN_JOB_COMPLETED;
-
-    case QEMU_DOMAIN_JOB_STATUS_FAILED:
-        return VIR_DOMAIN_JOB_FAILED;
-
-    case QEMU_DOMAIN_JOB_STATUS_CANCELED:
-        return VIR_DOMAIN_JOB_CANCELLED;
-    }
-
-    return VIR_DOMAIN_JOB_NONE;
-}
-
-int
-qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo,
-                        virDomainJobInfoPtr info)
-{
-    info->type = qemuDomainJobStatusToType(jobInfo->status);
-    info->timeElapsed = jobInfo->timeElapsed;
-
-    switch (jobInfo->statsType) {
-    case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION:
-        info->memTotal = jobInfo->stats.mig.ram_total;
-        info->memRemaining = jobInfo->stats.mig.ram_remaining;
-        info->memProcessed = jobInfo->stats.mig.ram_transferred;
-        info->fileTotal = jobInfo->stats.mig.disk_total +
-                          jobInfo->mirrorStats.total;
-        info->fileRemaining = jobInfo->stats.mig.disk_remaining +
-                              (jobInfo->mirrorStats.total -
-                               jobInfo->mirrorStats.transferred);
-        info->fileProcessed = jobInfo->stats.mig.disk_transferred +
-                              jobInfo->mirrorStats.transferred;
-        break;
-
-    case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP:
-        info->memTotal = jobInfo->stats.mig.ram_total;
-        info->memRemaining = jobInfo->stats.mig.ram_remaining;
-        info->memProcessed = jobInfo->stats.mig.ram_transferred;
-        break;
-
-    case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP:
-        info->memTotal = jobInfo->stats.dump.total;
-        info->memProcessed = jobInfo->stats.dump.completed;
-        info->memRemaining = info->memTotal - info->memProcessed;
-        break;
-
-    case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP:
-        info->fileTotal = jobInfo->stats.backup.total;
-        info->fileProcessed = jobInfo->stats.backup.transferred;
-        info->fileRemaining = info->fileTotal - info->fileProcessed;
-        break;
-
-    case QEMU_DOMAIN_JOB_STATS_TYPE_NONE:
-        break;
-    }
-
-    info->dataTotal = info->memTotal + info->fileTotal;
-    info->dataRemaining = info->memRemaining + info->fileRemaining;
-    info->dataProcessed = info->memProcessed + info->fileProcessed;
-
-    return 0;
-}
-
-
-static int
-qemuDomainMigrationJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
-                                   int *type,
-                                   virTypedParameterPtr *params,
-                                   int *nparams)
-{
-    qemuMonitorMigrationStats *stats = &jobInfo->stats.mig;
-    qemuDomainMirrorStatsPtr mirrorStats = &jobInfo->mirrorStats;
-    virTypedParameterPtr par = NULL;
-    int maxpar = 0;
-    int npar = 0;
-    unsigned long long mirrorRemaining = mirrorStats->total -
-                                         mirrorStats->transferred;
-
-    if (virTypedParamsAddInt(&par, &npar, &maxpar,
-                             VIR_DOMAIN_JOB_OPERATION,
-                             jobInfo->operation) < 0)
-        goto error;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_TIME_ELAPSED,
-                                jobInfo->timeElapsed) < 0)
-        goto error;
-
-    if (jobInfo->timeDeltaSet &&
-        jobInfo->timeElapsed > jobInfo->timeDelta &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_TIME_ELAPSED_NET,
-                                jobInfo->timeElapsed - jobInfo->timeDelta) < 0)
-        goto error;
-
-    if (stats->downtime_set &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DOWNTIME,
-                                stats->downtime) < 0)
-        goto error;
-
-    if (stats->downtime_set &&
-        jobInfo->timeDeltaSet &&
-        stats->downtime > jobInfo->timeDelta &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DOWNTIME_NET,
-                                stats->downtime - jobInfo->timeDelta) < 0)
-        goto error;
-
-    if (stats->setup_time_set &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_SETUP_TIME,
-                                stats->setup_time) < 0)
-        goto error;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DATA_TOTAL,
-                                stats->ram_total +
-                                stats->disk_total +
-                                mirrorStats->total) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DATA_PROCESSED,
-                                stats->ram_transferred +
-                                stats->disk_transferred +
-                                mirrorStats->transferred) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DATA_REMAINING,
-                                stats->ram_remaining +
-                                stats->disk_remaining +
-                                mirrorRemaining) < 0)
-        goto error;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_TOTAL,
-                                stats->ram_total) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_PROCESSED,
-                                stats->ram_transferred) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_REMAINING,
-                                stats->ram_remaining) < 0)
-        goto error;
-
-    if (stats->ram_bps &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_BPS,
-                                stats->ram_bps) < 0)
-        goto error;
-
-    if (stats->ram_duplicate_set) {
-        if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_MEMORY_CONSTANT,
-                                    stats->ram_duplicate) < 0 ||
-            virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_MEMORY_NORMAL,
-                                    stats->ram_normal) < 0 ||
-            virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_MEMORY_NORMAL_BYTES,
-                                    stats->ram_normal_bytes) < 0)
-            goto error;
-    }
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE,
-                                stats->ram_dirty_rate) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_ITERATION,
-                                stats->ram_iteration) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_POSTCOPY_REQS,
-                                stats->ram_postcopy_reqs) < 0)
-        goto error;
-
-    if (stats->ram_page_size > 0 &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE,
-                                stats->ram_page_size) < 0)
-        goto error;
-
-    /* The remaining stats are disk, mirror, or migration specific
-     * so if this is a SAVEDUMP, we can just skip them */
-    if (jobInfo->statsType == QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP)
-        goto done;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DISK_TOTAL,
-                                stats->disk_total +
-                                mirrorStats->total) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DISK_PROCESSED,
-                                stats->disk_transferred +
-                                mirrorStats->transferred) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DISK_REMAINING,
-                                stats->disk_remaining +
-                                mirrorRemaining) < 0)
-        goto error;
-
if (stats->disk_bps && - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DISK_BPS, - stats->disk_bps) < 0) - goto error; - - if (stats->xbzrle_set) { - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_CACHE, - stats->xbzrle_cache_size) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_BYTES, - stats->xbzrle_bytes) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_PAGES, - stats->xbzrle_pages) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_CACHE_MISSE= S, - stats->xbzrle_cache_miss) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_OVERFLOW, - stats->xbzrle_overflow) < 0) - goto error; - } - - if (stats->cpu_throttle_percentage && - virTypedParamsAddInt(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_AUTO_CONVERGE_THROTTLE, - stats->cpu_throttle_percentage) < 0) - goto error; - - done: - *type =3D qemuDomainJobStatusToType(jobInfo->status); - *params =3D par; - *nparams =3D npar; - return 0; - - error: - virTypedParamsFree(par, npar); - return -1; -} - - -static int -qemuDomainDumpJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) -{ - qemuMonitorDumpStats *stats =3D &jobInfo->stats.dump; - virTypedParameterPtr par =3D NULL; - int maxpar =3D 0; - int npar =3D 0; - - if (virTypedParamsAddInt(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_OPERATION, - jobInfo->operation) < 0) - goto error; - - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed) < 0) - goto error; - - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_TOTAL, - stats->total) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_PROCESSED, - stats->completed) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_REMAINING, - stats->total - stats->completed) < 0) 
- goto error; - - *type =3D qemuDomainJobStatusToType(jobInfo->status); - *params =3D par; - *nparams =3D npar; - return 0; - - error: - virTypedParamsFree(par, npar); - return -1; -} - - -static int -qemuDomainBackupJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) -{ - qemuDomainBackupStats *stats =3D &jobInfo->stats.backup; - g_autoptr(virTypedParamList) par =3D g_new0(virTypedParamList, 1); - - if (virTypedParamListAddInt(par, jobInfo->operation, - VIR_DOMAIN_JOB_OPERATION) < 0) - return -1; - - if (virTypedParamListAddULLong(par, jobInfo->timeElapsed, - VIR_DOMAIN_JOB_TIME_ELAPSED) < 0) - return -1; - - if (stats->transferred > 0 || stats->total > 0) { - if (virTypedParamListAddULLong(par, stats->total, - VIR_DOMAIN_JOB_DISK_TOTAL) < 0) - return -1; - - if (virTypedParamListAddULLong(par, stats->transferred, - VIR_DOMAIN_JOB_DISK_PROCESSED) < 0) - return -1; - - if (virTypedParamListAddULLong(par, stats->total - stats->transfer= red, - VIR_DOMAIN_JOB_DISK_REMAINING) < 0) - return -1; - } - - if (stats->tmp_used > 0 || stats->tmp_total > 0) { - if (virTypedParamListAddULLong(par, stats->tmp_used, - VIR_DOMAIN_JOB_DISK_TEMP_USED) < 0) - return -1; - - if (virTypedParamListAddULLong(par, stats->tmp_total, - VIR_DOMAIN_JOB_DISK_TEMP_TOTAL) < 0) - return -1; - } - - if (jobInfo->status !=3D QEMU_DOMAIN_JOB_STATUS_ACTIVE && - virTypedParamListAddBoolean(par, - jobInfo->status =3D=3D QEMU_DOMAIN_JOB= _STATUS_COMPLETED, - VIR_DOMAIN_JOB_SUCCESS) < 0) - return -1; - - if (jobInfo->errmsg && - virTypedParamListAddString(par, jobInfo->errmsg, VIR_DOMAIN_JOB_ER= RMSG) < 0) - return -1; - - *nparams =3D virTypedParamListStealParams(par, params); - *type =3D qemuDomainJobStatusToType(jobInfo->status); - return 0; -} - - -int -qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) -{ - switch (jobInfo->statsType) { - case 
QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: - case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - return qemuDomainMigrationJobInfoToParams(jobInfo, type, params, n= params); - - case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - return qemuDomainDumpJobInfoToParams(jobInfo, type, params, nparam= s); - - case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - return qemuDomainBackupJobInfoToParams(jobInfo, type, params, npar= ams); - - case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("invalid job statistics type")); - break; - - default: - virReportEnumRangeError(qemuDomainJobStatsType, jobInfo->statsType= ); - break; - } - - return -1; -} - - /* qemuDomainGetMasterKeyFilePath: * @libDir: Directory path to domain lib files * @@ -6123,7 +5466,7 @@ virDomainDefParserConfig virQEMUDriverDomainDefParser= Config =3D { }; =20 =20 -static void +void qemuDomainObjSaveStatus(virQEMUDriverPtr driver, virDomainObjPtr obj) { @@ -6165,508 +5508,6 @@ qemuDomainSaveConfig(virDomainObjPtr obj) } =20 =20 -void -qemuDomainObjSetJobPhase(virQEMUDriverPtr driver, - virDomainObjPtr obj, - int phase) -{ - qemuDomainObjPrivatePtr priv =3D obj->privateData; - unsigned long long me =3D virThreadSelfID(); - - if (!priv->job.asyncJob) - return; - - VIR_DEBUG("Setting '%s' phase to '%s'", - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), - qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, phase)); - - if (priv->job.asyncOwner && me !=3D priv->job.asyncOwner) { - VIR_WARN("'%s' async job is owned by thread %llu", - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), - priv->job.asyncOwner); - } - - priv->job.phase =3D phase; - priv->job.asyncOwner =3D me; - qemuDomainObjSaveStatus(driver, obj); -} - -void -qemuDomainObjSetAsyncJobMask(virDomainObjPtr obj, - unsigned long long allowedJobs) -{ - qemuDomainObjPrivatePtr priv =3D obj->privateData; - - if (!priv->job.asyncJob) - return; - - priv->job.mask =3D allowedJobs | JOB_MASK(QEMU_JOB_DESTROY); -} - -void 
-qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-
-    if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
-        qemuDomainObjResetJob(&priv->job);
-    qemuDomainObjResetAsyncJob(&priv->job);
-    qemuDomainObjSaveStatus(driver, obj);
-}
-
-void
-qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-
-    VIR_DEBUG("Releasing ownership of '%s' async job",
-              qemuDomainAsyncJobTypeToString(priv->job.asyncJob));
-
-    if (priv->job.asyncOwner != virThreadSelfID()) {
-        VIR_WARN("'%s' async job is owned by thread %llu",
-                 qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
-                 priv->job.asyncOwner);
-    }
-    priv->job.asyncOwner = 0;
-}
-
-static bool
-qemuDomainNestedJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob)
-{
-    return !jobs->asyncJob ||
-           newJob == QEMU_JOB_NONE ||
-           (jobs->mask & JOB_MASK(newJob)) != 0;
-}
-
-bool
-qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob)
-{
-    return !jobs->active && qemuDomainNestedJobAllowed(jobs, newJob);
-}
-
-static bool
-qemuDomainObjCanSetJob(qemuDomainJobObjPtr job,
-                       qemuDomainJob newJob,
-                       qemuDomainAgentJob newAgentJob)
-{
-    return ((newJob == QEMU_JOB_NONE ||
-             job->active == QEMU_JOB_NONE) &&
-            (newAgentJob == QEMU_AGENT_JOB_NONE ||
-             job->agentActive == QEMU_AGENT_JOB_NONE));
-}
-
-/* Give up waiting for mutex after 30 seconds */
-#define QEMU_JOB_WAIT_TIME (1000ull * 30)
-
-/**
- * qemuDomainObjBeginJobInternal:
- * @driver: qemu driver
- * @obj: domain object
- * @job: qemuDomainJob to start
- * @asyncJob: qemuDomainAsyncJob to start
- * @nowait: don't wait trying to acquire @job
- *
- * Acquires a job for a domain object which must be locked before
- * calling. If there's already a job running, waits up to
- * QEMU_JOB_WAIT_TIME, after which the function fails reporting
- * an error unless @nowait is set.
- *
- * If @nowait is true this function tries to acquire the job and if
- * it fails, then it returns immediately without waiting. No
- * error is reported in this case.
- *
- * Returns: 0 on success,
- *         -2 if unable to start the job because of timeout or
- *            the maxQueuedJobs limit,
- *         -1 otherwise.
- */
-static int ATTRIBUTE_NONNULL(1)
-qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
-                              virDomainObjPtr obj,
-                              qemuDomainJob job,
-                              qemuDomainAgentJob agentJob,
-                              qemuDomainAsyncJob asyncJob,
-                              bool nowait)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-    unsigned long long now;
-    unsigned long long then;
-    bool nested = job == QEMU_JOB_ASYNC_NESTED;
-    bool async = job == QEMU_JOB_ASYNC;
-    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
-    const char *blocker = NULL;
-    const char *agentBlocker = NULL;
-    int ret = -1;
-    unsigned long long duration = 0;
-    unsigned long long agentDuration = 0;
-    unsigned long long asyncDuration = 0;
-
-    VIR_DEBUG("Starting job: job=%s agentJob=%s asyncJob=%s "
-              "(vm=%p name=%s, current job=%s agentJob=%s async=%s)",
-              qemuDomainJobTypeToString(job),
-              qemuDomainAgentJobTypeToString(agentJob),
-              qemuDomainAsyncJobTypeToString(asyncJob),
-              obj, obj->def->name,
-              qemuDomainJobTypeToString(priv->job.active),
-              qemuDomainAgentJobTypeToString(priv->job.agentActive),
-              qemuDomainAsyncJobTypeToString(priv->job.asyncJob));
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-
-    priv->jobs_queued++;
-    then = now + QEMU_JOB_WAIT_TIME;
-
- retry:
-    if ((!async && job != QEMU_JOB_DESTROY) &&
-        cfg->maxQueuedJobs &&
-        priv->jobs_queued > cfg->maxQueuedJobs) {
-        goto error;
-    }
-
-    while (!nested && !qemuDomainNestedJobAllowed(&priv->job, job)) {
-        if (nowait)
-            goto cleanup;
-
-        VIR_DEBUG("Waiting for async job (vm=%p name=%s)", obj, obj->def->name);
-        if (virCondWaitUntil(&priv->job.asyncCond, &obj->parent.lock, then) < 0)
-            goto error;
-    }
-
-    while (!qemuDomainObjCanSetJob(&priv->job, job, agentJob)) {
-        if (nowait)
-            goto cleanup;
-
-        VIR_DEBUG("Waiting for job (vm=%p name=%s)", obj, obj->def->name);
-        if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0)
-            goto error;
-    }
-
-    /* No job is active but a new async job could have been started while obj
-     * was unlocked, so we need to recheck it. */
-    if (!nested && !qemuDomainNestedJobAllowed(&priv->job, job))
-        goto retry;
-
-    ignore_value(virTimeMillisNow(&now));
-
-    if (job) {
-        qemuDomainObjResetJob(&priv->job);
-
-        if (job != QEMU_JOB_ASYNC) {
-            VIR_DEBUG("Started job: %s (async=%s vm=%p name=%s)",
-                      qemuDomainJobTypeToString(job),
-                      qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
-                      obj, obj->def->name);
-            priv->job.active = job;
-            priv->job.owner = virThreadSelfID();
-            priv->job.ownerAPI = virThreadJobGet();
-            priv->job.started = now;
-        } else {
-            VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
-                      qemuDomainAsyncJobTypeToString(asyncJob),
-                      obj, obj->def->name);
-            qemuDomainObjResetAsyncJob(&priv->job);
-            priv->job.current = g_new0(qemuDomainJobInfo, 1);
-            priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE;
-            priv->job.asyncJob = asyncJob;
-            priv->job.asyncOwner = virThreadSelfID();
-            priv->job.asyncOwnerAPI = virThreadJobGet();
-            priv->job.asyncStarted = now;
-            priv->job.current->started = now;
-        }
-    }
-
-    if (agentJob) {
-        qemuDomainObjResetAgentJob(&priv->job);
-
-        VIR_DEBUG("Started agent job: %s (vm=%p name=%s job=%s async=%s)",
-                  qemuDomainAgentJobTypeToString(agentJob),
-                  obj, obj->def->name,
-                  qemuDomainJobTypeToString(priv->job.active),
-                  qemuDomainAsyncJobTypeToString(priv->job.asyncJob));
-        priv->job.agentActive = agentJob;
-        priv->job.agentOwner = virThreadSelfID();
-        priv->job.agentOwnerAPI = virThreadJobGet();
-        priv->job.agentStarted = now;
-    }
-
-    if (qemuDomainTrackJob(job))
-        qemuDomainObjSaveStatus(driver, obj);
-
-    return 0;
-
- error:
-    ignore_value(virTimeMillisNow(&now));
-    if (priv->job.active && priv->job.started)
-        duration = now - priv->job.started;
-    if (priv->job.agentActive && priv->job.agentStarted)
-        agentDuration = now - priv->job.agentStarted;
-    if (priv->job.asyncJob && priv->job.asyncStarted)
-        asyncDuration = now - priv->job.asyncStarted;
-
-    VIR_WARN("Cannot start job (%s, %s, %s) for domain %s; "
-             "current job is (%s, %s, %s) "
-             "owned by (%llu %s, %llu %s, %llu %s (flags=0x%lx)) "
-             "for (%llus, %llus, %llus)",
-             qemuDomainJobTypeToString(job),
-             qemuDomainAgentJobTypeToString(agentJob),
-             qemuDomainAsyncJobTypeToString(asyncJob),
-             obj->def->name,
-             qemuDomainJobTypeToString(priv->job.active),
-             qemuDomainAgentJobTypeToString(priv->job.agentActive),
-             qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
-             priv->job.owner, NULLSTR(priv->job.ownerAPI),
-             priv->job.agentOwner, NULLSTR(priv->job.agentOwnerAPI),
-             priv->job.asyncOwner, NULLSTR(priv->job.asyncOwnerAPI),
-             priv->job.apiFlags,
-             duration / 1000, agentDuration / 1000, asyncDuration / 1000);
-
-    if (job) {
-        if (nested || qemuDomainNestedJobAllowed(&priv->job, job))
-            blocker = priv->job.ownerAPI;
-        else
-            blocker = priv->job.asyncOwnerAPI;
-    }
-
-    if (agentJob)
-        agentBlocker = priv->job.agentOwnerAPI;
-
-    if (errno == ETIMEDOUT) {
-        if (blocker && agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s agent=%s)"),
-                           blocker, agentBlocker);
-        } else if (blocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s)"),
-                           blocker);
-        } else if (agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT,
-                           _("cannot acquire state change "
-                             "lock (held by agent=%s)"),
-                           agentBlocker);
-        } else {
-            virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s",
-                           _("cannot acquire state change lock"));
-        }
-        ret = -2;
-    } else if (cfg->maxQueuedJobs &&
-               priv->jobs_queued > cfg->maxQueuedJobs) {
-        if (blocker && agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_FAILED,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s agent=%s) "
-                             "due to max_queued limit"),
-                           blocker, agentBlocker);
-        } else if (blocker) {
-            virReportError(VIR_ERR_OPERATION_FAILED,
-                           _("cannot acquire state change "
-                             "lock (held by monitor=%s) "
-                             "due to max_queued limit"),
-                           blocker);
-        } else if (agentBlocker) {
-            virReportError(VIR_ERR_OPERATION_FAILED,
-                           _("cannot acquire state change "
-                             "lock (held by agent=%s) "
-                             "due to max_queued limit"),
-                           agentBlocker);
-        } else {
-            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
-                           _("cannot acquire state change lock "
-                             "due to max_queued limit"));
-        }
-        ret = -2;
-    } else {
-        virReportSystemError(errno, "%s", _("cannot acquire job mutex"));
-    }
-
- cleanup:
-    priv->jobs_queued--;
-    return ret;
-}
-
-/*
- * obj must be locked before calling
- *
- * This must be called by anything that will change the VM state
- * in any way, or anything that will use the QEMU monitor.
- *
- * Successful calls must be followed by EndJob eventually
- */
-int qemuDomainObjBeginJob(virQEMUDriverPtr driver,
-                          virDomainObjPtr obj,
-                          qemuDomainJob job)
-{
-    if (qemuDomainObjBeginJobInternal(driver, obj, job,
-                                      QEMU_AGENT_JOB_NONE,
-                                      QEMU_ASYNC_JOB_NONE, false) < 0)
-        return -1;
-    else
-        return 0;
-}
-
-/**
- * qemuDomainObjBeginAgentJob:
- *
- * Grabs agent type of job. Use if caller talks to guest agent only.
- *
- * To end job call qemuDomainObjEndAgentJob.
- */
-int
-qemuDomainObjBeginAgentJob(virQEMUDriverPtr driver,
-                           virDomainObjPtr obj,
-                           qemuDomainAgentJob agentJob)
-{
-    return qemuDomainObjBeginJobInternal(driver, obj, QEMU_JOB_NONE,
-                                         agentJob,
-                                         QEMU_ASYNC_JOB_NONE, false);
-}
-
-int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver,
-                               virDomainObjPtr obj,
-                               qemuDomainAsyncJob asyncJob,
-                               virDomainJobOperation operation,
-                               unsigned long apiFlags)
-{
-    qemuDomainObjPrivatePtr priv;
-
-    if (qemuDomainObjBeginJobInternal(driver, obj, QEMU_JOB_ASYNC,
-                                      QEMU_AGENT_JOB_NONE,
-                                      asyncJob, false) < 0)
-        return -1;
-
-    priv = obj->privateData;
-    priv->job.current->operation = operation;
-    priv->job.apiFlags = apiFlags;
-    return 0;
-}
-
-int
-qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver,
-                            virDomainObjPtr obj,
-                            qemuDomainAsyncJob asyncJob)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-
-    if (asyncJob != priv->job.asyncJob) {
-        virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("unexpected async job %d type expected %d"),
-                       asyncJob, priv->job.asyncJob);
-        return -1;
-    }
-
-    if (priv->job.asyncOwner != virThreadSelfID()) {
-        VIR_WARN("This thread doesn't seem to be the async job owner: %llu",
-                 priv->job.asyncOwner);
-    }
-
-    return qemuDomainObjBeginJobInternal(driver, obj,
-                                         QEMU_JOB_ASYNC_NESTED,
-                                         QEMU_AGENT_JOB_NONE,
-                                         QEMU_ASYNC_JOB_NONE,
-                                         false);
-}
-
-/**
- * qemuDomainObjBeginJobNowait:
- *
- * @driver: qemu driver
- * @obj: domain object
- * @job: qemuDomainJob to start
- *
- * Acquires job for a domain object which must be locked before
- * calling. If there's already a job running it returns
- * immediately without any error reported.
- *
- * Returns: see qemuDomainObjBeginJobInternal
- */
-int
-qemuDomainObjBeginJobNowait(virQEMUDriverPtr driver,
-                            virDomainObjPtr obj,
-                            qemuDomainJob job)
-{
-    return qemuDomainObjBeginJobInternal(driver, obj, job,
-                                         QEMU_AGENT_JOB_NONE,
-                                         QEMU_ASYNC_JOB_NONE, true);
-}
-
-/*
- * obj must be locked and have a reference before calling
- *
- * To be called after completing the work associated with the
- * earlier qemuDomainBeginJob() call
- */
-void
-qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-    qemuDomainJob job = priv->job.active;
-
-    priv->jobs_queued--;
-
-    VIR_DEBUG("Stopping job: %s (async=%s vm=%p name=%s)",
-              qemuDomainJobTypeToString(job),
-              qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
-              obj, obj->def->name);
-
-    qemuDomainObjResetJob(&priv->job);
-    if (qemuDomainTrackJob(job))
-        qemuDomainObjSaveStatus(driver, obj);
-    /* We indeed need to wake up ALL threads waiting because
-     * grabbing a job requires checking more variables. */
-    virCondBroadcast(&priv->job.cond);
-}
-
-void
-qemuDomainObjEndAgentJob(virDomainObjPtr obj)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-    qemuDomainAgentJob agentJob = priv->job.agentActive;
-
-    priv->jobs_queued--;
-
-    VIR_DEBUG("Stopping agent job: %s (async=%s vm=%p name=%s)",
-              qemuDomainAgentJobTypeToString(agentJob),
-              qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
-              obj, obj->def->name);
-
-    qemuDomainObjResetAgentJob(&priv->job);
-    /* We indeed need to wake up ALL threads waiting because
-     * grabbing a job requires checking more variables. */
-    virCondBroadcast(&priv->job.cond);
-}
-
-void
-qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-
-    priv->jobs_queued--;
-
-    VIR_DEBUG("Stopping async job: %s (vm=%p name=%s)",
-              qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
-              obj, obj->def->name);
-
-    qemuDomainObjResetAsyncJob(&priv->job);
-    qemuDomainObjSaveStatus(driver, obj);
-    virCondBroadcast(&priv->job.asyncCond);
-}
-
-void
-qemuDomainObjAbortAsyncJob(virDomainObjPtr obj)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-
-    VIR_DEBUG("Requesting abort of async job: %s (vm=%p name=%s)",
-              qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
-              obj, obj->def->name);
-
-    priv->job.abortJob = true;
-    virDomainObjBroadcast(obj);
-}
-
 /*
  * obj must be locked before calling
  *
@@ -7870,7 +6711,6 @@ qemuDomainRemoveInactiveLocked(virQEMUDriverPtr driver,
     virDomainObjListRemoveLocked(driver->domains, vm);
 }
 
-
 /**
  * qemuDomainRemoveInactiveJob:
  *
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 19e80fef2b..15ffd87cb5 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -31,6 +31,7 @@
 #include "qemu_monitor.h"
 #include "qemu_agent.h"
 #include "qemu_blockjob.h"
+#include "qemu_domainjob.h"
 #include "qemu_conf.h"
 #include "qemu_capabilities.h"
 #include "qemu_migration_params.h"
@@ -54,182 +55,15 @@
 # define QEMU_DOMAIN_MIG_BANDWIDTH_MAX (INT64_MAX / (1024 * 1024))
 #endif
 
-#define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))
-#define QEMU_JOB_DEFAULT_MASK \
-    (JOB_MASK(QEMU_JOB_QUERY) | \
-     JOB_MASK(QEMU_JOB_DESTROY) | \
-     JOB_MASK(QEMU_JOB_ABORT))
-
-/* Jobs which have to be tracked in domain state XML.
*/ -#define QEMU_DOMAIN_TRACK_JOBS \ - (JOB_MASK(QEMU_JOB_DESTROY) | \ - JOB_MASK(QEMU_JOB_ASYNC)) - -/* Only 1 job is allowed at any time - * A job includes *all* monitor commands, even those just querying - * information, not merely actions */ -typedef enum { - QEMU_JOB_NONE =3D 0, /* Always set to 0 for easy if (jobActive) condi= tions */ - QEMU_JOB_QUERY, /* Doesn't change any state */ - QEMU_JOB_DESTROY, /* Destroys the domain (cannot be masked out) = */ - QEMU_JOB_SUSPEND, /* Suspends (stops vCPUs) the domain */ - QEMU_JOB_MODIFY, /* May change state */ - QEMU_JOB_ABORT, /* Abort current async job */ - QEMU_JOB_MIGRATION_OP, /* Operation influencing outgoing migration */ - - /* The following two items must always be the last items before JOB_LA= ST */ - QEMU_JOB_ASYNC, /* Asynchronous job */ - QEMU_JOB_ASYNC_NESTED, /* Normal job within an async job */ - - QEMU_JOB_LAST -} qemuDomainJob; -VIR_ENUM_DECL(qemuDomainJob); - -typedef enum { - QEMU_AGENT_JOB_NONE =3D 0, /* No agent job. */ - QEMU_AGENT_JOB_QUERY, /* Does not change state of domain */ - QEMU_AGENT_JOB_MODIFY, /* May change state of domain */ - - QEMU_AGENT_JOB_LAST -} qemuDomainAgentJob; -VIR_ENUM_DECL(qemuDomainAgentJob); - -/* Async job consists of a series of jobs that may change state. Independe= nt - * jobs that do not change state (and possibly others if explicitly allowe= d by - * current async job) are allowed to be run even if async job is active. 
- */ -typedef enum { - QEMU_ASYNC_JOB_NONE =3D 0, - QEMU_ASYNC_JOB_MIGRATION_OUT, - QEMU_ASYNC_JOB_MIGRATION_IN, - QEMU_ASYNC_JOB_SAVE, - QEMU_ASYNC_JOB_DUMP, - QEMU_ASYNC_JOB_SNAPSHOT, - QEMU_ASYNC_JOB_START, - QEMU_ASYNC_JOB_BACKUP, - - QEMU_ASYNC_JOB_LAST -} qemuDomainAsyncJob; -VIR_ENUM_DECL(qemuDomainAsyncJob); - -typedef enum { - QEMU_DOMAIN_JOB_STATUS_NONE =3D 0, - QEMU_DOMAIN_JOB_STATUS_ACTIVE, - QEMU_DOMAIN_JOB_STATUS_MIGRATING, - QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED, - QEMU_DOMAIN_JOB_STATUS_PAUSED, - QEMU_DOMAIN_JOB_STATUS_POSTCOPY, - QEMU_DOMAIN_JOB_STATUS_COMPLETED, - QEMU_DOMAIN_JOB_STATUS_FAILED, - QEMU_DOMAIN_JOB_STATUS_CANCELED, -} qemuDomainJobStatus; - -typedef enum { - QEMU_DOMAIN_JOB_STATS_TYPE_NONE =3D 0, - QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION, - QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP, - QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP, - QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP, -} qemuDomainJobStatsType; - - -typedef struct _qemuDomainMirrorStats qemuDomainMirrorStats; -typedef qemuDomainMirrorStats *qemuDomainMirrorStatsPtr; -struct _qemuDomainMirrorStats { - unsigned long long transferred; - unsigned long long total; -}; - -typedef struct _qemuDomainBackupStats qemuDomainBackupStats; -struct _qemuDomainBackupStats { - unsigned long long transferred; - unsigned long long total; - unsigned long long tmp_used; - unsigned long long tmp_total; -}; - -typedef struct _qemuDomainJobInfo qemuDomainJobInfo; -typedef qemuDomainJobInfo *qemuDomainJobInfoPtr; -struct _qemuDomainJobInfo { - qemuDomainJobStatus status; - virDomainJobOperation operation; - unsigned long long started; /* When the async job started */ - unsigned long long stopped; /* When the domain's CPUs were stopped */ - unsigned long long sent; /* When the source sent status info to the - destination (only for migrations). */ - unsigned long long received; /* When the destination host received sta= tus - info from the source (migrations only)= . 
*/ - /* Computed values */ - unsigned long long timeElapsed; - long long timeDelta; /* delta =3D received - sent, i.e., the difference - between the source and the destination time pl= us - the time between the end of Perform phase on t= he - source and the beginning of Finish phase on the - destination. */ - bool timeDeltaSet; - /* Raw values from QEMU */ - qemuDomainJobStatsType statsType; - union { - qemuMonitorMigrationStats mig; - qemuMonitorDumpStats dump; - qemuDomainBackupStats backup; - } stats; - qemuDomainMirrorStats mirrorStats; - - char *errmsg; /* optional error message for failed completed jobs */ -}; - -void -qemuDomainJobInfoFree(qemuDomainJobInfoPtr info); - -G_DEFINE_AUTOPTR_CLEANUP_FUNC(qemuDomainJobInfo, qemuDomainJobInfoFree); - -qemuDomainJobInfoPtr -qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info); - -typedef struct _qemuDomainJobObj qemuDomainJobObj; -typedef qemuDomainJobObj *qemuDomainJobObjPtr; -struct _qemuDomainJobObj { - virCond cond; /* Use to coordinate jobs */ - - /* The following members are for QEMU_JOB_* */ - qemuDomainJob active; /* Currently running job */ - unsigned long long owner; /* Thread id which set current job= */ - const char *ownerAPI; /* The API which owns the job */ - unsigned long long started; /* When the current job started */ - - /* The following members are for QEMU_AGENT_JOB_* */ - qemuDomainAgentJob agentActive; /* Currently running agent job */ - unsigned long long agentOwner; /* Thread id which set current age= nt job */ - const char *agentOwnerAPI; /* The API which owns the agent jo= b */ - unsigned long long agentStarted; /* When the current agent job star= ted */ - - /* The following members are for QEMU_ASYNC_JOB_* */ - virCond asyncCond; /* Use to coordinate with async jo= bs */ - qemuDomainAsyncJob asyncJob; /* Currently active async job */ - unsigned long long asyncOwner; /* Thread which set current async = job */ - const char *asyncOwnerAPI; /* The API which owns the async jo= b */ - unsigned long 
long asyncStarted; /* When the current async job star= ted */ - int phase; /* Job phase (mainly for migration= s) */ - unsigned long long mask; /* Jobs allowed during async job */ - qemuDomainJobInfoPtr current; /* async job progress data */ - qemuDomainJobInfoPtr completed; /* statistics data of a recently c= ompleted job */ - bool abortJob; /* abort of the job requested */ - bool spiceMigration; /* we asked for spice migration an= d we - * should wait for it to finish */ - bool spiceMigrated; /* spice migration completed */ - char *error; /* job event completion error */ - bool dumpCompleted; /* dump completed */ - - qemuMigrationParamsPtr migParams; - unsigned long apiFlags; /* flags passed to the API which started the a= sync job */ -}; - typedef void (*qemuDomainCleanupCallback)(virQEMUDriverPtr driver, virDomainObjPtr vm); =20 #define QEMU_DOMAIN_MASTER_KEY_LEN 32 /* 32 bytes for 256 bit random key = */ =20 +void +qemuDomainObjSaveStatus(virQEMUDriverPtr driver, + virDomainObjPtr obj); + void qemuDomainSaveStatus(virDomainObjPtr obj); void qemuDomainSaveConfig(virDomainObjPtr obj); =20 @@ -660,56 +494,8 @@ virDomainObjPtr qemuDomainObjFromDomain(virDomainPtr d= omain); =20 qemuDomainSaveCookiePtr qemuDomainSaveCookieNew(virDomainObjPtr vm); =20 -const char *qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, - int phase); -int qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job, - const char *phase); - void qemuDomainEventFlush(int timer, void *opaque); =20 -void qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, - virDomainObjPtr vm); - -int qemuDomainObjBeginJob(virQEMUDriverPtr driver, - virDomainObjPtr obj, - qemuDomainJob job) - G_GNUC_WARN_UNUSED_RESULT; -int qemuDomainObjBeginAgentJob(virQEMUDriverPtr driver, - virDomainObjPtr obj, - qemuDomainAgentJob agentJob) - G_GNUC_WARN_UNUSED_RESULT; -int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver, - virDomainObjPtr obj, - qemuDomainAsyncJob asyncJob, - virDomainJobOperation operation, - 
unsigned long apiFlags) - G_GNUC_WARN_UNUSED_RESULT; -int qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver, - virDomainObjPtr obj, - qemuDomainAsyncJob asyncJob) - G_GNUC_WARN_UNUSED_RESULT; -int qemuDomainObjBeginJobNowait(virQEMUDriverPtr driver, - virDomainObjPtr obj, - qemuDomainJob job) - G_GNUC_WARN_UNUSED_RESULT; - -void qemuDomainObjEndJob(virQEMUDriverPtr driver, - virDomainObjPtr obj); -void qemuDomainObjEndAgentJob(virDomainObjPtr obj); -void qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, - virDomainObjPtr obj); -void qemuDomainObjAbortAsyncJob(virDomainObjPtr obj); -void qemuDomainObjSetJobPhase(virQEMUDriverPtr driver, - virDomainObjPtr obj, - int phase); -void qemuDomainObjSetAsyncJobMask(virDomainObjPtr obj, - unsigned long long allowedJobs); -void qemuDomainObjRestoreJob(virDomainObjPtr obj, - qemuDomainJobObjPtr job); -void qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr driver, - virDomainObjPtr obj); -void qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj); - qemuMonitorPtr qemuDomainGetMonitor(virDomainObjPtr vm) ATTRIBUTE_NONNULL(1); void qemuDomainObjEnterMonitor(virQEMUDriverPtr driver, @@ -850,19 +636,10 @@ int qemuDomainSnapshotDiscardAllMetadata(virQEMUDrive= rPtr driver, void qemuDomainRemoveInactive(virQEMUDriverPtr driver, virDomainObjPtr vm); =20 -void qemuDomainRemoveInactiveJob(virQEMUDriverPtr driver, - virDomainObjPtr vm); - -void qemuDomainRemoveInactiveJobLocked(virQEMUDriverPtr driver, - virDomainObjPtr vm); - void qemuDomainSetFakeReboot(virQEMUDriverPtr driver, virDomainObjPtr vm, bool value); =20 -bool qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, - qemuDomainJob newJob); - int qemuDomainCheckDiskStartupPolicy(virQEMUDriverPtr driver, virDomainObjPtr vm, size_t diskIndex, @@ -964,20 +741,6 @@ bool qemuDomainCheckABIStability(virQEMUDriverPtr driv= er, bool qemuDomainAgentAvailable(virDomainObjPtr vm, bool reportError); =20 -int qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo) - ATTRIBUTE_NONNULL(1); -int 
qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo) - ATTRIBUTE_NONNULL(1); -int qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo, - virDomainJobInfoPtr info) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); -int qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) - ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4); - bool qemuDomainDiskBlockJobIsActive(virDomainDiskDefPtr disk); bool qemuDomainHasBlockjob(virDomainObjPtr vm, bool copy_only) ATTRIBUTE_NONNULL(1); diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c new file mode 100644 index 0000000000..7111acadda --- /dev/null +++ b/src/qemu/qemu_domainjob.c @@ -0,0 +1,1192 @@ +/* + * qemu_domainjob.c: helper functions for QEMU domain jobs + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * . 
+ */
+
+#include <config.h>
+
+#include "qemu_domain.h"
+#include "qemu_migration.h"
+#include "qemu_domainjob.h"
+#include "viralloc.h"
+#include "virlog.h"
+#include "virerror.h"
+#include "virtime.h"
+#include "virthreadjob.h"
+
+#define VIR_FROM_THIS VIR_FROM_QEMU
+
+VIR_LOG_INIT("qemu.qemu_domainjob");
+
+VIR_ENUM_IMPL(qemuDomainJob,
+              QEMU_JOB_LAST,
+              "none",
+              "query",
+              "destroy",
+              "suspend",
+              "modify",
+              "abort",
+              "migration operation",
+              "none",   /* async job is never stored in job.active */
+              "async nested",
+);
+
+VIR_ENUM_IMPL(qemuDomainAgentJob,
+              QEMU_AGENT_JOB_LAST,
+              "none",
+              "query",
+              "modify",
+);
+
+VIR_ENUM_IMPL(qemuDomainAsyncJob,
+              QEMU_ASYNC_JOB_LAST,
+              "none",
+              "migration out",
+              "migration in",
+              "save",
+              "dump",
+              "snapshot",
+              "start",
+              "backup",
+);
+
+VIR_ENUM_IMPL(qemuDomainNamespace,
+              QEMU_DOMAIN_NS_LAST,
+              "mount",
+);
+
+
+const char *
+qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job,
+                                int phase G_GNUC_UNUSED)
+{
+    switch (job) {
+    case QEMU_ASYNC_JOB_MIGRATION_OUT:
+    case QEMU_ASYNC_JOB_MIGRATION_IN:
+        return qemuMigrationJobPhaseTypeToString(phase);
+
+    case QEMU_ASYNC_JOB_SAVE:
+    case QEMU_ASYNC_JOB_DUMP:
+    case QEMU_ASYNC_JOB_SNAPSHOT:
+    case QEMU_ASYNC_JOB_START:
+    case QEMU_ASYNC_JOB_NONE:
+    case QEMU_ASYNC_JOB_BACKUP:
+        G_GNUC_FALLTHROUGH;
+    case QEMU_ASYNC_JOB_LAST:
+        break;
+    }
+
+    return "none";
+}
+
+int
+qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job,
+                                  const char *phase)
+{
+    if (!phase)
+        return 0;
+
+    switch (job) {
+    case QEMU_ASYNC_JOB_MIGRATION_OUT:
+    case QEMU_ASYNC_JOB_MIGRATION_IN:
+        return qemuMigrationJobPhaseTypeFromString(phase);
+
+    case QEMU_ASYNC_JOB_SAVE:
+    case QEMU_ASYNC_JOB_DUMP:
+    case QEMU_ASYNC_JOB_SNAPSHOT:
+    case QEMU_ASYNC_JOB_START:
+    case QEMU_ASYNC_JOB_NONE:
+    case QEMU_ASYNC_JOB_BACKUP:
+        G_GNUC_FALLTHROUGH;
+    case QEMU_ASYNC_JOB_LAST:
+        break;
+    }
+
+    if (STREQ(phase, "none"))
+        return 0;
+    else
+        return -1;
+}
+
+
+void
+qemuDomainJobInfoFree(qemuDomainJobInfoPtr info)
+{
+
g_free(info->errmsg); + g_free(info); +} + + +qemuDomainJobInfoPtr +qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info) +{ + qemuDomainJobInfoPtr ret =3D g_new0(qemuDomainJobInfo, 1); + + memcpy(ret, info, sizeof(*info)); + + ret->errmsg =3D g_strdup(info->errmsg); + + return ret; +} + +void +qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, + virDomainObjPtr vm) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + virObjectEventPtr event; + virTypedParameterPtr params =3D NULL; + int nparams =3D 0; + int type; + + if (!priv->job.completed) + return; + + if (qemuDomainJobInfoToParams(priv->job.completed, &type, + ¶ms, &nparams) < 0) { + VIR_WARN("Could not get stats for completed job; domain %s", + vm->def->name); + } + + event =3D virDomainEventJobCompletedNewFromObj(vm, params, nparams); + virObjectEventStateQueue(driver->domainEventState, event); +} + + +int +qemuDomainObjInitJob(qemuDomainJobObjPtr job) +{ + memset(job, 0, sizeof(*job)); + + if (virCondInit(&job->cond) < 0) + return -1; + + if (virCondInit(&job->asyncCond) < 0) { + virCondDestroy(&job->cond); + return -1; + } + + return 0; +} + + +static void +qemuDomainObjResetJob(qemuDomainJobObjPtr job) +{ + job->active =3D QEMU_JOB_NONE; + job->owner =3D 0; + job->ownerAPI =3D NULL; + job->started =3D 0; +} + + +static void +qemuDomainObjResetAgentJob(qemuDomainJobObjPtr job) +{ + job->agentActive =3D QEMU_AGENT_JOB_NONE; + job->agentOwner =3D 0; + job->agentOwnerAPI =3D NULL; + job->agentStarted =3D 0; +} + + +static void +qemuDomainObjResetAsyncJob(qemuDomainJobObjPtr job) +{ + job->asyncJob =3D QEMU_ASYNC_JOB_NONE; + job->asyncOwner =3D 0; + job->asyncOwnerAPI =3D NULL; + job->asyncStarted =3D 0; + job->phase =3D 0; + job->mask =3D QEMU_JOB_DEFAULT_MASK; + job->abortJob =3D false; + job->spiceMigration =3D false; + job->spiceMigrated =3D false; + job->dumpCompleted =3D false; + VIR_FREE(job->error); + g_clear_pointer(&job->current, qemuDomainJobInfoFree); + 
qemuMigrationParamsFree(job->migParams); + job->migParams =3D NULL; + job->apiFlags =3D 0; +} + +void +qemuDomainObjRestoreJob(virDomainObjPtr obj, + qemuDomainJobObjPtr job) +{ + qemuDomainObjPrivatePtr priv =3D obj->privateData; + + memset(job, 0, sizeof(*job)); + job->active =3D priv->job.active; + job->owner =3D priv->job.owner; + job->asyncJob =3D priv->job.asyncJob; + job->asyncOwner =3D priv->job.asyncOwner; + job->phase =3D priv->job.phase; + job->migParams =3D g_steal_pointer(&priv->job.migParams); + job->apiFlags =3D priv->job.apiFlags; + + qemuDomainObjResetJob(&priv->job); + qemuDomainObjResetAsyncJob(&priv->job); +} + +void +qemuDomainObjFreeJob(qemuDomainJobObjPtr job) +{ + qemuDomainObjResetJob(job); + qemuDomainObjResetAsyncJob(job); + g_clear_pointer(&job->current, qemuDomainJobInfoFree); + g_clear_pointer(&job->completed, qemuDomainJobInfoFree); + virCondDestroy(&job->cond); + virCondDestroy(&job->asyncCond); +} + +bool +qemuDomainTrackJob(qemuDomainJob job) +{ + return (QEMU_DOMAIN_TRACK_JOBS & JOB_MASK(job)) !=3D 0; +} + + +int +qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo) +{ + unsigned long long now; + + if (!jobInfo->started) + return 0; + + if (virTimeMillisNow(&now) < 0) + return -1; + + if (now < jobInfo->started) { + VIR_WARN("Async job starts in the future"); + jobInfo->started =3D 0; + return 0; + } + + jobInfo->timeElapsed =3D now - jobInfo->started; + return 0; +} + +int +qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo) +{ + unsigned long long now; + + if (!jobInfo->stopped) + return 0; + + if (virTimeMillisNow(&now) < 0) + return -1; + + if (now < jobInfo->stopped) { + VIR_WARN("Guest's CPUs stopped in the future"); + jobInfo->stopped =3D 0; + return 0; + } + + jobInfo->stats.mig.downtime =3D now - jobInfo->stopped; + jobInfo->stats.mig.downtime_set =3D true; + return 0; +} + +static virDomainJobType +qemuDomainJobStatusToType(qemuDomainJobStatus status) +{ + switch (status) { + case 
QEMU_DOMAIN_JOB_STATUS_NONE: + break; + + case QEMU_DOMAIN_JOB_STATUS_ACTIVE: + case QEMU_DOMAIN_JOB_STATUS_MIGRATING: + case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED: + case QEMU_DOMAIN_JOB_STATUS_POSTCOPY: + case QEMU_DOMAIN_JOB_STATUS_PAUSED: + return VIR_DOMAIN_JOB_UNBOUNDED; + + case QEMU_DOMAIN_JOB_STATUS_COMPLETED: + return VIR_DOMAIN_JOB_COMPLETED; + + case QEMU_DOMAIN_JOB_STATUS_FAILED: + return VIR_DOMAIN_JOB_FAILED; + + case QEMU_DOMAIN_JOB_STATUS_CANCELED: + return VIR_DOMAIN_JOB_CANCELLED; + } + + return VIR_DOMAIN_JOB_NONE; +} + +int +qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo, + virDomainJobInfoPtr info) +{ + info->type =3D qemuDomainJobStatusToType(jobInfo->status); + info->timeElapsed =3D jobInfo->timeElapsed; + + switch (jobInfo->statsType) { + case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: + info->memTotal =3D jobInfo->stats.mig.ram_total; + info->memRemaining =3D jobInfo->stats.mig.ram_remaining; + info->memProcessed =3D jobInfo->stats.mig.ram_transferred; + info->fileTotal =3D jobInfo->stats.mig.disk_total + + jobInfo->mirrorStats.total; + info->fileRemaining =3D jobInfo->stats.mig.disk_remaining + + (jobInfo->mirrorStats.total - + jobInfo->mirrorStats.transferred); + info->fileProcessed =3D jobInfo->stats.mig.disk_transferred + + jobInfo->mirrorStats.transferred; + break; + + case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: + info->memTotal =3D jobInfo->stats.mig.ram_total; + info->memRemaining =3D jobInfo->stats.mig.ram_remaining; + info->memProcessed =3D jobInfo->stats.mig.ram_transferred; + break; + + case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: + info->memTotal =3D jobInfo->stats.dump.total; + info->memProcessed =3D jobInfo->stats.dump.completed; + info->memRemaining =3D info->memTotal - info->memProcessed; + break; + + case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: + info->fileTotal =3D jobInfo->stats.backup.total; + info->fileProcessed =3D jobInfo->stats.backup.transferred; + info->fileRemaining =3D info->fileTotal - info->fileProcessed; + break; + + 
case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: + break; + } + + info->dataTotal =3D info->memTotal + info->fileTotal; + info->dataRemaining =3D info->memRemaining + info->fileRemaining; + info->dataProcessed =3D info->memProcessed + info->fileProcessed; + + return 0; +} + + +static int +qemuDomainMigrationJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) +{ + qemuMonitorMigrationStats *stats =3D &jobInfo->stats.mig; + qemuDomainMirrorStatsPtr mirrorStats =3D &jobInfo->mirrorStats; + virTypedParameterPtr par =3D NULL; + int maxpar =3D 0; + int npar =3D 0; + unsigned long long mirrorRemaining =3D mirrorStats->total - + mirrorStats->transferred; + + if (virTypedParamsAddInt(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_OPERATION, + jobInfo->operation) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_TIME_ELAPSED, + jobInfo->timeElapsed) < 0) + goto error; + + if (jobInfo->timeDeltaSet && + jobInfo->timeElapsed > jobInfo->timeDelta && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_TIME_ELAPSED_NET, + jobInfo->timeElapsed - jobInfo->timeDelta)= < 0) + goto error; + + if (stats->downtime_set && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DOWNTIME, + stats->downtime) < 0) + goto error; + + if (stats->downtime_set && + jobInfo->timeDeltaSet && + stats->downtime > jobInfo->timeDelta && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DOWNTIME_NET, + stats->downtime - jobInfo->timeDelta) < 0) + goto error; + + if (stats->setup_time_set && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_SETUP_TIME, + stats->setup_time) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DATA_TOTAL, + stats->ram_total + + stats->disk_total + + mirrorStats->total) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DATA_PROCESSED, + stats->ram_transferred + + stats->disk_transferred + + 
mirrorStats->transferred) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DATA_REMAINING, + stats->ram_remaining + + stats->disk_remaining + + mirrorRemaining) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_TOTAL, + stats->ram_total) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_PROCESSED, + stats->ram_transferred) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_REMAINING, + stats->ram_remaining) < 0) + goto error; + + if (stats->ram_bps && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_BPS, + stats->ram_bps) < 0) + goto error; + + if (stats->ram_duplicate_set) { + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_CONSTANT, + stats->ram_duplicate) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_NORMAL, + stats->ram_normal) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_NORMAL_BYTES, + stats->ram_normal_bytes) < 0) + goto error; + } + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE, + stats->ram_dirty_rate) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_ITERATION, + stats->ram_iteration) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_POSTCOPY_REQS, + stats->ram_postcopy_reqs) < 0) + goto error; + + if (stats->ram_page_size > 0 && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE, + stats->ram_page_size) < 0) + goto error; + + /* The remaining stats are disk, mirror, or migration specific + * so if this is a SAVEDUMP, we can just skip them */ + if (jobInfo->statsType =3D=3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP) + goto done; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DISK_TOTAL, + stats->disk_total + + mirrorStats->total) < 0 || + virTypedParamsAddULLong(&par, 
&npar, &maxpar, + VIR_DOMAIN_JOB_DISK_PROCESSED, + stats->disk_transferred + + mirrorStats->transferred) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DISK_REMAINING, + stats->disk_remaining + + mirrorRemaining) < 0) + goto error; + + if (stats->disk_bps && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DISK_BPS, + stats->disk_bps) < 0) + goto error; + + if (stats->xbzrle_set) { + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_CACHE, + stats->xbzrle_cache_size) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_BYTES, + stats->xbzrle_bytes) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_PAGES, + stats->xbzrle_pages) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_CACHE_MISSE= S, + stats->xbzrle_cache_miss) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_OVERFLOW, + stats->xbzrle_overflow) < 0) + goto error; + } + + if (stats->cpu_throttle_percentage && + virTypedParamsAddInt(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_AUTO_CONVERGE_THROTTLE, + stats->cpu_throttle_percentage) < 0) + goto error; + + done: + *type =3D qemuDomainJobStatusToType(jobInfo->status); + *params =3D par; + *nparams =3D npar; + return 0; + + error: + virTypedParamsFree(par, npar); + return -1; +} + + +static int +qemuDomainDumpJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) +{ + qemuMonitorDumpStats *stats =3D &jobInfo->stats.dump; + virTypedParameterPtr par =3D NULL; + int maxpar =3D 0; + int npar =3D 0; + + if (virTypedParamsAddInt(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_OPERATION, + jobInfo->operation) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_TIME_ELAPSED, + jobInfo->timeElapsed) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + 
VIR_DOMAIN_JOB_MEMORY_TOTAL, + stats->total) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_PROCESSED, + stats->completed) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_REMAINING, + stats->total - stats->completed) < 0) + goto error; + + *type =3D qemuDomainJobStatusToType(jobInfo->status); + *params =3D par; + *nparams =3D npar; + return 0; + + error: + virTypedParamsFree(par, npar); + return -1; +} + + +static int +qemuDomainBackupJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) +{ + qemuDomainBackupStats *stats =3D &jobInfo->stats.backup; + g_autoptr(virTypedParamList) par =3D g_new0(virTypedParamList, 1); + + if (virTypedParamListAddInt(par, jobInfo->operation, + VIR_DOMAIN_JOB_OPERATION) < 0) + return -1; + + if (virTypedParamListAddULLong(par, jobInfo->timeElapsed, + VIR_DOMAIN_JOB_TIME_ELAPSED) < 0) + return -1; + + if (stats->transferred > 0 || stats->total > 0) { + if (virTypedParamListAddULLong(par, stats->total, + VIR_DOMAIN_JOB_DISK_TOTAL) < 0) + return -1; + + if (virTypedParamListAddULLong(par, stats->transferred, + VIR_DOMAIN_JOB_DISK_PROCESSED) < 0) + return -1; + + if (virTypedParamListAddULLong(par, stats->total - stats->transfer= red, + VIR_DOMAIN_JOB_DISK_REMAINING) < 0) + return -1; + } + + if (stats->tmp_used > 0 || stats->tmp_total > 0) { + if (virTypedParamListAddULLong(par, stats->tmp_used, + VIR_DOMAIN_JOB_DISK_TEMP_USED) < 0) + return -1; + + if (virTypedParamListAddULLong(par, stats->tmp_total, + VIR_DOMAIN_JOB_DISK_TEMP_TOTAL) < 0) + return -1; + } + + if (jobInfo->status !=3D QEMU_DOMAIN_JOB_STATUS_ACTIVE && + virTypedParamListAddBoolean(par, + jobInfo->status =3D=3D QEMU_DOMAIN_JOB= _STATUS_COMPLETED, + VIR_DOMAIN_JOB_SUCCESS) < 0) + return -1; + + if (jobInfo->errmsg && + virTypedParamListAddString(par, jobInfo->errmsg, VIR_DOMAIN_JOB_ER= RMSG) < 0) + return -1; + + *nparams =3D virTypedParamListStealParams(par, 
params); + *type =3D qemuDomainJobStatusToType(jobInfo->status); + return 0; +} + + +int +qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) +{ + switch (jobInfo->statsType) { + case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: + case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: + return qemuDomainMigrationJobInfoToParams(jobInfo, type, params, n= params); + + case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: + return qemuDomainDumpJobInfoToParams(jobInfo, type, params, nparam= s); + + case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: + return qemuDomainBackupJobInfoToParams(jobInfo, type, params, npar= ams); + + case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("invalid job statistics type")); + break; + + default: + virReportEnumRangeError(qemuDomainJobStatsType, jobInfo->statsType= ); + break; + } + + return -1; +} + + +void +qemuDomainObjSetJobPhase(virQEMUDriverPtr driver, + virDomainObjPtr obj, + int phase) +{ + qemuDomainObjPrivatePtr priv =3D obj->privateData; + unsigned long long me =3D virThreadSelfID(); + + if (!priv->job.asyncJob) + return; + + VIR_DEBUG("Setting '%s' phase to '%s'", + qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, phase)); + + if (priv->job.asyncOwner && me !=3D priv->job.asyncOwner) { + VIR_WARN("'%s' async job is owned by thread %llu", + qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + priv->job.asyncOwner); + } + + priv->job.phase =3D phase; + priv->job.asyncOwner =3D me; + qemuDomainObjSaveStatus(driver, obj); +} + +void +qemuDomainObjSetAsyncJobMask(virDomainObjPtr obj, + unsigned long long allowedJobs) +{ + qemuDomainObjPrivatePtr priv =3D obj->privateData; + + if (!priv->job.asyncJob) + return; + + priv->job.mask =3D allowedJobs | JOB_MASK(QEMU_JOB_DESTROY); +} + +void +qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj) +{ + qemuDomainObjPrivatePtr priv =3D 
obj->privateData; + + if (priv->job.active =3D=3D QEMU_JOB_ASYNC_NESTED) + qemuDomainObjResetJob(&priv->job); + qemuDomainObjResetAsyncJob(&priv->job); + qemuDomainObjSaveStatus(driver, obj); +} + +void +qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj) +{ + qemuDomainObjPrivatePtr priv =3D obj->privateData; + + VIR_DEBUG("Releasing ownership of '%s' async job", + qemuDomainAsyncJobTypeToString(priv->job.asyncJob)); + + if (priv->job.asyncOwner !=3D virThreadSelfID()) { + VIR_WARN("'%s' async job is owned by thread %llu", + qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + priv->job.asyncOwner); + } + priv->job.asyncOwner =3D 0; +} + +static bool +qemuDomainNestedJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob) +{ + return !jobs->asyncJob || + newJob =3D=3D QEMU_JOB_NONE || + (jobs->mask & JOB_MASK(newJob)) !=3D 0; +} + +bool +qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob) +{ + return !jobs->active && qemuDomainNestedJobAllowed(jobs, newJob); +} + +static bool +qemuDomainObjCanSetJob(qemuDomainJobObjPtr job, + qemuDomainJob newJob, + qemuDomainAgentJob newAgentJob) +{ + return ((newJob =3D=3D QEMU_JOB_NONE || + job->active =3D=3D QEMU_JOB_NONE) && + (newAgentJob =3D=3D QEMU_AGENT_JOB_NONE || + job->agentActive =3D=3D QEMU_AGENT_JOB_NONE)); +} + +/* Give up waiting for mutex after 30 seconds */ +#define QEMU_JOB_WAIT_TIME (1000ull * 30) + +/** + * qemuDomainObjBeginJobInternal: + * @driver: qemu driver + * @obj: domain object + * @job: qemuDomainJob to start + * @asyncJob: qemuDomainAsyncJob to start + * @nowait: don't wait trying to acquire @job + * + * Acquires job for a domain object which must be locked before + * calling. If there's already a job running waits up to + * QEMU_JOB_WAIT_TIME after which the functions fails reporting + * an error unless @nowait is set. + * + * If @nowait is true this function tries to acquire job and if + * it fails, then it returns immediately without waiting. 
No + * error is reported in this case. + * + * Returns: 0 on success, + * -2 if unable to start job because of timeout or + * maxQueuedJobs limit, + * -1 otherwise. + */ +static int ATTRIBUTE_NONNULL(1) +qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, + virDomainObjPtr obj, + qemuDomainJob job, + qemuDomainAgentJob agentJob, + qemuDomainAsyncJob asyncJob, + bool nowait) +{ + qemuDomainObjPrivatePtr priv =3D obj->privateData; + unsigned long long now; + unsigned long long then; + bool nested =3D job =3D=3D QEMU_JOB_ASYNC_NESTED; + bool async =3D job =3D=3D QEMU_JOB_ASYNC; + g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); + const char *blocker =3D NULL; + const char *agentBlocker =3D NULL; + int ret =3D -1; + unsigned long long duration =3D 0; + unsigned long long agentDuration =3D 0; + unsigned long long asyncDuration =3D 0; + + VIR_DEBUG("Starting job: job=3D%s agentJob=3D%s asyncJob=3D%s " + "(vm=3D%p name=3D%s, current job=3D%s agentJob=3D%s async=3D= %s)", + qemuDomainJobTypeToString(job), + qemuDomainAgentJobTypeToString(agentJob), + qemuDomainAsyncJobTypeToString(asyncJob), + obj, obj->def->name, + qemuDomainJobTypeToString(priv->job.active), + qemuDomainAgentJobTypeToString(priv->job.agentActive), + qemuDomainAsyncJobTypeToString(priv->job.asyncJob)); + + if (virTimeMillisNow(&now) < 0) + return -1; + + priv->jobs_queued++; + then =3D now + QEMU_JOB_WAIT_TIME; + + retry: + if ((!async && job !=3D QEMU_JOB_DESTROY) && + cfg->maxQueuedJobs && + priv->jobs_queued > cfg->maxQueuedJobs) { + goto error; + } + + while (!nested && !qemuDomainNestedJobAllowed(&priv->job, job)) { + if (nowait) + goto cleanup; + + VIR_DEBUG("Waiting for async job (vm=3D%p name=3D%s)", obj, obj->d= ef->name); + if (virCondWaitUntil(&priv->job.asyncCond, &obj->parent.lock, then= ) < 0) + goto error; + } + + while (!qemuDomainObjCanSetJob(&priv->job, job, agentJob)) { + if (nowait) + goto cleanup; + + VIR_DEBUG("Waiting for job (vm=3D%p name=3D%s)", obj, 
obj->def->na= me); + if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0) + goto error; + } + + /* No job is active but a new async job could have been started while = obj + * was unlocked, so we need to recheck it. */ + if (!nested && !qemuDomainNestedJobAllowed(&priv->job, job)) + goto retry; + + ignore_value(virTimeMillisNow(&now)); + + if (job) { + qemuDomainObjResetJob(&priv->job); + + if (job !=3D QEMU_JOB_ASYNC) { + VIR_DEBUG("Started job: %s (async=3D%s vm=3D%p name=3D%s)", + qemuDomainJobTypeToString(job), + qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + obj, obj->def->name); + priv->job.active =3D job; + priv->job.owner =3D virThreadSelfID(); + priv->job.ownerAPI =3D virThreadJobGet(); + priv->job.started =3D now; + } else { + VIR_DEBUG("Started async job: %s (vm=3D%p name=3D%s)", + qemuDomainAsyncJobTypeToString(asyncJob), + obj, obj->def->name); + qemuDomainObjResetAsyncJob(&priv->job); + priv->job.current =3D g_new0(qemuDomainJobInfo, 1); + priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + priv->job.asyncJob =3D asyncJob; + priv->job.asyncOwner =3D virThreadSelfID(); + priv->job.asyncOwnerAPI =3D virThreadJobGet(); + priv->job.asyncStarted =3D now; + priv->job.current->started =3D now; + } + } + + if (agentJob) { + qemuDomainObjResetAgentJob(&priv->job); + + VIR_DEBUG("Started agent job: %s (vm=3D%p name=3D%s job=3D%s async= =3D%s)", + qemuDomainAgentJobTypeToString(agentJob), + obj, obj->def->name, + qemuDomainJobTypeToString(priv->job.active), + qemuDomainAsyncJobTypeToString(priv->job.asyncJob)); + priv->job.agentActive =3D agentJob; + priv->job.agentOwner =3D virThreadSelfID(); + priv->job.agentOwnerAPI =3D virThreadJobGet(); + priv->job.agentStarted =3D now; + } + + if (qemuDomainTrackJob(job)) + qemuDomainObjSaveStatus(driver, obj); + + return 0; + + error: + ignore_value(virTimeMillisNow(&now)); + if (priv->job.active && priv->job.started) + duration =3D now - priv->job.started; + if (priv->job.agentActive && 
priv->job.agentStarted) + agentDuration =3D now - priv->job.agentStarted; + if (priv->job.asyncJob && priv->job.asyncStarted) + asyncDuration =3D now - priv->job.asyncStarted; + + VIR_WARN("Cannot start job (%s, %s, %s) for domain %s; " + "current job is (%s, %s, %s) " + "owned by (%llu %s, %llu %s, %llu %s (flags=3D0x%lx)) " + "for (%llus, %llus, %llus)", + qemuDomainJobTypeToString(job), + qemuDomainAgentJobTypeToString(agentJob), + qemuDomainAsyncJobTypeToString(asyncJob), + obj->def->name, + qemuDomainJobTypeToString(priv->job.active), + qemuDomainAgentJobTypeToString(priv->job.agentActive), + qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + priv->job.owner, NULLSTR(priv->job.ownerAPI), + priv->job.agentOwner, NULLSTR(priv->job.agentOwnerAPI), + priv->job.asyncOwner, NULLSTR(priv->job.asyncOwnerAPI), + priv->job.apiFlags, + duration / 1000, agentDuration / 1000, asyncDuration / 1000); + + if (job) { + if (nested || qemuDomainNestedJobAllowed(&priv->job, job)) + blocker =3D priv->job.ownerAPI; + else + blocker =3D priv->job.asyncOwnerAPI; + } + + if (agentJob) + agentBlocker =3D priv->job.agentOwnerAPI; + + if (errno =3D=3D ETIMEDOUT) { + if (blocker && agentBlocker) { + virReportError(VIR_ERR_OPERATION_TIMEOUT, + _("cannot acquire state change " + "lock (held by monitor=3D%s agent=3D%s)"), + blocker, agentBlocker); + } else if (blocker) { + virReportError(VIR_ERR_OPERATION_TIMEOUT, + _("cannot acquire state change " + "lock (held by monitor=3D%s)"), + blocker); + } else if (agentBlocker) { + virReportError(VIR_ERR_OPERATION_TIMEOUT, + _("cannot acquire state change " + "lock (held by agent=3D%s)"), + agentBlocker); + } else { + virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s", + _("cannot acquire state change lock")); + } + ret =3D -2; + } else if (cfg->maxQueuedJobs && + priv->jobs_queued > cfg->maxQueuedJobs) { + if (blocker && agentBlocker) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("cannot acquire state change " + "lock (held by monitor=3D%s 
agent=3D%s) " + "due to max_queued limit"), + blocker, agentBlocker); + } else if (blocker) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("cannot acquire state change " + "lock (held by monitor=3D%s) " + "due to max_queued limit"), + blocker); + } else if (agentBlocker) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("cannot acquire state change " + "lock (held by agent=3D%s) " + "due to max_queued limit"), + agentBlocker); + } else { + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("cannot acquire state change lock " + "due to max_queued limit")); + } + ret =3D -2; + } else { + virReportSystemError(errno, "%s", _("cannot acquire job mutex")); + } + + cleanup: + priv->jobs_queued--; + return ret; +} + +/* + * obj must be locked before calling + * + * This must be called by anything that will change the VM state + * in any way, or anything that will use the QEMU monitor. + * + * Successful calls must be followed by EndJob eventually + */ +int qemuDomainObjBeginJob(virQEMUDriverPtr driver, + virDomainObjPtr obj, + qemuDomainJob job) +{ + if (qemuDomainObjBeginJobInternal(driver, obj, job, + QEMU_AGENT_JOB_NONE, + QEMU_ASYNC_JOB_NONE, false) < 0) + return -1; + else + return 0; +} + +/** + * qemuDomainObjBeginAgentJob: + * + * Grabs agent type of job. Use if caller talks to guest agent only. + * + * To end job call qemuDomainObjEndAgentJob. 
+ */ +int +qemuDomainObjBeginAgentJob(virQEMUDriverPtr driver, + virDomainObjPtr obj, + qemuDomainAgentJob agentJob) +{ + return qemuDomainObjBeginJobInternal(driver, obj, QEMU_JOB_NONE, + agentJob, + QEMU_ASYNC_JOB_NONE, false); +} + +int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver, + virDomainObjPtr obj, + qemuDomainAsyncJob asyncJob, + virDomainJobOperation operation, + unsigned long apiFlags) +{ + qemuDomainObjPrivatePtr priv; + + if (qemuDomainObjBeginJobInternal(driver, obj, QEMU_JOB_ASYNC, + QEMU_AGENT_JOB_NONE, + asyncJob, false) < 0) + return -1; + + priv =3D obj->privateData; + priv->job.current->operation =3D operation; + priv->job.apiFlags =3D apiFlags; + return 0; +} + +int +qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver, + virDomainObjPtr obj, + qemuDomainAsyncJob asyncJob) +{ + qemuDomainObjPrivatePtr priv =3D obj->privateData; + + if (asyncJob !=3D priv->job.asyncJob) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("unexpected async job %d type expected %d"), + asyncJob, priv->job.asyncJob); + return -1; + } + + if (priv->job.asyncOwner !=3D virThreadSelfID()) { + VIR_WARN("This thread doesn't seem to be the async job owner: %llu= ", + priv->job.asyncOwner); + } + + return qemuDomainObjBeginJobInternal(driver, obj, + QEMU_JOB_ASYNC_NESTED, + QEMU_AGENT_JOB_NONE, + QEMU_ASYNC_JOB_NONE, + false); +} + +/** + * qemuDomainObjBeginJobNowait: + * + * @driver: qemu driver + * @obj: domain object + * @job: qemuDomainJob to start + * + * Acquires job for a domain object which must be locked before + * calling. If there's already a job running it returns + * immediately without any error reported. 
+ * + * Returns: see qemuDomainObjBeginJobInternal + */ +int +qemuDomainObjBeginJobNowait(virQEMUDriverPtr driver, + virDomainObjPtr obj, + qemuDomainJob job) +{ + return qemuDomainObjBeginJobInternal(driver, obj, job, + QEMU_AGENT_JOB_NONE, + QEMU_ASYNC_JOB_NONE, true); +} + +/* + * obj must be locked and have a reference before calling + * + * To be called after completing the work associated with the + * earlier qemuDomainBeginJob() call + */ +void +qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj) +{ + qemuDomainObjPrivatePtr priv =3D obj->privateData; + qemuDomainJob job =3D priv->job.active; + + priv->jobs_queued--; + + VIR_DEBUG("Stopping job: %s (async=3D%s vm=3D%p name=3D%s)", + qemuDomainJobTypeToString(job), + qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + obj, obj->def->name); + + qemuDomainObjResetJob(&priv->job); + if (qemuDomainTrackJob(job)) + qemuDomainObjSaveStatus(driver, obj); + /* We indeed need to wake up ALL threads waiting because + * grabbing a job requires checking more variables. */ + virCondBroadcast(&priv->job.cond); +} + +void +qemuDomainObjEndAgentJob(virDomainObjPtr obj) +{ + qemuDomainObjPrivatePtr priv =3D obj->privateData; + qemuDomainAgentJob agentJob =3D priv->job.agentActive; + + priv->jobs_queued--; + + VIR_DEBUG("Stopping agent job: %s (async=3D%s vm=3D%p name=3D%s)", + qemuDomainAgentJobTypeToString(agentJob), + qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + obj, obj->def->name); + + qemuDomainObjResetAgentJob(&priv->job); + /* We indeed need to wake up ALL threads waiting because + * grabbing a job requires checking more variables. 
+     */
+    virCondBroadcast(&priv->job.cond);
+}
+
+void
+qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
+{
+    qemuDomainObjPrivatePtr priv = obj->privateData;
+
+    priv->jobs_queued--;
+
+    VIR_DEBUG("Stopping async job: %s (vm=%p name=%s)",
+              qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
+              obj, obj->def->name);
+
+    qemuDomainObjResetAsyncJob(&priv->job);
+    qemuDomainObjSaveStatus(driver, obj);
+    virCondBroadcast(&priv->job.asyncCond);
+}
+
+void
+qemuDomainObjAbortAsyncJob(virDomainObjPtr obj)
+{
+    qemuDomainObjPrivatePtr priv = obj->privateData;
+
+    VIR_DEBUG("Requesting abort of async job: %s (vm=%p name=%s)",
+              qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
+              obj, obj->def->name);
+
+    priv->job.abortJob = true;
+    virDomainObjBroadcast(obj);
+}
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
new file mode 100644
index 0000000000..124664354d
--- /dev/null
+++ b/src/qemu/qemu_domainjob.h
@@ -0,0 +1,269 @@
+/*
+ * qemu_domainjob.h: helper functions for QEMU domain jobs
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library.  If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#pragma once
+
+#include <glib-object.h>
+
+#include "qemu_migration_params.h"
+
+#define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))
+#define QEMU_JOB_DEFAULT_MASK \
+    (JOB_MASK(QEMU_JOB_QUERY) | \
+     JOB_MASK(QEMU_JOB_DESTROY) | \
+     JOB_MASK(QEMU_JOB_ABORT))
+
+/* Jobs which have to be tracked in domain state XML. */
+#define QEMU_DOMAIN_TRACK_JOBS \
+    (JOB_MASK(QEMU_JOB_DESTROY) | \
+     JOB_MASK(QEMU_JOB_ASYNC))
+
+/* Only 1 job is allowed at any time
+ * A job includes *all* monitor commands, even those just querying
+ * information, not merely actions */
+typedef enum {
+    QEMU_JOB_NONE = 0,      /* Always set to 0 for easy if (jobActive) conditions */
+    QEMU_JOB_QUERY,         /* Doesn't change any state */
+    QEMU_JOB_DESTROY,       /* Destroys the domain (cannot be masked out) */
+    QEMU_JOB_SUSPEND,       /* Suspends (stops vCPUs) the domain */
+    QEMU_JOB_MODIFY,        /* May change state */
+    QEMU_JOB_ABORT,         /* Abort current async job */
+    QEMU_JOB_MIGRATION_OP,  /* Operation influencing outgoing migration */
+
+    /* The following two items must always be the last items before JOB_LAST */
+    QEMU_JOB_ASYNC,         /* Asynchronous job */
+    QEMU_JOB_ASYNC_NESTED,  /* Normal job within an async job */
+
+    QEMU_JOB_LAST
+} qemuDomainJob;
+VIR_ENUM_DECL(qemuDomainJob);
+
+typedef enum {
+    QEMU_AGENT_JOB_NONE = 0,    /* No agent job. */
+    QEMU_AGENT_JOB_QUERY,       /* Does not change state of domain */
+    QEMU_AGENT_JOB_MODIFY,      /* May change state of domain */
+
+    QEMU_AGENT_JOB_LAST
+} qemuDomainAgentJob;
+VIR_ENUM_DECL(qemuDomainAgentJob);
+
+/* Async job consists of a series of jobs that may change state. Independent
+ * jobs that do not change state (and possibly others if explicitly allowed by
+ * current async job) are allowed to be run even if async job is active.
+ */
+typedef enum {
+    QEMU_ASYNC_JOB_NONE = 0,
+    QEMU_ASYNC_JOB_MIGRATION_OUT,
+    QEMU_ASYNC_JOB_MIGRATION_IN,
+    QEMU_ASYNC_JOB_SAVE,
+    QEMU_ASYNC_JOB_DUMP,
+    QEMU_ASYNC_JOB_SNAPSHOT,
+    QEMU_ASYNC_JOB_START,
+    QEMU_ASYNC_JOB_BACKUP,
+
+    QEMU_ASYNC_JOB_LAST
+} qemuDomainAsyncJob;
+VIR_ENUM_DECL(qemuDomainAsyncJob);
+
+typedef enum {
+    QEMU_DOMAIN_JOB_STATUS_NONE = 0,
+    QEMU_DOMAIN_JOB_STATUS_ACTIVE,
+    QEMU_DOMAIN_JOB_STATUS_MIGRATING,
+    QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED,
+    QEMU_DOMAIN_JOB_STATUS_PAUSED,
+    QEMU_DOMAIN_JOB_STATUS_POSTCOPY,
+    QEMU_DOMAIN_JOB_STATUS_COMPLETED,
+    QEMU_DOMAIN_JOB_STATUS_FAILED,
+    QEMU_DOMAIN_JOB_STATUS_CANCELED,
+} qemuDomainJobStatus;
+
+typedef enum {
+    QEMU_DOMAIN_JOB_STATS_TYPE_NONE = 0,
+    QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION,
+    QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP,
+    QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP,
+    QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP,
+} qemuDomainJobStatsType;
+
+
+typedef struct _qemuDomainMirrorStats qemuDomainMirrorStats;
+typedef qemuDomainMirrorStats *qemuDomainMirrorStatsPtr;
+struct _qemuDomainMirrorStats {
+    unsigned long long transferred;
+    unsigned long long total;
+};
+
+typedef struct _qemuDomainBackupStats qemuDomainBackupStats;
+struct _qemuDomainBackupStats {
+    unsigned long long transferred;
+    unsigned long long total;
+    unsigned long long tmp_used;
+    unsigned long long tmp_total;
+};
+
+typedef struct _qemuDomainJobInfo qemuDomainJobInfo;
+typedef qemuDomainJobInfo *qemuDomainJobInfoPtr;
+struct _qemuDomainJobInfo {
+    qemuDomainJobStatus status;
+    virDomainJobOperation operation;
+    unsigned long long started; /* When the async job started */
+    unsigned long long stopped; /* When the domain's CPUs were stopped */
+    unsigned long long sent; /* When the source sent status info to the
+                                destination (only for migrations). */
+    unsigned long long received; /* When the destination host received status
+                                    info from the source (migrations only). */
+    /* Computed values */
+    unsigned long long timeElapsed;
+    long long timeDelta; /* delta = received - sent, i.e., the difference
+                            between the source and the destination time plus
+                            the time between the end of Perform phase on the
+                            source and the beginning of Finish phase on the
+                            destination. */
+    bool timeDeltaSet;
+    /* Raw values from QEMU */
+    qemuDomainJobStatsType statsType;
+    union {
+        qemuMonitorMigrationStats mig;
+        qemuMonitorDumpStats dump;
+        qemuDomainBackupStats backup;
+    } stats;
+    qemuDomainMirrorStats mirrorStats;
+
+    char *errmsg; /* optional error message for failed completed jobs */
+};
+
+void
+qemuDomainJobInfoFree(qemuDomainJobInfoPtr info);
+
+G_DEFINE_AUTOPTR_CLEANUP_FUNC(qemuDomainJobInfo, qemuDomainJobInfoFree);
+
+qemuDomainJobInfoPtr
+qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info);
+
+typedef struct _qemuDomainJobObj qemuDomainJobObj;
+typedef qemuDomainJobObj *qemuDomainJobObjPtr;
+struct _qemuDomainJobObj {
+    virCond cond;                       /* Use to coordinate jobs */
+
+    /* The following members are for QEMU_JOB_* */
+    qemuDomainJob active;               /* Currently running job */
+    unsigned long long owner;           /* Thread id which set current job */
+    const char *ownerAPI;               /* The API which owns the job */
+    unsigned long long started;         /* When the current job started */
+
+    /* The following members are for QEMU_AGENT_JOB_* */
+    qemuDomainAgentJob agentActive;     /* Currently running agent job */
+    unsigned long long agentOwner;      /* Thread id which set current agent job */
+    const char *agentOwnerAPI;          /* The API which owns the agent job */
+    unsigned long long agentStarted;    /* When the current agent job started */
+
+    /* The following members are for QEMU_ASYNC_JOB_* */
+    virCond asyncCond;                  /* Use to coordinate with async jobs */
+    qemuDomainAsyncJob asyncJob;        /* Currently active async job */
+    unsigned long long asyncOwner;      /* Thread which set current async job */
+    const char *asyncOwnerAPI;          /* The API which owns the async job */
+    unsigned long long asyncStarted;    /* When the current async job started */
+    int phase;                          /* Job phase (mainly for migrations) */
+    unsigned long long mask;            /* Jobs allowed during async job */
+    qemuDomainJobInfoPtr current;       /* async job progress data */
+    qemuDomainJobInfoPtr completed;     /* statistics data of a recently completed job */
+    bool abortJob;                      /* abort of the job requested */
+    bool spiceMigration;                /* we asked for spice migration and we
+                                         * should wait for it to finish */
+    bool spiceMigrated;                 /* spice migration completed */
+    char *error;                        /* job event completion error */
+    bool dumpCompleted;                 /* dump completed */
+
+    qemuMigrationParamsPtr migParams;
+    unsigned long apiFlags; /* flags passed to the API which started the async job */
+};
+
+const char *qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job,
+                                            int phase);
+int qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job,
+                                      const char *phase);
+
+void qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver,
+                                     virDomainObjPtr vm);
+
+int qemuDomainObjBeginJob(virQEMUDriverPtr driver,
+                          virDomainObjPtr obj,
+                          qemuDomainJob job)
+    G_GNUC_WARN_UNUSED_RESULT;
+int qemuDomainObjBeginAgentJob(virQEMUDriverPtr driver,
+                               virDomainObjPtr obj,
+                               qemuDomainAgentJob agentJob)
+    G_GNUC_WARN_UNUSED_RESULT;
+int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver,
+                               virDomainObjPtr obj,
+                               qemuDomainAsyncJob asyncJob,
+                               virDomainJobOperation operation,
+                               unsigned long apiFlags)
+    G_GNUC_WARN_UNUSED_RESULT;
+int qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver,
+                                virDomainObjPtr obj,
+                                qemuDomainAsyncJob asyncJob)
+    G_GNUC_WARN_UNUSED_RESULT;
+int qemuDomainObjBeginJobNowait(virQEMUDriverPtr driver,
+                                virDomainObjPtr obj,
+                                qemuDomainJob job)
+    G_GNUC_WARN_UNUSED_RESULT;
+
+void qemuDomainObjEndJob(virQEMUDriverPtr driver,
+                         virDomainObjPtr obj);
+void qemuDomainObjEndAgentJob(virDomainObjPtr obj);
+void qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver,
+                              virDomainObjPtr obj);
+void qemuDomainObjAbortAsyncJob(virDomainObjPtr obj);
+void qemuDomainObjSetJobPhase(virQEMUDriverPtr driver,
+                              virDomainObjPtr obj,
+                              int phase);
+void qemuDomainObjSetAsyncJobMask(virDomainObjPtr obj,
+                                  unsigned long long allowedJobs);
+void qemuDomainObjRestoreJob(virDomainObjPtr obj,
+                             qemuDomainJobObjPtr job);
+void qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr driver,
+                                  virDomainObjPtr obj);
+void qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj);
+
+void qemuDomainRemoveInactiveJob(virQEMUDriverPtr driver,
+                                 virDomainObjPtr vm);
+
+void qemuDomainRemoveInactiveJobLocked(virQEMUDriverPtr driver,
+                                       virDomainObjPtr vm);
+
+int qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo)
+    ATTRIBUTE_NONNULL(1);
+int qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo)
+    ATTRIBUTE_NONNULL(1);
+int qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo,
+                            virDomainJobInfoPtr info)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
+int qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
+                              int *type,
+                              virTypedParameterPtr *params,
+                              int *nparams)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2)
+    ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4);
+
+bool qemuDomainTrackJob(qemuDomainJob job);
+
+void qemuDomainObjFreeJob(qemuDomainJobObjPtr job);
+
+int qemuDomainObjInitJob(qemuDomainJobObjPtr job);
+
+bool qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob);
-- 
2.17.1