From: Prathamesh Chavan
To: libvir-list@redhat.com
Cc: Prathamesh Chavan
Subject: [GSoC][PATCH v2 1/6] qemu_domain: Added `qemuDomainJobInfo` to domainJob's `privateData`
Date: Mon, 17 Aug 2020 10:37:16 +0530
Message-Id: <20200817050721.7063-2-pc44800@gmail.com>
In-Reply-To: <20200817050721.7063-1-pc44800@gmail.com>
References: <20200817050721.7063-1-pc44800@gmail.com>

Because `qemuDomainJobInfo` holds attributes specific to the QEMU hypervisor's jobs, we moved the attributes `current` and `completed` from `qemuDomainJobObj` into its `privateData` structure. In the process, two callback functions, `setJobInfoOperation` and `currentJobInfoInit`, were introduced in qemuDomainJob's callback structure.
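[Editorial note] To make the shape of the change easier to follow before reading the full diff, here is a minimal, self-contained C sketch of the pattern being applied: the generic job object keeps only an opaque privateData pointer plus a callback table, and the QEMU-specific progress data is created and updated through those callbacks. The type and function names below are simplified stand-ins for illustration only, not libvirt's actual definitions.

    /* Simplified sketch of the privateData + callback pattern (not libvirt code). */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int operation;
        unsigned long long started;
    } JobInfo;                      /* stand-in for qemuDomainJobInfo */

    typedef struct {
        JobInfo *current;           /* async job progress data */
        JobInfo *completed;         /* stats of a recently completed job */
    } JobPrivate;                   /* stand-in for qemuDomainJobPrivate */

    struct JobObj;
    typedef struct {
        void (*currentJobInfoInit)(struct JobObj *job, unsigned long long now);
        void (*setJobInfoOperation)(struct JobObj *job, int op);
    } JobCallbacks;                 /* stand-in for the private job callbacks */

    typedef struct JobObj {
        void *privateData;          /* driver-specific part of the job */
        const JobCallbacks *cb;
    } JobObj;                       /* stand-in for qemuDomainJobObj */

    /* QEMU-side implementations: only they know the layout of JobPrivate. */
    static void qemuCurrentJobInfoInit(struct JobObj *job, unsigned long long now)
    {
        JobPrivate *priv = job->privateData;
        priv->current = calloc(1, sizeof(*priv->current));
        priv->current->started = now;
    }

    static void qemuSetJobInfoOperation(struct JobObj *job, int op)
    {
        JobPrivate *priv = job->privateData;
        priv->current->operation = op;
    }

    static const JobCallbacks qemuJobCallbacks = {
        .currentJobInfoInit = qemuCurrentJobInfoInit,
        .setJobInfoOperation = qemuSetJobInfoOperation,
    };

    int main(void)
    {
        JobPrivate priv = { 0 };
        JobObj job = { .privateData = &priv, .cb = &qemuJobCallbacks };

        /* Generic code no longer touches the progress data directly; it
         * only goes through the callbacks. */
        job.cb->currentJobInfoInit(&job, 1000);
        job.cb->setJobInfoOperation(&job, 1);
        printf("operation=%d started=%llu\n",
               priv.current->operation, priv.current->started);
        free(priv.current);
        return 0;
    }

In the actual patch below, qemuDomainObjBeginJobInternal() and qemuDomainObjBeginAsyncJob() in qemu_domainjob.c call priv->job.cb->currentJobInfoInit() and priv->job.cb->setJobInfoOperation() instead of dereferencing priv->job.current directly, and the QEMU driver accesses the data via priv->job.privateData (jobPriv->current / jobPriv->completed).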
Signed-off-by: Prathamesh Chavan --- src/qemu/qemu_backup.c | 22 +- src/qemu/qemu_domain.c | 498 +++++++++++++++++++++++++++++++ src/qemu/qemu_domain.h | 74 +++++ src/qemu/qemu_domainjob.c | 483 +----------------------------- src/qemu/qemu_domainjob.h | 81 +---- src/qemu/qemu_driver.c | 49 +-- src/qemu/qemu_migration.c | 62 ++-- src/qemu/qemu_migration_cookie.c | 8 +- src/qemu/qemu_process.c | 32 +- 9 files changed, 680 insertions(+), 629 deletions(-) diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index a402730d38..1822c6f267 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -529,20 +529,21 @@ qemuBackupJobTerminate(virDomainObjPtr vm, =20 { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; size_t i; =20 - qemuDomainJobInfoUpdateTime(priv->job.current); + qemuDomainJobInfoUpdateTime(jobPriv->current); =20 - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); - priv->job.completed =3D qemuDomainJobInfoCopy(priv->job.current); + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); + jobPriv->completed =3D qemuDomainJobInfoCopy(jobPriv->current); =20 - priv->job.completed->stats.backup.total =3D priv->backup->push_total; - priv->job.completed->stats.backup.transferred =3D priv->backup->push_t= ransferred; - priv->job.completed->stats.backup.tmp_used =3D priv->backup->pull_tmp_= used; - priv->job.completed->stats.backup.tmp_total =3D priv->backup->pull_tmp= _total; + jobPriv->completed->stats.backup.total =3D priv->backup->push_total; + jobPriv->completed->stats.backup.transferred =3D priv->backup->push_tr= ansferred; + jobPriv->completed->stats.backup.tmp_used =3D priv->backup->pull_tmp_u= sed; + jobPriv->completed->stats.backup.tmp_total =3D priv->backup->pull_tmp_= total; =20 - priv->job.completed->status =3D jobstatus; - priv->job.completed->errmsg =3D g_strdup(priv->backup->errmsg); + jobPriv->completed->status =3D jobstatus; + jobPriv->completed->errmsg =3D g_strdup(priv->backup->errmsg); =20 qemuDomainEventEmitJobCompleted(priv->driver, vm); =20 @@ -694,6 +695,7 @@ qemuBackupBegin(virDomainObjPtr vm, unsigned int flags) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(priv->dr= iver); g_autoptr(virDomainBackupDef) def =3D NULL; g_autofree char *suffix =3D NULL; @@ -745,7 +747,7 @@ qemuBackupBegin(virDomainObjPtr vm, qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | JOB_MASK(QEMU_JOB_SUSPEND) | JOB_MASK(QEMU_JOB_MODIFY))); - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + jobPriv->current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; =20 if (!virDomainObjIsActive(vm)) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index c440c79e1d..1ae44ae39f 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -75,6 +75,457 @@ VIR_LOG_INIT("qemu.qemu_domain"); =20 =20 +static virDomainJobType +qemuDomainJobStatusToType(qemuDomainJobStatus status) +{ + switch (status) { + case QEMU_DOMAIN_JOB_STATUS_NONE: + break; + + case QEMU_DOMAIN_JOB_STATUS_ACTIVE: + case QEMU_DOMAIN_JOB_STATUS_MIGRATING: + case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED: + case QEMU_DOMAIN_JOB_STATUS_POSTCOPY: + case QEMU_DOMAIN_JOB_STATUS_PAUSED: + return VIR_DOMAIN_JOB_UNBOUNDED; + + case QEMU_DOMAIN_JOB_STATUS_COMPLETED: + return VIR_DOMAIN_JOB_COMPLETED; + + case 
QEMU_DOMAIN_JOB_STATUS_FAILED: + return VIR_DOMAIN_JOB_FAILED; + + case QEMU_DOMAIN_JOB_STATUS_CANCELED: + return VIR_DOMAIN_JOB_CANCELLED; + } + + return VIR_DOMAIN_JOB_NONE; +} + +int +qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo) +{ + unsigned long long now; + + if (!jobInfo->started) + return 0; + + if (virTimeMillisNow(&now) < 0) + return -1; + + if (now < jobInfo->started) { + VIR_WARN("Async job starts in the future"); + jobInfo->started =3D 0; + return 0; + } + + jobInfo->timeElapsed =3D now - jobInfo->started; + return 0; +} + +int +qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo) +{ + unsigned long long now; + + if (!jobInfo->stopped) + return 0; + + if (virTimeMillisNow(&now) < 0) + return -1; + + if (now < jobInfo->stopped) { + VIR_WARN("Guest's CPUs stopped in the future"); + jobInfo->stopped =3D 0; + return 0; + } + + jobInfo->stats.mig.downtime =3D now - jobInfo->stopped; + jobInfo->stats.mig.downtime_set =3D true; + return 0; +} + + +int +qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo, + virDomainJobInfoPtr info) +{ + info->type =3D qemuDomainJobStatusToType(jobInfo->status); + info->timeElapsed =3D jobInfo->timeElapsed; + + switch (jobInfo->statsType) { + case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: + info->memTotal =3D jobInfo->stats.mig.ram_total; + info->memRemaining =3D jobInfo->stats.mig.ram_remaining; + info->memProcessed =3D jobInfo->stats.mig.ram_transferred; + info->fileTotal =3D jobInfo->stats.mig.disk_total + + jobInfo->mirrorStats.total; + info->fileRemaining =3D jobInfo->stats.mig.disk_remaining + + (jobInfo->mirrorStats.total - + jobInfo->mirrorStats.transferred); + info->fileProcessed =3D jobInfo->stats.mig.disk_transferred + + jobInfo->mirrorStats.transferred; + break; + + case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: + info->memTotal =3D jobInfo->stats.mig.ram_total; + info->memRemaining =3D jobInfo->stats.mig.ram_remaining; + info->memProcessed =3D jobInfo->stats.mig.ram_transferred; + break; + + case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: + info->memTotal =3D jobInfo->stats.dump.total; + info->memProcessed =3D jobInfo->stats.dump.completed; + info->memRemaining =3D info->memTotal - info->memProcessed; + break; + + case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: + info->fileTotal =3D jobInfo->stats.backup.total; + info->fileProcessed =3D jobInfo->stats.backup.transferred; + info->fileRemaining =3D info->fileTotal - info->fileProcessed; + break; + + case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: + break; + } + + info->dataTotal =3D info->memTotal + info->fileTotal; + info->dataRemaining =3D info->memRemaining + info->fileRemaining; + info->dataProcessed =3D info->memProcessed + info->fileProcessed; + + return 0; +} + + +static int +qemuDomainMigrationJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) +{ + qemuMonitorMigrationStats *stats =3D &jobInfo->stats.mig; + qemuDomainMirrorStatsPtr mirrorStats =3D &jobInfo->mirrorStats; + virTypedParameterPtr par =3D NULL; + int maxpar =3D 0; + int npar =3D 0; + unsigned long long mirrorRemaining =3D mirrorStats->total - + mirrorStats->transferred; + + if (virTypedParamsAddInt(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_OPERATION, + jobInfo->operation) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_TIME_ELAPSED, + jobInfo->timeElapsed) < 0) + goto error; + + if (jobInfo->timeDeltaSet && + jobInfo->timeElapsed > jobInfo->timeDelta && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_TIME_ELAPSED_NET, + 
jobInfo->timeElapsed - jobInfo->timeDelta)= < 0) + goto error; + + if (stats->downtime_set && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DOWNTIME, + stats->downtime) < 0) + goto error; + + if (stats->downtime_set && + jobInfo->timeDeltaSet && + stats->downtime > jobInfo->timeDelta && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DOWNTIME_NET, + stats->downtime - jobInfo->timeDelta) < 0) + goto error; + + if (stats->setup_time_set && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_SETUP_TIME, + stats->setup_time) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DATA_TOTAL, + stats->ram_total + + stats->disk_total + + mirrorStats->total) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DATA_PROCESSED, + stats->ram_transferred + + stats->disk_transferred + + mirrorStats->transferred) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DATA_REMAINING, + stats->ram_remaining + + stats->disk_remaining + + mirrorRemaining) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_TOTAL, + stats->ram_total) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_PROCESSED, + stats->ram_transferred) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_REMAINING, + stats->ram_remaining) < 0) + goto error; + + if (stats->ram_bps && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_BPS, + stats->ram_bps) < 0) + goto error; + + if (stats->ram_duplicate_set) { + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_CONSTANT, + stats->ram_duplicate) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_NORMAL, + stats->ram_normal) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_NORMAL_BYTES, + stats->ram_normal_bytes) < 0) + goto error; + } + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE, + stats->ram_dirty_rate) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_ITERATION, + stats->ram_iteration) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_POSTCOPY_REQS, + stats->ram_postcopy_reqs) < 0) + goto error; + + if (stats->ram_page_size > 0 && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE, + stats->ram_page_size) < 0) + goto error; + + /* The remaining stats are disk, mirror, or migration specific + * so if this is a SAVEDUMP, we can just skip them */ + if (jobInfo->statsType =3D=3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP) + goto done; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DISK_TOTAL, + stats->disk_total + + mirrorStats->total) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DISK_PROCESSED, + stats->disk_transferred + + mirrorStats->transferred) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DISK_REMAINING, + stats->disk_remaining + + mirrorRemaining) < 0) + goto error; + + if (stats->disk_bps && + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_DISK_BPS, + stats->disk_bps) < 0) + goto error; + + if (stats->xbzrle_set) { + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_CACHE, + stats->xbzrle_cache_size) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_BYTES, + stats->xbzrle_bytes) < 0 || + 
virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_PAGES, + stats->xbzrle_pages) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_CACHE_MISSE= S, + stats->xbzrle_cache_miss) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_COMPRESSION_OVERFLOW, + stats->xbzrle_overflow) < 0) + goto error; + } + + if (stats->cpu_throttle_percentage && + virTypedParamsAddInt(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_AUTO_CONVERGE_THROTTLE, + stats->cpu_throttle_percentage) < 0) + goto error; + + done: + *type =3D qemuDomainJobStatusToType(jobInfo->status); + *params =3D par; + *nparams =3D npar; + return 0; + + error: + virTypedParamsFree(par, npar); + return -1; +} + + +static int +qemuDomainDumpJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) +{ + qemuMonitorDumpStats *stats =3D &jobInfo->stats.dump; + virTypedParameterPtr par =3D NULL; + int maxpar =3D 0; + int npar =3D 0; + + if (virTypedParamsAddInt(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_OPERATION, + jobInfo->operation) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_TIME_ELAPSED, + jobInfo->timeElapsed) < 0) + goto error; + + if (virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_TOTAL, + stats->total) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_PROCESSED, + stats->completed) < 0 || + virTypedParamsAddULLong(&par, &npar, &maxpar, + VIR_DOMAIN_JOB_MEMORY_REMAINING, + stats->total - stats->completed) < 0) + goto error; + + *type =3D qemuDomainJobStatusToType(jobInfo->status); + *params =3D par; + *nparams =3D npar; + return 0; + + error: + virTypedParamsFree(par, npar); + return -1; +} + + +static int +qemuDomainBackupJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) +{ + qemuDomainBackupStats *stats =3D &jobInfo->stats.backup; + g_autoptr(virTypedParamList) par =3D g_new0(virTypedParamList, 1); + + if (virTypedParamListAddInt(par, jobInfo->operation, + VIR_DOMAIN_JOB_OPERATION) < 0) + return -1; + + if (virTypedParamListAddULLong(par, jobInfo->timeElapsed, + VIR_DOMAIN_JOB_TIME_ELAPSED) < 0) + return -1; + + if (stats->transferred > 0 || stats->total > 0) { + if (virTypedParamListAddULLong(par, stats->total, + VIR_DOMAIN_JOB_DISK_TOTAL) < 0) + return -1; + + if (virTypedParamListAddULLong(par, stats->transferred, + VIR_DOMAIN_JOB_DISK_PROCESSED) < 0) + return -1; + + if (virTypedParamListAddULLong(par, stats->total - stats->transfer= red, + VIR_DOMAIN_JOB_DISK_REMAINING) < 0) + return -1; + } + + if (stats->tmp_used > 0 || stats->tmp_total > 0) { + if (virTypedParamListAddULLong(par, stats->tmp_used, + VIR_DOMAIN_JOB_DISK_TEMP_USED) < 0) + return -1; + + if (virTypedParamListAddULLong(par, stats->tmp_total, + VIR_DOMAIN_JOB_DISK_TEMP_TOTAL) < 0) + return -1; + } + + if (jobInfo->status !=3D QEMU_DOMAIN_JOB_STATUS_ACTIVE && + virTypedParamListAddBoolean(par, + jobInfo->status =3D=3D QEMU_DOMAIN_JOB= _STATUS_COMPLETED, + VIR_DOMAIN_JOB_SUCCESS) < 0) + return -1; + + if (jobInfo->errmsg && + virTypedParamListAddString(par, jobInfo->errmsg, VIR_DOMAIN_JOB_ER= RMSG) < 0) + return -1; + + *nparams =3D virTypedParamListStealParams(par, params); + *type =3D qemuDomainJobStatusToType(jobInfo->status); + return 0; +} + + +int +qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) +{ + switch (jobInfo->statsType) { + 
case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: + case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: + return qemuDomainMigrationJobInfoToParams(jobInfo, type, params, n= params); + + case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: + return qemuDomainDumpJobInfoToParams(jobInfo, type, params, nparam= s); + + case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: + return qemuDomainBackupJobInfoToParams(jobInfo, type, params, npar= ams); + + case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("invalid job statistics type")); + break; + + default: + virReportEnumRangeError(qemuDomainJobStatsType, jobInfo->statsType= ); + break; + } + + return -1; +} + + +void +qemuDomainJobInfoFree(qemuDomainJobInfoPtr info) +{ + g_free(info->errmsg); + g_free(info); +} + + +qemuDomainJobInfoPtr +qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info) +{ + qemuDomainJobInfoPtr ret =3D g_new0(qemuDomainJobInfo, 1); + + memcpy(ret, info, sizeof(*info)); + + ret->errmsg =3D g_strdup(info->errmsg); + + return ret; +} + + static void * qemuJobAllocPrivate(void) { @@ -91,6 +542,8 @@ qemuJobFreePrivate(void *opaque) return; =20 qemuMigrationParamsFree(priv->migParams); + g_clear_pointer(&priv->current, qemuDomainJobInfoFree); + g_clear_pointer(&priv->completed, qemuDomainJobInfoFree); VIR_FREE(priv); } =20 @@ -104,6 +557,7 @@ qemuJobResetPrivate(void *opaque) priv->spiceMigrated =3D false; priv->dumpCompleted =3D false; qemuMigrationParamsFree(priv->migParams); + g_clear_pointer(&priv->current, qemuDomainJobInfoFree); priv->migParams =3D NULL; } =20 @@ -120,6 +574,48 @@ qemuDomainFormatJobPrivate(virBufferPtr buf, return 0; } =20 +static void +qemuDomainCurrentJobInfoInit(qemuDomainJobObjPtr job, + unsigned long long now) +{ + qemuDomainJobPrivatePtr priv =3D job->privateData; + priv->current =3D g_new0(qemuDomainJobInfo, 1); + priv->current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + priv->current->started =3D now; + +} + +static void +qemuDomainJobInfoSetOperation(qemuDomainJobObjPtr job, + virDomainJobOperation operation) +{ + qemuDomainJobPrivatePtr priv =3D job->privateData; + priv->current->operation =3D operation; +} + +void +qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, + virDomainObjPtr vm) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; + virObjectEventPtr event; + virTypedParameterPtr params =3D NULL; + int nparams =3D 0; + int type; + + if (!jobPriv->completed) + return; + + if (qemuDomainJobInfoToParams(jobPriv->completed, &type, + ¶ms, &nparams) < 0) { + VIR_WARN("Could not get stats for completed job; domain %s", + vm->def->name); + } + + event =3D virDomainEventJobCompletedNewFromObj(vm, params, nparams); + virObjectEventStateQueue(driver->domainEventState, event); +} =20 static int qemuDomainParseJobPrivate(xmlXPathContextPtr ctxt, @@ -140,6 +636,8 @@ static qemuDomainObjPrivateJobCallbacks qemuPrivateJobC= allbacks =3D { .resetJobPrivate =3D qemuJobResetPrivate, .formatJob =3D qemuDomainFormatJobPrivate, .parseJob =3D qemuDomainParseJobPrivate, + .setJobInfoOperation =3D qemuDomainJobInfoSetOperation, + .currentJobInfoInit =3D qemuDomainCurrentJobInfoInit, }; =20 /** diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 3a1bcbbfa3..386ae17272 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -483,6 +483,52 @@ struct _qemuDomainXmlNsDef { char **capsdel; }; =20 +typedef struct _qemuDomainMirrorStats qemuDomainMirrorStats; +typedef qemuDomainMirrorStats *qemuDomainMirrorStatsPtr; +struct 
_qemuDomainMirrorStats { + unsigned long long transferred; + unsigned long long total; +}; + +typedef struct _qemuDomainBackupStats qemuDomainBackupStats; +struct _qemuDomainBackupStats { + unsigned long long transferred; + unsigned long long total; + unsigned long long tmp_used; + unsigned long long tmp_total; +}; + +typedef struct _qemuDomainJobInfo qemuDomainJobInfo; +typedef qemuDomainJobInfo *qemuDomainJobInfoPtr; +struct _qemuDomainJobInfo { + qemuDomainJobStatus status; + virDomainJobOperation operation; + unsigned long long started; /* When the async job started */ + unsigned long long stopped; /* When the domain's CPUs were stopped */ + unsigned long long sent; /* When the source sent status info to the + destination (only for migrations). */ + unsigned long long received; /* When the destination host received sta= tus + info from the source (migrations only)= . */ + /* Computed values */ + unsigned long long timeElapsed; + long long timeDelta; /* delta =3D received - sent, i.e., the difference + between the source and the destination time pl= us + the time between the end of Perform phase on t= he + source and the beginning of Finish phase on the + destination. */ + bool timeDeltaSet; + /* Raw values from QEMU */ + qemuDomainJobStatsType statsType; + union { + qemuMonitorMigrationStats mig; + qemuMonitorDumpStats dump; + qemuDomainBackupStats backup; + } stats; + qemuDomainMirrorStats mirrorStats; + + char *errmsg; /* optional error message for failed completed jobs */ +}; + typedef struct _qemuDomainJobPrivate qemuDomainJobPrivate; typedef qemuDomainJobPrivate *qemuDomainJobPrivatePtr; struct _qemuDomainJobPrivate { @@ -491,8 +537,36 @@ struct _qemuDomainJobPrivate { bool spiceMigrated; /* spice migration completed */ bool dumpCompleted; /* dump completed */ qemuMigrationParamsPtr migParams; + qemuDomainJobInfoPtr current; /* async job progress data */ + qemuDomainJobInfoPtr completed; /* statistics data of a recently c= ompleted job */ }; =20 + +void qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, + virDomainObjPtr vm); + +void +qemuDomainJobInfoFree(qemuDomainJobInfoPtr info); + +G_DEFINE_AUTOPTR_CLEANUP_FUNC(qemuDomainJobInfo, qemuDomainJobInfoFree); + +qemuDomainJobInfoPtr +qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info); + +int qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo) + ATTRIBUTE_NONNULL(1); +int qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo) + ATTRIBUTE_NONNULL(1); +int qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo, + virDomainJobInfoPtr info) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); +int qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) + ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4); + int qemuDomainObjStartWorker(virDomainObjPtr dom); void qemuDomainObjStopWorker(virDomainObjPtr dom); =20 diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 6393cc0b40..503a87bb12 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -115,51 +115,6 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob j= ob, return -1; } =20 - -void -qemuDomainJobInfoFree(qemuDomainJobInfoPtr info) -{ - g_free(info->errmsg); - g_free(info); -} - - -qemuDomainJobInfoPtr -qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info) -{ - qemuDomainJobInfoPtr ret =3D g_new0(qemuDomainJobInfo, 1); - - memcpy(ret, info, sizeof(*info)); - - ret->errmsg =3D g_strdup(info->errmsg); - - return ret; -} - -void 
-qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, - virDomainObjPtr vm) -{ - qemuDomainObjPrivatePtr priv =3D vm->privateData; - virObjectEventPtr event; - virTypedParameterPtr params =3D NULL; - int nparams =3D 0; - int type; - - if (!priv->job.completed) - return; - - if (qemuDomainJobInfoToParams(priv->job.completed, &type, - ¶ms, &nparams) < 0) { - VIR_WARN("Could not get stats for completed job; domain %s", - vm->def->name); - } - - event =3D virDomainEventJobCompletedNewFromObj(vm, params, nparams); - virObjectEventStateQueue(driver->domainEventState, event); -} - - int qemuDomainObjInitJob(qemuDomainJobObjPtr job, qemuDomainObjPrivateJobCallbacksPtr cb) @@ -216,7 +171,6 @@ qemuDomainObjResetAsyncJob(qemuDomainJobObjPtr job) job->mask =3D QEMU_JOB_DEFAULT_MASK; job->abortJob =3D false; VIR_FREE(job->error); - g_clear_pointer(&job->current, qemuDomainJobInfoFree); job->cb->resetJobPrivate(job->privateData); job->apiFlags =3D 0; } @@ -251,8 +205,6 @@ qemuDomainObjFreeJob(qemuDomainJobObjPtr job) qemuDomainObjResetJob(job); qemuDomainObjResetAsyncJob(job); job->cb->freeJobPrivate(job->privateData); - g_clear_pointer(&job->current, qemuDomainJobInfoFree); - g_clear_pointer(&job->completed, qemuDomainJobInfoFree); virCondDestroy(&job->cond); virCondDestroy(&job->asyncCond); } @@ -264,435 +216,6 @@ qemuDomainTrackJob(qemuDomainJob job) } =20 =20 -int -qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo) -{ - unsigned long long now; - - if (!jobInfo->started) - return 0; - - if (virTimeMillisNow(&now) < 0) - return -1; - - if (now < jobInfo->started) { - VIR_WARN("Async job starts in the future"); - jobInfo->started =3D 0; - return 0; - } - - jobInfo->timeElapsed =3D now - jobInfo->started; - return 0; -} - -int -qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo) -{ - unsigned long long now; - - if (!jobInfo->stopped) - return 0; - - if (virTimeMillisNow(&now) < 0) - return -1; - - if (now < jobInfo->stopped) { - VIR_WARN("Guest's CPUs stopped in the future"); - jobInfo->stopped =3D 0; - return 0; - } - - jobInfo->stats.mig.downtime =3D now - jobInfo->stopped; - jobInfo->stats.mig.downtime_set =3D true; - return 0; -} - -static virDomainJobType -qemuDomainJobStatusToType(qemuDomainJobStatus status) -{ - switch (status) { - case QEMU_DOMAIN_JOB_STATUS_NONE: - break; - - case QEMU_DOMAIN_JOB_STATUS_ACTIVE: - case QEMU_DOMAIN_JOB_STATUS_MIGRATING: - case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED: - case QEMU_DOMAIN_JOB_STATUS_POSTCOPY: - case QEMU_DOMAIN_JOB_STATUS_PAUSED: - return VIR_DOMAIN_JOB_UNBOUNDED; - - case QEMU_DOMAIN_JOB_STATUS_COMPLETED: - return VIR_DOMAIN_JOB_COMPLETED; - - case QEMU_DOMAIN_JOB_STATUS_FAILED: - return VIR_DOMAIN_JOB_FAILED; - - case QEMU_DOMAIN_JOB_STATUS_CANCELED: - return VIR_DOMAIN_JOB_CANCELLED; - } - - return VIR_DOMAIN_JOB_NONE; -} - -int -qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo, - virDomainJobInfoPtr info) -{ - info->type =3D qemuDomainJobStatusToType(jobInfo->status); - info->timeElapsed =3D jobInfo->timeElapsed; - - switch (jobInfo->statsType) { - case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: - info->memTotal =3D jobInfo->stats.mig.ram_total; - info->memRemaining =3D jobInfo->stats.mig.ram_remaining; - info->memProcessed =3D jobInfo->stats.mig.ram_transferred; - info->fileTotal =3D jobInfo->stats.mig.disk_total + - jobInfo->mirrorStats.total; - info->fileRemaining =3D jobInfo->stats.mig.disk_remaining + - (jobInfo->mirrorStats.total - - jobInfo->mirrorStats.transferred); - info->fileProcessed =3D 
jobInfo->stats.mig.disk_transferred + - jobInfo->mirrorStats.transferred; - break; - - case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - info->memTotal =3D jobInfo->stats.mig.ram_total; - info->memRemaining =3D jobInfo->stats.mig.ram_remaining; - info->memProcessed =3D jobInfo->stats.mig.ram_transferred; - break; - - case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - info->memTotal =3D jobInfo->stats.dump.total; - info->memProcessed =3D jobInfo->stats.dump.completed; - info->memRemaining =3D info->memTotal - info->memProcessed; - break; - - case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - info->fileTotal =3D jobInfo->stats.backup.total; - info->fileProcessed =3D jobInfo->stats.backup.transferred; - info->fileRemaining =3D info->fileTotal - info->fileProcessed; - break; - - case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: - break; - } - - info->dataTotal =3D info->memTotal + info->fileTotal; - info->dataRemaining =3D info->memRemaining + info->fileRemaining; - info->dataProcessed =3D info->memProcessed + info->fileProcessed; - - return 0; -} - - -static int -qemuDomainMigrationJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) -{ - qemuMonitorMigrationStats *stats =3D &jobInfo->stats.mig; - qemuDomainMirrorStatsPtr mirrorStats =3D &jobInfo->mirrorStats; - virTypedParameterPtr par =3D NULL; - int maxpar =3D 0; - int npar =3D 0; - unsigned long long mirrorRemaining =3D mirrorStats->total - - mirrorStats->transferred; - - if (virTypedParamsAddInt(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_OPERATION, - jobInfo->operation) < 0) - goto error; - - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed) < 0) - goto error; - - if (jobInfo->timeDeltaSet && - jobInfo->timeElapsed > jobInfo->timeDelta && - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_TIME_ELAPSED_NET, - jobInfo->timeElapsed - jobInfo->timeDelta)= < 0) - goto error; - - if (stats->downtime_set && - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DOWNTIME, - stats->downtime) < 0) - goto error; - - if (stats->downtime_set && - jobInfo->timeDeltaSet && - stats->downtime > jobInfo->timeDelta && - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DOWNTIME_NET, - stats->downtime - jobInfo->timeDelta) < 0) - goto error; - - if (stats->setup_time_set && - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_SETUP_TIME, - stats->setup_time) < 0) - goto error; - - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DATA_TOTAL, - stats->ram_total + - stats->disk_total + - mirrorStats->total) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DATA_PROCESSED, - stats->ram_transferred + - stats->disk_transferred + - mirrorStats->transferred) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DATA_REMAINING, - stats->ram_remaining + - stats->disk_remaining + - mirrorRemaining) < 0) - goto error; - - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_TOTAL, - stats->ram_total) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_PROCESSED, - stats->ram_transferred) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_REMAINING, - stats->ram_remaining) < 0) - goto error; - - if (stats->ram_bps && - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_BPS, - stats->ram_bps) < 0) - goto error; - - if (stats->ram_duplicate_set) { - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - 
VIR_DOMAIN_JOB_MEMORY_CONSTANT, - stats->ram_duplicate) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_NORMAL, - stats->ram_normal) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_NORMAL_BYTES, - stats->ram_normal_bytes) < 0) - goto error; - } - - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE, - stats->ram_dirty_rate) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_ITERATION, - stats->ram_iteration) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_POSTCOPY_REQS, - stats->ram_postcopy_reqs) < 0) - goto error; - - if (stats->ram_page_size > 0 && - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE, - stats->ram_page_size) < 0) - goto error; - - /* The remaining stats are disk, mirror, or migration specific - * so if this is a SAVEDUMP, we can just skip them */ - if (jobInfo->statsType =3D=3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP) - goto done; - - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DISK_TOTAL, - stats->disk_total + - mirrorStats->total) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DISK_PROCESSED, - stats->disk_transferred + - mirrorStats->transferred) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DISK_REMAINING, - stats->disk_remaining + - mirrorRemaining) < 0) - goto error; - - if (stats->disk_bps && - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_DISK_BPS, - stats->disk_bps) < 0) - goto error; - - if (stats->xbzrle_set) { - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_CACHE, - stats->xbzrle_cache_size) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_BYTES, - stats->xbzrle_bytes) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_PAGES, - stats->xbzrle_pages) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_CACHE_MISSE= S, - stats->xbzrle_cache_miss) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_COMPRESSION_OVERFLOW, - stats->xbzrle_overflow) < 0) - goto error; - } - - if (stats->cpu_throttle_percentage && - virTypedParamsAddInt(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_AUTO_CONVERGE_THROTTLE, - stats->cpu_throttle_percentage) < 0) - goto error; - - done: - *type =3D qemuDomainJobStatusToType(jobInfo->status); - *params =3D par; - *nparams =3D npar; - return 0; - - error: - virTypedParamsFree(par, npar); - return -1; -} - - -static int -qemuDomainDumpJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) -{ - qemuMonitorDumpStats *stats =3D &jobInfo->stats.dump; - virTypedParameterPtr par =3D NULL; - int maxpar =3D 0; - int npar =3D 0; - - if (virTypedParamsAddInt(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_OPERATION, - jobInfo->operation) < 0) - goto error; - - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_TIME_ELAPSED, - jobInfo->timeElapsed) < 0) - goto error; - - if (virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_TOTAL, - stats->total) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_PROCESSED, - stats->completed) < 0 || - virTypedParamsAddULLong(&par, &npar, &maxpar, - VIR_DOMAIN_JOB_MEMORY_REMAINING, - stats->total - stats->completed) < 0) - goto error; - - *type =3D qemuDomainJobStatusToType(jobInfo->status); - *params 
=3D par; - *nparams =3D npar; - return 0; - - error: - virTypedParamsFree(par, npar); - return -1; -} - - -static int -qemuDomainBackupJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) -{ - qemuDomainBackupStats *stats =3D &jobInfo->stats.backup; - g_autoptr(virTypedParamList) par =3D g_new0(virTypedParamList, 1); - - if (virTypedParamListAddInt(par, jobInfo->operation, - VIR_DOMAIN_JOB_OPERATION) < 0) - return -1; - - if (virTypedParamListAddULLong(par, jobInfo->timeElapsed, - VIR_DOMAIN_JOB_TIME_ELAPSED) < 0) - return -1; - - if (stats->transferred > 0 || stats->total > 0) { - if (virTypedParamListAddULLong(par, stats->total, - VIR_DOMAIN_JOB_DISK_TOTAL) < 0) - return -1; - - if (virTypedParamListAddULLong(par, stats->transferred, - VIR_DOMAIN_JOB_DISK_PROCESSED) < 0) - return -1; - - if (virTypedParamListAddULLong(par, stats->total - stats->transfer= red, - VIR_DOMAIN_JOB_DISK_REMAINING) < 0) - return -1; - } - - if (stats->tmp_used > 0 || stats->tmp_total > 0) { - if (virTypedParamListAddULLong(par, stats->tmp_used, - VIR_DOMAIN_JOB_DISK_TEMP_USED) < 0) - return -1; - - if (virTypedParamListAddULLong(par, stats->tmp_total, - VIR_DOMAIN_JOB_DISK_TEMP_TOTAL) < 0) - return -1; - } - - if (jobInfo->status !=3D QEMU_DOMAIN_JOB_STATUS_ACTIVE && - virTypedParamListAddBoolean(par, - jobInfo->status =3D=3D QEMU_DOMAIN_JOB= _STATUS_COMPLETED, - VIR_DOMAIN_JOB_SUCCESS) < 0) - return -1; - - if (jobInfo->errmsg && - virTypedParamListAddString(par, jobInfo->errmsg, VIR_DOMAIN_JOB_ER= RMSG) < 0) - return -1; - - *nparams =3D virTypedParamListStealParams(par, params); - *type =3D qemuDomainJobStatusToType(jobInfo->status); - return 0; -} - - -int -qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) -{ - switch (jobInfo->statsType) { - case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: - case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - return qemuDomainMigrationJobInfoToParams(jobInfo, type, params, n= params); - - case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - return qemuDomainDumpJobInfoToParams(jobInfo, type, params, nparam= s); - - case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - return qemuDomainBackupJobInfoToParams(jobInfo, type, params, npar= ams); - - case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("invalid job statistics type")); - break; - - default: - virReportEnumRangeError(qemuDomainJobStatsType, jobInfo->statsType= ); - break; - } - - return -1; -} - - void qemuDomainObjSetJobPhase(virQEMUDriverPtr driver, virDomainObjPtr obj, @@ -894,13 +417,11 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, qemuDomainAsyncJobTypeToString(asyncJob), obj, obj->def->name); qemuDomainObjResetAsyncJob(&priv->job); - priv->job.current =3D g_new0(qemuDomainJobInfo, 1); - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + priv->job.cb->currentJobInfoInit(&priv->job, now); priv->job.asyncJob =3D asyncJob; priv->job.asyncOwner =3D virThreadSelfID(); priv->job.asyncOwnerAPI =3D virThreadJobGet(); priv->job.asyncStarted =3D now; - priv->job.current->started =3D now; } } =20 @@ -1066,7 +587,7 @@ int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver, return -1; =20 priv =3D obj->privateData; - priv->job.current->operation =3D operation; + priv->job.cb->setJobInfoOperation(&priv->job, operation); priv->job.apiFlags =3D apiFlags; return 0; } diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index c83e055647..88051d099a 100644 --- 
a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -19,7 +19,6 @@ #pragma once =20 #include -#include "qemu_monitor.h" =20 #define JOB_MASK(job) (job =3D=3D 0 ? 0 : 1 << (job - 1)) #define QEMU_JOB_DEFAULT_MASK \ @@ -99,61 +98,6 @@ typedef enum { QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP, } qemuDomainJobStatsType; =20 - -typedef struct _qemuDomainMirrorStats qemuDomainMirrorStats; -typedef qemuDomainMirrorStats *qemuDomainMirrorStatsPtr; -struct _qemuDomainMirrorStats { - unsigned long long transferred; - unsigned long long total; -}; - -typedef struct _qemuDomainBackupStats qemuDomainBackupStats; -struct _qemuDomainBackupStats { - unsigned long long transferred; - unsigned long long total; - unsigned long long tmp_used; - unsigned long long tmp_total; -}; - -typedef struct _qemuDomainJobInfo qemuDomainJobInfo; -typedef qemuDomainJobInfo *qemuDomainJobInfoPtr; -struct _qemuDomainJobInfo { - qemuDomainJobStatus status; - virDomainJobOperation operation; - unsigned long long started; /* When the async job started */ - unsigned long long stopped; /* When the domain's CPUs were stopped */ - unsigned long long sent; /* When the source sent status info to the - destination (only for migrations). */ - unsigned long long received; /* When the destination host received sta= tus - info from the source (migrations only)= . */ - /* Computed values */ - unsigned long long timeElapsed; - long long timeDelta; /* delta =3D received - sent, i.e., the difference - between the source and the destination time pl= us - the time between the end of Perform phase on t= he - source and the beginning of Finish phase on the - destination. */ - bool timeDeltaSet; - /* Raw values from QEMU */ - qemuDomainJobStatsType statsType; - union { - qemuMonitorMigrationStats mig; - qemuMonitorDumpStats dump; - qemuDomainBackupStats backup; - } stats; - qemuDomainMirrorStats mirrorStats; - - char *errmsg; /* optional error message for failed completed jobs */ -}; - -void -qemuDomainJobInfoFree(qemuDomainJobInfoPtr info); - -G_DEFINE_AUTOPTR_CLEANUP_FUNC(qemuDomainJobInfo, qemuDomainJobInfoFree); - -qemuDomainJobInfoPtr -qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info); - typedef struct _qemuDomainJobObj qemuDomainJobObj; typedef qemuDomainJobObj *qemuDomainJobObjPtr; =20 @@ -164,6 +108,10 @@ typedef int (*qemuDomainObjPrivateJobFormat)(virBuffer= Ptr, qemuDomainJobObjPtr); typedef int (*qemuDomainObjPrivateJobParse)(xmlXPathContextPtr, qemuDomainJobObjPtr); +typedef void (*qemuDomainObjJobInfoSetOperation)(qemuDomainJobObjPtr, + virDomainJobOperation); +typedef void (*qemuDomainObjCurrentJobInfoInit)(qemuDomainJobObjPtr, + unsigned long long); =20 typedef struct _qemuDomainObjPrivateJobCallbacks qemuDomainObjPrivateJobCa= llbacks; typedef qemuDomainObjPrivateJobCallbacks *qemuDomainObjPrivateJobCallbacks= Ptr; @@ -173,6 +121,8 @@ struct _qemuDomainObjPrivateJobCallbacks { qemuDomainObjPrivateJobReset resetJobPrivate; qemuDomainObjPrivateJobFormat formatJob; qemuDomainObjPrivateJobParse parseJob; + qemuDomainObjJobInfoSetOperation setJobInfoOperation; + qemuDomainObjCurrentJobInfoInit currentJobInfoInit; }; =20 struct _qemuDomainJobObj { @@ -198,8 +148,6 @@ struct _qemuDomainJobObj { unsigned long long asyncStarted; /* When the current async job star= ted */ int phase; /* Job phase (mainly for migration= s) */ unsigned long long mask; /* Jobs allowed during async job */ - qemuDomainJobInfoPtr current; /* async job progress data */ - qemuDomainJobInfoPtr completed; /* statistics data of a recently c= ompleted job */ bool 
abortJob; /* abort of the job requested */ char *error; /* job event completion error */ unsigned long apiFlags; /* flags passed to the API which started the a= sync job */ @@ -213,9 +161,6 @@ const char *qemuDomainAsyncJobPhaseToString(qemuDomainA= syncJob job, int qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job, const char *phase); =20 -void qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, - virDomainObjPtr vm); - int qemuDomainObjBeginJob(virQEMUDriverPtr driver, virDomainObjPtr obj, qemuDomainJob job) @@ -262,20 +207,6 @@ void qemuDomainRemoveInactiveJob(virQEMUDriverPtr driv= er, void qemuDomainRemoveInactiveJobLocked(virQEMUDriverPtr driver, virDomainObjPtr vm); =20 -int qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo) - ATTRIBUTE_NONNULL(1); -int qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo) - ATTRIBUTE_NONNULL(1); -int qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo, - virDomainJobInfoPtr info) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); -int qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) - ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4); - bool qemuDomainTrackJob(qemuDomainJob job); =20 void qemuDomainObjFreeJob(qemuDomainJobObjPtr job); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 8008da6d16..27be53d3e4 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2724,6 +2724,7 @@ qemuDomainGetControlInfo(virDomainPtr dom, { virDomainObjPtr vm; qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; int ret =3D -1; =20 virCheckFlags(0, -1); @@ -2738,6 +2739,7 @@ qemuDomainGetControlInfo(virDomainPtr dom, goto cleanup; =20 priv =3D vm->privateData; + jobPriv =3D priv->job.privateData; =20 memset(info, 0, sizeof(*info)); =20 @@ -2747,9 +2749,9 @@ qemuDomainGetControlInfo(virDomainPtr dom, } else if (priv->job.active) { if (virTimeMillisNow(&info->stateTime) < 0) goto cleanup; - if (priv->job.current) { + if (jobPriv->current) { info->state =3D VIR_DOMAIN_CONTROL_JOB; - info->stateTime -=3D priv->job.current->started; + info->stateTime -=3D jobPriv->current->started; } else { if (priv->monStart > 0) { info->state =3D VIR_DOMAIN_CONTROL_OCCUPIED; @@ -3314,6 +3316,7 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, int ret =3D -1; virObjectEventPtr event =3D NULL; qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; virQEMUSaveDataPtr data =3D NULL; g_autoptr(qemuDomainSaveCookie) cookie =3D NULL; =20 @@ -3330,7 +3333,7 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, goto endjob; } =20 - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + jobPriv->current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; =20 /* Pause */ if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING) { @@ -3715,7 +3718,7 @@ qemuDumpWaitForCompletion(virDomainObjPtr vm) return -1; } =20 - if (priv->job.current->stats.dump.status =3D=3D QEMU_MONITOR_DUMP_STAT= US_FAILED) { + if (jobPriv->current->stats.dump.status =3D=3D QEMU_MONITOR_DUMP_STATU= S_FAILED) { if (priv->job.error) virReportError(VIR_ERR_OPERATION_FAILED, _("memory-only dump failed: %s"), @@ -3726,7 +3729,7 @@ qemuDumpWaitForCompletion(virDomainObjPtr vm) =20 return -1; } - qemuDomainJobInfoUpdateTime(priv->job.current); + qemuDomainJobInfoUpdateTime(jobPriv->current); =20 return 0; } @@ -3740,6 +3743,7 @@ qemuDumpToFd(virQEMUDriverPtr driver, const char *dumpformat) { 
qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; bool detach =3D false; int ret =3D -1; =20 @@ -3755,9 +3759,9 @@ qemuDumpToFd(virQEMUDriverPtr driver, return -1; =20 if (detach) - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUM= P; + jobPriv->current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP; else - g_clear_pointer(&priv->job.current, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->current, qemuDomainJobInfoFree); =20 if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1; @@ -3894,6 +3898,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, virQEMUDriverPtr driver =3D dom->conn->privateData; virDomainObjPtr vm; qemuDomainObjPrivatePtr priv =3D NULL; + qemuDomainJobPrivatePtr jobPriv; bool resume =3D false, paused =3D false; int ret =3D -1; virObjectEventPtr event =3D NULL; @@ -3918,7 +3923,8 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, goto endjob; =20 priv =3D vm->privateData; - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + jobPriv =3D priv->job.privateData; + jobPriv->current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; =20 /* Migrate will always stop the VM, so the resume condition is independent of whether the stop command is issued. */ @@ -7479,6 +7485,7 @@ qemuDomainObjStart(virConnectPtr conn, bool force_boot =3D (flags & VIR_DOMAIN_START_FORCE_BOOT) !=3D 0; unsigned int start_flags =3D VIR_QEMU_PROCESS_START_COLD; qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 start_flags |=3D start_paused ? VIR_QEMU_PROCESS_START_PAUSED : 0; start_flags |=3D autodestroy ? VIR_QEMU_PROCESS_START_AUTODESTROY : 0; @@ -7502,8 +7509,8 @@ qemuDomainObjStart(virConnectPtr conn, } vm->hasManagedSave =3D false; } else { - virDomainJobOperation op =3D priv->job.current->operation; - priv->job.current->operation =3D VIR_DOMAIN_JOB_OPERATION_REST= ORE; + virDomainJobOperation op =3D jobPriv->current->operation; + jobPriv->current->operation =3D VIR_DOMAIN_JOB_OPERATION_RESTO= RE; =20 ret =3D qemuDomainObjRestore(conn, driver, vm, managed_save, start_paused, bypass_cache, asyncJo= b); @@ -7521,7 +7528,7 @@ qemuDomainObjStart(virConnectPtr conn, return ret; } else { VIR_WARN("Ignoring incomplete managed state %s", managed_s= ave); - priv->job.current->operation =3D op; + jobPriv->current->operation =3D op; vm->hasManagedSave =3D false; } } @@ -13575,13 +13582,14 @@ qemuDomainGetJobStatsInternal(virQEMUDriverPtr dr= iver, qemuDomainJobInfoPtr *jobInfo) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; int ret =3D -1; =20 *jobInfo =3D NULL; =20 if (completed) { - if (priv->job.completed && !priv->job.current) - *jobInfo =3D qemuDomainJobInfoCopy(priv->job.completed); + if (jobPriv->completed && !jobPriv->current) + *jobInfo =3D qemuDomainJobInfoCopy(jobPriv->completed); =20 return 0; } @@ -13599,11 +13607,11 @@ qemuDomainGetJobStatsInternal(virQEMUDriverPtr dr= iver, if (virDomainObjCheckActive(vm) < 0) goto cleanup; =20 - if (!priv->job.current) { + if (!jobPriv->current) { ret =3D 0; goto cleanup; } - *jobInfo =3D qemuDomainJobInfoCopy(priv->job.current); + *jobInfo =3D qemuDomainJobInfoCopy(jobPriv->current); =20 switch ((*jobInfo)->statsType) { case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: @@ -13678,6 +13686,7 @@ qemuDomainGetJobStats(virDomainPtr dom, virQEMUDriverPtr driver =3D dom->conn->privateData; virDomainObjPtr vm; 
qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; g_autoptr(qemuDomainJobInfo) jobInfo =3D NULL; bool completed =3D !!(flags & VIR_DOMAIN_JOB_STATS_COMPLETED); int ret =3D -1; @@ -13692,6 +13701,7 @@ qemuDomainGetJobStats(virDomainPtr dom, goto cleanup; =20 priv =3D vm->privateData; + jobPriv =3D priv->job.privateData; if (qemuDomainGetJobStatsInternal(driver, vm, completed, &jobInfo) < 0) goto cleanup; =20 @@ -13707,7 +13717,7 @@ qemuDomainGetJobStats(virDomainPtr dom, ret =3D qemuDomainJobInfoToParams(jobInfo, type, params, nparams); =20 if (completed && ret =3D=3D 0 && !(flags & VIR_DOMAIN_JOB_STATS_KEEP_C= OMPLETED)) - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); =20 cleanup: virDomainObjEndAPI(&vm); @@ -13739,6 +13749,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) virDomainObjPtr vm; int ret =3D -1; qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; int reason; =20 if (!(vm =3D qemuDomainObjFromDomain(dom))) @@ -13754,6 +13765,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) goto endjob; =20 priv =3D vm->privateData; + jobPriv =3D priv->job.privateData; =20 switch (priv->job.asyncJob) { case QEMU_ASYNC_JOB_NONE: @@ -13774,7 +13786,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) break; =20 case QEMU_ASYNC_JOB_MIGRATION_OUT: - if ((priv->job.current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_POSTC= OPY || + if ((jobPriv->current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_POSTCO= PY || (virDomainObjGetState(vm, &reason) =3D=3D VIR_DOMAIN_PAUSED && reason =3D=3D VIR_DOMAIN_PAUSED_POSTCOPY))) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", @@ -15442,6 +15454,7 @@ qemuDomainSnapshotCreateActiveExternal(virQEMUDrive= rPtr driver, bool resume =3D false; int ret =3D -1; qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; g_autofree char *xml =3D NULL; virDomainSnapshotDefPtr snapdef =3D virDomainSnapshotObjGetDef(snap); bool memory =3D snapdef->memory =3D=3D VIR_DOMAIN_SNAPSHOT_LOCATION_EX= TERNAL; @@ -15519,7 +15532,7 @@ qemuDomainSnapshotCreateActiveExternal(virQEMUDrive= rPtr driver, if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0)) goto cleanup; =20 - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDU= MP; + jobPriv->current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUM= P; =20 /* allow the migration job to be cancelled or the domain to be pau= sed */ qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 0f2f92b211..c517774c9f 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1008,6 +1008,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr drive= r, unsigned int flags) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; int port; size_t i; unsigned long long mirror_speed =3D speed; @@ -1052,7 +1053,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr drive= r, return -1; =20 if (priv->job.abortJob) { - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_CANCELED; + jobPriv->current->status =3D QEMU_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuDomainAsyncJobTypeToString(priv->job.asyncJ= ob), _("canceled by client")); @@ -1070,7 +1071,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr drive= r, } =20 qemuMigrationSrcFetchMirrorStats(driver, vm, QEMU_ASYNC_JOB_MIGRATION_= OUT, - 
priv->job.current); + jobPriv->current); =20 /* Okay, all disks are ready. Modify migrate_flags */ *migrate_flags &=3D ~(QEMU_MONITOR_MIGRATE_NON_SHARED_DISK | @@ -1550,7 +1551,8 @@ qemuMigrationJobCheckStatus(virQEMUDriverPtr driver, qemuDomainAsyncJob asyncJob) { qemuDomainObjPrivatePtr priv =3D vm->privateData; - qemuDomainJobInfoPtr jobInfo =3D priv->job.current; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; + qemuDomainJobInfoPtr jobInfo =3D jobPriv->current; char *error =3D NULL; bool events =3D virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVE= NT); int ret =3D -1; @@ -1620,7 +1622,8 @@ qemuMigrationAnyCompleted(virQEMUDriverPtr driver, unsigned int flags) { qemuDomainObjPrivatePtr priv =3D vm->privateData; - qemuDomainJobInfoPtr jobInfo =3D priv->job.current; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; + qemuDomainJobInfoPtr jobInfo =3D jobPriv->current; int pauseReason; =20 if (qemuMigrationJobCheckStatus(driver, vm, asyncJob) < 0) @@ -1711,7 +1714,8 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriverPtr dr= iver, unsigned int flags) { qemuDomainObjPrivatePtr priv =3D vm->privateData; - qemuDomainJobInfoPtr jobInfo =3D priv->job.current; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; + qemuDomainJobInfoPtr jobInfo =3D jobPriv->current; bool events =3D virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVE= NT); int rv; =20 @@ -1743,9 +1747,9 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriverPtr dr= iver, =20 qemuDomainJobInfoUpdateTime(jobInfo); qemuDomainJobInfoUpdateDowntime(jobInfo); - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); - priv->job.completed =3D qemuDomainJobInfoCopy(jobInfo); - priv->job.completed->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); + jobPriv->completed =3D qemuDomainJobInfoCopy(jobInfo); + jobPriv->completed->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETED; =20 if (asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT && jobInfo->status =3D=3D QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED) @@ -3018,16 +3022,16 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr drive= r, return -1; =20 if (retcode =3D=3D 0) - jobInfo =3D priv->job.completed; + jobInfo =3D jobPriv->completed; else - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); =20 /* Update times with the values sent by the destination daemon */ if (mig->jobInfo && jobInfo) { int reason; =20 /* We need to refresh migration statistics after a completed post-= copy - * migration since priv->job.completed contains obsolete data from= the + * migration since jobPriv->completed contains obsolete data from = the * time we switched to post-copy mode. 
*/ if (virDomainObjGetState(vm, &reason) =3D=3D VIR_DOMAIN_PAUSED && @@ -3479,6 +3483,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, int ret =3D -1; unsigned int migrate_flags =3D QEMU_MONITOR_MIGRATE_BACKGROUND; qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; g_autoptr(qemuMigrationCookie) mig =3D NULL; g_autofree char *tlsAlias =3D NULL; qemuMigrationIOThreadPtr iothread =3D NULL; @@ -3636,7 +3641,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, /* explicitly do this *after* we entered the monitor, * as this is a critical section so we are guaranteed * priv->job.abortJob will not change */ - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_CANCELED; + jobPriv->current->status =3D QEMU_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuDomainAsyncJobTypeToString(priv->job.asyncJob), _("canceled by client")); @@ -3741,7 +3746,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, * resume it now once we finished all block jobs and wait for the real * end of the migration. */ - if (priv->job.current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_PAUSED) { + if (jobPriv->current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_PAUSED) { if (qemuMigrationSrcContinue(driver, vm, QEMU_MONITOR_MIGRATION_STATUS_PRE_SWI= TCHOVER, QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) @@ -3769,11 +3774,11 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, goto error; } =20 - if (priv->job.completed) { - priv->job.completed->stopped =3D priv->job.current->stopped; - qemuDomainJobInfoUpdateTime(priv->job.completed); - qemuDomainJobInfoUpdateDowntime(priv->job.completed); - ignore_value(virTimeMillisNow(&priv->job.completed->sent)); + if (jobPriv->completed) { + jobPriv->completed->stopped =3D jobPriv->current->stopped; + qemuDomainJobInfoUpdateTime(jobPriv->completed); + qemuDomainJobInfoUpdateDowntime(jobPriv->completed); + ignore_value(virTimeMillisNow(&jobPriv->completed->sent)); } =20 cookieFlags |=3D QEMU_MIGRATION_COOKIE_NETWORK | @@ -3801,7 +3806,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, =20 if (virDomainObjIsActive(vm)) { if (cancel && - priv->job.current->status !=3D QEMU_DOMAIN_JOB_STATUS_QEMU_COM= PLETED && + jobPriv->current->status !=3D QEMU_DOMAIN_JOB_STATUS_QEMU_COMP= LETED && qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT) = =3D=3D 0) { qemuMonitorMigrateCancel(priv->mon); @@ -3814,8 +3819,8 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, QEMU_ASYNC_JOB_MIGRATION_OUT, dconn); =20 - if (priv->job.current->status !=3D QEMU_DOMAIN_JOB_STATUS_CANCELED) - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_FAILED; + if (jobPriv->current->status !=3D QEMU_DOMAIN_JOB_STATUS_CANCELED) + jobPriv->current->status =3D QEMU_DOMAIN_JOB_STATUS_FAILED; } =20 if (iothread) @@ -5023,7 +5028,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, : QEMU_MIGRATION_PHASE_FINISH2); =20 qemuDomainCleanupRemove(vm, qemuMigrationDstPrepareCleanup); - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); =20 cookie_flags =3D QEMU_MIGRATION_COOKIE_NETWORK | QEMU_MIGRATION_COOKIE_STATS | @@ -5115,7 +5120,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, goto endjob; } =20 - if (priv->job.current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_POSTCOPY) + if (jobPriv->current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_POSTCOPY) inPostCopy =3D true; =20 if (!(flags & VIR_MIGRATE_PAUSED)) { @@ -5229,9 +5234,9 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, =20 if 
(dom) { if (jobInfo) { - priv->job.completed =3D g_steal_pointer(&jobInfo); - priv->job.completed->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLET= ED; - priv->job.completed->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_= MIGRATION; + jobPriv->completed =3D g_steal_pointer(&jobInfo); + jobPriv->completed->status =3D QEMU_DOMAIN_JOB_STATUS_COMPLETE= D; + jobPriv->completed->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_M= IGRATION; } =20 if (qemuMigrationBakeCookie(mig, driver, vm, @@ -5244,7 +5249,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, * is obsolete anyway. */ if (inPostCopy) - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); } =20 qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, @@ -5473,6 +5478,7 @@ qemuMigrationJobStart(virQEMUDriverPtr driver, unsigned long apiFlags) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; virDomainJobOperation op; unsigned long long mask; =20 @@ -5489,7 +5495,7 @@ qemuMigrationJobStart(virQEMUDriverPtr driver, if (qemuDomainObjBeginAsyncJob(driver, vm, job, op, apiFlags) < 0) return -1; =20 - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + jobPriv->current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; =20 qemuDomainObjSetAsyncJobMask(vm, mask); return 0; diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_coo= kie.c index 81b557e0a8..a0e8cba8ba 100644 --- a/src/qemu/qemu_migration_cookie.c +++ b/src/qemu/qemu_migration_cookie.c @@ -509,12 +509,13 @@ qemuMigrationCookieAddStatistics(qemuMigrationCookieP= tr mig, virDomainObjPtr vm) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 - if (!priv->job.completed) + if (!jobPriv->completed) return 0; =20 g_clear_pointer(&mig->jobInfo, qemuDomainJobInfoFree); - mig->jobInfo =3D qemuDomainJobInfoCopy(priv->job.completed); + mig->jobInfo =3D qemuDomainJobInfoCopy(jobPriv->completed); =20 mig->flags |=3D QEMU_MIGRATION_COOKIE_STATS; =20 @@ -1465,6 +1466,7 @@ qemuMigrationEatCookie(virQEMUDriverPtr driver, unsigned int flags) { g_autoptr(qemuMigrationCookie) mig =3D NULL; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 /* Parse & validate incoming cookie (if any) */ if (cookiein && cookieinlen && @@ -1513,7 +1515,7 @@ qemuMigrationEatCookie(virQEMUDriverPtr driver, } =20 if (flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobInfo) - mig->jobInfo->operation =3D priv->job.current->operation; + mig->jobInfo->operation =3D jobPriv->current->operation; =20 return g_steal_pointer(&mig); } diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 126fabf5ef..652d217b5c 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -657,6 +657,7 @@ qemuProcessHandleStop(qemuMonitorPtr mon G_GNUC_UNUSED, virDomainEventSuspendedDetailType detail; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 virObjectLock(vm); =20 @@ -668,7 +669,7 @@ qemuProcessHandleStop(qemuMonitorPtr mon G_GNUC_UNUSED, if (virDomainObjGetState(vm, NULL) =3D=3D VIR_DOMAIN_RUNNING && !priv->pausedShutdown) { if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { - if (priv->job.current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_PO= STCOPY) + if (jobPriv->current->status =3D=3D QEMU_DOMAIN_JOB_STATUS_POS= TCOPY) reason =3D 
VIR_DOMAIN_PAUSED_POSTCOPY; else reason =3D VIR_DOMAIN_PAUSED_MIGRATION; @@ -680,8 +681,8 @@ qemuProcessHandleStop(qemuMonitorPtr mon G_GNUC_UNUSED, vm->def->name, virDomainPausedReasonTypeToString(reason), detail); =20 - if (priv->job.current) - ignore_value(virTimeMillisNow(&priv->job.current->stopped)); + if (jobPriv->current) + ignore_value(virTimeMillisNow(&jobPriv->current->stopped)); =20 if (priv->signalStop) virDomainObjBroadcast(vm); @@ -1649,6 +1650,7 @@ qemuProcessHandleMigrationStatus(qemuMonitorPtr mon G= _GNUC_UNUSED, void *opaque) { qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; virQEMUDriverPtr driver =3D opaque; virObjectEventPtr event =3D NULL; g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); @@ -1661,12 +1663,13 @@ qemuProcessHandleMigrationStatus(qemuMonitorPtr mon= G_GNUC_UNUSED, qemuMonitorMigrationStatusTypeToString(status)); =20 priv =3D vm->privateData; + jobPriv =3D priv->job.privateData; if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION event without a migration job"); goto cleanup; } =20 - priv->job.current->stats.mig.status =3D status; + jobPriv->current->stats.mig.status =3D status; virDomainObjBroadcast(vm); =20 if (status =3D=3D QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY && @@ -1747,13 +1750,13 @@ qemuProcessHandleDumpCompleted(qemuMonitorPtr mon G= _GNUC_UNUSED, goto cleanup; } jobPriv->dumpCompleted =3D true; - priv->job.current->stats.dump =3D *stats; + jobPriv->current->stats.dump =3D *stats; priv->job.error =3D g_strdup(error); =20 /* Force error if extracting the DUMP_COMPLETED status failed */ if (!error && status < 0) { priv->job.error =3D g_strdup(virGetLastErrorMessage()); - priv->job.current->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_= FAILED; + jobPriv->current->stats.dump.status =3D QEMU_MONITOR_DUMP_STATUS_F= AILED; } =20 virDomainObjBroadcast(vm); @@ -3267,6 +3270,7 @@ int qemuProcessStopCPUs(virQEMUDriverPtr driver, { int ret =3D -1; qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; =20 VIR_FREE(priv->lockState); =20 @@ -3285,8 +3289,8 @@ int qemuProcessStopCPUs(virQEMUDriverPtr driver, /* de-activate netdevs after stopping CPUs */ ignore_value(qemuInterfaceStopDevices(vm->def)); =20 - if (priv->job.current) - ignore_value(virTimeMillisNow(&priv->job.current->stopped)); + if (jobPriv->current) + ignore_value(virTimeMillisNow(&jobPriv->current->stopped)); =20 /* The STOP event handler will change the domain state with the reason * saved in priv->pausedReason and it will also emit corresponding dom= ain @@ -3583,6 +3587,7 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver, unsigned int *stopFlags) { qemuDomainObjPrivatePtr priv =3D vm->privateData; + qemuDomainJobPrivatePtr jobPriv =3D priv->job.privateData; virDomainState state; int reason; unsigned long long now; @@ -3651,11 +3656,11 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver, /* We reset the job parameters for backup so that the job will look * active. 
This is possible because we are able to recover the sta= te * of blockjobs and also the backup job allows all sub-job types */ - priv->job.current =3D g_new0(qemuDomainJobInfo, 1); - priv->job.current->operation =3D VIR_DOMAIN_JOB_OPERATION_BACKUP; - priv->job.current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; - priv->job.current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; - priv->job.current->started =3D now; + jobPriv->current =3D g_new0(qemuDomainJobInfo, 1); + jobPriv->current->operation =3D VIR_DOMAIN_JOB_OPERATION_BACKUP; + jobPriv->current->statsType =3D QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + jobPriv->current->status =3D QEMU_DOMAIN_JOB_STATUS_ACTIVE; + jobPriv->current->started =3D now; break; =20 case QEMU_ASYNC_JOB_NONE: @@ -3760,7 +3765,6 @@ qemuDomainPerfRestart(virDomainObjPtr vm) return 0; } =20 - static void qemuProcessReconnectCheckMemAliasOrderMismatch(virDomainObjPtr vm) { --=20 2.25.1 From nobody Tue May 7 09:55:32 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of redhat.com designates 207.211.31.81 as permitted sender) client-ip=207.211.31.81; envelope-from=libvir-list-bounces@redhat.com; helo=us-smtp-delivery-1.mimecast.com; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of redhat.com designates 207.211.31.81 as permitted sender) smtp.mailfrom=libvir-list-bounces@redhat.com; dmarc=fail(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1597640880; cv=none; d=zohomail.com; s=zohoarc; b=CgGSUOOPcP9bYmo3dda9Adfws8ltek/4u2j0k4I7kRno0iopYS11lue26cLayBxy8g8XfX9YunQ+mLiYOQsq262W05iw8LKQbG0g/SUn1NavCP3PCVDvvbSoD7u2lPoNAwRY7q46y9dYI3PP8PPK5/Rt7ncFLY9HuzdhrrtpHeI= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1597640880; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=9DW6Blc0M/yxqPxc41lRscGJ4fOhjY62wWHZgoph7TI=; b=kdNgh5qLo/lDNO3p5LW1uBAjB85kGwBTMrOGtgDnd7Yl//UllFxThlD4s44j0biHJ0dPugG9qOVK12Paf9pwcynkJQboeTArklYwQ/U40G4YbehmdnsTwy4q7VS9jYgFV+qujnEnMmL11JE12TKr37I5LljjRYVSa/nwSqb8adA= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=pass (zohomail.com: domain of redhat.com designates 207.211.31.81 as permitted sender) smtp.mailfrom=libvir-list-bounces@redhat.com; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from us-smtp-delivery-1.mimecast.com (us-smtp-1.mimecast.com [207.211.31.81]) by mx.zohomail.com with SMTPS id 1597640880816850.0404094559025; Sun, 16 Aug 2020 22:08:00 -0700 (PDT) Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-347-aRnxvE8tPTey00sW7UrR0A-1; Mon, 17 Aug 2020 01:07:57 -0400 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 90D9C2ED8; Mon, 17 Aug 2020 05:07:51 +0000 (UTC) Received: from colo-mx.corp.redhat.com (colo-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.20]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 71BF95C1DC; Mon, 17 Aug 2020 05:07:51 +0000 (UTC) Received: from lists01.pubmisc.prod.ext.phx2.redhat.com (lists01.pubmisc.prod.ext.phx2.redhat.com [10.5.19.33]) by colo-mx.corp.redhat.com (Postfix) with ESMTP id 3D9D2180B658; Mon, 17 Aug 2020 05:07:51 +0000 (UTC) 
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) by lists01.pubmisc.prod.ext.phx2.redhat.com (8.13.8/8.13.8) with ESMTP id 07H57jqp021753 for ; Mon, 17 Aug 2020 01:07:45 -0400 Received: by smtp.corp.redhat.com (Postfix) id 4916D2026F94; Mon, 17 Aug 2020 05:07:45 +0000 (UTC) Received: from mimecast-mx02.redhat.com (mimecast06.extmail.prod.ext.rdu2.redhat.com [10.11.55.22]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 444F820227B1 for ; Mon, 17 Aug 2020 05:07:42 +0000 (UTC) Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [207.211.31.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-SHA384 (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D6E52186E136 for ; Mon, 17 Aug 2020 05:07:42 +0000 (UTC) Received: from mail-pg1-f196.google.com (mail-pg1-f196.google.com [209.85.215.196]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-513-Kr7QHEdtP0KbZJfHP_dEag-1; Mon, 17 Aug 2020 01:07:40 -0400 Received: by mail-pg1-f196.google.com with SMTP id o13so7570228pgf.0 for ; Sun, 16 Aug 2020 22:07:40 -0700 (PDT) Received: from localhost.localdomain ([125.99.150.200]) by smtp.gmail.com with ESMTPSA id s64sm18091678pfs.111.2020.08.16.22.07.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 16 Aug 2020 22:07:38 -0700 (PDT) X-MC-Unique: aRnxvE8tPTey00sW7UrR0A-1 X-MC-Unique: Kr7QHEdtP0KbZJfHP_dEag-1 X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=9DW6Blc0M/yxqPxc41lRscGJ4fOhjY62wWHZgoph7TI=; b=cZmrcO7N8xtupODs4B+1lDYh2j39OA60qQ41xGVDmP8R8citt0UtZQky+LuGU5i+GL w7Ndz3nlujOQSO4/IuMpf99kFk2+9spei19Zp1qoTQXDPS2YmulxCTYU7fy3akxZ811U 7Vtbm45PMpXD41dcPJ2Q0edrJAwBE0WZvT8QtYbVZFhQ/yvoG305WjCzdXw3LNmx+IFy 76rlgkYPnSsSllykbNR9XZOJ+XefxIIJ+idhhABWqzp7mV/I83o8iJjCsfAaAkcwOXqa yhwdaPFqAqIpwGGTP46Im1iPGuYD4cGZT/2ffv7RT2dcBNml4JbGGRPud+U+f6qOk3Df smRw== X-Gm-Message-State: AOAM532jNE+u6vlaovR/Jd0krTIIXWY45gbtov7iLu4Cvk/VrAMHGw0Q oNz50nUENXDHq+MSzw6bpJPbKppmPwwJqQ== X-Google-Smtp-Source: ABdhPJwW4HlhqksiFsoX7LJdzt7SqtqMFSzeuyRwOBwu0SPby5lHfD3WqxS8E5Ycx7ubt4QzledgOw== X-Received: by 2002:a62:4e96:: with SMTP id c144mr10244960pfb.27.1597640858932; Sun, 16 Aug 2020 22:07:38 -0700 (PDT) From: Prathamesh Chavan To: libvir-list@redhat.com Subject: [GSoC][PATCH v2 2/6] qemu_domainjob: jobs_queued parameter added to `qemuDomainJobPrivate` Date: Mon, 17 Aug 2020 10:37:17 +0530 Message-Id: <20200817050721.7063-3-pc44800@gmail.com> In-Reply-To: <20200817050721.7063-1-pc44800@gmail.com> References: <20200817050721.7063-1-pc44800@gmail.com> MIME-Version: 1.0 X-Mimecast-Impersonation-Protect: Policy=CLT - Impersonation Protection Definition; Similar Internal Domain=false; Similar Monitored External Domain=false; Custom External Domain=false; Mimecast External Domain=false; Newly Observed Domain=false; Internal User Name=false; Custom Display Name List=false; Reply-to Address Mismatch=false; Targeted Threat Dictionary=false; Mimecast Threat Dictionary=false; Custom Threat Dictionary=false; X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4 X-loop: libvir-list@redhat.com Cc: Prathamesh Chavan X-BeenThere: libvir-list@redhat.com X-Mailman-Version: 2.1.12 Precedence: junk List-Id: Development discussions about the libvirt library & tools List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: libvir-list-bounces@redhat.com 
Errors-To: libvir-list-bounces@redhat.com X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=libvir-list-bounces@redhat.com X-Mimecast-Spam-Score: 0.003 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Since the attribute `jobs_queued` was specific to jobs, we decided to move this from `qemuDomainObjPrivate` to `qemuDomainJobObj` structure. Signed-off-by: Prathamesh Chavan --- src/qemu/qemu_domain.h | 2 -- src/qemu/qemu_domainjob.c | 14 +++++++------- src/qemu/qemu_domainjob.h | 2 ++ src/qemu/qemu_process.c | 2 +- 4 files changed, 10 insertions(+), 10 deletions(-) diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 386ae17272..507f710200 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -161,8 +161,6 @@ struct _qemuDomainObjPrivate { bool pausedShutdown; virTristateBool allowReboot; =20 - int jobs_queued; - unsigned long migMaxBandwidth; char *origname; int nbdPort; /* Port used for migration with NBD */ diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 503a87bb12..7cd1aabd9e 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -365,13 +365,13 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, if (virTimeMillisNow(&now) < 0) return -1; =20 - priv->jobs_queued++; + priv->job.jobs_queued++; then =3D now + QEMU_JOB_WAIT_TIME; =20 retry: if ((!async && job !=3D QEMU_JOB_DESTROY) && cfg->maxQueuedJobs && - priv->jobs_queued > cfg->maxQueuedJobs) { + priv->job.jobs_queued > cfg->maxQueuedJobs) { goto error; } =20 @@ -502,7 +502,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, } ret =3D -2; } else if (cfg->maxQueuedJobs && - priv->jobs_queued > cfg->maxQueuedJobs) { + priv->job.jobs_queued > cfg->maxQueuedJobs) { if (blocker && agentBlocker) { virReportError(VIR_ERR_OPERATION_FAILED, _("cannot acquire state change " @@ -532,7 +532,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, } =20 cleanup: - priv->jobs_queued--; + priv->job.jobs_queued--; return ret; } =20 @@ -653,7 +653,7 @@ qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainO= bjPtr obj) qemuDomainObjPrivatePtr priv =3D obj->privateData; qemuDomainJob job =3D priv->job.active; =20 - priv->jobs_queued--; + priv->job.jobs_queued--; =20 VIR_DEBUG("Stopping job: %s (async=3D%s vm=3D%p name=3D%s)", qemuDomainJobTypeToString(job), @@ -674,7 +674,7 @@ qemuDomainObjEndAgentJob(virDomainObjPtr obj) qemuDomainObjPrivatePtr priv =3D obj->privateData; qemuDomainAgentJob agentJob =3D priv->job.agentActive; =20 - priv->jobs_queued--; + priv->job.jobs_queued--; =20 VIR_DEBUG("Stopping agent job: %s (async=3D%s vm=3D%p name=3D%s)", qemuDomainAgentJobTypeToString(agentJob), @@ -692,7 +692,7 @@ qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDo= mainObjPtr obj) { qemuDomainObjPrivatePtr priv =3D obj->privateData; =20 - priv->jobs_queued--; + priv->job.jobs_queued--; =20 VIR_DEBUG("Stopping async job: %s (vm=3D%p name=3D%s)", qemuDomainAsyncJobTypeToString(priv->job.asyncJob), diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 88051d099a..0696b79fe3 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -152,6 +152,8 @@ struct _qemuDomainJobObj { char *error; /* job event completion error */ unsigned long apiFlags; /* flags passed to the API which started the a= sync job */ =20 + int jobs_queued; + void *privateData; /* job specific collection of data= */ 
qemuDomainObjPrivateJobCallbacksPtr cb; }; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 652d217b5c..e114e4c4ce 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -3644,7 +3644,7 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver, ignore_value(virTimeMillisNow(&now)); =20 /* Restore the config of the async job which is not persisted */ - priv->jobs_queued++; + priv->job.jobs_queued++; priv->job.asyncJob =3D QEMU_ASYNC_JOB_BACKUP; priv->job.asyncOwnerAPI =3D virThreadJobGet(); priv->job.asyncStarted =3D now; --=20 2.25.1
From nobody Tue May 7 09:55:32 2024
Delivered-To: importer@patchew.org
From: Prathamesh Chavan
To: libvir-list@redhat.com
Subject: [GSoC][PATCH v2 3/6] qemu_domainjob: `maxQueuedJobs` added to `qemuDomainJobPrivate`
Date: Mon, 17 Aug 2020 10:37:18 +0530
Message-Id: <20200817050721.7063-4-pc44800@gmail.com>
In-Reply-To: <20200817050721.7063-1-pc44800@gmail.com>
References: <20200817050721.7063-1-pc44800@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"
Reference to
`maxQueuedJobs` required us to access config of the qemu-driver. And creating its copy in the `qemuDomainJob` helped us access the variable without referencing the driver's config. Signed-off-by: Prathamesh Chavan --- src/qemu/qemu_domain.c | 5 ++++- src/qemu/qemu_domainjob.c | 13 +++++++------ src/qemu/qemu_domainjob.h | 4 +++- 3 files changed, 14 insertions(+), 8 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 1ae44ae39f..677fa7ea91 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -2085,11 +2085,14 @@ static void * qemuDomainObjPrivateAlloc(void *opaque) { qemuDomainObjPrivatePtr priv; + virQEMUDriverPtr driver =3D opaque; + g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); =20 if (VIR_ALLOC(priv) < 0) return NULL; =20 - if (qemuDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks) < 0) { + if (qemuDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks, + cfg->maxQueuedJobs) < 0) { virReportSystemError(errno, "%s", _("Unable to init qemu driver mutexes")); goto error; diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 7cd1aabd9e..eebc144747 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -117,10 +117,12 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob = job, =20 int qemuDomainObjInitJob(qemuDomainJobObjPtr job, - qemuDomainObjPrivateJobCallbacksPtr cb) + qemuDomainObjPrivateJobCallbacksPtr cb, + unsigned int maxQueuedJobs) { memset(job, 0, sizeof(*job)); job->cb =3D cb; + job->maxQueuedJobs =3D maxQueuedJobs; =20 if (!(job->privateData =3D job->cb->allocJobPrivate())) return -1; @@ -344,7 +346,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, unsigned long long then; bool nested =3D job =3D=3D QEMU_JOB_ASYNC_NESTED; bool async =3D job =3D=3D QEMU_JOB_ASYNC; - g_autoptr(virQEMUDriverConfig) cfg =3D virQEMUDriverGetConfig(driver); const char *blocker =3D NULL; const char *agentBlocker =3D NULL; int ret =3D -1; @@ -370,8 +371,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, =20 retry: if ((!async && job !=3D QEMU_JOB_DESTROY) && - cfg->maxQueuedJobs && - priv->job.jobs_queued > cfg->maxQueuedJobs) { + priv->job.maxQueuedJobs && + priv->job.jobs_queued > priv->job.maxQueuedJobs) { goto error; } =20 @@ -501,8 +502,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, _("cannot acquire state change lock")); } ret =3D -2; - } else if (cfg->maxQueuedJobs && - priv->job.jobs_queued > cfg->maxQueuedJobs) { + } else if (priv->job.maxQueuedJobs && + priv->job.jobs_queued > priv->job.maxQueuedJobs) { if (blocker && agentBlocker) { virReportError(VIR_ERR_OPERATION_FAILED, _("cannot acquire state change " diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 0696b79fe3..11e7f2f432 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -153,6 +153,7 @@ struct _qemuDomainJobObj { unsigned long apiFlags; /* flags passed to the API which started the a= sync job */ =20 int jobs_queued; + unsigned int maxQueuedJobs; =20 void *privateData; /* job specific collection of data= */ qemuDomainObjPrivateJobCallbacksPtr cb; @@ -215,7 +216,8 @@ void qemuDomainObjFreeJob(qemuDomainJobObjPtr job); =20 int qemuDomainObjInitJob(qemuDomainJobObjPtr job, - qemuDomainObjPrivateJobCallbacksPtr cb); + qemuDomainObjPrivateJobCallbacksPtr cb, + unsigned int maxQueuedJobs); =20 bool qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob); =20 --=20 2.25.1 From nobody Tue May 7 09:55:32 2024 Delivered-To: importer@patchew.org 
From: Prathamesh Chavan
To: libvir-list@redhat.com
Subject: [GSoC][PATCH v2 4/6] qemu_domain: function declarations moved to correct file
Date: Mon, 17 Aug 2020 10:37:19 +0530
Message-Id: <20200817050721.7063-5-pc44800@gmail.com>
In-Reply-To: <20200817050721.7063-1-pc44800@gmail.com>
References: <20200817050721.7063-1-pc44800@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"
Functions `qemuDomainRemoveInactiveJob` and `qemuDomainRemoveInactiveJobLocked` had their declarations misplaced in `qemu_domainjob` and were moved to `qemu_domain`.
Signed-off-by: Prathamesh Chavan Reviewed-by: Erik Skultety --- src/qemu/qemu_domain.h | 6 ++++++ src/qemu/qemu_domainjob.h | 6 ------ 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 507f710200..3e5d115096 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -540,6 +540,12 @@ struct _qemuDomainJobPrivate { }; =20 =20 +void qemuDomainRemoveInactiveJob(virQEMUDriverPtr driver, + virDomainObjPtr vm); + +void qemuDomainRemoveInactiveJobLocked(virQEMUDriverPtr driver, + virDomainObjPtr vm); + void qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, virDomainObjPtr vm); =20 diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 11e7f2f432..c5f68ca778 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -204,12 +204,6 @@ void qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr dri= ver, virDomainObjPtr obj); void qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj); =20 -void qemuDomainRemoveInactiveJob(virQEMUDriverPtr driver, - virDomainObjPtr vm); - -void qemuDomainRemoveInactiveJobLocked(virQEMUDriverPtr driver, - virDomainObjPtr vm); - bool qemuDomainTrackJob(qemuDomainJob job); =20 void qemuDomainObjFreeJob(qemuDomainJobObjPtr job); --=20 2.25.1 From nobody Tue May 7 09:55:32 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of redhat.com designates 205.139.110.120 as permitted sender) client-ip=205.139.110.120; envelope-from=libvir-list-bounces@redhat.com; helo=us-smtp-1.mimecast.com; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of redhat.com designates 205.139.110.120 as permitted sender) smtp.mailfrom=libvir-list-bounces@redhat.com; dmarc=fail(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1597640883; cv=none; d=zohomail.com; s=zohoarc; b=kV4tmHaHkcgjb53RHhxBcO3oZpPGaqDk3clcB8mcYN21w3r4ISXccANZYukEDUtbP7AUk5rE8m9cMDEvydTe+BfGVaLYSV7hZTzQrtEdgvYrBQf/Qw81c9b4IDsPT1RqrubGzyCtiLWMrXczApcS/jIV+IRhOgvfOV3FzHiz8xU= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1597640883; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=scbdyVtnx186fx3DHa5h6tmaqhKxCbEYdXr69kRAWFI=; b=kSkeexdea2HnV6kFj3HsPFYipyNGPnqeBvn/bOfU0a07MhSFnWHuHNVltNjq6HpbyxqOJNqpzsheXPX23tf7NDjYelO+SzD9IpxU4xo0dOAn/T7LqjGLPcMjwHlIilEkIMNTeojCzZYq+yc7YdXY75SyYZflvOU1TAKcER9aj1A= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=pass (zohomail.com: domain of redhat.com designates 205.139.110.120 as permitted sender) smtp.mailfrom=libvir-list-bounces@redhat.com; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [205.139.110.120]) by mx.zohomail.com with SMTPS id 1597640883138444.4364613590642; Sun, 16 Aug 2020 22:08:03 -0700 (PDT) Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-32-0sKy8HsQNY-gsDpOB-pKbA-1; Mon, 17 Aug 2020 01:07:59 -0400 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0B98781F009; Mon, 17 Aug 2020 05:07:54 +0000 (UTC) Received: from colo-mx.corp.redhat.com 
(colo-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.21]) by smtp.corp.redhat.com (Postfix) with ESMTPS id DD74A7D649; Mon, 17 Aug 2020 05:07:53 +0000 (UTC)
From: Prathamesh Chavan
To: libvir-list@redhat.com
Subject: [GSoC][PATCH v2 5/6] virmigration: `qemuMigrationJobPhase` transformed for more generic use
Date: Mon, 17 Aug 2020 10:37:20 +0530
Message-Id: <20200817050721.7063-6-pc44800@gmail.com>
In-Reply-To: <20200817050721.7063-1-pc44800@gmail.com>
References: <20200817050721.7063-1-pc44800@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"
`qemuMigrationJobPhase` was transformed into `virMigrationJobPhase` and a common utility file `virmigration` was created to store its definition.
Signed-off-by: Prathamesh Chavan --- src/hypervisor/meson.build | 1 + src/hypervisor/virmigration.c | 41 ++++++++++++++++++++ src/hypervisor/virmigration.h | 38 ++++++++++++++++++ src/libvirt_private.syms | 4 ++ src/qemu/MIGRATION.txt | 8 ++-- src/qemu/qemu_domainjob.c | 4 +- src/qemu/qemu_migration.c | 73 +++++++++++++++++------------------ src/qemu/qemu_migration.h | 17 +------- src/qemu/qemu_process.c | 48 +++++++++++------------ 9 files changed, 151 insertions(+), 83 deletions(-) create mode 100644 src/hypervisor/virmigration.c create mode 100644 src/hypervisor/virmigration.h diff --git a/src/hypervisor/meson.build b/src/hypervisor/meson.build index 85149c683e..c81bdfa2fc 100644 --- a/src/hypervisor/meson.build +++ b/src/hypervisor/meson.build @@ -3,6 +3,7 @@ hypervisor_sources =3D [ 'domain_driver.c', 'virclosecallbacks.c', 'virhostdev.c', + 'virmigration.c', ] =20 hypervisor_lib =3D static_library( diff --git a/src/hypervisor/virmigration.c b/src/hypervisor/virmigration.c new file mode 100644 index 0000000000..2cad5a6b1b --- /dev/null +++ b/src/hypervisor/virmigration.c @@ -0,0 +1,41 @@ +/* + * virmigration.c: hypervisor migration handling + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * .
+ */ + +#include + +#include "virmigration.h" +#include "domain_driver.h" +#include "virlog.h" + +#define VIR_FROM_THIS VIR_FROM_DOMAIN + +VIR_LOG_INIT("util.migration"); + +VIR_ENUM_IMPL(virMigrationJobPhase, + VIR_MIGRATION_PHASE_LAST, + "none", + "perform2", + "begin3", + "perform3", + "perform3_done", + "confirm3_cancelled", + "confirm3", + "prepare", + "finish2", + "finish3", +); diff --git a/src/hypervisor/virmigration.h b/src/hypervisor/virmigration.h new file mode 100644 index 0000000000..e03d71c1bb --- /dev/null +++ b/src/hypervisor/virmigration.h @@ -0,0 +1,38 @@ +/* + * virmigration.h: hypervisor migration handling + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * . + */ + +#pragma once + +#include "virenum.h" + + +typedef enum { + VIR_MIGRATION_PHASE_NONE =3D 0, + VIR_MIGRATION_PHASE_PERFORM2, + VIR_MIGRATION_PHASE_BEGIN3, + VIR_MIGRATION_PHASE_PERFORM3, + VIR_MIGRATION_PHASE_PERFORM3_DONE, + VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED, + VIR_MIGRATION_PHASE_CONFIRM3, + VIR_MIGRATION_PHASE_PREPARE, + VIR_MIGRATION_PHASE_FINISH2, + VIR_MIGRATION_PHASE_FINISH3, + + VIR_MIGRATION_PHASE_LAST +} virMigrationJobPhase; +VIR_ENUM_DECL(virMigrationJobPhase); diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index 01c2e710cd..cf78c2f27a 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -1474,6 +1474,10 @@ virHostdevUpdateActiveSCSIDevices; virHostdevUpdateActiveUSBDevices; =20 =20 +# hypervisor/virmigration.h +virMigrationJobPhaseTypeFromString; +virMigrationJobPhaseTypeToString; + # libvirt_internal.h virConnectSupportsFeature; virDomainMigrateBegin3; diff --git a/src/qemu/MIGRATION.txt b/src/qemu/MIGRATION.txt index e861fd001e..dd044c6064 100644 --- a/src/qemu/MIGRATION.txt +++ b/src/qemu/MIGRATION.txt @@ -74,7 +74,7 @@ The sequence of calling qemuMigrationJob* helper methods = is as follows: migration type and version) has to start migration job and keep it activ= e: =20 qemuMigrationJobStart(driver, vm, QEMU_JOB_MIGRATION_{IN,OUT}); - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_*); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_*); ...do work... qemuMigrationJobContinue(vm); =20 @@ -82,7 +82,7 @@ The sequence of calling qemuMigrationJob* helper methods = is as follows: =20 if (!qemuMigrationJobIsActive(vm, QEMU_JOB_MIGRATION_{IN,OUT})) return; - qemuMigrationJobStartPhase(driver, vm, QEMU_MIGRATION_PHASE_*); + qemuMigrationJobStartPhase(driver, vm, VIR_MIGRATION_PHASE_*); ...do work... qemuMigrationJobContinue(vm); =20 @@ -90,11 +90,11 @@ The sequence of calling qemuMigrationJob* helper method= s is as follows: =20 if (!qemuMigrationJobIsActive(vm, QEMU_JOB_MIGRATION_{IN,OUT})) return; - qemuMigrationJobStartPhase(driver, vm, QEMU_MIGRATION_PHASE_*); + qemuMigrationJobStartPhase(driver, vm, VIR_MIGRATION_PHASE_*); ...do work... 
qemuMigrationJobFinish(driver, vm); =20 While migration job is running (i.e., after qemuMigrationJobStart* but bef= ore qemuMigrationJob{Continue,Finish}), migration phase can be advanced using =20 - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_*); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_*); diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index eebc144747..18abc0d986 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -70,7 +70,7 @@ qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, switch (job) { case QEMU_ASYNC_JOB_MIGRATION_OUT: case QEMU_ASYNC_JOB_MIGRATION_IN: - return qemuMigrationJobPhaseTypeToString(phase); + return virMigrationJobPhaseTypeToString(phase); =20 case QEMU_ASYNC_JOB_SAVE: case QEMU_ASYNC_JOB_DUMP: @@ -96,7 +96,7 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job, switch (job) { case QEMU_ASYNC_JOB_MIGRATION_OUT: case QEMU_ASYNC_JOB_MIGRATION_IN: - return qemuMigrationJobPhaseTypeFromString(phase); + return virMigrationJobPhaseTypeFromString(phase); =20 case QEMU_ASYNC_JOB_SAVE: case QEMU_ASYNC_JOB_DUMP: diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index c517774c9f..996c03e948 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -67,8 +67,8 @@ =20 VIR_LOG_INIT("qemu.qemu_migration"); =20 -VIR_ENUM_IMPL(qemuMigrationJobPhase, - QEMU_MIGRATION_PHASE_LAST, +VIR_ENUM_IMPL(virMigrationJobPhase, + VIR_MIGRATION_PHASE_LAST, "none", "perform2", "begin3", @@ -91,13 +91,13 @@ qemuMigrationJobStart(virQEMUDriverPtr driver, static void qemuMigrationJobSetPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, - qemuMigrationJobPhase phase) + virMigrationJobPhase phase) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); =20 static void qemuMigrationJobStartPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, - qemuMigrationJobPhase phase) + virMigrationJobPhase phase) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); =20 static void @@ -2027,13 +2027,13 @@ qemuMigrationSrcCleanup(virDomainObjPtr vm, " was closed; canceling the migration", vm->def->name); =20 - switch ((qemuMigrationJobPhase) priv->job.phase) { - case QEMU_MIGRATION_PHASE_BEGIN3: + switch ((virMigrationJobPhase) priv->job.phase) { + case VIR_MIGRATION_PHASE_BEGIN3: /* just forget we were about to migrate */ qemuDomainObjDiscardAsyncJob(driver, vm); break; =20 - case QEMU_MIGRATION_PHASE_PERFORM3_DONE: + case VIR_MIGRATION_PHASE_PERFORM3_DONE: VIR_WARN("Migration of domain %s finished but we don't know if the" " domain was successfully started on destination or not", vm->def->name); @@ -2043,19 +2043,19 @@ qemuMigrationSrcCleanup(virDomainObjPtr vm, qemuDomainObjDiscardAsyncJob(driver, vm); break; =20 - case QEMU_MIGRATION_PHASE_PERFORM3: + case VIR_MIGRATION_PHASE_PERFORM3: /* cannot be seen without an active migration API; unreachable */ - case QEMU_MIGRATION_PHASE_CONFIRM3: - case QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED: + case VIR_MIGRATION_PHASE_CONFIRM3: + case VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED: /* all done; unreachable */ - case QEMU_MIGRATION_PHASE_PREPARE: - case QEMU_MIGRATION_PHASE_FINISH2: - case QEMU_MIGRATION_PHASE_FINISH3: + case VIR_MIGRATION_PHASE_PREPARE: + case VIR_MIGRATION_PHASE_FINISH2: + case VIR_MIGRATION_PHASE_FINISH3: /* incoming migration; unreachable */ - case QEMU_MIGRATION_PHASE_PERFORM2: + case VIR_MIGRATION_PHASE_PERFORM2: /* single phase outgoing migration; unreachable */ - case QEMU_MIGRATION_PHASE_NONE: - case QEMU_MIGRATION_PHASE_LAST: + case VIR_MIGRATION_PHASE_NONE: + 
case VIR_MIGRATION_PHASE_LAST: /* unreachable */ ; } @@ -2091,7 +2091,7 @@ qemuMigrationSrcBeginPhase(virQEMUDriverPtr driver, * change protection. */ if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT) - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_BEGIN3); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_BEGIN3); =20 if (!qemuMigrationSrcIsAllowed(driver, vm, true, flags)) return NULL; @@ -2550,7 +2550,7 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver, if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, flags) < 0) goto cleanup; - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PREPARE); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PREPARE); =20 /* Domain starts inactive, even if the domain XML had an id field. */ vm->def->id =3D -1; @@ -3011,10 +3011,9 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver, =20 virCheckFlags(QEMU_MIGRATION_FLAGS, -1); =20 - qemuMigrationJobSetPhase(driver, vm, - retcode =3D=3D 0 - ? QEMU_MIGRATION_PHASE_CONFIRM3 - : QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED); + qemuMigrationJobSetPhase(driver, vm, retcode =3D=3D 0 + ? VIR_MIGRATION_PHASE_CONFIRM3 + : VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED); =20 if (!(mig =3D qemuMigrationEatCookie(driver, vm->def, priv->origname, = priv, cookiein, cookieinlen, @@ -3104,7 +3103,7 @@ qemuMigrationSrcConfirm(virQEMUDriverPtr driver, unsigned int flags, int cancelled) { - qemuMigrationJobPhase phase; + virMigrationJobPhase phase; virQEMUDriverConfigPtr cfg =3D NULL; int ret =3D -1; =20 @@ -3114,9 +3113,9 @@ qemuMigrationSrcConfirm(virQEMUDriverPtr driver, goto cleanup; =20 if (cancelled) - phase =3D QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED; + phase =3D VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED; else - phase =3D QEMU_MIGRATION_PHASE_CONFIRM3; + phase =3D VIR_MIGRATION_PHASE_CONFIRM3; =20 qemuMigrationJobStartPhase(driver, vm, phase); virCloseCallbacksUnset(driver->closeCallbacks, vm, @@ -4064,7 +4063,7 @@ qemuMigrationSrcPerformPeer2Peer2(virQEMUDriverPtr dr= iver, * until the migration is complete. */ VIR_DEBUG("Perform %p", sconn); - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM2); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM2); if (flags & VIR_MIGRATE_TUNNELLED) ret =3D qemuMigrationSrcPerformTunnel(driver, vm, st, NULL, NULL, 0, NULL, NULL, @@ -4302,7 +4301,7 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriverPtr dr= iver, * confirm migration completion. 
*/ VIR_DEBUG("Perform3 %p uri=3D%s", sconn, NULLSTR(uri)); - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM3); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM3); VIR_FREE(cookiein); cookiein =3D g_steal_pointer(&cookieout); cookieinlen =3D cookieoutlen; @@ -4328,7 +4327,7 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriverPtr dr= iver, virErrorPreserveLast(&orig_err); } else { qemuMigrationJobSetPhase(driver, vm, - QEMU_MIGRATION_PHASE_PERFORM3_DONE); + VIR_MIGRATION_PHASE_PERFORM3_DONE); } =20 /* If Perform returns < 0, then we need to cancel the VM @@ -4692,7 +4691,7 @@ qemuMigrationSrcPerformJob(virQEMUDriverPtr driver, migParams, flags, dname, re= source, &v3proto); } else { - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM2= ); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM2); ret =3D qemuMigrationSrcPerformNative(driver, vm, persist_xml, uri= , cookiein, cookieinlen, cookieout, cookieoutlen, flags, resource, NULL, NULL, 0= , NULL, @@ -4777,7 +4776,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver, return ret; } =20 - qemuMigrationJobStartPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM3); + qemuMigrationJobStartPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM3); virCloseCallbacksUnset(driver->closeCallbacks, vm, qemuMigrationSrcCleanup); =20 @@ -4791,7 +4790,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver, goto endjob; } =20 - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM3_DON= E); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM3_DONE= ); =20 if (virCloseCallbacksSet(driver->closeCallbacks, vm, conn, qemuMigrationSrcCleanup) < 0) @@ -5024,8 +5023,8 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, ignore_value(virTimeMillisNow(&timeReceived)); =20 qemuMigrationJobStartPhase(driver, vm, - v3proto ? QEMU_MIGRATION_PHASE_FINISH3 - : QEMU_MIGRATION_PHASE_FINISH2); + v3proto ? 
VIR_MIGRATION_PHASE_FINISH3 + : VIR_MIGRATION_PHASE_FINISH2); =20 qemuDomainCleanupRemove(vm, qemuMigrationDstPrepareCleanup); g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); @@ -5504,14 +5503,14 @@ qemuMigrationJobStart(virQEMUDriverPtr driver, static void qemuMigrationJobSetPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, - qemuMigrationJobPhase phase) + virMigrationJobPhase phase) { qemuDomainObjPrivatePtr priv =3D vm->privateData; =20 if (phase < priv->job.phase) { VIR_ERROR(_("migration protocol going backwards %s =3D> %s"), - qemuMigrationJobPhaseTypeToString(priv->job.phase), - qemuMigrationJobPhaseTypeToString(phase)); + virMigrationJobPhaseTypeToString(priv->job.phase), + virMigrationJobPhaseTypeToString(phase)); return; } =20 @@ -5521,7 +5520,7 @@ qemuMigrationJobSetPhase(virQEMUDriverPtr driver, static void qemuMigrationJobStartPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, - qemuMigrationJobPhase phase) + virMigrationJobPhase phase) { qemuMigrationJobSetPhase(driver, vm, phase); } diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index b6f88d3fd9..b05f5254b4 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -24,6 +24,7 @@ #include "qemu_conf.h" #include "qemu_domain.h" #include "qemu_migration_params.h" +#include "virmigration.h" #include "virenum.h" =20 /* @@ -87,22 +88,6 @@ NULL =20 =20 -typedef enum { - QEMU_MIGRATION_PHASE_NONE =3D 0, - QEMU_MIGRATION_PHASE_PERFORM2, - QEMU_MIGRATION_PHASE_BEGIN3, - QEMU_MIGRATION_PHASE_PERFORM3, - QEMU_MIGRATION_PHASE_PERFORM3_DONE, - QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED, - QEMU_MIGRATION_PHASE_CONFIRM3, - QEMU_MIGRATION_PHASE_PREPARE, - QEMU_MIGRATION_PHASE_FINISH2, - QEMU_MIGRATION_PHASE_FINISH3, - - QEMU_MIGRATION_PHASE_LAST -} qemuMigrationJobPhase; -VIR_ENUM_DECL(qemuMigrationJobPhase); - char * qemuMigrationSrcBegin(virConnectPtr conn, virDomainObjPtr vm, diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index e114e4c4ce..0f2cd47044 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -3437,24 +3437,24 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driv= er, (state =3D=3D VIR_DOMAIN_RUNNING && reason =3D=3D VIR_DOMAIN_RUNNING_POSTCOPY); =20 - switch ((qemuMigrationJobPhase) job->phase) { - case QEMU_MIGRATION_PHASE_NONE: - case QEMU_MIGRATION_PHASE_PERFORM2: - case QEMU_MIGRATION_PHASE_BEGIN3: - case QEMU_MIGRATION_PHASE_PERFORM3: - case QEMU_MIGRATION_PHASE_PERFORM3_DONE: - case QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED: - case QEMU_MIGRATION_PHASE_CONFIRM3: - case QEMU_MIGRATION_PHASE_LAST: + switch ((virMigrationJobPhase) job->phase) { + case VIR_MIGRATION_PHASE_NONE: + case VIR_MIGRATION_PHASE_PERFORM2: + case VIR_MIGRATION_PHASE_BEGIN3: + case VIR_MIGRATION_PHASE_PERFORM3: + case VIR_MIGRATION_PHASE_PERFORM3_DONE: + case VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED: + case VIR_MIGRATION_PHASE_CONFIRM3: + case VIR_MIGRATION_PHASE_LAST: /* N/A for incoming migration */ break; =20 - case QEMU_MIGRATION_PHASE_PREPARE: + case VIR_MIGRATION_PHASE_PREPARE: VIR_DEBUG("Killing unfinished incoming migration for domain %s", vm->def->name); return -1; =20 - case QEMU_MIGRATION_PHASE_FINISH2: + case VIR_MIGRATION_PHASE_FINISH2: /* source domain is already killed so let's just resume the domain * and hope we are all set */ VIR_DEBUG("Incoming migration finished, resuming domain %s", @@ -3466,7 +3466,7 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver, } break; =20 - case QEMU_MIGRATION_PHASE_FINISH3: + case VIR_MIGRATION_PHASE_FINISH3: /* migration 
finished, we started resuming the domain but didn't
         * confirm success or failure yet; killing it seems safest unless
         * we already started guest CPUs or we were in post-copy mode */
@@ -3498,22 +3498,22 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver,
                                   reason == VIR_DOMAIN_PAUSED_POSTCOPY_FAILED);
     bool resume = false;

-    switch ((qemuMigrationJobPhase) job->phase) {
-    case QEMU_MIGRATION_PHASE_NONE:
-    case QEMU_MIGRATION_PHASE_PREPARE:
-    case QEMU_MIGRATION_PHASE_FINISH2:
-    case QEMU_MIGRATION_PHASE_FINISH3:
-    case QEMU_MIGRATION_PHASE_LAST:
+    switch ((virMigrationJobPhase) job->phase) {
+    case VIR_MIGRATION_PHASE_NONE:
+    case VIR_MIGRATION_PHASE_PREPARE:
+    case VIR_MIGRATION_PHASE_FINISH2:
+    case VIR_MIGRATION_PHASE_FINISH3:
+    case VIR_MIGRATION_PHASE_LAST:
         /* N/A for outgoing migration */
         break;

-    case QEMU_MIGRATION_PHASE_BEGIN3:
+    case VIR_MIGRATION_PHASE_BEGIN3:
         /* nothing happened so far, just forget we were about to migrate the
          * domain */
         break;

-    case QEMU_MIGRATION_PHASE_PERFORM2:
-    case QEMU_MIGRATION_PHASE_PERFORM3:
+    case VIR_MIGRATION_PHASE_PERFORM2:
+    case VIR_MIGRATION_PHASE_PERFORM3:
         /* migration is still in progress, let's cancel it and resume the
          * domain; however we can only do that before migration enters
          * post-copy mode
@@ -3531,7 +3531,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver,
         }
         break;

-    case QEMU_MIGRATION_PHASE_PERFORM3_DONE:
+    case VIR_MIGRATION_PHASE_PERFORM3_DONE:
         /* migration finished but we didn't have a chance to get the result
          * of Finish3 step; third party needs to check what to do next; in
          * post-copy mode we can use PAUSED_POSTCOPY_FAILED state for this
@@ -3540,7 +3540,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver,
             qemuMigrationAnyPostcopyFailed(driver, vm);
         break;

-    case QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED:
+    case VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED:
         /* Finish3 failed, we need to resume the domain, but once we enter
          * post-copy mode there's no way back, so let's just mark the domain
          * as broken in that case
@@ -3554,7 +3554,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver,
         }
         break;

-    case QEMU_MIGRATION_PHASE_CONFIRM3:
+    case VIR_MIGRATION_PHASE_CONFIRM3:
         /* migration completed, we need to kill the domain here */
         *stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED;
         return -1;
-- 
2.25.1
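
To make the sequence described in the doc comment at the top of this patch concrete, here is a minimal sketch of a source-side flow after the rename. Only the helper functions and VIR_MIGRATION_PHASE_* constants touched by this patch are real; the wrapper function name and the elided work between phases are invented for illustration.

    /* Illustrative sketch only -- not part of the patch.  Error handling and
     * the actual migration work between phases are omitted. */
    static int
    exampleOutgoingMigration(virQEMUDriverPtr driver, virDomainObjPtr vm)
    {
        if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, 0) < 0)
            return -1;

        /* Begin step of the v3 protocol */
        qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_BEGIN3);

        /* ... produce the migration cookie / migratable XML ... */

        /* Perform step; qemuMigrationJobSetPhase refuses to move the phase backwards */
        qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM3);

        /* ... drive the actual migration ... */

        qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM3_DONE);

        qemuMigrationJobFinish(driver, vm);
        return 0;
    }
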
From nobody Tue May 7 09:55:32 2024
From: Prathamesh Chavan
To: libvir-list@redhat.com
Cc: Prathamesh Chavan
Subject: [GSoC][PATCH v2 6/6] qemu_domainjob: remove dependency on `qemuDomainDiskPrivatePtr`
Date: Mon, 17 Aug 2020 10:37:21 +0530
Message-Id: <20200817050721.7063-7-pc44800@gmail.com>
In-Reply-To: <20200817050721.7063-1-pc44800@gmail.com>
References: <20200817050721.7063-1-pc44800@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

The dependency on the qemu-specific `qemuDomainDiskPrivatePtr` was removed by
moving the functions `qemuDomainObjPrivateXMLParseJobNBD` and
`qemuDomainObjPrivateXMLFormatNBDMigration` to `qemu_domain`, and by moving
their calls inside the `parseJob` and `formatJob` callback functions.
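
With the extra `virDomainObjPtr` argument, a job-private implementation that has no disk-level state never needs to see qemu's disk internals. As a rough illustration (the function names below are invented; only the three-argument shape comes from the updated `qemuDomainObjPrivateJobFormat`/`qemuDomainObjPrivateJobParse` typedefs in this patch), a minimal pair of callbacks could look like:

    /* Hypothetical minimal callbacks matching the new typedefs; they format
     * and parse nothing because this imaginary job type has no private data. */
    static int
    exampleFormatJobPrivate(virBufferPtr buf G_GNUC_UNUSED,
                            qemuDomainJobObjPtr job G_GNUC_UNUSED,
                            virDomainObjPtr vm G_GNUC_UNUSED)
    {
        return 0;
    }

    static int
    exampleParseJobPrivate(xmlXPathContextPtr ctxt G_GNUC_UNUSED,
                           qemuDomainJobObjPtr job G_GNUC_UNUSED,
                           virDomainObjPtr vm G_GNUC_UNUSED)
    {
        return 0;
    }

The qemu implementations in the diff below, in contrast, use `vm` to reach the per-disk NBD migration state that previously had to live in qemu_domainjob.c.
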
Signed-off-by: Prathamesh Chavan Reviewed-by: Erik Skultety --- src/qemu/qemu_domain.c | 150 ++++++++++++++++++++++++++++++++++++- src/qemu/qemu_domainjob.c | 152 +------------------------------------- src/qemu/qemu_domainjob.h | 6 +- 3 files changed, 155 insertions(+), 153 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 677fa7ea91..615a8a293c 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -562,12 +562,70 @@ qemuJobResetPrivate(void *opaque) } =20 =20 +static int +qemuDomainObjPrivateXMLFormatNBDMigrationSource(virBufferPtr buf, + virStorageSourcePtr src, + virDomainXMLOptionPtr xmlo= pt) +{ + g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); + + virBufferAsprintf(&attrBuf, " type=3D'%s' format=3D'%s'", + virStorageTypeToString(src->type), + virStorageFileFormatTypeToString(src->format)); + + if (virDomainDiskSourceFormat(&childBuf, src, "source", 0, false, + VIR_DOMAIN_DEF_FORMAT_STATUS, + false, false, xmlopt) < 0) + return -1; + + virXMLFormatElement(buf, "migrationSource", &attrBuf, &childBuf); + + return 0; +} + + +static int +qemuDomainObjPrivateXMLFormatNBDMigration(virBufferPtr buf, + virDomainObjPtr vm) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + size_t i; + virDomainDiskDefPtr disk; + qemuDomainDiskPrivatePtr diskPriv; + + for (i =3D 0; i < vm->def->ndisks; i++) { + g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); + disk =3D vm->def->disks[i]; + diskPriv =3D QEMU_DOMAIN_DISK_PRIVATE(disk); + + virBufferAsprintf(&attrBuf, " dev=3D'%s' migrating=3D'%s'", + disk->dst, diskPriv->migrating ? "yes" : "no"); + + if (diskPriv->migrSource && + qemuDomainObjPrivateXMLFormatNBDMigrationSource(&childBuf, + diskPriv->migr= Source, + priv->driver->= xmlopt) < 0) + return -1; + + virXMLFormatElement(buf, "disk", &attrBuf, &childBuf); + } + + return 0; +} + static int qemuDomainFormatJobPrivate(virBufferPtr buf, - qemuDomainJobObjPtr job) + qemuDomainJobObjPtr job, + virDomainObjPtr vm) { qemuDomainJobPrivatePtr priv =3D job->privateData; =20 + if (job->asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT && + qemuDomainObjPrivateXMLFormatNBDMigration(buf, vm) < 0) + return -1; + if (priv->migParams) qemuMigrationParamsFormat(buf, priv->migParams); =20 @@ -617,12 +675,100 @@ qemuDomainEventEmitJobCompleted(virQEMUDriverPtr dri= ver, virObjectEventStateQueue(driver->domainEventState, event); } =20 +static int +qemuDomainObjPrivateXMLParseJobNBDSource(xmlNodePtr node, + xmlXPathContextPtr ctxt, + virDomainDiskDefPtr disk, + virDomainXMLOptionPtr xmlopt) +{ + VIR_XPATH_NODE_AUTORESTORE(ctxt); + qemuDomainDiskPrivatePtr diskPriv =3D QEMU_DOMAIN_DISK_PRIVATE(disk); + g_autofree char *format =3D NULL; + g_autofree char *type =3D NULL; + g_autoptr(virStorageSource) migrSource =3D NULL; + xmlNodePtr sourceNode; + + ctxt->node =3D node; + + if (!(ctxt->node =3D virXPathNode("./migrationSource", ctxt))) + return 0; + + if (!(type =3D virXMLPropString(ctxt->node, "type"))) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing storage source type")); + return -1; + } + + if (!(format =3D virXMLPropString(ctxt->node, "format"))) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing storage source format")); + return -1; + } + + if (!(migrSource =3D virDomainStorageSourceParseBase(type, format, NUL= L))) + return -1; + + /* newer libvirt uses the subelement instead of formatting the + * source directly into */ + if 
((sourceNode =3D virXPathNode("./source", ctxt))) + ctxt->node =3D sourceNode; + + if (virDomainStorageSourceParse(ctxt->node, ctxt, migrSource, + VIR_DOMAIN_DEF_PARSE_STATUS, xmlopt) <= 0) + return -1; + + diskPriv->migrSource =3D g_steal_pointer(&migrSource); + return 0; +} + + +static int +qemuDomainObjPrivateXMLParseJobNBD(virDomainObjPtr vm, + xmlXPathContextPtr ctxt) +{ + qemuDomainObjPrivatePtr priv =3D vm->privateData; + g_autofree xmlNodePtr *nodes =3D NULL; + size_t i; + int n; + + if ((n =3D virXPathNodeSet("./disk[@migrating=3D'yes']", ctxt, &nodes)= ) < 0) + return -1; + + if (n > 0) { + if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { + VIR_WARN("Found disks marked for migration but we were not " + "migrating"); + n =3D 0; + } + for (i =3D 0; i < n; i++) { + virDomainDiskDefPtr disk; + g_autofree char *dst =3D NULL; + + if ((dst =3D virXMLPropString(nodes[i], "dev")) && + (disk =3D virDomainDiskByTarget(vm->def, dst))) { + QEMU_DOMAIN_DISK_PRIVATE(disk)->migrating =3D true; + + if (qemuDomainObjPrivateXMLParseJobNBDSource(nodes[i], ctx= t, + disk, + priv->driver-= >xmlopt) < 0) + return -1; + } + } + } + + return 0; +} + static int qemuDomainParseJobPrivate(xmlXPathContextPtr ctxt, - qemuDomainJobObjPtr job) + qemuDomainJobObjPtr job, + virDomainObjPtr vm) { qemuDomainJobPrivatePtr priv =3D job->privateData; =20 + if (qemuDomainObjPrivateXMLParseJobNBD(vm, ctxt) < 0) + return -1; + if (qemuMigrationParamsParse(ctxt, &priv->migParams) < 0) return -1; =20 diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 18abc0d986..ae4ac9e0c1 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -19,7 +19,7 @@ #include =20 #include "qemu_domain.h" -#include "qemu_migration.h" +#include "virmigration.h" #include "qemu_domainjob.h" #include "viralloc.h" #include "virlog.h" @@ -717,66 +717,11 @@ qemuDomainObjAbortAsyncJob(virDomainObjPtr obj) virDomainObjBroadcast(obj); } =20 - -static int -qemuDomainObjPrivateXMLFormatNBDMigrationSource(virBufferPtr buf, - virStorageSourcePtr src, - virDomainXMLOptionPtr xmlo= pt) -{ - g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; - g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); - - virBufferAsprintf(&attrBuf, " type=3D'%s' format=3D'%s'", - virStorageTypeToString(src->type), - virStorageFileFormatTypeToString(src->format)); - - if (virDomainDiskSourceFormat(&childBuf, src, "source", 0, false, - VIR_DOMAIN_DEF_FORMAT_STATUS, - false, false, xmlopt) < 0) - return -1; - - virXMLFormatElement(buf, "migrationSource", &attrBuf, &childBuf); - - return 0; -} - - -static int -qemuDomainObjPrivateXMLFormatNBDMigration(virBufferPtr buf, - virDomainObjPtr vm) -{ - qemuDomainObjPrivatePtr priv =3D vm->privateData; - size_t i; - virDomainDiskDefPtr disk; - qemuDomainDiskPrivatePtr diskPriv; - - for (i =3D 0; i < vm->def->ndisks; i++) { - g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; - g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); - disk =3D vm->def->disks[i]; - diskPriv =3D QEMU_DOMAIN_DISK_PRIVATE(disk); - - virBufferAsprintf(&attrBuf, " dev=3D'%s' migrating=3D'%s'", - disk->dst, diskPriv->migrating ? 
"yes" : "no"); - - if (diskPriv->migrSource && - qemuDomainObjPrivateXMLFormatNBDMigrationSource(&childBuf, - diskPriv->migr= Source, - priv->driver->= xmlopt) < 0) - return -1; - - virXMLFormatElement(buf, "disk", &attrBuf, &childBuf); - } - - return 0; -} - int qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf, virDomainObjPtr vm) { qemuDomainObjPrivatePtr priv =3D vm->privateData; - qemuDomainJobObjPtr jobObj =3D &priv->job; g_auto(virBuffer) attrBuf =3D VIR_BUFFER_INITIALIZER; g_auto(virBuffer) childBuf =3D VIR_BUFFER_INIT_CHILD(buf); qemuDomainJob job =3D priv->job.active; @@ -801,11 +746,7 @@ qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf, if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_NONE) virBufferAsprintf(&attrBuf, " flags=3D'0x%lx'", priv->job.apiFlags= ); =20 - if (priv->job.asyncJob =3D=3D QEMU_ASYNC_JOB_MIGRATION_OUT && - qemuDomainObjPrivateXMLFormatNBDMigration(&childBuf, vm) < 0) - return -1; - - if (jobObj->cb->formatJob(&childBuf, jobObj) < 0) + if (priv->job.cb->formatJob(&childBuf, &priv->job, vm) < 0) return -1; =20 virXMLFormatElement(buf, "job", &attrBuf, &childBuf); @@ -814,90 +755,6 @@ qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf, } =20 =20 -static int -qemuDomainObjPrivateXMLParseJobNBDSource(xmlNodePtr node, - xmlXPathContextPtr ctxt, - virDomainDiskDefPtr disk, - virDomainXMLOptionPtr xmlopt) -{ - VIR_XPATH_NODE_AUTORESTORE(ctxt); - qemuDomainDiskPrivatePtr diskPriv =3D QEMU_DOMAIN_DISK_PRIVATE(disk); - g_autofree char *format =3D NULL; - g_autofree char *type =3D NULL; - g_autoptr(virStorageSource) migrSource =3D NULL; - xmlNodePtr sourceNode; - - ctxt->node =3D node; - - if (!(ctxt->node =3D virXPathNode("./migrationSource", ctxt))) - return 0; - - if (!(type =3D virXMLPropString(ctxt->node, "type"))) { - virReportError(VIR_ERR_XML_ERROR, "%s", - _("missing storage source type")); - return -1; - } - - if (!(format =3D virXMLPropString(ctxt->node, "format"))) { - virReportError(VIR_ERR_XML_ERROR, "%s", - _("missing storage source format")); - return -1; - } - - if (!(migrSource =3D virDomainStorageSourceParseBase(type, format, NUL= L))) - return -1; - - /* newer libvirt uses the subelement instead of formatting the - * source directly into */ - if ((sourceNode =3D virXPathNode("./source", ctxt))) - ctxt->node =3D sourceNode; - - if (virDomainStorageSourceParse(ctxt->node, ctxt, migrSource, - VIR_DOMAIN_DEF_PARSE_STATUS, xmlopt) <= 0) - return -1; - - diskPriv->migrSource =3D g_steal_pointer(&migrSource); - return 0; -} - - -static int -qemuDomainObjPrivateXMLParseJobNBD(virDomainObjPtr vm, - xmlXPathContextPtr ctxt) -{ - qemuDomainObjPrivatePtr priv =3D vm->privateData; - g_autofree xmlNodePtr *nodes =3D NULL; - size_t i; - int n; - - if ((n =3D virXPathNodeSet("./disk[@migrating=3D'yes']", ctxt, &nodes)= ) < 0) - return -1; - - if (n > 0) { - if (priv->job.asyncJob !=3D QEMU_ASYNC_JOB_MIGRATION_OUT) { - VIR_WARN("Found disks marked for migration but we were not " - "migrating"); - n =3D 0; - } - for (i =3D 0; i < n; i++) { - virDomainDiskDefPtr disk; - g_autofree char *dst =3D NULL; - - if ((dst =3D virXMLPropString(nodes[i], "dev")) && - (disk =3D virDomainDiskByTarget(vm->def, dst))) { - QEMU_DOMAIN_DISK_PRIVATE(disk)->migrating =3D true; - - if (qemuDomainObjPrivateXMLParseJobNBDSource(nodes[i], ctx= t, - disk, - priv->driver-= >xmlopt) < 0) - return -1; - } - } - } - - return 0; -} - int qemuDomainObjPrivateXMLParseJob(virDomainObjPtr vm, xmlXPathContextPtr ctxt) @@ -949,10 +806,7 @@ qemuDomainObjPrivateXMLParseJob(virDomainObjPtr vm, return -1; } =20 
- if (qemuDomainObjPrivateXMLParseJobNBD(vm, ctxt) < 0) - return -1; - - if (job->cb->parseJob(ctxt, job) < 0) + if (priv->job.cb->parseJob(ctxt, job, vm) < 0) return -1; =20 return 0; diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index c5f68ca778..8f3dfbcf1e 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -105,9 +105,11 @@ typedef void *(*qemuDomainObjPrivateJobAlloc)(void); typedef void (*qemuDomainObjPrivateJobFree)(void *); typedef void (*qemuDomainObjPrivateJobReset)(void *); typedef int (*qemuDomainObjPrivateJobFormat)(virBufferPtr, - qemuDomainJobObjPtr); + qemuDomainJobObjPtr, + virDomainObjPtr); typedef int (*qemuDomainObjPrivateJobParse)(xmlXPathContextPtr, - qemuDomainJobObjPtr); + qemuDomainJobObjPtr, + virDomainObjPtr); typedef void (*qemuDomainObjJobInfoSetOperation)(qemuDomainJobObjPtr, virDomainJobOperation); typedef void (*qemuDomainObjCurrentJobInfoInit)(qemuDomainJobObjPtr, --=20 2.25.1