From: Nikolay Shirokovskiy
To: libvir-list@redhat.com
Date: Tue, 2 May 2017 13:06:33 +0300
Message-Id: <1493719596-35375-2-git-send-email-nshirokovskiy@virtuozzo.com>
In-Reply-To: <1493719596-35375-1-git-send-email-nshirokovskiy@virtuozzo.com>
References: <1493719596-35375-1-git-send-email-nshirokovskiy@virtuozzo.com>
Subject: [libvirt] [PATCH v2 RFC 1/4] qemu: replace nested job with interruptible async job state

Nested job is a construct that allows regular jobs to run while an async job
is running. The period when another job can actually be started is when the
async job waits for some event with the domain lock dropped. Let's express
this condition directly with an asyncInterruptible flag. Upon waking, the
async job must wait until any regular job started in the meantime has
finished, so that the two never overlap and never use the domain or monitor
object simultaneously.

This solution isolates jobs even better than the nested job approach. With
the latter it is possible that a concurrent job awaits a reply from the
monitor, then a monitor event is delivered and the async job wakes and
continues to run until it needs to send a command to the monitor. Only at
that moment does it stop to acquire the nested job condition.

qemuDomainObjEnterMonitorInternal needs an asyncJob argument and cannot use
the domain job state, because the function can be called simultaneously from
an async job and a concurrent job and cannot determine the calling context
without help from its callers. With qemuDomainObjWaitInternal there is
currently no case where the function is called from a concurrent job while an
async job is running, but detection is possible and implemented just in case,
since this state of affairs may change one day.
Thus qemuDomainObj{Wait,Sleep} are safe to use from both regular and async
job contexts. There will be no situations where qemuDomainObjEnterMonitor
has to be changed to qemuDomainObjEnterMonitorAsync just because the
function starts being used from an async job context.
---
 src/conf/domain_conf.c    |  19 -----
 src/conf/domain_conf.h    |   1 -
 src/libvirt_private.syms  |   1 -
 src/qemu/qemu_domain.c    | 194 ++++++++++++++++++++++++++++++++++++----------
 src/qemu/qemu_domain.h    |  23 +++++-
 src/qemu/qemu_driver.c    |   2 +-
 src/qemu/qemu_migration.c |  70 ++++++++---------
 src/qemu/qemu_process.c   |   8 +-
 8 files changed, 212 insertions(+), 106 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 7f5da4e..ee30e4e 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -3015,25 +3015,6 @@ virDomainObjBroadcast(virDomainObjPtr vm)
 }
 
 
-int
-virDomainObjWait(virDomainObjPtr vm)
-{
-    if (virCondWait(&vm->cond, &vm->parent.lock) < 0) {
-        virReportSystemError(errno, "%s",
-                             _("failed to wait for domain condition"));
-        return -1;
-    }
-
-    if (!virDomainObjIsActive(vm)) {
-        virReportError(VIR_ERR_OPERATION_FAILED, "%s",
-                       _("domain is not running"));
-        return -1;
-    }
-
-    return 0;
-}
-
-
 /**
  * Waits for domain condition to be triggered for a specific period of time.
  *
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 31c7a92..4a4a5e6 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2577,7 +2577,6 @@ bool virDomainObjTaint(virDomainObjPtr obj,
                        virDomainTaintFlags taint);
 
 void virDomainObjBroadcast(virDomainObjPtr vm);
-int virDomainObjWait(virDomainObjPtr vm);
 int virDomainObjWaitUntil(virDomainObjPtr vm,
                           unsigned long long whenms);
 
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index e6901a8..e974108 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -452,7 +452,6 @@ virDomainObjSetMetadata;
 virDomainObjSetState;
 virDomainObjTaint;
 virDomainObjUpdateModificationImpact;
-virDomainObjWait;
 virDomainObjWaitUntil;
 virDomainOSTypeFromString;
 virDomainOSTypeToString;
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 13fd706..2a51b5e 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -323,6 +323,7 @@ qemuDomainObjResetAsyncJob(qemuDomainObjPrivatePtr priv)
     job->spiceMigration = false;
     job->spiceMigrated = false;
     job->postcopyEnabled = false;
+    job->asyncInterruptible = false;
     VIR_FREE(job->current);
 }
 
@@ -3622,15 +3623,16 @@ qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj)
 }
 
 static bool
-qemuDomainNestedJobAllowed(qemuDomainObjPrivatePtr priv, qemuDomainJob job)
+qemuDomainAsyncJobCompatible(qemuDomainObjPrivatePtr priv, qemuDomainJob job)
 {
-    return !priv->job.asyncJob || (priv->job.mask & JOB_MASK(job)) != 0;
+    return !priv->job.asyncJob ||
+        (priv->job.asyncInterruptible && (priv->job.mask & JOB_MASK(job)) != 0);
 }
 
 bool
 qemuDomainJobAllowed(qemuDomainObjPrivatePtr priv, qemuDomainJob job)
 {
-    return !priv->job.active && qemuDomainNestedJobAllowed(priv, job);
+    return !priv->job.active && qemuDomainAsyncJobCompatible(priv, job);
 }
 
 /* Give up waiting for mutex after 30 seconds */
@@ -3648,7 +3650,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
     qemuDomainObjPrivatePtr priv = obj->privateData;
     unsigned long long now;
     unsigned long long then;
-    bool nested = job == QEMU_JOB_ASYNC_NESTED;
     bool async = job == QEMU_JOB_ASYNC;
     virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
     const char *blocker = NULL;
@@ -3681,7 +3682,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
         goto error;
     }
 
-    while (!nested && !qemuDomainNestedJobAllowed(priv, job)) {
+    while (!qemuDomainAsyncJobCompatible(priv, job)) {
         VIR_DEBUG("Waiting for async job (vm=%p name=%s)", obj, obj->def->name);
         if (virCondWaitUntil(&priv->job.asyncCond, &obj->parent.lock, then) < 0)
             goto error;
@@ -3695,7 +3696,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
 
     /* No job is active but a new async job could have been started while obj
      * was unlocked, so we need to recheck it. */
-    if (!nested && !qemuDomainNestedJobAllowed(priv, job))
+    if (!qemuDomainAsyncJobCompatible(priv, job))
         goto retry;
 
     qemuDomainObjResetJob(priv);
@@ -3750,7 +3751,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
               priv->job.asyncOwner, NULLSTR(priv->job.asyncOwnerAPI),
               duration / 1000, asyncDuration / 1000);
 
-    if (nested || qemuDomainNestedJobAllowed(priv, job))
+    if (qemuDomainAsyncJobCompatible(priv, job))
         blocker = priv->job.ownerAPI;
     else
         blocker = priv->job.asyncOwnerAPI;
@@ -3870,7 +3871,7 @@ qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
     qemuDomainObjResetJob(priv);
     if (qemuDomainTrackJob(job))
         qemuDomainObjSaveJob(driver, obj);
-    virCondSignal(&priv->job.cond);
+    virCondBroadcast(&priv->job.cond);
 }
 
 void
@@ -3907,44 +3908,25 @@ qemuDomainObjAbortAsyncJob(virDomainObjPtr obj)
  *
  * To be called immediately before any QEMU monitor API call
  * Must have already either called qemuDomainObjBeginJob() and checked
- * that the VM is still active; may not be used for nested async jobs.
+ * that the VM is still active.
  *
  * To be followed with qemuDomainObjExitMonitor() once complete
  */
-static int
-qemuDomainObjEnterMonitorInternal(virQEMUDriverPtr driver,
-                                  virDomainObjPtr obj,
-                                  qemuDomainAsyncJob asyncJob)
+void
+qemuDomainObjEnterMonitor(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, virDomainObjPtr obj)
 {
     qemuDomainObjPrivatePtr priv = obj->privateData;
 
-    if (asyncJob != QEMU_ASYNC_JOB_NONE) {
-        int ret;
-        if ((ret = qemuDomainObjBeginNestedJob(driver, obj, asyncJob)) < 0)
-            return ret;
-        if (!virDomainObjIsActive(obj)) {
-            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
-                           _("domain is no longer running"));
-            qemuDomainObjEndJob(driver, obj);
-            return -1;
-        }
-    } else if (priv->job.asyncOwner == virThreadSelfID()) {
-        VIR_WARN("This thread seems to be the async job owner; entering"
-                 " monitor without asking for a nested job is dangerous");
-    }
-
     VIR_DEBUG("Entering monitor (mon=%p vm=%p name=%s)",
               priv->mon, obj, obj->def->name);
     virObjectLock(priv->mon);
     virObjectRef(priv->mon);
     ignore_value(virTimeMillisNow(&priv->monStart));
     virObjectUnlock(obj);
-
-    return 0;
 }
 
 static void ATTRIBUTE_NONNULL(1)
-qemuDomainObjExitMonitorInternal(virQEMUDriverPtr driver,
+qemuDomainObjExitMonitorInternal(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
                                  virDomainObjPtr obj)
 {
     qemuDomainObjPrivatePtr priv = obj->privateData;
@@ -3962,17 +3944,8 @@ qemuDomainObjExitMonitorInternal(virQEMUDriverPtr driver,
     priv->monStart = 0;
     if (!hasRefs)
         priv->mon = NULL;
-
-    if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
-        qemuDomainObjEndJob(driver, obj);
 }
 
-void qemuDomainObjEnterMonitor(virQEMUDriverPtr driver,
-                               virDomainObjPtr obj)
-{
-    ignore_value(qemuDomainObjEnterMonitorInternal(driver, obj,
-                                                   QEMU_ASYNC_JOB_NONE));
-}
 
 /* obj must NOT be locked before calling
  *
@@ -4014,9 +3987,134 @@ int qemuDomainObjExitMonitor(virQEMUDriverPtr driver,
 int
 qemuDomainObjEnterMonitorAsync(virQEMUDriverPtr driver,
                                virDomainObjPtr obj,
-                               qemuDomainAsyncJob asyncJob)
+                               qemuDomainAsyncJob asyncJob ATTRIBUTE_UNUSED)
 {
-    return qemuDomainObjEnterMonitorInternal(driver, obj, asyncJob);
+    qemuDomainObjEnterMonitor(driver, obj);
+    return 0;
+}
+
+
+void
+qemuDomainObjEnterInterruptible(virDomainObjPtr obj,
+                                qemuDomainJobContextPtr ctx)
+{
+    qemuDomainObjPrivatePtr priv = obj->privateData;
+    struct qemuDomainJobObj *job = &priv->job;
+
+    /* Second clause helps to detect the situation when this function is
+     * called from a concurrent regular job.
+     */
+    ctx->async = job->asyncJob && !job->active;
+
+    if (!ctx->async)
+        return;
+
+    job->asyncInterruptible = true;
+    VIR_DEBUG("Async job enters interruptible state. "
+              "(obj=%p name=%s, async=%s)",
+              obj, obj->def->name,
+              qemuDomainAsyncJobTypeToString(job->asyncJob));
+    virCondBroadcast(&priv->job.asyncCond);
+}
+
+
+int
+qemuDomainObjExitInterruptible(virDomainObjPtr obj,
+                               qemuDomainJobContextPtr ctx)
+{
+    qemuDomainObjPrivatePtr priv = obj->privateData;
+    struct qemuDomainJobObj *job = &priv->job;
+    virErrorPtr err = NULL;
+    int ret = -1;
+
+    if (!ctx->async)
+        return 0;
+
+    job->asyncInterruptible = false;
+    VIR_DEBUG("Async job exits interruptible state. "
+              "(obj=%p name=%s, async=%s)",
+              obj, obj->def->name,
+              qemuDomainAsyncJobTypeToString(job->asyncJob));
+
+    err = virSaveLastError();
+
+    /* Before continuing the async job wait until any job started in the
+     * meanwhile is finished.
+     */
+    while (job->active) {
+        if (virCondWait(&priv->job.cond, &obj->parent.lock) < 0) {
+            virReportSystemError(errno, "%s",
+                                 _("failed to wait for job condition"));
+            goto cleanup;
+        }
+    }
+
+    if (!virDomainObjIsActive(obj)) {
+        virReportError(VIR_ERR_OPERATION_FAILED, "%s",
+                       _("domain is not running"));
+        goto cleanup;
+    }
+
+    ret = 0;
+ cleanup:
+    if (err) {
+        virSetError(err);
+        virFreeError(err);
+    }
+    return ret;
+}
+
+
+/*
+ * obj must be locked before calling. Must be used within context of regular or
+ * async job
+ *
+ * Wait on obj lock. In case of async job regular compatible jobs are allowed
+ * to run while waiting. Any regular job that is started during the wait is
+ * finished before return from this function.
+ */
+int
+qemuDomainObjWait(virDomainObjPtr obj)
+{
+    qemuDomainJobContext ctx;
+    int rc = 0;
+
+    qemuDomainObjEnterInterruptible(obj, &ctx);
+
+    if (virCondWait(&obj->cond, &obj->parent.lock) < 0) {
+        virReportSystemError(errno, "%s",
+                             _("failed to wait for domain condition"));
+        rc = -1;
+    }
+
+    if (qemuDomainObjExitInterruptible(obj, &ctx) < 0 || rc < 0)
+        return -1;
+
+    return 0;
+}
+
+
+/*
+ * obj must be locked before calling. Must be used within context of regular or
+ * async job
+ *
+ * Sleep with obj lock dropped. In case of async job regular compatible jobs
+ * are allowed to run while sleeping. Any regular job that is started during the
+ * sleep is finished before return from this function.
+ */
+int
+qemuDomainObjSleep(virDomainObjPtr obj, unsigned long nsec)
+{
+    struct timespec ts = { .tv_sec = 0, .tv_nsec = nsec };
+    qemuDomainJobContext ctx;
+
+    qemuDomainObjEnterInterruptible(obj, &ctx);
+
+    virObjectUnlock(obj);
+    nanosleep(&ts, NULL);
+    virObjectLock(obj);
+
+    return qemuDomainObjExitInterruptible(obj, &ctx);
 }
 
 
@@ -4063,16 +4161,26 @@ qemuDomainObjExitAgent(virDomainObjPtr obj, qemuAgentPtr agent)
 
 void qemuDomainObjEnterRemote(virDomainObjPtr obj)
 {
+    qemuDomainJobContext ctx;
+
     VIR_DEBUG("Entering remote (vm=%p name=%s)",
               obj, obj->def->name);
+
+    qemuDomainObjEnterInterruptible(obj, &ctx);
     virObjectUnlock(obj);
 }
 
-void qemuDomainObjExitRemote(virDomainObjPtr obj)
+int qemuDomainObjExitRemote(virDomainObjPtr obj)
 {
+    /* enter/exit remote MUST be called only in the context of async job */
+    qemuDomainJobContext ctx = { .async = true };
+
     virObjectLock(obj);
+
     VIR_DEBUG("Exited remote (vm=%p name=%s)",
               obj, obj->def->name);
+
+    return qemuDomainObjExitInterruptible(obj, &ctx);
 }
 
 
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index aebd91a..39b3aed 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -132,6 +132,8 @@ struct qemuDomainJobObj {
 
     virCond asyncCond;               /* Use to coordinate with async jobs */
     qemuDomainAsyncJob asyncJob;     /* Currently active async job */
+    bool asyncInterruptible;         /* Jobs compatible with current async job
+                                        are allowed to run */
     unsigned long long asyncOwner;   /* Thread which set current async job */
     const char *asyncOwnerAPI;       /* The API which owns the async job */
     unsigned long long asyncStarted; /* When the current async job started */
@@ -473,6 +475,23 @@ int qemuDomainObjEnterMonitorAsync(virQEMUDriverPtr driver,
     ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_RETURN_CHECK;
 
 
+typedef struct _qemuDomainJobContext qemuDomainJobContext;
+typedef qemuDomainJobContext *qemuDomainJobContextPtr;
+struct _qemuDomainJobContext {
+    bool async;
+};
+
+void qemuDomainObjEnterInterruptible(virDomainObjPtr obj, qemuDomainJobContextPtr ctx)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
+int qemuDomainObjExitInterruptible(virDomainObjPtr obj, qemuDomainJobContextPtr ctx)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_RETURN_CHECK;
+
+int qemuDomainObjWait(virDomainObjPtr obj)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_RETURN_CHECK;
+
+int qemuDomainObjSleep(virDomainObjPtr obj, unsigned long nsec)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_RETURN_CHECK;
+
 qemuAgentPtr qemuDomainObjEnterAgent(virDomainObjPtr obj)
     ATTRIBUTE_NONNULL(1);
 void qemuDomainObjExitAgent(virDomainObjPtr obj, qemuAgentPtr agent)
@@ -481,8 +500,8 @@ void qemuDomainObjExitAgent(virDomainObjPtr obj, qemuAgentPtr agent)
 
 void qemuDomainObjEnterRemote(virDomainObjPtr obj)
     ATTRIBUTE_NONNULL(1);
-void qemuDomainObjExitRemote(virDomainObjPtr obj)
-    ATTRIBUTE_NONNULL(1);
+int qemuDomainObjExitRemote(virDomainObjPtr obj)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_RETURN_CHECK;
 
 virDomainDefPtr qemuDomainDefCopy(virQEMUDriverPtr driver,
                                   virDomainDefPtr src,
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index a5c664e..28e16e5 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -16260,7 +16260,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom,
         qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk);
         qemuBlockJobUpdate(driver, vm, QEMU_ASYNC_JOB_NONE, disk);
         while (diskPriv->blockjob) {
-            if (virDomainObjWait(vm) < 0) {
+            if (qemuDomainObjWait(vm) < 0) {
                 ret = -1;
                 goto endjob;
             }
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index b4507a3..e848a62 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -840,7 +840,7 @@ qemuMigrationCancelDriveMirror(virQEMUDriverPtr driver,
         if (failed && !err)
             err = virSaveLastError();
 
-        if (virDomainObjWait(vm) < 0)
+        if (qemuDomainObjWait(vm) < 0)
             goto cleanup;
     }
 
@@ -979,7 +979,7 @@ qemuMigrationDriveMirror(virQEMUDriverPtr driver,
             goto cleanup;
         }
 
-        if (virDomainObjWait(vm) < 0)
+        if (qemuDomainObjWait(vm) < 0)
             goto cleanup;
     }
 
@@ -1334,7 +1334,7 @@ qemuMigrationWaitForSpice(virDomainObjPtr vm)
 
     VIR_DEBUG("Waiting for SPICE to finish migration");
     while (!priv->job.spiceMigrated && !priv->job.abortJob) {
-        if (virDomainObjWait(vm) < 0)
+        if (qemuDomainObjWait(vm) < 0)
             return -1;
     }
     return 0;
@@ -1569,7 +1569,7 @@ qemuMigrationWaitForCompletion(virQEMUDriverPtr driver,
     qemuDomainObjPrivatePtr priv = vm->privateData;
     qemuDomainJobInfoPtr jobInfo = priv->job.current;
     bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT);
-    int rv;
+    int rv, rc;
 
     flags |= QEMU_MIGRATION_COMPLETED_UPDATE_STATS;
 
@@ -1579,18 +1579,15 @@ qemuMigrationWaitForCompletion(virQEMUDriverPtr driver,
         if (rv < 0)
             return rv;
 
-        if (events) {
-            if (virDomainObjWait(vm) < 0) {
-                jobInfo->type = VIR_DOMAIN_JOB_FAILED;
-                return -2;
-            }
-        } else {
-            /* Poll every 50ms for progress & to allow cancellation */
-            struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000ull };
+        /* Poll every 50ms for progress & to allow cancellation */
+        if (events)
+            rc = qemuDomainObjWait(vm);
+        else
+            rc = qemuDomainObjSleep(vm, 50 * 1000 * 1000ul);
 
-            virObjectUnlock(vm);
-            nanosleep(&ts, NULL);
-            virObjectLock(vm);
+        if (rc < 0) {
+            jobInfo->type = VIR_DOMAIN_JOB_FAILED;
+            return -2;
         }
     }
 
@@ -1623,7 +1620,7 @@ qemuMigrationWaitForDestCompletion(virQEMUDriverPtr driver,
 
     while ((rv = qemuMigrationCompleted(driver, vm, asyncJob,
                                         NULL, flags)) != 1) {
-        if (rv < 0 || virDomainObjWait(vm) < 0)
+        if (rv < 0 || qemuDomainObjWait(vm) < 0)
             return -1;
     }
 
@@ -3842,7 +3839,7 @@ qemuMigrationRun(virQEMUDriverPtr driver,
     if (priv->monJSON) {
         while (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
             priv->signalStop = true;
-            rc = virDomainObjWait(vm);
+            rc = qemuDomainObjWait(vm);
             priv->signalStop = false;
             if (rc < 0)
                 goto cancelPostCopy;
@@ -4139,27 +4136,20 @@ static int doPeer2PeerMigrate2(virQEMUDriverPtr driver,
         qemuDomainObjEnterRemote(vm);
         ret = dconn->driver->domainMigratePrepareTunnel
             (dconn, st, destflags, dname, resource, dom_xml);
-        qemuDomainObjExitRemote(vm);
+        if (qemuDomainObjExitRemote(vm) < 0)
+            ret = -1;
     } else {
         qemuDomainObjEnterRemote(vm);
         ret = dconn->driver->domainMigratePrepare2
             (dconn, &cookie, &cookielen, NULL, &uri_out,
              destflags, dname, resource, dom_xml);
-        qemuDomainObjExitRemote(vm);
+        if (qemuDomainObjExitRemote(vm) < 0)
+            ret = -1;
     }
     VIR_FREE(dom_xml);
     if (ret == -1)
         goto cleanup;
 
-    /* the domain may have shutdown or crashed while we had the locks dropped
-     * in qemuDomainObjEnterRemote, so check again
-     */
-    if (!virDomainObjIsActive(vm)) {
-        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                       _("guest unexpectedly quit"));
-        goto cleanup;
-    }
-
     if (!(flags & VIR_MIGRATE_TUNNELLED) &&
         (uri_out == NULL)) {
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -4206,7 +4196,7 @@ static int doPeer2PeerMigrate2(virQEMUDriverPtr driver,
     ddomain = dconn->driver->domainMigrateFinish2
         (dconn, dname, cookie, cookielen,
          uri_out ? uri_out : dconnuri, destflags, cancelled);
-    qemuDomainObjExitRemote(vm);
+    ignore_value(qemuDomainObjExitRemote(vm));
     if (cancelled && ddomain)
         VIR_ERROR(_("finish step ignored that migration was cancelled"));
 
@@ -4367,7 +4357,8 @@ doPeer2PeerMigrate3(virQEMUDriverPtr driver,
                 (dconn, st, cookiein, cookieinlen, &cookieout, &cookieoutlen,
                  destflags, dname, bandwidth, dom_xml);
         }
-        qemuDomainObjExitRemote(vm);
+        if (qemuDomainObjExitRemote(vm) < 0)
+            ret = -1;
     } else {
         qemuDomainObjEnterRemote(vm);
         if (useParams) {
@@ -4379,7 +4370,8 @@ doPeer2PeerMigrate3(virQEMUDriverPtr driver,
                 (dconn, cookiein, cookieinlen, &cookieout, &cookieoutlen,
                  uri, &uri_out, destflags, dname, bandwidth, dom_xml);
         }
-        qemuDomainObjExitRemote(vm);
+        if (qemuDomainObjExitRemote(vm) < 0)
+            ret = -1;
     }
     VIR_FREE(dom_xml);
     if (ret == -1)
@@ -4475,7 +4467,7 @@ doPeer2PeerMigrate3(virQEMUDriverPtr driver,
             ddomain = dconn->driver->domainMigrateFinish3Params
                 (dconn, params, nparams, cookiein, cookieinlen,
                  &cookieout, &cookieoutlen, destflags, cancelled);
-            qemuDomainObjExitRemote(vm);
+            ignore_value(qemuDomainObjExitRemote(vm));
         }
     } else {
         dname = dname ? dname : vm->def->name;
@@ -4483,7 +4475,7 @@ doPeer2PeerMigrate3(virQEMUDriverPtr driver,
         ddomain = dconn->driver->domainMigrateFinish3
             (dconn, dname, cookiein, cookieinlen, &cookieout, &cookieoutlen,
              dconnuri, uri, destflags, cancelled);
-        qemuDomainObjExitRemote(vm);
+        ignore_value(qemuDomainObjExitRemote(vm));
     }
 
     if (cancelled) {
@@ -4624,6 +4616,7 @@ static int doPeer2PeerMigrate(virQEMUDriverPtr driver,
     bool offline = false;
     virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
     bool useParams;
+    int rc;
 
     VIR_DEBUG("driver=%p, sconn=%p, vm=%p, xmlin=%s, dconnuri=%s, uri=%s, "
               "graphicsuri=%s, listenAddress=%s, nmigrate_disks=%zu, "
@@ -4661,7 +4654,8 @@ static int doPeer2PeerMigrate(virQEMUDriverPtr driver,
 
     qemuDomainObjEnterRemote(vm);
     dconn = virConnectOpenAuth(dconnuri, &virConnectAuthConfig, 0);
-    qemuDomainObjExitRemote(vm);
+    rc = qemuDomainObjExitRemote(vm);
+
     if (dconn == NULL) {
         virReportError(VIR_ERR_OPERATION_FAILED,
                        _("Failed to connect to remote libvirt URI %s: %s"),
@@ -4670,6 +4664,9 @@ static int doPeer2PeerMigrate(virQEMUDriverPtr driver,
         return -1;
     }
 
+    if (rc < 0)
+        goto cleanup;
+
     if (virConnectSetKeepAlive(dconn, cfg->keepAliveInterval,
                                cfg->keepAliveCount) < 0)
         goto cleanup;
@@ -4693,7 +4690,8 @@ static int doPeer2PeerMigrate(virQEMUDriverPtr driver,
     if (flags & VIR_MIGRATE_OFFLINE)
         offline = VIR_DRV_SUPPORTS_FEATURE(dconn->driver, dconn,
                                            VIR_DRV_FEATURE_MIGRATION_OFFLINE);
-    qemuDomainObjExitRemote(vm);
+    if (qemuDomainObjExitRemote(vm) < 0)
+        goto cleanup;
 
     if (!p2p) {
         virReportError(VIR_ERR_OPERATION_FAILED, "%s",
@@ -4747,7 +4745,7 @@ static int doPeer2PeerMigrate(virQEMUDriverPtr driver,
     qemuDomainObjEnterRemote(vm);
     virConnectUnregisterCloseCallback(dconn, qemuMigrationConnectionClosed);
     virObjectUnref(dconn);
-    qemuDomainObjExitRemote(vm);
+    ignore_value(qemuDomainObjExitRemote(vm));
    if (orig_err) {
         virSetError(orig_err);
         virFreeError(orig_err);
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index ea3e45c..87a9511 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -209,6 +209,8 @@ qemuConnectAgent(virQEMUDriverPtr driver, virDomainObjPtr vm)
     qemuDomainObjPrivatePtr priv = vm->privateData;
     qemuAgentPtr agent = NULL;
     virDomainChrDefPtr config = qemuFindAgentConfig(vm->def);
+    qemuDomainJobContext ctx;
+    int rc;
 
     if (!config)
         return 0;
@@ -232,6 +234,7 @@ qemuConnectAgent(virQEMUDriverPtr driver, virDomainObjPtr vm)
      * deleted while the agent is active */
     virObjectRef(vm);
 
+    qemuDomainObjEnterInterruptible(vm, &ctx);
     virObjectUnlock(vm);
 
     agent = qemuAgentOpen(vm,
@@ -239,14 +242,13 @@ qemuConnectAgent(virQEMUDriverPtr driver, virDomainObjPtr vm)
                           &agentCallbacks);
 
     virObjectLock(vm);
+    rc = qemuDomainObjExitInterruptible(vm, &ctx);
 
     if (agent == NULL)
         virObjectUnref(vm);
 
-    if (!virDomainObjIsActive(vm)) {
+    if (rc < 0) {
         qemuAgentClose(agent);
-        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                       _("guest crashed while connecting to the guest agent"));
         return -1;
     }
 
-- 
1.8.3.1

-- 
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
From: Nikolay Shirokovskiy
To: libvir-list@redhat.com
Date: Tue, 2 May 2017 13:06:34 +0300
Message-Id: <1493719596-35375-3-git-send-email-nshirokovskiy@virtuozzo.com>
In-Reply-To: <1493719596-35375-1-git-send-email-nshirokovskiy@virtuozzo.com>
References: <1493719596-35375-1-git-send-email-nshirokovskiy@virtuozzo.com>
Subject: [libvirt] [PATCH v2 RFC 2/4] qemu: remove liveness check from qemuDomainObjExitMonitor

As qemuProcessStop is now called only in the context of some job, and jobs
cannot overlap anymore, this check becomes useless.
Previously, an async job could continue to run while a concurrent regular job was still waiting for a qemu response. This is no longer possible.
---
 src/qemu/qemu_domain.c | 33 ++++++++-------------------------
 1 file changed, 8 insertions(+), 25 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 2a51b5e..5aaa97a 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -3925,9 +3925,13 @@ qemuDomainObjEnterMonitor(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, virDomainObj
     virObjectUnlock(obj);
 }
 
-static void ATTRIBUTE_NONNULL(1)
-qemuDomainObjExitMonitorInternal(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
-                                 virDomainObjPtr obj)
+/* obj must NOT be locked before calling
+ *
+ * Should be paired with an earlier qemuDomainObjEnterMonitor call
+ */
+int
+qemuDomainObjExitMonitor(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+                         virDomainObjPtr obj)
 {
     qemuDomainObjPrivatePtr priv = obj->privateData;
     bool hasRefs;
@@ -3944,32 +3948,11 @@ qemuDomainObjExitMonitorInternal(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
     priv->monStart = 0;
     if (!hasRefs)
         priv->mon = NULL;
-}
 
-
-/* obj must NOT be locked before calling
- *
- * Should be paired with an earlier qemuDomainObjEnterMonitor() call
- *
- * Returns -1 if the domain is no longer alive after exiting the monitor.
- * In that case, the caller should be careful when using obj's data,
- * e.g. the live definition in vm->def has been freed by qemuProcessStop
- * and replaced by the persistent definition, so pointers stolen
- * from the live definition could no longer be valid.
- */
-int qemuDomainObjExitMonitor(virQEMUDriverPtr driver,
-                             virDomainObjPtr obj)
-{
-    qemuDomainObjExitMonitorInternal(driver, obj);
-    if (!virDomainObjIsActive(obj)) {
-        if (!virGetLastError())
-            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
-                           _("domain is no longer running"));
-        return -1;
-    }
     return 0;
 }
 
+
 /*
  * obj must be locked before calling
  *
--
1.8.3.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

From nobody Sat May 4 04:08:52 2024
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
To: libvir-list@redhat.com
Date: Tue, 2 May 2017 13:06:35 +0300
Message-Id: <1493719596-35375-4-git-send-email-nshirokovskiy@virtuozzo.com>
In-Reply-To: <1493719596-35375-1-git-send-email-nshirokovskiy@virtuozzo.com>
References: <1493719596-35375-1-git-send-email-nshirokovskiy@virtuozzo.com>
Subject: [libvirt] [PATCH v2 RFC 3/4] qemu: remove nesting job usage from qemuProcessStop

Commit 81f50cb92 added acquiring a nested job when stopping a domain, to prevent leaking the monitor object on a race between a job being aborted and the domain stop caused by that abort.

One of the causes of this problem was that an async job and a concurrent regular job were not fully isolated: the async job could continue to run while the concurrent job had not finished yet. This is no longer possible, so we can safely drop the nested-job acquisition from qemuProcessStop, as stopping is always done within the context of some job.
---
 src/qemu/qemu_process.c | 21 +++------------------
 1 file changed, 3 insertions(+), 18 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 87a9511..d9f4af7 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -6149,7 +6149,7 @@ qemuProcessBeginStopJob(virQEMUDriverPtr driver,
 void qemuProcessStop(virQEMUDriverPtr driver,
                      virDomainObjPtr vm,
                      virDomainShutoffReason reason,
-                     qemuDomainAsyncJob asyncJob,
+                     qemuDomainAsyncJob asyncJob ATTRIBUTE_UNUSED,
                      unsigned int flags)
 {
     int ret;
@@ -6163,30 +6163,19 @@ void qemuProcessStop(virQEMUDriverPtr driver,
     virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
 
     VIR_DEBUG("Shutting down vm=%p name=%s id=%d pid=%lld, "
-              "reason=%s, asyncJob=%s, flags=%x",
+              "reason=%s, flags=%x",
               vm, vm->def->name, vm->def->id,
               (long long) vm->pid,
               virDomainShutoffReasonTypeToString(reason),
-              qemuDomainAsyncJobTypeToString(asyncJob),
               flags);
 
     /* This method is routinely used in clean up paths. Disable error
      * reporting so we don't squash a legit error.
      */
     orig_err = virSaveLastError();
 
-    if (asyncJob != QEMU_ASYNC_JOB_NONE) {
-        if (qemuDomainObjBeginNestedJob(driver, vm, asyncJob) < 0)
-            goto cleanup;
-    } else if (priv->job.asyncJob != QEMU_ASYNC_JOB_NONE &&
-               priv->job.asyncOwner == virThreadSelfID() &&
-               priv->job.active != QEMU_JOB_ASYNC_NESTED) {
-        VIR_WARN("qemuProcessStop called without a nested job (async=%s)",
-                 qemuDomainAsyncJobTypeToString(asyncJob));
-    }
-
     if (!virDomainObjIsActive(vm)) {
         VIR_DEBUG("VM '%s' not active", vm->def->name);
-        goto endjob;
+        goto cleanup;
     }
 
     qemuProcessBuildDestroyHugepagesPath(driver, vm, false);
@@ -6467,10 +6456,6 @@ void qemuProcessStop(virQEMUDriverPtr driver,
 
     virDomainObjRemoveTransientDef(vm);
 
- endjob:
-    if (asyncJob != QEMU_ASYNC_JOB_NONE)
-        qemuDomainObjEndJob(driver, vm);
-
  cleanup:
     if (orig_err) {
         virSetError(orig_err);
--
1.8.3.1

From nobody Sat May 4 04:08:52 2024
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
To: libvir-list@redhat.com
Date: Tue, 2 May 2017 13:06:36 +0300
Message-Id: <1493719596-35375-5-git-send-email-nshirokovskiy@virtuozzo.com>
In-Reply-To: <1493719596-35375-1-git-send-email-nshirokovskiy@virtuozzo.com>
References: <1493719596-35375-1-git-send-email-nshirokovskiy@virtuozzo.com>
Subject: [libvirt] [PATCH v2 RFC 4/4] qemu: remove the rest of nested job parts

It is not clear whether a nested job could have been saved in the status file or not, so the value stays in the job types enum.
---
 src/qemu/qemu_domain.c  | 28 +---------------------------
 src/qemu/qemu_domain.h  |  6 +-----
 src/qemu/qemu_process.c |  2 +-
 3 files changed, 3 insertions(+), 33 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 5aaa97a..20580c6 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -87,7 +87,7 @@ VIR_ENUM_IMPL(qemuDomainJob, QEMU_JOB_LAST,
               "abort",
               "migration operation",
               "none", /* async job is never stored in job.active */
-              "async nested",
+              "async nested", /* keep for backward compatibility */
 );
 
 VIR_ENUM_IMPL(qemuDomainAsyncJob, QEMU_ASYNC_JOB_LAST,
@@ -3600,8 +3600,6 @@ qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
 {
     qemuDomainObjPrivatePtr priv = obj->privateData;
 
-    if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
-        qemuDomainObjResetJob(priv);
     qemuDomainObjResetAsyncJob(priv);
     qemuDomainObjSaveJob(driver, obj);
 }
@@ -3825,30 +3823,6 @@ int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver,
     return 0;
 }
 
-int
-qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver,
-                            virDomainObjPtr obj,
-                            qemuDomainAsyncJob asyncJob)
-{
-    qemuDomainObjPrivatePtr priv = obj->privateData;
-
-    if (asyncJob != priv->job.asyncJob) {
-        virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("unexpected async job %d"), asyncJob);
-        return -1;
-    }
-
-    if (priv->job.asyncOwner != virThreadSelfID()) {
-        VIR_WARN("This thread doesn't seem to be the async job owner: %llu",
-                 priv->job.asyncOwner);
-    }
-
-    return qemuDomainObjBeginJobInternal(driver, obj,
-                                         QEMU_JOB_ASYNC_NESTED,
-                                         QEMU_ASYNC_JOB_NONE);
-}
-
-
 /*
  * obj must be locked and have a reference before calling
  *
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 39b3aed..062ad1d 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -76,7 +76,7 @@ typedef enum {
 
     /* The following two items must always be the last items before JOB_LAST */
     QEMU_JOB_ASYNC,         /* Asynchronous job */
-    QEMU_JOB_ASYNC_NESTED,  /* Normal job within an async job */
+    QEMU_JOB_PLACEHOLDER_1, /* Have to keep for backward compatibility */
 
     QEMU_JOB_LAST
 } qemuDomainJob;
@@ -439,10 +439,6 @@ int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver,
                               qemuDomainAsyncJob asyncJob,
                               virDomainJobOperation operation)
     ATTRIBUTE_RETURN_CHECK;
-int qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver,
-                                virDomainObjPtr obj,
-                                qemuDomainAsyncJob asyncJob)
-    ATTRIBUTE_RETURN_CHECK;
 
 void qemuDomainObjEndJob(virQEMUDriverPtr driver,
                          virDomainObjPtr obj);
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index d9f4af7..a99a467 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3177,7 +3177,7 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver,
     case QEMU_JOB_MIGRATION_OP:
     case QEMU_JOB_ABORT:
     case QEMU_JOB_ASYNC:
-    case QEMU_JOB_ASYNC_NESTED:
+    case QEMU_JOB_PLACEHOLDER_1:
         /* async job was already handled above */
     case QEMU_JOB_NONE:
     case QEMU_JOB_LAST:
--
1.8.3.1