From nobody Tue Apr 30 17:31:53 2024
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
To: libvir-list@redhat.com
Date: Fri, 7 Apr 2017 14:06:24 +0300
Message-Id: <1491563185-335738-2-git-send-email-nshirokovskiy@virtuozzo.com>
In-Reply-To: <1491563185-335738-1-git-send-email-nshirokovskiy@virtuozzo.com>
References: <1491563185-335738-1-git-send-email-nshirokovskiy@virtuozzo.com>
Subject: [libvirt] [PATCH 1/2] qemu: take current async job into account in qemuBlockNodeNamesDetect
List-Id: Development discussions about the libvirt library & tools
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Because it can be called during outgoing migration (namely, when
cancelling block jobs).
---
 src/qemu/qemu_block.c     |  6 ++++--
 src/qemu/qemu_block.h     |  4 +++-
 src/qemu/qemu_blockjob.c  |  9 ++++++---
 src/qemu/qemu_blockjob.h  |  4 ++++
 src/qemu/qemu_driver.c    | 11 ++++++-----
 src/qemu/qemu_migration.c | 28 ++++++++++++++++------------
 src/qemu/qemu_process.c   |  2 +-
 7 files changed, 40 insertions(+), 24 deletions(-)

diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index 586d568..29b5c47 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -336,7 +336,8 @@ qemuBlockDiskDetectNodes(virDomainDiskDefPtr disk,
 
 int
 qemuBlockNodeNamesDetect(virQEMUDriverPtr driver,
-                         virDomainObjPtr vm)
+                         virDomainObjPtr vm,
+                         qemuDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
     virHashTablePtr disktable = NULL;
@@ -350,7 +351,8 @@ qemuBlockNodeNamesDetect(virQEMUDriverPtr driver,
     if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_QUERY_NAMED_BLOCK_NODES))
         return 0;
 
-    qemuDomainObjEnterMonitor(driver, vm);
+    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
+        return -1;
 
     disktable = qemuMonitorGetBlockInfo(qemuDomainGetMonitor(vm));
     data = qemuMonitorQueryNamedBlockNodes(qemuDomainGetMonitor(vm));
diff --git a/src/qemu/qemu_block.h b/src/qemu/qemu_block.h
index 9d6a246..2af15a6 100644
--- a/src/qemu/qemu_block.h
+++ b/src/qemu/qemu_block.h
@@ -22,6 +22,7 @@
 # include "internal.h"
 
 # include "qemu_conf.h"
+# include "qemu_domain.h"
 
 # include "virhash.h"
 # include "virjson.h"
@@ -46,7 +47,8 @@ qemuBlockNodeNameGetBackingChain(virJSONValuePtr data);
 
 int
 qemuBlockNodeNamesDetect(virQEMUDriverPtr driver,
-                         virDomainObjPtr vm);
+                         virDomainObjPtr vm,
+                         qemuDomainAsyncJob asyncJob);
 
 virHashTablePtr
 qemuBlockGetNodeData(virJSONValuePtr data);
diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index 0601e68..415768d 100644
--- a/src/qemu/qemu_blockjob.c
+++ b/src/qemu/qemu_blockjob.c
@@ -55,13 +55,14 @@ VIR_LOG_INIT("qemu.qemu_blockjob");
 int
 qemuBlockJobUpdate(virQEMUDriverPtr driver,
                    virDomainObjPtr vm,
+                   qemuDomainAsyncJob asyncJob,
                    virDomainDiskDefPtr disk)
 {
     qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk);
     int status = diskPriv->blockJobStatus;
 
     if (status != -1) {
-        qemuBlockJobEventProcess(driver, vm, disk,
+        qemuBlockJobEventProcess(driver, vm, disk, asyncJob,
                                  diskPriv->blockJobType,
                                  diskPriv->blockJobStatus);
         diskPriv->blockJobStatus = -1;
@@ -87,6 +88,7 @@ void
 qemuBlockJobEventProcess(virQEMUDriverPtr driver,
                          virDomainObjPtr vm,
                          virDomainDiskDefPtr disk,
+                         qemuDomainAsyncJob asyncJob,
                          int type,
                          int status)
 {
@@ -167,7 +169,7 @@ qemuBlockJobEventProcess(virQEMUDriverPtr driver,
         disk->mirrorJob = VIR_DOMAIN_BLOCK_JOB_TYPE_UNKNOWN;
         ignore_value(qemuDomainDetermineDiskChain(driver, vm, disk,
                                                   true, true));
-        ignore_value(qemuBlockNodeNamesDetect(driver, vm));
+        ignore_value(qemuBlockNodeNamesDetect(driver, vm, asyncJob));
         diskPriv->blockjob = false;
         break;
 
@@ -247,9 +249,10 @@ qemuBlockJobSyncBegin(virDomainDiskDefPtr disk)
 void
 qemuBlockJobSyncEnd(virQEMUDriverPtr driver,
                     virDomainObjPtr vm,
+                    qemuDomainAsyncJob asyncJob,
                     virDomainDiskDefPtr disk)
 {
     VIR_DEBUG("disk=%s", disk->dst);
-    qemuBlockJobUpdate(driver, vm, disk);
+    qemuBlockJobUpdate(driver, vm, asyncJob, disk);
     QEMU_DOMAIN_DISK_PRIVATE(disk)->blockJobSync = false;
 }
diff --git a/src/qemu/qemu_blockjob.h b/src/qemu/qemu_blockjob.h
index 775ce95..47aa4c1 100644
--- a/src/qemu/qemu_blockjob.h
+++ b/src/qemu/qemu_blockjob.h
@@ -24,19 +24,23 @@
 
 # include "internal.h"
 # include "qemu_conf.h"
+# include "qemu_domain.h"
 
 int qemuBlockJobUpdate(virQEMUDriverPtr driver,
                        virDomainObjPtr vm,
+                       qemuDomainAsyncJob asyncJob,
                        virDomainDiskDefPtr disk);
 void qemuBlockJobEventProcess(virQEMUDriverPtr driver,
                               virDomainObjPtr vm,
                               virDomainDiskDefPtr disk,
+                              qemuDomainAsyncJob asyncJob,
                               int type,
                               int status);
 
 void qemuBlockJobSyncBegin(virDomainDiskDefPtr disk);
 void qemuBlockJobSyncEnd(virQEMUDriverPtr driver,
                          virDomainObjPtr vm,
+                         qemuDomainAsyncJob asyncJob,
                          virDomainDiskDefPtr disk);
 
 #endif /* __QEMU_BLOCKJOB_H__ */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 388af4f..6b7370f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4513,7 +4513,7 @@ processBlockJobEvent(virQEMUDriverPtr driver,
     }
 
     if ((disk = qemuProcessFindDomainDiskByAlias(vm, diskAlias)))
-        qemuBlockJobEventProcess(driver, vm, disk, type, status);
+        qemuBlockJobEventProcess(driver, vm, disk, QEMU_ASYNC_JOB_NONE, type, status);
 
  endjob:
     qemuDomainObjEndJob(driver, vm);
@@ -16234,24 +16234,25 @@ qemuDomainBlockJobAbort(virDomainPtr dom,
              * event to pull and let qemuBlockJobEventProcess() handle
              * the rest as usual */
             qemuBlockJobEventProcess(driver, vm, disk,
+                                     QEMU_ASYNC_JOB_NONE,
                                      VIR_DOMAIN_BLOCK_JOB_TYPE_PULL,
                                      VIR_DOMAIN_BLOCK_JOB_CANCELED);
         } else {
             qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk);
-            qemuBlockJobUpdate(driver, vm, disk);
+            qemuBlockJobUpdate(driver, vm, QEMU_ASYNC_JOB_NONE, disk);
             while (diskPriv->blockjob) {
                 if (virDomainObjWait(vm) < 0) {
                     ret = -1;
                     goto endjob;
                 }
-                qemuBlockJobUpdate(driver, vm, disk);
+                qemuBlockJobUpdate(driver, vm, QEMU_ASYNC_JOB_NONE, disk);
             }
         }
     }
 
  endjob:
     if (disk)
-        qemuBlockJobSyncEnd(driver, vm, disk);
+        qemuBlockJobSyncEnd(driver, vm, QEMU_ASYNC_JOB_NONE, disk);
     qemuDomainObjEndJob(driver, vm);
 
  cleanup:
@@ -20399,7 +20400,7 @@ qemuDomainSetBlockThreshold(virDomainPtr dom,
         goto endjob;
 
     if (!src->nodebacking &&
-        qemuBlockNodeNamesDetect(driver, vm) < 0)
+        qemuBlockNodeNamesDetect(driver, vm, QEMU_ASYNC_JOB_NONE) < 0)
         goto endjob;
 
     if (!src->nodebacking) {
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 68e72b3..d7ff415 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -600,7 +600,8 @@ qemuMigrationStopNBDServer(virQEMUDriverPtr driver,
  */
 static int
 qemuMigrationDriveMirrorReady(virQEMUDriverPtr driver,
-                              virDomainObjPtr vm)
+                              virDomainObjPtr vm,
+                              qemuDomainAsyncJob asyncJob)
 {
     size_t i;
     size_t notReady = 0;
@@ -613,7 +614,7 @@ qemuMigrationDriveMirrorReady(virQEMUDriverPtr driver,
         if (!diskPriv->migrating)
             continue;
 
-        status = qemuBlockJobUpdate(driver, vm, disk);
+        status = qemuBlockJobUpdate(driver, vm, asyncJob, disk);
         if (status == VIR_DOMAIN_BLOCK_JOB_FAILED) {
             virReportError(VIR_ERR_OPERATION_FAILED,
                            _("migration of disk %s failed"),
@@ -648,6 +649,7 @@ qemuMigrationDriveMirrorReady(virQEMUDriverPtr driver,
 static int
 qemuMigrationDriveMirrorCancelled(virQEMUDriverPtr driver,
                                   virDomainObjPtr vm,
+                                  qemuDomainAsyncJob asyncJob,
                                   bool check)
 {
     size_t i;
@@ -662,7 +664,7 @@ qemuMigrationDriveMirrorCancelled(virQEMUDriverPtr driver,
         if (!diskPriv->migrating)
             continue;
 
-        status = qemuBlockJobUpdate(driver, vm, disk);
+        status = qemuBlockJobUpdate(driver, vm, asyncJob, disk);
         switch (status) {
         case VIR_DOMAIN_BLOCK_JOB_FAILED:
             if (check) {
@@ -674,7 +676,7 @@ qemuMigrationDriveMirrorCancelled(virQEMUDriverPtr driver,
             /* fallthrough */
         case VIR_DOMAIN_BLOCK_JOB_CANCELED:
         case VIR_DOMAIN_BLOCK_JOB_COMPLETED:
-            qemuBlockJobSyncEnd(driver, vm, disk);
+            qemuBlockJobSyncEnd(driver, vm, asyncJob, disk);
             diskPriv->migrating = false;
             break;
 
@@ -722,7 +724,7 @@ qemuMigrationCancelOneDriveMirror(virQEMUDriverPtr driver,
     int status;
     int rv;
 
-    status = qemuBlockJobUpdate(driver, vm, disk);
+    status = qemuBlockJobUpdate(driver, vm, asyncJob, disk);
     switch (status) {
     case VIR_DOMAIN_BLOCK_JOB_FAILED:
     case VIR_DOMAIN_BLOCK_JOB_CANCELED:
@@ -799,12 +801,13 @@ qemuMigrationCancelDriveMirror(virQEMUDriverPtr driver,
                 err = virSaveLastError();
                 failed = true;
             }
-            qemuBlockJobSyncEnd(driver, vm, disk);
+            qemuBlockJobSyncEnd(driver, vm, asyncJob, disk);
             diskPriv->migrating = false;
         }
     }
 
-    while ((rv = qemuMigrationDriveMirrorCancelled(driver, vm, check)) != 1) {
+    while ((rv = qemuMigrationDriveMirrorCancelled(driver, vm, asyncJob,
+                                                   check)) != 1) {
         if (check && !failed &&
             dconn && virConnectIsAlive(dconn) <= 0) {
             virReportError(VIR_ERR_OPERATION_FAILED, "%s",
@@ -930,7 +933,7 @@ qemuMigrationDriveMirror(virQEMUDriverPtr driver,
         VIR_FREE(nbd_dest);
 
         if (qemuDomainObjExitMonitor(driver, vm) < 0 || mon_ret < 0) {
-            qemuBlockJobSyncEnd(driver, vm, disk);
+            qemuBlockJobSyncEnd(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, disk);
             goto cleanup;
         }
         diskPriv->migrating = true;
@@ -941,7 +944,8 @@ qemuMigrationDriveMirror(virQEMUDriverPtr driver,
         }
     }
 
-    while ((rv = qemuMigrationDriveMirrorReady(driver, vm)) != 1) {
+    while ((rv = qemuMigrationDriveMirrorReady(driver, vm,
+                                               QEMU_ASYNC_JOB_MIGRATION_OUT)) != 1) {
         if (rv < 0)
             goto cleanup;
 
@@ -1475,7 +1479,7 @@ qemuMigrationCompleted(virQEMUDriverPtr driver,
         goto error;
 
     if (flags & QEMU_MIGRATION_COMPLETED_CHECK_STORAGE &&
-        qemuMigrationDriveMirrorReady(driver, vm) < 0)
+        qemuMigrationDriveMirrorReady(driver, vm, asyncJob) < 0)
         goto error;
 
     if (flags & QEMU_MIGRATION_COMPLETED_ABORT_ON_ERROR &&
@@ -5573,7 +5577,7 @@ qemuMigrationCancel(virQEMUDriverPtr driver,
             VIR_DEBUG("Drive mirror on disk %s is still running", disk->dst);
         } else {
             VIR_DEBUG("Drive mirror on disk %s is gone", disk->dst);
-            qemuBlockJobSyncEnd(driver, vm, disk);
+            qemuBlockJobSyncEnd(driver, vm, QEMU_ASYNC_JOB_NONE, disk);
             diskPriv->migrating = false;
         }
     }
@@ -5595,7 +5599,7 @@ qemuMigrationCancel(virQEMUDriverPtr driver,
         qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk);
 
         if (diskPriv->migrating) {
-            qemuBlockJobSyncEnd(driver, vm, disk);
+            qemuBlockJobSyncEnd(driver, vm, QEMU_ASYNC_JOB_NONE, disk);
             diskPriv->migrating = false;
         }
     }
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 8323a18..a479f44 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3489,7 +3489,7 @@ qemuProcessReconnect(void *opaque)
     if (qemuProcessRefreshDisks(driver, obj, QEMU_ASYNC_JOB_NONE) < 0)
         goto error;
 
-    if (qemuBlockNodeNamesDetect(driver, obj) < 0)
+    if (qemuBlockNodeNamesDetect(driver, obj, QEMU_ASYNC_JOB_NONE) < 0)
         goto error;
 
     if (qemuRefreshVirtioChannelState(driver, obj, QEMU_ASYNC_JOB_NONE) < 0)
-- 
1.8.3.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
From nobody Tue Apr 30 17:31:53 2024
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
To: libvir-list@redhat.com
Date: Fri, 7 Apr 2017 14:06:25 +0300
Message-Id: <1491563185-335738-3-git-send-email-nshirokovskiy@virtuozzo.com>
In-Reply-To: <1491563185-335738-1-git-send-email-nshirokovskiy@virtuozzo.com>
References: <1491563185-335738-1-git-send-email-nshirokovskiy@virtuozzo.com>
Subject: [libvirt] [PATCH 2/2] qemu: migration: fix race on cancelling drive mirror
List-Id: Development discussions about the libvirt library & tools
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Commit 0feebab2 added a call to qemuBlockNodeNamesDetect when a completed
job is handled during block job updates. This affects the drive mirror
cancellation logic, because that function drops the vm lock. We now have
to recheck all disks that precede the disk with the completed block job
before going back to waiting for block job events.
---
 src/qemu/qemu_migration.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index d7ff415..769de97 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -654,9 +654,11 @@ qemuMigrationDriveMirrorCancelled(virQEMUDriverPtr driver,
 {
     size_t i;
     size_t active = 0;
+    size_t completed = 0;
     int status;
     bool failed = false;
 
+ retry:
     for (i = 0; i < vm->def->ndisks; i++) {
         virDomainDiskDefPtr disk = vm->def->disks[i];
         qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk);
@@ -683,6 +685,19 @@ qemuMigrationDriveMirrorCancelled(virQEMUDriverPtr driver,
         default:
             active++;
         }
+
+        if (status == VIR_DOMAIN_BLOCK_JOB_COMPLETED)
+            completed++;
+    }
+
+    /* Updating completed block job drops the lock thus we have to recheck
+     * block jobs for disks that reside before the disk(s) with completed
+     * block job.
+     */
+    if (completed > 0) {
+        completed = 0;
+        active = 0;
+        goto retry;
     }
 
     if (failed) {
-- 
1.8.3.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list