From nobody Mon Feb 9 06:50:24 2026
From: Peter Krempa
To: libvir-list@redhat.com
Subject: [PATCH 18/19] qemu: migration: Migrate block dirty bitmaps corresponding to checkpoints
Date: Thu, 11 Feb 2021 16:37:57 +0100
Message-Id: <4736b758acda3aa163b47294ce147b6732855421.1613057278.git.pkrempa@redhat.com>

Preserve block dirty bitmaps after migration with
QEMU_MONITOR_MIGRATE_NON_SHARED_(DISK|INC).

This patch implements functions which offer the bitmaps to the
destination, check for eligibility on the destination and then configure
the source for the migration.

Signed-off-by: Peter Krempa
---
 src/qemu/qemu_migration.c | 333 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 331 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 36424f8493..16bfad0390 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2203,6 +2203,91 @@ qemuMigrationSrcCleanup(virDomainObjPtr vm,
 }
 
 
+/**
+ * qemuMigrationSrcBeginPhaseBlockDirtyBitmaps:
+ * @mig: migration cookie struct
+ * @vm: domain object
+ * @migrate_disks: disks which are being migrated
+ * @nmigrate_disks: number of @migrate_disks
+ *
+ * Enumerates block dirty bitmaps on disks which will undergo storage migration
+ * and fills them into @mig to be offered to the destination.
+ */
+static int
+qemuMigrationSrcBeginPhaseBlockDirtyBitmaps(qemuMigrationCookiePtr mig,
+                                            virDomainObjPtr vm,
+                                            const char **migrate_disks,
+                                            size_t nmigrate_disks)
+
+{
+    GSList *disks = NULL;
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    size_t i;
+
+    g_autoptr(GHashTable) blockNamedNodeData = NULL;
+
+    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, priv->job.asyncJob)))
+        return -1;
+
+    for (i = 0; i < vm->def->ndisks; i++) {
+        qemuMigrationBlockDirtyBitmapsDiskPtr disk;
+        GSList *bitmaps = NULL;
+        virDomainDiskDefPtr diskdef = vm->def->disks[i];
+        qemuBlockNamedNodeDataPtr nodedata = virHashLookup(blockNamedNodeData, diskdef->src->nodeformat);
+        size_t j;
+
+        if (!nodedata)
+            continue;
+
+        if (migrate_disks) {
+            bool migrating = false;
+
+            for (j = 0; j < nmigrate_disks; j++) {
+                if (STREQ(migrate_disks[j], diskdef->dst)) {
+                    migrating = true;
+                    break;
+                }
+            }
+
+            if (!migrating)
+                continue;
+        }
+
+        for (j = 0; j < nodedata->nbitmaps; j++) {
+            qemuMigrationBlockDirtyBitmapsDiskBitmapPtr bitmap;
+
+            if (!qemuBlockBitmapChainIsValid(diskdef->src,
+                                             nodedata->bitmaps[j]->name,
+                                             blockNamedNodeData))
+                continue;
+
+            bitmap = g_new0(qemuMigrationBlockDirtyBitmapsDiskBitmap, 1);
+            bitmap->bitmapname = g_strdup(nodedata->bitmaps[j]->name);
+            bitmap->alias = g_strdup_printf("libvirt-%s-%s",
+                                            diskdef->dst,
+                                            nodedata->bitmaps[j]->name);
+            bitmaps = g_slist_prepend(bitmaps, bitmap);
+        }
+
+        if (!bitmaps)
+            continue;
+
+        disk = g_new0(qemuMigrationBlockDirtyBitmapsDisk, 1);
+        disk->target = g_strdup(diskdef->dst);
+        disk->bitmaps = bitmaps;
+        disks = g_slist_prepend(disks, disk);
+    }
+
+    if (!disks)
+        return 0;
+
+    mig->blockDirtyBitmaps = disks;
+    mig->flags |= QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMAPS;
+
+    return 0;
+}
+
+
 /* The caller is supposed to lock the vm and start a migration job.
  */
 static char *
 qemuMigrationSrcBeginPhase(virQEMUDriverPtr driver,
@@ -2315,6 +2400,12 @@ qemuMigrationSrcBeginPhase(virQEMUDriverPtr driver,
     if (!(mig = qemuMigrationCookieNew(vm->def, priv->origname)))
         return NULL;
 
+    if (cookieFlags & QEMU_MIGRATION_COOKIE_NBD &&
+        virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP) &&
+        qemuMigrationSrcBeginPhaseBlockDirtyBitmaps(mig, vm, migrate_disks,
+                                                    nmigrate_disks) < 0)
+        return NULL;
+
     if (qemuMigrationCookieFormat(mig, driver, vm,
                                   QEMU_MIGRATION_SOURCE,
                                   cookieout, cookieoutlen,
@@ -2528,6 +2619,92 @@ qemuMigrationDstPrepare(virDomainObjPtr vm,
                                    migrateFrom, fd, NULL);
 }
 
+
+/**
+ * qemuMigrationDstPrepareAnyBlockDirtyBitmaps:
+ * @vm: domain object
+ * @mig: migration cookie
+ * @migParams: migration parameters
+ * @flags: migration flags
+ *
+ * Checks whether block dirty bitmaps offered by the migration source are
+ * to be migrated (e.g. they don't exist, the destination is compatible etc)
+ * and sets up destination qemu for migrating the bitmaps as well as updates the
+ * list of eligible bitmaps in the migration cookie to be sent back to the
+ * source.
+ */
+static int
+qemuMigrationDstPrepareAnyBlockDirtyBitmaps(virDomainObjPtr vm,
+                                            qemuMigrationCookiePtr mig,
+                                            qemuMigrationParamsPtr migParams,
+                                            unsigned int flags)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    g_autoptr(virJSONValue) mapping = NULL;
+    g_autoptr(GHashTable) blockNamedNodeData = NULL;
+    GSList *nextdisk;
+
+    if (!mig->nbd ||
+        !mig->blockDirtyBitmaps ||
+        !(flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC)) ||
+        !virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
+        return 0;
+
+    if (qemuMigrationCookieBlockDirtyBitmapsMatchDisks(vm->def, mig->blockDirtyBitmaps) < 0)
+        return -1;
+
+    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_JOB_MIGRATION_IN)))
+        return -1;
+
+    for (nextdisk = mig->blockDirtyBitmaps; nextdisk; nextdisk = nextdisk->next) {
+        qemuMigrationBlockDirtyBitmapsDiskPtr disk = nextdisk->data;
+        qemuBlockNamedNodeDataPtr nodedata;
+        GSList *nextbitmap;
+
+        if (!(nodedata = virHashLookup(blockNamedNodeData, disk->nodename))) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("failed to find data for block node '%s'"),
+                           disk->nodename);
+            return -1;
+        }
+
+        /* don't migrate bitmaps into non-qcow2v3+ images */
+        if (disk->disk->src->format != VIR_STORAGE_FILE_QCOW2 ||
+            nodedata->qcow2v2) {
+            disk->skip = true;
+            continue;
+        }
+
+        for (nextbitmap = disk->bitmaps; nextbitmap; nextbitmap = nextbitmap->next) {
+            qemuMigrationBlockDirtyBitmapsDiskBitmapPtr bitmap = nextbitmap->data;
+            size_t k;
+
+            /* don't migrate into existing bitmaps */
+            for (k = 0; k < nodedata->nbitmaps; k++) {
+                if (STREQ(bitmap->bitmapname, nodedata->bitmaps[k]->name)) {
+                    bitmap->skip = true;
+                    break;
+                }
+            }
+
+            if (bitmap->skip)
+                continue;
+        }
+    }
+
+    if (qemuMigrationCookieBlockDirtyBitmapsToParams(mig->blockDirtyBitmaps,
+                                                     &mapping) < 0)
+        return -1;
+
+    if (!mapping)
+        return 0;
+
+    qemuMigrationParamsSetBlockDirtyBitmapMapping(migParams, &mapping);
+    mig->flags |= QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMAPS;
+    return 0;
+}
+
+
 static int
 qemuMigrationDstPrepareAny(virQEMUDriverPtr driver,
                            virConnectPtr dconn,
@@ -2677,7 +2854,8 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver,
                                                QEMU_MIGRATION_COOKIE_CPU_HOTPLUG |
                                                QEMU_MIGRATION_COOKIE_CPU |
                                                QEMU_MIGRATION_COOKIE_ALLOW_REBOOT |
-                                               QEMU_MIGRATION_COOKIE_CAPS)))
+                                               QEMU_MIGRATION_COOKIE_CAPS |
+                                               QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMAPS)))
         goto cleanup;
 
     if (!(vm = virDomainObjListAdd(driver->domains, *def,
@@ -2770,6 +2948,9 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver,
         goto stopjob;
     }
 
+    if (qemuMigrationDstPrepareAnyBlockDirtyBitmaps(vm, mig, migParams, flags) < 0)
+        goto stopjob;
+
     if (qemuMigrationParamsCheck(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN,
                                  migParams, mig->caps->automatic) < 0)
         goto stopjob;
@@ -3653,6 +3834,145 @@ qemuMigrationSetDBusVMState(virQEMUDriverPtr driver,
 }
 
 
+/**
+ * qemuMigrationSrcRunPrepareBlockDirtyBitmapsMerge:
+ * @vm: domain object
+ * @mig: migration cookie
+ *
+ * When migrating full disks, which means that the backing chain of the disk
+ * will be squashed into a single image, we need to calculate bitmaps
+ * corresponding to the checkpoints which express the same set of changes
+ * for migration.
+ *
+ * This function prepares temporary bitmaps and corresponding merges, updates
+ * the data so that the temporary bitmaps are used and registers the temporary
+ * bitmaps for deletion on failed migration.
+ */
+static int
+qemuMigrationSrcRunPrepareBlockDirtyBitmapsMerge(virDomainObjPtr vm,
+                                                 qemuMigrationCookiePtr mig)
+{
+    g_autoslist(qemuDomainJobPrivateMigrateTempBitmap) tmpbitmaps = NULL;
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
+    virQEMUDriverPtr driver = priv->driver;
+    g_autoptr(virJSONValue) actions = virJSONValueNewArray();
+    g_autoptr(GHashTable) blockNamedNodeData = NULL;
+    GSList *nextdisk;
+    int rc;
+
+    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, QEMU_ASYNC_JOB_MIGRATION_OUT)))
+        return -1;
+
+    for (nextdisk = mig->blockDirtyBitmaps; nextdisk; nextdisk = nextdisk->next) {
+        qemuMigrationBlockDirtyBitmapsDiskPtr disk = nextdisk->data;
+        GSList *nextbitmap;
+
+        /* if a disk doesn't have a backing chain we don't need the code below */
+        if (!virStorageSourceHasBacking(disk->disk->src))
+            continue;
+
+        for (nextbitmap = disk->bitmaps; nextbitmap; nextbitmap = nextbitmap->next) {
+            qemuMigrationBlockDirtyBitmapsDiskBitmapPtr bitmap = nextbitmap->data;
+            qemuDomainJobPrivateMigrateTempBitmapPtr tmpbmp;
+            virStorageSourcePtr n;
+            unsigned long long granularity = 0;
+            g_autoptr(virJSONValue) merge = virJSONValueNewArray();
+
+            for (n = disk->disk->src; virStorageSourceIsBacking(n); n = n->backingStore) {
+                qemuBlockNamedNodeDataBitmapPtr b;
+
+                if (!(b = qemuBlockNamedNodeDataGetBitmapByName(blockNamedNodeData, n,
+                                                                bitmap->bitmapname)))
+                    break;
+
+                if (granularity == 0)
+                    granularity = b->granularity;
+
+                if (qemuMonitorTransactionBitmapMergeSourceAddBitmap(merge,
+                                                                     n->nodeformat,
+                                                                     b->name) < 0)
+                    return -1;
+            }
+
+            bitmap->sourcebitmap = g_strdup_printf("libvirt-migration-%s", bitmap->alias);
+            bitmap->persistent = VIR_TRISTATE_BOOL_YES;
+
+            if (qemuMonitorTransactionBitmapAdd(actions,
+                                                disk->disk->src->nodeformat,
+                                                bitmap->sourcebitmap,
+                                                false, false, granularity) < 0)
+                return -1;
+
+            if (qemuMonitorTransactionBitmapMerge(actions,
+                                                  disk->disk->src->nodeformat,
+                                                  bitmap->sourcebitmap,
+                                                  &merge) < 0)
+                return -1;
+
+            tmpbmp = g_new0(qemuDomainJobPrivateMigrateTempBitmap, 1);
+            tmpbmp->nodename = g_strdup(disk->disk->src->nodeformat);
+            tmpbmp->bitmapname = g_strdup(bitmap->sourcebitmap);
+            tmpbitmaps = g_slist_prepend(tmpbitmaps, tmpbmp);
+        }
+    }
+
+    if (qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
+        return -1;
+
+    rc = qemuMonitorTransaction(priv->mon, &actions);
+
+    if (qemuDomainObjExitMonitor(driver, vm) < 0 || rc < 0)
+        return -1;
+
+    jobPriv->migTempBitmaps = g_steal_pointer(&tmpbitmaps);
+
+    return 0;
+}
+
+
+/**
+ * qemuMigrationSrcRunPrepareBlockDirtyBitmaps:
+ * @vm: domain object
+ * @mig: migration cookie
+ * @migParams: migration parameters
+ * @flags: migration flags
+ *
+ * Configures the source for bitmap migration when the destination asks
+ * for bitmaps.
+ */
+static int
+qemuMigrationSrcRunPrepareBlockDirtyBitmaps(virDomainObjPtr vm,
+                                            qemuMigrationCookiePtr mig,
+                                            qemuMigrationParamsPtr migParams,
+                                            unsigned int flags)
+
+{
+    g_autoptr(virJSONValue) mapping = NULL;
+
+    if (!mig->blockDirtyBitmaps)
+        return 0;
+
+    if (qemuMigrationCookieBlockDirtyBitmapsMatchDisks(vm->def, mig->blockDirtyBitmaps) < 0)
+        return -1;
+
+    /* For QEMU_MONITOR_MIGRATE_NON_SHARED_INC we can migrate the bitmaps
+     * directly, otherwise we must create merged bitmaps from the whole
+     * chain */
+
+    if (!(flags & QEMU_MONITOR_MIGRATE_NON_SHARED_INC) &&
+        qemuMigrationSrcRunPrepareBlockDirtyBitmapsMerge(vm, mig))
+        return -1;
+
+    if (qemuMigrationCookieBlockDirtyBitmapsToParams(mig->blockDirtyBitmaps,
+                                                     &mapping) < 0)
+        return -1;
+
+    qemuMigrationParamsSetBlockDirtyBitmapMapping(migParams, &mapping);
+    return 0;
+}
+
+
 static int
 qemuMigrationSrcRun(virQEMUDriverPtr driver,
                     virDomainObjPtr vm,
@@ -3709,6 +4029,10 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver,
         cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
     }
 
+    if (cookieFlags & QEMU_MIGRATION_COOKIE_NBD &&
+        virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
+        cookieFlags |= QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMAPS;
+
     if (virLockManagerPluginUsesState(driver->lockManager) &&
         !cookieout) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
@@ -3741,13 +4065,18 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver,
                                cookiein, cookieinlen,
                                cookieFlags |
                                QEMU_MIGRATION_COOKIE_GRAPHICS |
-                               QEMU_MIGRATION_COOKIE_CAPS);
+                               QEMU_MIGRATION_COOKIE_CAPS |
+                               QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMAPS);
     if (!mig)
         goto error;
 
     if (qemuMigrationSrcGraphicsRelocate(driver, vm, mig, graphicsuri) < 0)
         VIR_WARN("unable to provide data for graphics client relocation");
 
+    if (mig->blockDirtyBitmaps &&
+        qemuMigrationSrcRunPrepareBlockDirtyBitmaps(vm, mig, migParams, flags) < 0)
+        goto error;
+
     if (qemuMigrationParamsCheck(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT,
                                  migParams, mig->caps->automatic) < 0)
         goto error;
-- 
2.29.2