From: Jiri Denemark
To: libvir-list@redhat.com
Subject: [libvirt PATCH 28/51] qemu/qemu_migration: Update format strings in translated messages
Date: Fri, 10 Mar 2023 17:09:44 +0100
Message-Id: <91bcdd3de20aa2c79ae253b01594d14062e776ae.1678463799.git.jdenemar@redhat.com>

Signed-off-by: Jiri Denemark
---
 src/qemu/qemu_migration.c | 105 +++++++++++++++++++-------------------
 1 file changed, 52 insertions(+), 53 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 2720f0b083..a18910f7ad 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -90,7 +90,7 @@ qemuMigrationJobIsAllowed(virDomainObj *vm)
     if (vm->job->asyncJob == VIR_ASYNC_JOB_MIGRATION_IN ||
         vm->job->asyncJob == VIR_ASYNC_JOB_MIGRATION_OUT) {
         virReportError(VIR_ERR_OPERATION_INVALID,
-                       _("another migration job is already running for domain '%s'"),
+                       _("another migration job is already running for domain '%1$s'"),
                        vm->def->name);
         return false;
     }
@@ -139,7 +139,7 @@ qemuMigrationCheckPhase(virDomainObj *vm,
     if (phase < QEMU_MIGRATION_PHASE_POSTCOPY_FAILED &&
         phase < vm->job->phase) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("migration protocol going backwards %s => %s"),
+                       _("migration protocol going backwards %1$s => %2$s"),
                        qemuMigrationJobPhaseTypeToString(vm->job->phase),
                        qemuMigrationJobPhaseTypeToString(phase));
         return -1;
@@ -190,9 +190,9 @@ qemuMigrationJobIsActive(virDomainObj *vm,
         const char *msg;
 
         if (job == VIR_ASYNC_JOB_MIGRATION_IN)
-            msg = _("domain '%s' is not processing incoming migration");
+            msg = _("domain '%1$s' is not processing incoming migration");
         else
-            msg = _("domain '%s' is not being migrated");
+            msg = _("domain '%1$s' is not being migrated");
 
         virReportError(VIR_ERR_OPERATION_INVALID, msg, vm->def->name);
         return false;
@@ -250,7 +250,7 @@ qemuMigrationSrcRestoreDomainState(virQEMUDriver *driver, virDomainObj *vm)
             /* Hm, we already know we are in error here.  We don't want to
              * overwrite the previous error, though, so we just throw something
              * to the logs and hope for the best */
-            VIR_ERROR(_("Failed to resume guest %s after failure"), vm->def->name);
+            VIR_ERROR(_("Failed to resume guest %1$s after failure"), vm->def->name);
             goto cleanup;
         }
         ret = true;
@@ -291,7 +291,7 @@ qemuMigrationDstPrecreateDisk(virConnectPtr *conn,
 
     if (!(volName = strrchr(basePath, '/'))) {
         virReportError(VIR_ERR_INVALID_ARG,
-                       _("malformed disk path: %s"),
+                       _("malformed disk path: %1$s"),
                        disk->src->path);
         goto cleanup;
     }
@@ -340,7 +340,7 @@ qemuMigrationDstPrecreateDisk(virConnectPtr *conn,
     case VIR_STORAGE_TYPE_NONE:
     case VIR_STORAGE_TYPE_LAST:
         virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("cannot precreate storage for disk type '%s'"),
+                       _("cannot precreate storage for disk type '%1$s'"),
                        virStorageTypeToString(disk->src->type));
         goto cleanup;
     }
@@ -446,7 +446,7 @@ qemuMigrationDstPrecreateStorage(virDomainObj *vm,
 
         if (!(disk = virDomainDiskByTarget(vm->def, nbd->disks[i].target))) {
             virReportError(VIR_ERR_INTERNAL_ERROR,
-                           _("unable to find disk by target: %s"),
+                           _("unable to find disk by target: %1$s"),
                            nbd->disks[i].target);
             goto cleanup;
         }
@@ -526,7 +526,7 @@ qemuMigrationDstStartNBDServer(virQEMUDriver *driver,
         return -1;
 
     if (!uri->scheme) {
-        virReportError(VIR_ERR_INVALID_ARG, _("No URI scheme specified: %s"), nbdURI);
+        virReportError(VIR_ERR_INVALID_ARG, _("No URI scheme specified: %1$s"), nbdURI);
         return -1;
     }
 
@@ -537,7 +537,7 @@ qemuMigrationDstStartNBDServer(virQEMUDriver *driver,
                  * we should rather error out instead of auto-allocating a port
                  * as that would be the exact opposite of what was requested. */
                 virReportError(VIR_ERR_INVALID_ARG,
-                               _("URI with tcp scheme did not provide a server part: %s"),
+                               _("URI with tcp scheme did not provide a server part: %1$s"),
                                nbdURI);
                 return -1;
             }
@@ -554,7 +554,7 @@ qemuMigrationDstStartNBDServer(virQEMUDriver *driver,
         server.socket = (char *)uri->path;
     } else {
         virReportError(VIR_ERR_INVALID_ARG,
-                       _("Unsupported scheme in disks URI: %s"),
+                       _("Unsupported scheme in disks URI: %1$s"),
                        uri->scheme);
         return -1;
     }
@@ -574,7 +574,7 @@ qemuMigrationDstStartNBDServer(virQEMUDriver *driver,
 
         if (disk->src->readonly || virStorageSourceIsEmpty(disk->src)) {
             virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
-                           _("Cannot migrate empty or read-only disk %s"),
+                           _("Cannot migrate empty or read-only disk %1$s"),
                            disk->dst);
             goto cleanup;
         }
@@ -655,11 +655,11 @@ qemuMigrationNBDReportMirrorError(qemuBlockJobData *job,
 {
     if (job->errmsg) {
         virReportError(VIR_ERR_OPERATION_FAILED,
-                       _("migration of disk %s failed: %s"),
+                       _("migration of disk %1$s failed: %2$s"),
                        diskdst, job->errmsg);
     } else {
         virReportError(VIR_ERR_OPERATION_FAILED,
-                       _("migration of disk %s failed"), diskdst);
+                       _("migration of disk %1$s failed"), diskdst);
     }
 }
 
@@ -692,7 +692,7 @@ qemuMigrationSrcNBDStorageCopyReady(virDomainObj *vm,
 
         if (!(job = qemuBlockJobDiskGetJob(disk))) {
             virReportError(VIR_ERR_INTERNAL_ERROR,
-                           _("missing block job data for disk '%s'"), disk->dst);
+                           _("missing block job data for disk '%1$s'"), disk->dst);
             return -1;
         }
 
@@ -1158,7 +1158,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *driver,
 
     if (mirror_speed > LLONG_MAX >> 20) {
         virReportError(VIR_ERR_OVERFLOW,
-                       _("bandwidth must be less than %llu"),
+                       _("bandwidth must be less than %1$llu"),
                        LLONG_MAX >> 20);
         return -1;
     }
@@ -1201,7 +1201,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *driver,
         return -1;
     } else {
         virReportError(VIR_ERR_INVALID_ARG,
-                       _("Unsupported scheme in disks URI: %s"),
+                       _("Unsupported scheme in disks URI: %1$s"),
                        uri->scheme);
         return -1;
     }
@@ -1232,7 +1232,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *driver,
 
     if (vm->job->abortJob) {
         vm->job->current->status = VIR_DOMAIN_JOB_STATUS_CANCELED;
-        virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"),
+        virReportError(VIR_ERR_OPERATION_ABORTED, _("%1$s: %2$s"),
                        virDomainAsyncJobTypeToString(vm->job->asyncJob),
                        _("canceled by client"));
         return -1;
@@ -1286,7 +1286,7 @@ qemuMigrationSrcIsAllowedHostdev(const virDomainDef *def)
         case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_SCSI_HOST:
         case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_MDEV:
             virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
-                           _("cannot migrate a domain with <hostdev mode='subsystem' type='%s'>"),
+                           _("cannot migrate a domain with <hostdev mode='subsystem' type='%1$s'>"),
                            virDomainHostdevSubsysTypeToString(hostdev->source.subsys.type));
             return false;
 
@@ -1311,11 +1311,11 @@ qemuMigrationSrcIsAllowedHostdev(const virDomainDef *def)
                 virDomainNetType actualType = virDomainNetGetActualType(hostdev->parentnet);
 
                 virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
-                               _("cannot migrate a domain with <interface type='%s'>"),
+                               _("cannot migrate a domain with <interface type='%1$s'>"),
                                virDomainNetTypeToString(actualType));
             } else {
                 virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
-                               _("cannot migrate a domain with <hostdev mode='subsystem' type='%s'>"),
+                               _("cannot migrate a domain with <hostdev mode='subsystem' type='%1$s'>"),
                                virDomainHostdevSubsysTypeToString(hostdev->source.subsys.type));
             }
             return false;
@@ -1388,7 +1388,7 @@ qemuMigrationSrcIsAllowed(virDomainObj *vm,
 
     if (nsnapshots > 0) {
         virReportError(VIR_ERR_OPERATION_INVALID,
-                       _("cannot migrate domain with %d snapshots"),
+                       _("cannot migrate domain with %1$d snapshots"),
                        nsnapshots);
         return false;
     }
@@ -1412,7 +1412,7 @@ qemuMigrationSrcIsAllowed(virDomainObj *vm,
         if (blockers && blockers[0]) {
             g_autofree char *reasons = g_strjoinv("; ", blockers);
             virReportError(VIR_ERR_OPERATION_INVALID,
-                           _("cannot migrate domain: %s"), reasons);
+                           _("cannot migrate domain: %1$s"), reasons);
             return false;
         }
     } else {
@@ -1500,8 +1500,7 @@ qemuMigrationSrcIsAllowed(virDomainObj *vm,
             }
             if (shmem->role != VIR_DOMAIN_SHMEM_ROLE_MASTER) {
                 virReportError(VIR_ERR_OPERATION_INVALID,
-                               _("shmem device '%s' cannot be migrated, "
-                                 "only shmem with role='%s' can be migrated"),
+                               _("shmem device '%1$s' cannot be migrated, only shmem with role='%2$s' can be migrated"),
                                shmem->name,
                                virDomainShmemRoleTypeToString(VIR_DOMAIN_SHMEM_ROLE_MASTER));
                 return false;
@@ -1860,31 +1859,31 @@ qemuMigrationJobCheckStatus(virDomainObj *vm,
     switch (jobData->status) {
     case VIR_DOMAIN_JOB_STATUS_NONE:
         virReportError(VIR_ERR_OPERATION_FAILED,
-                       _("job '%s' is not active"),
+                       _("job '%1$s' is not active"),
                        qemuMigrationJobName(vm));
         return -1;
 
     case VIR_DOMAIN_JOB_STATUS_FAILED:
         if (error) {
             virReportError(VIR_ERR_OPERATION_FAILED,
-                           _("job '%s' failed: %s"),
+                           _("job '%1$s' failed: %2$s"),
                            qemuMigrationJobName(vm), error);
         } else {
             virReportError(VIR_ERR_OPERATION_FAILED,
-                           _("job '%s' unexpectedly failed"),
+                           _("job '%1$s' unexpectedly failed"),
                            qemuMigrationJobName(vm));
         }
         return -1;
 
     case VIR_DOMAIN_JOB_STATUS_CANCELED:
         virReportError(VIR_ERR_OPERATION_ABORTED,
-                       _("job '%s' canceled by client"),
+                       _("job '%1$s' canceled by client"),
                        qemuMigrationJobName(vm));
         return -1;
 
     case VIR_DOMAIN_JOB_STATUS_POSTCOPY_PAUSED:
         virReportError(VIR_ERR_OPERATION_FAILED,
-                       _("job '%s' failed in post-copy phase"),
+                       _("job '%1$s' failed in post-copy phase"),
                        qemuMigrationJobName(vm));
         return -1;
 
@@ -1937,7 +1936,7 @@ qemuMigrationAnyCompleted(virDomainObj *vm,
         virDomainObjGetState(vm, &pauseReason) == VIR_DOMAIN_PAUSED &&
         pauseReason == VIR_DOMAIN_PAUSED_IOERROR) {
         virReportError(VIR_ERR_OPERATION_FAILED,
-                       _("job '%s' failed due to I/O error"),
+                       _("job '%1$s' failed due to I/O error"),
                        qemuMigrationJobName(vm));
         goto error;
     }
@@ -2109,7 +2108,7 @@ qemuMigrationSrcGraphicsRelocate(virDomainObj *vm,
 
     if ((type = virDomainGraphicsTypeFromString(uri->scheme)) < 0) {
         virReportError(VIR_ERR_INVALID_ARG,
-                       _("unknown graphics type %s"), uri->scheme);
+                       _("unknown graphics type %1$s"), uri->scheme);
         return -1;
     }
 
@@ -2124,7 +2123,7 @@ qemuMigrationSrcGraphicsRelocate(virDomainObj *vm,
         if (STRCASEEQ(param->name, "tlsPort")) {
             if (virStrToLong_i(param->value, NULL, 10, &tlsPort) < 0) {
                 virReportError(VIR_ERR_INVALID_ARG,
-                               _("invalid tlsPort number: %s"),
+                               _("invalid tlsPort number: %1$s"),
                                param->value);
                 return -1;
             }
@@ -2180,8 +2179,8 @@ qemuMigrationDstOPDRelocate(virQEMUDriver *driver G_GNUC_UNUSED,
             if (virNetDevOpenvswitchSetMigrateData(cookie->network->net[i].portdata,
                                                    netptr->ifname) != 0) {
                 virReportError(VIR_ERR_INTERNAL_ERROR,
-                               _("Unable to run command to set OVS port data for "
-                                 "interface %s"), netptr->ifname);
+                               _("Unable to run command to set OVS port data for interface %1$s"),
+                               netptr->ifname);
                 return -1;
             }
             break;
@@ -2621,7 +2620,7 @@ qemuMigrationSrcBeginPhase(virQEMUDriver *driver,
 
         if (j == vm->def->ndisks) {
             virReportError(VIR_ERR_INVALID_ARG,
-                           _("disk target %s not found"),
+                           _("disk target %1$s not found"),
                            migrate_disks[i]);
             return NULL;
         }
@@ -2688,14 +2687,14 @@ qemuMigrationAnyCanResume(virDomainObj *vm,
     if (vm->job->asyncOwner != 0 &&
         vm->job->asyncOwner != virThreadSelfID()) {
         virReportError(VIR_ERR_OPERATION_INVALID,
-                       _("migration of domain %s is being actively monitored by another thread"),
+                       _("migration of domain %1$s is being actively monitored by another thread"),
                        vm->def->name);
         return false;
     }
 
     if (!virDomainObjIsPostcopy(vm, vm->job)) {
         virReportError(VIR_ERR_OPERATION_INVALID,
-                       _("migration of domain %s is not in post-copy phase"),
+                       _("migration of domain %1$s is not in post-copy phase"),
                        vm->def->name);
         return false;
     }
@@ -2703,14 +2702,14 @@ qemuMigrationAnyCanResume(virDomainObj *vm,
     if (vm->job->phase < QEMU_MIGRATION_PHASE_POSTCOPY_FAILED &&
         !virDomainObjIsFailedPostcopy(vm, vm->job)) {
         virReportError(VIR_ERR_OPERATION_INVALID,
-                       _("post-copy migration of domain %s has not failed"),
+                       _("post-copy migration of domain %1$s has not failed"),
                        vm->def->name);
         return false;
     }
 
     if (vm->job->phase > expectedPhase) {
         virReportError(VIR_ERR_OPERATION_INVALID,
-                       _("resuming failed post-copy migration of domain %s already in progress"),
+                       _("resuming failed post-copy migration of domain %1$s already in progress"),
                        vm->def->name);
         return false;
     }
@@ -2997,7 +2996,7 @@ qemuMigrationDstPrepareAnyBlockDirtyBitmaps(virDomainObj *vm,
 
         if (!(nodedata = virHashLookup(blockNamedNodeData, disk->nodename))) {
             virReportError(VIR_ERR_INTERNAL_ERROR,
-                           _("failed to find data for block node '%s'"),
+                           _("failed to find data for block node '%1$s'"),
                            disk->nodename);
             return -1;
         }
@@ -3443,7 +3442,7 @@ qemuMigrationDstPrepareResume(virQEMUDriver *driver,
     vm = virDomainObjListFindByName(driver->domains, def->name);
     if (!vm) {
         virReportError(VIR_ERR_NO_DOMAIN,
-                       _("no domain with matching name '%s'"), def->name);
+                       _("no domain with matching name '%1$s'"), def->name);
         qemuMigrationDstErrorReport(driver, def->name);
         return -1;
     }
@@ -3746,7 +3745,7 @@ qemuMigrationDstPrepareDirect(virQEMUDriver *driver,
 
     if (uri->scheme == NULL) {
         virReportError(VIR_ERR_INVALID_ARG,
-                       _("missing scheme in migration URI: %s"),
+                       _("missing scheme in migration URI: %1$s"),
                        uri_in);
         goto cleanup;
     }
@@ -3755,7 +3754,7 @@ qemuMigrationDstPrepareDirect(virQEMUDriver *driver,
         STRNEQ(uri->scheme, "rdma") &&
         STRNEQ(uri->scheme, "unix")) {
         virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
-                       _("unsupported scheme %s in migration URI %s"),
+                       _("unsupported scheme %1$s in migration URI %2$s"),
                        uri->scheme, uri_in);
         goto cleanup;
     }
@@ -3765,8 +3764,9 @@ qemuMigrationDstPrepareDirect(virQEMUDriver *driver,
         listenAddress = uri->path;
     } else {
         if (uri->server == NULL) {
-            virReportError(VIR_ERR_INVALID_ARG, _("missing host in migration"
-                                                  " URI: %s"), uri_in);
+            virReportError(VIR_ERR_INVALID_ARG,
+                           _("missing host in migration URI: %1$s"),
+                           uri_in);
             goto cleanup;
         }
 
@@ -4332,7 +4332,7 @@ qemuMigrationSrcConnect(virQEMUDriver *driver,
 
     /* Migration expects a blocking FD */
     if (virSetBlocking(spec->dest.fd.qemu, true) < 0) {
-        virReportSystemError(errno, _("Unable to set FD %d blocking"),
+        virReportSystemError(errno, _("Unable to set FD %1$d blocking"),
                              spec->dest.fd.qemu);
         goto cleanup;
     }
@@ -4604,7 +4604,7 @@ qemuMigrationSrcStart(virDomainObj *vm,
     }
 
     virReportError(VIR_ERR_INTERNAL_ERROR,
-                   _("unexpected migration schema: %d"), spec->destType);
+                   _("unexpected migration schema: %1$d"), spec->destType);
     return -1;
 }
 
@@ -4745,8 +4745,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver,
     if (virLockManagerPluginUsesState(driver->lockManager) &&
         !cookieout) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("Migration with lock driver %s requires"
-                         " cookie support"),
+                       _("Migration with lock driver %1$s requires cookie support"),
                        virLockManagerPluginGetName(driver->lockManager));
         return -1;
     }
@@ -4884,7 +4883,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver,
          * as this is a critical section so we are guaranteed
          * vm->job->abortJob will not change */
         vm->job->current->status = VIR_DOMAIN_JOB_STATUS_CANCELED;
-        virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"),
+        virReportError(VIR_ERR_OPERATION_ABORTED, _("%1$s: %2$s"),
                        virDomainAsyncJobTypeToString(vm->job->asyncJob),
                        _("canceled by client"));
         goto exit_monitor;
@@ -5130,7 +5129,7 @@ qemuMigrationSrcPerformNative(virQEMUDriver *driver,
 
     if (uribits->scheme == NULL) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("missing scheme in migration URI: %s"),
+                       _("missing scheme in migration URI: %1$s"),
                        uri);
         return -1;
     }
@@ -6302,7 +6301,7 @@ qemuMigrationDstVPAssociatePortProfiles(virDomainDef *def)
                                            VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_FINISH,
                                            false) < 0) {
                 virReportError(VIR_ERR_OPERATION_FAILED,
-                               _("Port profile Associate failed for %s"),
+                               _("Port profile Associate failed for %1$s"),
                                net->ifname);
                 goto err_exit;
             }
-- 
2.39.2
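
A side note for reviewers on what this mechanical conversion buys
(illustration only, not part of the patch): a plain "%s ... %s" format
forces every translation to consume the arguments in the order the
English sentence happens to use, while the POSIX positional forms
"%1$s", "%2$s" bind each conversion to an argument index, so a
translated format string can mention the arguments in whatever order
its grammar needs. A minimal standalone sketch, assuming a libc whose
printf supports POSIX positional parameters (e.g. glibc); the messages
and the mock French rendering are invented for this example:

    #include <stdio.h>

    int main(void)
    {
        const char *dom = "guest1";          /* domain name */
        const char *err = "disk is locked";  /* error detail */

        /* English order: domain first, then the error detail. */
        printf("migration of domain '%1$s' failed: %2$s\n", dom, err);

        /* A translation may want the error first; the numbered
         * conversions reorder the arguments without any change to
         * the C call site. */
        printf("%2$s : échec de la migration du domaine '%1$s'\n",
               dom, err);

        return 0;
    }

POSIX also leaves the behavior undefined when numbered and unnumbered
conversions are mixed in one format string, which is presumably why the
patch converts every conversion in each touched message, including ones
that take a single argument and could never be reordered (e.g.
"%1$llu").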