From: Peter Krempa
To: libvir-list@redhat.com
Subject: [PATCH 30/80] qemu: process: Remove pre-blockdev code paths
Date: Tue, 26 Jul 2022 16:37:08 +0200
Message-Id: <855f7d050d2bc7cb0adf6010136f11ec16ae72fe.1658843187.git.pkrempa@redhat.com>

Signed-off-by: Peter Krempa
Reviewed-by: Pavel Hrdina
---
 src/qemu/qemu_process.c | 182 +---------------------------------------
 1 file changed, 2 insertions(+), 180 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 980e06ce79..6083ee10d8 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -873,50 +873,6 @@ qemuProcessHandleIOError(qemuMonitor *mon G_GNUC_UNUSED,
     virObjectEventStateQueue(driver->domainEventState, lifecycleEvent);
 }
 
-static void
-qemuProcessHandleBlockJob(qemuMonitor *mon G_GNUC_UNUSED,
-                          virDomainObj *vm,
-                          const char *diskAlias,
-                          int type,
-                          int status,
-                          const char *error)
-{
-    qemuDomainObjPrivate *priv;
-    virDomainDiskDef *disk;
-    g_autoptr(qemuBlockJobData) job = NULL;
-
-    virObjectLock(vm);
-
-    priv = vm->privateData;
-
-    /* with QEMU_CAPS_BLOCKDEV we handle block job events via JOB_STATUS_CHANGE */
-    if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV))
-        goto cleanup;
-
-    VIR_DEBUG("Block job for device %s (domain: %p,%s) type %d status %d",
-              diskAlias, vm, vm->def->name, type, status);
-
-    if (!(disk = qemuProcessFindDomainDiskByAliasOrQOM(vm, diskAlias, NULL)))
-        goto cleanup;
-
-    job = qemuBlockJobDiskGetJob(disk);
-
-    if (job && job->synchronous) {
-        /* We have a SYNC API waiting for this event, dispatch it back */
-        job->newstate = status;
-        VIR_FREE(job->errmsg);
-        job->errmsg = g_strdup(error);
-        virDomainObjBroadcast(vm);
-    } else {
-        /* there is no waiting SYNC API, dispatch the update to a thread */
-        qemuProcessEventSubmit(vm, QEMU_PROCESS_EVENT_BLOCK_JOB,
-                               type, status, g_strdup(diskAlias));
-    }
-
- cleanup:
-    virObjectUnlock(vm);
-}
-
 
 static void
 qemuProcessHandleJobStatusChange(qemuMonitor *mon G_GNUC_UNUSED,
@@ -935,11 +891,6 @@ qemuProcessHandleJobStatusChange(qemuMonitor *mon G_GNUC_UNUSED,
               jobname, vm, vm->def->name,
               qemuMonitorJobStatusTypeToString(status), status);
 
-    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV)) {
-        VIR_DEBUG("job '%s' handled by old blockjob handler", jobname);
-        goto cleanup;
-    }
-
     if ((jobnewstate = qemuBlockjobConvertMonitorStatus(status)) == QEMU_BLOCKJOB_STATE_LAST)
         goto cleanup;
 
@@ -1822,7 +1773,6 @@ static qemuMonitorCallbacks monitorCallbacks = {
     .domainWatchdog = qemuProcessHandleWatchdog,
     .domainIOError = qemuProcessHandleIOError,
     .domainGraphics = qemuProcessHandleGraphics,
-    .domainBlockJob = qemuProcessHandleBlockJob,
     .jobStatusChange = qemuProcessHandleJobStatusChange,
     .domainTrayChange = qemuProcessHandleTrayChange,
     .domainPMWakeup = qemuProcessHandlePMWakeup,
@@ -6834,10 +6784,8 @@ qemuProcessPrepareHostStorage(virQEMUDriver *driver,
                               virDomainObj *vm,
                               unsigned int flags)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     size_t i;
     bool cold_boot = flags & VIR_QEMU_PROCESS_START_COLD;
-    bool blockdev = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV);
 
     for (i = vm->def->ndisks; i > 0; i--) {
         size_t idx = i - 1;
@@ -6847,7 +6795,7 @@ qemuProcessPrepareHostStorage(virQEMUDriver *driver,
             continue;
 
         /* backing chain needs to be redetected if we aren't using blockdev */
-        if (!blockdev || qemuDiskBusIsSD(disk->bus))
+        if (qemuDiskBusIsSD(disk->bus))
             virStorageSourceBackingStoreClear(disk->src);
 
         /*
@@ -7294,13 +7242,9 @@ qemuProcessSetupDiskThrottlingBlockdev(virQEMUDriver *driver,
                                        virDomainObj *vm,
                                        virDomainAsyncJob asyncJob)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     size_t i;
     int ret = -1;
 
-    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV))
-        return 0;
-
     VIR_DEBUG("Setting up disk throttling for -blockdev via block_set_io_throttle");
 
     if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
@@ -7462,11 +7406,6 @@ static int
 qemuProcessSetupDisksTransient(virDomainObj *vm,
                                virDomainAsyncJob asyncJob)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
-
-    if (!(virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV)))
-        return 0;
-
     if (qemuProcessSetupDisksTransientSnapshot(vm, asyncJob) < 0)
         return -1;
 
@@ -7886,8 +7825,6 @@ qemuProcessRefreshState(virQEMUDriver *driver,
                         virDomainObj *vm,
                         virDomainAsyncJob asyncJob)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
-
     VIR_DEBUG("Fetching list of active devices");
     if (qemuDomainUpdateDeviceList(driver, vm, asyncJob) < 0)
         return -1;
@@ -7903,9 +7840,6 @@ qemuProcessRefreshState(virQEMUDriver *driver,
     VIR_DEBUG("Updating disk data");
     if (qemuProcessRefreshDisks(driver, vm, asyncJob) < 0)
         return -1;
-    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV) &&
-        qemuBlockNodeNamesDetect(driver, vm, asyncJob) < 0)
-        return -1;
 
     return 0;
 }
@@ -8750,101 +8684,6 @@ qemuProcessRefreshCPU(virQEMUDriver *driver,
 }
 
 
-static int
-qemuProcessRefreshLegacyBlockjob(void *payload,
-                                 const char *name,
-                                 void *opaque)
-{
-    const char *jobname = name;
-    virDomainObj *vm = opaque;
-    qemuMonitorBlockJobInfo *info = payload;
-    virDomainDiskDef *disk;
-    qemuBlockJobData *job;
-    qemuBlockJobType jobtype = info->type;
-    qemuDomainObjPrivate *priv = vm->privateData;
-
-    if (!(disk = qemuProcessFindDomainDiskByAliasOrQOM(vm, jobname, jobname))) {
-        VIR_DEBUG("could not find disk for block job '%s'", jobname);
-        return 0;
-    }
-
-    if (jobtype == QEMU_BLOCKJOB_TYPE_COMMIT &&
-        disk->mirrorJob == VIR_DOMAIN_BLOCK_JOB_TYPE_ACTIVE_COMMIT)
-        jobtype = disk->mirrorJob;
-
-    if (!(job = qemuBlockJobDiskNew(vm, disk, jobtype, jobname)))
-        return -1;
-
-    if (disk->mirror) {
-        if ((!info->ready_present && info->end == info->cur) ||
-            info->ready) {
-            disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_READY;
-            job->state = VIR_DOMAIN_BLOCK_JOB_READY;
-        }
-
-        /* Pre-blockdev block copy labelled the chain of the mirrored device
-         * just before pivoting. At that point it was no longer known whether
-         * it's even necessary (e.g. disk is being reused). This code fixes
-         * the labelling in case the job was started in a libvirt version
-         * which did not label the chain when the block copy is being started.
-         * Note that we can't do much on failure. */
-        if (disk->mirrorJob == VIR_DOMAIN_BLOCK_JOB_TYPE_COPY) {
-            if (qemuDomainDetermineDiskChain(priv->driver, vm, disk,
-                                             disk->mirror, true) < 0)
-                goto cleanup;
-
-            if (disk->mirror->format &&
-                disk->mirror->format != VIR_STORAGE_FILE_RAW &&
-                (qemuDomainNamespaceSetupDisk(vm, disk->mirror, NULL) < 0 ||
-                 qemuSetupImageChainCgroup(vm, disk->mirror) < 0 ||
-                 qemuSecuritySetImageLabel(priv->driver, vm, disk->mirror,
-                                           true, true) < 0))
-                goto cleanup;
-        }
-    }
-
-    qemuBlockJobStarted(job, vm);
-
- cleanup:
-    qemuBlockJobStartupFinalize(vm, job);
-
-    return 0;
-}
-
-
-static int
-qemuProcessRefreshLegacyBlockjobs(virQEMUDriver *driver,
-                                  virDomainObj *vm)
-{
-    g_autoptr(GHashTable) blockJobs = NULL;
-
-    qemuDomainObjEnterMonitor(driver, vm);
-    blockJobs = qemuMonitorGetAllBlockJobInfo(qemuDomainGetMonitor(vm), true);
-    qemuDomainObjExitMonitor(vm);
-
-    if (!blockJobs)
-        return -1;
-
-    if (virHashForEach(blockJobs, qemuProcessRefreshLegacyBlockjob, vm) < 0)
-        return -1;
-
-    return 0;
-}
-
-
-static int
-qemuProcessRefreshBlockjobs(virQEMUDriver *driver,
-                            virDomainObj *vm)
-{
-    qemuDomainObjPrivate *priv = vm->privateData;
-
-    if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV))
-        return qemuBlockJobRefreshJobs(driver, vm);
-
-    return qemuProcessRefreshLegacyBlockjobs(driver, vm);
-}
-
-
 struct qemuProcessReconnectData {
     virQEMUDriver *driver;
     virDomainObj *obj;
@@ -8952,19 +8791,6 @@ qemuProcessReconnect(void *opaque)
 
         if (virDomainDiskTranslateSourcePool(disk) < 0)
             goto error;
-
-        /* backing chains need to be refreshed only if they could change */
-        if (priv->reconnectBlockjobs != VIR_TRISTATE_BOOL_NO &&
-            !virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV)) {
-            /* This should be the only place that calls
-             * qemuDomainDetermineDiskChain with @report_broken == false
-             * to guarantee best-effort domain reconnect */
-            virStorageSourceBackingStoreClear(disk->src);
-            if (qemuDomainDetermineDiskChain(driver, obj, disk, NULL, false) < 0)
-                goto error;
-        } else {
-            VIR_DEBUG("skipping backing chain detection for '%s'", disk->dst);
-        }
     }
 
     for (i = 0; i < obj->def->ngraphics; i++) {
@@ -9054,10 +8880,6 @@ qemuProcessReconnect(void *opaque)
             QEMU_DOMAIN_DISK_PRIVATE(disk)->transientOverlayCreated = true;
     }
 
-    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV) &&
-        qemuBlockNodeNamesDetect(driver, obj, VIR_ASYNC_JOB_NONE) < 0)
-        goto error;
-
     if (qemuRefreshVirtioChannelState(driver, obj, VIR_ASYNC_JOB_NONE) < 0)
         goto error;
 
@@ -9070,7 +8892,7 @@ qemuProcessReconnect(void *opaque)
     if (qemuProcessRecoverJob(driver, obj, &oldjob, &stopFlags) < 0)
        goto error;
 
-    if (qemuProcessRefreshBlockjobs(driver, obj) < 0)
+    if (qemuBlockJobRefreshJobs(driver, obj) < 0)
        goto error;
 
     if (qemuProcessUpdateDevices(driver, obj) < 0)
-- 
2.36.1