From: Pavel Mores
To: libvir-list@redhat.com
Subject: [libvirt PATCH v2 08/10] qemu: block: add function to wait for blockcommits and collect results
Date: Wed, 6 May 2020 13:42:24 +0200
Message-Id: <20200506114226.2538196-9-pmores@redhat.com>
In-Reply-To: <20200506114226.2538196-1-pmores@redhat.com>
References: <20200506114226.2538196-1-pmores@redhat.com>

This is the third phase of snapshot deletion.  The blockcommits that
delete the snapshot have been launched; now we can wait for them to
finish, check their results and report any errors.

Signed-off-by: Pavel Mores
---
 src/qemu/qemu_driver.c | 59 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 35b7fb69d5..a2629e9002 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -16941,6 +16941,65 @@ qemuDomainSnapshotDeleteExternalLaunchJobs(virDomainObjPtr vm,
 }
 
 
+static int
+qemuDomainSnapshotDeleteExternalWaitForJobs(virDomainObjPtr vm,
+                                            virQEMUDriverPtr driver,
+                                            const virBlockCommitDesc *blockCommitDescs,
+                                            int numDescs)
+{
+    size_t i;
+
+    for (i = 0; i < numDescs; i++) {
+        virDomainDiskDefPtr disk = blockCommitDescs[i].disk;
+        bool isActive = blockCommitDescs[i].isActive;
+
+        /* wait for the blockcommit job to finish (in particular, reach
+         * one of the finished QEMU_BLOCKJOB_STATE_* states)...
+         */
+        g_autoptr(qemuBlockJobData) job = NULL;
+
+        if (!(job = qemuBlockJobDiskGetJob(disk))) {
+            virReportError(VIR_ERR_OPERATION_INVALID,
+                           _("disk %s does not have an active block job"), disk->dst);
+            return -1;
+        }
+
+        qemuBlockJobSyncBegin(job);
+        qemuBlockJobUpdate(vm, job, QEMU_ASYNC_JOB_NONE);
+        while (job->state != QEMU_BLOCKJOB_STATE_READY &&
+               job->state != QEMU_BLOCKJOB_STATE_FAILED &&
+               job->state != QEMU_BLOCKJOB_STATE_CANCELLED &&
+               job->state != QEMU_BLOCKJOB_STATE_COMPLETED) {
+
+            if (virDomainObjWait(vm) < 0)
+                return -1;
+            qemuBlockJobUpdate(vm, job, QEMU_ASYNC_JOB_NONE);
+        }
+        qemuBlockJobSyncEnd(vm, job, QEMU_ASYNC_JOB_NONE);
+
+        if ((isActive && job->state != QEMU_BLOCKJOB_STATE_READY) ||
+            (!isActive && job->state != QEMU_BLOCKJOB_STATE_COMPLETED)) {
+            virReportError(VIR_ERR_OPERATION_INVALID,
+                           _("blockcommit job failed for disk %s"), disk->dst);
+            /* TODO Apr 30, 2020: how to handle this? Bailing out doesn't
+             * seem an obvious option in this case as all blockjobs are now
+             * created and running - if any of them are to fail they will,
+             * regardless of whether we break here. It might make more
+             * sense to continue and at least report all errors. */
+            /*return -1;*/
+        }
+
+        /* ... and pivot if necessary */
+        if (isActive) {
+            if (qemuDomainBlockJobAbortImpl(driver, vm, disk->dst,
+                                            VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT) < 0)
+                return -1;
+        }
+    }
+
+    return 0;
+}
+
+
 static int
 qemuDomainSnapshotDelete(virDomainSnapshotPtr snapshot,
                          unsigned int flags)
-- 
2.24.1