From nobody Tue Feb 10 02:28:01 2026
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, fam@euphon.net, vsementsov@virtuozzo.com, armbru@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com, andrey.shinkevich@virtuozzo.com, den@openvz.org, mreitz@redhat.com, jsnow@redhat.com
Subject: [PATCH v4 15/15] block: apply COR-filter to block-stream jobs
Date: Tue, 12 May 2020 19:50:45 +0300
Message-Id: <1589302245-893269-16-git-send-email-andrey.shinkevich@virtuozzo.com>
In-Reply-To: <1589302245-893269-1-git-send-email-andrey.shinkevich@virtuozzo.com>
References: <1589302245-893269-1-git-send-email-andrey.shinkevich@virtuozzo.com>

This patch completes the series by applying the COR-filter to
any block-stream operation. It also updates the iotests 030 and 141.out.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
---
 block/stream.c             | 83 ++++++++++++++++++++++++++++++++-------------
 tests/qemu-iotests/030     |  8 +++--
 tests/qemu-iotests/141.out |  2 +-
 3 files changed, 64 insertions(+), 29 deletions(-)

diff --git a/block/stream.c b/block/stream.c
index e0b35f8..a74a07b 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -19,6 +19,7 @@
 #include "qapi/qmp/qerror.h"
 #include "qemu/ratelimit.h"
 #include "sysemu/block-backend.h"
+#include "block/copy-on-read.h"
 
 enum {
     /*
@@ -32,8 +33,11 @@ enum {
 typedef struct StreamBlockJob {
     BlockJob common;
     BlockDriverState *bottom;
+    BlockDriverState *cor_filter_bs;
+    BlockDriverState *target_bs;
     BlockdevOnError on_error;
     char *backing_file_str;
+    char *base_fmt;
     bool bs_read_only;
     bool chain_frozen;
 } StreamBlockJob;
@@ -52,33 +56,25 @@ static void stream_abort(Job *job)
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
 
     if (s->chain_frozen) {
-        BlockJob *bjob = &s->common;
-        bdrv_unfreeze_chain(blk_bs(bjob->blk), s->bottom);
+        bdrv_unfreeze_chain(s->cor_filter_bs, s->bottom);
     }
 }
 
 static int stream_prepare(Job *job)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
-    BlockJob *bjob = &s->common;
-    BlockDriverState *bs = blk_bs(bjob->blk);
+    BlockDriverState *bs = s->target_bs;
     BlockDriverState *base = backing_bs(s->bottom);
     Error *local_err = NULL;
     int ret = 0;
 
-    bdrv_unfreeze_chain(bs, s->bottom);
+    bdrv_unfreeze_chain(s->cor_filter_bs, s->bottom);
     s->chain_frozen = false;
 
     if (bs->backing) {
-        const char *base_id = NULL, *base_fmt = NULL;
-        if (base) {
-            base_id = s->backing_file_str;
-            if (base->drv) {
-                base_fmt = base->drv->format_name;
-            }
-        }
         bdrv_set_backing_hd(bs, base, &local_err);
-        ret = bdrv_change_backing_file(bs, base_id, base_fmt);
+        ret = bdrv_change_backing_file(bs, s->backing_file_str,
+                                       s->base_fmt);
         if (local_err) {
             error_report_err(local_err);
             return -EPERM;
@@ -92,7 +88,9 @@ static void stream_clean(Job *job)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockJob *bjob = &s->common;
-    BlockDriverState *bs = blk_bs(bjob->blk);
+    BlockDriverState *bs = s->target_bs;
+
+    bdrv_cor_filter_drop(s->cor_filter_bs);
 
     /* Reopen the image back in read-only mode if necessary */
     if (s->bs_read_only) {
@@ -102,13 +100,14 @@ static void stream_clean(Job *job)
     }
 
     g_free(s->backing_file_str);
+    g_free(s->base_fmt);
 }
 
 static int coroutine_fn stream_run(Job *job, Error **errp)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockBackend *blk = s->common.blk;
-    BlockDriverState *bs = blk_bs(blk);
+    BlockDriverState *bs = s->target_bs;
     bool enable_cor = !backing_bs(s->bottom);
     int64_t len;
     int64_t offset = 0;
@@ -156,8 +155,8 @@ static int coroutine_fn stream_run(Job *job, Error **errp)
         } else if (ret >= 0) {
             /* Copy if allocated in the intermediate images.
Limit to the
             * known-unallocated area [offset, offset+n*BDRV_SECTOR_SIZE). */
-            ret = bdrv_is_allocated_above(backing_bs(bs), s->bottom, true,
-                                          offset, n, &n);
+            ret = bdrv_is_allocated_above(bdrv_filtered_cow_bs(bs), s->bottom,
+                                          true, offset, n, &n);
             /* Finish early if end of backing file has been reached */
             if (ret == 0 && n == 0) {
                 n = len - offset;
@@ -225,7 +224,13 @@ void stream_start(const char *job_id, BlockDriverState *bs,
     BlockDriverState *iter;
     bool bs_read_only;
     int basic_flags = BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED;
+    BlockDriverState *cor_filter_bs = NULL;
     BlockDriverState *bottom = bdrv_find_overlay(bs, base);
+    char *base_fmt = NULL;
+
+    if (base && base->drv) {
+        base_fmt = g_strdup(base->drv->format_name);
+    }
 
     if (bdrv_freeze_chain(bs, bottom, errp) < 0) {
         return;
@@ -240,17 +245,35 @@ void stream_start(const char *job_id, BlockDriverState *bs,
         }
     }
 
-    /* Prevent concurrent jobs trying to modify the graph structure here, we
-     * already have our own plans. Also don't allow resize as the image size is
-     * queried only at the job start and then cached. */
-    s = block_job_create(job_id, &stream_job_driver, NULL, bs,
-                         basic_flags | BLK_PERM_GRAPH_MOD,
-                         basic_flags | BLK_PERM_WRITE,
+    cor_filter_bs = bdrv_cor_filter_append(bs, filter_node_name, errp);
+    if (cor_filter_bs == NULL) {
+        goto fail;
+    }
+
+    if (bdrv_freeze_chain(cor_filter_bs, bs, errp) < 0) {
+        bdrv_cor_filter_drop(cor_filter_bs);
+        cor_filter_bs = NULL;
+        goto fail;
+    }
+
+    s = block_job_create(job_id, &stream_job_driver, NULL, cor_filter_bs,
+                         BLK_PERM_CONSISTENT_READ,
+                         basic_flags | BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD,
                          speed, creation_flags, NULL, NULL, errp);
     if (!s) {
         goto fail;
     }
 
+    /*
+     * Prevent concurrent jobs trying to modify the graph structure here, we
+     * already have our own plans. Also don't allow resize as the image size is
+     * queried only at the job start and then cached.
+     */
+    if (block_job_add_bdrv(&s->common, "active node", bs,
+                           basic_flags | BLK_PERM_GRAPH_MOD,
+                           basic_flags | BLK_PERM_WRITE, &error_abort)) {
+        goto fail;
+    }
+
     /* Block all intermediate nodes between bs and base, because they will
      * disappear from the chain after this operation. The streaming job reads
      * every block only once, assuming that it doesn't change, so forbid writes
@@ -259,13 +282,17 @@ void stream_start(const char *job_id, BlockDriverState *bs,
      * due to parallel block jobs running.
      */
     base = backing_bs(bottom);
-    for (iter = backing_bs(bs); iter && iter != base; iter = backing_bs(iter)) {
+    for (iter = bdrv_filtered_bs(bs); iter && iter != base;
+         iter = bdrv_filtered_bs(iter)) {
         block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                            basic_flags, &error_abort);
     }
 
+    s->cor_filter_bs = cor_filter_bs;
+    s->target_bs = bs;
     s->bottom = bottom;
     s->backing_file_str = g_strdup(backing_file_str);
+    s->base_fmt = base_fmt;
     s->bs_read_only = bs_read_only;
     s->chain_frozen = true;
 
@@ -278,5 +305,11 @@ fail:
     if (bs_read_only) {
         bdrv_reopen_set_read_only(bs, true, NULL);
     }
-    bdrv_unfreeze_chain(bs, bottom);
+
+    if (cor_filter_bs) {
+        bdrv_unfreeze_chain(cor_filter_bs, bottom);
+        bdrv_cor_filter_drop(cor_filter_bs);
+    } else {
+        bdrv_unfreeze_chain(bs, bottom);
+    }
 }
diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index d7638cd..9856a08 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -269,12 +269,14 @@ class TestParallelOps(iotests.QMPTestCase):
         self.assert_no_active_block_jobs()
 
         # Set a speed limit to make sure that this job blocks the rest
-        result = self.vm.qmp('block-stream', device='node4', job_id='stream-node4', base=self.imgs[1], speed=1024*1024)
+        result = self.vm.qmp('block-stream', device='node4',
+                             job_id='stream-node4', base=self.imgs[1],
+                             filter_node_name='stream-filter', speed=1024*1024)
         self.assert_qmp(result, 'return', {})
 
         result =
self.vm.qmp('block-stream', device='node5', job_id='stream-node5', base=self.imgs[2])
         self.assert_qmp(result, 'error/desc',
-            "Node 'node4' is busy: block device is in use by block job: stream")
+            "Node 'stream-filter' is busy: block device is in use by block job: stream")
 
         result = self.vm.qmp('block-stream', device='node3', job_id='stream-node3', base=self.imgs[2])
         self.assert_qmp(result, 'error/desc',
@@ -287,7 +289,7 @@ class TestParallelOps(iotests.QMPTestCase):
         # block-commit should also fail if it touches nodes used by the stream job
         result = self.vm.qmp('block-commit', device='drive0', base=self.imgs[4], job_id='commit-node4')
         self.assert_qmp(result, 'error/desc',
-            "Node 'node4' is busy: block device is in use by block job: stream")
+            "Node 'stream-filter' is busy: block device is in use by block job: stream")
 
         result = self.vm.qmp('block-commit', device='drive0', base=self.imgs[1], top=self.imgs[3], job_id='commit-node1')
         self.assert_qmp(result, 'error/desc',
diff --git a/tests/qemu-iotests/141.out b/tests/qemu-iotests/141.out
index 263b680..cf18558 100644
--- a/tests/qemu-iotests/141.out
+++ b/tests/qemu-iotests/141.out
@@ -99,7 +99,7 @@ wrote 1048576/1048576 bytes at offset 0
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
 {'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
-{"error": {"class": "GenericError", "desc": "Node drv0 is in use"}}
+{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: stream"}}
 {'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status":
"aborting", "id": "job0"}}
-- 
1.8.3.1
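
P.S. As context for reviewers: the iotest 030 hunk above changes the expected error from "Node 'node4' is busy" to "Node 'stream-filter' is busy", because the job's root is now the inserted COR filter rather than the top image node. A toy model in plain Python (not QEMU code; the Node class and insert_cor_filter helper are illustrative only, with node names taken from the test) sketches that graph change:

```python
class Node:
    """Minimal stand-in for a BlockDriverState in a backing chain."""
    def __init__(self, name, backing=None):
        self.name = name
        self.backing = backing

def insert_cor_filter(top, filter_node_name):
    """Model bdrv_cor_filter_append(): a new filter node is placed
    above 'top', and the job attaches to the filter instead of 'top'."""
    return Node(filter_node_name, backing=top)

# Chain as in iotest 030: node4 -> node3 -> ...
node3 = Node('node3')
node4 = Node('node4', backing=node3)

# Before this patch the stream job's root was node4 itself, so a
# concurrent job saw "Node 'node4' is busy". With the filter inserted,
# the busy node reported to other jobs is the filter:
job_root = insert_cor_filter(node4, 'stream-filter')
print(job_root.name)           # the node other jobs now see as busy
print(job_root.backing.name)   # the original top node, one level below
```

The model only illustrates why the expected error strings in 030 change; permissions, freezing, and filter removal on job completion are of course not represented.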