The following changes since commit 31ee895047bdcf7387e3570cbd2a473c6f744b08:

  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2021-01-25 15:56:13 +0000)

are available in the Git repository at:

  https://github.com/XanClic/qemu.git tags/pull-block-2021-01-26

for you to fetch changes up to bb24cdc5efee580e81f71c5ff0fd980f2cc179d0:

  iotests/178: Pass value to invalid option (2021-01-26 14:36:37 +0100)

----------------------------------------------------------------
Block patches:
- Make backup block jobs use asynchronous requests with the block-copy
  module
- Use COR filter node for stream block jobs
- Make coroutine-sigaltstack’s qemu_coroutine_new() function thread-safe
- Report error string when file locking fails with an unexpected errno
- iotest fixes, additions, and some refactoring

----------------------------------------------------------------
Alberto Garcia (1):
      iotests: Add test for the regression fixed in c8bf9a9169

Andrey Shinkevich (10):
      copy-on-read: support preadv/pwritev_part functions
      block: add API function to insert a node
      copy-on-read: add filter drop function
      qapi: add filter-node-name to block-stream
      qapi: copy-on-read filter: add 'bottom' option
      iotests: add #310 to test bottom node in COR driver
      block: include supported_read_flags into BDS structure
      copy-on-read: skip non-guest reads if no copy needed
      stream: rework backing-file changing
      block: apply COR-filter to block-stream jobs

David Edmondson (1):
      block: report errno when flock fcntl fails

Max Reitz (14):
      iotests.py: Assume a couple of variables as given
      iotests/297: Rewrite in Python and extend reach
      iotests: Move try_remove to iotests.py
      iotests/129: Remove test images in tearDown()
      iotests/129: Do not check @busy
      iotests/129: Use throttle node
      iotests/129: Actually test a commit job
      iotests/129: Limit mirror job's buffer size
      iotests/129: Clean up pylint and mypy complaints
      iotests/300: Clean up pylint and mypy complaints
      coroutine-sigaltstack: Add SIGUSR2 mutex
      iotests/129: Limit backup's max-chunk/max-workers
      iotests/118: Drop 'change' test
      iotests/178: Pass value to invalid option

Vladimir Sementsov-Ogievskiy (27):
      iotests: fix _check_o_direct
      qapi: block-stream: add "bottom" argument
      iotests: 30: prepare to COR filter insertion by stream job
      block/stream: add s->target_bs
      qapi: backup: add perf.use-copy-range parameter
      block/block-copy: More explicit call_state
      block/block-copy: implement block_copy_async
      block/block-copy: add max_chunk and max_workers parameters
      block/block-copy: add list of all call-states
      block/block-copy: add ratelimit to block-copy
      block/block-copy: add block_copy_cancel
      blockjob: add set_speed to BlockJobDriver
      job: call job_enter from job_pause
      qapi: backup: add max-chunk and max-workers to x-perf struct
      iotests: 56: prepare for backup over block-copy
      iotests: 185: prepare for backup over block-copy
      iotests: 219: prepare for backup over block-copy
      iotests: 257: prepare for backup over block-copy
      block/block-copy: make progress_bytes_callback optional
      block/backup: drop extra gotos from backup_run()
      backup: move to block-copy
      qapi: backup: disable copy_range by default
      block/block-copy: drop unused block_copy_set_progress_callback()
      block/block-copy: drop unused argument of block_copy()
      simplebench/bench_block_job: use correct shebang line with python3
      simplebench: bench_block_job: add cmd_options argument
      simplebench: add bench-backup.py

 qapi/block-core.json | 66 +++++-
 block/backup-top.h | 1 +
 block/copy-on-read.h | 32 +++
 include/block/block-copy.h | 61 ++++-
 include/block/block.h | 10 +-
 include/block/block_int.h | 15 +-
 include/block/blockjob_int.h | 2 +
 block.c | 25 ++
 block/backup-top.c | 6 +-
 block/backup.c | 233 ++++++++++++-------
 block/block-copy.c | 227 +++++++++++++++---
 block/copy-on-read.c | 184 ++++++++++++++-
 block/file-posix.c | 38 ++-
 block/io.c | 10 +-
 block/monitor/block-hmp-cmds.c | 7 +-
 block/replication.c | 2 +
 block/stream.c | 185 +++++++++------
 blockdev.c | 83 +++++--
 blockjob.c | 6 +
 job.c | 3 +
 util/coroutine-sigaltstack.c | 9 +
 scripts/simplebench/bench-backup.py | 167 ++++++++++++++
 scripts/simplebench/bench-example.py | 2 +-
 scripts/simplebench/bench_block_job.py | 13 +-
 tests/qemu-iotests/030 | 12 +-
 tests/qemu-iotests/056 | 9 +-
 tests/qemu-iotests/109.out | 24 ++
 tests/qemu-iotests/118 | 20 +-
 tests/qemu-iotests/118.out | 4 +-
 tests/qemu-iotests/124 | 8 +-
 tests/qemu-iotests/129 | 79 ++++---
 tests/qemu-iotests/141.out | 2 +-
 tests/qemu-iotests/178 | 2 +-
 tests/qemu-iotests/178.out.qcow2 | 2 +-
 tests/qemu-iotests/178.out.raw | 2 +-
 tests/qemu-iotests/185 | 3 +-
 tests/qemu-iotests/185.out | 3 +-
 tests/qemu-iotests/219 | 13 +-
 tests/qemu-iotests/245 | 20 +-
 tests/qemu-iotests/257 | 1 +
 tests/qemu-iotests/257.out | 306 ++++++++++++-------------
 tests/qemu-iotests/297 | 112 +++++++--
 tests/qemu-iotests/297.out | 5 +-
 tests/qemu-iotests/300 | 19 +-
 tests/qemu-iotests/310 | 117 ++++++++++
 tests/qemu-iotests/310.out | 15 ++
 tests/qemu-iotests/313 | 104 +++++++++
 tests/qemu-iotests/313.out | 29 +++
 tests/qemu-iotests/common.rc | 7 +-
 tests/qemu-iotests/group | 2 +
 tests/qemu-iotests/iotests.py | 37 +--
 51 files changed, 1797 insertions(+), 547 deletions(-)
 create mode 100644 block/copy-on-read.h
 create mode 100755 scripts/simplebench/bench-backup.py
 create mode 100755 tests/qemu-iotests/310
 create mode 100644 tests/qemu-iotests/310.out
 create mode 100755 tests/qemu-iotests/313
 create mode 100644 tests/qemu-iotests/313.out

-- 
2.29.2


The following changes since commit ac793156f650ae2d77834932d72224175ee69086:

  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20201020-1' into staging (2020-10-20 21:11:35 +0100)

are available in the Git repository at:

  https://gitlab.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 32a3fd65e7e3551337fd26bfc0e2f899d70c028c:

  iotests: add commit top->base cases to 274 (2020-10-22 09:55:39 +0100)

----------------------------------------------------------------
Pull request

v2:
 * Fix format string issues on 32-bit hosts [Peter]
 * Fix qemu-nbd.c CONFIG_POSIX ifdef issue [Eric]
 * Fix missing eventfd.h header on macOS [Peter]
 * Drop unreliable vhost-user-blk test (will send a new patch when ready) [Peter]

This pull request contains the vhost-user-blk server by Coiby Xu along with my
additions, block/nvme.c alignment and hardware error statistics by Philippe
Mathieu-Daudé, and bdrv_co_block_status_above() fixes by Vladimir
Sementsov-Ogievskiy.

----------------------------------------------------------------
Coiby Xu (6):
      libvhost-user: Allow vu_message_read to be replaced
      libvhost-user: remove watch for kick_fd when de-initialize vu-dev
      util/vhost-user-server: generic vhost user server
      block: move logical block size check function to a common utility
        function
      block/export: vhost-user block device backend server
      MAINTAINERS: Add vhost-user block device backend server maintainer

Philippe Mathieu-Daudé (1):
      block/nvme: Add driver statistics for access alignment and hw errors

Stefan Hajnoczi (16):
      util/vhost-user-server: s/fileds/fields/ typo fix
      util/vhost-user-server: drop unnecessary QOM cast
      util/vhost-user-server: drop unnecessary watch deletion
      block/export: consolidate request structs into VuBlockReq
      util/vhost-user-server: drop unused DevicePanicNotifier
      util/vhost-user-server: fix memory leak in vu_message_read()
      util/vhost-user-server: check EOF when reading payload
      util/vhost-user-server: rework vu_client_trip() coroutine lifecycle
      block/export: report flush errors
      block/export: convert vhost-user-blk server to block export API
      util/vhost-user-server: move header to include/
      util/vhost-user-server: use static library in meson.build
      qemu-storage-daemon: avoid compiling blockdev_ss twice
      block: move block exports to libblockdev
      block/export: add iothread and fixed-iothread options
      block/export: add vhost-user-blk multi-queue support

Vladimir Sementsov-Ogievskiy (5):
      block/io: fix bdrv_co_block_status_above
      block/io: bdrv_common_block_status_above: support include_base
      block/io: bdrv_common_block_status_above: support bs == base
      block/io: fix bdrv_is_allocated_above
      iotests: add commit top->base cases to 274

 MAINTAINERS | 9 +
 qapi/block-core.json | 24 +-
 qapi/block-export.json | 36 +-
 block/coroutines.h | 2 +
 block/export/vhost-user-blk-server.h | 19 +
 contrib/libvhost-user/libvhost-user.h | 21 +
 include/qemu/vhost-user-server.h | 65 +++
 util/block-helpers.h | 19 +
 block/export/export.c | 37 +-
 block/export/vhost-user-blk-server.c | 431 ++++++++++++++++++++
 block/io.c | 132 +++---
 block/nvme.c | 27 ++
 block/qcow2.c | 16 +-
 contrib/libvhost-user/libvhost-user-glib.c | 2 +-
 contrib/libvhost-user/libvhost-user.c | 15 +-
 hw/core/qdev-properties-system.c | 31 +-
 nbd/server.c | 2 -
 qemu-nbd.c | 21 +-
 softmmu/vl.c | 4 +
 stubs/blk-exp-close-all.c | 7 +
 tests/vhost-user-bridge.c | 2 +
 tools/virtiofsd/fuse_virtio.c | 4 +-
 util/block-helpers.c | 46 +++
 util/vhost-user-server.c | 446 +++++++++++++++++++++
 block/export/meson.build | 3 +-
 contrib/libvhost-user/meson.build | 1 +
 meson.build | 22 +-
 nbd/meson.build | 2 +
 storage-daemon/meson.build | 3 +-
 stubs/meson.build | 1 +
 tests/qemu-iotests/274 | 20 +
 tests/qemu-iotests/274.out | 68 ++++
 util/meson.build | 4 +
 33 files changed, 1420 insertions(+), 122 deletions(-)
 create mode 100644 block/export/vhost-user-blk-server.h
 create mode 100644 include/qemu/vhost-user-server.h
 create mode 100644 util/block-helpers.h
 create mode 100644 block/export/vhost-user-blk-server.c
 create mode 100644 stubs/blk-exp-close-all.c
 create mode 100644 util/block-helpers.c
 create mode 100644 util/vhost-user-server.c

-- 
2.26.2
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Unfortunately commit "iotests: handle tmpfs" breaks running iotests
with -nbd -nocache, as _check_o_direct tries to create
$TEST_IMG.test_o_direct, but in case of nbd TEST_IMG is something like
nbd+unix:///... , and test fails with message

  qemu-img: nbd+unix:///?socket[...]test_o_direct: Protocol driver
  'nbd' does not support image creation, and opening the image
  failed: Failed to connect to '/tmp/tmp.[...]/nbd/test_o_direct': No
  such file or directory

Use TEST_DIR instead.

Fixes: cfdca2b9f9d4ca26bb2b2dfe8de3149092e39170
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201218182012.47607-1-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/common.rc | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/common.rc b/tests/qemu-iotests/common.rc
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/common.rc
+++ b/tests/qemu-iotests/common.rc
@@ -XXX,XX +XXX,XX @@ _supported_cache_modes()
 # Check whether the filesystem supports O_DIRECT
 _check_o_direct()
 {
-    $QEMU_IMG create -f raw "$TEST_IMG".test_o_direct 1M > /dev/null
-    out=$($QEMU_IO -f raw -t none -c quit "$TEST_IMG".test_o_direct 2>&1)
-    rm -f "$TEST_IMG".test_o_direct
+    testfile="$TEST_DIR"/_check_o_direct
+    $QEMU_IMG create -f raw "$testfile" 1M > /dev/null
+    out=$($QEMU_IO -f raw -t none -c quit "$testfile" 2>&1)
+    rm -f "$testfile"
 
     [[ "$out" != *"O_DIRECT"* ]]
 }
-- 
2.29.2
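The probe that common.rc implements in shell can be sketched in Python for experimentation. This is an illustrative sketch, not part of the iotests API; the function name and the probe file are my own. The idea is the same as in the patch above: create a scratch file in the test *directory* (not a path derived from the image name, which may be an nbd+unix:// URL) and see whether opening it with O_DIRECT succeeds.

```python
import errno
import os
import tempfile


def dir_supports_o_direct(path: str) -> bool:
    """Probe whether the filesystem backing 'path' accepts O_DIRECT opens.

    tmpfs, for example, traditionally rejects O_DIRECT with EINVAL,
    which is the case the iotests helper has to detect.
    """
    o_direct = getattr(os, "O_DIRECT", None)
    if o_direct is None:  # platform without O_DIRECT (e.g. macOS)
        return False
    fd, name = tempfile.mkstemp(dir=path)
    os.close(fd)
    try:
        probe = os.open(name, os.O_RDONLY | o_direct)
    except OSError as e:
        if e.errno == errno.EINVAL:
            return False  # filesystem refuses O_DIRECT
        raise
    else:
        os.close(probe)
        return True
    finally:
        os.unlink(name)
```

As in the fixed shell helper, the probe file lives in a directory that is known to be a local path, so the check cannot be confused by protocol-level image names.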
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Add support for the recently introduced functions
bdrv_co_preadv_part()
and
bdrv_co_pwritev_part()
to the COR-filter driver.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216061703.70908-2-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/copy-on-read.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/block/copy-on-read.c b/block/copy-on-read.c
index XXXXXXX..XXXXXXX 100644
--- a/block/copy-on-read.c
+++ b/block/copy-on-read.c
@@ -XXX,XX +XXX,XX @@ static int64_t cor_getlength(BlockDriverState *bs)
 }
 
 
-static int coroutine_fn cor_co_preadv(BlockDriverState *bs,
-                                      uint64_t offset, uint64_t bytes,
-                                      QEMUIOVector *qiov, int flags)
+static int coroutine_fn cor_co_preadv_part(BlockDriverState *bs,
+                                           uint64_t offset, uint64_t bytes,
+                                           QEMUIOVector *qiov,
+                                           size_t qiov_offset,
+                                           int flags)
 {
-    return bdrv_co_preadv(bs->file, offset, bytes, qiov,
-                          flags | BDRV_REQ_COPY_ON_READ);
+    return bdrv_co_preadv_part(bs->file, offset, bytes, qiov, qiov_offset,
+                               flags | BDRV_REQ_COPY_ON_READ);
 }
 
 
-static int coroutine_fn cor_co_pwritev(BlockDriverState *bs,
-                                       uint64_t offset, uint64_t bytes,
-                                       QEMUIOVector *qiov, int flags)
+static int coroutine_fn cor_co_pwritev_part(BlockDriverState *bs,
+                                            uint64_t offset,
+                                            uint64_t bytes,
+                                            QEMUIOVector *qiov,
+                                            size_t qiov_offset, int flags)
 {
-
-    return bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
+    return bdrv_co_pwritev_part(bs->file, offset, bytes, qiov, qiov_offset,
+                                flags);
 }
 
 
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_copy_on_read = {
 
     .bdrv_getlength                     = cor_getlength,
 
-    .bdrv_co_preadv                     = cor_co_preadv,
-    .bdrv_co_pwritev                    = cor_co_pwritev,
+    .bdrv_co_preadv_part                = cor_co_preadv_part,
+    .bdrv_co_pwritev_part               = cor_co_pwritev_part,
     .bdrv_co_pwrite_zeroes              = cor_co_pwrite_zeroes,
     .bdrv_co_pdiscard                   = cor_co_pdiscard,
     .bdrv_co_pwritev_compressed         = cor_co_pwritev_compressed,
-- 
2.29.2
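The `*_part` callbacks differ from the plain variants only in the extra `qiov_offset` argument: a byte offset into the I/O vector at which the request's data begins, which lets a filter forward part of a request without copying buffers. As a rough model (the function name is mine; this is not a QEMU API), resolving such an offset against a scatter/gather list looks like:

```python
from typing import List, Tuple


def iov_slice(iov: List[bytearray], offset: int,
              length: int) -> List[Tuple[int, int, int]]:
    """Return (buffer_index, start, len) triples covering 'length' bytes
    of the scatter/gather list 'iov', beginning 'offset' bytes in.

    This models what a *_part block-driver callback receives: the same
    vector as the original request plus a byte offset into it, so the
    callee can address a sub-request without flattening the vector.
    """
    out = []
    for idx, buf in enumerate(iov):
        if length == 0:
            break
        if offset >= len(buf):   # skip whole buffers before the offset
            offset -= len(buf)
            continue
        take = min(len(buf) - offset, length)
        out.append((idx, offset, take))
        length -= take
        offset = 0               # subsequent buffers start at 0
    if length:
        raise ValueError("request overruns the I/O vector")
    return out
```

For example, a 4-byte request starting 2 bytes into a vector of two 4-byte buffers spans the tail of the first buffer and the head of the second.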
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Provide an API for inserting a node into the backing chain.

Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-3-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block.h | 2 ++
 block.c | 25 +++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/include/block/block.h b/include/block/block.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -XXX,XX +XXX,XX @@ void bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top,
                  Error **errp);
 void bdrv_replace_node(BlockDriverState *from, BlockDriverState *to,
                        Error **errp);
+BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *node_options,
+                                   int flags, Error **errp);
 
 int bdrv_parse_aio(const char *mode, int *flags);
 int bdrv_parse_cache_mode(const char *mode, int *flags, bool *writethrough);
diff --git a/block.c b/block.c
index XXXXXXX..XXXXXXX 100644
--- a/block.c
+++ b/block.c
@@ -XXX,XX +XXX,XX @@ static void bdrv_delete(BlockDriverState *bs)
     g_free(bs);
 }
 
+BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *node_options,
+                                   int flags, Error **errp)
+{
+    BlockDriverState *new_node_bs;
+    Error *local_err = NULL;
+
+    new_node_bs = bdrv_open(NULL, NULL, node_options, flags, errp);
+    if (new_node_bs == NULL) {
+        error_prepend(errp, "Could not create node: ");
+        return NULL;
+    }
+
+    bdrv_drained_begin(bs);
+    bdrv_replace_node(bs, new_node_bs, &local_err);
+    bdrv_drained_end(bs);
+
+    if (local_err) {
+        bdrv_unref(new_node_bs);
+        error_propagate(errp, local_err);
+        return NULL;
+    }
+
+    return new_node_bs;
+}
+
 /*
  * Run consistency checks on an image
  *
-- 
2.29.2
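The shape of bdrv_insert_node() — create the new node with the old one as its child, then repoint the old node's parents at it — can be modeled with a toy graph. This is a deliberately simplified sketch (class and function names are mine, not QEMU's); the real code additionally drains in-flight requests around the replacement and re-checks child permissions, which the toy omits:

```python
class Node:
    """Toy stand-in for a BlockDriverState: a name plus one 'file' child."""

    def __init__(self, name, file=None):
        self.name = name
        self.file = file


def insert_node(graph, parent_names, target, filter_name):
    """Insert a filter node between 'target' and the named parents.

    Mirrors the structure of bdrv_insert_node(): the new node is created
    with the target as its child, then every parent that pointed at the
    target is redirected to point at the new node instead.
    """
    new_node = Node(filter_name, file=target)
    for pname in parent_names:
        parent = graph[pname]
        if parent.file is target:
            parent.file = new_node
    graph[filter_name] = new_node
    return new_node
```

After insertion, reads issued by the parent flow through the filter before reaching the original node, which is exactly what the later COR-filter patches rely on.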
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
Experiments show, that copy_range is not always making things faster.
3
Keep statistics of some hardware errors, and number of
4
So, to make experimentation simpler, let's add a parameter. Some more
4
aligned/unaligned I/O accesses.
5
perf parameters will be added soon, so here is a new struct.
6
5
7
For now, add new backup qmp parameter with x- prefix for the following
6
QMP example booting a full RHEL 8.3 aarch64 guest:
8
reasons:
9
7
10
- We are going to add more performance parameters, some will be
8
{ "execute": "query-blockstats" }
11
related to the whole block-copy process, some only to background
9
{
12
copying in backup (ignored for copy-before-write operations).
10
"return": [
13
- On the other hand, we are going to use block-copy interface in other
11
{
14
block jobs, which will need performance options as well.. And it
12
"device": "",
15
should be the same structure or at least somehow related.
13
"node-name": "drive0",
14
"stats": {
15
"flush_total_time_ns": 6026948,
16
"wr_highest_offset": 3383991230464,
17
"wr_total_time_ns": 807450995,
18
"failed_wr_operations": 0,
19
"failed_rd_operations": 0,
20
"wr_merged": 3,
21
"wr_bytes": 50133504,
22
"failed_unmap_operations": 0,
23
"failed_flush_operations": 0,
24
"account_invalid": false,
25
"rd_total_time_ns": 1846979900,
26
"flush_operations": 130,
27
"wr_operations": 659,
28
"rd_merged": 1192,
29
"rd_bytes": 218244096,
30
"account_failed": false,
31
"idle_time_ns": 2678641497,
32
"rd_operations": 7406,
33
},
34
"driver-specific": {
35
"driver": "nvme",
36
"completion-errors": 0,
37
"unaligned-accesses": 2959,
38
"aligned-accesses": 4477
39
},
40
"qdev": "/machine/peripheral-anon/device[0]/virtio-backend"
41
}
42
]
43
}
16
44
17
So, there are too much unclean things about how the interface and now
45
Suggested-by: Stefan Hajnoczi <stefanha@gmail.com>
18
we need the new options mostly for testing. Let's keep them
46
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
19
experimental for a while.
47
Acked-by: Markus Armbruster <armbru@redhat.com>
20
48
Message-id: 20201001162939.1567915-1-philmd@redhat.com
21
In do_backup_common() new x-perf parameter handled in a way to
49
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
22
make further options addition simpler.
23
24
We add use-copy-range with default=true, and we'll change the default
25
in further patch, after moving backup to use block-copy.
26
27
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
28
Reviewed-by: Max Reitz <mreitz@redhat.com>
29
Message-Id: <20210116214705.822267-2-vsementsov@virtuozzo.com>
30
[mreitz: s/5\.2/6.0/]
31
Signed-off-by: Max Reitz <mreitz@redhat.com>
32
---
50
---
33
qapi/block-core.json | 17 ++++++++++++++++-
51
qapi/block-core.json | 24 +++++++++++++++++++++++-
34
block/backup-top.h | 1 +
52
block/nvme.c | 27 +++++++++++++++++++++++++++
35
include/block/block-copy.h | 2 +-
53
2 files changed, 50 insertions(+), 1 deletion(-)
36
include/block/block_int.h | 3 +++
37
block/backup-top.c | 4 +++-
38
block/backup.c | 6 +++++-
39
block/block-copy.c | 4 ++--
40
block/replication.c | 2 ++
41
blockdev.c | 8 ++++++++
42
9 files changed, 41 insertions(+), 6 deletions(-)
43
54
44
diff --git a/qapi/block-core.json b/qapi/block-core.json
55
diff --git a/qapi/block-core.json b/qapi/block-core.json
45
index XXXXXXX..XXXXXXX 100644
56
index XXXXXXX..XXXXXXX 100644
46
--- a/qapi/block-core.json
57
--- a/qapi/block-core.json
47
+++ b/qapi/block-core.json
58
+++ b/qapi/block-core.json
48
@@ -XXX,XX +XXX,XX @@
59
@@ -XXX,XX +XXX,XX @@
49
{ 'struct': 'BlockdevSnapshot',
60
'discard-nb-failed': 'uint64',
50
'data': { 'node': 'str', 'overlay': 'str' } }
61
'discard-bytes-ok': 'uint64' } }
51
62
52
+##
63
+##
53
+# @BackupPerf:
64
+# @BlockStatsSpecificNvme:
54
+#
65
+#
55
+# Optional parameters for backup. These parameters don't affect
66
+# NVMe driver statistics
56
+# functionality, but may significantly affect performance.
57
+#
67
+#
58
+# @use-copy-range: Use copy offloading. Default true.
68
+# @completion-errors: The number of completion errors.
59
+#
69
+#
60
+# Since: 6.0
70
+# @aligned-accesses: The number of aligned accesses performed by
71
+# the driver.
72
+#
73
+# @unaligned-accesses: The number of unaligned accesses performed by
74
+# the driver.
75
+#
76
+# Since: 5.2
61
+##
77
+##
62
+{ 'struct': 'BackupPerf',
78
+{ 'struct': 'BlockStatsSpecificNvme',
63
+ 'data': { '*use-copy-range': 'bool' }}
79
+ 'data': {
80
+ 'completion-errors': 'uint64',
81
+ 'aligned-accesses': 'uint64',
82
+ 'unaligned-accesses': 'uint64' } }
64
+
83
+
65
##
84
##
66
# @BackupCommon:
85
# @BlockStatsSpecific:
67
#
86
#
68
@@ -XXX,XX +XXX,XX @@
87
@@ -XXX,XX +XXX,XX @@
69
# above node specified by @drive. If this option is not given,
88
'discriminator': 'driver',
70
# a node name is autogenerated. (Since: 4.2)
89
'data': {
71
#
90
'file': 'BlockStatsSpecificFile',
72
+# @x-perf: Performance options. (Since 6.0)
91
- 'host_device': 'BlockStatsSpecificFile' } }
73
+#
92
+ 'host_device': 'BlockStatsSpecificFile',
74
# Note: @on-source-error and @on-target-error only affect background
93
+ 'nvme': 'BlockStatsSpecificNvme' } }
75
# I/O. If an error occurs during a guest write request, the device's
76
# rerror/werror actions will be used.
77
@@ -XXX,XX +XXX,XX @@
78
'*on-source-error': 'BlockdevOnError',
79
'*on-target-error': 'BlockdevOnError',
80
'*auto-finalize': 'bool', '*auto-dismiss': 'bool',
81
- '*filter-node-name': 'str' } }
82
+ '*filter-node-name': 'str', '*x-perf': 'BackupPerf' } }
83
94
84
##
95
##
85
# @DriveBackup:
96
# @BlockStats:
86
diff --git a/block/backup-top.h b/block/backup-top.h
97
diff --git a/block/nvme.c b/block/nvme.c
87
index XXXXXXX..XXXXXXX 100644
98
index XXXXXXX..XXXXXXX 100644
88
--- a/block/backup-top.h
99
--- a/block/nvme.c
89
+++ b/block/backup-top.h
100
+++ b/block/nvme.c
90
@@ -XXX,XX +XXX,XX @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
101
@@ -XXX,XX +XXX,XX @@ struct BDRVNVMeState {
91
BlockDriverState *target,
102
92
const char *filter_node_name,
103
/* PCI address (required for nvme_refresh_filename()) */
93
uint64_t cluster_size,
104
char *device;
94
+ BackupPerf *perf,
105
+
95
BdrvRequestFlags write_flags,
106
+ struct {
96
BlockCopyState **bcs,
107
+ uint64_t completion_errors;
97
Error **errp);
108
+ uint64_t aligned_accesses;
98
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
109
+ uint64_t unaligned_accesses;
99
index XXXXXXX..XXXXXXX 100644
110
+ } stats;
100
--- a/include/block/block-copy.h
111
};
101
+++ b/include/block/block-copy.h
112
102
@@ -XXX,XX +XXX,XX @@ typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
113
#define NVME_BLOCK_OPT_DEVICE "device"
103
typedef struct BlockCopyState BlockCopyState;
114
@@ -XXX,XX +XXX,XX @@ static bool nvme_process_completion(NVMeQueuePair *q)
104
115
break;
105
BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
116
}
106
- int64_t cluster_size,
117
ret = nvme_translate_error(c);
107
+ int64_t cluster_size, bool use_copy_range,
118
+ if (ret) {
108
BdrvRequestFlags write_flags,
119
+ s->stats.completion_errors++;
109
Error **errp);
120
+ }
110
121
q->cq.head = (q->cq.head + 1) % NVME_QUEUE_SIZE;
111
diff --git a/include/block/block_int.h b/include/block/block_int.h
122
if (!q->cq.head) {
112
index XXXXXXX..XXXXXXX 100644
123
q->cq_phase = !q->cq_phase;
113
--- a/include/block/block_int.h
124
@@ -XXX,XX +XXX,XX @@ static int nvme_co_prw(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
114
+++ b/include/block/block_int.h
125
assert(QEMU_IS_ALIGNED(bytes, s->page_size));
115
@@ -XXX,XX +XXX,XX @@ void mirror_start(const char *job_id, BlockDriverState *bs,
126
assert(bytes <= s->max_transfer);
116
* @sync_mode: What parts of the disk image should be copied to the destination.
127
if (nvme_qiov_aligned(bs, qiov)) {
117
* @sync_bitmap: The dirty bitmap if sync_mode is 'bitmap' or 'incremental'
128
+ s->stats.aligned_accesses++;
118
* @bitmap_mode: The bitmap synchronization policy to use.
129
return nvme_co_prw_aligned(bs, offset, bytes, qiov, is_write, flags);
119
+ * @perf: Performance options. All actual fields assumed to be present,
120
+ * all ".has_*" fields are ignored.
121
* @on_source_error: The action to take upon error reading from the source.
122
* @on_target_error: The action to take upon error writing to the target.
123
* @creation_flags: Flags that control the behavior of the Job lifetime.
124
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
125
BitmapSyncMode bitmap_mode,
126
bool compress,
127
const char *filter_node_name,
128
+ BackupPerf *perf,
129
BlockdevOnError on_source_error,
130
BlockdevOnError on_target_error,
131
int creation_flags,
132
diff --git a/block/backup-top.c b/block/backup-top.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/block/backup-top.c
135
+++ b/block/backup-top.c
136
@@ -XXX,XX +XXX,XX @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
137
BlockDriverState *target,
138
const char *filter_node_name,
139
uint64_t cluster_size,
140
+ BackupPerf *perf,
141
BdrvRequestFlags write_flags,
142
BlockCopyState **bcs,
143
Error **errp)
144
@@ -XXX,XX +XXX,XX @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
145
146
state->cluster_size = cluster_size;
147
state->bcs = block_copy_state_new(top->backing, state->target,
148
- cluster_size, write_flags, &local_err);
149
+ cluster_size, perf->use_copy_range,
150
+ write_flags, &local_err);
151
if (local_err) {
152
error_prepend(&local_err, "Cannot create block-copy-state: ");
153
goto fail;
154
diff --git a/block/backup.c b/block/backup.c
155
index XXXXXXX..XXXXXXX 100644
156
--- a/block/backup.c
157
+++ b/block/backup.c
158
@@ -XXX,XX +XXX,XX @@ typedef struct BackupBlockJob {
159
uint64_t len;
160
uint64_t bytes_read;
161
int64_t cluster_size;
162
+ BackupPerf perf;
163
164
BlockCopyState *bcs;
165
} BackupBlockJob;
166
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
167
BitmapSyncMode bitmap_mode,
168
bool compress,
169
const char *filter_node_name,
170
+ BackupPerf *perf,
171
BlockdevOnError on_source_error,
172
BlockdevOnError on_target_error,
173
int creation_flags,
174
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
175
(compress ? BDRV_REQ_WRITE_COMPRESSED : 0),
176
177
backup_top = bdrv_backup_top_append(bs, target, filter_node_name,
178
- cluster_size, write_flags, &bcs, errp);
179
+ cluster_size, perf,
180
+ write_flags, &bcs, errp);
181
if (!backup_top) {
182
goto error;
183
}
130
}
184
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
131
+ s->stats.unaligned_accesses++;
185
job->bcs = bcs;
132
trace_nvme_prw_buffered(s, offset, bytes, qiov->niov, is_write);
186
job->cluster_size = cluster_size;
133
buf = qemu_try_memalign(s->page_size, bytes);
187
job->len = len;
134
188
+ job->perf = *perf;
135
@@ -XXX,XX +XXX,XX @@ static void nvme_unregister_buf(BlockDriverState *bs, void *host)
189
136
qemu_vfio_dma_unmap(s->vfio, host);
190
block_copy_set_progress_callback(bcs, backup_progress_bytes_callback, job);
191
block_copy_set_progress_meter(bcs, &job->common.job.progress);
192
diff --git a/block/block-copy.c b/block/block-copy.c
193
index XXXXXXX..XXXXXXX 100644
194
--- a/block/block-copy.c
195
+++ b/block/block-copy.c
196
@@ -XXX,XX +XXX,XX @@ static uint32_t block_copy_max_transfer(BdrvChild *source, BdrvChild *target)
197
}
137
}
198
138
199
BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
139
+static BlockStatsSpecific *nvme_get_specific_stats(BlockDriverState *bs)
200
- int64_t cluster_size,
140
+{
201
+ int64_t cluster_size, bool use_copy_range,
141
+ BlockStatsSpecific *stats = g_new(BlockStatsSpecific, 1);
202
BdrvRequestFlags write_flags, Error **errp)
142
+ BDRVNVMeState *s = bs->opaque;
203
{
204
BlockCopyState *s;
205
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
206
* We enable copy-range, but keep small copy_size, until first
207
* successful copy_range (look at block_copy_do_copy).
208
*/
209
- s->use_copy_range = true;
210
+ s->use_copy_range = use_copy_range;
211
s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
212
}
213
214
diff --git a/block/replication.c b/block/replication.c
215
index XXXXXXX..XXXXXXX 100644
216
--- a/block/replication.c
217
+++ b/block/replication.c
218
@@ -XXX,XX +XXX,XX @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
219
int64_t active_length, hidden_length, disk_length;
220
AioContext *aio_context;
221
Error *local_err = NULL;
222
+ BackupPerf perf = { .use_copy_range = true };
223
224
aio_context = bdrv_get_aio_context(bs);
225
aio_context_acquire(aio_context);
226
@@ -XXX,XX +XXX,XX @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
227
s->backup_job = backup_job_create(
228
NULL, s->secondary_disk->bs, s->hidden_disk->bs,
229
0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
230
+ &perf,
231
BLOCKDEV_ON_ERROR_REPORT,
232
                                BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL,
                                backup_job_completed, bs, NULL, &local_err);
diff --git a/blockdev.c b/blockdev.c
index XXXXXXX..XXXXXXX 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
 {
     BlockJob *job = NULL;
     BdrvDirtyBitmap *bmap = NULL;
+    BackupPerf perf = { .use_copy_range = true };
     int job_flags = JOB_DEFAULT;
 
     if (!backup->has_speed) {
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
         backup->compress = false;
     }
 
+    if (backup->x_perf) {
+        if (backup->x_perf->has_use_copy_range) {
+            perf.use_copy_range = backup->x_perf->use_copy_range;
+        }
+    }
+
     if ((backup->sync == MIRROR_SYNC_MODE_BITMAP) ||
         (backup->sync == MIRROR_SYNC_MODE_INCREMENTAL)) {
         /* done before desugaring 'incremental' to print the right message */
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
                             backup->sync, bmap, backup->bitmap_mode,
                             backup->compress,
                             backup->filter_node_name,
+                            &perf,
                             backup->on_source_error,
                             backup->on_target_error,
                             job_flags, NULL, NULL, txn, errp);
-- 
2.29.2

+
+    stats->driver = BLOCKDEV_DRIVER_NVME;
+    stats->u.nvme = (BlockStatsSpecificNvme) {
+        .completion_errors = s->stats.completion_errors,
+        .aligned_accesses = s->stats.aligned_accesses,
+        .unaligned_accesses = s->stats.unaligned_accesses,
+    };
+
+    return stats;
+}
+
 static const char *const nvme_strong_runtime_opts[] = {
     NVME_BLOCK_OPT_DEVICE,
     NVME_BLOCK_OPT_NAMESPACE,
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_nvme = {
     .bdrv_refresh_filename    = nvme_refresh_filename,
     .bdrv_refresh_limits      = nvme_refresh_limits,
     .strong_runtime_opts      = nvme_strong_runtime_opts,
+    .bdrv_get_specific_stats  = nvme_get_specific_stats,
 
     .bdrv_detach_aio_context  = nvme_detach_aio_context,
     .bdrv_attach_aio_context  = nvme_attach_aio_context,
-- 
2.26.2

From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Add the new member supported_read_flags to the BlockDriverState
structure. It will control the flags set for copy-on-read operations.
Make the block generic layer evaluate supported read flags before they
go to a block driver.

Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: use assert instead of abort]
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-8-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block_int.h |  4 ++++
 block/io.c                | 10 ++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -XXX,XX +XXX,XX @@ struct BlockDriverState {
     /* I/O Limits */
     BlockLimits bl;
 
+    /*
+     * Flags honored during pread
+     */
+    unsigned int supported_read_flags;
     /* Flags honored during pwrite (so far: BDRV_REQ_FUA,
      * BDRV_REQ_WRITE_UNCHANGED).
      * If a driver does not support BDRV_REQ_WRITE_UNCHANGED, those
diff --git a/block/io.c b/block/io.c
index XXXXXXX..XXXXXXX 100644
--- a/block/io.c
+++ b/block/io.c
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild *child,
     if (flags & BDRV_REQ_COPY_ON_READ) {
         int64_t pnum;
 
+        /* The flag BDRV_REQ_COPY_ON_READ has reached its addressee */
+        flags &= ~BDRV_REQ_COPY_ON_READ;
+
         ret = bdrv_is_allocated(bs, offset, bytes, &pnum);
         if (ret < 0) {
             goto out;
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild *child,
             goto out;
         }
 
+    assert(!(flags & ~bs->supported_read_flags));
+
     max_bytes = ROUND_UP(MAX(0, total_bytes - offset), align);
     if (bytes <= max_bytes && bytes <= max_transfer) {
-        ret = bdrv_driver_preadv(bs, offset, bytes, qiov, qiov_offset, 0);
+        ret = bdrv_driver_preadv(bs, offset, bytes, qiov, qiov_offset, flags);
         goto out;
     }
 
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild *child,
 
         ret = bdrv_driver_preadv(bs, offset + bytes - bytes_remaining,
                                  num, qiov,
-                                 qiov_offset + bytes - bytes_remaining, 0);
+                                 qiov_offset + bytes - bytes_remaining,
+                                 flags);
         max_bytes -= num;
     } else {
         num = bytes_remaining;
-- 
2.29.2

From: Coiby Xu <coiby.xu@gmail.com>

Allow vu_message_read to be replaced by one which will make use of the
QIOChannel functions. Thus reading vhost-user message won't stall the
guest. For slave channel, we still use the default vu_message_read.

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200918080912.321299-2-coiby.xu@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 contrib/libvhost-user/libvhost-user.h      | 21 +++++++++++++++++++++
 contrib/libvhost-user/libvhost-user-glib.c |  2 +-
 contrib/libvhost-user/libvhost-user.c      | 14 +++++++-------
 tests/vhost-user-bridge.c                  |  2 ++
 tools/virtiofsd/fuse_virtio.c              |  4 ++--
 5 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -XXX,XX +XXX,XX @@
  */
 #define VHOST_USER_MAX_RAM_SLOTS 32
 
+#define VHOST_USER_HDR_SIZE offsetof(VhostUserMsg, payload.u64)
+
 typedef enum VhostSetConfigType {
     VHOST_SET_CONFIG_TYPE_MASTER = 0,
     VHOST_SET_CONFIG_TYPE_MIGRATION = 1,
@@ -XXX,XX +XXX,XX @@ typedef uint64_t (*vu_get_features_cb) (VuDev *dev);
 typedef void (*vu_set_features_cb) (VuDev *dev, uint64_t features);
 typedef int (*vu_process_msg_cb) (VuDev *dev, VhostUserMsg *vmsg,
                                   int *do_reply);
+typedef bool (*vu_read_msg_cb) (VuDev *dev, int sock, VhostUserMsg *vmsg);
 typedef void (*vu_queue_set_started_cb) (VuDev *dev, int qidx, bool started);
 typedef bool (*vu_queue_is_processed_in_order_cb) (VuDev *dev, int qidx);
 typedef int (*vu_get_config_cb) (VuDev *dev, uint8_t *config, uint32_t len);
@@ -XXX,XX +XXX,XX @@ struct VuDev {
     bool broken;
     uint16_t max_queues;
 
+    /* @read_msg: custom method to read vhost-user message
+     *
+     * Read data from vhost_user socket fd and fill up
+     * the passed VhostUserMsg *vmsg struct.
+     *
+     * If reading fails, it should close the received set of file
+     * descriptors as socket message's auxiliary data.
+     *
+     * For the details, please refer to vu_message_read in libvhost-user.c
+     * which will be used by default if not custom method is provided when
+     * calling vu_init
+     *
+     * Returns: true if vhost-user message successfully received,
+     * otherwise return false.
+     *
+     */
+    vu_read_msg_cb read_msg;
     /* @set_watch: add or update the given fd to the watch set,
      * call cb when condition is met */
     vu_set_watch_cb set_watch;
@@ -XXX,XX +XXX,XX @@ bool vu_init(VuDev *dev,
              uint16_t max_queues,
              int socket,
              vu_panic_cb panic,
+             vu_read_msg_cb read_msg,
              vu_set_watch_cb set_watch,
              vu_remove_watch_cb remove_watch,
              const VuDevIface *iface);
diff --git a/contrib/libvhost-user/libvhost-user-glib.c b/contrib/libvhost-user/libvhost-user-glib.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/libvhost-user-glib.c
+++ b/contrib/libvhost-user/libvhost-user-glib.c
@@ -XXX,XX +XXX,XX @@ vug_init(VugDev *dev, uint16_t max_queues, int socket,
     g_assert(dev);
     g_assert(iface);
 
-    if (!vu_init(&dev->parent, max_queues, socket, panic, set_watch,
+    if (!vu_init(&dev->parent, max_queues, socket, panic, NULL, set_watch,
                  remove_watch, iface)) {
         return false;
     }
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -XXX,XX +XXX,XX @@
 /* The version of inflight buffer */
 #define INFLIGHT_VERSION 1
 
-#define VHOST_USER_HDR_SIZE offsetof(VhostUserMsg, payload.u64)
-
 /* The version of the protocol we support */
 #define VHOST_USER_VERSION 1
 #define LIBVHOST_USER_DEBUG 0
@@ -XXX,XX +XXX,XX @@ have_userfault(void)
 }
 
 static bool
-vu_message_read(VuDev *dev, int conn_fd, VhostUserMsg *vmsg)
+vu_message_read_default(VuDev *dev, int conn_fd, VhostUserMsg *vmsg)
 {
     char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * sizeof(int))] = {};
     struct iovec iov = {
@@ -XXX,XX +XXX,XX @@ vu_process_message_reply(VuDev *dev, const VhostUserMsg *vmsg)
         goto out;
     }
 
-    if (!vu_message_read(dev, dev->slave_fd, &msg_reply)) {
+    if (!vu_message_read_default(dev, dev->slave_fd, &msg_reply)) {
         goto out;
     }
 
@@ -XXX,XX +XXX,XX @@ vu_set_mem_table_exec_postcopy(VuDev *dev, VhostUserMsg *vmsg)
     /* Wait for QEMU to confirm that it's registered the handler for the
      * faults.
      */
-    if (!vu_message_read(dev, dev->sock, vmsg) ||
+    if (!dev->read_msg(dev, dev->sock, vmsg) ||
         vmsg->size != sizeof(vmsg->payload.u64) ||
         vmsg->payload.u64 != 0) {
         vu_panic(dev, "failed to receive valid ack for postcopy set-mem-table");
@@ -XXX,XX +XXX,XX @@ vu_dispatch(VuDev *dev)
     int reply_requested;
     bool need_reply, success = false;
 
-    if (!vu_message_read(dev, dev->sock, &vmsg)) {
+    if (!dev->read_msg(dev, dev->sock, &vmsg)) {
         goto end;
     }
 
@@ -XXX,XX +XXX,XX @@ vu_init(VuDev *dev,
         uint16_t max_queues,
         int socket,
         vu_panic_cb panic,
+        vu_read_msg_cb read_msg,
         vu_set_watch_cb set_watch,
         vu_remove_watch_cb remove_watch,
         const VuDevIface *iface)
@@ -XXX,XX +XXX,XX @@ vu_init(VuDev *dev,
 
     dev->sock = socket;
     dev->panic = panic;
+    dev->read_msg = read_msg ? read_msg : vu_message_read_default;
     dev->set_watch = set_watch;
     dev->remove_watch = remove_watch;
     dev->iface = iface;
@@ -XXX,XX +XXX,XX @@ static void _vu_queue_notify(VuDev *dev, VuVirtq *vq, bool sync)
 
         vu_message_write(dev, dev->slave_fd, &vmsg);
         if (ack) {
-            vu_message_read(dev, dev->slave_fd, &vmsg);
+            vu_message_read_default(dev, dev->slave_fd, &vmsg);
         }
         return;
     }
diff --git a/tests/vhost-user-bridge.c b/tests/vhost-user-bridge.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/vhost-user-bridge.c
+++ b/tests/vhost-user-bridge.c
@@ -XXX,XX +XXX,XX @@ vubr_accept_cb(int sock, void *ctx)
                  VHOST_USER_BRIDGE_MAX_QUEUES,
                  conn_fd,
                  vubr_panic,
+                 NULL,
                  vubr_set_watch,
                  vubr_remove_watch,
                  &vuiface)) {
@@ -XXX,XX +XXX,XX @@ vubr_new(const char *path, bool client)
                  VHOST_USER_BRIDGE_MAX_QUEUES,
                  dev->sock,
                  vubr_panic,
+                 NULL,
                  vubr_set_watch,
                  vubr_remove_watch,
                  &vuiface)) {
diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index XXXXXXX..XXXXXXX 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -XXX,XX +XXX,XX @@ int virtio_session_mount(struct fuse_session *se)
     se->vu_socketfd = data_sock;
     se->virtio_dev->se = se;
     pthread_rwlock_init(&se->virtio_dev->vu_dispatch_rwlock, NULL);
-    vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, fv_set_watch,
-            fv_remove_watch, &fv_iface);
+    vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, NULL,
+            fv_set_watch, fv_remove_watch, &fv_iface);
 
     return 0;
 }
-- 
2.26.2

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

They will be used for backup.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-5-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h |  6 ++++++
 block/block-copy.c         | 11 +++++++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
  *
  * Caller is responsible to call block_copy_call_free() to free
  * BlockCopyCallState object.
+ *
+ * @max_workers means maximum of parallel coroutines to execute sub-requests,
+ * must be > 0.
+ *
+ * @max_chunk means maximum length for one IO operation. Zero means unlimited.
  */
 BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                      int64_t offset, int64_t bytes,
+                                     int max_workers, int64_t max_chunk,
                                      BlockCopyAsyncCallbackFunc cb,
                                      void *cb_opaque);
 
diff --git a/block/block-copy.c b/block/block-copy.c
index XXXXXXX..XXXXXXX 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
     BlockCopyState *s;
     int64_t offset;
     int64_t bytes;
+    int max_workers;
+    int64_t max_chunk;
     BlockCopyAsyncCallbackFunc cb;
     void *cb_opaque;
 
@@ -XXX,XX +XXX,XX @@ static BlockCopyTask *block_copy_task_create(BlockCopyState *s,
                                              int64_t offset, int64_t bytes)
 {
     BlockCopyTask *task;
+    int64_t max_chunk = MIN_NON_ZERO(s->copy_size, call_state->max_chunk);
 
     if (!bdrv_dirty_bitmap_next_dirty_area(s->copy_bitmap,
                                            offset, offset + bytes,
-                                           s->copy_size, &offset, &bytes))
+                                           max_chunk, &offset, &bytes))
     {
         return NULL;
     }
@@ -XXX,XX +XXX,XX @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
         bytes = end - offset;
 
         if (!aio && bytes) {
-            aio = aio_task_pool_new(BLOCK_COPY_MAX_WORKERS);
+            aio = aio_task_pool_new(call_state->max_workers);
         }
 
         ret = block_copy_task_run(aio, task);
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
         .s = s,
         .offset = start,
         .bytes = bytes,
+        .max_workers = BLOCK_COPY_MAX_WORKERS,
     };
 
     int ret = block_copy_common(&call_state);
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn block_copy_async_co_entry(void *opaque)
 
 BlockCopyCallState *block_copy_async(BlockCopyState *s,
                                      int64_t offset, int64_t bytes,
+                                     int max_workers, int64_t max_chunk,
                                      BlockCopyAsyncCallbackFunc cb,
                                      void *cb_opaque)
 {
@@ -XXX,XX +XXX,XX @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
         .s = s,
         .offset = offset,
         .bytes = bytes,
+        .max_workers = max_workers,
+        .max_chunk = max_chunk,
         .cb = cb,
         .cb_opaque = cb_opaque,
 
-- 
2.29.2

From: Coiby Xu <coiby.xu@gmail.com>

When the client is running in gdb and quit command is run in gdb,
QEMU will still dispatch the event which will cause segment fault in
the callback function.

Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20200918080912.321299-3-coiby.xu@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 contrib/libvhost-user/libvhost-user.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -XXX,XX +XXX,XX @@ vu_deinit(VuDev *dev)
         }
 
         if (vq->kick_fd != -1) {
+            dev->remove_watch(dev, vq->kick_fd);
            close(vq->kick_fd);
            vq->kick_fd = -1;
         }
-- 
2.26.2

1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Coiby Xu <coiby.xu@gmail.com>
2
2
3
Add script to benchmark new backup architecture.
3
Sharing QEMU devices via vhost-user protocol.
4
4
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
5
Only one vhost-user client can connect to the server one time.
6
Message-Id: <20210116214705.822267-24-vsementsov@virtuozzo.com>
6
7
[mreitz: s/not unsupported/not supported/]
7
Suggested-by: Kevin Wolf <kwolf@redhat.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
8
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
10
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
11
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
12
Message-id: 20200918080912.321299-4-coiby.xu@gmail.com
13
[Fixed size_t %lu -> %zu format string compiler error.
14
--Stefan]
15
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
---
16
---
10
scripts/simplebench/bench-backup.py | 167 ++++++++++++++++++++++++++++
17
util/vhost-user-server.h | 65 ++++++
11
1 file changed, 167 insertions(+)
18
util/vhost-user-server.c | 428 +++++++++++++++++++++++++++++++++++++++
12
create mode 100755 scripts/simplebench/bench-backup.py
19
util/meson.build | 1 +
20
3 files changed, 494 insertions(+)
21
create mode 100644 util/vhost-user-server.h
22
create mode 100644 util/vhost-user-server.c
13
23
14
diff --git a/scripts/simplebench/bench-backup.py b/scripts/simplebench/bench-backup.py
24
diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
15
new file mode 100755
25
new file mode 100644
16
index XXXXXXX..XXXXXXX
26
index XXXXXXX..XXXXXXX
17
--- /dev/null
27
--- /dev/null
18
+++ b/scripts/simplebench/bench-backup.py
28
+++ b/util/vhost-user-server.h
19
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@
20
+#!/usr/bin/env python3
30
+/*
21
+#
31
+ * Sharing QEMU devices via vhost-user protocol
22
+# Bench backup block-job
32
+ *
23
+#
33
+ * Copyright (c) Coiby Xu <coiby.xu@gmail.com>.
24
+# Copyright (c) 2020 Virtuozzo International GmbH.
34
+ * Copyright (c) 2020 Red Hat, Inc.
25
+#
35
+ *
26
+# This program is free software; you can redistribute it and/or modify
36
+ * This work is licensed under the terms of the GNU GPL, version 2 or
27
+# it under the terms of the GNU General Public License as published by
37
+ * later. See the COPYING file in the top-level directory.
28
+# the Free Software Foundation; either version 2 of the License, or
38
+ */
29
+# (at your option) any later version.
39
+
30
+#
40
+#ifndef VHOST_USER_SERVER_H
31
+# This program is distributed in the hope that it will be useful,
41
+#define VHOST_USER_SERVER_H
32
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
42
+
33
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
43
+#include "contrib/libvhost-user/libvhost-user.h"
34
+# GNU General Public License for more details.
44
+#include "io/channel-socket.h"
35
+#
45
+#include "io/channel-file.h"
36
+# You should have received a copy of the GNU General Public License
46
+#include "io/net-listener.h"
37
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
47
+#include "qemu/error-report.h"
38
+#
48
+#include "qapi/error.h"
39
+
49
+#include "standard-headers/linux/virtio_blk.h"
40
+import argparse
50
+
41
+import json
51
+typedef struct VuFdWatch {
42
+
52
+ VuDev *vu_dev;
43
+import simplebench
53
+ int fd; /*kick fd*/
44
+from results_to_text import results_to_text
54
+ void *pvt;
45
+from bench_block_job import bench_block_copy, drv_file, drv_nbd
55
+ vu_watch_cb cb;
46
+
56
+ bool processing;
47
+
57
+ QTAILQ_ENTRY(VuFdWatch) next;
48
+def bench_func(env, case):
58
+} VuFdWatch;
49
+ """ Handle one "cell" of benchmarking table. """
59
+
50
+ cmd_options = env['cmd-options'] if 'cmd-options' in env else {}
60
+typedef struct VuServer VuServer;
51
+ return bench_block_copy(env['qemu-binary'], env['cmd'],
61
+typedef void DevicePanicNotifierFn(VuServer *server);
52
+ cmd_options,
62
+
53
+ case['source'], case['target'])
63
+struct VuServer {
54
+
64
+ QIONetListener *listener;
55
+
65
+ AioContext *ctx;
56
+def bench(args):
66
+ DevicePanicNotifierFn *device_panic_notifier;
57
+ test_cases = []
67
+ int max_queues;
58
+
68
+ const VuDevIface *vu_iface;
59
+ sources = {}
69
+ VuDev vu_dev;
60
+ targets = {}
70
+ QIOChannel *ioc; /* The I/O channel with the client */
61
+ for d in args.dir:
71
+ QIOChannelSocket *sioc; /* The underlying data channel with the client */
62
+ label, path = d.split(':') # paths with colon not supported
72
+ /* IOChannel for fd provided via VHOST_USER_SET_SLAVE_REQ_FD */
63
+ sources[label] = drv_file(path + '/test-source')
73
+ QIOChannel *ioc_slave;
64
+ targets[label] = drv_file(path + '/test-target')
74
+ QIOChannelSocket *sioc_slave;
65
+
75
+ Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
66
+ if args.nbd:
76
+ QTAILQ_HEAD(, VuFdWatch) vu_fd_watches;
67
+ nbd = args.nbd.split(':')
77
+ /* restart coroutine co_trip if AIOContext is changed */
68
+ host = nbd[0]
78
+ bool aio_context_changed;
69
+ port = '10809' if len(nbd) == 1 else nbd[1]
79
+ bool processing_msg;
70
+ drv = drv_nbd(host, port)
80
+};
71
+ sources['nbd'] = drv
81
+
72
+ targets['nbd'] = drv
82
+bool vhost_user_server_start(VuServer *server,
73
+
83
+ SocketAddress *unix_socket,
74
+ for t in args.test:
84
+ AioContext *ctx,
75
+ src, dst = t.split(':')
85
+ uint16_t max_queues,
76
+
86
+ DevicePanicNotifierFn *device_panic_notifier,
77
+ test_cases.append({
87
+ const VuDevIface *vu_iface,
78
+ 'id': t,
88
+ Error **errp);
79
+ 'source': sources[src],
89
+
80
+ 'target': targets[dst]
90
+void vhost_user_server_stop(VuServer *server);
81
+ })
91
+
82
+
92
+void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx);
83
+ binaries = [] # list of (<label>, <path>, [<options>])
93
+
84
+ for i, q in enumerate(args.env):
94
+#endif /* VHOST_USER_SERVER_H */
85
+ name_path = q.split(':')
95
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
86
+ if len(name_path) == 1:
96
new file mode 100644
87
+ label = f'q{i}'
97
index XXXXXXX..XXXXXXX
88
+ path_opts = name_path[0].split(',')
98
--- /dev/null
89
+ else:
99
+++ b/util/vhost-user-server.c
90
+ assert len(name_path) == 2 # paths with colon not supported
100
@@ -XXX,XX +XXX,XX @@
91
+ label = name_path[0]
101
+/*
92
+ path_opts = name_path[1].split(',')
102
+ * Sharing QEMU devices via vhost-user protocol
93
+
103
+ *
94
+ binaries.append((label, path_opts[0], path_opts[1:]))
104
+ * Copyright (c) Coiby Xu <coiby.xu@gmail.com>.
95
+
105
+ * Copyright (c) 2020 Red Hat, Inc.
96
+ test_envs = []
106
+ *
97
+
107
+ * This work is licensed under the terms of the GNU GPL, version 2 or
98
+ bin_paths = {}
108
+ * later. See the COPYING file in the top-level directory.
99
+ for i, q in enumerate(args.env):
109
+ */
100
+ opts = q.split(',')
110
+#include "qemu/osdep.h"
101
+ label_path = opts[0]
111
+#include "qemu/main-loop.h"
102
+ opts = opts[1:]
112
+#include "vhost-user-server.h"
103
+
113
+
104
+ if ':' in label_path:
114
+static void vmsg_close_fds(VhostUserMsg *vmsg)
105
+ # path with colon inside is not supported
115
+{
106
+ label, path = label_path.split(':')
116
+ int i;
107
+ bin_paths[label] = path
117
+ for (i = 0; i < vmsg->fd_num; i++) {
108
+ elif label_path in bin_paths:
118
+ close(vmsg->fds[i]);
109
+ label = label_path
119
+ }
110
+ path = bin_paths[label]
120
+}
111
+ else:
121
+
112
+ path = label_path
122
+static void vmsg_unblock_fds(VhostUserMsg *vmsg)
113
+ label = f'q{i}'
123
+{
114
+ bin_paths[label] = path
124
+ int i;
115
+
125
+ for (i = 0; i < vmsg->fd_num; i++) {
116
+ x_perf = {}
126
+ qemu_set_nonblock(vmsg->fds[i]);
117
+ is_mirror = False
127
+ }
118
+ for opt in opts:
128
+}
119
+ if opt == 'mirror':
129
+
120
+ is_mirror = True
130
+static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
121
+ elif opt == 'copy-range=on':
131
+ gpointer opaque);
122
+ x_perf['use-copy-range'] = True
132
+
123
+ elif opt == 'copy-range=off':
133
+static void close_client(VuServer *server)
124
+ x_perf['use-copy-range'] = False
134
+{
125
+ elif opt.startswith('max-workers='):
135
+ /*
126
+ x_perf['max-workers'] = int(opt.split('=')[1])
136
+ * Before closing the client
127
+
137
+ *
128
+ if is_mirror:
138
+ * 1. Let vu_client_trip stop processing new vhost-user msg
129
+ assert not x_perf
139
+ *
130
+ test_envs.append({
140
+ * 2. remove kick_handler
131
+ 'id': f'mirror({label})',
141
+ *
132
+ 'cmd': 'blockdev-mirror',
142
+ * 3. wait for the kick handler to be finished
133
+ 'qemu-binary': path
143
+ *
134
+ })
144
+ * 4. wait for the current vhost-user msg to be finished processing
135
+ else:
145
+ */
136
+ test_envs.append({
146
+
137
+ 'id': f'backup({label})\n' + '\n'.join(opts),
147
+ QIOChannelSocket *sioc = server->sioc;
138
+ 'cmd': 'blockdev-backup',
148
+ /* When this is set vu_client_trip will stop new processing vhost-user message */
139
+ 'cmd-options': {'x-perf': x_perf} if x_perf else {},
149
+ server->sioc = NULL;
140
+ 'qemu-binary': path
150
+
141
+ })
151
+ VuFdWatch *vu_fd_watch, *next;
142
+
152
+ QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
143
+ result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
153
+ aio_set_fd_handler(server->ioc->ctx, vu_fd_watch->fd, true, NULL,
144
+ with open('results.json', 'w') as f:
154
+ NULL, NULL, NULL);
145
+ json.dump(result, f, indent=4)
155
+ }
146
+ print(results_to_text(result))
156
+
147
+
157
+ while (!QTAILQ_EMPTY(&server->vu_fd_watches)) {
148
+
158
+ QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
149
+class ExtendAction(argparse.Action):
159
+ if (!vu_fd_watch->processing) {
150
+ def __call__(self, parser, namespace, values, option_string=None):
160
+ QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
151
+ items = getattr(namespace, self.dest) or []
161
+ g_free(vu_fd_watch);
152
+ items.extend(values)
162
+ }
153
+ setattr(namespace, self.dest, items)
163
+ }
154
+
164
+ }
155
+
165
+
156
+if __name__ == '__main__':
166
+ while (server->processing_msg) {
157
+ p = argparse.ArgumentParser('Backup benchmark', epilog='''
167
+ if (server->ioc->read_coroutine) {
158
+ENV format
168
+ server->ioc->read_coroutine = NULL;
159
+
169
+ qio_channel_set_aio_fd_handler(server->ioc, server->ioc->ctx, NULL,
160
+ (LABEL:PATH|LABEL|PATH)[,max-workers=N][,use-copy-range=(on|off)][,mirror]
170
+ NULL, server->ioc);
161
+
171
+ server->processing_msg = false;
162
+ LABEL short name for the binary
172
+ }
163
+ PATH path to the binary
173
+ }
164
+ max-workers set x-perf.max-workers of backup job
174
+
165
+ use-copy-range set x-perf.use-copy-range of backup job
175
+ vu_deinit(&server->vu_dev);
166
+ mirror use mirror job instead of backup''',
176
+ object_unref(OBJECT(sioc));
167
+ formatter_class=argparse.RawTextHelpFormatter)
177
+ object_unref(OBJECT(server->ioc));
168
+ p.add_argument('--env', nargs='+', help='''\
178
+}
169
+Qemu binaries with labels and options, see below
179
+
170
+"ENV format" section''',
180
+static void panic_cb(VuDev *vu_dev, const char *buf)
171
+ action=ExtendAction)
181
+{
172
+ p.add_argument('--dir', nargs='+', help='''\
182
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
173
+Directories, each containing "test-source" and/or
183
+
174
+"test-target" files, raw images to used in
184
+ /* avoid while loop in close_client */
175
+benchmarking. File path with label, like
185
+ server->processing_msg = false;
176
+label:/path/to/directory''',
186
+
177
+ action=ExtendAction)
187
+ if (buf) {
178
+ p.add_argument('--nbd', help='''\
188
+ error_report("vu_panic: %s", buf);
179
+host:port for remote NBD image, (or just host, for
189
+ }
180
+default port 10809). Use it in tests, label is "nbd"
190
+
181
+(but you cannot create test nbd:nbd).''')
191
+ if (server->sioc) {
182
+ p.add_argument('--test', nargs='+', help='''\
192
+ close_client(server);
183
+Tests, in form source-dir-label:target-dir-label''',
193
+ }
184
+ action=ExtendAction)
194
+
185
+
195
+ if (server->device_panic_notifier) {
186
+ bench(p.parse_args())
196
+ server->device_panic_notifier(server);
197
+ }
198
+
199
+ /*
200
+ * Set the callback function for network listener so another
201
+ * vhost-user client can connect to this server
202
+ */
203
+ qio_net_listener_set_client_func(server->listener,
204
+ vu_accept,
205
+ server,
206
+ NULL);
207
+}
208
+
209
+static bool coroutine_fn
210
+vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
211
+{
212
+ struct iovec iov = {
213
+ .iov_base = (char *)vmsg,
214
+ .iov_len = VHOST_USER_HDR_SIZE,
215
+ };
216
+ int rc, read_bytes = 0;
217
+ Error *local_err = NULL;
218
+ /*
219
+ * Store fds/nfds returned from qio_channel_readv_full into
220
+ * temporary variables.
221
+ *
222
+ * VhostUserMsg is a packed structure, gcc will complain about passing
223
+ * pointer to a packed structure member if we pass &VhostUserMsg.fd_num
224
+ * and &VhostUserMsg.fds directly when calling qio_channel_readv_full,
225
+ * thus two temporary variables nfds and fds are used here.
226
+ */
227
+ size_t nfds = 0, nfds_t = 0;
228
+ const size_t max_fds = G_N_ELEMENTS(vmsg->fds);
229
+ int *fds_t = NULL;
230
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
231
+ QIOChannel *ioc = server->ioc;
232
+
233
+ if (!ioc) {
234
+ error_report_err(local_err);
235
+ goto fail;
236
+ }
237
+
238
+ assert(qemu_in_coroutine());
239
+ do {
240
+ /*
241
+ * qio_channel_readv_full may have short reads, keeping calling it
242
+ * until getting VHOST_USER_HDR_SIZE or 0 bytes in total
243
+ */
244
+ rc = qio_channel_readv_full(ioc, &iov, 1, &fds_t, &nfds_t, &local_err);
245
+ if (rc < 0) {
246
+ if (rc == QIO_CHANNEL_ERR_BLOCK) {
247
+ qio_channel_yield(ioc, G_IO_IN);
248
+ continue;
249
+ } else {
250
+ error_report_err(local_err);
251
+ return false;
252
+ }
253
+ }
254
+ read_bytes += rc;
255
+ if (nfds_t > 0) {
256
+ if (nfds + nfds_t > max_fds) {
257
+ error_report("A maximum of %zu fds are allowed, "
258
+ "however got %zu fds now",
259
+ max_fds, nfds + nfds_t);
260
+ goto fail;
261
+ }
262
+ memcpy(vmsg->fds + nfds, fds_t,
263
+ nfds_t *sizeof(vmsg->fds[0]));
264
+ nfds += nfds_t;
265
+ g_free(fds_t);
266
+ }
267
+ if (read_bytes == VHOST_USER_HDR_SIZE || rc == 0) {
268
+ break;
269
+ }
270
+ iov.iov_base = (char *)vmsg + read_bytes;
271
+ iov.iov_len = VHOST_USER_HDR_SIZE - read_bytes;
272
+ } while (true);
273
+
274
+ vmsg->fd_num = nfds;
275
+ /* qio_channel_readv_full will make socket fds blocking, unblock them */
276
+ vmsg_unblock_fds(vmsg);
277
+ if (vmsg->size > sizeof(vmsg->payload)) {
278
+ error_report("Error: too big message request: %d, "
279
+ "size: vmsg->size: %u, "
280
+ "while sizeof(vmsg->payload) = %zu",
281
+ vmsg->request, vmsg->size, sizeof(vmsg->payload));
282
+ goto fail;
283
+ }
284
+
285
+ struct iovec iov_payload = {
286
+ .iov_base = (char *)&vmsg->payload,
287
+ .iov_len = vmsg->size,
288
+ };
289
+ if (vmsg->size) {
290
+ rc = qio_channel_readv_all_eof(ioc, &iov_payload, 1, &local_err);
291
+ if (rc == -1) {
292
+ error_report_err(local_err);
293
+ goto fail;
294
+ }
295
+ }
296
+
297
+ return true;
298
+
299
+fail:
300
+ vmsg_close_fds(vmsg);
301
+
302
+ return false;
303
+}
304
+
305
+
306
+static void vu_client_start(VuServer *server);
307
+static coroutine_fn void vu_client_trip(void *opaque)
308
+{
309
+ VuServer *server = opaque;
310
+
311
+ while (!server->aio_context_changed && server->sioc) {
312
+ server->processing_msg = true;
313
+ vu_dispatch(&server->vu_dev);
314
+ server->processing_msg = false;
315
+ }
316
+
317
+ if (server->aio_context_changed && server->sioc) {
318
+ server->aio_context_changed = false;
319
+ vu_client_start(server);
320
+ }
321
+}
322
+
323
+static void vu_client_start(VuServer *server)
324
+{
325
+ server->co_trip = qemu_coroutine_create(vu_client_trip, server);
326
+ aio_co_enter(server->ctx, server->co_trip);
327
+}
328
+
329
+/*
330
+ * a wrapper for vu_kick_cb
331
+ *
332
+ * since aio_dispatch can only pass one user data pointer to the
333
+ * callback function, pack VuDev and pvt into a struct. Then unpack it
334
+ * and pass them to vu_kick_cb
335
+ */
336
+static void kick_handler(void *opaque)
337
+{
338
+ VuFdWatch *vu_fd_watch = opaque;
339
+ vu_fd_watch->processing = true;
340
+ vu_fd_watch->cb(vu_fd_watch->vu_dev, 0, vu_fd_watch->pvt);
341
+ vu_fd_watch->processing = false;
342
+}
343
+
344
+
345
+static VuFdWatch *find_vu_fd_watch(VuServer *server, int fd)
346
+{
347
+
348
+ VuFdWatch *vu_fd_watch, *next;
349
+ QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
350
+ if (vu_fd_watch->fd == fd) {
351
+ return vu_fd_watch;
352
+ }
353
+ }
354
+ return NULL;
355
+}
+
+static void
+set_watch(VuDev *vu_dev, int fd, int vu_evt,
+ vu_watch_cb cb, void *pvt)
+{
+
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
+ g_assert(vu_dev);
+ g_assert(fd >= 0);
+ g_assert(cb);
+
+ VuFdWatch *vu_fd_watch = find_vu_fd_watch(server, fd);
+
+ if (!vu_fd_watch) {
+ VuFdWatch *vu_fd_watch = g_new0(VuFdWatch, 1);
+
+ QTAILQ_INSERT_TAIL(&server->vu_fd_watches, vu_fd_watch, next);
+
+ vu_fd_watch->fd = fd;
+ vu_fd_watch->cb = cb;
+ qemu_set_nonblock(fd);
+ aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+ NULL, NULL, vu_fd_watch);
+ vu_fd_watch->vu_dev = vu_dev;
+ vu_fd_watch->pvt = pvt;
+ }
+}
+
+
+static void remove_watch(VuDev *vu_dev, int fd)
+{
+ VuServer *server;
+ g_assert(vu_dev);
+ g_assert(fd >= 0);
+
+ server = container_of(vu_dev, VuServer, vu_dev);
+
+ VuFdWatch *vu_fd_watch = find_vu_fd_watch(server, fd);
+
+ if (!vu_fd_watch) {
+ return;
+ }
+ aio_set_fd_handler(server->ioc->ctx, fd, true, NULL, NULL, NULL, NULL);
+
+ QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
+ g_free(vu_fd_watch);
+}
+
+
+static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
+ gpointer opaque)
+{
+ VuServer *server = opaque;
+
+ if (server->sioc) {
+ warn_report("Only one vhost-user client is allowed to "
+ "connect to the server at a time");
+ return;
+ }
+
+ if (!vu_init(&server->vu_dev, server->max_queues, sioc->fd, panic_cb,
+ vu_message_read, set_watch, remove_watch, server->vu_iface)) {
+ error_report("Failed to initialize libvhost-user");
+ return;
+ }
+
+ /*
+ * Unset the callback function for the network listener so that other
+ * vhost-user clients keep waiting until this client disconnects
+ */
+ qio_net_listener_set_client_func(server->listener,
+ NULL,
+ NULL,
+ NULL);
+ server->sioc = sioc;
+ /*
+ * Increase the object reference, so sioc will not be freed by
+ * qio_net_listener_channel_func which will call object_unref(OBJECT(sioc))
+ */
+ object_ref(OBJECT(server->sioc));
+ qio_channel_set_name(QIO_CHANNEL(sioc), "vhost-user client");
+ server->ioc = QIO_CHANNEL(sioc);
+ object_ref(OBJECT(server->ioc));
+ qio_channel_attach_aio_context(server->ioc, server->ctx);
+ qio_channel_set_blocking(QIO_CHANNEL(server->sioc), false, NULL);
+ vu_client_start(server);
+}
+
+
+void vhost_user_server_stop(VuServer *server)
+{
+ if (server->sioc) {
+ close_client(server);
+ }
+
+ if (server->listener) {
+ qio_net_listener_disconnect(server->listener);
+ object_unref(OBJECT(server->listener));
+ }
+
+}
+
+void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx)
+{
+ VuFdWatch *vu_fd_watch, *next;
+ void *opaque = NULL;
+ IOHandler *io_read = NULL;
+ bool attach;
+
+ server->ctx = ctx ? ctx : qemu_get_aio_context();
+
+ if (!server->sioc) {
+ /* not yet serving any client */
+ return;
+ }
+
+ if (ctx) {
+ qio_channel_attach_aio_context(server->ioc, ctx);
+ server->aio_context_changed = true;
+ io_read = kick_handler;
+ attach = true;
+ } else {
+ qio_channel_detach_aio_context(server->ioc);
+ /* server->ioc->ctx keeps the old AioContext */
+ ctx = server->ioc->ctx;
+ attach = false;
+ }
+
+ QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
+ if (vu_fd_watch->cb) {
+ opaque = attach ? vu_fd_watch : NULL;
+ aio_set_fd_handler(ctx, vu_fd_watch->fd, true,
+ io_read, NULL, NULL,
+ opaque);
+ }
+ }
+}
+
+
+bool vhost_user_server_start(VuServer *server,
+ SocketAddress *socket_addr,
+ AioContext *ctx,
+ uint16_t max_queues,
+ DevicePanicNotifierFn *device_panic_notifier,
+ const VuDevIface *vu_iface,
+ Error **errp)
+{
+ QIONetListener *listener = qio_net_listener_new();
+ if (qio_net_listener_open_sync(listener, socket_addr, 1,
+ errp) < 0) {
+ object_unref(OBJECT(listener));
+ return false;
+ }
+
+ /* zero out unspecified fields */
+ *server = (VuServer) {
+ .listener = listener,
+ .vu_iface = vu_iface,
+ .max_queues = max_queues,
+ .ctx = ctx,
+ .device_panic_notifier = device_panic_notifier,
+ };
+
+ qio_net_listener_set_name(server->listener, "vhost-user-backend-listener");
+
+ qio_net_listener_set_client_func(server->listener,
+ vu_accept,
+ server,
+ NULL);
+
+ QTAILQ_INIT(&server->vu_fd_watches);
+ return true;
+}
diff --git a/util/meson.build b/util/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -XXX,XX +XXX,XX @@ if have_block
 util_ss.add(files('main-loop.c'))
 util_ss.add(files('nvdimm-utils.c'))
 util_ss.add(files('qemu-coroutine.c', 'qemu-coroutine-lock.c', 'qemu-coroutine-io.c'))
+ util_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-server.c'))
 util_ss.add(files('qemu-coroutine-sleep.c'))
 util_ss.add(files('qemu-co-shared-resource.c'))
 util_ss.add(files('thread-pool.c', 'qemu-timer.c'))
--
2.29.2
1
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
1
From: Coiby Xu <coiby.xu@gmail.com>
2
2
3
Provide API for the COR-filter removal. Also, drop the filter child
3
Move the constants from hw/core/qdev-properties.c to
4
permissions for an inactive state when the filter node is being
4
util/block-helpers.h so that knowledge of the min/max values is
5
removed.
6
To insert the filter, the block generic layer function
7
bdrv_insert_node() can be used.
8
The new function bdrv_cor_filter_drop() may be considered as an
9
intermediate solution before the QEMU permission update system has
10
been overhauled. Then we will be able to implement the API function
11
bdrv_remove_node() on the block generic layer.
12
5
13
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
6
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
14
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
15
Reviewed-by: Max Reitz <mreitz@redhat.com>
8
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
16
Message-Id: <20201216061703.70908-4-vsementsov@virtuozzo.com>
9
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
17
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
11
Message-id: 20200918080912.321299-5-coiby.xu@gmail.com
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
18
---
13
---
19
block/copy-on-read.h | 32 +++++++++++++++++++++++++
14
util/block-helpers.h | 19 +++++++++++++
20
block/copy-on-read.c | 56 ++++++++++++++++++++++++++++++++++++++++++++
15
hw/core/qdev-properties-system.c | 31 ++++-----------------
21
2 files changed, 88 insertions(+)
16
util/block-helpers.c | 46 ++++++++++++++++++++++++++++++++
22
create mode 100644 block/copy-on-read.h
17
util/meson.build | 1 +
18
4 files changed, 71 insertions(+), 26 deletions(-)
19
create mode 100644 util/block-helpers.h
20
create mode 100644 util/block-helpers.c
23
21
24
diff --git a/block/copy-on-read.h b/block/copy-on-read.h
22
diff --git a/util/block-helpers.h b/util/block-helpers.h
25
new file mode 100644
23
new file mode 100644
26
index XXXXXXX..XXXXXXX
24
index XXXXXXX..XXXXXXX
27
--- /dev/null
25
--- /dev/null
28
+++ b/block/copy-on-read.h
26
+++ b/util/block-helpers.h
27
@@ -XXX,XX +XXX,XX @@
28
+#ifndef BLOCK_HELPERS_H
29
+#define BLOCK_HELPERS_H
30
+
31
+#include "qemu/units.h"
32
+
33
+/* lower limit is sector size */
34
+#define MIN_BLOCK_SIZE INT64_C(512)
35
+#define MIN_BLOCK_SIZE_STR "512 B"
36
+/*
37
+ * upper limit is arbitrary, 2 MiB looks sufficient for all sensible uses, and
38
+ * matches qcow2 cluster size limit
39
+ */
40
+#define MAX_BLOCK_SIZE (2 * MiB)
41
+#define MAX_BLOCK_SIZE_STR "2 MiB"
42
+
43
+void check_block_size(const char *id, const char *name, int64_t value,
44
+ Error **errp);
45
+
46
+#endif /* BLOCK_HELPERS_H */
47
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/hw/core/qdev-properties-system.c
50
+++ b/hw/core/qdev-properties-system.c
51
@@ -XXX,XX +XXX,XX @@
52
#include "sysemu/blockdev.h"
53
#include "net/net.h"
54
#include "hw/pci/pci.h"
55
+#include "util/block-helpers.h"
56
57
static bool check_prop_still_unset(DeviceState *dev, const char *name,
58
const void *old_val, const char *new_val,
59
@@ -XXX,XX +XXX,XX @@ const PropertyInfo qdev_prop_losttickpolicy = {
60
61
/* --- blocksize --- */
62
63
-/* lower limit is sector size */
64
-#define MIN_BLOCK_SIZE 512
65
-#define MIN_BLOCK_SIZE_STR "512 B"
66
-/*
67
- * upper limit is arbitrary, 2 MiB looks sufficient for all sensible uses, and
68
- * matches qcow2 cluster size limit
69
- */
70
-#define MAX_BLOCK_SIZE (2 * MiB)
71
-#define MAX_BLOCK_SIZE_STR "2 MiB"
72
-
73
static void set_blocksize(Object *obj, Visitor *v, const char *name,
74
void *opaque, Error **errp)
75
{
76
@@ -XXX,XX +XXX,XX @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
77
Property *prop = opaque;
78
uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
79
uint64_t value;
80
+ Error *local_err = NULL;
81
82
if (dev->realized) {
83
qdev_prop_set_after_realize(dev, name, errp);
84
@@ -XXX,XX +XXX,XX @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
85
if (!visit_type_size(v, name, &value, errp)) {
86
return;
87
}
88
- /* value of 0 means "unset" */
89
- if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
90
- error_setg(errp,
91
- "Property %s.%s doesn't take value %" PRIu64
92
- " (minimum: " MIN_BLOCK_SIZE_STR
93
- ", maximum: " MAX_BLOCK_SIZE_STR ")",
94
- dev->id ? : "", name, value);
95
+ check_block_size(dev->id ? : "", name, value, &local_err);
96
+ if (local_err) {
97
+ error_propagate(errp, local_err);
98
return;
99
}
100
-
101
- /* We rely on power-of-2 blocksizes for bitmasks */
102
- if ((value & (value - 1)) != 0) {
103
- error_setg(errp,
104
- "Property %s.%s doesn't take value '%" PRId64 "', "
105
- "it's not a power of 2", dev->id ?: "", name, (int64_t)value);
106
- return;
107
- }
108
-
109
*ptr = value;
110
}
111
112
diff --git a/util/block-helpers.c b/util/block-helpers.c
113
new file mode 100644
114
index XXXXXXX..XXXXXXX
115
--- /dev/null
116
+++ b/util/block-helpers.c
29
@@ -XXX,XX +XXX,XX @@
117
@@ -XXX,XX +XXX,XX @@
30
+/*
118
+/*
31
+ * Copy-on-read filter block driver
119
+ * Block utility functions
32
+ *
120
+ *
33
+ * The filter driver performs Copy-On-Read (COR) operations
121
+ * Copyright IBM, Corp. 2011
122
+ * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
34
+ *
123
+ *
35
+ * Copyright (c) 2018-2020 Virtuozzo International GmbH.
124
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
36
+ *
125
+ * See the COPYING file in the top-level directory.
37
+ * Author:
38
+ * Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
39
+ *
40
+ * This program is free software; you can redistribute it and/or modify
41
+ * it under the terms of the GNU General Public License as published by
42
+ * the Free Software Foundation; either version 2 of the License, or
43
+ * (at your option) any later version.
44
+ *
45
+ * This program is distributed in the hope that it will be useful,
46
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
47
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
48
+ * GNU General Public License for more details.
49
+ *
50
+ * You should have received a copy of the GNU General Public License
51
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
52
+ */
126
+ */
53
+
127
+
54
+#ifndef BLOCK_COPY_ON_READ
128
+#include "qemu/osdep.h"
55
+#define BLOCK_COPY_ON_READ
129
+#include "qapi/error.h"
130
+#include "qapi/qmp/qerror.h"
131
+#include "block-helpers.h"
56
+
132
+
57
+#include "block/block_int.h"
133
+/**
58
+
134
+ * check_block_size:
59
+void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);
135
+ * @id: The unique ID of the object
60
+
136
+ * @name: The name of the property being validated
61
+#endif /* BLOCK_COPY_ON_READ */
137
+ * @value: The block size in bytes
62
diff --git a/block/copy-on-read.c b/block/copy-on-read.c
138
+ * @errp: A pointer to an area to store an error
63
index XXXXXXX..XXXXXXX 100644
139
+ *
64
--- a/block/copy-on-read.c
140
+ * This function checks that the block size meets the following conditions:
65
+++ b/block/copy-on-read.c
141
+ * 1. At least MIN_BLOCK_SIZE
66
@@ -XXX,XX +XXX,XX @@
142
+ * 2. No larger than MAX_BLOCK_SIZE
67
#include "qemu/osdep.h"
143
+ * 3. A power of 2
68
#include "block/block_int.h"
144
+ */
69
#include "qemu/module.h"
145
+void check_block_size(const char *id, const char *name, int64_t value,
70
+#include "qapi/error.h"
146
+ Error **errp)
71
+#include "block/copy-on-read.h"
147
+{
72
+
148
+ /* value of 0 means "unset" */
73
+
149
+ if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
74
+typedef struct BDRVStateCOR {
150
+ error_setg(errp, QERR_PROPERTY_VALUE_OUT_OF_RANGE,
75
+ bool active;
151
+ id, name, value, MIN_BLOCK_SIZE, MAX_BLOCK_SIZE);
76
+} BDRVStateCOR;
77
78
79
static int cor_open(BlockDriverState *bs, QDict *options, int flags,
80
Error **errp)
81
{
82
+ BDRVStateCOR *state = bs->opaque;
83
+
84
bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
85
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
86
false, errp);
87
@@ -XXX,XX +XXX,XX @@ static int cor_open(BlockDriverState *bs, QDict *options, int flags,
88
((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
89
bs->file->bs->supported_zero_flags);
90
91
+ state->active = true;
92
+
93
+ /*
94
+ * We don't need to call bdrv_child_refresh_perms() now as the permissions
95
+ * will be updated later when the filter node gets its parent.
96
+ */
97
+
98
return 0;
99
}
100
101
@@ -XXX,XX +XXX,XX @@ static void cor_child_perm(BlockDriverState *bs, BdrvChild *c,
102
uint64_t perm, uint64_t shared,
103
uint64_t *nperm, uint64_t *nshared)
104
{
105
+ BDRVStateCOR *s = bs->opaque;
106
+
107
+ if (!s->active) {
108
+ /*
109
+ * While the filter is being removed
110
+ */
111
+ *nperm = 0;
112
+ *nshared = BLK_PERM_ALL;
113
+ return;
152
+ return;
114
+ }
153
+ }
115
+
154
+
116
*nperm = perm & PERM_PASSTHROUGH;
155
+ /* We rely on power-of-2 blocksizes for bitmasks */
117
*nshared = (shared & PERM_PASSTHROUGH) | PERM_UNCHANGED;
156
+ if ((value & (value - 1)) != 0) {
118
157
+ error_setg(errp,
119
@@ -XXX,XX +XXX,XX @@ static void cor_lock_medium(BlockDriverState *bs, bool locked)
158
+ "Property %s.%s doesn't take value '%" PRId64
120
159
+ "', it's not a power of 2",
121
static BlockDriver bdrv_copy_on_read = {
160
+ id, name, value);
122
.format_name = "copy-on-read",
123
+ .instance_size = sizeof(BDRVStateCOR),
124
125
.bdrv_open = cor_open,
126
.bdrv_child_perm = cor_child_perm,
127
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_copy_on_read = {
128
.is_filter = true,
129
};
130
131
+
132
+void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
133
+{
134
+ BdrvChild *child;
135
+ BlockDriverState *bs;
136
+ BDRVStateCOR *s = cor_filter_bs->opaque;
137
+
138
+ child = bdrv_filter_child(cor_filter_bs);
139
+ if (!child) {
140
+ return;
161
+ return;
141
+ }
162
+ }
142
+ bs = child->bs;
143
+
144
+ /* Retain the BDS until we complete the graph change. */
145
+ bdrv_ref(bs);
146
+ /* Hold a guest back from writing while permissions are being reset. */
147
+ bdrv_drained_begin(bs);
148
+ /* Drop permissions before the graph change. */
149
+ s->active = false;
150
+ bdrv_child_refresh_perms(cor_filter_bs, child, &error_abort);
151
+ bdrv_replace_node(cor_filter_bs, bs, &error_abort);
152
+
153
+ bdrv_drained_end(bs);
154
+ bdrv_unref(bs);
155
+ bdrv_unref(cor_filter_bs);
156
+}
163
+}
157
+
164
diff --git a/util/meson.build b/util/meson.build
158
+
165
index XXXXXXX..XXXXXXX 100644
159
static void bdrv_copy_on_read_init(void)
166
--- a/util/meson.build
160
{
167
+++ b/util/meson.build
161
bdrv_register(&bdrv_copy_on_read);
168
@@ -XXX,XX +XXX,XX @@ if have_block
169
util_ss.add(files('nvdimm-utils.c'))
170
util_ss.add(files('qemu-coroutine.c', 'qemu-coroutine-lock.c', 'qemu-coroutine-io.c'))
171
util_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-server.c'))
172
+ util_ss.add(files('block-helpers.c'))
173
util_ss.add(files('qemu-coroutine-sleep.c'))
174
util_ss.add(files('qemu-co-shared-resource.c'))
175
util_ss.add(files('thread-pool.c', 'qemu-timer.c'))
--
2.29.2
Deleted patch
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Provide the possibility to pass the 'filter-node-name' parameter to the
block-stream job as it is done for the commit block job.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: comment indentation, s/Since: 5.2/Since: 6.0/]
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-5-vsementsov@virtuozzo.com>
[mreitz: s/commit/stream/]
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 qapi/block-core.json | 6 ++++++
 include/block/block_int.h | 7 ++++++-
 block/monitor/block-hmp-cmds.c | 4 ++--
 block/stream.c | 4 +++-
 blockdev.c | 4 +++-
 5 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -XXX,XX +XXX,XX @@
 # 'stop' and 'enospc' can only be used if the block device
 # supports io-status (see BlockInfo). Since 1.3.
 #
+# @filter-node-name: the node name that should be assigned to the
+# filter driver that the stream job inserts into the graph
+# above @device. If this option is not given, a node name is
+# autogenerated. (Since: 6.0)
+#
 # @auto-finalize: When false, this job will wait in a PENDING state after it has
 # finished its work, waiting for @block-job-finalize before
 # making any block graph changes.
@@ -XXX,XX +XXX,XX @@
 'data': { '*job-id': 'str', 'device': 'str', '*base': 'str',
 '*base-node': 'str', '*backing-file': 'str', '*speed': 'int',
 '*on-error': 'BlockdevOnError',
+ '*filter-node-name': 'str',
 '*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }

##
diff --git a/include/block/block_int.h b/include/block/block_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -XXX,XX +XXX,XX @@ int is_windows_drive(const char *filename);
 * See @BlockJobCreateFlags
 * @speed: The maximum speed, in bytes per second, or 0 for unlimited.
 * @on_error: The action to take upon error.
+ * @filter_node_name: The node name that should be assigned to the filter
+ * driver that the stream job inserts into the graph above
+ * @bs. NULL means that a node name should be autogenerated.
 * @errp: Error object.
 *
 * Start a streaming operation on @bs. Clusters that are unallocated
@@ -XXX,XX +XXX,XX @@ int is_windows_drive(const char *filename);
 void stream_start(const char *job_id, BlockDriverState *bs,
 BlockDriverState *base, const char *backing_file_str,
 int creation_flags, int64_t speed,
- BlockdevOnError on_error, Error **errp);
+ BlockdevOnError on_error,
+ const char *filter_node_name,
+ Error **errp);

/**
 * commit_start:
diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index XXXXXXX..XXXXXXX 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -XXX,XX +XXX,XX @@ void hmp_block_stream(Monitor *mon, const QDict *qdict)

 qmp_block_stream(true, device, device, base != NULL, base, false, NULL,
 false, NULL, qdict_haskey(qdict, "speed"), speed, true,
- BLOCKDEV_ON_ERROR_REPORT, false, false, false, false,
- &error);
+ BLOCKDEV_ON_ERROR_REPORT, false, NULL, false, false, false,
+ false, &error);

 hmp_handle_error(mon, error);
 }
diff --git a/block/stream.c b/block/stream.c
index XXXXXXX..XXXXXXX 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -XXX,XX +XXX,XX @@ static const BlockJobDriver stream_job_driver = {
 void stream_start(const char *job_id, BlockDriverState *bs,
 BlockDriverState *base, const char *backing_file_str,
 int creation_flags, int64_t speed,
- BlockdevOnError on_error, Error **errp)
+ BlockdevOnError on_error,
+ const char *filter_node_name,
+ Error **errp)
 {
 StreamBlockJob *s;
 BlockDriverState *iter;
diff --git a/blockdev.c b/blockdev.c
index XXXXXXX..XXXXXXX 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
 bool has_backing_file, const char *backing_file,
 bool has_speed, int64_t speed,
 bool has_on_error, BlockdevOnError on_error,
+ bool has_filter_node_name, const char *filter_node_name,
 bool has_auto_finalize, bool auto_finalize,
 bool has_auto_dismiss, bool auto_dismiss,
 Error **errp)
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
 }

 stream_start(has_job_id ? job_id : NULL, bs, base_bs, base_name,
- job_flags, has_speed ? speed : 0, on_error, &local_err);
+ job_flags, has_speed ? speed : 0, on_error,
+ filter_node_name, &local_err);
 if (local_err) {
 error_propagate(errp, local_err);
 goto out;
--
2.29.2
1
From: Alberto Garcia <berto@igalia.com>
1
From: Coiby Xu <coiby.xu@gmail.com>
2
2
3
Signed-off-by: Alberto Garcia <berto@igalia.com>
3
By making use of libvhost-user, a block device drive can be shared with
4
Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
4
the connected vhost-user client. Only one client can connect to the
5
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
5
server at a time.
6
Message-Id: <20210112170540.2912-1-berto@igalia.com>
6
7
[mreitz: Add "# group:" line]
7
Since vhost-user-server needs a block drive to be created first, delay
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
8
the creation of this object.
9
10
Suggested-by: Kevin Wolf <kwolf@redhat.com>
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
13
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
14
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
15
Message-id: 20200918080912.321299-6-coiby.xu@gmail.com
16
[Shorten "vhost_user_blk_server" string to "vhost_user_blk" to avoid the
17
following compiler warning:
18
../block/export/vhost-user-blk-server.c:178:50: error: ‘%s’ directive output truncated writing 21 bytes into a region of size 20 [-Werror=format-truncation=]
19
and fix "Invalid size %ld ..." ssize_t format string arguments for
20
32-bit hosts.
21
--Stefan]
22
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
---
23
---
10
tests/qemu-iotests/313 | 104 +++++++++++++++++++++++++++++++++++++
24
block/export/vhost-user-blk-server.h | 36 ++
11
tests/qemu-iotests/313.out | 29 +++++++++++
25
block/export/vhost-user-blk-server.c | 661 +++++++++++++++++++++++++++
12
tests/qemu-iotests/group | 1 +
26
softmmu/vl.c | 4 +
13
3 files changed, 134 insertions(+)
27
block/meson.build | 1 +
14
create mode 100755 tests/qemu-iotests/313
28
4 files changed, 702 insertions(+)
15
create mode 100644 tests/qemu-iotests/313.out
29
create mode 100644 block/export/vhost-user-blk-server.h
30
create mode 100644 block/export/vhost-user-blk-server.c
16
31
17
diff --git a/tests/qemu-iotests/313 b/tests/qemu-iotests/313
32
diff --git a/block/export/vhost-user-blk-server.h b/block/export/vhost-user-blk-server.h
18
new file mode 100755
19
index XXXXXXX..XXXXXXX
20
--- /dev/null
21
+++ b/tests/qemu-iotests/313
22
@@ -XXX,XX +XXX,XX @@
23
+#!/usr/bin/env bash
24
+# group: rw auto quick
25
+#
26
+# Test for the regression fixed in commit c8bf9a9169
27
+#
28
+# Copyright (C) 2020 Igalia, S.L.
29
+# Author: Alberto Garcia <berto@igalia.com>
30
+# Based on a test case by Maxim Levitsky <mlevitsk@redhat.com>
31
+#
32
+# This program is free software; you can redistribute it and/or modify
33
+# it under the terms of the GNU General Public License as published by
34
+# the Free Software Foundation; either version 2 of the License, or
35
+# (at your option) any later version.
36
+#
37
+# This program is distributed in the hope that it will be useful,
38
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
39
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
40
+# GNU General Public License for more details.
41
+#
42
+# You should have received a copy of the GNU General Public License
43
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
44
+#
45
+
46
+# creator
47
+owner=berto@igalia.com
48
+
49
+seq=`basename $0`
50
+echo "QA output created by $seq"
51
+
52
+status=1 # failure is the default!
53
+
54
+_cleanup()
55
+{
56
+ _cleanup_test_img
57
+}
58
+trap "_cleanup; exit \$status" 0 1 2 3 15
59
+
60
+# get standard environment, filters and checks
61
+. ./common.rc
62
+. ./common.filter
63
+
64
+_supported_fmt qcow2
65
+_supported_proto file
66
+_supported_os Linux
67
+_unsupported_imgopts cluster_size refcount_bits extended_l2 compat=0.10 data_file
68
+
69
+# The cluster size must be at least the granularity of the mirror job (4KB)
70
+# Note that larger cluster sizes will produce very large images (several GBs)
71
+cluster_size=4096
72
+refcount_bits=64 # Make it equal to the L2 entry size for convenience
73
+options="cluster_size=${cluster_size},refcount_bits=${refcount_bits}"
74
+
75
+# Number of refcount entries per refcount blocks
76
+ref_entries=$(( ${cluster_size} * 8 / ${refcount_bits} ))
77
+
78
+# Number of data clusters needed to fill a refcount block
79
+# Equals ${ref_entries} minus two (one L2 table and one refcount block)
80
+data_clusters_per_refblock=$(( ${ref_entries} - 2 ))
81
+
82
+# Number of entries in the refcount cache
83
+ref_blocks=4
84
+
85
+# Write enough data clusters to fill the refcount cache and allocate
86
+# one more refcount block.
87
+# Subtract 3 clusters from the total: qcow2 header, refcount table, L1 table
88
+total_data_clusters=$(( ${data_clusters_per_refblock} * ${ref_blocks} + 1 - 3 ))
89
+
90
+# Total size to write in bytes
91
+total_size=$(( ${total_data_clusters} * ${cluster_size} ))
92
+
93
+echo
94
+echo '### Create the image'
95
+echo
96
+TEST_IMG_FILE=$TEST_IMG.base _make_test_img -o $options $total_size | _filter_img_create_size
97
+
98
+echo
99
+echo '### Write data to allocate more refcount blocks than the cache can hold'
100
+echo
101
+$QEMU_IO -c "write -P 1 0 $total_size" $TEST_IMG.base | _filter_qemu_io
102
+
103
+echo
104
+echo '### Create an overlay'
105
+echo
106
+_make_test_img -F $IMGFMT -b $TEST_IMG.base -o $options | _filter_img_create_size
107
+
108
+echo
109
+echo '### Fill the overlay with zeroes'
110
+echo
111
+$QEMU_IO -c "write -z 0 $total_size" $TEST_IMG | _filter_qemu_io
112
+
113
+echo
114
+echo '### Commit changes to the base image'
115
+echo
116
+$QEMU_IMG commit $TEST_IMG
117
+
118
+echo
119
+echo '### Check the base image'
120
+echo
121
+$QEMU_IMG check $TEST_IMG.base
122
+
123
+# success, all done
124
+echo "*** done"
125
+rm -f $seq.full
126
+status=0
127
diff --git a/tests/qemu-iotests/313.out b/tests/qemu-iotests/313.out
128
new file mode 100644
33
new file mode 100644
129
index XXXXXXX..XXXXXXX
34
index XXXXXXX..XXXXXXX
130
--- /dev/null
35
--- /dev/null
131
+++ b/tests/qemu-iotests/313.out
36
+++ b/block/export/vhost-user-blk-server.h
132
@@ -XXX,XX +XXX,XX @@
37
@@ -XXX,XX +XXX,XX @@
133
+QA output created by 313
38
+/*
134
+
39
+ * Sharing QEMU block devices via vhost-user protocal
135
+### Create the image
40
+ *
136
+
41
+ * Copyright (c) Coiby Xu <coiby.xu@gmail.com>.
137
+Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=SIZE
42
+ * Copyright (c) 2020 Red Hat, Inc.
138
+
43
+ *
139
+### Write data to allocate more refcount blocks than the cache can hold
44
+ * This work is licensed under the terms of the GNU GPL, version 2 or
140
+
45
+ * later. See the COPYING file in the top-level directory.
141
+wrote 8347648/8347648 bytes at offset 0
46
+ */
142
+7.961 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
47
+
143
+
48
+#ifndef VHOST_USER_BLK_SERVER_H
144
+### Create an overlay
49
+#define VHOST_USER_BLK_SERVER_H
145
+
50
+#include "util/vhost-user-server.h"
146
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE backing_file=TEST_DIR/t.IMGFMT.base backing_fmt=IMGFMT
51
+
147
+
52
+typedef struct VuBlockDev VuBlockDev;
148
+### Fill the overlay with zeroes
53
+#define TYPE_VHOST_USER_BLK_SERVER "vhost-user-blk-server"
149
+
54
+#define VHOST_USER_BLK_SERVER(obj) \
150
+wrote 8347648/8347648 bytes at offset 0
55
+ OBJECT_CHECK(VuBlockDev, obj, TYPE_VHOST_USER_BLK_SERVER)
151
+7.961 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
56
+
152
+
57
+/* vhost user block device */
153
+### Commit changes to the base image
58
+struct VuBlockDev {
154
+
59
+ Object parent_obj;
155
+Image committed.
60
+ char *node_name;
156
+
61
+ SocketAddress *addr;
157
+### Check the base image
62
+ AioContext *ctx;
158
+
63
+ VuServer vu_server;
159
+No errors were found on the image.
64
+ bool running;
160
+Image end offset: 8396800
65
+ uint32_t blk_size;
161
+*** done
66
+ BlockBackend *backend;
162
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
67
+ QIOChannelSocket *sioc;
68
+ QTAILQ_ENTRY(VuBlockDev) next;
69
+ struct virtio_blk_config blkcfg;
70
+ bool writable;
71
+};
72
+
73
+#endif /* VHOST_USER_BLK_SERVER_H */
74
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
75
new file mode 100644
76
index XXXXXXX..XXXXXXX
77
--- /dev/null
78
+++ b/block/export/vhost-user-blk-server.c
79
@@ -XXX,XX +XXX,XX @@
80
+/*
81
+ * Sharing QEMU block devices via vhost-user protocol
82
+ *
83
+ * Parts of the code based on nbd/server.c.
84
+ *
85
+ * Copyright (c) Coiby Xu <coiby.xu@gmail.com>.
86
+ * Copyright (c) 2020 Red Hat, Inc.
87
+ *
88
+ * This work is licensed under the terms of the GNU GPL, version 2 or
89
+ * later. See the COPYING file in the top-level directory.
90
+ */
91
+#include "qemu/osdep.h"
92
+#include "block/block.h"
93
+#include "vhost-user-blk-server.h"
94
+#include "qapi/error.h"
95
+#include "qom/object_interfaces.h"
+#include "sysemu/block-backend.h"
+#include "util/block-helpers.h"
+
+enum {
+    VHOST_USER_BLK_MAX_QUEUES = 1,
+};
+struct virtio_blk_inhdr {
+    unsigned char status;
+};
+
+typedef struct VuBlockReq {
+    VuVirtqElement *elem;
+    int64_t sector_num;
+    size_t size;
+    struct virtio_blk_inhdr *in;
+    struct virtio_blk_outhdr out;
+    VuServer *server;
+    struct VuVirtq *vq;
+} VuBlockReq;
+
+static void vu_block_req_complete(VuBlockReq *req)
+{
+    VuDev *vu_dev = &req->server->vu_dev;
+
+    /* IO size with 1 extra status byte */
+    vu_queue_push(vu_dev, req->vq, req->elem, req->size + 1);
+    vu_queue_notify(vu_dev, req->vq);
+
+    if (req->elem) {
+        free(req->elem);
+    }
+
+    g_free(req);
+}
+
+static VuBlockDev *get_vu_block_device_by_server(VuServer *server)
+{
+    return container_of(server, VuBlockDev, vu_server);
+}
+
+static int coroutine_fn
+vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
+                              uint32_t iovcnt, uint32_t type)
+{
+    struct virtio_blk_discard_write_zeroes desc;
+    ssize_t size = iov_to_buf(iov, iovcnt, 0, &desc, sizeof(desc));
+    if (unlikely(size != sizeof(desc))) {
+        error_report("Invalid size %zd, expect %zu", size, sizeof(desc));
+        return -EINVAL;
+    }
+
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
+    uint64_t range[2] = { le64_to_cpu(desc.sector) << 9,
+                          le32_to_cpu(desc.num_sectors) << 9 };
+    if (type == VIRTIO_BLK_T_DISCARD) {
+        if (blk_co_pdiscard(vdev_blk->backend, range[0], range[1]) == 0) {
+            return 0;
+        }
+    } else if (type == VIRTIO_BLK_T_WRITE_ZEROES) {
+        if (blk_co_pwrite_zeroes(vdev_blk->backend,
+                                 range[0], range[1], 0) == 0) {
+            return 0;
+        }
+    }
+
+    return -EINVAL;
+}
+
+static void coroutine_fn vu_block_flush(VuBlockReq *req)
+{
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
+    BlockBackend *backend = vdev_blk->backend;
+    blk_co_flush(backend);
+}
+
+struct req_data {
+    VuServer *server;
+    VuVirtq *vq;
+    VuVirtqElement *elem;
+};
+
+static void coroutine_fn vu_block_virtio_process_req(void *opaque)
+{
+    struct req_data *data = opaque;
+    VuServer *server = data->server;
+    VuVirtq *vq = data->vq;
+    VuVirtqElement *elem = data->elem;
+    uint32_t type;
+    VuBlockReq *req;
+
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    BlockBackend *backend = vdev_blk->backend;
+
+    struct iovec *in_iov = elem->in_sg;
+    struct iovec *out_iov = elem->out_sg;
+    unsigned in_num = elem->in_num;
+    unsigned out_num = elem->out_num;
+    /* refer to hw/block/virtio_blk.c */
+    if (elem->out_num < 1 || elem->in_num < 1) {
+        error_report("virtio-blk request missing headers");
+        free(elem);
+        return;
+    }
+
+    req = g_new0(VuBlockReq, 1);
+    req->server = server;
+    req->vq = vq;
+    req->elem = elem;
+
+    if (unlikely(iov_to_buf(out_iov, out_num, 0, &req->out,
+                            sizeof(req->out)) != sizeof(req->out))) {
+        error_report("virtio-blk request outhdr too short");
+        goto err;
+    }
+
+    iov_discard_front(&out_iov, &out_num, sizeof(req->out));
+
+    if (in_iov[in_num - 1].iov_len < sizeof(struct virtio_blk_inhdr)) {
+        error_report("virtio-blk request inhdr too short");
+        goto err;
+    }
+
+    /* We always touch the last byte, so just see how big in_iov is. */
+    req->in = (void *)in_iov[in_num - 1].iov_base
+              + in_iov[in_num - 1].iov_len
+              - sizeof(struct virtio_blk_inhdr);
+    iov_discard_back(in_iov, &in_num, sizeof(struct virtio_blk_inhdr));
+
+    type = le32_to_cpu(req->out.type);
+    switch (type & ~VIRTIO_BLK_T_BARRIER) {
+    case VIRTIO_BLK_T_IN:
+    case VIRTIO_BLK_T_OUT: {
+        ssize_t ret = 0;
+        bool is_write = type & VIRTIO_BLK_T_OUT;
+        req->sector_num = le64_to_cpu(req->out.sector);
+
+        int64_t offset = req->sector_num * vdev_blk->blk_size;
+        QEMUIOVector qiov;
+        if (is_write) {
+            qemu_iovec_init_external(&qiov, out_iov, out_num);
+            ret = blk_co_pwritev(backend, offset, qiov.size,
+                                 &qiov, 0);
+        } else {
+            qemu_iovec_init_external(&qiov, in_iov, in_num);
+            ret = blk_co_preadv(backend, offset, qiov.size,
+                                &qiov, 0);
+        }
+        if (ret >= 0) {
+            req->in->status = VIRTIO_BLK_S_OK;
+        } else {
+            req->in->status = VIRTIO_BLK_S_IOERR;
+        }
+        break;
+    }
+    case VIRTIO_BLK_T_FLUSH:
+        vu_block_flush(req);
+        req->in->status = VIRTIO_BLK_S_OK;
+        break;
+    case VIRTIO_BLK_T_GET_ID: {
+        size_t size = MIN(iov_size(&elem->in_sg[0], in_num),
+                          VIRTIO_BLK_ID_BYTES);
+        snprintf(elem->in_sg[0].iov_base, size, "%s", "vhost_user_blk");
+        req->in->status = VIRTIO_BLK_S_OK;
+        req->size = elem->in_sg[0].iov_len;
+        break;
+    }
+    case VIRTIO_BLK_T_DISCARD:
+    case VIRTIO_BLK_T_WRITE_ZEROES: {
+        int rc;
+        rc = vu_block_discard_write_zeroes(req, &elem->out_sg[1],
+                                           out_num, type);
+        if (rc == 0) {
+            req->in->status = VIRTIO_BLK_S_OK;
+        } else {
+            req->in->status = VIRTIO_BLK_S_IOERR;
+        }
+        break;
+    }
+    default:
+        req->in->status = VIRTIO_BLK_S_UNSUPP;
+        break;
+    }
+
+    vu_block_req_complete(req);
+    return;
+
+err:
+    free(elem);
+    g_free(req);
+    return;
+}
+
+static void vu_block_process_vq(VuDev *vu_dev, int idx)
+{
+    VuServer *server;
+    VuVirtq *vq;
+    struct req_data *req_data;
+
+    server = container_of(vu_dev, VuServer, vu_dev);
+    assert(server);
+
+    vq = vu_get_queue(vu_dev, idx);
+    assert(vq);
+    VuVirtqElement *elem;
+    while (1) {
+        elem = vu_queue_pop(vu_dev, vq, sizeof(VuVirtqElement) +
+                            sizeof(VuBlockReq));
+        if (elem) {
+            req_data = g_new0(struct req_data, 1);
+            req_data->server = server;
+            req_data->vq = vq;
+            req_data->elem = elem;
+            Coroutine *co = qemu_coroutine_create(vu_block_virtio_process_req,
+                                                  req_data);
+            aio_co_enter(server->ioc->ctx, co);
+        } else {
+            break;
+        }
+    }
+}
+
+static void vu_block_queue_set_started(VuDev *vu_dev, int idx, bool started)
+{
+    VuVirtq *vq;
+
+    assert(vu_dev);
+
+    vq = vu_get_queue(vu_dev, idx);
+    vu_set_queue_handler(vu_dev, vq, started ? vu_block_process_vq : NULL);
+}
+
+static uint64_t vu_block_get_features(VuDev *dev)
+{
+    uint64_t features;
+    VuServer *server = container_of(dev, VuServer, vu_dev);
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    features = 1ull << VIRTIO_BLK_F_SIZE_MAX |
+               1ull << VIRTIO_BLK_F_SEG_MAX |
+               1ull << VIRTIO_BLK_F_TOPOLOGY |
+               1ull << VIRTIO_BLK_F_BLK_SIZE |
+               1ull << VIRTIO_BLK_F_FLUSH |
+               1ull << VIRTIO_BLK_F_DISCARD |
+               1ull << VIRTIO_BLK_F_WRITE_ZEROES |
+               1ull << VIRTIO_BLK_F_CONFIG_WCE |
+               1ull << VIRTIO_F_VERSION_1 |
+               1ull << VIRTIO_RING_F_INDIRECT_DESC |
+               1ull << VIRTIO_RING_F_EVENT_IDX |
+               1ull << VHOST_USER_F_PROTOCOL_FEATURES;
+
+    if (!vdev_blk->writable) {
+        features |= 1ull << VIRTIO_BLK_F_RO;
+    }
+
+    return features;
+}
+
+static uint64_t vu_block_get_protocol_features(VuDev *dev)
+{
+    return 1ull << VHOST_USER_PROTOCOL_F_CONFIG |
+           1ull << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD;
+}
+
+static int
+vu_block_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
+{
+    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    memcpy(config, &vdev_blk->blkcfg, len);
+
+    return 0;
+}
+
+static int
+vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
+                    uint32_t offset, uint32_t size, uint32_t flags)
+{
+    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
+    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    uint8_t wce;
+
+    /* don't support live migration */
+    if (flags != VHOST_SET_CONFIG_TYPE_MASTER) {
+        return -EINVAL;
+    }
+
+    if (offset != offsetof(struct virtio_blk_config, wce) ||
+        size != 1) {
+        return -EINVAL;
+    }
+
+    wce = *data;
+    vdev_blk->blkcfg.wce = wce;
+    blk_set_enable_write_cache(vdev_blk->backend, wce);
+    return 0;
+}
+
+/*
+ * When the client disconnects, it sends a VHOST_USER_NONE request
+ * and vu_process_message will simply call exit, which causes the VM
+ * to exit abruptly.
+ * To avoid this issue, process the VHOST_USER_NONE request ahead
+ * of vu_process_message.
+ */
+static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
+{
+    if (vmsg->request == VHOST_USER_NONE) {
+        dev->panic(dev, "disconnect");
+        return true;
+    }
+    return false;
+}
+
+static const VuDevIface vu_block_iface = {
+    .get_features = vu_block_get_features,
+    .queue_set_started = vu_block_queue_set_started,
+    .get_protocol_features = vu_block_get_protocol_features,
+    .get_config = vu_block_get_config,
+    .set_config = vu_block_set_config,
+    .process_msg = vu_block_process_msg,
+};
+
+static void blk_aio_attached(AioContext *ctx, void *opaque)
+{
+    VuBlockDev *vub_dev = opaque;
+    aio_context_acquire(ctx);
+    vhost_user_server_set_aio_context(&vub_dev->vu_server, ctx);
+    aio_context_release(ctx);
+}
+
+static void blk_aio_detach(void *opaque)
+{
+    VuBlockDev *vub_dev = opaque;
+    AioContext *ctx = vub_dev->vu_server.ctx;
+    aio_context_acquire(ctx);
+    vhost_user_server_set_aio_context(&vub_dev->vu_server, NULL);
+    aio_context_release(ctx);
+}
+
+static void
+vu_block_initialize_config(BlockDriverState *bs,
+                           struct virtio_blk_config *config, uint32_t blk_size)
+{
+    config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
+    config->blk_size = blk_size;
+    config->size_max = 0;
+    config->seg_max = 128 - 2;
+    config->min_io_size = 1;
+    config->opt_io_size = 1;
+    config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
+    config->max_discard_sectors = 32768;
+    config->max_discard_seg = 1;
+    config->discard_sector_alignment = config->blk_size >> 9;
+    config->max_write_zeroes_sectors = 32768;
+    config->max_write_zeroes_seg = 1;
+}
+
+static VuBlockDev *vu_block_init(VuBlockDev *vu_block_device, Error **errp)
+{
+
+    BlockBackend *blk;
+    Error *local_error = NULL;
+    const char *node_name = vu_block_device->node_name;
+    bool writable = vu_block_device->writable;
+    uint64_t perm = BLK_PERM_CONSISTENT_READ;
+    int ret;
+
+    AioContext *ctx;
+
+    BlockDriverState *bs = bdrv_lookup_bs(node_name, node_name, &local_error);
+
+    if (!bs) {
+        error_propagate(errp, local_error);
+        return NULL;
+    }
+
+    if (bdrv_is_read_only(bs)) {
+        writable = false;
+    }
+
+    if (writable) {
+        perm |= BLK_PERM_WRITE;
+    }
+
+    ctx = bdrv_get_aio_context(bs);
+    aio_context_acquire(ctx);
+    bdrv_invalidate_cache(bs, NULL);
+    aio_context_release(ctx);
+
+    /*
+     * Don't allow resize while the vhost user server is running,
+     * otherwise we don't care what happens with the node.
+     */
+    blk = blk_new(bdrv_get_aio_context(bs), perm,
+                  BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
+                  BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
+    ret = blk_insert_bs(blk, bs, errp);
+
+    if (ret < 0) {
+        goto fail;
+    }
+
+    blk_set_enable_write_cache(blk, false);
+
+    blk_set_allow_aio_context_change(blk, true);
+
+    vu_block_device->blkcfg.wce = 0;
+    vu_block_device->backend = blk;
+    if (!vu_block_device->blk_size) {
+        vu_block_device->blk_size = BDRV_SECTOR_SIZE;
+    }
+    vu_block_device->blkcfg.blk_size = vu_block_device->blk_size;
+    blk_set_guest_block_size(blk, vu_block_device->blk_size);
+    vu_block_initialize_config(bs, &vu_block_device->blkcfg,
+                               vu_block_device->blk_size);
+    return vu_block_device;
+
+fail:
+    blk_unref(blk);
+    return NULL;
+}
+
+static void vu_block_deinit(VuBlockDev *vu_block_device)
+{
+    if (vu_block_device->backend) {
+        blk_remove_aio_context_notifier(vu_block_device->backend,
+                                        blk_aio_attached,
+                                        blk_aio_detach, vu_block_device);
+    }
+
+    blk_unref(vu_block_device->backend);
+}
+
+static void vhost_user_blk_server_stop(VuBlockDev *vu_block_device)
+{
+    vhost_user_server_stop(&vu_block_device->vu_server);
+    vu_block_deinit(vu_block_device);
+}
+
+static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
+                                        Error **errp)
+{
+    AioContext *ctx;
+    SocketAddress *addr = vu_block_device->addr;
+
+    if (!vu_block_init(vu_block_device, errp)) {
+        return;
+    }
+
+    ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));
+
+    if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
+                                 VHOST_USER_BLK_MAX_QUEUES,
+                                 NULL, &vu_block_iface,
+                                 errp)) {
+        goto error;
+    }
+
+    blk_add_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
+                                 blk_aio_detach, vu_block_device);
+    vu_block_device->running = true;
+    return;
+
+ error:
+    vu_block_deinit(vu_block_device);
+}
+
+static bool vu_prop_modifiable(VuBlockDev *vus, Error **errp)
+{
+    if (vus->running) {
+        error_setg(errp, "The property can't be modified "
+                   "while the server is running");
+        return false;
+    }
+    return true;
+}
+
+static void vu_set_node_name(Object *obj, const char *value, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+
+    if (!vu_prop_modifiable(vus, errp)) {
+        return;
+    }
+
+    if (vus->node_name) {
+        g_free(vus->node_name);
+    }
+
+    vus->node_name = g_strdup(value);
+}
+
+static char *vu_get_node_name(Object *obj, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+    return g_strdup(vus->node_name);
+}
+
+static void free_socket_addr(SocketAddress *addr)
+{
+    g_free(addr->u.q_unix.path);
+    g_free(addr);
+}
+
+static void vu_set_unix_socket(Object *obj, const char *value,
+                               Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+
+    if (!vu_prop_modifiable(vus, errp)) {
+        return;
+    }
+
+    if (vus->addr) {
+        free_socket_addr(vus->addr);
+    }
+
+    SocketAddress *addr = g_new0(SocketAddress, 1);
+    addr->type = SOCKET_ADDRESS_TYPE_UNIX;
+    addr->u.q_unix.path = g_strdup(value);
+    vus->addr = addr;
+}
+
+static char *vu_get_unix_socket(Object *obj, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+    return g_strdup(vus->addr->u.q_unix.path);
+}
+
+static bool vu_get_block_writable(Object *obj, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+    return vus->writable;
+}
+
+static void vu_set_block_writable(Object *obj, bool value, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+
+    if (!vu_prop_modifiable(vus, errp)) {
+        return;
+    }
+
+    vus->writable = value;
+}
+
+static void vu_get_blk_size(Object *obj, Visitor *v, const char *name,
+                            void *opaque, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+    uint32_t value = vus->blk_size;
+
+    visit_type_uint32(v, name, &value, errp);
+}
+
+static void vu_set_blk_size(Object *obj, Visitor *v, const char *name,
+                            void *opaque, Error **errp)
+{
+    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
+
+    Error *local_err = NULL;
+    uint32_t value;
+
+    if (!vu_prop_modifiable(vus, errp)) {
+        return;
+    }
+
+    visit_type_uint32(v, name, &value, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    check_block_size(object_get_typename(obj), name, value, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    vus->blk_size = value;
+
+out:
+    error_propagate(errp, local_err);
+}
+
+static void vhost_user_blk_server_instance_finalize(Object *obj)
+{
+    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
+
+    vhost_user_blk_server_stop(vub);
+
+    /*
+     * Unlike object_property_add_str, object_class_property_add_str
+     * doesn't have a release method. Thus manual memory freeing is
+     * needed.
+     */
+    free_socket_addr(vub->addr);
+    g_free(vub->node_name);
+}
+
+static void vhost_user_blk_server_complete(UserCreatable *obj, Error **errp)
+{
+    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
+
+    vhost_user_blk_server_start(vub, errp);
+}
+
+static void vhost_user_blk_server_class_init(ObjectClass *klass,
+                                             void *class_data)
+{
+    UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
+    ucc->complete = vhost_user_blk_server_complete;
+
+    object_class_property_add_bool(klass, "writable",
+                                   vu_get_block_writable,
+                                   vu_set_block_writable);
+
+    object_class_property_add_str(klass, "node-name",
+                                  vu_get_node_name,
+                                  vu_set_node_name);
+
+    object_class_property_add_str(klass, "unix-socket",
+                                  vu_get_unix_socket,
+                                  vu_set_unix_socket);
+
+    object_class_property_add(klass, "logical-block-size", "uint32",
+                              vu_get_blk_size, vu_set_blk_size,
+                              NULL, NULL);
+}
+
+static const TypeInfo vhost_user_blk_server_info = {
+    .name = TYPE_VHOST_USER_BLK_SERVER,
+    .parent = TYPE_OBJECT,
+    .instance_size = sizeof(VuBlockDev),
+    .instance_finalize = vhost_user_blk_server_instance_finalize,
+    .class_init = vhost_user_blk_server_class_init,
+    .interfaces = (InterfaceInfo[]) {
+        {TYPE_USER_CREATABLE},
+        {}
+    },
+};
+
+static void vhost_user_blk_server_register_types(void)
+{
+    type_register_static(&vhost_user_blk_server_info);
+}
+
+type_init(vhost_user_blk_server_register_types)
+ &qiov, 0);
242
+ }
243
+ if (ret >= 0) {
244
+ req->in->status = VIRTIO_BLK_S_OK;
245
+ } else {
246
+ req->in->status = VIRTIO_BLK_S_IOERR;
247
+ }
248
+ break;
249
+ }
250
+ case VIRTIO_BLK_T_FLUSH:
251
+ vu_block_flush(req);
252
+ req->in->status = VIRTIO_BLK_S_OK;
253
+ break;
254
+ case VIRTIO_BLK_T_GET_ID: {
255
+ size_t size = MIN(iov_size(&elem->in_sg[0], in_num),
256
+ VIRTIO_BLK_ID_BYTES);
257
+ snprintf(elem->in_sg[0].iov_base, size, "%s", "vhost_user_blk");
258
+ req->in->status = VIRTIO_BLK_S_OK;
259
+ req->size = elem->in_sg[0].iov_len;
260
+ break;
261
+ }
262
+ case VIRTIO_BLK_T_DISCARD:
263
+ case VIRTIO_BLK_T_WRITE_ZEROES: {
264
+ int rc;
265
+ rc = vu_block_discard_write_zeroes(req, &elem->out_sg[1],
266
+ out_num, type);
267
+ if (rc == 0) {
268
+ req->in->status = VIRTIO_BLK_S_OK;
269
+ } else {
270
+ req->in->status = VIRTIO_BLK_S_IOERR;
271
+ }
272
+ break;
273
+ }
274
+ default:
275
+ req->in->status = VIRTIO_BLK_S_UNSUPP;
276
+ break;
277
+ }
278
+
279
+ vu_block_req_complete(req);
280
+ return;
281
+
282
+err:
283
+ free(elem);
284
+ g_free(req);
285
+ return;
286
+}
287
+
288
+static void vu_block_process_vq(VuDev *vu_dev, int idx)
289
+{
290
+ VuServer *server;
291
+ VuVirtq *vq;
292
+ struct req_data *req_data;
293
+
294
+ server = container_of(vu_dev, VuServer, vu_dev);
295
+ assert(server);
296
+
297
+ vq = vu_get_queue(vu_dev, idx);
298
+ assert(vq);
299
+ VuVirtqElement *elem;
300
+ while (1) {
301
+ elem = vu_queue_pop(vu_dev, vq, sizeof(VuVirtqElement) +
302
+ sizeof(VuBlockReq));
303
+ if (elem) {
304
+ req_data = g_new0(struct req_data, 1);
305
+ req_data->server = server;
306
+ req_data->vq = vq;
307
+ req_data->elem = elem;
308
+ Coroutine *co = qemu_coroutine_create(vu_block_virtio_process_req,
309
+ req_data);
310
+ aio_co_enter(server->ioc->ctx, co);
311
+ } else {
312
+ break;
313
+ }
314
+ }
315
+}
316
+
317
+static void vu_block_queue_set_started(VuDev *vu_dev, int idx, bool started)
318
+{
319
+ VuVirtq *vq;
320
+
321
+ assert(vu_dev);
322
+
323
+ vq = vu_get_queue(vu_dev, idx);
324
+ vu_set_queue_handler(vu_dev, vq, started ? vu_block_process_vq : NULL);
325
+}
326
+
327
+static uint64_t vu_block_get_features(VuDev *dev)
328
+{
329
+ uint64_t features;
330
+ VuServer *server = container_of(dev, VuServer, vu_dev);
331
+ VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
332
+ features = 1ull << VIRTIO_BLK_F_SIZE_MAX |
333
+ 1ull << VIRTIO_BLK_F_SEG_MAX |
334
+ 1ull << VIRTIO_BLK_F_TOPOLOGY |
335
+ 1ull << VIRTIO_BLK_F_BLK_SIZE |
336
+ 1ull << VIRTIO_BLK_F_FLUSH |
337
+ 1ull << VIRTIO_BLK_F_DISCARD |
338
+ 1ull << VIRTIO_BLK_F_WRITE_ZEROES |
339
+ 1ull << VIRTIO_BLK_F_CONFIG_WCE |
340
+ 1ull << VIRTIO_F_VERSION_1 |
341
+ 1ull << VIRTIO_RING_F_INDIRECT_DESC |
342
+ 1ull << VIRTIO_RING_F_EVENT_IDX |
343
+ 1ull << VHOST_USER_F_PROTOCOL_FEATURES;
344
+
345
+ if (!vdev_blk->writable) {
346
+ features |= 1ull << VIRTIO_BLK_F_RO;
347
+ }
348
+
349
+ return features;
350
+}
351
+
352
+static uint64_t vu_block_get_protocol_features(VuDev *dev)
353
+{
354
+ return 1ull << VHOST_USER_PROTOCOL_F_CONFIG |
355
+ 1ull << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD;
356
+}
357
+
358
+static int
359
+vu_block_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
360
+{
361
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
362
+ VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
363
+ memcpy(config, &vdev_blk->blkcfg, len);
364
+
365
+ return 0;
366
+}
367
+
368
+static int
369
+vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
370
+ uint32_t offset, uint32_t size, uint32_t flags)
371
+{
372
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
373
+ VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
374
+ uint8_t wce;
375
+
376
+ /* don't support live migration */
377
+ if (flags != VHOST_SET_CONFIG_TYPE_MASTER) {
378
+ return -EINVAL;
379
+ }
380
+
381
+ if (offset != offsetof(struct virtio_blk_config, wce) ||
382
+ size != 1) {
383
+ return -EINVAL;
384
+ }
385
+
386
+ wce = *data;
387
+ vdev_blk->blkcfg.wce = wce;
388
+ blk_set_enable_write_cache(vdev_blk->backend, wce);
389
+ return 0;
390
+}
391
+
392
+/*
393
+ * When the client disconnects, it sends a VHOST_USER_NONE request
394
+ * and vu_process_message will simple call exit which cause the VM
395
+ * to exit abruptly.
396
+ * To avoid this issue, process VHOST_USER_NONE request ahead
397
+ * of vu_process_message.
398
+ *
399
+ */
400
+static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
401
+{
402
+ if (vmsg->request == VHOST_USER_NONE) {
403
+ dev->panic(dev, "disconnect");
404
+ return true;
405
+ }
406
+ return false;
407
+}
408
+
409
+static const VuDevIface vu_block_iface = {
410
+ .get_features = vu_block_get_features,
411
+ .queue_set_started = vu_block_queue_set_started,
412
+ .get_protocol_features = vu_block_get_protocol_features,
413
+ .get_config = vu_block_get_config,
414
+ .set_config = vu_block_set_config,
415
+ .process_msg = vu_block_process_msg,
416
+};
417
+
418
+static void blk_aio_attached(AioContext *ctx, void *opaque)
419
+{
420
+ VuBlockDev *vub_dev = opaque;
421
+ aio_context_acquire(ctx);
422
+ vhost_user_server_set_aio_context(&vub_dev->vu_server, ctx);
423
+ aio_context_release(ctx);
424
+}
425
+
426
+static void blk_aio_detach(void *opaque)
427
+{
428
+ VuBlockDev *vub_dev = opaque;
429
+ AioContext *ctx = vub_dev->vu_server.ctx;
430
+ aio_context_acquire(ctx);
431
+ vhost_user_server_set_aio_context(&vub_dev->vu_server, NULL);
432
+ aio_context_release(ctx);
433
+}
434
+
435
+static void
436
+vu_block_initialize_config(BlockDriverState *bs,
437
+ struct virtio_blk_config *config, uint32_t blk_size)
438
+{
439
+ config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
440
+ config->blk_size = blk_size;
441
+ config->size_max = 0;
442
+ config->seg_max = 128 - 2;
443
+ config->min_io_size = 1;
444
+ config->opt_io_size = 1;
445
+ config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
446
+ config->max_discard_sectors = 32768;
447
+ config->max_discard_seg = 1;
448
+ config->discard_sector_alignment = config->blk_size >> 9;
449
+ config->max_write_zeroes_sectors = 32768;
450
+ config->max_write_zeroes_seg = 1;
451
+}
452
+
453
+static VuBlockDev *vu_block_init(VuBlockDev *vu_block_device, Error **errp)
454
+{
455
+
456
+ BlockBackend *blk;
457
+ Error *local_error = NULL;
458
+ const char *node_name = vu_block_device->node_name;
459
+ bool writable = vu_block_device->writable;
460
+ uint64_t perm = BLK_PERM_CONSISTENT_READ;
461
+ int ret;
462
+
463
+ AioContext *ctx;
464
+
465
+ BlockDriverState *bs = bdrv_lookup_bs(node_name, node_name, &local_error);
466
+
467
+ if (!bs) {
468
+ error_propagate(errp, local_error);
469
+ return NULL;
470
+ }
471
+
472
+ if (bdrv_is_read_only(bs)) {
473
+ writable = false;
474
+ }
475
+
476
+ if (writable) {
477
+ perm |= BLK_PERM_WRITE;
478
+ }
479
+
480
+ ctx = bdrv_get_aio_context(bs);
481
+ aio_context_acquire(ctx);
482
+ bdrv_invalidate_cache(bs, NULL);
483
+ aio_context_release(ctx);
484
+
485
+ /*
486
+ * Don't allow resize while the vhost user server is running,
487
+ * otherwise we don't care what happens with the node.
488
+ */
489
+ blk = blk_new(bdrv_get_aio_context(bs), perm,
490
+ BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
491
+ BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
492
+ ret = blk_insert_bs(blk, bs, errp);
493
+
494
+ if (ret < 0) {
495
+ goto fail;
496
+ }
497
+
498
+ blk_set_enable_write_cache(blk, false);
499
+
500
+ blk_set_allow_aio_context_change(blk, true);
501
+
502
+ vu_block_device->blkcfg.wce = 0;
503
+ vu_block_device->backend = blk;
504
+ if (!vu_block_device->blk_size) {
505
+ vu_block_device->blk_size = BDRV_SECTOR_SIZE;
506
+ }
507
+ vu_block_device->blkcfg.blk_size = vu_block_device->blk_size;
508
+ blk_set_guest_block_size(blk, vu_block_device->blk_size);
509
+ vu_block_initialize_config(bs, &vu_block_device->blkcfg,
510
+ vu_block_device->blk_size);
511
+ return vu_block_device;
512
+
513
+fail:
514
+ blk_unref(blk);
515
+ return NULL;
516
+}
517
+
518
+static void vu_block_deinit(VuBlockDev *vu_block_device)
519
+{
520
+ if (vu_block_device->backend) {
521
+ blk_remove_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
522
+ blk_aio_detach, vu_block_device);
523
+ }
524
+
525
+ blk_unref(vu_block_device->backend);
526
+}
527
+
528
+static void vhost_user_blk_server_stop(VuBlockDev *vu_block_device)
529
+{
530
+ vhost_user_server_stop(&vu_block_device->vu_server);
531
+ vu_block_deinit(vu_block_device);
532
+}
533
+
534
+static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
535
+ Error **errp)
536
+{
537
+ AioContext *ctx;
538
+ SocketAddress *addr = vu_block_device->addr;
539
+
540
+ if (!vu_block_init(vu_block_device, errp)) {
541
+ return;
542
+ }
543
+
544
+ ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));
545
+
546
+ if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
547
+ VHOST_USER_BLK_MAX_QUEUES,
548
+ NULL, &vu_block_iface,
549
+ errp)) {
550
+ goto error;
551
+ }
552
+
553
+ blk_add_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
554
+ blk_aio_detach, vu_block_device);
555
+ vu_block_device->running = true;
556
+ return;
557
+
558
+ error:
559
+ vu_block_deinit(vu_block_device);
560
+}
561
+
562
+static bool vu_prop_modifiable(VuBlockDev *vus, Error **errp)
563
+{
564
+ if (vus->running) {
565
+ error_setg(errp, "The property can't be modified "
566
+ "while the server is running");
567
+ return false;
568
+ }
569
+ return true;
570
+}
571
+
572
+static void vu_set_node_name(Object *obj, const char *value, Error **errp)
573
+{
574
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
575
+
576
+ if (!vu_prop_modifiable(vus, errp)) {
577
+ return;
578
+ }
579
+
580
+ if (vus->node_name) {
581
+ g_free(vus->node_name);
582
+ }
583
+
584
+ vus->node_name = g_strdup(value);
585
+}
586
+
587
+static char *vu_get_node_name(Object *obj, Error **errp)
588
+{
589
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
590
+ return g_strdup(vus->node_name);
591
+}
592
+
593
+static void free_socket_addr(SocketAddress *addr)
594
+{
595
+ g_free(addr->u.q_unix.path);
596
+ g_free(addr);
597
+}
598
+
599
+static void vu_set_unix_socket(Object *obj, const char *value,
600
+ Error **errp)
601
+{
602
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
603
+
604
+ if (!vu_prop_modifiable(vus, errp)) {
605
+ return;
606
+ }
607
+
608
+ if (vus->addr) {
609
+ free_socket_addr(vus->addr);
610
+ }
611
+
612
+ SocketAddress *addr = g_new0(SocketAddress, 1);
613
+ addr->type = SOCKET_ADDRESS_TYPE_UNIX;
614
+ addr->u.q_unix.path = g_strdup(value);
615
+ vus->addr = addr;
616
+}
617
+
618
+static char *vu_get_unix_socket(Object *obj, Error **errp)
619
+{
620
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
621
+ return g_strdup(vus->addr->u.q_unix.path);
622
+}
623
+
624
+static bool vu_get_block_writable(Object *obj, Error **errp)
625
+{
626
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
627
+ return vus->writable;
628
+}
629
+
630
+static void vu_set_block_writable(Object *obj, bool value, Error **errp)
631
+{
632
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
633
+
634
+ if (!vu_prop_modifiable(vus, errp)) {
635
+ return;
636
+ }
637
+
638
+ vus->writable = value;
639
+}
640
+
641
+static void vu_get_blk_size(Object *obj, Visitor *v, const char *name,
642
+ void *opaque, Error **errp)
643
+{
644
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
645
+ uint32_t value = vus->blk_size;
646
+
647
+ visit_type_uint32(v, name, &value, errp);
648
+}
649
+
650
+static void vu_set_blk_size(Object *obj, Visitor *v, const char *name,
651
+ void *opaque, Error **errp)
652
+{
653
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
654
+
655
+ Error *local_err = NULL;
656
+ uint32_t value;
657
+
658
+ if (!vu_prop_modifiable(vus, errp)) {
659
+ return;
660
+ }
661
+
662
+ visit_type_uint32(v, name, &value, &local_err);
663
+ if (local_err) {
664
+ goto out;
665
+ }
666
+
667
+ check_block_size(object_get_typename(obj), name, value, &local_err);
668
+ if (local_err) {
669
+ goto out;
670
+ }
671
+
672
+ vus->blk_size = value;
673
+
674
+out:
675
+ error_propagate(errp, local_err);
676
+}
677
+
678
+static void vhost_user_blk_server_instance_finalize(Object *obj)
679
+{
680
+ VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
681
+
682
+ vhost_user_blk_server_stop(vub);
683
+
684
+ /*
685
+ * Unlike object_property_add_str, object_class_property_add_str
686
+ * doesn't have a release method. Thus manual memory freeing is
687
+ * needed.
688
+ */
689
+ free_socket_addr(vub->addr);
690
+ g_free(vub->node_name);
691
+}
692
+
693
+static void vhost_user_blk_server_complete(UserCreatable *obj, Error **errp)
694
+{
695
+ VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
696
+
697
+ vhost_user_blk_server_start(vub, errp);
698
+}
699
+
700
+static void vhost_user_blk_server_class_init(ObjectClass *klass,
701
+ void *class_data)
702
+{
703
+ UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
704
+ ucc->complete = vhost_user_blk_server_complete;
705
+
706
+ object_class_property_add_bool(klass, "writable",
707
+ vu_get_block_writable,
708
+ vu_set_block_writable);
709
+
710
+ object_class_property_add_str(klass, "node-name",
711
+ vu_get_node_name,
712
+ vu_set_node_name);
713
+
714
+ object_class_property_add_str(klass, "unix-socket",
715
+ vu_get_unix_socket,
716
+ vu_set_unix_socket);
717
+
718
+ object_class_property_add(klass, "logical-block-size", "uint32",
719
+ vu_get_blk_size, vu_set_blk_size,
720
+ NULL, NULL);
721
+}
722
+
723
+static const TypeInfo vhost_user_blk_server_info = {
724
+ .name = TYPE_VHOST_USER_BLK_SERVER,
725
+ .parent = TYPE_OBJECT,
726
+ .instance_size = sizeof(VuBlockDev),
727
+ .instance_finalize = vhost_user_blk_server_instance_finalize,
728
+ .class_init = vhost_user_blk_server_class_init,
729
+ .interfaces = (InterfaceInfo[]) {
730
+ {TYPE_USER_CREATABLE},
731
+ {}
732
+ },
733
+};
734
+
735
+static void vhost_user_blk_server_register_types(void)
736
+{
737
+ type_register_static(&vhost_user_blk_server_info);
738
+}
739
+
740
+type_init(vhost_user_blk_server_register_types)
741
diff --git a/softmmu/vl.c b/softmmu/vl.c
163
index XXXXXXX..XXXXXXX 100644
742
index XXXXXXX..XXXXXXX 100644
164
--- a/tests/qemu-iotests/group
743
--- a/softmmu/vl.c
165
+++ b/tests/qemu-iotests/group
744
+++ b/softmmu/vl.c
166
@@ -XXX,XX +XXX,XX @@
745
@@ -XXX,XX +XXX,XX @@ static bool object_create_initial(const char *type, QemuOpts *opts)
167
309 rw auto quick
746
}
168
310 rw quick
747
#endif
169
312 rw quick
748
170
+313 rw auto quick
749
+ /* Reason: vhost-user-blk-server property "node-name" */
750
+ if (g_str_equal(type, "vhost-user-blk-server")) {
751
+ return false;
752
+ }
753
/*
754
* Reason: filter-* property "netdev" etc.
755
*/
756
diff --git a/block/meson.build b/block/meson.build
757
index XXXXXXX..XXXXXXX 100644
758
--- a/block/meson.build
759
+++ b/block/meson.build
760
@@ -XXX,XX +XXX,XX @@ block_ss.add(when: 'CONFIG_WIN32', if_true: files('file-win32.c', 'win32-aio.c')
761
block_ss.add(when: 'CONFIG_POSIX', if_true: [files('file-posix.c'), coref, iokit])
762
block_ss.add(when: 'CONFIG_LIBISCSI', if_true: files('iscsi-opts.c'))
763
block_ss.add(when: 'CONFIG_LINUX', if_true: files('nvme.c'))
764
+block_ss.add(when: 'CONFIG_LINUX', if_true: files('export/vhost-user-blk-server.c', '../contrib/libvhost-user/libvhost-user.c'))
765
block_ss.add(when: 'CONFIG_REPLICATION', if_true: files('replication.c'))
766
block_ss.add(when: 'CONFIG_SHEEPDOG', if_true: files('sheepdog.c'))
767
block_ss.add(when: ['CONFIG_LINUX_AIO', libaio], if_true: files('linux-aio.c'))
171
--
768
--
172
2.29.2
769
2.26.2
173
770
174
diff view generated by jsdifflib
From: Coiby Xu <coiby.xu@gmail.com>

Suggested-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20200918080912.321299-8-coiby.xu@gmail.com
[Removed reference to vhost-user-blk-test.c, it will be sent in a
separate pull request.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ L: qemu-block@nongnu.org
 S: Supported
 F: tests/image-fuzzer/
 
+Vhost-user block device backend server
+M: Coiby Xu <Coiby.Xu@gmail.com>
+S: Maintained
+F: block/export/vhost-user-blk-server.c
+F: util/vhost-user-server.c
+F: tests/qtest/libqos/vhost-user-blk.c
+
 Replication
 M: Wen Congyang <wencongyang2@huawei.com>
 M: Xie Changlong <xiechanglong.d@gmail.com>
-- 
2.26.2

ccd3b3b8112 has deprecated short-hand boolean options (i.e., options
without values). All options without values are interpreted as boolean
options, so this includes the invalid option "snapshot.foo" used in
iotest 178.

So after ccd3b3b8112, 178 fails with:

+qemu-img: warning: short-form boolean option 'snapshot.foo' deprecated
+Please use snapshot.foo=on instead

Suppress that deprecation warning by passing some value to it (it does
not matter which, because the option is invalid anyway).

Fixes: ccd3b3b8112b670fdccf8a392b8419b173ffccb4
       ("qemu-option: warn for short-form boolean options")
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210126123834.115915-1-mreitz@redhat.com>
---
 tests/qemu-iotests/178           | 2 +-
 tests/qemu-iotests/178.out.qcow2 | 2 +-
 tests/qemu-iotests/178.out.raw   | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/178 b/tests/qemu-iotests/178
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/178
+++ b/tests/qemu-iotests/178
@@ -XXX,XX +XXX,XX @@ $QEMU_IMG measure --image-opts # missing filename
 $QEMU_IMG measure -f qcow2 # missing filename
 $QEMU_IMG measure -l snap1 # missing filename
 $QEMU_IMG measure -o , # invalid option list
-$QEMU_IMG measure -l snapshot.foo # invalid snapshot option
+$QEMU_IMG measure -l snapshot.foo=bar # invalid snapshot option
 $QEMU_IMG measure --output foo # invalid output format
 $QEMU_IMG measure --size -1 # invalid image size
 $QEMU_IMG measure -O foo "$TEST_IMG" # unknown image file format
diff --git a/tests/qemu-iotests/178.out.qcow2 b/tests/qemu-iotests/178.out.qcow2
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/178.out.qcow2
+++ b/tests/qemu-iotests/178.out.qcow2
@@ -XXX,XX +XXX,XX @@ qemu-img: --image-opts, -f, and -l require a filename argument.
 qemu-img: --image-opts, -f, and -l require a filename argument.
 qemu-img: Invalid option list: ,
 qemu-img: Invalid parameter 'snapshot.foo'
-qemu-img: Failed in parsing snapshot param 'snapshot.foo'
+qemu-img: Failed in parsing snapshot param 'snapshot.foo=bar'
 qemu-img: --output must be used with human or json as argument.
 qemu-img: Invalid image size specified. Must be between 0 and 9223372036854775807.
 qemu-img: Unknown file format 'foo'
diff --git a/tests/qemu-iotests/178.out.raw b/tests/qemu-iotests/178.out.raw
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/178.out.raw
+++ b/tests/qemu-iotests/178.out.raw
@@ -XXX,XX +XXX,XX @@ qemu-img: --image-opts, -f, and -l require a filename argument.
 qemu-img: --image-opts, -f, and -l require a filename argument.
 qemu-img: Invalid option list: ,
 qemu-img: Invalid parameter 'snapshot.foo'
-qemu-img: Failed in parsing snapshot param 'snapshot.foo'
+qemu-img: Failed in parsing snapshot param 'snapshot.foo=bar'
 qemu-img: --output must be used with human or json as argument.
 qemu-img: Invalid image size specified. Must be between 0 and 9223372036854775807.
 qemu-img: Unknown file format 'foo'
-- 
2.29.2
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
         return false;
     }
 
-    /* zero out unspecified fileds */
+    /* zero out unspecified fields */
     *server = (VuServer) {
         .listener = listener,
         .vu_iface = vu_iface,
-- 
2.26.2

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-22-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 scripts/simplebench/bench_block_job.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/simplebench/bench_block_job.py b/scripts/simplebench/bench_block_job.py
index XXXXXXX..XXXXXXX 100755
--- a/scripts/simplebench/bench_block_job.py
+++ b/scripts/simplebench/bench_block_job.py
@@ -XXX,XX +XXX,XX @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
 #
 # Benchmark block jobs
 #
-- 
2.29.2
We already have access to the value with the correct type (ioc and sioc
are the same QIOChannel).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -XXX,XX +XXX,XX @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
     server->ioc = QIO_CHANNEL(sioc);
     object_ref(OBJECT(server->ioc));
     qio_channel_attach_aio_context(server->ioc, server->ctx);
-    qio_channel_set_blocking(QIO_CHANNEL(server->sioc), false, NULL);
+    qio_channel_set_blocking(server->ioc, false, NULL);
     vu_client_start(server);
-- 
2.26.2

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-17-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/backup.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index XXXXXXX..XXXXXXX 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -XXX,XX +XXX,XX @@ static void backup_init_bcs_bitmap(BackupBlockJob *job)
 static int coroutine_fn backup_run(Job *job, Error **errp)
 {
     BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
-    int ret = 0;
+    int ret;
 
     backup_init_bcs_bitmap(s);
 
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_run(Job *job, Error **errp)
 
         for (offset = 0; offset < s->len; ) {
             if (yield_and_check(s)) {
-                ret = -ECANCELED;
-                goto out;
+                return -ECANCELED;
             }
 
             ret = block_copy_reset_unallocated(s->bcs, offset, &count);
             if (ret < 0) {
-                goto out;
+                return ret;
             }
 
             offset += count;
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_run(Job *job, Error **errp)
             job_yield(job);
         }
     } else {
-        ret = backup_loop(s);
+        return backup_loop(s);
     }
 
- out:
-    return ret;
+    return 0;
 }
 
 static const BlockJobDriver backup_job_driver = {
-- 
2.29.2
Explicitly deleting watches is not necessary since libvhost-user calls
remove_watch() during vu_deinit(). Add an assertion to check this
though.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -XXX,XX +XXX,XX @@ static void close_client(VuServer *server)
     /* When this is set vu_client_trip will stop new processing vhost-user message */
     server->sioc = NULL;
 
-    VuFdWatch *vu_fd_watch, *next;
-    QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
-        aio_set_fd_handler(server->ioc->ctx, vu_fd_watch->fd, true, NULL,
-                           NULL, NULL, NULL);

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

The code already doesn't freeze the base node, and we try to keep it
prepared for the situation when the base node is changed during the
operation. In other words, block-stream doesn't own the base node.

Let's introduce a new interface which should replace the current one,
and which is in better relation with the code. Specifying the bottom
node instead of base, and requiring it to be non-filter, gives us the
following benefits:

- drops the difference between above_base and base_overlay, which will
  be renamed to just bottom when the old interface is dropped

- a clean way to work with parallel streams/commits on the same backing
  chain, which otherwise becomes a problem when we introduce a filter
  for the stream job

- a cleaner interface. Nobody will be surprised that the base node may
  disappear during block-stream when there is no word about "base" in
  the interface.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216061703.70908-11-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 qapi/block-core.json           | 12 ++++---
 include/block/block_int.h      |  1 +
 block/monitor/block-hmp-cmds.c |  3 +-
 block/stream.c                 | 50 +++++++++++++++++++---------
 blockdev.c                     | 59 ++++++++++++++++++++++++++++------
 5 files changed, 94 insertions(+), 31 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -XXX,XX +XXX,XX @@
 # @device: the device or node name of the top image
 #
 # @base: the common backing file name.
-#        It cannot be set if @base-node is also set.
+#        It cannot be set if @base-node or @bottom is also set.
 #
 # @base-node: the node name of the backing file.
-#             It cannot be set if @base is also set. (Since 2.8)
+#             It cannot be set if @base or @bottom is also set. (Since 2.8)
+#
+# @bottom: the last node in the chain that should be streamed into
+#          top. It cannot be set if @base or @base-node is also set.
+#          It cannot be a filter node. (Since 6.0)
 #
 # @backing-file: The backing file string to write into the top
 #                image. This filename is not validated.
@@ -XXX,XX +XXX,XX @@
 ##
 { 'command': 'block-stream',
   'data': { '*job-id': 'str', 'device': 'str', '*base': 'str',
-            '*base-node': 'str', '*backing-file': 'str', '*speed': 'int',
-            '*on-error': 'BlockdevOnError',
+            '*base-node': 'str', '*backing-file': 'str', '*bottom': 'str',
+            '*speed': 'int', '*on-error': 'BlockdevOnError',
             '*filter-node-name': 'str',
             '*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
 
diff --git a/include/block/block_int.h b/include/block/block_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -XXX,XX +XXX,XX @@ int is_windows_drive(const char *filename);
  */
 void stream_start(const char *job_id, BlockDriverState *bs,
                   BlockDriverState *base, const char *backing_file_str,
+                  BlockDriverState *bottom,
                   int creation_flags, int64_t speed,
                   BlockdevOnError on_error,
                   const char *filter_node_name,
diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index XXXXXXX..XXXXXXX 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -XXX,XX +XXX,XX @@ void hmp_block_stream(Monitor *mon, const QDict *qdict)
     int64_t speed = qdict_get_try_int(qdict, "speed", 0);
 
     qmp_block_stream(true, device, device, base != NULL, base, false, NULL,
-                     false, NULL, qdict_haskey(qdict, "speed"), speed, true,
+                     false, NULL, false, NULL,
+                     qdict_haskey(qdict, "speed"), speed, true,
                      BLOCKDEV_ON_ERROR_REPORT, false, NULL, false, false, false,
                      false, &error);
 
diff --git a/block/stream.c b/block/stream.c
index XXXXXXX..XXXXXXX 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -XXX,XX +XXX,XX @@ static const BlockJobDriver stream_job_driver = {
 
 void stream_start(const char *job_id, BlockDriverState *bs,
                   BlockDriverState *base, const char *backing_file_str,
+                  BlockDriverState *bottom,
                   int creation_flags, int64_t speed,
                   BlockdevOnError on_error,
                   const char *filter_node_name,
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
     BlockDriverState *iter;
     bool bs_read_only;
     int basic_flags = BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED;
-    BlockDriverState *base_overlay = bdrv_find_overlay(bs, base);
+    BlockDriverState *base_overlay;
     BlockDriverState *above_base;
 
-    if (!base_overlay) {
-        error_setg(errp, "'%s' is not in the backing chain of '%s'",
-                   base->node_name, bs->node_name);
-        return;
-    }
+    assert(!(base && bottom));
+    assert(!(backing_file_str && bottom));
+
+    if (bottom) {
+        /*
+         * New simple interface. The code is written in terms of the old
+         * interface with the @base parameter (still, it doesn't freeze the
+         * link to base, so in this sense the old code is correct for the new
+         * interface). So, for now, just emulate base_overlay and above_base.
+         * Still, when the old interface is finally removed, we should
+         * refactor the code to use only "bottom", not "*base*" things.
+         */
+        assert(!bottom->drv->is_filter);
+        base_overlay = above_base = bottom;
+    } else {
+        base_overlay = bdrv_find_overlay(bs, base);
+        if (!base_overlay) {
+            error_setg(errp, "'%s' is not in the backing chain of '%s'",
+                       base->node_name, bs->node_name);
+            return;
+        }
 
-    /*
-     * Find the node directly above @base. @base_overlay is a COW overlay, so
-     * it must have a bdrv_cow_child(), but it is the immediate overlay of
-     * @base, so between the two there can only be filters.
-     */
-    above_base = base_overlay;
-    if (bdrv_cow_bs(above_base) != base) {
-        above_base = bdrv_cow_bs(above_base);
-        while (bdrv_filter_bs(above_base) != base) {
-            above_base = bdrv_filter_bs(above_base);
+        /*
+         * Find the node directly above @base. @base_overlay is a COW overlay,
+         * so it must have a bdrv_cow_child(), but it is the immediate overlay
+         * of @base, so between the two there can only be filters.
+         */
+        above_base = base_overlay;
+        if (bdrv_cow_bs(above_base) != base) {
+            above_base = bdrv_cow_bs(above_base);
+            while (bdrv_filter_bs(above_base) != base) {
+                above_base = bdrv_filter_bs(above_base);
160
+ }
161
}
162
}
163
164
diff --git a/blockdev.c b/blockdev.c
165
index XXXXXXX..XXXXXXX 100644
166
--- a/blockdev.c
167
+++ b/blockdev.c
168
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
169
bool has_base, const char *base,
170
bool has_base_node, const char *base_node,
171
bool has_backing_file, const char *backing_file,
172
+ bool has_bottom, const char *bottom,
173
bool has_speed, int64_t speed,
174
bool has_on_error, BlockdevOnError on_error,
175
bool has_filter_node_name, const char *filter_node_name,
176
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
177
bool has_auto_dismiss, bool auto_dismiss,
178
Error **errp)
179
{
180
- BlockDriverState *bs, *iter;
181
+ BlockDriverState *bs, *iter, *iter_end;
182
BlockDriverState *base_bs = NULL;
183
+ BlockDriverState *bottom_bs = NULL;
184
AioContext *aio_context;
185
Error *local_err = NULL;
186
int job_flags = JOB_DEFAULT;
187
188
+ if (has_base && has_base_node) {
189
+ error_setg(errp, "'base' and 'base-node' cannot be specified "
190
+ "at the same time");
191
+ return;
192
+ }
193
+
194
+ if (has_base && has_bottom) {
195
+ error_setg(errp, "'base' and 'bottom' cannot be specified "
196
+ "at the same time");
197
+ return;
198
+ }
199
+
200
+ if (has_bottom && has_base_node) {
201
+ error_setg(errp, "'bottom' and 'base-node' cannot be specified "
202
+ "at the same time");
203
+ return;
204
+ }
205
+
206
if (!has_on_error) {
207
on_error = BLOCKDEV_ON_ERROR_REPORT;
208
}
209
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
210
aio_context = bdrv_get_aio_context(bs);
211
aio_context_acquire(aio_context);
212
213
- if (has_base && has_base_node) {
214
- error_setg(errp, "'base' and 'base-node' cannot be specified "
215
- "at the same time");
216
- goto out;
217
- }
24
- }
218
-
25
-
219
if (has_base) {
26
- while (!QTAILQ_EMPTY(&server->vu_fd_watches)) {
220
base_bs = bdrv_find_backing_image(bs, base);
27
- QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
221
if (base_bs == NULL) {
28
- if (!vu_fd_watch->processing) {
222
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
29
- QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
223
bdrv_refresh_filename(base_bs);
30
- g_free(vu_fd_watch);
31
- }
32
- }
33
- }
34
-
35
while (server->processing_msg) {
36
if (server->ioc->read_coroutine) {
37
server->ioc->read_coroutine = NULL;
38
@@ -XXX,XX +XXX,XX @@ static void close_client(VuServer *server)
224
}
39
}
225
40
226
- /* Check for op blockers in the whole chain between bs and base */
41
vu_deinit(&server->vu_dev);
227
- for (iter = bs; iter && iter != base_bs;
228
+ if (has_bottom) {
229
+ bottom_bs = bdrv_lookup_bs(NULL, bottom, errp);
230
+ if (!bottom_bs) {
231
+ goto out;
232
+ }
233
+ if (!bottom_bs->drv) {
234
+ error_setg(errp, "Node '%s' is not open", bottom);
235
+ goto out;
236
+ }
237
+ if (bottom_bs->drv->is_filter) {
238
+ error_setg(errp, "Node '%s' is a filter, use a non-filter node "
239
+ "as 'bottom'", bottom);
240
+ goto out;
241
+ }
242
+ if (!bdrv_chain_contains(bs, bottom_bs)) {
243
+ error_setg(errp, "Node '%s' is not in a chain starting from '%s'",
244
+ bottom, device);
245
+ goto out;
246
+ }
247
+ assert(bdrv_get_aio_context(bottom_bs) == aio_context);
248
+ }
249
+
42
+
250
+ /*
43
+ /* vu_deinit() should have called remove_watch() */
251
+ * Check for op blockers in the whole chain between bs and base (or bottom)
44
+ assert(QTAILQ_EMPTY(&server->vu_fd_watches));
252
+ */
45
+
253
+ iter_end = has_bottom ? bdrv_filter_or_cow_bs(bottom_bs) : base_bs;
46
object_unref(OBJECT(sioc));
254
+ for (iter = bs; iter && iter != iter_end;
47
object_unref(OBJECT(server->ioc));
255
iter = bdrv_filter_or_cow_bs(iter))
48
}
256
{
257
if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_STREAM, errp)) {
258
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
259
}
260
261
stream_start(has_job_id ? job_id : NULL, bs, base_bs, backing_file,
262
- job_flags, has_speed ? speed : 0, on_error,
263
+ bottom_bs, job_flags, has_speed ? speed : 0, on_error,
264
filter_node_name, &local_err);
265
if (local_err) {
266
error_propagate(errp, local_err);
267
--
49
--
268
2.29.2
50
2.26.2
269
51
270
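The qmp_block_stream() hunk above derives iter_end from either base or bottom and then walks the backing chain applying a per-node check. The walk can be sketched stand-alone; Node, chain_contains(), and count_checked_nodes() below are reduced, hypothetical stand-ins for BlockDriverState and QEMU's chain helpers, not the real API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for BlockDriverState: each node points at the next node
 * in its backing chain (NULL at the end of the chain). */
typedef struct Node {
    const char *name;
    struct Node *backing;   /* plays the role of bdrv_filter_or_cow_bs() */
} Node;

/* Rough analogue of bdrv_chain_contains(): is 'candidate' reachable
 * from 'top' by following backing pointers? */
static bool chain_contains(Node *top, Node *candidate)
{
    for (Node *iter = top; iter != NULL; iter = iter->backing) {
        if (iter == candidate) {
            return true;
        }
    }
    return false;
}

/* Count the nodes a per-node check (like bdrv_op_is_blocked()) visits:
 * every node from 'top' down to, but not including, 'iter_end'. With
 * the 'bottom' interface, iter_end is bottom's backing node, so
 * 'bottom' itself is still checked. */
static int count_checked_nodes(Node *top, Node *iter_end)
{
    int n = 0;
    for (Node *iter = top; iter != NULL && iter != iter_end;
         iter = iter->backing) {
        n++;
    }
    return n;
}
```

Passing `bottom->backing` as iter_end gives the same set of checked nodes as passing the old `base` pointer directly, which is why both interfaces can share one loop.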
diff view generated by jsdifflib
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

It simplifies debugging.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-6-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
block/block-copy.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index XXXXXXX..XXXXXXX 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
/* Coroutine where async block-copy is running */
Coroutine *co;

+ /* To reference all call states from BlockCopyState */
+ QLIST_ENTRY(BlockCopyCallState) list;
+
/* State */
int ret;
bool finished;
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyState {
bool use_copy_range;
int64_t copy_size;
uint64_t len;
- QLIST_HEAD(, BlockCopyTask) tasks;
+ QLIST_HEAD(, BlockCopyTask) tasks; /* All tasks from all block-copy calls */
+ QLIST_HEAD(, BlockCopyCallState) calls;

BdrvRequestFlags write_flags;

@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
}

QLIST_INIT(&s->tasks);
+ QLIST_INIT(&s->calls);

return s;
}
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
{
int ret;

+ QLIST_INSERT_HEAD(&call_state->s->calls, call_state, list);
+
do {
ret = block_copy_dirty_clusters(call_state);

@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
call_state->cb(call_state->cb_opaque);
}

+ QLIST_REMOVE(call_state, list);
+
return ret;
}

--
2.29.2

Only one struct is needed per request. Drop req_data and the separate
VuBlockReq instance. Instead let vu_queue_pop() allocate everything at
once.

This fixes the req_data memory leak in vu_block_virtio_process_req().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-6-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
block/export/vhost-user-blk-server.c | 68 +++++++++-------------------
1 file changed, 21 insertions(+), 47 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index XXXXXXX..XXXXXXX 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -XXX,XX +XXX,XX @@ struct virtio_blk_inhdr {
};

typedef struct VuBlockReq {
- VuVirtqElement *elem;
+ VuVirtqElement elem;
int64_t sector_num;
size_t size;
struct virtio_blk_inhdr *in;
@@ -XXX,XX +XXX,XX @@ static void vu_block_req_complete(VuBlockReq *req)
VuDev *vu_dev = &req->server->vu_dev;

/* IO size with 1 extra status byte */
- vu_queue_push(vu_dev, req->vq, req->elem, req->size + 1);
+ vu_queue_push(vu_dev, req->vq, &req->elem, req->size + 1);
vu_queue_notify(vu_dev, req->vq);

- if (req->elem) {
- free(req->elem);
- }
-
- g_free(req);
+ free(req);
}

static VuBlockDev *get_vu_block_device_by_server(VuServer *server)
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_flush(VuBlockReq *req)
blk_co_flush(backend);
}

-struct req_data {
- VuServer *server;
- VuVirtq *vq;
- VuVirtqElement *elem;
-};
-
static void coroutine_fn vu_block_virtio_process_req(void *opaque)
{
- struct req_data *data = opaque;
- VuServer *server = data->server;
- VuVirtq *vq = data->vq;
- VuVirtqElement *elem = data->elem;
+ VuBlockReq *req = opaque;
+ VuServer *server = req->server;
+ VuVirtqElement *elem = &req->elem;
uint32_t type;
- VuBlockReq *req;

VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
BlockBackend *backend = vdev_blk->backend;
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
struct iovec *out_iov = elem->out_sg;
unsigned in_num = elem->in_num;
unsigned out_num = elem->out_num;
+
/* refer to hw/block/virtio_blk.c */
if (elem->out_num < 1 || elem->in_num < 1) {
error_report("virtio-blk request missing headers");
- free(elem);
- return;
+ goto err;
}

- req = g_new0(VuBlockReq, 1);
- req->server = server;
- req->vq = vq;
- req->elem = elem;
-
if (unlikely(iov_to_buf(out_iov, out_num, 0, &req->out,
sizeof(req->out)) != sizeof(req->out))) {
error_report("virtio-blk request outhdr too short");
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)

err:
free(elem);
- g_free(req);
- return;
}

static void vu_block_process_vq(VuDev *vu_dev, int idx)
{
- VuServer *server;
- VuVirtq *vq;
- struct req_data *req_data;
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
+ VuVirtq *vq = vu_get_queue(vu_dev, idx);

- server = container_of(vu_dev, VuServer, vu_dev);
- assert(server);
-
- vq = vu_get_queue(vu_dev, idx);
- assert(vq);
- VuVirtqElement *elem;
while (1) {
- elem = vu_queue_pop(vu_dev, vq, sizeof(VuVirtqElement) +
- sizeof(VuBlockReq));
- if (elem) {
- req_data = g_new0(struct req_data, 1);
- req_data->server = server;
- req_data->vq = vq;
- req_data->elem = elem;
- Coroutine *co = qemu_coroutine_create(vu_block_virtio_process_req,
- req_data);
- aio_co_enter(server->ioc->ctx, co);
- } else {
+ VuBlockReq *req;
+
+ req = vu_queue_pop(vu_dev, vq, sizeof(VuBlockReq));
+ if (!req) {
break;
}
+
+ req->server = server;
+ req->vq = vq;
+
+ Coroutine *co =
+ qemu_coroutine_create(vu_block_virtio_process_req, req);
+ qemu_coroutine_enter(co);
}
}

--
2.26.2
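The VuBlockReq change above embeds the virtqueue element in the request so one allocation (and one free) covers both. The pattern can be shown stand-alone; Element, Request, and queue_pop() are hypothetical reductions of VuVirtqElement, VuBlockReq, and vu_queue_pop(), not the libvhost-user API:

```c
#include <assert.h>
#include <stdlib.h>

/* Reduced stand-in for VuVirtqElement. */
typedef struct Element {
    unsigned in_num;
    unsigned out_num;
} Element;

/* The request embeds the element by value instead of holding a pointer,
 * so a single allocation covers both -- the shape the patch gives
 * VuBlockReq. The element stays the first member so the pop routine
 * can initialize the memory as an element. */
typedef struct Request {
    Element elem;
    int vq_index;   /* stand-in for the req->server / req->vq bookkeeping */
} Request;

/* Analogue of vu_queue_pop(vu_dev, vq, sizeof(VuBlockReq)): the caller
 * asks the queue for its full request size in one zeroed allocation. */
static void *queue_pop(size_t total_size)
{
    return calloc(1, total_size);
}

static void request_complete(Request *req)
{
    /* one free() matches the one allocation; no separate free of the
     * element and no g_free(req) pair, which is where the old code
     * could leak */
    free(req);
}
```

Because the element and the request share one block of memory, completion can no longer free one and forget the other.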
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Add new parameters to configure future backup features. The patch
doesn't introduce aio backup requests (so we actually have only one
worker) neither requests larger than one cluster. Still, formally we
satisfy these maximums anyway, so add the parameters now, to facilitate
further patch which will really change backup job behavior.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-11-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
qapi/block-core.json | 13 ++++++++++++-
block/backup.c | 28 +++++++++++++++++++++++-----
block/replication.c | 2 +-
blockdev.c | 8 +++++++-
4 files changed, 43 insertions(+), 8 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -XXX,XX +XXX,XX @@
#
# @use-copy-range: Use copy offloading. Default true.
#
+# @max-workers: Maximum number of parallel requests for the sustained background
+# copying process. Doesn't influence copy-before-write operations.
+# Default 64.
+#
+# @max-chunk: Maximum request length for the sustained background copying
+# process. Doesn't influence copy-before-write operations.
+# 0 means unlimited. If max-chunk is non-zero then it should not be
+# less than job cluster size which is calculated as maximum of
+# target image cluster size and 64k. Default 0.
+#
# Since: 6.0
##
{ 'struct': 'BackupPerf',
- 'data': { '*use-copy-range': 'bool' }}
+ 'data': { '*use-copy-range': 'bool',
+ '*max-workers': 'int', '*max-chunk': 'int64' } }

##
# @BackupCommon:
diff --git a/block/backup.c b/block/backup.c
index XXXXXXX..XXXXXXX 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
return NULL;
}

+ cluster_size = backup_calculate_cluster_size(target, errp);
+ if (cluster_size < 0) {
+ goto error;
+ }
+
+ if (perf->max_workers < 1) {
+ error_setg(errp, "max-workers must be greater than zero");
+ return NULL;
+ }
+
+ if (perf->max_chunk < 0) {
+ error_setg(errp, "max-chunk must be zero (which means no limit) or "
+ "positive");
+ return NULL;
+ }
+
+ if (perf->max_chunk && perf->max_chunk < cluster_size) {
+ error_setg(errp, "Required max-chunk (%" PRIi64 ") is less than backup "
+ "cluster size (%" PRIi64 ")", perf->max_chunk, cluster_size);
+ return NULL;
+ }
+
+
if (sync_bitmap) {
/* If we need to write to this bitmap, check that we can: */
if (bitmap_mode != BITMAP_SYNC_MODE_NEVER &&
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
goto error;
}

- cluster_size = backup_calculate_cluster_size(target, errp);
- if (cluster_size < 0) {
- goto error;
- }
-
/*
* If source is in backing chain of target assume that target is going to be
* used for "image fleecing", i.e. it should represent a kind of snapshot of
diff --git a/block/replication.c b/block/replication.c
index XXXXXXX..XXXXXXX 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -XXX,XX +XXX,XX @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
int64_t active_length, hidden_length, disk_length;
AioContext *aio_context;
Error *local_err = NULL;
- BackupPerf perf = { .use_copy_range = true };
+ BackupPerf perf = { .use_copy_range = true, .max_workers = 1 };

aio_context = bdrv_get_aio_context(bs);
aio_context_acquire(aio_context);
diff --git a/blockdev.c b/blockdev.c
index XXXXXXX..XXXXXXX 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
{
BlockJob *job = NULL;
BdrvDirtyBitmap *bmap = NULL;
- BackupPerf perf = { .use_copy_range = true };
+ BackupPerf perf = { .use_copy_range = true, .max_workers = 64 };
int job_flags = JOB_DEFAULT;

if (!backup->has_speed) {
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
if (backup->x_perf->has_use_copy_range) {
perf.use_copy_range = backup->x_perf->use_copy_range;
}
+ if (backup->x_perf->has_max_workers) {
+ perf.max_workers = backup->x_perf->max_workers;
+ }
+ if (backup->x_perf->has_max_chunk) {
+ perf.max_chunk = backup->x_perf->max_chunk;
+ }
}

if ((backup->sync == MIRROR_SYNC_MODE_BITMAP) ||
--
2.29.2

The device panic notifier callback is not used. Drop it.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-7-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
util/vhost-user-server.h | 3 ---
block/export/vhost-user-blk-server.c | 3 +--
util/vhost-user-server.c | 6 ------
3 files changed, 1 insertion(+), 11 deletions(-)

diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.h
+++ b/util/vhost-user-server.h
@@ -XXX,XX +XXX,XX @@ typedef struct VuFdWatch {
} VuFdWatch;

typedef struct VuServer VuServer;
-typedef void DevicePanicNotifierFn(VuServer *server);

struct VuServer {
QIONetListener *listener;
AioContext *ctx;
- DevicePanicNotifierFn *device_panic_notifier;
int max_queues;
const VuDevIface *vu_iface;
VuDev vu_dev;
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
SocketAddress *unix_socket,
AioContext *ctx,
uint16_t max_queues,
- DevicePanicNotifierFn *device_panic_notifier,
const VuDevIface *vu_iface,
Error **errp);

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index XXXXXXX..XXXXXXX 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -XXX,XX +XXX,XX @@ static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));

if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
- VHOST_USER_BLK_MAX_QUEUES,
- NULL, &vu_block_iface,
+ VHOST_USER_BLK_MAX_QUEUES, &vu_block_iface,
errp)) {
goto error;
}

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -XXX,XX +XXX,XX @@ static void panic_cb(VuDev *vu_dev, const char *buf)
close_client(server);
}

- if (server->device_panic_notifier) {
- server->device_panic_notifier(server);
- }
-
/*
* Set the callback function for network listener so another
* vhost-user client can connect to this server
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
SocketAddress *socket_addr,
AioContext *ctx,
uint16_t max_queues,
- DevicePanicNotifierFn *device_panic_notifier,
const VuDevIface *vu_iface,
Error **errp)
{
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
.vu_iface = vu_iface,
.max_queues = max_queues,
.ctx = ctx,
- .device_panic_notifier = device_panic_notifier,
};

qio_net_listener_set_name(server->listener, "vhost-user-backend-listener");
--
2.26.2
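The max-workers/max-chunk checks that backup_job_create() gains above can be exercised in isolation. A minimal sketch follows; BackupPerf here is a reduced stand-in for the QAPI-generated struct, and backup_perf_validate() is a hypothetical helper mirroring the patch's checks, not a QEMU function:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Reduced stand-in for the QAPI-generated BackupPerf struct. */
typedef struct BackupPerf {
    bool use_copy_range;
    int64_t max_workers;
    int64_t max_chunk;
} BackupPerf;

/* Mirrors the parameter checks the backup_job_create() hunk adds:
 * at least one worker, a non-negative max-chunk, and a non-zero
 * max-chunk no smaller than the job cluster size (max of the target
 * image cluster size and 64k).
 * Returns 0 when the parameters are acceptable, -1 otherwise. */
static int backup_perf_validate(const BackupPerf *perf, int64_t cluster_size)
{
    if (perf->max_workers < 1) {
        return -1;
    }
    if (perf->max_chunk < 0) {
        return -1;
    }
    if (perf->max_chunk && perf->max_chunk < cluster_size) {
        return -1;
    }
    return 0;
}
```

Note that max_chunk == 0 is the "unlimited" default and always passes, which is why the cluster-size comparison is guarded by `perf->max_chunk &&`.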
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Stream in stream_prepare calls bdrv_change_backing_file() to change
backing-file in the metadata of bs.

It may use either backing-file parameter given by user or just take
filename of base on job start.

Backing file format is determined by base on job finish.

There are some problems with this design, we solve only two by this
patch:

1. Consider scenario with backing-file unset. Current concept of stream
supports changing of the base during the job (we don't freeze link to
the base). So, we should not save base filename at job start,

- let's determine name of the base on job finish.

2. Using direct base to determine filename and format is not very good:
base node may be a filter, so its filename may be JSON, and format_name
is not good for storing into qcow2 metadata as backing file format.

- let's use unfiltered_base

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: change commit subject, change logic in stream_prepare]
Message-Id: <20201216061703.70908-10-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
block/stream.c | 9 +++++----
blockdev.c | 8 +-------
2 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/block/stream.c b/block/stream.c
index XXXXXXX..XXXXXXX 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -XXX,XX +XXX,XX @@ static int stream_prepare(Job *job)
BlockDriverState *bs = blk_bs(bjob->blk);
BlockDriverState *unfiltered_bs = bdrv_skip_filters(bs);
BlockDriverState *base = bdrv_filter_or_cow_bs(s->above_base);
+ BlockDriverState *unfiltered_base = bdrv_skip_filters(base);
Error *local_err = NULL;
int ret = 0;

@@ -XXX,XX +XXX,XX @@ static int stream_prepare(Job *job)
if (bdrv_cow_child(unfiltered_bs)) {
const char *base_id = NULL, *base_fmt = NULL;
- if (base) {
- base_id = s->backing_file_str;
- if (base->drv) {
- base_fmt = base->drv->format_name;
+ if (unfiltered_base) {
+ base_id = s->backing_file_str ?: unfiltered_base->filename;
+ if (unfiltered_base->drv) {
+ base_fmt = unfiltered_base->drv->format_name;
}
}
bdrv_set_backing_hd(unfiltered_bs, base, &local_err);
diff --git a/blockdev.c b/blockdev.c
index XXXXXXX..XXXXXXX 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
BlockDriverState *base_bs = NULL;
AioContext *aio_context;
Error *local_err = NULL;
- const char *base_name = NULL;
int job_flags = JOB_DEFAULT;

if (!has_on_error) {
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
goto out;
}
assert(bdrv_get_aio_context(base_bs) == aio_context);
- base_name = base;
}

if (has_base_node) {
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
}
assert(bdrv_get_aio_context(base_bs) == aio_context);
bdrv_refresh_filename(base_bs);
- base_name = base_bs->filename;
}

/* Check for op blockers in the whole chain between bs and base */
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
goto out;
}

- /* backing_file string overrides base bs filename */
- base_name = has_backing_file ? backing_file : base_name;
-
if (has_auto_finalize && !auto_finalize) {
job_flags |= JOB_MANUAL_FINALIZE;
}
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
job_flags |= JOB_MANUAL_DISMISS;
}

- stream_start(has_job_id ? job_id : NULL, bs, base_bs, base_name,
+ stream_start(has_job_id ? job_id : NULL, bs, base_bs, backing_file,
job_flags, has_speed ? speed : 0, on_error,
filter_node_name, &local_err);
if (local_err) {
--
2.29.2

fds[] is leaked when qio_channel_readv_full() fails.

Use vmsg->fds[] instead of keeping a local fds[] array. Then we can
reuse goto fail to clean up fds. vmsg->fd_num must be zeroed before the
loop to make this safe.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-8-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
util/vhost-user-server.c | 50 ++++++++++++++++++----------------------
1 file changed, 23 insertions(+), 27 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -XXX,XX +XXX,XX @@ vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
};
int rc, read_bytes = 0;
Error *local_err = NULL;
- /*
- * Store fds/nfds returned from qio_channel_readv_full into
- * temporary variables.
- *
- * VhostUserMsg is a packed structure, gcc will complain about passing
- * pointer to a packed structure member if we pass &VhostUserMsg.fd_num
- * and &VhostUserMsg.fds directly when calling qio_channel_readv_full,
- * thus two temporary variables nfds and fds are used here.
- */
- size_t nfds = 0, nfds_t = 0;
const size_t max_fds = G_N_ELEMENTS(vmsg->fds);
- int *fds_t = NULL;
VuServer *server = container_of(vu_dev, VuServer, vu_dev);
QIOChannel *ioc = server->ioc;

+ vmsg->fd_num = 0;
if (!ioc) {
error_report_err(local_err);
goto fail;
@@ -XXX,XX +XXX,XX @@ vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)

assert(qemu_in_coroutine());
do {
+ size_t nfds = 0;
+ int *fds = NULL;
+
/*
* qio_channel_readv_full may have short reads, keeping calling it
* until getting VHOST_USER_HDR_SIZE or 0 bytes in total
*/
- rc = qio_channel_readv_full(ioc, &iov, 1, &fds_t, &nfds_t, &local_err);
+ rc = qio_channel_readv_full(ioc, &iov, 1, &fds, &nfds, &local_err);
if (rc < 0) {
if (rc == QIO_CHANNEL_ERR_BLOCK) {
+ assert(local_err == NULL);
qio_channel_yield(ioc, G_IO_IN);
continue;
} else {
error_report_err(local_err);
- return false;
+ goto fail;
}
}
- read_bytes += rc;
- if (nfds_t > 0) {
- if (nfds + nfds_t > max_fds) {
+
+ if (nfds > 0) {
+ if (vmsg->fd_num + nfds > max_fds) {
error_report("A maximum of %zu fds are allowed, "
"however got %zu fds now",
- max_fds, nfds + nfds_t);
+ max_fds, vmsg->fd_num + nfds);
+ g_free(fds);
goto fail;
}
- memcpy(vmsg->fds + nfds, fds_t,
- nfds_t *sizeof(vmsg->fds[0]));
- nfds += nfds_t;
- g_free(fds_t);
+ memcpy(vmsg->fds + vmsg->fd_num, fds, nfds * sizeof(vmsg->fds[0]));
+ vmsg->fd_num += nfds;
+ g_free(fds);
}
- if (read_bytes == VHOST_USER_HDR_SIZE || rc == 0) {
- break;
+
+ if (rc == 0) { /* socket closed */
+ goto fail;
}
- iov.iov_base = (char *)vmsg + read_bytes;
- iov.iov_len = VHOST_USER_HDR_SIZE - read_bytes;
- } while (true);

- vmsg->fd_num = nfds;
+ iov.iov_base += rc;
+ iov.iov_len -= rc;
+ read_bytes += rc;
+ } while (read_bytes != VHOST_USER_HDR_SIZE);
+
/* qio_channel_readv_full will make socket fds blocking, unblock them */
vmsg_unblock_fds(vmsg);
if (vmsg->size > sizeof(vmsg->payload)) {
--
2.26.2
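The fds[] fix above accumulates each received fd batch straight into the message and frees the temporary array on every path. The accumulate-and-free pattern can be shown stand-alone; Msg, MAX_FDS, and msg_append_fds() are hypothetical reductions of VhostUserMsg, G_N_ELEMENTS(vmsg->fds), and the patched loop body, not real QEMU symbols:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

enum { MAX_FDS = 8 };   /* stands in for G_N_ELEMENTS(vmsg->fds) */

/* Reduced stand-in for VhostUserMsg's fd bookkeeping. */
typedef struct Msg {
    int fds[MAX_FDS];
    size_t fd_num;      /* must be zeroed before the receive loop */
} Msg;

/* Append a heap-allocated fd batch (as qio_channel_readv_full() returns
 * one) directly into msg->fds, the way the patch accumulates into
 * vmsg->fds. The temporary array is freed on both the success and the
 * overflow path, so it can no longer leak. Returns 0 on success, -1
 * when the batch would overflow the fixed-size array. */
static int msg_append_fds(Msg *msg, int *fds, size_t nfds)
{
    if (msg->fd_num + nfds > MAX_FDS) {
        free(fds);
        return -1;
    }
    memcpy(msg->fds + msg->fd_num, fds, nfds * sizeof(msg->fds[0]));
    msg->fd_num += nfds;
    free(fds);
    return 0;
}
```

Keeping the running count in the message itself (rather than a local `nfds`) is what lets the error paths share one cleanup label without a separate copy step at the end.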
Commit 0afec75734331 removed the 'change' QMP command, so we can no
longer test it in 118.

Fixes: 0afec75734331a0b52fa3aa4235220eda8c7846f
('qmp: remove deprecated "change" command')
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210126104833.57026-1-mreitz@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
tests/qemu-iotests/118 | 20 +-------------------
tests/qemu-iotests/118.out | 4 ++--
2 files changed, 3 insertions(+), 21 deletions(-)

diff --git a/tests/qemu-iotests/118 b/tests/qemu-iotests/118
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/118
+++ b/tests/qemu-iotests/118
@@ -XXX,XX +XXX,XX @@
#!/usr/bin/env python3
# group: rw
#
-# Test case for the QMP 'change' command and all other associated
-# commands
+# Test case for media change monitor commands
#
# Copyright (C) 2015 Red Hat, Inc.
#
@@ -XXX,XX +XXX,XX @@ class ChangeBaseClass(iotests.QMPTestCase):

class GeneralChangeTestsBaseClass(ChangeBaseClass):

- def test_change(self):
- # 'change' requires a drive name, so skip the test for blockdev
- if not self.use_drive:
- return
-
- result = self.vm.qmp('change', device='drive0', target=new_img,
- arg=iotests.imgfmt)
- self.assert_qmp(result, 'return', {})
-
- self.wait_for_open()
- self.wait_for_close()
-
- result = self.vm.qmp('query-block')
- if self.has_real_tray:
- self.assert_qmp(result, 'return[0]/tray_open', False)
- self.assert_qmp(result, 'return[0]/inserted/image/filename', new_img)
-
def test_blockdev_change_medium(self):
result = self.vm.qmp('blockdev-change-medium',
id=self.device_name, filename=new_img,
diff --git a/tests/qemu-iotests/118.out b/tests/qemu-iotests/118.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/118.out
+++ b/tests/qemu-iotests/118.out
@@ -XXX,XX +XXX,XX @@
-.......................................................................................................................................................................
+...........................................................................................................................................................
----------------------------------------------------------------------
-Ran 167 tests
+Ran 155 tests

OK
--
2.29.2

Unexpected EOF is an error that must be reported.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-9-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
util/vhost-user-server.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -XXX,XX +XXX,XX @@ vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
};
if (vmsg->size) {
rc = qio_channel_readv_all_eof(ioc, &iov_payload, 1, &local_err);
- if (rc == -1) {
- error_report_err(local_err);
+ if (rc != 1) {
+ if (local_err) {
+ error_report_err(local_err);
+ }
goto fail;
}
}
--
2.26.2
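The EOF patch above hinges on qio_channel_readv_all_eof() returning 1 on success, 0 on EOF, and a negative value on error; only the success value may continue. The tightened check can be sketched in isolation (payload_read_ok() is a hypothetical helper mirroring the new condition, not a QEMU function):

```c
#include <assert.h>

/* qio_channel_readv_all_eof() returns 1 on success, 0 on EOF and a
 * negative value on error. The old check only caught -1, so a peer
 * that closed the socket mid-message (rc == 0) slipped past as if the
 * payload had arrived. This mirrors the tightened "rc != 1" check;
 * note that on the EOF path there is no Error object, which is why the
 * patch also guards error_report_err() with "if (local_err)". */
static int payload_read_ok(int rc)
{
    if (rc != 1) {
        return -1;   /* unexpected EOF and errors both take the fail path */
    }
    return 0;
}
```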
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Refactor common path to use BlockCopyCallState pointer as parameter, to
prepare it for use in asynchronous block-copy (at least, we'll need to
run block-copy in a coroutine, passing the whole parameters as one
pointer).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-3-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
block/block-copy.c | 51 ++++++++++++++++++++++++++++++++++------------
1 file changed, 38 insertions(+), 13 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index XXXXXXX..XXXXXXX 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -XXX,XX +XXX,XX @@
static coroutine_fn int block_copy_task_entry(AioTask *task);

typedef struct BlockCopyCallState {
+ /* IN parameters */
+ BlockCopyState *s;
+ int64_t offset;
+ int64_t bytes;
+
+ /* State */
+
+ /* OUT parameters */
bool error_is_read;
} BlockCopyCallState;

@@ -XXX,XX +XXX,XX @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
* Returns 1 if dirty clusters found and successfully copied, 0 if no dirty
* clusters found and -errno on failure.

The vu_client_trip() coroutine is leaked during AioContext switching. It
is also unsafe to destroy the vu_dev in panic_cb() since its callers
still access it in some cases.

Rework the lifecycle to solve these safety issues.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-10-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
util/vhost-user-server.h | 29 ++--
block/export/vhost-user-blk-server.c | 9 +-
util/vhost-user-server.c | 245 +++++++++++++++------------
3 files changed, 155 insertions(+), 128 deletions(-)

diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.h
+++ b/util/vhost-user-server.h
@@ -XXX,XX +XXX,XX @@
#include "qapi/error.h"
#include "standard-headers/linux/virtio_blk.h"

+/* A kick fd that we monitor on behalf of libvhost-user */
typedef struct VuFdWatch {
VuDev *vu_dev;
int fd; /*kick fd*/
void *pvt;
vu_watch_cb cb;
bool failed;
- bool processing;
QTAILQ_ENTRY(VuFdWatch) next;
} VuFdWatch;

-typedef struct VuServer VuServer;
-
-struct VuServer {
+/**
+ * VuServer:
+ * A vhost-user server instance with user-defined VuDevIface callbacks.
+ * Vhost-user device backends can be implemented using VuServer. VuDevIface
+ * callbacks and virtqueue kicks run in the given AioContext.
+ */
+typedef struct {
QIONetListener *listener;
+ QEMUBH *restart_listener_bh;
AioContext *ctx;
int max_queues;
const VuDevIface *vu_iface;
const VuDevIface *vu_iface;
49
+
50
+ /* Protected by ctx lock */
51
VuDev vu_dev;
52
QIOChannel *ioc; /* The I/O channel with the client */
53
QIOChannelSocket *sioc; /* The underlying data channel with the client */
54
- /* IOChannel for fd provided via VHOST_USER_SET_SLAVE_REQ_FD */
55
- QIOChannel *ioc_slave;
56
- QIOChannelSocket *sioc_slave;
57
- Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
58
QTAILQ_HEAD(, VuFdWatch) vu_fd_watches;
59
- /* restart coroutine co_trip if AIOContext is changed */
60
- bool aio_context_changed;
61
- bool processing_msg;
62
-};
63
+
64
+ Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
65
+} VuServer;
66
67
bool vhost_user_server_start(VuServer *server,
68
SocketAddress *unix_socket,
69
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
70
71
void vhost_user_server_stop(VuServer *server);
72
73
-void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx);
74
+void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
75
+void vhost_user_server_detach_aio_context(VuServer *server);
76
77
#endif /* VHOST_USER_SERVER_H */
78
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/block/export/vhost-user-blk-server.c
81
+++ b/block/export/vhost-user-blk-server.c
82
@@ -XXX,XX +XXX,XX @@ static const VuDevIface vu_block_iface = {
83
static void blk_aio_attached(AioContext *ctx, void *opaque)
84
{
85
VuBlockDev *vub_dev = opaque;
86
- aio_context_acquire(ctx);
87
- vhost_user_server_set_aio_context(&vub_dev->vu_server, ctx);
88
- aio_context_release(ctx);
89
+ vhost_user_server_attach_aio_context(&vub_dev->vu_server, ctx);
90
}
91
92
static void blk_aio_detach(void *opaque)
93
{
94
VuBlockDev *vub_dev = opaque;
95
- AioContext *ctx = vub_dev->vu_server.ctx;
96
- aio_context_acquire(ctx);
97
- vhost_user_server_set_aio_context(&vub_dev->vu_server, NULL);
98
- aio_context_release(ctx);
99
+ vhost_user_server_detach_aio_context(&vub_dev->vu_server);
100
}
101
102
static void
103
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
104
index XXXXXXX..XXXXXXX 100644
105
--- a/util/vhost-user-server.c
106
+++ b/util/vhost-user-server.c
107
@@ -XXX,XX +XXX,XX @@
39
*/
108
*/
40
-static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
109
#include "qemu/osdep.h"
41
- int64_t offset, int64_t bytes,
110
#include "qemu/main-loop.h"
42
- bool *error_is_read)
111
+#include "block/aio-wait.h"
43
+static int coroutine_fn
112
#include "vhost-user-server.h"
44
+block_copy_dirty_clusters(BlockCopyCallState *call_state)
113
45
{
114
+/*
46
+ BlockCopyState *s = call_state->s;
115
+ * Theory of operation:
47
+ int64_t offset = call_state->offset;
116
+ *
48
+ int64_t bytes = call_state->bytes;
117
+ * VuServer is started and stopped by vhost_user_server_start() and
49
+
118
+ * vhost_user_server_stop() from the main loop thread. Starting the server
50
int ret = 0;
119
+ * opens a vhost-user UNIX domain socket and listens for incoming connections.
51
bool found_dirty = false;
120
+ * Only one connection is allowed at a time.
52
int64_t end = offset + bytes;
121
+ *
53
AioTaskPool *aio = NULL;
122
+ * The connection is handled by the vu_client_trip() coroutine in the
54
- BlockCopyCallState call_state = {false, false};
123
+ * VuServer->ctx AioContext. The coroutine consists of a vu_dispatch() loop
55
124
+ * where libvhost-user calls vu_message_read() to receive the next vhost-user
56
/*
125
+ * protocol messages over the UNIX domain socket.
57
* block_copy() user is responsible for keeping source and target in same
126
+ *
58
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
127
+ * When virtqueues are set up libvhost-user calls set_watch() to monitor kick
59
BlockCopyTask *task;
128
+ * fds. These fds are also handled in the VuServer->ctx AioContext.
60
int64_t status_bytes;
129
+ *
61
130
+ * Both vu_client_trip() and kick fd monitoring can be stopped by shutting down
62
- task = block_copy_task_create(s, &call_state, offset, bytes);
131
+ * the socket connection. Shutting down the socket connection causes
63
+ task = block_copy_task_create(s, call_state, offset, bytes);
132
+ * vu_message_read() to fail since no more data can be received from the socket.
64
if (!task) {
133
+ * After vu_dispatch() fails, vu_client_trip() calls vu_deinit() to stop
65
/* No more dirty bits in the bitmap */
134
+ * libvhost-user before terminating the coroutine. vu_deinit() calls
66
trace_block_copy_skip_range(s, offset, bytes);
135
+ * remove_watch() to stop monitoring kick fds and this stops virtqueue
67
@@ -XXX,XX +XXX,XX @@ out:
136
+ * processing.
68
137
+ *
69
aio_task_pool_free(aio);
138
+ * When vu_client_trip() has finished cleaning up it schedules a BH in the main
70
}
139
+ * loop thread to accept the next client connection.
71
- if (error_is_read && ret < 0) {
140
+ *
72
- *error_is_read = call_state.error_is_read;
141
+ * When libvhost-user detects an error it calls panic_cb() and sets the
142
+ * dev->broken flag. Both vu_client_trip() and kick fd processing stop when
143
+ * the dev->broken flag is set.
144
+ *
145
+ * It is possible to switch AioContexts using
146
+ * vhost_user_server_detach_aio_context() and
147
+ * vhost_user_server_attach_aio_context(). They stop monitoring fds in the old
148
+ * AioContext and resume monitoring in the new AioContext. The vu_client_trip()
149
+ * coroutine remains in a yielded state during the switch. This is made
150
+ * possible by QIOChannel's support for spurious coroutine re-entry in
151
+ * qio_channel_yield(). The coroutine will restart I/O when re-entered from the
152
+ * new AioContext.
153
+ */
154
+
155
static void vmsg_close_fds(VhostUserMsg *vmsg)
156
{
157
int i;
158
@@ -XXX,XX +XXX,XX @@ static void vmsg_unblock_fds(VhostUserMsg *vmsg)
159
}
160
}
161
162
-static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
163
- gpointer opaque);
164
-
165
-static void close_client(VuServer *server)
166
-{
167
- /*
168
- * Before closing the client
169
- *
170
- * 1. Let vu_client_trip stop processing new vhost-user msg
171
- *
172
- * 2. remove kick_handler
173
- *
174
- * 3. wait for the kick handler to be finished
175
- *
176
- * 4. wait for the current vhost-user msg to be finished processing
177
- */
178
-
179
- QIOChannelSocket *sioc = server->sioc;
180
- /* When this is set vu_client_trip will stop new processing vhost-user message */
181
- server->sioc = NULL;
182
-
183
- while (server->processing_msg) {
184
- if (server->ioc->read_coroutine) {
185
- server->ioc->read_coroutine = NULL;
186
- qio_channel_set_aio_fd_handler(server->ioc, server->ioc->ctx, NULL,
187
- NULL, server->ioc);
188
- server->processing_msg = false;
189
- }
73
- }
190
- }
74
191
-
75
return ret < 0 ? ret : found_dirty;
192
- vu_deinit(&server->vu_dev);
76
}
193
-
194
- /* vu_deinit() should have called remove_watch() */
195
- assert(QTAILQ_EMPTY(&server->vu_fd_watches));
196
-
197
- object_unref(OBJECT(sioc));
198
- object_unref(OBJECT(server->ioc));
199
-}
200
-
201
static void panic_cb(VuDev *vu_dev, const char *buf)
202
{
203
- VuServer *server = container_of(vu_dev, VuServer, vu_dev);
204
-
205
- /* avoid while loop in close_client */
206
- server->processing_msg = false;
207
-
208
- if (buf) {
209
- error_report("vu_panic: %s", buf);
210
- }
211
-
212
- if (server->sioc) {
213
- close_client(server);
214
- }
215
-
216
- /*
217
- * Set the callback function for network listener so another
218
- * vhost-user client can connect to this server
219
- */
220
- qio_net_listener_set_client_func(server->listener,
221
- vu_accept,
222
- server,
223
- NULL);
224
+ error_report("vu_panic: %s", buf);
225
}
226
227
static bool coroutine_fn
228
@@ -XXX,XX +XXX,XX @@ fail:
229
return false;
230
}
231
232
-
233
-static void vu_client_start(VuServer *server);
234
static coroutine_fn void vu_client_trip(void *opaque)
235
{
236
VuServer *server = opaque;
237
+ VuDev *vu_dev = &server->vu_dev;
238
239
- while (!server->aio_context_changed && server->sioc) {
240
- server->processing_msg = true;
241
- vu_dispatch(&server->vu_dev);
242
- server->processing_msg = false;
243
+ while (!vu_dev->broken && vu_dispatch(vu_dev)) {
244
+ /* Keep running */
245
}
246
247
- if (server->aio_context_changed && server->sioc) {
248
- server->aio_context_changed = false;
249
- vu_client_start(server);
250
- }
251
-}
252
+ vu_deinit(vu_dev);
253
+
254
+ /* vu_deinit() should have called remove_watch() */
255
+ assert(QTAILQ_EMPTY(&server->vu_fd_watches));
256
+
257
+ object_unref(OBJECT(server->sioc));
258
+ server->sioc = NULL;
259
260
-static void vu_client_start(VuServer *server)
261
-{
262
- server->co_trip = qemu_coroutine_create(vu_client_trip, server);
263
- aio_co_enter(server->ctx, server->co_trip);
264
+ object_unref(OBJECT(server->ioc));
265
+ server->ioc = NULL;
266
+
267
+ server->co_trip = NULL;
268
+ if (server->restart_listener_bh) {
269
+ qemu_bh_schedule(server->restart_listener_bh);
270
+ }
271
+ aio_wait_kick();
272
}
77
273
78
/*
274
/*
79
- * block_copy
275
@@ -XXX,XX +XXX,XX @@ static void vu_client_start(VuServer *server)
80
+ * block_copy_common
276
static void kick_handler(void *opaque)
81
*
277
{
82
* Copy requested region, accordingly to dirty bitmap.
278
VuFdWatch *vu_fd_watch = opaque;
83
* Collaborate with parallel block_copy requests: if they succeed it will help
279
- vu_fd_watch->processing = true;
84
@@ -XXX,XX +XXX,XX @@ out:
280
- vu_fd_watch->cb(vu_fd_watch->vu_dev, 0, vu_fd_watch->pvt);
85
* it means that some I/O operation failed in context of _this_ block_copy call,
281
- vu_fd_watch->processing = false;
86
* not some parallel operation.
282
+ VuDev *vu_dev = vu_fd_watch->vu_dev;
87
*/
283
+
88
-int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
284
+ vu_fd_watch->cb(vu_dev, 0, vu_fd_watch->pvt);
89
- bool *error_is_read)
285
+
90
+static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
286
+ /* Stop vu_client_trip() if an error occurred in vu_fd_watch->cb() */
91
{
287
+ if (vu_dev->broken) {
92
int ret;
288
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
93
289
+
94
do {
290
+ qio_channel_shutdown(server->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
95
- ret = block_copy_dirty_clusters(s, offset, bytes, error_is_read);
291
+ }
96
+ ret = block_copy_dirty_clusters(call_state);
292
}
97
293
98
if (ret == 0) {
294
-
99
- ret = block_copy_wait_one(s, offset, bytes);
295
static VuFdWatch *find_vu_fd_watch(VuServer *server, int fd)
100
+ ret = block_copy_wait_one(call_state->s, call_state->offset,
296
{
101
+ call_state->bytes);
297
102
}
298
@@ -XXX,XX +XXX,XX @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
103
299
qio_channel_set_name(QIO_CHANNEL(sioc), "vhost-user client");
104
/*
300
server->ioc = QIO_CHANNEL(sioc);
105
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
301
object_ref(OBJECT(server->ioc));
106
return ret;
302
- qio_channel_attach_aio_context(server->ioc, server->ctx);
107
}
303
+
108
304
+ /* TODO vu_message_write() spins if non-blocking! */
109
+int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
305
qio_channel_set_blocking(server->ioc, false, NULL);
110
+ bool *error_is_read)
306
- vu_client_start(server);
307
+
308
+ server->co_trip = qemu_coroutine_create(vu_client_trip, server);
309
+
310
+ aio_context_acquire(server->ctx);
311
+ vhost_user_server_attach_aio_context(server, server->ctx);
312
+ aio_context_release(server->ctx);
313
}
314
315
-
316
void vhost_user_server_stop(VuServer *server)
317
{
318
+ aio_context_acquire(server->ctx);
319
+
320
+ qemu_bh_delete(server->restart_listener_bh);
321
+ server->restart_listener_bh = NULL;
322
+
323
if (server->sioc) {
324
- close_client(server);
325
+ VuFdWatch *vu_fd_watch;
326
+
327
+ QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
328
+ aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
329
+ NULL, NULL, NULL, vu_fd_watch);
330
+ }
331
+
332
+ qio_channel_shutdown(server->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
333
+
334
+ AIO_WAIT_WHILE(server->ctx, server->co_trip);
335
}
336
337
+ aio_context_release(server->ctx);
338
+
339
if (server->listener) {
340
qio_net_listener_disconnect(server->listener);
341
object_unref(OBJECT(server->listener));
342
}
343
+}
344
+
345
+/*
346
+ * Allow the next client to connect to the server. Called from a BH in the main
347
+ * loop.
348
+ */
349
+static void restart_listener_bh(void *opaque)
111
+{
350
+{
112
+ BlockCopyCallState call_state = {
351
+ VuServer *server = opaque;
113
+ .s = s,
352
114
+ .offset = start,
353
+ qio_net_listener_set_client_func(server->listener, vu_accept, server,
115
+ .bytes = bytes,
354
+ NULL);
116
+ };
355
}
117
+
356
118
+ int ret = block_copy_common(&call_state);
357
-void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx)
119
+
358
+/* Called with ctx acquired */
120
+ if (error_is_read && ret < 0) {
359
+void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
121
+ *error_is_read = call_state.error_is_read;
360
{
361
- VuFdWatch *vu_fd_watch, *next;
362
- void *opaque = NULL;
363
- IOHandler *io_read = NULL;
364
- bool attach;
365
+ VuFdWatch *vu_fd_watch;
366
367
- server->ctx = ctx ? ctx : qemu_get_aio_context();
368
+ server->ctx = ctx;
369
370
if (!server->sioc) {
371
- /* not yet serving any client*/
372
return;
373
}
374
375
- if (ctx) {
376
- qio_channel_attach_aio_context(server->ioc, ctx);
377
- server->aio_context_changed = true;
378
- io_read = kick_handler;
379
- attach = true;
380
- } else {
381
+ qio_channel_attach_aio_context(server->ioc, ctx);
382
+
383
+ QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
384
+ aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
385
+ NULL, vu_fd_watch);
122
+ }
386
+ }
123
+
387
+
124
+ return ret;
388
+ aio_co_schedule(ctx, server->co_trip);
125
+}
389
+}
126
+
390
+
127
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
391
+/* Called with server->ctx acquired */
128
{
392
+void vhost_user_server_detach_aio_context(VuServer *server)
129
return s->copy_bitmap;
393
+{
394
+ if (server->sioc) {
395
+ VuFdWatch *vu_fd_watch;
396
+
397
+ QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
398
+ aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
399
+ NULL, NULL, NULL, vu_fd_watch);
400
+ }
401
+
402
qio_channel_detach_aio_context(server->ioc);
403
- /* server->ioc->ctx keeps the old AioConext */
404
- ctx = server->ioc->ctx;
405
- attach = false;
406
}
407
408
- QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
409
- if (vu_fd_watch->cb) {
410
- opaque = attach ? vu_fd_watch : NULL;
411
- aio_set_fd_handler(ctx, vu_fd_watch->fd, true,
412
- io_read, NULL, NULL,
413
- opaque);
414
- }
415
- }
416
+ server->ctx = NULL;
417
}
418
419
-
420
bool vhost_user_server_start(VuServer *server,
421
SocketAddress *socket_addr,
422
AioContext *ctx,
423
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
424
const VuDevIface *vu_iface,
425
Error **errp)
426
{
427
+ QEMUBH *bh;
428
QIONetListener *listener = qio_net_listener_new();
429
if (qio_net_listener_open_sync(listener, socket_addr, 1,
430
errp) < 0) {
431
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
432
return false;
433
}
434
435
+ bh = qemu_bh_new(restart_listener_bh, server);
436
+
437
/* zero out unspecified fields */
438
*server = (VuServer) {
439
.listener = listener,
440
+ .restart_listener_bh = bh,
441
.vu_iface = vu_iface,
442
.max_queues = max_queues,
443
.ctx = ctx,
130
--
444
--
131
2.29.2
445
2.26.2
132
446
133
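Both patches above recover an outer struct from a pointer to an embedded member: `panic_cb()` and `kick_handler()` use `container_of(vu_dev, VuServer, vu_dev)`. A self-contained sketch of that pattern (the struct names here are stand-ins, not the real QEMU types, which also carry many more fields):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal container_of: subtract the member's offset from the member
 * pointer to get back the enclosing struct. Assumes the member is
 * embedded by value, not referenced through a pointer. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

typedef struct {
    int broken;
} VuDevLike;

typedef struct {
    int id;
    VuDevLike vu_dev;   /* embedded member, like VuServer.vu_dev */
} VuServerLike;

/* Mirrors what panic_cb()/kick_handler() do with the VuDev they are given */
static VuServerLike *server_from_dev(VuDevLike *dev)
{
    return container_of(dev, VuServerLike, vu_dev);
}
```

This is why libvhost-user callbacks only need the `VuDev *`: as long as the `VuDev` lives inside the server struct, the server is always reachable without a separate back-pointer.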
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-21-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h |  2 +-
 block/backup-top.c         |  2 +-
 block/block-copy.c         | 10 ++--------
 3 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -XXX,XX +XXX,XX @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
                                      int64_t offset, int64_t *count);
 
 int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
-                            bool ignore_ratelimit, bool *error_is_read);
+                            bool ignore_ratelimit);
 
 /*
  * Run block-copy in a coroutine, create corresponding BlockCopyCallState
diff --git a/block/backup-top.c b/block/backup-top.c
index XXXXXXX..XXXXXXX 100644
--- a/block/backup-top.c
+++ b/block/backup-top.c
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int backup_top_cbw(BlockDriverState *bs, uint64_t offset,
     off = QEMU_ALIGN_DOWN(offset, s->cluster_size);
     end = QEMU_ALIGN_UP(offset + bytes, s->cluster_size);
 
-    return block_copy(s->bcs, off, end - off, true, NULL);
+    return block_copy(s->bcs, off, end - off, true);
 }
 
 static int coroutine_fn backup_top_co_pdiscard(BlockDriverState *bs,
diff --git a/block/block-copy.c b/block/block-copy.c
index XXXXXXX..XXXXXXX 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
 }
 
 int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
-                            bool ignore_ratelimit, bool *error_is_read)
+                            bool ignore_ratelimit)
 {
     BlockCopyCallState call_state = {
         .s = s,
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
         .max_workers = BLOCK_COPY_MAX_WORKERS,
     };
 
-    int ret = block_copy_common(&call_state);
-
-    if (error_is_read && ret < 0) {
-        *error_is_read = call_state.error_is_read;
-    }
-
-    return ret;
+    return block_copy_common(&call_state);
 }
 
 static void coroutine_fn block_copy_async_co_entry(void *opaque)
-- 
2.29.2

Propagate the flush return value since errors are possible.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-11-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index XXXXXXX..XXXXXXX 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -XXX,XX +XXX,XX @@ vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
     return -EINVAL;
 }
 
-static void coroutine_fn vu_block_flush(VuBlockReq *req)
+static int coroutine_fn vu_block_flush(VuBlockReq *req)
 {
     VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
     BlockBackend *backend = vdev_blk->backend;
-    blk_co_flush(backend);
+    return blk_co_flush(backend);
 }
 
 static void coroutine_fn vu_block_virtio_process_req(void *opaque)
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
         break;
     }
     case VIRTIO_BLK_T_FLUSH:
-        vu_block_flush(req);
-        req->in->status = VIRTIO_BLK_S_OK;
+        if (vu_block_flush(req) == 0) {
+            req->in->status = VIRTIO_BLK_S_OK;
+        } else {
+            req->in->status = VIRTIO_BLK_S_IOERR;
+        }
         break;
     case VIRTIO_BLK_T_GET_ID: {
         size_t size = MIN(iov_size(&elem->in_sg[0], in_num),
-- 
2.26.2
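The flush fix above maps `blk_co_flush()`'s result (0 on success, negative errno on failure) onto the one-byte virtio-blk request status. A minimal sketch of that mapping (the status values 0 and 1 match the VIRTIO specification's `VIRTIO_BLK_S_OK`/`VIRTIO_BLK_S_IOERR`; the helper name is invented for illustration):

```c
#include <assert.h>

/* Status byte values as defined by the VIRTIO block device spec */
enum {
    VIRTIO_BLK_S_OK    = 0,
    VIRTIO_BLK_S_IOERR = 1,
};

/* flush_result stands in for blk_co_flush()'s return value:
 * 0 on success, a negative errno on failure. */
static unsigned char status_from_flush(int flush_result)
{
    return flush_result == 0 ? VIRTIO_BLK_S_OK : VIRTIO_BLK_S_IOERR;
}
```

Before the patch the status was unconditionally `VIRTIO_BLK_S_OK`, so a failed flush was silently reported to the guest as successful; collapsing the errno into a pass/fail status byte is all the virtio protocol allows here.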
1
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
1
Use the new QAPI block exports API instead of defining our own QOM
2
2
objects.
3
If the flag BDRV_REQ_PREFETCH was set, skip idling read/write
3
4
operations in COR-driver. It can be taken into account for the
4
This is a large change because the lifecycle of VuBlockDev needs to
5
COR-algorithms optimization. That check is being made during the
5
follow BlockExportDriver. QOM properties are replaced by QAPI options
6
block stream job by the moment.
6
objects.
7
7
8
Add the BDRV_REQ_PREFETCH flag to the supported_read_flags of the
8
VuBlockDev is renamed VuBlkExport and contains a BlockExport field.
9
COR-filter.
9
Several fields can be dropped since BlockExport already has equivalents.
10
10
11
block: Modify the comment for the flag BDRV_REQ_PREFETCH as we are
11
The file names and meson build integration will be adjusted in a future
12
going to use it alone and pass it to the COR-filter driver for further
12
patch. libvhost-user should probably be built as a static library that
13
processing.
13
is linked into QEMU instead of as a .c file that results in duplicate
14
14
compilation.
15
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
15
16
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
16
The new command-line syntax is:
17
Reviewed-by: Max Reitz <mreitz@redhat.com>
17
18
Message-Id: <20201216061703.70908-9-vsementsov@virtuozzo.com>
18
$ qemu-storage-daemon \
19
Signed-off-by: Max Reitz <mreitz@redhat.com>
19
--blockdev file,node-name=drive0,filename=test.img \
20
--export vhost-user-blk,node-name=drive0,id=export0,unix-socket=/tmp/vhost-user-blk.sock
21
22
Note that unix-socket is optional because we may wish to accept chardevs
23
too in the future.
24
25
Markus noted that supported address families are not explicit in the
26
QAPI schema. It is unlikely that support for more address families will
27
be added since file descriptor passing is required and few address
28
families support it. If a new address family needs to be added, then the
29
QAPI 'features' syntax can be used to advertize them.
30
31
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
32
Acked-by: Markus Armbruster <armbru@redhat.com>
33
Message-id: 20200924151549.913737-12-stefanha@redhat.com
34
[Skip test on big-endian host architectures because this device doesn't
35
support them yet (as already mentioned in a code comment).
36
--Stefan]
37
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
20
---
38
---
21
include/block/block.h | 8 +++++---
39
qapi/block-export.json | 21 +-
22
block/copy-on-read.c | 14 ++++++++++----
40
block/export/vhost-user-blk-server.h | 23 +-
23
2 files changed, 15 insertions(+), 7 deletions(-)
41
block/export/export.c | 6 +
24
42
block/export/vhost-user-blk-server.c | 452 +++++++--------------------
25
diff --git a/include/block/block.h b/include/block/block.h
43
util/vhost-user-server.c | 10 +-
44
block/export/meson.build | 1 +
45
block/meson.build | 1 -
46
7 files changed, 156 insertions(+), 358 deletions(-)
47
48
diff --git a/qapi/block-export.json b/qapi/block-export.json
26
index XXXXXXX..XXXXXXX 100644
49
index XXXXXXX..XXXXXXX 100644
27
--- a/include/block/block.h
50
--- a/qapi/block-export.json
28
+++ b/include/block/block.h
51
+++ b/qapi/block-export.json
29
@@ -XXX,XX +XXX,XX @@ typedef enum {
52
@@ -XXX,XX +XXX,XX @@
30
BDRV_REQ_NO_FALLBACK = 0x100,
53
'data': { '*name': 'str', '*description': 'str',
31
54
'*bitmap': 'str' } }
32
/*
55
33
- * BDRV_REQ_PREFETCH may be used only together with BDRV_REQ_COPY_ON_READ
56
+##
34
- * on read request and means that caller doesn't really need data to be
57
+# @BlockExportOptionsVhostUserBlk:
35
- * written to qiov parameter which may be NULL.
58
+#
36
+ * BDRV_REQ_PREFETCH makes sense only in the context of copy-on-read
59
+# A vhost-user-blk block export.
37
+ * (i.e., together with the BDRV_REQ_COPY_ON_READ flag or when a COR
60
+#
38
+ * filter is involved), in which case it signals that the COR operation
61
+# @addr: The vhost-user socket on which to listen. Both 'unix' and 'fd'
39
+ * need not read the data into memory (qiov) but only ensure they are
62
+# SocketAddress types are supported. Passed fds must be UNIX domain
40
+ * copied to the top layer (i.e., that COR operation is done).
63
+# sockets.
41
*/
64
+# @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
42
BDRV_REQ_PREFETCH = 0x200,
65
+#
43
66
+# Since: 5.2
44
diff --git a/block/copy-on-read.c b/block/copy-on-read.c
67
+##
68
+{ 'struct': 'BlockExportOptionsVhostUserBlk',
69
+ 'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
70
+
71
##
72
# @NbdServerAddOptions:
73
#
74
@@ -XXX,XX +XXX,XX @@
75
# An enumeration of block export types
76
#
77
# @nbd: NBD export
78
+# @vhost-user-blk: vhost-user-blk export (since 5.2)
79
#
80
# Since: 4.2
81
##
82
{ 'enum': 'BlockExportType',
83
- 'data': [ 'nbd' ] }
84
+ 'data': [ 'nbd', 'vhost-user-blk' ] }
85
86
##
87
# @BlockExportOptions:
88
@@ -XXX,XX +XXX,XX @@
89
'*writethrough': 'bool' },
90
'discriminator': 'type',
91
'data': {
92
- 'nbd': 'BlockExportOptionsNbd'
93
+ 'nbd': 'BlockExportOptionsNbd',
94
+ 'vhost-user-blk': 'BlockExportOptionsVhostUserBlk'
95
} }
96
97
##
98
diff --git a/block/export/vhost-user-blk-server.h b/block/export/vhost-user-blk-server.h
45
index XXXXXXX..XXXXXXX 100644
99
index XXXXXXX..XXXXXXX 100644
46
--- a/block/copy-on-read.c
100
--- a/block/export/vhost-user-blk-server.h
47
+++ b/block/copy-on-read.c
101
+++ b/block/export/vhost-user-blk-server.h
48
@@ -XXX,XX +XXX,XX @@ static int cor_open(BlockDriverState *bs, QDict *options, int flags,
102
@@ -XXX,XX +XXX,XX @@
103
104
#ifndef VHOST_USER_BLK_SERVER_H
105
#define VHOST_USER_BLK_SERVER_H
106
-#include "util/vhost-user-server.h"
107
108
-typedef struct VuBlockDev VuBlockDev;
109
-#define TYPE_VHOST_USER_BLK_SERVER "vhost-user-blk-server"
110
-#define VHOST_USER_BLK_SERVER(obj) \
111
- OBJECT_CHECK(VuBlockDev, obj, TYPE_VHOST_USER_BLK_SERVER)
112
+#include "block/export.h"
113
114
-/* vhost user block device */
115
-struct VuBlockDev {
116
- Object parent_obj;
117
- char *node_name;
118
- SocketAddress *addr;
119
- AioContext *ctx;
120
- VuServer vu_server;
121
- bool running;
122
- uint32_t blk_size;
123
- BlockBackend *backend;
124
- QIOChannelSocket *sioc;
125
- QTAILQ_ENTRY(VuBlockDev) next;
126
- struct virtio_blk_config blkcfg;
127
- bool writable;
128
-};
129
+/* For block/export/export.c */
130
+extern const BlockExportDriver blk_exp_vhost_user_blk;
131
132
#endif /* VHOST_USER_BLK_SERVER_H */
133
diff --git a/block/export/export.c b/block/export/export.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/block/export/export.c
136
+++ b/block/export/export.c
137
@@ -XXX,XX +XXX,XX @@
138
#include "sysemu/block-backend.h"
139
#include "block/export.h"
140
#include "block/nbd.h"
141
+#if CONFIG_LINUX
142
+#include "block/export/vhost-user-blk-server.h"
143
+#endif
144
#include "qapi/error.h"
145
#include "qapi/qapi-commands-block-export.h"
146
#include "qapi/qapi-events-block-export.h"
147
@@ -XXX,XX +XXX,XX @@
148
149
static const BlockExportDriver *blk_exp_drivers[] = {
150
&blk_exp_nbd,
151
+#if CONFIG_LINUX
152
+ &blk_exp_vhost_user_blk,
153
+#endif
154
};
155
156
/* Only accessed from the main thread */
157
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/block/export/vhost-user-blk-server.c
160
+++ b/block/export/vhost-user-blk-server.c
161
@@ -XXX,XX +XXX,XX @@
162
*/
163
#include "qemu/osdep.h"
164
#include "block/block.h"
165
+#include "contrib/libvhost-user/libvhost-user.h"
166
+#include "standard-headers/linux/virtio_blk.h"
167
+#include "util/vhost-user-server.h"
168
#include "vhost-user-blk-server.h"
169
#include "qapi/error.h"
170
#include "qom/object_interfaces.h"
171
@@ -XXX,XX +XXX,XX @@ struct virtio_blk_inhdr {
172
unsigned char status;
173
};
174
175
-typedef struct VuBlockReq {
176
+typedef struct VuBlkReq {
177
VuVirtqElement elem;
178
int64_t sector_num;
179
size_t size;
180
@@ -XXX,XX +XXX,XX @@ typedef struct VuBlockReq {
181
struct virtio_blk_outhdr out;
182
VuServer *server;
183
struct VuVirtq *vq;
184
-} VuBlockReq;
185
+} VuBlkReq;
186
187
-static void vu_block_req_complete(VuBlockReq *req)
188
+/* vhost user block device */
189
+typedef struct {
190
+ BlockExport export;
191
+ VuServer vu_server;
192
+ uint32_t blk_size;
193
+ QIOChannelSocket *sioc;
194
+ struct virtio_blk_config blkcfg;
195
+ bool writable;
196
+} VuBlkExport;
197
+
198
+static void vu_blk_req_complete(VuBlkReq *req)
199
{
200
VuDev *vu_dev = &req->server->vu_dev;
201
202
@@ -XXX,XX +XXX,XX @@ static void vu_block_req_complete(VuBlockReq *req)
203
free(req);
204
}
205
206
-static VuBlockDev *get_vu_block_device_by_server(VuServer *server)
207
-{
208
- return container_of(server, VuBlockDev, vu_server);
209
-}
210
-
211
static int coroutine_fn
212
-vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
213
- uint32_t iovcnt, uint32_t type)
214
+vu_blk_discard_write_zeroes(BlockBackend *blk, struct iovec *iov,
215
+ uint32_t iovcnt, uint32_t type)
216
{
217
struct virtio_blk_discard_write_zeroes desc;
218
ssize_t size = iov_to_buf(iov, iovcnt, 0, &desc, sizeof(desc));
219
@@ -XXX,XX +XXX,XX @@ vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
         return -EINVAL;
     }
 
-    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
     uint64_t range[2] = { le64_to_cpu(desc.sector) << 9,
                           le32_to_cpu(desc.num_sectors) << 9 };
     if (type == VIRTIO_BLK_T_DISCARD) {
-        if (blk_co_pdiscard(vdev_blk->backend, range[0], range[1]) == 0) {
+        if (blk_co_pdiscard(blk, range[0], range[1]) == 0) {
             return 0;
         }
     } else if (type == VIRTIO_BLK_T_WRITE_ZEROES) {
-        if (blk_co_pwrite_zeroes(vdev_blk->backend,
-                                 range[0], range[1], 0) == 0) {
+        if (blk_co_pwrite_zeroes(blk, range[0], range[1], 0) == 0) {
             return 0;
         }
     }
@@ -XXX,XX +XXX,XX @@ vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
     return -EINVAL;
 }
 
-static int coroutine_fn vu_block_flush(VuBlockReq *req)
+static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
-    VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
-    BlockBackend *backend = vdev_blk->backend;
-    return blk_co_flush(backend);
-}
-
-static void coroutine_fn vu_block_virtio_process_req(void *opaque)
-{
-    VuBlockReq *req = opaque;
+    VuBlkReq *req = opaque;
     VuServer *server = req->server;
     VuVirtqElement *elem = &req->elem;
     uint32_t type;
 
-    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
-    BlockBackend *backend = vdev_blk->backend;
+    VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
+    BlockBackend *blk = vexp->export.blk;
 
     struct iovec *in_iov = elem->in_sg;
     struct iovec *out_iov = elem->out_sg;
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
     bool is_write = type & VIRTIO_BLK_T_OUT;
     req->sector_num = le64_to_cpu(req->out.sector);
 
-    int64_t offset = req->sector_num * vdev_blk->blk_size;
+    if (is_write && !vexp->writable) {
+        req->in->status = VIRTIO_BLK_S_IOERR;
+        break;
+    }
+
+    int64_t offset = req->sector_num * vexp->blk_size;
     QEMUIOVector qiov;
     if (is_write) {
         qemu_iovec_init_external(&qiov, out_iov, out_num);
-        ret = blk_co_pwritev(backend, offset, qiov.size,
-                             &qiov, 0);
+        ret = blk_co_pwritev(blk, offset, qiov.size, &qiov, 0);
     } else {
         qemu_iovec_init_external(&qiov, in_iov, in_num);
-        ret = blk_co_preadv(backend, offset, qiov.size,
-                            &qiov, 0);
+        ret = blk_co_preadv(blk, offset, qiov.size, &qiov, 0);
     }
 
     if (ret >= 0) {
         req->in->status = VIRTIO_BLK_S_OK;
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
         break;
     case VIRTIO_BLK_T_FLUSH:
-        if (vu_block_flush(req) == 0) {
+        if (blk_co_flush(blk) == 0) {
             req->in->status = VIRTIO_BLK_S_OK;
         } else {
             req->in->status = VIRTIO_BLK_S_IOERR;
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)

[block/copy-on-read.c, from the other series in this diff view]
@@ -XXX,XX +XXX,XX @@
         return -EINVAL;
     }
 
+    bs->supported_read_flags = BDRV_REQ_PREFETCH;
     bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
         (BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn cor_co_preadv_part(BlockDriverState *bs,
     }
-        ret = bdrv_co_preadv_part(bs->file, offset, n, qiov, qiov_offset,
-                                  local_flags);
-        if (ret < 0) {
-            return ret;
+        /* Skip if neither read nor write are needed */
+        if ((local_flags & (BDRV_REQ_PREFETCH | BDRV_REQ_COPY_ON_READ)) !=
+            BDRV_REQ_PREFETCH) {
+            ret = bdrv_co_preadv_part(bs->file, offset, n, qiov, qiov_offset,
+                                      local_flags);
+            if (ret < 0) {
+                return ret;
+            }
         }
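The request paths above gather virtio descriptors out of the guest's scatter-gather lists with iov_to_buf() before validating them. A simplified, self-contained sketch of that gather step (`iov_gather` is a hypothetical stand-in; QEMU's real iov_to_buf also takes a starting offset):

```c
#include <stdint.h>
#include <string.h>
#include <sys/uio.h>

/* Layout mirrors the virtio-blk discard/write-zeroes descriptor. */
struct virtio_blk_discard_write_zeroes {
    uint64_t sector;
    uint32_t num_sectors;
    uint32_t flags;
};

/* Copy up to 'bytes' from a scatter-gather list into a flat buffer,
 * returning how many bytes were actually gathered. */
static size_t iov_gather(const struct iovec *iov, unsigned int iovcnt,
                         void *buf, size_t bytes)
{
    size_t done = 0;
    for (unsigned int i = 0; i < iovcnt && done < bytes; i++) {
        size_t n = iov[i].iov_len;
        if (n > bytes - done) {
            n = bytes - done;
        }
        memcpy((char *)buf + done, iov[i].iov_base, n);
        done += n;
    }
    return done;
}
```

The caller then rejects the request (`-EINVAL`) when the gathered size is smaller than the descriptor, which is exactly the check vu_blk_discard_write_zeroes() performs.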
     case VIRTIO_BLK_T_DISCARD:
     case VIRTIO_BLK_T_WRITE_ZEROES: {
         int rc;
-        rc = vu_block_discard_write_zeroes(req, &elem->out_sg[1],
-                                           out_num, type);
+
+        if (!vexp->writable) {
+            req->in->status = VIRTIO_BLK_S_IOERR;
+            break;
+        }
+
+        rc = vu_blk_discard_write_zeroes(blk, &elem->out_sg[1], out_num, type);
         if (rc == 0) {
             req->in->status = VIRTIO_BLK_S_OK;
         } else {
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
         break;
     }
 
-    vu_block_req_complete(req);
+    vu_blk_req_complete(req);
     return;
 
 err:
-    free(elem);
+    free(req);
 }
 
-static void vu_block_process_vq(VuDev *vu_dev, int idx)
+static void vu_blk_process_vq(VuDev *vu_dev, int idx)
 {
     VuServer *server = container_of(vu_dev, VuServer, vu_dev);
     VuVirtq *vq = vu_get_queue(vu_dev, idx);
 
     while (1) {
-        VuBlockReq *req;
+        VuBlkReq *req;
 
-        req = vu_queue_pop(vu_dev, vq, sizeof(VuBlockReq));
+        req = vu_queue_pop(vu_dev, vq, sizeof(VuBlkReq));
         if (!req) {
             break;
         }
     }
     offset += n;
 
@@ -XXX,XX +XXX,XX @@ static void vu_block_process_vq(VuDev *vu_dev, int idx)
         req->vq = vq;
 
         Coroutine *co =
-            qemu_coroutine_create(vu_block_virtio_process_req, req);
+            qemu_coroutine_create(vu_blk_virtio_process_req, req);
         qemu_coroutine_enter(co);
     }
 }
-static void vu_block_queue_set_started(VuDev *vu_dev, int idx, bool started)
+static void vu_blk_queue_set_started(VuDev *vu_dev, int idx, bool started)
 {
     VuVirtq *vq;
 
     assert(vu_dev);
 
     vq = vu_get_queue(vu_dev, idx);
-    vu_set_queue_handler(vu_dev, vq, started ? vu_block_process_vq : NULL);
+    vu_set_queue_handler(vu_dev, vq, started ? vu_blk_process_vq : NULL);
 }
 
-static uint64_t vu_block_get_features(VuDev *dev)
+static uint64_t vu_blk_get_features(VuDev *dev)
 {
     uint64_t features;
     VuServer *server = container_of(dev, VuServer, vu_dev);
-    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
     features = 1ull << VIRTIO_BLK_F_SIZE_MAX |
                1ull << VIRTIO_BLK_F_SEG_MAX |
                1ull << VIRTIO_BLK_F_TOPOLOGY |
@@ -XXX,XX +XXX,XX @@ static uint64_t vu_block_get_features(VuDev *dev)
                1ull << VIRTIO_RING_F_EVENT_IDX |
                1ull << VHOST_USER_F_PROTOCOL_FEATURES;
 
-    if (!vdev_blk->writable) {
+    if (!vexp->writable) {
         features |= 1ull << VIRTIO_BLK_F_RO;
     }
 
     return features;
 }
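vu_blk_get_features() builds the virtio feature word one bit at a time and adds VIRTIO_BLK_F_RO only when the export is read-only. A reduced sketch of that bitmask construction (only a few bits shown; the bit numbers below follow the virtio specification):

```c
#include <stdbool.h>
#include <stdint.h>

/* Feature bit positions from the virtio spec. */
#define VIRTIO_BLK_F_RO          5   /* device is read-only */
#define VIRTIO_BLK_F_FLUSH       9   /* cache flush command supported */
#define VIRTIO_RING_F_EVENT_IDX 29   /* used-ring event index */

/* Compose the 64-bit feature word; RO is advertised only when the
 * backend must not be written to. */
static uint64_t get_features(bool writable)
{
    uint64_t features = 1ull << VIRTIO_BLK_F_FLUSH |
                        1ull << VIRTIO_RING_F_EVENT_IDX;

    if (!writable) {
        features |= 1ull << VIRTIO_BLK_F_RO;
    }
    return features;
}
```

Using `1ull` rather than `1` matters here: feature bits above 31 (such as VIRTIO_RING_F_EVENT_IDX on 32-bit hosts) would otherwise overflow an `int` shift.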
-static uint64_t vu_block_get_protocol_features(VuDev *dev)
+static uint64_t vu_blk_get_protocol_features(VuDev *dev)
 {
     return 1ull << VHOST_USER_PROTOCOL_F_CONFIG |
            1ull << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD;
 }
 
 static int
-vu_block_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
+vu_blk_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
 {
+    /* TODO blkcfg must be little-endian for VIRTIO 1.0 */
     VuServer *server = container_of(vu_dev, VuServer, vu_dev);
-    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
-    memcpy(config, &vdev_blk->blkcfg, len);
-
+    VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
+    memcpy(config, &vexp->blkcfg, len);
     return 0;
 }
 
 static int
-vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
+vu_blk_set_config(VuDev *vu_dev, const uint8_t *data,
                     uint32_t offset, uint32_t size, uint32_t flags)
 {
     VuServer *server = container_of(vu_dev, VuServer, vu_dev);
-    VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
+    VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
     uint8_t wce;
 
     /* don't support live migration */
@@ -XXX,XX +XXX,XX @@ vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
     }
 
     wce = *data;
-    vdev_blk->blkcfg.wce = wce;
-    blk_set_enable_write_cache(vdev_blk->backend, wce);
+    vexp->blkcfg.wce = wce;
+    blk_set_enable_write_cache(vexp->export.blk, wce);
     return 0;
 }
 
@@ -XXX,XX +XXX,XX @@ vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
  * of vu_process_message.
  *
  */
-static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
+static int vu_blk_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
 {
     if (vmsg->request == VHOST_USER_NONE) {
         dev->panic(dev, "disconnect");
@@ -XXX,XX +XXX,XX @@ static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
         return false;
     }
 
-static const VuDevIface vu_block_iface = {
-    .get_features = vu_block_get_features,
-    .queue_set_started = vu_block_queue_set_started,
-    .get_protocol_features = vu_block_get_protocol_features,
-    .get_config = vu_block_get_config,
-    .set_config = vu_block_set_config,
-    .process_msg = vu_block_process_msg,
+static const VuDevIface vu_blk_iface = {
+    .get_features = vu_blk_get_features,
+    .queue_set_started = vu_blk_queue_set_started,
+    .get_protocol_features = vu_blk_get_protocol_features,
+    .get_config = vu_blk_get_config,
+    .set_config = vu_blk_set_config,
+    .process_msg = vu_blk_process_msg,
 };
 
 static void blk_aio_attached(AioContext *ctx, void *opaque)
 {
-    VuBlockDev *vub_dev = opaque;
-    vhost_user_server_attach_aio_context(&vub_dev->vu_server, ctx);
+    VuBlkExport *vexp = opaque;
+    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
 }
 
 static void blk_aio_detach(void *opaque)
 {
-    VuBlockDev *vub_dev = opaque;
-    vhost_user_server_detach_aio_context(&vub_dev->vu_server);
+    VuBlkExport *vexp = opaque;
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
 }
 
 static void
-vu_block_initialize_config(BlockDriverState *bs,
+vu_blk_initialize_config(BlockDriverState *bs,
                            struct virtio_blk_config *config, uint32_t blk_size)
 {
     config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
@@ -XXX,XX +XXX,XX @@ vu_block_initialize_config(BlockDriverState *bs,
     config->max_write_zeroes_seg = 1;
 }
 
-static VuBlockDev *vu_block_init(VuBlockDev *vu_block_device, Error **errp)
+static void vu_blk_exp_request_shutdown(BlockExport *exp)
 {
+    VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    BlockBackend *blk;
-    Error *local_error = NULL;
-    const char *node_name = vu_block_device->node_name;
-    bool writable = vu_block_device->writable;
-    uint64_t perm = BLK_PERM_CONSISTENT_READ;
-    int ret;
-
-    AioContext *ctx;
-
-    BlockDriverState *bs = bdrv_lookup_bs(node_name, node_name, &local_error);
-
-    if (!bs) {
-        error_propagate(errp, local_error);
-        return NULL;
-    }
-
-    if (bdrv_is_read_only(bs)) {
-        writable = false;
-    }
-
-    if (writable) {
-        perm |= BLK_PERM_WRITE;
-    }
-
-    ctx = bdrv_get_aio_context(bs);
-    aio_context_acquire(ctx);
-    bdrv_invalidate_cache(bs, NULL);
-    aio_context_release(ctx);
-
-    /*
-     * Don't allow resize while the vhost user server is running,
-     * otherwise we don't care what happens with the node.
-     */
-    blk = blk_new(bdrv_get_aio_context(bs), perm,
-                  BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
-                  BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
-    ret = blk_insert_bs(blk, bs, errp);
-
-    if (ret < 0) {
-        goto fail;
-    }
-
-    blk_set_enable_write_cache(blk, false);
-
-    blk_set_allow_aio_context_change(blk, true);
-
-    vu_block_device->blkcfg.wce = 0;
-    vu_block_device->backend = blk;
-    if (!vu_block_device->blk_size) {
-        vu_block_device->blk_size = BDRV_SECTOR_SIZE;
-    }
-    vu_block_device->blkcfg.blk_size = vu_block_device->blk_size;
-    blk_set_guest_block_size(blk, vu_block_device->blk_size);
-    vu_block_initialize_config(bs, &vu_block_device->blkcfg,
-                               vu_block_device->blk_size);
-    return vu_block_device;
-
-fail:
-    blk_unref(blk);
-    return NULL;
-}
-
-static void vu_block_deinit(VuBlockDev *vu_block_device)
-{
-    if (vu_block_device->backend) {
-        blk_remove_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
-                                        blk_aio_detach, vu_block_device);
-    }
-
-    blk_unref(vu_block_device->backend);
-}
-
-static void vhost_user_blk_server_stop(VuBlockDev *vu_block_device)
-{
-    vhost_user_server_stop(&vu_block_device->vu_server);
-    vu_block_deinit(vu_block_device);
-}
-
-static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
-                                        Error **errp)
-{
-    AioContext *ctx;
-    SocketAddress *addr = vu_block_device->addr;
-
-    if (!vu_block_init(vu_block_device, errp)) {
-        return;
-    }
-
-    ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));
-
-    if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
-                                 VHOST_USER_BLK_MAX_QUEUES, &vu_block_iface,
-                                 errp)) {
-        goto error;
-    }
-
-    blk_add_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
-                                 blk_aio_detach, vu_block_device);
-    vu_block_device->running = true;
-    return;
-
- error:
-    vu_block_deinit(vu_block_device);
-}
-
-static bool vu_prop_modifiable(VuBlockDev *vus, Error **errp)
-{
-    if (vus->running) {
-        error_setg(errp, "The property can't be modified "
-                         "while the server is running");
-        return false;
-    }
-    return true;
-}
-
-static void vu_set_node_name(Object *obj, const char *value, Error **errp)
-{
-    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
-
-    if (!vu_prop_modifiable(vus, errp)) {
-        return;
-    }
-
-    if (vus->node_name) {
-        g_free(vus->node_name);
-    }
-
-    vus->node_name = g_strdup(value);
-}
-
-static char *vu_get_node_name(Object *obj, Error **errp)
-{
-    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
-    return g_strdup(vus->node_name);
-}
-
-static void free_socket_addr(SocketAddress *addr)
-{
-    g_free(addr->u.q_unix.path);
-    g_free(addr);
-}
-
-static void vu_set_unix_socket(Object *obj, const char *value,
-                               Error **errp)
-{
-    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
-
-    if (!vu_prop_modifiable(vus, errp)) {
-        return;
-    }
-
-    if (vus->addr) {
-        free_socket_addr(vus->addr);
-    }
-
-    SocketAddress *addr = g_new0(SocketAddress, 1);
-    addr->type = SOCKET_ADDRESS_TYPE_UNIX;
-    addr->u.q_unix.path = g_strdup(value);
-    vus->addr = addr;
+    vhost_user_server_stop(&vexp->vu_server);
 }
 
-static char *vu_get_unix_socket(Object *obj, Error **errp)
+static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
+                             Error **errp)
 {
-    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
-    return g_strdup(vus->addr->u.q_unix.path);
-}
-
-static bool vu_get_block_writable(Object *obj, Error **errp)
-{
-    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
-    return vus->writable;
-}
-
-static void vu_set_block_writable(Object *obj, bool value, Error **errp)
-{
-    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
-
-    if (!vu_prop_modifiable(vus, errp)) {
-        return;
-    }
-
-    vus->writable = value;
-}
-
-static void vu_get_blk_size(Object *obj, Visitor *v, const char *name,
-                            void *opaque, Error **errp)
-{
-    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
-    uint32_t value = vus->blk_size;
-
-    visit_type_uint32(v, name, &value, errp);
-}
-
-static void vu_set_blk_size(Object *obj, Visitor *v, const char *name,
-                            void *opaque, Error **errp)
-{
-    VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
-
+    VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
+    BlockExportOptionsVhostUserBlk *vu_opts = &opts->u.vhost_user_blk;
     Error *local_err = NULL;
-    uint32_t value;
+    uint64_t logical_block_size;
 
-    if (!vu_prop_modifiable(vus, errp)) {
-        return;
-    }
+    vexp->writable = opts->writable;
+    vexp->blkcfg.wce = 0;
 
-    visit_type_uint32(v, name, &value, &local_err);
-    if (local_err) {
-        goto out;
+    if (vu_opts->has_logical_block_size) {
+        logical_block_size = vu_opts->logical_block_size;
+    } else {
+        logical_block_size = BDRV_SECTOR_SIZE;
     }
-
-    check_block_size(object_get_typename(obj), name, value, &local_err);
+    check_block_size(exp->id, "logical-block-size", logical_block_size,
+                     &local_err);
     if (local_err) {
-        goto out;
+        error_propagate(errp, local_err);
+        return -EINVAL;
+    }
+    vexp->blk_size = logical_block_size;
+    blk_set_guest_block_size(exp->blk, logical_block_size);
+    vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
+                             logical_block_size);
+
+    blk_set_allow_aio_context_change(exp->blk, true);
+    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
+                                 vexp);
+
+    if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
+                                 VHOST_USER_BLK_MAX_QUEUES, &vu_blk_iface,
+                                 errp)) {
+        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
+                                        blk_aio_detach, vexp);
+        return -EADDRNOTAVAIL;
     }
 
-    vus->blk_size = value;
-
-out:
-    error_propagate(errp, local_err);
-}
-
-static void vhost_user_blk_server_instance_finalize(Object *obj)
-{
-    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
-
-    vhost_user_blk_server_stop(vub);
-
-    /*
-     * Unlike object_property_add_str, object_class_property_add_str
-     * doesn't have a release method. Thus manual memory freeing is
-     * needed.
-     */
-    free_socket_addr(vub->addr);
-    g_free(vub->node_name);
-}
-
-static void vhost_user_blk_server_complete(UserCreatable *obj, Error **errp)
-{
-    VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
-
-    vhost_user_blk_server_start(vub, errp);
+    return 0;
 }
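vu_blk_exp_create() validates the logical-block-size option with check_block_size() before using it. A hedged sketch of the constraint that helper enforces: a power of two within a bounded range (the exact 512-byte to 2 MiB bounds used here are an assumption for illustration; QEMU's real bounds live in util/block-helpers.c):

```c
#include <stdbool.h>
#include <stdint.h>

/* A block size is acceptable when it is a power of two inside the
 * assumed [512, 2 MiB] range. The (size & (size - 1)) trick is zero
 * exactly for powers of two. */
static bool valid_block_size(uint64_t size)
{
    return size >= 512 && size <= 2 * 1024 * 1024 &&
           (size & (size - 1)) == 0;
}
```

On failure the export creation path propagates the error and returns -EINVAL, as in the hunk above.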
 
-static void vhost_user_blk_server_class_init(ObjectClass *klass,
-                                             void *class_data)
+static void vu_blk_exp_delete(BlockExport *exp)
 {
-    UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
-    ucc->complete = vhost_user_blk_server_complete;
-
-    object_class_property_add_bool(klass, "writable",
-                                   vu_get_block_writable,
-                                   vu_set_block_writable);
-
-    object_class_property_add_str(klass, "node-name",
-                                  vu_get_node_name,
-                                  vu_set_node_name);
-
-    object_class_property_add_str(klass, "unix-socket",
-                                  vu_get_unix_socket,
-                                  vu_set_unix_socket);
+    VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    object_class_property_add(klass, "logical-block-size", "uint32",
-                              vu_get_blk_size, vu_set_blk_size,
-                              NULL, NULL);
+    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
+                                    vexp);
 }
 
-static const TypeInfo vhost_user_blk_server_info = {
-    .name = TYPE_VHOST_USER_BLK_SERVER,
-    .parent = TYPE_OBJECT,
-    .instance_size = sizeof(VuBlockDev),
-    .instance_finalize = vhost_user_blk_server_instance_finalize,
-    .class_init = vhost_user_blk_server_class_init,
-    .interfaces = (InterfaceInfo[]) {
-        {TYPE_USER_CREATABLE},
-        {}
-    },
+const BlockExportDriver blk_exp_vhost_user_blk = {
+    .type = BLOCK_EXPORT_TYPE_VHOST_USER_BLK,
+    .instance_size = sizeof(VuBlkExport),
+    .create = vu_blk_exp_create,
+    .delete = vu_blk_exp_delete,
+    .request_shutdown = vu_blk_exp_request_shutdown,
 };
-
-static void vhost_user_blk_server_register_types(void)
-{
-    type_register_static(&vhost_user_blk_server_info);
-}
-
-type_init(vhost_user_blk_server_register_types)
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
                              Error **errp)
 {
     QEMUBH *bh;
-    QIONetListener *listener = qio_net_listener_new();
+    QIONetListener *listener;
+
+    if (socket_addr->type != SOCKET_ADDRESS_TYPE_UNIX &&
+        socket_addr->type != SOCKET_ADDRESS_TYPE_FD) {
+        error_setg(errp, "Only socket address types 'unix' and 'fd' are supported");
+        return false;
+    }
+
+    listener = qio_net_listener_new();
     if (qio_net_listener_open_sync(listener, socket_addr, 1,
                                    errp) < 0) {
         object_unref(OBJECT(listener));
diff --git a/block/export/meson.build b/block/export/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/block/export/meson.build
+++ b/block/export/meson.build
@@ -1 +1,2 @@
 block_ss.add(files('export.c'))
+block_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-blk-server.c', '../../contrib/libvhost-user/libvhost-user.c'))
diff --git a/block/meson.build b/block/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/block/meson.build
+++ b/block/meson.build
@@ -XXX,XX +XXX,XX @@ block_ss.add(when: 'CONFIG_WIN32', if_true: files('file-win32.c', 'win32-aio.c')
 block_ss.add(when: 'CONFIG_POSIX', if_true: [files('file-posix.c'), coref, iokit])
 block_ss.add(when: 'CONFIG_LIBISCSI', if_true: files('iscsi-opts.c'))
 block_ss.add(when: 'CONFIG_LINUX', if_true: files('nvme.c'))
-block_ss.add(when: 'CONFIG_LINUX', if_true: files('export/vhost-user-blk-server.c', '../contrib/libvhost-user/libvhost-user.c'))
 block_ss.add(when: 'CONFIG_REPLICATION', if_true: files('replication.c'))
 block_ss.add(when: 'CONFIG_SHEEPDOG', if_true: files('sheepdog.c'))
 block_ss.add(when: ['CONFIG_LINUX_AIO', libaio], if_true: files('linux-aio.c'))
--
2.29.2

--
2.26.2
Headers used by other subsystems are located in include/. Also add the
vhost-user-server and vhost-user-blk-server headers to MAINTAINERS.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-13-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 MAINTAINERS                                | 4 +++-
 {util => include/qemu}/vhost-user-server.h | 0
 block/export/vhost-user-blk-server.c       | 2 +-
 util/vhost-user-server.c                   | 2 +-
 4 files changed, 5 insertions(+), 3 deletions(-)
 rename {util => include/qemu}/vhost-user-server.h (100%)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ Vhost-user block device backend server
 M: Coiby Xu <Coiby.Xu@gmail.com>
 S: Maintained
 F: block/export/vhost-user-blk-server.c
-F: util/vhost-user-server.c
+F: block/export/vhost-user-blk-server.h
+F: include/qemu/vhost-user-server.h
 F: tests/qtest/libqos/vhost-user-blk.c
+F: util/vhost-user-server.c
 
 Replication
 M: Wen Congyang <wencongyang2@huawei.com>
diff --git a/util/vhost-user-server.h b/include/qemu/vhost-user-server.h
similarity index 100%
rename from util/vhost-user-server.h
rename to include/qemu/vhost-user-server.h
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index XXXXXXX..XXXXXXX 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -XXX,XX +XXX,XX @@
 #include "block/block.h"
 #include "contrib/libvhost-user/libvhost-user.h"
 #include "standard-headers/linux/virtio_blk.h"
-#include "util/vhost-user-server.h"
+#include "qemu/vhost-user-server.h"
 #include "vhost-user-blk-server.h"
 #include "qapi/error.h"
 #include "qom/object_interfaces.h"
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index XXXXXXX..XXXXXXX 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -XXX,XX +XXX,XX @@
  */
 #include "qemu/osdep.h"
 #include "qemu/main-loop.h"
+#include "qemu/vhost-user-server.h"
 #include "block/aio-wait.h"
-#include "vhost-user-server.h"
 
 /*
  * Theory of operation:

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Drop unused code.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-20-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h |  6 ------
 block/block-copy.c         | 15 ---------------
 2 files changed, 21 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -XXX,XX +XXX,XX @@
 #include "block/block.h"
 #include "qemu/co-shared-resource.h"
 
-typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
 typedef void (*BlockCopyAsyncCallbackFunc)(void *opaque);
 typedef struct BlockCopyState BlockCopyState;
 typedef struct BlockCopyCallState BlockCopyCallState;
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
                                      BdrvRequestFlags write_flags,
                                      Error **errp);
 
-void block_copy_set_progress_callback(
-    BlockCopyState *s,
-    ProgressBytesCallbackFunc progress_bytes_callback,
-    void *progress_opaque);
-
 void block_copy_set_progress_meter(BlockCopyState *s, ProgressMeter *pm);
 
 void block_copy_state_free(BlockCopyState *s);
diff --git a/block/block-copy.c b/block/block-copy.c
index XXXXXXX..XXXXXXX 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyState {
     bool skip_unallocated;
 
     ProgressMeter *progress;
-    /* progress_bytes_callback: called when some copying progress is done. */
-    ProgressBytesCallbackFunc progress_bytes_callback;
-    void *progress_opaque;
 
     SharedResource *mem;
 
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
     return s;
 }
 
-void block_copy_set_progress_callback(
-    BlockCopyState *s,
-    ProgressBytesCallbackFunc progress_bytes_callback,
-    void *progress_opaque)
-{
-    s->progress_bytes_callback = progress_bytes_callback;
-    s->progress_opaque = progress_opaque;
-}
-
 void block_copy_set_progress_meter(BlockCopyState *s, ProgressMeter *pm)
 {
     s->progress = pm;
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
         t->call_state->error_is_read = error_is_read;
     } else {
         progress_work_done(t->s->progress, t->bytes);
-        if (t->s->progress_bytes_callback) {
-            t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
-        }
     }
     co_put_to_shres(t->s->mem, t->bytes);
     block_copy_task_end(t, ret);
block_copy_task_end(t, ret);
78
--
62
--
79
2.29.2
63
2.26.2
80
64
81
diff view generated by jsdifflib
Don't compile contrib/libvhost-user/libvhost-user.c again. Instead build
the static library once and then reuse it throughout QEMU.

Also switch from CONFIG_LINUX to CONFIG_VHOST_USER, which is what the
vhost-user tools (vhost-user-gpu, etc) do.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-14-stefanha@redhat.com
[Added CONFIG_LINUX again because libvhost-user doesn't build on macOS.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/export.c             | 8 ++++----
 block/export/meson.build          | 2 +-
 contrib/libvhost-user/meson.build | 1 +
 meson.build                       | 6 +++++-
 util/meson.build                  | 4 +++-
 5 files changed, 14 insertions(+), 7 deletions(-)

From: David Edmondson <david.edmondson@oracle.com>

When a call to fcntl(2) for the purpose of adding file locks fails
with an error other than EAGAIN or EACCES, report the error returned
by fcntl.

EAGAIN or EACCES are elided as they are considered to be common
failures, indicating that a conflicting lock is held by another
process.
No errors are elided when removing file locks.
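The errno distinction this commit message draws can be sketched with plain fcntl(2): a conflicting lock surfaces as EAGAIN (or EACCES on some systems) and is treated as a common failure, while any other errno is unexpected and worth reporting verbatim. The helper names below are hypothetical; QEMU's real wrappers are qemu_lock_fd() and qemu_unlock_fd():

```c
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Try to write-lock one byte of fd; return 0 or -errno, mirroring the
 * negative-errno convention the patch relies on. */
static int try_lock_byte(int fd, off_t off)
{
    struct flock fl = {
        .l_type = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start = off,
        .l_len = 1,
    };
    if (fcntl(fd, F_SETLK, &fl) == -1) {
        return -errno;
    }
    return 0;
}

/* EAGAIN/EACCES mean "someone else holds the lock": elide the errno
 * detail; anything else is an unexpected failure. */
static int is_common_lock_failure(int err)
{
    return err == -EAGAIN || err == -EACCES;
}
```

Locking a freshly created temporary file succeeds, so the unexpected-errno path is only taken for genuine failures such as ENOLCK.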
diff --git a/block/export/export.c b/block/export/export.c
index XXXXXXX..XXXXXXX 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/block-backend.h"
 #include "block/export.h"
 #include "block/nbd.h"
-#if CONFIG_LINUX
-#include "block/export/vhost-user-blk-server.h"
-#endif
 #include "qapi/error.h"
 #include "qapi/qapi-commands-block-export.h"
 #include "qapi/qapi-events-block-export.h"
 #include "qemu/id.h"
+#ifdef CONFIG_VHOST_USER
+#include "vhost-user-blk-server.h"
+#endif
 
 static const BlockExportDriver *blk_exp_drivers[] = {
     &blk_exp_nbd,
-#if CONFIG_LINUX
+#ifdef CONFIG_VHOST_USER
     &blk_exp_vhost_user_blk,
 #endif
 };
diff --git a/block/export/meson.build b/block/export/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/block/export/meson.build
+++ b/block/export/meson.build
@@ -XXX,XX +XXX,XX @@
 block_ss.add(files('export.c'))
-block_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-blk-server.c', '../../contrib/libvhost-user/libvhost-user.c'))
+block_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: files('vhost-user-blk-server.c'))
diff --git a/contrib/libvhost-user/meson.build b/contrib/libvhost-user/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/meson.build
+++ b/contrib/libvhost-user/meson.build
@@ -XXX,XX +XXX,XX @@
 libvhost_user = static_library('vhost-user',
                                files('libvhost-user.c', 'libvhost-user-glib.c'),
                                build_by_default: false)
+vhost_user = declare_dependency(link_with: libvhost_user)
diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ trace_events_subdirs += [
   'util',
 ]
 
+vhost_user = not_found
+if 'CONFIG_VHOST_USER' in config_host
+  subdir('contrib/libvhost-user')
+endif
+
 subdir('qapi')
 subdir('qobject')
 subdir('stubs')
@@ -XXX,XX +XXX,XX @@ if have_tools
                install: true)
 
   if 'CONFIG_VHOST_USER' in config_host
-    subdir('contrib/libvhost-user')
     subdir('contrib/vhost-user-blk')
     subdir('contrib/vhost-user-gpu')
     subdir('contrib/vhost-user-input')
diff --git a/util/meson.build b/util/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -XXX,XX +XXX,XX @@ if have_block
   util_ss.add(files('main-loop.c'))
   util_ss.add(files('nvdimm-utils.c'))
   util_ss.add(files('qemu-coroutine.c', 'qemu-coroutine-lock.c', 'qemu-coroutine-io.c'))
-  util_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-server.c'))
+  util_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: [
+    files('vhost-user-server.c'), vhost_user
+  ])
   util_ss.add(files('block-helpers.c'))
   util_ss.add(files('qemu-coroutine-sleep.c'))
   util_ss.add(files('qemu-co-shared-resource.c'))
--
2.26.2
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210113164447.2545785-1-david.edmondson@oracle.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/file-posix.c | 38 ++++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index XXXXXXX..XXXXXXX 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -XXX,XX +XXX,XX @@ typedef struct RawPosixAIOData {
 static int cdrom_reopen(BlockDriverState *bs);
 #endif
 
+/*
+ * Elide EAGAIN and EACCES details when failing to lock, as this
+ * indicates that the specified file region is already locked by
+ * another process, which is considered a common scenario.
+ */
+#define raw_lock_error_setg_errno(errp, err, fmt, ...)              \
+    do {                                                            \
+        if ((err) == EAGAIN || (err) == EACCES) {                   \
+            error_setg((errp), (fmt), ## __VA_ARGS__);              \
+        } else {                                                    \
+            error_setg_errno((errp), (err), (fmt), ## __VA_ARGS__); \
+        }                                                           \
+    } while (0)
+
43
#if defined(__NetBSD__)
44
static int raw_normalize_devicepath(const char **filename, Error **errp)
45
{
46
@@ -XXX,XX +XXX,XX @@ static int raw_apply_lock_bytes(BDRVRawState *s, int fd,
47
if ((perm_lock_bits & bit) && !(locked_perm & bit)) {
48
ret = qemu_lock_fd(fd, off, 1, false);
49
if (ret) {
50
- error_setg(errp, "Failed to lock byte %d", off);
51
+ raw_lock_error_setg_errno(errp, -ret, "Failed to lock byte %d",
52
+ off);
53
return ret;
54
} else if (s) {
55
s->locked_perm |= bit;
56
@@ -XXX,XX +XXX,XX @@ static int raw_apply_lock_bytes(BDRVRawState *s, int fd,
57
} else if (unlock && (locked_perm & bit) && !(perm_lock_bits & bit)) {
58
ret = qemu_unlock_fd(fd, off, 1);
59
if (ret) {
60
- error_setg(errp, "Failed to unlock byte %d", off);
61
+ error_setg_errno(errp, -ret, "Failed to unlock byte %d", off);
62
return ret;
63
} else if (s) {
64
s->locked_perm &= ~bit;
65
@@ -XXX,XX +XXX,XX @@ static int raw_apply_lock_bytes(BDRVRawState *s, int fd,
66
if ((shared_perm_lock_bits & bit) && !(locked_shared_perm & bit)) {
67
ret = qemu_lock_fd(fd, off, 1, false);
68
if (ret) {
69
- error_setg(errp, "Failed to lock byte %d", off);
70
+ raw_lock_error_setg_errno(errp, -ret, "Failed to lock byte %d",
71
+ off);
72
return ret;
73
} else if (s) {
74
s->locked_shared_perm |= bit;
75
@@ -XXX,XX +XXX,XX @@ static int raw_apply_lock_bytes(BDRVRawState *s, int fd,
76
!(shared_perm_lock_bits & bit)) {
77
ret = qemu_unlock_fd(fd, off, 1);
78
if (ret) {
79
- error_setg(errp, "Failed to unlock byte %d", off);
80
+ error_setg_errno(errp, -ret, "Failed to unlock byte %d", off);
81
return ret;
82
} else if (s) {
83
s->locked_shared_perm &= ~bit;
84
@@ -XXX,XX +XXX,XX @@ static int raw_check_lock_bytes(int fd, uint64_t perm, uint64_t shared_perm,
85
ret = qemu_lock_fd_test(fd, off, 1, true);
86
if (ret) {
87
char *perm_name = bdrv_perm_names(p);
88
- error_setg(errp,
89
- "Failed to get \"%s\" lock",
90
- perm_name);
91
+
92
+ raw_lock_error_setg_errno(errp, -ret,
93
+ "Failed to get \"%s\" lock",
94
+ perm_name);
95
g_free(perm_name);
96
return ret;
97
}
98
@@ -XXX,XX +XXX,XX @@ static int raw_check_lock_bytes(int fd, uint64_t perm, uint64_t shared_perm,
99
ret = qemu_lock_fd_test(fd, off, 1, true);
100
if (ret) {
101
char *perm_name = bdrv_perm_names(p);
102
- error_setg(errp,
103
- "Failed to get shared \"%s\" lock",
104
- perm_name);
105
+
106
+ raw_lock_error_setg_errno(errp, -ret,
107
+ "Failed to get shared \"%s\" lock",
108
+ perm_name);
109
g_free(perm_name);
110
return ret;
111
}
112
--
113
2.29.2
114
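The macro's behavior can be modeled outside QEMU. The sketch below is a hedged, standalone Python model, not QEMU code: `lock_error_message` is a hypothetical stand-in, and `error_setg()`/`error_setg_errno()` are reduced to plain string formatting. It shows the elision: for EAGAIN/EACCES (a file region already locked by another process, the common case) the errno detail is dropped; for unexpected errors it is appended.

```python
import errno
import os


def lock_error_message(err, fmt, *args):
    """Model of raw_lock_error_setg_errno(): elide the errno detail for
    EAGAIN/EACCES (region already locked by another process), but keep
    it for unexpected errors so they can be diagnosed."""
    msg = fmt % args
    if err in (errno.EAGAIN, errno.EACCES):
        return msg                              # common case: no errno noise
    return '%s: %s' % (msg, os.strerror(err))   # unexpected: include detail


print(lock_error_message(errno.EAGAIN, 'Failed to lock byte %d', 3))
print(lock_error_message(errno.EIO, 'Failed to lock byte %d', 3))
```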
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Add argument to allow additional block-job options.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-23-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 scripts/simplebench/bench-example.py   |  2 +-
 scripts/simplebench/bench_block_job.py | 11 +++++++----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/scripts/simplebench/bench-example.py b/scripts/simplebench/bench-example.py
index XXXXXXX..XXXXXXX 100644
--- a/scripts/simplebench/bench-example.py
+++ b/scripts/simplebench/bench-example.py
@@ -XXX,XX +XXX,XX @@ from bench_block_job import bench_block_copy, drv_file, drv_nbd

 def bench_func(env, case):
     """ Handle one "cell" of benchmarking table. """
-    return bench_block_copy(env['qemu_binary'], env['cmd'],
+    return bench_block_copy(env['qemu_binary'], env['cmd'], {},
                             case['source'], case['target'])


diff --git a/scripts/simplebench/bench_block_job.py b/scripts/simplebench/bench_block_job.py
index XXXXXXX..XXXXXXX 100755
--- a/scripts/simplebench/bench_block_job.py
+++ b/scripts/simplebench/bench_block_job.py
@@ -XXX,XX +XXX,XX @@ def bench_block_job(cmd, cmd_args, qemu_args):


 # Bench backup or mirror
-def bench_block_copy(qemu_binary, cmd, source, target):
+def bench_block_copy(qemu_binary, cmd, cmd_options, source, target):
     """Helper to run bench_block_job() for mirror or backup"""
     assert cmd in ('blockdev-backup', 'blockdev-mirror')

     source['node-name'] = 'source'
     target['node-name'] = 'target'

-    return bench_block_job(cmd,
-                           {'job-id': 'job0', 'device': 'source',
-                            'target': 'target', 'sync': 'full'},
+    cmd_options['job-id'] = 'job0'
+    cmd_options['device'] = 'source'
+    cmd_options['target'] = 'target'
+    cmd_options['sync'] = 'full'
+
+    return bench_block_job(cmd, cmd_options,
                            [qemu_binary,
                             '-blockdev', json.dumps(source),
                             '-blockdev', json.dumps(target)])
--
2.29.2

Introduce libblockdev.fa to avoid recompiling blockdev_ss twice.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200929125516.186715-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 meson.build                | 12 ++++++++++--
 storage-daemon/meson.build |  3 +--
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ blockdev_ss.add(files(
 # os-win32.c does not
 blockdev_ss.add(when: 'CONFIG_POSIX', if_true: files('os-posix.c'))
 softmmu_ss.add(when: 'CONFIG_WIN32', if_true: [files('os-win32.c')])
-softmmu_ss.add_all(blockdev_ss)

 common_ss.add(files('cpus-common.c'))

@@ -XXX,XX +XXX,XX @@ block = declare_dependency(link_whole: [libblock],
                            link_args: '@block.syms',
                            dependencies: [crypto, io])

+blockdev_ss = blockdev_ss.apply(config_host, strict: false)
+libblockdev = static_library('blockdev', blockdev_ss.sources() + genh,
+                             dependencies: blockdev_ss.dependencies(),
+                             name_suffix: 'fa',
+                             build_by_default: false)
+
+blockdev = declare_dependency(link_whole: [libblockdev],
+                              dependencies: [block])
+
 qmp_ss = qmp_ss.apply(config_host, strict: false)
 libqmp = static_library('qmp', qmp_ss.sources() + genh,
                         dependencies: qmp_ss.dependencies(),
@@ -XXX,XX +XXX,XX @@ foreach m : block_mods + softmmu_mods
                 install_dir: config_host['qemu_moddir'])
 endforeach

-softmmu_ss.add(authz, block, chardev, crypto, io, qmp)
+softmmu_ss.add(authz, blockdev, chardev, crypto, io, qmp)
 common_ss.add(qom, qemuutil)

 common_ss.add_all(when: 'CONFIG_SOFTMMU', if_true: [softmmu_ss])
diff --git a/storage-daemon/meson.build b/storage-daemon/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/storage-daemon/meson.build
+++ b/storage-daemon/meson.build
@@ -XXX,XX +XXX,XX @@
 qsd_ss = ss.source_set()
 qsd_ss.add(files('qemu-storage-daemon.c'))
-qsd_ss.add(block, chardev, qmp, qom, qemuutil)
-qsd_ss.add_all(blockdev_ss)
+qsd_ss.add(blockdev, chardev, qmp, qom, qemuutil)

 subdir('qapi')

--
2.26.2
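The new `cmd_options` argument lets callers pass extra job options (for example, a mirror copy mode) that are merged with the fixed keys before the QMP command is issued. A hedged standalone sketch of just that merging logic (`build_copy_args` is a hypothetical helper; `bench_block_job` itself and the real QMP machinery are not reproduced):

```python
import json


def build_copy_args(cmd, cmd_options, source, target):
    """Model of the updated bench_block_copy(): caller-supplied options
    are merged with the fixed job keys, and the -blockdev arguments are
    built from the source/target node definitions."""
    assert cmd in ('blockdev-backup', 'blockdev-mirror')
    source['node-name'] = 'source'
    target['node-name'] = 'target'
    cmd_options['job-id'] = 'job0'
    cmd_options['device'] = 'source'
    cmd_options['target'] = 'target'
    cmd_options['sync'] = 'full'
    qemu_args = ['-blockdev', json.dumps(source),
                 '-blockdev', json.dumps(target)]
    return cmd, cmd_options, qemu_args


cmd, opts, args = build_copy_args('blockdev-mirror',
                                  {'copy-mode': 'write-blocking'},
                                  {'driver': 'null-co'},
                                  {'driver': 'null-co'})
print(opts)
```

Before this patch the options dict was hard-coded inside the helper, so a benchmark could not vary per-job parameters across table cells.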
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

The test case #310 is similar to #216 by Max Reitz. The difference is
that the test #310 involves a bottom node to the COR filter driver.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: detach backing to test reads from top, limit to qcow2]
Message-Id: <20201216061703.70908-7-vsementsov@virtuozzo.com>
[mreitz: Add "# group:" line]
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/310     | 117 +++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/310.out |  15 +++++
 tests/qemu-iotests/group   |   1 +
 3 files changed, 133 insertions(+)
 create mode 100755 tests/qemu-iotests/310
 create mode 100644 tests/qemu-iotests/310.out

diff --git a/tests/qemu-iotests/310 b/tests/qemu-iotests/310
new file mode 100755
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/qemu-iotests/310
@@ -XXX,XX +XXX,XX @@
+#!/usr/bin/env python3
+# group: rw quick
+#
+# Copy-on-read tests using a COR filter with a bottom node
+#
+# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (c) 2020 Virtuozzo International GmbH
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import iotests
+from iotests import log, qemu_img, qemu_io_silent
+
+# Need backing file support
+iotests.script_initialize(supported_fmts=['qcow2'],
+                          supported_platforms=['linux'])
+
+log('')
+log('=== Copy-on-read across nodes ===')
+log('')
+
+# This test is similar to the 216 one by Max Reitz <mreitz@redhat.com>
+# The difference is that this test case involves a bottom node to the
+# COR filter driver.
+
+with iotests.FilePath('base.img') as base_img_path, \
+     iotests.FilePath('mid.img') as mid_img_path, \
+     iotests.FilePath('top.img') as top_img_path, \
+     iotests.VM() as vm:
+
+    log('--- Setting up images ---')
+    log('')
+
+    assert qemu_img('create', '-f', iotests.imgfmt, base_img_path, '64M') == 0
+    assert qemu_io_silent(base_img_path, '-c', 'write -P 1 0M 1M') == 0
+    assert qemu_io_silent(base_img_path, '-c', 'write -P 1 3M 1M') == 0
+    assert qemu_img('create', '-f', iotests.imgfmt, '-b', base_img_path,
+                    '-F', iotests.imgfmt, mid_img_path) == 0
+    assert qemu_io_silent(mid_img_path, '-c', 'write -P 3 2M 1M') == 0
+    assert qemu_io_silent(mid_img_path, '-c', 'write -P 3 4M 1M') == 0
+    assert qemu_img('create', '-f', iotests.imgfmt, '-b', mid_img_path,
+                    '-F', iotests.imgfmt, top_img_path) == 0
+    assert qemu_io_silent(top_img_path, '-c', 'write -P 2 1M 1M') == 0
+
+    #        0 1 2 3 4
+    # top      2
+    # mid        3   3
+    # base   1     1
+
+    log('Done')
+
+    log('')
+    log('--- Doing COR ---')
+    log('')
+
+    vm.launch()
+
+    log(vm.qmp('blockdev-add',
+               node_name='node0',
+               driver='copy-on-read',
+               bottom='node2',
+               file={
+                   'driver': iotests.imgfmt,
+                   'file': {
+                       'driver': 'file',
+                       'filename': top_img_path
+                   },
+                   'backing': {
+                       'node-name': 'node2',
+                       'driver': iotests.imgfmt,
+                       'file': {
+                           'driver': 'file',
+                           'filename': mid_img_path
+                       },
+                       'backing': {
+                           'driver': iotests.imgfmt,
+                           'file': {
+                               'driver': 'file',
+                               'filename': base_img_path
+                           }
+                       },
+                   }
+               }))
+
+    # Trigger COR
+    log(vm.qmp('human-monitor-command',
+               command_line='qemu-io node0 "read 0 5M"'))
+
+    vm.shutdown()
+
+    log('')
+    log('--- Checking COR result ---')
+    log('')
+
+    # Detach backing to check that we can read the data from the top level now
+    assert qemu_img('rebase', '-u', '-b', '', '-f', iotests.imgfmt,
+                    top_img_path) == 0
+
+    assert qemu_io_silent(top_img_path, '-c', 'read -P 0 0 1M') == 0
+    assert qemu_io_silent(top_img_path, '-c', 'read -P 2 1M 1M') == 0
+    assert qemu_io_silent(top_img_path, '-c', 'read -P 3 2M 1M') == 0
+    assert qemu_io_silent(top_img_path, '-c', 'read -P 0 3M 1M') == 0
+    assert qemu_io_silent(top_img_path, '-c', 'read -P 3 4M 1M') == 0
+
+    log('Done')
diff --git a/tests/qemu-iotests/310.out b/tests/qemu-iotests/310.out
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/qemu-iotests/310.out
@@ -XXX,XX +XXX,XX @@
+
+=== Copy-on-read across nodes ===
+
+--- Setting up images ---
+
+Done
+
+--- Doing COR ---
+
+{"return": {}}
+{"return": ""}
+
+--- Checking COR result ---
+
+Done
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -XXX,XX +XXX,XX @@
 307 rw quick export
 308 rw
 309 rw auto quick
+310 rw quick
 312 rw quick
--
2.29.2

Block exports are used by softmmu, qemu-storage-daemon, and qemu-nbd.
They are not used by other programs and are not otherwise needed in
libblock.

Undo the recent move of blockdev-nbd.c from blockdev_ss into block_ss.
Since bdrv_close_all() (libblock) calls blk_exp_close_all()
(libblockdev) a stub function is required.

Make qemu-nbd.c use signal handling utility functions instead of
duplicating the code. This helps because os-posix.c is in libblockdev
and it depends on a qemu_system_killed() symbol that qemu-nbd.c lacks.
Once we use the signal handling utility functions we also end up
providing the necessary symbol.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20200929125516.186715-4-stefanha@redhat.com
[Fixed s/ndb/nbd/ typo in commit description as suggested by Eric Blake
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 qemu-nbd.c                | 21 ++++++++-------------
 stubs/blk-exp-close-all.c |  7 +++++++
 block/export/meson.build  |  4 ++--
 meson.build               |  4 ++--
 nbd/meson.build           |  2 ++
 stubs/meson.build         |  1 +
 6 files changed, 22 insertions(+), 17 deletions(-)
 create mode 100644 stubs/blk-exp-close-all.c

diff --git a/qemu-nbd.c b/qemu-nbd.c
index XXXXXXX..XXXXXXX 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -XXX,XX +XXX,XX @@
 #include "qapi/error.h"
 #include "qemu/cutils.h"
 #include "sysemu/block-backend.h"
+#include "sysemu/runstate.h" /* for qemu_system_killed() prototype */
 #include "block/block_int.h"
 #include "block/nbd.h"
 #include "qemu/main-loop.h"
@@ -XXX,XX +XXX,XX @@ QEMU_COPYRIGHT "\n"
 }

 #ifdef CONFIG_POSIX
-static void termsig_handler(int signum)
+/*
+ * The client thread uses SIGTERM to interrupt the server. A signal
+ * handler ensures that "qemu-nbd -v -c" exits with a nice status code.
+ */
+void qemu_system_killed(int signum, pid_t pid)
 {
     qatomic_cmpxchg(&state, RUNNING, TERMINATE);
     qemu_notify_event();
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
     BlockExportOptions *export_opts;

 #ifdef CONFIG_POSIX
-    /*
-     * Exit gracefully on various signals, which includes SIGTERM used
-     * by 'qemu-nbd -v -c'.
-     */
-    struct sigaction sa_sigterm;
-    memset(&sa_sigterm, 0, sizeof(sa_sigterm));
-    sa_sigterm.sa_handler = termsig_handler;
-    sigaction(SIGTERM, &sa_sigterm, NULL);
-    sigaction(SIGINT, &sa_sigterm, NULL);
-    sigaction(SIGHUP, &sa_sigterm, NULL);
-
-    signal(SIGPIPE, SIG_IGN);
+    os_setup_early_signal_handling();
+    os_setup_signal_handling();
 #endif

     socket_init();
diff --git a/stubs/blk-exp-close-all.c b/stubs/blk-exp-close-all.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/stubs/blk-exp-close-all.c
@@ -XXX,XX +XXX,XX @@
+#include "qemu/osdep.h"
+#include "block/export.h"
+
+/* Only used in programs that support block exports (libblockdev.fa) */
+void blk_exp_close_all(void)
+{
+}
diff --git a/block/export/meson.build b/block/export/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/block/export/meson.build
+++ b/block/export/meson.build
@@ -XXX,XX +XXX,XX @@
-block_ss.add(files('export.c'))
-block_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: files('vhost-user-blk-server.c'))
+blockdev_ss.add(files('export.c'))
+blockdev_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: files('vhost-user-blk-server.c'))
diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ subdir('dump')

 block_ss.add(files(
   'block.c',
-  'blockdev-nbd.c',
   'blockjob.c',
   'job.c',
   'qemu-io-cmds.c',
@@ -XXX,XX +XXX,XX @@ subdir('block')

 blockdev_ss.add(files(
   'blockdev.c',
+  'blockdev-nbd.c',
   'iothread.c',
   'job-qmp.c',
 ))
@@ -XXX,XX +XXX,XX @@ if have_tools
   qemu_io = executable('qemu-io', files('qemu-io.c'),
              dependencies: [block, qemuutil], install: true)
   qemu_nbd = executable('qemu-nbd', files('qemu-nbd.c'),
-               dependencies: [block, qemuutil], install: true)
+               dependencies: [blockdev, qemuutil], install: true)

   subdir('storage-daemon')
   subdir('contrib/rdmacm-mux')
diff --git a/nbd/meson.build b/nbd/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/nbd/meson.build
+++ b/nbd/meson.build
@@ -XXX,XX +XXX,XX @@
 block_ss.add(files(
   'client.c',
   'common.c',
+))
+blockdev_ss.add(files(
   'server.c',
 ))
diff --git a/stubs/meson.build b/stubs/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/stubs/meson.build
+++ b/stubs/meson.build
@@ -XXX,XX +XXX,XX @@
 stub_ss.add(files('arch_type.c'))
 stub_ss.add(files('bdrv-next-monitor-owned.c'))
 stub_ss.add(files('blk-commit-all.c'))
+stub_ss.add(files('blk-exp-close-all.c'))
 stub_ss.add(files('blockdev-close-all-bdrv-states.c'))
 stub_ss.add(files('change-state-handler.c'))
 stub_ss.add(files('cmos.c'))
--
2.26.2
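The backing-chain layout in test 310 makes the "bottom" semantics concrete. The sketch below is a hedged Python model (plain dicts standing in for images, 1 MiB clusters as integer offsets; `cor_read` is a hypothetical helper, not the iotests harness): reads resolve through the whole chain, but only data found at or above the bottom node (mid) is copied into the top image, while base's data stays where it is.

```python
# Chain from test 310, one cell per 1 MiB cluster:
#        0 1 2 3 4
# top      2
# mid        3   3   <- bottom node for the COR filter
# base   1     1

def cor_read(chain, bottom, offset):
    """Resolve a read down the chain (top-most first). Copy the data up
    into the top image only if it came from a node at or above 'bottom';
    anything below 'bottom' is read but never copied."""
    for idx, node in enumerate(chain):
        if offset in node:
            if 0 < idx <= chain.index(bottom):
                chain[0][offset] = node[offset]   # copy-on-read into top
            return node[offset]
    return 0                                      # unallocated reads as zero


top = {1: 2}
mid = {2: 3, 4: 3}
base = {0: 1, 3: 1}
chain = [top, mid, base]

# Reading 0..5M through the filter, as the test's "qemu-io node0" does:
data = [cor_read(chain, mid, off) for off in range(5)]
print(top)   # -> {1: 2, 2: 3, 4: 3}: mid's clusters copied up, base's not
```

After detaching the backing file, exactly this `top` content is what the test's final `read -P` assertions check: clusters 1M, 2M and 4M are populated, 0M and 3M read as zeroes.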
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Add an option to limit copy-on-read operations to specified sub-chain
of backing-chain, to make copy-on-read filter useful for block-stream
job.

Suggested-by: Max Reitz <mreitz@redhat.com>
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: change subject, modified to freeze the chain,
do some fixes]
Message-Id: <20201216061703.70908-6-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 qapi/block-core.json | 20 ++++++++-
 block/copy-on-read.c | 98 +++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 115 insertions(+), 3 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -XXX,XX +XXX,XX @@
   'data': { 'throttle-group': 'str',
             'file' : 'BlockdevRef'
              } }

+##
+# @BlockdevOptionsCor:
+#
+# Driver specific block device options for the copy-on-read driver.
+#
+# @bottom: The name of a non-filter node (allocation-bearing layer) that
+#          limits the COR operations in the backing chain (inclusive), so
+#          that no data below this node will be copied by this filter.
+#          If option is absent, the limit is not applied, so that data
+#          from all backing layers may be copied.
+#
+# Since: 6.0
+##
+{ 'struct': 'BlockdevOptionsCor',
+  'base': 'BlockdevOptionsGenericFormat',
+  'data': { '*bottom': 'str' } }
+
 ##
 # @BlockdevOptions:
 #
@@ -XXX,XX +XXX,XX @@
       'bochs':      'BlockdevOptionsGenericFormat',
       'cloop':      'BlockdevOptionsGenericFormat',
       'compress':   'BlockdevOptionsGenericFormat',
-      'copy-on-read':'BlockdevOptionsGenericFormat',
+      'copy-on-read':'BlockdevOptionsCor',
       'dmg':        'BlockdevOptionsGenericFormat',
       'file':       'BlockdevOptionsFile',
       'ftp':        'BlockdevOptionsCurlFtp',
diff --git a/block/copy-on-read.c b/block/copy-on-read.c
index XXXXXXX..XXXXXXX 100644
--- a/block/copy-on-read.c
+++ b/block/copy-on-read.c
@@ -XXX,XX +XXX,XX @@
 #include "block/block_int.h"
 #include "qemu/module.h"
 #include "qapi/error.h"
+#include "qapi/qmp/qdict.h"
 #include "block/copy-on-read.h"


 typedef struct BDRVStateCOR {
     bool active;
+    BlockDriverState *bottom_bs;
+    bool chain_frozen;
 } BDRVStateCOR;


 static int cor_open(BlockDriverState *bs, QDict *options, int flags,
                     Error **errp)
 {
+    BlockDriverState *bottom_bs = NULL;
     BDRVStateCOR *state = bs->opaque;
+    /* Find a bottom node name, if any */
+    const char *bottom_node = qdict_get_try_str(options, "bottom");

     bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
                                BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
@@ -XXX,XX +XXX,XX @@ static int cor_open(BlockDriverState *bs, QDict *options, int flags,
         ((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
             bs->file->bs->supported_zero_flags);

+    if (bottom_node) {
+        bottom_bs = bdrv_find_node(bottom_node);
+        if (!bottom_bs) {
+            error_setg(errp, "Bottom node '%s' not found", bottom_node);
+            qdict_del(options, "bottom");
+            return -EINVAL;
+        }
+        qdict_del(options, "bottom");
+
+        if (!bottom_bs->drv) {
+            error_setg(errp, "Bottom node '%s' not opened", bottom_node);
+            return -EINVAL;
+        }
+
+        if (bottom_bs->drv->is_filter) {
+            error_setg(errp, "Bottom node '%s' is a filter", bottom_node);
+            return -EINVAL;
+        }
+
+        if (bdrv_freeze_backing_chain(bs, bottom_bs, errp) < 0) {
+            return -EINVAL;
+        }
+        state->chain_frozen = true;
+
+        /*
+         * We do freeze the chain, so it shouldn't be removed. Still, storing a
+         * pointer worth bdrv_ref().
+         */
+        bdrv_ref(bottom_bs);
+    }
     state->active = true;
+    state->bottom_bs = bottom_bs;

     /*
      * We don't need to call bdrv_child_refresh_perms() now as the permissions
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn cor_co_preadv_part(BlockDriverState *bs,
                                            size_t qiov_offset,
                                            int flags)
 {
-    return bdrv_co_preadv_part(bs->file, offset, bytes, qiov, qiov_offset,
-                               flags | BDRV_REQ_COPY_ON_READ);
+    int64_t n;
+    int local_flags;
+    int ret;
+    BDRVStateCOR *state = bs->opaque;
+
+    if (!state->bottom_bs) {
+        return bdrv_co_preadv_part(bs->file, offset, bytes, qiov, qiov_offset,
+                                   flags | BDRV_REQ_COPY_ON_READ);
+    }
+
+    while (bytes) {
+        local_flags = flags;
+
+        /* In case of failure, try to copy-on-read anyway */
+        ret = bdrv_is_allocated(bs->file->bs, offset, bytes, &n);
+        if (ret <= 0) {
+            ret = bdrv_is_allocated_above(bdrv_backing_chain_next(bs->file->bs),
+                                          state->bottom_bs, true, offset,
+                                          n, &n);
+            if (ret > 0 || ret < 0) {
+                local_flags |= BDRV_REQ_COPY_ON_READ;
+            }
+            /* Finish earlier if the end of a backing file has been reached */
+            if (n == 0) {
+                break;
+            }
+        }
+
+        ret = bdrv_co_preadv_part(bs->file, offset, n, qiov, qiov_offset,
+                                  local_flags);
+        if (ret < 0) {
+            return ret;
+        }
+
+        offset += n;
+        qiov_offset += n;
+        bytes -= n;
+    }
+
+    return 0;
 }


@@ -XXX,XX +XXX,XX @@ static void cor_lock_medium(BlockDriverState *bs, bool locked)
 }


+static void cor_close(BlockDriverState *bs)
+{
+    BDRVStateCOR *s = bs->opaque;
+
+    if (s->chain_frozen) {
+        s->chain_frozen = false;
+        bdrv_unfreeze_backing_chain(bs, s->bottom_bs);
+    }
+
+    bdrv_unref(s->bottom_bs);
+}
+
+
 static BlockDriver bdrv_copy_on_read = {
     .format_name                        = "copy-on-read",
     .instance_size                      = sizeof(BDRVStateCOR),

     .bdrv_open                          = cor_open,
+    .bdrv_close                         = cor_close,
     .bdrv_child_perm                    = cor_child_perm,

     .bdrv_getlength                     = cor_getlength,
@@ -XXX,XX +XXX,XX @@ void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
     bdrv_drained_begin(bs);
     /* Drop permissions before the graph change. */
     s->active = false;
+    /* unfreeze, as otherwise bdrv_replace_node() will fail */
+    if (s->chain_frozen) {
+        s->chain_frozen = false;
+        bdrv_unfreeze_backing_chain(cor_filter_bs, s->bottom_bs);
+    }
     bdrv_child_refresh_perms(cor_filter_bs, child, &error_abort);
     bdrv_replace_node(cor_filter_bs, bs, &error_abort);

--
2.29.2

Make it possible to specify the iothread where the export will run. By
default the block node can be moved to other AioContexts later and the
export will follow. The fixed-iothread option forces strict behavior
that prevents changing AioContext while the export is active. See the
QAPI docs for details.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200929125516.186715-5-stefanha@redhat.com
[Fix stray '#' character in block-export.json and add missing "(since:
5.2)" as suggested by Eric Blake.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 qapi/block-export.json               | 11 ++++++++++
 block/export/export.c                | 31 +++++++++++++++++++++++++++-
 block/export/vhost-user-blk-server.c |  5 ++++-
 nbd/server.c                         |  2 --
 4 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/qapi/block-export.json b/qapi/block-export.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/block-export.json
+++ b/qapi/block-export.json
@@ -XXX,XX +XXX,XX @@
 #               export before completion is signalled. (since: 5.2;
 #               default: false)
 #
+# @iothread: The name of the iothread object where the export will run. The
+#            default is to use the thread currently associated with the
+#            block node. (since: 5.2)
+#
+# @fixed-iothread: True prevents the block node from being moved to another
+#                  thread while the export is active. If true and @iothread is
+#                  given, export creation fails if the block node cannot be
+#                  moved to the iothread. The default is false. (since: 5.2)
+#
 # Since: 4.2
 ##
 { 'union': 'BlockExportOptions',
   'base': { 'type': 'BlockExportType',
             'id': 'str',
+            '*fixed-iothread': 'bool',
+            '*iothread': 'str',
             'node-name': 'str',
             '*writable': 'bool',
             '*writethrough': 'bool' },
diff --git a/block/export/export.c b/block/export/export.c
index XXXXXXX..XXXXXXX 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -XXX,XX +XXX,XX @@

 #include "block/block.h"
 #include "sysemu/block-backend.h"
+#include "sysemu/iothread.h"
 #include "block/export.h"
 #include "block/nbd.h"
 #include "qapi/error.h"
@@ -XXX,XX +XXX,XX @@ static const BlockExportDriver *blk_exp_find_driver(BlockExportType type)

 BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
 {
+    bool fixed_iothread = export->has_fixed_iothread && export->fixed_iothread;
     const BlockExportDriver *drv;
     BlockExport *exp = NULL;
     BlockDriverState *bs;
-    BlockBackend *blk;
+    BlockBackend *blk = NULL;
     AioContext *ctx;
     uint64_t perm;
     int ret;
@@ -XXX,XX +XXX,XX @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
     ctx = bdrv_get_aio_context(bs);
     aio_context_acquire(ctx);

+    if (export->has_iothread) {
+        IOThread *iothread;
+        AioContext *new_ctx;
+
+        iothread = iothread_by_id(export->iothread);
+        if (!iothread) {
+            error_setg(errp, "iothread \"%s\" not found", export->iothread);
+            goto fail;
+        }
+
+        new_ctx = iothread_get_aio_context(iothread);
+
+        ret = bdrv_try_set_aio_context(bs, new_ctx, errp);
+        if (ret == 0) {
+            aio_context_release(ctx);
+            aio_context_acquire(new_ctx);
+            ctx = new_ctx;
+        } else if (fixed_iothread) {
+            goto fail;
+        }
+    }
+
     /*
      * Block exports are used for non-shared storage migration. Make sure
      * that BDRV_O_INACTIVE is cleared and the image is ready for write
@@ -XXX,XX +XXX,XX @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
     }

     blk = blk_new(ctx, perm, BLK_PERM_ALL);
+
+    if (!fixed_iothread) {
+        blk_set_allow_aio_context_change(blk, true);
+    }
+
     ret = blk_insert_bs(blk, bs, errp);
     if (ret < 0) {
         goto fail;
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index XXXXXXX..XXXXXXX 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -XXX,XX +XXX,XX @@ static const VuDevIface vu_blk_iface = {
 static void blk_aio_attached(AioContext *ctx, void *opaque)
 {
     VuBlkExport *vexp = opaque;
+
+    vexp->export.ctx = ctx;
     vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
 }

 static void blk_aio_detach(void *opaque)
 {
     VuBlkExport *vexp = opaque;
+
     vhost_user_server_detach_aio_context(&vexp->vu_server);
+    vexp->export.ctx = NULL;
 }

 static void
@@ -XXX,XX +XXX,XX @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
                              logical_block_size);

-    blk_set_allow_aio_context_change(exp->blk, true);
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);

diff --git a/nbd/server.c b/nbd/server.c
index XXXXXXX..XXXXXXX 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -XXX,XX +XXX,XX @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
         return ret;
     }

-    blk_set_allow_aio_context_change(blk, true);
-
     QTAILQ_INIT(&exp->clients);
     exp->name = g_strdup(arg->name);
     exp->description = g_strdup(arg->description);
--
2.26.2
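The per-extent decision made by the new `cor_co_preadv_part()` loop can be modeled in isolation. The sketch below is a hedged Python model, not QEMU code: `plan_request` is a hypothetical helper, `COPY_ON_READ` is a stand-in bit for `BDRV_REQ_COPY_ON_READ`, and the extent boundaries (the work `bdrv_is_allocated()`/`bdrv_is_allocated_above()` do) are assumed precomputed. The rule it demonstrates: an extent already allocated in the top image is read plainly, an extent allocated only below the bottom node is also read plainly, and only extents allocated in the sub-chain at or above the bottom node get the copy-on-read flag.

```python
def plan_request(extents, flags=0):
    """Model of the bottom-limited COR loop: for each (length, in_top,
    above_bottom) extent, request copy-on-read only when the data is
    absent from the top image but present at or above the bottom node."""
    COPY_ON_READ = 0x1                     # stand-in for BDRV_REQ_COPY_ON_READ
    plan = []
    for length, in_top, above_bottom in extents:
        local_flags = flags
        if not in_top and above_bottom:
            local_flags |= COPY_ON_READ
        plan.append((length, local_flags))
    return plan


# A 5 MiB read over the test-310 chain, one extent per cluster:
# (length, allocated-in-top, allocated-at-or-above-bottom)
plan = plan_request([(1, False, False),    # only in base: plain read
                     (1, True,  False),    # already in top: plain read
                     (1, False, True),     # in mid: copy on read
                     (1, False, False),    # only in base: plain read
                     (1, False, True)])    # in mid: copy on read
print(plan)   # -> [(1, 0), (1, 0), (1, 1), (1, 0), (1, 1)]
```

This is what makes the filter usable under a block-stream job: data below the stream's base is read through but never copied into the streamed layer.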
Deleted patch
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

test_stream_parallel runs parallel stream jobs, intersecting so that the
top of one is the base of another. That is fine now, but it would become
a problem once we insert the filter, as one job would want to use another
job's filter as its above_base node.

The correct thing to do is to move to the new interface: a "bottom"
argument instead of base. This guarantees that the jobs' actions do not
intersect.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-12-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/030 | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -XXX,XX +XXX,XX @@ class TestParallelOps(iotests.QMPTestCase):
             node_name = 'node%d' % i
             job_id = 'stream-%s' % node_name
             pending_jobs.append(job_id)
-            result = self.vm.qmp('block-stream', device=node_name, job_id=job_id, base=self.imgs[i-2], speed=1024)
+            result = self.vm.qmp('block-stream', device=node_name,
+                                 job_id=job_id, bottom=f'node{i-1}',
+                                 speed=1024)
             self.assert_qmp(result, 'return', {})

         for job in pending_jobs:
--
2.29.2
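The hunk above builds one `block-stream` invocation per streamed node. A hedged standalone sketch of that construction (`stream_cmd` is a hypothetical helper mirroring the hunk; the node numbering follows the test's `node%d` naming, and the iotests harness itself is not reproduced): each job names its device's immediate backing node as `bottom`, so the set of nodes each parallel job streams from is disjoint by construction.

```python
def stream_cmd(i):
    """Build the updated block-stream arguments from the test: the job
    streaming into node i uses node i-1 as its 'bottom', instead of a
    'base' two layers down that overlaps the neighbouring job."""
    node_name = 'node%d' % i
    return 'block-stream', {'device': node_name,
                            'job-id': 'stream-%s' % node_name,
                            'bottom': f'node{i-1}',
                            'speed': 1024}


# Jobs started on every second node, as the test's loop does:
cmds = [stream_cmd(i) for i in range(2, 8, 2)]
print([args['bottom'] for _, args in cmds])   # -> ['node1', 'node3', 'node5']
```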
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Add a direct link to the target bs for convenience and to simplify the
following commit, which will insert a COR filter above the target bs.

This is part of the original commit written by Andrey.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-13-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/stream.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/block/stream.c b/block/stream.c
index XXXXXXX..XXXXXXX 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -XXX,XX +XXX,XX @@ typedef struct StreamBlockJob {
     BlockJob common;
     BlockDriverState *base_overlay; /* COW overlay (stream from this) */
     BlockDriverState *above_base;   /* Node directly above the base */
+    BlockDriverState *target_bs;
     BlockdevOnError on_error;
     char *backing_file_str;
     bool bs_read_only;
@@ -XXX,XX +XXX,XX @@ static void stream_abort(Job *job)
    StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);

     if (s->chain_frozen) {
-        BlockJob *bjob = &s->common;
-        bdrv_unfreeze_backing_chain(blk_bs(bjob->blk), s->above_base);
+        bdrv_unfreeze_backing_chain(s->target_bs, s->above_base);
     }
 }

 static int stream_prepare(Job *job)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
-    BlockJob *bjob = &s->common;
-    BlockDriverState *bs = blk_bs(bjob->blk);
-    BlockDriverState *unfiltered_bs = bdrv_skip_filters(bs);
+    BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
     BlockDriverState *base = bdrv_filter_or_cow_bs(s->above_base);
     BlockDriverState *unfiltered_base = bdrv_skip_filters(base);
     Error *local_err = NULL;
     int ret = 0;

-    bdrv_unfreeze_backing_chain(bs, s->above_base);
+    bdrv_unfreeze_backing_chain(s->target_bs, s->above_base);
     s->chain_frozen = false;

     if (bdrv_cow_child(unfiltered_bs)) {
@@ -XXX,XX +XXX,XX @@ static void stream_clean(Job *job)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockJob *bjob = &s->common;
-    BlockDriverState *bs = blk_bs(bjob->blk);

     /* Reopen the image back in read-only mode if necessary */
     if (s->bs_read_only) {
         /* Give up write permissions before making it read-only */
         blk_set_perm(bjob->blk, 0, BLK_PERM_ALL, &error_abort);
-        bdrv_reopen_set_read_only(bs, true, NULL);
+        bdrv_reopen_set_read_only(s->target_bs, true, NULL);
     }

     g_free(s->backing_file_str);
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockBackend *blk = s->common.blk;
-    BlockDriverState *bs = blk_bs(blk);
-    BlockDriverState *unfiltered_bs = bdrv_skip_filters(bs);
+    BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
     bool enable_cor = !bdrv_cow_child(s->base_overlay);
     int64_t len;
     int64_t offset = 0;
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
         return 0;
     }

-    len = bdrv_getlength(bs);
+    len = bdrv_getlength(s->target_bs);
     if (len < 0) {
         return len;
     }
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
      * account.
      */
     if (enable_cor) {
-        bdrv_enable_copy_on_read(bs);
+        bdrv_enable_copy_on_read(s->target_bs);
     }

     for ( ; offset < len; offset += n) {
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
     }

     if (enable_cor) {
-        bdrv_disable_copy_on_read(bs);
+        bdrv_disable_copy_on_read(s->target_bs);
     }

     /* Do not remove the backing file if an error was there but ignored. */
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
     s->base_overlay = base_overlay;
     s->above_base = above_base;
     s->backing_file_str = g_strdup(backing_file_str);
+    s->target_bs = bs;
     s->bs_read_only = bs_read_only;
     s->chain_frozen = true;

--
2.29.2
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

This patch completes the series with the COR-filter applied to
block-stream operations.

Adding the filter makes it possible, in the future, to implement
discarding of copied regions in backing files during the block-stream
job, to reduce disk overuse (we need control over permissions).

Also, the filter is now smart enough to do copy-on-read with a
specified base, so guest reads benefit even when block-streaming only
part of the backing chain.

Several iotests are slightly modified due to the filter insertion.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216061703.70908-14-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/stream.c             | 105 ++++++++++++++++++++++---------------
 tests/qemu-iotests/030     |   8 +--
 tests/qemu-iotests/141.out |   2 +-
 tests/qemu-iotests/245     |  20 ++++---
 4 files changed, 80 insertions(+), 55 deletions(-)

diff --git a/block/stream.c b/block/stream.c
index XXXXXXX..XXXXXXX 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -XXX,XX +XXX,XX @@
 #include "block/blockjob_int.h"
 #include "qapi/error.h"
 #include "qapi/qmp/qerror.h"
+#include "qapi/qmp/qdict.h"
 #include "qemu/ratelimit.h"
 #include "sysemu/block-backend.h"
+#include "block/copy-on-read.h"

 enum {
     /*
@@ -XXX,XX +XXX,XX @@ typedef struct StreamBlockJob {
     BlockJob common;
     BlockDriverState *base_overlay; /* COW overlay (stream from this) */
     BlockDriverState *above_base;   /* Node directly above the base */
+    BlockDriverState *cor_filter_bs;
     BlockDriverState *target_bs;
     BlockdevOnError on_error;
     char *backing_file_str;
     bool bs_read_only;
-    bool chain_frozen;
 } StreamBlockJob;

 static int coroutine_fn stream_populate(BlockBackend *blk,
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_populate(BlockBackend *blk,
 {
     assert(bytes < SIZE_MAX);

-    return blk_co_preadv(blk, offset, bytes, NULL,
-                         BDRV_REQ_COPY_ON_READ | BDRV_REQ_PREFETCH);
-}
-
-static void stream_abort(Job *job)
-{
-    StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
-
-    if (s->chain_frozen) {
-        bdrv_unfreeze_backing_chain(s->target_bs, s->above_base);
-    }
+    return blk_co_preadv(blk, offset, bytes, NULL, BDRV_REQ_PREFETCH);
 }

 static int stream_prepare(Job *job)
@@ -XXX,XX +XXX,XX @@ static int stream_prepare(Job *job)
     Error *local_err = NULL;
     int ret = 0;

-    bdrv_unfreeze_backing_chain(s->target_bs, s->above_base);
-    s->chain_frozen = false;
+    /* We should drop filter at this point, as filter hold the backing chain */
+    bdrv_cor_filter_drop(s->cor_filter_bs);
+    s->cor_filter_bs = NULL;

     if (bdrv_cow_child(unfiltered_bs)) {
         const char *base_id = NULL, *base_fmt = NULL;
@@ -XXX,XX +XXX,XX @@ static void stream_clean(Job *job)
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockJob *bjob = &s->common;

+    if (s->cor_filter_bs) {
+        bdrv_cor_filter_drop(s->cor_filter_bs);
+        s->cor_filter_bs = NULL;
+    }
+
     /* Reopen the image back in read-only mode if necessary */
     if (s->bs_read_only) {
         /* Give up write permissions before making it read-only */
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockBackend *blk = s->common.blk;
     BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
-    bool enable_cor = !bdrv_cow_child(s->base_overlay);
     int64_t len;
     int64_t offset = 0;
     uint64_t delay_ns = 0;
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
     }
     job_progress_set_remaining(&s->common.job, len);

-    /* Turn on copy-on-read for the whole block device so that guest read
-     * requests help us make progress. Only do this when copying the entire
-     * backing chain since the copy-on-read operation does not take base into
-     * account.
-     */
-    if (enable_cor) {
-        bdrv_enable_copy_on_read(s->target_bs);
-    }
-
     for ( ; offset < len; offset += n) {
         bool copy;
         int ret;
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
         }
     }

-    if (enable_cor) {
-        bdrv_disable_copy_on_read(s->target_bs);
-    }
-
     /* Do not remove the backing file if an error was there but ignored. */
     return error;
 }
@@ -XXX,XX +XXX,XX @@ static const BlockJobDriver stream_job_driver = {
         .free          = block_job_free,
         .run           = stream_run,
         .prepare       = stream_prepare,
-        .abort         = stream_abort,
         .clean         = stream_clean,
         .user_resume   = block_job_user_resume,
     },
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
     bool bs_read_only;
     int basic_flags = BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED;
     BlockDriverState *base_overlay;
+    BlockDriverState *cor_filter_bs = NULL;
     BlockDriverState *above_base;
+    QDict *opts;

     assert(!(base && bottom));
     assert(!(backing_file_str && bottom));
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
         }
     }

-    if (bdrv_freeze_backing_chain(bs, above_base, errp) < 0) {
-        return;
-    }
-
     /* Make sure that the image is opened in read-write mode */
     bs_read_only = bdrv_is_read_only(bs);
     if (bs_read_only) {
-        if (bdrv_reopen_set_read_only(bs, false, errp) != 0) {
-            bs_read_only = false;
-            goto fail;
+        int ret;
+        /* Hold the chain during reopen */
+        if (bdrv_freeze_backing_chain(bs, above_base, errp) < 0) {
+            return;
+        }
+
+        ret = bdrv_reopen_set_read_only(bs, false, errp);
+
+        /* failure, or cor-filter will hold the chain */
+        bdrv_unfreeze_backing_chain(bs, above_base);
+
+        if (ret < 0) {
+            return;
         }
     }

-    /* Prevent concurrent jobs trying to modify the graph structure here, we
-     * already have our own plans. Also don't allow resize as the image size is
-     * queried only at the job start and then cached. */
-    s = block_job_create(job_id, &stream_job_driver, NULL, bs,
-                         basic_flags | BLK_PERM_GRAPH_MOD,
+    opts = qdict_new();
+
+    qdict_put_str(opts, "driver", "copy-on-read");
+    qdict_put_str(opts, "file", bdrv_get_node_name(bs));
+    /* Pass the base_overlay node name as 'bottom' to COR driver */
+    qdict_put_str(opts, "bottom", base_overlay->node_name);
+    if (filter_node_name) {
+        qdict_put_str(opts, "node-name", filter_node_name);
+    }
+
+    cor_filter_bs = bdrv_insert_node(bs, opts, BDRV_O_RDWR, errp);
+    if (!cor_filter_bs) {
+        goto fail;
+    }
+
+    if (!filter_node_name) {
+        cor_filter_bs->implicit = true;
+    }
+
+    s = block_job_create(job_id, &stream_job_driver, NULL, cor_filter_bs,
+                         BLK_PERM_CONSISTENT_READ,
                          basic_flags | BLK_PERM_WRITE,
                          speed, creation_flags, NULL, NULL, errp);
     if (!s) {
         goto fail;
     }

+    /*
+     * Prevent concurrent jobs trying to modify the graph structure here, we
+     * already have our own plans. Also don't allow resize as the image size is
+     * queried only at the job start and then cached.
+     */
+    if (block_job_add_bdrv(&s->common, "active node", bs, 0,
+                           basic_flags | BLK_PERM_WRITE, &error_abort)) {
+        goto fail;
+    }
+
     /* Block all intermediate nodes between bs and base, because they will
      * disappear from the chain after this operation. The streaming job reads
      * every block only once, assuming that it doesn't change, so forbid writes
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
     s->base_overlay = base_overlay;
     s->above_base = above_base;
     s->backing_file_str = g_strdup(backing_file_str);
+    s->cor_filter_bs = cor_filter_bs;
     s->target_bs = bs;
     s->bs_read_only = bs_read_only;
-    s->chain_frozen = true;

     s->on_error = on_error;
     trace_stream_start(bs, base, s);
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
     return;

 fail:
+    if (cor_filter_bs) {
+        bdrv_cor_filter_drop(cor_filter_bs);
+    }
     if (bs_read_only) {
         bdrv_reopen_set_read_only(bs, true, NULL);
     }
-    bdrv_unfreeze_backing_chain(bs, above_base);
 }
diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -XXX,XX +XXX,XX @@ class TestParallelOps(iotests.QMPTestCase):
         self.assert_no_active_block_jobs()

         # Set a speed limit to make sure that this job blocks the rest
-        result = self.vm.qmp('block-stream', device='node4', job_id='stream-node4', base=self.imgs[1], speed=1024*1024)
+        result = self.vm.qmp('block-stream', device='node4',
+                             job_id='stream-node4', base=self.imgs[1],
+                             filter_node_name='stream-filter', speed=1024*1024)
         self.assert_qmp(result, 'return', {})

         result = self.vm.qmp('block-stream', device='node5', job_id='stream-node5', base=self.imgs[2])
         self.assert_qmp(result, 'error/desc',
-            "Node 'node4' is busy: block device is in use by block job: stream")
+            "Node 'stream-filter' is busy: block device is in use by block job: stream")

         result = self.vm.qmp('block-stream', device='node3', job_id='stream-node3', base=self.imgs[2])
         self.assert_qmp(result, 'error/desc',
@@ -XXX,XX +XXX,XX @@ class TestParallelOps(iotests.QMPTestCase):
         # block-commit should also fail if it touches nodes used by the stream job
         result = self.vm.qmp('block-commit', device='drive0', base=self.imgs[4], job_id='commit-node4')
         self.assert_qmp(result, 'error/desc',
-            "Node 'node4' is busy: block device is in use by block job: stream")
+            "Node 'stream-filter' is busy: block device is in use by block job: stream")

         result = self.vm.qmp('block-commit', device='drive0', base=self.imgs[1], top=self.imgs[3], job_id='commit-node1')
         self.assert_qmp(result, 'error/desc',
diff --git a/tests/qemu-iotests/141.out b/tests/qemu-iotests/141.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/141.out
+++ b/tests/qemu-iotests/141.out
@@ -XXX,XX +XXX,XX @@ wrote 1048576/1048576 bytes at offset 0
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
 {'execute': 'blockdev-del',
  'arguments': {'node-name': 'drv0'}}
-{"error": {"class": "GenericError", "desc": "Node drv0 is in use"}}
+{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: stream"}}
 {'execute': 'block-job-cancel',
  'arguments': {'device': 'job0'}}
 {"return": {}}
diff --git a/tests/qemu-iotests/245 b/tests/qemu-iotests/245
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/245
+++ b/tests/qemu-iotests/245
@@ -XXX,XX +XXX,XX @@ class TestBlockdevReopen(iotests.QMPTestCase):

         # hd1 <- hd0
         result = self.vm.qmp('block-stream', conv_keys = True, job_id = 'stream0',
-                             device = 'hd1', auto_finalize = False)
+                             device = 'hd1', filter_node_name='cor',
+                             auto_finalize = False)
         self.assert_qmp(result, 'return', {})

-        # We can't reopen with the original options because that would
-        # make hd1 read-only and block-stream requires it to be read-write
-        # (Which error message appears depends on whether the stream job is
-        # already done with copying at this point.)
+        # We can't reopen with the original options because there is a filter
+        # inserted by stream job above hd1.
         self.reopen(opts, {},
-                    ["Can't set node 'hd1' to r/o with copy-on-read enabled",
-                     "Cannot make block node read-only, there is a writer on it"])
+                    "Cannot change the option 'backing.backing.file.node-name'")
+
+        # We can't reopen hd1 to read-only, as block-stream requires it to be
+        # read-write
+        self.reopen(opts['backing'], {'read-only': True},
+                    "Cannot make block node read-only, there is a writer on it")

         # We can't remove hd2 while the stream job is ongoing
         opts['backing']['backing'] = None
-        self.reopen(opts, {'backing.read-only': False}, "Cannot change 'backing' link from 'hd1' to 'hd2'")
+        self.reopen(opts['backing'], {'read-only': False},
+                    "Cannot change 'backing' link from 'hd1' to 'hd2'")

         # We can detach hd1 from hd0 because it doesn't affect the stream job
         opts['backing'] = None
--
2.29.2

Allow the number of queues to be configured using --export
vhost-user-blk,num-queues=N. This setting should match the QEMU --device
vhost-user-blk-pci,num-queues=N setting, but QEMU vhost-user-blk.c lowers
its own value if the vhost-user-blk backend offers fewer queues than
QEMU.

The vhost-user-blk-server.c code is already capable of multi-queue. All
virtqueue processing runs in the same AioContext. No new locking is
needed.

Add the num-queues=N option and set the VIRTIO_BLK_F_MQ feature bit.
Note that the feature bit only announces the presence of the num_queues
configuration space field. It does not promise that there is more than 1
virtqueue, so we can set it unconditionally.

I tested multi-queue by running a random read fio test with numjobs=4 on
an -smp 4 guest. After the benchmark finished, the guest /proc/interrupts
file showed activity on all 4 virtio-blk MSI-X. The /sys/block/vda/mq/
directory shows that Linux blk-mq has 4 queues configured.

An automated test is included in the next commit.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Message-id: 20201001144604.559733-2-stefanha@redhat.com
[Fixed accidental tab characters as suggested by Markus Armbruster
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 qapi/block-export.json               | 10 +++++++---
 block/export/vhost-user-blk-server.c | 24 ++++++++++++++++++------
 2 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/qapi/block-export.json b/qapi/block-export.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/block-export.json
+++ b/qapi/block-export.json
@@ -XXX,XX +XXX,XX @@
 #        SocketAddress types are supported. Passed fds must be UNIX domain
 #        sockets.
 # @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
+# @num-queues: Number of request virtqueues. Must be greater than 0. Defaults
+#              to 1.
 #
 # Since: 5.2
 ##
 { 'struct': 'BlockExportOptionsVhostUserBlk',
-  'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
+  'data': { 'addr': 'SocketAddress',
+            '*logical-block-size': 'size',
+            '*num-queues': 'uint16'} }

 ##
 # @NbdServerAddOptions:
@@ -XXX,XX +XXX,XX @@
 { 'union': 'BlockExportOptions',
   'base': { 'type': 'BlockExportType',
             'id': 'str',
-	     '*fixed-iothread': 'bool',
-	     '*iothread': 'str',
+            '*fixed-iothread': 'bool',
+            '*iothread': 'str',
             'node-name': 'str',
             '*writable': 'bool',
             '*writethrough': 'bool' },
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index XXXXXXX..XXXXXXX 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -XXX,XX +XXX,XX @@
 #include "util/block-helpers.h"

 enum {
-    VHOST_USER_BLK_MAX_QUEUES = 1,
+    VHOST_USER_BLK_NUM_QUEUES_DEFAULT = 1,
 };
 struct virtio_blk_inhdr {
     unsigned char status;
@@ -XXX,XX +XXX,XX @@ static uint64_t vu_blk_get_features(VuDev *dev)
                1ull << VIRTIO_BLK_F_DISCARD |
                1ull << VIRTIO_BLK_F_WRITE_ZEROES |
                1ull << VIRTIO_BLK_F_CONFIG_WCE |
+               1ull << VIRTIO_BLK_F_MQ |
                1ull << VIRTIO_F_VERSION_1 |
                1ull << VIRTIO_RING_F_INDIRECT_DESC |
                1ull << VIRTIO_RING_F_EVENT_IDX |
@@ -XXX,XX +XXX,XX @@ static void blk_aio_detach(void *opaque)

 static void
 vu_blk_initialize_config(BlockDriverState *bs,
-                         struct virtio_blk_config *config, uint32_t blk_size)
+                         struct virtio_blk_config *config,
+                         uint32_t blk_size,
+                         uint16_t num_queues)
 {
     config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
     config->blk_size = blk_size;
@@ -XXX,XX +XXX,XX @@ vu_blk_initialize_config(BlockDriverState *bs,
     config->seg_max = 128 - 2;
     config->min_io_size = 1;
     config->opt_io_size = 1;
-    config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
+    config->num_queues = num_queues;
     config->max_discard_sectors = 32768;
     config->max_discard_seg = 1;
     config->discard_sector_alignment = config->blk_size >> 9;
@@ -XXX,XX +XXX,XX @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     BlockExportOptionsVhostUserBlk *vu_opts = &opts->u.vhost_user_blk;
     Error *local_err = NULL;
     uint64_t logical_block_size;
+    uint16_t num_queues = VHOST_USER_BLK_NUM_QUEUES_DEFAULT;

     vexp->writable = opts->writable;
     vexp->blkcfg.wce = 0;
@@ -XXX,XX +XXX,XX @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     }
     vexp->blk_size = logical_block_size;
     blk_set_guest_block_size(exp->blk, logical_block_size);
+
+    if (vu_opts->has_num_queues) {
+        num_queues = vu_opts->num_queues;
+    }
+    if (num_queues == 0) {
+        error_setg(errp, "num-queues must be greater than 0");
+        return -EINVAL;
+    }
+
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
-                             logical_block_size);
+                             logical_block_size, num_queues);

     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);

     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
-                                 VHOST_USER_BLK_MAX_QUEUES, &vu_blk_iface,
-                                 errp)) {
+                                 num_queues, &vu_blk_iface, errp)) {
         blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                         blk_aio_detach, vexp);
         return -EADDRNOTAVAIL;
--
2.26.2
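The num-queues handling above (fall back to a default when the option is absent, reject an explicit zero) can be sketched independently of the QEMU types; `resolve_num_queues` is a hypothetical helper mirroring that logic:

```python
NUM_QUEUES_DEFAULT = 1  # mirrors VHOST_USER_BLK_NUM_QUEUES_DEFAULT

def resolve_num_queues(requested=None):
    """Use the default when num-queues was not given (requested is None),
    otherwise honor the request, but reject an explicit zero the way
    vu_blk_exp_create() returns -EINVAL."""
    num_queues = NUM_QUEUES_DEFAULT if requested is None else requested
    if num_queues == 0:
        raise ValueError('num-queues must be greater than 0')
    return num_queues

print(resolve_num_queues())   # -> 1
print(resolve_num_queues(4))  # -> 4
```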
There are a couple of environment variables that we fetch with
os.environ.get() without supplying a default. Clearly they are required
and expected to be set by the ./check script (as evidenced by
execute_setup_common(), which checks for test_dir and
qemu_default_machine to be set, and aborts if they are not).

Using .get() this way has the disadvantage of returning an Optional[str]
type, which mypy will complain about when tests just assume these values
to be str.

Use [] instead, which raises a KeyError for environment variables that
are not set. When this exception is raised, catch it and move the abort
code from execute_setup_common() there.

Drop the 'assert iotests.sock_dir is not None' from iotest 300, because
that sort of thing is precisely what this patch wants to prevent.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-2-mreitz@redhat.com>
---
 tests/qemu-iotests/300        |  1 -
 tests/qemu-iotests/iotests.py | 26 +++++++++++++-------------
 2 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/tests/qemu-iotests/300 b/tests/qemu-iotests/300
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/300
+++ b/tests/qemu-iotests/300
@@ -XXX,XX +XXX,XX @@ import qemu

 BlockBitmapMapping = List[Dict[str, Union[str, List[Dict[str, str]]]]]

-assert iotests.sock_dir is not None
 mig_sock = os.path.join(iotests.sock_dir, 'mig_sock')


diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -XXX,XX +XXX,XX @@ qemu_opts = os.environ.get('QEMU_OPTIONS', '').strip().split(' ')

 imgfmt = os.environ.get('IMGFMT', 'raw')
 imgproto = os.environ.get('IMGPROTO', 'file')
-test_dir = os.environ.get('TEST_DIR')
-sock_dir = os.environ.get('SOCK_DIR')
 output_dir = os.environ.get('OUTPUT_DIR', '.')
-cachemode = os.environ.get('CACHEMODE')
-aiomode = os.environ.get('AIOMODE')
-qemu_default_machine = os.environ.get('QEMU_DEFAULT_MACHINE')
+
+try:
+    test_dir = os.environ['TEST_DIR']
+    sock_dir = os.environ['SOCK_DIR']
+    cachemode = os.environ['CACHEMODE']
+    aiomode = os.environ['AIOMODE']
+    qemu_default_machine = os.environ['QEMU_DEFAULT_MACHINE']
+except KeyError:
+    # We are using these variables as proxies to indicate that we're
+    # not being run via "check". There may be other things set up by
+    # "check" that individual test cases rely on.
+    sys.stderr.write('Please run this test via the "check" script\n')
+    sys.exit(os.EX_USAGE)

 socket_scm_helper = os.environ.get('SOCKET_SCM_HELPER', 'socket_scm_helper')

@@ -XXX,XX +XXX,XX @@ def execute_setup_common(supported_fmts: Sequence[str] = (),
     """
     # Note: Python 3.6 and pylint do not like 'Collection' so use 'Sequence'.

-    # We are using TEST_DIR and QEMU_DEFAULT_MACHINE as proxies to
-    # indicate that we're not being run via "check". There may be
-    # other things set up by "check" that individual test cases rely
-    # on.
-    if test_dir is None or qemu_default_machine is None:
-        sys.stderr.write('Please run this test via the "check" script\n')
-        sys.exit(os.EX_USAGE)
-
     debug = '-d' in sys.argv
     if debug:
         sys.argv.remove('-d')
--
2.29.2
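The typing point the commit message makes can be shown in isolation: `os.environ[name]` is typed as returning `str` (it raises `KeyError` when the variable is unset), while `os.environ.get(name)` returns `Optional[str]`. A minimal sketch; `required_env` is a hypothetical helper, not part of iotests.py:

```python
import os
import sys

def required_env(name: str) -> str:
    """Return the value of a required environment variable.

    Because os.environ[name] raises KeyError rather than returning None,
    mypy infers a plain str return type here, with no Optional[str] to
    complain about at the call sites.
    """
    try:
        return os.environ[name]
    except KeyError:
        sys.stderr.write('Please run this test via the "check" script\n')
        sys.exit(os.EX_USAGE)
```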
Instead of checking iotests.py only, check all Python files in the
qemu-iotests/ directory. Of course, most of them do not pass, so there
is an extensive skip list for now. (The only files that do pass are
209, 254, 283, and iotests.py.)

(Alternatively, we could have the opposite, i.e. an explicit list of
files that we do want to check, but I think it is better to check files
by default.)

Unless started in debug mode (./check -d), the output has no information
on which files are tested, so we will not have a problem e.g. with
backports, where some files may be missing when compared to upstream.

Besides the technical rewrite, some more things are changed:

- For the pylint invocation, PYTHONPATH is adjusted. This mirrors
  setting MYPYPATH for mypy.

- Also, MYPYPATH is now derived from PYTHONPATH, so that we include
  paths set by the environment. Maybe at some point we want to let the
  check script add '../../python/' to PYTHONPATH so that iotests.py does
  not need to do that.

- Passing --notes=FIXME,XXX to pylint suppresses warnings for TODO
  comments. TODO is fine, we do not need 297 to complain about such
  comments.

- The "Success" line from mypy's output is suppressed, because (A) it
  does not add useful information, and (B) it would leak information
  about the files having been tested to the reference output, which we
  decidedly do not want.

Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210118105720.14824-3-mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/297     | 112 +++++++++++++++++++++++++++++--------
 tests/qemu-iotests/297.out |   5 +-
 2 files changed, 92 insertions(+), 25 deletions(-)

diff --git a/tests/qemu-iotests/297 b/tests/qemu-iotests/297
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/297
+++ b/tests/qemu-iotests/297
@@ -XXX,XX +XXX,XX @@
-#!/usr/bin/env bash
+#!/usr/bin/env python3
 # group: meta
 #
 # Copyright (C) 2020 Red Hat, Inc.
@@ -XXX,XX +XXX,XX @@
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.

-seq=$(basename $0)
-echo "QA output created by $seq"
+import os
+import re
+import shutil
+import subprocess
+import sys

-status=1	# failure is the default!
+import iotests

-# get standard environment
-. ./common.rc

-if ! type -p "pylint-3" > /dev/null; then
-    _notrun "pylint-3 not found"
-fi
-if ! type -p "mypy" > /dev/null; then
-    _notrun "mypy not found"
-fi
+# TODO: Empty this list!
+SKIP_FILES = (
+    '030', '040', '041', '044', '045', '055', '056', '057', '065', '093',
+    '096', '118', '124', '129', '132', '136', '139', '147', '148', '149',
+    '151', '152', '155', '163', '165', '169', '194', '196', '199', '202',
+    '203', '205', '206', '207', '208', '210', '211', '212', '213', '216',
+    '218', '219', '222', '224', '228', '234', '235', '236', '237', '238',
+    '240', '242', '245', '246', '248', '255', '256', '257', '258', '260',
+    '262', '264', '266', '274', '277', '280', '281', '295', '296', '298',
+    '299', '300', '302', '303', '304', '307',
+    'nbd-fault-injector.py', 'qcow2.py', 'qcow2_format.py', 'qed.py'
+)

-pylint-3 --score=n iotests.py

-MYPYPATH=../../python/ mypy --warn-unused-configs --disallow-subclassing-any \
-    --disallow-any-generics --disallow-incomplete-defs \
-    --disallow-untyped-decorators --no-implicit-optional \
-    --warn-redundant-casts --warn-unused-ignores \
-    --no-implicit-reexport iotests.py
+def is_python_file(filename):
+    if not os.path.isfile(filename):
+        return False

-# success, all done
-echo "*** done"
-rm -f $seq.full
-status=0
+    if filename.endswith('.py'):
+        return True
+
+    with open(filename) as f:
+        try:
+            first_line = f.readline()
+            return re.match('^#!.*python', first_line) is not None
+        except UnicodeDecodeError:  # Ignore binary files
+            return False
+
+
+def run_linters():
+    files = [filename for filename in (set(os.listdir('.')) - set(SKIP_FILES))
+             if is_python_file(filename)]
+
+    iotests.logger.debug('Files to be checked:')
+    iotests.logger.debug(', '.join(sorted(files)))
+
+    print('=== pylint ===')
+    sys.stdout.flush()
+
+    # Todo notes are fine, but fixme's or xxx's should probably just be
+    # fixed (in tests, at least)
+    env = os.environ.copy()
+    qemu_module_path = os.path.join(os.path.dirname(__file__),
+                                    '..', '..', 'python')
+    try:
+        env['PYTHONPATH'] += os.pathsep + qemu_module_path
+    except KeyError:
+        env['PYTHONPATH'] = qemu_module_path
+    subprocess.run(('pylint-3', '--score=n', '--notes=FIXME,XXX', *files),
+                   env=env, check=False)
+
+    print('=== mypy ===')
+    sys.stdout.flush()
+
+    # We have to call mypy separately for each file.  Otherwise, it
+    # will interpret all given files as belonging together (i.e., they
+    # may not both define the same classes, etc.; most notably, they
+    # must not both define the __main__ module).
+    env['MYPYPATH'] = env['PYTHONPATH']
+    for filename in files:
+        p = subprocess.run(('mypy',
+                            '--warn-unused-configs',
+                            '--disallow-subclassing-any',
+                            '--disallow-any-generics',
+                            '--disallow-incomplete-defs',
+                            '--disallow-untyped-decorators',
+                            '--no-implicit-optional',
+                            '--warn-redundant-casts',
+                            '--warn-unused-ignores',
+                            '--no-implicit-reexport',
+                            filename),
+                           env=env,
+                           check=False,
+                           stdout=subprocess.PIPE,
+                           stderr=subprocess.STDOUT,
+                           universal_newlines=True)
+
+        if p.returncode != 0:
+            print(p.stdout)
+
+
+for linter in ('pylint-3', 'mypy'):
+    if shutil.which(linter) is None:
+        iotests.notrun(f'{linter} not found')
+
+iotests.script_main(run_linters)
diff --git a/tests/qemu-iotests/297.out b/tests/qemu-iotests/297.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/297.out
+++ b/tests/qemu-iotests/297.out
@@ -XXX,XX +XXX,XX @@
-QA output created by 297
-Success: no issues found in 1 source file
-*** done
+=== pylint ===
+=== mypy ===
--
2.29.2
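The PYTHONPATH adjustment above uses a try/except KeyError to append a path entry whether or not the variable already exists in the environment copy. The same idiom, extracted into a standalone helper for illustration (`append_path` is not a function in the patch):

```python
import os

def append_path(env: dict, var: str, path: str) -> None:
    """Append a search-path entry to env[var], creating the variable
    when it is absent -- the try/except KeyError idiom 297 uses for
    PYTHONPATH before deriving MYPYPATH from it."""
    try:
        env[var] += os.pathsep + path
    except KeyError:
        env[var] = path
```

Starting from `env = os.environ.copy()`, two calls on the same variable yield a properly `os.pathsep`-joined value.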
Deleted patch
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-4-mreitz@redhat.com>
---
tests/qemu-iotests/124 | 8 +-------
tests/qemu-iotests/iotests.py | 11 +++++++----
2 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/tests/qemu-iotests/124 b/tests/qemu-iotests/124
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/124
+++ b/tests/qemu-iotests/124
@@ -XXX,XX +XXX,XX @@

import os
import iotests
+from iotests import try_remove


def io_write_patterns(img, patterns):
@@ -XXX,XX +XXX,XX @@ def io_write_patterns(img, patterns):
iotests.qemu_io('-c', 'write -P%s %s %s' % pattern, img)


-def try_remove(img):
- try:
- os.remove(img)
- except OSError:
- pass
-
-
def transaction_action(action, **kwargs):
return {
'type': action,
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -XXX,XX +XXX,XX @@ class FilePath:
return False


+def try_remove(img):
+ try:
+ os.remove(img)
+ except OSError:
+ pass
+
def file_path_remover():
for path in reversed(file_path_remover.paths):
- try:
- os.remove(path)
- except OSError:
- pass
+ try_remove(path)


def file_path(*names, base_dir=test_dir):
--
2.29.2
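The `try_remove()` helper being moved into iotests.py above is small enough to sketch in isolation; cleanup code can then call it unconditionally, whether or not the test ever created the image:

```python
import os
import tempfile

def try_remove(img):
    """Remove a file, ignoring the error if it does not exist."""
    try:
        os.remove(img)
    except OSError:
        pass
```

Catching `OSError` (rather than only `FileNotFoundError`) matches the helper above, so other removal failures during teardown are also silently ignored.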
Deleted patch
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-5-mreitz@redhat.com>
---
tests/qemu-iotests/129 | 2 ++
1 file changed, 2 insertions(+)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
result = self.vm.qmp("block_set_io_throttle", conv_keys=False,
**params)
self.vm.shutdown()
+ for img in (self.test_img, self.target_img, self.base_img):
+ iotests.try_remove(img)

def do_test_stop(self, cmd, **args):
"""Test 'stop' while block job is running on a throttled drive.
--
2.29.2
Deleted patch
@busy is false when the job is paused, which happens all the time
because that is how jobs yield (e.g. for mirror at least since commit
565ac01f8d3).

Back when 129 was added (2015), perhaps there was no better way of
checking whether the job was still actually running. Now we have the
@status field (as of 58b295ba52c, i.e. 2018), which can give us exactly
that information.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-6-mreitz@redhat.com>
---
tests/qemu-iotests/129 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
result = self.vm.qmp("stop")
self.assert_qmp(result, 'return', {})
result = self.vm.qmp("query-block-jobs")
- self.assert_qmp(result, 'return[0]/busy', True)
+ self.assert_qmp(result, 'return[0]/status', 'running')
self.assert_qmp(result, 'return[0]/ready', False)

def test_drive_mirror(self):
--
2.29.2
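The `return[0]/status` argument passed to `assert_qmp()` above is a path into the QMP reply. A simplified stand-in resolver (not the actual iotests implementation) shows how such a path walks the reply dict:

```python
import re

def resolve_qmp_path(reply, path):
    """Walk a QMP reply dict along a path like 'return[0]/status'."""
    node = reply
    for component in path.split('/'):
        # Split an optional trailing list index off the dict key.
        m = re.match(r'^([^[]*)(\[\d+\])?$', component)
        key, index = m.group(1), m.group(2)
        if key:
            node = node[key]
        if index:
            node = node[int(index[1:-1])]
    return node
```

With this, the assertion in the hunk above amounts to checking that the resolved value equals `'running'`.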
Deleted patch
Throttling on the BB has not affected block jobs in a while, so it is
possible that one of the jobs in 129 finishes before the VM is stopped.
We can fix that by running the job from a throttle node.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-7-mreitz@redhat.com>
---
tests/qemu-iotests/129 | 37 +++++++++++++------------------------
1 file changed, 13 insertions(+), 24 deletions(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
iotests.qemu_img('create', '-f', iotests.imgfmt, self.test_img,
"-b", self.base_img, '-F', iotests.imgfmt)
iotests.qemu_io('-f', iotests.imgfmt, '-c', 'write -P0x5d 1M 128M', self.test_img)
- self.vm = iotests.VM().add_drive(self.test_img)
+ self.vm = iotests.VM()
+ self.vm.add_object('throttle-group,id=tg0,x-bps-total=1024')
+
+ source_drive = 'driver=throttle,' \
+ 'throttle-group=tg0,' \
+ f'file.driver={iotests.imgfmt},' \
+ f'file.file.filename={self.test_img}'
+
+ self.vm.add_drive(None, source_drive)
self.vm.launch()

def tearDown(self):
- params = {"device": "drive0",
- "bps": 0,
- "bps_rd": 0,
- "bps_wr": 0,
- "iops": 0,
- "iops_rd": 0,
- "iops_wr": 0,
- }
- result = self.vm.qmp("block_set_io_throttle", conv_keys=False,
- **params)
self.vm.shutdown()
for img in (self.test_img, self.target_img, self.base_img):
iotests.try_remove(img)
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
def do_test_stop(self, cmd, **args):
"""Test 'stop' while block job is running on a throttled drive.
The 'stop' command shouldn't drain the job"""
- params = {"device": "drive0",
- "bps": 1024,
- "bps_rd": 0,
- "bps_wr": 0,
- "iops": 0,
- "iops_rd": 0,
- "iops_wr": 0,
- }
- result = self.vm.qmp("block_set_io_throttle", conv_keys=False,
- **params)
- self.assert_qmp(result, 'return', {})
result = self.vm.qmp(cmd, **args)
self.assert_qmp(result, 'return', {})
+
result = self.vm.qmp("stop")
self.assert_qmp(result, 'return', {})
result = self.vm.qmp("query-block-jobs")
+
self.assert_qmp(result, 'return[0]/status', 'running')
self.assert_qmp(result, 'return[0]/ready', False)

def test_drive_mirror(self):
self.do_test_stop("drive-mirror", device="drive0",
- target=self.target_img,
+ target=self.target_img, format=iotests.imgfmt,
sync="full")

def test_drive_backup(self):
self.do_test_stop("drive-backup", device="drive0",
- target=self.target_img,
+ target=self.target_img, format=iotests.imgfmt,
sync="full")

def test_block_commit(self):
--
2.29.2
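setUp() above builds the -drive option string by stacking a throttle filter over the format driver, which in turn sits over the file driver. A sketch of the same construction as a helper, with illustrative argument values:

```python
def throttle_drive_opts(imgfmt, filename, group='tg0'):
    """Build a -drive option string placing a throttle filter above
    the format layer, which in turn sits above the file layer."""
    return ('driver=throttle,'
            f'throttle-group={group},'
            f'file.driver={imgfmt},'
            f'file.file.filename={filename}')
```

The dotted `file.` prefixes express the node hierarchy: each level of nesting hands the remaining options to the child node below it.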
Deleted patch
Before this patch, test_block_commit() performs an active commit, which
under the hood is a mirror job. If we want to test various different
block jobs, we should perhaps run an actual commit job instead.

Doing so requires adding an overlay above the source node before the
commit is done (and then specifying the source node as the top node for
the commit job).

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-8-mreitz@redhat.com>
---
tests/qemu-iotests/129 | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
test_img = os.path.join(iotests.test_dir, 'test.img')
target_img = os.path.join(iotests.test_dir, 'target.img')
base_img = os.path.join(iotests.test_dir, 'base.img')
+ overlay_img = os.path.join(iotests.test_dir, 'overlay.img')

def setUp(self):
iotests.qemu_img('create', '-f', iotests.imgfmt, self.base_img, "1G")
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
self.vm.add_object('throttle-group,id=tg0,x-bps-total=1024')

source_drive = 'driver=throttle,' \
+ 'node-name=source,' \
'throttle-group=tg0,' \
f'file.driver={iotests.imgfmt},' \
f'file.file.filename={self.test_img}'
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):

def tearDown(self):
self.vm.shutdown()
- for img in (self.test_img, self.target_img, self.base_img):
+ for img in (self.test_img, self.target_img, self.base_img,
+ self.overlay_img):
iotests.try_remove(img)

def do_test_stop(self, cmd, **args):
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
sync="full")

def test_block_commit(self):
- self.do_test_stop("block-commit", device="drive0")
+ # Add overlay above the source node so that we actually use a
+ # commit job instead of a mirror job
+
+ iotests.qemu_img('create', '-f', iotests.imgfmt, self.overlay_img,
+ '1G')
+
+ result = self.vm.qmp('blockdev-add', **{
+ 'node-name': 'overlay',
+ 'driver': iotests.imgfmt,
+ 'file': {
+ 'driver': 'file',
+ 'filename': self.overlay_img
+ }
+ })
+ self.assert_qmp(result, 'return', {})
+
+ result = self.vm.qmp('blockdev-snapshot',
+ node='source', overlay='overlay')
+ self.assert_qmp(result, 'return', {})
+
+ self.do_test_stop('block-commit', device='drive0', top_node='source')

if __name__ == '__main__':
iotests.main(supported_fmts=["qcow2"],
--
2.29.2
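The two QMP commands issued by test_block_commit() above can be written out as plain command dicts; the helper function below and its default node names are illustrative, not part of the test:

```python
def overlay_commands(imgfmt, overlay_path, source_node='source',
                     overlay_node='overlay'):
    """Return the QMP command pair that inserts an overlay above the
    source node, so that block-commit then runs a real commit job
    (committing 'source' down) instead of an active-commit mirror."""
    add = {
        'execute': 'blockdev-add',
        'arguments': {
            'node-name': overlay_node,
            'driver': imgfmt,
            'file': {'driver': 'file', 'filename': overlay_path},
        },
    }
    snapshot = {
        'execute': 'blockdev-snapshot',
        'arguments': {'node': source_node, 'overlay': overlay_node},
    }
    return add, snapshot
```

After `blockdev-snapshot`, the former top node is no longer topmost, so naming it as `top-node` selects an intermediate commit.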
Deleted patch
Issuing 'stop' on the VM drains all nodes. If the mirror job has many
large requests in flight, this may lead to significant I/O that looks a
bit like 'stop' would make the job try to complete (which is what 129
should verify not to happen).

We can limit the I/O in flight by limiting the buffer size, so mirror
will make very little progress during the 'stop' drain.

(We do not need to do anything about commit, which has a buffer size of
512 kB by default; or backup, which goes cluster by cluster. Once we
have asynchronous requests for backup, that will change, but then we can
fine-tune the backup job to only perform a single request on a very
small chunk, too.)

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-9-mreitz@redhat.com>
---
tests/qemu-iotests/129 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
def test_drive_mirror(self):
self.do_test_stop("drive-mirror", device="drive0",
target=self.target_img, format=iotests.imgfmt,
- sync="full")
+ sync="full", buf_size=65536)

def test_drive_backup(self):
self.do_test_stop("drive-backup", device="drive0",
--
2.29.2
Deleted patch
And consequently drop it from 297's skip list.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210118105720.14824-10-mreitz@redhat.com>
---
tests/qemu-iotests/129 | 4 ++--
tests/qemu-iotests/297 | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@

import os
import iotests
-import time

class TestStopWithBlockJob(iotests.QMPTestCase):
test_img = os.path.join(iotests.test_dir, 'test.img')
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
iotests.qemu_img('create', '-f', iotests.imgfmt, self.base_img, "1G")
iotests.qemu_img('create', '-f', iotests.imgfmt, self.test_img,
"-b", self.base_img, '-F', iotests.imgfmt)
- iotests.qemu_io('-f', iotests.imgfmt, '-c', 'write -P0x5d 1M 128M', self.test_img)
+ iotests.qemu_io('-f', iotests.imgfmt, '-c', 'write -P0x5d 1M 128M',
+ self.test_img)
self.vm = iotests.VM()
self.vm.add_object('throttle-group,id=tg0,x-bps-total=1024')

diff --git a/tests/qemu-iotests/297 b/tests/qemu-iotests/297
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/297
+++ b/tests/qemu-iotests/297
@@ -XXX,XX +XXX,XX @@ import iotests
# TODO: Empty this list!
SKIP_FILES = (
'030', '040', '041', '044', '045', '055', '056', '057', '065', '093',
- '096', '118', '124', '129', '132', '136', '139', '147', '148', '149',
+ '096', '118', '124', '132', '136', '139', '147', '148', '149',
'151', '152', '155', '163', '165', '169', '194', '196', '199', '202',
'203', '205', '206', '207', '208', '210', '211', '212', '213', '216',
'218', '219', '222', '224', '228', '234', '235', '236', '237', '238',
--
2.29.2
Deleted patch
And consequently drop it from 297's skip list.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-11-mreitz@redhat.com>
---
tests/qemu-iotests/297 | 2 +-
tests/qemu-iotests/300 | 18 +++++++++++++++---
2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/tests/qemu-iotests/297 b/tests/qemu-iotests/297
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/297
+++ b/tests/qemu-iotests/297
@@ -XXX,XX +XXX,XX @@ SKIP_FILES = (
'218', '219', '222', '224', '228', '234', '235', '236', '237', '238',
'240', '242', '245', '246', '248', '255', '256', '257', '258', '260',
'262', '264', '266', '274', '277', '280', '281', '295', '296', '298',
- '299', '300', '302', '303', '304', '307',
+ '299', '302', '303', '304', '307',
'nbd-fault-injector.py', 'qcow2.py', 'qcow2_format.py', 'qed.py'
)

diff --git a/tests/qemu-iotests/300 b/tests/qemu-iotests/300
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/300
+++ b/tests/qemu-iotests/300
@@ -XXX,XX +XXX,XX @@ import os
import random
import re
from typing import Dict, List, Optional, Union
+
import iotests
+
+# Import qemu after iotests.py has amended sys.path
+# pylint: disable=wrong-import-order
import qemu

BlockBitmapMapping = List[Dict[str, Union[str, List[Dict[str, str]]]]]
@@ -XXX,XX +XXX,XX @@ class TestDirtyBitmapMigration(iotests.QMPTestCase):
If @msg is None, check that there has not been any error.
"""
self.vm_b.shutdown()
+
+ log = self.vm_b.get_log()
+ assert log is not None # Loaded after shutdown
+
if msg is None:
- self.assertNotIn('qemu-system-', self.vm_b.get_log())
+ self.assertNotIn('qemu-system-', log)
else:
- self.assertIn(msg, self.vm_b.get_log())
+ self.assertIn(msg, log)

@staticmethod
def mapping(node_name: str, node_alias: str,
@@ -XXX,XX +XXX,XX @@ class TestBlockBitmapMappingErrors(TestDirtyBitmapMigration):

# Check for the error in the source's log
self.vm_a.shutdown()
+
+ log = self.vm_a.get_log()
+ assert log is not None # Loaded after shutdown
+
self.assertIn(f"Cannot migrate bitmap '{name}' on node "
f"'{self.src_node_name}': Name is longer than 255 bytes",
- self.vm_a.get_log())
+ log)

# Expect abnormal shutdown of the destination VM because of
# the failed migration
--
2.29.2
Deleted patch
Disposition (action) for any given signal is global for the process.
When two threads run coroutine-sigaltstack's qemu_coroutine_new()
concurrently, they may interfere with each other: One of them may revert
the SIGUSR2 handler to SIG_DFL in between the other thread (a) setting up
coroutine_trampoline() as the handler and (b) raising SIGUSR2. That
SIGUSR2 will then terminate the QEMU process abnormally.

We have to ensure that only one thread at a time can modify the
process-global SIGUSR2 handler. To do so, wrap the whole section where
that is done in a mutex.

Alternatively, we could for example have the SIGUSR2 handler always be
coroutine_trampoline(), so there would be no need to invoke sigaction()
in qemu_coroutine_new(). Laszlo has posted a patch to do so here:

https://lists.nongnu.org/archive/html/qemu-devel/2021-01/msg05962.html

However, given that coroutine-sigaltstack is more of a fallback
implementation for platforms that do not support ucontext, that change
may be a bit too invasive to be comfortable with. The mutex proposed
here may negatively impact performance, but the change is much simpler.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210125120305.19520-1-mreitz@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
util/coroutine-sigaltstack.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/util/coroutine-sigaltstack.c b/util/coroutine-sigaltstack.c
index XXXXXXX..XXXXXXX 100644
--- a/util/coroutine-sigaltstack.c
+++ b/util/coroutine-sigaltstack.c
@@ -XXX,XX +XXX,XX @@ Coroutine *qemu_coroutine_new(void)
sigset_t sigs;
sigset_t osigs;
sigjmp_buf old_env;
+ static pthread_mutex_t sigusr2_mutex = PTHREAD_MUTEX_INITIALIZER;

/* The way to manipulate stack is with the sigaltstack function. We
* prepare a stack, with it delivering a signal to ourselves and then
@@ -XXX,XX +XXX,XX @@ Coroutine *qemu_coroutine_new(void)
sa.sa_handler = coroutine_trampoline;
sigfillset(&sa.sa_mask);
sa.sa_flags = SA_ONSTACK;
+
+ /*
+ * sigaction() is a process-global operation. We must not run
+ * this code in multiple threads at once.
+ */
+ pthread_mutex_lock(&sigusr2_mutex);
if (sigaction(SIGUSR2, &sa, &osa) != 0) {
abort();
}
@@ -XXX,XX +XXX,XX @@ Coroutine *qemu_coroutine_new(void)
* Restore the old SIGUSR2 signal handler and mask
*/
sigaction(SIGUSR2, &osa, NULL);
+ pthread_mutex_unlock(&sigusr2_mutex);
+
pthread_sigmask(SIG_SETMASK, &osigs, NULL);

/*
--
2.29.2
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
We'll need async block-copy invocation to use in backup directly.
3
bdrv_co_block_status_above has several design problems with handling
4
short backing files:
5
6
1. With want_zero=true, it may return ret with BDRV_BLOCK_ZERO but
7
without the BDRV_BLOCK_ALLOCATED flag, when the short backing file
that produces these after-EOF zeros is actually inside the requested backing
9
sequence.
10
11
2. With want_zero=false, it may return pnum=0 prior to actual EOF,
12
because of EOF of short backing file.
13
14
Fix these things, making the logic around short backing files clearer.
15
16
With fixed bdrv_block_status_above we also have to improve is_zero in
17
qcow2 code, otherwise iotest 154 will fail, because with this patch we
18
stop to merge zeros of different types (produced by fully unallocated
19
in the whole backing chain regions vs produced by short backing files).
20
21
Note also that this patch leaves for another day the general problem
22
around block-status: misuse of BDRV_BLOCK_ALLOCATED as is-fs-allocated
23
vs go-to-backing.
4
24
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
25
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Reviewed-by: Max Reitz <mreitz@redhat.com>
26
Reviewed-by: Alberto Garcia <berto@igalia.com>
7
Message-Id: <20210116214705.822267-4-vsementsov@virtuozzo.com>
27
Reviewed-by: Eric Blake <eblake@redhat.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
28
Message-id: 20200924194003.22080-2-vsementsov@virtuozzo.com
29
[Fix s/comes/come/ as suggested by Eric Blake
30
--Stefan]
31
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
---
32
---
10
include/block/block-copy.h | 29 ++++++++++++++
33
block/io.c | 68 ++++++++++++++++++++++++++++++++++++++++-----------
11
block/block-copy.c | 81 ++++++++++++++++++++++++++++++++++++--
34
block/qcow2.c | 16 ++++++++++--
12
2 files changed, 106 insertions(+), 4 deletions(-)
35
2 files changed, 68 insertions(+), 16 deletions(-)
13
36
14
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
37
diff --git a/block/io.c b/block/io.c
15
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
16
--- a/include/block/block-copy.h
39
--- a/block/io.c
17
+++ b/include/block/block-copy.h
40
+++ b/block/io.c
18
@@ -XXX,XX +XXX,XX @@
41
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
19
#include "qemu/co-shared-resource.h"
42
int64_t *map,
20
43
BlockDriverState **file)
21
typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
44
{
22
+typedef void (*BlockCopyAsyncCallbackFunc)(void *opaque);
45
+ int ret;
23
typedef struct BlockCopyState BlockCopyState;
46
BlockDriverState *p;
24
+typedef struct BlockCopyCallState BlockCopyCallState;
47
- int ret = 0;
25
48
- bool first = true;
26
BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
49
+ int64_t eof = 0;
27
int64_t cluster_size, bool use_copy_range,
50
28
@@ -XXX,XX +XXX,XX @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
51
assert(bs != base);
29
int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
52
- for (p = bs; p != base; p = bdrv_filter_or_cow_bs(p)) {
30
bool *error_is_read);
31
32
+/*
33
+ * Run block-copy in a coroutine, create corresponding BlockCopyCallState
34
+ * object and return pointer to it. Never returns NULL.
35
+ *
36
+ * Caller is responsible to call block_copy_call_free() to free
37
+ * BlockCopyCallState object.
38
+ */
39
+BlockCopyCallState *block_copy_async(BlockCopyState *s,
40
+ int64_t offset, int64_t bytes,
41
+ BlockCopyAsyncCallbackFunc cb,
42
+ void *cb_opaque);
43
+
53
+
44
+/*
54
+ ret = bdrv_co_block_status(bs, want_zero, offset, bytes, pnum, map, file);
45
+ * Free finished BlockCopyCallState. Trying to free running
55
+ if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED) {
46
+ * block-copy will crash.
56
+ return ret;
47
+ */
57
+ }
48
+void block_copy_call_free(BlockCopyCallState *call_state);
49
+
58
+
50
+/*
59
+ if (ret & BDRV_BLOCK_EOF) {
51
+ * Note, that block-copy call is marked finished prior to calling
60
+ eof = offset + *pnum;
52
+ * the callback.
61
+ }
53
+ */
54
+bool block_copy_call_finished(BlockCopyCallState *call_state);
55
+bool block_copy_call_succeeded(BlockCopyCallState *call_state);
56
+bool block_copy_call_failed(BlockCopyCallState *call_state);
57
+int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read);
58
+
62
+
59
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
63
+ assert(*pnum <= bytes);
60
void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
64
+ bytes = *pnum;
61
62
diff --git a/block/block-copy.c b/block/block-copy.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/block/block-copy.c
65
+++ b/block/block-copy.c
66
@@ -XXX,XX +XXX,XX @@
67
static coroutine_fn int block_copy_task_entry(AioTask *task);
68
69
typedef struct BlockCopyCallState {
70
- /* IN parameters */
71
+ /* IN parameters. Initialized in block_copy_async() and never changed. */
72
BlockCopyState *s;
73
int64_t offset;
74
int64_t bytes;
75
+ BlockCopyAsyncCallbackFunc cb;
76
+ void *cb_opaque;
77
+
65
+
78
+ /* Coroutine where async block-copy is running */
66
+ for (p = bdrv_filter_or_cow_bs(bs); p != base;
79
+ Coroutine *co;
67
+ p = bdrv_filter_or_cow_bs(p))
80
68
+ {
81
/* State */
69
ret = bdrv_co_block_status(p, want_zero, offset, bytes, pnum, map,
82
- bool failed;
70
file);
83
+ int ret;
71
if (ret < 0) {
84
+ bool finished;
72
- break;
85
73
+ return ret;
86
/* OUT parameters */
74
}
87
bool error_is_read;
75
- if (ret & BDRV_BLOCK_ZERO && ret & BDRV_BLOCK_EOF && !first) {
88
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
76
+ if (*pnum == 0) {
89
77
/*
90
ret = block_copy_do_copy(t->s, t->offset, t->bytes, t->zeroes,
78
- * Reading beyond the end of the file continues to read
91
&error_is_read);
79
- * zeroes, but we can only widen the result to the
92
- if (ret < 0 && !t->call_state->failed) {
80
- * unallocated length we learned from an earlier
93
- t->call_state->failed = true;
81
- * iteration.
94
+ if (ret < 0 && !t->call_state->ret) {
82
+ * The top layer deferred to this layer, and because this layer is
95
+ t->call_state->ret = ret;
83
+ * short, any zeroes that we synthesize beyond EOF behave as if they
96
t->call_state->error_is_read = error_is_read;
84
+ * were allocated at this layer.
97
} else {
85
+ *
98
progress_work_done(t->s->progress, t->bytes);
86
+ * We don't include BDRV_BLOCK_EOF into ret, as upper layer may be
99
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
87
+ * larger. We'll add BDRV_BLOCK_EOF if needed at function end, see
100
*/
88
+ * below.
101
} while (ret > 0);
89
*/
102
90
+ assert(ret & BDRV_BLOCK_EOF);
103
+ call_state->finished = true;
91
*pnum = bytes;
92
+ if (file) {
93
+ *file = p;
94
+ }
95
+ ret = BDRV_BLOCK_ZERO | BDRV_BLOCK_ALLOCATED;
96
+ break;
97
}
98
- if (ret & (BDRV_BLOCK_ZERO | BDRV_BLOCK_DATA)) {
99
+ if (ret & BDRV_BLOCK_ALLOCATED) {
100
+ /*
101
+ * We've found the node and the status, we must break.
102
+ *
103
+ * Drop BDRV_BLOCK_EOF, as it's not for upper layer, which may be
104
+ * larger. We'll add BDRV_BLOCK_EOF if needed at function end, see
105
+ * below.
106
+ */
107
+ ret &= ~BDRV_BLOCK_EOF;
108
break;
109
}
110
- /* [offset, pnum] unallocated on this layer, which could be only
111
- * the first part of [offset, bytes]. */
112
- bytes = MIN(bytes, *pnum);
113
- first = false;
104
+
114
+
105
+ if (call_state->cb) {
115
+ /*
106
+ call_state->cb(call_state->cb_opaque);
116
+ * OK, [offset, offset + *pnum) region is unallocated on this layer,
117
+ * let's continue the diving.
118
+ */
119
+ assert(*pnum <= bytes);
120
+ bytes = *pnum;
107
+ }
121
+ }
122
+
123
+ if (offset + *pnum == eof) {
124
+ ret |= BDRV_BLOCK_EOF;
125
}
108
+
126
+
109
return ret;
127
return ret;
110
}
128
}
111
129
112
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
130
diff --git a/block/qcow2.c b/block/qcow2.c
113
return ret;
131
index XXXXXXX..XXXXXXX 100644
132
--- a/block/qcow2.c
133
+++ b/block/qcow2.c
134
@@ -XXX,XX +XXX,XX @@ static bool is_zero(BlockDriverState *bs, int64_t offset, int64_t bytes)
135
if (!bytes) {
136
return true;
137
}
138
- res = bdrv_block_status_above(bs, NULL, offset, bytes, &nr, NULL, NULL);
139
- return res >= 0 && (res & BDRV_BLOCK_ZERO) && nr == bytes;
140
+
141
+ /*
142
+ * bdrv_block_status_above doesn't merge different types of zeros, for
143
+ * example, zeros which come from the region which is unallocated in
144
+ * the whole backing chain, and zeros which come because of a short
145
+ * backing file. So, we need a loop.
146
+ */
147
+ do {
148
+ res = bdrv_block_status_above(bs, NULL, offset, bytes, &nr, NULL, NULL);
149
+ offset += nr;
150
+ bytes -= nr;
151
+ } while (res >= 0 && (res & BDRV_BLOCK_ZERO) && nr && bytes);
152
+
153
+ return res >= 0 && (res & BDRV_BLOCK_ZERO) && bytes == 0;
114
}
154
}
115
155
116
+static void coroutine_fn block_copy_async_co_entry(void *opaque)
156
static coroutine_fn int qcow2_co_pwrite_zeroes(BlockDriverState *bs,
117
+{
118
+ block_copy_common(opaque);
119
+}
120
+
121
+BlockCopyCallState *block_copy_async(BlockCopyState *s,
122
+ int64_t offset, int64_t bytes,
123
+ BlockCopyAsyncCallbackFunc cb,
124
+ void *cb_opaque)
125
+{
126
+ BlockCopyCallState *call_state = g_new(BlockCopyCallState, 1);
127
+
128
+ *call_state = (BlockCopyCallState) {
129
+ .s = s,
130
+ .offset = offset,
131
+ .bytes = bytes,
132
+ .cb = cb,
133
+ .cb_opaque = cb_opaque,
134
+
135
+ .co = qemu_coroutine_create(block_copy_async_co_entry, call_state),
136
+ };
137
+
138
+ qemu_coroutine_enter(call_state->co);
139
+
140
+ return call_state;
141
+}
142
+
143
+void block_copy_call_free(BlockCopyCallState *call_state)
144
+{
145
+ if (!call_state) {
146
+ return;
147
+ }
148
+
149
+ assert(call_state->finished);
150
+ g_free(call_state);
151
+}
152
+
153
+bool block_copy_call_finished(BlockCopyCallState *call_state)
154
+{
155
+ return call_state->finished;
156
+}
157
+
158
+bool block_copy_call_succeeded(BlockCopyCallState *call_state)
159
+{
160
+ return call_state->finished && call_state->ret == 0;
161
+}
162
+
163
+bool block_copy_call_failed(BlockCopyCallState *call_state)
164
+{
165
+ return call_state->finished && call_state->ret < 0;
166
+}
167
+
168
+int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read)
169
+{
170
+ assert(call_state->finished);
171
+ if (error_is_read) {
172
+ *error_is_read = call_state->error_is_read;
173
+ }
174
+ return call_state->ret;
175
+}
176
+
177
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
178
{
179
return s->copy_bitmap;
180
--
157
--
181
2.29.2
158
2.26.2
182
159
183
diff view generated by jsdifflib
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
We are going to directly use one async block-copy operation for backup
3
In order to reuse bdrv_common_block_status_above in
4
job, so we need a rate limiter.
4
bdrv_is_allocated_above, let's support include_base parameter.
5
6
We want to maintain current backup behavior: only background copying is
7
limited and copy-before-write operations only participate in limit
8
calculation. Therefore we need one rate limiter for block-copy state
9
and a boolean flag in the block-copy call state for the actual limitation.
10
11
Note that we can't just calculate each chunk in the limiter after
12
successful copying: it will not save us from starting a lot of async
13
sub-requests which would exceed the limit by too much. Instead let's use the
14
following scheme on sub-request creation:
15
1. If at the moment limit is not exceeded, create the request and
16
account it immediately.
17
2. If the limit is already exceeded, do not create the sub-request
18
and handle the limit instead (by sleeping).
19
With this approach we'll never exceed the limit more than by one
20
sub-request (which pretty much matches current backup behavior).
21
22
Note also that if there is an in-flight block-copy async call,
23
block_copy_kick() should be used after set-speed to apply new setup
24
faster. For that purpose, block_copy_kick() is made public in this patch.
25
5
26
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
27
Reviewed-by: Max Reitz <mreitz@redhat.com>
7
Reviewed-by: Alberto Garcia <berto@igalia.com>
28
Message-Id: <20210116214705.822267-7-vsementsov@virtuozzo.com>
8
Reviewed-by: Eric Blake <eblake@redhat.com>
29
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
Message-id: 20200924194003.22080-3-vsementsov@virtuozzo.com
10
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
30
---
11
---
31
include/block/block-copy.h | 5 ++++-
12
block/coroutines.h | 2 ++
32
block/backup-top.c | 2 +-
13
block/io.c | 21 ++++++++++++++-------
33
block/backup.c | 2 +-
14
2 files changed, 16 insertions(+), 7 deletions(-)
34
block/block-copy.c | 46 +++++++++++++++++++++++++++++++++++++-
35
4 files changed, 51 insertions(+), 4 deletions(-)
36
15
37
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
16
diff --git a/block/coroutines.h b/block/coroutines.h
38
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
39
--- a/include/block/block-copy.h
18
--- a/block/coroutines.h
40
+++ b/include/block/block-copy.h
19
+++ b/block/coroutines.h
41
@@ -XXX,XX +XXX,XX @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
20
@@ -XXX,XX +XXX,XX @@ bdrv_pwritev(BdrvChild *child, int64_t offset, unsigned int bytes,
42
int64_t offset, int64_t *count);
21
int coroutine_fn
43
22
bdrv_co_common_block_status_above(BlockDriverState *bs,
44
int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
23
BlockDriverState *base,
45
- bool *error_is_read);
24
+ bool include_base,
46
+ bool ignore_ratelimit, bool *error_is_read);
25
bool want_zero,
47
26
int64_t offset,
48
/*
27
int64_t bytes,
49
* Run block-copy in a coroutine, create corresponding BlockCopyCallState
28
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
50
@@ -XXX,XX +XXX,XX @@ bool block_copy_call_succeeded(BlockCopyCallState *call_state);
29
int generated_co_wrapper
51
bool block_copy_call_failed(BlockCopyCallState *call_state);
30
bdrv_common_block_status_above(BlockDriverState *bs,
52
int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read);
31
BlockDriverState *base,
53
32
+ bool include_base,
54
+void block_copy_set_speed(BlockCopyState *s, uint64_t speed);
33
bool want_zero,
55
+void block_copy_kick(BlockCopyCallState *call_state);
34
int64_t offset,
56
+
35
int64_t bytes,
57
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
36
diff --git a/block/io.c b/block/io.c
58
void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
59
60
diff --git a/block/backup-top.c b/block/backup-top.c
61
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
62
--- a/block/backup-top.c
38
--- a/block/io.c
63
+++ b/block/backup-top.c
39
+++ b/block/io.c
64
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int backup_top_cbw(BlockDriverState *bs, uint64_t offset,
40
@@ -XXX,XX +XXX,XX @@ early_out:
65
off = QEMU_ALIGN_DOWN(offset, s->cluster_size);
41
int coroutine_fn
66
end = QEMU_ALIGN_UP(offset + bytes, s->cluster_size);
42
bdrv_co_common_block_status_above(BlockDriverState *bs,
67
43
BlockDriverState *base,
68
- return block_copy(s->bcs, off, end - off, NULL);
44
+ bool include_base,
69
+ return block_copy(s->bcs, off, end - off, true, NULL);
45
bool want_zero,
70
}
46
int64_t offset,
71
47
int64_t bytes,
72
static int coroutine_fn backup_top_co_pdiscard(BlockDriverState *bs,
48
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
73
diff --git a/block/backup.c b/block/backup.c
49
BlockDriverState *p;
74
index XXXXXXX..XXXXXXX 100644
50
int64_t eof = 0;
75
--- a/block/backup.c
51
76
+++ b/block/backup.c
52
- assert(bs != base);
77
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
53
+ assert(include_base || bs != base);
78
54
+ assert(!include_base || base); /* Can't include NULL base */
79
trace_backup_do_cow_enter(job, start, offset, bytes);
55
80
56
ret = bdrv_co_block_status(bs, want_zero, offset, bytes, pnum, map, file);
81
- ret = block_copy(job->bcs, start, end - start, error_is_read);
57
- if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED) {
82
+ ret = block_copy(job->bcs, start, end - start, true, error_is_read);
58
+ if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED || bs == base) {
83
59
return ret;
84
trace_backup_do_cow_return(job, offset, bytes, ret);
60
}
85
61
86
diff --git a/block/block-copy.c b/block/block-copy.c
62
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
87
index XXXXXXX..XXXXXXX 100644
63
assert(*pnum <= bytes);
88
--- a/block/block-copy.c
64
bytes = *pnum;
89
+++ b/block/block-copy.c
65
90
@@ -XXX,XX +XXX,XX @@
66
- for (p = bdrv_filter_or_cow_bs(bs); p != base;
91
#define BLOCK_COPY_MAX_BUFFER (1 * MiB)
67
+ for (p = bdrv_filter_or_cow_bs(bs); include_base || p != base;
92
#define BLOCK_COPY_MAX_MEM (128 * MiB)
68
p = bdrv_filter_or_cow_bs(p))
93
#define BLOCK_COPY_MAX_WORKERS 64
69
{
94
+#define BLOCK_COPY_SLICE_TIME 100000000ULL /* ns */
70
ret = bdrv_co_block_status(p, want_zero, offset, bytes, pnum, map,
95
71
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
96
static coroutine_fn int block_copy_task_entry(AioTask *task);
72
break;
97
98
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
99
int64_t bytes;
100
int max_workers;
101
int64_t max_chunk;
102
+ bool ignore_ratelimit;
103
BlockCopyAsyncCallbackFunc cb;
104
void *cb_opaque;
105
106
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
107
/* State */
108
int ret;
109
bool finished;
110
+ QemuCoSleepState *sleep_state;
111
112
/* OUT parameters */
113
bool error_is_read;
114
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyState {
115
void *progress_opaque;
116
117
SharedResource *mem;
118
+
119
+ uint64_t speed;
120
+ RateLimit rate_limit;
121
} BlockCopyState;
122
123
static BlockCopyTask *find_conflicting_task(BlockCopyState *s,
124
@@ -XXX,XX +XXX,XX @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
125
}
73
}
126
task->zeroes = ret & BDRV_BLOCK_ZERO;
74
127
75
+ if (p == base) {
128
+ if (s->speed) {
76
+ assert(include_base);
129
+ if (!call_state->ignore_ratelimit) {
77
+ break;
130
+ uint64_t ns = ratelimit_calculate_delay(&s->rate_limit, 0);
131
+ if (ns > 0) {
132
+ block_copy_task_end(task, -EAGAIN);
133
+ g_free(task);
134
+ qemu_co_sleep_ns_wakeable(QEMU_CLOCK_REALTIME, ns,
135
+ &call_state->sleep_state);
136
+ continue;
137
+ }
138
+ }
139
+
140
+ ratelimit_calculate_delay(&s->rate_limit, task->bytes);
141
+ }
78
+ }
142
+
79
+
143
trace_block_copy_process(s, task->offset);
80
/*
144
81
* OK, [offset, offset + *pnum) region is unallocated on this layer,
145
co_get_from_shres(s->mem, task->bytes);
82
* let's continue the diving.
146
@@ -XXX,XX +XXX,XX @@ out:
83
@@ -XXX,XX +XXX,XX @@ int bdrv_block_status_above(BlockDriverState *bs, BlockDriverState *base,
147
return ret < 0 ? ret : found_dirty;
84
int64_t offset, int64_t bytes, int64_t *pnum,
85
int64_t *map, BlockDriverState **file)
86
{
87
- return bdrv_common_block_status_above(bs, base, true, offset, bytes,
88
+ return bdrv_common_block_status_above(bs, base, false, true, offset, bytes,
89
pnum, map, file);
148
}
90
}
149
91
150
+void block_copy_kick(BlockCopyCallState *call_state)
92
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_is_allocated(BlockDriverState *bs, int64_t offset,
151
+{
93
int ret;
152
+ if (call_state->sleep_state) {
94
int64_t dummy;
153
+ qemu_co_sleep_wake(call_state->sleep_state);
95
154
+ }
96
- ret = bdrv_common_block_status_above(bs, bdrv_filter_or_cow_bs(bs), false,
155
+}
97
- offset, bytes, pnum ? pnum : &dummy,
156
+
98
- NULL, NULL);
157
/*
99
+ ret = bdrv_common_block_status_above(bs, bs, true, false, offset,
158
* block_copy_common
100
+ bytes, pnum ? pnum : &dummy, NULL,
159
*
101
+ NULL);
160
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
102
if (ret < 0) {
161
}
103
return ret;
162
104
}
163
int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
164
- bool *error_is_read)
165
+ bool ignore_ratelimit, bool *error_is_read)
166
{
167
BlockCopyCallState call_state = {
168
.s = s,
169
.offset = start,
170
.bytes = bytes,
171
+ .ignore_ratelimit = ignore_ratelimit,
172
.max_workers = BLOCK_COPY_MAX_WORKERS,
173
};
174
175
@@ -XXX,XX +XXX,XX @@ void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip)
176
{
177
s->skip_unallocated = skip;
178
}
179
+
180
+void block_copy_set_speed(BlockCopyState *s, uint64_t speed)
181
+{
182
+ s->speed = speed;
183
+ if (speed > 0) {
184
+ ratelimit_set_speed(&s->rate_limit, speed, BLOCK_COPY_SLICE_TIME);
185
+ }
186
+
187
+ /*
188
+ * Note: it's good to kick all call states from here, but it should be done
189
+ * only from a coroutine, to not crash if s->calls list changed while
190
+ * entering one call. So for now, the only user of this function kicks its
191
+ * only one call_state by hand.
192
+ */
193
+}
194
--
105
--
195
2.29.2
106
2.26.2
196
107
197
diff view generated by jsdifflib
Deleted patch
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Add function to cancel running async block-copy call. It will be used
in backup.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-8-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h | 13 +++++++++++++
 block/block-copy.c         | 24 +++++++++++++++++++-----
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -XXX,XX +XXX,XX @@ void block_copy_call_free(BlockCopyCallState *call_state);
 bool block_copy_call_finished(BlockCopyCallState *call_state);
 bool block_copy_call_succeeded(BlockCopyCallState *call_state);
 bool block_copy_call_failed(BlockCopyCallState *call_state);
+bool block_copy_call_cancelled(BlockCopyCallState *call_state);
 int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read);

 void block_copy_set_speed(BlockCopyState *s, uint64_t speed);
 void block_copy_kick(BlockCopyCallState *call_state);

+/*
+ * Cancel running block-copy call.
+ *
+ * Cancel leaves block-copy state valid: dirty bits are correct and you may use
+ * cancel + <run block_copy with same parameters> to emulate pause/resume.
+ *
+ * Note also, that the cancel is async: it only marks block-copy call to be
+ * cancelled. So, the call may be cancelled (block_copy_call_cancelled() reports
+ * true) but not yet finished (block_copy_call_finished() reports false).
+ */
+void block_copy_call_cancel(BlockCopyCallState *call_state);
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
 void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);

diff --git a/block/block-copy.c b/block/block-copy.c
index XXXXXXX..XXXXXXX 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
     int ret;
     bool finished;
     QemuCoSleepState *sleep_state;
+    bool cancelled;

     /* OUT parameters */
     bool error_is_read;
@@ -XXX,XX +XXX,XX @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
     assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
     assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));

-    while (bytes && aio_task_pool_status(aio) == 0) {
+    while (bytes && aio_task_pool_status(aio) == 0 && !call_state->cancelled) {
         BlockCopyTask *task;
         int64_t status_bytes;

@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
     do {
         ret = block_copy_dirty_clusters(call_state);

-        if (ret == 0) {
+        if (ret == 0 && !call_state->cancelled) {
             ret = block_copy_wait_one(call_state->s, call_state->offset,
                                       call_state->bytes);
         }
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
          * 2. We have waited for some intersecting block-copy request
          * It may have failed and produced new dirty bits.
          */
-    } while (ret > 0);
+    } while (ret > 0 && !call_state->cancelled);

     call_state->finished = true;

@@ -XXX,XX +XXX,XX @@ bool block_copy_call_finished(BlockCopyCallState *call_state)

 bool block_copy_call_succeeded(BlockCopyCallState *call_state)
 {
-    return call_state->finished && call_state->ret == 0;
+    return call_state->finished && !call_state->cancelled &&
+           call_state->ret == 0;
 }

 bool block_copy_call_failed(BlockCopyCallState *call_state)
 {
-    return call_state->finished && call_state->ret < 0;
+    return call_state->finished && !call_state->cancelled &&
+           call_state->ret < 0;
+}
+
+bool block_copy_call_cancelled(BlockCopyCallState *call_state)
+{
+    return call_state->cancelled;
 }

 int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read)
@@ -XXX,XX +XXX,XX @@ int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read)
     return call_state->ret;
 }

+void block_copy_call_cancel(BlockCopyCallState *call_state)
+{
+    call_state->cancelled = true;
+    block_copy_kick(call_state);
+}
+
 BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
 {
     return s->copy_bitmap;
--
2.29.2
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

We are going to use async block-copy call in backup, so we'll need to
passthrough setting backup speed to block-copy call.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-9-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/blockjob_int.h | 2 ++
 blockjob.c                   | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -XXX,XX +XXX,XX @@ struct BlockJobDriver {
      * besides job->blk to the new AioContext.
      */
     void (*attached_aio_context)(BlockJob *job, AioContext *new_context);
+
+    void (*set_speed)(BlockJob *job, int64_t speed);
 };

 /**
diff --git a/blockjob.c b/blockjob.c
index XXXXXXX..XXXXXXX 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -XXX,XX +XXX,XX @@ static bool job_timer_pending(Job *job)

 void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
 {
+    const BlockJobDriver *drv = block_job_driver(job);
     int64_t old_speed = job->speed;

     if (job_apply_verb(&job->job, JOB_VERB_SET_SPEED, errp)) {
@@ -XXX,XX +XXX,XX @@ void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
     ratelimit_set_speed(&job->limit, speed, BLOCK_JOB_SLICE_TIME);

     job->speed = speed;
+
+    if (drv->set_speed) {
+        drv->set_speed(job, speed);
+    }
+
     if (speed && speed <= old_speed) {
         return;
     }
--
2.29.2


From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

We are going to reuse bdrv_common_block_status_above in
bdrv_is_allocated_above. bdrv_is_allocated_above may be called with
include_base == false and still bs == base (for ex. from img_rebase()).

So, support this corner case.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20200924194003.22080-4-vsementsov@virtuozzo.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/io.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/block/io.c b/block/io.c
index XXXXXXX..XXXXXXX 100644
--- a/block/io.c
+++ b/block/io.c
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
     BlockDriverState *p;
     int64_t eof = 0;

-    assert(include_base || bs != base);
     assert(!include_base || base); /* Can't include NULL base */

+    if (!include_base && bs == base) {
+        *pnum = bytes;
+        return 0;
+    }
+
     ret = bdrv_co_block_status(bs, want_zero, offset, bytes, pnum, map, file);
     if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED || bs == base) {
         return ret;
--
2.26.2
Deleted patch
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

If main job coroutine called job_yield (while some background process
is in progress), we should give it a chance to call job_pause_point().
It will be used in backup, when moved on async block-copy.

Note, that job_user_pause is not enough: we want to handle
child_job_drained_begin() as well, which call job_pause().

Still, if job is already in job_do_yield() in job_pause_point() we
should not enter it.

iotest 109 output is modified: on stop we do bdrv_drain_all() which now
triggers job pause immediately (and pause after ready is standby).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210116214705.822267-10-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 job.c                      |  3 +++
 tests/qemu-iotests/109.out | 24 ++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/job.c b/job.c
index XXXXXXX..XXXXXXX 100644
--- a/job.c
+++ b/job.c
@@ -XXX,XX +XXX,XX @@ static bool job_timer_not_pending(Job *job)
 void job_pause(Job *job)
 {
     job->pause_count++;
+    if (!job->paused) {
+        job_enter(job);
+    }
 }

 void job_resume(Job *job)
diff --git a/tests/qemu-iotests/109.out b/tests/qemu-iotests/109.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/109.out
+++ b/tests/qemu-iotests/109.out
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 1024, "offset": 1024, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 197120, "offset": 197120, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 327680, "offset": 327680, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 1024, "offset": 1024, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 65536, "offset": 65536, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 2560, "offset": 2560, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 2560, "offset": 2560, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 31457280, "offset": 31457280, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 327680, "offset": 327680, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 2048, "offset": 2048, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 512, "offset": 512, "speed": 0, "type": "mirror"}}
@@ -XXX,XX +XXX,XX @@ Images are identical.
 {"execute":"quit"}
 {"return": {}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
 {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 512, "offset": 512, "speed": 0, "type": "mirror"}}
--
2.29.2
Deleted patch
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

After introducing parallel async copy requests instead of plain
cluster-by-cluster copying loop, we'll have to wait for paused status,
as we need to wait for several parallel request. So, let's gently wait
instead of just asserting that job already paused.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-12-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/056 | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/056
+++ b/tests/qemu-iotests/056
@@ -XXX,XX +XXX,XX @@ class BackupTest(iotests.QMPTestCase):
         event = self.vm.event_wait(name="BLOCK_JOB_ERROR",
                                    match={'data': {'device': 'drive0'}})
         self.assertNotEqual(event, None)
-        # OK, job should be wedged
-        res = self.vm.qmp('query-block-jobs')
+        # OK, job should pause, but it can't do it immediately, as it can't
+        # cancel other parallel requests (which didn't fail)
+        with iotests.Timeout(60, "Timeout waiting for backup actually paused"):
+            while True:
+                res = self.vm.qmp('query-block-jobs')
+                if res['return'][0]['status'] == 'paused':
+                    break
         self.assert_qmp(res, 'return[0]/status', 'paused')
         res = self.vm.qmp('block-job-dismiss', id='drive0')
         self.assert_qmp(res, 'error/desc',
--
2.29.2
Deleted patch
Right now, this does not change anything, because backup ignores
max-chunk and max-workers. However, as soon as backup is switched over
to block-copy for the background copying process, we will need it to
keep 129 passing.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210120102043.28346-1-mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/129 | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
                          sync="full", buf_size=65536)

     def test_drive_backup(self):
+        # Limit max-chunk and max-workers so that block-copy will not
+        # launch so many workers working on so much data each that
+        # stop's bdrv_drain_all() would finish the job
         self.do_test_stop("drive-backup", device="drive0",
                           target=self.target_img, format=iotests.imgfmt,
-                          sync="full")
+                          sync="full",
+                          x_perf={ 'max-chunk': 65536,
+                                   'max-workers': 8 })

     def test_block_commit(self):
         # Add overlay above the source node so that we actually use a
--
2.29.2
Deleted patch
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

The upcoming change to make backup a single block-copy call will make
the copy chunk size and the cluster size two separate things. So even
with a qcow2 image using 64k clusters, the default chunk would be 1M.
Test 185, however, assumes that with the speed limited to 64K, one
iteration results in offset=64K. That will change, as the first
iteration would result in offset=1M independently of speed.

So let's specify explicitly what the test wants: set max-chunk to 64K,
so that one iteration is 64K. Note that we don't need to limit
max-workers, as the block-copy rate limiter handles the situation and
won't start new workers when the speed limit is already reached.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-13-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
tests/qemu-iotests/185 | 3 ++-
tests/qemu-iotests/185.out | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/185 b/tests/qemu-iotests/185
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/185
+++ b/tests/qemu-iotests/185
@@ -XXX,XX +XXX,XX @@ _send_qemu_cmd $h \
'target': '$TEST_IMG.copy',
'format': '$IMGFMT',
'sync': 'full',
- 'speed': 65536 } }" \
+ 'speed': 65536,
+ 'x-perf': {'max-chunk': 65536} } }" \
"return"

# If we don't sleep here 'quit' command races with disk I/O
diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/185.out
+++ b/tests/qemu-iotests/185.out
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off
'target': 'TEST_DIR/t.IMGFMT.copy',
'format': 'IMGFMT',
'sync': 'full',
- 'speed': 65536 } }
+ 'speed': 65536,
+ 'x-perf': { 'max-chunk': 65536 } } }
Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 lazy_refcounts=off refcount_bits=16
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
--
2.29.2
Deleted patch
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

The upcoming change to make backup a single block-copy call will make
the copy chunk size and the cluster size two separate things. So even
with a qcow2 image using 64k clusters, the default chunk would be 1M.
Test 219 depends on the specified chunk size. Update it to set an
explicit chunk size for backup, as is already done for mirror.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-14-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
tests/qemu-iotests/219 | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/219
+++ b/tests/qemu-iotests/219
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('disk.img') as disk_path, \
# but related to this also automatic state transitions like job
# completion), but still get pause points often enough to avoid making this
# test very slow, it's important to have the right ratio between speed and
- # buf_size.
+ # copy-chunk-size.
#
- # For backup, buf_size is hard-coded to the source image cluster size (64k),
- # so we'll pick the same for mirror. The slice time, i.e. the granularity
- # of the rate limiting is 100ms. With a speed of 256k per second, we can
- # get four pause points per second. This gives us 250ms per iteration,
- # which should be enough to stay deterministic.
+ # Chose 64k copy-chunk-size both for mirror (by buf_size) and backup (by
+ # x-max-chunk). The slice time, i.e. the granularity of the rate limiting
+ # is 100ms. With a speed of 256k per second, we can get four pause points
+ # per second. This gives us 250ms per iteration, which should be enough to
+ # stay deterministic.

test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
'device': 'drive0-node',
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('disk.img') as disk_path, \
'target': copy_path,
'sync': 'full',
'speed': 262144,
+ 'x-perf': {'max-chunk': 65536},
'auto-finalize': auto_finalize,
'auto-dismiss': auto_dismiss,
})
--
2.29.2
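The 219 comment above pins the ratio between speed and copy-chunk-size that keeps the test deterministic; the arithmetic it states can be checked directly (a sketch of the numbers from the comment, not of QEMU's actual rate limiter):

```python
# Numbers from the test 219 comment: 64k copy chunks (buf_size for
# mirror, x-perf max-chunk for backup) under a 256k/s speed limit.
SPEED = 262144       # bytes per second ('speed' in the job arguments)
CHUNK = 65536        # bytes per copy chunk
SLICE_TIME = 0.100   # seconds; granularity of the rate limiting

pause_points_per_second = SPEED // CHUNK   # 262144 // 65536 == 4
seconds_per_iteration = CHUNK / SPEED      # 0.25, i.e. 250ms

assert pause_points_per_second == 4
assert seconds_per_iteration == 0.25
# Each iteration spans several 100ms rate-limit slices, so pause points
# arrive predictably rather than racing against the limiter.
assert seconds_per_iteration > SLICE_TIME
```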
Deleted patch
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Iotest 257 dumps a lot of in-progress information about the backup job,
such as offset and bitmap dirtiness. A further commit will move backup
to a single block-copy call, which will introduce async parallel
requests instead of plain cluster-by-cluster copying. To keep things
deterministic, allow only one worker (only one copy request at a time)
for this test.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-15-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
tests/qemu-iotests/257 | 1 +
tests/qemu-iotests/257.out | 306 ++++++++++++++++++-------------------
2 files changed, 154 insertions(+), 153 deletions(-)

diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/257
+++ b/tests/qemu-iotests/257
@@ -XXX,XX +XXX,XX @@ def blockdev_backup(vm, device, target, sync, **kwargs):
target=target,
sync=sync,
filter_node_name='backup-top',
+ x_perf={'max-workers': 1},
**kwargs)
return result

diff --git a/tests/qemu-iotests/257.out b/tests/qemu-iotests/257.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/257.out
+++ b/tests/qemu-iotests/257.out
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}

--- Write #2 ---
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
{"return": {}}
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
{"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
{"return": {}}
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}

--- Write #2 ---
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
{"return": {}}
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}

--- Write #2 ---
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
{"return": {}}
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
{"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
{"return": {}}
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}

--- Write #2 ---
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
{"return": {}}
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
{"return": {}}

--- Write #2 ---
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}

@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
{"return": {}}
{}
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
{"return": {}}
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
{"return": {}}
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
351
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
352
{"return": {}}
353
{}
354
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
355
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
356
{"return": {}}
357
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
358
359
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
360
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
361
{"return": {}}
362
{}
363
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
364
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
365
{"return": {}}
366
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
367
368
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
369
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
370
{"return": {}}
371
{}
372
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
373
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
374
{"return": {}}
375
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
376
{"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
377
@@ -XXX,XX +XXX,XX @@ expecting 13 dirty sectors; have 13. OK!
378
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
379
{"return": {}}
380
{}
381
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
382
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
383
{"return": {}}
384
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
385
386
@@ -XXX,XX +XXX,XX @@ expecting 13 dirty sectors; have 13. OK!
387
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
388
{"return": {}}
389
{}
390
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
391
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
392
{"return": {}}
393
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
394
{"return": {}}
395
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
396
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
397
{"return": {}}
398
{}
399
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
400
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
401
{"return": {}}
402
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
403
404
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
405
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
406
{"return": {}}
407
{}
408
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
409
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
410
{"return": {}}
411
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
412
413
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
414
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
415
{"return": {}}
416
{}
417
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
418
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
419
{"return": {}}
420
421
--- Write #2 ---
422
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
423
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
424
{"return": {}}
425
{}
426
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
427
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
428
{"return": {}}
429
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
430
431
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
432
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
433
{"return": {}}
434
{}
435
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
436
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
437
{"return": {}}
438
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
439
{"return": {}}
440
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
441
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
442
{"return": {}}
443
{}
444
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
445
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
446
{"return": {}}
447
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
448
449
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
450
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
451
{"return": {}}
452
{}
453
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
454
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
455
{"return": {}}
456
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
457
458
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
459
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
460
{"return": {}}
461
{}
462
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
463
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
464
{"return": {}}
465
466
--- Write #2 ---
467
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
468
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
469
{"return": {}}
470
{}
471
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
472
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
473
{"return": {}}
474
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
475
476
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
477
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
478
{"return": {}}
479
{}
480
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
481
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
482
{"return": {}}
483
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
484
{"return": {}}
485
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
486
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
487
{"return": {}}
488
{}
489
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
490
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
491
{"return": {}}
492
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
493
494
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
495
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
496
{"return": {}}
497
{}
498
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
499
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
500
{"return": {}}
501
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
502
503
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
504
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
505
{"return": {}}
506
{}
507
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
508
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
509
{"return": {}}
510
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
511
{"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
512
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
513
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
514
{"return": {}}
515
{}
516
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
517
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
518
{"return": {}}
519
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
520
521
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
522
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
523
{"return": {}}
524
{}
525
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
526
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
527
{"return": {}}
528
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
529
{"return": {}}
530
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
531
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
532
{"return": {}}
533
{}
534
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
535
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
536
{"return": {}}
537
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
538
539
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
540
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
541
{"return": {}}
542
{}
543
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
544
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
545
{"return": {}}
546
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
547
548
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
549
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
550
{"return": {}}
551
{}
552
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
553
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
554
{"return": {}}
555
556
--- Write #2 ---
557
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
558
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
559
{"return": {}}
560
{}
561
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
562
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
563
{"return": {}}
564
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
565
566
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
567
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
568
{"return": {}}
569
{}
570
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
571
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
572
{"return": {}}
573
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
574
{"return": {}}
575
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
576
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
577
{"return": {}}
578
{}
579
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
580
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
581
{"return": {}}
582
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
583
584
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
585
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
586
{"return": {}}
587
{}
588
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
589
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
590
{"return": {}}
591
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
592
593
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
594
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
595
{"return": {}}
596
{}
597
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
598
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
599
{"return": {}}
600
601
--- Write #2 ---
602
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
603
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
604
{"return": {}}
605
{}
606
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
607
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
608
{"return": {}}
609
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
610
611
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
612
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
613
{"return": {}}
614
{}
615
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
616
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
617
{"return": {}}
618
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
619
{"return": {}}
620
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
621
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
622
{"return": {}}
623
{}
624
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
625
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
626
{"return": {}}
627
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
628
629
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
630
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
631
{"return": {}}
632
{}
633
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
634
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
635
{"return": {}}
636
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
637
638
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
639
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
640
{"return": {}}
641
{}
642
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
643
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
644
{"return": {}}
645
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
646
{"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
647
@@ -XXX,XX +XXX,XX @@ expecting 1014 dirty sectors; have 1014. OK!
648
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
649
{"return": {}}
650
{}
651
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
652
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
653
{"return": {}}
654
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
655
656
@@ -XXX,XX +XXX,XX @@ expecting 1014 dirty sectors; have 1014. OK!
657
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
658
{"return": {}}
659
{}
660
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
661
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
662
{"return": {}}
663
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
664
{"return": {}}
665
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
666
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
667
{"return": {}}
668
{}
669
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
670
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
671
{"return": {}}
672
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
673
674
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
675
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
676
{"return": {}}
677
{}
678
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
679
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
680
{"return": {}}
681
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
682
683
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
684
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
685
{"return": {}}
686
{}
687
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
688
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
689
{"return": {}}
690
691
--- Write #2 ---
692
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
693
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
694
{"return": {}}
695
{}
696
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
697
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
698
{"return": {}}
699
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
700
701
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
702
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
703
{"return": {}}
704
{}
705
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
706
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
707
{"return": {}}
708
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
709
{"return": {}}
710
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
711
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
712
{"return": {}}
713
{}
714
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
715
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
716
{"return": {}}
717
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
718
719
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
720
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
721
{"return": {}}
722
{}
723
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
724
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
725
{"return": {}}
726
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
727
728
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
729
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
730
{"return": {}}
731
{}
732
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
733
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
734
{"return": {}}
735
736
--- Write #2 ---
737
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
738
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
739
{"return": {}}
740
{}
741
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
742
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
743
{"return": {}}
744
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
745
746
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
747
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
748
{"return": {}}
749
{}
750
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
751
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
752
{"return": {}}
753
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
754
{"return": {}}
755
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
756
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
757
{"return": {}}
758
{}
759
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
760
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
761
{"return": {}}
762
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
763
764
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
765
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
766
{"return": {}}
767
{}
768
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
769
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
770
{"return": {}}
771
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
772
773
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
774
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
775
{"return": {}}
776
{}
777
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
778
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
779
{"return": {}}
780
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
781
{"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
782
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
783
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
784
{"return": {}}
785
{}
786
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
787
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
788
{"return": {}}
789
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
790
791
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
792
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
793
{"return": {}}
794
{}
795
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
796
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
797
{"return": {}}
798
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
799
{"return": {}}
800
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
801
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
802
{"return": {}}
803
{}
804
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
805
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
806
{"return": {}}
807
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
808
809
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
810
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
811
{"return": {}}
812
{}
813
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
814
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
815
{"return": {}}
816
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
817
818
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
819
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
820
{"return": {}}
821
{}
822
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
823
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
824
{"return": {}}
825
826
--- Write #2 ---
827
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
828
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
829
{"return": {}}
830
{}
831
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
832
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
833
{"return": {}}
834
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
835
836
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
837
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
838
{"return": {}}
839
{}
840
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
841
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
842
{"return": {}}
843
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
844
{"return": {}}
845
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
846
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
847
{"return": {}}
848
{}
849
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
850
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
851
{"return": {}}
852
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
853
854
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
855
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
856
{"return": {}}
857
{}
858
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
859
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
860
{"return": {}}
861
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
862
863
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
864
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
865
{"return": {}}
866
{}
867
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
868
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
869
{"return": {}}
870
871
--- Write #2 ---
872
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
873
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
874
{"return": {}}
875
{}
876
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
877
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
878
{"return": {}}
879
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
880
881
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
882
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
883
{"return": {}}
884
{}
885
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
886
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
887
{"return": {}}
888
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
889
{"return": {}}
890
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
891
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
892
{"return": {}}
893
{}
894
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
895
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
896
{"return": {}}
897
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
898
899
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
900
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
901
{"return": {}}
902
{}
903
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
904
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
905
{"return": {}}
906
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
907
908
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
909
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
910
{"return": {}}
911
{}
912
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
913
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
914
{"return": {}}
915
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
916
{"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
917
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
918
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
919
{"return": {}}
920
{}
921
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
922
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
923
{"return": {}}
924
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
925
926
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
927
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
928
{"return": {}}
929
{}
930
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
931
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
932
{"return": {}}
933
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
934
{"return": {}}
935
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
936
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
937
{"return": {}}
938
{}
939
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
940
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
941
{"return": {}}
942
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
943
944
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
945
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
946
{"return": {}}
947
{}
948
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
949
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
950
{"return": {}}
951
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
952
953
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
954
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
955
{"return": {}}
956
{}
957
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
958
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
959
{"return": {}}
960
961
--- Write #2 ---
962
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
963
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
964
{"return": {}}
965
{}
966
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
967
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
968
{"return": {}}
969
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
970
971
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
972
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
973
{"return": {}}
974
{}
975
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
976
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
977
{"return": {}}
978
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
979
{"return": {}}
980
@@ -XXX,XX +XXX,XX @@ qemu_img compare "TEST_DIR/PID-img" "TEST_DIR/PID-fbackup2" ==> Identical, OK!
981
982
-- Sync mode incremental tests --
983
984
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
985
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
986
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
987
988
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
989
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
990
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
991
992
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
993
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
994
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
995
996
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
997
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
998
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
999
1000
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1001
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1002
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1003
1004
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1005
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1006
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
1007
1008
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1009
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1010
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
1011
1012
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1013
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1014
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1015
1016
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1017
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1018
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
1019
1020
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1021
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1022
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
1023
1024
-- Sync mode bitmap tests --
1025
1026
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1027
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1028
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
1029
1030
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1031
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1032
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
1033
1034
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1035
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1036
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
1037
1038
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1039
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1040
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
1041
1042
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1043
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1044
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1045
1046
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1047
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1048
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1049
1050
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1051
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1052
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1053
1054
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1055
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1056
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1057
1058
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1059
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1060
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
1061
1062
-- Sync mode full tests --
1063
1064
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1065
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1066
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1067
1068
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1069
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1070
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1071
1072
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1073
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1074
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1075
1076
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1077
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1078
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1079
1080
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1081
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1082
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1083
1084
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1085
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1086
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1087
1088
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1089
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1090
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1091
1092
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1093
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1094
{"error": {"class": "GenericError", "desc": "Bitmap sync mode 'never' has no meaningful effect when combined with sync mode 'full'"}}
1095
1096
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1097
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1098
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
1099
1100
-- Sync mode top tests --
1101
1102
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1103
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1104
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1105
1106
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1107
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1108
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1109
1110
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1111
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1112
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1113
1114
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1115
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1116
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1117
1118
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1119
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1120
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1121
1122
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1123
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1124
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1125
1126
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1127
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1128
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1129
1130
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1131
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1132
{"error": {"class": "GenericError", "desc": "Bitmap sync mode 'never' has no meaningful effect when combined with sync mode 'top'"}}
1133
1134
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1135
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1136
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
1137
1138
-- Sync mode none tests --
1139
1140
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1141
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1142
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1143
1144
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1145
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1146
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1147
1148
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1149
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1150
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1151
1152
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1153
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1154
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1155
1156
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1157
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1158
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1159
1160
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1161
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1162
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1163
1164
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1165
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1166
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1167
1168
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1169
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1170
{"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
1171
1172
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1173
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1174
{"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
1175
1176
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1177
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1178
{"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
1179
1180
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1181
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1182
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
1183
1184
--
1185
2.29.2
1186
1187
diff view generated by jsdifflib
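The transcript above exercises every combination of `bitmap`, `bitmap-mode`, and `sync` for `blockdev-backup` and records the expected error for each. As a rough sketch only (a toy re-implementation of the validation rules the transcript demonstrates, not QEMU's actual code; the function name and ordering are assumptions), the checks can be modeled and verified like this:

```python
def validate_backup_args(sync, bitmap=None, bitmap_mode=None,
                         known_bitmaps=("bitmap0",)):
    """Toy model of the blockdev-backup argument checks shown above.

    Returns the error description a matching QMP call would produce,
    or None if the combination is accepted.
    """
    if sync == "bitmap" and bitmap is None:
        return "must provide a valid bitmap name for 'bitmap' sync mode"
    if bitmap is None and bitmap_mode is not None:
        return "Cannot specify bitmap sync mode without a bitmap"
    if bitmap is not None and bitmap not in known_bitmaps:
        return "Bitmap '%s' could not be found" % bitmap
    if bitmap is not None and bitmap_mode is None:
        return "Bitmap sync mode must be given when providing a bitmap"
    if bitmap is not None and sync == "none":
        return "sync mode 'none' does not produce meaningful bitmap outputs"
    if bitmap_mode == "never" and sync in ("full", "top"):
        return ("Bitmap sync mode 'never' has no meaningful effect "
                "when combined with sync mode '%s'" % sync)
    return None
```

Checking a few cases against the transcript (e.g. `sync=bitmap` with a mode but no bitmap, or `bitmap404` with any mode) reproduces the same error strings, which is a quick way to sanity-check a test matrix like this one.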
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
We are going to stop using this callback in the following commit.
4
Still, the callback handling code will be dropped in a separate commit.
5
So, for now let's make it optional.
6
7
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Message-Id: <20210116214705.822267-16-vsementsov@virtuozzo.com>
10
Signed-off-by: Max Reitz <mreitz@redhat.com>
11
---
12
block/block-copy.c | 4 +++-
13
1 file changed, 3 insertions(+), 1 deletion(-)
14
15
diff --git a/block/block-copy.c b/block/block-copy.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/block/block-copy.c
18
+++ b/block/block-copy.c
19
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
20
t->call_state->error_is_read = error_is_read;
21
} else {
22
progress_work_done(t->s->progress, t->bytes);
23
- t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
24
+ if (t->s->progress_bytes_callback) {
25
+ t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
26
+ }
27
}
28
co_put_to_shres(t->s->mem, t->bytes);
29
block_copy_task_end(t, ret);
30
--
31
2.29.2
32
33
diff view generated by jsdifflib
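The patch above makes `progress_bytes_callback` optional by guarding the call behind a NULL check, so callers that never register a callback no longer crash the task completion path. The same pattern, as a minimal Python sketch (class and method names are invented for illustration, not QEMU API):

```python
class CopyTask:
    """Minimal model of a copy task whose progress callback is optional."""

    def __init__(self, nbytes, progress_cb=None, progress_opaque=None):
        self.nbytes = nbytes
        self.progress_cb = progress_cb        # may be None, like the C code
        self.progress_opaque = progress_opaque
        self.done = 0

    def complete_chunk(self, chunk):
        self.done += chunk
        # Mirror of the guarded call added by the patch: only notify
        # if a callback was actually registered.
        if self.progress_cb:
            self.progress_cb(chunk, self.progress_opaque)


progress = []
t = CopyTask(1024, progress_cb=lambda n, _: progress.append(n))
t.complete_chunk(512)                 # callback fires
CopyTask(1024).complete_chunk(512)    # no callback registered: no error
```

The guard is what lets the following commits drop the callback registration from the backup job without first rewriting the task-completion code.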
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
This brings async request handling and block-status-driven chunk sizes
3
bdrv_is_allocated_above wrongly handles short backing files: it reports
4
to backup out of the box, which improves backup performance.
4
after-EOF space as UNALLOCATED, which is wrong, as on read the data is
5
generated at the level of the short backing file (if all overlays have
6
unallocated areas at that place).
7
8
Reusing bdrv_common_block_status_above fixes the issue and unifies the code
9
path.
5
10
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
11
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Reviewed-by: Max Reitz <mreitz@redhat.com>
12
Reviewed-by: Eric Blake <eblake@redhat.com>
8
Message-Id: <20210116214705.822267-18-vsementsov@virtuozzo.com>
13
Reviewed-by: Alberto Garcia <berto@igalia.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
14
Message-id: 20200924194003.22080-5-vsementsov@virtuozzo.com
15
[Fix s/has/have/ as suggested by Eric Blake. Fix s/area/areas/.
16
--Stefan]
17
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
10
---
18
---
11
block/backup.c | 187 +++++++++++++++++++++++++++++++------------------
19
block/io.c | 43 +++++--------------------------------------
12
1 file changed, 120 insertions(+), 67 deletions(-)
20
1 file changed, 5 insertions(+), 38 deletions(-)
13
21
14
diff --git a/block/backup.c b/block/backup.c
22
diff --git a/block/io.c b/block/io.c
15
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
16
--- a/block/backup.c
24
--- a/block/io.c
17
+++ b/block/backup.c
25
+++ b/block/io.c
18
@@ -XXX,XX +XXX,XX @@
26
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_is_allocated(BlockDriverState *bs, int64_t offset,
19
#include "block/block-copy.h"
27
* at 'offset + *pnum' may return the same allocation status (in other
20
#include "qapi/error.h"
28
* words, the result is not necessarily the maximum possible range);
21
#include "qapi/qmp/qerror.h"
29
* but 'pnum' will only be 0 when end of file is reached.
22
-#include "qemu/ratelimit.h"
30
- *
23
#include "qemu/cutils.h"
31
*/
24
#include "sysemu/block-backend.h"
32
int bdrv_is_allocated_above(BlockDriverState *top,
25
#include "qemu/bitmap.h"
33
BlockDriverState *base,
26
@@ -XXX,XX +XXX,XX @@ typedef struct BackupBlockJob {
34
bool include_base, int64_t offset,
27
BlockdevOnError on_source_error;
35
int64_t bytes, int64_t *pnum)
28
BlockdevOnError on_target_error;
36
{
29
uint64_t len;
37
- BlockDriverState *intermediate;
30
- uint64_t bytes_read;
38
- int ret;
31
int64_t cluster_size;
39
- int64_t n = bytes;
32
BackupPerf perf;
33
34
BlockCopyState *bcs;
35
+
36
+ bool wait;
37
+ BlockCopyCallState *bg_bcs_call;
38
} BackupBlockJob;
39
40
static const BlockJobDriver backup_job_driver;
41
42
-static void backup_progress_bytes_callback(int64_t bytes, void *opaque)
43
-{
44
- BackupBlockJob *s = opaque;
45
-
40
-
46
- s->bytes_read += bytes;
41
- assert(base || !include_base);
47
-}
48
-
42
-
49
-static int coroutine_fn backup_do_cow(BackupBlockJob *job,
43
- intermediate = top;
50
- int64_t offset, uint64_t bytes,
44
- while (include_base || intermediate != base) {
51
- bool *error_is_read)
45
- int64_t pnum_inter;
52
-{
46
- int64_t size_inter;
53
- int ret = 0;
54
- int64_t start, end; /* bytes */
55
-
47
-
56
- start = QEMU_ALIGN_DOWN(offset, job->cluster_size);
48
- assert(intermediate);
57
- end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
49
- ret = bdrv_is_allocated(intermediate, offset, bytes, &pnum_inter);
50
- if (ret < 0) {
51
- return ret;
52
- }
53
- if (ret) {
54
- *pnum = pnum_inter;
55
- return 1;
56
- }
58
-
57
-
59
- trace_backup_do_cow_enter(job, start, offset, bytes);
58
- size_inter = bdrv_getlength(intermediate);
59
- if (size_inter < 0) {
60
- return size_inter;
61
- }
62
- if (n > pnum_inter &&
63
- (intermediate == top || offset + pnum_inter < size_inter)) {
64
- n = pnum_inter;
65
- }
60
-
66
-
61
- ret = block_copy(job->bcs, start, end - start, true, error_is_read);
67
- if (intermediate == base) {
68
- break;
69
- }
62
-
70
-
63
- trace_backup_do_cow_return(job, offset, bytes, ret);
71
- intermediate = bdrv_filter_or_cow_bs(intermediate);
64
-
72
+ int ret = bdrv_common_block_status_above(top, base, include_base, false,
65
- return ret;
73
+ offset, bytes, pnum, NULL, NULL);
66
-}
74
+ if (ret < 0) {
67
-
75
+ return ret;
68
static void backup_cleanup_sync_bitmap(BackupBlockJob *job, int ret)
69
{
70
BdrvDirtyBitmap *bm;
71
@@ -XXX,XX +XXX,XX @@ static BlockErrorAction backup_error_action(BackupBlockJob *job,
72
}
76
}
77
78
- *pnum = n;
79
- return 0;
80
+ return !!(ret & BDRV_BLOCK_ALLOCATED);
73
}
81
}
74
82
75
-static bool coroutine_fn yield_and_check(BackupBlockJob *job)
83
int coroutine_fn
76
+static void coroutine_fn backup_block_copy_callback(void *opaque)
77
{
78
- uint64_t delay_ns;
79
-
80
- if (job_is_cancelled(&job->common.job)) {
81
- return true;
82
- }
83
-
84
- /*
85
- * We need to yield even for delay_ns = 0 so that bdrv_drain_all() can
86
- * return. Without a yield, the VM would not reboot.
87
- */
88
- delay_ns = block_job_ratelimit_get_delay(&job->common, job->bytes_read);
89
- job->bytes_read = 0;
90
- job_sleep_ns(&job->common.job, delay_ns);
91
+ BackupBlockJob *s = opaque;
92
93
- if (job_is_cancelled(&job->common.job)) {
94
- return true;
95
+ if (s->wait) {
96
+ s->wait = false;
97
+ aio_co_wake(s->common.job.co);
98
+ } else {
99
+ job_enter(&s->common.job);
100
}
101
-
102
- return false;
103
}
104
105
static int coroutine_fn backup_loop(BackupBlockJob *job)
106
{
107
- bool error_is_read;
108
- int64_t offset;
109
- BdrvDirtyBitmapIter *bdbi;
110
+ BlockCopyCallState *s = NULL;
111
int ret = 0;
112
+ bool error_is_read;
113
+ BlockErrorAction act;
114
+
115
+ while (true) { /* retry loop */
116
+ job->bg_bcs_call = s = block_copy_async(job->bcs, 0,
117
+ QEMU_ALIGN_UP(job->len, job->cluster_size),
118
+ job->perf.max_workers, job->perf.max_chunk,
119
+ backup_block_copy_callback, job);
120
+
121
+ while (!block_copy_call_finished(s) &&
122
+ !job_is_cancelled(&job->common.job))
123
+ {
124
+ job_yield(&job->common.job);
125
+ }
126
127
- bdbi = bdrv_dirty_iter_new(block_copy_dirty_bitmap(job->bcs));
128
- while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
129
- do {
130
- if (yield_and_check(job)) {
131
- goto out;
132
- }
133
- ret = backup_do_cow(job, offset, job->cluster_size, &error_is_read);
134
- if (ret < 0 && backup_error_action(job, error_is_read, -ret) ==
135
- BLOCK_ERROR_ACTION_REPORT)
136
- {
137
- goto out;
138
- }
139
- } while (ret < 0);
140
+ if (!block_copy_call_finished(s)) {
141
+ assert(job_is_cancelled(&job->common.job));
142
+ /*
143
+ * Note that we can't use job_yield() here, as it doesn't work for
144
+ * cancelled job.
145
+ */
146
+ block_copy_call_cancel(s);
147
+ job->wait = true;
148
+ qemu_coroutine_yield();
149
+ assert(block_copy_call_finished(s));
150
+ ret = 0;
151
+ goto out;
152
+ }
153
+
154
+ if (job_is_cancelled(&job->common.job) ||
155
+ block_copy_call_succeeded(s))
156
+ {
157
+ ret = 0;
158
+ goto out;
159
+ }
160
+
161
+ if (block_copy_call_cancelled(s)) {
162
+ /*
163
+ * Job is not cancelled but only block-copy call. This is possible
164
+ * after job pause. Now the pause is finished, start new block-copy
165
+ * iteration.
166
+ */
167
+ block_copy_call_free(s);
168
+ continue;
169
+ }
170
+
171
+ /* The only remaining case is failed block-copy call. */
172
+ assert(block_copy_call_failed(s));
173
+
174
+ ret = block_copy_call_status(s, &error_is_read);
175
+ act = backup_error_action(job, error_is_read, -ret);
176
+ switch (act) {
177
+ case BLOCK_ERROR_ACTION_REPORT:
178
+ goto out;
179
+ case BLOCK_ERROR_ACTION_STOP:
180
+ /*
181
+ * Go to pause prior to starting new block-copy call on the next
182
+ * iteration.
183
+ */
184
+ job_pause_point(&job->common.job);
185
+ break;
186
+ case BLOCK_ERROR_ACTION_IGNORE:
187
+ /* Proceed to new block-copy call to retry. */
188
+ break;
189
+ default:
190
+ abort();
191
+ }
192
+
193
+ block_copy_call_free(s);
194
}
195
196
- out:
197
- bdrv_dirty_iter_free(bdbi);
198
+out:
199
+ block_copy_call_free(s);
200
+ job->bg_bcs_call = NULL;
201
return ret;
202
}
203
204
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_run(Job *job, Error **errp)
205
int64_t count;
206
207
for (offset = 0; offset < s->len; ) {
208
- if (yield_and_check(s)) {
209
+ if (job_is_cancelled(job)) {
210
+ return -ECANCELED;
211
+ }
212
+
213
+ job_pause_point(job);
214
+
215
+ if (job_is_cancelled(job)) {
216
return -ECANCELED;
217
}
218
219
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_run(Job *job, Error **errp)
220
return 0;
221
}
222
223
+static void coroutine_fn backup_pause(Job *job)
224
+{
225
+ BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
226
+
227
+ if (s->bg_bcs_call && !block_copy_call_finished(s->bg_bcs_call)) {
228
+ block_copy_call_cancel(s->bg_bcs_call);
229
+ s->wait = true;
230
+ qemu_coroutine_yield();
231
+ }
232
+}
233
+
234
+static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
235
+{
236
+ BackupBlockJob *s = container_of(job, BackupBlockJob, common);
237
+
238
+ /*
239
+ * block_job_set_speed() is called first from block_job_create(), when we
240
+ * don't yet have s->bcs.
241
+ */
242
+ if (s->bcs) {
243
+ block_copy_set_speed(s->bcs, speed);
244
+ if (s->bg_bcs_call) {
245
+ block_copy_kick(s->bg_bcs_call);
246
+ }
247
+ }
248
+}
249
+
250
static const BlockJobDriver backup_job_driver = {
251
.job_driver = {
252
.instance_size = sizeof(BackupBlockJob),
253
@@ -XXX,XX +XXX,XX @@ static const BlockJobDriver backup_job_driver = {
254
.commit = backup_commit,
255
.abort = backup_abort,
256
.clean = backup_clean,
257
- }
258
+ .pause = backup_pause,
259
+ },
260
+ .set_speed = backup_set_speed,
261
};
262
263
static int64_t backup_calculate_cluster_size(BlockDriverState *target,
264
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
265
job->len = len;
266
job->perf = *perf;
267
268
- block_copy_set_progress_callback(bcs, backup_progress_bytes_callback, job);
269
block_copy_set_progress_meter(bcs, &job->common.job.progress);
270
+ block_copy_set_speed(bcs, speed);
271
272
/* Required permissions are already taken by backup-top target */
273
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
274
--
84
--
275
2.29.2
85
2.26.2
276
86
277
diff view generated by jsdifflib
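The rewritten `backup_loop()` above is a retry loop around asynchronous block-copy calls: each call ends succeeded, cancelled, or failed, and the job either finishes, starts a fresh iteration (e.g. after a pause cancelled only the block-copy call), or consults the error action. A toy model of that control flow (all names invented for illustration; this is a sketch of the loop's logic, not QEMU code):

```python
# Simulated outcomes of one async block-copy call, and the error
# actions the job can take when a call fails.
SUCCEEDED, CANCELLED, FAILED = "succeeded", "cancelled", "failed"
REPORT, STOP, IGNORE = "report", "stop", "ignore"


def backup_loop(outcomes, error_action=IGNORE):
    """Drive simulated block-copy calls until the backup ends.

    outcomes: iterable of call results, consumed one per iteration.
    Returns 0 on success, -1 when an error is reported or retries
    are exhausted.
    """
    for outcome in outcomes:
        if outcome == SUCCEEDED:
            return 0
        if outcome == CANCELLED:
            # Only the block-copy call was cancelled (e.g. the job was
            # paused); start a new iteration with a fresh call.
            continue
        assert outcome == FAILED
        if error_action == REPORT:
            return -1          # report the error and end the job
        # STOP pauses first, IGNORE retries immediately; either way
        # the loop starts a new block-copy call.
    return -1
```

For example, a run that is paused once, fails once, and then succeeds still returns success under the default IGNORE action, while REPORT ends the job on the first failure.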
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
A further commit will add a benchmark
3
These cases are fixed by previous patches around block_status and
4
(scripts/simplebench/bench-backup.py), which will show that backup
4
is_allocated.
5
works better with async parallel requests (previous commit) and
6
disabled copy_range. So, let's disable copy_range by default.
7
8
Note: the option was added several commits ago with the default set to true,
9
to follow old behavior (the feature was enabled unconditionally), and
10
only now we are going to change the default behavior.
11
5
12
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
13
Reviewed-by: Max Reitz <mreitz@redhat.com>
7
Reviewed-by: Eric Blake <eblake@redhat.com>
14
Message-Id: <20210116214705.822267-19-vsementsov@virtuozzo.com>
8
Reviewed-by: Alberto Garcia <berto@igalia.com>
15
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
Message-id: 20200924194003.22080-6-vsementsov@virtuozzo.com
10
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
16
---
11
---
17
qapi/block-core.json | 2 +-
12
tests/qemu-iotests/274 | 20 +++++++++++
18
blockdev.c | 2 +-
13
tests/qemu-iotests/274.out | 68 ++++++++++++++++++++++++++++++++++++++
19
2 files changed, 2 insertions(+), 2 deletions(-)
14
2 files changed, 88 insertions(+)
20
15
21
diff --git a/qapi/block-core.json b/qapi/block-core.json
16
diff --git a/tests/qemu-iotests/274 b/tests/qemu-iotests/274
17
index XXXXXXX..XXXXXXX 100755
18
--- a/tests/qemu-iotests/274
19
+++ b/tests/qemu-iotests/274
20
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('base') as base, \
21
iotests.qemu_io_log('-c', 'read -P 1 0 %d' % size_short, mid)
22
iotests.qemu_io_log('-c', 'read -P 0 %d %d' % (size_short, size_diff), mid)
23
24
+ iotests.log('=== Testing qemu-img commit (top -> base) ===')
25
+
26
+ create_chain()
27
+ iotests.qemu_img_log('commit', '-b', base, top)
28
+ iotests.img_info_log(base)
29
+ iotests.qemu_io_log('-c', 'read -P 1 0 %d' % size_short, base)
30
+ iotests.qemu_io_log('-c', 'read -P 0 %d %d' % (size_short, size_diff), base)
31
+
32
+ iotests.log('=== Testing QMP active commit (top -> base) ===')
33
+
34
+ create_chain()
35
+ with create_vm() as vm:
36
+ vm.launch()
37
+ vm.qmp_log('block-commit', device='top', base_node='base',
38
+ job_id='job0', auto_dismiss=False)
39
+ vm.run_job('job0', wait=5)
40
+
41
+ iotests.img_info_log(mid)
42
+ iotests.qemu_io_log('-c', 'read -P 1 0 %d' % size_short, base)
43
+ iotests.qemu_io_log('-c', 'read -P 0 %d %d' % (size_short, size_diff), base)
44
45
iotests.log('== Resize tests ==')
46
47
diff --git a/tests/qemu-iotests/274.out b/tests/qemu-iotests/274.out
22
index XXXXXXX..XXXXXXX 100644
48
index XXXXXXX..XXXXXXX 100644
23
--- a/qapi/block-core.json
49
--- a/tests/qemu-iotests/274.out
24
+++ b/qapi/block-core.json
50
+++ b/tests/qemu-iotests/274.out
25
@@ -XXX,XX +XXX,XX @@
51
@@ -XXX,XX +XXX,XX @@ read 1048576/1048576 bytes at offset 0
26
# Optional parameters for backup. These parameters don't affect
52
read 1048576/1048576 bytes at offset 1048576
27
# functionality, but may significantly affect performance.
53
1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
28
#
54
29
-# @use-copy-range: Use copy offloading. Default true.
55
+=== Testing qemu-img commit (top -> base) ===
30
+# @use-copy-range: Use copy offloading. Default false.
56
+Formatting 'TEST_DIR/PID-base', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2097152 lazy_refcounts=off refcount_bits=16
31
#
57
+
32
# @max-workers: Maximum number of parallel requests for the sustained background
58
+Formatting 'TEST_DIR/PID-mid', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=1048576 backing_file=TEST_DIR/PID-base backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
33
# copying process. Doesn't influence copy-before-write operations.
59
+
34
diff --git a/blockdev.c b/blockdev.c
60
+Formatting 'TEST_DIR/PID-top', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2097152 backing_file=TEST_DIR/PID-mid backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
35
index XXXXXXX..XXXXXXX 100644
61
+
36
--- a/blockdev.c
62
+wrote 2097152/2097152 bytes at offset 0
37
+++ b/blockdev.c
63
+2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
38
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
64
+
39
{
65
+Image committed.
40
BlockJob *job = NULL;
66
+
41
BdrvDirtyBitmap *bmap = NULL;
67
+image: TEST_IMG
42
- BackupPerf perf = { .use_copy_range = true, .max_workers = 64 };
68
+file format: IMGFMT
43
+ BackupPerf perf = { .max_workers = 64 };
69
+virtual size: 2 MiB (2097152 bytes)
44
int job_flags = JOB_DEFAULT;
70
+cluster_size: 65536
45
71
+Format specific information:
46
if (!backup->has_speed) {
72
+ compat: 1.1
73
+ compression type: zlib
74
+ lazy refcounts: false
75
+ refcount bits: 16
76
+ corrupt: false
77
+ extended l2: false
78
+
79
+read 1048576/1048576 bytes at offset 0
80
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
81
+
82
+read 1048576/1048576 bytes at offset 1048576
83
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
84
+
85
+=== Testing QMP active commit (top -> base) ===
86
+Formatting 'TEST_DIR/PID-base', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2097152 lazy_refcounts=off refcount_bits=16
87
+
88
+Formatting 'TEST_DIR/PID-mid', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=1048576 backing_file=TEST_DIR/PID-base backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
89
+
90
+Formatting 'TEST_DIR/PID-top', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2097152 backing_file=TEST_DIR/PID-mid backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
91
+
92
+wrote 2097152/2097152 bytes at offset 0
93
+2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
94
+
95
+{"execute": "block-commit", "arguments": {"auto-dismiss": false, "base-node": "base", "device": "top", "job-id": "job0"}}
96
+{"return": {}}
97
+{"execute": "job-complete", "arguments": {"id": "job0"}}
98
+{"return": {}}
99
+{"data": {"device": "job0", "len": 1048576, "offset": 1048576, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_READY", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
100
+{"data": {"device": "job0", "len": 1048576, "offset": 1048576, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
101
+{"execute": "job-dismiss", "arguments": {"id": "job0"}}
102
+{"return": {}}
103
+image: TEST_IMG
104
+file format: IMGFMT
105
+virtual size: 1 MiB (1048576 bytes)
106
+cluster_size: 65536
107
+backing file: TEST_DIR/PID-base
108
+backing file format: IMGFMT
109
+Format specific information:
110
+ compat: 1.1
111
+ compression type: zlib
112
+ lazy refcounts: false
113
+ refcount bits: 16
114
+ corrupt: false
115
+ extended l2: false
116
+
117
+read 1048576/1048576 bytes at offset 0
118
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
119
+
120
+read 1048576/1048576 bytes at offset 1048576
121
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
122
+
123
== Resize tests ==
124
=== preallocation=off ===
125
Formatting 'TEST_DIR/PID-base', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=6442450944 lazy_refcounts=off refcount_bits=16
47
--
126
--
48
2.29.2
127
2.26.2
49
128
50
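The commit top -> base cases exercised by iotest 274 merge every allocated cluster of the overlays down into the base image. A toy Python model of that semantics (the dict representation and function names are a hypothetical simplification, not QEMU code):

```python
# Toy model of a qcow2 backing chain: each image maps cluster index -> data;
# a cluster unallocated in an overlay falls through to its backing image.

def read(chain, idx):
    """Read cluster idx through a chain ordered top-most overlay first."""
    for img in chain:
        if idx in img:
            return img[idx]
    return b'\0'  # unallocated everywhere reads as zeroes


def commit(chain):
    """Commit all overlays above the base into the base (top -> base)."""
    *overlays, base = chain
    clusters = max((i for img in chain for i in img), default=-1) + 1
    for idx in range(clusters):
        for img in chain:          # top-most allocation wins
            if idx in img:
                base[idx] = img[idx]
                break
    for img in overlays:
        img.clear()                # overlays are now redundant


base, mid, top = {0: b'B0'}, {0: b'M0', 1: b'M1'}, {1: b'T1'}
commit([top, mid, base])
assert read([base], 0) == b'M0'    # mid's data reached the base
assert read([base], 1) == b'T1'    # top's data shadowed mid's
```

This mirrors what the test asserts with `qemu-io` after `qemu-img commit -b base top`: data written in the overlays must be readable from the base alone.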
diff view generated by jsdifflib
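The "async parallel requests" that the backup patches in this series rely on can be pictured as a bounded pool of per-cluster copy coroutines. A minimal Python sketch under stated assumptions (these names and the asyncio machinery are illustrative only; QEMU's block-copy uses its own coroutine framework):

```python
import asyncio

CLUSTER = 64 * 1024  # qcow2-style cluster granularity

async def copy_cluster(src, dst, off, sem):
    # The semaphore bounds in-flight copies, in the spirit of
    # BackupPerf.max_workers (default 64 in this series).
    async with sem:
        await asyncio.sleep(0)  # stand-in for a real async read/write
        dst[off:off + CLUSTER] = src[off:off + CLUSTER]

async def block_copy(src, dst, max_workers=64):
    sem = asyncio.Semaphore(max_workers)
    await asyncio.gather(*(copy_cluster(src, dst, off, sem)
                           for off in range(0, len(src), CLUSTER)))

src = bytearray(bytes(range(256)) * 1024)  # 256 KiB = 4 clusters
dst = bytearray(len(src))
asyncio.run(block_copy(src, dst, max_workers=2))
assert dst == src
```

The point of the copy_range patch above is that with such parallel cluster-sized requests in place, per-request copy offloading stopped paying off, so it became opt-in.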