1
The following changes since commit 75ee62ac606bfc9eb59310b9446df3434bf6e8c2:
1
The following changes since commit 474f3938d79ab36b9231c9ad3b5a9314c2aeacde:
2
2
3
Merge remote-tracking branch 'remotes/ehabkost-gl/tags/x86-next-pull-request' into staging (2020-12-17 18:53:36 +0000)
3
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-jun-21-2019' into staging (2019-06-21 15:40:50 +0100)
4
4
5
are available in the Git repository at:
5
are available in the Git repository at:
6
6
7
https://github.com/XanClic/qemu.git tags/pull-block-2020-12-18
7
https://github.com/XanClic/qemu.git tags/pull-block-2019-06-24
8
8
9
for you to fetch changes up to 0e72078128229bf9efb542e396ab44bf91b91340:
9
for you to fetch changes up to ab5d4a30f7f3803ca5106b370969c1b7b54136f8:
10
10
11
iotests: Fix _send_qemu_cmd with bash 5.1 (2020-12-18 12:47:38 +0100)
11
iotests: Fix 205 for concurrent runs (2019-06-24 16:01:40 +0200)
12
12
13
----------------------------------------------------------------
13
----------------------------------------------------------------
14
Block patches:
14
Block patches:
15
- New block filter: preallocate (which, on writes beyond an image file's
15
- The SSH block driver now uses libssh instead of libssh2
16
end, allocates big chunks of data so that such post-EOF writes will
16
- The VMDK block driver gets read-only support for the seSparse
17
occur less frequently)
17
subformat
18
- write-zeroes and block-status support for Quorum
18
- Various fixes
19
- Implementation of truncate for the nvme block driver similarly to the
19
20
existing implementations for host block devices and iscsi devices
20
---
21
- Block layer refactoring: Drop the tighten_restrictions concept in the
21
22
block permission functions
22
v2:
23
- iotest fixes
23
- Squashed Pino's fix for pre-0.8 libssh into the libssh patch
24
24
25
----------------------------------------------------------------
25
----------------------------------------------------------------
26
Alberto Garcia (2):
26
Anton Nefedov (1):
27
quorum: Implement bdrv_co_block_status()
27
iotest 134: test cluster-misaligned encrypted write
28
quorum: Implement bdrv_co_pwrite_zeroes()
29
28
30
Max Reitz (2):
29
Klaus Birkelund Jensen (1):
31
iotests/102: Pass $QEMU_HANDLE to _send_qemu_cmd
30
nvme: do not advertise support for unsupported arbitration mechanism
32
iotests: Fix _send_qemu_cmd with bash 5.1
33
31
34
Philippe Mathieu-Daudé (1):
32
Max Reitz (1):
35
block/nvme: Implement fake truncate() coroutine
33
iotests: Fix 205 for concurrent runs
36
34
37
Vladimir Sementsov-Ogievskiy (25):
35
Pino Toscano (1):
38
block: add bdrv_refresh_perms() helper
36
ssh: switch from libssh2 to libssh
39
block: bdrv_set_perm() drop redundant parameters.
40
block: bdrv_child_set_perm() drop redundant parameters.
41
block: drop tighten_restrictions
42
block: simplify comment to BDRV_REQ_SERIALISING
43
block/io.c: drop assertion on double waiting for request serialisation
44
block/io: split out bdrv_find_conflicting_request
45
block/io: bdrv_wait_serialising_requests_locked: drop extra bs arg
46
block: bdrv_mark_request_serialising: split non-waiting function
47
block: introduce BDRV_REQ_NO_WAIT flag
48
block: bdrv_check_perm(): process children anyway
49
block: introduce preallocate filter
50
qemu-io: add preallocate mode parameter for truncate command
51
iotests: qemu_io_silent: support --image-opts
52
iotests.py: execute_setup_common(): add required_fmts argument
53
iotests: add 298 to test new preallocate filter driver
54
scripts/simplebench: fix grammar: s/successed/succeeded/
55
scripts/simplebench: support iops
56
scripts/simplebench: use standard deviation for +- error
57
simplebench: rename ascii() to results_to_text()
58
simplebench: move results_to_text() into separate file
59
simplebench/results_to_text: improve view of the table
60
simplebench/results_to_text: add difference line to the table
61
simplebench/results_to_text: make executable
62
scripts/simplebench: add bench_prealloc.py
63
37
64
docs/system/qemu-block-drivers.rst.inc | 26 ++
38
Sam Eiderman (3):
65
qapi/block-core.json | 20 +-
39
vmdk: Fix comment regarding max l1_size coverage
66
include/block/block.h | 20 +-
40
vmdk: Reduce the max bound for L1 table size
67
include/block/block_int.h | 3 +-
41
vmdk: Add read-only support for seSparse snapshots
68
block.c | 185 +++-----
42
69
block/file-posix.c | 2 +-
43
Vladimir Sementsov-Ogievskiy (1):
70
block/io.c | 130 +++---
44
blockdev: enable non-root nodes for transaction drive-backup source
71
block/nvme.c | 24 ++
45
72
block/preallocate.c | 559 +++++++++++++++++++++++++
46
configure | 65 +-
73
block/quorum.c | 88 +++-
47
block/Makefile.objs | 6 +-
74
qemu-io-cmds.c | 46 +-
48
block/ssh.c | 652 ++++++++++--------
75
block/meson.build | 1 +
49
block/vmdk.c | 372 +++++++++-
76
scripts/simplebench/bench-example.py | 3 +-
50
blockdev.c | 2 +-
77
scripts/simplebench/bench_prealloc.py | 132 ++++++
51
hw/block/nvme.c | 1 -
78
scripts/simplebench/bench_write_req.py | 3 +-
52
.travis.yml | 4 +-
79
scripts/simplebench/results_to_text.py | 126 ++++++
53
block/trace-events | 14 +-
80
scripts/simplebench/simplebench.py | 66 ++-
54
docs/qemu-block-drivers.texi | 2 +-
81
tests/qemu-iotests/085.out | 167 ++++++--
55
.../dockerfiles/debian-win32-cross.docker | 1 -
82
tests/qemu-iotests/094.out | 10 +-
56
.../dockerfiles/debian-win64-cross.docker | 1 -
83
tests/qemu-iotests/095.out | 4 +-
57
tests/docker/dockerfiles/fedora.docker | 4 +-
84
tests/qemu-iotests/102 | 2 +-
58
tests/docker/dockerfiles/ubuntu.docker | 2 +-
85
tests/qemu-iotests/102.out | 2 +-
59
tests/docker/dockerfiles/ubuntu1804.docker | 2 +-
86
tests/qemu-iotests/109.out | 88 +++-
60
tests/qemu-iotests/059.out | 2 +-
87
tests/qemu-iotests/117.out | 13 +-
61
tests/qemu-iotests/134 | 9 +
88
tests/qemu-iotests/127.out | 12 +-
62
tests/qemu-iotests/134.out | 10 +
89
tests/qemu-iotests/140.out | 10 +-
63
tests/qemu-iotests/205 | 2 +-
90
tests/qemu-iotests/141.out | 128 ++++--
64
tests/qemu-iotests/207 | 54 +-
91
tests/qemu-iotests/143.out | 4 +-
65
tests/qemu-iotests/207.out | 2 +-
92
tests/qemu-iotests/144.out | 28 +-
66
20 files changed, 823 insertions(+), 384 deletions(-)
93
tests/qemu-iotests/153.out | 18 +-
94
tests/qemu-iotests/156.out | 39 +-
95
tests/qemu-iotests/161.out | 18 +-
96
tests/qemu-iotests/173.out | 25 +-
97
tests/qemu-iotests/182.out | 42 +-
98
tests/qemu-iotests/183.out | 19 +-
99
tests/qemu-iotests/185.out | 45 +-
100
tests/qemu-iotests/191.out | 12 +-
101
tests/qemu-iotests/223.out | 92 ++--
102
tests/qemu-iotests/229.out | 13 +-
103
tests/qemu-iotests/249.out | 16 +-
104
tests/qemu-iotests/298 | 186 ++++++++
105
tests/qemu-iotests/298.out | 5 +
106
tests/qemu-iotests/308.out | 103 ++++-
107
tests/qemu-iotests/312 | 159 +++++++
108
tests/qemu-iotests/312.out | 81 ++++
109
tests/qemu-iotests/common.qemu | 11 +-
110
tests/qemu-iotests/group | 2 +
111
tests/qemu-iotests/iotests.py | 16 +-
112
48 files changed, 2357 insertions(+), 447 deletions(-)
113
create mode 100644 block/preallocate.c
114
create mode 100755 scripts/simplebench/bench_prealloc.py
115
create mode 100755 scripts/simplebench/results_to_text.py
116
create mode 100644 tests/qemu-iotests/298
117
create mode 100644 tests/qemu-iotests/298.out
118
create mode 100755 tests/qemu-iotests/312
119
create mode 100644 tests/qemu-iotests/312.out
120
67
121
--
68
--
122
2.29.2
69
2.21.0
123
70
124
71
diff view generated by jsdifflib
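As a rough illustration of how the new preallocate filter advertised above can sit between a format node and its protocol file, here is a Python sketch of the node layout. The node names, the image path, and the use of blockdev-add over QMP are illustrative assumptions; only the option names (prealloc-align, prealloc-size) and the driver nesting come from the patches in this series.

    import json

    # Hypothetical blockdev-add arguments: qcow2 on top of the preallocate
    # filter on top of a plain file node.
    blockdev_args = {
        "driver": "qcow2",
        "node-name": "fmt",                      # made-up node name
        "file": {
            "driver": "preallocate",
            "prealloc-align": 1024 * 1024,       # default is 1M
            "prealloc-size": 128 * 1024 * 1024,  # default is 128M
            "file": {
                "driver": "file",
                "filename": "/path/to/image.qcow2",  # placeholder path
            },
        },
    }
    print(json.dumps(blockdev_args, indent=2))

    # The same nesting spelled as an --image-opts string, as used by
    # scripts/simplebench/bench_prealloc.py later in this series:
    print("driver=qcow2,file.driver=preallocate,"
          "file.file.driver=file,file.file.filename=/path/to/image.qcow2")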
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Make a separate function for a common pattern.
4
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Message-Id: <20201106124241.16950-5-vsementsov@virtuozzo.com>
7
[mreitz: Squashed in
8
https://lists.nongnu.org/archive/html/qemu-block/2020-11/msg00299.html]
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
block.c | 61 +++++++++++++++++++++++++++++----------------------------
12
1 file changed, 31 insertions(+), 30 deletions(-)
13
14
diff --git a/block.c b/block.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/block.c
17
+++ b/block.c
18
@@ -XXX,XX +XXX,XX @@ static void bdrv_child_abort_perm_update(BdrvChild *c)
19
bdrv_abort_perm_update(c->bs);
20
}
21
22
+static int bdrv_refresh_perms(BlockDriverState *bs, bool *tighten_restrictions,
23
+ Error **errp)
24
+{
25
+ int ret;
26
+ uint64_t perm, shared_perm;
27
+
28
+ bdrv_get_cumulative_perm(bs, &perm, &shared_perm);
29
+ ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL,
30
+ tighten_restrictions, errp);
31
+ if (ret < 0) {
32
+ bdrv_abort_perm_update(bs);
33
+ return ret;
34
+ }
35
+ bdrv_set_perm(bs, perm, shared_perm);
36
+
37
+ return 0;
38
+}
39
+
40
int bdrv_child_try_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared,
41
Error **errp)
42
{
43
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_child(BdrvChild *child, BlockDriverState *new_bs)
44
}
45
46
if (old_bs) {
47
- /* Update permissions for old node. This is guaranteed to succeed
48
- * because we're just taking a parent away, so we're loosening
49
- * restrictions. */
50
bool tighten_restrictions;
51
- int ret;
52
53
- bdrv_get_cumulative_perm(old_bs, &perm, &shared_perm);
54
- ret = bdrv_check_perm(old_bs, NULL, perm, shared_perm, NULL,
55
- &tighten_restrictions, NULL);
56
+ /*
57
+ * Update permissions for old node. We're just taking a parent away, so
58
+ * we're loosening restrictions. Errors of permission update are not
59
+ * fatal in this case, ignore them.
60
+ */
61
+ bdrv_refresh_perms(old_bs, &tighten_restrictions, NULL);
62
assert(tighten_restrictions == false);
63
- if (ret < 0) {
64
- /* We only tried to loosen restrictions, so errors are not fatal */
65
- bdrv_abort_perm_update(old_bs);
66
- } else {
67
- bdrv_set_perm(old_bs, perm, shared_perm);
68
- }
69
70
/* When the parent requiring a non-default AioContext is removed, the
71
* node moves back to the main AioContext */
72
@@ -XXX,XX +XXX,XX @@ void bdrv_init_with_whitelist(void)
73
int coroutine_fn bdrv_co_invalidate_cache(BlockDriverState *bs, Error **errp)
74
{
75
BdrvChild *child, *parent;
76
- uint64_t perm, shared_perm;
77
Error *local_err = NULL;
78
int ret;
79
BdrvDirtyBitmap *bm;
80
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_invalidate_cache(BlockDriverState *bs, Error **errp)
81
*/
82
if (bs->open_flags & BDRV_O_INACTIVE) {
83
bs->open_flags &= ~BDRV_O_INACTIVE;
84
- bdrv_get_cumulative_perm(bs, &perm, &shared_perm);
85
- ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL, NULL, errp);
86
+ ret = bdrv_refresh_perms(bs, NULL, errp);
87
if (ret < 0) {
88
- bdrv_abort_perm_update(bs);
89
bs->open_flags |= BDRV_O_INACTIVE;
90
return ret;
91
}
92
- bdrv_set_perm(bs, perm, shared_perm);
93
94
if (bs->drv->bdrv_co_invalidate_cache) {
95
bs->drv->bdrv_co_invalidate_cache(bs, &local_err);
96
@@ -XXX,XX +XXX,XX @@ static int bdrv_inactivate_recurse(BlockDriverState *bs)
97
{
98
BdrvChild *child, *parent;
99
bool tighten_restrictions;
100
- uint64_t perm, shared_perm;
101
int ret;
102
103
if (!bs->drv) {
104
@@ -XXX,XX +XXX,XX @@ static int bdrv_inactivate_recurse(BlockDriverState *bs)
105
106
bs->open_flags |= BDRV_O_INACTIVE;
107
108
- /* Update permissions, they may differ for inactive nodes */
109
- bdrv_get_cumulative_perm(bs, &perm, &shared_perm);
110
- ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL,
111
- &tighten_restrictions, NULL);
112
+ /*
113
+ * Update permissions, they may differ for inactive nodes.
114
+ * We only tried to loosen restrictions, so errors are not fatal, ignore
115
+ * them.
116
+ */
117
+ bdrv_refresh_perms(bs, &tighten_restrictions, NULL);
118
assert(tighten_restrictions == false);
119
- if (ret < 0) {
120
- /* We only tried to loosen restrictions, so errors are not fatal */
121
- bdrv_abort_perm_update(bs);
122
- } else {
123
- bdrv_set_perm(bs, perm, shared_perm);
124
- }
125
-
126
127
/* Recursively inactivate children */
128
QLIST_FOREACH(child, &bs->children, next) {
129
--
130
2.29.2
131
132
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Klaus Birkelund Jensen <klaus@birkelund.eu>
2
2
3
An NVMe drive cannot be shrunk.
3
The device mistakenly reports that the Weighted Round Robin with Urgent
4
Priority Class arbitration mechanism is supported.
4
5
5
Since commit c80d8b06cfa we can use the @exact parameter (set
6
It is not.
6
to false) to return success if the block device is larger than
7
the requested offset (even if it cannot be shrunk).
8
7
9
Use this parameter to implement the NVMe truncate() coroutine,
8
Signed-off-by: Klaus Birkelund Jensen <klaus.jensen@cnexlabs.com>
10
similarly to how it is done for the iscsi and file-posix drivers
9
Message-id: 20190606092530.14206-1-klaus@birkelund.eu
11
(see commit 82325ae5f2f "Evaluate @exact in protocol drivers").
10
Acked-by: Maxim Levitsky <mlevitsk@redhat.com>
12
13
Reported-by: Xueqiang Wei <xuwei@redhat.com>
14
Suggested-by: Max Reitz <mreitz@redhat.com>
15
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
16
Message-Id: <20201210125202.858656-1-philmd@redhat.com>
17
Signed-off-by: Max Reitz <mreitz@redhat.com>
11
Signed-off-by: Max Reitz <mreitz@redhat.com>
18
---
12
---
19
block/nvme.c | 24 ++++++++++++++++++++++++
13
hw/block/nvme.c | 1 -
20
1 file changed, 24 insertions(+)
14
1 file changed, 1 deletion(-)
21
15
22
diff --git a/block/nvme.c b/block/nvme.c
16
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
23
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
24
--- a/block/nvme.c
18
--- a/hw/block/nvme.c
25
+++ b/block/nvme.c
19
+++ b/hw/block/nvme.c
26
@@ -XXX,XX +XXX,XX @@ out:
20
@@ -XXX,XX +XXX,XX @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
27
21
n->bar.cap = 0;
28
}
22
NVME_CAP_SET_MQES(n->bar.cap, 0x7ff);
29
23
NVME_CAP_SET_CQR(n->bar.cap, 1);
30
+static int coroutine_fn nvme_co_truncate(BlockDriverState *bs, int64_t offset,
24
- NVME_CAP_SET_AMS(n->bar.cap, 1);
31
+ bool exact, PreallocMode prealloc,
25
NVME_CAP_SET_TO(n->bar.cap, 0xf);
32
+ BdrvRequestFlags flags, Error **errp)
26
NVME_CAP_SET_CSS(n->bar.cap, 1);
33
+{
27
NVME_CAP_SET_MPSMAX(n->bar.cap, 4);
34
+ int64_t cur_length;
35
+
36
+ if (prealloc != PREALLOC_MODE_OFF) {
37
+ error_setg(errp, "Unsupported preallocation mode '%s'",
38
+ PreallocMode_str(prealloc));
39
+ return -ENOTSUP;
40
+ }
41
+
42
+ cur_length = nvme_getlength(bs);
43
+ if (offset != cur_length && exact) {
44
+ error_setg(errp, "Cannot resize NVMe devices");
45
+ return -ENOTSUP;
46
+ } else if (offset > cur_length) {
47
+ error_setg(errp, "Cannot grow NVMe devices");
48
+ return -EINVAL;
49
+ }
50
+
51
+ return 0;
52
+}
53
54
static int nvme_reopen_prepare(BDRVReopenState *reopen_state,
55
BlockReopenQueue *queue, Error **errp)
56
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_nvme = {
57
.bdrv_close = nvme_close,
58
.bdrv_getlength = nvme_getlength,
59
.bdrv_probe_blocksizes = nvme_probe_blocksizes,
60
+ .bdrv_co_truncate = nvme_co_truncate,
61
62
.bdrv_co_preadv = nvme_co_preadv,
63
.bdrv_co_pwritev = nvme_co_pwritev,
64
--
28
--
65
2.29.2
29
2.21.0
66
30
67
31
diff view generated by jsdifflib
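To restate the truncate logic of the patch above outside of C: the device cannot actually be resized, so the request only succeeds when it is compatible with the current device size. A minimal Python paraphrase (not QEMU API, preallocation handling omitted):

    def fake_truncate(cur_length: int, offset: int, exact: bool) -> str:
        # Mirrors the decision made by nvme_co_truncate() in the patch above.
        if exact and offset != cur_length:
            return "error: Cannot resize NVMe devices"
        if offset > cur_length:
            return "error: Cannot grow NVMe devices"
        return "ok"   # offset <= device size and the caller accepts that

    assert fake_truncate(100, 100, exact=True) == "ok"
    assert fake_truncate(100, 50, exact=False) == "ok"
    assert fake_truncate(100, 50, exact=True).startswith("error")
    assert fake_truncate(100, 200, exact=False).startswith("error")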
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
We should never set permissions other than cumulative permissions of
3
We forgot to enable it for transaction .prepare, while it is already
4
parents. During bdrv_reopen_multiple() we _check_ for synthetic
4
enabled in do_drive_backup since commit a2d665c1bc362
5
permissions but when we do _set_ the graph is already updated.
5
"blockdev: loosen restrictions on drive-backup source node"
6
Add an assertion to bdrv_reopen_multiple(), other cases are more
7
obvious.
8
6
9
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
10
Message-Id: <20201106124241.16950-6-vsementsov@virtuozzo.com>
8
Message-id: 20190618140804.59214-1-vsementsov@virtuozzo.com
11
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Reviewed-by: John Snow <jsnow@redhat.com>
12
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
Signed-off-by: Max Reitz <mreitz@redhat.com>
13
---
11
---
14
block.c | 29 +++++++++++++++--------------
12
blockdev.c | 2 +-
15
1 file changed, 15 insertions(+), 14 deletions(-)
13
1 file changed, 1 insertion(+), 1 deletion(-)
16
14
17
diff --git a/block.c b/block.c
15
diff --git a/blockdev.c b/blockdev.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/block.c
17
--- a/blockdev.c
20
+++ b/block.c
18
+++ b/blockdev.c
21
@@ -XXX,XX +XXX,XX @@ static void bdrv_abort_perm_update(BlockDriverState *bs)
19
@@ -XXX,XX +XXX,XX @@ static void drive_backup_prepare(BlkActionState *common, Error **errp)
22
}
20
assert(common->action->type == TRANSACTION_ACTION_KIND_DRIVE_BACKUP);
23
}
21
backup = common->action->u.drive_backup.data;
24
22
25
-static void bdrv_set_perm(BlockDriverState *bs, uint64_t cumulative_perms,
23
- bs = qmp_get_root_bs(backup->device, errp);
26
- uint64_t cumulative_shared_perms)
24
+ bs = bdrv_lookup_bs(backup->device, backup->device, errp);
27
+static void bdrv_set_perm(BlockDriverState *bs)
25
if (!bs) {
28
{
29
+ uint64_t cumulative_perms, cumulative_shared_perms;
30
BlockDriver *drv = bs->drv;
31
BdrvChild *c;
32
33
@@ -XXX,XX +XXX,XX @@ static void bdrv_set_perm(BlockDriverState *bs, uint64_t cumulative_perms,
34
return;
26
return;
35
}
27
}
36
37
+ bdrv_get_cumulative_perm(bs, &cumulative_perms, &cumulative_shared_perms);
38
+
39
/* Update this node */
40
if (drv->bdrv_set_perm) {
41
drv->bdrv_set_perm(bs, cumulative_perms, cumulative_shared_perms);
42
@@ -XXX,XX +XXX,XX @@ static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
43
44
static void bdrv_child_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared)
45
{
46
- uint64_t cumulative_perms, cumulative_shared_perms;
47
-
48
c->has_backup_perm = false;
49
50
c->perm = perm;
51
c->shared_perm = shared;
52
53
- bdrv_get_cumulative_perm(c->bs, &cumulative_perms,
54
- &cumulative_shared_perms);
55
- bdrv_set_perm(c->bs, cumulative_perms, cumulative_shared_perms);
56
+ bdrv_set_perm(c->bs);
57
}
58
59
static void bdrv_child_abort_perm_update(BdrvChild *c)
60
@@ -XXX,XX +XXX,XX @@ static int bdrv_refresh_perms(BlockDriverState *bs, bool *tighten_restrictions,
61
bdrv_abort_perm_update(bs);
62
return ret;
63
}
64
- bdrv_set_perm(bs, perm, shared_perm);
65
+ bdrv_set_perm(bs);
66
67
return 0;
68
}
69
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_child_noperm(BdrvChild *child,
70
static void bdrv_replace_child(BdrvChild *child, BlockDriverState *new_bs)
71
{
72
BlockDriverState *old_bs = child->bs;
73
- uint64_t perm, shared_perm;
74
75
/* Asserts that child->frozen == false */
76
bdrv_replace_child_noperm(child, new_bs);
77
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_child(BdrvChild *child, BlockDriverState *new_bs)
78
* restrictions.
79
*/
80
if (new_bs) {
81
- bdrv_get_cumulative_perm(new_bs, &perm, &shared_perm);
82
- bdrv_set_perm(new_bs, perm, shared_perm);
83
+ bdrv_set_perm(new_bs);
84
}
85
86
if (old_bs) {
87
@@ -XXX,XX +XXX,XX @@ cleanup_perm:
88
}
89
90
if (ret == 0) {
91
- bdrv_set_perm(state->bs, state->perm, state->shared_perm);
92
+ uint64_t perm, shared;
93
+
94
+ bdrv_get_cumulative_perm(state->bs, &perm, &shared);
95
+ assert(perm == state->perm);
96
+ assert(shared == state->shared_perm);
97
+
98
+ bdrv_set_perm(state->bs);
99
} else {
100
bdrv_abort_perm_update(state->bs);
101
if (state->replace_backing_bs && state->new_backing_bs) {
102
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_node_common(BlockDriverState *from,
103
bdrv_unref(from);
104
}
105
106
- bdrv_get_cumulative_perm(to, &perm, &shared);
107
- bdrv_set_perm(to, perm, shared);
108
+ bdrv_set_perm(to);
109
110
out:
111
g_slist_free(list);
112
--
28
--
113
2.29.2
29
2.21.0
114
30
115
31
diff view generated by jsdifflib
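Since the blockdev change above is about what may be passed as the drive-backup source inside a transaction, here is a hedged sketch of such a QMP command using a non-root (format-level) node name as the device. The node and file names are made up, and the argument set is only what a typical drive-backup action would be expected to need.

    import json

    # Hypothetical QMP 'transaction' payload; with the fix above the
    # 'device' field may name a non-root node instead of a root BlockBackend.
    transaction_cmd = {
        "execute": "transaction",
        "arguments": {
            "actions": [
                {
                    "type": "drive-backup",
                    "data": {
                        "device": "fmt-node",            # non-root node name
                        "target": "/tmp/backup.qcow2",   # placeholder path
                        "sync": "full",
                        "format": "qcow2",
                    },
                },
            ],
        },
    }
    print(json.dumps(transaction_cmd, indent=2))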
1
From: Alberto Garcia <berto@igalia.com>
1
From: Anton Nefedov <anton.nefedov@virtuozzo.com>
2
2
3
This simply calls bdrv_co_pwrite_zeroes() in all children.
3
COW (even empty/zero) areas require encryption too
4
4
5
bs->supported_zero_flags is also set to the flags that are supported
5
Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
6
by all children.
6
Reviewed-by: Eric Blake <eblake@redhat.com>
7
8
Signed-off-by: Alberto Garcia <berto@igalia.com>
9
Message-Id: <2f09c842781fe336b4c2e40036bba577b7430190.1605286097.git.berto@igalia.com>
10
Reviewed-by: Max Reitz <mreitz@redhat.com>
7
Reviewed-by: Max Reitz <mreitz@redhat.com>
8
Reviewed-by: Alberto Garcia <berto@igalia.com>
9
Message-id: 20190516143028.81155-1-anton.nefedov@virtuozzo.com
11
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
Signed-off-by: Max Reitz <mreitz@redhat.com>
12
---
11
---
13
block/quorum.c | 36 ++++++++++++++++++++++++++++++++++--
12
tests/qemu-iotests/134 | 9 +++++++++
14
tests/qemu-iotests/312 | 11 +++++++++++
13
tests/qemu-iotests/134.out | 10 ++++++++++
15
tests/qemu-iotests/312.out | 8 ++++++++
14
2 files changed, 19 insertions(+)
16
3 files changed, 53 insertions(+), 2 deletions(-)
17
15
18
diff --git a/block/quorum.c b/block/quorum.c
16
diff --git a/tests/qemu-iotests/134 b/tests/qemu-iotests/134
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100755
20
--- a/block/quorum.c
18
--- a/tests/qemu-iotests/134
21
+++ b/block/quorum.c
19
+++ b/tests/qemu-iotests/134
22
@@ -XXX,XX +XXX,XX @@ static void write_quorum_entry(void *opaque)
20
@@ -XXX,XX +XXX,XX @@ echo
23
QuorumChildRequest *sacb = &acb->qcrs[i];
21
echo "== reading whole image =="
24
22
$QEMU_IO --object $SECRET -c "read 0 $size" --image-opts $IMGSPEC | _filter_qemu_io | _filter_testdir
25
sacb->bs = s->children[i]->bs;
23
26
- sacb->ret = bdrv_co_pwritev(s->children[i], acb->offset, acb->bytes,
24
+echo
27
- acb->qiov, acb->flags);
25
+echo "== rewriting cluster part =="
28
+ if (acb->flags & BDRV_REQ_ZERO_WRITE) {
26
+$QEMU_IO --object $SECRET -c "write -P 0xb 512 512" --image-opts $IMGSPEC | _filter_qemu_io | _filter_testdir
29
+ sacb->ret = bdrv_co_pwrite_zeroes(s->children[i], acb->offset,
30
+ acb->bytes, acb->flags);
31
+ } else {
32
+ sacb->ret = bdrv_co_pwritev(s->children[i], acb->offset, acb->bytes,
33
+ acb->qiov, acb->flags);
34
+ }
35
if (sacb->ret == 0) {
36
acb->success_count++;
37
} else {
38
@@ -XXX,XX +XXX,XX @@ static int quorum_co_pwritev(BlockDriverState *bs, uint64_t offset,
39
return ret;
40
}
41
42
+static int quorum_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
43
+ int bytes, BdrvRequestFlags flags)
44
+
27
+
45
+{
28
+echo
46
+ return quorum_co_pwritev(bs, offset, bytes, NULL,
29
+echo "== verify pattern =="
47
+ flags | BDRV_REQ_ZERO_WRITE);
30
+$QEMU_IO --object $SECRET -c "read -P 0 0 512" --image-opts $IMGSPEC | _filter_qemu_io | _filter_testdir
48
+}
31
+$QEMU_IO --object $SECRET -c "read -P 0xb 512 512" --image-opts $IMGSPEC | _filter_qemu_io | _filter_testdir
49
+
50
static int64_t quorum_getlength(BlockDriverState *bs)
51
{
52
BDRVQuorumState *s = bs->opaque;
53
@@ -XXX,XX +XXX,XX @@ static QemuOptsList quorum_runtime_opts = {
54
},
55
};
56
57
+static void quorum_refresh_flags(BlockDriverState *bs)
58
+{
59
+ BDRVQuorumState *s = bs->opaque;
60
+ int i;
61
+
62
+ bs->supported_zero_flags =
63
+ BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK;
64
+
65
+ for (i = 0; i < s->num_children; i++) {
66
+ bs->supported_zero_flags &= s->children[i]->bs->supported_zero_flags;
67
+ }
68
+
69
+ bs->supported_zero_flags |= BDRV_REQ_WRITE_UNCHANGED;
70
+}
71
+
72
static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
73
Error **errp)
74
{
75
@@ -XXX,XX +XXX,XX @@ static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
76
s->next_child_index = s->num_children;
77
78
bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED;
79
+ quorum_refresh_flags(bs);
80
81
g_free(opened);
82
goto exit;
83
@@ -XXX,XX +XXX,XX @@ static void quorum_add_child(BlockDriverState *bs, BlockDriverState *child_bs,
84
}
85
s->children = g_renew(BdrvChild *, s->children, s->num_children + 1);
86
s->children[s->num_children++] = child;
87
+ quorum_refresh_flags(bs);
88
89
out:
90
bdrv_drained_end(bs);
91
@@ -XXX,XX +XXX,XX @@ static void quorum_del_child(BlockDriverState *bs, BdrvChild *child,
92
s->children = g_renew(BdrvChild *, s->children, --s->num_children);
93
bdrv_unref_child(bs, child);
94
95
+ quorum_refresh_flags(bs);
96
bdrv_drained_end(bs);
97
}
98
99
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_quorum = {
100
101
.bdrv_co_preadv = quorum_co_preadv,
102
.bdrv_co_pwritev = quorum_co_pwritev,
103
+ .bdrv_co_pwrite_zeroes = quorum_co_pwrite_zeroes,
104
105
.bdrv_add_child = quorum_add_child,
106
.bdrv_del_child = quorum_del_child,
107
diff --git a/tests/qemu-iotests/312 b/tests/qemu-iotests/312
108
index XXXXXXX..XXXXXXX 100755
109
--- a/tests/qemu-iotests/312
110
+++ b/tests/qemu-iotests/312
111
@@ -XXX,XX +XXX,XX @@ $QEMU_IO -c "write -P 0 $((0x200000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu
112
$QEMU_IO -c "write -z $((0x200000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
113
$QEMU_IO -c "write -P 0 $((0x200000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
114
115
+# Test 5: write data to a region and then zeroize it, doing it
116
+# directly on the quorum device instead of the individual images.
117
+# This has no effect on the end result but proves that the quorum driver
118
+# supports 'write -z'.
119
+$QEMU_IO -c "open -o $quorum" -c "write -P 1 $((0x250000)) $((0x10000))" | _filter_qemu_io
120
+# Verify the data that we just wrote
121
+$QEMU_IO -c "open -o $quorum" -c "read -P 1 $((0x250000)) $((0x10000))" | _filter_qemu_io
122
+$QEMU_IO -c "open -o $quorum" -c "write -z $((0x250000)) $((0x10000))" | _filter_qemu_io
123
+# Now it should read back as zeroes
124
+$QEMU_IO -c "open -o $quorum" -c "read -P 0 $((0x250000)) $((0x10000))" | _filter_qemu_io
125
+
32
+
126
echo
33
echo
127
echo '### Launch the drive-mirror job'
34
echo "== rewriting whole image =="
128
echo
35
$QEMU_IO --object $SECRET -c "write -P 0xa 0 $size" --image-opts $IMGSPEC | _filter_qemu_io | _filter_testdir
129
diff --git a/tests/qemu-iotests/312.out b/tests/qemu-iotests/312.out
36
diff --git a/tests/qemu-iotests/134.out b/tests/qemu-iotests/134.out
130
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
131
--- a/tests/qemu-iotests/312.out
38
--- a/tests/qemu-iotests/134.out
132
+++ b/tests/qemu-iotests/312.out
39
+++ b/tests/qemu-iotests/134.out
133
@@ -XXX,XX +XXX,XX @@ wrote 196608/196608 bytes at offset 2097152
40
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728 encryption=on encrypt.
134
192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
41
read 134217728/134217728 bytes at offset 0
135
wrote 131072/131072 bytes at offset 2097152
42
128 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
136
128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
43
137
+wrote 65536/65536 bytes at offset 2424832
44
+== rewriting cluster part ==
138
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
45
+wrote 512/512 bytes at offset 512
139
+read 65536/65536 bytes at offset 2424832
46
+512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
140
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
47
+
141
+wrote 65536/65536 bytes at offset 2424832
48
+== verify pattern ==
142
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
49
+read 512/512 bytes at offset 0
143
+read 65536/65536 bytes at offset 2424832
50
+512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
144
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
51
+read 512/512 bytes at offset 512
145
52
+512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
146
### Launch the drive-mirror job
53
+
147
54
== rewriting whole image ==
55
wrote 134217728/134217728 bytes at offset 0
56
128 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
148
--
57
--
149
2.29.2
58
2.21.0
150
59
151
60
diff view generated by jsdifflib
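In the spirit of the iotest hunks above, here is a hedged sketch of exercising the new quorum write-zeroes path from Python via qemu-io. The file names, the vote-threshold value, and the exact --image-opts spelling of the quorum children are assumptions of mine, not taken verbatim from the patch.

    import subprocess

    # Assumed option spelling for a three-child quorum over raw files.
    quorum_opts = (
        "driver=quorum,vote-threshold=2,"
        "children.0.driver=file,children.0.filename=/tmp/quorum0.raw,"
        "children.1.driver=file,children.1.filename=/tmp/quorum1.raw,"
        "children.2.driver=file,children.2.filename=/tmp/quorum2.raw"
    )

    def qemu_io(*cmds):
        """Run qemu-io commands against the quorum node (binary in $PATH assumed)."""
        args = ["qemu-io", "--image-opts", quorum_opts]
        for c in cmds:
            args += ["-c", c]
        return subprocess.run(args, capture_output=True, text=True).stdout

    print(qemu_io("write -P 1 0 64k"))  # write a pattern through the quorum
    print(qemu_io("write -z 0 64k"))    # zeroize; now handled by the driver itself
    print(qemu_io("read -P 0 0 64k"))   # must read back as zeroes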
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Sam Eiderman <shmuel.eiderman@oracle.com>
2
2
3
Benchmark for the new preallocate filter.
3
Commit b0651b8c246d ("vmdk: Move l1_size check into vmdk_add_extent")
4
extended the l1_size check from VMDK4 to VMDK3 but did not update the
5
default coverage in the moved comment.
4
6
5
Example usage:
7
The previous vmdk4 calculation:
6
./bench_prealloc.py ../../build/qemu-img \
7
ssd-ext4:/path/to/mount/point \
8
ssd-xfs:/path2 hdd-ext4:/path3 hdd-xfs:/path4
9
8
10
The benchmark shows the performance improvement (or degradation) when using the
9
(512 * 1024 * 1024) * 512(l2 entries) * 65536(grain) = 16PB
11
new preallocate filter with a qcow2 image.
12
10
13
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
11
The added vmdk3 calculation:
14
Message-Id: <20201021145859.11201-22-vsementsov@virtuozzo.com>
12
13
(512 * 1024 * 1024) * 4096(l2 entries) * 512(grain) = 1PB
14
15
Adding the calculation of vmdk3 to the comment.
16
17
In any case, VMware does not offer virtual disks more than 2TB for
18
vmdk4/vmdk3 or 64TB for the new undocumented seSparse format which is
19
not implemented yet in qemu.
20
21
Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
22
Reviewed-by: Eyal Moscovici <eyal.moscovici@oracle.com>
23
Reviewed-by: Liran Alon <liran.alon@oracle.com>
24
Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com>
25
Signed-off-by: Sam Eiderman <shmuel.eiderman@oracle.com>
26
Message-id: 20190620091057.47441-2-shmuel.eiderman@oracle.com
27
Reviewed-by: yuchenlin <yuchenlin@synology.com>
15
Reviewed-by: Max Reitz <mreitz@redhat.com>
28
Reviewed-by: Max Reitz <mreitz@redhat.com>
16
Signed-off-by: Max Reitz <mreitz@redhat.com>
29
Signed-off-by: Max Reitz <mreitz@redhat.com>
17
---
30
---
18
scripts/simplebench/bench_prealloc.py | 132 ++++++++++++++++++++++++++
31
block/vmdk.c | 11 ++++++++---
19
1 file changed, 132 insertions(+)
32
1 file changed, 8 insertions(+), 3 deletions(-)
20
create mode 100755 scripts/simplebench/bench_prealloc.py
21
33
22
diff --git a/scripts/simplebench/bench_prealloc.py b/scripts/simplebench/bench_prealloc.py
34
diff --git a/block/vmdk.c b/block/vmdk.c
23
new file mode 100755
35
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX
36
--- a/block/vmdk.c
25
--- /dev/null
37
+++ b/block/vmdk.c
26
+++ b/scripts/simplebench/bench_prealloc.py
38
@@ -XXX,XX +XXX,XX @@ static int vmdk_add_extent(BlockDriverState *bs,
27
@@ -XXX,XX +XXX,XX @@
39
return -EFBIG;
28
+#!/usr/bin/env python3
40
}
29
+#
41
if (l1_size > 512 * 1024 * 1024) {
30
+# Benchmark preallocate filter
42
- /* Although with big capacity and small l1_entry_sectors, we can get a
31
+#
43
+ /*
32
+# Copyright (c) 2020 Virtuozzo International GmbH.
44
+ * Although with big capacity and small l1_entry_sectors, we can get a
33
+#
45
* big l1_size, we don't want unbounded value to allocate the table.
34
+# This program is free software; you can redistribute it and/or modify
46
- * Limit it to 512M, which is 16PB for default cluster and L2 table
35
+# it under the terms of the GNU General Public License as published by
47
- * size */
36
+# the Free Software Foundation; either version 2 of the License, or
48
+ * Limit it to 512M, which is:
37
+# (at your option) any later version.
49
+ * 16PB - for default "Hosted Sparse Extent" (VMDK4)
38
+#
50
+ * cluster size: 64KB, L2 table size: 512 entries
39
+# This program is distributed in the hope that it will be useful,
51
+ * 1PB - for default "ESXi Host Sparse Extent" (VMDK3/vmfsSparse)
40
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
52
+ * cluster size: 512B, L2 table size: 4096 entries
41
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
53
+ */
42
+# GNU General Public License for more details.
54
error_setg(errp, "L1 size too big");
43
+#
55
return -EFBIG;
44
+# You should have received a copy of the GNU General Public License
56
}
45
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
46
+#
47
+
48
+
49
+import sys
50
+import os
51
+import subprocess
52
+import re
53
+import json
54
+
55
+import simplebench
56
+from results_to_text import results_to_text
57
+
58
+
59
+def qemu_img_bench(args):
60
+ p = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
61
+ universal_newlines=True)
62
+
63
+ if p.returncode == 0:
64
+ try:
65
+ m = re.search(r'Run completed in (\d+.\d+) seconds.', p.stdout)
66
+ return {'seconds': float(m.group(1))}
67
+ except Exception:
68
+ return {'error': f'failed to parse qemu-img output: {p.stdout}'}
69
+ else:
70
+ return {'error': f'qemu-img failed: {p.returncode}: {p.stdout}'}
71
+
72
+
73
+def bench_func(env, case):
74
+ fname = f"{case['dir']}/prealloc-test.qcow2"
75
+ try:
76
+ os.remove(fname)
77
+ except OSError:
78
+ pass
79
+
80
+ subprocess.run([env['qemu-img-binary'], 'create', '-f', 'qcow2', fname,
81
+ '16G'], stdout=subprocess.DEVNULL,
82
+ stderr=subprocess.DEVNULL, check=True)
83
+
84
+ args = [env['qemu-img-binary'], 'bench', '-c', str(case['count']),
85
+ '-d', '64', '-s', case['block-size'], '-t', 'none', '-n', '-w']
86
+ if env['prealloc']:
87
+ args += ['--image-opts',
88
+ 'driver=qcow2,file.driver=preallocate,file.file.driver=file,'
89
+ f'file.file.filename={fname}']
90
+ else:
91
+ args += ['-f', 'qcow2', fname]
92
+
93
+ return qemu_img_bench(args)
94
+
95
+
96
+def auto_count_bench_func(env, case):
97
+ case['count'] = 100
98
+ while True:
99
+ res = bench_func(env, case)
100
+ if 'error' in res:
101
+ return res
102
+
103
+ if res['seconds'] >= 1:
104
+ break
105
+
106
+ case['count'] *= 10
107
+
108
+ if res['seconds'] < 5:
109
+ case['count'] = round(case['count'] * 5 / res['seconds'])
110
+ res = bench_func(env, case)
111
+ if 'error' in res:
112
+ return res
113
+
114
+ res['iops'] = case['count'] / res['seconds']
115
+ return res
116
+
117
+
118
+if __name__ == '__main__':
119
+ if len(sys.argv) < 2:
120
+ print(f'USAGE: {sys.argv[0]} <qemu-img binary> '
121
+ 'DISK_NAME:DIR_PATH ...')
122
+ exit(1)
123
+
124
+ qemu_img = sys.argv[1]
125
+
126
+ envs = [
127
+ {
128
+ 'id': 'no-prealloc',
129
+ 'qemu-img-binary': qemu_img,
130
+ 'prealloc': False
131
+ },
132
+ {
133
+ 'id': 'prealloc',
134
+ 'qemu-img-binary': qemu_img,
135
+ 'prealloc': True
136
+ }
137
+ ]
138
+
139
+ aligned_cases = []
140
+ unaligned_cases = []
141
+
142
+ for disk in sys.argv[2:]:
143
+ name, path = disk.split(':')
144
+ aligned_cases.append({
145
+ 'id': f'{name}, aligned sequential 16k',
146
+ 'block-size': '16k',
147
+ 'dir': path
148
+ })
149
+ unaligned_cases.append({
150
+ 'id': f'{name}, unaligned sequential 64k',
151
+ 'block-size': '16k',
152
+ 'dir': path
153
+ })
154
+
155
+ result = simplebench.bench(auto_count_bench_func, envs,
156
+ aligned_cases + unaligned_cases, count=5)
157
+ print(results_to_text(result))
158
+ with open('results.json', 'w') as f:
159
+ json.dump(result, f, indent=4)
160
--
57
--
161
2.29.2
58
2.21.0
162
59
163
60
diff view generated by jsdifflib
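The coverage figures quoted in the vmdk comment fix above are easy to re-derive; a quick Python check of the commit message's arithmetic:

    l1_entries = 512 * 1024 * 1024              # the (old) 512M L1 entry bound

    vmdk4 = l1_entries * 512 * 65536            # 512 L2 entries, 64 KiB grain
    vmdk3 = l1_entries * 4096 * 512             # 4096 L2 entries, 512 B grain

    PB = 1024 ** 5
    assert vmdk4 == 16 * PB                     # the "16PB" figure above
    assert vmdk3 == 1 * PB                      # the "1PB" figure above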
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Sam Eiderman <shmuel.eiderman@oracle.com>
2
2
3
Make results_to_text a tool to dump results saved in a JSON file.
3
512M of L1 entries is a very loose bound, only 32M are required to store
4
the maximal supported VMDK file size of 2TB.
4
5
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Fixed qemu-iotest 59 - now the failure occurs earlier, on an impossible L1
6
Message-Id: <20201021145859.11201-21-vsementsov@virtuozzo.com>
7
table size.
8
9
Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
10
Reviewed-by: Eyal Moscovici <eyal.moscovici@oracle.com>
11
Reviewed-by: Liran Alon <liran.alon@oracle.com>
12
Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com>
13
Signed-off-by: Sam Eiderman <shmuel.eiderman@oracle.com>
14
Message-id: 20190620091057.47441-3-shmuel.eiderman@oracle.com
7
Reviewed-by: Max Reitz <mreitz@redhat.com>
15
Reviewed-by: Max Reitz <mreitz@redhat.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
16
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
---
17
---
10
scripts/simplebench/results_to_text.py | 14 ++++++++++++++
18
block/vmdk.c | 13 +++++++------
11
1 file changed, 14 insertions(+)
19
tests/qemu-iotests/059.out | 2 +-
12
mode change 100644 => 100755 scripts/simplebench/results_to_text.py
20
2 files changed, 8 insertions(+), 7 deletions(-)
13
21
14
diff --git a/scripts/simplebench/results_to_text.py b/scripts/simplebench/results_to_text.py
22
diff --git a/block/vmdk.c b/block/vmdk.c
15
old mode 100644
23
index XXXXXXX..XXXXXXX 100644
16
new mode 100755
24
--- a/block/vmdk.c
17
index XXXXXXX..XXXXXXX
25
+++ b/block/vmdk.c
18
--- a/scripts/simplebench/results_to_text.py
26
@@ -XXX,XX +XXX,XX @@ static int vmdk_add_extent(BlockDriverState *bs,
19
+++ b/scripts/simplebench/results_to_text.py
27
error_setg(errp, "Invalid granularity, image may be corrupt");
20
@@ -XXX,XX +XXX,XX @@
28
return -EFBIG;
21
+#!/usr/bin/env python3
29
}
22
+#
30
- if (l1_size > 512 * 1024 * 1024) {
23
# Simple benchmarking framework
31
+ if (l1_size > 32 * 1024 * 1024) {
24
#
32
/*
25
# Copyright (c) 2019 Virtuozzo International GmbH.
33
* Although with big capacity and small l1_entry_sectors, we can get a
26
@@ -XXX,XX +XXX,XX @@ def results_to_text(results):
34
* big l1_size, we don't want unbounded value to allocate the table.
27
tab.append(row)
35
- * Limit it to 512M, which is:
28
36
- * 16PB - for default "Hosted Sparse Extent" (VMDK4)
29
return f'All results are in {dim}\n\n' + tabulate.tabulate(tab)
37
- * cluster size: 64KB, L2 table size: 512 entries
30
+
38
- * 1PB - for default "ESXi Host Sparse Extent" (VMDK3/vmfsSparse)
31
+
39
- * cluster size: 512B, L2 table size: 4096 entries
32
+if __name__ == '__main__':
40
+ * Limit it to 32M, which is enough to store:
33
+ import sys
41
+ * 8TB - for both VMDK3 & VMDK4 with
34
+ import json
42
+ * minimal cluster size: 512B
35
+
43
+ * minimal L2 table size: 512 entries
36
+ if len(sys.argv) < 2:
44
+ * 8 TB is still more than the maximal value supported for
37
+ print(f'USAGE: {sys.argv[0]} results.json')
45
+ * VMDK3 & VMDK4 which is 2TB.
38
+ exit(1)
46
*/
39
+
47
error_setg(errp, "L1 size too big");
40
+ with open(sys.argv[1]) as f:
48
return -EFBIG;
41
+ print(results_to_text(json.load(f)))
49
diff --git a/tests/qemu-iotests/059.out b/tests/qemu-iotests/059.out
50
index XXXXXXX..XXXXXXX 100644
51
--- a/tests/qemu-iotests/059.out
52
+++ b/tests/qemu-iotests/059.out
53
@@ -XXX,XX +XXX,XX @@ Offset Length Mapped to File
54
0x140000000 0x10000 0x50000 TEST_DIR/t-s003.vmdk
55
56
=== Testing afl image with a very large capacity ===
57
-qemu-img: Can't get image size 'TEST_DIR/afl9.IMGFMT': File too large
58
+qemu-img: Could not open 'TEST_DIR/afl9.IMGFMT': L1 size too big
59
*** done
42
--
60
--
43
2.29.2
61
2.21.0
44
62
45
63
diff view generated by jsdifflib
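Tying the two simplebench changes above together: bench_prealloc.py ends a run by dumping its results to results.json, and results_to_text can render that file later, either as './results_to_text.py results.json' or programmatically. A short sketch (the file name is simply the one the benchmark script uses):

    import json
    from results_to_text import results_to_text  # scripts/simplebench module

    with open('results.json') as f:               # written by bench_prealloc.py
        results = json.load(f)

    print(results_to_text(results))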
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Sam Eiderman <shmuel.eiderman@oracle.com>
2
2
3
It's intended to be inserted between format and protocol nodes to
3
Until ESXi 6.5 VMware used the vmfsSparse format for snapshots (VMDK3 in
4
preallocate additional space (expanding protocol file) on writes
4
QEMU).
5
crossing EOF. It improves performance for file-systems with slow
5
6
allocation.
6
This format had the following shortcomings:
7
7
8
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
* Grain directory (L1) and grain table (L2) entries were 32-bit,
9
Message-Id: <20201021145859.11201-9-vsementsov@virtuozzo.com>
9
allowing access to only 2TB (slightly less) of data.
10
Reviewed-by: Max Reitz <mreitz@redhat.com>
10
* The grain size (default) was 512 bytes - leading to data
11
[mreitz: Two comment fixes, and bumped the version from 5.2 to 6.0]
11
fragmentation and many grain tables.
12
* For space reclamation purposes, it was necessary to find all the
13
grains which are not pointed to by any grain table - so a reverse
14
mapping of "offset of grain in vmdk" to "grain table" must be
15
constructed - which takes large amounts of CPU/RAM.
16
17
The format specification can be found in VMware's documentation:
18
https://www.vmware.com/support/developer/vddk/vmdk_50_technote.pdf
19
20
In ESXi 6.5, to support snapshot files larger than 2TB, a new format was
21
introduced: SESparse (Space Efficient).
22
23
This format fixes the above issues:
24
25
* All entries are now 64-bit.
26
* The grain size (default) is 4KB.
27
* Grain directory and grain tables are now located at the beginning
28
of the file.
29
+ seSparse format reserves space for all grain tables.
30
+ Grain tables can be addressed using an index.
31
+ Grains are located in the end of the file and can also be
32
addressed with an index.
33
- seSparse vmdks of large disks (64TB) have huge preallocated
34
headers - mainly due to L2 tables, even for empty snapshots.
35
* The header contains a reverse mapping ("backmap") of "offset of
36
grain in vmdk" to "grain table" and a bitmap ("free bitmap") which
37
specifies for each grain - whether it is allocated or not.
38
Using these data structures we can implement space reclamation
39
efficiently.
40
* Due to the fact that the header now maintains two mappings:
41
* The regular one (grain directory & grain tables)
42
* A reverse one (backmap and free bitmap)
43
These data structures can lose consistency upon crash and result
44
in a corrupted VMDK.
45
Therefore, a journal is also added to the VMDK and is replayed
46
when VMware reopens the file after a crash.
47
48
Since ESXi 6.7 - SESparse is the only snapshot format available.
49
50
Unfortunately, VMware does not provide documentation regarding the new
51
seSparse format.
52
53
This commit is based on black-box research of the seSparse format.
54
Various in-guest block operations and their effect on the snapshot file
55
were tested.
56
57
The only VMware provided source of information (regarding the underlying
58
implementation) was a log file on the ESXi:
59
60
/var/log/hostd.log
61
62
Whenever an seSparse snapshot is created, the log is populated
63
with seSparse records.
64
65
Relevant log records are of the form:
66
67
[...] Const Header:
68
[...] constMagic = 0xcafebabe
69
[...] version = 2.1
70
[...] capacity = 204800
71
[...] grainSize = 8
72
[...] grainTableSize = 64
73
[...] flags = 0
74
[...] Extents:
75
[...] Header : <1 : 1>
76
[...] JournalHdr : <2 : 2>
77
[...] Journal : <2048 : 2048>
78
[...] GrainDirectory : <4096 : 2048>
79
[...] GrainTables : <6144 : 2048>
80
[...] FreeBitmap : <8192 : 2048>
81
[...] BackMap : <10240 : 2048>
82
[...] Grain : <12288 : 204800>
83
[...] Volatile Header:
84
[...] volatileMagic = 0xcafecafe
85
[...] FreeGTNumber = 0
86
[...] nextTxnSeqNumber = 0
87
[...] replayJournal = 0
88
89
The sizes that are seen in the log file are in sectors.
90
Extents are of the following format: <offset : size>
91
92
This commit is a strict implementation which enforces:
93
* magics
94
* version number 2.1
95
* grain size of 8 sectors (4KB)
96
* grain table size of 64 sectors
97
* zero flags
98
* extent locations
99
100
Additionally, this commit provides only a subset of the functionality
101
offered by seSparse's format:
102
* Read-only
103
* No journal replay
104
* No space reclamation
105
* No unmap support
106
107
Hence, journal header, journal, free bitmap and backmap extents are
108
unused, only the "classic" (L1 -> L2 -> data) grain access is
109
implemented.
110
111
However there are several differences in the grain access itself.
112
Grain directory (L1):
113
* Grain directory entries are indexes (not offsets) to grain
114
tables.
115
* Valid grain directory entries have their highest nibble set to
116
0x1.
117
* Since grain tables are always located in the beginning of the
118
file - the index can fit into 32 bits - so we can use its low
119
part if it's valid.
120
Grain table (L2):
121
* Grain table entries are indexes (not offsets) to grains.
122
* If the highest nibble of the entry is:
123
0x0:
124
The grain is not allocated.
125
The rest of the bytes are 0.
126
0x1:
127
The grain is unmapped - guest sees a zero grain.
128
The rest of the bits point to the previously mapped grain,
129
see 0x3 case.
130
0x2:
131
The grain is zero.
132
0x3:
133
The grain is allocated - to get the index calculate:
134
((entry & 0x0fff000000000000) >> 48) |
135
((entry & 0x0000ffffffffffff) << 12)
136
* The difference between 0x1 and 0x2 is that 0x1 is an unallocated
137
grain which results from the guest using sg_unmap to unmap the
138
grain - but the grain itself still exists in the grain extent - a
139
space reclamation procedure should delete it.
140
Unmapping a zero grain has no effect (0x2 will not change to 0x1)
141
but unmapping an unallocated grain will (0x0 to 0x1) - naturally.
142
143
In order to implement seSparse some fields had to be changed to support
144
both 32-bit and 64-bit entry sizes.
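Because the grain-table encoding above is easy to misread, here is a small Python sketch that decodes a 64-bit seSparse L2 entry exactly as described; the state names are mine, the masks and shifts are from the text above.

    def decode_sesparse_l2_entry(entry: int):
        nibble = entry >> 60                     # highest nibble selects the type
        if nibble == 0x0:
            return ("unallocated", None)
        if nibble == 0x1:
            return ("unmapped", None)            # reads as zero; old grain kept
        if nibble == 0x2:
            return ("zero", None)
        if nibble == 0x3:
            index = (((entry & 0x0fff000000000000) >> 48) |
                     ((entry & 0x0000ffffffffffff) << 12))
            return ("allocated", index)          # index of the grain, not an offset
        return ("invalid", None)

    # Example: an allocated grain whose index is 42 << 12 = 172032.
    print(decode_sesparse_l2_entry((0x3 << 60) | 42))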
145
146
Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
147
Reviewed-by: Eyal Moscovici <eyal.moscovici@oracle.com>
148
Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com>
149
Signed-off-by: Sam Eiderman <shmuel.eiderman@oracle.com>
150
Message-id: 20190620091057.47441-4-shmuel.eiderman@oracle.com
12
Signed-off-by: Max Reitz <mreitz@redhat.com>
151
Signed-off-by: Max Reitz <mreitz@redhat.com>
13
---
152
---
14
docs/system/qemu-block-drivers.rst.inc | 26 ++
153
block/vmdk.c | 358 ++++++++++++++++++++++++++++++++++++++++++++++++---
15
qapi/block-core.json | 20 +-
154
1 file changed, 342 insertions(+), 16 deletions(-)
16
block/preallocate.c | 559 +++++++++++++++++++++++++
155
17
block/meson.build | 1 +
156
diff --git a/block/vmdk.c b/block/vmdk.c
18
4 files changed, 605 insertions(+), 1 deletion(-)
19
create mode 100644 block/preallocate.c
20
21
diff --git a/docs/system/qemu-block-drivers.rst.inc b/docs/system/qemu-block-drivers.rst.inc
22
index XXXXXXX..XXXXXXX 100644
157
index XXXXXXX..XXXXXXX 100644
23
--- a/docs/system/qemu-block-drivers.rst.inc
158
--- a/block/vmdk.c
24
+++ b/docs/system/qemu-block-drivers.rst.inc
159
+++ b/block/vmdk.c
25
@@ -XXX,XX +XXX,XX @@ on host and see if there are locks held by the QEMU process on the image file.
160
@@ -XXX,XX +XXX,XX @@ typedef struct {
26
More than one byte could be locked by the QEMU instance, each byte of which
161
uint16_t compressAlgorithm;
27
reflects a particular permission that is acquired or protected by the running
162
} QEMU_PACKED VMDK4Header;
28
block driver.
163
29
+
164
+typedef struct VMDKSESparseConstHeader {
30
+Filter drivers
165
+ uint64_t magic;
31
+~~~~~~~~~~~~~~
166
+ uint64_t version;
32
+
167
+ uint64_t capacity;
33
+QEMU supports several filter drivers, which don't store any data, but perform
168
+ uint64_t grain_size;
34
+some additional tasks, hooking io requests.
169
+ uint64_t grain_table_size;
35
+
170
+ uint64_t flags;
36
+.. program:: filter-drivers
171
+ uint64_t reserved1;
37
+.. option:: preallocate
172
+ uint64_t reserved2;
38
+
173
+ uint64_t reserved3;
39
+ The preallocate filter driver is intended to be inserted between format
174
+ uint64_t reserved4;
40
+ and protocol nodes and preallocates some additional space
175
+ uint64_t volatile_header_offset;
41
+ (expanding the protocol file) when writing past the file’s end. This can be
176
+ uint64_t volatile_header_size;
42
+ useful for file-systems with slow allocation.
177
+ uint64_t journal_header_offset;
43
+
178
+ uint64_t journal_header_size;
44
+ Supported options:
179
+ uint64_t journal_offset;
45
+
180
+ uint64_t journal_size;
46
+ .. program:: preallocate
181
+ uint64_t grain_dir_offset;
47
+ .. option:: prealloc-align
182
+ uint64_t grain_dir_size;
48
+
183
+ uint64_t grain_tables_offset;
49
+ On preallocation, align the file length to this value (in bytes), default 1M.
184
+ uint64_t grain_tables_size;
50
+
185
+ uint64_t free_bitmap_offset;
51
+ .. program:: preallocate
186
+ uint64_t free_bitmap_size;
52
+ .. option:: prealloc-size
187
+ uint64_t backmap_offset;
53
+
188
+ uint64_t backmap_size;
54
+ How much to preallocate (in bytes), default 128M.
189
+ uint64_t grains_offset;
55
diff --git a/qapi/block-core.json b/qapi/block-core.json
190
+ uint64_t grains_size;
56
index XXXXXXX..XXXXXXX 100644
191
+ uint8_t pad[304];
57
--- a/qapi/block-core.json
192
+} QEMU_PACKED VMDKSESparseConstHeader;
58
+++ b/qapi/block-core.json
193
+
59
@@ -XXX,XX +XXX,XX @@
194
+typedef struct VMDKSESparseVolatileHeader {
60
'cloop', 'compress', 'copy-on-read', 'dmg', 'file', 'ftp', 'ftps',
195
+ uint64_t magic;
61
'gluster', 'host_cdrom', 'host_device', 'http', 'https', 'iscsi',
196
+ uint64_t free_gt_number;
62
'luks', 'nbd', 'nfs', 'null-aio', 'null-co', 'nvme', 'parallels',
197
+ uint64_t next_txn_seq_number;
63
- 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'rbd',
198
+ uint64_t replay_journal;
64
+ 'preallocate', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'rbd',
199
+ uint8_t pad[480];
65
{ 'name': 'replication', 'if': 'defined(CONFIG_REPLICATION)' },
200
+} QEMU_PACKED VMDKSESparseVolatileHeader;
66
'sheepdog',
201
+
67
'ssh', 'throttle', 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
202
#define L2_CACHE_SIZE 16
68
@@ -XXX,XX +XXX,XX @@
203
69
'data': { 'aes': 'QCryptoBlockOptionsQCow',
204
typedef struct VmdkExtent {
70
'luks': 'QCryptoBlockOptionsLUKS'} }
205
@@ -XXX,XX +XXX,XX @@ typedef struct VmdkExtent {
71
206
bool compressed;
72
+##
207
bool has_marker;
73
+# @BlockdevOptionsPreallocate:
208
bool has_zero_grain;
74
+#
209
+ bool sesparse;
75
+# Filter driver intended to be inserted between format and protocol node
210
+ uint64_t sesparse_l2_tables_offset;
76
+# and do preallocation in protocol node on write.
211
+ uint64_t sesparse_clusters_offset;
77
+#
212
+ int32_t entry_size;
78
+# @prealloc-align: on preallocation, align file length to this number,
213
int version;
79
+# default 1048576 (1M)
214
int64_t sectors;
80
+#
215
int64_t end_sector;
81
+# @prealloc-size: how much to preallocate, default 134217728 (128M)
216
int64_t flat_start_offset;
82
+#
217
int64_t l1_table_offset;
83
+# Since: 6.0
218
int64_t l1_backup_table_offset;
84
+##
219
- uint32_t *l1_table;
85
+{ 'struct': 'BlockdevOptionsPreallocate',
220
+ void *l1_table;
86
+ 'base': 'BlockdevOptionsGenericFormat',
221
uint32_t *l1_backup_table;
87
+ 'data': { '*prealloc-align': 'int', '*prealloc-size': 'int' } }
222
unsigned int l1_size;
88
+
223
uint32_t l1_entry_sectors;
89
##
224
90
# @BlockdevOptionsQcow2:
225
unsigned int l2_size;
91
#
226
- uint32_t *l2_cache;
92
@@ -XXX,XX +XXX,XX @@
227
+ void *l2_cache;
93
'null-co': 'BlockdevOptionsNull',
228
uint32_t l2_cache_offsets[L2_CACHE_SIZE];
94
'nvme': 'BlockdevOptionsNVMe',
229
uint32_t l2_cache_counts[L2_CACHE_SIZE];
95
'parallels': 'BlockdevOptionsGenericFormat',
230
96
+ 'preallocate':'BlockdevOptionsPreallocate',
231
@@ -XXX,XX +XXX,XX @@ static int vmdk_add_extent(BlockDriverState *bs,
97
'qcow2': 'BlockdevOptionsQcow2',
232
* minimal L2 table size: 512 entries
98
'qcow': 'BlockdevOptionsQcow',
233
* 8 TB is still more than the maximal value supported for
99
'qed': 'BlockdevOptionsGenericCOWFormat',
234
* VMDK3 & VMDK4 which is 2TB.
100
diff --git a/block/preallocate.c b/block/preallocate.c
235
+ * 64TB - for "ESXi seSparse Extent"
101
new file mode 100644
236
+ * minimal cluster size: 512B (default is 4KB)
102
index XXXXXXX..XXXXXXX
237
+ * L2 table size: 4096 entries (const).
103
--- /dev/null
238
+ * 64TB is more than the maximal value supported for
104
+++ b/block/preallocate.c
239
+ * seSparse VMDKs (which is slightly less than 64TB)
105
@@ -XXX,XX +XXX,XX @@
240
*/
106
+/*
241
error_setg(errp, "L1 size too big");
107
+ * preallocate filter driver
242
return -EFBIG;
108
+ *
243
@@ -XXX,XX +XXX,XX @@ static int vmdk_add_extent(BlockDriverState *bs,
109
+ * The driver performs preallocate operation: it is injected above
244
extent->l2_size = l2_size;
110
+ * some node, and before each write over EOF it does additional preallocating
245
extent->cluster_sectors = flat ? sectors : cluster_sectors;
111
+ * write-zeroes request.
246
extent->next_cluster_sector = ROUND_UP(nb_sectors, cluster_sectors);
112
+ *
247
+ extent->entry_size = sizeof(uint32_t);
113
+ * Copyright (c) 2020 Virtuozzo International GmbH.
248
114
+ *
249
if (s->num_extents > 1) {
115
+ * Author:
250
extent->end_sector = (*(extent - 1)).end_sector + extent->sectors;
116
+ * Sementsov-Ogievskiy Vladimir <vsementsov@virtuozzo.com>
251
@@ -XXX,XX +XXX,XX @@ static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent,
117
+ *
252
int i;
118
+ * This program is free software; you can redistribute it and/or modify
253
119
+ * it under the terms of the GNU General Public License as published by
254
/* read the L1 table */
120
+ * the Free Software Foundation; either version 2 of the License, or
255
- l1_size = extent->l1_size * sizeof(uint32_t);
121
+ * (at your option) any later version.
256
+ l1_size = extent->l1_size * extent->entry_size;
122
+ *
257
extent->l1_table = g_try_malloc(l1_size);
123
+ * This program is distributed in the hope that it will be useful,
258
if (l1_size && extent->l1_table == NULL) {
124
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
259
return -ENOMEM;
125
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
260
@@ -XXX,XX +XXX,XX @@ static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent,
126
+ * GNU General Public License for more details.
261
goto fail_l1;
127
+ *
262
}
128
+ * You should have received a copy of the GNU General Public License
263
for (i = 0; i < extent->l1_size; i++) {
129
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
264
- le32_to_cpus(&extent->l1_table[i]);
130
+ */
265
+ if (extent->entry_size == sizeof(uint64_t)) {
131
+
266
+ le64_to_cpus((uint64_t *)extent->l1_table + i);
132
+#include "qemu/osdep.h"
267
+ } else {
133
+
268
+ assert(extent->entry_size == sizeof(uint32_t));
134
+#include "qapi/error.h"
269
+ le32_to_cpus((uint32_t *)extent->l1_table + i);
135
+#include "qemu/module.h"
270
+ }
136
+#include "qemu/option.h"
271
}
137
+#include "qemu/units.h"
272
138
+#include "block/block_int.h"
273
if (extent->l1_backup_table_offset) {
139
+
274
+ assert(!extent->sesparse);
140
+
275
extent->l1_backup_table = g_try_malloc(l1_size);
141
+typedef struct PreallocateOpts {
276
if (l1_size && extent->l1_backup_table == NULL) {
142
+ int64_t prealloc_size;
277
ret = -ENOMEM;
143
+ int64_t prealloc_align;
278
@@ -XXX,XX +XXX,XX @@ static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent,
144
+} PreallocateOpts;
279
}
145
+
280
146
+typedef struct BDRVPreallocateState {
281
extent->l2_cache =
147
+ PreallocateOpts opts;
282
- g_new(uint32_t, extent->l2_size * L2_CACHE_SIZE);
148
+
283
+ g_malloc(extent->entry_size * extent->l2_size * L2_CACHE_SIZE);
149
+ /*
284
return 0;
150
+ * Track real data end, to crop preallocation on close. If < 0 the status is
285
fail_l1b:
151
+ * unknown.
286
g_free(extent->l1_backup_table);
152
+ *
287
@@ -XXX,XX +XXX,XX @@ static int vmdk_open_vmfs_sparse(BlockDriverState *bs,
153
+ * @data_end is a maximum of file size on open (or when we get write/resize
288
return ret;
154
+ * permissions) and all write request ends after it. So it's safe to
289
}
155
+ * truncate to data_end if it is valid.
290
156
+ */
291
+#define SESPARSE_CONST_HEADER_MAGIC UINT64_C(0x00000000cafebabe)
157
+ int64_t data_end;
292
+#define SESPARSE_VOLATILE_HEADER_MAGIC UINT64_C(0x00000000cafecafe)
158
+
293
+
159
+ /*
294
+/* Strict checks - format not officially documented */
160
+ * Start of trailing preallocated area which reads as zero. May be smaller
295
+static int check_se_sparse_const_header(VMDKSESparseConstHeader *header,
161
+ * than data_end, if user does over-EOF write zero operation. If < 0 the
296
+ Error **errp)
162
+ * status is unknown.
163
+ *
164
+ * If both @zero_start and @file_end are valid, the region
165
+ * [@zero_start, @file_end) is known to be preallocated zeroes. If @file_end
166
+ * is not valid, @zero_start doesn't make much sense.
167
+ */
168
+ int64_t zero_start;
169
+
170
+ /*
171
+ * Real end of file. Actually the cache for bdrv_getlength(bs->file->bs),
172
+ * to avoid extra lseek() calls on each write operation. If < 0 the status
173
+ * is unknown.
174
+ */
175
+ int64_t file_end;
176
+
177
+ /*
178
+ * All three states @data_end, @zero_start and @file_end are guaranteed to
179
+ * be invalid (< 0) when we don't have both exclusive BLK_PERM_RESIZE and
180
+ * BLK_PERM_WRITE permissions on file child.
181
+ */
182
+} BDRVPreallocateState;
183
+
184
+#define PREALLOCATE_OPT_PREALLOC_ALIGN "prealloc-align"
185
+#define PREALLOCATE_OPT_PREALLOC_SIZE "prealloc-size"
186
+static QemuOptsList runtime_opts = {
187
+ .name = "preallocate",
188
+ .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
189
+ .desc = {
190
+ {
191
+ .name = PREALLOCATE_OPT_PREALLOC_ALIGN,
192
+ .type = QEMU_OPT_SIZE,
193
+ .help = "on preallocation, align file length to this number, "
194
+ "default 1M",
195
+ },
196
+ {
197
+ .name = PREALLOCATE_OPT_PREALLOC_SIZE,
198
+ .type = QEMU_OPT_SIZE,
199
+ .help = "how much to preallocate, default 128M",
200
+ },
201
+ { /* end of list */ }
202
+ },
203
+};
204
+
205
+static bool preallocate_absorb_opts(PreallocateOpts *dest, QDict *options,
206
+ BlockDriverState *child_bs, Error **errp)
207
+{
297
+{
208
+ QemuOpts *opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
298
+ header->magic = le64_to_cpu(header->magic);
209
+
299
+ header->version = le64_to_cpu(header->version);
210
+ if (!qemu_opts_absorb_qdict(opts, options, errp)) {
300
+ header->grain_size = le64_to_cpu(header->grain_size);
211
+ return false;
301
+ header->grain_table_size = le64_to_cpu(header->grain_table_size);
212
+ }
302
+ header->flags = le64_to_cpu(header->flags);
213
+
303
+ header->reserved1 = le64_to_cpu(header->reserved1);
214
+ dest->prealloc_align =
304
+ header->reserved2 = le64_to_cpu(header->reserved2);
215
+ qemu_opt_get_size(opts, PREALLOCATE_OPT_PREALLOC_ALIGN, 1 * MiB);
305
+ header->reserved3 = le64_to_cpu(header->reserved3);
216
+ dest->prealloc_size =
306
+ header->reserved4 = le64_to_cpu(header->reserved4);
217
+ qemu_opt_get_size(opts, PREALLOCATE_OPT_PREALLOC_SIZE, 128 * MiB);
307
+
218
+
308
+ header->volatile_header_offset =
219
+ qemu_opts_del(opts);
309
+ le64_to_cpu(header->volatile_header_offset);
220
+
310
+ header->volatile_header_size = le64_to_cpu(header->volatile_header_size);
221
+ if (!QEMU_IS_ALIGNED(dest->prealloc_align, BDRV_SECTOR_SIZE)) {
311
+
222
+ error_setg(errp, "prealloc-align parameter of preallocate filter "
312
+ header->journal_header_offset = le64_to_cpu(header->journal_header_offset);
223
+ "is not aligned to %llu", BDRV_SECTOR_SIZE);
313
+ header->journal_header_size = le64_to_cpu(header->journal_header_size);
224
+ return false;
314
+
225
+ }
315
+ header->journal_offset = le64_to_cpu(header->journal_offset);
226
+
316
+ header->journal_size = le64_to_cpu(header->journal_size);
227
+ if (!QEMU_IS_ALIGNED(dest->prealloc_align,
317
+
228
+ child_bs->bl.request_alignment)) {
318
+ header->grain_dir_offset = le64_to_cpu(header->grain_dir_offset);
229
+ error_setg(errp, "prealloc-align parameter of preallocate filter "
319
+ header->grain_dir_size = le64_to_cpu(header->grain_dir_size);
230
+ "is not aligned to underlying node request alignment "
320
+
231
+ "(%" PRIi32 ")", child_bs->bl.request_alignment);
321
+ header->grain_tables_offset = le64_to_cpu(header->grain_tables_offset);
232
+ return false;
322
+ header->grain_tables_size = le64_to_cpu(header->grain_tables_size);
233
+ }
323
+
234
+
324
+ header->free_bitmap_offset = le64_to_cpu(header->free_bitmap_offset);
235
+ return true;
325
+ header->free_bitmap_size = le64_to_cpu(header->free_bitmap_size);
236
+}
326
+
237
+
327
+ header->backmap_offset = le64_to_cpu(header->backmap_offset);
238
+static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
328
+ header->backmap_size = le64_to_cpu(header->backmap_size);
239
+ Error **errp)
329
+
240
+{
330
+ header->grains_offset = le64_to_cpu(header->grains_offset);
241
+ BDRVPreallocateState *s = bs->opaque;
331
+ header->grains_size = le64_to_cpu(header->grains_size);
242
+
332
+
243
+ /*
333
+ if (header->magic != SESPARSE_CONST_HEADER_MAGIC) {
244
+ * s->data_end and friends should be initialized on permission update.
334
+ error_setg(errp, "Bad const header magic: 0x%016" PRIx64,
245
+ * For this to work, mark them invalid.
335
+ header->magic);
246
+ */
247
+ s->file_end = s->zero_start = s->data_end = -EINVAL;
248
+
249
+ bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
250
+ BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
251
+ false, errp);
252
+ if (!bs->file) {
253
+ return -EINVAL;
336
+ return -EINVAL;
254
+ }
337
+ }
255
+
338
+
256
+ if (!preallocate_absorb_opts(&s->opts, options, bs->file->bs, errp)) {
339
+ if (header->version != 0x0000000200000001) {
257
+ return -EINVAL;
340
+ error_setg(errp, "Unsupported version: 0x%016" PRIx64,
258
+ }
341
+ header->version);
259
+
342
+ return -ENOTSUP;
260
+ bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
343
+ }
261
+ (BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
344
+
262
+
345
+ if (header->grain_size != 8) {
263
+ bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
346
+ error_setg(errp, "Unsupported grain size: %" PRIu64,
264
+ ((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
347
+ header->grain_size);
265
+ bs->file->bs->supported_zero_flags);
348
+ return -ENOTSUP;
349
+ }
350
+
351
+ if (header->grain_table_size != 64) {
352
+ error_setg(errp, "Unsupported grain table size: %" PRIu64,
353
+ header->grain_table_size);
354
+ return -ENOTSUP;
355
+ }
356
+
357
+ if (header->flags != 0) {
358
+ error_setg(errp, "Unsupported flags: 0x%016" PRIx64,
359
+ header->flags);
360
+ return -ENOTSUP;
361
+ }
362
+
363
+ if (header->reserved1 != 0 || header->reserved2 != 0 ||
364
+ header->reserved3 != 0 || header->reserved4 != 0) {
365
+ error_setg(errp, "Unsupported reserved bits:"
366
+ " 0x%016" PRIx64 " 0x%016" PRIx64
367
+ " 0x%016" PRIx64 " 0x%016" PRIx64,
368
+ header->reserved1, header->reserved2,
369
+ header->reserved3, header->reserved4);
370
+ return -ENOTSUP;
371
+ }
372
+
373
+ /* check that padding is 0 */
374
+ if (!buffer_is_zero(header->pad, sizeof(header->pad))) {
375
+ error_setg(errp, "Unsupported non-zero const header padding");
376
+ return -ENOTSUP;
377
+ }
266
+
378
+
267
+ return 0;
379
+ return 0;
268
+}
380
+}
269
+
381
+
270
+static void preallocate_close(BlockDriverState *bs)
382
+static int check_se_sparse_volatile_header(VMDKSESparseVolatileHeader *header,
383
+ Error **errp)
384
+{
385
+ header->magic = le64_to_cpu(header->magic);
386
+ header->free_gt_number = le64_to_cpu(header->free_gt_number);
387
+ header->next_txn_seq_number = le64_to_cpu(header->next_txn_seq_number);
388
+ header->replay_journal = le64_to_cpu(header->replay_journal);
389
+
390
+ if (header->magic != SESPARSE_VOLATILE_HEADER_MAGIC) {
391
+ error_setg(errp, "Bad volatile header magic: 0x%016" PRIx64,
392
+ header->magic);
393
+ return -EINVAL;
394
+ }
395
+
396
+ if (header->replay_journal) {
397
+ error_setg(errp, "Image is dirty; replaying the journal is not supported");
398
+ return -ENOTSUP;
399
+ }
400
+
401
+ /* check that padding is 0 */
402
+ if (!buffer_is_zero(header->pad, sizeof(header->pad))) {
403
+ error_setg(errp, "Unsupported non-zero volatile header padding");
404
+ return -ENOTSUP;
405
+ }
406
+
407
+ return 0;
408
+}
409
+
410
+static int vmdk_open_se_sparse(BlockDriverState *bs,
411
+ BdrvChild *file,
412
+ int flags, Error **errp)
271
+{
413
+{
272
+ int ret;
414
+ int ret;
273
+ BDRVPreallocateState *s = bs->opaque;
415
+ VMDKSESparseConstHeader const_header;
274
+
416
+ VMDKSESparseVolatileHeader volatile_header;
275
+ if (s->data_end < 0) {
417
+ VmdkExtent *extent;
276
+ return;
418
+
277
+ }
419
+ ret = bdrv_apply_auto_read_only(bs,
278
+
420
+ "No write support for seSparse images available", errp);
279
+ if (s->file_end < 0) {
280
+ s->file_end = bdrv_getlength(bs->file->bs);
281
+ if (s->file_end < 0) {
282
+ return;
283
+ }
284
+ }
285
+
286
+ if (s->data_end < s->file_end) {
287
+ ret = bdrv_truncate(bs->file, s->data_end, true, PREALLOC_MODE_OFF, 0,
288
+ NULL);
289
+ s->file_end = ret < 0 ? ret : s->data_end;
290
+ }
291
+}
292
+
293
+
294
+/*
295
+ * Handle reopen.
296
+ *
297
+ * We must implement reopen handlers, otherwise reopen just doesn't work. Handle
298
+ * new options and don't care about preallocation state, as it is handled in
299
+ * set/check permission handlers.
300
+ */
301
+
302
+static int preallocate_reopen_prepare(BDRVReopenState *reopen_state,
303
+ BlockReopenQueue *queue, Error **errp)
304
+{
305
+ PreallocateOpts *opts = g_new0(PreallocateOpts, 1);
306
+
307
+ if (!preallocate_absorb_opts(opts, reopen_state->options,
308
+ reopen_state->bs->file->bs, errp)) {
309
+ g_free(opts);
310
+ return -EINVAL;
311
+ }
312
+
313
+ reopen_state->opaque = opts;
314
+
315
+ return 0;
316
+}
317
+
318
+static void preallocate_reopen_commit(BDRVReopenState *state)
319
+{
320
+ BDRVPreallocateState *s = state->bs->opaque;
321
+
322
+ s->opts = *(PreallocateOpts *)state->opaque;
323
+
324
+ g_free(state->opaque);
325
+ state->opaque = NULL;
326
+}
327
+
328
+static void preallocate_reopen_abort(BDRVReopenState *state)
329
+{
330
+ g_free(state->opaque);
331
+ state->opaque = NULL;
332
+}
333
+
334
+static coroutine_fn int preallocate_co_preadv_part(
335
+ BlockDriverState *bs, uint64_t offset, uint64_t bytes,
336
+ QEMUIOVector *qiov, size_t qiov_offset, int flags)
337
+{
338
+ return bdrv_co_preadv_part(bs->file, offset, bytes, qiov, qiov_offset,
339
+ flags);
340
+}
341
+
342
+static int coroutine_fn preallocate_co_pdiscard(BlockDriverState *bs,
343
+ int64_t offset, int bytes)
344
+{
345
+ return bdrv_co_pdiscard(bs->file, offset, bytes);
346
+}
347
+
348
+static bool can_write_resize(uint64_t perm)
349
+{
350
+ return (perm & BLK_PERM_WRITE) && (perm & BLK_PERM_RESIZE);
351
+}
352
+
353
+static bool has_prealloc_perms(BlockDriverState *bs)
354
+{
355
+ BDRVPreallocateState *s = bs->opaque;
356
+
357
+ if (can_write_resize(bs->file->perm)) {
358
+ assert(!(bs->file->shared_perm & BLK_PERM_WRITE));
359
+ assert(!(bs->file->shared_perm & BLK_PERM_RESIZE));
360
+ return true;
361
+ }
362
+
363
+ assert(s->data_end < 0);
364
+ assert(s->zero_start < 0);
365
+ assert(s->file_end < 0);
366
+ return false;
367
+}
368
+
369
+/*
370
+ * Call on each write. Returns true if @want_merge_zero is true and the region
371
+ * [offset, offset + bytes) is zeroed (as a result of this call or earlier
372
+ * preallocation).
373
+ *
374
+ * want_merge_zero is used to merge a write-zeroes request with preallocation in
375
+ * one bdrv_co_pwrite_zeroes() call.
376
+ */
377
+static bool coroutine_fn handle_write(BlockDriverState *bs, int64_t offset,
378
+ int64_t bytes, bool want_merge_zero)
379
+{
380
+ BDRVPreallocateState *s = bs->opaque;
381
+ int64_t end = offset + bytes;
382
+ int64_t prealloc_start, prealloc_end;
383
+ int ret;
384
+
385
+ if (!has_prealloc_perms(bs)) {
386
+ /* We don't have the state, nor should we try to recover it */
387
+ return false;
388
+ }
389
+
390
+ if (s->data_end < 0) {
391
+ s->data_end = bdrv_getlength(bs->file->bs);
392
+ if (s->data_end < 0) {
393
+ return false;
394
+ }
395
+
396
+ if (s->file_end < 0) {
397
+ s->file_end = s->data_end;
398
+ }
399
+ }
400
+
401
+ if (end <= s->data_end) {
402
+ return false;
403
+ }
404
+
405
+ /* We have valid s->data_end, and request writes beyond it. */
406
+
407
+ s->data_end = end;
408
+ if (s->zero_start < 0 || !want_merge_zero) {
409
+ s->zero_start = end;
410
+ }
411
+
412
+ if (s->file_end < 0) {
413
+ s->file_end = bdrv_getlength(bs->file->bs);
414
+ if (s->file_end < 0) {
415
+ return false;
416
+ }
417
+ }
418
+
419
+ /* Now s->data_end, s->zero_start and s->file_end are valid. */
420
+
421
+ if (end <= s->file_end) {
422
+ /* No preallocation needed. */
423
+ return want_merge_zero && offset >= s->zero_start;
424
+ }
425
+
426
+ /* Now we want new preallocation, as request writes beyond s->file_end. */
427
+
428
+ prealloc_start = want_merge_zero ? MIN(offset, s->file_end) : s->file_end;
429
+ prealloc_end = QEMU_ALIGN_UP(end + s->opts.prealloc_size,
430
+ s->opts.prealloc_align);
431
+
432
+ ret = bdrv_co_pwrite_zeroes(
433
+ bs->file, prealloc_start, prealloc_end - prealloc_start,
434
+ BDRV_REQ_NO_FALLBACK | BDRV_REQ_SERIALISING | BDRV_REQ_NO_WAIT);
435
+ if (ret < 0) {
421
+ if (ret < 0) {
436
+ s->file_end = ret;
422
+ return ret;
437
+ return false;
423
+ }
438
+ }
424
+
439
+
425
+ assert(sizeof(const_header) == SECTOR_SIZE);
440
+ s->file_end = prealloc_end;
426
+
441
+ return want_merge_zero;
427
+ ret = bdrv_pread(file, 0, &const_header, sizeof(const_header));
442
+}
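To make the growth arithmetic in handle_write() concrete, here is a small standalone sketch (not part of the patch) that redoes the QEMU_ALIGN_UP() computation with the default prealloc-size/prealloc-align values; the write offset is made up:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MiB (1024 * 1024LL)
/* Same rounding as QEMU_ALIGN_UP() for positive values. */
#define ALIGN_UP(x, a) (((x) + (a) - 1) / (a) * (a))

int main(void)
{
    int64_t write_end      = 130 * MiB; /* end of a write beyond file_end */
    int64_t prealloc_size  = 128 * MiB; /* default prealloc-size */
    int64_t prealloc_align = 1 * MiB;   /* default prealloc-align */

    int64_t prealloc_end = ALIGN_UP(write_end + prealloc_size, prealloc_align);
    printf("file preallocated up to %" PRId64 " MiB\n", prealloc_end / MiB); /* 258 */
    return 0;
}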
443
+
444
+static int coroutine_fn preallocate_co_pwrite_zeroes(BlockDriverState *bs,
445
+ int64_t offset, int bytes, BdrvRequestFlags flags)
446
+{
447
+ bool want_merge_zero =
448
+ !(flags & ~(BDRV_REQ_ZERO_WRITE | BDRV_REQ_NO_FALLBACK));
449
+ if (handle_write(bs, offset, bytes, want_merge_zero)) {
450
+ return 0;
451
+ }
452
+
453
+ return bdrv_co_pwrite_zeroes(bs->file, offset, bytes, flags);
454
+}
455
+
456
+static coroutine_fn int preallocate_co_pwritev_part(BlockDriverState *bs,
457
+ uint64_t offset,
458
+ uint64_t bytes,
459
+ QEMUIOVector *qiov,
460
+ size_t qiov_offset,
461
+ int flags)
462
+{
463
+ handle_write(bs, offset, bytes, false);
464
+
465
+ return bdrv_co_pwritev_part(bs->file, offset, bytes, qiov, qiov_offset,
466
+ flags);
467
+}
468
+
469
+static int coroutine_fn
470
+preallocate_co_truncate(BlockDriverState *bs, int64_t offset,
471
+ bool exact, PreallocMode prealloc,
472
+ BdrvRequestFlags flags, Error **errp)
473
+{
474
+ ERRP_GUARD();
475
+ BDRVPreallocateState *s = bs->opaque;
476
+ int ret;
477
+
478
+ if (s->data_end >= 0 && offset > s->data_end) {
479
+ if (s->file_end < 0) {
480
+ s->file_end = bdrv_getlength(bs->file->bs);
481
+ if (s->file_end < 0) {
482
+ error_setg(errp, "failed to get file length");
483
+ return s->file_end;
484
+ }
485
+ }
486
+
487
+ if (prealloc == PREALLOC_MODE_FALLOC) {
488
+ /*
489
+ * If offset <= s->file_end, the task is already done, just
490
+ * update s->data_end, to move part of "filter preallocation"
491
+ * to "preallocation requested by user".
492
+ * Otherwise just proceed to preallocate missing part.
493
+ */
494
+ if (offset <= s->file_end) {
495
+ s->data_end = offset;
496
+ return 0;
497
+ }
498
+ } else {
499
+ /*
500
+ * We have to drop our preallocation, to
501
+ * - avoid "Cannot use preallocation for shrinking files" in
502
+ * case of offset < file_end
503
+ * - give PREALLOC_MODE_OFF a chance to keep small disk
504
+ * usage
505
+ * - give PREALLOC_MODE_FULL a chance to actually write the
506
+ * whole region as user expects
507
+ */
508
+ if (s->file_end > s->data_end) {
509
+ ret = bdrv_co_truncate(bs->file, s->data_end, true,
510
+ PREALLOC_MODE_OFF, 0, errp);
511
+ if (ret < 0) {
512
+ s->file_end = ret;
513
+ error_prepend(errp, "preallocate-filter: failed to drop "
514
+ "write-zero preallocation: ");
515
+ return ret;
516
+ }
517
+ s->file_end = s->data_end;
518
+ }
519
+ }
520
+
521
+ s->data_end = offset;
522
+ }
523
+
524
+ ret = bdrv_co_truncate(bs->file, offset, exact, prealloc, flags, errp);
525
+ if (ret < 0) {
428
+ if (ret < 0) {
526
+ s->file_end = s->zero_start = s->data_end = ret;
429
+ bdrv_refresh_filename(file->bs);
430
+ error_setg_errno(errp, -ret,
431
+ "Could not read const header from file '%s'",
432
+ file->bs->filename);
527
+ return ret;
433
+ return ret;
528
+ }
434
+ }
529
+
435
+
530
+ if (has_prealloc_perms(bs)) {
436
+ /* check const header */
531
+ s->file_end = s->zero_start = s->data_end = offset;
437
+ ret = check_se_sparse_const_header(&const_header, errp);
532
+ }
438
+ if (ret < 0) {
533
+ return 0;
439
+ return ret;
534
+}
440
+ }
535
+
441
+
536
+static int coroutine_fn preallocate_co_flush(BlockDriverState *bs)
442
+ assert(sizeof(volatile_header) == SECTOR_SIZE);
537
+{
443
+
538
+ return bdrv_co_flush(bs->file->bs);
444
+ ret = bdrv_pread(file,
539
+}
445
+ const_header.volatile_header_offset * SECTOR_SIZE,
540
+
446
+ &volatile_header, sizeof(volatile_header));
541
+static int64_t preallocate_getlength(BlockDriverState *bs)
447
+ if (ret < 0) {
542
+{
448
+ bdrv_refresh_filename(file->bs);
543
+ int64_t ret;
449
+ error_setg_errno(errp, -ret,
544
+ BDRVPreallocateState *s = bs->opaque;
450
+ "Could not read volatile header from file '%s'",
545
+
451
+ file->bs->filename);
546
+ if (s->data_end >= 0) {
452
+ return ret;
547
+ return s->data_end;
453
+ }
548
+ }
454
+
549
+
455
+ /* check volatile header */
550
+ ret = bdrv_getlength(bs->file->bs);
456
+ ret = check_se_sparse_volatile_header(&volatile_header, errp);
551
+
457
+ if (ret < 0) {
552
+ if (has_prealloc_perms(bs)) {
458
+ return ret;
553
+ s->file_end = s->zero_start = s->data_end = ret;
459
+ }
460
+
461
+ ret = vmdk_add_extent(bs, file, false,
462
+ const_header.capacity,
463
+ const_header.grain_dir_offset * SECTOR_SIZE,
464
+ 0,
465
+ const_header.grain_dir_size *
466
+ SECTOR_SIZE / sizeof(uint64_t),
467
+ const_header.grain_table_size *
468
+ SECTOR_SIZE / sizeof(uint64_t),
469
+ const_header.grain_size,
470
+ &extent,
471
+ errp);
472
+ if (ret < 0) {
473
+ return ret;
474
+ }
475
+
476
+ extent->sesparse = true;
477
+ extent->sesparse_l2_tables_offset = const_header.grain_tables_offset;
478
+ extent->sesparse_clusters_offset = const_header.grains_offset;
479
+ extent->entry_size = sizeof(uint64_t);
480
+
481
+ ret = vmdk_init_tables(bs, extent, errp);
482
+ if (ret) {
483
+ /* free extent allocated by vmdk_add_extent */
484
+ vmdk_free_last_extent(bs);
554
+ }
485
+ }
555
+
486
+
556
+ return ret;
487
+ return ret;
557
+}
488
+}
558
+
489
+
559
+static int preallocate_check_perm(BlockDriverState *bs,
490
static int vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf,
560
+ uint64_t perm, uint64_t shared, Error **errp)
491
QDict *options, Error **errp);
561
+{
492
562
+ BDRVPreallocateState *s = bs->opaque;
493
@@ -XXX,XX +XXX,XX @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
563
+
494
* RW [size in sectors] SPARSE "file-name.vmdk"
564
+ if (s->data_end >= 0 && !can_write_resize(perm)) {
495
* RW [size in sectors] VMFS "file-name.vmdk"
565
+ /*
496
* RW [size in sectors] VMFSSPARSE "file-name.vmdk"
566
+ * Lose permissions.
497
+ * RW [size in sectors] SESPARSE "file-name.vmdk"
567
+ * We should truncate in check_perm, as in set_perm bs->file->perm will
498
*/
568
+ * be already changed, and we should not violate it.
499
flat_offset = -1;
569
+ */
500
matches = sscanf(p, "%10s %" SCNd64 " %10s \"%511[^\n\r\"]\" %" SCNd64,
570
+ if (s->file_end < 0) {
501
@@ -XXX,XX +XXX,XX @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
571
+ s->file_end = bdrv_getlength(bs->file->bs);
502
572
+ if (s->file_end < 0) {
503
if (sectors <= 0 ||
573
+ error_setg(errp, "Failed to get file length");
504
(strcmp(type, "FLAT") && strcmp(type, "SPARSE") &&
574
+ return s->file_end;
505
- strcmp(type, "VMFS") && strcmp(type, "VMFSSPARSE")) ||
575
+ }
506
+ strcmp(type, "VMFS") && strcmp(type, "VMFSSPARSE") &&
576
+ }
507
+ strcmp(type, "SESPARSE")) ||
577
+
508
(strcmp(access, "RW"))) {
578
+ if (s->data_end < s->file_end) {
509
continue;
579
+ int ret = bdrv_truncate(bs->file, s->data_end, true,
510
}
580
+ PREALLOC_MODE_OFF, 0, NULL);
511
@@ -XXX,XX +XXX,XX @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
581
+ if (ret < 0) {
512
return ret;
582
+ error_setg(errp, "Failed to drop preallocation");
513
}
583
+ s->file_end = ret;
514
extent = &s->extents[s->num_extents - 1];
515
+ } else if (!strcmp(type, "SESPARSE")) {
516
+ ret = vmdk_open_se_sparse(bs, extent_file, bs->open_flags, errp);
517
+ if (ret) {
518
+ bdrv_unref_child(bs, extent_file);
584
+ return ret;
519
+ return ret;
585
+ }
520
+ }
586
+ s->file_end = s->data_end;
521
+ extent = &s->extents[s->num_extents - 1];
587
+ }
522
} else {
588
+ }
523
error_setg(errp, "Unsupported extent type '%s'", type);
589
+
524
bdrv_unref_child(bs, extent_file);
590
+ return 0;
525
@@ -XXX,XX +XXX,XX @@ static int vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf,
591
+}
526
if (strcmp(ct, "monolithicFlat") &&
592
+
527
strcmp(ct, "vmfs") &&
593
+static void preallocate_set_perm(BlockDriverState *bs,
528
strcmp(ct, "vmfsSparse") &&
594
+ uint64_t perm, uint64_t shared)
529
+ strcmp(ct, "seSparse") &&
595
+{
530
strcmp(ct, "twoGbMaxExtentSparse") &&
596
+ BDRVPreallocateState *s = bs->opaque;
531
strcmp(ct, "twoGbMaxExtentFlat")) {
597
+
532
error_setg(errp, "Unsupported image type '%s'", ct);
598
+ if (can_write_resize(perm)) {
533
@@ -XXX,XX +XXX,XX @@ static int get_cluster_offset(BlockDriverState *bs,
599
+ if (s->data_end < 0) {
534
{
600
+ s->data_end = s->file_end = s->zero_start =
535
unsigned int l1_index, l2_offset, l2_index;
601
+ bdrv_getlength(bs->file->bs);
536
int min_index, i, j;
537
- uint32_t min_count, *l2_table;
538
+ uint32_t min_count;
539
+ void *l2_table;
540
bool zeroed = false;
541
int64_t ret;
542
int64_t cluster_sector;
543
+ unsigned int l2_size_bytes = extent->l2_size * extent->entry_size;
544
545
if (m_data) {
546
m_data->valid = 0;
547
@@ -XXX,XX +XXX,XX @@ static int get_cluster_offset(BlockDriverState *bs,
548
if (l1_index >= extent->l1_size) {
549
return VMDK_ERROR;
550
}
551
- l2_offset = extent->l1_table[l1_index];
552
+ if (extent->sesparse) {
553
+ uint64_t l2_offset_u64;
554
+
555
+ assert(extent->entry_size == sizeof(uint64_t));
556
+
557
+ l2_offset_u64 = ((uint64_t *)extent->l1_table)[l1_index];
558
+ if (l2_offset_u64 == 0) {
559
+ l2_offset = 0;
560
+ } else if ((l2_offset_u64 & 0xffffffff00000000) != 0x1000000000000000) {
561
+ /*
562
+ * Top most nibble is 0x1 if grain table is allocated.
563
+ * strict check - top most 4 bytes must be 0x10000000 since max
564
+ * supported size is 64TB for disk - so no more than 64TB / 16MB
565
+ * grain directories which is smaller than uint32,
566
+ * where 16MB is the only supported default grain table coverage.
567
+ */
568
+ return VMDK_ERROR;
569
+ } else {
570
+ l2_offset_u64 = l2_offset_u64 & 0x00000000ffffffff;
571
+ l2_offset_u64 = extent->sesparse_l2_tables_offset +
572
+ l2_offset_u64 * l2_size_bytes / SECTOR_SIZE;
573
+ if (l2_offset_u64 > 0x00000000ffffffff) {
574
+ return VMDK_ERROR;
575
+ }
576
+ l2_offset = (unsigned int)(l2_offset_u64);
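As an illustration of the grain-directory entry decoding above, a standalone sketch (not part of the patch) applying the same masks to a sample entry; the table offset is made up and SECTOR_SIZE is assumed to be 512:

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SIZE 512

int main(void)
{
    uint64_t entry = 0x1000000000000007ULL;    /* top nibble 0x1: allocated */
    uint64_t l2_tables_offset = 0x2000;        /* sectors; sample value */
    uint64_t l2_size_bytes = 64 * SECTOR_SIZE; /* 4096 eight-byte entries */

    /* Same strict check as above: upper 32 bits must be 0x10000000. */
    assert((entry & 0xffffffff00000000ULL) == 0x1000000000000000ULL);

    uint64_t index = entry & 0x00000000ffffffffULL;               /* 7 */
    uint64_t l2_offset = l2_tables_offset + index * l2_size_bytes / SECTOR_SIZE;

    printf("grain table #%" PRIu64 " starts at sector %" PRIu64 "\n",
           index, l2_offset);                                     /* 7, 8640 */
    return 0;
}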
602
+ }
577
+ }
603
+ } else {
578
+ } else {
604
+ /*
579
+ assert(extent->entry_size == sizeof(uint32_t));
605
+ * We drop our permissions, as well as allow shared
580
+ l2_offset = ((uint32_t *)extent->l1_table)[l1_index];
606
+ * permissions (see preallocate_child_perm); anyone will be able to
581
+ }
607
+ * change the child, so mark all states invalid. We'll regain control if
582
if (!l2_offset) {
608
+ * we get good permissions back.
583
return VMDK_UNALLOC;
609
+ */
584
}
610
+ s->data_end = s->file_end = s->zero_start = -EINVAL;
585
@@ -XXX,XX +XXX,XX @@ static int get_cluster_offset(BlockDriverState *bs,
611
+ }
586
extent->l2_cache_counts[j] >>= 1;
612
+}
587
}
613
+
588
}
614
+static void preallocate_child_perm(BlockDriverState *bs, BdrvChild *c,
589
- l2_table = extent->l2_cache + (i * extent->l2_size);
615
+ BdrvChildRole role, BlockReopenQueue *reopen_queue,
590
+ l2_table = (char *)extent->l2_cache + (i * l2_size_bytes);
616
+ uint64_t perm, uint64_t shared, uint64_t *nperm, uint64_t *nshared)
591
goto found;
617
+{
592
}
618
+ bdrv_default_perms(bs, c, role, reopen_queue, perm, shared, nperm, nshared);
593
}
619
+
594
@@ -XXX,XX +XXX,XX @@ static int get_cluster_offset(BlockDriverState *bs,
620
+ if (can_write_resize(perm)) {
595
min_index = i;
621
+ /* This should come by default, but let's enforce: */
596
}
622
+ *nperm |= BLK_PERM_WRITE | BLK_PERM_RESIZE;
597
}
623
+
598
- l2_table = extent->l2_cache + (min_index * extent->l2_size);
624
+ /*
599
+ l2_table = (char *)extent->l2_cache + (min_index * l2_size_bytes);
625
+ * Don't share, to keep our states s->file_end, s->data_end and
600
BLKDBG_EVENT(extent->file, BLKDBG_L2_LOAD);
626
+ * s->zero_start valid.
601
if (bdrv_pread(extent->file,
627
+ */
602
(int64_t)l2_offset * 512,
628
+ *nshared &= ~(BLK_PERM_WRITE | BLK_PERM_RESIZE);
603
l2_table,
629
+ }
604
- extent->l2_size * sizeof(uint32_t)
630
+}
605
- ) != extent->l2_size * sizeof(uint32_t)) {
631
+
606
+ l2_size_bytes
632
+BlockDriver bdrv_preallocate_filter = {
607
+ ) != l2_size_bytes) {
633
+ .format_name = "preallocate",
608
return VMDK_ERROR;
634
+ .instance_size = sizeof(BDRVPreallocateState),
609
}
635
+
610
636
+ .bdrv_getlength = preallocate_getlength,
611
@@ -XXX,XX +XXX,XX @@ static int get_cluster_offset(BlockDriverState *bs,
637
+ .bdrv_open = preallocate_open,
612
extent->l2_cache_counts[min_index] = 1;
638
+ .bdrv_close = preallocate_close,
613
found:
639
+
614
l2_index = ((offset >> 9) / extent->cluster_sectors) % extent->l2_size;
640
+ .bdrv_reopen_prepare = preallocate_reopen_prepare,
615
- cluster_sector = le32_to_cpu(l2_table[l2_index]);
641
+ .bdrv_reopen_commit = preallocate_reopen_commit,
616
642
+ .bdrv_reopen_abort = preallocate_reopen_abort,
617
- if (extent->has_zero_grain && cluster_sector == VMDK_GTE_ZEROED) {
643
+
618
- zeroed = true;
644
+ .bdrv_co_preadv_part = preallocate_co_preadv_part,
619
+ if (extent->sesparse) {
645
+ .bdrv_co_pwritev_part = preallocate_co_pwritev_part,
620
+ cluster_sector = le64_to_cpu(((uint64_t *)l2_table)[l2_index]);
646
+ .bdrv_co_pwrite_zeroes = preallocate_co_pwrite_zeroes,
621
+ switch (cluster_sector & 0xf000000000000000) {
647
+ .bdrv_co_pdiscard = preallocate_co_pdiscard,
622
+ case 0x0000000000000000:
648
+ .bdrv_co_flush = preallocate_co_flush,
623
+ /* unallocated grain */
649
+ .bdrv_co_truncate = preallocate_co_truncate,
624
+ if (cluster_sector != 0) {
650
+
625
+ return VMDK_ERROR;
651
+ .bdrv_check_perm = preallocate_check_perm,
626
+ }
652
+ .bdrv_set_perm = preallocate_set_perm,
627
+ break;
653
+ .bdrv_child_perm = preallocate_child_perm,
628
+ case 0x1000000000000000:
654
+
629
+ /* scsi-unmapped grain - fallthrough */
655
+ .has_variable_length = true,
630
+ case 0x2000000000000000:
656
+ .is_filter = true,
631
+ /* zero grain */
657
+};
632
+ zeroed = true;
658
+
633
+ break;
659
+static void bdrv_preallocate_init(void)
634
+ case 0x3000000000000000:
660
+{
635
+ /* allocated grain */
661
+ bdrv_register(&bdrv_preallocate_filter);
636
+ cluster_sector = (((cluster_sector & 0x0fff000000000000) >> 48) |
662
+}
637
+ ((cluster_sector & 0x0000ffffffffffff) << 12));
663
+
638
+ cluster_sector = extent->sesparse_clusters_offset +
664
+block_init(bdrv_preallocate_init);
639
+ cluster_sector * extent->cluster_sectors;
665
diff --git a/block/meson.build b/block/meson.build
640
+ break;
666
index XXXXXXX..XXXXXXX 100644
641
+ default:
667
--- a/block/meson.build
642
+ return VMDK_ERROR;
668
+++ b/block/meson.build
643
+ }
669
@@ -XXX,XX +XXX,XX @@ block_ss.add(files(
644
+ } else {
670
'block-copy.c',
645
+ cluster_sector = le32_to_cpu(((uint32_t *)l2_table)[l2_index]);
671
'commit.c',
646
+
672
'copy-on-read.c',
647
+ if (extent->has_zero_grain && cluster_sector == VMDK_GTE_ZEROED) {
673
+ 'preallocate.c',
648
+ zeroed = true;
674
'create.c',
649
+ }
675
'crypto.c',
650
}
676
'dirty-bitmap.c',
651
652
if (!cluster_sector || zeroed) {
653
if (!allocate) {
654
return zeroed ? VMDK_ZEROED : VMDK_UNALLOC;
655
}
656
+ assert(!extent->sesparse);
657
658
if (extent->next_cluster_sector >= VMDK_EXTENT_MAX_SECTORS) {
659
return VMDK_ERROR;
660
@@ -XXX,XX +XXX,XX @@ static int get_cluster_offset(BlockDriverState *bs,
661
m_data->l1_index = l1_index;
662
m_data->l2_index = l2_index;
663
m_data->l2_offset = l2_offset;
664
- m_data->l2_cache_entry = &l2_table[l2_index];
665
+ m_data->l2_cache_entry = ((uint32_t *)l2_table) + l2_index;
666
}
667
}
668
*cluster_offset = cluster_sector << BDRV_SECTOR_BITS;
669
@@ -XXX,XX +XXX,XX @@ static int vmdk_pwritev(BlockDriverState *bs, uint64_t offset,
670
if (!extent) {
671
return -EIO;
672
}
673
+ if (extent->sesparse) {
674
+ return -ENOTSUP;
675
+ }
676
offset_in_cluster = vmdk_find_offset_in_cluster(extent, offset);
677
n_bytes = MIN(bytes, extent->cluster_sectors * BDRV_SECTOR_SIZE
678
- offset_in_cluster);
677
--
2.29.2

--
2.21.0
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

We must set the permission used for _check_. Assert that we have
backup and drop extra arguments.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201106124241.16950-7-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
block.c | 15 ++++-----------
1 file changed, 4 insertions(+), 11 deletions(-)

From: Pino Toscano <ptoscano@redhat.com>

Rewrite the implementation of the ssh block driver to use libssh instead
of libssh2. The libssh library has various advantages over libssh2:
- easier API for authentication (for example for using ssh-agent)
- easier API for known_hosts handling
- supports newer types of keys in known_hosts

Use APIs/features available in libssh 0.8 conditionally, to support
older versions (which are not recommended though).

Adjust the iotest 207 according to the different error message, and to
find the default key type for localhost (to properly compare the
fingerprint with).
Contributed-by: Max Reitz <mreitz@redhat.com>

Adjust the various Docker/Travis scripts to use libssh when available
instead of libssh2. The mingw/mxe testing is dropped for now, as there
are no packages for it.

Signed-off-by: Pino Toscano <ptoscano@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190620200840.17655-1-ptoscano@redhat.com
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 5873173.t2JhDm7DL7@lindworm.usersys.redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
configure | 65 +-
block/Makefile.objs | 6 +-
block/ssh.c | 652 ++++++++++--------
.travis.yml | 4 +-
block/trace-events | 14 +-
docs/qemu-block-drivers.texi | 2 +-
.../dockerfiles/debian-win32-cross.docker | 1 -
.../dockerfiles/debian-win64-cross.docker | 1 -
tests/docker/dockerfiles/fedora.docker | 4 +-
tests/docker/dockerfiles/ubuntu.docker | 2 +-
tests/docker/dockerfiles/ubuntu1804.docker | 2 +-
tests/qemu-iotests/207 | 54 +-
tests/qemu-iotests/207.out | 2 +-
13 files changed, 449 insertions(+), 360 deletions(-)
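As background for the known_hosts bullet above, this is a rough standalone sketch (not part of the patch; it assumes libssh >= 0.8, and the host name is a placeholder) of the single-call host-key check the rewritten driver builds on:

#include <libssh/libssh.h>
#include <stdio.h>

int main(void)
{
    ssh_session session = ssh_new();
    if (!session) {
        return 1;
    }

    ssh_options_set(session, SSH_OPTIONS_HOST, "localhost");
    ssh_set_blocking(session, 1);

    if (ssh_connect(session) != SSH_OK) {
        fprintf(stderr, "connect failed: %s\n", ssh_get_error(session));
        ssh_free(session);
        return 1;
    }

    /* One call replaces the manual known_hosts file handling of libssh2. */
    if (ssh_session_is_known_server(session) == SSH_KNOWN_HOSTS_OK) {
        printf("host key matches known_hosts\n");
    } else {
        printf("host key is not trusted\n");
    }

    ssh_disconnect(session);
    ssh_free(session);
    return 0;
}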
13
43
14
diff --git a/block.c b/block.c
44
diff --git a/configure b/configure
45
index XXXXXXX..XXXXXXX 100755
46
--- a/configure
47
+++ b/configure
48
@@ -XXX,XX +XXX,XX @@ auth_pam=""
49
vte=""
50
virglrenderer=""
51
tpm=""
52
-libssh2=""
53
+libssh=""
54
live_block_migration="yes"
55
numa=""
56
tcmalloc="no"
57
@@ -XXX,XX +XXX,XX @@ for opt do
58
;;
59
--enable-tpm) tpm="yes"
60
;;
61
- --disable-libssh2) libssh2="no"
62
+ --disable-libssh) libssh="no"
63
;;
64
- --enable-libssh2) libssh2="yes"
65
+ --enable-libssh) libssh="yes"
66
;;
67
--disable-live-block-migration) live_block_migration="no"
68
;;
69
@@ -XXX,XX +XXX,XX @@ disabled with --disable-FEATURE, default is enabled if available:
70
coroutine-pool coroutine freelist (better performance)
71
glusterfs GlusterFS backend
72
tpm TPM support
73
- libssh2 ssh block device support
74
+ libssh ssh block device support
75
numa libnuma support
76
libxml2 for Parallels image format
77
tcmalloc tcmalloc support
78
@@ -XXX,XX +XXX,XX @@ EOF
79
fi
80
81
##########################################
82
-# libssh2 probe
83
-min_libssh2_version=1.2.8
84
-if test "$libssh2" != "no" ; then
85
- if $pkg_config --atleast-version=$min_libssh2_version libssh2; then
86
- libssh2_cflags=$($pkg_config libssh2 --cflags)
87
- libssh2_libs=$($pkg_config libssh2 --libs)
88
- libssh2=yes
89
+# libssh probe
90
+if test "$libssh" != "no" ; then
91
+ if $pkg_config --exists libssh; then
92
+ libssh_cflags=$($pkg_config libssh --cflags)
93
+ libssh_libs=$($pkg_config libssh --libs)
94
+ libssh=yes
95
else
96
- if test "$libssh2" = "yes" ; then
97
- error_exit "libssh2 >= $min_libssh2_version required for --enable-libssh2"
98
+ if test "$libssh" = "yes" ; then
99
+ error_exit "libssh required for --enable-libssh"
100
fi
101
- libssh2=no
102
+ libssh=no
103
fi
104
fi
105
106
##########################################
107
-# libssh2_sftp_fsync probe
108
+# Check for libssh 0.8
109
+# This is done like this instead of using the LIBSSH_VERSION_* and
110
+# SSH_VERSION_* macros because some distributions in the past shipped
111
+# snapshots of the future 0.8 from Git, and those snapshots did not
112
+# have updated version numbers (still referring to 0.7.0).
113
114
-if test "$libssh2" = "yes"; then
115
+if test "$libssh" = "yes"; then
116
cat > $TMPC <<EOF
117
-#include <stdio.h>
118
-#include <libssh2.h>
119
-#include <libssh2_sftp.h>
120
-int main(void) {
121
- LIBSSH2_SESSION *session;
122
- LIBSSH2_SFTP *sftp;
123
- LIBSSH2_SFTP_HANDLE *sftp_handle;
124
- session = libssh2_session_init ();
125
- sftp = libssh2_sftp_init (session);
126
- sftp_handle = libssh2_sftp_open (sftp, "/", 0, 0);
127
- libssh2_sftp_fsync (sftp_handle);
128
- return 0;
129
-}
130
+#include <libssh/libssh.h>
131
+int main(void) { return ssh_get_server_publickey(NULL, NULL); }
132
EOF
133
- # libssh2_cflags/libssh2_libs defined in previous test.
134
- if compile_prog "$libssh2_cflags" "$libssh2_libs" ; then
135
- QEMU_CFLAGS="-DHAS_LIBSSH2_SFTP_FSYNC $QEMU_CFLAGS"
136
+ if compile_prog "$libssh_cflags" "$libssh_libs"; then
137
+ libssh_cflags="-DHAVE_LIBSSH_0_8 $libssh_cflags"
138
fi
139
fi
140
141
@@ -XXX,XX +XXX,XX @@ echo "GlusterFS support $glusterfs"
142
echo "gcov $gcov_tool"
143
echo "gcov enabled $gcov"
144
echo "TPM support $tpm"
145
-echo "libssh2 support $libssh2"
146
+echo "libssh support $libssh"
147
echo "QOM debugging $qom_cast_debug"
148
echo "Live block migration $live_block_migration"
149
echo "lzo support $lzo"
150
@@ -XXX,XX +XXX,XX @@ if test "$glusterfs_iocb_has_stat" = "yes" ; then
151
echo "CONFIG_GLUSTERFS_IOCB_HAS_STAT=y" >> $config_host_mak
152
fi
153
154
-if test "$libssh2" = "yes" ; then
155
- echo "CONFIG_LIBSSH2=m" >> $config_host_mak
156
- echo "LIBSSH2_CFLAGS=$libssh2_cflags" >> $config_host_mak
157
- echo "LIBSSH2_LIBS=$libssh2_libs" >> $config_host_mak
158
+if test "$libssh" = "yes" ; then
159
+ echo "CONFIG_LIBSSH=m" >> $config_host_mak
160
+ echo "LIBSSH_CFLAGS=$libssh_cflags" >> $config_host_mak
161
+ echo "LIBSSH_LIBS=$libssh_libs" >> $config_host_mak
162
fi
163
164
if test "$live_block_migration" = "yes" ; then
165
diff --git a/block/Makefile.objs b/block/Makefile.objs
15
index XXXXXXX..XXXXXXX 100644
166
index XXXXXXX..XXXXXXX 100644
16
--- a/block.c
167
--- a/block/Makefile.objs
17
+++ b/block.c
168
+++ b/block/Makefile.objs
18
@@ -XXX,XX +XXX,XX @@ static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
169
@@ -XXX,XX +XXX,XX @@ block-obj-$(CONFIG_CURL) += curl.o
19
GSList *ignore_children,
170
block-obj-$(CONFIG_RBD) += rbd.o
20
bool *tighten_restrictions, Error **errp);
171
block-obj-$(CONFIG_GLUSTERFS) += gluster.o
21
static void bdrv_child_abort_perm_update(BdrvChild *c);
172
block-obj-$(CONFIG_VXHS) += vxhs.o
22
-static void bdrv_child_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared);
173
-block-obj-$(CONFIG_LIBSSH2) += ssh.o
23
+static void bdrv_child_set_perm(BdrvChild *c);
174
+block-obj-$(CONFIG_LIBSSH) += ssh.o
24
175
block-obj-y += accounting.o dirty-bitmap.o
25
typedef struct BlockReopenQueueEntry {
176
block-obj-y += write-threshold.o
26
bool prepared;
177
block-obj-y += backup.o
27
@@ -XXX,XX +XXX,XX @@ static void bdrv_set_perm(BlockDriverState *bs)
178
@@ -XXX,XX +XXX,XX @@ rbd.o-libs := $(RBD_LIBS)
28
179
gluster.o-cflags := $(GLUSTERFS_CFLAGS)
29
/* Update all children */
180
gluster.o-libs := $(GLUSTERFS_LIBS)
30
QLIST_FOREACH(c, &bs->children, next) {
181
vxhs.o-libs := $(VXHS_LIBS)
31
- uint64_t cur_perm, cur_shared;
182
-ssh.o-cflags := $(LIBSSH2_CFLAGS)
32
- bdrv_child_perm(bs, c->bs, c, c->role, NULL,
183
-ssh.o-libs := $(LIBSSH2_LIBS)
33
- cumulative_perms, cumulative_shared_perms,
184
+ssh.o-cflags := $(LIBSSH_CFLAGS)
34
- &cur_perm, &cur_shared);
185
+ssh.o-libs := $(LIBSSH_LIBS)
35
- bdrv_child_set_perm(c, cur_perm, cur_shared);
186
block-obj-dmg-bz2-$(CONFIG_BZIP2) += dmg-bz2.o
36
+ bdrv_child_set_perm(c);
187
block-obj-$(if $(CONFIG_DMG),m,n) += $(block-obj-dmg-bz2-y)
37
}
188
dmg-bz2.o-libs := $(BZIP2_LIBS)
189
diff --git a/block/ssh.c b/block/ssh.c
190
index XXXXXXX..XXXXXXX 100644
191
--- a/block/ssh.c
192
+++ b/block/ssh.c
193
@@ -XXX,XX +XXX,XX @@
194
195
#include "qemu/osdep.h"
196
197
-#include <libssh2.h>
198
-#include <libssh2_sftp.h>
199
+#include <libssh/libssh.h>
200
+#include <libssh/sftp.h>
201
202
#include "block/block_int.h"
203
#include "block/qdict.h"
204
@@ -XXX,XX +XXX,XX @@
205
#include "trace.h"
206
207
/*
208
- * TRACE_LIBSSH2=<bitmask> enables tracing in libssh2 itself. Note
209
- * that this requires that libssh2 was specially compiled with the
210
- * `./configure --enable-debug' option, so most likely you will have
211
- * to compile it yourself. The meaning of <bitmask> is described
212
- * here: http://www.libssh2.org/libssh2_trace.html
213
+ * TRACE_LIBSSH=<level> enables tracing in libssh itself.
214
+ * The meaning of <level> is described here:
215
+ * http://api.libssh.org/master/group__libssh__log.html
216
*/
217
-#define TRACE_LIBSSH2 0 /* or try: LIBSSH2_TRACE_SFTP */
218
+#define TRACE_LIBSSH 0 /* see: SSH_LOG_* */
219
220
typedef struct BDRVSSHState {
221
/* Coroutine. */
222
@@ -XXX,XX +XXX,XX @@ typedef struct BDRVSSHState {
223
224
/* SSH connection. */
225
int sock; /* socket */
226
- LIBSSH2_SESSION *session; /* ssh session */
227
- LIBSSH2_SFTP *sftp; /* sftp session */
228
- LIBSSH2_SFTP_HANDLE *sftp_handle; /* sftp remote file handle */
229
+ ssh_session session; /* ssh session */
230
+ sftp_session sftp; /* sftp session */
231
+ sftp_file sftp_handle; /* sftp remote file handle */
232
233
- /* See ssh_seek() function below. */
234
- int64_t offset;
235
- bool offset_op_read;
236
-
237
- /* File attributes at open. We try to keep the .filesize field
238
+ /*
239
+ * File attributes at open. We try to keep the .size field
240
* updated if it changes (eg by writing at the end of the file).
241
*/
242
- LIBSSH2_SFTP_ATTRIBUTES attrs;
243
+ sftp_attributes attrs;
244
245
InetSocketAddress *inet;
246
247
@@ -XXX,XX +XXX,XX @@ static void ssh_state_init(BDRVSSHState *s)
248
{
249
memset(s, 0, sizeof *s);
250
s->sock = -1;
251
- s->offset = -1;
252
qemu_co_mutex_init(&s->lock);
38
}
253
}
39
254
40
@@ -XXX,XX +XXX,XX @@ static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
255
@@ -XXX,XX +XXX,XX @@ static void ssh_state_free(BDRVSSHState *s)
256
{
257
g_free(s->user);
258
259
+ if (s->attrs) {
260
+ sftp_attributes_free(s->attrs);
261
+ }
262
if (s->sftp_handle) {
263
- libssh2_sftp_close(s->sftp_handle);
264
+ sftp_close(s->sftp_handle);
265
}
266
if (s->sftp) {
267
- libssh2_sftp_shutdown(s->sftp);
268
+ sftp_free(s->sftp);
269
}
270
if (s->session) {
271
- libssh2_session_disconnect(s->session,
272
- "from qemu ssh client: "
273
- "user closed the connection");
274
- libssh2_session_free(s->session);
275
- }
276
- if (s->sock >= 0) {
277
- close(s->sock);
278
+ ssh_disconnect(s->session);
279
+ ssh_free(s->session); /* This frees s->sock */
280
}
281
}
282
283
@@ -XXX,XX +XXX,XX @@ session_error_setg(Error **errp, BDRVSSHState *s, const char *fs, ...)
284
va_end(args);
285
286
if (s->session) {
287
- char *ssh_err;
288
+ const char *ssh_err;
289
int ssh_err_code;
290
291
- /* This is not an errno. See <libssh2.h>. */
292
- ssh_err_code = libssh2_session_last_error(s->session,
293
- &ssh_err, NULL, 0);
294
- error_setg(errp, "%s: %s (libssh2 error code: %d)",
295
+ /* This is not an errno. See <libssh/libssh.h>. */
296
+ ssh_err = ssh_get_error(s->session);
297
+ ssh_err_code = ssh_get_error_code(s->session);
298
+ error_setg(errp, "%s: %s (libssh error code: %d)",
299
msg, ssh_err, ssh_err_code);
300
} else {
301
error_setg(errp, "%s", msg);
302
@@ -XXX,XX +XXX,XX @@ sftp_error_setg(Error **errp, BDRVSSHState *s, const char *fs, ...)
303
va_end(args);
304
305
if (s->sftp) {
306
- char *ssh_err;
307
+ const char *ssh_err;
308
int ssh_err_code;
309
- unsigned long sftp_err_code;
310
+ int sftp_err_code;
311
312
- /* This is not an errno. See <libssh2.h>. */
313
- ssh_err_code = libssh2_session_last_error(s->session,
314
- &ssh_err, NULL, 0);
315
- /* See <libssh2_sftp.h>. */
316
- sftp_err_code = libssh2_sftp_last_error((s)->sftp);
317
+ /* This is not an errno. See <libssh/libssh.h>. */
318
+ ssh_err = ssh_get_error(s->session);
319
+ ssh_err_code = ssh_get_error_code(s->session);
320
+ /* See <libssh/sftp.h>. */
321
+ sftp_err_code = sftp_get_error(s->sftp);
322
323
error_setg(errp,
324
- "%s: %s (libssh2 error code: %d, sftp error code: %lu)",
325
+ "%s: %s (libssh error code: %d, sftp error code: %d)",
326
msg, ssh_err, ssh_err_code, sftp_err_code);
327
} else {
328
error_setg(errp, "%s", msg);
329
@@ -XXX,XX +XXX,XX @@ sftp_error_setg(Error **errp, BDRVSSHState *s, const char *fs, ...)
330
331
static void sftp_error_trace(BDRVSSHState *s, const char *op)
332
{
333
- char *ssh_err;
334
+ const char *ssh_err;
335
int ssh_err_code;
336
- unsigned long sftp_err_code;
337
+ int sftp_err_code;
338
339
- /* This is not an errno. See <libssh2.h>. */
340
- ssh_err_code = libssh2_session_last_error(s->session,
341
- &ssh_err, NULL, 0);
342
- /* See <libssh2_sftp.h>. */
343
- sftp_err_code = libssh2_sftp_last_error((s)->sftp);
344
+ /* This is not an errno. See <libssh/libssh.h>. */
345
+ ssh_err = ssh_get_error(s->session);
346
+ ssh_err_code = ssh_get_error_code(s->session);
347
+ /* See <libssh/sftp.h>. */
348
+ sftp_err_code = sftp_get_error(s->sftp);
349
350
trace_sftp_error(op, ssh_err, ssh_err_code, sftp_err_code);
351
}
352
@@ -XXX,XX +XXX,XX @@ static void ssh_parse_filename(const char *filename, QDict *options,
353
parse_uri(filename, options, errp);
354
}
355
356
-static int check_host_key_knownhosts(BDRVSSHState *s,
357
- const char *host, int port, Error **errp)
358
+static int check_host_key_knownhosts(BDRVSSHState *s, Error **errp)
359
{
360
- const char *home;
361
- char *knh_file = NULL;
362
- LIBSSH2_KNOWNHOSTS *knh = NULL;
363
- struct libssh2_knownhost *found;
364
- int ret, r;
365
- const char *hostkey;
366
- size_t len;
367
- int type;
368
-
369
- hostkey = libssh2_session_hostkey(s->session, &len, &type);
370
- if (!hostkey) {
371
+ int ret;
372
+#ifdef HAVE_LIBSSH_0_8
373
+ enum ssh_known_hosts_e state;
374
+ int r;
375
+ ssh_key pubkey;
376
+ enum ssh_keytypes_e pubkey_type;
377
+ unsigned char *server_hash = NULL;
378
+ size_t server_hash_len;
379
+ char *fingerprint = NULL;
380
+
381
+ state = ssh_session_is_known_server(s->session);
382
+ trace_ssh_server_status(state);
383
+
384
+ switch (state) {
385
+ case SSH_KNOWN_HOSTS_OK:
386
+ /* OK */
387
+ trace_ssh_check_host_key_knownhosts();
388
+ break;
389
+ case SSH_KNOWN_HOSTS_CHANGED:
390
ret = -EINVAL;
391
- session_error_setg(errp, s, "failed to read remote host key");
392
+ r = ssh_get_server_publickey(s->session, &pubkey);
393
+ if (r == 0) {
394
+ r = ssh_get_publickey_hash(pubkey, SSH_PUBLICKEY_HASH_SHA256,
395
+ &server_hash, &server_hash_len);
396
+ pubkey_type = ssh_key_type(pubkey);
397
+ ssh_key_free(pubkey);
398
+ }
399
+ if (r == 0) {
400
+ fingerprint = ssh_get_fingerprint_hash(SSH_PUBLICKEY_HASH_SHA256,
401
+ server_hash,
402
+ server_hash_len);
403
+ ssh_clean_pubkey_hash(&server_hash);
404
+ }
405
+ if (fingerprint) {
406
+ error_setg(errp,
407
+ "host key (%s key with fingerprint %s) does not match "
408
+ "the one in known_hosts; this may be a possible attack",
409
+ ssh_key_type_to_char(pubkey_type), fingerprint);
410
+ ssh_string_free_char(fingerprint);
411
+ } else {
412
+ error_setg(errp,
413
+ "host key does not match the one in known_hosts; this "
414
+ "may be a possible attack");
415
+ }
416
goto out;
417
- }
418
-
419
- knh = libssh2_knownhost_init(s->session);
420
- if (!knh) {
421
+ case SSH_KNOWN_HOSTS_OTHER:
422
ret = -EINVAL;
423
- session_error_setg(errp, s,
424
- "failed to initialize known hosts support");
425
+ error_setg(errp,
426
+ "host key for this server not found, another type exists");
427
+ goto out;
428
+ case SSH_KNOWN_HOSTS_UNKNOWN:
429
+ ret = -EINVAL;
430
+ error_setg(errp, "no host key was found in known_hosts");
431
+ goto out;
432
+ case SSH_KNOWN_HOSTS_NOT_FOUND:
433
+ ret = -ENOENT;
434
+ error_setg(errp, "known_hosts file not found");
435
+ goto out;
436
+ case SSH_KNOWN_HOSTS_ERROR:
437
+ ret = -EINVAL;
438
+ error_setg(errp, "error while checking the host");
439
+ goto out;
440
+ default:
441
+ ret = -EINVAL;
442
+ error_setg(errp, "error while checking for known server (%d)", state);
443
goto out;
444
}
445
+#else /* !HAVE_LIBSSH_0_8 */
446
+ int state;
447
448
- home = getenv("HOME");
449
- if (home) {
450
- knh_file = g_strdup_printf("%s/.ssh/known_hosts", home);
451
- } else {
452
- knh_file = g_strdup_printf("/root/.ssh/known_hosts");
453
- }
454
-
455
- /* Read all known hosts from OpenSSH-style known_hosts file. */
456
- libssh2_knownhost_readfile(knh, knh_file, LIBSSH2_KNOWNHOST_FILE_OPENSSH);
457
+ state = ssh_is_server_known(s->session);
458
+ trace_ssh_server_status(state);
459
460
- r = libssh2_knownhost_checkp(knh, host, port, hostkey, len,
461
- LIBSSH2_KNOWNHOST_TYPE_PLAIN|
462
- LIBSSH2_KNOWNHOST_KEYENC_RAW,
463
- &found);
464
- switch (r) {
465
- case LIBSSH2_KNOWNHOST_CHECK_MATCH:
466
+ switch (state) {
467
+ case SSH_SERVER_KNOWN_OK:
468
/* OK */
469
- trace_ssh_check_host_key_knownhosts(found->key);
470
+ trace_ssh_check_host_key_knownhosts();
471
break;
472
- case LIBSSH2_KNOWNHOST_CHECK_MISMATCH:
473
+ case SSH_SERVER_KNOWN_CHANGED:
474
ret = -EINVAL;
475
- session_error_setg(errp, s,
476
- "host key does not match the one in known_hosts"
477
- " (found key %s)", found->key);
478
+ error_setg(errp,
479
+ "host key does not match the one in known_hosts; this "
480
+ "may be a possible attack");
481
goto out;
482
- case LIBSSH2_KNOWNHOST_CHECK_NOTFOUND:
483
+ case SSH_SERVER_FOUND_OTHER:
484
ret = -EINVAL;
485
- session_error_setg(errp, s, "no host key was found in known_hosts");
486
+ error_setg(errp,
487
+ "host key for this server not found, another type exists");
488
+ goto out;
489
+ case SSH_SERVER_FILE_NOT_FOUND:
490
+ ret = -ENOENT;
491
+ error_setg(errp, "known_hosts file not found");
492
goto out;
493
- case LIBSSH2_KNOWNHOST_CHECK_FAILURE:
494
+ case SSH_SERVER_NOT_KNOWN:
495
ret = -EINVAL;
496
- session_error_setg(errp, s,
497
- "failure matching the host key with known_hosts");
498
+ error_setg(errp, "no host key was found in known_hosts");
499
+ goto out;
500
+ case SSH_SERVER_ERROR:
501
+ ret = -EINVAL;
502
+ error_setg(errp, "server error");
503
goto out;
504
default:
505
ret = -EINVAL;
506
- session_error_setg(errp, s, "unknown error matching the host key"
507
- " with known_hosts (%d)", r);
508
+ error_setg(errp, "error while checking for known server (%d)", state);
509
goto out;
510
}
511
+#endif /* !HAVE_LIBSSH_0_8 */
512
513
/* known_hosts checking successful. */
514
ret = 0;
515
516
out:
517
- if (knh != NULL) {
518
- libssh2_knownhost_free(knh);
519
- }
520
- g_free(knh_file);
521
return ret;
522
}
523
524
@@ -XXX,XX +XXX,XX @@ static int compare_fingerprint(const unsigned char *fingerprint, size_t len,
525
526
static int
527
check_host_key_hash(BDRVSSHState *s, const char *hash,
528
- int hash_type, size_t fingerprint_len, Error **errp)
529
+ enum ssh_publickey_hash_type type, Error **errp)
530
{
531
- const char *fingerprint;
532
-
533
- fingerprint = libssh2_hostkey_hash(s->session, hash_type);
534
- if (!fingerprint) {
535
+ int r;
536
+ ssh_key pubkey;
537
+ unsigned char *server_hash;
538
+ size_t server_hash_len;
539
+
540
+#ifdef HAVE_LIBSSH_0_8
541
+ r = ssh_get_server_publickey(s->session, &pubkey);
542
+#else
543
+ r = ssh_get_publickey(s->session, &pubkey);
544
+#endif
545
+ if (r != SSH_OK) {
546
session_error_setg(errp, s, "failed to read remote host key");
547
return -EINVAL;
548
}
549
550
- if(compare_fingerprint((unsigned char *) fingerprint, fingerprint_len,
551
- hash) != 0) {
552
+ r = ssh_get_publickey_hash(pubkey, type, &server_hash, &server_hash_len);
553
+ ssh_key_free(pubkey);
554
+ if (r != 0) {
555
+ session_error_setg(errp, s,
556
+ "failed reading the hash of the server SSH key");
557
+ return -EINVAL;
558
+ }
559
+
560
+ r = compare_fingerprint(server_hash, server_hash_len, hash);
561
+ ssh_clean_pubkey_hash(&server_hash);
562
+ if (r != 0) {
563
error_setg(errp, "remote host key does not match host_key_check '%s'",
564
hash);
565
return -EPERM;
566
@@ -XXX,XX +XXX,XX @@ check_host_key_hash(BDRVSSHState *s, const char *hash,
41
return 0;
567
return 0;
42
}
568
}
43
569
44
-static void bdrv_child_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared)
570
-static int check_host_key(BDRVSSHState *s, const char *host, int port,
45
+static void bdrv_child_set_perm(BdrvChild *c)
571
- SshHostKeyCheck *hkc, Error **errp)
572
+static int check_host_key(BDRVSSHState *s, SshHostKeyCheck *hkc, Error **errp)
46
{
573
{
47
c->has_backup_perm = false;
574
SshHostKeyCheckMode mode;
48
575
49
- c->perm = perm;
576
@@ -XXX,XX +XXX,XX @@ static int check_host_key(BDRVSSHState *s, const char *host, int port,
50
- c->shared_perm = shared;
577
case SSH_HOST_KEY_CHECK_MODE_HASH:
578
if (hkc->u.hash.type == SSH_HOST_KEY_CHECK_HASH_TYPE_MD5) {
579
return check_host_key_hash(s, hkc->u.hash.hash,
580
- LIBSSH2_HOSTKEY_HASH_MD5, 16, errp);
581
+ SSH_PUBLICKEY_HASH_MD5, errp);
582
} else if (hkc->u.hash.type == SSH_HOST_KEY_CHECK_HASH_TYPE_SHA1) {
583
return check_host_key_hash(s, hkc->u.hash.hash,
584
- LIBSSH2_HOSTKEY_HASH_SHA1, 20, errp);
585
+ SSH_PUBLICKEY_HASH_SHA1, errp);
586
}
587
g_assert_not_reached();
588
break;
589
case SSH_HOST_KEY_CHECK_MODE_KNOWN_HOSTS:
590
- return check_host_key_knownhosts(s, host, port, errp);
591
+ return check_host_key_knownhosts(s, errp);
592
default:
593
g_assert_not_reached();
594
}
595
@@ -XXX,XX +XXX,XX @@ static int check_host_key(BDRVSSHState *s, const char *host, int port,
596
return -EINVAL;
597
}
598
599
-static int authenticate(BDRVSSHState *s, const char *user, Error **errp)
600
+static int authenticate(BDRVSSHState *s, Error **errp)
601
{
602
int r, ret;
603
- const char *userauthlist;
604
- LIBSSH2_AGENT *agent = NULL;
605
- struct libssh2_agent_publickey *identity;
606
- struct libssh2_agent_publickey *prev_identity = NULL;
607
+ int method;
608
609
- userauthlist = libssh2_userauth_list(s->session, user, strlen(user));
610
- if (strstr(userauthlist, "publickey") == NULL) {
611
+ /* Try to authenticate with the "none" method. */
612
+ r = ssh_userauth_none(s->session, NULL);
613
+ if (r == SSH_AUTH_ERROR) {
614
ret = -EPERM;
615
- error_setg(errp,
616
- "remote server does not support \"publickey\" authentication");
617
+ session_error_setg(errp, s, "failed to authenticate using none "
618
+ "authentication");
619
goto out;
620
- }
51
-
621
-
52
bdrv_set_perm(c->bs);
622
- /* Connect to ssh-agent and try each identity in turn. */
623
- agent = libssh2_agent_init(s->session);
624
- if (!agent) {
625
- ret = -EINVAL;
626
- session_error_setg(errp, s, "failed to initialize ssh-agent support");
627
- goto out;
628
- }
629
- if (libssh2_agent_connect(agent)) {
630
- ret = -ECONNREFUSED;
631
- session_error_setg(errp, s, "failed to connect to ssh-agent");
632
- goto out;
633
- }
634
- if (libssh2_agent_list_identities(agent)) {
635
- ret = -EINVAL;
636
- session_error_setg(errp, s,
637
- "failed requesting identities from ssh-agent");
638
+ } else if (r == SSH_AUTH_SUCCESS) {
639
+ /* Authenticated! */
640
+ ret = 0;
641
goto out;
642
}
643
644
- for(;;) {
645
- r = libssh2_agent_get_identity(agent, &identity, prev_identity);
646
- if (r == 1) { /* end of list */
647
- break;
648
- }
649
- if (r < 0) {
650
+ method = ssh_userauth_list(s->session, NULL);
651
+ trace_ssh_auth_methods(method);
652
+
653
+ /*
654
+ * Try to authenticate with publickey, using the ssh-agent
655
+ * if available.
656
+ */
657
+ if (method & SSH_AUTH_METHOD_PUBLICKEY) {
658
+ r = ssh_userauth_publickey_auto(s->session, NULL, NULL);
659
+ if (r == SSH_AUTH_ERROR) {
660
ret = -EINVAL;
661
- session_error_setg(errp, s,
662
- "failed to obtain identity from ssh-agent");
663
+ session_error_setg(errp, s, "failed to authenticate using "
664
+ "publickey authentication");
665
goto out;
666
- }
667
- r = libssh2_agent_userauth(agent, user, identity);
668
- if (r == 0) {
669
+ } else if (r == SSH_AUTH_SUCCESS) {
670
/* Authenticated! */
671
ret = 0;
672
goto out;
673
}
674
- /* Failed to authenticate with this identity, try the next one. */
675
- prev_identity = identity;
676
}
677
678
ret = -EPERM;
679
@@ -XXX,XX +XXX,XX @@ static int authenticate(BDRVSSHState *s, const char *user, Error **errp)
680
"and the identities held by your ssh-agent");
681
682
out:
683
- if (agent != NULL) {
684
- /* Note: libssh2 implementation implicitly calls
685
- * libssh2_agent_disconnect if necessary.
686
- */
687
- libssh2_agent_free(agent);
688
- }
689
-
690
return ret;
53
}
691
}
54
692
55
@@ -XXX,XX +XXX,XX @@ int bdrv_child_try_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared,
693
@@ -XXX,XX +XXX,XX @@ static int connect_to_ssh(BDRVSSHState *s, BlockdevOptionsSsh *opts,
56
return ret;
694
int ssh_flags, int creat_mode, Error **errp)
57
}
695
{
58
696
int r, ret;
59
- bdrv_child_set_perm(c, perm, shared);
697
- long port = 0;
60
+ bdrv_child_set_perm(c);
698
+ unsigned int port = 0;
61
699
+ int new_sock = -1;
700
701
if (opts->has_user) {
702
s->user = g_strdup(opts->user);
703
@@ -XXX,XX +XXX,XX @@ static int connect_to_ssh(BDRVSSHState *s, BlockdevOptionsSsh *opts,
704
s->inet = opts->server;
705
opts->server = NULL;
706
707
- if (qemu_strtol(s->inet->port, NULL, 10, &port) < 0) {
708
+ if (qemu_strtoui(s->inet->port, NULL, 10, &port) < 0) {
709
error_setg(errp, "Use only numeric port value");
710
ret = -EINVAL;
711
goto err;
712
}
713
714
/* Open the socket and connect. */
715
- s->sock = inet_connect_saddr(s->inet, errp);
716
- if (s->sock < 0) {
717
+ new_sock = inet_connect_saddr(s->inet, errp);
718
+ if (new_sock < 0) {
719
ret = -EIO;
720
goto err;
721
}
722
723
+ /*
724
+ * Try to disable the Nagle algorithm on TCP sockets to reduce latency,
725
+ * but do not fail if it cannot be disabled.
726
+ */
727
+ r = socket_set_nodelay(new_sock);
728
+ if (r < 0) {
729
+ warn_report("can't set TCP_NODELAY for the ssh server %s: %s",
730
+ s->inet->host, strerror(errno));
731
+ }
732
+
733
/* Create SSH session. */
734
- s->session = libssh2_session_init();
735
+ s->session = ssh_new();
736
if (!s->session) {
737
ret = -EINVAL;
738
- session_error_setg(errp, s, "failed to initialize libssh2 session");
739
+ session_error_setg(errp, s, "failed to initialize libssh session");
740
goto err;
741
}
742
743
-#if TRACE_LIBSSH2 != 0
744
- libssh2_trace(s->session, TRACE_LIBSSH2);
745
-#endif
746
+ /*
747
+ * Make sure we are in blocking mode during the connection and
748
+ * authentication phases.
749
+ */
750
+ ssh_set_blocking(s->session, 1);
751
752
- r = libssh2_session_handshake(s->session, s->sock);
753
- if (r != 0) {
754
+ r = ssh_options_set(s->session, SSH_OPTIONS_USER, s->user);
755
+ if (r < 0) {
756
+ ret = -EINVAL;
757
+ session_error_setg(errp, s,
758
+ "failed to set the user in the libssh session");
759
+ goto err;
760
+ }
761
+
762
+ r = ssh_options_set(s->session, SSH_OPTIONS_HOST, s->inet->host);
763
+ if (r < 0) {
764
+ ret = -EINVAL;
765
+ session_error_setg(errp, s,
766
+ "failed to set the host in the libssh session");
767
+ goto err;
768
+ }
769
+
770
+ if (port > 0) {
771
+ r = ssh_options_set(s->session, SSH_OPTIONS_PORT, &port);
772
+ if (r < 0) {
773
+ ret = -EINVAL;
774
+ session_error_setg(errp, s,
775
+ "failed to set the port in the libssh session");
776
+ goto err;
777
+ }
778
+ }
779
+
780
+ r = ssh_options_set(s->session, SSH_OPTIONS_COMPRESSION, "none");
781
+ if (r < 0) {
782
+ ret = -EINVAL;
783
+ session_error_setg(errp, s,
784
+ "failed to disable the compression in the libssh "
785
+ "session");
786
+ goto err;
787
+ }
788
+
789
+ /* Read ~/.ssh/config. */
790
+ r = ssh_options_parse_config(s->session, NULL);
791
+ if (r < 0) {
792
+ ret = -EINVAL;
793
+ session_error_setg(errp, s, "failed to parse ~/.ssh/config");
794
+ goto err;
795
+ }
796
+
797
+ r = ssh_options_set(s->session, SSH_OPTIONS_FD, &new_sock);
798
+ if (r < 0) {
799
+ ret = -EINVAL;
800
+ session_error_setg(errp, s,
801
+ "failed to set the socket in the libssh session");
802
+ goto err;
803
+ }
804
+ /* libssh took ownership of the socket. */
805
+ s->sock = new_sock;
806
+ new_sock = -1;
807
+
808
+ /* Connect. */
809
+ r = ssh_connect(s->session);
810
+ if (r != SSH_OK) {
811
ret = -EINVAL;
812
session_error_setg(errp, s, "failed to establish SSH session");
813
goto err;
814
}
815
816
/* Check the remote host's key against known_hosts. */
817
- ret = check_host_key(s, s->inet->host, port, opts->host_key_check, errp);
818
+ ret = check_host_key(s, opts->host_key_check, errp);
819
if (ret < 0) {
820
goto err;
821
}
822
823
/* Authenticate. */
824
- ret = authenticate(s, s->user, errp);
825
+ ret = authenticate(s, errp);
826
if (ret < 0) {
827
goto err;
828
}
829
830
/* Start SFTP. */
831
- s->sftp = libssh2_sftp_init(s->session);
832
+ s->sftp = sftp_new(s->session);
833
if (!s->sftp) {
834
- session_error_setg(errp, s, "failed to initialize sftp handle");
835
+ session_error_setg(errp, s, "failed to create sftp handle");
836
+ ret = -EINVAL;
837
+ goto err;
838
+ }
839
+
840
+ r = sftp_init(s->sftp);
841
+ if (r < 0) {
842
+ sftp_error_setg(errp, s, "failed to initialize sftp handle");
843
ret = -EINVAL;
844
goto err;
845
}
846
847
/* Open the remote file. */
848
trace_ssh_connect_to_ssh(opts->path, ssh_flags, creat_mode);
849
- s->sftp_handle = libssh2_sftp_open(s->sftp, opts->path, ssh_flags,
850
- creat_mode);
851
+ s->sftp_handle = sftp_open(s->sftp, opts->path, ssh_flags, creat_mode);
852
if (!s->sftp_handle) {
853
- session_error_setg(errp, s, "failed to open remote file '%s'",
854
- opts->path);
855
+ sftp_error_setg(errp, s, "failed to open remote file '%s'",
856
+ opts->path);
857
ret = -EINVAL;
858
goto err;
859
}
860
861
- r = libssh2_sftp_fstat(s->sftp_handle, &s->attrs);
862
- if (r < 0) {
863
+ /* Make sure the SFTP file is handled in blocking mode. */
864
+ sftp_file_set_blocking(s->sftp_handle);
865
+
866
+ s->attrs = sftp_fstat(s->sftp_handle);
867
+ if (!s->attrs) {
868
sftp_error_setg(errp, s, "failed to read file attributes");
869
return -EINVAL;
870
}
871
@@ -XXX,XX +XXX,XX @@ static int connect_to_ssh(BDRVSSHState *s, BlockdevOptionsSsh *opts,
872
return 0;
873
874
err:
875
+ if (s->attrs) {
876
+ sftp_attributes_free(s->attrs);
877
+ }
878
+ s->attrs = NULL;
879
if (s->sftp_handle) {
880
- libssh2_sftp_close(s->sftp_handle);
881
+ sftp_close(s->sftp_handle);
882
}
883
s->sftp_handle = NULL;
884
if (s->sftp) {
885
- libssh2_sftp_shutdown(s->sftp);
886
+ sftp_free(s->sftp);
887
}
888
s->sftp = NULL;
889
if (s->session) {
890
- libssh2_session_disconnect(s->session,
891
- "from qemu ssh client: "
892
- "error opening connection");
893
- libssh2_session_free(s->session);
894
+ ssh_disconnect(s->session);
895
+ ssh_free(s->session);
896
}
897
s->session = NULL;
898
+ s->sock = -1;
899
+ if (new_sock >= 0) {
900
+ close(new_sock);
901
+ }
902
903
return ret;
904
}
905
@@ -XXX,XX +XXX,XX @@ static int ssh_file_open(BlockDriverState *bs, QDict *options, int bdrv_flags,
906
907
ssh_state_init(s);
908
909
- ssh_flags = LIBSSH2_FXF_READ;
910
+ ssh_flags = 0;
911
if (bdrv_flags & BDRV_O_RDWR) {
912
- ssh_flags |= LIBSSH2_FXF_WRITE;
913
+ ssh_flags |= O_RDWR;
914
+ } else {
915
+ ssh_flags |= O_RDONLY;
916
}
917
918
opts = ssh_parse_options(options, errp);
919
@@ -XXX,XX +XXX,XX @@ static int ssh_file_open(BlockDriverState *bs, QDict *options, int bdrv_flags,
920
}
921
922
/* Go non-blocking. */
923
- libssh2_session_set_blocking(s->session, 0);
924
+ ssh_set_blocking(s->session, 0);
925
926
qapi_free_BlockdevOptionsSsh(opts);
927
928
return 0;
929
930
err:
931
- if (s->sock >= 0) {
932
- close(s->sock);
933
- }
934
- s->sock = -1;
935
-
936
qapi_free_BlockdevOptionsSsh(opts);
937
938
return ret;
939
@@ -XXX,XX +XXX,XX @@ static int ssh_grow_file(BDRVSSHState *s, int64_t offset, Error **errp)
940
{
941
ssize_t ret;
942
char c[1] = { '\0' };
943
- int was_blocking = libssh2_session_get_blocking(s->session);
944
+ int was_blocking = ssh_is_blocking(s->session);
945
946
/* offset must be strictly greater than the current size so we do
947
* not overwrite anything */
948
- assert(offset > 0 && offset > s->attrs.filesize);
949
+ assert(offset > 0 && offset > s->attrs->size);
950
951
- libssh2_session_set_blocking(s->session, 1);
952
+ ssh_set_blocking(s->session, 1);
953
954
- libssh2_sftp_seek64(s->sftp_handle, offset - 1);
955
- ret = libssh2_sftp_write(s->sftp_handle, c, 1);
956
+ sftp_seek64(s->sftp_handle, offset - 1);
957
+ ret = sftp_write(s->sftp_handle, c, 1);
958
959
- libssh2_session_set_blocking(s->session, was_blocking);
960
+ ssh_set_blocking(s->session, was_blocking);
961
962
if (ret < 0) {
963
sftp_error_setg(errp, s, "Failed to grow file");
964
return -EIO;
965
}
966
967
- s->attrs.filesize = offset;
968
+ s->attrs->size = offset;
62
return 0;
969
return 0;
63
}
970
}
971
972
@@ -XXX,XX +XXX,XX @@ static int ssh_co_create(BlockdevCreateOptions *options, Error **errp)
973
ssh_state_init(&s);
974
975
ret = connect_to_ssh(&s, opts->location,
976
- LIBSSH2_FXF_READ|LIBSSH2_FXF_WRITE|
977
- LIBSSH2_FXF_CREAT|LIBSSH2_FXF_TRUNC,
978
+ O_RDWR | O_CREAT | O_TRUNC,
979
0644, errp);
980
if (ret < 0) {
981
goto fail;
982
@@ -XXX,XX +XXX,XX @@ static int ssh_has_zero_init(BlockDriverState *bs)
983
/* Assume false, unless we can positively prove it's true. */
984
int has_zero_init = 0;
985
986
- if (s->attrs.flags & LIBSSH2_SFTP_ATTR_PERMISSIONS) {
987
- if (s->attrs.permissions & LIBSSH2_SFTP_S_IFREG) {
988
- has_zero_init = 1;
989
- }
990
+ if (s->attrs->type == SSH_FILEXFER_TYPE_REGULAR) {
991
+ has_zero_init = 1;
992
}
993
994
return has_zero_init;
995
@@ -XXX,XX +XXX,XX @@ static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
996
.co = qemu_coroutine_self()
997
};
998
999
- r = libssh2_session_block_directions(s->session);
1000
+ r = ssh_get_poll_flags(s->session);
1001
1002
- if (r & LIBSSH2_SESSION_BLOCK_INBOUND) {
1003
+ if (r & SSH_READ_PENDING) {
1004
rd_handler = restart_coroutine;
1005
}
1006
- if (r & LIBSSH2_SESSION_BLOCK_OUTBOUND) {
1007
+ if (r & SSH_WRITE_PENDING) {
1008
wr_handler = restart_coroutine;
1009
}
1010
1011
@@ -XXX,XX +XXX,XX @@ static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
1012
trace_ssh_co_yield_back(s->sock);
1013
}
1014
1015
-/* SFTP has a function `libssh2_sftp_seek64' which seeks to a position
1016
- * in the remote file. Notice that it just updates a field in the
1017
- * sftp_handle structure, so there is no network traffic and it cannot
1018
- * fail.
1019
- *
1020
- * However, `libssh2_sftp_seek64' does have a catastrophic effect on
1021
- * performance since it causes the handle to throw away all in-flight
1022
- * reads and buffered readahead data. Therefore this function tries
1023
- * to be intelligent about when to call the underlying libssh2 function.
1024
- */
1025
-#define SSH_SEEK_WRITE 0
1026
-#define SSH_SEEK_READ 1
1027
-#define SSH_SEEK_FORCE 2
1028
-
1029
-static void ssh_seek(BDRVSSHState *s, int64_t offset, int flags)
1030
-{
1031
- bool op_read = (flags & SSH_SEEK_READ) != 0;
1032
- bool force = (flags & SSH_SEEK_FORCE) != 0;
1033
-
1034
- if (force || op_read != s->offset_op_read || offset != s->offset) {
1035
- trace_ssh_seek(offset);
1036
- libssh2_sftp_seek64(s->sftp_handle, offset);
1037
- s->offset = offset;
1038
- s->offset_op_read = op_read;
1039
- }
1040
-}
1041
-
1042
static coroutine_fn int ssh_read(BDRVSSHState *s, BlockDriverState *bs,
1043
int64_t offset, size_t size,
1044
QEMUIOVector *qiov)
1045
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int ssh_read(BDRVSSHState *s, BlockDriverState *bs,
1046
1047
trace_ssh_read(offset, size);
1048
1049
- ssh_seek(s, offset, SSH_SEEK_READ);
1050
+ trace_ssh_seek(offset);
1051
+ sftp_seek64(s->sftp_handle, offset);
1052
1053
/* This keeps track of the current iovec element ('i'), where we
1054
* will write to next ('buf'), and the end of the current iovec
1055
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int ssh_read(BDRVSSHState *s, BlockDriverState *bs,
1056
buf = i->iov_base;
1057
end_of_vec = i->iov_base + i->iov_len;
1058
1059
- /* libssh2 has a hard-coded limit of 2000 bytes per request,
1060
- * although it will also do readahead behind our backs. Therefore
1061
- * we may have to do repeated reads here until we have read 'size'
1062
- * bytes.
1063
- */
1064
for (got = 0; got < size; ) {
1065
+ size_t request_read_size;
1066
again:
1067
- trace_ssh_read_buf(buf, end_of_vec - buf);
1068
- r = libssh2_sftp_read(s->sftp_handle, buf, end_of_vec - buf);
1069
- trace_ssh_read_return(r);
1070
+ /*
1071
+ * The size of SFTP packets is limited to 32K bytes, so limit
1072
+ * the amount of data requested to 16K, as libssh currently
1073
+ * does not handle multiple requests on its own.
1074
+ */
1075
+ request_read_size = MIN(end_of_vec - buf, 16384);
1076
+ trace_ssh_read_buf(buf, end_of_vec - buf, request_read_size);
1077
+ r = sftp_read(s->sftp_handle, buf, request_read_size);
1078
+ trace_ssh_read_return(r, sftp_get_error(s->sftp));
1079
1080
- if (r == LIBSSH2_ERROR_EAGAIN || r == LIBSSH2_ERROR_TIMEOUT) {
1081
+ if (r == SSH_AGAIN) {
1082
co_yield(s, bs);
1083
goto again;
1084
}
1085
- if (r < 0) {
1086
- sftp_error_trace(s, "read");
1087
- s->offset = -1;
1088
- return -EIO;
1089
- }
1090
- if (r == 0) {
1091
+ if (r == SSH_EOF || (r == 0 && sftp_get_error(s->sftp) == SSH_FX_EOF)) {
1092
/* EOF: Short read so pad the buffer with zeroes and return it. */
1093
qemu_iovec_memset(qiov, got, 0, size - got);
1094
return 0;
1095
}
1096
+ if (r <= 0) {
1097
+ sftp_error_trace(s, "read");
1098
+ return -EIO;
1099
+ }
1100
1101
got += r;
1102
buf += r;
1103
- s->offset += r;
1104
if (buf >= end_of_vec && got < size) {
1105
i++;
1106
buf = i->iov_base;
1107
@@ -XXX,XX +XXX,XX @@ static int ssh_write(BDRVSSHState *s, BlockDriverState *bs,
1108
1109
trace_ssh_write(offset, size);
1110
1111
- ssh_seek(s, offset, SSH_SEEK_WRITE);
1112
+ trace_ssh_seek(offset);
1113
+ sftp_seek64(s->sftp_handle, offset);
1114
1115
/* This keeps track of the current iovec element ('i'), where we
1116
* will read from next ('buf'), and the end of the current iovec
1117
@@ -XXX,XX +XXX,XX @@ static int ssh_write(BDRVSSHState *s, BlockDriverState *bs,
1118
end_of_vec = i->iov_base + i->iov_len;
1119
1120
for (written = 0; written < size; ) {
1121
+ size_t request_write_size;
1122
again:
1123
- trace_ssh_write_buf(buf, end_of_vec - buf);
1124
- r = libssh2_sftp_write(s->sftp_handle, buf, end_of_vec - buf);
1125
- trace_ssh_write_return(r);
1126
+ /*
1127
+ * Avoid too large data packets, as libssh currently does not
1128
+ * handle multiple requests on its own.
1129
+ */
1130
+ request_write_size = MIN(end_of_vec - buf, 131072);
1131
+ trace_ssh_write_buf(buf, end_of_vec - buf, request_write_size);
1132
+ r = sftp_write(s->sftp_handle, buf, request_write_size);
1133
+ trace_ssh_write_return(r, sftp_get_error(s->sftp));
1134
1135
- if (r == LIBSSH2_ERROR_EAGAIN || r == LIBSSH2_ERROR_TIMEOUT) {
1136
+ if (r == SSH_AGAIN) {
1137
co_yield(s, bs);
1138
goto again;
1139
}
1140
if (r < 0) {
1141
sftp_error_trace(s, "write");
1142
- s->offset = -1;
1143
return -EIO;
1144
}
1145
- /* The libssh2 API is very unclear about this. A comment in
1146
- * the code says "nothing was acked, and no EAGAIN was
1147
- * received!" which apparently means that no data got sent
1148
- * out, and the underlying channel didn't return any EAGAIN
1149
- * indication. I think this is a bug in either libssh2 or
1150
- * OpenSSH (server-side). In any case, forcing a seek (to
1151
- * discard libssh2 internal buffers), and then trying again
1152
- * works for me.
1153
- */
1154
- if (r == 0) {
1155
- ssh_seek(s, offset + written, SSH_SEEK_WRITE|SSH_SEEK_FORCE);
1156
- co_yield(s, bs);
1157
- goto again;
1158
- }
1159
1160
written += r;
1161
buf += r;
1162
- s->offset += r;
1163
if (buf >= end_of_vec && written < size) {
1164
i++;
1165
buf = i->iov_base;
1166
end_of_vec = i->iov_base + i->iov_len;
1167
}
1168
1169
- if (offset + written > s->attrs.filesize)
1170
- s->attrs.filesize = offset + written;
1171
+ if (offset + written > s->attrs->size) {
1172
+ s->attrs->size = offset + written;
1173
+ }
1174
}
1175
1176
return 0;
1177
@@ -XXX,XX +XXX,XX @@ static void unsafe_flush_warning(BDRVSSHState *s, const char *what)
1178
}
1179
}
1180
1181
-#ifdef HAS_LIBSSH2_SFTP_FSYNC
1182
+#ifdef HAVE_LIBSSH_0_8
1183
1184
static coroutine_fn int ssh_flush(BDRVSSHState *s, BlockDriverState *bs)
1185
{
1186
int r;
1187
1188
trace_ssh_flush();
1189
+
1190
+ if (!sftp_extension_supported(s->sftp, "fsync@openssh.com", "1")) {
1191
+ unsafe_flush_warning(s, "OpenSSH >= 6.3");
1192
+ return 0;
1193
+ }
1194
again:
1195
- r = libssh2_sftp_fsync(s->sftp_handle);
1196
- if (r == LIBSSH2_ERROR_EAGAIN || r == LIBSSH2_ERROR_TIMEOUT) {
1197
+ r = sftp_fsync(s->sftp_handle);
1198
+ if (r == SSH_AGAIN) {
1199
co_yield(s, bs);
1200
goto again;
1201
}
1202
- if (r == LIBSSH2_ERROR_SFTP_PROTOCOL &&
1203
- libssh2_sftp_last_error(s->sftp) == LIBSSH2_FX_OP_UNSUPPORTED) {
1204
- unsafe_flush_warning(s, "OpenSSH >= 6.3");
1205
- return 0;
1206
- }
1207
if (r < 0) {
1208
sftp_error_trace(s, "fsync");
1209
return -EIO;
1210
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int ssh_co_flush(BlockDriverState *bs)
1211
return ret;
1212
}
1213
1214
-#else /* !HAS_LIBSSH2_SFTP_FSYNC */
1215
+#else /* !HAVE_LIBSSH_0_8 */
1216
1217
static coroutine_fn int ssh_co_flush(BlockDriverState *bs)
1218
{
1219
BDRVSSHState *s = bs->opaque;
1220
1221
- unsafe_flush_warning(s, "libssh2 >= 1.4.4");
1222
+ unsafe_flush_warning(s, "libssh >= 0.8.0");
1223
return 0;
1224
}
1225
1226
-#endif /* !HAS_LIBSSH2_SFTP_FSYNC */
1227
+#endif /* !HAVE_LIBSSH_0_8 */
1228
1229
static int64_t ssh_getlength(BlockDriverState *bs)
1230
{
1231
BDRVSSHState *s = bs->opaque;
1232
int64_t length;
1233
1234
- /* Note we cannot make a libssh2 call here. */
1235
- length = (int64_t) s->attrs.filesize;
1236
+ /* Note we cannot make a libssh call here. */
1237
+ length = (int64_t) s->attrs->size;
1238
trace_ssh_getlength(length);
1239
1240
return length;
1241
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn ssh_co_truncate(BlockDriverState *bs, int64_t offset,
1242
return -ENOTSUP;
1243
}
1244
1245
- if (offset < s->attrs.filesize) {
1246
+ if (offset < s->attrs->size) {
1247
error_setg(errp, "ssh driver does not support shrinking files");
1248
return -ENOTSUP;
1249
}
1250
1251
- if (offset == s->attrs.filesize) {
1252
+ if (offset == s->attrs->size) {
1253
return 0;
1254
}
1255
1256
@@ -XXX,XX +XXX,XX @@ static void bdrv_ssh_init(void)
1257
{
1258
int r;
1259
1260
- r = libssh2_init(0);
1261
+ r = ssh_init();
1262
if (r != 0) {
1263
- fprintf(stderr, "libssh2 initialization failed, %d\n", r);
1264
+ fprintf(stderr, "libssh initialization failed, %d\n", r);
1265
exit(EXIT_FAILURE);
1266
}
1267
1268
+#if TRACE_LIBSSH != 0
1269
+ ssh_set_log_level(TRACE_LIBSSH);
1270
+#endif
1271
+
1272
bdrv_register(&bdrv_ssh);
1273
}
1274
1275
diff --git a/.travis.yml b/.travis.yml
1276
index XXXXXXX..XXXXXXX 100644
1277
--- a/.travis.yml
1278
+++ b/.travis.yml
1279
@@ -XXX,XX +XXX,XX @@ addons:
1280
- libseccomp-dev
1281
- libspice-protocol-dev
1282
- libspice-server-dev
1283
- - libssh2-1-dev
1284
+ - libssh-dev
1285
- liburcu-dev
1286
- libusb-1.0-0-dev
1287
- libvte-2.91-dev
1288
@@ -XXX,XX +XXX,XX @@ matrix:
1289
- libseccomp-dev
1290
- libspice-protocol-dev
1291
- libspice-server-dev
1292
- - libssh2-1-dev
1293
+ - libssh-dev
1294
- liburcu-dev
1295
- libusb-1.0-0-dev
1296
- libvte-2.91-dev
1297
diff --git a/block/trace-events b/block/trace-events
1298
index XXXXXXX..XXXXXXX 100644
1299
--- a/block/trace-events
1300
+++ b/block/trace-events
1301
@@ -XXX,XX +XXX,XX @@ nbd_client_connect_success(const char *export_name) "export '%s'"
1302
# ssh.c
1303
ssh_restart_coroutine(void *co) "co=%p"
1304
ssh_flush(void) "fsync"
1305
-ssh_check_host_key_knownhosts(const char *key) "host key OK: %s"
1306
+ssh_check_host_key_knownhosts(void) "host key OK"
1307
ssh_connect_to_ssh(char *path, int flags, int mode) "opening file %s flags=0x%x creat_mode=0%o"
1308
ssh_co_yield(int sock, void *rd_handler, void *wr_handler) "s->sock=%d rd_handler=%p wr_handler=%p"
1309
ssh_co_yield_back(int sock) "s->sock=%d - back"
1310
ssh_getlength(int64_t length) "length=%" PRIi64
1311
ssh_co_create_opts(uint64_t size) "total_size=%" PRIu64
1312
ssh_read(int64_t offset, size_t size) "offset=%" PRIi64 " size=%zu"
1313
-ssh_read_buf(void *buf, size_t size) "sftp_read buf=%p size=%zu"
1314
-ssh_read_return(ssize_t ret) "sftp_read returned %zd"
1315
+ssh_read_buf(void *buf, size_t size, size_t actual_size) "sftp_read buf=%p size=%zu (actual size=%zu)"
1316
+ssh_read_return(ssize_t ret, int sftp_err) "sftp_read returned %zd (sftp error=%d)"
1317
ssh_write(int64_t offset, size_t size) "offset=%" PRIi64 " size=%zu"
1318
-ssh_write_buf(void *buf, size_t size) "sftp_write buf=%p size=%zu"
1319
-ssh_write_return(ssize_t ret) "sftp_write returned %zd"
1320
+ssh_write_buf(void *buf, size_t size, size_t actual_size) "sftp_write buf=%p size=%zu (actual size=%zu)"
1321
+ssh_write_return(ssize_t ret, int sftp_err) "sftp_write returned %zd (sftp error=%d)"
1322
ssh_seek(int64_t offset) "seeking to offset=%" PRIi64
1323
+ssh_auth_methods(int methods) "auth methods=0x%x"
1324
+ssh_server_status(int status) "server status=%d"
1325
1326
# curl.c
1327
curl_timer_cb(long timeout_ms) "timer callback timeout_ms %ld"
1328
@@ -XXX,XX +XXX,XX @@ sheepdog_snapshot_create(const char *sn_name, const char *id) "%s %s"
1329
sheepdog_snapshot_create_inode(const char *name, uint32_t snap, uint32_t vdi) "s->inode: name %s snap_id 0x%" PRIx32 " vdi 0x%" PRIx32
1330
1331
# ssh.c
1332
-sftp_error(const char *op, const char *ssh_err, int ssh_err_code, unsigned long sftp_err_code) "%s failed: %s (libssh2 error code: %d, sftp error code: %lu)"
1333
+sftp_error(const char *op, const char *ssh_err, int ssh_err_code, int sftp_err_code) "%s failed: %s (libssh error code: %d, sftp error code: %d)"
1334
diff --git a/docs/qemu-block-drivers.texi b/docs/qemu-block-drivers.texi
1335
index XXXXXXX..XXXXXXX 100644
1336
--- a/docs/qemu-block-drivers.texi
1337
+++ b/docs/qemu-block-drivers.texi
1338
@@ -XXX,XX +XXX,XX @@ print a warning when @code{fsync} is not supported:
1339
1340
warning: ssh server @code{ssh.example.com:22} does not support fsync
1341
1342
-With sufficiently new versions of libssh2 and OpenSSH, @code{fsync} is
1343
+With sufficiently new versions of libssh and OpenSSH, @code{fsync} is
1344
supported.
1345
1346
@node disk_images_nvme
1347
diff --git a/tests/docker/dockerfiles/debian-win32-cross.docker b/tests/docker/dockerfiles/debian-win32-cross.docker
1348
index XXXXXXX..XXXXXXX 100644
1349
--- a/tests/docker/dockerfiles/debian-win32-cross.docker
1350
+++ b/tests/docker/dockerfiles/debian-win32-cross.docker
1351
@@ -XXX,XX +XXX,XX @@ RUN DEBIAN_FRONTEND=noninteractive eatmydata \
1352
mxe-$TARGET-w64-mingw32.shared-curl \
1353
mxe-$TARGET-w64-mingw32.shared-glib \
1354
mxe-$TARGET-w64-mingw32.shared-libgcrypt \
1355
- mxe-$TARGET-w64-mingw32.shared-libssh2 \
1356
mxe-$TARGET-w64-mingw32.shared-libusb1 \
1357
mxe-$TARGET-w64-mingw32.shared-lzo \
1358
mxe-$TARGET-w64-mingw32.shared-nettle \
1359
diff --git a/tests/docker/dockerfiles/debian-win64-cross.docker b/tests/docker/dockerfiles/debian-win64-cross.docker
1360
index XXXXXXX..XXXXXXX 100644
1361
--- a/tests/docker/dockerfiles/debian-win64-cross.docker
1362
+++ b/tests/docker/dockerfiles/debian-win64-cross.docker
1363
@@ -XXX,XX +XXX,XX @@ RUN DEBIAN_FRONTEND=noninteractive eatmydata \
1364
mxe-$TARGET-w64-mingw32.shared-curl \
1365
mxe-$TARGET-w64-mingw32.shared-glib \
1366
mxe-$TARGET-w64-mingw32.shared-libgcrypt \
1367
- mxe-$TARGET-w64-mingw32.shared-libssh2 \
1368
mxe-$TARGET-w64-mingw32.shared-libusb1 \
1369
mxe-$TARGET-w64-mingw32.shared-lzo \
1370
mxe-$TARGET-w64-mingw32.shared-nettle \
1371
diff --git a/tests/docker/dockerfiles/fedora.docker b/tests/docker/dockerfiles/fedora.docker
1372
index XXXXXXX..XXXXXXX 100644
1373
--- a/tests/docker/dockerfiles/fedora.docker
1374
+++ b/tests/docker/dockerfiles/fedora.docker
1375
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
1376
libpng-devel \
1377
librbd-devel \
1378
libseccomp-devel \
1379
- libssh2-devel \
1380
+ libssh-devel \
1381
libubsan \
1382
libusbx-devel \
1383
libxml2-devel \
1384
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
1385
mingw32-gtk3 \
1386
mingw32-libjpeg-turbo \
1387
mingw32-libpng \
1388
- mingw32-libssh2 \
1389
mingw32-libtasn1 \
1390
mingw32-nettle \
1391
mingw32-pixman \
1392
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
1393
mingw64-gtk3 \
1394
mingw64-libjpeg-turbo \
1395
mingw64-libpng \
1396
- mingw64-libssh2 \
1397
mingw64-libtasn1 \
1398
mingw64-nettle \
1399
mingw64-pixman \
1400
diff --git a/tests/docker/dockerfiles/ubuntu.docker b/tests/docker/dockerfiles/ubuntu.docker
1401
index XXXXXXX..XXXXXXX 100644
1402
--- a/tests/docker/dockerfiles/ubuntu.docker
1403
+++ b/tests/docker/dockerfiles/ubuntu.docker
1404
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES flex bison \
1405
libsnappy-dev \
1406
libspice-protocol-dev \
1407
libspice-server-dev \
1408
- libssh2-1-dev \
1409
+ libssh-dev \
1410
libusb-1.0-0-dev \
1411
libusbredirhost-dev \
1412
libvdeplug-dev \
1413
diff --git a/tests/docker/dockerfiles/ubuntu1804.docker b/tests/docker/dockerfiles/ubuntu1804.docker
1414
index XXXXXXX..XXXXXXX 100644
1415
--- a/tests/docker/dockerfiles/ubuntu1804.docker
1416
+++ b/tests/docker/dockerfiles/ubuntu1804.docker
1417
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES flex bison \
1418
libsnappy-dev \
1419
libspice-protocol-dev \
1420
libspice-server-dev \
1421
- libssh2-1-dev \
1422
+ libssh-dev \
1423
libusb-1.0-0-dev \
1424
libusbredirhost-dev \
1425
libvdeplug-dev \
1426
diff --git a/tests/qemu-iotests/207 b/tests/qemu-iotests/207
1427
index XXXXXXX..XXXXXXX 100755
1428
--- a/tests/qemu-iotests/207
1429
+++ b/tests/qemu-iotests/207
1430
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('t.img') as disk_path, \
1431
1432
iotests.img_info_log(remote_path)
1433
1434
- md5_key = subprocess.check_output(
1435
- 'ssh-keyscan -t rsa 127.0.0.1 2>/dev/null | grep -v "\\^#" | ' +
1436
- 'cut -d" " -f3 | base64 -d | md5sum -b | cut -d" " -f1',
1437
- shell=True).rstrip().decode('ascii')
1438
+ keys = subprocess.check_output(
1439
+ 'ssh-keyscan 127.0.0.1 2>/dev/null | grep -v "\\^#" | ' +
1440
+ 'cut -d" " -f3',
1441
+ shell=True).rstrip().decode('ascii').split('\n')
1442
+
1443
+ # Mappings of base64 representations to digests
1444
+ md5_keys = {}
1445
+ sha1_keys = {}
1446
+
1447
+ for key in keys:
1448
+ md5_keys[key] = subprocess.check_output(
1449
+ 'echo %s | base64 -d | md5sum -b | cut -d" " -f1' % key,
1450
+ shell=True).rstrip().decode('ascii')
1451
+
1452
+ sha1_keys[key] = subprocess.check_output(
1453
+ 'echo %s | base64 -d | sha1sum -b | cut -d" " -f1' % key,
1454
+ shell=True).rstrip().decode('ascii')
1455
1456
vm.launch()
1457
+
1458
+ # Find correct key first
1459
+ matching_key = None
1460
+ for key in keys:
1461
+ result = vm.qmp('blockdev-add',
1462
+ driver='ssh', node_name='node0', path=disk_path,
1463
+ server={
1464
+ 'host': '127.0.0.1',
1465
+ 'port': '22',
1466
+ }, host_key_check={
1467
+ 'mode': 'hash',
1468
+ 'type': 'md5',
1469
+ 'hash': md5_keys[key],
1470
+ })
1471
+
1472
+ if 'error' not in result:
1473
+ vm.qmp('blockdev-del', node_name='node0')
1474
+ matching_key = key
1475
+ break
1476
+
1477
+ if matching_key is None:
1478
+ vm.shutdown()
1479
+ iotests.notrun('Did not find a key that fits 127.0.0.1')
1480
+
1481
blockdev_create(vm, { 'driver': 'ssh',
1482
'location': {
1483
'path': disk_path,
1484
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('t.img') as disk_path, \
1485
'host-key-check': {
1486
'mode': 'hash',
1487
'type': 'md5',
1488
- 'hash': md5_key,
1489
+ 'hash': md5_keys[matching_key],
1490
}
1491
},
1492
'size': 8388608 })
1493
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('t.img') as disk_path, \
1494
1495
iotests.img_info_log(remote_path)
1496
1497
- sha1_key = subprocess.check_output(
1498
- 'ssh-keyscan -t rsa 127.0.0.1 2>/dev/null | grep -v "\\^#" | ' +
1499
- 'cut -d" " -f3 | base64 -d | sha1sum -b | cut -d" " -f1',
1500
- shell=True).rstrip().decode('ascii')
1501
-
1502
vm.launch()
1503
blockdev_create(vm, { 'driver': 'ssh',
1504
'location': {
1505
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('t.img') as disk_path, \
1506
'host-key-check': {
1507
'mode': 'hash',
1508
'type': 'sha1',
1509
- 'hash': sha1_key,
1510
+ 'hash': sha1_keys[matching_key],
1511
}
1512
},
1513
'size': 4194304 })
1514
diff --git a/tests/qemu-iotests/207.out b/tests/qemu-iotests/207.out
1515
index XXXXXXX..XXXXXXX 100644
1516
--- a/tests/qemu-iotests/207.out
1517
+++ b/tests/qemu-iotests/207.out
1518
@@ -XXX,XX +XXX,XX @@ virtual size: 4 MiB (4194304 bytes)
1519
1520
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "ssh", "location": {"host-key-check": {"mode": "none"}, "path": "/this/is/not/an/existing/path", "server": {"host": "127.0.0.1", "port": "22"}}, "size": 4194304}}}
1521
{"return": {}}
1522
-Job failed: failed to open remote file '/this/is/not/an/existing/path': Failed opening remote file (libssh2 error code: -31)
1523
+Job failed: failed to open remote file '/this/is/not/an/existing/path': SFTP server: No such file (libssh error code: 1, sftp error code: 2)
1524
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
1525
{"return": {}}
1526
64
--
1527
--
65
2.29.2
1528
2.21.0
66
1529
67
1530
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
The only users of this thing are:
4
1. bdrv_child_try_set_perm, to ignore failures when loosening restrictions
5
2. assertion in bdrv_replace_child
6
3. assertion in bdrv_inactivate_recurse
7
8
Assertions alone are not enough reason to overcomplicate the permission
9
update system. So, look at bdrv_child_try_set_perm.
10
11
We are interested in tighten_restrictions only on failure. But on
12
failure this field is not reliable: we may fail in the middle of
13
a permission update, some nodes are not touched, and we don't know whether
14
their permissions should be tightened or not. So, we rely on the fact that if we
15
loosen restrictions on some node (or BdrvChild), we'll not tighten
16
restrictions in the whole subtree as part of this update (assertions 2
17
and 3 rely on this fact as well). And, if we rely on this fact anyway,
18
we can just check it at the top and avoid passing an additional pointer through
19
the whole recursive infrastructure.
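
(Editor's illustration, not part of the original message: the "check it on top"
boils down to one comparison against the child's current permissions, as done
in bdrv_child_try_set_perm() in the hunk below.)

    /* Sketch: the update tightens restrictions iff it requests a permission
     * the child does not hold yet, or stops sharing one it currently shares. */
    bool tightens = (perm & ~c->perm) || (c->shared_perm & ~shared);
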
20
21
Note also that further patches will fix real bugs in the permission update
22
system, so now is a good time to simplify it, as a help for further
23
refactorings.
24
25
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
26
Message-Id: <20201106124241.16950-8-vsementsov@virtuozzo.com>
27
[mreitz: Fixed rebase conflict]
28
Signed-off-by: Max Reitz <mreitz@redhat.com>
29
---
30
block.c | 89 +++++++++++----------------------------------------------
31
1 file changed, 17 insertions(+), 72 deletions(-)
32
33
diff --git a/block.c b/block.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/block.c
36
+++ b/block.c
37
@@ -XXX,XX +XXX,XX @@ static int bdrv_fill_options(QDict **options, const char *filename,
38
39
static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
40
uint64_t perm, uint64_t shared,
41
- GSList *ignore_children,
42
- bool *tighten_restrictions, Error **errp);
43
+ GSList *ignore_children, Error **errp);
44
static void bdrv_child_abort_perm_update(BdrvChild *c);
45
static void bdrv_child_set_perm(BdrvChild *c);
46
47
@@ -XXX,XX +XXX,XX @@ static void bdrv_child_perm(BlockDriverState *bs, BlockDriverState *child_bs,
48
* permissions of all its parents. This involves checking whether all necessary
49
* permission changes to child nodes can be performed.
50
*
51
- * Will set *tighten_restrictions to true if and only if new permissions have to
52
- * be taken or currently shared permissions are to be unshared. Otherwise,
53
- * errors are not fatal as long as the caller accepts that the restrictions
54
- * remain tighter than they need to be. The caller still has to abort the
55
- * transaction.
56
- * @tighten_restrictions cannot be used together with @q: When reopening, we may
57
- * encounter fatal errors even though no restrictions are to be tightened. For
58
- * example, changing a node from RW to RO will fail if the WRITE permission is
59
- * to be kept.
60
- *
61
* A call to this function must always be followed by a call to bdrv_set_perm()
62
* or bdrv_abort_perm_update().
63
*/
64
static int bdrv_check_perm(BlockDriverState *bs, BlockReopenQueue *q,
65
uint64_t cumulative_perms,
66
uint64_t cumulative_shared_perms,
67
- GSList *ignore_children,
68
- bool *tighten_restrictions, Error **errp)
69
+ GSList *ignore_children, Error **errp)
70
{
71
BlockDriver *drv = bs->drv;
72
BdrvChild *c;
73
int ret;
74
75
- assert(!q || !tighten_restrictions);
76
-
77
- if (tighten_restrictions) {
78
- uint64_t current_perms, current_shared;
79
- uint64_t added_perms, removed_shared_perms;
80
-
81
- bdrv_get_cumulative_perm(bs, &current_perms, &current_shared);
82
-
83
- added_perms = cumulative_perms & ~current_perms;
84
- removed_shared_perms = current_shared & ~cumulative_shared_perms;
85
-
86
- *tighten_restrictions = added_perms || removed_shared_perms;
87
- }
88
-
89
/* Write permissions never work with read-only images */
90
if ((cumulative_perms & (BLK_PERM_WRITE | BLK_PERM_WRITE_UNCHANGED)) &&
91
!bdrv_is_writable_after_reopen(bs, q))
92
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_perm(BlockDriverState *bs, BlockReopenQueue *q,
93
/* Check all children */
94
QLIST_FOREACH(c, &bs->children, next) {
95
uint64_t cur_perm, cur_shared;
96
- bool child_tighten_restr;
97
98
bdrv_child_perm(bs, c->bs, c, c->role, q,
99
cumulative_perms, cumulative_shared_perms,
100
&cur_perm, &cur_shared);
101
ret = bdrv_child_check_perm(c, q, cur_perm, cur_shared, ignore_children,
102
- tighten_restrictions ? &child_tighten_restr
103
- : NULL,
104
errp);
105
- if (tighten_restrictions) {
106
- *tighten_restrictions |= child_tighten_restr;
107
- }
108
if (ret < 0) {
109
return ret;
110
}
111
@@ -XXX,XX +XXX,XX @@ char *bdrv_perm_names(uint64_t perm)
112
* set, the BdrvChild objects in this list are ignored in the calculations;
113
* this allows checking permission updates for an existing reference.
114
*
115
- * See bdrv_check_perm() for the semantics of @tighten_restrictions.
116
- *
117
* Needs to be followed by a call to either bdrv_set_perm() or
118
* bdrv_abort_perm_update(). */
119
static int bdrv_check_update_perm(BlockDriverState *bs, BlockReopenQueue *q,
120
uint64_t new_used_perm,
121
uint64_t new_shared_perm,
122
GSList *ignore_children,
123
- bool *tighten_restrictions,
124
Error **errp)
125
{
126
BdrvChild *c;
127
uint64_t cumulative_perms = new_used_perm;
128
uint64_t cumulative_shared_perms = new_shared_perm;
129
130
- assert(!q || !tighten_restrictions);
131
132
/* There is no reason why anyone couldn't tolerate write_unchanged */
133
assert(new_shared_perm & BLK_PERM_WRITE_UNCHANGED);
134
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_update_perm(BlockDriverState *bs, BlockReopenQueue *q,
135
char *user = bdrv_child_user_desc(c);
136
char *perm_names = bdrv_perm_names(new_used_perm & ~c->shared_perm);
137
138
- if (tighten_restrictions) {
139
- *tighten_restrictions = true;
140
- }
141
-
142
error_setg(errp, "Conflicts with use by %s as '%s', which does not "
143
"allow '%s' on %s",
144
user, c->name, perm_names, bdrv_get_node_name(c->bs));
145
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_update_perm(BlockDriverState *bs, BlockReopenQueue *q,
146
char *user = bdrv_child_user_desc(c);
147
char *perm_names = bdrv_perm_names(c->perm & ~new_shared_perm);
148
149
- if (tighten_restrictions) {
150
- *tighten_restrictions = true;
151
- }
152
-
153
error_setg(errp, "Conflicts with use by %s as '%s', which uses "
154
"'%s' on %s",
155
user, c->name, perm_names, bdrv_get_node_name(c->bs));
156
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_update_perm(BlockDriverState *bs, BlockReopenQueue *q,
157
}
158
159
return bdrv_check_perm(bs, q, cumulative_perms, cumulative_shared_perms,
160
- ignore_children, tighten_restrictions, errp);
161
+ ignore_children, errp);
162
}
163
164
/* Needs to be followed by a call to either bdrv_child_set_perm() or
165
* bdrv_child_abort_perm_update(). */
166
static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
167
uint64_t perm, uint64_t shared,
168
- GSList *ignore_children,
169
- bool *tighten_restrictions, Error **errp)
170
+ GSList *ignore_children, Error **errp)
171
{
172
int ret;
173
174
ignore_children = g_slist_prepend(g_slist_copy(ignore_children), c);
175
- ret = bdrv_check_update_perm(c->bs, q, perm, shared, ignore_children,
176
- tighten_restrictions, errp);
177
+ ret = bdrv_check_update_perm(c->bs, q, perm, shared, ignore_children, errp);
178
g_slist_free(ignore_children);
179
180
if (ret < 0) {
181
@@ -XXX,XX +XXX,XX @@ static void bdrv_child_abort_perm_update(BdrvChild *c)
182
bdrv_abort_perm_update(c->bs);
183
}
184
185
-static int bdrv_refresh_perms(BlockDriverState *bs, bool *tighten_restrictions,
186
- Error **errp)
187
+static int bdrv_refresh_perms(BlockDriverState *bs, Error **errp)
188
{
189
int ret;
190
uint64_t perm, shared_perm;
191
192
bdrv_get_cumulative_perm(bs, &perm, &shared_perm);
193
- ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL,
194
- tighten_restrictions, errp);
195
+ ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL, errp);
196
if (ret < 0) {
197
bdrv_abort_perm_update(bs);
198
return ret;
199
@@ -XXX,XX +XXX,XX @@ int bdrv_child_try_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared,
200
{
201
Error *local_err = NULL;
202
int ret;
203
- bool tighten_restrictions;
204
205
- ret = bdrv_child_check_perm(c, NULL, perm, shared, NULL,
206
- &tighten_restrictions, &local_err);
207
+ ret = bdrv_child_check_perm(c, NULL, perm, shared, NULL, &local_err);
208
if (ret < 0) {
209
bdrv_child_abort_perm_update(c);
210
- if (tighten_restrictions) {
211
+ if ((perm & ~c->perm) || (c->shared_perm & ~shared)) {
212
+ /* tighten permissions */
213
error_propagate(errp, local_err);
214
} else {
215
/*
216
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_child(BdrvChild *child, BlockDriverState *new_bs)
217
}
218
219
if (old_bs) {
220
- bool tighten_restrictions;
221
-
222
/*
223
* Update permissions for old node. We're just taking a parent away, so
224
* we're loosening restrictions. Errors of permission update are not
225
* fatal in this case, ignore them.
226
*/
227
- bdrv_refresh_perms(old_bs, &tighten_restrictions, NULL);
228
- assert(tighten_restrictions == false);
229
+ bdrv_refresh_perms(old_bs, NULL);
230
231
/* When the parent requiring a non-default AioContext is removed, the
232
* node moves back to the main AioContext */
233
@@ -XXX,XX +XXX,XX @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
234
Error *local_err = NULL;
235
int ret;
236
237
- ret = bdrv_check_update_perm(child_bs, NULL, perm, shared_perm, NULL, NULL,
238
- errp);
239
+ ret = bdrv_check_update_perm(child_bs, NULL, perm, shared_perm, NULL, errp);
240
if (ret < 0) {
241
bdrv_abort_perm_update(child_bs);
242
bdrv_unref(child_bs);
243
@@ -XXX,XX +XXX,XX @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
244
QTAILQ_FOREACH(bs_entry, bs_queue, entry) {
245
BDRVReopenState *state = &bs_entry->state;
246
ret = bdrv_check_perm(state->bs, bs_queue, state->perm,
247
- state->shared_perm, NULL, NULL, errp);
248
+ state->shared_perm, NULL, errp);
249
if (ret < 0) {
250
goto cleanup_perm;
251
}
252
@@ -XXX,XX +XXX,XX @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
253
bs_queue, state->perm, state->shared_perm,
254
&nperm, &nshared);
255
ret = bdrv_check_update_perm(state->new_backing_bs, NULL,
256
- nperm, nshared, NULL, NULL, errp);
257
+ nperm, nshared, NULL, errp);
258
if (ret < 0) {
259
goto cleanup_perm;
260
}
261
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_node_common(BlockDriverState *from,
262
263
/* Check whether the required permissions can be granted on @to, ignoring
264
* all BdrvChild in @list so that they can't block themselves. */
265
- ret = bdrv_check_update_perm(to, NULL, perm, shared, list, NULL, errp);
266
+ ret = bdrv_check_update_perm(to, NULL, perm, shared, list, errp);
267
if (ret < 0) {
268
bdrv_abort_perm_update(to);
269
goto out;
270
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_invalidate_cache(BlockDriverState *bs, Error **errp)
271
*/
272
if (bs->open_flags & BDRV_O_INACTIVE) {
273
bs->open_flags &= ~BDRV_O_INACTIVE;
274
- ret = bdrv_refresh_perms(bs, NULL, errp);
275
+ ret = bdrv_refresh_perms(bs, errp);
276
if (ret < 0) {
277
bs->open_flags |= BDRV_O_INACTIVE;
278
return ret;
279
@@ -XXX,XX +XXX,XX @@ static bool bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
280
static int bdrv_inactivate_recurse(BlockDriverState *bs)
281
{
282
BdrvChild *child, *parent;
283
- bool tighten_restrictions;
284
int ret;
285
286
if (!bs->drv) {
287
@@ -XXX,XX +XXX,XX @@ static int bdrv_inactivate_recurse(BlockDriverState *bs)
288
* We only tried to loosen restrictions, so errors are not fatal, ignore
289
* them.
290
*/
291
- bdrv_refresh_perms(bs, &tighten_restrictions, NULL);
292
- assert(tighten_restrictions == false);
293
+ bdrv_refresh_perms(bs, NULL);
294
295
/* Recursively inactivate children */
296
QLIST_FOREACH(child, &bs->children, next) {
297
--
298
2.29.2
299
300
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
1. BDRV_REQ_NO_SERIALISING does not exist anymore, so don't mention it.
4
5
2. We are going to add one more user of BDRV_REQ_SERIALISING, so
6
the comment about backup becomes a bit confusing here. The use case in
7
backup is documented in block/backup.c, so let's just drop
8
duplication here.
9
10
3. The fact that BDRV_REQ_SERIALISING is only for write requests is
11
omitted. Add a note.
12
13
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
14
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
15
Reviewed-by: Alberto Garcia <berto@igalia.com>
16
Message-Id: <20201021145859.11201-2-vsementsov@virtuozzo.com>
17
Signed-off-by: Max Reitz <mreitz@redhat.com>
18
---
19
include/block/block.h | 11 +----------
20
1 file changed, 1 insertion(+), 10 deletions(-)
21
22
diff --git a/include/block/block.h b/include/block/block.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/include/block/block.h
25
+++ b/include/block/block.h
26
@@ -XXX,XX +XXX,XX @@ typedef enum {
27
* content. */
28
BDRV_REQ_WRITE_UNCHANGED = 0x40,
29
30
- /*
31
- * BDRV_REQ_SERIALISING forces request serialisation for writes.
32
- * It is used to ensure that writes to the backing file of a backup process
33
- * target cannot race with a read of the backup target that defers to the
34
- * backing file.
35
- *
36
- * Note, that BDRV_REQ_SERIALISING is _not_ opposite in meaning to
37
- * BDRV_REQ_NO_SERIALISING. A more descriptive name for the latter might be
38
- * _DO_NOT_WAIT_FOR_SERIALISING, except that is too long.
39
- */
40
+ /* Forces request serialisation. Use only with write requests. */
41
BDRV_REQ_SERIALISING = 0x80,
42
43
/* Execute the request only if the operation can be offloaded or otherwise
44
--
45
2.29.2
46
47
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
The comment states that on a misaligned request we should have already
4
been waiting. But for bdrv_padding_rmw_read, we called
5
bdrv_mark_request_serialising with align = request_alignment, and now
6
we serialise with align = cluster_size. So we may have to wait again
7
with larger alignment.
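
(Editor's illustration with made-up numbers; the overlap formula is the one
used by bdrv_mark_request_serialising() in block/io.c, shown later in this
series.)

    /* E.g. offset = 4096, bytes = 512:
     *   align = 512 (request_alignment) -> overlap region is [4096, 4608)
     *   align = 65536 (cluster_size)    -> overlap region is [0, 65536)
     * The wider region can conflict with requests the narrower one did not,
     * so a second wait is legitimate. */
    int64_t overlap_offset = req->offset & ~(align - 1);
    uint64_t overlap_bytes = ROUND_UP(req->offset + req->bytes, align)
                             - overlap_offset;
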
8
9
Note that the only user of BDRV_REQ_SERIALISING is backup, which issues
10
cluster-aligned requests, so it seems the assertion should not fire for
11
now. But it's wrong anyway.
12
13
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
14
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
15
Message-Id: <20201021145859.11201-3-vsementsov@virtuozzo.com>
16
Signed-off-by: Max Reitz <mreitz@redhat.com>
17
---
18
block/io.c | 11 +----------
19
1 file changed, 1 insertion(+), 10 deletions(-)
20
21
diff --git a/block/io.c b/block/io.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/block/io.c
24
+++ b/block/io.c
25
@@ -XXX,XX +XXX,XX @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, uint64_t bytes,
26
BdrvTrackedRequest *req, int flags)
27
{
28
BlockDriverState *bs = child->bs;
29
- bool waited;
30
int64_t end_sector = DIV_ROUND_UP(offset + bytes, BDRV_SECTOR_SIZE);
31
32
if (bs->read_only) {
33
@@ -XXX,XX +XXX,XX @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, uint64_t bytes,
34
assert(!(flags & ~BDRV_REQ_MASK));
35
36
if (flags & BDRV_REQ_SERIALISING) {
37
- waited = bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
38
- /*
39
- * For a misaligned request we should have already waited earlier,
40
- * because we come after bdrv_padding_rmw_read which must be called
41
- * with the request already marked as serialising.
42
- */
43
- assert(!waited ||
44
- (req->offset == req->overlap_offset &&
45
- req->bytes == req->overlap_bytes));
46
+ bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
47
} else {
48
bdrv_wait_serialising_requests(req);
49
}
50
--
51
2.29.2
52
53
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
Tests should place their files into the test directory. This includes
2
Unix sockets. 205 currently fails to do so, which prevents it from
3
being run concurrently.
2
4
3
To be reused separately in a later patch.
5
Signed-off-by: Max Reitz <mreitz@redhat.com>
4
6
Message-id: 20190618210238.9524-1-mreitz@redhat.com
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Reviewed-by: Eric Blake <eblake@redhat.com>
6
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Message-Id: <20201021145859.11201-4-vsementsov@virtuozzo.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
---
9
---
10
block/io.c | 71 +++++++++++++++++++++++++++++++-----------------------
10
tests/qemu-iotests/205 | 2 +-
11
1 file changed, 41 insertions(+), 30 deletions(-)
11
1 file changed, 1 insertion(+), 1 deletion(-)
12
12
13
diff --git a/block/io.c b/block/io.c
13
diff --git a/tests/qemu-iotests/205 b/tests/qemu-iotests/205
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100755
15
--- a/block/io.c
15
--- a/tests/qemu-iotests/205
16
+++ b/block/io.c
16
+++ b/tests/qemu-iotests/205
17
@@ -XXX,XX +XXX,XX @@ static bool tracked_request_overlaps(BdrvTrackedRequest *req,
17
@@ -XXX,XX +XXX,XX @@ import iotests
18
return true;
18
import time
19
}
19
from iotests import qemu_img_create, qemu_io, filter_qemu_io, QemuIoInteractive
20
20
21
+/* Called with self->bs->reqs_lock held */
21
-nbd_sock = 'nbd_sock'
22
+static BdrvTrackedRequest *
22
+nbd_sock = os.path.join(iotests.test_dir, 'nbd_sock')
23
+bdrv_find_conflicting_request(BdrvTrackedRequest *self)
23
nbd_uri = 'nbd+unix:///exp?socket=' + nbd_sock
24
+{
24
disk = os.path.join(iotests.test_dir, 'disk')
25
+ BdrvTrackedRequest *req;
26
+
27
+ QLIST_FOREACH(req, &self->bs->tracked_requests, list) {
28
+ if (req == self || (!req->serialising && !self->serialising)) {
29
+ continue;
30
+ }
31
+ if (tracked_request_overlaps(req, self->overlap_offset,
32
+ self->overlap_bytes))
33
+ {
34
+ /*
35
+ * Hitting this means there was a reentrant request, for
36
+ * example, a block driver issuing nested requests. This must
37
+ * never happen since it means deadlock.
38
+ */
39
+ assert(qemu_coroutine_self() != req->co);
40
+
41
+ /*
42
+ * If the request is already (indirectly) waiting for us, or
43
+ * will wait for us as soon as it wakes up, then just go on
44
+ * (instead of producing a deadlock in the former case).
45
+ */
46
+ if (!req->waiting_for) {
47
+ return req;
48
+ }
49
+ }
50
+ }
51
+
52
+ return NULL;
53
+}
54
+
55
static bool coroutine_fn
56
bdrv_wait_serialising_requests_locked(BlockDriverState *bs,
57
BdrvTrackedRequest *self)
58
{
59
BdrvTrackedRequest *req;
60
- bool retry;
61
bool waited = false;
62
63
- do {
64
- retry = false;
65
- QLIST_FOREACH(req, &bs->tracked_requests, list) {
66
- if (req == self || (!req->serialising && !self->serialising)) {
67
- continue;
68
- }
69
- if (tracked_request_overlaps(req, self->overlap_offset,
70
- self->overlap_bytes))
71
- {
72
- /* Hitting this means there was a reentrant request, for
73
- * example, a block driver issuing nested requests. This must
74
- * never happen since it means deadlock.
75
- */
76
- assert(qemu_coroutine_self() != req->co);
77
-
78
- /* If the request is already (indirectly) waiting for us, or
79
- * will wait for us as soon as it wakes up, then just go on
80
- * (instead of producing a deadlock in the former case). */
81
- if (!req->waiting_for) {
82
- self->waiting_for = req;
83
- qemu_co_queue_wait(&req->wait_queue, &bs->reqs_lock);
84
- self->waiting_for = NULL;
85
- retry = true;
86
- waited = true;
87
- break;
88
- }
89
- }
90
- }
91
- } while (retry);
92
+ while ((req = bdrv_find_conflicting_request(self))) {
93
+ self->waiting_for = req;
94
+ qemu_co_queue_wait(&req->wait_queue, &bs->reqs_lock);
95
+ self->waiting_for = NULL;
96
+ waited = true;
97
+ }
98
+
99
return waited;
100
}
101
25
102
--
26
--
103
2.29.2
27
2.21.0
104
28
105
29
diff view generated by jsdifflib
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
bs is linked in req, so no needs to pass it separately. Most of
4
tracked-requests API doesn't have bs argument. Actually, after this
5
patch only tracked_request_begin has it, but it's for purpose.
6
7
While being here, also add a comment about what "_locked" is.
8
9
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
10
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
11
Message-Id: <20201021145859.11201-5-vsementsov@virtuozzo.com>
12
Signed-off-by: Max Reitz <mreitz@redhat.com>
13
---
14
block/io.c | 10 +++++-----
15
1 file changed, 5 insertions(+), 5 deletions(-)
16
17
diff --git a/block/io.c b/block/io.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/block/io.c
20
+++ b/block/io.c
21
@@ -XXX,XX +XXX,XX @@ bdrv_find_conflicting_request(BdrvTrackedRequest *self)
22
return NULL;
23
}
24
25
+/* Called with self->bs->reqs_lock held */
26
static bool coroutine_fn
27
-bdrv_wait_serialising_requests_locked(BlockDriverState *bs,
28
- BdrvTrackedRequest *self)
29
+bdrv_wait_serialising_requests_locked(BdrvTrackedRequest *self)
30
{
31
BdrvTrackedRequest *req;
32
bool waited = false;
33
34
while ((req = bdrv_find_conflicting_request(self))) {
35
self->waiting_for = req;
36
- qemu_co_queue_wait(&req->wait_queue, &bs->reqs_lock);
37
+ qemu_co_queue_wait(&req->wait_queue, &self->bs->reqs_lock);
38
self->waiting_for = NULL;
39
waited = true;
40
}
41
@@ -XXX,XX +XXX,XX @@ bool bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align)
42
43
req->overlap_offset = MIN(req->overlap_offset, overlap_offset);
44
req->overlap_bytes = MAX(req->overlap_bytes, overlap_bytes);
45
- waited = bdrv_wait_serialising_requests_locked(bs, req);
46
+ waited = bdrv_wait_serialising_requests_locked(req);
47
qemu_co_mutex_unlock(&bs->reqs_lock);
48
return waited;
49
}
50
@@ -XXX,XX +XXX,XX @@ static bool coroutine_fn bdrv_wait_serialising_requests(BdrvTrackedRequest *self
51
}
52
53
qemu_co_mutex_lock(&bs->reqs_lock);
54
- waited = bdrv_wait_serialising_requests_locked(bs, self);
55
+ waited = bdrv_wait_serialising_requests_locked(self);
56
qemu_co_mutex_unlock(&bs->reqs_lock);
57
58
return waited;
59
--
60
2.29.2
61
62
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
We'll need a separate function, which will only "mark" request
4
serialising with specified align but not wait for conflicting
5
requests. So, it will be like old bdrv_mark_request_serialising(),
6
before merging bdrv_wait_serialising_requests_locked() into it.
7
8
To reduce the possible mess, let's do the following:
9
10
Public function that does both marking and waiting will be called
11
bdrv_make_request_serialising, and private function which will only
12
"mark" will be called tracked_request_set_serialising().
13
14
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
15
Reviewed-by: Max Reitz <mreitz@redhat.com>
16
Message-Id: <20201021145859.11201-6-vsementsov@virtuozzo.com>
17
Signed-off-by: Max Reitz <mreitz@redhat.com>
18
---
19
include/block/block_int.h | 3 ++-
20
block/file-posix.c | 2 +-
21
block/io.c | 35 +++++++++++++++++++++++------------
22
3 files changed, 26 insertions(+), 14 deletions(-)
23
24
diff --git a/include/block/block_int.h b/include/block/block_int.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/include/block/block_int.h
27
+++ b/include/block/block_int.h
28
@@ -XXX,XX +XXX,XX @@ extern unsigned int bdrv_drain_all_count;
29
void bdrv_apply_subtree_drain(BdrvChild *child, BlockDriverState *new_parent);
30
void bdrv_unapply_subtree_drain(BdrvChild *child, BlockDriverState *old_parent);
31
32
-bool coroutine_fn bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align);
33
+bool coroutine_fn bdrv_make_request_serialising(BdrvTrackedRequest *req,
34
+ uint64_t align);
35
BdrvTrackedRequest *coroutine_fn bdrv_co_get_self_request(BlockDriverState *bs);
36
37
int get_tmp_filename(char *filename, int size);
38
diff --git a/block/file-posix.c b/block/file-posix.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/block/file-posix.c
41
+++ b/block/file-posix.c
42
@@ -XXX,XX +XXX,XX @@ raw_do_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int bytes,
43
44
assert(bdrv_check_request(req->offset, req->bytes) == 0);
45
46
- bdrv_mark_request_serialising(req, bs->bl.request_alignment);
47
+ bdrv_make_request_serialising(req, bs->bl.request_alignment);
48
}
49
#endif
50
51
diff --git a/block/io.c b/block/io.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/block/io.c
54
+++ b/block/io.c
55
@@ -XXX,XX +XXX,XX @@ bdrv_wait_serialising_requests_locked(BdrvTrackedRequest *self)
56
return waited;
57
}
58
59
-bool bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align)
60
+/* Called with req->bs->reqs_lock held */
61
+static void tracked_request_set_serialising(BdrvTrackedRequest *req,
62
+ uint64_t align)
63
{
64
- BlockDriverState *bs = req->bs;
65
int64_t overlap_offset = req->offset & ~(align - 1);
66
uint64_t overlap_bytes = ROUND_UP(req->offset + req->bytes, align)
67
- overlap_offset;
68
- bool waited;
69
70
- qemu_co_mutex_lock(&bs->reqs_lock);
71
if (!req->serialising) {
72
qatomic_inc(&req->bs->serialising_in_flight);
73
req->serialising = true;
74
@@ -XXX,XX +XXX,XX @@ bool bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align)
75
76
req->overlap_offset = MIN(req->overlap_offset, overlap_offset);
77
req->overlap_bytes = MAX(req->overlap_bytes, overlap_bytes);
78
- waited = bdrv_wait_serialising_requests_locked(req);
79
- qemu_co_mutex_unlock(&bs->reqs_lock);
80
- return waited;
81
}
82
83
/**
84
@@ -XXX,XX +XXX,XX @@ static bool coroutine_fn bdrv_wait_serialising_requests(BdrvTrackedRequest *self
85
return waited;
86
}
87
88
+bool coroutine_fn bdrv_make_request_serialising(BdrvTrackedRequest *req,
89
+ uint64_t align)
90
+{
91
+ bool waited;
92
+
93
+ qemu_co_mutex_lock(&req->bs->reqs_lock);
94
+
95
+ tracked_request_set_serialising(req, align);
96
+ waited = bdrv_wait_serialising_requests_locked(req);
97
+
98
+ qemu_co_mutex_unlock(&req->bs->reqs_lock);
99
+
100
+ return waited;
101
+}
102
+
103
int bdrv_check_request(int64_t offset, int64_t bytes)
104
{
105
if (offset < 0 || bytes < 0) {
106
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild *child,
107
* with each other for the same cluster. For example, in copy-on-read
108
* it ensures that the CoR read and write operations are atomic and
109
* guest writes cannot interleave between them. */
110
- bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
111
+ bdrv_make_request_serialising(req, bdrv_get_cluster_size(bs));
112
} else {
113
bdrv_wait_serialising_requests(req);
114
}
115
@@ -XXX,XX +XXX,XX @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, uint64_t bytes,
116
assert(!(flags & ~BDRV_REQ_MASK));
117
118
if (flags & BDRV_REQ_SERIALISING) {
119
- bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
120
+ bdrv_make_request_serialising(req, bdrv_get_cluster_size(bs));
121
} else {
122
bdrv_wait_serialising_requests(req);
123
}
124
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_co_do_zero_pwritev(BdrvChild *child,
125
126
padding = bdrv_init_padding(bs, offset, bytes, &pad);
127
if (padding) {
128
- bdrv_mark_request_serialising(req, align);
129
+ bdrv_make_request_serialising(req, align);
130
131
bdrv_padding_rmw_read(child, req, &pad, true);
132
133
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_pwritev_part(BdrvChild *child,
134
}
135
136
if (bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, &pad)) {
137
- bdrv_mark_request_serialising(&req, align);
138
+ bdrv_make_request_serialising(&req, align);
139
bdrv_padding_rmw_read(child, &req, &pad, false);
140
}
141
142
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_truncate(BdrvChild *child, int64_t offset, bool exact,
143
* new area, we need to make sure that no write requests are made to it
144
* concurrently or they might be overwritten by preallocation. */
145
if (new_bytes) {
146
- bdrv_mark_request_serialising(&req, 1);
147
+ bdrv_make_request_serialising(&req, 1);
148
}
149
if (bs->read_only) {
150
error_setg(errp, "Image is read-only");
151
--
152
2.29.2
153
154
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Add flag to make serialising request no wait: if there are conflicting
4
requests, just return error immediately. It's will be used in upcoming
5
preallocate filter.
6
7
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Message-Id: <20201021145859.11201-7-vsementsov@virtuozzo.com>
10
Signed-off-by: Max Reitz <mreitz@redhat.com>
11
---
12
include/block/block.h | 9 ++++++++-
13
block/io.c | 11 ++++++++++-
14
2 files changed, 18 insertions(+), 2 deletions(-)
15
16
diff --git a/include/block/block.h b/include/block/block.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/block/block.h
19
+++ b/include/block/block.h
20
@@ -XXX,XX +XXX,XX @@ typedef enum {
21
* written to qiov parameter which may be NULL.
22
*/
23
BDRV_REQ_PREFETCH = 0x200,
24
+
25
+ /*
26
+ * If we need to wait for other requests, just fail immediately. Used
27
+ * only together with BDRV_REQ_SERIALISING.
28
+ */
29
+ BDRV_REQ_NO_WAIT = 0x400,
30
+
31
/* Mask of valid flags */
32
- BDRV_REQ_MASK = 0x3ff,
33
+ BDRV_REQ_MASK = 0x7ff,
34
} BdrvRequestFlags;
35
36
typedef struct BlockSizes {
37
diff --git a/block/io.c b/block/io.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/block/io.c
40
+++ b/block/io.c
41
@@ -XXX,XX +XXX,XX @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, uint64_t bytes,
42
assert(!(bs->open_flags & BDRV_O_INACTIVE));
43
assert((bs->open_flags & BDRV_O_NO_IO) == 0);
44
assert(!(flags & ~BDRV_REQ_MASK));
45
+ assert(!((flags & BDRV_REQ_NO_WAIT) && !(flags & BDRV_REQ_SERIALISING)));
46
47
if (flags & BDRV_REQ_SERIALISING) {
48
- bdrv_make_request_serialising(req, bdrv_get_cluster_size(bs));
49
+ QEMU_LOCK_GUARD(&bs->reqs_lock);
50
+
51
+ tracked_request_set_serialising(req, bdrv_get_cluster_size(bs));
52
+
53
+ if ((flags & BDRV_REQ_NO_WAIT) && bdrv_find_conflicting_request(req)) {
54
+ return -EBUSY;
55
+ }
56
+
57
+ bdrv_wait_serialising_requests_locked(req);
58
} else {
59
bdrv_wait_serialising_requests(req);
60
}
61
--
62
2.29.2
63
64
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Do generic processing even for drivers which define .bdrv_check_perm
4
handler. It's needed for further preallocate filter: it will need to do
5
additional action on bdrv_check_perm, but don't want to reimplement
6
generic logic.
7
8
The patch doesn't change existing behaviour: the only driver that
9
implements bdrv_check_perm is file-posix, but it never has any
10
children.
11
12
Also, bdrv_set_perm() don't stop processing if driver has
13
.bdrv_set_perm handler as well.
14
15
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
16
Message-Id: <20201021145859.11201-8-vsementsov@virtuozzo.com>
17
Reviewed-by: Max Reitz <mreitz@redhat.com>
18
Signed-off-by: Max Reitz <mreitz@redhat.com>
19
---
20
block.c | 7 +++++--
21
1 file changed, 5 insertions(+), 2 deletions(-)
22
23
diff --git a/block.c b/block.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/block.c
26
+++ b/block.c
27
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_perm(BlockDriverState *bs, BlockReopenQueue *q,
28
}
29
30
if (drv->bdrv_check_perm) {
31
- return drv->bdrv_check_perm(bs, cumulative_perms,
32
- cumulative_shared_perms, errp);
33
+ ret = drv->bdrv_check_perm(bs, cumulative_perms,
34
+ cumulative_shared_perms, errp);
35
+ if (ret < 0) {
36
+ return ret;
37
+ }
38
}
39
40
/* Drivers that never have children can omit .bdrv_child_perm() */
41
--
42
2.29.2
43
44
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
This will be used in further test.
4
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Reviewed-by: Max Reitz <mreitz@redhat.com>
7
Message-Id: <20201021145859.11201-10-vsementsov@virtuozzo.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
---
10
qemu-io-cmds.c | 46 ++++++++++++++++++++++++++++++++--------------
11
1 file changed, 32 insertions(+), 14 deletions(-)
12
13
diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/qemu-io-cmds.c
16
+++ b/qemu-io-cmds.c
17
@@ -XXX,XX +XXX,XX @@ static const cmdinfo_t flush_cmd = {
18
.oneline = "flush all in-core file state to disk",
19
};
20
21
+static int truncate_f(BlockBackend *blk, int argc, char **argv);
22
+static const cmdinfo_t truncate_cmd = {
23
+ .name = "truncate",
24
+ .altname = "t",
25
+ .cfunc = truncate_f,
26
+ .perm = BLK_PERM_WRITE | BLK_PERM_RESIZE,
27
+ .argmin = 1,
28
+ .argmax = 3,
29
+ .args = "[-m prealloc_mode] off",
30
+ .oneline = "truncates the current file at the given offset",
31
+};
32
+
33
static int truncate_f(BlockBackend *blk, int argc, char **argv)
34
{
35
Error *local_err = NULL;
36
int64_t offset;
37
- int ret;
38
+ int c, ret;
39
+ PreallocMode prealloc = PREALLOC_MODE_OFF;
40
41
- offset = cvtnum(argv[1]);
42
+ while ((c = getopt(argc, argv, "m:")) != -1) {
43
+ switch (c) {
44
+ case 'm':
45
+ prealloc = qapi_enum_parse(&PreallocMode_lookup, optarg,
46
+ PREALLOC_MODE__MAX, NULL);
47
+ if (prealloc == PREALLOC_MODE__MAX) {
48
+ error_report("Invalid preallocation mode '%s'", optarg);
49
+ return -EINVAL;
50
+ }
51
+ break;
52
+ default:
53
+ qemuio_command_usage(&truncate_cmd);
54
+ return -EINVAL;
55
+ }
56
+ }
57
+
58
+ offset = cvtnum(argv[optind]);
59
if (offset < 0) {
60
print_cvtnum_err(offset, argv[1]);
61
return offset;
62
@@ -XXX,XX +XXX,XX @@ static int truncate_f(BlockBackend *blk, int argc, char **argv)
63
* exact=true. It is better to err on the "emit more errors" side
64
* than to be overly permissive.
65
*/
66
- ret = blk_truncate(blk, offset, false, PREALLOC_MODE_OFF, 0, &local_err);
67
+ ret = blk_truncate(blk, offset, false, prealloc, 0, &local_err);
68
if (ret < 0) {
69
error_report_err(local_err);
70
return ret;
71
@@ -XXX,XX +XXX,XX @@ static int truncate_f(BlockBackend *blk, int argc, char **argv)
72
return 0;
73
}
74
75
-static const cmdinfo_t truncate_cmd = {
76
- .name = "truncate",
77
- .altname = "t",
78
- .cfunc = truncate_f,
79
- .perm = BLK_PERM_WRITE | BLK_PERM_RESIZE,
80
- .argmin = 1,
81
- .argmax = 1,
82
- .args = "off",
83
- .oneline = "truncates the current file at the given offset",
84
-};
85
-
86
static int length_f(BlockBackend *blk, int argc, char **argv)
87
{
88
int64_t size;
89
--
90
2.29.2
91
92
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
4
Reviewed-by: Max Reitz <mreitz@redhat.com>
5
Message-Id: <20201021145859.11201-11-vsementsov@virtuozzo.com>
6
Signed-off-by: Max Reitz <mreitz@redhat.com>
7
---
8
tests/qemu-iotests/iotests.py | 7 ++++++-
9
1 file changed, 6 insertions(+), 1 deletion(-)
10
11
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
12
index XXXXXXX..XXXXXXX 100644
13
--- a/tests/qemu-iotests/iotests.py
14
+++ b/tests/qemu-iotests/iotests.py
15
@@ -XXX,XX +XXX,XX @@ def qemu_io_log(*args):
16
17
def qemu_io_silent(*args):
18
'''Run qemu-io and return the exit code, suppressing stdout'''
19
- args = qemu_io_args + list(args)
20
+ if '-f' in args or '--image-opts' in args:
21
+ default_args = qemu_io_args_no_fmt
22
+ else:
23
+ default_args = qemu_io_args
24
+
25
+ args = default_args + list(args)
26
exitcode = subprocess.call(args, stdout=open('/dev/null', 'w'))
27
if exitcode < 0:
28
sys.stderr.write('qemu-io received signal %i: %s\n' %
29
--
30
2.29.2
31
32
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Add a parameter to skip a test if some required additional formats are not
4
supported (for example filter drivers).
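A minimal sketch of how a test is expected to use the new argument (iotest 298 later in this series does exactly this):

    import iotests

    if __name__ == '__main__':
        # Skip unless the preallocate filter driver is whitelisted in this build.
        iotests.main(supported_fmts=['qcow2'], required_fmts=['preallocate'])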
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-12-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
tests/qemu-iotests/iotests.py | 9 ++++++++-
12
1 file changed, 8 insertions(+), 1 deletion(-)
13
14
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
15
index XXXXXXX..XXXXXXX 100644
16
--- a/tests/qemu-iotests/iotests.py
17
+++ b/tests/qemu-iotests/iotests.py
18
@@ -XXX,XX +XXX,XX @@ def _verify_aio_mode(supported_aio_modes: Sequence[str] = ()) -> None:
19
if supported_aio_modes and (aiomode not in supported_aio_modes):
20
notrun('not suitable for this aio mode: %s' % aiomode)
21
22
+def _verify_formats(required_formats: Sequence[str] = ()) -> None:
23
+ usf_list = list(set(required_formats) - set(supported_formats()))
24
+ if usf_list:
25
+ notrun(f'formats {usf_list} are not whitelisted')
26
+
27
def supports_quorum():
28
return 'quorum' in qemu_img_pipe('--help')
29
30
@@ -XXX,XX +XXX,XX @@ def execute_setup_common(supported_fmts: Sequence[str] = (),
31
supported_aio_modes: Sequence[str] = (),
32
unsupported_fmts: Sequence[str] = (),
33
supported_protocols: Sequence[str] = (),
34
- unsupported_protocols: Sequence[str] = ()) -> bool:
35
+ unsupported_protocols: Sequence[str] = (),
36
+ required_fmts: Sequence[str] = ()) -> bool:
37
"""
38
Perform necessary setup for either script-style or unittest-style tests.
39
40
@@ -XXX,XX +XXX,XX @@ def execute_setup_common(supported_fmts: Sequence[str] = (),
41
_verify_platform(supported=supported_platforms)
42
_verify_cache_mode(supported_cache_modes)
43
_verify_aio_mode(supported_aio_modes)
44
+ _verify_formats(required_fmts)
45
46
return debug
47
48
--
49
2.29.2
50
51
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
4
Message-Id: <20201021145859.11201-13-vsementsov@virtuozzo.com>
5
Reviewed-by: Max Reitz <mreitz@redhat.com>
6
Signed-off-by: Max Reitz <mreitz@redhat.com>
7
---
8
tests/qemu-iotests/298 | 186 +++++++++++++++++++++++++++++++++++++
9
tests/qemu-iotests/298.out | 5 +
10
tests/qemu-iotests/group | 1 +
11
3 files changed, 192 insertions(+)
12
create mode 100644 tests/qemu-iotests/298
13
create mode 100644 tests/qemu-iotests/298.out
14
15
diff --git a/tests/qemu-iotests/298 b/tests/qemu-iotests/298
16
new file mode 100644
17
index XXXXXXX..XXXXXXX
18
--- /dev/null
19
+++ b/tests/qemu-iotests/298
20
@@ -XXX,XX +XXX,XX @@
21
+#!/usr/bin/env python3
22
+#
23
+# Test for preallocate filter
24
+#
25
+# Copyright (c) 2020 Virtuozzo International GmbH.
26
+#
27
+# This program is free software; you can redistribute it and/or modify
28
+# it under the terms of the GNU General Public License as published by
29
+# the Free Software Foundation; either version 2 of the License, or
30
+# (at your option) any later version.
31
+#
32
+# This program is distributed in the hope that it will be useful,
33
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
34
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
35
+# GNU General Public License for more details.
36
+#
37
+# You should have received a copy of the GNU General Public License
38
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
39
+#
40
+
41
+import os
42
+import iotests
43
+
44
+MiB = 1024 * 1024
45
+disk = os.path.join(iotests.test_dir, 'disk')
46
+overlay = os.path.join(iotests.test_dir, 'overlay')
47
+refdisk = os.path.join(iotests.test_dir, 'refdisk')
48
+drive_opts = f'node-name=disk,driver={iotests.imgfmt},' \
49
+ f'file.node-name=filter,file.driver=preallocate,' \
50
+ f'file.file.node-name=file,file.file.filename={disk}'
51
+
52
+
53
+class TestPreallocateBase(iotests.QMPTestCase):
54
+ def setUp(self):
55
+ iotests.qemu_img_create('-f', iotests.imgfmt, disk, str(10 * MiB))
56
+
57
+ def tearDown(self):
58
+ try:
59
+ self.check_small()
60
+ check = iotests.qemu_img_check(disk)
61
+ self.assertFalse('leaks' in check)
62
+ self.assertFalse('corruptions' in check)
63
+ self.assertEqual(check['check-errors'], 0)
64
+ finally:
65
+ os.remove(disk)
66
+
67
+ def check_big(self):
68
+ self.assertTrue(os.path.getsize(disk) > 100 * MiB)
69
+
70
+ def check_small(self):
71
+ self.assertTrue(os.path.getsize(disk) < 10 * MiB)
72
+
73
+
74
+class TestQemuImg(TestPreallocateBase):
75
+ def test_qemu_img(self):
76
+ p = iotests.QemuIoInteractive('--image-opts', drive_opts)
77
+
78
+ p.cmd('write 0 1M')
79
+ p.cmd('flush')
80
+
81
+ self.check_big()
82
+
83
+ p.close()
84
+
85
+
86
+class TestPreallocateFilter(TestPreallocateBase):
87
+ def setUp(self):
88
+ super().setUp()
89
+ self.vm = iotests.VM().add_drive(path=None, opts=drive_opts)
90
+ self.vm.launch()
91
+
92
+ def tearDown(self):
93
+ self.vm.shutdown()
94
+ super().tearDown()
95
+
96
+ def test_prealloc(self):
97
+ self.vm.hmp_qemu_io('drive0', 'write 0 1M')
98
+ self.check_big()
99
+
100
+ def test_external_snapshot(self):
101
+ self.test_prealloc()
102
+
103
+ result = self.vm.qmp('blockdev-snapshot-sync', node_name='disk',
104
+ snapshot_file=overlay,
105
+ snapshot_node_name='overlay')
106
+ self.assert_qmp(result, 'return', {})
107
+
108
+ # on reopen to r-o base preallocation should be dropped
109
+ self.check_small()
110
+
111
+ self.vm.hmp_qemu_io('drive0', 'write 1M 1M')
112
+
113
+ result = self.vm.qmp('block-commit', device='overlay')
114
+ self.assert_qmp(result, 'return', {})
115
+ self.complete_and_wait()
116
+
117
+ # commit of new megabyte should trigger preallocation
118
+ self.check_big()
119
+
120
+ def test_reopen_opts(self):
121
+ result = self.vm.qmp('x-blockdev-reopen', **{
122
+ 'node-name': 'disk',
123
+ 'driver': iotests.imgfmt,
124
+ 'file': {
125
+ 'node-name': 'filter',
126
+ 'driver': 'preallocate',
127
+ 'prealloc-size': 20 * MiB,
128
+ 'prealloc-align': 5 * MiB,
129
+ 'file': {
130
+ 'node-name': 'file',
131
+ 'driver': 'file',
132
+ 'filename': disk
133
+ }
134
+ }
135
+ })
136
+ self.assert_qmp(result, 'return', {})
137
+
138
+ self.vm.hmp_qemu_io('drive0', 'write 0 1M')
139
+ self.assertTrue(os.path.getsize(disk) == 25 * MiB)
140
+
141
+
142
+class TestTruncate(iotests.QMPTestCase):
143
+ def setUp(self):
144
+ iotests.qemu_img_create('-f', iotests.imgfmt, disk, str(10 * MiB))
145
+ iotests.qemu_img_create('-f', iotests.imgfmt, refdisk, str(10 * MiB))
146
+
147
+ def tearDown(self):
148
+ os.remove(disk)
149
+ os.remove(refdisk)
150
+
151
+ def do_test(self, prealloc_mode, new_size):
152
+ ret = iotests.qemu_io_silent('--image-opts', '-c', 'write 0 10M', '-c',
153
+ f'truncate -m {prealloc_mode} {new_size}',
154
+ drive_opts)
155
+ self.assertEqual(ret, 0)
156
+
157
+ ret = iotests.qemu_io_silent('-f', iotests.imgfmt, '-c', 'write 0 10M',
158
+ '-c',
159
+ f'truncate -m {prealloc_mode} {new_size}',
160
+ refdisk)
161
+ self.assertEqual(ret, 0)
162
+
163
+ stat = os.stat(disk)
164
+ refstat = os.stat(refdisk)
165
+
166
+ # Probably we'll want the preallocate filter to keep the preallocation
167
+ # cluster-aligned when shrinking, so ignore the small difference
168
+ self.assertLess(abs(stat.st_size - refstat.st_size), 64 * 1024)
169
+
170
+ # The preallocate filter may leak some internal clusters (for example, if the
171
+ # guest writes far beyond EOF, skipping some clusters - they will remain
172
+ # fallocated). The preallocate filter doesn't care about such leaks; it drops
173
+ # only the trailing preallocation.
174
+ self.assertLess(abs(stat.st_blocks - refstat.st_blocks) * 512,
175
+ 1024 * 1024)
176
+
177
+ def test_real_shrink(self):
178
+ self.do_test('off', '5M')
179
+
180
+ def test_truncate_inside_preallocated_area__falloc(self):
181
+ self.do_test('falloc', '50M')
182
+
183
+ def test_truncate_inside_preallocated_area__metadata(self):
184
+ self.do_test('metadata', '50M')
185
+
186
+ def test_truncate_inside_preallocated_area__full(self):
187
+ self.do_test('full', '50M')
188
+
189
+ def test_truncate_inside_preallocated_area__off(self):
190
+ self.do_test('off', '50M')
191
+
192
+ def test_truncate_over_preallocated_area__falloc(self):
193
+ self.do_test('falloc', '150M')
194
+
195
+ def test_truncate_over_preallocated_area__metadata(self):
196
+ self.do_test('metadata', '150M')
197
+
198
+ def test_truncate_over_preallocated_area__full(self):
199
+ self.do_test('full', '150M')
200
+
201
+ def test_truncate_over_preallocated_area__off(self):
202
+ self.do_test('off', '150M')
203
+
204
+
205
+if __name__ == '__main__':
206
+ iotests.main(supported_fmts=['qcow2'], required_fmts=['preallocate'])
207
diff --git a/tests/qemu-iotests/298.out b/tests/qemu-iotests/298.out
208
new file mode 100644
209
index XXXXXXX..XXXXXXX
210
--- /dev/null
211
+++ b/tests/qemu-iotests/298.out
212
@@ -XXX,XX +XXX,XX @@
213
+.............
214
+----------------------------------------------------------------------
215
+Ran 13 tests
216
+
217
+OK
218
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
219
index XXXXXXX..XXXXXXX 100644
220
--- a/tests/qemu-iotests/group
221
+++ b/tests/qemu-iotests/group
222
@@ -XXX,XX +XXX,XX @@
223
295 rw
224
296 rw
225
297 meta
226
+298
227
299 auto quick
228
300 migration
229
301 backing quick
230
--
231
2.29.2
232
233
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
4
Message-Id: <20201021145859.11201-14-vsementsov@virtuozzo.com>
5
Reviewed-by: Max Reitz <mreitz@redhat.com>
6
Signed-off-by: Max Reitz <mreitz@redhat.com>
7
---
8
scripts/simplebench/simplebench.py | 12 ++++++------
9
1 file changed, 6 insertions(+), 6 deletions(-)
10
11
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
12
index XXXXXXX..XXXXXXX 100644
13
--- a/scripts/simplebench/simplebench.py
14
+++ b/scripts/simplebench/simplebench.py
15
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
16
17
result = {'runs': runs}
18
19
- successed = [r for r in runs if ('seconds' in r)]
20
- if successed:
21
- avg = sum(r['seconds'] for r in successed) / len(successed)
22
+ succeeded = [r for r in runs if ('seconds' in r)]
23
+ if succeeded:
24
+ avg = sum(r['seconds'] for r in succeeded) / len(succeeded)
25
result['average'] = avg
26
- result['delta'] = max(abs(r['seconds'] - avg) for r in successed)
27
+ result['delta'] = max(abs(r['seconds'] - avg) for r in succeeded)
28
29
- if len(successed) < count:
30
- result['n-failed'] = count - len(successed)
31
+ if len(succeeded) < count:
32
+ result['n-failed'] = count - len(succeeded)
33
34
return result
35
36
--
37
2.29.2
38
39
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Support benchmarks that return iops rather than seconds. We'll use this for an
4
upcoming test.
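A hedged sketch of a test_func that reports iops; the workload helper run_io() is hypothetical, only the shape of the returned dict matters to simplebench:

    import time

    def run_io(env, case):
        """Hypothetical workload helper; returns the number of operations done."""
        return 100000

    def bench_func(env, case):
        start = time.time()
        ops = run_io(env, case)
        seconds = max(time.time() - start, 1e-9)   # guard this toy example against 0
        # 'iops' is treated as the main result, 'seconds' as additional info.
        return {'iops': ops / seconds, 'seconds': seconds}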
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-15-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
scripts/simplebench/simplebench.py | 38 ++++++++++++++++++++++--------
12
1 file changed, 28 insertions(+), 10 deletions(-)
13
14
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
15
index XXXXXXX..XXXXXXX 100644
16
--- a/scripts/simplebench/simplebench.py
17
+++ b/scripts/simplebench/simplebench.py
18
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
19
20
test_func -- benchmarking function with prototype
21
test_func(env, case), which takes test_env and test_case
22
- arguments and returns {'seconds': int} (which is benchmark
23
- result) on success and {'error': str} on error. Returned
24
- dict may contain any other additional fields.
25
+ arguments and on success returns dict with 'seconds' or
26
+ 'iops' (or both) fields, specifying the benchmark result.
27
+ If both 'iops' and 'seconds' are provided, 'iops' is
28
+ considered the main result, and 'seconds' is just additional
29
+ information. On failure test_func should return {'error': str}.
30
+ Returned dict may contain any other additional fields.
31
test_env -- test environment - opaque first argument for test_func
32
test_case -- test case - opaque second argument for test_func
33
count -- how many times to call test_func, to calculate average
34
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
35
36
Returns dict with the following fields:
37
'runs': list of test_func results
38
- 'average': average seconds per run (exists only if at least one run
39
- succeeded)
40
+ 'dimension': dimension of results, may be 'seconds' or 'iops'
41
+ 'average': average value (iops or seconds) per run (exists only if at
42
+ least one run succeeded)
43
'delta': maximum delta between test_func result and the average
44
(exists only if at least one run succeeded)
45
'n-failed': number of failed runs (exists only if at least one run
46
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
47
48
result = {'runs': runs}
49
50
- succeeded = [r for r in runs if ('seconds' in r)]
51
+ succeeded = [r for r in runs if ('seconds' in r or 'iops' in r)]
52
if succeeded:
53
- avg = sum(r['seconds'] for r in succeeded) / len(succeeded)
54
+ if 'iops' in succeeded[0]:
55
+ assert all('iops' in r for r in succeeded)
56
+ dim = 'iops'
57
+ else:
58
+ assert all('seconds' in r for r in succeeded)
59
+ assert all('iops' not in r for r in succeeded)
60
+ dim = 'seconds'
61
+ avg = sum(r[dim] for r in succeeded) / len(succeeded)
62
+ result['dimension'] = dim
63
result['average'] = avg
64
- result['delta'] = max(abs(r['seconds'] - avg) for r in succeeded)
65
+ result['delta'] = max(abs(r[dim] - avg) for r in succeeded)
66
67
if len(succeeded) < count:
68
result['n-failed'] = count - len(succeeded)
69
@@ -XXX,XX +XXX,XX @@ def ascii(results):
70
"""Return ASCII representation of bench() returned dict."""
71
from tabulate import tabulate
72
73
+ dim = None
74
tab = [[""] + [c['id'] for c in results['envs']]]
75
for case in results['cases']:
76
row = [case['id']]
77
for env in results['envs']:
78
- row.append(ascii_one(results['tab'][case['id']][env['id']]))
79
+ res = results['tab'][case['id']][env['id']]
80
+ if dim is None:
81
+ dim = res['dimension']
82
+ else:
83
+ assert dim == res['dimension']
84
+ row.append(ascii_one(res))
85
tab.append(row)
86
87
- return tabulate(tab)
88
+ return f'All results are in {dim}\n\n' + tabulate(tab)
89
--
90
2.29.2
91
92
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
A standard deviation is more usual to see after '+-' than the current maximum
4
of deviations.
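Standalone illustration of what the patch switches to (the run times below are invented):

    import statistics

    seconds = [10.1, 10.4, 9.9, 10.2]        # run times of the successful runs
    average = statistics.mean(seconds)
    stdev = statistics.stdev(seconds)        # sample standard deviation
    print('{:.2f} +- {:.2f}'.format(average, stdev))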
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-16-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
scripts/simplebench/simplebench.py | 11 ++++++-----
12
1 file changed, 6 insertions(+), 5 deletions(-)
13
14
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
15
index XXXXXXX..XXXXXXX 100644
16
--- a/scripts/simplebench/simplebench.py
17
+++ b/scripts/simplebench/simplebench.py
18
@@ -XXX,XX +XXX,XX @@
19
# along with this program. If not, see <http://www.gnu.org/licenses/>.
20
#
21
22
+import statistics
23
+
24
25
def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
26
"""Benchmark one test-case
27
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
28
'dimension': dimension of results, may be 'seconds' or 'iops'
29
'average': average value (iops or seconds) per run (exists only if at
30
least one run succeeded)
31
- 'delta': maximum delta between test_func result and the average
32
+ 'stdev': standard deviation of results
33
(exists only if at least one run succeeded)
34
'n-failed': number of failed runs (exists only if at least one run
35
failed)
36
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
37
assert all('seconds' in r for r in succeeded)
38
assert all('iops' not in r for r in succeeded)
39
dim = 'seconds'
40
- avg = sum(r[dim] for r in succeeded) / len(succeeded)
41
result['dimension'] = dim
42
- result['average'] = avg
43
- result['delta'] = max(abs(r[dim] - avg) for r in succeeded)
44
+ result['average'] = statistics.mean(r[dim] for r in succeeded)
45
+ result['stdev'] = statistics.stdev(r[dim] for r in succeeded)
46
47
if len(succeeded) < count:
48
result['n-failed'] = count - len(succeeded)
49
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
50
def ascii_one(result):
51
"""Return ASCII representation of bench_one() returned dict."""
52
if 'average' in result:
53
- s = '{:.2f} +- {:.2f}'.format(result['average'], result['delta'])
54
+ s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
55
if 'n-failed' in result:
56
s += '\n({} failed)'.format(result['n-failed'])
57
return s
58
--
59
2.29.2
60
61
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
The next patch will use the UTF-8 plus-minus symbol, so let's use a more generic (and
4
more readable) name.
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-17-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
scripts/simplebench/bench-example.py | 2 +-
12
scripts/simplebench/bench_write_req.py | 2 +-
13
scripts/simplebench/simplebench.py | 10 +++++-----
14
3 files changed, 7 insertions(+), 7 deletions(-)
15
16
diff --git a/scripts/simplebench/bench-example.py b/scripts/simplebench/bench-example.py
17
index XXXXXXX..XXXXXXX 100644
18
--- a/scripts/simplebench/bench-example.py
19
+++ b/scripts/simplebench/bench-example.py
20
@@ -XXX,XX +XXX,XX @@ test_envs = [
21
]
22
23
result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
24
-print(simplebench.ascii(result))
25
+print(simplebench.results_to_text(result))
26
diff --git a/scripts/simplebench/bench_write_req.py b/scripts/simplebench/bench_write_req.py
27
index XXXXXXX..XXXXXXX 100755
28
--- a/scripts/simplebench/bench_write_req.py
29
+++ b/scripts/simplebench/bench_write_req.py
30
@@ -XXX,XX +XXX,XX @@ if __name__ == '__main__':
31
32
result = simplebench.bench(bench_func, test_envs, test_cases, count=3,
33
initial_run=False)
34
- print(simplebench.ascii(result))
35
+ print(simplebench.results_to_text(result))
36
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
37
index XXXXXXX..XXXXXXX 100644
38
--- a/scripts/simplebench/simplebench.py
39
+++ b/scripts/simplebench/simplebench.py
40
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
41
return result
42
43
44
-def ascii_one(result):
45
- """Return ASCII representation of bench_one() returned dict."""
46
+def result_to_text(result):
47
+ """Return text representation of bench_one() returned dict."""
48
if 'average' in result:
49
s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
50
if 'n-failed' in result:
51
@@ -XXX,XX +XXX,XX @@ def bench(test_func, test_envs, test_cases, *args, **vargs):
52
return results
53
54
55
-def ascii(results):
56
- """Return ASCII representation of bench() returned dict."""
57
+def results_to_text(results):
58
+ """Return text representation of bench() returned dict."""
59
from tabulate import tabulate
60
61
dim = None
62
@@ -XXX,XX +XXX,XX @@ def ascii(results):
63
dim = res['dimension']
64
else:
65
assert dim == res['dimension']
66
- row.append(ascii_one(res))
67
+ row.append(result_to_text(res))
68
tab.append(row)
69
70
return f'All results are in {dim}\n\n' + tabulate(tab)
71
--
72
2.29.2
73
74
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Let's keep the view part separate: this makes it easier to improve in
4
the following commits.
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-18-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
scripts/simplebench/bench-example.py | 3 +-
12
scripts/simplebench/bench_write_req.py | 3 +-
13
scripts/simplebench/results_to_text.py | 48 ++++++++++++++++++++++++++
14
scripts/simplebench/simplebench.py | 31 -----------------
15
4 files changed, 52 insertions(+), 33 deletions(-)
16
create mode 100644 scripts/simplebench/results_to_text.py
17
18
diff --git a/scripts/simplebench/bench-example.py b/scripts/simplebench/bench-example.py
19
index XXXXXXX..XXXXXXX 100644
20
--- a/scripts/simplebench/bench-example.py
21
+++ b/scripts/simplebench/bench-example.py
22
@@ -XXX,XX +XXX,XX @@
23
#
24
25
import simplebench
26
+from results_to_text import results_to_text
27
from bench_block_job import bench_block_copy, drv_file, drv_nbd
28
29
30
@@ -XXX,XX +XXX,XX @@ test_envs = [
31
]
32
33
result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
34
-print(simplebench.results_to_text(result))
35
+print(results_to_text(result))
36
diff --git a/scripts/simplebench/bench_write_req.py b/scripts/simplebench/bench_write_req.py
37
index XXXXXXX..XXXXXXX 100755
38
--- a/scripts/simplebench/bench_write_req.py
39
+++ b/scripts/simplebench/bench_write_req.py
40
@@ -XXX,XX +XXX,XX @@ import sys
41
import os
42
import subprocess
43
import simplebench
44
+from results_to_text import results_to_text
45
46
47
def bench_func(env, case):
48
@@ -XXX,XX +XXX,XX @@ if __name__ == '__main__':
49
50
result = simplebench.bench(bench_func, test_envs, test_cases, count=3,
51
initial_run=False)
52
- print(simplebench.results_to_text(result))
53
+ print(results_to_text(result))
54
diff --git a/scripts/simplebench/results_to_text.py b/scripts/simplebench/results_to_text.py
55
new file mode 100644
56
index XXXXXXX..XXXXXXX
57
--- /dev/null
58
+++ b/scripts/simplebench/results_to_text.py
59
@@ -XXX,XX +XXX,XX @@
60
+# Simple benchmarking framework
61
+#
62
+# Copyright (c) 2019 Virtuozzo International GmbH.
63
+#
64
+# This program is free software; you can redistribute it and/or modify
65
+# it under the terms of the GNU General Public License as published by
66
+# the Free Software Foundation; either version 2 of the License, or
67
+# (at your option) any later version.
68
+#
69
+# This program is distributed in the hope that it will be useful,
70
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
71
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
72
+# GNU General Public License for more details.
73
+#
74
+# You should have received a copy of the GNU General Public License
75
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
76
+#
77
+
78
+
79
+def result_to_text(result):
80
+ """Return text representation of bench_one() returned dict."""
81
+ if 'average' in result:
82
+ s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
83
+ if 'n-failed' in result:
84
+ s += '\n({} failed)'.format(result['n-failed'])
85
+ return s
86
+ else:
87
+ return 'FAILED'
88
+
89
+
90
+def results_to_text(results):
91
+ """Return text representation of bench() returned dict."""
92
+ from tabulate import tabulate
93
+
94
+ dim = None
95
+ tab = [[""] + [c['id'] for c in results['envs']]]
96
+ for case in results['cases']:
97
+ row = [case['id']]
98
+ for env in results['envs']:
99
+ res = results['tab'][case['id']][env['id']]
100
+ if dim is None:
101
+ dim = res['dimension']
102
+ else:
103
+ assert dim == res['dimension']
104
+ row.append(result_to_text(res))
105
+ tab.append(row)
106
+
107
+ return f'All results are in {dim}\n\n' + tabulate(tab)
108
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
109
index XXXXXXX..XXXXXXX 100644
110
--- a/scripts/simplebench/simplebench.py
111
+++ b/scripts/simplebench/simplebench.py
112
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
113
return result
114
115
116
-def result_to_text(result):
117
- """Return text representation of bench_one() returned dict."""
118
- if 'average' in result:
119
- s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
120
- if 'n-failed' in result:
121
- s += '\n({} failed)'.format(result['n-failed'])
122
- return s
123
- else:
124
- return 'FAILED'
125
-
126
-
127
def bench(test_func, test_envs, test_cases, *args, **vargs):
128
"""Fill benchmark table
129
130
@@ -XXX,XX +XXX,XX @@ def bench(test_func, test_envs, test_cases, *args, **vargs):
131
132
print('Done')
133
return results
134
-
135
-
136
-def results_to_text(results):
137
- """Return text representation of bench() returned dict."""
138
- from tabulate import tabulate
139
-
140
- dim = None
141
- tab = [[""] + [c['id'] for c in results['envs']]]
142
- for case in results['cases']:
143
- row = [case['id']]
144
- for env in results['envs']:
145
- res = results['tab'][case['id']][env['id']]
146
- if dim is None:
147
- dim = res['dimension']
148
- else:
149
- assert dim == res['dimension']
150
- row.append(result_to_text(res))
151
- tab.append(row)
152
-
153
- return f'All results are in {dim}\n\n' + tabulate(tab)
154
--
155
2.29.2
156
157
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Move to a generic format for floats and a percentage representation of the error.
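The new formatting rule, shown standalone (the example values are made up):

    import math

    def format_value(x, stdev):
        stdev_pr = stdev / x * 100
        if stdev_pr < 1.5:
            # relative error is small enough to omit
            return f'{x:.2g}'
        return f'{x:.2g} ± {math.ceil(stdev_pr)}%'

    print(format_value(10.2, 0.05))   # -> '10'
    print(format_value(10.2, 0.80))   # -> '10 ± 8%'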
4
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Message-Id: <20201021145859.11201-19-vsementsov@virtuozzo.com>
7
Acked-by: Max Reitz <mreitz@redhat.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
---
10
scripts/simplebench/results_to_text.py | 13 ++++++++++++-
11
1 file changed, 12 insertions(+), 1 deletion(-)
12
13
diff --git a/scripts/simplebench/results_to_text.py b/scripts/simplebench/results_to_text.py
14
index XXXXXXX..XXXXXXX 100644
15
--- a/scripts/simplebench/results_to_text.py
16
+++ b/scripts/simplebench/results_to_text.py
17
@@ -XXX,XX +XXX,XX @@
18
# along with this program. If not, see <http://www.gnu.org/licenses/>.
19
#
20
21
+import math
22
+
23
+
24
+def format_value(x, stdev):
25
+ stdev_pr = stdev / x * 100
26
+ if stdev_pr < 1.5:
27
+ # don't care too much
28
+ return f'{x:.2g}'
29
+ else:
30
+ return f'{x:.2g} ± {math.ceil(stdev_pr)}%'
31
+
32
33
def result_to_text(result):
34
"""Return text representation of bench_one() returned dict."""
35
if 'average' in result:
36
- s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
37
+ s = format_value(result['average'], result['stdev'])
38
if 'n-failed' in result:
39
s += '\n({} failed)'.format(result['n-failed'])
40
return s
41
--
42
2.29.2
43
44
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Performance improvements / degradations are usually discussed in
4
percentages. Let's make the script calculate them for us.
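The relative difference added to the table, computed standalone (the averages are invented):

    base_average = 120.0    # first column, e.g. seconds per run
    new_average = 96.0      # column compared against it

    diff_pr = round((new_average - base_average) / base_average * 100)
    print(f'{diff_pr:+}%')  # -> '-20%'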
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-20-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
[mreitz: 'seconds' instead of 'secs']
10
Signed-off-by: Max Reitz <mreitz@redhat.com>
11
---
12
scripts/simplebench/results_to_text.py | 67 +++++++++++++++++++++++---
13
1 file changed, 60 insertions(+), 7 deletions(-)
14
15
diff --git a/scripts/simplebench/results_to_text.py b/scripts/simplebench/results_to_text.py
16
index XXXXXXX..XXXXXXX 100644
17
--- a/scripts/simplebench/results_to_text.py
18
+++ b/scripts/simplebench/results_to_text.py
19
@@ -XXX,XX +XXX,XX @@
20
#
21
22
import math
23
+import tabulate
24
+
25
+# We want leading whitespace for difference row cells (see below)
26
+tabulate.PRESERVE_WHITESPACE = True
27
28
29
def format_value(x, stdev):
30
@@ -XXX,XX +XXX,XX @@ def result_to_text(result):
31
return 'FAILED'
32
33
34
-def results_to_text(results):
35
- """Return text representation of bench() returned dict."""
36
- from tabulate import tabulate
37
-
38
+def results_dimension(results):
39
dim = None
40
- tab = [[""] + [c['id'] for c in results['envs']]]
41
for case in results['cases']:
42
- row = [case['id']]
43
for env in results['envs']:
44
res = results['tab'][case['id']][env['id']]
45
if dim is None:
46
dim = res['dimension']
47
else:
48
assert dim == res['dimension']
49
+
50
+ assert dim in ('iops', 'seconds')
51
+
52
+ return dim
53
+
54
+
55
+def results_to_text(results):
56
+ """Return text representation of bench() returned dict."""
57
+ n_columns = len(results['envs'])
58
+ named_columns = n_columns > 2
59
+ dim = results_dimension(results)
60
+ tab = []
61
+
62
+ if named_columns:
63
+ # Environment columns are named A, B, ...
64
+ tab.append([''] + [chr(ord('A') + i) for i in range(n_columns)])
65
+
66
+ tab.append([''] + [c['id'] for c in results['envs']])
67
+
68
+ for case in results['cases']:
69
+ row = [case['id']]
70
+ case_results = results['tab'][case['id']]
71
+ for env in results['envs']:
72
+ res = case_results[env['id']]
73
row.append(result_to_text(res))
74
tab.append(row)
75
76
- return f'All results are in {dim}\n\n' + tabulate(tab)
77
+ # Add row of difference between columns. For each column starting from
78
+ # B we calculate difference with all previous columns.
79
+ row = ['', ''] # case name and first column
80
+ for i in range(1, n_columns):
81
+ cell = ''
82
+ env = results['envs'][i]
83
+ res = case_results[env['id']]
84
+
85
+ if 'average' not in res:
86
+ # Failed result
87
+ row.append(cell)
88
+ continue
89
+
90
+ for j in range(0, i):
91
+ env_j = results['envs'][j]
92
+ res_j = case_results[env_j['id']]
93
+ cell += ' '
94
+
95
+ if 'average' not in res_j:
96
+ # Failed result
97
+ cell += '--'
98
+ continue
99
+
100
+ col_j = tab[0][j + 1] if named_columns else ''
101
+ diff_pr = round((res['average'] - res_j['average']) /
102
+ res_j['average'] * 100)
103
+ cell += f' {col_j}{diff_pr:+}%'
104
+ row.append(cell)
105
+ tab.append(row)
106
+
107
+ return f'All results are in {dim}\n\n' + tabulate.tabulate(tab)
108
--
109
2.29.2
110
111
Deleted patch
1
From: Alberto Garcia <berto@igalia.com>
2
1
3
The quorum driver does not implement bdrv_co_block_status() and
4
because of that it always reports that it contains data even if all its
5
children are known to be empty.
6
7
One consequence of this is that if we, for example, create a quorum with
8
a size of 10 GB and mirror it to a new image, the operation will
9
write 10 GB of actual zeroes to the destination image, wasting a lot of
10
time and disk space.
11
12
Since a quorum has an arbitrary number of children of potentially
13
different formats there is no way to report all possible allocation
14
status flags in a way that makes sense, so this implementation only
15
reports when a given region is known to contain zeroes
16
(BDRV_BLOCK_ZERO) or not (BDRV_BLOCK_DATA).
17
18
If all children agree that a region contains zeroes then we can return
19
BDRV_BLOCK_ZERO using the smallest size reported by the children
20
(because all agree that a region of at least that size contains
21
zeroes).
22
23
If at least one child disagrees we have to return BDRV_BLOCK_DATA.
24
In this case we use the largest of the sizes reported by the children
25
that didn't return BDRV_BLOCK_ZERO (because we know that there won't
26
be an agreement for at least that size).
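A plain-Python sketch of that aggregation rule (the real implementation is the quorum_co_block_status() hunk below; the tuple format here is invented for illustration):

    def quorum_block_status(count, children):
        """children: one (is_zero, nbytes) pair reported per quorum child."""
        pnum_zero = count
        pnum_data = 0
        for is_zero, nbytes in children:
            if is_zero:
                pnum_zero = min(pnum_zero, nbytes)   # all must agree on zeroes
            else:
                pnum_data = max(pnum_data, nbytes)   # any data child wins
        if pnum_data:
            return 'BDRV_BLOCK_DATA', pnum_data
        return 'BDRV_BLOCK_ZERO', pnum_zero

    # Two children report zeroes, one reports data -> DATA over the largest size.
    print(quorum_block_status(0x40000,
                              [(True, 0x30000), (True, 0x10000), (False, 0x20000)]))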
27
28
Signed-off-by: Alberto Garcia <berto@igalia.com>
29
Tested-by: Tao Xu <tao3.xu@intel.com>
30
Reviewed-by: Max Reitz <mreitz@redhat.com>
31
Message-Id: <db83149afcf0f793effc8878089d29af4c46ffe1.1605286097.git.berto@igalia.com>
32
Signed-off-by: Max Reitz <mreitz@redhat.com>
33
---
34
block/quorum.c | 52 +++++++++++++
35
tests/qemu-iotests/312 | 148 +++++++++++++++++++++++++++++++++++++
36
tests/qemu-iotests/312.out | 67 +++++++++++++++++
37
tests/qemu-iotests/group | 1 +
38
4 files changed, 268 insertions(+)
39
create mode 100755 tests/qemu-iotests/312
40
create mode 100644 tests/qemu-iotests/312.out
41
42
diff --git a/block/quorum.c b/block/quorum.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/block/quorum.c
45
+++ b/block/quorum.c
46
@@ -XXX,XX +XXX,XX @@
47
#include "qemu/module.h"
48
#include "qemu/option.h"
49
#include "block/block_int.h"
50
+#include "block/coroutines.h"
51
#include "block/qdict.h"
52
#include "qapi/error.h"
53
#include "qapi/qapi-events-block.h"
54
@@ -XXX,XX +XXX,XX @@ static void quorum_child_perm(BlockDriverState *bs, BdrvChild *c,
55
| DEFAULT_PERM_UNCHANGED;
56
}
57
58
+/*
59
+ * Each one of the children can report different status flags even
60
+ * when they contain the same data, so what this function does is
61
+ * return BDRV_BLOCK_ZERO if *all* children agree that a certain
62
+ * region contains zeroes, and BDRV_BLOCK_DATA otherwise.
63
+ */
64
+static int coroutine_fn quorum_co_block_status(BlockDriverState *bs,
65
+ bool want_zero,
66
+ int64_t offset, int64_t count,
67
+ int64_t *pnum, int64_t *map,
68
+ BlockDriverState **file)
69
+{
70
+ BDRVQuorumState *s = bs->opaque;
71
+ int i, ret;
72
+ int64_t pnum_zero = count;
73
+ int64_t pnum_data = 0;
74
+
75
+ for (i = 0; i < s->num_children; i++) {
76
+ int64_t bytes;
77
+ ret = bdrv_co_common_block_status_above(s->children[i]->bs, NULL, false,
78
+ want_zero, offset, count,
79
+ &bytes, NULL, NULL, NULL);
80
+ if (ret < 0) {
81
+ quorum_report_bad(QUORUM_OP_TYPE_READ, offset, count,
82
+ s->children[i]->bs->node_name, ret);
83
+ pnum_data = count;
84
+ break;
85
+ }
86
+ /*
87
+ * Even if all children agree about whether there are zeroes
88
+ * or not at @offset they might disagree on the size, so use
89
+ * the smallest when reporting BDRV_BLOCK_ZERO and the largest
90
+ * when reporting BDRV_BLOCK_DATA.
91
+ */
92
+ if (ret & BDRV_BLOCK_ZERO) {
93
+ pnum_zero = MIN(pnum_zero, bytes);
94
+ } else {
95
+ pnum_data = MAX(pnum_data, bytes);
96
+ }
97
+ }
98
+
99
+ if (pnum_data) {
100
+ *pnum = pnum_data;
101
+ return BDRV_BLOCK_DATA;
102
+ } else {
103
+ *pnum = pnum_zero;
104
+ return BDRV_BLOCK_ZERO;
105
+ }
106
+}
107
+
108
static const char *const quorum_strong_runtime_opts[] = {
109
QUORUM_OPT_VOTE_THRESHOLD,
110
QUORUM_OPT_BLKVERIFY,
111
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_quorum = {
112
.bdrv_close = quorum_close,
113
.bdrv_gather_child_options = quorum_gather_child_options,
114
.bdrv_dirname = quorum_dirname,
115
+ .bdrv_co_block_status = quorum_co_block_status,
116
117
.bdrv_co_flush_to_disk = quorum_co_flush,
118
119
diff --git a/tests/qemu-iotests/312 b/tests/qemu-iotests/312
120
new file mode 100755
121
index XXXXXXX..XXXXXXX
122
--- /dev/null
123
+++ b/tests/qemu-iotests/312
124
@@ -XXX,XX +XXX,XX @@
125
+#!/usr/bin/env bash
126
+#
127
+# Test drive-mirror with quorum
128
+#
129
+# The goal of this test is to check how the quorum driver reports
130
+# regions that are known to read as zeroes (BDRV_BLOCK_ZERO). The idea
131
+# is that drive-mirror will try the efficient representation of zeroes
132
+# in the destination image instead of writing actual zeroes.
133
+#
134
+# Copyright (C) 2020 Igalia, S.L.
135
+# Author: Alberto Garcia <berto@igalia.com>
136
+#
137
+# This program is free software; you can redistribute it and/or modify
138
+# it under the terms of the GNU General Public License as published by
139
+# the Free Software Foundation; either version 2 of the License, or
140
+# (at your option) any later version.
141
+#
142
+# This program is distributed in the hope that it will be useful,
143
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
144
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
145
+# GNU General Public License for more details.
146
+#
147
+# You should have received a copy of the GNU General Public License
148
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
149
+#
150
+
151
+# creator
152
+owner=berto@igalia.com
153
+
154
+seq=`basename $0`
155
+echo "QA output created by $seq"
156
+
157
+status=1    # failure is the default!
158
+
159
+_cleanup()
160
+{
161
+ _rm_test_img "$TEST_IMG.0"
162
+ _rm_test_img "$TEST_IMG.1"
163
+ _rm_test_img "$TEST_IMG.2"
164
+ _rm_test_img "$TEST_IMG.3"
165
+ _cleanup_qemu
166
+}
167
+trap "_cleanup; exit \$status" 0 1 2 3 15
168
+
169
+# get standard environment, filters and checks
170
+. ./common.rc
171
+. ./common.filter
172
+. ./common.qemu
173
+
174
+_supported_fmt qcow2
175
+_supported_proto file
176
+_supported_os Linux
177
+_unsupported_imgopts cluster_size data_file
178
+
179
+echo
180
+echo '### Create all images' # three source (quorum), one destination
181
+echo
182
+TEST_IMG="$TEST_IMG.0" _make_test_img -o cluster_size=64k 10M
183
+TEST_IMG="$TEST_IMG.1" _make_test_img -o cluster_size=64k 10M
184
+TEST_IMG="$TEST_IMG.2" _make_test_img -o cluster_size=64k 10M
185
+TEST_IMG="$TEST_IMG.3" _make_test_img -o cluster_size=64k 10M
186
+
187
+quorum="driver=raw,file.driver=quorum,file.vote-threshold=2"
188
+quorum="$quorum,file.children.0.file.filename=$TEST_IMG.0"
189
+quorum="$quorum,file.children.1.file.filename=$TEST_IMG.1"
190
+quorum="$quorum,file.children.2.file.filename=$TEST_IMG.2"
191
+quorum="$quorum,file.children.0.driver=$IMGFMT"
192
+quorum="$quorum,file.children.1.driver=$IMGFMT"
193
+quorum="$quorum,file.children.2.driver=$IMGFMT"
194
+
195
+echo
196
+echo '### Output of qemu-img map (empty quorum)'
197
+echo
198
+$QEMU_IMG map --image-opts $quorum | _filter_qemu_img_map
199
+
200
+# Now we write data to the quorum. All three images will read as
201
+# zeroes in all cases, but with different ways to represent them
202
+# (unallocated clusters, zero clusters, data clusters with zeroes)
203
+# that will have an effect on how the data will be mirrored and the
204
+# output of qemu-img map on the resulting image.
205
+echo
206
+echo '### Write data to the quorum'
207
+echo
208
+# Test 1: data regions surrounded by unallocated clusters.
209
+# Three data regions, the largest one (0x30000) will be picked, end result:
210
+# offset 0x10000, length 0x30000 -> data
211
+$QEMU_IO -c "write -P 0 $((0x10000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu_io
212
+$QEMU_IO -c "write -P 0 $((0x10000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
213
+$QEMU_IO -c "write -P 0 $((0x10000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
214
+
215
+# Test 2: zero regions surrounded by data clusters.
216
+# First we allocate the data clusters.
217
+$QEMU_IO -c "open -o $quorum" -c "write -P 0 $((0x100000)) $((0x40000))" | _filter_qemu_io
218
+
219
+# Three zero regions, the smallest one (0x10000) will be picked, end result:
220
+# offset 0x100000, length 0x10000 -> data
221
+# offset 0x110000, length 0x10000 -> zeroes
222
+# offset 0x120000, length 0x20000 -> data
223
+$QEMU_IO -c "write -z $((0x110000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu_io
224
+$QEMU_IO -c "write -z $((0x110000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
225
+$QEMU_IO -c "write -z $((0x110000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
226
+
227
+# Test 3: zero clusters surrounded by unallocated clusters.
228
+# Everything reads as zeroes, no effect on the end result.
229
+$QEMU_IO -c "write -z $((0x150000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu_io
230
+$QEMU_IO -c "write -z $((0x150000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
231
+$QEMU_IO -c "write -z $((0x150000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
232
+
233
+# Test 4: mix of data and zero clusters.
234
+# The zero region will be ignored in favor of the largest data region
235
+# (0x20000), end result:
236
+# offset 0x200000, length 0x20000 -> data
237
+$QEMU_IO -c "write -P 0 $((0x200000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu_io
238
+$QEMU_IO -c "write -z $((0x200000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
239
+$QEMU_IO -c "write -P 0 $((0x200000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
240
+
241
+echo
242
+echo '### Launch the drive-mirror job'
243
+echo
244
+qemu_comm_method="qmp" _launch_qemu -drive if=virtio,"$quorum"
245
+h=$QEMU_HANDLE
246
+_send_qemu_cmd $h "{ 'execute': 'qmp_capabilities' }" 'return'
247
+
248
+_send_qemu_cmd $h \
249
+ "{'execute': 'drive-mirror',
250
+ 'arguments': {'device': 'virtio0',
251
+ 'format': '$IMGFMT',
252
+ 'target': '$TEST_IMG.3',
253
+ 'sync': 'full',
254
+ 'mode': 'existing' }}" \
255
+ "BLOCK_JOB_READY.*virtio0"
256
+
257
+_send_qemu_cmd $h \
258
+ "{ 'execute': 'block-job-complete',
259
+ 'arguments': { 'device': 'virtio0' } }" \
260
+ 'BLOCK_JOB_COMPLETED'
261
+
262
+_send_qemu_cmd $h "{ 'execute': 'quit' }" ''
263
+
264
+echo
265
+echo '### Output of qemu-img map (destination image)'
266
+echo
267
+$QEMU_IMG map "$TEST_IMG.3" | _filter_qemu_img_map
268
+
269
+# success, all done
270
+echo "*** done"
271
+rm -f $seq.full
272
+status=0
273
diff --git a/tests/qemu-iotests/312.out b/tests/qemu-iotests/312.out
274
new file mode 100644
275
index XXXXXXX..XXXXXXX
276
--- /dev/null
277
+++ b/tests/qemu-iotests/312.out
278
@@ -XXX,XX +XXX,XX @@
279
+QA output created by 312
280
+
281
+### Create all images
282
+
283
+Formatting 'TEST_DIR/t.IMGFMT.0', fmt=IMGFMT size=10485760
284
+Formatting 'TEST_DIR/t.IMGFMT.1', fmt=IMGFMT size=10485760
285
+Formatting 'TEST_DIR/t.IMGFMT.2', fmt=IMGFMT size=10485760
286
+Formatting 'TEST_DIR/t.IMGFMT.3', fmt=IMGFMT size=10485760
287
+
288
+### Output of qemu-img map (empty quorum)
289
+
290
+Offset Length File
291
+
292
+### Write data to the quorum
293
+
294
+wrote 65536/65536 bytes at offset 65536
295
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
296
+wrote 196608/196608 bytes at offset 65536
297
+192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
298
+wrote 131072/131072 bytes at offset 65536
299
+128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
300
+wrote 262144/262144 bytes at offset 1048576
301
+256 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
302
+wrote 65536/65536 bytes at offset 1114112
303
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
304
+wrote 196608/196608 bytes at offset 1114112
305
+192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
306
+wrote 131072/131072 bytes at offset 1114112
307
+128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
308
+wrote 65536/65536 bytes at offset 1376256
309
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
310
+wrote 196608/196608 bytes at offset 1376256
311
+192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
312
+wrote 131072/131072 bytes at offset 1376256
313
+128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
314
+wrote 65536/65536 bytes at offset 2097152
315
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
316
+wrote 196608/196608 bytes at offset 2097152
317
+192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
318
+wrote 131072/131072 bytes at offset 2097152
319
+128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
320
+
321
+### Launch the drive-mirror job
322
+
323
+{ 'execute': 'qmp_capabilities' }
324
+{"return": {}}
325
+{'execute': 'drive-mirror', 'arguments': {'device': 'virtio0', 'format': 'IMGFMT', 'target': 'TEST_DIR/t.IMGFMT.3', 'sync': 'full', 'mode': 'existing' }}
326
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "virtio0"}}
327
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "virtio0"}}
328
+{"return": {}}
329
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "virtio0"}}
330
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "virtio0", "len": 10485760, "offset": 10485760, "speed": 0, "type": "mirror"}}
331
+{ 'execute': 'block-job-complete', 'arguments': { 'device': 'virtio0' } }
332
+{"return": {}}
333
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "virtio0"}}
334
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "virtio0"}}
335
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "virtio0", "len": 10485760, "offset": 10485760, "speed": 0, "type": "mirror"}}
336
+{ 'execute': 'quit' }
337
+
338
+### Output of qemu-img map (destination image)
339
+
340
+Offset Length File
341
+0x10000 0x30000 TEST_DIR/t.IMGFMT.3
342
+0x100000 0x10000 TEST_DIR/t.IMGFMT.3
343
+0x120000 0x20000 TEST_DIR/t.IMGFMT.3
344
+0x200000 0x20000 TEST_DIR/t.IMGFMT.3
345
+*** done
346
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
347
index XXXXXXX..XXXXXXX 100644
348
--- a/tests/qemu-iotests/group
349
+++ b/tests/qemu-iotests/group
350
@@ -XXX,XX +XXX,XX @@
351
307 rw quick export
352
308 rw
353
309 rw auto quick
354
+312 rw auto quick
355
--
356
2.29.2
357
358
Deleted patch
1
The first parameter passed to _send_qemu_cmd is supposed to be the
2
$QEMU_HANDLE. Test 102 does not do so here; fix it.
3
1
4
As a result, the output changes: Now we see the prompt this command is
5
supposedly waiting for before the resize message - as it should be.
6
7
Signed-off-by: Max Reitz <mreitz@redhat.com>
8
Message-Id: <20201217153803.101231-2-mreitz@redhat.com>
9
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
10
---
11
tests/qemu-iotests/102 | 2 +-
12
tests/qemu-iotests/102.out | 2 +-
13
2 files changed, 2 insertions(+), 2 deletions(-)
14
15
diff --git a/tests/qemu-iotests/102 b/tests/qemu-iotests/102
16
index XXXXXXX..XXXXXXX 100755
17
--- a/tests/qemu-iotests/102
18
+++ b/tests/qemu-iotests/102
19
@@ -XXX,XX +XXX,XX @@ $QEMU_IO -c 'write 0 64k' "$TEST_IMG" | _filter_qemu_io
20
qemu_comm_method=monitor _launch_qemu -drive if=none,file="$TEST_IMG",id=drv0
21
22
# Wait for a prompt to appear (so we know qemu has opened the image)
23
-_send_qemu_cmd '' '(qemu)'
24
+_send_qemu_cmd $QEMU_HANDLE '' '(qemu)'
25
26
$QEMU_IMG resize --shrink --image-opts \
27
"driver=raw,file.driver=file,file.filename=$TEST_IMG,file.locking=off" \
28
diff --git a/tests/qemu-iotests/102.out b/tests/qemu-iotests/102.out
29
index XXXXXXX..XXXXXXX 100644
30
--- a/tests/qemu-iotests/102.out
31
+++ b/tests/qemu-iotests/102.out
32
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=65536
33
wrote 65536/65536 bytes at offset 0
34
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
35
QEMU X.Y.Z monitor - type 'help' for more information
36
-Image resized.
37
(qemu)
38
+Image resized.
39
(qemu) qemu-io drv0 map
40
64 KiB (0x10000) bytes allocated at offset 0 bytes (0x0)
41
*** done
42
--
43
2.29.2
44
45
Deleted patch
1
With bash 5.1, the output of the following script changes:
2
1
3
a=("double space")
4
a=${a[@]:0:1}
5
echo "$a"
6
7
from "double space" to "double space", i.e. all white space is
8
preserved as-is. This is probably what we actually want here (judging
9
from the "...to accommodate pathnames with spaces" comment), but before
10
5.1, we would have to quote the ${} slice to get the same behavior.
11
12
In any case, without quoting, the reference output of many iotests is
13
different between bash 5.1 and pre-5.1, which is not very good. The
14
output of 5.1 is what we want, so whatever we do to get pre-5.1 to the
15
same result, it means we have to fix the reference output of basically
16
all tests that invoke _send_qemu_cmd (except the ones that only use
17
single spaces in the commands they invoke).
18
19
Instead of quoting the ${} slice (cmd="${@: 1:...}"), we can also just
20
not use array slicing and replace the whole thing with a simple "cmd=$1;
21
shift", which works because all callers quote the whole $cmd argument
22
anyway.
23
24
Signed-off-by: Max Reitz <mreitz@redhat.com>
25
Message-Id: <20201217153803.101231-3-mreitz@redhat.com>
26
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
27
---
28
tests/qemu-iotests/085.out | 167 ++++++++++++++++++++++++++++-----
29
tests/qemu-iotests/094.out | 10 +-
30
tests/qemu-iotests/095.out | 4 +-
31
tests/qemu-iotests/109.out | 88 ++++++++++++-----
32
tests/qemu-iotests/117.out | 13 ++-
33
tests/qemu-iotests/127.out | 12 ++-
34
tests/qemu-iotests/140.out | 10 +-
35
tests/qemu-iotests/141.out | 128 +++++++++++++++++++------
36
tests/qemu-iotests/143.out | 4 +-
37
tests/qemu-iotests/144.out | 28 +++++-
38
tests/qemu-iotests/153.out | 18 ++--
39
tests/qemu-iotests/156.out | 39 ++++++--
40
tests/qemu-iotests/161.out | 18 +++-
41
tests/qemu-iotests/173.out | 25 ++++-
42
tests/qemu-iotests/182.out | 42 +++++++--
43
tests/qemu-iotests/183.out | 19 +++-
44
tests/qemu-iotests/185.out | 45 +++++++--
45
tests/qemu-iotests/191.out | 12 ++-
46
tests/qemu-iotests/223.out | 92 ++++++++++++------
47
tests/qemu-iotests/229.out | 13 ++-
48
tests/qemu-iotests/249.out | 16 +++-
49
tests/qemu-iotests/308.out | 103 +++++++++++++++++---
50
tests/qemu-iotests/312.out | 10 +-
51
tests/qemu-iotests/common.qemu | 11 +--
52
24 files changed, 728 insertions(+), 199 deletions(-)
53
54
diff --git a/tests/qemu-iotests/085.out b/tests/qemu-iotests/085.out
55
index XXXXXXX..XXXXXXX 100644
56
--- a/tests/qemu-iotests/085.out
57
+++ b/tests/qemu-iotests/085.out
58
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT.2', fmt=IMGFMT size=134217728
59
60
=== Create a single snapshot on virtio0 ===
61
62
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'virtio0', 'snapshot-file':'TEST_DIR/1-snapshot-v0.IMGFMT', 'format': 'IMGFMT' } }
63
+{ 'execute': 'blockdev-snapshot-sync',
64
+ 'arguments': { 'device': 'virtio0',
65
+ 'snapshot-file':'TEST_DIR/1-snapshot-v0.IMGFMT',
66
+ 'format': 'IMGFMT' } }
67
Formatting 'TEST_DIR/1-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/t.qcow2.1 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
68
{"return": {}}
69
70
=== Invalid command - missing device and nodename ===
71
72
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'snapshot-file':'TEST_DIR/1-snapshot-v0.IMGFMT', 'format': 'IMGFMT' } }
73
+{ 'execute': 'blockdev-snapshot-sync',
74
+ 'arguments': { 'snapshot-file':'TEST_DIR/1-snapshot-v0.IMGFMT',
75
+ 'format': 'IMGFMT' } }
76
{"error": {"class": "GenericError", "desc": "Cannot find device= nor node_name="}}
77
78
=== Invalid command - missing snapshot-file ===
79
80
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'virtio0', 'format': 'IMGFMT' } }
81
+{ 'execute': 'blockdev-snapshot-sync',
82
+ 'arguments': { 'device': 'virtio0',
83
+ 'format': 'IMGFMT' } }
84
{"error": {"class": "GenericError", "desc": "Parameter 'snapshot-file' is missing"}}
85
86
87
=== Create several transactional group snapshots ===
88
89
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/2-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/2-snapshot-v1.IMGFMT' } } ] } }
90
+{ 'execute': 'transaction', 'arguments':
91
+ {'actions': [
92
+ { 'type': 'blockdev-snapshot-sync', 'data' :
93
+ { 'device': 'virtio0',
94
+ 'snapshot-file': 'TEST_DIR/2-snapshot-v0.IMGFMT' } },
95
+ { 'type': 'blockdev-snapshot-sync', 'data' :
96
+ { 'device': 'virtio1',
97
+ 'snapshot-file': 'TEST_DIR/2-snapshot-v1.IMGFMT' } } ]
98
+ } }
99
Formatting 'TEST_DIR/2-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/1-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
100
Formatting 'TEST_DIR/2-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/t.qcow2.2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
101
{"return": {}}
102
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/3-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/3-snapshot-v1.IMGFMT' } } ] } }
103
+{ 'execute': 'transaction', 'arguments':
104
+ {'actions': [
105
+ { 'type': 'blockdev-snapshot-sync', 'data' :
106
+ { 'device': 'virtio0',
107
+ 'snapshot-file': 'TEST_DIR/3-snapshot-v0.IMGFMT' } },
108
+ { 'type': 'blockdev-snapshot-sync', 'data' :
109
+ { 'device': 'virtio1',
110
+ 'snapshot-file': 'TEST_DIR/3-snapshot-v1.IMGFMT' } } ]
111
+ } }
112
Formatting 'TEST_DIR/3-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/2-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
113
Formatting 'TEST_DIR/3-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/2-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
114
{"return": {}}
115
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/4-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/4-snapshot-v1.IMGFMT' } } ] } }
116
+{ 'execute': 'transaction', 'arguments':
117
+ {'actions': [
118
+ { 'type': 'blockdev-snapshot-sync', 'data' :
119
+ { 'device': 'virtio0',
120
+ 'snapshot-file': 'TEST_DIR/4-snapshot-v0.IMGFMT' } },
121
+ { 'type': 'blockdev-snapshot-sync', 'data' :
122
+ { 'device': 'virtio1',
123
+ 'snapshot-file': 'TEST_DIR/4-snapshot-v1.IMGFMT' } } ]
124
+ } }
125
Formatting 'TEST_DIR/4-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/3-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
126
Formatting 'TEST_DIR/4-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/3-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
127
{"return": {}}
128
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/5-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/5-snapshot-v1.IMGFMT' } } ] } }
129
+{ 'execute': 'transaction', 'arguments':
130
+ {'actions': [
131
+ { 'type': 'blockdev-snapshot-sync', 'data' :
132
+ { 'device': 'virtio0',
133
+ 'snapshot-file': 'TEST_DIR/5-snapshot-v0.IMGFMT' } },
134
+ { 'type': 'blockdev-snapshot-sync', 'data' :
135
+ { 'device': 'virtio1',
136
+ 'snapshot-file': 'TEST_DIR/5-snapshot-v1.IMGFMT' } } ]
137
+ } }
138
Formatting 'TEST_DIR/5-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/4-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
139
Formatting 'TEST_DIR/5-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/4-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
140
{"return": {}}
141
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/6-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/6-snapshot-v1.IMGFMT' } } ] } }
142
+{ 'execute': 'transaction', 'arguments':
143
+ {'actions': [
144
+ { 'type': 'blockdev-snapshot-sync', 'data' :
145
+ { 'device': 'virtio0',
146
+ 'snapshot-file': 'TEST_DIR/6-snapshot-v0.IMGFMT' } },
147
+ { 'type': 'blockdev-snapshot-sync', 'data' :
148
+ { 'device': 'virtio1',
149
+ 'snapshot-file': 'TEST_DIR/6-snapshot-v1.IMGFMT' } } ]
150
+ } }
151
Formatting 'TEST_DIR/6-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/5-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
152
Formatting 'TEST_DIR/6-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/5-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
153
{"return": {}}
154
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/7-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/7-snapshot-v1.IMGFMT' } } ] } }
155
+{ 'execute': 'transaction', 'arguments':
156
+ {'actions': [
157
+ { 'type': 'blockdev-snapshot-sync', 'data' :
158
+ { 'device': 'virtio0',
159
+ 'snapshot-file': 'TEST_DIR/7-snapshot-v0.IMGFMT' } },
160
+ { 'type': 'blockdev-snapshot-sync', 'data' :
161
+ { 'device': 'virtio1',
162
+ 'snapshot-file': 'TEST_DIR/7-snapshot-v1.IMGFMT' } } ]
163
+ } }
164
Formatting 'TEST_DIR/7-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/6-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
165
Formatting 'TEST_DIR/7-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/6-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
166
{"return": {}}
167
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/8-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/8-snapshot-v1.IMGFMT' } } ] } }
168
+{ 'execute': 'transaction', 'arguments':
169
+ {'actions': [
170
+ { 'type': 'blockdev-snapshot-sync', 'data' :
171
+ { 'device': 'virtio0',
172
+ 'snapshot-file': 'TEST_DIR/8-snapshot-v0.IMGFMT' } },
173
+ { 'type': 'blockdev-snapshot-sync', 'data' :
174
+ { 'device': 'virtio1',
175
+ 'snapshot-file': 'TEST_DIR/8-snapshot-v1.IMGFMT' } } ]
176
+ } }
177
Formatting 'TEST_DIR/8-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/7-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
178
Formatting 'TEST_DIR/8-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/7-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
179
{"return": {}}
180
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/9-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/9-snapshot-v1.IMGFMT' } } ] } }
181
+{ 'execute': 'transaction', 'arguments':
182
+ {'actions': [
183
+ { 'type': 'blockdev-snapshot-sync', 'data' :
184
+ { 'device': 'virtio0',
185
+ 'snapshot-file': 'TEST_DIR/9-snapshot-v0.IMGFMT' } },
186
+ { 'type': 'blockdev-snapshot-sync', 'data' :
187
+ { 'device': 'virtio1',
188
+ 'snapshot-file': 'TEST_DIR/9-snapshot-v1.IMGFMT' } } ]
189
+ } }
190
Formatting 'TEST_DIR/9-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/8-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
191
Formatting 'TEST_DIR/9-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/8-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
192
{"return": {}}
193
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/10-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/10-snapshot-v1.IMGFMT' } } ] } }
194
+{ 'execute': 'transaction', 'arguments':
195
+ {'actions': [
196
+ { 'type': 'blockdev-snapshot-sync', 'data' :
197
+ { 'device': 'virtio0',
198
+ 'snapshot-file': 'TEST_DIR/10-snapshot-v0.IMGFMT' } },
199
+ { 'type': 'blockdev-snapshot-sync', 'data' :
200
+ { 'device': 'virtio1',
201
+ 'snapshot-file': 'TEST_DIR/10-snapshot-v1.IMGFMT' } } ]
202
+ } }
203
Formatting 'TEST_DIR/10-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/9-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
204
Formatting 'TEST_DIR/10-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/9-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
205
{"return": {}}
206
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/10-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extende
207
=== Create a couple of snapshots using blockdev-snapshot ===
208
209
Formatting 'TEST_DIR/11-snapshot-v0.IMGFMT', fmt=IMGFMT size=134217728 backing_file=TEST_DIR/10-snapshot-v0.IMGFMT backing_fmt=IMGFMT
210
-{ 'execute': 'blockdev-add', 'arguments': { 'driver': 'IMGFMT', 'node-name': 'snap_11', 'backing': null, 'file': { 'driver': 'file', 'filename': 'TEST_DIR/11-snapshot-v0.IMGFMT', 'node-name': 'file_11' } } }
211
+{ 'execute': 'blockdev-add', 'arguments':
212
+ { 'driver': 'IMGFMT', 'node-name': 'snap_11', 'backing': null,
213
+ 'file':
214
+ { 'driver': 'file', 'filename': 'TEST_DIR/11-snapshot-v0.IMGFMT',
215
+ 'node-name': 'file_11' } } }
216
{"return": {}}
217
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_11' } }
218
+{ 'execute': 'blockdev-snapshot',
219
+ 'arguments': { 'node': 'virtio0',
220
+ 'overlay':'snap_11' } }
221
{"return": {}}
222
Formatting 'TEST_DIR/12-snapshot-v0.IMGFMT', fmt=IMGFMT size=134217728 backing_file=TEST_DIR/11-snapshot-v0.IMGFMT backing_fmt=IMGFMT
223
-{ 'execute': 'blockdev-add', 'arguments': { 'driver': 'IMGFMT', 'node-name': 'snap_12', 'backing': null, 'file': { 'driver': 'file', 'filename': 'TEST_DIR/12-snapshot-v0.IMGFMT', 'node-name': 'file_12' } } }
224
+{ 'execute': 'blockdev-add', 'arguments':
225
+ { 'driver': 'IMGFMT', 'node-name': 'snap_12', 'backing': null,
226
+ 'file':
227
+ { 'driver': 'file', 'filename': 'TEST_DIR/12-snapshot-v0.IMGFMT',
228
+ 'node-name': 'file_12' } } }
229
{"return": {}}
230
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_12' } }
231
+{ 'execute': 'blockdev-snapshot',
232
+ 'arguments': { 'node': 'virtio0',
233
+ 'overlay':'snap_12' } }
234
{"return": {}}
235
236
=== Invalid command - cannot create a snapshot using a file BDS ===
237
238
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node':'virtio0', 'overlay':'file_12' } }
239
+{ 'execute': 'blockdev-snapshot',
240
+ 'arguments': { 'node':'virtio0',
241
+ 'overlay':'file_12' }
242
+ }
243
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
244
245
=== Invalid command - snapshot node used as active layer ===
246
247
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_12' } }
248
+{ 'execute': 'blockdev-snapshot',
249
+ 'arguments': { 'node': 'virtio0',
250
+ 'overlay':'snap_12' } }
251
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
252
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node':'virtio0', 'overlay':'virtio0' } }
253
+{ 'execute': 'blockdev-snapshot',
254
+ 'arguments': { 'node':'virtio0',
255
+ 'overlay':'virtio0' }
256
+ }
257
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
258
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node':'virtio0', 'overlay':'virtio1' } }
259
+{ 'execute': 'blockdev-snapshot',
260
+ 'arguments': { 'node':'virtio0',
261
+ 'overlay':'virtio1' }
262
+ }
263
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
264
265
=== Invalid command - snapshot node used as backing hd ===
266
267
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_11' } }
268
+{ 'execute': 'blockdev-snapshot',
269
+ 'arguments': { 'node': 'virtio0',
270
+ 'overlay':'snap_11' } }
271
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
272
273
=== Invalid command - snapshot node has a backing image ===
274
275
Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=134217728
276
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728 backing_file=TEST_DIR/t.IMGFMT.base backing_fmt=IMGFMT
277
-{ 'execute': 'blockdev-add', 'arguments': { 'driver': 'IMGFMT', 'node-name': 'snap_13', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT', 'node-name': 'file_13' } } }
278
-{"return": {}}
279
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_13' } }
280
+{ 'execute': 'blockdev-add', 'arguments':
281
+ { 'driver': 'IMGFMT', 'node-name': 'snap_13',
282
+ 'file':
283
+ { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT',
284
+ 'node-name': 'file_13' } } }
285
+{"return": {}}
286
+{ 'execute': 'blockdev-snapshot',
287
+ 'arguments': { 'node': 'virtio0',
288
+ 'overlay':'snap_13' } }
289
{"error": {"class": "GenericError", "desc": "The overlay already has a backing image"}}
290
291
=== Invalid command - The node does not exist ===
292
293
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_14' } }
294
+{ 'execute': 'blockdev-snapshot',
295
+ 'arguments': { 'node': 'virtio0',
296
+ 'overlay':'snap_14' } }
297
{"error": {"class": "GenericError", "desc": "Cannot find device=snap_14 nor node_name=snap_14"}}
298
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node':'nodevice', 'overlay':'snap_13' } }
299
+{ 'execute': 'blockdev-snapshot',
300
+ 'arguments': { 'node':'nodevice',
301
+ 'overlay':'snap_13' }
302
+ }
303
{"error": {"class": "GenericError", "desc": "Cannot find device=nodevice nor node_name=nodevice"}}
304
*** done
305
diff --git a/tests/qemu-iotests/094.out b/tests/qemu-iotests/094.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/094.out
+++ b/tests/qemu-iotests/094.out
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=67108864
Formatting 'TEST_DIR/source.IMGFMT', fmt=IMGFMT size=67108864
{'execute': 'qmp_capabilities'}
{"return": {}}
-{'execute': 'drive-mirror', 'arguments': {'device': 'src', 'target': 'nbd+unix:///?socket=SOCK_DIR/nbd', 'format': 'nbd', 'sync':'full', 'mode':'existing'}}
+{'execute': 'drive-mirror',
+ 'arguments': {'device': 'src',
+ 'target': 'nbd+unix:///?socket=SOCK_DIR/nbd',
+ 'format': 'nbd',
+ 'sync':'full',
+ 'mode':'existing'}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
{"return": {}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "src", "len": 67108864, "offset": 67108864, "speed": 0, "type": "mirror"}}
-{'execute': 'block-job-complete', 'arguments': {'device': 'src'}}
+{'execute': 'block-job-complete',
+ 'arguments': {'device': 'src'}}
{"return": {}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
diff --git a/tests/qemu-iotests/095.out b/tests/qemu-iotests/095.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/095.out
+++ b/tests/qemu-iotests/095.out
@@ -XXX,XX +XXX,XX @@ virtual size: 5 MiB (5242880 bytes)

{ 'execute': 'qmp_capabilities' }
{"return": {}}
-{ 'execute': 'block-commit', 'arguments': { 'device': 'test', 'top': 'TEST_DIR/t.IMGFMT.snp1' } }
+{ 'execute': 'block-commit',
+ 'arguments': { 'device': 'test',
+ 'top': 'TEST_DIR/t.IMGFMT.snp1' } }
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "test"}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "test"}}
{"return": {}}
diff --git a/tests/qemu-iotests/109.out b/tests/qemu-iotests/109.out
347
index XXXXXXX..XXXXXXX 100644
348
--- a/tests/qemu-iotests/109.out
349
+++ b/tests/qemu-iotests/109.out
350
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
351
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
352
{ 'execute': 'qmp_capabilities' }
353
{"return": {}}
354
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
355
+{'execute':'drive-mirror', 'arguments':{
356
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
357
+ 'mode': 'existing', 'sync': 'full'}}
358
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
359
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
360
Specify the 'raw' format explicitly to remove the restrictions.
361
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
362
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
363
{ 'execute': 'qmp_capabilities' }
364
{"return": {}}
365
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
366
+{'execute':'drive-mirror', 'arguments':{
367
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
368
+ 'mode': 'existing', 'sync': 'full'}}
369
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
370
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
371
{"return": {}}
372
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
373
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
374
{ 'execute': 'qmp_capabilities' }
375
{"return": {}}
376
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
377
+{'execute':'drive-mirror', 'arguments':{
378
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
379
+ 'mode': 'existing', 'sync': 'full'}}
380
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
381
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
382
Specify the 'raw' format explicitly to remove the restrictions.
383
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
384
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
385
{ 'execute': 'qmp_capabilities' }
386
{"return": {}}
387
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
388
+{'execute':'drive-mirror', 'arguments':{
389
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
390
+ 'mode': 'existing', 'sync': 'full'}}
391
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
392
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
393
{"return": {}}
394
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
395
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
396
{ 'execute': 'qmp_capabilities' }
397
{"return": {}}
398
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
399
+{'execute':'drive-mirror', 'arguments':{
400
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
401
+ 'mode': 'existing', 'sync': 'full'}}
402
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
403
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
404
Specify the 'raw' format explicitly to remove the restrictions.
405
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
406
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
407
{ 'execute': 'qmp_capabilities' }
408
{"return": {}}
409
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
410
+{'execute':'drive-mirror', 'arguments':{
411
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
412
+ 'mode': 'existing', 'sync': 'full'}}
413
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
414
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
415
{"return": {}}
416
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
417
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
418
{ 'execute': 'qmp_capabilities' }
419
{"return": {}}
420
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
421
+{'execute':'drive-mirror', 'arguments':{
422
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
423
+ 'mode': 'existing', 'sync': 'full'}}
424
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
425
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
426
Specify the 'raw' format explicitly to remove the restrictions.
427
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
428
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
429
{ 'execute': 'qmp_capabilities' }
430
{"return": {}}
431
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
432
+{'execute':'drive-mirror', 'arguments':{
433
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
434
+ 'mode': 'existing', 'sync': 'full'}}
435
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
436
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
437
{"return": {}}
438
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
439
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
440
{ 'execute': 'qmp_capabilities' }
441
{"return": {}}
442
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
443
+{'execute':'drive-mirror', 'arguments':{
444
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
445
+ 'mode': 'existing', 'sync': 'full'}}
446
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
447
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
448
Specify the 'raw' format explicitly to remove the restrictions.
449
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
450
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
451
{ 'execute': 'qmp_capabilities' }
452
{"return": {}}
453
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
454
+{'execute':'drive-mirror', 'arguments':{
455
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
456
+ 'mode': 'existing', 'sync': 'full'}}
457
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
458
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
459
{"return": {}}
460
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
461
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
462
{ 'execute': 'qmp_capabilities' }
463
{"return": {}}
464
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
465
+{'execute':'drive-mirror', 'arguments':{
466
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
467
+ 'mode': 'existing', 'sync': 'full'}}
468
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
469
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
470
Specify the 'raw' format explicitly to remove the restrictions.
471
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
472
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
473
{ 'execute': 'qmp_capabilities' }
474
{"return": {}}
475
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
476
+{'execute':'drive-mirror', 'arguments':{
477
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
478
+ 'mode': 'existing', 'sync': 'full'}}
479
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
480
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
481
{"return": {}}
482
@@ -XXX,XX +XXX,XX @@ Images are identical.
483
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
484
{ 'execute': 'qmp_capabilities' }
485
{"return": {}}
486
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
487
+{'execute':'drive-mirror', 'arguments':{
488
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
489
+ 'mode': 'existing', 'sync': 'full'}}
490
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
491
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
492
Specify the 'raw' format explicitly to remove the restrictions.
493
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
494
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
495
{ 'execute': 'qmp_capabilities' }
496
{"return": {}}
497
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
498
+{'execute':'drive-mirror', 'arguments':{
499
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
500
+ 'mode': 'existing', 'sync': 'full'}}
501
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
502
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
503
{"return": {}}
504
@@ -XXX,XX +XXX,XX @@ Images are identical.
505
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
506
{ 'execute': 'qmp_capabilities' }
507
{"return": {}}
508
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
509
+{'execute':'drive-mirror', 'arguments':{
510
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
511
+ 'mode': 'existing', 'sync': 'full'}}
512
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
513
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
514
Specify the 'raw' format explicitly to remove the restrictions.
515
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
516
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
517
{ 'execute': 'qmp_capabilities' }
518
{"return": {}}
519
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
520
+{'execute':'drive-mirror', 'arguments':{
521
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
522
+ 'mode': 'existing', 'sync': 'full'}}
523
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
524
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
525
{"return": {}}
526
@@ -XXX,XX +XXX,XX @@ Images are identical.
527
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
528
{ 'execute': 'qmp_capabilities' }
529
{"return": {}}
530
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
531
+{'execute':'drive-mirror', 'arguments':{
532
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
533
+ 'mode': 'existing', 'sync': 'full'}}
534
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
535
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
536
Specify the 'raw' format explicitly to remove the restrictions.
537
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
538
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
539
{ 'execute': 'qmp_capabilities' }
540
{"return": {}}
541
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
542
+{'execute':'drive-mirror', 'arguments':{
543
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
544
+ 'mode': 'existing', 'sync': 'full'}}
545
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
546
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
547
{"return": {}}
548
@@ -XXX,XX +XXX,XX @@ Images are identical.
549
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
550
{ 'execute': 'qmp_capabilities' }
551
{"return": {}}
552
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
553
+{'execute':'drive-mirror', 'arguments':{
554
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
555
+ 'mode': 'existing', 'sync': 'full'}}
556
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
557
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
558
Specify the 'raw' format explicitly to remove the restrictions.
559
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
560
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
561
{ 'execute': 'qmp_capabilities' }
562
{"return": {}}
563
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
564
+{'execute':'drive-mirror', 'arguments':{
565
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
566
+ 'mode': 'existing', 'sync': 'full'}}
567
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
568
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
569
{"return": {}}
570
@@ -XXX,XX +XXX,XX @@ Images are identical.
571
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
572
{ 'execute': 'qmp_capabilities' }
573
{"return": {}}
574
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
575
+{'execute':'drive-mirror', 'arguments':{
576
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
577
+ 'mode': 'existing', 'sync': 'full'}}
578
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
579
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
580
Specify the 'raw' format explicitly to remove the restrictions.
581
@@ -XXX,XX +XXX,XX @@ WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed
582
Images are identical.
583
{ 'execute': 'qmp_capabilities' }
584
{"return": {}}
585
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
586
+{'execute':'drive-mirror', 'arguments':{
587
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
588
+ 'mode': 'existing', 'sync': 'full'}}
589
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
590
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
591
{"return": {}}
592
diff --git a/tests/qemu-iotests/117.out b/tests/qemu-iotests/117.out
593
index XXXXXXX..XXXXXXX 100644
594
--- a/tests/qemu-iotests/117.out
595
+++ b/tests/qemu-iotests/117.out
596
@@ -XXX,XX +XXX,XX @@ QA output created by 117
597
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=65536
598
{ 'execute': 'qmp_capabilities' }
599
{"return": {}}
600
-{ 'execute': 'blockdev-add', 'arguments': { 'node-name': 'protocol', 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' } }
601
+{ 'execute': 'blockdev-add',
602
+ 'arguments': { 'node-name': 'protocol',
603
+ 'driver': 'file',
604
+ 'filename': 'TEST_DIR/t.IMGFMT' } }
605
{"return": {}}
606
-{ 'execute': 'blockdev-add', 'arguments': { 'node-name': 'format', 'driver': 'IMGFMT', 'file': 'protocol' } }
607
+{ 'execute': 'blockdev-add',
608
+ 'arguments': { 'node-name': 'format',
609
+ 'driver': 'IMGFMT',
610
+ 'file': 'protocol' } }
611
{"return": {}}
612
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io format "write -P 42 0 64k"' } }
613
+{ 'execute': 'human-monitor-command',
614
+ 'arguments': { 'command-line': 'qemu-io format "write -P 42 0 64k"' } }
615
wrote 65536/65536 bytes at offset 0
616
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
617
{"return": ""}
618
diff --git a/tests/qemu-iotests/127.out b/tests/qemu-iotests/127.out
619
index XXXXXXX..XXXXXXX 100644
620
--- a/tests/qemu-iotests/127.out
621
+++ b/tests/qemu-iotests/127.out
622
@@ -XXX,XX +XXX,XX @@ wrote 42/42 bytes at offset 0
623
42 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
624
{ 'execute': 'qmp_capabilities' }
625
{"return": {}}
626
-{ 'execute': 'drive-mirror', 'arguments': { 'job-id': 'mirror', 'device': 'source', 'target': 'TEST_DIR/t.IMGFMT.overlay1', 'mode': 'existing', 'sync': 'top' } }
627
+{ 'execute': 'drive-mirror',
628
+ 'arguments': {
629
+ 'job-id': 'mirror',
630
+ 'device': 'source',
631
+ 'target': 'TEST_DIR/t.IMGFMT.overlay1',
632
+ 'mode': 'existing',
633
+ 'sync': 'top'
634
+ } }
635
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "mirror"}}
636
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "mirror"}}
637
{"return": {}}
638
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "mirror"}}
639
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "mirror", "len": 65536, "offset": 65536, "speed": 0, "type": "mirror"}}
640
-{ 'execute': 'block-job-complete', 'arguments': { 'device': 'mirror' } }
641
+{ 'execute': 'block-job-complete',
642
+ 'arguments': { 'device': 'mirror' } }
643
{"return": {}}
644
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "mirror"}}
645
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "mirror"}}
646
diff --git a/tests/qemu-iotests/140.out b/tests/qemu-iotests/140.out
647
index XXXXXXX..XXXXXXX 100644
648
--- a/tests/qemu-iotests/140.out
649
+++ b/tests/qemu-iotests/140.out
650
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 0
651
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
652
{ 'execute': 'qmp_capabilities' }
653
{"return": {}}
654
-{ 'execute': 'nbd-server-start', 'arguments': { 'addr': { 'type': 'unix', 'data': { 'path': 'SOCK_DIR/nbd' }}}}
655
+{ 'execute': 'nbd-server-start',
656
+ 'arguments': { 'addr': { 'type': 'unix',
657
+ 'data': { 'path': 'SOCK_DIR/nbd' }}}}
658
{"return": {}}
659
-{ 'execute': 'nbd-server-add', 'arguments': { 'device': 'drv' }}
660
+{ 'execute': 'nbd-server-add',
661
+ 'arguments': { 'device': 'drv' }}
662
{"return": {}}
663
read 65536/65536 bytes at offset 0
664
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
665
-{ 'execute': 'eject', 'arguments': { 'device': 'drv' }}
666
+{ 'execute': 'eject',
667
+ 'arguments': { 'device': 'drv' }}
668
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "drv"}}
669
qemu-io: can't open device nbd+unix:///drv?socket=SOCK_DIR/nbd: Requested export not available
670
server reported: export 'drv' not present
671
diff --git a/tests/qemu-iotests/141.out b/tests/qemu-iotests/141.out
672
index XXXXXXX..XXXXXXX 100644
673
--- a/tests/qemu-iotests/141.out
674
+++ b/tests/qemu-iotests/141.out
675
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/m.
676
677
=== Testing drive-backup ===
678
679
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
680
-{"return": {}}
681
-{'execute': 'drive-backup', 'arguments': {'job-id': 'job0', 'device': 'drv0', 'target': 'TEST_DIR/o.IMGFMT', 'format': 'IMGFMT', 'sync': 'none'}}
682
+{'execute': 'blockdev-add',
683
+ 'arguments': {
684
+ 'node-name': 'drv0',
685
+ 'driver': 'IMGFMT',
686
+ 'file': {
687
+ 'driver': 'file',
688
+ 'filename': 'TEST_DIR/t.IMGFMT'
689
+ }}}
690
+{"return": {}}
691
+{'execute': 'drive-backup',
692
+'arguments': {'job-id': 'job0',
693
+'device': 'drv0',
694
+'target': 'TEST_DIR/o.IMGFMT',
695
+'format': 'IMGFMT',
696
+'sync': 'none'}}
697
Formatting 'TEST_DIR/o.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
698
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
699
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
700
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "job0"}}
701
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
702
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
703
+{'execute': 'blockdev-del',
704
+ 'arguments': {'node-name': 'drv0'}}
705
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: node is used as backing hd of 'NODE_NAME'"}}
706
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
707
+{'execute': 'block-job-cancel',
708
+ 'arguments': {'device': 'job0'}}
709
{"return": {}}
710
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
711
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "job0", "len": 1048576, "offset": 0, "speed": 0, "type": "backup"}}
712
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
713
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
714
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
715
+{'execute': 'blockdev-del',
716
+ 'arguments': {'node-name': 'drv0'}}
717
{"return": {}}
718
719
=== Testing drive-mirror ===
720
721
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
722
-{"return": {}}
723
-{'execute': 'drive-mirror', 'arguments': {'job-id': 'job0', 'device': 'drv0', 'target': 'TEST_DIR/o.IMGFMT', 'format': 'IMGFMT', 'sync': 'none'}}
724
+{'execute': 'blockdev-add',
725
+ 'arguments': {
726
+ 'node-name': 'drv0',
727
+ 'driver': 'IMGFMT',
728
+ 'file': {
729
+ 'driver': 'file',
730
+ 'filename': 'TEST_DIR/t.IMGFMT'
731
+ }}}
732
+{"return": {}}
733
+{'execute': 'drive-mirror',
734
+'arguments': {'job-id': 'job0',
735
+'device': 'drv0',
736
+'target': 'TEST_DIR/o.IMGFMT',
737
+'format': 'IMGFMT',
738
+'sync': 'none'}}
739
Formatting 'TEST_DIR/o.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
740
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
741
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
742
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "job0"}}
743
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "mirror"}}
744
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
745
+{'execute': 'blockdev-del',
746
+ 'arguments': {'node-name': 'drv0'}}
747
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: mirror"}}
748
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
749
+{'execute': 'block-job-cancel',
750
+ 'arguments': {'device': 'job0'}}
751
{"return": {}}
752
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "job0"}}
753
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "job0"}}
754
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "mirror"}}
755
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
756
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
757
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
758
+{'execute': 'blockdev-del',
759
+ 'arguments': {'node-name': 'drv0'}}
760
{"return": {}}
761
762
=== Testing active block-commit ===
763
764
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
765
-{"return": {}}
766
-{'execute': 'block-commit', 'arguments': {'job-id': 'job0', 'device': 'drv0'}}
767
+{'execute': 'blockdev-add',
768
+ 'arguments': {
769
+ 'node-name': 'drv0',
770
+ 'driver': 'IMGFMT',
771
+ 'file': {
772
+ 'driver': 'file',
773
+ 'filename': 'TEST_DIR/t.IMGFMT'
774
+ }}}
775
+{"return": {}}
776
+{'execute': 'block-commit',
777
+'arguments': {'job-id': 'job0', 'device': 'drv0'}}
778
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
779
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
780
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "job0"}}
781
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}
782
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
783
+{'execute': 'blockdev-del',
784
+ 'arguments': {'node-name': 'drv0'}}
785
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: commit"}}
786
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
787
+{'execute': 'block-job-cancel',
788
+ 'arguments': {'device': 'job0'}}
789
{"return": {}}
790
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "job0"}}
791
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "job0"}}
792
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}
793
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
794
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
795
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
796
+{'execute': 'blockdev-del',
797
+ 'arguments': {'node-name': 'drv0'}}
798
{"return": {}}
799
800
=== Testing non-active block-commit ===
801
802
wrote 1048576/1048576 bytes at offset 0
803
1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
804
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
805
-{"return": {}}
806
-{'execute': 'block-commit', 'arguments': {'job-id': 'job0', 'device': 'drv0', 'top': 'TEST_DIR/m.IMGFMT', 'speed': 1}}
807
+{'execute': 'blockdev-add',
808
+ 'arguments': {
809
+ 'node-name': 'drv0',
810
+ 'driver': 'IMGFMT',
811
+ 'file': {
812
+ 'driver': 'file',
813
+ 'filename': 'TEST_DIR/t.IMGFMT'
814
+ }}}
815
+{"return": {}}
816
+{'execute': 'block-commit',
817
+'arguments': {'job-id': 'job0',
818
+'device': 'drv0',
819
+'top': 'TEST_DIR/m.IMGFMT',
820
+'speed': 1}}
821
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
822
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
823
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
824
+{'execute': 'blockdev-del',
825
+ 'arguments': {'node-name': 'drv0'}}
826
{"error": {"class": "GenericError", "desc": "Node drv0 is in use"}}
827
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
828
+{'execute': 'block-job-cancel',
829
+ 'arguments': {'device': 'job0'}}
830
{"return": {}}
831
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
832
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "job0", "len": 1048576, "offset": 524288, "speed": 1, "type": "commit"}}
833
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
834
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
835
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
836
+{'execute': 'blockdev-del',
837
+ 'arguments': {'node-name': 'drv0'}}
838
{"return": {}}
839
840
=== Testing block-stream ===
841
842
wrote 1048576/1048576 bytes at offset 0
843
1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
844
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
845
-{"return": {}}
846
-{'execute': 'block-stream', 'arguments': {'job-id': 'job0', 'device': 'drv0', 'speed': 1}}
847
+{'execute': 'blockdev-add',
848
+ 'arguments': {
849
+ 'node-name': 'drv0',
850
+ 'driver': 'IMGFMT',
851
+ 'file': {
852
+ 'driver': 'file',
853
+ 'filename': 'TEST_DIR/t.IMGFMT'
854
+ }}}
855
+{"return": {}}
856
+{'execute': 'block-stream',
857
+'arguments': {'job-id': 'job0',
858
+'device': 'drv0',
859
+'speed': 1}}
860
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
861
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
862
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
863
+{'execute': 'blockdev-del',
864
+ 'arguments': {'node-name': 'drv0'}}
865
{"error": {"class": "GenericError", "desc": "Node drv0 is in use"}}
866
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
867
+{'execute': 'block-job-cancel',
868
+ 'arguments': {'device': 'job0'}}
869
{"return": {}}
870
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
871
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "job0", "len": 1048576, "offset": 524288, "speed": 1, "type": "stream"}}
872
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
873
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
874
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
875
+{'execute': 'blockdev-del',
876
+ 'arguments': {'node-name': 'drv0'}}
877
{"return": {}}
878
*** done
879
diff --git a/tests/qemu-iotests/143.out b/tests/qemu-iotests/143.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/143.out
+++ b/tests/qemu-iotests/143.out
@@ -XXX,XX +XXX,XX @@
QA output created by 143
{ 'execute': 'qmp_capabilities' }
{"return": {}}
-{ 'execute': 'nbd-server-start', 'arguments': { 'addr': { 'type': 'unix', 'data': { 'path': 'SOCK_DIR/nbd' }}}}
+{ 'execute': 'nbd-server-start',
+ 'arguments': { 'addr': { 'type': 'unix',
+ 'data': { 'path': 'SOCK_DIR/nbd' }}}}
{"return": {}}
qemu-io: can't open device nbd+unix:///no_such_export?socket=SOCK_DIR/nbd: Requested export not available
server reported: export 'no_such_export' not present
diff --git a/tests/qemu-iotests/144.out b/tests/qemu-iotests/144.out
895
index XXXXXXX..XXXXXXX 100644
896
--- a/tests/qemu-iotests/144.out
897
+++ b/tests/qemu-iotests/144.out
898
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=536870912
899
900
{ 'execute': 'qmp_capabilities' }
901
{"return": {}}
902
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'virtio0', 'snapshot-file':'TEST_DIR/tmp.IMGFMT', 'format': 'IMGFMT' } }
903
+{ 'execute': 'blockdev-snapshot-sync',
904
+ 'arguments': {
905
+ 'device': 'virtio0',
906
+ 'snapshot-file':'TEST_DIR/tmp.IMGFMT',
907
+ 'format': 'IMGFMT'
908
+ }
909
+ }
910
Formatting 'TEST_DIR/tmp.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=536870912 backing_file=TEST_DIR/t.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
911
{"return": {}}
912
913
=== Performing block-commit on active layer ===
914
915
-{ 'execute': 'block-commit', 'arguments': { 'device': 'virtio0' } }
916
+{ 'execute': 'block-commit',
917
+ 'arguments': {
918
+ 'device': 'virtio0'
919
+ }
920
+ }
921
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "virtio0"}}
922
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "virtio0"}}
923
{"return": {}}
924
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "virtio0"}}
925
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "virtio0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}
926
-{ 'execute': 'block-job-complete', 'arguments': { 'device': 'virtio0' } }
927
+{ 'execute': 'block-job-complete',
928
+ 'arguments': {
929
+ 'device': 'virtio0'
930
+ }
931
+ }
932
{"return": {}}
933
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "virtio0"}}
934
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "virtio0"}}
935
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/tmp.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off co
936
937
=== Performing Live Snapshot 2 ===
938
939
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'virtio0', 'snapshot-file':'TEST_DIR/tmp2.IMGFMT', 'format': 'IMGFMT' } }
940
+{ 'execute': 'blockdev-snapshot-sync',
941
+ 'arguments': {
942
+ 'device': 'virtio0',
943
+ 'snapshot-file':'TEST_DIR/tmp2.IMGFMT',
944
+ 'format': 'IMGFMT'
945
+ }
946
+ }
947
Formatting 'TEST_DIR/tmp2.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=536870912 backing_file=TEST_DIR/t.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
948
{"return": {}}
949
*** done
950
diff --git a/tests/qemu-iotests/153.out b/tests/qemu-iotests/153.out
951
index XXXXXXX..XXXXXXX 100644
952
--- a/tests/qemu-iotests/153.out
953
+++ b/tests/qemu-iotests/153.out
954
@@ -XXX,XX +XXX,XX @@ _qemu_img_wrapper commit -b TEST_DIR/t.qcow2.b TEST_DIR/t.qcow2.c
955
{ 'execute': 'qmp_capabilities' }
956
{"return": {}}
957
Adding drive
958
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_add 0 if=none,id=d0,file=TEST_DIR/t.IMGFMT' } }
959
+{ 'execute': 'human-monitor-command',
960
+ 'arguments': { 'command-line': 'drive_add 0 if=none,id=d0,file=TEST_DIR/t.IMGFMT' } }
961
{"return": "OKrn"}
962
963
_qemu_io_wrapper TEST_DIR/t.qcow2 -c write 0 512
964
@@ -XXX,XX +XXX,XX @@ Creating overlay with qemu-img when the guest is running should be allowed
965
966
_qemu_img_wrapper create -f qcow2 -b TEST_DIR/t.qcow2 -F qcow2 TEST_DIR/t.qcow2.overlay
967
== Closing an image should unlock it ==
968
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_del d0' } }
969
+{ 'execute': 'human-monitor-command',
970
+ 'arguments': { 'command-line': 'drive_del d0' } }
971
{"return": ""}
972
973
_qemu_io_wrapper TEST_DIR/t.qcow2 -c write 0 512
974
Adding two and closing one
975
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_add 0 if=none,id=d0,file=TEST_DIR/t.IMGFMT,readonly=on' } }
976
+{ 'execute': 'human-monitor-command',
977
+ 'arguments': { 'command-line': 'drive_add 0 if=none,id=d0,file=TEST_DIR/t.IMGFMT,readonly=on' } }
978
{"return": "OKrn"}
979
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_add 0 if=none,id=d1,file=TEST_DIR/t.IMGFMT,readonly=on' } }
980
+{ 'execute': 'human-monitor-command',
981
+ 'arguments': { 'command-line': 'drive_add 0 if=none,id=d1,file=TEST_DIR/t.IMGFMT,readonly=on' } }
982
{"return": "OKrn"}
983
984
_qemu_img_wrapper info TEST_DIR/t.qcow2
985
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_del d0' } }
986
+{ 'execute': 'human-monitor-command',
987
+ 'arguments': { 'command-line': 'drive_del d0' } }
988
{"return": ""}
989
990
_qemu_io_wrapper TEST_DIR/t.qcow2 -c write 0 512
991
qemu-io: can't open device TEST_DIR/t.qcow2: Failed to get "write" lock
992
Is another process using the image [TEST_DIR/t.qcow2]?
993
Closing the other
994
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_del d1' } }
995
+{ 'execute': 'human-monitor-command',
996
+ 'arguments': { 'command-line': 'drive_del d1' } }
997
{"return": ""}
998
999
_qemu_io_wrapper TEST_DIR/t.qcow2 -c write 0 512
1000
diff --git a/tests/qemu-iotests/156.out b/tests/qemu-iotests/156.out
1001
index XXXXXXX..XXXXXXX 100644
1002
--- a/tests/qemu-iotests/156.out
1003
+++ b/tests/qemu-iotests/156.out
1004
@@ -XXX,XX +XXX,XX @@ wrote 196608/196608 bytes at offset 65536
1005
{ 'execute': 'qmp_capabilities' }
1006
{"return": {}}
1007
Formatting 'TEST_DIR/t.IMGFMT.overlay', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
1008
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'source', 'snapshot-file': 'TEST_DIR/t.IMGFMT.overlay', 'format': 'IMGFMT', 'mode': 'existing' } }
1009
+{ 'execute': 'blockdev-snapshot-sync',
1010
+ 'arguments': { 'device': 'source',
1011
+ 'snapshot-file': 'TEST_DIR/t.IMGFMT.overlay',
1012
+ 'format': 'IMGFMT',
1013
+ 'mode': 'existing' } }
1014
{"return": {}}
1015
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "write -P 3 128k 128k"' } }
1016
+{ 'execute': 'human-monitor-command',
1017
+ 'arguments': { 'command-line':
1018
+ 'qemu-io source "write -P 3 128k 128k"' } }
1019
wrote 131072/131072 bytes at offset 131072
1020
128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1021
{"return": ""}
1022
Formatting 'TEST_DIR/t.IMGFMT.target.overlay', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT.target backing_fmt=IMGFMT
1023
-{ 'execute': 'drive-mirror', 'arguments': { 'device': 'source', 'target': 'TEST_DIR/t.IMGFMT.target.overlay', 'mode': 'existing', 'sync': 'top' } }
1024
+{ 'execute': 'drive-mirror',
1025
+ 'arguments': { 'device': 'source',
1026
+ 'target': 'TEST_DIR/t.IMGFMT.target.overlay',
1027
+ 'mode': 'existing',
1028
+ 'sync': 'top' } }
1029
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "source"}}
1030
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "source"}}
1031
{"return": {}}
1032
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "source"}}
1033
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "source", "len": 131072, "offset": 131072, "speed": 0, "type": "mirror"}}
1034
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "write -P 4 192k 64k"' } }
1035
+{ 'execute': 'human-monitor-command',
1036
+ 'arguments': { 'command-line':
1037
+ 'qemu-io source "write -P 4 192k 64k"' } }
1038
wrote 65536/65536 bytes at offset 196608
1039
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1040
{"return": ""}
1041
-{ 'execute': 'block-job-complete', 'arguments': { 'device': 'source' } }
1042
+{ 'execute': 'block-job-complete',
1043
+ 'arguments': { 'device': 'source' } }
1044
{"return": {}}
1045
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "source"}}
1046
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "source"}}
1047
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 196608
1048
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "source"}}
1049
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "source"}}
1050
1051
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "read -P 1 0k 64k"' } }
1052
+{ 'execute': 'human-monitor-command',
1053
+ 'arguments': { 'command-line':
1054
+ 'qemu-io source "read -P 1 0k 64k"' } }
1055
read 65536/65536 bytes at offset 0
1056
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1057
{"return": ""}
1058
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "read -P 2 64k 64k"' } }
1059
+{ 'execute': 'human-monitor-command',
1060
+ 'arguments': { 'command-line':
1061
+ 'qemu-io source "read -P 2 64k 64k"' } }
1062
read 65536/65536 bytes at offset 65536
1063
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1064
{"return": ""}
1065
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "read -P 3 128k 64k"' } }
1066
+{ 'execute': 'human-monitor-command',
1067
+ 'arguments': { 'command-line':
1068
+ 'qemu-io source "read -P 3 128k 64k"' } }
1069
read 65536/65536 bytes at offset 131072
1070
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1071
{"return": ""}
1072
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "read -P 4 192k 64k"' } }
1073
+{ 'execute': 'human-monitor-command',
1074
+ 'arguments': { 'command-line':
1075
+ 'qemu-io source "read -P 4 192k 64k"' } }
1076
read 65536/65536 bytes at offset 196608
1077
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1078
{"return": ""}
1079
diff --git a/tests/qemu-iotests/161.out b/tests/qemu-iotests/161.out
1080
index XXXXXXX..XXXXXXX 100644
1081
--- a/tests/qemu-iotests/161.out
1082
+++ b/tests/qemu-iotests/161.out
1083
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.
1084
1085
{ 'execute': 'qmp_capabilities' }
1086
{"return": {}}
1087
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1088
+{ 'execute': 'human-monitor-command',
1089
+ 'arguments': { 'command-line':
1090
+ 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1091
{"return": ""}
1092
1093
*** Stream and then change an option on the backing file
1094
1095
{ 'execute': 'qmp_capabilities' }
1096
{"return": {}}
1097
-{ 'execute': 'block-stream', 'arguments': { 'device': 'none0', 'base': 'TEST_DIR/t.IMGFMT.base' } }
1098
+{ 'execute': 'block-stream', 'arguments': { 'device': 'none0',
1099
+ 'base': 'TEST_DIR/t.IMGFMT.base' } }
1100
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "none0"}}
1101
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "none0"}}
1102
{"return": {}}
1103
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1104
+{ 'execute': 'human-monitor-command',
1105
+ 'arguments': { 'command-line':
1106
+ 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1107
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "none0"}}
1108
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "none0"}}
1109
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "none0", "len": 1048576, "offset": 1048576, "speed": 0, "type": "stream"}}
1110
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT.int', fmt=IMGFMT size=1048576 backing_file=TEST_DI
1111
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT.int backing_fmt=IMGFMT
1112
{ 'execute': 'qmp_capabilities' }
1113
{"return": {}}
1114
-{ 'execute': 'block-commit', 'arguments': { 'device': 'none0', 'top': 'TEST_DIR/t.IMGFMT.int' } }
1115
+{ 'execute': 'block-commit', 'arguments': { 'device': 'none0',
1116
+ 'top': 'TEST_DIR/t.IMGFMT.int' } }
1117
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "none0"}}
1118
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "none0"}}
1119
{"return": {}}
1120
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1121
+{ 'execute': 'human-monitor-command',
1122
+ 'arguments': { 'command-line':
1123
+ 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1124
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "none0"}}
1125
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "none0"}}
1126
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "none0", "len": 1048576, "offset": 1048576, "speed": 0, "type": "commit"}}
1127
diff --git a/tests/qemu-iotests/173.out b/tests/qemu-iotests/173.out
1128
index XXXXXXX..XXXXXXX 100644
1129
--- a/tests/qemu-iotests/173.out
1130
+++ b/tests/qemu-iotests/173.out
1131
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/image.snp1', fmt=IMGFMT size=104857600
1132
1133
{ 'execute': 'qmp_capabilities' }
1134
{"return": {}}
1135
-{ 'arguments': { 'device': 'disk2', 'format': 'IMGFMT', 'mode': 'existing', 'snapshot-file': 'TEST_DIR/image.snp1', 'snapshot-node-name': 'snp1' }, 'execute': 'blockdev-snapshot-sync' }
1136
+{ 'arguments': {
1137
+ 'device': 'disk2',
1138
+ 'format': 'IMGFMT',
1139
+ 'mode': 'existing',
1140
+ 'snapshot-file': 'TEST_DIR/image.snp1',
1141
+ 'snapshot-node-name': 'snp1'
1142
+ },
1143
+ 'execute': 'blockdev-snapshot-sync'
1144
+ }
1145
{"return": {}}
1146
-{ 'arguments': { 'backing-file': 'image.base', 'device': 'disk2', 'image-node-name': 'snp1' }, 'execute': 'change-backing-file' }
1147
+{ 'arguments': {
1148
+ 'backing-file': 'image.base',
1149
+ 'device': 'disk2',
1150
+ 'image-node-name': 'snp1'
1151
+ },
1152
+ 'execute': 'change-backing-file'
1153
+ }
1154
{"return": {}}
1155
-{ 'arguments': { 'base': 'TEST_DIR/image.base', 'device': 'disk2' }, 'execute': 'block-stream' }
1156
+{ 'arguments': {
1157
+ 'base': 'TEST_DIR/image.base',
1158
+ 'device': 'disk2'
1159
+ },
1160
+ 'execute': 'block-stream'
1161
+ }
1162
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk2"}}
1163
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk2"}}
1164
{"return": {}}
1165
diff --git a/tests/qemu-iotests/182.out b/tests/qemu-iotests/182.out
1166
index XXXXXXX..XXXXXXX 100644
1167
--- a/tests/qemu-iotests/182.out
1168
+++ b/tests/qemu-iotests/182.out
1169
@@ -XXX,XX +XXX,XX @@ Is another process using the image [TEST_DIR/t.qcow2]?
1170
1171
{'execute': 'qmp_capabilities'}
1172
{"return": {}}
1173
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'node0', 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT', 'locking': 'on' } }
1174
-{"return": {}}
1175
-{'execute': 'blockdev-snapshot-sync', 'arguments': { 'node-name': 'node0', 'snapshot-file': 'TEST_DIR/t.IMGFMT.overlay', 'snapshot-node-name': 'node1' } }
1176
+{'execute': 'blockdev-add',
1177
+ 'arguments': {
1178
+ 'node-name': 'node0',
1179
+ 'driver': 'file',
1180
+ 'filename': 'TEST_DIR/t.IMGFMT',
1181
+ 'locking': 'on'
1182
+ } }
1183
+{"return": {}}
1184
+{'execute': 'blockdev-snapshot-sync',
1185
+ 'arguments': {
1186
+ 'node-name': 'node0',
1187
+ 'snapshot-file': 'TEST_DIR/t.IMGFMT.overlay',
1188
+ 'snapshot-node-name': 'node1'
1189
+ } }
1190
Formatting 'TEST_DIR/t.qcow2.overlay', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=197120 backing_file=TEST_DIR/t.qcow2 backing_fmt=file lazy_refcounts=off refcount_bits=16
1191
{"return": {}}
1192
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'node1', 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT', 'locking': 'on' } }
1193
-{"return": {}}
1194
-{'execute': 'nbd-server-start', 'arguments': { 'addr': { 'type': 'unix', 'data': { 'path': 'SOCK_DIR/nbd.socket' } } } }
1195
-{"return": {}}
1196
-{'execute': 'nbd-server-add', 'arguments': { 'device': 'node1' } }
1197
+{'execute': 'blockdev-add',
1198
+ 'arguments': {
1199
+ 'node-name': 'node1',
1200
+ 'driver': 'file',
1201
+ 'filename': 'TEST_DIR/t.IMGFMT',
1202
+ 'locking': 'on'
1203
+ } }
1204
+{"return": {}}
1205
+{'execute': 'nbd-server-start',
1206
+ 'arguments': {
1207
+ 'addr': {
1208
+ 'type': 'unix',
1209
+ 'data': {
1210
+ 'path': 'SOCK_DIR/nbd.socket'
1211
+ } } } }
1212
+{"return": {}}
1213
+{'execute': 'nbd-server-add',
1214
+ 'arguments': {
1215
+ 'device': 'node1'
1216
+ } }
1217
{"return": {}}
1218
1219
=== Testing failure to loosen restrictions ===
1220
diff --git a/tests/qemu-iotests/183.out b/tests/qemu-iotests/183.out
1221
index XXXXXXX..XXXXXXX 100644
1222
--- a/tests/qemu-iotests/183.out
1223
+++ b/tests/qemu-iotests/183.out
1224
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT.dest', fmt=IMGFMT size=67108864
1225
1226
=== Write something on the source ===
1227
1228
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "write -P 0x55 0 64k"' } }
1229
+{ 'execute': 'human-monitor-command',
1230
+ 'arguments': { 'command-line':
1231
+ 'qemu-io disk "write -P 0x55 0 64k"' } }
1232
wrote 65536/65536 bytes at offset 0
1233
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1234
{"return": ""}
1235
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "read -P 0x55 0 64k"' } }
1236
+{ 'execute': 'human-monitor-command',
1237
+ 'arguments': { 'command-line':
1238
+ 'qemu-io disk "read -P 0x55 0 64k"' } }
1239
read 65536/65536 bytes at offset 0
1240
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1241
{"return": ""}
1242
1243
=== Do block migration to destination ===
1244
1245
-{ 'execute': 'migrate', 'arguments': { 'uri': 'unix:SOCK_DIR/migrate', 'blk': true } }
1246
+{ 'execute': 'migrate',
1247
+ 'arguments': { 'uri': 'unix:SOCK_DIR/migrate', 'blk': true } }
1248
{"return": {}}
1249
{ 'execute': 'query-status' }
1250
{"return": {"status": "postmigrate", "singlestep": false, "running": false}}
1251
@@ -XXX,XX +XXX,XX @@ read 65536/65536 bytes at offset 0
1252
{ 'execute': 'query-status' }
1253
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "RESUME"}
1254
{"return": {"status": "running", "singlestep": false, "running": true}}
1255
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "read -P 0x55 0 64k"' } }
1256
+{ 'execute': 'human-monitor-command',
1257
+ 'arguments': { 'command-line':
1258
+ 'qemu-io disk "read -P 0x55 0 64k"' } }
1259
read 65536/65536 bytes at offset 0
1260
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1261
{"return": ""}
1262
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "write -P 0x66 1M 64k"' } }
1263
+{ 'execute': 'human-monitor-command',
1264
+ 'arguments': { 'command-line':
1265
+ 'qemu-io disk "write -P 0x66 1M 64k"' } }
1266
wrote 65536/65536 bytes at offset 1048576
1267
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1268
{"return": ""}
1269
diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
1270
index XXXXXXX..XXXXXXX 100644
1271
--- a/tests/qemu-iotests/185.out
1272
+++ b/tests/qemu-iotests/185.out
1273
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=67108864
1274
1275
=== Creating backing chain ===
1276
1277
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'disk', 'snapshot-file': 'TEST_DIR/t.IMGFMT.mid', 'format': 'IMGFMT', 'mode': 'absolute-paths' } }
1278
+{ 'execute': 'blockdev-snapshot-sync',
1279
+ 'arguments': { 'device': 'disk',
1280
+ 'snapshot-file': 'TEST_DIR/t.IMGFMT.mid',
1281
+ 'format': 'IMGFMT',
1282
+ 'mode': 'absolute-paths' } }
1283
Formatting 'TEST_DIR/t.qcow2.mid', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 backing_file=TEST_DIR/t.qcow2.base backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
1284
{"return": {}}
1285
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "write 0 4M"' } }
1286
+{ 'execute': 'human-monitor-command',
1287
+ 'arguments': { 'command-line':
1288
+ 'qemu-io disk "write 0 4M"' } }
1289
wrote 4194304/4194304 bytes at offset 0
1290
4 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1291
{"return": ""}
1292
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'disk', 'snapshot-file': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'absolute-paths' } }
1293
+{ 'execute': 'blockdev-snapshot-sync',
1294
+ 'arguments': { 'device': 'disk',
1295
+ 'snapshot-file': 'TEST_DIR/t.IMGFMT',
1296
+ 'format': 'IMGFMT',
1297
+ 'mode': 'absolute-paths' } }
1298
Formatting 'TEST_DIR/t.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 backing_file=TEST_DIR/t.qcow2.mid backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
1299
{"return": {}}
1300
1301
=== Start commit job and exit qemu ===
1302
1303
-{ 'execute': 'block-commit', 'arguments': { 'device': 'disk', 'base':'TEST_DIR/t.IMGFMT.base', 'top': 'TEST_DIR/t.IMGFMT.mid', 'speed': 65536 } }
1304
+{ 'execute': 'block-commit',
1305
+ 'arguments': { 'device': 'disk',
1306
+ 'base':'TEST_DIR/t.IMGFMT.base',
1307
+ 'top': 'TEST_DIR/t.IMGFMT.mid',
1308
+ 'speed': 65536 } }
1309
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1310
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1311
{"return": {}}
1312
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off comp
1313
1314
{ 'execute': 'qmp_capabilities' }
1315
{"return": {}}
1316
-{ 'execute': 'block-commit', 'arguments': { 'device': 'disk', 'base':'TEST_DIR/t.IMGFMT.base', 'speed': 65536 } }
1317
+{ 'execute': 'block-commit',
1318
+ 'arguments': { 'device': 'disk',
1319
+ 'base':'TEST_DIR/t.IMGFMT.base',
1320
+ 'speed': 65536 } }
1321
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1322
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1323
{"return": {}}
1324
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off comp
1325
1326
{ 'execute': 'qmp_capabilities' }
1327
{"return": {}}
1328
-{ 'execute': 'drive-mirror', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536 } }
1329
+{ 'execute': 'drive-mirror',
1330
+ 'arguments': { 'device': 'disk',
1331
+ 'target': 'TEST_DIR/t.IMGFMT.copy',
1332
+ 'format': 'IMGFMT',
1333
+ 'sync': 'full',
1334
+ 'speed': 65536 } }
1335
Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 lazy_refcounts=off refcount_bits=16
1336
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1337
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1338
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off
1339
1340
{ 'execute': 'qmp_capabilities' }
1341
{"return": {}}
1342
-{ 'execute': 'drive-backup', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536 } }
1343
+{ 'execute': 'drive-backup',
1344
+ 'arguments': { 'device': 'disk',
1345
+ 'target': 'TEST_DIR/t.IMGFMT.copy',
1346
+ 'format': 'IMGFMT',
1347
+ 'sync': 'full',
1348
+ 'speed': 65536 } }
1349
Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 lazy_refcounts=off refcount_bits=16
1350
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1351
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1352
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off
1353
1354
{ 'execute': 'qmp_capabilities' }
1355
{"return": {}}
1356
-{ 'execute': 'block-stream', 'arguments': { 'device': 'disk', 'speed': 65536 } }
1357
+{ 'execute': 'block-stream',
1358
+ 'arguments': { 'device': 'disk',
1359
+ 'speed': 65536 } }
1360
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1361
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1362
{"return": {}}
1363
diff --git a/tests/qemu-iotests/191.out b/tests/qemu-iotests/191.out
1364
index XXXXXXX..XXXXXXX 100644
1365
--- a/tests/qemu-iotests/191.out
1366
+++ b/tests/qemu-iotests/191.out
1367
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 1048576
1368
1369
=== Perform commit job ===
1370
1371
-{ 'execute': 'block-commit', 'arguments': { 'job-id': 'commit0', 'device': 'top', 'base':'TEST_DIR/t.IMGFMT.base', 'top': 'TEST_DIR/t.IMGFMT.mid' } }
1372
+{ 'execute': 'block-commit',
1373
+ 'arguments': { 'job-id': 'commit0',
1374
+ 'device': 'top',
1375
+ 'base':'TEST_DIR/t.IMGFMT.base',
1376
+ 'top': 'TEST_DIR/t.IMGFMT.mid' } }
1377
{
1378
"timestamp": {
1379
"seconds": TIMESTAMP,
1380
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 1048576
1381
1382
=== Perform commit job ===
1383
1384
-{ 'execute': 'block-commit', 'arguments': { 'job-id': 'commit0', 'device': 'top', 'base':'TEST_DIR/t.IMGFMT.base', 'top': 'TEST_DIR/t.IMGFMT.mid' } }
1385
+{ 'execute': 'block-commit',
1386
+ 'arguments': { 'job-id': 'commit0',
1387
+ 'device': 'top',
1388
+ 'base':'TEST_DIR/t.IMGFMT.base',
1389
+ 'top': 'TEST_DIR/t.IMGFMT.mid' } }
1390
{
1391
"timestamp": {
1392
"seconds": TIMESTAMP,
1393
diff --git a/tests/qemu-iotests/223.out b/tests/qemu-iotests/223.out
1394
index XXXXXXX..XXXXXXX 100644
1395
--- a/tests/qemu-iotests/223.out
1396
+++ b/tests/qemu-iotests/223.out
1397
@@ -XXX,XX +XXX,XX @@ wrote 2097152/2097152 bytes at offset 2097152
1398
1399
{"execute":"qmp_capabilities"}
1400
{"return": {}}
1401
-{"execute":"blockdev-add", "arguments":{"driver":"IMGFMT", "node-name":"n", "file":{"driver":"file", "filename":"TEST_DIR/t.IMGFMT"}}}
1402
+{"execute":"blockdev-add",
1403
+ "arguments":{"driver":"IMGFMT", "node-name":"n",
1404
+ "file":{"driver":"file", "filename":"TEST_DIR/t.IMGFMT"}}}
1405
{"return": {}}
1406
-{"execute":"block-dirty-bitmap-disable", "arguments":{"node":"n", "name":"b"}}
1407
+{"execute":"block-dirty-bitmap-disable",
1408
+ "arguments":{"node":"n", "name":"b"}}
1409
{"return": {}}
1410
1411
=== Set up NBD with normal access ===
1412
1413
-{"execute":"nbd-server-add", "arguments":{"device":"n"}}
1414
+{"execute":"nbd-server-add",
1415
+ "arguments":{"device":"n"}}
1416
{"error": {"class": "GenericError", "desc": "NBD server not running"}}
1417
-{"execute":"nbd-server-start", "arguments":{"addr":{"type":"unix", "data":{"path":"SOCK_DIR/nbd"}}}}
1418
+{"execute":"nbd-server-start",
1419
+ "arguments":{"addr":{"type":"unix",
1420
+ "data":{"path":"SOCK_DIR/nbd"}}}}
1421
{"return": {}}
1422
-{"execute":"nbd-server-start", "arguments":{"addr":{"type":"unix", "data":{"path":"SOCK_DIR/nbd1"}}}}
1423
+{"execute":"nbd-server-start",
1424
+ "arguments":{"addr":{"type":"unix",
1425
+ "data":{"path":"SOCK_DIR/nbd1"}}}}
1426
{"error": {"class": "GenericError", "desc": "NBD server already running"}}
1427
exports available: 0
1428
-{"execute":"nbd-server-add", "arguments":{"device":"n", "bitmap":"b"}}
1429
+{"execute":"nbd-server-add",
1430
+ "arguments":{"device":"n", "bitmap":"b"}}
1431
{"return": {}}
1432
-{"execute":"nbd-server-add", "arguments":{"device":"nosuch"}}
1433
+{"execute":"nbd-server-add",
1434
+ "arguments":{"device":"nosuch"}}
1435
{"error": {"class": "GenericError", "desc": "Cannot find device=nosuch nor node_name=nosuch"}}
1436
-{"execute":"nbd-server-add", "arguments":{"device":"n"}}
1437
+{"execute":"nbd-server-add",
1438
+ "arguments":{"device":"n"}}
1439
{"error": {"class": "GenericError", "desc": "Block export id 'n' is already in use"}}
1440
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "bitmap":"b2"}}
1441
+{"execute":"nbd-server-add",
1442
+ "arguments":{"device":"n", "name":"n2",
1443
+ "bitmap":"b2"}}
1444
{"error": {"class": "GenericError", "desc": "Enabled bitmap 'b2' incompatible with readonly export"}}
1445
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "bitmap":"b3"}}
1446
+{"execute":"nbd-server-add",
1447
+ "arguments":{"device":"n", "name":"n2",
1448
+ "bitmap":"b3"}}
1449
{"error": {"class": "GenericError", "desc": "Bitmap 'b3' is not found"}}
1450
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "writable":true, "description":"some text", "bitmap":"b2"}}
1451
+{"execute":"nbd-server-add",
1452
+ "arguments":{"device":"n", "name":"n2", "writable":true,
1453
+ "description":"some text", "bitmap":"b2"}}
1454
{"return": {}}
1455
exports available: 2
1456
export: 'n'
1457
@@ -XXX,XX +XXX,XX @@ read 2097152/2097152 bytes at offset 2097152
1458
1459
=== End qemu NBD server ===
1460
1461
-{"execute":"nbd-server-remove", "arguments":{"name":"n"}}
1462
+{"execute":"nbd-server-remove",
1463
+ "arguments":{"name":"n"}}
1464
{"return": {}}
1465
-{"execute":"nbd-server-remove", "arguments":{"name":"n2"}}
1466
+{"execute":"nbd-server-remove",
1467
+ "arguments":{"name":"n2"}}
1468
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "n"}}
1469
{"return": {}}
1470
-{"execute":"nbd-server-remove", "arguments":{"name":"n2"}}
1471
+{"execute":"nbd-server-remove",
1472
+ "arguments":{"name":"n2"}}
1473
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "n2"}}
1474
{"error": {"class": "GenericError", "desc": "Export 'n2' is not found"}}
1475
{"execute":"nbd-server-stop"}
1476
@@ -XXX,XX +XXX,XX @@ read 2097152/2097152 bytes at offset 2097152
1477
1478
=== Set up NBD with iothread access ===
1479
1480
-{"execute":"x-blockdev-set-iothread", "arguments":{"node-name":"n", "iothread":"io0"}}
1481
+{"execute":"x-blockdev-set-iothread",
1482
+ "arguments":{"node-name":"n", "iothread":"io0"}}
1483
{"return": {}}
1484
-{"execute":"nbd-server-add", "arguments":{"device":"n"}}
1485
+{"execute":"nbd-server-add",
1486
+ "arguments":{"device":"n"}}
1487
{"error": {"class": "GenericError", "desc": "NBD server not running"}}
1488
-{"execute":"nbd-server-start", "arguments":{"addr":{"type":"unix", "data":{"path":"SOCK_DIR/nbd"}}}}
1489
+{"execute":"nbd-server-start",
1490
+ "arguments":{"addr":{"type":"unix",
1491
+ "data":{"path":"SOCK_DIR/nbd"}}}}
1492
{"return": {}}
1493
-{"execute":"nbd-server-start", "arguments":{"addr":{"type":"unix", "data":{"path":"SOCK_DIR/nbd1"}}}}
1494
+{"execute":"nbd-server-start",
1495
+ "arguments":{"addr":{"type":"unix",
1496
+ "data":{"path":"SOCK_DIR/nbd1"}}}}
1497
{"error": {"class": "GenericError", "desc": "NBD server already running"}}
1498
exports available: 0
1499
-{"execute":"nbd-server-add", "arguments":{"device":"n", "bitmap":"b"}}
1500
+{"execute":"nbd-server-add",
1501
+ "arguments":{"device":"n", "bitmap":"b"}}
1502
{"return": {}}
1503
-{"execute":"nbd-server-add", "arguments":{"device":"nosuch"}}
1504
+{"execute":"nbd-server-add",
1505
+ "arguments":{"device":"nosuch"}}
1506
{"error": {"class": "GenericError", "desc": "Cannot find device=nosuch nor node_name=nosuch"}}
1507
-{"execute":"nbd-server-add", "arguments":{"device":"n"}}
1508
+{"execute":"nbd-server-add",
1509
+ "arguments":{"device":"n"}}
1510
{"error": {"class": "GenericError", "desc": "Block export id 'n' is already in use"}}
1511
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "bitmap":"b2"}}
1512
+{"execute":"nbd-server-add",
1513
+ "arguments":{"device":"n", "name":"n2",
1514
+ "bitmap":"b2"}}
1515
{"error": {"class": "GenericError", "desc": "Enabled bitmap 'b2' incompatible with readonly export"}}
1516
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "bitmap":"b3"}}
1517
+{"execute":"nbd-server-add",
1518
+ "arguments":{"device":"n", "name":"n2",
1519
+ "bitmap":"b3"}}
1520
{"error": {"class": "GenericError", "desc": "Bitmap 'b3' is not found"}}
1521
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "writable":true, "description":"some text", "bitmap":"b2"}}
1522
+{"execute":"nbd-server-add",
1523
+ "arguments":{"device":"n", "name":"n2", "writable":true,
1524
+ "description":"some text", "bitmap":"b2"}}
1525
{"return": {}}
1526
exports available: 2
1527
export: 'n'
1528
@@ -XXX,XX +XXX,XX @@ read 2097152/2097152 bytes at offset 2097152
1529
1530
=== End qemu NBD server ===
1531
1532
-{"execute":"nbd-server-remove", "arguments":{"name":"n"}}
1533
+{"execute":"nbd-server-remove",
1534
+ "arguments":{"name":"n"}}
1535
{"return": {}}
1536
-{"execute":"nbd-server-remove", "arguments":{"name":"n2"}}
1537
+{"execute":"nbd-server-remove",
1538
+ "arguments":{"name":"n2"}}
1539
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "n"}}
1540
{"return": {}}
1541
-{"execute":"nbd-server-remove", "arguments":{"name":"n2"}}
1542
+{"execute":"nbd-server-remove",
1543
+ "arguments":{"name":"n2"}}
1544
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "n2"}}
1545
{"error": {"class": "GenericError", "desc": "Export 'n2' is not found"}}
1546
{"execute":"nbd-server-stop"}
1547
diff --git a/tests/qemu-iotests/229.out b/tests/qemu-iotests/229.out
1548
index XXXXXXX..XXXXXXX 100644
1549
--- a/tests/qemu-iotests/229.out
1550
+++ b/tests/qemu-iotests/229.out
1551
@@ -XXX,XX +XXX,XX @@ wrote 2097152/2097152 bytes at offset 0
1552
1553
=== Starting drive-mirror, causing error & stop ===
1554
1555
-{'execute': 'drive-mirror', 'arguments': {'device': 'testdisk', 'format': 'IMGFMT', 'target': 'blkdebug:TEST_DIR/blkdebug.conf:TEST_DIR/t.IMGFMT.dest', 'sync': 'full', 'mode': 'existing', 'on-source-error': 'stop', 'on-target-error': 'stop' }}
1556
+{'execute': 'drive-mirror',
1557
+ 'arguments': {'device': 'testdisk',
1558
+ 'format': 'IMGFMT',
1559
+ 'target': 'blkdebug:TEST_DIR/blkdebug.conf:TEST_DIR/t.IMGFMT.dest',
1560
+ 'sync': 'full',
1561
+ 'mode': 'existing',
1562
+ 'on-source-error': 'stop',
1563
+ 'on-target-error': 'stop' }}
1564
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "testdisk"}}
1565
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "testdisk"}}
1566
{"return": {}}
1567
@@ -XXX,XX +XXX,XX @@ wrote 2097152/2097152 bytes at offset 0
1568
1569
=== Force cancel job paused in error state ===
1570
1571
-{'execute': 'block-job-cancel', 'arguments': { 'device': 'testdisk', 'force': true}}
1572
+{'execute': 'block-job-cancel',
1573
+ 'arguments': { 'device': 'testdisk',
1574
+ 'force': true}}
1575
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "testdisk"}}
1576
{"return": {}}
1577
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "testdisk"}}
1578
diff --git a/tests/qemu-iotests/249.out b/tests/qemu-iotests/249.out
1579
index XXXXXXX..XXXXXXX 100644
1580
--- a/tests/qemu-iotests/249.out
1581
+++ b/tests/qemu-iotests/249.out
1582
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.
1583
1584
=== Send a write command to a drive opened in read-only mode (1)
1585
1586
-{ 'execute': 'human-monitor-command', 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1587
+{ 'execute': 'human-monitor-command',
1588
+ 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1589
{"return": "Block node is read-onlyrn"}
1590
1591
=== Run block-commit on base using an invalid filter node name
1592
1593
-{ 'execute': 'block-commit', 'arguments': {'job-id': 'job0', 'device': 'none1', 'top-node': 'int', 'filter-node-name': '1234'}}
1594
+{ 'execute': 'block-commit',
1595
+ 'arguments': {'job-id': 'job0', 'device': 'none1', 'top-node': 'int',
1596
+ 'filter-node-name': '1234'}}
1597
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
1598
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
1599
{"error": {"class": "GenericError", "desc": "Invalid node name"}}
1600
1601
=== Send a write command to a drive opened in read-only mode (2)
1602
1603
-{ 'execute': 'human-monitor-command', 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1604
+{ 'execute': 'human-monitor-command',
1605
+ 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1606
{"return": "Block node is read-onlyrn"}
1607
1608
=== Run block-commit on base using the default filter node name
1609
1610
-{ 'execute': 'block-commit', 'arguments': {'job-id': 'job0', 'device': 'none1', 'top-node': 'int'}}
1611
+{ 'execute': 'block-commit',
1612
+ 'arguments': {'job-id': 'job0', 'device': 'none1', 'top-node': 'int'}}
1613
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
1614
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
1615
{"return": {}}
1616
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.
1617
1618
=== Send a write command to a drive opened in read-only mode (3)
1619
1620
-{ 'execute': 'human-monitor-command', 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1621
+{ 'execute': 'human-monitor-command',
1622
+ 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1623
{"return": "Block node is read-onlyrn"}
1624
*** done
1625
diff --git a/tests/qemu-iotests/308.out b/tests/qemu-iotests/308.out
1626
index XXXXXXX..XXXXXXX 100644
1627
--- a/tests/qemu-iotests/308.out
1628
+++ b/tests/qemu-iotests/308.out
1629
@@ -XXX,XX +XXX,XX @@ wrote 67108864/67108864 bytes at offset 0
1630
64 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1631
{'execute': 'qmp_capabilities'}
1632
{"return": {}}
1633
-{'execute': 'blockdev-add', 'arguments': { 'driver': 'file', 'node-name': 'node-protocol', 'filename': 'TEST_DIR/t.IMGFMT' } }
1634
+{'execute': 'blockdev-add',
1635
+ 'arguments': {
1636
+ 'driver': 'file',
1637
+ 'node-name': 'node-protocol',
1638
+ 'filename': 'TEST_DIR/t.IMGFMT'
1639
+ } }
1640
{"return": {}}
1641
-{'execute': 'blockdev-add', 'arguments': { 'driver': 'IMGFMT', 'node-name': 'node-format', 'file': 'node-protocol' } }
1642
+{'execute': 'blockdev-add',
1643
+ 'arguments': {
1644
+ 'driver': 'IMGFMT',
1645
+ 'node-name': 'node-format',
1646
+ 'file': 'node-protocol'
1647
+ } }
1648
{"return": {}}
1649
1650
=== Mountpoint not present ===
1651
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-err', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse' } }
1652
+{'execute': 'block-export-add',
1653
+ 'arguments': {
1654
+ 'type': 'fuse',
1655
+ 'id': 'export-err',
1656
+ 'node-name': 'node-format',
1657
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse'
1658
+ } }
1659
{"error": {"class": "GenericError", "desc": "Failed to stat 'TEST_DIR/t.IMGFMT.fuse': No such file or directory"}}
1660
1661
=== Mountpoint is a directory ===
1662
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-err', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse' } }
1663
+{'execute': 'block-export-add',
1664
+ 'arguments': {
1665
+ 'type': 'fuse',
1666
+ 'id': 'export-err',
1667
+ 'node-name': 'node-format',
1668
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse'
1669
+ } }
1670
{"error": {"class": "GenericError", "desc": "'TEST_DIR/t.IMGFMT.fuse' is not a regular file"}}
1671
1672
=== Mountpoint is a regular file ===
1673
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-mp', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse' } }
1674
+{'execute': 'block-export-add',
1675
+ 'arguments': {
1676
+ 'type': 'fuse',
1677
+ 'id': 'export-mp',
1678
+ 'node-name': 'node-format',
1679
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse'
1680
+ } }
1681
{"return": {}}
1682
Images are identical.
1683
1684
=== Mount over existing file ===
1685
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-img', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT' } }
1686
+{'execute': 'block-export-add',
1687
+ 'arguments': {
1688
+ 'type': 'fuse',
1689
+ 'id': 'export-img',
1690
+ 'node-name': 'node-format',
1691
+ 'mountpoint': 'TEST_DIR/t.IMGFMT'
1692
+ } }
1693
{"return": {}}
1694
Images are identical.
1695
1696
=== Double export ===
1697
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-err', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse' } }
1698
+{'execute': 'block-export-add',
1699
+ 'arguments': {
1700
+ 'type': 'fuse',
1701
+ 'id': 'export-err',
1702
+ 'node-name': 'node-format',
1703
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse'
1704
+ } }
1705
{"error": {"class": "GenericError", "desc": "There already is a FUSE export on 'TEST_DIR/t.IMGFMT.fuse'"}}
1706
1707
=== Remove export ===
1708
virtual size: 64 MiB (67108864 bytes)
1709
-{'execute': 'block-export-del', 'arguments': { 'id': 'export-mp' } }
1710
+{'execute': 'block-export-del',
1711
+ 'arguments': {
1712
+ 'id': 'export-mp'
1713
+ } }
1714
{"return": {}}
1715
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "export-mp"}}
1716
virtual size: 0 B (0 bytes)
1717
1718
=== Writable export ===
1719
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-mp', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true } }
1720
+{'execute': 'block-export-add',
1721
+ 'arguments': {
1722
+ 'type': 'fuse',
1723
+ 'id': 'export-mp',
1724
+ 'node-name': 'node-format',
1725
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true
1726
+ } }
1727
{"return": {}}
1728
write failed: Permission denied
1729
wrote 65536/65536 bytes at offset 1048576
1730
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 1048576
1731
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1732
1733
=== Resizing exports ===
1734
-{'execute': 'block-export-del', 'arguments': { 'id': 'export-mp' } }
1735
+{'execute': 'block-export-del',
1736
+ 'arguments': {
1737
+ 'id': 'export-mp'
1738
+ } }
1739
{"return": {}}
1740
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "export-mp"}}
1741
-{'execute': 'block-export-del', 'arguments': { 'id': 'export-img' } }
1742
+{'execute': 'block-export-del',
1743
+ 'arguments': {
1744
+ 'id': 'export-img'
1745
+ } }
1746
{"return": {}}
1747
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "export-img"}}
1748
-{'execute': 'blockdev-del', 'arguments': { 'node-name': 'node-format' } }
1749
+{'execute': 'blockdev-del',
1750
+ 'arguments': {
1751
+ 'node-name': 'node-format'
1752
+ } }
1753
{"return": {}}
1754
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-mp', 'node-name': 'node-protocol', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true } }
1755
+{'execute': 'block-export-add',
1756
+ 'arguments': {
1757
+ 'type': 'fuse',
1758
+ 'id': 'export-mp',
1759
+ 'node-name': 'node-protocol',
1760
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true
1761
+ } }
1762
{"return": {}}
1763
1764
--- Try growing non-growable export ---
1765
@@ -XXX,XX +XXX,XX @@ OK: Post-truncate image size is as expected
1766
OK: Disk usage grew with fallocate
1767
1768
--- Try growing growable export ---
1769
-{'execute': 'block-export-del', 'arguments': { 'id': 'export-mp' } }
1770
+{'execute': 'block-export-del',
1771
+ 'arguments': {
1772
+ 'id': 'export-mp'
1773
+ } }
1774
{"return": {}}
1775
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "export-mp"}}
1776
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-mp', 'node-name': 'node-protocol', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true, 'growable': true } }
1777
+{'execute': 'block-export-add',
1778
+ 'arguments': {
1779
+ 'type': 'fuse',
1780
+ 'id': 'export-mp',
1781
+ 'node-name': 'node-protocol',
1782
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true, 'growable': true
1783
+ } }
1784
{"return": {}}
1785
65536+0 records in
1786
65536+0 records out
1787
diff --git a/tests/qemu-iotests/312.out b/tests/qemu-iotests/312.out
1788
index XXXXXXX..XXXXXXX 100644
1789
--- a/tests/qemu-iotests/312.out
1790
+++ b/tests/qemu-iotests/312.out
1791
@@ -XXX,XX +XXX,XX @@ read 65536/65536 bytes at offset 2424832
1792
1793
{ 'execute': 'qmp_capabilities' }
1794
{"return": {}}
1795
-{'execute': 'drive-mirror', 'arguments': {'device': 'virtio0', 'format': 'IMGFMT', 'target': 'TEST_DIR/t.IMGFMT.3', 'sync': 'full', 'mode': 'existing' }}
1796
+{'execute': 'drive-mirror',
1797
+ 'arguments': {'device': 'virtio0',
1798
+ 'format': 'IMGFMT',
1799
+ 'target': 'TEST_DIR/t.IMGFMT.3',
1800
+ 'sync': 'full',
1801
+ 'mode': 'existing' }}
1802
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "virtio0"}}
1803
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "virtio0"}}
1804
{"return": {}}
1805
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "virtio0"}}
1806
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "virtio0", "len": 10485760, "offset": 10485760, "speed": 0, "type": "mirror"}}
1807
-{ 'execute': 'block-job-complete', 'arguments': { 'device': 'virtio0' } }
1808
+{ 'execute': 'block-job-complete',
1809
+ 'arguments': { 'device': 'virtio0' } }
1810
{"return": {}}
1811
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "virtio0"}}
1812
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "virtio0"}}
1813
diff --git a/tests/qemu-iotests/common.qemu b/tests/qemu-iotests/common.qemu
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/common.qemu
+++ b/tests/qemu-iotests/common.qemu
@@ -XXX,XX +XXX,XX @@ _send_qemu_cmd()
         count=${qemu_cmd_repeat}
         use_error="no"
     fi
-    # This array element extraction is done to accommodate pathnames with spaces
-    if [ -z "${success_or_failure}" ]; then
-        cmd=${@: 1:${#@}-1}
-        shift $(($# - 1))
-    else
-        cmd=${@: 1:${#@}-2}
-        shift $(($# - 2))
-    fi
+
+    cmd=$1
+    shift
 
     # Display QMP being sent, but not HMP (since HMP already echoes its
     # input back to output); decide based on leading '{'
--
2.29.2
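
As a reading aid for the common.qemu change above: with the array-slicing
removed, _send_qemu_cmd now takes the whole monitor command as a single
argument, followed by the pattern(s) to wait for. A minimal caller sketch,
assuming the usual iotests pattern of passing the QEMU handle first (the
QMP payload shown is only illustrative, not taken from the patch):

    # Assumed call pattern after the change: the entire command is one
    # quoted word, so spaces inside it (e.g. in pathnames) survive without
    # the removed ${@: ...} slicing.
    _send_qemu_cmd $QEMU_HANDLE \
        "{ 'execute': 'qmp_capabilities' }" \
        'return'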