The following changes since commit 248b23735645f7cbb503d9be6f5bf825f2a603ab:

  Update version for v2.10.0-rc4 release (2017-08-24 17:34:26 +0100)

are available in the git repository at:

  git://github.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 3e4c705212abfe8c9882a00beb2d1466a8a53cec:

  qcow2: allocate cluster_cache/cluster_data on demand (2017-08-30 18:02:10 +0100)

----------------------------------------------------------------

----------------------------------------------------------------
Alberto Garcia (8):
      throttle: Fix wrong variable name in the header documentation
      throttle: Update the throttle_fix_bucket() documentation
      throttle: Make throttle_is_valid() a bit less verbose
      throttle: Remove throttle_fix_bucket() / throttle_unfix_bucket()
      throttle: Make LeakyBucket.avg and LeakyBucket.max integer types
      throttle: Make burst_length 64bit and add range checks
      throttle: Test the valid range of config values
      misc: Remove unused Error variables

Dan Aloni (1):
      nvme: Fix get/set number of queues feature, again

Eduardo Habkost (1):
      oslib-posix: Print errors before aborting on qemu_alloc_stack()

Fred Rolland (1):
      qemu-doc: Add UUID support in initiator name

Stefan Hajnoczi (4):
      scripts: add argparse module for Python 2.6 compatibility
      docker.py: Python 2.6 argparse compatibility
      tests: migration/guestperf Python 2.6 argparse compatibility
      qcow2: allocate cluster_cache/cluster_data on demand

 include/qemu/throttle.h | 8 +-
 block/qcow.c | 12 +-
 block/qcow2-cluster.c | 17 +
 block/qcow2.c | 20 +-
 dump.c | 4 +-
 hw/block/nvme.c | 4 +-
 tests/test-throttle.c | 80 +-
 util/oslib-posix.c | 2 +
 util/throttle.c | 86 +-
 COPYING.PYTHON | 270 ++
 qemu-doc.texi | 5 +-
 scripts/argparse.py | 2406 ++++++++++++++++++++++++++++++++++++
 tests/docker/docker.py | 4 +-
 tests/migration/guestperf/shell.py | 8 +-
 14 files changed, 2831 insertions(+), 95 deletions(-)
 create mode 100644 COPYING.PYTHON
 create mode 100644 scripts/argparse.py

--
2.13.5

The following changes since commit 75ee62ac606bfc9eb59310b9446df3434bf6e8c2:

  Merge remote-tracking branch 'remotes/ehabkost-gl/tags/x86-next-pull-request' into staging (2020-12-17 18:53:36 +0000)

are available in the Git repository at:

  https://github.com/XanClic/qemu.git tags/pull-block-2020-12-18

for you to fetch changes up to 0e72078128229bf9efb542e396ab44bf91b91340:

  iotests: Fix _send_qemu_cmd with bash 5.1 (2020-12-18 12:47:38 +0100)

----------------------------------------------------------------
Block patches:
- New block filter: preallocate (which, on writes beyond an image file's
  end, allocates big chunks of data so that such post-EOF writes will
  occur less frequently)
- write-zeroes and block-status support for Quorum
- Implementation of truncate for the nvme block driver similarly to the
  existing implementations for host block devices and iscsi devices
- Block layer refactoring: Drop the tighten_restrictions concept in the
  block permission functions
- iotest fixes

----------------------------------------------------------------
Alberto Garcia (2):
      quorum: Implement bdrv_co_block_status()
      quorum: Implement bdrv_co_pwrite_zeroes()

Max Reitz (2):
      iotests/102: Pass $QEMU_HANDLE to _send_qemu_cmd
      iotests: Fix _send_qemu_cmd with bash 5.1

Philippe Mathieu-Daudé (1):
      block/nvme: Implement fake truncate() coroutine

Vladimir Sementsov-Ogievskiy (25):
      block: add bdrv_refresh_perms() helper
      block: bdrv_set_perm() drop redundant parameters.
      block: bdrv_child_set_perm() drop redundant parameters.
      block: drop tighten_restrictions
      block: simplify comment to BDRV_REQ_SERIALISING
      block/io.c: drop assertion on double waiting for request serialisation
      block/io: split out bdrv_find_conflicting_request
      block/io: bdrv_wait_serialising_requests_locked: drop extra bs arg
      block: bdrv_mark_request_serialising: split non-waiting function
      block: introduce BDRV_REQ_NO_WAIT flag
      block: bdrv_check_perm(): process children anyway
      block: introduce preallocate filter
      qemu-io: add preallocate mode parameter for truncate command
      iotests: qemu_io_silent: support --image-opts
      iotests.py: execute_setup_common(): add required_fmts argument
      iotests: add 298 to test new preallocate filter driver
      scripts/simplebench: fix grammar: s/successed/succeeded/
      scripts/simplebench: support iops
      scripts/simplebench: use standard deviation for +- error
      simplebench: rename ascii() to results_to_text()
      simplebench: move results_to_text() into separate file
      simplebench/results_to_text: improve view of the table
      simplebench/results_to_text: add difference line to the table
      simplebench/results_to_text: make executable
      scripts/simplebench: add bench_prealloc.py

 docs/system/qemu-block-drivers.rst.inc | 26 ++
 qapi/block-core.json | 20 +-
 include/block/block.h | 20 +-
 include/block/block_int.h | 3 +-
 block.c | 185 +++-----
 block/file-posix.c | 2 +-
 block/io.c | 130 +++---
 block/nvme.c | 24 ++
 block/preallocate.c | 559 +++++++++++++++++++++++++
 block/quorum.c | 88 +++-
 qemu-io-cmds.c | 46 +-
 block/meson.build | 1 +
 scripts/simplebench/bench-example.py | 3 +-
 scripts/simplebench/bench_prealloc.py | 132 ++++++
 scripts/simplebench/bench_write_req.py | 3 +-
 scripts/simplebench/results_to_text.py | 126 ++++++
 scripts/simplebench/simplebench.py | 66 ++-
 tests/qemu-iotests/085.out | 167 ++++++--
 tests/qemu-iotests/094.out | 10 +-
 tests/qemu-iotests/095.out | 4 +-
 tests/qemu-iotests/102 | 2 +-
 tests/qemu-iotests/102.out | 2 +-
 tests/qemu-iotests/109.out | 88 +++-
 tests/qemu-iotests/117.out | 13 +-
 tests/qemu-iotests/127.out | 12 +-
 tests/qemu-iotests/140.out | 10 +-
 tests/qemu-iotests/141.out | 128 ++++--
 tests/qemu-iotests/143.out | 4 +-
 tests/qemu-iotests/144.out | 28 +-
 tests/qemu-iotests/153.out | 18 +-
 tests/qemu-iotests/156.out | 39 +-
 tests/qemu-iotests/161.out | 18 +-
 tests/qemu-iotests/173.out | 25 +-
 tests/qemu-iotests/182.out | 42 +-
 tests/qemu-iotests/183.out | 19 +-
 tests/qemu-iotests/185.out | 45 +-
 tests/qemu-iotests/191.out | 12 +-
 tests/qemu-iotests/223.out | 92 ++--
 tests/qemu-iotests/229.out | 13 +-
 tests/qemu-iotests/249.out | 16 +-
 tests/qemu-iotests/298 | 186 ++++++++
 tests/qemu-iotests/298.out | 5 +
 tests/qemu-iotests/308.out | 103 ++++-
 tests/qemu-iotests/312 | 159 +++++++
 tests/qemu-iotests/312.out | 81 ++++
 tests/qemu-iotests/common.qemu | 11 +-
 tests/qemu-iotests/group | 2 +
 tests/qemu-iotests/iotests.py | 16 +-
 48 files changed, 2357 insertions(+), 447 deletions(-)
 create mode 100644 block/preallocate.c
 create mode 100755 scripts/simplebench/bench_prealloc.py
 create mode 100755 scripts/simplebench/results_to_text.py
 create mode 100644 tests/qemu-iotests/298
 create mode 100644 tests/qemu-iotests/298.out
 create mode 100755 tests/qemu-iotests/312
 create mode 100644 tests/qemu-iotests/312.out

--
2.29.2

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Make a separate function for this common pattern.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201106124241.16950-5-vsementsov@virtuozzo.com>
[mreitz: Squashed in
https://lists.nongnu.org/archive/html/qemu-block/2020-11/msg00299.html]
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block.c | 61 +++++++++++++++++++++++++++++----------------------------
 1 file changed, 31 insertions(+), 30 deletions(-)

diff --git a/block.c b/block.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/block.c
17
+++ b/block.c
18
@@ -XXX,XX +XXX,XX @@ static void bdrv_child_abort_perm_update(BdrvChild *c)
19
bdrv_abort_perm_update(c->bs);
20
}
21
22
+static int bdrv_refresh_perms(BlockDriverState *bs, bool *tighten_restrictions,
23
+ Error **errp)
24
+{
25
+ int ret;
26
+ uint64_t perm, shared_perm;
27
+
28
+ bdrv_get_cumulative_perm(bs, &perm, &shared_perm);
29
+ ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL,
30
+ tighten_restrictions, errp);
31
+ if (ret < 0) {
32
+ bdrv_abort_perm_update(bs);
33
+ return ret;
34
+ }
35
+ bdrv_set_perm(bs, perm, shared_perm);
36
+
37
+ return 0;
38
+}
39
+
40
int bdrv_child_try_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared,
41
Error **errp)
42
{
43
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_child(BdrvChild *child, BlockDriverState *new_bs)
44
}
45
46
if (old_bs) {
47
- /* Update permissions for old node. This is guaranteed to succeed
48
- * because we're just taking a parent away, so we're loosening
49
- * restrictions. */
50
bool tighten_restrictions;
51
- int ret;
52
53
- bdrv_get_cumulative_perm(old_bs, &perm, &shared_perm);
54
- ret = bdrv_check_perm(old_bs, NULL, perm, shared_perm, NULL,
55
- &tighten_restrictions, NULL);
56
+ /*
57
+ * Update permissions for old node. We're just taking a parent away, so
58
+ * we're loosening restrictions. Errors of permission update are not
59
+ * fatal in this case, ignore them.
60
+ */
61
+ bdrv_refresh_perms(old_bs, &tighten_restrictions, NULL);
62
assert(tighten_restrictions == false);
63
- if (ret < 0) {
64
- /* We only tried to loosen restrictions, so errors are not fatal */
65
- bdrv_abort_perm_update(old_bs);
66
- } else {
67
- bdrv_set_perm(old_bs, perm, shared_perm);
68
- }
69
70
/* When the parent requiring a non-default AioContext is removed, the
71
* node moves back to the main AioContext */
72
@@ -XXX,XX +XXX,XX @@ void bdrv_init_with_whitelist(void)
73
int coroutine_fn bdrv_co_invalidate_cache(BlockDriverState *bs, Error **errp)
74
{
75
BdrvChild *child, *parent;
76
- uint64_t perm, shared_perm;
77
Error *local_err = NULL;
78
int ret;
79
BdrvDirtyBitmap *bm;
80
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_invalidate_cache(BlockDriverState *bs, Error **errp)
81
*/
82
if (bs->open_flags & BDRV_O_INACTIVE) {
83
bs->open_flags &= ~BDRV_O_INACTIVE;
84
- bdrv_get_cumulative_perm(bs, &perm, &shared_perm);
85
- ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL, NULL, errp);
86
+ ret = bdrv_refresh_perms(bs, NULL, errp);
87
if (ret < 0) {
88
- bdrv_abort_perm_update(bs);
89
bs->open_flags |= BDRV_O_INACTIVE;
90
return ret;
91
}
92
- bdrv_set_perm(bs, perm, shared_perm);
93
94
if (bs->drv->bdrv_co_invalidate_cache) {
95
bs->drv->bdrv_co_invalidate_cache(bs, &local_err);
96
@@ -XXX,XX +XXX,XX @@ static int bdrv_inactivate_recurse(BlockDriverState *bs)
97
{
98
BdrvChild *child, *parent;
99
bool tighten_restrictions;
100
- uint64_t perm, shared_perm;
101
int ret;
102
103
if (!bs->drv) {
104
@@ -XXX,XX +XXX,XX @@ static int bdrv_inactivate_recurse(BlockDriverState *bs)
105
106
bs->open_flags |= BDRV_O_INACTIVE;
107
108
- /* Update permissions, they may differ for inactive nodes */
109
- bdrv_get_cumulative_perm(bs, &perm, &shared_perm);
110
- ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL,
111
- &tighten_restrictions, NULL);
112
+ /*
113
+ * Update permissions, they may differ for inactive nodes.
114
+ * We only tried to loosen restrictions, so errors are not fatal, ignore
115
+ * them.
116
+ */
117
+ bdrv_refresh_perms(bs, &tighten_restrictions, NULL);
118
assert(tighten_restrictions == false);
119
- if (ret < 0) {
120
- /* We only tried to loosen restrictions, so errors are not fatal */
121
- bdrv_abort_perm_update(bs);
122
- } else {
123
- bdrv_set_perm(bs, perm, shared_perm);
124
- }
125
-
126
127
/* Recursively inactivate children */
128
QLIST_FOREACH(child, &bs->children, next) {
129
--
130
2.29.2
131
132
1
From: Alberto Garcia <berto@igalia.com>

Use a pointer to the bucket instead of repeating cfg->buckets[i] all
the time. This makes the code more concise and will help us expand the
checks later and save a few line breaks.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 763ffc40a26b17d54cf93f5a999e4656049fcf0c.1503580370.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/throttle.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

We should never set permissions other than the cumulative permissions
of the parents. During bdrv_reopen_multiple() we _check_ for synthetic
permissions, but when we do _set_, the graph is already updated.
Add an assertion to bdrv_reopen_multiple(); the other cases are more
obvious.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201106124241.16950-6-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

13
16
14
diff --git a/util/throttle.c b/util/throttle.c
17
diff --git a/block.c b/block.c
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/util/throttle.c
19
--- a/block.c
17
+++ b/util/throttle.c
20
+++ b/block.c
18
@@ -XXX,XX +XXX,XX @@ bool throttle_is_valid(ThrottleConfig *cfg, Error **errp)
21
@@ -XXX,XX +XXX,XX @@ static void bdrv_abort_perm_update(BlockDriverState *bs)
19
}
22
}
20
23
}
21
for (i = 0; i < BUCKETS_COUNT; i++) {
24
22
- if (cfg->buckets[i].avg < 0 ||
25
-static void bdrv_set_perm(BlockDriverState *bs, uint64_t cumulative_perms,
23
- cfg->buckets[i].max < 0 ||
26
- uint64_t cumulative_shared_perms)
24
- cfg->buckets[i].avg > THROTTLE_VALUE_MAX ||
27
+static void bdrv_set_perm(BlockDriverState *bs)
25
- cfg->buckets[i].max > THROTTLE_VALUE_MAX) {
28
{
26
+ LeakyBucket *bkt = &cfg->buckets[i];
29
+ uint64_t cumulative_perms, cumulative_shared_perms;
27
+ if (bkt->avg < 0 || bkt->max < 0 ||
30
BlockDriver *drv = bs->drv;
28
+ bkt->avg > THROTTLE_VALUE_MAX || bkt->max > THROTTLE_VALUE_MAX) {
31
BdrvChild *c;
29
error_setg(errp, "bps/iops/max values must be within [0, %lld]",
32
30
THROTTLE_VALUE_MAX);
33
@@ -XXX,XX +XXX,XX @@ static void bdrv_set_perm(BlockDriverState *bs, uint64_t cumulative_perms,
31
return false;
34
return;
35
}
36
37
+ bdrv_get_cumulative_perm(bs, &cumulative_perms, &cumulative_shared_perms);
38
+
39
/* Update this node */
40
if (drv->bdrv_set_perm) {
41
drv->bdrv_set_perm(bs, cumulative_perms, cumulative_shared_perms);
42
@@ -XXX,XX +XXX,XX @@ static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
43
44
static void bdrv_child_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared)
45
{
46
- uint64_t cumulative_perms, cumulative_shared_perms;
47
-
48
c->has_backup_perm = false;
49
50
c->perm = perm;
51
c->shared_perm = shared;
52
53
- bdrv_get_cumulative_perm(c->bs, &cumulative_perms,
54
- &cumulative_shared_perms);
55
- bdrv_set_perm(c->bs, cumulative_perms, cumulative_shared_perms);
56
+ bdrv_set_perm(c->bs);
57
}
58
59
static void bdrv_child_abort_perm_update(BdrvChild *c)
60
@@ -XXX,XX +XXX,XX @@ static int bdrv_refresh_perms(BlockDriverState *bs, bool *tighten_restrictions,
61
bdrv_abort_perm_update(bs);
62
return ret;
63
}
64
- bdrv_set_perm(bs, perm, shared_perm);
65
+ bdrv_set_perm(bs);
66
67
return 0;
68
}
69
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_child_noperm(BdrvChild *child,
70
static void bdrv_replace_child(BdrvChild *child, BlockDriverState *new_bs)
71
{
72
BlockDriverState *old_bs = child->bs;
73
- uint64_t perm, shared_perm;
74
75
/* Asserts that child->frozen == false */
76
bdrv_replace_child_noperm(child, new_bs);
77
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_child(BdrvChild *child, BlockDriverState *new_bs)
78
* restrictions.
79
*/
80
if (new_bs) {
81
- bdrv_get_cumulative_perm(new_bs, &perm, &shared_perm);
82
- bdrv_set_perm(new_bs, perm, shared_perm);
83
+ bdrv_set_perm(new_bs);
84
}
85
86
if (old_bs) {
87
@@ -XXX,XX +XXX,XX @@ cleanup_perm:
32
}
88
}
33
89
34
- if (!cfg->buckets[i].burst_length) {
90
if (ret == 0) {
35
+ if (!bkt->burst_length) {
91
- bdrv_set_perm(state->bs, state->perm, state->shared_perm);
36
error_setg(errp, "the burst length cannot be 0");
92
+ uint64_t perm, shared;
37
return false;
93
+
38
}
94
+ bdrv_get_cumulative_perm(state->bs, &perm, &shared);
39
95
+ assert(perm == state->perm);
40
- if (cfg->buckets[i].burst_length > 1 && !cfg->buckets[i].max) {
96
+ assert(shared == state->shared_perm);
41
+ if (bkt->burst_length > 1 && !bkt->max) {
97
+
42
error_setg(errp, "burst length set without burst rate");
98
+ bdrv_set_perm(state->bs);
43
return false;
99
} else {
44
}
100
bdrv_abort_perm_update(state->bs);
45
101
if (state->replace_backing_bs && state->new_backing_bs) {
46
- if (cfg->buckets[i].max && !cfg->buckets[i].avg) {
102
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_node_common(BlockDriverState *from,
47
+ if (bkt->max && !bkt->avg) {
103
bdrv_unref(from);
48
error_setg(errp, "bps_max/iops_max require corresponding"
104
}
49
" bps/iops values");
105
50
return false;
106
- bdrv_get_cumulative_perm(to, &perm, &shared);
51
}
107
- bdrv_set_perm(to, perm, shared);
52
108
+ bdrv_set_perm(to);
53
- if (cfg->buckets[i].max && cfg->buckets[i].max < cfg->buckets[i].avg) {
109
54
+ if (bkt->max && bkt->max < bkt->avg) {
110
out:
55
error_setg(errp, "bps_max/iops_max cannot be lower than bps/iops");
111
g_slist_free(list);
56
return false;
57
}
58
--
112
--
59
2.13.5
113
2.29.2
60
114
61
115
1
From: Alberto Garcia <berto@igalia.com>

Both the throttling limits set with the throttling.iops-* and
throttling.bps-* options and their QMP equivalents defined in the
BlockIOThrottle struct are integer values.

Those limits are also reported in the BlockDeviceInfo struct and they
are integers there as well.

Therefore there's no reason to store them internally as double and do
the conversion every time we're setting or querying them, so this patch
uses uint64_t for those types. Let's also use an unsigned type because
we don't allow negative values anyway.

LeakyBucket.level and LeakyBucket.burst_level do however remain double
because their value changes depending on the fraction of time elapsed
since the previous I/O operation.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: f29b840422767b5be2c41c2dfdbbbf6c5f8fedf8.1503580370.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/throttle.h | 4 ++--
 tests/test-throttle.c | 3 ++-
 util/throttle.c | 7 +++----
 3 files changed, 7 insertions(+), 7 deletions(-)

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

We must set the permission that was used for the _check_. Assert that
we have a backup and drop the extra arguments.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201106124241.16950-7-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/block.c b/block.c
27
28
diff --git a/include/qemu/throttle.h b/include/qemu/throttle.h
29
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
30
--- a/include/qemu/throttle.h
16
--- a/block.c
31
+++ b/include/qemu/throttle.h
17
+++ b/block.c
32
@@ -XXX,XX +XXX,XX @@ typedef enum {
18
@@ -XXX,XX +XXX,XX @@ static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
33
*/
19
GSList *ignore_children,
34
20
bool *tighten_restrictions, Error **errp);
35
typedef struct LeakyBucket {
21
static void bdrv_child_abort_perm_update(BdrvChild *c);
36
- double avg; /* average goal in units per second */
22
-static void bdrv_child_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared);
37
- double max; /* leaky bucket max burst in units */
23
+static void bdrv_child_set_perm(BdrvChild *c);
38
+ uint64_t avg; /* average goal in units per second */
24
39
+ uint64_t max; /* leaky bucket max burst in units */
25
typedef struct BlockReopenQueueEntry {
40
double level; /* bucket level in units */
26
bool prepared;
41
double burst_level; /* bucket level in units (for computing bursts) */
27
@@ -XXX,XX +XXX,XX @@ static void bdrv_set_perm(BlockDriverState *bs)
42
unsigned burst_length; /* max length of the burst period, in seconds */
28
43
diff --git a/tests/test-throttle.c b/tests/test-throttle.c
29
/* Update all children */
44
index XXXXXXX..XXXXXXX 100644
30
QLIST_FOREACH(c, &bs->children, next) {
45
--- a/tests/test-throttle.c
31
- uint64_t cur_perm, cur_shared;
46
+++ b/tests/test-throttle.c
32
- bdrv_child_perm(bs, c->bs, c, c->role, NULL,
47
@@ -XXX,XX +XXX,XX @@ static void test_enabled(void)
33
- cumulative_perms, cumulative_shared_perms,
48
for (i = 0; i < BUCKETS_COUNT; i++) {
34
- &cur_perm, &cur_shared);
49
throttle_config_init(&cfg);
35
- bdrv_child_set_perm(c, cur_perm, cur_shared);
50
set_cfg_value(false, i, 150);
36
+ bdrv_child_set_perm(c);
51
+ g_assert(throttle_is_valid(&cfg, NULL));
52
g_assert(throttle_enabled(&cfg));
53
}
54
55
for (i = 0; i < BUCKETS_COUNT; i++) {
56
throttle_config_init(&cfg);
57
set_cfg_value(false, i, -150);
58
- g_assert(!throttle_enabled(&cfg));
59
+ g_assert(!throttle_is_valid(&cfg, NULL));
60
}
37
}
61
}
38
}
62
39
63
diff --git a/util/throttle.c b/util/throttle.c
40
@@ -XXX,XX +XXX,XX @@ static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
64
index XXXXXXX..XXXXXXX 100644
41
return 0;
65
--- a/util/throttle.c
42
}
66
+++ b/util/throttle.c
43
67
@@ -XXX,XX +XXX,XX @@ int64_t throttle_compute_wait(LeakyBucket *bkt)
44
-static void bdrv_child_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared)
68
/* If bkt->max is 0 we still want to allow short bursts of I/O
45
+static void bdrv_child_set_perm(BdrvChild *c)
69
* from the guest, otherwise every other request will be throttled
46
{
70
* and performance will suffer considerably. */
47
c->has_backup_perm = false;
71
- bucket_size = bkt->avg / 10;
48
72
+ bucket_size = (double) bkt->avg / 10;
49
- c->perm = perm;
73
burst_bucket_size = 0;
50
- c->shared_perm = shared;
74
} else {
51
-
75
/* If we have a burst limit then we have to wait until all I/O
52
bdrv_set_perm(c->bs);
76
* at burst rate has finished before throttling to bkt->avg */
53
}
77
bucket_size = bkt->max * bkt->burst_length;
54
78
- burst_bucket_size = bkt->max / 10;
55
@@ -XXX,XX +XXX,XX @@ int bdrv_child_try_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared,
79
+ burst_bucket_size = (double) bkt->max / 10;
56
return ret;
80
}
57
}
81
58
82
/* If the main bucket is full then we have to wait */
59
- bdrv_child_set_perm(c, perm, shared);
83
@@ -XXX,XX +XXX,XX @@ bool throttle_is_valid(ThrottleConfig *cfg, Error **errp)
60
+ bdrv_child_set_perm(c);
84
61
85
for (i = 0; i < BUCKETS_COUNT; i++) {
62
return 0;
86
LeakyBucket *bkt = &cfg->buckets[i];
63
}
87
- if (bkt->avg < 0 || bkt->max < 0 ||
88
- bkt->avg > THROTTLE_VALUE_MAX || bkt->max > THROTTLE_VALUE_MAX) {
89
+ if (bkt->avg > THROTTLE_VALUE_MAX || bkt->max > THROTTLE_VALUE_MAX) {
90
error_setg(errp, "bps/iops/max values must be within [0, %lld]",
91
THROTTLE_VALUE_MAX);
92
return false;
93
--
64
--
94
2.13.5
65
2.29.2
95
66
96
67
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

The only users of this thing are:

1. bdrv_child_try_set_perm, to ignore failures when loosening restrictions
2. the assertion in bdrv_replace_child
3. the assertion in bdrv_inactivate_recurse

Assertions are not enough reason for overcomplicating the permission
update system, so look at bdrv_child_try_set_perm.

We are interested in tighten_restrictions only on failure. But on
failure this field is not reliable: we may fail in the middle of the
permission update, some nodes are not touched, and we don't know
whether their permissions should be tightened or not. So we rely on the
fact that if we loosen restrictions on some node (or BdrvChild), we
will not tighten restrictions anywhere in the subtree as part of this
update (assertions 2 and 3 rely on this fact as well). And if we rely
on this fact anyway, we can just check it at the top level and don't
need to pass an additional pointer through the whole recursive
infrastructure.

Note also that further patches will fix real bugs in the permission
update system, so now is a good time to simplify it, as a help for
further refactorings.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201106124241.16950-8-vsementsov@virtuozzo.com>
[mreitz: Fixed rebase conflict]
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block.c | 89 +++++++++++----------------------------------------------
 1 file changed, 17 insertions(+), 72 deletions(-)
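
For reference, a minimal sketch of the "tightening" test that replaces
the dropped flag (it mirrors the open-coded check this patch adds to
bdrv_child_try_set_perm(); the helper function itself is hypothetical,
not part of the patch):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Sketch only: "tightening" means requesting a permission we do not
     * already hold, or unsharing a permission we currently share.
     */
    static bool perm_change_tightens(uint64_t old_perm, uint64_t old_shared,
                                     uint64_t new_perm, uint64_t new_shared)
    {
        return (new_perm & ~old_perm) ||    /* newly requested permissions */
               (old_shared & ~new_shared);  /* permissions no longer shared */
    }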
32
33
diff --git a/block.c b/block.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/block.c
36
+++ b/block.c
37
@@ -XXX,XX +XXX,XX @@ static int bdrv_fill_options(QDict **options, const char *filename,
38
39
static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
40
uint64_t perm, uint64_t shared,
41
- GSList *ignore_children,
42
- bool *tighten_restrictions, Error **errp);
43
+ GSList *ignore_children, Error **errp);
44
static void bdrv_child_abort_perm_update(BdrvChild *c);
45
static void bdrv_child_set_perm(BdrvChild *c);
46
47
@@ -XXX,XX +XXX,XX @@ static void bdrv_child_perm(BlockDriverState *bs, BlockDriverState *child_bs,
48
* permissions of all its parents. This involves checking whether all necessary
49
* permission changes to child nodes can be performed.
50
*
51
- * Will set *tighten_restrictions to true if and only if new permissions have to
52
- * be taken or currently shared permissions are to be unshared. Otherwise,
53
- * errors are not fatal as long as the caller accepts that the restrictions
54
- * remain tighter than they need to be. The caller still has to abort the
55
- * transaction.
56
- * @tighten_restrictions cannot be used together with @q: When reopening, we may
57
- * encounter fatal errors even though no restrictions are to be tightened. For
58
- * example, changing a node from RW to RO will fail if the WRITE permission is
59
- * to be kept.
60
- *
61
* A call to this function must always be followed by a call to bdrv_set_perm()
62
* or bdrv_abort_perm_update().
63
*/
64
static int bdrv_check_perm(BlockDriverState *bs, BlockReopenQueue *q,
65
uint64_t cumulative_perms,
66
uint64_t cumulative_shared_perms,
67
- GSList *ignore_children,
68
- bool *tighten_restrictions, Error **errp)
69
+ GSList *ignore_children, Error **errp)
70
{
71
BlockDriver *drv = bs->drv;
72
BdrvChild *c;
73
int ret;
74
75
- assert(!q || !tighten_restrictions);
76
-
77
- if (tighten_restrictions) {
78
- uint64_t current_perms, current_shared;
79
- uint64_t added_perms, removed_shared_perms;
80
-
81
- bdrv_get_cumulative_perm(bs, &current_perms, &current_shared);
82
-
83
- added_perms = cumulative_perms & ~current_perms;
84
- removed_shared_perms = current_shared & ~cumulative_shared_perms;
85
-
86
- *tighten_restrictions = added_perms || removed_shared_perms;
87
- }
88
-
89
/* Write permissions never work with read-only images */
90
if ((cumulative_perms & (BLK_PERM_WRITE | BLK_PERM_WRITE_UNCHANGED)) &&
91
!bdrv_is_writable_after_reopen(bs, q))
92
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_perm(BlockDriverState *bs, BlockReopenQueue *q,
93
/* Check all children */
94
QLIST_FOREACH(c, &bs->children, next) {
95
uint64_t cur_perm, cur_shared;
96
- bool child_tighten_restr;
97
98
bdrv_child_perm(bs, c->bs, c, c->role, q,
99
cumulative_perms, cumulative_shared_perms,
100
&cur_perm, &cur_shared);
101
ret = bdrv_child_check_perm(c, q, cur_perm, cur_shared, ignore_children,
102
- tighten_restrictions ? &child_tighten_restr
103
- : NULL,
104
errp);
105
- if (tighten_restrictions) {
106
- *tighten_restrictions |= child_tighten_restr;
107
- }
108
if (ret < 0) {
109
return ret;
110
}
111
@@ -XXX,XX +XXX,XX @@ char *bdrv_perm_names(uint64_t perm)
112
* set, the BdrvChild objects in this list are ignored in the calculations;
113
* this allows checking permission updates for an existing reference.
114
*
115
- * See bdrv_check_perm() for the semantics of @tighten_restrictions.
116
- *
117
* Needs to be followed by a call to either bdrv_set_perm() or
118
* bdrv_abort_perm_update(). */
119
static int bdrv_check_update_perm(BlockDriverState *bs, BlockReopenQueue *q,
120
uint64_t new_used_perm,
121
uint64_t new_shared_perm,
122
GSList *ignore_children,
123
- bool *tighten_restrictions,
124
Error **errp)
125
{
126
BdrvChild *c;
127
uint64_t cumulative_perms = new_used_perm;
128
uint64_t cumulative_shared_perms = new_shared_perm;
129
130
- assert(!q || !tighten_restrictions);
131
132
/* There is no reason why anyone couldn't tolerate write_unchanged */
133
assert(new_shared_perm & BLK_PERM_WRITE_UNCHANGED);
134
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_update_perm(BlockDriverState *bs, BlockReopenQueue *q,
135
char *user = bdrv_child_user_desc(c);
136
char *perm_names = bdrv_perm_names(new_used_perm & ~c->shared_perm);
137
138
- if (tighten_restrictions) {
139
- *tighten_restrictions = true;
140
- }
141
-
142
error_setg(errp, "Conflicts with use by %s as '%s', which does not "
143
"allow '%s' on %s",
144
user, c->name, perm_names, bdrv_get_node_name(c->bs));
145
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_update_perm(BlockDriverState *bs, BlockReopenQueue *q,
146
char *user = bdrv_child_user_desc(c);
147
char *perm_names = bdrv_perm_names(c->perm & ~new_shared_perm);
148
149
- if (tighten_restrictions) {
150
- *tighten_restrictions = true;
151
- }
152
-
153
error_setg(errp, "Conflicts with use by %s as '%s', which uses "
154
"'%s' on %s",
155
user, c->name, perm_names, bdrv_get_node_name(c->bs));
156
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_update_perm(BlockDriverState *bs, BlockReopenQueue *q,
157
}
158
159
return bdrv_check_perm(bs, q, cumulative_perms, cumulative_shared_perms,
160
- ignore_children, tighten_restrictions, errp);
161
+ ignore_children, errp);
162
}
163
164
/* Needs to be followed by a call to either bdrv_child_set_perm() or
165
* bdrv_child_abort_perm_update(). */
166
static int bdrv_child_check_perm(BdrvChild *c, BlockReopenQueue *q,
167
uint64_t perm, uint64_t shared,
168
- GSList *ignore_children,
169
- bool *tighten_restrictions, Error **errp)
170
+ GSList *ignore_children, Error **errp)
171
{
172
int ret;
173
174
ignore_children = g_slist_prepend(g_slist_copy(ignore_children), c);
175
- ret = bdrv_check_update_perm(c->bs, q, perm, shared, ignore_children,
176
- tighten_restrictions, errp);
177
+ ret = bdrv_check_update_perm(c->bs, q, perm, shared, ignore_children, errp);
178
g_slist_free(ignore_children);
179
180
if (ret < 0) {
181
@@ -XXX,XX +XXX,XX @@ static void bdrv_child_abort_perm_update(BdrvChild *c)
182
bdrv_abort_perm_update(c->bs);
183
}
184
185
-static int bdrv_refresh_perms(BlockDriverState *bs, bool *tighten_restrictions,
186
- Error **errp)
187
+static int bdrv_refresh_perms(BlockDriverState *bs, Error **errp)
188
{
189
int ret;
190
uint64_t perm, shared_perm;
191
192
bdrv_get_cumulative_perm(bs, &perm, &shared_perm);
193
- ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL,
194
- tighten_restrictions, errp);
195
+ ret = bdrv_check_perm(bs, NULL, perm, shared_perm, NULL, errp);
196
if (ret < 0) {
197
bdrv_abort_perm_update(bs);
198
return ret;
199
@@ -XXX,XX +XXX,XX @@ int bdrv_child_try_set_perm(BdrvChild *c, uint64_t perm, uint64_t shared,
200
{
201
Error *local_err = NULL;
202
int ret;
203
- bool tighten_restrictions;
204
205
- ret = bdrv_child_check_perm(c, NULL, perm, shared, NULL,
206
- &tighten_restrictions, &local_err);
207
+ ret = bdrv_child_check_perm(c, NULL, perm, shared, NULL, &local_err);
208
if (ret < 0) {
209
bdrv_child_abort_perm_update(c);
210
- if (tighten_restrictions) {
211
+ if ((perm & ~c->perm) || (c->shared_perm & ~shared)) {
212
+ /* tighten permissions */
213
error_propagate(errp, local_err);
214
} else {
215
/*
216
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_child(BdrvChild *child, BlockDriverState *new_bs)
217
}
218
219
if (old_bs) {
220
- bool tighten_restrictions;
221
-
222
/*
223
* Update permissions for old node. We're just taking a parent away, so
224
* we're loosening restrictions. Errors of permission update are not
225
* fatal in this case, ignore them.
226
*/
227
- bdrv_refresh_perms(old_bs, &tighten_restrictions, NULL);
228
- assert(tighten_restrictions == false);
229
+ bdrv_refresh_perms(old_bs, NULL);
230
231
/* When the parent requiring a non-default AioContext is removed, the
232
* node moves back to the main AioContext */
233
@@ -XXX,XX +XXX,XX @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
234
Error *local_err = NULL;
235
int ret;
236
237
- ret = bdrv_check_update_perm(child_bs, NULL, perm, shared_perm, NULL, NULL,
238
- errp);
239
+ ret = bdrv_check_update_perm(child_bs, NULL, perm, shared_perm, NULL, errp);
240
if (ret < 0) {
241
bdrv_abort_perm_update(child_bs);
242
bdrv_unref(child_bs);
243
@@ -XXX,XX +XXX,XX @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
244
QTAILQ_FOREACH(bs_entry, bs_queue, entry) {
245
BDRVReopenState *state = &bs_entry->state;
246
ret = bdrv_check_perm(state->bs, bs_queue, state->perm,
247
- state->shared_perm, NULL, NULL, errp);
248
+ state->shared_perm, NULL, errp);
249
if (ret < 0) {
250
goto cleanup_perm;
251
}
252
@@ -XXX,XX +XXX,XX @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
253
bs_queue, state->perm, state->shared_perm,
254
&nperm, &nshared);
255
ret = bdrv_check_update_perm(state->new_backing_bs, NULL,
256
- nperm, nshared, NULL, NULL, errp);
257
+ nperm, nshared, NULL, errp);
258
if (ret < 0) {
259
goto cleanup_perm;
260
}
261
@@ -XXX,XX +XXX,XX @@ static void bdrv_replace_node_common(BlockDriverState *from,
262
263
/* Check whether the required permissions can be granted on @to, ignoring
264
* all BdrvChild in @list so that they can't block themselves. */
265
- ret = bdrv_check_update_perm(to, NULL, perm, shared, list, NULL, errp);
266
+ ret = bdrv_check_update_perm(to, NULL, perm, shared, list, errp);
267
if (ret < 0) {
268
bdrv_abort_perm_update(to);
269
goto out;
270
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_invalidate_cache(BlockDriverState *bs, Error **errp)
271
*/
272
if (bs->open_flags & BDRV_O_INACTIVE) {
273
bs->open_flags &= ~BDRV_O_INACTIVE;
274
- ret = bdrv_refresh_perms(bs, NULL, errp);
275
+ ret = bdrv_refresh_perms(bs, errp);
276
if (ret < 0) {
277
bs->open_flags |= BDRV_O_INACTIVE;
278
return ret;
279
@@ -XXX,XX +XXX,XX @@ static bool bdrv_has_bds_parent(BlockDriverState *bs, bool only_active)
280
static int bdrv_inactivate_recurse(BlockDriverState *bs)
281
{
282
BdrvChild *child, *parent;
283
- bool tighten_restrictions;
284
int ret;
285
286
if (!bs->drv) {
287
@@ -XXX,XX +XXX,XX @@ static int bdrv_inactivate_recurse(BlockDriverState *bs)
288
* We only tried to loosen restrictions, so errors are not fatal, ignore
289
* them.
290
*/
291
- bdrv_refresh_perms(bs, &tighten_restrictions, NULL);
292
- assert(tighten_restrictions == false);
293
+ bdrv_refresh_perms(bs, NULL);
294
295
/* Recursively inactivate children */
296
QLIST_FOREACH(child, &bs->children, next) {
297
--
298
2.29.2
299
300
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

1. BDRV_REQ_NO_SERIALISING no longer exists, so don't mention it.

2. We are going to add one more user of BDRV_REQ_SERIALISING, so the
   comment about backup becomes a bit confusing here. The use case in
   backup is documented in block/backup.c, so let's just drop the
   duplication here.

3. The fact that BDRV_REQ_SERIALISING is only for write requests is
   omitted. Add a note.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20201021145859.11201-2-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block.h | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)
21
22
diff --git a/include/block/block.h b/include/block/block.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/include/block/block.h
25
+++ b/include/block/block.h
26
@@ -XXX,XX +XXX,XX @@ typedef enum {
27
* content. */
28
BDRV_REQ_WRITE_UNCHANGED = 0x40,
29
30
- /*
31
- * BDRV_REQ_SERIALISING forces request serialisation for writes.
32
- * It is used to ensure that writes to the backing file of a backup process
33
- * target cannot race with a read of the backup target that defers to the
34
- * backing file.
35
- *
36
- * Note, that BDRV_REQ_SERIALISING is _not_ opposite in meaning to
37
- * BDRV_REQ_NO_SERIALISING. A more descriptive name for the latter might be
38
- * _DO_NOT_WAIT_FOR_SERIALISING, except that is too long.
39
- */
40
+ /* Forces request serialisation. Use only with write requests. */
41
BDRV_REQ_SERIALISING = 0x80,
42
43
/* Execute the request only if the operation can be offloaded or otherwise
44
--
45
2.29.2
46
47
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

The comment states that on a misaligned request we should already have
been waiting. But for bdrv_padding_rmw_read, we called
bdrv_mark_request_serialising with align = request_alignment, and now
we serialise with align = cluster_size. So we may have to wait again,
with the larger alignment.

Note that the only user of BDRV_REQ_SERIALISING is backup, which issues
cluster-aligned requests, so it seems the assertion should not fire for
now. But it is wrong anyway.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20201021145859.11201-3-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/io.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)
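
To illustrate why a second wait can be needed, here is a small
standalone sketch of the overlap arithmetic (the overlap_offset formula
appears later in this series; the request values below are made up for
illustration and are not taken from the patch):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Standalone sketch, not QEMU code: shows how the serialising overlap
     * region grows when the alignment changes from request_alignment to
     * cluster_size (overlap_offset = offset & ~(align - 1)).
     */
    int main(void)
    {
        uint64_t offset = 65536 + 512;  /* request already 512-byte aligned */
        uint64_t req_align = 512;       /* bs->bl.request_alignment */
        uint64_t cluster = 65536;       /* bdrv_get_cluster_size(bs) */

        /* 66048: serialising with request_alignment changes nothing */
        printf("overlap @ request_alignment: %" PRIu64 "\n",
               offset & ~(req_align - 1));
        /* 65536: the region widens, so new conflicts (and waits) are possible */
        printf("overlap @ cluster_size:      %" PRIu64 "\n",
               offset & ~(cluster - 1));
        return 0;
    }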
20
21
diff --git a/block/io.c b/block/io.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/block/io.c
24
+++ b/block/io.c
25
@@ -XXX,XX +XXX,XX @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, uint64_t bytes,
26
BdrvTrackedRequest *req, int flags)
27
{
28
BlockDriverState *bs = child->bs;
29
- bool waited;
30
int64_t end_sector = DIV_ROUND_UP(offset + bytes, BDRV_SECTOR_SIZE);
31
32
if (bs->read_only) {
33
@@ -XXX,XX +XXX,XX @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, uint64_t bytes,
34
assert(!(flags & ~BDRV_REQ_MASK));
35
36
if (flags & BDRV_REQ_SERIALISING) {
37
- waited = bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
38
- /*
39
- * For a misaligned request we should have already waited earlier,
40
- * because we come after bdrv_padding_rmw_read which must be called
41
- * with the request already marked as serialising.
42
- */
43
- assert(!waited ||
44
- (req->offset == req->overlap_offset &&
45
- req->bytes == req->overlap_bytes));
46
+ bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
47
} else {
48
bdrv_wait_serialising_requests(req);
49
}
50
--
51
2.29.2
52
53
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Split this logic out so that it can be reused separately later in the
series.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20201021145859.11201-4-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/io.c | 71 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 41 insertions(+), 30 deletions(-)
12
13
diff --git a/block/io.c b/block/io.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/block/io.c
16
+++ b/block/io.c
17
@@ -XXX,XX +XXX,XX @@ static bool tracked_request_overlaps(BdrvTrackedRequest *req,
18
return true;
19
}
20
21
+/* Called with self->bs->reqs_lock held */
22
+static BdrvTrackedRequest *
23
+bdrv_find_conflicting_request(BdrvTrackedRequest *self)
24
+{
25
+ BdrvTrackedRequest *req;
26
+
27
+ QLIST_FOREACH(req, &self->bs->tracked_requests, list) {
28
+ if (req == self || (!req->serialising && !self->serialising)) {
29
+ continue;
30
+ }
31
+ if (tracked_request_overlaps(req, self->overlap_offset,
32
+ self->overlap_bytes))
33
+ {
34
+ /*
35
+ * Hitting this means there was a reentrant request, for
36
+ * example, a block driver issuing nested requests. This must
37
+ * never happen since it means deadlock.
38
+ */
39
+ assert(qemu_coroutine_self() != req->co);
40
+
41
+ /*
42
+ * If the request is already (indirectly) waiting for us, or
43
+ * will wait for us as soon as it wakes up, then just go on
44
+ * (instead of producing a deadlock in the former case).
45
+ */
46
+ if (!req->waiting_for) {
47
+ return req;
48
+ }
49
+ }
50
+ }
51
+
52
+ return NULL;
53
+}
54
+
55
static bool coroutine_fn
56
bdrv_wait_serialising_requests_locked(BlockDriverState *bs,
57
BdrvTrackedRequest *self)
58
{
59
BdrvTrackedRequest *req;
60
- bool retry;
61
bool waited = false;
62
63
- do {
64
- retry = false;
65
- QLIST_FOREACH(req, &bs->tracked_requests, list) {
66
- if (req == self || (!req->serialising && !self->serialising)) {
67
- continue;
68
- }
69
- if (tracked_request_overlaps(req, self->overlap_offset,
70
- self->overlap_bytes))
71
- {
72
- /* Hitting this means there was a reentrant request, for
73
- * example, a block driver issuing nested requests. This must
74
- * never happen since it means deadlock.
75
- */
76
- assert(qemu_coroutine_self() != req->co);
77
-
78
- /* If the request is already (indirectly) waiting for us, or
79
- * will wait for us as soon as it wakes up, then just go on
80
- * (instead of producing a deadlock in the former case). */
81
- if (!req->waiting_for) {
82
- self->waiting_for = req;
83
- qemu_co_queue_wait(&req->wait_queue, &bs->reqs_lock);
84
- self->waiting_for = NULL;
85
- retry = true;
86
- waited = true;
87
- break;
88
- }
89
- }
90
- }
91
- } while (retry);
92
+ while ((req = bdrv_find_conflicting_request(self))) {
93
+ self->waiting_for = req;
94
+ qemu_co_queue_wait(&req->wait_queue, &bs->reqs_lock);
95
+ self->waiting_for = NULL;
96
+ waited = true;
97
+ }
98
+
99
return waited;
100
}
101
102
--
103
2.29.2
104
105
1
From: Alberto Garcia <berto@igalia.com>

There are a few cases in which we're passing an Error pointer to a
function only to discard it immediately afterwards without checking it.
In these cases we can simply remove the variable and pass NULL instead.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20170829120836.16091-1-berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/qcow.c | 12 +++---------
 block/qcow2.c | 8 ++------
 dump.c | 4 +---
 3 files changed, 6 insertions(+), 18 deletions(-)

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

bs is linked in req, so there is no need to pass it separately. Most of
the tracked-requests API doesn't take a bs argument anyway. After this
patch only tracked_request_begin() still has it, and that is on purpose.

While here, also add a comment about what "_locked" means.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20201021145859.11201-5-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/io.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
3 files changed, 6 insertions(+), 18 deletions(-)
17
16
18
diff --git a/block/qcow.c b/block/qcow.c
17
diff --git a/block/io.c b/block/io.c
19
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
20
--- a/block/qcow.c
19
--- a/block/io.c
21
+++ b/block/qcow.c
20
+++ b/block/io.c
22
@@ -XXX,XX +XXX,XX @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
21
@@ -XXX,XX +XXX,XX @@ bdrv_find_conflicting_request(BdrvTrackedRequest *self)
23
start_sect = (offset & ~(s->cluster_size - 1)) >> 9;
24
for(i = 0; i < s->cluster_sectors; i++) {
25
if (i < n_start || i >= n_end) {
26
- Error *err = NULL;
27
memset(s->cluster_data, 0x00, 512);
28
if (qcrypto_block_encrypt(s->crypto, start_sect + i,
29
s->cluster_data,
30
BDRV_SECTOR_SIZE,
31
- &err) < 0) {
32
- error_free(err);
33
+ NULL) < 0) {
34
errno = EIO;
35
return -1;
36
}
37
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
38
QEMUIOVector hd_qiov;
39
uint8_t *buf;
40
void *orig_buf;
41
- Error *err = NULL;
42
43
if (qiov->niov > 1) {
44
buf = orig_buf = qemu_try_blockalign(bs, qiov->size);
45
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
46
if (bs->encrypted) {
47
assert(s->crypto);
48
if (qcrypto_block_decrypt(s->crypto, sector_num, buf,
49
- n * BDRV_SECTOR_SIZE, &err) < 0) {
50
+ n * BDRV_SECTOR_SIZE, NULL) < 0) {
51
goto fail;
52
}
53
}
54
@@ -XXX,XX +XXX,XX @@ done:
55
return ret;
56
57
fail:
58
- error_free(err);
59
ret = -EIO;
60
goto done;
61
}
62
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
63
break;
64
}
65
if (bs->encrypted) {
66
- Error *err = NULL;
67
assert(s->crypto);
68
if (qcrypto_block_encrypt(s->crypto, sector_num, buf,
69
- n * BDRV_SECTOR_SIZE, &err) < 0) {
70
- error_free(err);
71
+ n * BDRV_SECTOR_SIZE, NULL) < 0) {
72
ret = -EIO;
73
break;
74
}
75
diff --git a/block/qcow2.c b/block/qcow2.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/block/qcow2.c
78
+++ b/block/qcow2.c
79
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int qcow2_co_preadv(BlockDriverState *bs, uint64_t offset,
80
assert(s->crypto);
81
assert((offset & (BDRV_SECTOR_SIZE - 1)) == 0);
82
assert((cur_bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
83
- Error *err = NULL;
84
if (qcrypto_block_decrypt(s->crypto,
85
(s->crypt_physical_offset ?
86
cluster_offset + offset_in_cluster :
87
offset) >> BDRV_SECTOR_BITS,
88
cluster_data,
89
cur_bytes,
90
- &err) < 0) {
91
- error_free(err);
92
+ NULL) < 0) {
93
ret = -EIO;
94
goto fail;
95
}
96
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int qcow2_co_pwritev(BlockDriverState *bs, uint64_t offset,
97
qemu_iovec_concat(&hd_qiov, qiov, bytes_done, cur_bytes);
98
99
if (bs->encrypted) {
100
- Error *err = NULL;
101
assert(s->crypto);
102
if (!cluster_data) {
103
cluster_data = qemu_try_blockalign(bs->file->bs,
104
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int qcow2_co_pwritev(BlockDriverState *bs, uint64_t offset,
105
cluster_offset + offset_in_cluster :
106
offset) >> BDRV_SECTOR_BITS,
107
cluster_data,
108
- cur_bytes, &err) < 0) {
109
- error_free(err);
110
+ cur_bytes, NULL) < 0) {
111
ret = -EIO;
112
goto fail;
113
}
114
diff --git a/dump.c b/dump.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/dump.c
117
+++ b/dump.c
118
@@ -XXX,XX +XXX,XX @@ static void dump_process(DumpState *s, Error **errp)
119
120
static void *dump_thread(void *data)
121
{
122
- Error *err = NULL;
123
DumpState *s = (DumpState *)data;
124
- dump_process(s, &err);
125
- error_free(err);
126
+ dump_process(s, NULL);
127
return NULL;
22
return NULL;
128
}
23
}
129
24
25
+/* Called with self->bs->reqs_lock held */
26
static bool coroutine_fn
27
-bdrv_wait_serialising_requests_locked(BlockDriverState *bs,
28
- BdrvTrackedRequest *self)
29
+bdrv_wait_serialising_requests_locked(BdrvTrackedRequest *self)
30
{
31
BdrvTrackedRequest *req;
32
bool waited = false;
33
34
while ((req = bdrv_find_conflicting_request(self))) {
35
self->waiting_for = req;
36
- qemu_co_queue_wait(&req->wait_queue, &bs->reqs_lock);
37
+ qemu_co_queue_wait(&req->wait_queue, &self->bs->reqs_lock);
38
self->waiting_for = NULL;
39
waited = true;
40
}
41
@@ -XXX,XX +XXX,XX @@ bool bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align)
42
43
req->overlap_offset = MIN(req->overlap_offset, overlap_offset);
44
req->overlap_bytes = MAX(req->overlap_bytes, overlap_bytes);
45
- waited = bdrv_wait_serialising_requests_locked(bs, req);
46
+ waited = bdrv_wait_serialising_requests_locked(req);
47
qemu_co_mutex_unlock(&bs->reqs_lock);
48
return waited;
49
}
50
@@ -XXX,XX +XXX,XX @@ static bool coroutine_fn bdrv_wait_serialising_requests(BdrvTrackedRequest *self
51
}
52
53
qemu_co_mutex_lock(&bs->reqs_lock);
54
- waited = bdrv_wait_serialising_requests_locked(bs, self);
55
+ waited = bdrv_wait_serialising_requests_locked(self);
56
qemu_co_mutex_unlock(&bs->reqs_lock);
57
58
return waited;
130
--
59
--
131
2.13.5
60
2.29.2
132
61
133
62
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

We'll need a separate function which will only "mark" a request as
serialising with the specified alignment, but not wait for conflicting
requests. It will behave like the old bdrv_mark_request_serialising()
did before bdrv_wait_serialising_requests_locked() was merged into it.

To reduce the possible mess, let's do the following:

The public function that does both marking and waiting will be called
bdrv_make_request_serialising(), and the private function which only
"marks" will be called tracked_request_set_serialising().

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201021145859.11201-6-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block_int.h | 3 ++-
 block/file-posix.c | 2 +-
 block/io.c | 35 +++++++++++++++++++++++------------
 3 files changed, 26 insertions(+), 14 deletions(-)
23
24
diff --git a/include/block/block_int.h b/include/block/block_int.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/include/block/block_int.h
27
+++ b/include/block/block_int.h
28
@@ -XXX,XX +XXX,XX @@ extern unsigned int bdrv_drain_all_count;
29
void bdrv_apply_subtree_drain(BdrvChild *child, BlockDriverState *new_parent);
30
void bdrv_unapply_subtree_drain(BdrvChild *child, BlockDriverState *old_parent);
31
32
-bool coroutine_fn bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align);
33
+bool coroutine_fn bdrv_make_request_serialising(BdrvTrackedRequest *req,
34
+ uint64_t align);
35
BdrvTrackedRequest *coroutine_fn bdrv_co_get_self_request(BlockDriverState *bs);
36
37
int get_tmp_filename(char *filename, int size);
38
diff --git a/block/file-posix.c b/block/file-posix.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/block/file-posix.c
41
+++ b/block/file-posix.c
42
@@ -XXX,XX +XXX,XX @@ raw_do_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int bytes,
43
44
assert(bdrv_check_request(req->offset, req->bytes) == 0);
45
46
- bdrv_mark_request_serialising(req, bs->bl.request_alignment);
47
+ bdrv_make_request_serialising(req, bs->bl.request_alignment);
48
}
49
#endif
50
51
diff --git a/block/io.c b/block/io.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/block/io.c
54
+++ b/block/io.c
55
@@ -XXX,XX +XXX,XX @@ bdrv_wait_serialising_requests_locked(BdrvTrackedRequest *self)
56
return waited;
57
}
58
59
-bool bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align)
60
+/* Called with req->bs->reqs_lock held */
61
+static void tracked_request_set_serialising(BdrvTrackedRequest *req,
62
+ uint64_t align)
63
{
64
- BlockDriverState *bs = req->bs;
65
int64_t overlap_offset = req->offset & ~(align - 1);
66
uint64_t overlap_bytes = ROUND_UP(req->offset + req->bytes, align)
67
- overlap_offset;
68
- bool waited;
69
70
- qemu_co_mutex_lock(&bs->reqs_lock);
71
if (!req->serialising) {
72
qatomic_inc(&req->bs->serialising_in_flight);
73
req->serialising = true;
74
@@ -XXX,XX +XXX,XX @@ bool bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align)
75
76
req->overlap_offset = MIN(req->overlap_offset, overlap_offset);
77
req->overlap_bytes = MAX(req->overlap_bytes, overlap_bytes);
78
- waited = bdrv_wait_serialising_requests_locked(req);
79
- qemu_co_mutex_unlock(&bs->reqs_lock);
80
- return waited;
81
}
82
83
/**
84
@@ -XXX,XX +XXX,XX @@ static bool coroutine_fn bdrv_wait_serialising_requests(BdrvTrackedRequest *self
85
return waited;
86
}
87
88
+bool coroutine_fn bdrv_make_request_serialising(BdrvTrackedRequest *req,
89
+ uint64_t align)
90
+{
91
+ bool waited;
92
+
93
+ qemu_co_mutex_lock(&req->bs->reqs_lock);
94
+
95
+ tracked_request_set_serialising(req, align);
96
+ waited = bdrv_wait_serialising_requests_locked(req);
97
+
98
+ qemu_co_mutex_unlock(&req->bs->reqs_lock);
99
+
100
+ return waited;
101
+}
102
+
103
int bdrv_check_request(int64_t offset, int64_t bytes)
104
{
105
if (offset < 0 || bytes < 0) {
106
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild *child,
107
* with each other for the same cluster. For example, in copy-on-read
108
* it ensures that the CoR read and write operations are atomic and
109
* guest writes cannot interleave between them. */
110
- bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
111
+ bdrv_make_request_serialising(req, bdrv_get_cluster_size(bs));
112
} else {
113
bdrv_wait_serialising_requests(req);
114
}
115
@@ -XXX,XX +XXX,XX @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, uint64_t bytes,
116
assert(!(flags & ~BDRV_REQ_MASK));
117
118
if (flags & BDRV_REQ_SERIALISING) {
119
- bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
120
+ bdrv_make_request_serialising(req, bdrv_get_cluster_size(bs));
121
} else {
122
bdrv_wait_serialising_requests(req);
123
}
124
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_co_do_zero_pwritev(BdrvChild *child,
125
126
padding = bdrv_init_padding(bs, offset, bytes, &pad);
127
if (padding) {
128
- bdrv_mark_request_serialising(req, align);
129
+ bdrv_make_request_serialising(req, align);
130
131
bdrv_padding_rmw_read(child, req, &pad, true);
132
133
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_pwritev_part(BdrvChild *child,
134
}
135
136
if (bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, &pad)) {
137
- bdrv_mark_request_serialising(&req, align);
138
+ bdrv_make_request_serialising(&req, align);
139
bdrv_padding_rmw_read(child, &req, &pad, false);
140
}
141
142
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_truncate(BdrvChild *child, int64_t offset, bool exact,
143
* new area, we need to make sure that no write requests are made to it
144
* concurrently or they might be overwritten by preallocation. */
145
if (new_bytes) {
146
- bdrv_mark_request_serialising(&req, 1);
147
+ bdrv_make_request_serialising(&req, 1);
148
}
149
if (bs->read_only) {
150
error_setg(errp, "Image is read-only");
151
--
152
2.29.2
153
154
1
Most qcow2 files are uncompressed so it is wasteful to allocate (32 + 1)
* cluster_size + 512 bytes upfront. Allocate s->cluster_cache and
s->cluster_data when the first read operation is performed on a
compressed cluster.

The buffers are freed in .bdrv_close(). .bdrv_open() no longer has any
code paths that can allocate these buffers, so remove the free functions
in the error code path.

This patch can result in significant memory savings when many qcow2
disks are attached or backing file chains are long:

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Add a flag to make a serialising request not wait: if there are
conflicting requests, just return an error immediately. It will be used
in the upcoming preallocate filter.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201021145859.11201-7-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block.h | 9 ++++++++-
 block/io.c | 11 ++++++++++-
 2 files changed, 18 insertions(+), 2 deletions(-)
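
As a usage illustration only (not code from this series): a caller such
as the preallocate filter described in the cover letter could combine
the new flag with BDRV_REQ_SERIALISING and treat -EBUSY as "skip for
now". The helper below is hypothetical; bdrv_co_pwrite_zeroes() and the
flags are the existing APIs this patch touches.

    /*
     * Hypothetical caller sketch: preallocate past EOF, but never block
     * behind a conflicting in-flight request (the point of BDRV_REQ_NO_WAIT).
     */
    static int coroutine_fn try_preallocate(BdrvChild *file,
                                            int64_t offset, int bytes)
    {
        int ret = bdrv_co_pwrite_zeroes(file, offset, bytes,
                                        BDRV_REQ_SERIALISING |
                                        BDRV_REQ_NO_WAIT);
        if (ret == -EBUSY) {
            /* A conflicting request was found; the caller falls back to a
             * plain write instead of waiting. */
            return 0;
        }
        return ret;
    }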
12
15
13
Before 12.81% (1,023,193,088B)
16
diff --git a/include/block/block.h b/include/block/block.h
14
After 5.36% (393,893,888B)
15
16
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
17
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
18
Reviewed-by: Eric Blake <eblake@redhat.com>
19
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
20
Message-id: 20170821135530.32344-1-stefanha@redhat.com
21
Cc: Kevin Wolf <kwolf@redhat.com>
22
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
23
---
24
block/qcow2-cluster.c | 17 +++++++++++++++++
25
block/qcow2.c | 12 ------------
26
2 files changed, 17 insertions(+), 12 deletions(-)
27
28
diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
29
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
30
--- a/block/qcow2-cluster.c
18
--- a/include/block/block.h
31
+++ b/block/qcow2-cluster.c
19
+++ b/include/block/block.h
32
@@ -XXX,XX +XXX,XX @@ int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset)
20
@@ -XXX,XX +XXX,XX @@ typedef enum {
33
nb_csectors = ((cluster_offset >> s->csize_shift) & s->csize_mask) + 1;
21
* written to qiov parameter which may be NULL.
34
sector_offset = coffset & 511;
22
*/
35
csize = nb_csectors * 512 - sector_offset;
23
BDRV_REQ_PREFETCH = 0x200,
36
+
24
+
37
+ /* Allocate buffers on first decompress operation, most images are
25
+ /*
38
+ * uncompressed and the memory overhead can be avoided. The buffers
26
+ * If we need to wait for other requests, just fail immediately. Used
39
+ * are freed in .bdrv_close().
27
+ * only together with BDRV_REQ_SERIALISING.
40
+ */
28
+ */
41
+ if (!s->cluster_data) {
29
+ BDRV_REQ_NO_WAIT = 0x400,
42
+ /* one more sector for decompressed data alignment */
30
+
43
+ s->cluster_data = qemu_try_blockalign(bs->file->bs,
31
/* Mask of valid flags */
44
+ QCOW_MAX_CRYPT_CLUSTERS * s->cluster_size + 512);
32
- BDRV_REQ_MASK = 0x3ff,
45
+ if (!s->cluster_data) {
33
+ BDRV_REQ_MASK = 0x7ff,
46
+ return -ENOMEM;
34
} BdrvRequestFlags;
47
+ }
35
48
+ }
36
typedef struct BlockSizes {
49
+ if (!s->cluster_cache) {
37
diff --git a/block/io.c b/block/io.c
50
+ s->cluster_cache = g_malloc(s->cluster_size);
38
index XXXXXXX..XXXXXXX 100644
39
--- a/block/io.c
40
+++ b/block/io.c
41
@@ -XXX,XX +XXX,XX @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, uint64_t bytes,
42
assert(!(bs->open_flags & BDRV_O_INACTIVE));
43
assert((bs->open_flags & BDRV_O_NO_IO) == 0);
44
assert(!(flags & ~BDRV_REQ_MASK));
45
+ assert(!((flags & BDRV_REQ_NO_WAIT) && !(flags & BDRV_REQ_SERIALISING)));
46
47
if (flags & BDRV_REQ_SERIALISING) {
48
- bdrv_make_request_serialising(req, bdrv_get_cluster_size(bs));
49
+ QEMU_LOCK_GUARD(&bs->reqs_lock);
50
+
51
+ tracked_request_set_serialising(req, bdrv_get_cluster_size(bs));
52
+
53
+ if ((flags & BDRV_REQ_NO_WAIT) && bdrv_find_conflicting_request(req)) {
54
+ return -EBUSY;
51
+ }
55
+ }
52
+
56
+
53
BLKDBG_EVENT(bs->file, BLKDBG_READ_COMPRESSED);
57
+ bdrv_wait_serialising_requests_locked(req);
54
ret = bdrv_read(bs->file, coffset >> 9, s->cluster_data,
58
} else {
55
nb_csectors);
59
bdrv_wait_serialising_requests(req);
56
diff --git a/block/qcow2.c b/block/qcow2.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/block/qcow2.c
59
+++ b/block/qcow2.c
60
@@ -XXX,XX +XXX,XX @@ static int qcow2_do_open(BlockDriverState *bs, QDict *options, int flags,
61
goto fail;
62
}
60
}
63
64
- s->cluster_cache = g_malloc(s->cluster_size);
65
- /* one more sector for decompressed data alignment */
66
- s->cluster_data = qemu_try_blockalign(bs->file->bs, QCOW_MAX_CRYPT_CLUSTERS
67
- * s->cluster_size + 512);
68
- if (s->cluster_data == NULL) {
69
- error_setg(errp, "Could not allocate temporary cluster buffer");
70
- ret = -ENOMEM;
71
- goto fail;
72
- }
73
-
74
s->cluster_cache_offset = -1;
75
s->flags = flags;
76
77
@@ -XXX,XX +XXX,XX @@ static int qcow2_do_open(BlockDriverState *bs, QDict *options, int flags,
78
if (s->refcount_block_cache) {
79
qcow2_cache_destroy(bs, s->refcount_block_cache);
80
}
81
- g_free(s->cluster_cache);
82
- qemu_vfree(s->cluster_data);
83
qcrypto_block_free(s->crypto);
84
qapi_free_QCryptoBlockOpenOptions(s->crypto_opts);
85
return ret;
86
--
61
--
87
2.13.5
62
2.29.2
88
63
89
64
1
From: Eduardo Habkost <ehabkost@redhat.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
If QEMU is running on a system that's out of memory and mmap()
3
Do generic processing even for drivers which define .bdrv_check_perm
4
fails, QEMU aborts with no error message at all, making it hard
4
handler. It's needed for the upcoming preallocate filter: it will need to do
5
to debug the reason for the failure.
5
an additional action on bdrv_check_perm, but doesn't want to reimplement the
6
generic logic.
6
7
7
Add perror() calls that will print error information before
8
The patch doesn't change existing behaviour: the only driver that
8
aborting.
9
implements bdrv_check_perm is file-posix, but it never has any
10
children.
9
11
10
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
12
Also, bdrv_set_perm() does not stop processing if the driver has a
11
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
.bdrv_set_perm handler as well.
12
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
13
Message-id: 20170829212053.6003-1-ehabkost@redhat.com
15
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
14
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
16
Message-Id: <20201021145859.11201-8-vsementsov@virtuozzo.com>
17
Reviewed-by: Max Reitz <mreitz@redhat.com>
18
Signed-off-by: Max Reitz <mreitz@redhat.com>
15
---
19
---
16
util/oslib-posix.c | 2 ++
20
block.c | 7 +++++--
17
1 file changed, 2 insertions(+)
21
1 file changed, 5 insertions(+), 2 deletions(-)
18
22
19
diff --git a/util/oslib-posix.c b/util/oslib-posix.c
23
diff --git a/block.c b/block.c
20
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
21
--- a/util/oslib-posix.c
25
--- a/block.c
22
+++ b/util/oslib-posix.c
26
+++ b/block.c
23
@@ -XXX,XX +XXX,XX @@ void *qemu_alloc_stack(size_t *sz)
27
@@ -XXX,XX +XXX,XX @@ static int bdrv_check_perm(BlockDriverState *bs, BlockReopenQueue *q,
24
ptr = mmap(NULL, *sz, PROT_READ | PROT_WRITE,
25
MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
26
if (ptr == MAP_FAILED) {
27
+ perror("failed to allocate memory for stack");
28
abort();
29
}
28
}
30
29
31
@@ -XXX,XX +XXX,XX @@ void *qemu_alloc_stack(size_t *sz)
30
if (drv->bdrv_check_perm) {
32
guardpage = ptr;
31
- return drv->bdrv_check_perm(bs, cumulative_perms,
33
#endif
32
- cumulative_shared_perms, errp);
34
if (mprotect(guardpage, pagesz, PROT_NONE) != 0) {
33
+ ret = drv->bdrv_check_perm(bs, cumulative_perms,
35
+ perror("failed to set up stack guard page");
34
+ cumulative_shared_perms, errp);
36
abort();
35
+ if (ret < 0) {
36
+ return ret;
37
+ }
37
}
38
}
38
39
40
/* Drivers that never have children can omit .bdrv_child_perm() */
39
--
41
--
40
2.13.5
42
2.29.2
41
43
42
44
1
From: Alberto Garcia <berto@igalia.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
LeakyBucket.burst_length is defined as an unsigned integer but the
3
It's intended to be inserted between format and protocol nodes to
4
code never checks for overflows and it only makes sure that the value
4
preallocate additional space (expanding the protocol file) on writes
5
is not 0.
5
crossing EOF. It improves performance on file systems with slow
6
allocation.
6
7
7
In practice this means that the user can set something like
8
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
throttling.iops-total-max-length=4294967300 despite being larger than
9
Message-Id: <20201021145859.11201-9-vsementsov@virtuozzo.com>
9
UINT_MAX and the final value after casting to unsigned int will be 4.
10
Reviewed-by: Max Reitz <mreitz@redhat.com>
11
[mreitz: Two comment fixes, and bumped the version from 5.2 to 6.0]
12
Signed-off-by: Max Reitz <mreitz@redhat.com>
13
---
14
docs/system/qemu-block-drivers.rst.inc | 26 ++
15
qapi/block-core.json | 20 +-
16
block/preallocate.c | 559 +++++++++++++++++++++++++
17
block/meson.build | 1 +
18
4 files changed, 605 insertions(+), 1 deletion(-)
19
create mode 100644 block/preallocate.c
10
20
11
This patch changes the data type to uint64_t. This does not increase
21
diff --git a/docs/system/qemu-block-drivers.rst.inc b/docs/system/qemu-block-drivers.rst.inc
12
the storage size of LeakyBucket, and allows us to assign the value
13
directly from qemu_opt_get_number() or BlockIOThrottle and then do the
14
checks directly in throttle_is_valid().
15
16
The value of burst_length does not have a specific upper limit,
17
but since the bucket size is defined by max * burst_length we have
18
to prevent overflows. Instead of going for UINT64_MAX or something
19
similar this patch reuses THROTTLE_VALUE_MAX, which allows I/O bursts
20
of 1 GiB/s for 10 days in a row.
21
22
Signed-off-by: Alberto Garcia <berto@igalia.com>
23
Message-id: 1b2e3049803f71cafb2e1fa1be4fb47147a0d398.1503580370.git.berto@igalia.com
24
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
25
---
26
include/qemu/throttle.h | 2 +-
27
util/throttle.c | 5 +++++
28
2 files changed, 6 insertions(+), 1 deletion(-)
29
30
diff --git a/include/qemu/throttle.h b/include/qemu/throttle.h
31
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
32
--- a/include/qemu/throttle.h
23
--- a/docs/system/qemu-block-drivers.rst.inc
33
+++ b/include/qemu/throttle.h
24
+++ b/docs/system/qemu-block-drivers.rst.inc
34
@@ -XXX,XX +XXX,XX @@ typedef struct LeakyBucket {
25
@@ -XXX,XX +XXX,XX @@ on host and see if there are locks held by the QEMU process on the image file.
35
uint64_t max; /* leaky bucket max burst in units */
26
More than one byte could be locked by the QEMU instance, each byte of which
36
double level; /* bucket level in units */
27
reflects a particular permission that is acquired or protected by the running
37
double burst_level; /* bucket level in units (for computing bursts) */
28
block driver.
38
- unsigned burst_length; /* max length of the burst period, in seconds */
29
+
39
+ uint64_t burst_length; /* max length of the burst period, in seconds */
30
+Filter drivers
40
} LeakyBucket;
31
+~~~~~~~~~~~~~~
41
32
+
42
/* The following structure is used to configure a ThrottleState
33
+QEMU supports several filter drivers, which don't store any data, but perform
43
diff --git a/util/throttle.c b/util/throttle.c
34
+some additional tasks by hooking I/O requests.
35
+
36
+.. program:: filter-drivers
37
+.. option:: preallocate
38
+
39
+ The preallocate filter driver is intended to be inserted between format
40
+ and protocol nodes and preallocates some additional space
41
+ (expanding the protocol file) when writing past the file’s end. This can be
42
+ useful for file systems with slow allocation.
43
+
44
+ Supported options:
45
+
46
+ .. program:: preallocate
47
+ .. option:: prealloc-align
48
+
49
+ On preallocation, align the file length to this value (in bytes), default 1M.
50
+
51
+ .. program:: preallocate
52
+ .. option:: prealloc-size
53
+
54
+ How much to preallocate (in bytes), default 128M.
55
diff --git a/qapi/block-core.json b/qapi/block-core.json
44
index XXXXXXX..XXXXXXX 100644
56
index XXXXXXX..XXXXXXX 100644
45
--- a/util/throttle.c
57
--- a/qapi/block-core.json
46
+++ b/util/throttle.c
58
+++ b/qapi/block-core.json
47
@@ -XXX,XX +XXX,XX @@ bool throttle_is_valid(ThrottleConfig *cfg, Error **errp)
59
@@ -XXX,XX +XXX,XX @@
48
return false;
60
'cloop', 'compress', 'copy-on-read', 'dmg', 'file', 'ftp', 'ftps',
49
}
61
'gluster', 'host_cdrom', 'host_device', 'http', 'https', 'iscsi',
50
62
'luks', 'nbd', 'nfs', 'null-aio', 'null-co', 'nvme', 'parallels',
51
+ if (bkt->max && bkt->burst_length > THROTTLE_VALUE_MAX / bkt->max) {
63
- 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'rbd',
52
+ error_setg(errp, "burst length too high for this burst rate");
64
+ 'preallocate', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'rbd',
65
{ 'name': 'replication', 'if': 'defined(CONFIG_REPLICATION)' },
66
'sheepdog',
67
'ssh', 'throttle', 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
68
@@ -XXX,XX +XXX,XX @@
69
'data': { 'aes': 'QCryptoBlockOptionsQCow',
70
'luks': 'QCryptoBlockOptionsLUKS'} }
71
72
+##
73
+# @BlockdevOptionsPreallocate:
74
+#
75
+# Filter driver intended to be inserted between format and protocol nodes,
76
+# doing preallocation in the protocol node on write.
77
+#
78
+# @prealloc-align: on preallocation, align file length to this number,
79
+# default 1048576 (1M)
80
+#
81
+# @prealloc-size: how much to preallocate, default 134217728 (128M)
82
+#
83
+# Since: 6.0
84
+##
85
+{ 'struct': 'BlockdevOptionsPreallocate',
86
+ 'base': 'BlockdevOptionsGenericFormat',
87
+ 'data': { '*prealloc-align': 'int', '*prealloc-size': 'int' } }
88
+
89
##
90
# @BlockdevOptionsQcow2:
91
#
92
@@ -XXX,XX +XXX,XX @@
93
'null-co': 'BlockdevOptionsNull',
94
'nvme': 'BlockdevOptionsNVMe',
95
'parallels': 'BlockdevOptionsGenericFormat',
96
+ 'preallocate':'BlockdevOptionsPreallocate',
97
'qcow2': 'BlockdevOptionsQcow2',
98
'qcow': 'BlockdevOptionsQcow',
99
'qed': 'BlockdevOptionsGenericCOWFormat',
100
diff --git a/block/preallocate.c b/block/preallocate.c
101
new file mode 100644
102
index XXXXXXX..XXXXXXX
103
--- /dev/null
104
+++ b/block/preallocate.c
105
@@ -XXX,XX +XXX,XX @@
106
+/*
107
+ * preallocate filter driver
108
+ *
109
+ * The driver performs the preallocate operation: it is injected above
110
+ * some node, and before each write past EOF it issues an additional preallocating
111
+ * write-zeroes request.
112
+ *
113
+ * Copyright (c) 2020 Virtuozzo International GmbH.
114
+ *
115
+ * Author:
116
+ * Sementsov-Ogievskiy Vladimir <vsementsov@virtuozzo.com>
117
+ *
118
+ * This program is free software; you can redistribute it and/or modify
119
+ * it under the terms of the GNU General Public License as published by
120
+ * the Free Software Foundation; either version 2 of the License, or
121
+ * (at your option) any later version.
122
+ *
123
+ * This program is distributed in the hope that it will be useful,
124
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
125
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
126
+ * GNU General Public License for more details.
127
+ *
128
+ * You should have received a copy of the GNU General Public License
129
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
130
+ */
131
+
132
+#include "qemu/osdep.h"
133
+
134
+#include "qapi/error.h"
135
+#include "qemu/module.h"
136
+#include "qemu/option.h"
137
+#include "qemu/units.h"
138
+#include "block/block_int.h"
139
+
140
+
141
+typedef struct PreallocateOpts {
142
+ int64_t prealloc_size;
143
+ int64_t prealloc_align;
144
+} PreallocateOpts;
145
+
146
+typedef struct BDRVPreallocateState {
147
+ PreallocateOpts opts;
148
+
149
+ /*
150
+ * Track real data end, to crop preallocation on close. If < 0 the status is
151
+ * unknown.
152
+ *
153
+ * @data_end is a maximum of file size on open (or when we get write/resize
154
+ * permissions) and all write request ends after it. So it's safe to
155
+ * truncate to data_end if it is valid.
156
+ */
157
+ int64_t data_end;
158
+
159
+ /*
160
+ * Start of trailing preallocated area which reads as zero. May be smaller
161
+ * than data_end, if user does over-EOF write zero operation. If < 0 the
162
+ * status is unknown.
163
+ *
164
+ * If both @zero_start and @file_end are valid, the region
165
+ * [@zero_start, @file_end) is known to be preallocated zeroes. If @file_end
166
+ * is not valid, @zero_start doesn't make much sense.
167
+ */
168
+ int64_t zero_start;
169
+
170
+ /*
171
+ * Real end of file. Actually the cache for bdrv_getlength(bs->file->bs),
172
+ * to avoid extra lseek() calls on each write operation. If < 0 the status
173
+ * is unknown.
174
+ */
175
+ int64_t file_end;
176
+
177
+ /*
178
+ * All three states @data_end, @zero_start and @file_end are guaranteed to
179
+ * be invalid (< 0) when we don't have both exclusive BLK_PERM_RESIZE and
180
+ * BLK_PERM_WRITE permissions on file child.
181
+ */
182
+} BDRVPreallocateState;
183
+
184
+#define PREALLOCATE_OPT_PREALLOC_ALIGN "prealloc-align"
185
+#define PREALLOCATE_OPT_PREALLOC_SIZE "prealloc-size"
186
+static QemuOptsList runtime_opts = {
187
+ .name = "preallocate",
188
+ .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
189
+ .desc = {
190
+ {
191
+ .name = PREALLOCATE_OPT_PREALLOC_ALIGN,
192
+ .type = QEMU_OPT_SIZE,
193
+ .help = "on preallocation, align file length to this number, "
194
+ "default 1M",
195
+ },
196
+ {
197
+ .name = PREALLOCATE_OPT_PREALLOC_SIZE,
198
+ .type = QEMU_OPT_SIZE,
199
+ .help = "how much to preallocate, default 128M",
200
+ },
201
+ { /* end of list */ }
202
+ },
203
+};
204
+
205
+static bool preallocate_absorb_opts(PreallocateOpts *dest, QDict *options,
206
+ BlockDriverState *child_bs, Error **errp)
207
+{
208
+ QemuOpts *opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
209
+
210
+ if (!qemu_opts_absorb_qdict(opts, options, errp)) {
211
+ return false;
212
+ }
213
+
214
+ dest->prealloc_align =
215
+ qemu_opt_get_size(opts, PREALLOCATE_OPT_PREALLOC_ALIGN, 1 * MiB);
216
+ dest->prealloc_size =
217
+ qemu_opt_get_size(opts, PREALLOCATE_OPT_PREALLOC_SIZE, 128 * MiB);
218
+
219
+ qemu_opts_del(opts);
220
+
221
+ if (!QEMU_IS_ALIGNED(dest->prealloc_align, BDRV_SECTOR_SIZE)) {
222
+ error_setg(errp, "prealloc-align parameter of preallocate filter "
223
+ "is not aligned to %llu", BDRV_SECTOR_SIZE);
224
+ return false;
225
+ }
226
+
227
+ if (!QEMU_IS_ALIGNED(dest->prealloc_align,
228
+ child_bs->bl.request_alignment)) {
229
+ error_setg(errp, "prealloc-align parameter of preallocate filter "
230
+ "is not aligned to underlying node request alignment "
231
+ "(%" PRIi32 ")", child_bs->bl.request_alignment);
232
+ return false;
233
+ }
234
+
235
+ return true;
236
+}
237
+
238
+static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
239
+ Error **errp)
240
+{
241
+ BDRVPreallocateState *s = bs->opaque;
242
+
243
+ /*
244
+ * s->data_end and friends should be initialized on permission update.
245
+ * For this to work, mark them invalid.
246
+ */
247
+ s->file_end = s->zero_start = s->data_end = -EINVAL;
248
+
249
+ bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
250
+ BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
251
+ false, errp);
252
+ if (!bs->file) {
253
+ return -EINVAL;
254
+ }
255
+
256
+ if (!preallocate_absorb_opts(&s->opts, options, bs->file->bs, errp)) {
257
+ return -EINVAL;
258
+ }
259
+
260
+ bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
261
+ (BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
262
+
263
+ bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
264
+ ((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
265
+ bs->file->bs->supported_zero_flags);
266
+
267
+ return 0;
268
+}
269
+
270
+static void preallocate_close(BlockDriverState *bs)
271
+{
272
+ int ret;
273
+ BDRVPreallocateState *s = bs->opaque;
274
+
275
+ if (s->data_end < 0) {
276
+ return;
277
+ }
278
+
279
+ if (s->file_end < 0) {
280
+ s->file_end = bdrv_getlength(bs->file->bs);
281
+ if (s->file_end < 0) {
282
+ return;
283
+ }
284
+ }
285
+
286
+ if (s->data_end < s->file_end) {
287
+ ret = bdrv_truncate(bs->file, s->data_end, true, PREALLOC_MODE_OFF, 0,
288
+ NULL);
289
+ s->file_end = ret < 0 ? ret : s->data_end;
290
+ }
291
+}
292
+
293
+
294
+/*
295
+ * Handle reopen.
296
+ *
297
+ * We must implement reopen handlers, otherwise reopen just doesn't work. Handle
298
+ * new options and don't care about preallocation state, as it is handled in
299
+ * set/check permission handlers.
300
+ */
301
+
302
+static int preallocate_reopen_prepare(BDRVReopenState *reopen_state,
303
+ BlockReopenQueue *queue, Error **errp)
304
+{
305
+ PreallocateOpts *opts = g_new0(PreallocateOpts, 1);
306
+
307
+ if (!preallocate_absorb_opts(opts, reopen_state->options,
308
+ reopen_state->bs->file->bs, errp)) {
309
+ g_free(opts);
310
+ return -EINVAL;
311
+ }
312
+
313
+ reopen_state->opaque = opts;
314
+
315
+ return 0;
316
+}
317
+
318
+static void preallocate_reopen_commit(BDRVReopenState *state)
319
+{
320
+ BDRVPreallocateState *s = state->bs->opaque;
321
+
322
+ s->opts = *(PreallocateOpts *)state->opaque;
323
+
324
+ g_free(state->opaque);
325
+ state->opaque = NULL;
326
+}
327
+
328
+static void preallocate_reopen_abort(BDRVReopenState *state)
329
+{
330
+ g_free(state->opaque);
331
+ state->opaque = NULL;
332
+}
333
+
334
+static coroutine_fn int preallocate_co_preadv_part(
335
+ BlockDriverState *bs, uint64_t offset, uint64_t bytes,
336
+ QEMUIOVector *qiov, size_t qiov_offset, int flags)
337
+{
338
+ return bdrv_co_preadv_part(bs->file, offset, bytes, qiov, qiov_offset,
339
+ flags);
340
+}
341
+
342
+static int coroutine_fn preallocate_co_pdiscard(BlockDriverState *bs,
343
+ int64_t offset, int bytes)
344
+{
345
+ return bdrv_co_pdiscard(bs->file, offset, bytes);
346
+}
347
+
348
+static bool can_write_resize(uint64_t perm)
349
+{
350
+ return (perm & BLK_PERM_WRITE) && (perm & BLK_PERM_RESIZE);
351
+}
352
+
353
+static bool has_prealloc_perms(BlockDriverState *bs)
354
+{
355
+ BDRVPreallocateState *s = bs->opaque;
356
+
357
+ if (can_write_resize(bs->file->perm)) {
358
+ assert(!(bs->file->shared_perm & BLK_PERM_WRITE));
359
+ assert(!(bs->file->shared_perm & BLK_PERM_RESIZE));
360
+ return true;
361
+ }
362
+
363
+ assert(s->data_end < 0);
364
+ assert(s->zero_start < 0);
365
+ assert(s->file_end < 0);
366
+ return false;
367
+}
368
+
369
+/*
370
+ * Call on each write. Returns true if @want_merge_zero is true and the region
371
+ * [offset, offset + bytes) is zeroed (as a result of this call or earlier
372
+ * preallocation).
373
+ *
374
+ * want_merge_zero is used to merge write-zero request with preallocation in
375
+ * one bdrv_co_pwrite_zeroes() call.
376
+ */
377
+static bool coroutine_fn handle_write(BlockDriverState *bs, int64_t offset,
378
+ int64_t bytes, bool want_merge_zero)
379
+{
380
+ BDRVPreallocateState *s = bs->opaque;
381
+ int64_t end = offset + bytes;
382
+ int64_t prealloc_start, prealloc_end;
383
+ int ret;
384
+
385
+ if (!has_prealloc_perms(bs)) {
386
+ /* We neither have the state nor should we try to recover it */
387
+ return false;
388
+ }
389
+
390
+ if (s->data_end < 0) {
391
+ s->data_end = bdrv_getlength(bs->file->bs);
392
+ if (s->data_end < 0) {
53
+ return false;
393
+ return false;
54
+ }
394
+ }
55
+
395
+
56
if (bkt->max && !bkt->avg) {
396
+ if (s->file_end < 0) {
57
error_setg(errp, "bps_max/iops_max require corresponding"
397
+ s->file_end = s->data_end;
58
" bps/iops values");
398
+ }
399
+ }
400
+
401
+ if (end <= s->data_end) {
402
+ return false;
403
+ }
404
+
405
+ /* We have valid s->data_end, and request writes beyond it. */
406
+
407
+ s->data_end = end;
408
+ if (s->zero_start < 0 || !want_merge_zero) {
409
+ s->zero_start = end;
410
+ }
411
+
412
+ if (s->file_end < 0) {
413
+ s->file_end = bdrv_getlength(bs->file->bs);
414
+ if (s->file_end < 0) {
415
+ return false;
416
+ }
417
+ }
418
+
419
+ /* Now s->data_end, s->zero_start and s->file_end are valid. */
420
+
421
+ if (end <= s->file_end) {
422
+ /* No preallocation needed. */
423
+ return want_merge_zero && offset >= s->zero_start;
424
+ }
425
+
426
+ /* Now we want new preallocation, as request writes beyond s->file_end. */
427
+
428
+ prealloc_start = want_merge_zero ? MIN(offset, s->file_end) : s->file_end;
429
+ prealloc_end = QEMU_ALIGN_UP(end + s->opts.prealloc_size,
430
+ s->opts.prealloc_align);
431
+
432
+ ret = bdrv_co_pwrite_zeroes(
433
+ bs->file, prealloc_start, prealloc_end - prealloc_start,
434
+ BDRV_REQ_NO_FALLBACK | BDRV_REQ_SERIALISING | BDRV_REQ_NO_WAIT);
435
+ if (ret < 0) {
436
+ s->file_end = ret;
437
+ return false;
438
+ }
439
+
440
+ s->file_end = prealloc_end;
441
+ return want_merge_zero;
442
+}
443
+
444
+static int coroutine_fn preallocate_co_pwrite_zeroes(BlockDriverState *bs,
445
+ int64_t offset, int bytes, BdrvRequestFlags flags)
446
+{
447
+ bool want_merge_zero =
448
+ !(flags & ~(BDRV_REQ_ZERO_WRITE | BDRV_REQ_NO_FALLBACK));
449
+ if (handle_write(bs, offset, bytes, want_merge_zero)) {
450
+ return 0;
451
+ }
452
+
453
+ return bdrv_co_pwrite_zeroes(bs->file, offset, bytes, flags);
454
+}
455
+
456
+static coroutine_fn int preallocate_co_pwritev_part(BlockDriverState *bs,
457
+ uint64_t offset,
458
+ uint64_t bytes,
459
+ QEMUIOVector *qiov,
460
+ size_t qiov_offset,
461
+ int flags)
462
+{
463
+ handle_write(bs, offset, bytes, false);
464
+
465
+ return bdrv_co_pwritev_part(bs->file, offset, bytes, qiov, qiov_offset,
466
+ flags);
467
+}
468
+
469
+static int coroutine_fn
470
+preallocate_co_truncate(BlockDriverState *bs, int64_t offset,
471
+ bool exact, PreallocMode prealloc,
472
+ BdrvRequestFlags flags, Error **errp)
473
+{
474
+ ERRP_GUARD();
475
+ BDRVPreallocateState *s = bs->opaque;
476
+ int ret;
477
+
478
+ if (s->data_end >= 0 && offset > s->data_end) {
479
+ if (s->file_end < 0) {
480
+ s->file_end = bdrv_getlength(bs->file->bs);
481
+ if (s->file_end < 0) {
482
+ error_setg(errp, "failed to get file length");
483
+ return s->file_end;
484
+ }
485
+ }
486
+
487
+ if (prealloc == PREALLOC_MODE_FALLOC) {
488
+ /*
489
+ * If offset <= s->file_end, the task is already done, just
490
+ * update s->data_end, to move part of "filter preallocation"
491
+ * to "preallocation requested by user".
492
+ * Otherwise just proceed to preallocate missing part.
493
+ */
494
+ if (offset <= s->file_end) {
495
+ s->data_end = offset;
496
+ return 0;
497
+ }
498
+ } else {
499
+ /*
500
+ * We have to drop our preallocation, to
501
+ * - avoid "Cannot use preallocation for shrinking files" in
502
+ * case of offset < file_end
503
+ * - give PREALLOC_MODE_OFF a chance to keep small disk
504
+ * usage
505
+ * - give PREALLOC_MODE_FULL a chance to actually write the
506
+ * whole region as user expects
507
+ */
508
+ if (s->file_end > s->data_end) {
509
+ ret = bdrv_co_truncate(bs->file, s->data_end, true,
510
+ PREALLOC_MODE_OFF, 0, errp);
511
+ if (ret < 0) {
512
+ s->file_end = ret;
513
+ error_prepend(errp, "preallocate-filter: failed to drop "
514
+ "write-zero preallocation: ");
515
+ return ret;
516
+ }
517
+ s->file_end = s->data_end;
518
+ }
519
+ }
520
+
521
+ s->data_end = offset;
522
+ }
523
+
524
+ ret = bdrv_co_truncate(bs->file, offset, exact, prealloc, flags, errp);
525
+ if (ret < 0) {
526
+ s->file_end = s->zero_start = s->data_end = ret;
527
+ return ret;
528
+ }
529
+
530
+ if (has_prealloc_perms(bs)) {
531
+ s->file_end = s->zero_start = s->data_end = offset;
532
+ }
533
+ return 0;
534
+}
535
+
536
+static int coroutine_fn preallocate_co_flush(BlockDriverState *bs)
537
+{
538
+ return bdrv_co_flush(bs->file->bs);
539
+}
540
+
541
+static int64_t preallocate_getlength(BlockDriverState *bs)
542
+{
543
+ int64_t ret;
544
+ BDRVPreallocateState *s = bs->opaque;
545
+
546
+ if (s->data_end >= 0) {
547
+ return s->data_end;
548
+ }
549
+
550
+ ret = bdrv_getlength(bs->file->bs);
551
+
552
+ if (has_prealloc_perms(bs)) {
553
+ s->file_end = s->zero_start = s->data_end = ret;
554
+ }
555
+
556
+ return ret;
557
+}
558
+
559
+static int preallocate_check_perm(BlockDriverState *bs,
560
+ uint64_t perm, uint64_t shared, Error **errp)
561
+{
562
+ BDRVPreallocateState *s = bs->opaque;
563
+
564
+ if (s->data_end >= 0 && !can_write_resize(perm)) {
565
+ /*
566
+ * Lose permissions.
567
+ * We should truncate in check_perm, as in set_perm bs->file->perm will
568
+ * be already changed, and we should not violate it.
569
+ */
570
+ if (s->file_end < 0) {
571
+ s->file_end = bdrv_getlength(bs->file->bs);
572
+ if (s->file_end < 0) {
573
+ error_setg(errp, "Failed to get file length");
574
+ return s->file_end;
575
+ }
576
+ }
577
+
578
+ if (s->data_end < s->file_end) {
579
+ int ret = bdrv_truncate(bs->file, s->data_end, true,
580
+ PREALLOC_MODE_OFF, 0, NULL);
581
+ if (ret < 0) {
582
+ error_setg(errp, "Failed to drop preallocation");
583
+ s->file_end = ret;
584
+ return ret;
585
+ }
586
+ s->file_end = s->data_end;
587
+ }
588
+ }
589
+
590
+ return 0;
591
+}
592
+
593
+static void preallocate_set_perm(BlockDriverState *bs,
594
+ uint64_t perm, uint64_t shared)
595
+{
596
+ BDRVPreallocateState *s = bs->opaque;
597
+
598
+ if (can_write_resize(perm)) {
599
+ if (s->data_end < 0) {
600
+ s->data_end = s->file_end = s->zero_start =
601
+ bdrv_getlength(bs->file->bs);
602
+ }
603
+ } else {
604
+ /*
605
+ * We drop our permissions, as well as allow shared
606
+ * permissions (see preallocate_child_perm), so anyone will be able to
607
+ * change the child, so mark all states invalid. We'll regain control if
608
+ * we get good permissions back.
609
+ */
610
+ s->data_end = s->file_end = s->zero_start = -EINVAL;
611
+ }
612
+}
613
+
614
+static void preallocate_child_perm(BlockDriverState *bs, BdrvChild *c,
615
+ BdrvChildRole role, BlockReopenQueue *reopen_queue,
616
+ uint64_t perm, uint64_t shared, uint64_t *nperm, uint64_t *nshared)
617
+{
618
+ bdrv_default_perms(bs, c, role, reopen_queue, perm, shared, nperm, nshared);
619
+
620
+ if (can_write_resize(perm)) {
621
+ /* This should come by default, but let's enforce: */
622
+ *nperm |= BLK_PERM_WRITE | BLK_PERM_RESIZE;
623
+
624
+ /*
625
+ * Don't share, to keep our states s->file_end, s->data_end and
626
+ * s->zero_start valid.
627
+ */
628
+ *nshared &= ~(BLK_PERM_WRITE | BLK_PERM_RESIZE);
629
+ }
630
+}
631
+
632
+BlockDriver bdrv_preallocate_filter = {
633
+ .format_name = "preallocate",
634
+ .instance_size = sizeof(BDRVPreallocateState),
635
+
636
+ .bdrv_getlength = preallocate_getlength,
637
+ .bdrv_open = preallocate_open,
638
+ .bdrv_close = preallocate_close,
639
+
640
+ .bdrv_reopen_prepare = preallocate_reopen_prepare,
641
+ .bdrv_reopen_commit = preallocate_reopen_commit,
642
+ .bdrv_reopen_abort = preallocate_reopen_abort,
643
+
644
+ .bdrv_co_preadv_part = preallocate_co_preadv_part,
645
+ .bdrv_co_pwritev_part = preallocate_co_pwritev_part,
646
+ .bdrv_co_pwrite_zeroes = preallocate_co_pwrite_zeroes,
647
+ .bdrv_co_pdiscard = preallocate_co_pdiscard,
648
+ .bdrv_co_flush = preallocate_co_flush,
649
+ .bdrv_co_truncate = preallocate_co_truncate,
650
+
651
+ .bdrv_check_perm = preallocate_check_perm,
652
+ .bdrv_set_perm = preallocate_set_perm,
653
+ .bdrv_child_perm = preallocate_child_perm,
654
+
655
+ .has_variable_length = true,
656
+ .is_filter = true,
657
+};
658
+
659
+static void bdrv_preallocate_init(void)
660
+{
661
+ bdrv_register(&bdrv_preallocate_filter);
662
+}
663
+
664
+block_init(bdrv_preallocate_init);
665
diff --git a/block/meson.build b/block/meson.build
666
index XXXXXXX..XXXXXXX 100644
667
--- a/block/meson.build
668
+++ b/block/meson.build
669
@@ -XXX,XX +XXX,XX @@ block_ss.add(files(
670
'block-copy.c',
671
'commit.c',
672
'copy-on-read.c',
673
+ 'preallocate.c',
674
'create.c',
675
'crypto.c',
676
'dirty-bitmap.c',
59
--
677
--
60
2.13.5
678
2.29.2
61
679
62
680
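For illustration only: a minimal sketch of how the resulting node chain could be set up from a Python iotest via blockdev-add, following the BlockdevOptionsPreallocate schema above. The node names, image path and option values are made up for the example; it is not part of the series.

    import iotests

    disk = '/tmp/test.qcow2'                      # hypothetical, pre-created image
    iotests.qemu_img_create('-f', 'qcow2', disk, '100M')

    vm = iotests.VM()
    vm.launch()
    # qcow2 (format) -> preallocate (filter) -> file (protocol)
    result = vm.qmp('blockdev-add', **{
        'node-name': 'disk',
        'driver': 'qcow2',
        'file': {
            'node-name': 'filter',
            'driver': 'preallocate',
            'prealloc-align': 1024 * 1024,        # optional, default 1M
            'prealloc-size': 128 * 1024 * 1024,   # optional, default 128M
            'file': {
                'node-name': 'file',
                'driver': 'file',
                'filename': disk,
            },
        },
    })
    vm.shutdown()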
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
This will be used in further test.
4
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Reviewed-by: Max Reitz <mreitz@redhat.com>
7
Message-Id: <20201021145859.11201-10-vsementsov@virtuozzo.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
---
10
qemu-io-cmds.c | 46 ++++++++++++++++++++++++++++++++--------------
11
1 file changed, 32 insertions(+), 14 deletions(-)
12
13
diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/qemu-io-cmds.c
16
+++ b/qemu-io-cmds.c
17
@@ -XXX,XX +XXX,XX @@ static const cmdinfo_t flush_cmd = {
18
.oneline = "flush all in-core file state to disk",
19
};
20
21
+static int truncate_f(BlockBackend *blk, int argc, char **argv);
22
+static const cmdinfo_t truncate_cmd = {
23
+ .name = "truncate",
24
+ .altname = "t",
25
+ .cfunc = truncate_f,
26
+ .perm = BLK_PERM_WRITE | BLK_PERM_RESIZE,
27
+ .argmin = 1,
28
+ .argmax = 3,
29
+ .args = "[-m prealloc_mode] off",
30
+ .oneline = "truncates the current file at the given offset",
31
+};
32
+
33
static int truncate_f(BlockBackend *blk, int argc, char **argv)
34
{
35
Error *local_err = NULL;
36
int64_t offset;
37
- int ret;
38
+ int c, ret;
39
+ PreallocMode prealloc = PREALLOC_MODE_OFF;
40
41
- offset = cvtnum(argv[1]);
42
+ while ((c = getopt(argc, argv, "m:")) != -1) {
43
+ switch (c) {
44
+ case 'm':
45
+ prealloc = qapi_enum_parse(&PreallocMode_lookup, optarg,
46
+ PREALLOC_MODE__MAX, NULL);
47
+ if (prealloc == PREALLOC_MODE__MAX) {
48
+ error_report("Invalid preallocation mode '%s'", optarg);
49
+ return -EINVAL;
50
+ }
51
+ break;
52
+ default:
53
+ qemuio_command_usage(&truncate_cmd);
54
+ return -EINVAL;
55
+ }
56
+ }
57
+
58
+ offset = cvtnum(argv[optind]);
59
if (offset < 0) {
60
print_cvtnum_err(offset, argv[1]);
61
return offset;
62
@@ -XXX,XX +XXX,XX @@ static int truncate_f(BlockBackend *blk, int argc, char **argv)
63
* exact=true. It is better to err on the "emit more errors" side
64
* than to be overly permissive.
65
*/
66
- ret = blk_truncate(blk, offset, false, PREALLOC_MODE_OFF, 0, &local_err);
67
+ ret = blk_truncate(blk, offset, false, prealloc, 0, &local_err);
68
if (ret < 0) {
69
error_report_err(local_err);
70
return ret;
71
@@ -XXX,XX +XXX,XX @@ static int truncate_f(BlockBackend *blk, int argc, char **argv)
72
return 0;
73
}
74
75
-static const cmdinfo_t truncate_cmd = {
76
- .name = "truncate",
77
- .altname = "t",
78
- .cfunc = truncate_f,
79
- .perm = BLK_PERM_WRITE | BLK_PERM_RESIZE,
80
- .argmin = 1,
81
- .argmax = 1,
82
- .args = "off",
83
- .oneline = "truncates the current file at the given offset",
84
-};
85
-
86
static int length_f(BlockBackend *blk, int argc, char **argv)
87
{
88
int64_t size;
89
--
90
2.29.2
91
92
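For reference, the new mode argument can be driven from a Python iotest roughly like this (the same pattern test 298 later in this series uses); the image path is hypothetical:

    import iotests

    disk = '/tmp/test.qcow2'   # hypothetical image
    iotests.qemu_img_create('-f', 'qcow2', disk, '10M')

    # Grow the image to 50M and ask for full preallocation of the new area,
    # i.e. run "truncate -m full 50M" inside qemu-io.
    ret = iotests.qemu_io_silent('-f', 'qcow2',
                                 '-c', 'truncate -m full 50M',
                                 disk)
    assert ret == 0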
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
4
Reviewed-by: Max Reitz <mreitz@redhat.com>
5
Message-Id: <20201021145859.11201-11-vsementsov@virtuozzo.com>
6
Signed-off-by: Max Reitz <mreitz@redhat.com>
7
---
8
tests/qemu-iotests/iotests.py | 7 ++++++-
9
1 file changed, 6 insertions(+), 1 deletion(-)
10
11
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
12
index XXXXXXX..XXXXXXX 100644
13
--- a/tests/qemu-iotests/iotests.py
14
+++ b/tests/qemu-iotests/iotests.py
15
@@ -XXX,XX +XXX,XX @@ def qemu_io_log(*args):
16
17
def qemu_io_silent(*args):
18
'''Run qemu-io and return the exit code, suppressing stdout'''
19
- args = qemu_io_args + list(args)
20
+ if '-f' in args or '--image-opts' in args:
21
+ default_args = qemu_io_args_no_fmt
22
+ else:
23
+ default_args = qemu_io_args
24
+
25
+ args = default_args + list(args)
26
exitcode = subprocess.call(args, stdout=open('/dev/null', 'w'))
27
if exitcode < 0:
28
sys.stderr.write('qemu-io received signal %i: %s\n' %
29
--
30
2.29.2
31
32
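A rough usage sketch, assuming a pre-created qcow2 image; the option string mirrors the shape used by test 298 below:

    import iotests

    disk = '/tmp/test.qcow2'   # hypothetical image
    # Because the caller selects the driver itself, qemu_io_silent() now
    # falls back to qemu_io_args_no_fmt and does not add its own '-f' option.
    opts = (f'driver=qcow2,file.driver=preallocate,'
            f'file.file.driver=file,file.file.filename={disk}')

    ret = iotests.qemu_io_silent('--image-opts', '-c', 'write 0 1M', opts)
    assert ret == 0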
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Add a parameter to skip a test if some needed additional formats are not
4
supported (for example, filter drivers).
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-12-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
tests/qemu-iotests/iotests.py | 9 ++++++++-
12
1 file changed, 8 insertions(+), 1 deletion(-)
13
14
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
15
index XXXXXXX..XXXXXXX 100644
16
--- a/tests/qemu-iotests/iotests.py
17
+++ b/tests/qemu-iotests/iotests.py
18
@@ -XXX,XX +XXX,XX @@ def _verify_aio_mode(supported_aio_modes: Sequence[str] = ()) -> None:
19
if supported_aio_modes and (aiomode not in supported_aio_modes):
20
notrun('not suitable for this aio mode: %s' % aiomode)
21
22
+def _verify_formats(required_formats: Sequence[str] = ()) -> None:
23
+ usf_list = list(set(required_formats) - set(supported_formats()))
24
+ if usf_list:
25
+ notrun(f'formats {usf_list} are not whitelisted')
26
+
27
def supports_quorum():
28
return 'quorum' in qemu_img_pipe('--help')
29
30
@@ -XXX,XX +XXX,XX @@ def execute_setup_common(supported_fmts: Sequence[str] = (),
31
supported_aio_modes: Sequence[str] = (),
32
unsupported_fmts: Sequence[str] = (),
33
supported_protocols: Sequence[str] = (),
34
- unsupported_protocols: Sequence[str] = ()) -> bool:
35
+ unsupported_protocols: Sequence[str] = (),
36
+ required_fmts: Sequence[str] = ()) -> bool:
37
"""
38
Perform necessary setup for either script-style or unittest-style tests.
39
40
@@ -XXX,XX +XXX,XX @@ def execute_setup_common(supported_fmts: Sequence[str] = (),
41
_verify_platform(supported=supported_platforms)
42
_verify_cache_mode(supported_cache_modes)
43
_verify_aio_mode(supported_aio_modes)
44
+ _verify_formats(required_fmts)
45
46
return debug
47
48
--
49
2.29.2
50
51
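A minimal sketch of how a test can use the new parameter (the test class here is a made-up placeholder); if a required format/driver is not whitelisted, the test reports "notrun" and is skipped:

    import iotests

    class TestWithFilter(iotests.QMPTestCase):   # hypothetical test case
        def test_nothing(self):
            pass

    if __name__ == '__main__':
        # Run only for qcow2 and only when the preallocate filter driver
        # is available; otherwise skip.
        iotests.main(supported_fmts=['qcow2'], required_fmts=['preallocate'])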
1
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
4
Message-Id: <20201021145859.11201-13-vsementsov@virtuozzo.com>
5
Reviewed-by: Max Reitz <mreitz@redhat.com>
6
Signed-off-by: Max Reitz <mreitz@redhat.com>
7
---
8
tests/qemu-iotests/298 | 186 +++++++++++++++++++++++++++++++++++++
9
tests/qemu-iotests/298.out | 5 +
10
tests/qemu-iotests/group | 1 +
11
3 files changed, 192 insertions(+)
12
create mode 100644 tests/qemu-iotests/298
13
create mode 100644 tests/qemu-iotests/298.out
14
15
diff --git a/tests/qemu-iotests/298 b/tests/qemu-iotests/298
16
new file mode 100644
17
index XXXXXXX..XXXXXXX
18
--- /dev/null
19
+++ b/tests/qemu-iotests/298
20
@@ -XXX,XX +XXX,XX @@
21
+#!/usr/bin/env python3
22
+#
23
+# Test for preallocate filter
24
+#
25
+# Copyright (c) 2020 Virtuozzo International GmbH.
26
+#
27
+# This program is free software; you can redistribute it and/or modify
28
+# it under the terms of the GNU General Public License as published by
29
+# the Free Software Foundation; either version 2 of the License, or
30
+# (at your option) any later version.
31
+#
32
+# This program is distributed in the hope that it will be useful,
33
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
34
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
35
+# GNU General Public License for more details.
36
+#
37
+# You should have received a copy of the GNU General Public License
38
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
39
+#
40
+
41
+import os
42
+import iotests
43
+
44
+MiB = 1024 * 1024
45
+disk = os.path.join(iotests.test_dir, 'disk')
46
+overlay = os.path.join(iotests.test_dir, 'overlay')
47
+refdisk = os.path.join(iotests.test_dir, 'refdisk')
48
+drive_opts = f'node-name=disk,driver={iotests.imgfmt},' \
49
+ f'file.node-name=filter,file.driver=preallocate,' \
50
+ f'file.file.node-name=file,file.file.filename={disk}'
51
+
52
+
53
+class TestPreallocateBase(iotests.QMPTestCase):
54
+ def setUp(self):
55
+ iotests.qemu_img_create('-f', iotests.imgfmt, disk, str(10 * MiB))
56
+
57
+ def tearDown(self):
58
+ try:
59
+ self.check_small()
60
+ check = iotests.qemu_img_check(disk)
61
+ self.assertFalse('leaks' in check)
62
+ self.assertFalse('corruptions' in check)
63
+ self.assertEqual(check['check-errors'], 0)
64
+ finally:
65
+ os.remove(disk)
66
+
67
+ def check_big(self):
68
+ self.assertTrue(os.path.getsize(disk) > 100 * MiB)
69
+
70
+ def check_small(self):
71
+ self.assertTrue(os.path.getsize(disk) < 10 * MiB)
72
+
73
+
74
+class TestQemuImg(TestPreallocateBase):
75
+ def test_qemu_img(self):
76
+ p = iotests.QemuIoInteractive('--image-opts', drive_opts)
77
+
78
+ p.cmd('write 0 1M')
79
+ p.cmd('flush')
80
+
81
+ self.check_big()
82
+
83
+ p.close()
84
+
85
+
86
+class TestPreallocateFilter(TestPreallocateBase):
87
+ def setUp(self):
88
+ super().setUp()
89
+ self.vm = iotests.VM().add_drive(path=None, opts=drive_opts)
90
+ self.vm.launch()
91
+
92
+ def tearDown(self):
93
+ self.vm.shutdown()
94
+ super().tearDown()
95
+
96
+ def test_prealloc(self):
97
+ self.vm.hmp_qemu_io('drive0', 'write 0 1M')
98
+ self.check_big()
99
+
100
+ def test_external_snapshot(self):
101
+ self.test_prealloc()
102
+
103
+ result = self.vm.qmp('blockdev-snapshot-sync', node_name='disk',
104
+ snapshot_file=overlay,
105
+ snapshot_node_name='overlay')
106
+ self.assert_qmp(result, 'return', {})
107
+
108
+ # on reopen to r-o base preallocation should be dropped
109
+ self.check_small()
110
+
111
+ self.vm.hmp_qemu_io('drive0', 'write 1M 1M')
112
+
113
+ result = self.vm.qmp('block-commit', device='overlay')
114
+ self.assert_qmp(result, 'return', {})
115
+ self.complete_and_wait()
116
+
117
+ # commit of new megabyte should trigger preallocation
118
+ self.check_big()
119
+
120
+ def test_reopen_opts(self):
121
+ result = self.vm.qmp('x-blockdev-reopen', **{
122
+ 'node-name': 'disk',
123
+ 'driver': iotests.imgfmt,
124
+ 'file': {
125
+ 'node-name': 'filter',
126
+ 'driver': 'preallocate',
127
+ 'prealloc-size': 20 * MiB,
128
+ 'prealloc-align': 5 * MiB,
129
+ 'file': {
130
+ 'node-name': 'file',
131
+ 'driver': 'file',
132
+ 'filename': disk
133
+ }
134
+ }
135
+ })
136
+ self.assert_qmp(result, 'return', {})
137
+
138
+ self.vm.hmp_qemu_io('drive0', 'write 0 1M')
139
+ self.assertTrue(os.path.getsize(disk) == 25 * MiB)
140
+
141
+
142
+class TestTruncate(iotests.QMPTestCase):
143
+ def setUp(self):
144
+ iotests.qemu_img_create('-f', iotests.imgfmt, disk, str(10 * MiB))
145
+ iotests.qemu_img_create('-f', iotests.imgfmt, refdisk, str(10 * MiB))
146
+
147
+ def tearDown(self):
148
+ os.remove(disk)
149
+ os.remove(refdisk)
150
+
151
+ def do_test(self, prealloc_mode, new_size):
152
+ ret = iotests.qemu_io_silent('--image-opts', '-c', 'write 0 10M', '-c',
153
+ f'truncate -m {prealloc_mode} {new_size}',
154
+ drive_opts)
155
+ self.assertEqual(ret, 0)
156
+
157
+ ret = iotests.qemu_io_silent('-f', iotests.imgfmt, '-c', 'write 0 10M',
158
+ '-c',
159
+ f'truncate -m {prealloc_mode} {new_size}',
160
+ refdisk)
161
+ self.assertEqual(ret, 0)
162
+
163
+ stat = os.stat(disk)
164
+ refstat = os.stat(refdisk)
165
+
166
+ # Probably we'll want the preallocate filter to keep alignment to the cluster when
167
+ # shrinking preallocation, so ignore a small difference
168
+ self.assertLess(abs(stat.st_size - refstat.st_size), 64 * 1024)
169
+
170
+ # The preallocate filter may leak some internal clusters (for example, if the
171
+ # guest writes far past EOF, skipping some clusters) - they will remain
172
+ # fallocated; the preallocate filter doesn't care about such leaks, it drops
173
+ # only the trailing preallocation.
174
+ self.assertLess(abs(stat.st_blocks - refstat.st_blocks) * 512,
175
+ 1024 * 1024)
176
+
177
+ def test_real_shrink(self):
178
+ self.do_test('off', '5M')
179
+
180
+ def test_truncate_inside_preallocated_area__falloc(self):
181
+ self.do_test('falloc', '50M')
182
+
183
+ def test_truncate_inside_preallocated_area__metadata(self):
184
+ self.do_test('metadata', '50M')
185
+
186
+ def test_truncate_inside_preallocated_area__full(self):
187
+ self.do_test('full', '50M')
188
+
189
+ def test_truncate_inside_preallocated_area__off(self):
190
+ self.do_test('off', '50M')
191
+
192
+ def test_truncate_over_preallocated_area__falloc(self):
193
+ self.do_test('falloc', '150M')
194
+
195
+ def test_truncate_over_preallocated_area__metadata(self):
196
+ self.do_test('metadata', '150M')
197
+
198
+ def test_truncate_over_preallocated_area__full(self):
199
+ self.do_test('full', '150M')
200
+
201
+ def test_truncate_over_preallocated_area__off(self):
202
+ self.do_test('off', '150M')
203
+
204
+
205
+if __name__ == '__main__':
206
+ iotests.main(supported_fmts=['qcow2'], required_fmts=['preallocate'])
207
diff --git a/tests/qemu-iotests/298.out b/tests/qemu-iotests/298.out
208
new file mode 100644
209
index XXXXXXX..XXXXXXX
210
--- /dev/null
211
+++ b/tests/qemu-iotests/298.out
212
@@ -XXX,XX +XXX,XX @@
213
+.............
214
+----------------------------------------------------------------------
215
+Ran 13 tests
216
+
217
+OK
218
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
219
index XXXXXXX..XXXXXXX 100644
220
--- a/tests/qemu-iotests/group
221
+++ b/tests/qemu-iotests/group
222
@@ -XXX,XX +XXX,XX @@
223
295 rw
224
296 rw
225
297 meta
226
+298
227
299 auto quick
228
300 migration
229
301 backing quick
230
--
231
2.29.2
232
233
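The interaction pattern the test is built around can also be reproduced standalone; a hedged sketch with a made-up image path:

    import os
    import iotests

    MiB = 1024 * 1024
    disk = '/tmp/disk.qcow2'   # hypothetical image path
    iotests.qemu_img_create('-f', 'qcow2', disk, str(10 * MiB))

    opts = (f'node-name=disk,driver=qcow2,'
            f'file.node-name=filter,file.driver=preallocate,'
            f'file.file.node-name=file,file.file.filename={disk}')

    p = iotests.QemuIoInteractive('--image-opts', opts)
    p.cmd('write 0 1M')   # the write goes through the filter, so the file grows in big chunks
    p.cmd('flush')
    print(os.path.getsize(disk) > 100 * MiB)   # preallocated far beyond 1M
    p.close()             # closing drops the trailing preallocation again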
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
4
Message-Id: <20201021145859.11201-14-vsementsov@virtuozzo.com>
5
Reviewed-by: Max Reitz <mreitz@redhat.com>
6
Signed-off-by: Max Reitz <mreitz@redhat.com>
7
---
8
scripts/simplebench/simplebench.py | 12 ++++++------
9
1 file changed, 6 insertions(+), 6 deletions(-)
10
11
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
12
index XXXXXXX..XXXXXXX 100644
13
--- a/scripts/simplebench/simplebench.py
14
+++ b/scripts/simplebench/simplebench.py
15
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
16
17
result = {'runs': runs}
18
19
- successed = [r for r in runs if ('seconds' in r)]
20
- if successed:
21
- avg = sum(r['seconds'] for r in successed) / len(successed)
22
+ succeeded = [r for r in runs if ('seconds' in r)]
23
+ if succeeded:
24
+ avg = sum(r['seconds'] for r in succeeded) / len(succeeded)
25
result['average'] = avg
26
- result['delta'] = max(abs(r['seconds'] - avg) for r in successed)
27
+ result['delta'] = max(abs(r['seconds'] - avg) for r in succeeded)
28
29
- if len(successed) < count:
30
- result['n-failed'] = count - len(successed)
31
+ if len(succeeded) < count:
32
+ result['n-failed'] = count - len(succeeded)
33
34
return result
35
36
--
37
2.29.2
38
39
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Support benchmarks returning not seconds but iops. We'll use it for
4
an upcoming new test.
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-15-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
scripts/simplebench/simplebench.py | 38 ++++++++++++++++++++++--------
12
1 file changed, 28 insertions(+), 10 deletions(-)
13
14
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
15
index XXXXXXX..XXXXXXX 100644
16
--- a/scripts/simplebench/simplebench.py
17
+++ b/scripts/simplebench/simplebench.py
18
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
19
20
test_func -- benchmarking function with prototype
21
test_func(env, case), which takes test_env and test_case
22
- arguments and returns {'seconds': int} (which is benchmark
23
- result) on success and {'error': str} on error. Returned
24
- dict may contain any other additional fields.
25
+ arguments and on success returns dict with 'seconds' or
26
+ 'iops' (or both) fields, specifying the benchmark result.
27
+ If both 'iops' and 'seconds' provided, the 'iops' is
28
+ considered the main, and 'seconds' is just an additional
29
+ info. On failure test_func should return {'error': str}.
30
+ Returned dict may contain any other additional fields.
31
test_env -- test environment - opaque first argument for test_func
32
test_case -- test case - opaque second argument for test_func
33
count -- how many times to call test_func, to calculate average
34
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
35
36
Returns dict with the following fields:
37
'runs': list of test_func results
38
- 'average': average seconds per run (exists only if at least one run
39
- succeeded)
40
+ 'dimension': dimension of results, may be 'seconds' or 'iops'
41
+ 'average': average value (iops or seconds) per run (exists only if at
42
+ least one run succeeded)
43
'delta': maximum delta between test_func result and the average
44
(exists only if at least one run succeeded)
45
'n-failed': number of failed runs (exists only if at least one run
46
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
47
48
result = {'runs': runs}
49
50
- succeeded = [r for r in runs if ('seconds' in r)]
51
+ succeeded = [r for r in runs if ('seconds' in r or 'iops' in r)]
52
if succeeded:
53
- avg = sum(r['seconds'] for r in succeeded) / len(succeeded)
54
+ if 'iops' in succeeded[0]:
55
+ assert all('iops' in r for r in succeeded)
56
+ dim = 'iops'
57
+ else:
58
+ assert all('seconds' in r for r in succeeded)
59
+ assert all('iops' not in r for r in succeeded)
60
+ dim = 'seconds'
61
+ avg = sum(r[dim] for r in succeeded) / len(succeeded)
62
+ result['dimension'] = dim
63
result['average'] = avg
64
- result['delta'] = max(abs(r['seconds'] - avg) for r in succeeded)
65
+ result['delta'] = max(abs(r[dim] - avg) for r in succeeded)
66
67
if len(succeeded) < count:
68
result['n-failed'] = count - len(succeeded)
69
@@ -XXX,XX +XXX,XX @@ def ascii(results):
70
"""Return ASCII representation of bench() returned dict."""
71
from tabulate import tabulate
72
73
+ dim = None
74
tab = [[""] + [c['id'] for c in results['envs']]]
75
for case in results['cases']:
76
row = [case['id']]
77
for env in results['envs']:
78
- row.append(ascii_one(results['tab'][case['id']][env['id']]))
79
+ res = results['tab'][case['id']][env['id']]
80
+ if dim is None:
81
+ dim = res['dimension']
82
+ else:
83
+ assert dim == res['dimension']
84
+ row.append(ascii_one(res))
85
tab.append(row)
86
87
- return tabulate(tab)
88
+ return f'All results are in {dim}\n\n' + tabulate(tab)
89
--
90
2.29.2
91
92
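A toy example of a benchmark callback that reports iops under the updated bench_one() contract; the environments, cases and numbers are invented, and a real test_func would run an actual workload:

    import simplebench

    def bench_func(env, case):
        # Pretend we measured something; 'iops' becomes the main dimension,
        # 'seconds' is kept only as additional information.
        seconds = env['expected_seconds']
        return {'iops': case['requests'] / seconds, 'seconds': seconds}

    test_envs = [{'id': 'fast-backend', 'expected_seconds': 2},
                 {'id': 'slow-backend', 'expected_seconds': 10}]
    test_cases = [{'id': '10k-requests', 'requests': 10000}]

    result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
    print(simplebench.ascii(result))   # renamed to results_to_text() later in this series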
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Standard deviation is more usual to see after +- than the current maximum
4
of deviations.
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-16-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
scripts/simplebench/simplebench.py | 11 ++++++-----
12
1 file changed, 6 insertions(+), 5 deletions(-)
13
14
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
15
index XXXXXXX..XXXXXXX 100644
16
--- a/scripts/simplebench/simplebench.py
17
+++ b/scripts/simplebench/simplebench.py
18
@@ -XXX,XX +XXX,XX @@
19
# along with this program. If not, see <http://www.gnu.org/licenses/>.
20
#
21
22
+import statistics
23
+
24
25
def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
26
"""Benchmark one test-case
27
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
28
'dimension': dimension of results, may be 'seconds' or 'iops'
29
'average': average value (iops or seconds) per run (exists only if at
30
least one run succeeded)
31
- 'delta': maximum delta between test_func result and the average
32
+ 'stdev': standard deviation of results
33
(exists only if at least one run succeeded)
34
'n-failed': number of failed runs (exists only if at least one run
35
failed)
36
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
37
assert all('seconds' in r for r in succeeded)
38
assert all('iops' not in r for r in succeeded)
39
dim = 'seconds'
40
- avg = sum(r[dim] for r in succeeded) / len(succeeded)
41
result['dimension'] = dim
42
- result['average'] = avg
43
- result['delta'] = max(abs(r[dim] - avg) for r in succeeded)
44
+ result['average'] = statistics.mean(r[dim] for r in succeeded)
45
+ result['stdev'] = statistics.stdev(r[dim] for r in succeeded)
46
47
if len(succeeded) < count:
48
result['n-failed'] = count - len(succeeded)
49
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
50
def ascii_one(result):
51
"""Return ASCII representation of bench_one() returned dict."""
52
if 'average' in result:
53
- s = '{:.2f} +- {:.2f}'.format(result['average'], result['delta'])
54
+ s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
55
if 'n-failed' in result:
56
s += '\n({} failed)'.format(result['n-failed'])
57
return s
58
--
59
2.29.2
60
61
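For clarity, this is roughly what the reported value now means; the run results are invented:

    import statistics

    # Per-run results as bench_one() would collect them:
    values = [4960.0, 5030.0, 5010.0]   # e.g. iops of three successful runs

    average = statistics.mean(values)
    stdev = statistics.stdev(values)    # sample standard deviation, needs >= 2 runs

    # Matches the '{:.2f} +- {:.2f}' formatting used for the table cells
    print('{:.2f} +- {:.2f}'.format(average, stdev))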
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
The next patch will use the UTF-8 plus-minus symbol; let's use a more generic (and
4
more readable) name.
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-17-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
scripts/simplebench/bench-example.py | 2 +-
12
scripts/simplebench/bench_write_req.py | 2 +-
13
scripts/simplebench/simplebench.py | 10 +++++-----
14
3 files changed, 7 insertions(+), 7 deletions(-)
15
16
diff --git a/scripts/simplebench/bench-example.py b/scripts/simplebench/bench-example.py
17
index XXXXXXX..XXXXXXX 100644
18
--- a/scripts/simplebench/bench-example.py
19
+++ b/scripts/simplebench/bench-example.py
20
@@ -XXX,XX +XXX,XX @@ test_envs = [
21
]
22
23
result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
24
-print(simplebench.ascii(result))
25
+print(simplebench.results_to_text(result))
26
diff --git a/scripts/simplebench/bench_write_req.py b/scripts/simplebench/bench_write_req.py
27
index XXXXXXX..XXXXXXX 100755
28
--- a/scripts/simplebench/bench_write_req.py
29
+++ b/scripts/simplebench/bench_write_req.py
30
@@ -XXX,XX +XXX,XX @@ if __name__ == '__main__':
31
32
result = simplebench.bench(bench_func, test_envs, test_cases, count=3,
33
initial_run=False)
34
- print(simplebench.ascii(result))
35
+ print(simplebench.results_to_text(result))
36
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
37
index XXXXXXX..XXXXXXX 100644
38
--- a/scripts/simplebench/simplebench.py
39
+++ b/scripts/simplebench/simplebench.py
40
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
41
return result
42
43
44
-def ascii_one(result):
45
- """Return ASCII representation of bench_one() returned dict."""
46
+def result_to_text(result):
47
+ """Return text representation of bench_one() returned dict."""
48
if 'average' in result:
49
s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
50
if 'n-failed' in result:
51
@@ -XXX,XX +XXX,XX @@ def bench(test_func, test_envs, test_cases, *args, **vargs):
52
return results
53
54
55
-def ascii(results):
56
- """Return ASCII representation of bench() returned dict."""
57
+def results_to_text(results):
58
+ """Return text representation of bench() returned dict."""
59
from tabulate import tabulate
60
61
dim = None
62
@@ -XXX,XX +XXX,XX @@ def ascii(results):
63
dim = res['dimension']
64
else:
65
assert dim == res['dimension']
66
- row.append(ascii_one(res))
67
+ row.append(result_to_text(res))
68
tab.append(row)
69
70
return f'All results are in {dim}\n\n' + tabulate(tab)
71
--
72
2.29.2
73
74
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Let's keep the view part separate: this way it will be easier to improve it in
4
the following commits.
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20201021145859.11201-18-vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
scripts/simplebench/bench-example.py | 3 +-
12
scripts/simplebench/bench_write_req.py | 3 +-
13
scripts/simplebench/results_to_text.py | 48 ++++++++++++++++++++++++++
14
scripts/simplebench/simplebench.py | 31 -----------------
15
4 files changed, 52 insertions(+), 33 deletions(-)
16
create mode 100644 scripts/simplebench/results_to_text.py
17
18
diff --git a/scripts/simplebench/bench-example.py b/scripts/simplebench/bench-example.py
19
index XXXXXXX..XXXXXXX 100644
20
--- a/scripts/simplebench/bench-example.py
21
+++ b/scripts/simplebench/bench-example.py
22
@@ -XXX,XX +XXX,XX @@
23
#
24
25
import simplebench
26
+from results_to_text import results_to_text
27
from bench_block_job import bench_block_copy, drv_file, drv_nbd
28
29
30
@@ -XXX,XX +XXX,XX @@ test_envs = [
31
]
32
33
result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
34
-print(simplebench.results_to_text(result))
35
+print(results_to_text(result))
36
diff --git a/scripts/simplebench/bench_write_req.py b/scripts/simplebench/bench_write_req.py
37
index XXXXXXX..XXXXXXX 100755
38
--- a/scripts/simplebench/bench_write_req.py
39
+++ b/scripts/simplebench/bench_write_req.py
40
@@ -XXX,XX +XXX,XX @@ import sys
41
import os
42
import subprocess
43
import simplebench
44
+from results_to_text import results_to_text
45
46
47
def bench_func(env, case):
48
@@ -XXX,XX +XXX,XX @@ if __name__ == '__main__':
49
50
result = simplebench.bench(bench_func, test_envs, test_cases, count=3,
51
initial_run=False)
52
- print(simplebench.results_to_text(result))
53
+ print(results_to_text(result))
54
diff --git a/scripts/simplebench/results_to_text.py b/scripts/simplebench/results_to_text.py
55
new file mode 100644
56
index XXXXXXX..XXXXXXX
57
--- /dev/null
58
+++ b/scripts/simplebench/results_to_text.py
59
@@ -XXX,XX +XXX,XX @@
60
+# Simple benchmarking framework
61
+#
62
+# Copyright (c) 2019 Virtuozzo International GmbH.
63
+#
64
+# This program is free software; you can redistribute it and/or modify
65
+# it under the terms of the GNU General Public License as published by
66
+# the Free Software Foundation; either version 2 of the License, or
67
+# (at your option) any later version.
68
+#
69
+# This program is distributed in the hope that it will be useful,
70
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
71
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
72
+# GNU General Public License for more details.
73
+#
74
+# You should have received a copy of the GNU General Public License
75
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
76
+#
77
+
78
+
79
+def result_to_text(result):
80
+ """Return text representation of bench_one() returned dict."""
81
+ if 'average' in result:
82
+ s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
83
+ if 'n-failed' in result:
84
+ s += '\n({} failed)'.format(result['n-failed'])
85
+ return s
86
+ else:
87
+ return 'FAILED'
88
+
89
+
90
+def results_to_text(results):
91
+ """Return text representation of bench() returned dict."""
92
+ from tabulate import tabulate
93
+
94
+ dim = None
95
+ tab = [[""] + [c['id'] for c in results['envs']]]
96
+ for case in results['cases']:
97
+ row = [case['id']]
98
+ for env in results['envs']:
99
+ res = results['tab'][case['id']][env['id']]
100
+ if dim is None:
101
+ dim = res['dimension']
102
+ else:
103
+ assert dim == res['dimension']
104
+ row.append(result_to_text(res))
105
+ tab.append(row)
106
+
107
+ return f'All results are in {dim}\n\n' + tabulate(tab)
108
diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
109
index XXXXXXX..XXXXXXX 100644
110
--- a/scripts/simplebench/simplebench.py
111
+++ b/scripts/simplebench/simplebench.py
112
@@ -XXX,XX +XXX,XX @@ def bench_one(test_func, test_env, test_case, count=5, initial_run=True):
113
return result
114
115
116
-def result_to_text(result):
117
- """Return text representation of bench_one() returned dict."""
118
- if 'average' in result:
119
- s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
120
- if 'n-failed' in result:
121
- s += '\n({} failed)'.format(result['n-failed'])
122
- return s
123
- else:
124
- return 'FAILED'
125
-
126
-
127
def bench(test_func, test_envs, test_cases, *args, **vargs):
128
"""Fill benchmark table
129
130
@@ -XXX,XX +XXX,XX @@ def bench(test_func, test_envs, test_cases, *args, **vargs):
131
132
print('Done')
133
return results
134
-
135
-
136
-def results_to_text(results):
137
- """Return text representation of bench() returned dict."""
138
- from tabulate import tabulate
139
-
140
- dim = None
141
- tab = [[""] + [c['id'] for c in results['envs']]]
142
- for case in results['cases']:
143
- row = [case['id']]
144
- for env in results['envs']:
145
- res = results['tab'][case['id']][env['id']]
146
- if dim is None:
147
- dim = res['dimension']
148
- else:
149
- assert dim == res['dimension']
150
- row.append(result_to_text(res))
151
- tab.append(row)
152
-
153
- return f'All results are in {dim}\n\n' + tabulate(tab)
154
--
155
2.29.2
156
157
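As a quick check of the new module (a minimal sketch; the nested dict mirrors the layout that simplebench.bench() fills in, all ids and numbers are made up, and the tabulate package must be installed):

  from results_to_text import results_to_text

  results = {
      'envs':  [{'id': 'master'}, {'id': 'patched'}],
      'cases': [{'id': 'write-1M'}],
      'tab': {
          'write-1M': {
              'master':  {'dimension': 'iops', 'average': 980.0, 'stdev': 12.0},
              'patched': {'dimension': 'iops', 'average': 1150.0, 'stdev': 9.0},
          },
      },
  }
  print(results_to_text(results))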
1
From: Fred Rolland <rollandf@gmail.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
Update doc with the usage of UUID for initiator name.
3
Move to a generic format for floats, and print the error as a percentage.
4
4
5
Related-To: https://bugzilla.redhat.com/1006468
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Signed-off-by: Fred Rolland <frolland@redhat.com>
6
Message-Id: <20201021145859.11201-19-vsementsov@virtuozzo.com>
7
Message-id: 20170823084830.30500-1-frolland@redhat.com
7
Acked-by: Max Reitz <mreitz@redhat.com>
8
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
---
9
---
10
qemu-doc.texi | 5 +++--
10
scripts/simplebench/results_to_text.py | 13 ++++++++++++-
11
1 file changed, 3 insertions(+), 2 deletions(-)
11
1 file changed, 12 insertions(+), 1 deletion(-)
12
12
13
diff --git a/qemu-doc.texi b/qemu-doc.texi
13
diff --git a/scripts/simplebench/results_to_text.py b/scripts/simplebench/results_to_text.py
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/qemu-doc.texi
15
--- a/scripts/simplebench/results_to_text.py
16
+++ b/qemu-doc.texi
16
+++ b/scripts/simplebench/results_to_text.py
17
@@ -XXX,XX +XXX,XX @@ in a configuration file provided via '-readconfig' or directly on the
17
@@ -XXX,XX +XXX,XX @@
18
command line.
18
# along with this program. If not, see <http://www.gnu.org/licenses/>.
19
19
#
20
If the initiator-name is not specified qemu will use a default name
20
21
-of 'iqn.2008-11.org.linux-kvm[:<name>'] where <name> is the name of the
21
+import math
22
+of 'iqn.2008-11.org.linux-kvm[:<uuid>'] where <uuid> is the UUID of the
22
+
23
+virtual machine. If the UUID is not specified qemu will use
23
+
24
+'iqn.2008-11.org.linux-kvm[:<name>'] where <name> is the name of the
24
+def format_value(x, stdev):
25
virtual machine.
25
+ stdev_pr = stdev / x * 100
26
26
+ if stdev_pr < 1.5:
27
-
27
+ # don't care too much
28
@example
28
+ return f'{x:.2g}'
29
Setting a specific initiator name to use when logging in to the target
29
+ else:
30
-iscsi initiator-name=iqn.qemu.test:my-initiator
30
+ return f'{x:.2g} ± {math.ceil(stdev_pr)}%'
31
+
32
33
def result_to_text(result):
34
"""Return text representation of bench_one() returned dict."""
35
if 'average' in result:
36
- s = '{:.2f} +- {:.2f}'.format(result['average'], result['stdev'])
37
+ s = format_value(result['average'], result['stdev'])
38
if 'n-failed' in result:
39
s += '\n({} failed)'.format(result['n-failed'])
40
return s
31
--
41
--
32
2.13.5
42
2.29.2
33
43
34
44
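For reference, what the new format_value() above produces (a minimal sketch; the numbers are arbitrary):

  from results_to_text import format_value

  print(format_value(250.0, 2.0))   # relative error 0.8% (< 1.5%) -> '2.5e+02'
  print(format_value(250.0, 7.5))   # relative error 3%            -> '2.5e+02 ± 3%'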
1
Add the scripts/ directory to sys.path so Python 2.6 will be able to
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
import argparse.
3
2
4
Cc: Daniel P. Berrange <berrange@redhat.com>
3
Performance improvements and degradations are usually discussed as
5
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
4
percentages. Let's make the script calculate them for us.
6
Acked-by: John Snow <jsnow@redhat.com>
5
7
Acked-by: Fam Zheng <famz@redhat.com>
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
Message-id: 20170825155732.15665-4-stefanha@redhat.com
7
Message-Id: <20201021145859.11201-20-vsementsov@virtuozzo.com>
9
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
[mreitz: 'seconds' instead of 'secs']
10
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
11
---
11
tests/migration/guestperf/shell.py | 8 +++++---
12
scripts/simplebench/results_to_text.py | 67 +++++++++++++++++++++++---
12
1 file changed, 5 insertions(+), 3 deletions(-)
13
1 file changed, 60 insertions(+), 7 deletions(-)
13
14
14
diff --git a/tests/migration/guestperf/shell.py b/tests/migration/guestperf/shell.py
15
diff --git a/scripts/simplebench/results_to_text.py b/scripts/simplebench/results_to_text.py
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/tests/migration/guestperf/shell.py
17
--- a/scripts/simplebench/results_to_text.py
17
+++ b/tests/migration/guestperf/shell.py
18
+++ b/scripts/simplebench/results_to_text.py
18
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@
19
#
20
#
20
21
21
22
import math
22
-import argparse
23
+import tabulate
23
-import fnmatch
24
+
24
import os
25
+# We want leading whitespace for difference row cells (see below)
25
import os.path
26
+tabulate.PRESERVE_WHITESPACE = True
26
-import platform
27
27
import sys
28
28
+sys.path.append(os.path.join(os.path.dirname(__file__),
29
def format_value(x, stdev):
29
+ '..', '..', '..', 'scripts'))
30
@@ -XXX,XX +XXX,XX @@ def result_to_text(result):
30
+import argparse
31
return 'FAILED'
31
+import fnmatch
32
32
+import platform
33
33
34
-def results_to_text(results):
34
from guestperf.hardware import Hardware
35
- """Return text representation of bench() returned dict."""
35
from guestperf.engine import Engine
36
- from tabulate import tabulate
37
-
38
+def results_dimension(results):
39
dim = None
40
- tab = [[""] + [c['id'] for c in results['envs']]]
41
for case in results['cases']:
42
- row = [case['id']]
43
for env in results['envs']:
44
res = results['tab'][case['id']][env['id']]
45
if dim is None:
46
dim = res['dimension']
47
else:
48
assert dim == res['dimension']
49
+
50
+ assert dim in ('iops', 'seconds')
51
+
52
+ return dim
53
+
54
+
55
+def results_to_text(results):
56
+ """Return text representation of bench() returned dict."""
57
+ n_columns = len(results['envs'])
58
+ named_columns = n_columns > 2
59
+ dim = results_dimension(results)
60
+ tab = []
61
+
62
+ if named_columns:
63
+ # Environment columns are named A, B, ...
64
+ tab.append([''] + [chr(ord('A') + i) for i in range(n_columns)])
65
+
66
+ tab.append([''] + [c['id'] for c in results['envs']])
67
+
68
+ for case in results['cases']:
69
+ row = [case['id']]
70
+ case_results = results['tab'][case['id']]
71
+ for env in results['envs']:
72
+ res = case_results[env['id']]
73
row.append(result_to_text(res))
74
tab.append(row)
75
76
- return f'All results are in {dim}\n\n' + tabulate(tab)
77
+ # Add row of difference between columns. For each column starting from
78
+ # B we calculate difference with all previous columns.
79
+ row = ['', ''] # case name and first column
80
+ for i in range(1, n_columns):
81
+ cell = ''
82
+ env = results['envs'][i]
83
+ res = case_results[env['id']]
84
+
85
+ if 'average' not in res:
86
+ # Failed result
87
+ row.append(cell)
88
+ continue
89
+
90
+ for j in range(0, i):
91
+ env_j = results['envs'][j]
92
+ res_j = case_results[env_j['id']]
93
+ cell += ' '
94
+
95
+ if 'average' not in res_j:
96
+ # Failed result
97
+ cell += '--'
98
+ continue
99
+
100
+ col_j = tab[0][j + 1] if named_columns else ''
101
+ diff_pr = round((res['average'] - res_j['average']) /
102
+ res_j['average'] * 100)
103
+ cell += f' {col_j}{diff_pr:+}%'
104
+ row.append(cell)
105
+ tab.append(row)
106
+
107
+ return f'All results are in {dim}\n\n' + tabulate.tabulate(tab)
36
--
108
--
37
2.13.5
109
2.29.2
38
110
39
111
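The added difference row compares each column with every earlier one; a sketch of the arithmetic with made-up averages:

  # column A averaged 100 iops, column B averaged 115 iops
  diff_pr = round((115 - 100) / 100 * 100)
  print(f'{diff_pr:+}%')   # -> +15%, shown in B's cell (prefixed with 'A' when
                           #    there are more than two environment columns)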
1
Add the scripts/ directory to sys.path so Python 2.6 will be able to
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
import argparse.
3
2
4
Cc: Fam Zheng <famz@redhat.com>
3
Make results_to_text a tool that dumps results saved in a JSON file.
5
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
4
6
Acked-by: John Snow <jsnow@redhat.com>
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Acked-by: Fam Zheng <famz@redhat.com>
6
Message-Id: <20201021145859.11201-21-vsementsov@virtuozzo.com>
8
Message-id: 20170825155732.15665-3-stefanha@redhat.com
7
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
---
9
---
11
tests/docker/docker.py | 4 +++-
10
scripts/simplebench/results_to_text.py | 14 ++++++++++++++
12
1 file changed, 3 insertions(+), 1 deletion(-)
11
1 file changed, 14 insertions(+)
12
mode change 100644 => 100755 scripts/simplebench/results_to_text.py
13
13
14
diff --git a/tests/docker/docker.py b/tests/docker/docker.py
14
diff --git a/scripts/simplebench/results_to_text.py b/scripts/simplebench/results_to_text.py
15
index XXXXXXX..XXXXXXX 100755
15
old mode 100644
16
--- a/tests/docker/docker.py
16
new mode 100755
17
+++ b/tests/docker/docker.py
17
index XXXXXXX..XXXXXXX
18
--- a/scripts/simplebench/results_to_text.py
19
+++ b/scripts/simplebench/results_to_text.py
18
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@
19
21
+#!/usr/bin/env python3
20
import os
22
+#
21
import sys
23
# Simple benchmarking framework
22
+sys.path.append(os.path.join(os.path.dirname(__file__),
24
#
23
+ '..', '..', 'scripts'))
25
# Copyright (c) 2019 Virtuozzo International GmbH.
24
+import argparse
26
@@ -XXX,XX +XXX,XX @@ def results_to_text(results):
25
import subprocess
27
tab.append(row)
26
import json
28
27
import hashlib
29
return f'All results are in {dim}\n\n' + tabulate.tabulate(tab)
28
import atexit
30
+
29
import uuid
31
+
30
-import argparse
32
+if __name__ == '__main__':
31
import tempfile
33
+ import sys
32
import re
34
+ import json
33
import signal
35
+
36
+ if len(sys.argv) < 2:
37
+ print(f'USAGE: {sys.argv[0]} results.json')
38
+ exit(1)
39
+
40
+ with open(sys.argv[1]) as f:
41
+ print(results_to_text(json.load(f)))
34
--
42
--
35
2.13.5
43
2.29.2
36
44
37
45
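Besides the new command-line entry point, saved results can still be rendered from another script (a sketch; the file name is arbitrary and the file must contain a dict as produced by simplebench.bench()):

  import json
  from results_to_text import results_to_text

  with open('results.json') as f:
      print(results_to_text(json.load(f)))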
1
The minimum Python version supported by QEMU is 2.6. The argparse
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
standard library module was only added in Python 2.7. Many scripts
3
would like to use argparse because it supports command-line
4
sub-commands.
5
2
6
This patch adds argparse. See the top of argparse.py for details.
3
Add a benchmark for the new preallocate filter.
7
4
8
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
5
Example usage:
9
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
6
./bench_prealloc.py ../../build/qemu-img \
10
Acked-by: John Snow <jsnow@redhat.com>
7
ssd-ext4:/path/to/mount/point \
11
Message-id: 20170825155732.15665-2-stefanha@redhat.com
8
ssd-xfs:/path2 hdd-ext4:/path3 hdd-xfs:/path4
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
10
The benchmark shows the performance improvement (or degradation) when using the
11
new preallocate filter with a qcow2 image.
12
13
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
14
Message-Id: <20201021145859.11201-22-vsementsov@virtuozzo.com>
15
Reviewed-by: Max Reitz <mreitz@redhat.com>
16
Signed-off-by: Max Reitz <mreitz@redhat.com>
13
---
17
---
14
COPYING.PYTHON | 270 ++++++
18
scripts/simplebench/bench_prealloc.py | 132 ++++++++++++++++++++++++++
15
scripts/argparse.py | 2406 +++++++++++++++++++++++++++++++++++++++++++++++++++
19
1 file changed, 132 insertions(+)
16
2 files changed, 2676 insertions(+)
20
create mode 100755 scripts/simplebench/bench_prealloc.py
17
create mode 100644 COPYING.PYTHON
18
create mode 100644 scripts/argparse.py
19
21
20
diff --git a/COPYING.PYTHON b/COPYING.PYTHON
22
diff --git a/scripts/simplebench/bench_prealloc.py b/scripts/simplebench/bench_prealloc.py
21
new file mode 100644
23
new file mode 100755
22
index XXXXXXX..XXXXXXX
24
index XXXXXXX..XXXXXXX
23
--- /dev/null
25
--- /dev/null
24
+++ b/COPYING.PYTHON
26
+++ b/scripts/simplebench/bench_prealloc.py
25
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@
26
+A. HISTORY OF THE SOFTWARE
28
+#!/usr/bin/env python3
27
+==========================
29
+#
28
+
30
+# Benchmark preallocate filter
29
+Python was created in the early 1990s by Guido van Rossum at Stichting
31
+#
30
+Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
32
+# Copyright (c) 2020 Virtuozzo International GmbH.
31
+as a successor of a language called ABC. Guido remains Python's
33
+#
32
+principal author, although it includes many contributions from others.
34
+# This program is free software; you can redistribute it and/or modify
33
+
35
+# it under the terms of the GNU General Public License as published by
34
+In 1995, Guido continued his work on Python at the Corporation for
36
+# the Free Software Foundation; either version 2 of the License, or
35
+National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
37
+# (at your option) any later version.
36
+in Reston, Virginia where he released several versions of the
38
+#
37
+software.
39
+# This program is distributed in the hope that it will be useful,
38
+
40
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
39
+In May 2000, Guido and the Python core development team moved to
41
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
40
+BeOpen.com to form the BeOpen PythonLabs team. In October of the same
42
+# GNU General Public License for more details.
41
+year, the PythonLabs team moved to Digital Creations (now Zope
43
+#
42
+Corporation, see http://www.zope.com). In 2001, the Python Software
44
+# You should have received a copy of the GNU General Public License
43
+Foundation (PSF, see http://www.python.org/psf/) was formed, a
45
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
44
+non-profit organization created specifically to own Python-related
46
+#
45
+Intellectual Property. Zope Corporation is a sponsoring member of
46
+the PSF.
47
+
48
+All Python releases are Open Source (see http://www.opensource.org for
49
+the Open Source Definition). Historically, most, but not all, Python
50
+releases have also been GPL-compatible; the table below summarizes
51
+the various releases.
52
+
53
+ Release Derived Year Owner GPL-
54
+ from compatible? (1)
55
+
56
+ 0.9.0 thru 1.2 1991-1995 CWI yes
57
+ 1.3 thru 1.5.2 1.2 1995-1999 CNRI yes
58
+ 1.6 1.5.2 2000 CNRI no
59
+ 2.0 1.6 2000 BeOpen.com no
60
+ 1.6.1 1.6 2001 CNRI yes (2)
61
+ 2.1 2.0+1.6.1 2001 PSF no
62
+ 2.0.1 2.0+1.6.1 2001 PSF yes
63
+ 2.1.1 2.1+2.0.1 2001 PSF yes
64
+ 2.2 2.1.1 2001 PSF yes
65
+ 2.1.2 2.1.1 2002 PSF yes
66
+ 2.1.3 2.1.2 2002 PSF yes
67
+ 2.2.1 2.2 2002 PSF yes
68
+ 2.2.2 2.2.1 2002 PSF yes
69
+ 2.2.3 2.2.2 2003 PSF yes
70
+ 2.3 2.2.2 2002-2003 PSF yes
71
+ 2.3.1 2.3 2002-2003 PSF yes
72
+ 2.3.2 2.3.1 2002-2003 PSF yes
73
+ 2.3.3 2.3.2 2002-2003 PSF yes
74
+ 2.3.4 2.3.3 2004 PSF yes
75
+ 2.3.5 2.3.4 2005 PSF yes
76
+ 2.4 2.3 2004 PSF yes
77
+ 2.4.1 2.4 2005 PSF yes
78
+ 2.4.2 2.4.1 2005 PSF yes
79
+ 2.4.3 2.4.2 2006 PSF yes
80
+ 2.5 2.4 2006 PSF yes
81
+ 2.7 2.6 2010 PSF yes
82
+
83
+Footnotes:
84
+
85
+(1) GPL-compatible doesn't mean that we're distributing Python under
86
+ the GPL. All Python licenses, unlike the GPL, let you distribute
87
+ a modified version without making your changes open source. The
88
+ GPL-compatible licenses make it possible to combine Python with
89
+ other software that is released under the GPL; the others don't.
90
+
91
+(2) According to Richard Stallman, 1.6.1 is not GPL-compatible,
92
+ because its license has a choice of law clause. According to
93
+ CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1
94
+ is "not incompatible" with the GPL.
95
+
96
+Thanks to the many outside volunteers who have worked under Guido's
97
+direction to make these releases possible.
98
+
47
+
99
+
48
+
100
+B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
49
+import sys
101
+===============================================================
50
+import os
51
+import subprocess
52
+import re
53
+import json
102
+
54
+
103
+PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
55
+import simplebench
104
+--------------------------------------------
56
+from results_to_text import results_to_text
105
+
106
+1. This LICENSE AGREEMENT is between the Python Software Foundation
107
+("PSF"), and the Individual or Organization ("Licensee") accessing and
108
+otherwise using this software ("Python") in source or binary form and
109
+its associated documentation.
110
+
111
+2. Subject to the terms and conditions of this License Agreement, PSF
112
+hereby grants Licensee a nonexclusive, royalty-free, world-wide
113
+license to reproduce, analyze, test, perform and/or display publicly,
114
+prepare derivative works, distribute, and otherwise use Python
115
+alone or in any derivative version, provided, however, that PSF's
116
+License Agreement and PSF's notice of copyright, i.e., "Copyright (c)
117
+2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation; All Rights
118
+Reserved" are retained in Python alone or in any derivative version
119
+prepared by Licensee.
120
+
121
+3. In the event Licensee prepares a derivative work that is based on
122
+or incorporates Python or any part thereof, and wants to make
123
+the derivative work available to others as provided herein, then
124
+Licensee hereby agrees to include in any such work a brief summary of
125
+the changes made to Python.
126
+
127
+4. PSF is making Python available to Licensee on an "AS IS"
128
+basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
129
+IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
130
+DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
131
+FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
132
+INFRINGE ANY THIRD PARTY RIGHTS.
133
+
134
+5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
135
+FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
136
+A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
137
+OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
138
+
139
+6. This License Agreement will automatically terminate upon a material
140
+breach of its terms and conditions.
141
+
142
+7. Nothing in this License Agreement shall be deemed to create any
143
+relationship of agency, partnership, or joint venture between PSF and
144
+Licensee. This License Agreement does not grant permission to use PSF
145
+trademarks or trade name in a trademark sense to endorse or promote
146
+products or services of Licensee, or any third party.
147
+
148
+8. By copying, installing or otherwise using Python, Licensee
149
+agrees to be bound by the terms and conditions of this License
150
+Agreement.
151
+
57
+
152
+
58
+
153
+BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
59
+def qemu_img_bench(args):
154
+-------------------------------------------
60
+ p = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
61
+ universal_newlines=True)
155
+
62
+
156
+BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1
63
+ if p.returncode == 0:
157
+
64
+ try:
158
+1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an
65
+ m = re.search(r'Run completed in (\d+.\d+) seconds.', p.stdout)
159
+office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the
66
+ return {'seconds': float(m.group(1))}
160
+Individual or Organization ("Licensee") accessing and otherwise using
67
+ except Exception:
161
+this software in source or binary form and its associated
68
+ return {'error': f'failed to parse qemu-img output: {p.stdout}'}
162
+documentation ("the Software").
69
+ else:
163
+
70
+ return {'error': f'qemu-img failed: {p.returncode}: {p.stdout}'}
164
+2. Subject to the terms and conditions of this BeOpen Python License
165
+Agreement, BeOpen hereby grants Licensee a non-exclusive,
166
+royalty-free, world-wide license to reproduce, analyze, test, perform
167
+and/or display publicly, prepare derivative works, distribute, and
168
+otherwise use the Software alone or in any derivative version,
169
+provided, however, that the BeOpen Python License is retained in the
170
+Software, alone or in any derivative version prepared by Licensee.
171
+
172
+3. BeOpen is making the Software available to Licensee on an "AS IS"
173
+basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
174
+IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND
175
+DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
176
+FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT
177
+INFRINGE ANY THIRD PARTY RIGHTS.
178
+
179
+4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
180
+SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
181
+AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY
182
+DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
183
+
184
+5. This License Agreement will automatically terminate upon a material
185
+breach of its terms and conditions.
186
+
187
+6. This License Agreement shall be governed by and interpreted in all
188
+respects by the law of the State of California, excluding conflict of
189
+law provisions. Nothing in this License Agreement shall be deemed to
190
+create any relationship of agency, partnership, or joint venture
191
+between BeOpen and Licensee. This License Agreement does not grant
192
+permission to use BeOpen trademarks or trade names in a trademark
193
+sense to endorse or promote products or services of Licensee, or any
194
+third party. As an exception, the "BeOpen Python" logos available at
195
+http://www.pythonlabs.com/logos.html may be used according to the
196
+permissions granted on that web page.
197
+
198
+7. By copying, installing or otherwise using the software, Licensee
199
+agrees to be bound by the terms and conditions of this License
200
+Agreement.
201
+
71
+
202
+
72
+
203
+CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
73
+def bench_func(env, case):
204
+---------------------------------------
74
+ fname = f"{case['dir']}/prealloc-test.qcow2"
75
+ try:
76
+ os.remove(fname)
77
+ except OSError:
78
+ pass
205
+
79
+
206
+1. This LICENSE AGREEMENT is between the Corporation for National
80
+ subprocess.run([env['qemu-img-binary'], 'create', '-f', 'qcow2', fname,
207
+Research Initiatives, having an office at 1895 Preston White Drive,
81
+ '16G'], stdout=subprocess.DEVNULL,
208
+Reston, VA 20191 ("CNRI"), and the Individual or Organization
82
+ stderr=subprocess.DEVNULL, check=True)
209
+("Licensee") accessing and otherwise using Python 1.6.1 software in
210
+source or binary form and its associated documentation.
211
+
83
+
212
+2. Subject to the terms and conditions of this License Agreement, CNRI
84
+ args = [env['qemu-img-binary'], 'bench', '-c', str(case['count']),
213
+hereby grants Licensee a nonexclusive, royalty-free, world-wide
85
+ '-d', '64', '-s', case['block-size'], '-t', 'none', '-n', '-w']
214
+license to reproduce, analyze, test, perform and/or display publicly,
86
+ if env['prealloc']:
215
+prepare derivative works, distribute, and otherwise use Python 1.6.1
87
+ args += ['--image-opts',
216
+alone or in any derivative version, provided, however, that CNRI's
88
+ 'driver=qcow2,file.driver=preallocate,file.file.driver=file,'
217
+License Agreement and CNRI's notice of copyright, i.e., "Copyright (c)
89
+ f'file.file.filename={fname}']
218
+1995-2001 Corporation for National Research Initiatives; All Rights
90
+ else:
219
+Reserved" are retained in Python 1.6.1 alone or in any derivative
91
+ args += ['-f', 'qcow2', fname]
220
+version prepared by Licensee. Alternately, in lieu of CNRI's License
221
+Agreement, Licensee may substitute the following text (omitting the
222
+quotes): "Python 1.6.1 is made available subject to the terms and
223
+conditions in CNRI's License Agreement. This Agreement together with
224
+Python 1.6.1 may be located on the Internet using the following
225
+unique, persistent identifier (known as a handle): 1895.22/1013. This
226
+Agreement may also be obtained from a proxy server on the Internet
227
+using the following URL: http://hdl.handle.net/1895.22/1013".
228
+
92
+
229
+3. In the event Licensee prepares a derivative work that is based on
93
+ return qemu_img_bench(args)
230
+or incorporates Python 1.6.1 or any part thereof, and wants to make
231
+the derivative work available to others as provided herein, then
232
+Licensee hereby agrees to include in any such work a brief summary of
233
+the changes made to Python 1.6.1.
234
+
235
+4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS"
236
+basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
237
+IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
238
+DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
239
+FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT
240
+INFRINGE ANY THIRD PARTY RIGHTS.
241
+
242
+5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
243
+1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
244
+A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1,
245
+OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
246
+
247
+6. This License Agreement will automatically terminate upon a material
248
+breach of its terms and conditions.
249
+
250
+7. This License Agreement shall be governed by the federal
251
+intellectual property law of the United States, including without
252
+limitation the federal copyright law, and, to the extent such
253
+U.S. federal law does not apply, by the law of the Commonwealth of
254
+Virginia, excluding Virginia's conflict of law provisions.
255
+Notwithstanding the foregoing, with regard to derivative works based
256
+on Python 1.6.1 that incorporate non-separable material that was
257
+previously distributed under the GNU General Public License (GPL), the
258
+law of the Commonwealth of Virginia shall govern this License
259
+Agreement only as to issues arising under or with respect to
260
+Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this
261
+License Agreement shall be deemed to create any relationship of
262
+agency, partnership, or joint venture between CNRI and Licensee. This
263
+License Agreement does not grant permission to use CNRI trademarks or
264
+trade name in a trademark sense to endorse or promote products or
265
+services of Licensee, or any third party.
266
+
267
+8. By clicking on the "ACCEPT" button where indicated, or by copying,
268
+installing or otherwise using Python 1.6.1, Licensee agrees to be
269
+bound by the terms and conditions of this License Agreement.
270
+
271
+ ACCEPT
272
+
94
+
273
+
95
+
274
+CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
96
+def auto_count_bench_func(env, case):
275
+--------------------------------------------------
97
+ case['count'] = 100
98
+ while True:
99
+ res = bench_func(env, case)
100
+ if 'error' in res:
101
+ return res
276
+
102
+
277
+Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
103
+ if res['seconds'] >= 1:
278
+The Netherlands. All rights reserved.
104
+ break
279
+
105
+
280
+Permission to use, copy, modify, and distribute this software and its
106
+ case['count'] *= 10
281
+documentation for any purpose and without fee is hereby granted,
282
+provided that the above copyright notice appear in all copies and that
283
+both that copyright notice and this permission notice appear in
284
+supporting documentation, and that the name of Stichting Mathematisch
285
+Centrum or CWI not be used in advertising or publicity pertaining to
286
+distribution of the software without specific, written prior
287
+permission.
288
+
107
+
289
+STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
108
+ if res['seconds'] < 5:
290
+THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
109
+ case['count'] = round(case['count'] * 5 / res['seconds'])
291
+FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
110
+ res = bench_func(env, case)
292
+FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
111
+ if 'error' in res:
293
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
112
+ return res
294
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
295
+OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
296
diff --git a/scripts/argparse.py b/scripts/argparse.py
297
new file mode 100644
298
index XXXXXXX..XXXXXXX
299
--- /dev/null
300
+++ b/scripts/argparse.py
301
@@ -XXX,XX +XXX,XX @@
302
+# This is a local copy of the standard library argparse module taken from PyPI.
303
+# It is licensed under the Python Software Foundation License. This is a
304
+# fallback for Python 2.6 which does not include this module. Python 2.7+ and
305
+# 3+ will never load this module because built-in modules are loaded before
306
+# anything in sys.path.
307
+#
308
+# If your script is not located in the same directory as this file, import it
309
+# like this:
310
+#
311
+# import os
312
+# import sys
313
+# sys.path.append(os.path.join(os.path.dirname(__file__), ..., 'scripts'))
314
+# import argparse
315
+
113
+
316
+# Author: Steven J. Bethard <steven.bethard@gmail.com>.
114
+ res['iops'] = case['count'] / res['seconds']
317
+# Maintainer: Thomas Waldmann <tw@waldmann-edv.de>
115
+ return res
318
+
319
+"""Command-line parsing library
320
+
321
+This module is an optparse-inspired command-line parsing library that:
322
+
323
+ - handles both optional and positional arguments
324
+ - produces highly informative usage messages
325
+ - supports parsers that dispatch to sub-parsers
326
+
327
+The following is a simple usage example that sums integers from the
328
+command-line and writes the result to a file::
329
+
330
+ parser = argparse.ArgumentParser(
331
+ description='sum the integers at the command line')
332
+ parser.add_argument(
333
+ 'integers', metavar='int', nargs='+', type=int,
334
+ help='an integer to be summed')
335
+ parser.add_argument(
336
+ '--log', default=sys.stdout, type=argparse.FileType('w'),
337
+ help='the file where the sum should be written')
338
+ args = parser.parse_args()
339
+ args.log.write('%s' % sum(args.integers))
340
+ args.log.close()
341
+
342
+The module contains the following public classes:
343
+
344
+ - ArgumentParser -- The main entry point for command-line parsing. As the
345
+ example above shows, the add_argument() method is used to populate
346
+ the parser with actions for optional and positional arguments. Then
347
+ the parse_args() method is invoked to convert the args at the
348
+ command-line into an object with attributes.
349
+
350
+ - ArgumentError -- The exception raised by ArgumentParser objects when
351
+ there are errors with the parser's actions. Errors raised while
352
+ parsing the command-line are caught by ArgumentParser and emitted
353
+ as command-line messages.
354
+
355
+ - FileType -- A factory for defining types of files to be created. As the
356
+ example above shows, instances of FileType are typically passed as
357
+ the type= argument of add_argument() calls.
358
+
359
+ - Action -- The base class for parser actions. Typically actions are
360
+ selected by passing strings like 'store_true' or 'append_const' to
361
+ the action= argument of add_argument(). However, for greater
362
+ customization of ArgumentParser actions, subclasses of Action may
363
+ be defined and passed as the action= argument.
364
+
365
+ - HelpFormatter, RawDescriptionHelpFormatter, RawTextHelpFormatter,
366
+ ArgumentDefaultsHelpFormatter -- Formatter classes which
367
+ may be passed as the formatter_class= argument to the
368
+ ArgumentParser constructor. HelpFormatter is the default,
369
+ RawDescriptionHelpFormatter and RawTextHelpFormatter tell the parser
370
+ not to change the formatting for help text, and
371
+ ArgumentDefaultsHelpFormatter adds information about argument defaults
372
+ to the help.
373
+
374
+All other classes in this module are considered implementation details.
375
+(Also note that HelpFormatter and RawDescriptionHelpFormatter are only
376
+considered public as object names -- the API of the formatter objects is
377
+still considered an implementation detail.)
378
+"""
379
+
380
+__version__ = '1.4.0' # we use our own version number independent of the
381
+ # one in stdlib and we release this on pypi.
382
+
383
+__external_lib__ = True # to make sure the tests really test THIS lib,
384
+ # not the builtin one in Python stdlib
385
+
386
+__all__ = [
387
+ 'ArgumentParser',
388
+ 'ArgumentError',
389
+ 'ArgumentTypeError',
390
+ 'FileType',
391
+ 'HelpFormatter',
392
+ 'ArgumentDefaultsHelpFormatter',
393
+ 'RawDescriptionHelpFormatter',
394
+ 'RawTextHelpFormatter',
395
+ 'Namespace',
396
+ 'Action',
397
+ 'ONE_OR_MORE',
398
+ 'OPTIONAL',
399
+ 'PARSER',
400
+ 'REMAINDER',
401
+ 'SUPPRESS',
402
+ 'ZERO_OR_MORE',
403
+]
404
+
116
+
405
+
117
+
406
+import copy as _copy
118
+if __name__ == '__main__':
407
+import os as _os
119
+ if len(sys.argv) < 2:
408
+import re as _re
120
+ print(f'USAGE: {sys.argv[0]} <qemu-img binary> '
409
+import sys as _sys
121
+ 'DISK_NAME:DIR_PATH ...')
410
+import textwrap as _textwrap
122
+ exit(1)
411
+
123
+
412
+from gettext import gettext as _
124
+ qemu_img = sys.argv[1]
413
+
125
+
414
+try:
126
+ envs = [
415
+ set
127
+ {
416
+except NameError:
128
+ 'id': 'no-prealloc',
417
+ # for python < 2.4 compatibility (sets module is there since 2.3):
129
+ 'qemu-img-binary': qemu_img,
418
+ from sets import Set as set
130
+ 'prealloc': False
131
+ },
132
+ {
133
+ 'id': 'prealloc',
134
+ 'qemu-img-binary': qemu_img,
135
+ 'prealloc': True
136
+ }
137
+ ]
419
+
138
+
420
+try:
139
+ aligned_cases = []
421
+ basestring
140
+ unaligned_cases = []
422
+except NameError:
423
+ basestring = str
424
+
141
+
425
+try:
142
+ for disk in sys.argv[2:]:
426
+ sorted
143
+ name, path = disk.split(':')
427
+except NameError:
144
+ aligned_cases.append({
428
+ # for python < 2.4 compatibility:
145
+ 'id': f'{name}, aligned sequential 16k',
429
+ def sorted(iterable, reverse=False):
146
+ 'block-size': '16k',
430
+ result = list(iterable)
147
+ 'dir': path
431
+ result.sort()
148
+ })
432
+ if reverse:
149
+ unaligned_cases.append({
433
+ result.reverse()
150
+ 'id': f'{name}, unaligned sequential 64k',
434
+ return result
151
+ 'block-size': '16k',
152
+ 'dir': path
153
+ })
435
+
154
+
436
+
155
+ result = simplebench.bench(auto_count_bench_func, envs,
437
+def _callable(obj):
156
+ aligned_cases + unaligned_cases, count=5)
438
+ return hasattr(obj, '__call__') or hasattr(obj, '__bases__')
157
+ print(results_to_text(result))
439
+
158
+ with open('results.json', 'w') as f:
440
+
159
+ json.dump(result, f, indent=4)
441
+SUPPRESS = '==SUPPRESS=='
442
+
443
+OPTIONAL = '?'
444
+ZERO_OR_MORE = '*'
445
+ONE_OR_MORE = '+'
446
+PARSER = 'A...'
447
+REMAINDER = '...'
448
+_UNRECOGNIZED_ARGS_ATTR = '_unrecognized_args'
449
+
450
+# =============================
451
+# Utility functions and classes
452
+# =============================
453
+
454
+class _AttributeHolder(object):
455
+ """Abstract base class that provides __repr__.
456
+
457
+ The __repr__ method returns a string in the format::
458
+ ClassName(attr=name, attr=name, ...)
459
+ The attributes are determined either by a class-level attribute,
460
+ '_kwarg_names', or by inspecting the instance __dict__.
461
+ """
462
+
463
+ def __repr__(self):
464
+ type_name = type(self).__name__
465
+ arg_strings = []
466
+ for arg in self._get_args():
467
+ arg_strings.append(repr(arg))
468
+ for name, value in self._get_kwargs():
469
+ arg_strings.append('%s=%r' % (name, value))
470
+ return '%s(%s)' % (type_name, ', '.join(arg_strings))
471
+
472
+ def _get_kwargs(self):
473
+ return sorted(self.__dict__.items())
474
+
475
+ def _get_args(self):
476
+ return []
477
+
478
+
479
+def _ensure_value(namespace, name, value):
480
+ if getattr(namespace, name, None) is None:
481
+ setattr(namespace, name, value)
482
+ return getattr(namespace, name)
483
+
484
+
485
+# ===============
486
+# Formatting Help
487
+# ===============
488
+
489
+class HelpFormatter(object):
490
+ """Formatter for generating usage messages and argument help strings.
491
+
492
+ Only the name of this class is considered a public API. All the methods
493
+ provided by the class are considered an implementation detail.
494
+ """
495
+
496
+ def __init__(self,
497
+ prog,
498
+ indent_increment=2,
499
+ max_help_position=24,
500
+ width=None):
501
+
502
+ # default setting for width
503
+ if width is None:
504
+ try:
505
+ width = int(_os.environ['COLUMNS'])
506
+ except (KeyError, ValueError):
507
+ width = 80
508
+ width -= 2
509
+
510
+ self._prog = prog
511
+ self._indent_increment = indent_increment
512
+ self._max_help_position = max_help_position
513
+ self._width = width
514
+
515
+ self._current_indent = 0
516
+ self._level = 0
517
+ self._action_max_length = 0
518
+
519
+ self._root_section = self._Section(self, None)
520
+ self._current_section = self._root_section
521
+
522
+ self._whitespace_matcher = _re.compile(r'\s+')
523
+ self._long_break_matcher = _re.compile(r'\n\n\n+')
524
+
525
+ # ===============================
526
+ # Section and indentation methods
527
+ # ===============================
528
+ def _indent(self):
529
+ self._current_indent += self._indent_increment
530
+ self._level += 1
531
+
532
+ def _dedent(self):
533
+ self._current_indent -= self._indent_increment
534
+ assert self._current_indent >= 0, 'Indent decreased below 0.'
535
+ self._level -= 1
536
+
537
+ class _Section(object):
538
+
539
+ def __init__(self, formatter, parent, heading=None):
540
+ self.formatter = formatter
541
+ self.parent = parent
542
+ self.heading = heading
543
+ self.items = []
544
+
545
+ def format_help(self):
546
+ # format the indented section
547
+ if self.parent is not None:
548
+ self.formatter._indent()
549
+ join = self.formatter._join_parts
550
+ for func, args in self.items:
551
+ func(*args)
552
+ item_help = join([func(*args) for func, args in self.items])
553
+ if self.parent is not None:
554
+ self.formatter._dedent()
555
+
556
+ # return nothing if the section was empty
557
+ if not item_help:
558
+ return ''
559
+
560
+ # add the heading if the section was non-empty
561
+ if self.heading is not SUPPRESS and self.heading is not None:
562
+ current_indent = self.formatter._current_indent
563
+ heading = '%*s%s:\n' % (current_indent, '', self.heading)
564
+ else:
565
+ heading = ''
566
+
567
+ # join the section-initial newline, the heading and the help
568
+ return join(['\n', heading, item_help, '\n'])
569
+
570
+ def _add_item(self, func, args):
571
+ self._current_section.items.append((func, args))
572
+
573
+ # ========================
574
+ # Message building methods
575
+ # ========================
576
+ def start_section(self, heading):
577
+ self._indent()
578
+ section = self._Section(self, self._current_section, heading)
579
+ self._add_item(section.format_help, [])
580
+ self._current_section = section
581
+
582
+ def end_section(self):
583
+ self._current_section = self._current_section.parent
584
+ self._dedent()
585
+
586
+ def add_text(self, text):
587
+ if text is not SUPPRESS and text is not None:
588
+ self._add_item(self._format_text, [text])
589
+
590
+ def add_usage(self, usage, actions, groups, prefix=None):
591
+ if usage is not SUPPRESS:
592
+ args = usage, actions, groups, prefix
593
+ self._add_item(self._format_usage, args)
594
+
595
+ def add_argument(self, action):
596
+ if action.help is not SUPPRESS:
597
+
598
+ # find all invocations
599
+ get_invocation = self._format_action_invocation
600
+ invocations = [get_invocation(action)]
601
+ for subaction in self._iter_indented_subactions(action):
602
+ invocations.append(get_invocation(subaction))
603
+
604
+ # update the maximum item length
605
+ invocation_length = max([len(s) for s in invocations])
606
+ action_length = invocation_length + self._current_indent
607
+ self._action_max_length = max(self._action_max_length,
608
+ action_length)
609
+
610
+ # add the item to the list
611
+ self._add_item(self._format_action, [action])
612
+
613
+ def add_arguments(self, actions):
614
+ for action in actions:
615
+ self.add_argument(action)
616
+
617
+ # =======================
618
+ # Help-formatting methods
619
+ # =======================
620
+ def format_help(self):
621
+ help = self._root_section.format_help()
622
+ if help:
623
+ help = self._long_break_matcher.sub('\n\n', help)
624
+ help = help.strip('\n') + '\n'
625
+ return help
626
+
627
+ def _join_parts(self, part_strings):
628
+ return ''.join([part
629
+ for part in part_strings
630
+ if part and part is not SUPPRESS])
631
+
632
+ def _format_usage(self, usage, actions, groups, prefix):
633
+ if prefix is None:
634
+ prefix = _('usage: ')
635
+
636
+ # if usage is specified, use that
637
+ if usage is not None:
638
+ usage = usage % dict(prog=self._prog)
639
+
640
+ # if no optionals or positionals are available, usage is just prog
641
+ elif usage is None and not actions:
642
+ usage = '%(prog)s' % dict(prog=self._prog)
643
+
644
+ # if optionals and positionals are available, calculate usage
645
+ elif usage is None:
646
+ prog = '%(prog)s' % dict(prog=self._prog)
647
+
648
+ # split optionals from positionals
649
+ optionals = []
650
+ positionals = []
651
+ for action in actions:
652
+ if action.option_strings:
653
+ optionals.append(action)
654
+ else:
655
+ positionals.append(action)
656
+
657
+ # build full usage string
658
+ format = self._format_actions_usage
659
+ action_usage = format(optionals + positionals, groups)
660
+ usage = ' '.join([s for s in [prog, action_usage] if s])
661
+
662
+ # wrap the usage parts if it's too long
663
+ text_width = self._width - self._current_indent
664
+ if len(prefix) + len(usage) > text_width:
665
+
666
+ # break usage into wrappable parts
667
+ part_regexp = r'\(.*?\)+|\[.*?\]+|\S+'
668
+ opt_usage = format(optionals, groups)
669
+ pos_usage = format(positionals, groups)
670
+ opt_parts = _re.findall(part_regexp, opt_usage)
671
+ pos_parts = _re.findall(part_regexp, pos_usage)
672
+ assert ' '.join(opt_parts) == opt_usage
673
+ assert ' '.join(pos_parts) == pos_usage
674
+
675
+ # helper for wrapping lines
676
+ def get_lines(parts, indent, prefix=None):
677
+ lines = []
678
+ line = []
679
+ if prefix is not None:
680
+ line_len = len(prefix) - 1
681
+ else:
682
+ line_len = len(indent) - 1
683
+ for part in parts:
684
+ if line_len + 1 + len(part) > text_width:
685
+ lines.append(indent + ' '.join(line))
686
+ line = []
687
+ line_len = len(indent) - 1
688
+ line.append(part)
689
+ line_len += len(part) + 1
690
+ if line:
691
+ lines.append(indent + ' '.join(line))
692
+ if prefix is not None:
693
+ lines[0] = lines[0][len(indent):]
694
+ return lines
695
+
696
+ # if prog is short, follow it with optionals or positionals
697
+ if len(prefix) + len(prog) <= 0.75 * text_width:
698
+ indent = ' ' * (len(prefix) + len(prog) + 1)
699
+ if opt_parts:
700
+ lines = get_lines([prog] + opt_parts, indent, prefix)
701
+ lines.extend(get_lines(pos_parts, indent))
702
+ elif pos_parts:
703
+ lines = get_lines([prog] + pos_parts, indent, prefix)
704
+ else:
705
+ lines = [prog]
706
+
707
+ # if prog is long, put it on its own line
708
+ else:
709
+ indent = ' ' * len(prefix)
710
+ parts = opt_parts + pos_parts
711
+ lines = get_lines(parts, indent)
712
+ if len(lines) > 1:
713
+ lines = []
714
+ lines.extend(get_lines(opt_parts, indent))
715
+ lines.extend(get_lines(pos_parts, indent))
716
+ lines = [prog] + lines
717
+
718
+ # join lines into usage
719
+ usage = '\n'.join(lines)
720
+
721
+ # prefix with 'usage:'
722
+ return '%s%s\n\n' % (prefix, usage)
723
+
724
+ def _format_actions_usage(self, actions, groups):
725
+ # find group indices and identify actions in groups
726
+ group_actions = set()
727
+ inserts = {}
728
+ for group in groups:
729
+ try:
730
+ start = actions.index(group._group_actions[0])
731
+ except ValueError:
732
+ continue
733
+ else:
734
+ end = start + len(group._group_actions)
735
+ if actions[start:end] == group._group_actions:
736
+ for action in group._group_actions:
737
+ group_actions.add(action)
738
+ if not group.required:
739
+ if start in inserts:
740
+ inserts[start] += ' ['
741
+ else:
742
+ inserts[start] = '['
743
+ inserts[end] = ']'
744
+ else:
745
+ if start in inserts:
746
+ inserts[start] += ' ('
747
+ else:
748
+ inserts[start] = '('
749
+ inserts[end] = ')'
750
+ for i in range(start + 1, end):
751
+ inserts[i] = '|'
752
+
753
+ # collect all actions format strings
754
+ parts = []
755
+ for i, action in enumerate(actions):
756
+
757
+ # suppressed arguments are marked with None
758
+ # remove | separators for suppressed arguments
759
+ if action.help is SUPPRESS:
760
+ parts.append(None)
761
+ if inserts.get(i) == '|':
762
+ inserts.pop(i)
763
+ elif inserts.get(i + 1) == '|':
764
+ inserts.pop(i + 1)
765
+
766
+ # produce all arg strings
767
+ elif not action.option_strings:
768
+ part = self._format_args(action, action.dest)
769
+
770
+ # if it's in a group, strip the outer []
771
+ if action in group_actions:
772
+ if part[0] == '[' and part[-1] == ']':
773
+ part = part[1:-1]
774
+
775
+ # add the action string to the list
776
+ parts.append(part)
777
+
778
+ # produce the first way to invoke the option in brackets
779
+ else:
780
+ option_string = action.option_strings[0]
781
+
782
+ # if the Optional doesn't take a value, format is:
783
+ # -s or --long
784
+ if action.nargs == 0:
785
+ part = '%s' % option_string
786
+
787
+ # if the Optional takes a value, format is:
788
+ # -s ARGS or --long ARGS
789
+ else:
790
+ default = action.dest.upper()
791
+ args_string = self._format_args(action, default)
792
+ part = '%s %s' % (option_string, args_string)
793
+
794
+ # make it look optional if it's not required or in a group
795
+ if not action.required and action not in group_actions:
796
+ part = '[%s]' % part
797
+
798
+ # add the action string to the list
799
+ parts.append(part)
800
+
801
+ # insert things at the necessary indices
802
+ for i in sorted(inserts, reverse=True):
803
+ parts[i:i] = [inserts[i]]
804
+
805
+ # join all the action items with spaces
806
+ text = ' '.join([item for item in parts if item is not None])
807
+
808
+ # clean up separators for mutually exclusive groups
809
+ open = r'[\[(]'
810
+ close = r'[\])]'
811
+ text = _re.sub(r'(%s) ' % open, r'\1', text)
812
+ text = _re.sub(r' (%s)' % close, r'\1', text)
813
+ text = _re.sub(r'%s *%s' % (open, close), r'', text)
814
+ text = _re.sub(r'\(([^|]*)\)', r'\1', text)
815
+ text = text.strip()
816
+
817
+ # return the text
818
+ return text
819
+
820
+ def _format_text(self, text):
821
+ if '%(prog)' in text:
822
+ text = text % dict(prog=self._prog)
823
+ text_width = self._width - self._current_indent
824
+ indent = ' ' * self._current_indent
825
+ return self._fill_text(text, text_width, indent) + '\n\n'
826
+
827
+ def _format_action(self, action):
828
+ # determine the required width and the entry label
829
+ help_position = min(self._action_max_length + 2,
830
+ self._max_help_position)
831
+ help_width = self._width - help_position
832
+ action_width = help_position - self._current_indent - 2
833
+ action_header = self._format_action_invocation(action)
834
+
835
+ # no help; start on same line and add a final newline
836
+ if not action.help:
837
+ tup = self._current_indent, '', action_header
838
+ action_header = '%*s%s\n' % tup
839
+
840
+ # short action name; start on the same line and pad two spaces
841
+ elif len(action_header) <= action_width:
842
+ tup = self._current_indent, '', action_width, action_header
843
+ action_header = '%*s%-*s ' % tup
844
+ indent_first = 0
845
+
846
+ # long action name; start on the next line
847
+ else:
848
+ tup = self._current_indent, '', action_header
849
+ action_header = '%*s%s\n' % tup
850
+ indent_first = help_position
851
+
852
+ # collect the pieces of the action help
853
+ parts = [action_header]
854
+
855
+ # if there was help for the action, add lines of help text
856
+ if action.help:
857
+ help_text = self._expand_help(action)
858
+ help_lines = self._split_lines(help_text, help_width)
859
+ parts.append('%*s%s\n' % (indent_first, '', help_lines[0]))
860
+ for line in help_lines[1:]:
861
+ parts.append('%*s%s\n' % (help_position, '', line))
862
+
863
+ # or add a newline if the description doesn't end with one
864
+ elif not action_header.endswith('\n'):
865
+ parts.append('\n')
866
+
867
+ # if there are any sub-actions, add their help as well
868
+ for subaction in self._iter_indented_subactions(action):
869
+ parts.append(self._format_action(subaction))
870
+
871
+ # return a single string
872
+ return self._join_parts(parts)
873
+
874
+ def _format_action_invocation(self, action):
875
+ if not action.option_strings:
876
+ metavar, = self._metavar_formatter(action, action.dest)(1)
877
+ return metavar
878
+
879
+ else:
880
+ parts = []
881
+
882
+ # if the Optional doesn't take a value, format is:
883
+ # -s, --long
884
+ if action.nargs == 0:
885
+ parts.extend(action.option_strings)
886
+
887
+ # if the Optional takes a value, format is:
888
+ # -s ARGS, --long ARGS
889
+ else:
890
+ default = action.dest.upper()
891
+ args_string = self._format_args(action, default)
892
+ for option_string in action.option_strings:
893
+ parts.append('%s %s' % (option_string, args_string))
894
+
895
+ return ', '.join(parts)
896
+
897
+ def _metavar_formatter(self, action, default_metavar):
898
+ if action.metavar is not None:
899
+ result = action.metavar
900
+ elif action.choices is not None:
901
+ choice_strs = [str(choice) for choice in action.choices]
902
+ result = '{%s}' % ','.join(choice_strs)
903
+ else:
904
+ result = default_metavar
905
+
906
+ def format(tuple_size):
907
+ if isinstance(result, tuple):
908
+ return result
909
+ else:
910
+ return (result, ) * tuple_size
911
+ return format
912
+
913
+ def _format_args(self, action, default_metavar):
914
+ get_metavar = self._metavar_formatter(action, default_metavar)
915
+ if action.nargs is None:
916
+ result = '%s' % get_metavar(1)
917
+ elif action.nargs == OPTIONAL:
918
+ result = '[%s]' % get_metavar(1)
919
+ elif action.nargs == ZERO_OR_MORE:
920
+ result = '[%s [%s ...]]' % get_metavar(2)
921
+ elif action.nargs == ONE_OR_MORE:
922
+ result = '%s [%s ...]' % get_metavar(2)
923
+ elif action.nargs == REMAINDER:
924
+ result = '...'
925
+ elif action.nargs == PARSER:
926
+ result = '%s ...' % get_metavar(1)
927
+ else:
928
+ formats = ['%s' for _ in range(action.nargs)]
929
+ result = ' '.join(formats) % get_metavar(action.nargs)
930
+ return result
931
+
932
+ def _expand_help(self, action):
933
+ params = dict(vars(action), prog=self._prog)
934
+ for name in list(params):
935
+ if params[name] is SUPPRESS:
936
+ del params[name]
937
+ for name in list(params):
938
+ if hasattr(params[name], '__name__'):
939
+ params[name] = params[name].__name__
940
+ if params.get('choices') is not None:
941
+ choices_str = ', '.join([str(c) for c in params['choices']])
942
+ params['choices'] = choices_str
943
+ return self._get_help_string(action) % params
944
+
945
+ def _iter_indented_subactions(self, action):
946
+ try:
947
+ get_subactions = action._get_subactions
948
+ except AttributeError:
949
+ pass
950
+ else:
951
+ self._indent()
952
+ for subaction in get_subactions():
953
+ yield subaction
954
+ self._dedent()
955
+
956
+ def _split_lines(self, text, width):
957
+ text = self._whitespace_matcher.sub(' ', text).strip()
958
+ return _textwrap.wrap(text, width)
959
+
960
+ def _fill_text(self, text, width, indent):
961
+ text = self._whitespace_matcher.sub(' ', text).strip()
962
+ return _textwrap.fill(text, width, initial_indent=indent,
963
+ subsequent_indent=indent)
964
+
965
+ def _get_help_string(self, action):
966
+ return action.help
967
+
968
+
969
+class RawDescriptionHelpFormatter(HelpFormatter):
970
+ """Help message formatter which retains any formatting in descriptions.
971
+
972
+ Only the name of this class is considered a public API. All the methods
973
+ provided by the class are considered an implementation detail.
974
+ """
975
+
976
+ def _fill_text(self, text, width, indent):
977
+ return ''.join([indent + line for line in text.splitlines(True)])
978
+
979
+
980
+class RawTextHelpFormatter(RawDescriptionHelpFormatter):
981
+ """Help message formatter which retains formatting of all help text.
982
+
983
+ Only the name of this class is considered a public API. All the methods
984
+ provided by the class are considered an implementation detail.
985
+ """
986
+
987
+ def _split_lines(self, text, width):
988
+ return text.splitlines()
989
+
990
+
991
+class ArgumentDefaultsHelpFormatter(HelpFormatter):
992
+ """Help message formatter which adds default values to argument help.
993
+
994
+ Only the name of this class is considered a public API. All the methods
995
+ provided by the class are considered an implementation detail.
996
+ """
997
+
998
+ def _get_help_string(self, action):
999
+ help = action.help
1000
+ if '%(default)' not in action.help:
1001
+ if action.default is not SUPPRESS:
1002
+ defaulting_nargs = [OPTIONAL, ZERO_OR_MORE]
1003
+ if action.option_strings or action.nargs in defaulting_nargs:
1004
+ help += ' (default: %(default)s)'
1005
+ return help
1006
+
1007
+
1008
+# =====================
1009
+# Options and Arguments
1010
+# =====================
1011
+
1012
+def _get_action_name(argument):
1013
+ if argument is None:
1014
+ return None
1015
+ elif argument.option_strings:
1016
+ return '/'.join(argument.option_strings)
1017
+ elif argument.metavar not in (None, SUPPRESS):
1018
+ return argument.metavar
1019
+ elif argument.dest not in (None, SUPPRESS):
1020
+ return argument.dest
1021
+ else:
1022
+ return None
1023
+
1024
+
1025
+class ArgumentError(Exception):
1026
+ """An error from creating or using an argument (optional or positional).
1027
+
1028
+ The string value of this exception is the message, augmented with
1029
+ information about the argument that caused it.
1030
+ """
1031
+
1032
+ def __init__(self, argument, message):
1033
+ self.argument_name = _get_action_name(argument)
1034
+ self.message = message
1035
+
1036
+ def __str__(self):
1037
+ if self.argument_name is None:
1038
+ format = '%(message)s'
1039
+ else:
1040
+ format = 'argument %(argument_name)s: %(message)s'
1041
+ return format % dict(message=self.message,
1042
+ argument_name=self.argument_name)
1043
+
1044
+
1045
+class ArgumentTypeError(Exception):
1046
+ """An error from trying to convert a command line string to a type."""
1047
+ pass
1048
+
1049
+
1050
+# ==============
1051
+# Action classes
1052
+# ==============
1053
+
1054
+class Action(_AttributeHolder):
1055
+ """Information about how to convert command line strings to Python objects.
1056
+
1057
+ Action objects are used by an ArgumentParser to represent the information
1058
+ needed to parse a single argument from one or more strings from the
1059
+ command line. The keyword arguments to the Action constructor are also
1060
+ all attributes of Action instances.
1061
+
1062
+ Keyword Arguments:
1063
+
1064
+ - option_strings -- A list of command-line option strings which
1065
+ should be associated with this action.
1066
+
1067
+ - dest -- The name of the attribute to hold the created object(s)
1068
+
1069
+ - nargs -- The number of command-line arguments that should be
1070
+ consumed. By default, one argument will be consumed and a single
1071
+ value will be produced. Other values include:
1072
+ - N (an integer) consumes N arguments (and produces a list)
1073
+ - '?' consumes zero or one arguments
1074
+ - '*' consumes zero or more arguments (and produces a list)
1075
+ - '+' consumes one or more arguments (and produces a list)
1076
+ Note that the difference between the default and nargs=1 is that
1077
+ with the default, a single value will be produced, while with
1078
+ nargs=1, a list containing a single value will be produced.
1079
+
1080
+ - const -- The value to be produced if the option is specified and the
1081
+ option uses an action that takes no values.
1082
+
1083
+ - default -- The value to be produced if the option is not specified.
1084
+
1085
+ - type -- The type which the command-line arguments should be converted
1086
+ to, should be one of 'string', 'int', 'float', 'complex' or a
1087
+ callable object that accepts a single string argument. If None,
1088
+ 'string' is assumed.
1089
+
1090
+ - choices -- A container of values that should be allowed. If not None,
1091
+ after a command-line argument has been converted to the appropriate
1092
+ type, an exception will be raised if it is not a member of this
1093
+ collection.
1094
+
1095
+ - required -- True if the action must always be specified at the
1096
+ command line. This is only meaningful for optional command-line
1097
+ arguments.
1098
+
1099
+ - help -- The help string describing the argument.
1100
+
1101
+ - metavar -- The name to be used for the option's argument with the
1102
+ help string. If None, the 'dest' value will be used as the name.
1103
+ """
1104
+
1105
+ def __init__(self,
1106
+ option_strings,
1107
+ dest,
1108
+ nargs=None,
1109
+ const=None,
1110
+ default=None,
1111
+ type=None,
1112
+ choices=None,
1113
+ required=False,
1114
+ help=None,
1115
+ metavar=None):
1116
+ self.option_strings = option_strings
1117
+ self.dest = dest
1118
+ self.nargs = nargs
1119
+ self.const = const
1120
+ self.default = default
1121
+ self.type = type
1122
+ self.choices = choices
1123
+ self.required = required
1124
+ self.help = help
1125
+ self.metavar = metavar
1126
+
1127
+ def _get_kwargs(self):
1128
+ names = [
1129
+ 'option_strings',
1130
+ 'dest',
1131
+ 'nargs',
1132
+ 'const',
1133
+ 'default',
1134
+ 'type',
1135
+ 'choices',
1136
+ 'help',
1137
+ 'metavar',
1138
+ ]
1139
+ return [(name, getattr(self, name)) for name in names]
1140
+
1141
+ def __call__(self, parser, namespace, values, option_string=None):
1142
+ raise NotImplementedError(_('.__call__() not defined'))
1143
+
1144
+
1145
+class _StoreAction(Action):
1146
+
1147
+ def __init__(self,
1148
+ option_strings,
1149
+ dest,
1150
+ nargs=None,
1151
+ const=None,
1152
+ default=None,
1153
+ type=None,
1154
+ choices=None,
1155
+ required=False,
1156
+ help=None,
1157
+ metavar=None):
1158
+ if nargs == 0:
1159
+ raise ValueError('nargs for store actions must be > 0; if you '
1160
+ 'have nothing to store, actions such as store '
1161
+ 'true or store const may be more appropriate')
1162
+ if const is not None and nargs != OPTIONAL:
1163
+ raise ValueError('nargs must be %r to supply const' % OPTIONAL)
1164
+ super(_StoreAction, self).__init__(
1165
+ option_strings=option_strings,
1166
+ dest=dest,
1167
+ nargs=nargs,
1168
+ const=const,
1169
+ default=default,
1170
+ type=type,
1171
+ choices=choices,
1172
+ required=required,
1173
+ help=help,
1174
+ metavar=metavar)
1175
+
1176
+ def __call__(self, parser, namespace, values, option_string=None):
1177
+ setattr(namespace, self.dest, values)
1178
+
1179
+
1180
+class _StoreConstAction(Action):
1181
+
1182
+ def __init__(self,
1183
+ option_strings,
1184
+ dest,
1185
+ const,
1186
+ default=None,
1187
+ required=False,
1188
+ help=None,
1189
+ metavar=None):
1190
+ super(_StoreConstAction, self).__init__(
1191
+ option_strings=option_strings,
1192
+ dest=dest,
1193
+ nargs=0,
1194
+ const=const,
1195
+ default=default,
1196
+ required=required,
1197
+ help=help)
1198
+
1199
+ def __call__(self, parser, namespace, values, option_string=None):
1200
+ setattr(namespace, self.dest, self.const)
1201
+
1202
+
1203
+class _StoreTrueAction(_StoreConstAction):
1204
+
1205
+ def __init__(self,
1206
+ option_strings,
1207
+ dest,
1208
+ default=False,
1209
+ required=False,
1210
+ help=None):
1211
+ super(_StoreTrueAction, self).__init__(
1212
+ option_strings=option_strings,
1213
+ dest=dest,
1214
+ const=True,
1215
+ default=default,
1216
+ required=required,
1217
+ help=help)
1218
+
1219
+
1220
+class _StoreFalseAction(_StoreConstAction):
1221
+
1222
+ def __init__(self,
1223
+ option_strings,
1224
+ dest,
1225
+ default=True,
1226
+ required=False,
1227
+ help=None):
1228
+ super(_StoreFalseAction, self).__init__(
1229
+ option_strings=option_strings,
1230
+ dest=dest,
1231
+ const=False,
1232
+ default=default,
1233
+ required=required,
1234
+ help=help)
1235
+
1236
+
1237
+class _AppendAction(Action):
1238
+
1239
+ def __init__(self,
1240
+ option_strings,
1241
+ dest,
1242
+ nargs=None,
1243
+ const=None,
1244
+ default=None,
1245
+ type=None,
1246
+ choices=None,
1247
+ required=False,
1248
+ help=None,
1249
+ metavar=None):
1250
+ if nargs == 0:
1251
+ raise ValueError('nargs for append actions must be > 0; if arg '
1252
+ 'strings are not supplying the value to append, '
1253
+ 'the append const action may be more appropriate')
1254
+ if const is not None and nargs != OPTIONAL:
1255
+ raise ValueError('nargs must be %r to supply const' % OPTIONAL)
1256
+ super(_AppendAction, self).__init__(
1257
+ option_strings=option_strings,
1258
+ dest=dest,
1259
+ nargs=nargs,
1260
+ const=const,
1261
+ default=default,
1262
+ type=type,
1263
+ choices=choices,
1264
+ required=required,
1265
+ help=help,
1266
+ metavar=metavar)
1267
+
1268
+ def __call__(self, parser, namespace, values, option_string=None):
1269
+ items = _copy.copy(_ensure_value(namespace, self.dest, []))
1270
+ items.append(values)
1271
+ setattr(namespace, self.dest, items)
1272
+
1273
+
1274
+class _AppendConstAction(Action):
1275
+
1276
+ def __init__(self,
1277
+ option_strings,
1278
+ dest,
1279
+ const,
1280
+ default=None,
1281
+ required=False,
1282
+ help=None,
1283
+ metavar=None):
1284
+ super(_AppendConstAction, self).__init__(
1285
+ option_strings=option_strings,
1286
+ dest=dest,
1287
+ nargs=0,
1288
+ const=const,
1289
+ default=default,
1290
+ required=required,
1291
+ help=help,
1292
+ metavar=metavar)
1293
+
1294
+ def __call__(self, parser, namespace, values, option_string=None):
1295
+ items = _copy.copy(_ensure_value(namespace, self.dest, []))
1296
+ items.append(self.const)
1297
+ setattr(namespace, self.dest, items)
1298
+
1299
+
1300
+class _CountAction(Action):
1301
+
1302
+ def __init__(self,
1303
+ option_strings,
1304
+ dest,
1305
+ default=None,
1306
+ required=False,
1307
+ help=None):
1308
+ super(_CountAction, self).__init__(
1309
+ option_strings=option_strings,
1310
+ dest=dest,
1311
+ nargs=0,
1312
+ default=default,
1313
+ required=required,
1314
+ help=help)
1315
+
1316
+ def __call__(self, parser, namespace, values, option_string=None):
1317
+ new_count = _ensure_value(namespace, self.dest, 0) + 1
1318
+ setattr(namespace, self.dest, new_count)
1319
+
1320
+
1321
+class _HelpAction(Action):
1322
+
1323
+ def __init__(self,
1324
+ option_strings,
1325
+ dest=SUPPRESS,
1326
+ default=SUPPRESS,
1327
+ help=None):
1328
+ super(_HelpAction, self).__init__(
1329
+ option_strings=option_strings,
1330
+ dest=dest,
1331
+ default=default,
1332
+ nargs=0,
1333
+ help=help)
1334
+
1335
+ def __call__(self, parser, namespace, values, option_string=None):
1336
+ parser.print_help()
1337
+ parser.exit()
1338
+
1339
+
1340
+class _VersionAction(Action):
1341
+
1342
+ def __init__(self,
1343
+ option_strings,
1344
+ version=None,
1345
+ dest=SUPPRESS,
1346
+ default=SUPPRESS,
1347
+ help="show program's version number and exit"):
1348
+ super(_VersionAction, self).__init__(
1349
+ option_strings=option_strings,
1350
+ dest=dest,
1351
+ default=default,
1352
+ nargs=0,
1353
+ help=help)
1354
+ self.version = version
1355
+
1356
+ def __call__(self, parser, namespace, values, option_string=None):
1357
+ version = self.version
1358
+ if version is None:
1359
+ version = parser.version
1360
+ formatter = parser._get_formatter()
1361
+ formatter.add_text(version)
1362
+ parser.exit(message=formatter.format_help())
1363
+
1364
+
1365
+class _SubParsersAction(Action):
1366
+
1367
+ class _ChoicesPseudoAction(Action):
1368
+
1369
+ def __init__(self, name, aliases, help):
1370
+ metavar = dest = name
1371
+ if aliases:
1372
+ metavar += ' (%s)' % ', '.join(aliases)
1373
+ sup = super(_SubParsersAction._ChoicesPseudoAction, self)
1374
+ sup.__init__(option_strings=[], dest=dest, help=help,
1375
+ metavar=metavar)
1376
+
1377
+ def __init__(self,
1378
+ option_strings,
1379
+ prog,
1380
+ parser_class,
1381
+ dest=SUPPRESS,
1382
+ help=None,
1383
+ metavar=None):
1384
+
1385
+ self._prog_prefix = prog
1386
+ self._parser_class = parser_class
1387
+ self._name_parser_map = {}
1388
+ self._choices_actions = []
1389
+
1390
+ super(_SubParsersAction, self).__init__(
1391
+ option_strings=option_strings,
1392
+ dest=dest,
1393
+ nargs=PARSER,
1394
+ choices=self._name_parser_map,
1395
+ help=help,
1396
+ metavar=metavar)
1397
+
1398
+ def add_parser(self, name, **kwargs):
1399
+ # set prog from the existing prefix
1400
+ if kwargs.get('prog') is None:
1401
+ kwargs['prog'] = '%s %s' % (self._prog_prefix, name)
1402
+
1403
+ aliases = kwargs.pop('aliases', ())
1404
+
1405
+ # create a pseudo-action to hold the choice help
1406
+ if 'help' in kwargs:
1407
+ help = kwargs.pop('help')
1408
+ choice_action = self._ChoicesPseudoAction(name, aliases, help)
1409
+ self._choices_actions.append(choice_action)
1410
+
1411
+ # create the parser and add it to the map
1412
+ parser = self._parser_class(**kwargs)
1413
+ self._name_parser_map[name] = parser
1414
+
1415
+ # make parser available under aliases also
1416
+ for alias in aliases:
1417
+ self._name_parser_map[alias] = parser
1418
+
1419
+ return parser
1420
+
1421
+ def _get_subactions(self):
1422
+ return self._choices_actions
1423
+
1424
+ def __call__(self, parser, namespace, values, option_string=None):
1425
+ parser_name = values[0]
1426
+ arg_strings = values[1:]
1427
+
1428
+ # set the parser name if requested
1429
+ if self.dest is not SUPPRESS:
1430
+ setattr(namespace, self.dest, parser_name)
1431
+
1432
+ # select the parser
1433
+ try:
1434
+ parser = self._name_parser_map[parser_name]
1435
+ except KeyError:
1436
+ tup = parser_name, ', '.join(self._name_parser_map)
1437
+ msg = _('unknown parser %r (choices: %s)' % tup)
1438
+ raise ArgumentError(self, msg)
1439
+
1440
+ # parse all the remaining options into the namespace
1441
+ # store any unrecognized options on the object, so that the top
1442
+ # level parser can decide what to do with them
1443
+ namespace, arg_strings = parser.parse_known_args(arg_strings, namespace)
1444
+ if arg_strings:
1445
+ vars(namespace).setdefault(_UNRECOGNIZED_ARGS_ATTR, [])
1446
+ getattr(namespace, _UNRECOGNIZED_ARGS_ATTR).extend(arg_strings)
1447
+
1448
+
1449
+# ==============
1450
+# Type classes
1451
+# ==============
1452
+
1453
+class FileType(object):
1454
+ """Factory for creating file object types
1455
+
1456
+ Instances of FileType are typically passed as type= arguments to the
1457
+ ArgumentParser add_argument() method.
1458
+
1459
+ Keyword Arguments:
1460
+ - mode -- A string indicating how the file is to be opened. Accepts the
1461
+ same values as the builtin open() function.
1462
+ - bufsize -- The file's desired buffer size. Accepts the same values as
1463
+ the builtin open() function.
1464
+ """
1465
+
1466
+ def __init__(self, mode='r', bufsize=None):
1467
+ self._mode = mode
1468
+ self._bufsize = bufsize
1469
+
1470
+ def __call__(self, string):
1471
+ # the special argument "-" means sys.std{in,out}
1472
+ if string == '-':
1473
+ if 'r' in self._mode:
1474
+ return _sys.stdin
1475
+ elif 'w' in self._mode:
1476
+ return _sys.stdout
1477
+ else:
1478
+ msg = _('argument "-" with mode %r' % self._mode)
1479
+ raise ValueError(msg)
1480
+
1481
+ try:
1482
+ # all other arguments are used as file names
1483
+ if self._bufsize:
1484
+ return open(string, self._mode, self._bufsize)
1485
+ else:
1486
+ return open(string, self._mode)
1487
+ except IOError:
1488
+ err = _sys.exc_info()[1]
1489
+ message = _("can't open '%s': %s")
1490
+ raise ArgumentTypeError(message % (string, err))
1491
+
1492
+ def __repr__(self):
1493
+ args = [self._mode, self._bufsize]
1494
+ args_str = ', '.join([repr(arg) for arg in args if arg is not None])
1495
+ return '%s(%s)' % (type(self).__name__, args_str)
1496
+
1497
+# ===========================
1498
+# Optional and Positional Parsing
1499
+# ===========================
1500
+
1501
+class Namespace(_AttributeHolder):
1502
+ """Simple object for storing attributes.
1503
+
1504
+ Implements equality by attribute names and values, and provides a simple
1505
+ string representation.
1506
+ """
1507
+
1508
+ def __init__(self, **kwargs):
1509
+ for name in kwargs:
1510
+ setattr(self, name, kwargs[name])
1511
+
1512
+ __hash__ = None
1513
+
1514
+ def __eq__(self, other):
1515
+ return vars(self) == vars(other)
1516
+
1517
+ def __ne__(self, other):
1518
+ return not (self == other)
1519
+
1520
+ def __contains__(self, key):
1521
+ return key in self.__dict__
1522
+
1523
+
1524
+class _ActionsContainer(object):
1525
+
1526
+ def __init__(self,
1527
+ description,
1528
+ prefix_chars,
1529
+ argument_default,
1530
+ conflict_handler):
1531
+ super(_ActionsContainer, self).__init__()
1532
+
1533
+ self.description = description
1534
+ self.argument_default = argument_default
1535
+ self.prefix_chars = prefix_chars
1536
+ self.conflict_handler = conflict_handler
1537
+
1538
+ # set up registries
1539
+ self._registries = {}
1540
+
1541
+ # register actions
1542
+ self.register('action', None, _StoreAction)
1543
+ self.register('action', 'store', _StoreAction)
1544
+ self.register('action', 'store_const', _StoreConstAction)
1545
+ self.register('action', 'store_true', _StoreTrueAction)
1546
+ self.register('action', 'store_false', _StoreFalseAction)
1547
+ self.register('action', 'append', _AppendAction)
1548
+ self.register('action', 'append_const', _AppendConstAction)
1549
+ self.register('action', 'count', _CountAction)
1550
+ self.register('action', 'help', _HelpAction)
1551
+ self.register('action', 'version', _VersionAction)
1552
+ self.register('action', 'parsers', _SubParsersAction)
1553
+
1554
+ # raise an exception if the conflict handler is invalid
1555
+ self._get_handler()
1556
+
1557
+ # action storage
1558
+ self._actions = []
1559
+ self._option_string_actions = {}
1560
+
1561
+ # groups
1562
+ self._action_groups = []
1563
+ self._mutually_exclusive_groups = []
1564
+
1565
+ # defaults storage
1566
+ self._defaults = {}
1567
+
1568
+ # determines whether an "option" looks like a negative number
1569
+ self._negative_number_matcher = _re.compile(r'^-\d+$|^-\d*\.\d+$')
1570
+
1571
+ # whether or not there are any optionals that look like negative
1572
+ # numbers -- uses a list so it can be shared and edited
1573
+ self._has_negative_number_optionals = []
1574
+
1575
+ # ====================
1576
+ # Registration methods
1577
+ # ====================
1578
+ def register(self, registry_name, value, object):
1579
+ registry = self._registries.setdefault(registry_name, {})
1580
+ registry[value] = object
1581
+
1582
+ def _registry_get(self, registry_name, value, default=None):
1583
+ return self._registries[registry_name].get(value, default)
1584
+
1585
+ # ==================================
1586
+ # Namespace default accessor methods
1587
+ # ==================================
1588
+ def set_defaults(self, **kwargs):
1589
+ self._defaults.update(kwargs)
1590
+
1591
+ # if these defaults match any existing arguments, replace
1592
+ # the previous default on the object with the new one
1593
+ for action in self._actions:
1594
+ if action.dest in kwargs:
1595
+ action.default = kwargs[action.dest]
1596
+
1597
+ def get_default(self, dest):
1598
+ for action in self._actions:
1599
+ if action.dest == dest and action.default is not None:
1600
+ return action.default
1601
+ return self._defaults.get(dest, None)
1602
+
1603
+
1604
+ # =======================
1605
+ # Adding argument actions
1606
+ # =======================
1607
+ def add_argument(self, *args, **kwargs):
1608
+ """
1609
+ add_argument(dest, ..., name=value, ...)
1610
+ add_argument(option_string, option_string, ..., name=value, ...)
1611
+ """
1612
+
1613
+ # if no positional args are supplied or only one is supplied and
1614
+ # it doesn't look like an option string, parse a positional
1615
+ # argument
1616
+ chars = self.prefix_chars
1617
+ if not args or len(args) == 1 and args[0][0] not in chars:
1618
+ if args and 'dest' in kwargs:
1619
+ raise ValueError('dest supplied twice for positional argument')
1620
+ kwargs = self._get_positional_kwargs(*args, **kwargs)
1621
+
1622
+ # otherwise, we're adding an optional argument
1623
+ else:
1624
+ kwargs = self._get_optional_kwargs(*args, **kwargs)
1625
+
1626
+ # if no default was supplied, use the parser-level default
1627
+ if 'default' not in kwargs:
1628
+ dest = kwargs['dest']
1629
+ if dest in self._defaults:
1630
+ kwargs['default'] = self._defaults[dest]
1631
+ elif self.argument_default is not None:
1632
+ kwargs['default'] = self.argument_default
1633
+
1634
+ # create the action object, and add it to the parser
1635
+ action_class = self._pop_action_class(kwargs)
1636
+ if not _callable(action_class):
1637
+ raise ValueError('unknown action "%s"' % action_class)
1638
+ action = action_class(**kwargs)
1639
+
1640
+ # raise an error if the action type is not callable
1641
+ type_func = self._registry_get('type', action.type, action.type)
1642
+ if not _callable(type_func):
1643
+ raise ValueError('%r is not callable' % type_func)
1644
+
1645
+ return self._add_action(action)
1646
+
1647
+ def add_argument_group(self, *args, **kwargs):
1648
+ group = _ArgumentGroup(self, *args, **kwargs)
1649
+ self._action_groups.append(group)
1650
+ return group
1651
+
1652
+ def add_mutually_exclusive_group(self, **kwargs):
1653
+ group = _MutuallyExclusiveGroup(self, **kwargs)
1654
+ self._mutually_exclusive_groups.append(group)
1655
+ return group
1656
+
1657
+ def _add_action(self, action):
1658
+ # resolve any conflicts
1659
+ self._check_conflict(action)
1660
+
1661
+ # add to actions list
1662
+ self._actions.append(action)
1663
+ action.container = self
1664
+
1665
+ # index the action by any option strings it has
1666
+ for option_string in action.option_strings:
1667
+ self._option_string_actions[option_string] = action
1668
+
1669
+ # set the flag if any option strings look like negative numbers
1670
+ for option_string in action.option_strings:
1671
+ if self._negative_number_matcher.match(option_string):
1672
+ if not self._has_negative_number_optionals:
1673
+ self._has_negative_number_optionals.append(True)
1674
+
1675
+ # return the created action
1676
+ return action
1677
+
1678
+ def _remove_action(self, action):
1679
+ self._actions.remove(action)
1680
+
1681
+ def _add_container_actions(self, container):
1682
+ # collect groups by titles
1683
+ title_group_map = {}
1684
+ for group in self._action_groups:
1685
+ if group.title in title_group_map:
1686
+ msg = _('cannot merge actions - two groups are named %r')
1687
+ raise ValueError(msg % (group.title))
1688
+ title_group_map[group.title] = group
1689
+
1690
+ # map each action to its group
1691
+ group_map = {}
1692
+ for group in container._action_groups:
1693
+
1694
+ # if a group with the title exists, use that, otherwise
1695
+ # create a new group matching the container's group
1696
+ if group.title not in title_group_map:
1697
+ title_group_map[group.title] = self.add_argument_group(
1698
+ title=group.title,
1699
+ description=group.description,
1700
+ conflict_handler=group.conflict_handler)
1701
+
1702
+ # map the actions to their new group
1703
+ for action in group._group_actions:
1704
+ group_map[action] = title_group_map[group.title]
1705
+
1706
+ # add container's mutually exclusive groups
1707
+ # NOTE: if add_mutually_exclusive_group ever gains title= and
1708
+ # description= then this code will need to be expanded as above
1709
+ for group in container._mutually_exclusive_groups:
1710
+ mutex_group = self.add_mutually_exclusive_group(
1711
+ required=group.required)
1712
+
1713
+ # map the actions to their new mutex group
1714
+ for action in group._group_actions:
1715
+ group_map[action] = mutex_group
1716
+
1717
+ # add all actions to this container or their group
1718
+ for action in container._actions:
1719
+ group_map.get(action, self)._add_action(action)
1720
+
1721
+ def _get_positional_kwargs(self, dest, **kwargs):
1722
+ # make sure required is not specified
1723
+ if 'required' in kwargs:
1724
+ msg = _("'required' is an invalid argument for positionals")
1725
+ raise TypeError(msg)
1726
+
1727
+ # mark positional arguments as required if at least one is
1728
+ # always required
1729
+ if kwargs.get('nargs') not in [OPTIONAL, ZERO_OR_MORE]:
1730
+ kwargs['required'] = True
1731
+ if kwargs.get('nargs') == ZERO_OR_MORE and 'default' not in kwargs:
1732
+ kwargs['required'] = True
1733
+
1734
+ # return the keyword arguments with no option strings
1735
+ return dict(kwargs, dest=dest, option_strings=[])
1736
+
1737
+ def _get_optional_kwargs(self, *args, **kwargs):
1738
+ # determine short and long option strings
1739
+ option_strings = []
1740
+ long_option_strings = []
1741
+ for option_string in args:
1742
+ # error on strings that don't start with an appropriate prefix
1743
+ if not option_string[0] in self.prefix_chars:
1744
+ msg = _('invalid option string %r: '
1745
+ 'must start with a character %r')
1746
+ tup = option_string, self.prefix_chars
1747
+ raise ValueError(msg % tup)
1748
+
1749
+ # strings starting with two prefix characters are long options
1750
+ option_strings.append(option_string)
1751
+ if option_string[0] in self.prefix_chars:
1752
+ if len(option_string) > 1:
1753
+ if option_string[1] in self.prefix_chars:
1754
+ long_option_strings.append(option_string)
1755
+
1756
+ # infer destination, '--foo-bar' -> 'foo_bar' and '-x' -> 'x'
1757
+ dest = kwargs.pop('dest', None)
1758
+ if dest is None:
1759
+ if long_option_strings:
1760
+ dest_option_string = long_option_strings[0]
1761
+ else:
1762
+ dest_option_string = option_strings[0]
1763
+ dest = dest_option_string.lstrip(self.prefix_chars)
1764
+ if not dest:
1765
+ msg = _('dest= is required for options like %r')
1766
+ raise ValueError(msg % option_string)
1767
+ dest = dest.replace('-', '_')
1768
+
1769
+ # return the updated keyword arguments
1770
+ return dict(kwargs, dest=dest, option_strings=option_strings)
1771
+
1772
+ def _pop_action_class(self, kwargs, default=None):
1773
+ action = kwargs.pop('action', default)
1774
+ return self._registry_get('action', action, action)
1775
+
1776
+ def _get_handler(self):
1777
+ # determine function from conflict handler string
1778
+ handler_func_name = '_handle_conflict_%s' % self.conflict_handler
1779
+ try:
1780
+ return getattr(self, handler_func_name)
1781
+ except AttributeError:
1782
+ msg = _('invalid conflict_resolution value: %r')
1783
+ raise ValueError(msg % self.conflict_handler)
1784
+
1785
+ def _check_conflict(self, action):
1786
+
1787
+ # find all options that conflict with this option
1788
+ confl_optionals = []
1789
+ for option_string in action.option_strings:
1790
+ if option_string in self._option_string_actions:
1791
+ confl_optional = self._option_string_actions[option_string]
1792
+ confl_optionals.append((option_string, confl_optional))
1793
+
1794
+ # resolve any conflicts
1795
+ if confl_optionals:
1796
+ conflict_handler = self._get_handler()
1797
+ conflict_handler(action, confl_optionals)
1798
+
1799
+ def _handle_conflict_error(self, action, conflicting_actions):
1800
+ message = _('conflicting option string(s): %s')
1801
+ conflict_string = ', '.join([option_string
1802
+ for option_string, action
1803
+ in conflicting_actions])
1804
+ raise ArgumentError(action, message % conflict_string)
1805
+
1806
+ def _handle_conflict_resolve(self, action, conflicting_actions):
1807
+
1808
+ # remove all conflicting options
1809
+ for option_string, action in conflicting_actions:
1810
+
1811
+ # remove the conflicting option
1812
+ action.option_strings.remove(option_string)
1813
+ self._option_string_actions.pop(option_string, None)
1814
+
1815
+ # if the option now has no option string, remove it from the
1816
+ # container holding it
1817
+ if not action.option_strings:
1818
+ action.container._remove_action(action)
1819
+
1820
+
1821
+class _ArgumentGroup(_ActionsContainer):
1822
+
1823
+ def __init__(self, container, title=None, description=None, **kwargs):
1824
+ # add any missing keyword arguments by checking the container
1825
+ update = kwargs.setdefault
1826
+ update('conflict_handler', container.conflict_handler)
1827
+ update('prefix_chars', container.prefix_chars)
1828
+ update('argument_default', container.argument_default)
1829
+ super_init = super(_ArgumentGroup, self).__init__
1830
+ super_init(description=description, **kwargs)
1831
+
1832
+ # group attributes
1833
+ self.title = title
1834
+ self._group_actions = []
1835
+
1836
+ # share most attributes with the container
1837
+ self._registries = container._registries
1838
+ self._actions = container._actions
1839
+ self._option_string_actions = container._option_string_actions
1840
+ self._defaults = container._defaults
1841
+ self._has_negative_number_optionals = \
1842
+ container._has_negative_number_optionals
1843
+
1844
+ def _add_action(self, action):
1845
+ action = super(_ArgumentGroup, self)._add_action(action)
1846
+ self._group_actions.append(action)
1847
+ return action
1848
+
1849
+ def _remove_action(self, action):
1850
+ super(_ArgumentGroup, self)._remove_action(action)
1851
+ self._group_actions.remove(action)
1852
+
1853
+
1854
+class _MutuallyExclusiveGroup(_ArgumentGroup):
1855
+
1856
+ def __init__(self, container, required=False):
1857
+ super(_MutuallyExclusiveGroup, self).__init__(container)
1858
+ self.required = required
1859
+ self._container = container
1860
+
1861
+ def _add_action(self, action):
1862
+ if action.required:
1863
+ msg = _('mutually exclusive arguments must be optional')
1864
+ raise ValueError(msg)
1865
+ action = self._container._add_action(action)
1866
+ self._group_actions.append(action)
1867
+ return action
1868
+
1869
+ def _remove_action(self, action):
1870
+ self._container._remove_action(action)
1871
+ self._group_actions.remove(action)
1872
+
1873
+
1874
+class ArgumentParser(_AttributeHolder, _ActionsContainer):
1875
+ """Object for parsing command line strings into Python objects.
1876
+
1877
+ Keyword Arguments:
1878
+ - prog -- The name of the program (default: sys.argv[0])
1879
+ - usage -- A usage message (default: auto-generated from arguments)
1880
+ - description -- A description of what the program does
1881
+ - epilog -- Text following the argument descriptions
1882
+ - parents -- Parsers whose arguments should be copied into this one
1883
+ - formatter_class -- HelpFormatter class for printing help messages
1884
+ - prefix_chars -- Characters that prefix optional arguments
1885
+ - fromfile_prefix_chars -- Characters that prefix files containing
1886
+ additional arguments
1887
+ - argument_default -- The default value for all arguments
1888
+ - conflict_handler -- String indicating how to handle conflicts
1889
+ - add_help -- Add a -h/-help option
1890
+ """
1891
+
1892
+ def __init__(self,
1893
+ prog=None,
1894
+ usage=None,
1895
+ description=None,
1896
+ epilog=None,
1897
+ version=None,
1898
+ parents=[],
1899
+ formatter_class=HelpFormatter,
1900
+ prefix_chars='-',
1901
+ fromfile_prefix_chars=None,
1902
+ argument_default=None,
1903
+ conflict_handler='error',
1904
+ add_help=True):
1905
+
1906
+ if version is not None:
1907
+ import warnings
1908
+ warnings.warn(
1909
+ """The "version" argument to ArgumentParser is deprecated. """
1910
+ """Please use """
1911
+ """"add_argument(..., action='version', version="N", ...)" """
1912
+ """instead""", DeprecationWarning)
1913
+
1914
+ superinit = super(ArgumentParser, self).__init__
1915
+ superinit(description=description,
1916
+ prefix_chars=prefix_chars,
1917
+ argument_default=argument_default,
1918
+ conflict_handler=conflict_handler)
1919
+
1920
+ # default setting for prog
1921
+ if prog is None:
1922
+ prog = _os.path.basename(_sys.argv[0])
1923
+
1924
+ self.prog = prog
1925
+ self.usage = usage
1926
+ self.epilog = epilog
1927
+ self.version = version
1928
+ self.formatter_class = formatter_class
1929
+ self.fromfile_prefix_chars = fromfile_prefix_chars
1930
+ self.add_help = add_help
1931
+
1932
+ add_group = self.add_argument_group
1933
+ self._positionals = add_group(_('positional arguments'))
1934
+ self._optionals = add_group(_('optional arguments'))
1935
+ self._subparsers = None
1936
+
1937
+ # register types
1938
+ def identity(string):
1939
+ return string
1940
+ self.register('type', None, identity)
1941
+
1942
+ # add help and version arguments if necessary
1943
+ # (using explicit default to override global argument_default)
1944
+ if '-' in prefix_chars:
1945
+ default_prefix = '-'
1946
+ else:
1947
+ default_prefix = prefix_chars[0]
1948
+ if self.add_help:
1949
+ self.add_argument(
1950
+ default_prefix+'h', default_prefix*2+'help',
1951
+ action='help', default=SUPPRESS,
1952
+ help=_('show this help message and exit'))
1953
+ if self.version:
1954
+ self.add_argument(
1955
+ default_prefix+'v', default_prefix*2+'version',
1956
+ action='version', default=SUPPRESS,
1957
+ version=self.version,
1958
+ help=_("show program's version number and exit"))
1959
+
1960
+ # add parent arguments and defaults
1961
+ for parent in parents:
1962
+ self._add_container_actions(parent)
1963
+ try:
1964
+ defaults = parent._defaults
1965
+ except AttributeError:
1966
+ pass
1967
+ else:
1968
+ self._defaults.update(defaults)
1969
+
1970
+ # =======================
1971
+ # Pretty __repr__ methods
1972
+ # =======================
1973
+ def _get_kwargs(self):
1974
+ names = [
1975
+ 'prog',
1976
+ 'usage',
1977
+ 'description',
1978
+ 'version',
1979
+ 'formatter_class',
1980
+ 'conflict_handler',
1981
+ 'add_help',
1982
+ ]
1983
+ return [(name, getattr(self, name)) for name in names]
1984
+
1985
+ # ==================================
1986
+ # Optional/Positional adding methods
1987
+ # ==================================
1988
+ def add_subparsers(self, **kwargs):
1989
+ if self._subparsers is not None:
1990
+ self.error(_('cannot have multiple subparser arguments'))
1991
+
1992
+ # add the parser class to the arguments if it's not present
1993
+ kwargs.setdefault('parser_class', type(self))
1994
+
1995
+ if 'title' in kwargs or 'description' in kwargs:
1996
+ title = _(kwargs.pop('title', 'subcommands'))
1997
+ description = _(kwargs.pop('description', None))
1998
+ self._subparsers = self.add_argument_group(title, description)
1999
+ else:
2000
+ self._subparsers = self._positionals
2001
+
2002
+ # prog defaults to the usage message of this parser, skipping
2003
+ # optional arguments and with no "usage:" prefix
2004
+ if kwargs.get('prog') is None:
2005
+ formatter = self._get_formatter()
2006
+ positionals = self._get_positional_actions()
2007
+ groups = self._mutually_exclusive_groups
2008
+ formatter.add_usage(self.usage, positionals, groups, '')
2009
+ kwargs['prog'] = formatter.format_help().strip()
2010
+
2011
+ # create the parsers action and add it to the positionals list
2012
+ parsers_class = self._pop_action_class(kwargs, 'parsers')
2013
+ action = parsers_class(option_strings=[], **kwargs)
2014
+ self._subparsers._add_action(action)
2015
+
2016
+ # return the created parsers action
2017
+ return action
2018
+
2019
+ def _add_action(self, action):
2020
+ if action.option_strings:
2021
+ self._optionals._add_action(action)
2022
+ else:
2023
+ self._positionals._add_action(action)
2024
+ return action
2025
+
2026
+ def _get_optional_actions(self):
2027
+ return [action
2028
+ for action in self._actions
2029
+ if action.option_strings]
2030
+
2031
+ def _get_positional_actions(self):
2032
+ return [action
2033
+ for action in self._actions
2034
+ if not action.option_strings]
2035
+
2036
+ # =====================================
2037
+ # Command line argument parsing methods
2038
+ # =====================================
2039
+ def parse_args(self, args=None, namespace=None):
2040
+ args, argv = self.parse_known_args(args, namespace)
2041
+ if argv:
2042
+ msg = _('unrecognized arguments: %s')
2043
+ self.error(msg % ' '.join(argv))
2044
+ return args
2045
+
2046
+ def parse_known_args(self, args=None, namespace=None):
2047
+ # args default to the system args
2048
+ if args is None:
2049
+ args = _sys.argv[1:]
2050
+
2051
+ # default Namespace built from parser defaults
2052
+ if namespace is None:
2053
+ namespace = Namespace()
2054
+
2055
+ # add any action defaults that aren't present
2056
+ for action in self._actions:
2057
+ if action.dest is not SUPPRESS:
2058
+ if not hasattr(namespace, action.dest):
2059
+ if action.default is not SUPPRESS:
2060
+ setattr(namespace, action.dest, action.default)
2061
+
2062
+ # add any parser defaults that aren't present
2063
+ for dest in self._defaults:
2064
+ if not hasattr(namespace, dest):
2065
+ setattr(namespace, dest, self._defaults[dest])
2066
+
2067
+ # parse the arguments and exit if there are any errors
2068
+ try:
2069
+ namespace, args = self._parse_known_args(args, namespace)
2070
+ if hasattr(namespace, _UNRECOGNIZED_ARGS_ATTR):
2071
+ args.extend(getattr(namespace, _UNRECOGNIZED_ARGS_ATTR))
2072
+ delattr(namespace, _UNRECOGNIZED_ARGS_ATTR)
2073
+ return namespace, args
2074
+ except ArgumentError:
2075
+ err = _sys.exc_info()[1]
2076
+ self.error(str(err))
2077
+
2078
+ def _parse_known_args(self, arg_strings, namespace):
2079
+ # replace arg strings that are file references
2080
+ if self.fromfile_prefix_chars is not None:
2081
+ arg_strings = self._read_args_from_files(arg_strings)
2082
+
2083
+ # map all mutually exclusive arguments to the other arguments
2084
+ # they can't occur with
2085
+ action_conflicts = {}
2086
+ for mutex_group in self._mutually_exclusive_groups:
2087
+ group_actions = mutex_group._group_actions
2088
+ for i, mutex_action in enumerate(mutex_group._group_actions):
2089
+ conflicts = action_conflicts.setdefault(mutex_action, [])
2090
+ conflicts.extend(group_actions[:i])
2091
+ conflicts.extend(group_actions[i + 1:])
2092
+
2093
+ # find all option indices, and determine the arg_string_pattern
2094
+ # which has an 'O' if there is an option at an index,
2095
+ # an 'A' if there is an argument, or a '-' if there is a '--'
2096
+ option_string_indices = {}
2097
+ arg_string_pattern_parts = []
2098
+ arg_strings_iter = iter(arg_strings)
2099
+ for i, arg_string in enumerate(arg_strings_iter):
2100
+
2101
+ # all args after -- are non-options
2102
+ if arg_string == '--':
2103
+ arg_string_pattern_parts.append('-')
2104
+ for arg_string in arg_strings_iter:
2105
+ arg_string_pattern_parts.append('A')
2106
+
2107
+ # otherwise, add the arg to the arg strings
2108
+ # and note the index if it was an option
2109
+ else:
2110
+ option_tuple = self._parse_optional(arg_string)
2111
+ if option_tuple is None:
2112
+ pattern = 'A'
2113
+ else:
2114
+ option_string_indices[i] = option_tuple
2115
+ pattern = 'O'
2116
+ arg_string_pattern_parts.append(pattern)
2117
+
2118
+ # join the pieces together to form the pattern
2119
+ arg_strings_pattern = ''.join(arg_string_pattern_parts)
2120
+
2121
+ # converts arg strings to the appropriate and then takes the action
2122
+ seen_actions = set()
2123
+ seen_non_default_actions = set()
2124
+
2125
+ def take_action(action, argument_strings, option_string=None):
2126
+ seen_actions.add(action)
2127
+ argument_values = self._get_values(action, argument_strings)
2128
+
2129
+ # error if this argument is not allowed with other previously
2130
+ # seen arguments, assuming that actions that use the default
2131
+ # value don't really count as "present"
2132
+ if argument_values is not action.default:
2133
+ seen_non_default_actions.add(action)
2134
+ for conflict_action in action_conflicts.get(action, []):
2135
+ if conflict_action in seen_non_default_actions:
2136
+ msg = _('not allowed with argument %s')
2137
+ action_name = _get_action_name(conflict_action)
2138
+ raise ArgumentError(action, msg % action_name)
2139
+
2140
+ # take the action if we didn't receive a SUPPRESS value
2141
+ # (e.g. from a default)
2142
+ if argument_values is not SUPPRESS:
2143
+ action(self, namespace, argument_values, option_string)
2144
+
2145
+ # function to convert arg_strings into an optional action
2146
+ def consume_optional(start_index):
2147
+
2148
+ # get the optional identified at this index
2149
+ option_tuple = option_string_indices[start_index]
2150
+ action, option_string, explicit_arg = option_tuple
2151
+
2152
+ # identify additional optionals in the same arg string
2153
+ # (e.g. -xyz is the same as -x -y -z if no args are required)
2154
+ match_argument = self._match_argument
2155
+ action_tuples = []
2156
+ while True:
2157
+
2158
+ # if we found no optional action, skip it
2159
+ if action is None:
2160
+ extras.append(arg_strings[start_index])
2161
+ return start_index + 1
2162
+
2163
+ # if there is an explicit argument, try to match the
2164
+ # optional's string arguments to only this
2165
+ if explicit_arg is not None:
2166
+ arg_count = match_argument(action, 'A')
2167
+
2168
+ # if the action is a single-dash option and takes no
2169
+ # arguments, try to parse more single-dash options out
2170
+ # of the tail of the option string
2171
+ chars = self.prefix_chars
2172
+ if arg_count == 0 and option_string[1] not in chars:
2173
+ action_tuples.append((action, [], option_string))
2174
+ char = option_string[0]
2175
+ option_string = char + explicit_arg[0]
2176
+ new_explicit_arg = explicit_arg[1:] or None
2177
+ optionals_map = self._option_string_actions
2178
+ if option_string in optionals_map:
2179
+ action = optionals_map[option_string]
2180
+ explicit_arg = new_explicit_arg
2181
+ else:
2182
+ msg = _('ignored explicit argument %r')
2183
+ raise ArgumentError(action, msg % explicit_arg)
2184
+
2185
+ # if the action expect exactly one argument, we've
2186
+ # successfully matched the option; exit the loop
2187
+ elif arg_count == 1:
2188
+ stop = start_index + 1
2189
+ args = [explicit_arg]
2190
+ action_tuples.append((action, args, option_string))
2191
+ break
2192
+
2193
+ # error if a double-dash option did not use the
2194
+ # explicit argument
2195
+ else:
2196
+ msg = _('ignored explicit argument %r')
2197
+ raise ArgumentError(action, msg % explicit_arg)
2198
+
2199
+ # if there is no explicit argument, try to match the
2200
+ # optional's string arguments with the following strings
2201
+ # if successful, exit the loop
2202
+ else:
2203
+ start = start_index + 1
2204
+ selected_patterns = arg_strings_pattern[start:]
2205
+ arg_count = match_argument(action, selected_patterns)
2206
+ stop = start + arg_count
2207
+ args = arg_strings[start:stop]
2208
+ action_tuples.append((action, args, option_string))
2209
+ break
2210
+
2211
+ # add the Optional to the list and return the index at which
2212
+ # the Optional's string args stopped
2213
+ assert action_tuples
2214
+ for action, args, option_string in action_tuples:
2215
+ take_action(action, args, option_string)
2216
+ return stop
2217
+
2218
+ # the list of Positionals left to be parsed; this is modified
2219
+ # by consume_positionals()
2220
+ positionals = self._get_positional_actions()
2221
+
2222
+ # function to convert arg_strings into positional actions
2223
+ def consume_positionals(start_index):
2224
+ # match as many Positionals as possible
2225
+ match_partial = self._match_arguments_partial
2226
+ selected_pattern = arg_strings_pattern[start_index:]
2227
+ arg_counts = match_partial(positionals, selected_pattern)
2228
+
2229
+ # slice off the appropriate arg strings for each Positional
2230
+ # and add the Positional and its args to the list
2231
+ for action, arg_count in zip(positionals, arg_counts):
2232
+ args = arg_strings[start_index: start_index + arg_count]
2233
+ start_index += arg_count
2234
+ take_action(action, args)
2235
+
2236
+ # slice off the Positionals that we just parsed and return the
2237
+ # index at which the Positionals' string args stopped
2238
+ positionals[:] = positionals[len(arg_counts):]
2239
+ return start_index
2240
+
2241
+ # consume Positionals and Optionals alternately, until we have
2242
+ # passed the last option string
2243
+ extras = []
2244
+ start_index = 0
2245
+ if option_string_indices:
2246
+ max_option_string_index = max(option_string_indices)
2247
+ else:
2248
+ max_option_string_index = -1
2249
+ while start_index <= max_option_string_index:
2250
+
2251
+ # consume any Positionals preceding the next option
2252
+ next_option_string_index = min([
2253
+ index
2254
+ for index in option_string_indices
2255
+ if index >= start_index])
2256
+ if start_index != next_option_string_index:
2257
+ positionals_end_index = consume_positionals(start_index)
2258
+
2259
+ # only try to parse the next optional if we didn't consume
2260
+ # the option string during the positionals parsing
2261
+ if positionals_end_index > start_index:
2262
+ start_index = positionals_end_index
2263
+ continue
2264
+ else:
2265
+ start_index = positionals_end_index
2266
+
2267
+ # if we consumed all the positionals we could and we're not
2268
+ # at the index of an option string, there were extra arguments
2269
+ if start_index not in option_string_indices:
2270
+ strings = arg_strings[start_index:next_option_string_index]
2271
+ extras.extend(strings)
2272
+ start_index = next_option_string_index
2273
+
2274
+ # consume the next optional and any arguments for it
2275
+ start_index = consume_optional(start_index)
2276
+
2277
+ # consume any positionals following the last Optional
2278
+ stop_index = consume_positionals(start_index)
2279
+
2280
+ # if we didn't consume all the argument strings, there were extras
2281
+ extras.extend(arg_strings[stop_index:])
2282
+
2283
+ # if we didn't use all the Positional objects, there were too few
2284
+ # arg strings supplied.
2285
+ if positionals:
2286
+ self.error(_('too few arguments'))
2287
+
2288
+ # make sure all required actions were present, and convert defaults.
2289
+ for action in self._actions:
2290
+ if action not in seen_actions:
2291
+ if action.required:
2292
+ name = _get_action_name(action)
2293
+ self.error(_('argument %s is required') % name)
2294
+ else:
2295
+ # Convert action default now instead of doing it before
2296
+ # parsing arguments to avoid calling convert functions
2297
+ # twice (which may fail) if the argument was given, but
2298
+ # only if it was defined already in the namespace
2299
+ if (action.default is not None and
2300
+ isinstance(action.default, basestring) and
2301
+ hasattr(namespace, action.dest) and
2302
+ action.default is getattr(namespace, action.dest)):
2303
+ setattr(namespace, action.dest,
2304
+ self._get_value(action, action.default))
2305
+
2306
+ # make sure all required groups had one option present
2307
+ for group in self._mutually_exclusive_groups:
2308
+ if group.required:
2309
+ for action in group._group_actions:
2310
+ if action in seen_non_default_actions:
2311
+ break
2312
+
2313
+ # if no actions were used, report the error
2314
+ else:
2315
+ names = [_get_action_name(action)
2316
+ for action in group._group_actions
2317
+ if action.help is not SUPPRESS]
2318
+ msg = _('one of the arguments %s is required')
2319
+ self.error(msg % ' '.join(names))
2320
+
2321
+ # return the updated namespace and the extra arguments
2322
+ return namespace, extras
2323
+
2324
+ def _read_args_from_files(self, arg_strings):
2325
+ # expand arguments referencing files
2326
+ new_arg_strings = []
2327
+ for arg_string in arg_strings:
2328
+
2329
+ # for regular arguments, just add them back into the list
2330
+ if arg_string[0] not in self.fromfile_prefix_chars:
2331
+ new_arg_strings.append(arg_string)
2332
+
2333
+ # replace arguments referencing files with the file content
2334
+ else:
2335
+ try:
2336
+ args_file = open(arg_string[1:])
2337
+ try:
2338
+ arg_strings = []
2339
+ for arg_line in args_file.read().splitlines():
2340
+ for arg in self.convert_arg_line_to_args(arg_line):
2341
+ arg_strings.append(arg)
2342
+ arg_strings = self._read_args_from_files(arg_strings)
2343
+ new_arg_strings.extend(arg_strings)
2344
+ finally:
2345
+ args_file.close()
2346
+ except IOError:
2347
+ err = _sys.exc_info()[1]
2348
+ self.error(str(err))
2349
+
2350
+ # return the modified argument list
2351
+ return new_arg_strings
2352
+
2353
+ def convert_arg_line_to_args(self, arg_line):
2354
+ return [arg_line]
2355
+
2356
+ def _match_argument(self, action, arg_strings_pattern):
2357
+ # match the pattern for this action to the arg strings
2358
+ nargs_pattern = self._get_nargs_pattern(action)
2359
+ match = _re.match(nargs_pattern, arg_strings_pattern)
2360
+
2361
+ # raise an exception if we weren't able to find a match
2362
+ if match is None:
2363
+ nargs_errors = {
2364
+ None: _('expected one argument'),
2365
+ OPTIONAL: _('expected at most one argument'),
2366
+ ONE_OR_MORE: _('expected at least one argument'),
2367
+ }
2368
+ default = _('expected %s argument(s)') % action.nargs
2369
+ msg = nargs_errors.get(action.nargs, default)
2370
+ raise ArgumentError(action, msg)
2371
+
2372
+ # return the number of arguments matched
2373
+ return len(match.group(1))
2374
+
2375
+ def _match_arguments_partial(self, actions, arg_strings_pattern):
2376
+ # progressively shorten the actions list by slicing off the
2377
+ # final actions until we find a match
2378
+ result = []
2379
+ for i in range(len(actions), 0, -1):
2380
+ actions_slice = actions[:i]
2381
+ pattern = ''.join([self._get_nargs_pattern(action)
2382
+ for action in actions_slice])
2383
+ match = _re.match(pattern, arg_strings_pattern)
2384
+ if match is not None:
2385
+ result.extend([len(string) for string in match.groups()])
2386
+ break
2387
+
2388
+ # return the list of arg string counts
2389
+ return result
2390
+
2391
+ def _parse_optional(self, arg_string):
2392
+ # if it's an empty string, it was meant to be a positional
2393
+ if not arg_string:
2394
+ return None
2395
+
2396
+ # if it doesn't start with a prefix, it was meant to be positional
2397
+ if not arg_string[0] in self.prefix_chars:
2398
+ return None
2399
+
2400
+ # if the option string is present in the parser, return the action
2401
+ if arg_string in self._option_string_actions:
2402
+ action = self._option_string_actions[arg_string]
2403
+ return action, arg_string, None
2404
+
2405
+ # if it's just a single character, it was meant to be positional
2406
+ if len(arg_string) == 1:
2407
+ return None
2408
+
2409
+ # if the option string before the "=" is present, return the action
2410
+ if '=' in arg_string:
2411
+ option_string, explicit_arg = arg_string.split('=', 1)
2412
+ if option_string in self._option_string_actions:
2413
+ action = self._option_string_actions[option_string]
2414
+ return action, option_string, explicit_arg
2415
+
2416
+ # search through all possible prefixes of the option string
2417
+ # and all actions in the parser for possible interpretations
2418
+ option_tuples = self._get_option_tuples(arg_string)
2419
+
2420
+ # if multiple actions match, the option string was ambiguous
2421
+ if len(option_tuples) > 1:
2422
+ options = ', '.join([option_string
2423
+ for action, option_string, explicit_arg in option_tuples])
2424
+ tup = arg_string, options
2425
+ self.error(_('ambiguous option: %s could match %s') % tup)
2426
+
2427
+ # if exactly one action matched, this segmentation is good,
2428
+ # so return the parsed action
2429
+ elif len(option_tuples) == 1:
2430
+ option_tuple, = option_tuples
2431
+ return option_tuple
2432
+
2433
+ # if it was not found as an option, but it looks like a negative
2434
+ # number, it was meant to be positional
2435
+ # unless there are negative-number-like options
2436
+ if self._negative_number_matcher.match(arg_string):
2437
+ if not self._has_negative_number_optionals:
2438
+ return None
2439
+
2440
+ # if it contains a space, it was meant to be a positional
2441
+ if ' ' in arg_string:
2442
+ return None
2443
+
2444
+ # it was meant to be an optional but there is no such option
2445
+ # in this parser (though it might be a valid option in a subparser)
2446
+ return None, arg_string, None
2447
+
2448
+ def _get_option_tuples(self, option_string):
2449
+ result = []
2450
+
2451
+ # option strings starting with two prefix characters are only
2452
+ # split at the '='
2453
+ chars = self.prefix_chars
2454
+ if option_string[0] in chars and option_string[1] in chars:
2455
+ if '=' in option_string:
2456
+ option_prefix, explicit_arg = option_string.split('=', 1)
2457
+ else:
2458
+ option_prefix = option_string
2459
+ explicit_arg = None
2460
+ for option_string in self._option_string_actions:
2461
+ if option_string.startswith(option_prefix):
2462
+ action = self._option_string_actions[option_string]
2463
+ tup = action, option_string, explicit_arg
2464
+ result.append(tup)
2465
+
2466
+ # single character options can be concatenated with their arguments
2467
+ # but multiple character options always have to have their argument
2468
+ # separate
2469
+ elif option_string[0] in chars and option_string[1] not in chars:
2470
+ option_prefix = option_string
2471
+ explicit_arg = None
2472
+ short_option_prefix = option_string[:2]
2473
+ short_explicit_arg = option_string[2:]
2474
+
2475
+ for option_string in self._option_string_actions:
2476
+ if option_string == short_option_prefix:
2477
+ action = self._option_string_actions[option_string]
2478
+ tup = action, option_string, short_explicit_arg
2479
+ result.append(tup)
2480
+ elif option_string.startswith(option_prefix):
2481
+ action = self._option_string_actions[option_string]
2482
+ tup = action, option_string, explicit_arg
2483
+ result.append(tup)
2484
+
2485
+ # shouldn't ever get here
2486
+ else:
2487
+ self.error(_('unexpected option string: %s') % option_string)
2488
+
2489
+ # return the collected option tuples
2490
+ return result
2491
+
2492
+ def _get_nargs_pattern(self, action):
2493
+ # in all examples below, we have to allow for '--' args
2494
+ # which are represented as '-' in the pattern
2495
+ nargs = action.nargs
2496
+
2497
+ # the default (None) is assumed to be a single argument
2498
+ if nargs is None:
2499
+ nargs_pattern = '(-*A-*)'
2500
+
2501
+ # allow zero or one arguments
2502
+ elif nargs == OPTIONAL:
2503
+ nargs_pattern = '(-*A?-*)'
2504
+
2505
+ # allow zero or more arguments
2506
+ elif nargs == ZERO_OR_MORE:
2507
+ nargs_pattern = '(-*[A-]*)'
2508
+
2509
+ # allow one or more arguments
2510
+ elif nargs == ONE_OR_MORE:
2511
+ nargs_pattern = '(-*A[A-]*)'
2512
+
2513
+ # allow any number of options or arguments
2514
+ elif nargs == REMAINDER:
2515
+ nargs_pattern = '([-AO]*)'
2516
+
2517
+ # allow one argument followed by any number of options or arguments
2518
+ elif nargs == PARSER:
2519
+ nargs_pattern = '(-*A[-AO]*)'
2520
+
2521
+ # all others should be integers
2522
+ else:
2523
+ nargs_pattern = '(-*%s-*)' % '-*'.join('A' * nargs)
2524
+
2525
+ # if this is an optional action, -- is not allowed
2526
+ if action.option_strings:
2527
+ nargs_pattern = nargs_pattern.replace('-*', '')
2528
+ nargs_pattern = nargs_pattern.replace('-', '')
2529
+
2530
+ # return the pattern
2531
+ return nargs_pattern
2532
+
2533
+ # ========================
2534
+ # Value conversion methods
2535
+ # ========================
2536
+ def _get_values(self, action, arg_strings):
2537
+ # for everything but PARSER args, strip out '--'
2538
+ if action.nargs not in [PARSER, REMAINDER]:
2539
+ arg_strings = [s for s in arg_strings if s != '--']
2540
+
2541
+ # optional argument produces a default when not present
2542
+ if not arg_strings and action.nargs == OPTIONAL:
2543
+ if action.option_strings:
2544
+ value = action.const
2545
+ else:
2546
+ value = action.default
2547
+ if isinstance(value, basestring):
2548
+ value = self._get_value(action, value)
2549
+ self._check_value(action, value)
2550
+
2551
+ # when nargs='*' on a positional, if there were no command-line
2552
+ # args, use the default if it is anything other than None
2553
+ elif (not arg_strings and action.nargs == ZERO_OR_MORE and
2554
+ not action.option_strings):
2555
+ if action.default is not None:
2556
+ value = action.default
2557
+ else:
2558
+ value = arg_strings
2559
+ self._check_value(action, value)
2560
+
2561
+ # single argument or optional argument produces a single value
2562
+ elif len(arg_strings) == 1 and action.nargs in [None, OPTIONAL]:
2563
+ arg_string, = arg_strings
2564
+ value = self._get_value(action, arg_string)
2565
+ self._check_value(action, value)
2566
+
2567
+ # REMAINDER arguments convert all values, checking none
2568
+ elif action.nargs == REMAINDER:
2569
+ value = [self._get_value(action, v) for v in arg_strings]
2570
+
2571
+ # PARSER arguments convert all values, but check only the first
2572
+ elif action.nargs == PARSER:
2573
+ value = [self._get_value(action, v) for v in arg_strings]
2574
+ self._check_value(action, value[0])
2575
+
2576
+ # all other types of nargs produce a list
2577
+ else:
2578
+ value = [self._get_value(action, v) for v in arg_strings]
2579
+ for v in value:
2580
+ self._check_value(action, v)
2581
+
2582
+ # return the converted value
2583
+ return value
2584
+
2585
+ def _get_value(self, action, arg_string):
2586
+ type_func = self._registry_get('type', action.type, action.type)
2587
+ if not _callable(type_func):
2588
+ msg = _('%r is not callable')
2589
+ raise ArgumentError(action, msg % type_func)
2590
+
2591
+ # convert the value to the appropriate type
2592
+ try:
2593
+ result = type_func(arg_string)
2594
+
2595
+ # ArgumentTypeErrors indicate errors
2596
+ except ArgumentTypeError:
2597
+ name = getattr(action.type, '__name__', repr(action.type))
2598
+ msg = str(_sys.exc_info()[1])
2599
+ raise ArgumentError(action, msg)
2600
+
2601
+ # TypeErrors or ValueErrors also indicate errors
2602
+ except (TypeError, ValueError):
2603
+ name = getattr(action.type, '__name__', repr(action.type))
2604
+ msg = _('invalid %s value: %r')
2605
+ raise ArgumentError(action, msg % (name, arg_string))
2606
+
2607
+ # return the converted value
2608
+ return result
2609
+
2610
+ def _check_value(self, action, value):
2611
+ # converted value must be one of the choices (if specified)
2612
+ if action.choices is not None and value not in action.choices:
2613
+ tup = value, ', '.join(map(repr, action.choices))
2614
+ msg = _('invalid choice: %r (choose from %s)') % tup
2615
+ raise ArgumentError(action, msg)
2616
+
2617
+ # =======================
2618
+ # Help-formatting methods
2619
+ # =======================
2620
+ def format_usage(self):
2621
+ formatter = self._get_formatter()
2622
+ formatter.add_usage(self.usage, self._actions,
2623
+ self._mutually_exclusive_groups)
2624
+ return formatter.format_help()
2625
+
2626
+ def format_help(self):
2627
+ formatter = self._get_formatter()
2628
+
2629
+ # usage
2630
+ formatter.add_usage(self.usage, self._actions,
2631
+ self._mutually_exclusive_groups)
2632
+
2633
+ # description
2634
+ formatter.add_text(self.description)
2635
+
2636
+ # positionals, optionals and user-defined groups
2637
+ for action_group in self._action_groups:
2638
+ formatter.start_section(action_group.title)
2639
+ formatter.add_text(action_group.description)
2640
+ formatter.add_arguments(action_group._group_actions)
2641
+ formatter.end_section()
2642
+
2643
+ # epilog
2644
+ formatter.add_text(self.epilog)
2645
+
2646
+ # determine help from format above
2647
+ return formatter.format_help()
2648
+
2649
+ def format_version(self):
2650
+ import warnings
2651
+ warnings.warn(
2652
+ 'The format_version method is deprecated -- the "version" '
2653
+ 'argument to ArgumentParser is no longer supported.',
2654
+ DeprecationWarning)
2655
+ formatter = self._get_formatter()
2656
+ formatter.add_text(self.version)
2657
+ return formatter.format_help()
2658
+
2659
+ def _get_formatter(self):
2660
+ return self.formatter_class(prog=self.prog)
2661
+
2662
+ # =====================
2663
+ # Help-printing methods
2664
+ # =====================
2665
+ def print_usage(self, file=None):
2666
+ if file is None:
2667
+ file = _sys.stdout
2668
+ self._print_message(self.format_usage(), file)
2669
+
2670
+ def print_help(self, file=None):
2671
+ if file is None:
2672
+ file = _sys.stdout
2673
+ self._print_message(self.format_help(), file)
2674
+
2675
+ def print_version(self, file=None):
2676
+ import warnings
2677
+ warnings.warn(
2678
+ 'The print_version method is deprecated -- the "version" '
2679
+ 'argument to ArgumentParser is no longer supported.',
2680
+ DeprecationWarning)
2681
+ self._print_message(self.format_version(), file)
2682
+
2683
+ def _print_message(self, message, file=None):
2684
+ if message:
2685
+ if file is None:
2686
+ file = _sys.stderr
2687
+ file.write(message)
2688
+
2689
+ # ===============
2690
+ # Exiting methods
2691
+ # ===============
2692
+ def exit(self, status=0, message=None):
2693
+ if message:
2694
+ self._print_message(message, _sys.stderr)
2695
+ _sys.exit(status)
2696
+
2697
+ def error(self, message):
2698
+ """error(message: string)
2699
+
2700
+ Prints a usage message incorporating the message to stderr and
2701
+ exits.
2702
+
2703
+ If you override this in a subclass, it should not return -- it
2704
+ should either exit or raise an exception.
2705
+ """
2706
+ self.print_usage(_sys.stderr)
2707
+ self.exit(2, _('%s: error: %s\n') % (self.prog, message))
2708
--
160
--
2709
2.13.5
161
2.29.2
2710
162
2711
163
diff view generated by jsdifflib
1
From: Alberto Garcia <berto@igalia.com>
1
From: Alberto Garcia <berto@igalia.com>
2
2
3
The quorum driver does not implement bdrv_co_block_status() and
4
because of that it always reports that it contains data, even if all its
5
children are known to be empty.
6
7
One consequence of this is that if we, for example, create a quorum with
8
a size of 10GB and mirror it to a new image, the operation will
9
write 10GB of actual zeroes to the destination image, wasting a lot of
10
time and disk space.
11
12
Since a quorum has an arbitrary number of children of potentially
13
different formats, there is no way to report all possible allocation
14
status flags in a way that makes sense, so this implementation only
15
reports whether a given region is known to contain zeroes
16
(BDRV_BLOCK_ZERO) or not (BDRV_BLOCK_DATA).
17
18
If all children agree that a region contains zeroes then we can return
19
BDRV_BLOCK_ZERO using the smallest size reported by the children
20
(because all agree that a region of at least that size contains
21
zeroes).
22
23
If at least one child disagrees we have to return BDRV_BLOCK_DATA.
24
In this case we use the largest of the sizes reported by the children
25
that didn't return BDRV_BLOCK_ZERO (because we know that there won't
26
be an agreement for at least that size).
27
3
Signed-off-by: Alberto Garcia <berto@igalia.com>
28
Signed-off-by: Alberto Garcia <berto@igalia.com>
4
Message-id: a57dd6274e1b6dc9c28769fec4c7ea543be5c5e3.1503580370.git.berto@igalia.com
29
Tested-by: Tao Xu <tao3.xu@intel.com>
5
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
30
Reviewed-by: Max Reitz <mreitz@redhat.com>
31
Message-Id: <db83149afcf0f793effc8878089d29af4c46ffe1.1605286097.git.berto@igalia.com>
32
Signed-off-by: Max Reitz <mreitz@redhat.com>
6
---
33
---
7
tests/test-throttle.c | 77 +++++++++++++++++++++++++++++++++++++++++++++++++++
34
block/quorum.c | 52 +++++++++++++
8
1 file changed, 77 insertions(+)
35
tests/qemu-iotests/312 | 148 +++++++++++++++++++++++++++++++++++++
9
36
tests/qemu-iotests/312.out | 67 +++++++++++++++++
10
diff --git a/tests/test-throttle.c b/tests/test-throttle.c
37
tests/qemu-iotests/group | 1 +
38
4 files changed, 268 insertions(+)
39
create mode 100755 tests/qemu-iotests/312
40
create mode 100644 tests/qemu-iotests/312.out
41
42
diff --git a/block/quorum.c b/block/quorum.c
11
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
12
--- a/tests/test-throttle.c
44
--- a/block/quorum.c
13
+++ b/tests/test-throttle.c
45
+++ b/block/quorum.c
14
@@ -XXX,XX +XXX,XX @@ static void test_is_valid(void)
46
@@ -XXX,XX +XXX,XX @@
15
test_is_valid_for_value(1, true);
47
#include "qemu/module.h"
48
#include "qemu/option.h"
49
#include "block/block_int.h"
50
+#include "block/coroutines.h"
51
#include "block/qdict.h"
52
#include "qapi/error.h"
53
#include "qapi/qapi-events-block.h"
54
@@ -XXX,XX +XXX,XX @@ static void quorum_child_perm(BlockDriverState *bs, BdrvChild *c,
55
| DEFAULT_PERM_UNCHANGED;
16
}
56
}
17
57
18
+static void test_ranges(void)
58
+/*
59
+ * Each one of the children can report different status flags even
60
+ * when they contain the same data, so what this function does is
61
+ * return BDRV_BLOCK_ZERO if *all* children agree that a certain
62
+ * region contains zeroes, and BDRV_BLOCK_DATA otherwise.
63
+ */
64
+static int coroutine_fn quorum_co_block_status(BlockDriverState *bs,
65
+ bool want_zero,
66
+ int64_t offset, int64_t count,
67
+ int64_t *pnum, int64_t *map,
68
+ BlockDriverState **file)
19
+{
69
+{
20
+ int i;
70
+ BDRVQuorumState *s = bs->opaque;
21
+
71
+ int i, ret;
22
+ for (i = 0; i < BUCKETS_COUNT; i++) {
72
+ int64_t pnum_zero = count;
23
+ LeakyBucket *b = &cfg.buckets[i];
73
+ int64_t pnum_data = 0;
24
+ throttle_config_init(&cfg);
74
+
25
+
75
+ for (i = 0; i < s->num_children; i++) {
26
+ /* avg = 0 means throttling is disabled, but the config is valid */
76
+ int64_t bytes;
27
+ b->avg = 0;
77
+ ret = bdrv_co_common_block_status_above(s->children[i]->bs, NULL, false,
28
+ g_assert(throttle_is_valid(&cfg, NULL));
78
+ want_zero, offset, count,
29
+ g_assert(!throttle_enabled(&cfg));
79
+ &bytes, NULL, NULL, NULL);
30
+
80
+ if (ret < 0) {
31
+ /* These are valid configurations (values <= THROTTLE_VALUE_MAX) */
81
+ quorum_report_bad(QUORUM_OP_TYPE_READ, offset, count,
32
+ b->avg = 1;
82
+ s->children[i]->bs->node_name, ret);
33
+ g_assert(throttle_is_valid(&cfg, NULL));
83
+ pnum_data = count;
34
+
84
+ break;
35
+ b->avg = THROTTLE_VALUE_MAX;
85
+ }
36
+ g_assert(throttle_is_valid(&cfg, NULL));
86
+ /*
37
+
87
+ * Even if all children agree about whether there are zeroes
38
+ b->avg = THROTTLE_VALUE_MAX;
88
+ * or not at @offset they might disagree on the size, so use
39
+ b->max = THROTTLE_VALUE_MAX;
89
+ * the smallest when reporting BDRV_BLOCK_ZERO and the largest
40
+ g_assert(throttle_is_valid(&cfg, NULL));
90
+ * when reporting BDRV_BLOCK_DATA.
41
+
91
+ */
42
+ /* Values over THROTTLE_VALUE_MAX are not allowed */
92
+ if (ret & BDRV_BLOCK_ZERO) {
43
+ b->avg = THROTTLE_VALUE_MAX + 1;
93
+ pnum_zero = MIN(pnum_zero, bytes);
44
+ g_assert(!throttle_is_valid(&cfg, NULL));
94
+ } else {
45
+
95
+ pnum_data = MAX(pnum_data, bytes);
46
+ b->avg = THROTTLE_VALUE_MAX;
96
+ }
47
+ b->max = THROTTLE_VALUE_MAX + 1;
97
+ }
48
+ g_assert(!throttle_is_valid(&cfg, NULL));
98
+
49
+
99
+ if (pnum_data) {
50
+ /* burst_length must be between 1 and THROTTLE_VALUE_MAX */
100
+ *pnum = pnum_data;
51
+ b->avg = 1;
101
+ return BDRV_BLOCK_DATA;
52
+ b->max = 1;
102
+ } else {
53
+ b->burst_length = 0;
103
+ *pnum = pnum_zero;
54
+ g_assert(!throttle_is_valid(&cfg, NULL));
104
+ return BDRV_BLOCK_ZERO;
55
+
56
+ b->avg = 1;
57
+ b->max = 1;
58
+ b->burst_length = 1;
59
+ g_assert(throttle_is_valid(&cfg, NULL));
60
+
61
+ b->avg = 1;
62
+ b->max = 1;
63
+ b->burst_length = THROTTLE_VALUE_MAX;
64
+ g_assert(throttle_is_valid(&cfg, NULL));
65
+
66
+ b->avg = 1;
67
+ b->max = 1;
68
+ b->burst_length = THROTTLE_VALUE_MAX + 1;
69
+ g_assert(!throttle_is_valid(&cfg, NULL));
70
+
71
+ /* burst_length * max cannot exceed THROTTLE_VALUE_MAX */
72
+ b->avg = 1;
73
+ b->max = 2;
74
+ b->burst_length = THROTTLE_VALUE_MAX / 2;
75
+ g_assert(throttle_is_valid(&cfg, NULL));
76
+
77
+ b->avg = 1;
78
+ b->max = 3;
79
+ b->burst_length = THROTTLE_VALUE_MAX / 2;
80
+ g_assert(!throttle_is_valid(&cfg, NULL));
81
+
82
+ b->avg = 1;
83
+ b->max = THROTTLE_VALUE_MAX;
84
+ b->burst_length = 1;
85
+ g_assert(throttle_is_valid(&cfg, NULL));
86
+
87
+ b->avg = 1;
88
+ b->max = THROTTLE_VALUE_MAX;
89
+ b->burst_length = 2;
90
+ g_assert(!throttle_is_valid(&cfg, NULL));
91
+ }
105
+ }
92
+}
106
+}
93
+
107
+
94
static void test_max_is_missing_limit(void)
108
static const char *const quorum_strong_runtime_opts[] = {
95
{
109
QUORUM_OPT_VOTE_THRESHOLD,
96
int i;
110
QUORUM_OPT_BLKVERIFY,
97
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
111
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_quorum = {
98
g_test_add_func("/throttle/config/enabled", test_enabled);
112
.bdrv_close = quorum_close,
99
g_test_add_func("/throttle/config/conflicting", test_conflicting_config);
113
.bdrv_gather_child_options = quorum_gather_child_options,
100
g_test_add_func("/throttle/config/is_valid", test_is_valid);
114
.bdrv_dirname = quorum_dirname,
101
+ g_test_add_func("/throttle/config/ranges", test_ranges);
115
+ .bdrv_co_block_status = quorum_co_block_status,
102
g_test_add_func("/throttle/config/max", test_max_is_missing_limit);
116
103
g_test_add_func("/throttle/config/iops_size",
117
.bdrv_co_flush_to_disk = quorum_co_flush,
104
test_iops_size_is_missing_limit);
118
119
diff --git a/tests/qemu-iotests/312 b/tests/qemu-iotests/312
120
new file mode 100755
121
index XXXXXXX..XXXXXXX
122
--- /dev/null
123
+++ b/tests/qemu-iotests/312
124
@@ -XXX,XX +XXX,XX @@
125
+#!/usr/bin/env bash
126
+#
127
+# Test drive-mirror with quorum
128
+#
129
+# The goal of this test is to check how the quorum driver reports
130
+# regions that are known to read as zeroes (BDRV_BLOCK_ZERO). The idea
131
+# is that drive-mirror will try the efficient representation of zeroes
132
+# in the destination image instead of writing actual zeroes.
133
+#
134
+# Copyright (C) 2020 Igalia, S.L.
135
+# Author: Alberto Garcia <berto@igalia.com>
136
+#
137
+# This program is free software; you can redistribute it and/or modify
138
+# it under the terms of the GNU General Public License as published by
139
+# the Free Software Foundation; either version 2 of the License, or
140
+# (at your option) any later version.
141
+#
142
+# This program is distributed in the hope that it will be useful,
143
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
144
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
145
+# GNU General Public License for more details.
146
+#
147
+# You should have received a copy of the GNU General Public License
148
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
149
+#
150
+
151
+# creator
152
+owner=berto@igalia.com
153
+
154
+seq=`basename $0`
155
+echo "QA output created by $seq"
156
+
157
+status=1    # failure is the default!
158
+
159
+_cleanup()
160
+{
161
+ _rm_test_img "$TEST_IMG.0"
162
+ _rm_test_img "$TEST_IMG.1"
163
+ _rm_test_img "$TEST_IMG.2"
164
+ _rm_test_img "$TEST_IMG.3"
165
+ _cleanup_qemu
166
+}
167
+trap "_cleanup; exit \$status" 0 1 2 3 15
168
+
169
+# get standard environment, filters and checks
170
+. ./common.rc
171
+. ./common.filter
172
+. ./common.qemu
173
+
174
+_supported_fmt qcow2
175
+_supported_proto file
176
+_supported_os Linux
177
+_unsupported_imgopts cluster_size data_file
178
+
179
+echo
180
+echo '### Create all images' # three source (quorum), one destination
181
+echo
182
+TEST_IMG="$TEST_IMG.0" _make_test_img -o cluster_size=64k 10M
183
+TEST_IMG="$TEST_IMG.1" _make_test_img -o cluster_size=64k 10M
184
+TEST_IMG="$TEST_IMG.2" _make_test_img -o cluster_size=64k 10M
185
+TEST_IMG="$TEST_IMG.3" _make_test_img -o cluster_size=64k 10M
186
+
187
+quorum="driver=raw,file.driver=quorum,file.vote-threshold=2"
188
+quorum="$quorum,file.children.0.file.filename=$TEST_IMG.0"
189
+quorum="$quorum,file.children.1.file.filename=$TEST_IMG.1"
190
+quorum="$quorum,file.children.2.file.filename=$TEST_IMG.2"
191
+quorum="$quorum,file.children.0.driver=$IMGFMT"
192
+quorum="$quorum,file.children.1.driver=$IMGFMT"
193
+quorum="$quorum,file.children.2.driver=$IMGFMT"
194
+
195
+echo
196
+echo '### Output of qemu-img map (empty quorum)'
197
+echo
198
+$QEMU_IMG map --image-opts $quorum | _filter_qemu_img_map
199
+
200
+# Now we write data to the quorum. All three images will read as
201
+# zeroes in all cases, but with different ways to represent them
202
+# (unallocated clusters, zero clusters, data clusters with zeroes)
203
+# that will have an effect on how the data will be mirrored and the
204
+# output of qemu-img map on the resulting image.
205
+echo
206
+echo '### Write data to the quorum'
207
+echo
208
+# Test 1: data regions surrounded by unallocated clusters.
209
+# Three data regions, the largest one (0x30000) will be picked, end result:
210
+# offset 0x10000, length 0x30000 -> data
211
+$QEMU_IO -c "write -P 0 $((0x10000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu_io
212
+$QEMU_IO -c "write -P 0 $((0x10000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
213
+$QEMU_IO -c "write -P 0 $((0x10000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
214
+
215
+# Test 2: zero regions surrounded by data clusters.
216
+# First we allocate the data clusters.
217
+$QEMU_IO -c "open -o $quorum" -c "write -P 0 $((0x100000)) $((0x40000))" | _filter_qemu_io
218
+
219
+# Three zero regions, the smallest one (0x10000) will be picked, end result:
220
+# offset 0x100000, length 0x10000 -> data
221
+# offset 0x110000, length 0x10000 -> zeroes
222
+# offset 0x120000, length 0x20000 -> data
223
+$QEMU_IO -c "write -z $((0x110000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu_io
224
+$QEMU_IO -c "write -z $((0x110000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
225
+$QEMU_IO -c "write -z $((0x110000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
226
+
227
+# Test 3: zero clusters surrounded by unallocated clusters.
228
+# Everything reads as zeroes, no effect on the end result.
229
+$QEMU_IO -c "write -z $((0x150000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu_io
230
+$QEMU_IO -c "write -z $((0x150000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
231
+$QEMU_IO -c "write -z $((0x150000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
232
+
233
+# Test 4: mix of data and zero clusters.
234
+# The zero region will be ignored in favor of the largest data region
235
+# (0x20000), end result:
236
+# offset 0x200000, length 0x20000 -> data
237
+$QEMU_IO -c "write -P 0 $((0x200000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu_io
238
+$QEMU_IO -c "write -z $((0x200000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
239
+$QEMU_IO -c "write -P 0 $((0x200000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
240
+
241
+echo
242
+echo '### Launch the drive-mirror job'
243
+echo
244
+qemu_comm_method="qmp" _launch_qemu -drive if=virtio,"$quorum"
245
+h=$QEMU_HANDLE
246
+_send_qemu_cmd $h "{ 'execute': 'qmp_capabilities' }" 'return'
247
+
248
+_send_qemu_cmd $h \
249
+ "{'execute': 'drive-mirror',
250
+ 'arguments': {'device': 'virtio0',
251
+ 'format': '$IMGFMT',
252
+ 'target': '$TEST_IMG.3',
253
+ 'sync': 'full',
254
+ 'mode': 'existing' }}" \
255
+ "BLOCK_JOB_READY.*virtio0"
256
+
257
+_send_qemu_cmd $h \
258
+ "{ 'execute': 'block-job-complete',
259
+ 'arguments': { 'device': 'virtio0' } }" \
260
+ 'BLOCK_JOB_COMPLETED'
261
+
262
+_send_qemu_cmd $h "{ 'execute': 'quit' }" ''
263
+
264
+echo
265
+echo '### Output of qemu-img map (destination image)'
266
+echo
267
+$QEMU_IMG map "$TEST_IMG.3" | _filter_qemu_img_map
268
+
269
+# success, all done
270
+echo "*** done"
271
+rm -f $seq.full
272
+status=0
273
diff --git a/tests/qemu-iotests/312.out b/tests/qemu-iotests/312.out
274
new file mode 100644
275
index XXXXXXX..XXXXXXX
276
--- /dev/null
277
+++ b/tests/qemu-iotests/312.out
278
@@ -XXX,XX +XXX,XX @@
279
+QA output created by 312
280
+
281
+### Create all images
282
+
283
+Formatting 'TEST_DIR/t.IMGFMT.0', fmt=IMGFMT size=10485760
284
+Formatting 'TEST_DIR/t.IMGFMT.1', fmt=IMGFMT size=10485760
285
+Formatting 'TEST_DIR/t.IMGFMT.2', fmt=IMGFMT size=10485760
286
+Formatting 'TEST_DIR/t.IMGFMT.3', fmt=IMGFMT size=10485760
287
+
288
+### Output of qemu-img map (empty quorum)
289
+
290
+Offset Length File
291
+
292
+### Write data to the quorum
293
+
294
+wrote 65536/65536 bytes at offset 65536
295
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
296
+wrote 196608/196608 bytes at offset 65536
297
+192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
298
+wrote 131072/131072 bytes at offset 65536
299
+128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
300
+wrote 262144/262144 bytes at offset 1048576
301
+256 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
302
+wrote 65536/65536 bytes at offset 1114112
303
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
304
+wrote 196608/196608 bytes at offset 1114112
305
+192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
306
+wrote 131072/131072 bytes at offset 1114112
307
+128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
308
+wrote 65536/65536 bytes at offset 1376256
309
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
310
+wrote 196608/196608 bytes at offset 1376256
311
+192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
312
+wrote 131072/131072 bytes at offset 1376256
313
+128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
314
+wrote 65536/65536 bytes at offset 2097152
315
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
316
+wrote 196608/196608 bytes at offset 2097152
317
+192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
318
+wrote 131072/131072 bytes at offset 2097152
319
+128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
320
+
321
+### Launch the drive-mirror job
322
+
323
+{ 'execute': 'qmp_capabilities' }
324
+{"return": {}}
325
+{'execute': 'drive-mirror', 'arguments': {'device': 'virtio0', 'format': 'IMGFMT', 'target': 'TEST_DIR/t.IMGFMT.3', 'sync': 'full', 'mode': 'existing' }}
326
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "virtio0"}}
327
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "virtio0"}}
328
+{"return": {}}
329
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "virtio0"}}
330
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "virtio0", "len": 10485760, "offset": 10485760, "speed": 0, "type": "mirror"}}
331
+{ 'execute': 'block-job-complete', 'arguments': { 'device': 'virtio0' } }
332
+{"return": {}}
333
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "virtio0"}}
334
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "virtio0"}}
335
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "virtio0", "len": 10485760, "offset": 10485760, "speed": 0, "type": "mirror"}}
336
+{ 'execute': 'quit' }
337
+
338
+### Output of qemu-img map (destination image)
339
+
340
+Offset Length File
341
+0x10000 0x30000 TEST_DIR/t.IMGFMT.3
342
+0x100000 0x10000 TEST_DIR/t.IMGFMT.3
343
+0x120000 0x20000 TEST_DIR/t.IMGFMT.3
344
+0x200000 0x20000 TEST_DIR/t.IMGFMT.3
345
+*** done
346
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
347
index XXXXXXX..XXXXXXX 100644
348
--- a/tests/qemu-iotests/group
349
+++ b/tests/qemu-iotests/group
350
@@ -XXX,XX +XXX,XX @@
351
307 rw quick export
352
308 rw
353
309 rw auto quick
354
+312 rw auto quick
105
--
355
--
106
2.13.5
356
2.29.2
107
357
108
358
diff view generated by jsdifflib
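As a rough illustration of the aggregation rule used by quorum_co_block_status() in the patch above, here is a minimal standalone C sketch (not QEMU code; the ChildStatus type and the sample sizes are invented for the example): if every child reports zeroes, the smallest agreed size wins; as soon as one child reports data, the largest size reported by a disagreeing child wins.

/*
 * Standalone sketch of the quorum block-status aggregation rule described
 * in the patch above.  Not QEMU code; names and values are made up.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool zero;      /* child reported the region as reading as zeroes */
    int64_t bytes;  /* size of the region the child reported on */
} ChildStatus;

static ChildStatus aggregate(const ChildStatus *children, int n, int64_t count)
{
    int64_t pnum_zero = count;  /* shrinks to the smallest agreed zero size */
    int64_t pnum_data = 0;      /* grows to the largest disagreeing data size */
    int i;

    for (i = 0; i < n; i++) {
        if (children[i].zero) {
            pnum_zero = children[i].bytes < pnum_zero ? children[i].bytes
                                                      : pnum_zero;
        } else {
            pnum_data = children[i].bytes > pnum_data ? children[i].bytes
                                                      : pnum_data;
        }
    }

    if (pnum_data) {
        return (ChildStatus){ .zero = false, .bytes = pnum_data };
    }
    return (ChildStatus){ .zero = true, .bytes = pnum_zero };
}

int main(void)
{
    /* Two children report zeroes (64k and 192k), one reports data (128k):
     * the result is "data" for 128k, the largest disagreeing size. */
    ChildStatus c[] = {
        { .zero = true,  .bytes = 0x10000 },
        { .zero = true,  .bytes = 0x30000 },
        { .zero = false, .bytes = 0x20000 },
    };
    ChildStatus r = aggregate(c, 3, 0x40000);

    printf("%s for %" PRId64 " bytes\n", r.zero ? "zero" : "data", r.bytes);
    return 0;
}

This mirrors the min/max behaviour that iotest 312 exercises with its three differently written source images.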
1
From: Alberto Garcia <berto@igalia.com>
1
From: Alberto Garcia <berto@igalia.com>
2
2
3
The throttling code can internally change the value of bkt->max if it
3
This simply calls bdrv_co_pwrite_zeroes() in all children.
4
hasn't been set by the user. The problem with this is that if we want
4
5
to retrieve the original value we have to undo this change first. This
5
bs->supported_zero_flags is also set to the flags that are supported
6
is ugly and unnecessary: this patch removes the throttle_fix_bucket()
6
by all children.
7
and throttle_unfix_bucket() functions completely and moves the logic
8
to throttle_compute_wait().
9
7
10
Signed-off-by: Alberto Garcia <berto@igalia.com>
8
Signed-off-by: Alberto Garcia <berto@igalia.com>
11
Reviewed-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
9
Message-Id: <2f09c842781fe336b4c2e40036bba577b7430190.1605286097.git.berto@igalia.com>
12
Message-id: 5b0b9e1ac6eb208d709eddc7b09e7669a523bff3.1503580370.git.berto@igalia.com
10
Reviewed-by: Max Reitz <mreitz@redhat.com>
13
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
11
Signed-off-by: Max Reitz <mreitz@redhat.com>
14
---
12
---
15
util/throttle.c | 62 +++++++++++++++++++++------------------------------------
13
block/quorum.c | 36 ++++++++++++++++++++++++++++++++++--
16
1 file changed, 23 insertions(+), 39 deletions(-)
14
tests/qemu-iotests/312 | 11 +++++++++++
15
tests/qemu-iotests/312.out | 8 ++++++++
16
3 files changed, 53 insertions(+), 2 deletions(-)
17
17
18
diff --git a/util/throttle.c b/util/throttle.c
18
diff --git a/block/quorum.c b/block/quorum.c
19
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
20
--- a/util/throttle.c
20
--- a/block/quorum.c
21
+++ b/util/throttle.c
21
+++ b/block/quorum.c
22
@@ -XXX,XX +XXX,XX @@ static int64_t throttle_do_compute_wait(double limit, double extra)
22
@@ -XXX,XX +XXX,XX @@ static void write_quorum_entry(void *opaque)
23
int64_t throttle_compute_wait(LeakyBucket *bkt)
23
QuorumChildRequest *sacb = &acb->qcrs[i];
24
25
sacb->bs = s->children[i]->bs;
26
- sacb->ret = bdrv_co_pwritev(s->children[i], acb->offset, acb->bytes,
27
- acb->qiov, acb->flags);
28
+ if (acb->flags & BDRV_REQ_ZERO_WRITE) {
29
+ sacb->ret = bdrv_co_pwrite_zeroes(s->children[i], acb->offset,
30
+ acb->bytes, acb->flags);
31
+ } else {
32
+ sacb->ret = bdrv_co_pwritev(s->children[i], acb->offset, acb->bytes,
33
+ acb->qiov, acb->flags);
34
+ }
35
if (sacb->ret == 0) {
36
acb->success_count++;
37
} else {
38
@@ -XXX,XX +XXX,XX @@ static int quorum_co_pwritev(BlockDriverState *bs, uint64_t offset,
39
return ret;
40
}
41
42
+static int quorum_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
43
+ int bytes, BdrvRequestFlags flags)
44
+
45
+{
46
+ return quorum_co_pwritev(bs, offset, bytes, NULL,
47
+ flags | BDRV_REQ_ZERO_WRITE);
48
+}
49
+
50
static int64_t quorum_getlength(BlockDriverState *bs)
24
{
51
{
25
double extra; /* the number of extra units blocking the io */
52
BDRVQuorumState *s = bs->opaque;
26
+ double bucket_size; /* I/O before throttling to bkt->avg */
53
@@ -XXX,XX +XXX,XX @@ static QemuOptsList quorum_runtime_opts = {
27
+ double burst_bucket_size; /* Before throttling to bkt->max */
54
},
28
55
};
29
if (!bkt->avg) {
56
30
return 0;
57
+static void quorum_refresh_flags(BlockDriverState *bs)
31
}
58
+{
32
59
+ BDRVQuorumState *s = bs->opaque;
33
- /* If the bucket is full then we have to wait */
60
+ int i;
34
- extra = bkt->level - bkt->max * bkt->burst_length;
61
+
35
+ if (!bkt->max) {
62
+ bs->supported_zero_flags =
36
+ /* If bkt->max is 0 we still want to allow short bursts of I/O
63
+ BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK;
37
+ * from the guest, otherwise every other request will be throttled
64
+
38
+ * and performance will suffer considerably. */
65
+ for (i = 0; i < s->num_children; i++) {
39
+ bucket_size = bkt->avg / 10;
66
+ bs->supported_zero_flags &= s->children[i]->bs->supported_zero_flags;
40
+ burst_bucket_size = 0;
41
+ } else {
42
+ /* If we have a burst limit then we have to wait until all I/O
43
+ * at burst rate has finished before throttling to bkt->avg */
44
+ bucket_size = bkt->max * bkt->burst_length;
45
+ burst_bucket_size = bkt->max / 10;
46
+ }
67
+ }
47
+
68
+
48
+ /* If the main bucket is full then we have to wait */
69
+ bs->supported_zero_flags |= BDRV_REQ_WRITE_UNCHANGED;
49
+ extra = bkt->level - bucket_size;
70
+}
50
if (extra > 0) {
71
+
51
return throttle_do_compute_wait(bkt->avg, extra);
72
static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
73
Error **errp)
74
{
75
@@ -XXX,XX +XXX,XX @@ static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
76
s->next_child_index = s->num_children;
77
78
bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED;
79
+ quorum_refresh_flags(bs);
80
81
g_free(opened);
82
goto exit;
83
@@ -XXX,XX +XXX,XX @@ static void quorum_add_child(BlockDriverState *bs, BlockDriverState *child_bs,
52
}
84
}
53
85
s->children = g_renew(BdrvChild *, s->children, s->num_children + 1);
54
- /* If the bucket is not full yet we have to make sure that we
86
s->children[s->num_children++] = child;
55
- * fulfill the goal of bkt->max units per second. */
87
+ quorum_refresh_flags(bs);
56
+ /* If the main bucket is not full yet we still have to check the
88
57
+ * burst bucket in order to enforce the burst limit */
89
out:
58
if (bkt->burst_length > 1) {
90
bdrv_drained_end(bs);
59
- /* We use 1/10 of the max value to smooth the throttling.
91
@@ -XXX,XX +XXX,XX @@ static void quorum_del_child(BlockDriverState *bs, BdrvChild *child,
60
- * See throttle_fix_bucket() for more details. */
92
s->children = g_renew(BdrvChild *, s->children, --s->num_children);
61
- extra = bkt->burst_level - bkt->max / 10;
93
bdrv_unref_child(bs, child);
62
+ extra = bkt->burst_level - burst_bucket_size;
94
63
if (extra > 0) {
95
+ quorum_refresh_flags(bs);
64
return throttle_do_compute_wait(bkt->max, extra);
96
bdrv_drained_end(bs);
65
}
66
@@ -XXX,XX +XXX,XX @@ bool throttle_is_valid(ThrottleConfig *cfg, Error **errp)
67
return true;
68
}
97
}
69
98
70
-/* fix bucket parameters */
99
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_quorum = {
71
-static void throttle_fix_bucket(LeakyBucket *bkt)
100
72
-{
101
.bdrv_co_preadv = quorum_co_preadv,
73
- double min;
102
.bdrv_co_pwritev = quorum_co_pwritev,
74
-
103
+ .bdrv_co_pwrite_zeroes = quorum_co_pwrite_zeroes,
75
- /* zero bucket level */
104
76
- bkt->level = bkt->burst_level = 0;
105
.bdrv_add_child = quorum_add_child,
77
-
106
.bdrv_del_child = quorum_del_child,
78
- /* If bkt->max is 0 we still want to allow short bursts of I/O
107
diff --git a/tests/qemu-iotests/312 b/tests/qemu-iotests/312
79
- * from the guest, otherwise every other request will be throttled
108
index XXXXXXX..XXXXXXX 100755
80
- * and performance will suffer considerably. */
109
--- a/tests/qemu-iotests/312
81
- min = bkt->avg / 10;
110
+++ b/tests/qemu-iotests/312
82
- if (bkt->avg && !bkt->max) {
111
@@ -XXX,XX +XXX,XX @@ $QEMU_IO -c "write -P 0 $((0x200000)) $((0x10000))" "$TEST_IMG.0" | _filter_qemu
83
- bkt->max = min;
112
$QEMU_IO -c "write -z $((0x200000)) $((0x30000))" "$TEST_IMG.1" | _filter_qemu_io
84
- }
113
$QEMU_IO -c "write -P 0 $((0x200000)) $((0x20000))" "$TEST_IMG.2" | _filter_qemu_io
85
-}
114
86
-
115
+# Test 5: write data to a region and then zeroize it, doing it
87
-/* undo internal bucket parameter changes (see throttle_fix_bucket()) */
116
+# directly on the quorum device instead of the individual images.
88
-static void throttle_unfix_bucket(LeakyBucket *bkt)
117
+# This has no effect on the end result but proves that the quorum driver
89
-{
118
+# supports 'write -z'.
90
- if (bkt->max < bkt->avg) {
119
+$QEMU_IO -c "open -o $quorum" -c "write -P 1 $((0x250000)) $((0x10000))" | _filter_qemu_io
91
- bkt->max = 0;
120
+# Verify the data that we just wrote
92
- }
121
+$QEMU_IO -c "open -o $quorum" -c "read -P 1 $((0x250000)) $((0x10000))" | _filter_qemu_io
93
-}
122
+$QEMU_IO -c "open -o $quorum" -c "write -z $((0x250000)) $((0x10000))" | _filter_qemu_io
94
-
123
+# Now it should read back as zeroes
95
/* Used to configure the throttle
124
+$QEMU_IO -c "open -o $quorum" -c "read -P 0 $((0x250000)) $((0x10000))" | _filter_qemu_io
96
*
125
+
97
* @ts: the throttle state we are working on
126
echo
98
@@ -XXX,XX +XXX,XX @@ void throttle_config(ThrottleState *ts,
127
echo '### Launch the drive-mirror job'
99
128
echo
100
ts->cfg = *cfg;
129
diff --git a/tests/qemu-iotests/312.out b/tests/qemu-iotests/312.out
101
130
index XXXXXXX..XXXXXXX 100644
102
+ /* Zero bucket level */
131
--- a/tests/qemu-iotests/312.out
103
for (i = 0; i < BUCKETS_COUNT; i++) {
132
+++ b/tests/qemu-iotests/312.out
104
- throttle_fix_bucket(&ts->cfg.buckets[i]);
133
@@ -XXX,XX +XXX,XX @@ wrote 196608/196608 bytes at offset 2097152
105
+ ts->cfg.buckets[i].level = 0;
134
192 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
106
+ ts->cfg.buckets[i].burst_level = 0;
135
wrote 131072/131072 bytes at offset 2097152
107
}
136
128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
108
137
+wrote 65536/65536 bytes at offset 2424832
109
ts->previous_leak = qemu_clock_get_ns(clock_type);
138
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
110
@@ -XXX,XX +XXX,XX @@ void throttle_config(ThrottleState *ts,
139
+read 65536/65536 bytes at offset 2424832
111
*/
140
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
112
void throttle_get_config(ThrottleState *ts, ThrottleConfig *cfg)
141
+wrote 65536/65536 bytes at offset 2424832
113
{
142
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
114
- int i;
143
+read 65536/65536 bytes at offset 2424832
115
-
144
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
116
*cfg = ts->cfg;
145
117
-
146
### Launch the drive-mirror job
118
- for (i = 0; i < BUCKETS_COUNT; i++) {
119
- throttle_unfix_bucket(&cfg->buckets[i]);
120
- }
121
}
122
123
147
124
--
148
--
125
2.13.5
149
2.29.2
126
150
127
151
diff view generated by jsdifflib
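The supported_zero_flags computation in quorum_refresh_flags() above boils down to a bitwise intersection across the children, plus the driver's own support for write-unchanged requests. A tiny standalone sketch of that idea (the flag bit values here are invented; QEMU's real BDRV_REQ_* constants differ):

/*
 * Illustration of "only advertise what every child supports".
 * Not QEMU code; the flag bits are made up for the example.
 */
#include <stdio.h>

enum {
    REQ_FUA             = 1 << 0,
    REQ_MAY_UNMAP       = 1 << 1,
    REQ_NO_FALLBACK     = 1 << 2,
    REQ_WRITE_UNCHANGED = 1 << 3,
};

int main(void)
{
    unsigned child_flags[] = {
        REQ_FUA | REQ_MAY_UNMAP,                   /* child 0 */
        REQ_FUA | REQ_MAY_UNMAP | REQ_NO_FALLBACK, /* child 1 */
        REQ_FUA,                                   /* child 2 */
    };
    unsigned supported = REQ_FUA | REQ_MAY_UNMAP | REQ_NO_FALLBACK;
    int i;

    /* Keep only the flags that *all* children support... */
    for (i = 0; i < 3; i++) {
        supported &= child_flags[i];
    }
    /* ...and add what the quorum driver handles itself. */
    supported |= REQ_WRITE_UNCHANGED;

    /* Prints 0x9, i.e. REQ_FUA | REQ_WRITE_UNCHANGED. */
    printf("supported zero flags: 0x%x\n", supported);
    return 0;
}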
1
From: Alberto Garcia <berto@igalia.com>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
The way the throttling algorithm works is that requests start being
3
An NVMe drive cannot be shrunk.
4
throttled once the bucket level exceeds the burst limit. When we get
5
there the bucket leaks at the rate set by the user (bkt->avg), and
6
that leak rate is what prevents guest I/O from exceeding the desired
7
limit.
8
4
9
If we don't allow bursts (i.e. bkt->max == 0) then we can start
5
Since commit c80d8b06cfa we can use the @exact parameter (set
10
throttling requests immediately. The problem with keeping the
6
to false) to return success if the block device is larger than
11
threshold at 0 is that it only allows one request at a time, and as
7
the requested offset (even if it cannot be shrunk).
12
soon as there's a bit of I/O from the guest every other request will
13
be throttled and performance will suffer considerably. That can even
14
make the guest unable to reach the throttle limit if that limit is
15
high enough, and that happens regardless of the block scheduler used
16
by the guest.
17
8
18
Increasing that threshold gives flexibility to the guest, allowing it
9
Use this parameter to implement the NVMe truncate() coroutine,
19
to perform short bursts of I/O before being throttled. Increasing the
10
similarly to how it is done for the iscsi and file-posix drivers
20
threshold too much does not make a difference in the long run (because
11
(see commit 82325ae5f2f "Evaluate @exact in protocol drivers").
21
it's the leak rate that defines the actual throughput) but it does
22
allow the guest to perform longer initial bursts and exceed the
23
throttle limit for a short while.
24
12
25
A burst value of bkt->avg / 10 allows the guest to perform 100ms'
13
Reported-by: Xueqiang Wei <xuwei@redhat.com>
26
worth of I/O at the target rate without being throttled.
14
Suggested-by: Max Reitz <mreitz@redhat.com>
15
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
16
Message-Id: <20201210125202.858656-1-philmd@redhat.com>
17
Signed-off-by: Max Reitz <mreitz@redhat.com>
18
---
19
block/nvme.c | 24 ++++++++++++++++++++++++
20
1 file changed, 24 insertions(+)
27
21
28
Signed-off-by: Alberto Garcia <berto@igalia.com>
22
diff --git a/block/nvme.c b/block/nvme.c
29
Message-id: 31aae6645f0d1fbf3860fb2b528b757236f0c0a7.1503580370.git.berto@igalia.com
30
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
31
---
32
util/throttle.c | 11 +++--------
33
1 file changed, 3 insertions(+), 8 deletions(-)
34
35
diff --git a/util/throttle.c b/util/throttle.c
36
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
37
--- a/util/throttle.c
24
--- a/block/nvme.c
38
+++ b/util/throttle.c
25
+++ b/block/nvme.c
39
@@ -XXX,XX +XXX,XX @@ static void throttle_fix_bucket(LeakyBucket *bkt)
26
@@ -XXX,XX +XXX,XX @@ out:
40
/* zero bucket level */
27
41
bkt->level = bkt->burst_level = 0;
28
}
42
29
43
- /* The following is done to cope with the Linux CFQ block scheduler
30
+static int coroutine_fn nvme_co_truncate(BlockDriverState *bs, int64_t offset,
44
- * which regroup reads and writes by block of 100ms in the guest.
31
+ bool exact, PreallocMode prealloc,
45
- * When they are two process one making reads and one making writes cfq
32
+ BdrvRequestFlags flags, Error **errp)
46
- * make a pattern looking like the following:
33
+{
47
- * WWWWWWWWWWWRRRRRRRRRRRRRRWWWWWWWWWWWWWwRRRRRRRRRRRRRRRRR
34
+ int64_t cur_length;
48
- * Having a max burst value of 100ms of the average will help smooth the
35
+
49
- * throttling
36
+ if (prealloc != PREALLOC_MODE_OFF) {
50
- */
37
+ error_setg(errp, "Unsupported preallocation mode '%s'",
51
+ /* If bkt->max is 0 we still want to allow short bursts of I/O
38
+ PreallocMode_str(prealloc));
52
+ * from the guest, otherwise every other request will be throttled
39
+ return -ENOTSUP;
53
+ * and performance will suffer considerably. */
40
+ }
54
min = bkt->avg / 10;
41
+
55
if (bkt->avg && !bkt->max) {
42
+ cur_length = nvme_getlength(bs);
56
bkt->max = min;
43
+ if (offset != cur_length && exact) {
44
+ error_setg(errp, "Cannot resize NVMe devices");
45
+ return -ENOTSUP;
46
+ } else if (offset > cur_length) {
47
+ error_setg(errp, "Cannot grow NVMe devices");
48
+ return -EINVAL;
49
+ }
50
+
51
+ return 0;
52
+}
53
54
static int nvme_reopen_prepare(BDRVReopenState *reopen_state,
55
BlockReopenQueue *queue, Error **errp)
56
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_nvme = {
57
.bdrv_close = nvme_close,
58
.bdrv_getlength = nvme_getlength,
59
.bdrv_probe_blocksizes = nvme_probe_blocksizes,
60
+ .bdrv_co_truncate = nvme_co_truncate,
61
62
.bdrv_co_preadv = nvme_co_preadv,
63
.bdrv_co_pwritev = nvme_co_pwritev,
57
--
64
--
58
2.13.5
65
2.29.2
59
66
60
67
diff view generated by jsdifflib
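The interplay between @exact and the current namespace size in nvme_co_truncate() above can be summarised with a small decision sketch (illustration only; fake_nvme_truncate() and the sizes are made up, and the preallocation-mode check is omitted):

/*
 * Sketch of the resize rules from the nvme truncate patch above.
 * Not driver code; returns 0 on success, a negative errno otherwise.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static int fake_nvme_truncate(int64_t cur_length, int64_t offset, bool exact)
{
    if (offset != cur_length && exact) {
        return -ENOTSUP;   /* exact resize of the device is impossible */
    } else if (offset > cur_length) {
        return -EINVAL;    /* growing past the device size never works */
    }
    return 0;              /* no change, or shrink with exact=false: no-op */
}

int main(void)
{
    int64_t len = 10 << 20; /* pretend the namespace is 10 MiB */

    printf("shrink, exact=false: %d\n", fake_nvme_truncate(len, 4 << 20, false));
    printf("shrink, exact=true:  %d\n", fake_nvme_truncate(len, 4 << 20, true));
    printf("grow,   exact=false: %d\n", fake_nvme_truncate(len, 16 << 20, false));
    return 0;
}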
1
From: Alberto Garcia <berto@igalia.com>
1
The first parameter passed to _send_qemu_cmd is supposed to be the
2
$QEMU_HANDLE. Test 102 does not do so here; fix it.
2
3
3
The level of the burst bucket is stored in bkt.burst_level, not
4
As a result, the output changes: Now we see the prompt this command is
4
bkt.burst_length.
5
supposedly waiting for before the resize message - as it should be.
5
6
6
Signed-off-by: Alberto Garcia <berto@igalia.com>
7
Signed-off-by: Max Reitz <mreitz@redhat.com>
7
Reviewed-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
8
Message-Id: <20201217153803.101231-2-mreitz@redhat.com>
8
Message-id: 49aab2711d02f285567f3b3b13a113847af33812.1503580370.git.berto@igalia.com
9
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
9
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
10
---
10
---
11
include/qemu/throttle.h | 2 +-
11
tests/qemu-iotests/102 | 2 +-
12
1 file changed, 1 insertion(+), 1 deletion(-)
12
tests/qemu-iotests/102.out | 2 +-
13
2 files changed, 2 insertions(+), 2 deletions(-)
13
14
14
diff --git a/include/qemu/throttle.h b/include/qemu/throttle.h
15
diff --git a/tests/qemu-iotests/102 b/tests/qemu-iotests/102
16
index XXXXXXX..XXXXXXX 100755
17
--- a/tests/qemu-iotests/102
18
+++ b/tests/qemu-iotests/102
19
@@ -XXX,XX +XXX,XX @@ $QEMU_IO -c 'write 0 64k' "$TEST_IMG" | _filter_qemu_io
20
qemu_comm_method=monitor _launch_qemu -drive if=none,file="$TEST_IMG",id=drv0
21
22
# Wait for a prompt to appear (so we know qemu has opened the image)
23
-_send_qemu_cmd '' '(qemu)'
24
+_send_qemu_cmd $QEMU_HANDLE '' '(qemu)'
25
26
$QEMU_IMG resize --shrink --image-opts \
27
"driver=raw,file.driver=file,file.filename=$TEST_IMG,file.locking=off" \
28
diff --git a/tests/qemu-iotests/102.out b/tests/qemu-iotests/102.out
15
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
16
--- a/include/qemu/throttle.h
30
--- a/tests/qemu-iotests/102.out
17
+++ b/include/qemu/throttle.h
31
+++ b/tests/qemu-iotests/102.out
18
@@ -XXX,XX +XXX,XX @@ typedef enum {
32
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=65536
19
* - The bkt.avg rate does not apply until the bucket is full,
33
wrote 65536/65536 bytes at offset 0
20
* allowing the user to do bursts until then. The I/O limit during
34
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
21
* bursts is bkt.max. To enforce this limit we keep an additional
35
QEMU X.Y.Z monitor - type 'help' for more information
22
- * bucket in bkt.burst_length that leaks at a rate of bkt.max units
36
-Image resized.
23
+ * bucket in bkt.burst_level that leaks at a rate of bkt.max units
37
(qemu)
24
* per second.
38
+Image resized.
25
*
39
(qemu) qemu-io drv0 map
26
* - Because of all of the above, the user can perform I/O at a
40
64 KiB (0x10000) bytes allocated at offset 0 bytes (0x0)
41
*** done
27
--
42
--
28
2.13.5
43
2.29.2
29
44
30
45
diff view generated by jsdifflib
1
From: Dan Aloni <dan@kernelim.com>
1
With bash 5.1, the output of the following script changes:
2
2
3
The number of queues returned by the admin command should:
3
a=("double  space")
4
a=${a[@]:0:1}
5
echo "$a"
4
6
5
1) Only mention the number of non-admin queues.
7
from "double space" to "double space", i.e. all white space is
6
2) Be zero-based, meaning that '0 == one non-admin queue',
8
preserved as-is. This is probably what we actually want here (judging
7
'1 == two non-admin queues', and so forth.
9
from the "...to accommodate pathnames with spaces" comment), but before
10
5.1, we would have to quote the ${} slice to get the same behavior.
8
11
9
Because our `num_queues` means the number of queues _plus_ the admin
12
In any case, without quoting, the reference output of many iotests is
10
queue, the right calculation for the number returned from the admin
13
different between bash 5.1 and pre-5.1, which is not very good. The
11
command is `num_queues - 2`, combining the two requirements mentioned.
14
output of 5.1 is what we want, so whatever we do to get pre-5.1 to the
15
same result, it means we have to fix the reference output of basically
16
all tests that invoke _send_qemu_cmd (except the ones that only use
17
single spaces in the commands they invoke).
12
18
13
The issue was discovered by reducing num_queues from 64 to 8 and running
19
Instead of quoting the ${} slice (cmd="${$@: 1:...}"), we can also just
14
a Linux VM with an SMP parameter larger than that (e.g. 22). It tries to
20
not use array slicing and replace the whole thing with a simple "cmd=$1;
15
utilize all queues, and therefore fails with an invalid queue number
21
shift", which works because all callers quote the whole $cmd argument
16
when trying to queue I/Os on the last queue.
22
anyway.
17
23
18
Signed-off-by: Dan Aloni <dan@kernelim.com>
24
Signed-off-by: Max Reitz <mreitz@redhat.com>
19
CC: Alex Friedman <alex@e8storage.com>
25
Message-Id: <20201217153803.101231-3-mreitz@redhat.com>
20
CC: Keith Busch <keith.busch@intel.com>
26
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
21
CC: Stefan Hajnoczi <stefanha@redhat.com>
22
Reviewed-by: Keith Busch <keith.busch@intel.com>
23
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
24
---
27
---
25
hw/block/nvme.c | 4 ++--
28
tests/qemu-iotests/085.out | 167 ++++++++++++++++++++++++++++-----
26
1 file changed, 2 insertions(+), 2 deletions(-)
29
tests/qemu-iotests/094.out | 10 +-
30
tests/qemu-iotests/095.out | 4 +-
31
tests/qemu-iotests/109.out | 88 ++++++++++++-----
32
tests/qemu-iotests/117.out | 13 ++-
33
tests/qemu-iotests/127.out | 12 ++-
34
tests/qemu-iotests/140.out | 10 +-
35
tests/qemu-iotests/141.out | 128 +++++++++++++++++++------
36
tests/qemu-iotests/143.out | 4 +-
37
tests/qemu-iotests/144.out | 28 +++++-
38
tests/qemu-iotests/153.out | 18 ++--
39
tests/qemu-iotests/156.out | 39 ++++++--
40
tests/qemu-iotests/161.out | 18 +++-
41
tests/qemu-iotests/173.out | 25 ++++-
42
tests/qemu-iotests/182.out | 42 +++++++--
43
tests/qemu-iotests/183.out | 19 +++-
44
tests/qemu-iotests/185.out | 45 +++++++--
45
tests/qemu-iotests/191.out | 12 ++-
46
tests/qemu-iotests/223.out | 92 ++++++++++++------
47
tests/qemu-iotests/229.out | 13 ++-
48
tests/qemu-iotests/249.out | 16 +++-
49
tests/qemu-iotests/308.out | 103 +++++++++++++++++---
50
tests/qemu-iotests/312.out | 10 +-
51
tests/qemu-iotests/common.qemu | 11 +--
52
24 files changed, 728 insertions(+), 199 deletions(-)
27
53
28
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
54
diff --git a/tests/qemu-iotests/085.out b/tests/qemu-iotests/085.out
29
index XXXXXXX..XXXXXXX 100644
55
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/block/nvme.c
56
--- a/tests/qemu-iotests/085.out
31
+++ b/hw/block/nvme.c
57
+++ b/tests/qemu-iotests/085.out
32
@@ -XXX,XX +XXX,XX @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
58
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT.2', fmt=IMGFMT size=134217728
33
result = blk_enable_write_cache(n->conf.blk);
59
34
break;
60
=== Create a single snapshot on virtio0 ===
35
case NVME_NUMBER_OF_QUEUES:
61
36
- result = cpu_to_le32((n->num_queues - 1) | ((n->num_queues - 1) << 16));
62
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'virtio0', 'snapshot-file':'TEST_DIR/1-snapshot-v0.IMGFMT', 'format': 'IMGFMT' } }
37
+ result = cpu_to_le32((n->num_queues - 2) | ((n->num_queues - 2) << 16));
63
+{ 'execute': 'blockdev-snapshot-sync',
38
break;
64
+ 'arguments': { 'device': 'virtio0',
39
default:
65
+ 'snapshot-file':'TEST_DIR/1-snapshot-v0.IMGFMT',
40
return NVME_INVALID_FIELD | NVME_DNR;
66
+ 'format': 'IMGFMT' } }
41
@@ -XXX,XX +XXX,XX @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
67
Formatting 'TEST_DIR/1-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/t.qcow2.1 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
42
break;
68
{"return": {}}
43
case NVME_NUMBER_OF_QUEUES:
69
44
req->cqe.result =
70
=== Invalid command - missing device and nodename ===
45
- cpu_to_le32((n->num_queues - 1) | ((n->num_queues - 1) << 16));
71
46
+ cpu_to_le32((n->num_queues - 2) | ((n->num_queues - 2) << 16));
72
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'snapshot-file':'TEST_DIR/1-snapshot-v0.IMGFMT', 'format': 'IMGFMT' } }
47
break;
73
+{ 'execute': 'blockdev-snapshot-sync',
48
default:
74
+ 'arguments': { 'snapshot-file':'TEST_DIR/1-snapshot-v0.IMGFMT',
49
return NVME_INVALID_FIELD | NVME_DNR;
75
+ 'format': 'IMGFMT' } }
76
{"error": {"class": "GenericError", "desc": "Cannot find device= nor node_name="}}
77
78
=== Invalid command - missing snapshot-file ===
79
80
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'virtio0', 'format': 'IMGFMT' } }
81
+{ 'execute': 'blockdev-snapshot-sync',
82
+ 'arguments': { 'device': 'virtio0',
83
+ 'format': 'IMGFMT' } }
84
{"error": {"class": "GenericError", "desc": "Parameter 'snapshot-file' is missing"}}
85
86
87
=== Create several transactional group snapshots ===
88
89
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/2-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/2-snapshot-v1.IMGFMT' } } ] } }
90
+{ 'execute': 'transaction', 'arguments':
91
+ {'actions': [
92
+ { 'type': 'blockdev-snapshot-sync', 'data' :
93
+ { 'device': 'virtio0',
94
+ 'snapshot-file': 'TEST_DIR/2-snapshot-v0.IMGFMT' } },
95
+ { 'type': 'blockdev-snapshot-sync', 'data' :
96
+ { 'device': 'virtio1',
97
+ 'snapshot-file': 'TEST_DIR/2-snapshot-v1.IMGFMT' } } ]
98
+ } }
99
Formatting 'TEST_DIR/2-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/1-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
100
Formatting 'TEST_DIR/2-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/t.qcow2.2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
101
{"return": {}}
102
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/3-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/3-snapshot-v1.IMGFMT' } } ] } }
103
+{ 'execute': 'transaction', 'arguments':
104
+ {'actions': [
105
+ { 'type': 'blockdev-snapshot-sync', 'data' :
106
+ { 'device': 'virtio0',
107
+ 'snapshot-file': 'TEST_DIR/3-snapshot-v0.IMGFMT' } },
108
+ { 'type': 'blockdev-snapshot-sync', 'data' :
109
+ { 'device': 'virtio1',
110
+ 'snapshot-file': 'TEST_DIR/3-snapshot-v1.IMGFMT' } } ]
111
+ } }
112
Formatting 'TEST_DIR/3-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/2-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
113
Formatting 'TEST_DIR/3-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/2-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
114
{"return": {}}
115
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/4-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/4-snapshot-v1.IMGFMT' } } ] } }
116
+{ 'execute': 'transaction', 'arguments':
117
+ {'actions': [
118
+ { 'type': 'blockdev-snapshot-sync', 'data' :
119
+ { 'device': 'virtio0',
120
+ 'snapshot-file': 'TEST_DIR/4-snapshot-v0.IMGFMT' } },
121
+ { 'type': 'blockdev-snapshot-sync', 'data' :
122
+ { 'device': 'virtio1',
123
+ 'snapshot-file': 'TEST_DIR/4-snapshot-v1.IMGFMT' } } ]
124
+ } }
125
Formatting 'TEST_DIR/4-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/3-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
126
Formatting 'TEST_DIR/4-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/3-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
127
{"return": {}}
128
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/5-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/5-snapshot-v1.IMGFMT' } } ] } }
129
+{ 'execute': 'transaction', 'arguments':
130
+ {'actions': [
131
+ { 'type': 'blockdev-snapshot-sync', 'data' :
132
+ { 'device': 'virtio0',
133
+ 'snapshot-file': 'TEST_DIR/5-snapshot-v0.IMGFMT' } },
134
+ { 'type': 'blockdev-snapshot-sync', 'data' :
135
+ { 'device': 'virtio1',
136
+ 'snapshot-file': 'TEST_DIR/5-snapshot-v1.IMGFMT' } } ]
137
+ } }
138
Formatting 'TEST_DIR/5-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/4-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
139
Formatting 'TEST_DIR/5-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/4-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
140
{"return": {}}
141
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/6-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/6-snapshot-v1.IMGFMT' } } ] } }
142
+{ 'execute': 'transaction', 'arguments':
143
+ {'actions': [
144
+ { 'type': 'blockdev-snapshot-sync', 'data' :
145
+ { 'device': 'virtio0',
146
+ 'snapshot-file': 'TEST_DIR/6-snapshot-v0.IMGFMT' } },
147
+ { 'type': 'blockdev-snapshot-sync', 'data' :
148
+ { 'device': 'virtio1',
149
+ 'snapshot-file': 'TEST_DIR/6-snapshot-v1.IMGFMT' } } ]
150
+ } }
151
Formatting 'TEST_DIR/6-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/5-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
152
Formatting 'TEST_DIR/6-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/5-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
153
{"return": {}}
154
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/7-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/7-snapshot-v1.IMGFMT' } } ] } }
155
+{ 'execute': 'transaction', 'arguments':
156
+ {'actions': [
157
+ { 'type': 'blockdev-snapshot-sync', 'data' :
158
+ { 'device': 'virtio0',
159
+ 'snapshot-file': 'TEST_DIR/7-snapshot-v0.IMGFMT' } },
160
+ { 'type': 'blockdev-snapshot-sync', 'data' :
161
+ { 'device': 'virtio1',
162
+ 'snapshot-file': 'TEST_DIR/7-snapshot-v1.IMGFMT' } } ]
163
+ } }
164
Formatting 'TEST_DIR/7-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/6-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
165
Formatting 'TEST_DIR/7-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/6-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
166
{"return": {}}
167
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/8-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/8-snapshot-v1.IMGFMT' } } ] } }
168
+{ 'execute': 'transaction', 'arguments':
169
+ {'actions': [
170
+ { 'type': 'blockdev-snapshot-sync', 'data' :
171
+ { 'device': 'virtio0',
172
+ 'snapshot-file': 'TEST_DIR/8-snapshot-v0.IMGFMT' } },
173
+ { 'type': 'blockdev-snapshot-sync', 'data' :
174
+ { 'device': 'virtio1',
175
+ 'snapshot-file': 'TEST_DIR/8-snapshot-v1.IMGFMT' } } ]
176
+ } }
177
Formatting 'TEST_DIR/8-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/7-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
178
Formatting 'TEST_DIR/8-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/7-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
179
{"return": {}}
180
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/9-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/9-snapshot-v1.IMGFMT' } } ] } }
181
+{ 'execute': 'transaction', 'arguments':
182
+ {'actions': [
183
+ { 'type': 'blockdev-snapshot-sync', 'data' :
184
+ { 'device': 'virtio0',
185
+ 'snapshot-file': 'TEST_DIR/9-snapshot-v0.IMGFMT' } },
186
+ { 'type': 'blockdev-snapshot-sync', 'data' :
187
+ { 'device': 'virtio1',
188
+ 'snapshot-file': 'TEST_DIR/9-snapshot-v1.IMGFMT' } } ]
189
+ } }
190
Formatting 'TEST_DIR/9-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/8-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
191
Formatting 'TEST_DIR/9-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/8-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
192
{"return": {}}
193
-{ 'execute': 'transaction', 'arguments': {'actions': [ { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio0', 'snapshot-file': 'TEST_DIR/10-snapshot-v0.IMGFMT' } }, { 'type': 'blockdev-snapshot-sync', 'data' : { 'device': 'virtio1', 'snapshot-file': 'TEST_DIR/10-snapshot-v1.IMGFMT' } } ] } }
194
+{ 'execute': 'transaction', 'arguments':
195
+ {'actions': [
196
+ { 'type': 'blockdev-snapshot-sync', 'data' :
197
+ { 'device': 'virtio0',
198
+ 'snapshot-file': 'TEST_DIR/10-snapshot-v0.IMGFMT' } },
199
+ { 'type': 'blockdev-snapshot-sync', 'data' :
200
+ { 'device': 'virtio1',
201
+ 'snapshot-file': 'TEST_DIR/10-snapshot-v1.IMGFMT' } } ]
202
+ } }
203
Formatting 'TEST_DIR/10-snapshot-v0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/9-snapshot-v0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
204
Formatting 'TEST_DIR/10-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=134217728 backing_file=TEST_DIR/9-snapshot-v1.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
205
{"return": {}}
206
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/10-snapshot-v1.qcow2', fmt=qcow2 cluster_size=65536 extende
207
=== Create a couple of snapshots using blockdev-snapshot ===
208
209
Formatting 'TEST_DIR/11-snapshot-v0.IMGFMT', fmt=IMGFMT size=134217728 backing_file=TEST_DIR/10-snapshot-v0.IMGFMT backing_fmt=IMGFMT
210
-{ 'execute': 'blockdev-add', 'arguments': { 'driver': 'IMGFMT', 'node-name': 'snap_11', 'backing': null, 'file': { 'driver': 'file', 'filename': 'TEST_DIR/11-snapshot-v0.IMGFMT', 'node-name': 'file_11' } } }
211
+{ 'execute': 'blockdev-add', 'arguments':
212
+ { 'driver': 'IMGFMT', 'node-name': 'snap_11', 'backing': null,
213
+ 'file':
214
+ { 'driver': 'file', 'filename': 'TEST_DIR/11-snapshot-v0.IMGFMT',
215
+ 'node-name': 'file_11' } } }
216
{"return": {}}
217
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_11' } }
218
+{ 'execute': 'blockdev-snapshot',
219
+ 'arguments': { 'node': 'virtio0',
220
+ 'overlay':'snap_11' } }
221
{"return": {}}
222
Formatting 'TEST_DIR/12-snapshot-v0.IMGFMT', fmt=IMGFMT size=134217728 backing_file=TEST_DIR/11-snapshot-v0.IMGFMT backing_fmt=IMGFMT
223
-{ 'execute': 'blockdev-add', 'arguments': { 'driver': 'IMGFMT', 'node-name': 'snap_12', 'backing': null, 'file': { 'driver': 'file', 'filename': 'TEST_DIR/12-snapshot-v0.IMGFMT', 'node-name': 'file_12' } } }
224
+{ 'execute': 'blockdev-add', 'arguments':
225
+ { 'driver': 'IMGFMT', 'node-name': 'snap_12', 'backing': null,
226
+ 'file':
227
+ { 'driver': 'file', 'filename': 'TEST_DIR/12-snapshot-v0.IMGFMT',
228
+ 'node-name': 'file_12' } } }
229
{"return": {}}
230
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_12' } }
231
+{ 'execute': 'blockdev-snapshot',
232
+ 'arguments': { 'node': 'virtio0',
233
+ 'overlay':'snap_12' } }
234
{"return": {}}
235
236
=== Invalid command - cannot create a snapshot using a file BDS ===
237
238
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node':'virtio0', 'overlay':'file_12' } }
239
+{ 'execute': 'blockdev-snapshot',
240
+ 'arguments': { 'node':'virtio0',
241
+ 'overlay':'file_12' }
242
+ }
243
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
244
245
=== Invalid command - snapshot node used as active layer ===
246
247
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_12' } }
248
+{ 'execute': 'blockdev-snapshot',
249
+ 'arguments': { 'node': 'virtio0',
250
+ 'overlay':'snap_12' } }
251
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
252
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node':'virtio0', 'overlay':'virtio0' } }
253
+{ 'execute': 'blockdev-snapshot',
254
+ 'arguments': { 'node':'virtio0',
255
+ 'overlay':'virtio0' }
256
+ }
257
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
258
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node':'virtio0', 'overlay':'virtio1' } }
259
+{ 'execute': 'blockdev-snapshot',
260
+ 'arguments': { 'node':'virtio0',
261
+ 'overlay':'virtio1' }
262
+ }
263
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
264
265
=== Invalid command - snapshot node used as backing hd ===
266
267
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_11' } }
268
+{ 'execute': 'blockdev-snapshot',
269
+ 'arguments': { 'node': 'virtio0',
270
+ 'overlay':'snap_11' } }
271
{"error": {"class": "GenericError", "desc": "The overlay is already in use"}}
272
273
=== Invalid command - snapshot node has a backing image ===
274
275
Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=134217728
276
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728 backing_file=TEST_DIR/t.IMGFMT.base backing_fmt=IMGFMT
277
-{ 'execute': 'blockdev-add', 'arguments': { 'driver': 'IMGFMT', 'node-name': 'snap_13', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT', 'node-name': 'file_13' } } }
278
-{"return": {}}
279
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_13' } }
280
+{ 'execute': 'blockdev-add', 'arguments':
281
+ { 'driver': 'IMGFMT', 'node-name': 'snap_13',
282
+ 'file':
283
+ { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT',
284
+ 'node-name': 'file_13' } } }
285
+{"return": {}}
286
+{ 'execute': 'blockdev-snapshot',
287
+ 'arguments': { 'node': 'virtio0',
288
+ 'overlay':'snap_13' } }
289
{"error": {"class": "GenericError", "desc": "The overlay already has a backing image"}}
290
291
=== Invalid command - The node does not exist ===
292
293
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node': 'virtio0', 'overlay':'snap_14' } }
294
+{ 'execute': 'blockdev-snapshot',
295
+ 'arguments': { 'node': 'virtio0',
296
+ 'overlay':'snap_14' } }
297
{"error": {"class": "GenericError", "desc": "Cannot find device=snap_14 nor node_name=snap_14"}}
298
-{ 'execute': 'blockdev-snapshot', 'arguments': { 'node':'nodevice', 'overlay':'snap_13' } }
299
+{ 'execute': 'blockdev-snapshot',
300
+ 'arguments': { 'node':'nodevice',
301
+ 'overlay':'snap_13' }
302
+ }
303
{"error": {"class": "GenericError", "desc": "Cannot find device=nodevice nor node_name=nodevice"}}
304
*** done
305
diff --git a/tests/qemu-iotests/094.out b/tests/qemu-iotests/094.out
306
index XXXXXXX..XXXXXXX 100644
307
--- a/tests/qemu-iotests/094.out
308
+++ b/tests/qemu-iotests/094.out
309
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=67108864
310
Formatting 'TEST_DIR/source.IMGFMT', fmt=IMGFMT size=67108864
311
{'execute': 'qmp_capabilities'}
312
{"return": {}}
313
-{'execute': 'drive-mirror', 'arguments': {'device': 'src', 'target': 'nbd+unix:///?socket=SOCK_DIR/nbd', 'format': 'nbd', 'sync':'full', 'mode':'existing'}}
314
+{'execute': 'drive-mirror',
315
+ 'arguments': {'device': 'src',
316
+ 'target': 'nbd+unix:///?socket=SOCK_DIR/nbd',
317
+ 'format': 'nbd',
318
+ 'sync':'full',
319
+ 'mode':'existing'}}
320
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
321
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
322
{"return": {}}
323
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
324
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "src", "len": 67108864, "offset": 67108864, "speed": 0, "type": "mirror"}}
325
-{'execute': 'block-job-complete', 'arguments': {'device': 'src'}}
326
+{'execute': 'block-job-complete',
327
+ 'arguments': {'device': 'src'}}
328
{"return": {}}
329
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
330
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
331
diff --git a/tests/qemu-iotests/095.out b/tests/qemu-iotests/095.out
332
index XXXXXXX..XXXXXXX 100644
333
--- a/tests/qemu-iotests/095.out
334
+++ b/tests/qemu-iotests/095.out
335
@@ -XXX,XX +XXX,XX @@ virtual size: 5 MiB (5242880 bytes)
336
337
{ 'execute': 'qmp_capabilities' }
338
{"return": {}}
339
-{ 'execute': 'block-commit', 'arguments': { 'device': 'test', 'top': 'TEST_DIR/t.IMGFMT.snp1' } }
340
+{ 'execute': 'block-commit',
341
+ 'arguments': { 'device': 'test',
342
+ 'top': 'TEST_DIR/t.IMGFMT.snp1' } }
343
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "test"}}
344
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "test"}}
345
{"return": {}}
346
diff --git a/tests/qemu-iotests/109.out b/tests/qemu-iotests/109.out
347
index XXXXXXX..XXXXXXX 100644
348
--- a/tests/qemu-iotests/109.out
349
+++ b/tests/qemu-iotests/109.out
350
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
351
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
352
{ 'execute': 'qmp_capabilities' }
353
{"return": {}}
354
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
355
+{'execute':'drive-mirror', 'arguments':{
356
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
357
+ 'mode': 'existing', 'sync': 'full'}}
358
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
359
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
360
Specify the 'raw' format explicitly to remove the restrictions.
361
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
362
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
363
{ 'execute': 'qmp_capabilities' }
364
{"return": {}}
365
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
366
+{'execute':'drive-mirror', 'arguments':{
367
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
368
+ 'mode': 'existing', 'sync': 'full'}}
369
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
370
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
371
{"return": {}}
372
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
373
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
374
{ 'execute': 'qmp_capabilities' }
375
{"return": {}}
376
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
377
+{'execute':'drive-mirror', 'arguments':{
378
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
379
+ 'mode': 'existing', 'sync': 'full'}}
380
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
381
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
382
Specify the 'raw' format explicitly to remove the restrictions.
383
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
384
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
385
{ 'execute': 'qmp_capabilities' }
386
{"return": {}}
387
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
388
+{'execute':'drive-mirror', 'arguments':{
389
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
390
+ 'mode': 'existing', 'sync': 'full'}}
391
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
392
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
393
{"return": {}}
394
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
395
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
396
{ 'execute': 'qmp_capabilities' }
397
{"return": {}}
398
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
399
+{'execute':'drive-mirror', 'arguments':{
400
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
401
+ 'mode': 'existing', 'sync': 'full'}}
402
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
403
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
404
Specify the 'raw' format explicitly to remove the restrictions.
405
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
406
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
407
{ 'execute': 'qmp_capabilities' }
408
{"return": {}}
409
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
410
+{'execute':'drive-mirror', 'arguments':{
411
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
412
+ 'mode': 'existing', 'sync': 'full'}}
413
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
414
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
415
{"return": {}}
416
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
417
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
418
{ 'execute': 'qmp_capabilities' }
419
{"return": {}}
420
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
421
+{'execute':'drive-mirror', 'arguments':{
422
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
423
+ 'mode': 'existing', 'sync': 'full'}}
424
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
425
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
426
Specify the 'raw' format explicitly to remove the restrictions.
427
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
428
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
429
{ 'execute': 'qmp_capabilities' }
430
{"return": {}}
431
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
432
+{'execute':'drive-mirror', 'arguments':{
433
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
434
+ 'mode': 'existing', 'sync': 'full'}}
435
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
436
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
437
{"return": {}}
438
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
439
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
440
{ 'execute': 'qmp_capabilities' }
441
{"return": {}}
442
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
443
+{'execute':'drive-mirror', 'arguments':{
444
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
445
+ 'mode': 'existing', 'sync': 'full'}}
446
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
447
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
448
Specify the 'raw' format explicitly to remove the restrictions.
449
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
450
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
451
{ 'execute': 'qmp_capabilities' }
452
{"return": {}}
453
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
454
+{'execute':'drive-mirror', 'arguments':{
455
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
456
+ 'mode': 'existing', 'sync': 'full'}}
457
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
458
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
459
{"return": {}}
460
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.raw.src', fmt=IMGFMT size=67108864
461
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
462
{ 'execute': 'qmp_capabilities' }
463
{"return": {}}
464
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
465
+{'execute':'drive-mirror', 'arguments':{
466
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
467
+ 'mode': 'existing', 'sync': 'full'}}
468
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
469
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
470
Specify the 'raw' format explicitly to remove the restrictions.
471
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
472
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
473
{ 'execute': 'qmp_capabilities' }
474
{"return": {}}
475
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
476
+{'execute':'drive-mirror', 'arguments':{
477
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
478
+ 'mode': 'existing', 'sync': 'full'}}
479
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
480
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
481
{"return": {}}
482
@@ -XXX,XX +XXX,XX @@ Images are identical.
483
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
484
{ 'execute': 'qmp_capabilities' }
485
{"return": {}}
486
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
487
+{'execute':'drive-mirror', 'arguments':{
488
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
489
+ 'mode': 'existing', 'sync': 'full'}}
490
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
491
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
492
Specify the 'raw' format explicitly to remove the restrictions.
493
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
494
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
495
{ 'execute': 'qmp_capabilities' }
496
{"return": {}}
497
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
498
+{'execute':'drive-mirror', 'arguments':{
499
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
500
+ 'mode': 'existing', 'sync': 'full'}}
501
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
502
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
503
{"return": {}}
504
@@ -XXX,XX +XXX,XX @@ Images are identical.
505
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
506
{ 'execute': 'qmp_capabilities' }
507
{"return": {}}
508
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
509
+{'execute':'drive-mirror', 'arguments':{
510
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
511
+ 'mode': 'existing', 'sync': 'full'}}
512
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
513
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
514
Specify the 'raw' format explicitly to remove the restrictions.
515
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
516
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
517
{ 'execute': 'qmp_capabilities' }
518
{"return": {}}
519
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
520
+{'execute':'drive-mirror', 'arguments':{
521
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
522
+ 'mode': 'existing', 'sync': 'full'}}
523
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
524
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
525
{"return": {}}
526
@@ -XXX,XX +XXX,XX @@ Images are identical.
527
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
528
{ 'execute': 'qmp_capabilities' }
529
{"return": {}}
530
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
531
+{'execute':'drive-mirror', 'arguments':{
532
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
533
+ 'mode': 'existing', 'sync': 'full'}}
534
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
535
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
536
Specify the 'raw' format explicitly to remove the restrictions.
537
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
538
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
539
{ 'execute': 'qmp_capabilities' }
540
{"return": {}}
541
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
542
+{'execute':'drive-mirror', 'arguments':{
543
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
544
+ 'mode': 'existing', 'sync': 'full'}}
545
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
546
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
547
{"return": {}}
548
@@ -XXX,XX +XXX,XX @@ Images are identical.
549
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
550
{ 'execute': 'qmp_capabilities' }
551
{"return": {}}
552
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
553
+{'execute':'drive-mirror', 'arguments':{
554
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
555
+ 'mode': 'existing', 'sync': 'full'}}
556
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
557
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
558
Specify the 'raw' format explicitly to remove the restrictions.
559
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
560
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
561
{ 'execute': 'qmp_capabilities' }
562
{"return": {}}
563
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
564
+{'execute':'drive-mirror', 'arguments':{
565
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
566
+ 'mode': 'existing', 'sync': 'full'}}
567
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
568
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
569
{"return": {}}
570
@@ -XXX,XX +XXX,XX @@ Images are identical.
571
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE
572
{ 'execute': 'qmp_capabilities' }
573
{"return": {}}
574
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'mode': 'existing', 'sync': 'full'}}
575
+{'execute':'drive-mirror', 'arguments':{
576
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT',
577
+ 'mode': 'existing', 'sync': 'full'}}
578
WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
579
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
580
Specify the 'raw' format explicitly to remove the restrictions.
581
@@ -XXX,XX +XXX,XX @@ WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed
582
Images are identical.
583
{ 'execute': 'qmp_capabilities' }
584
{"return": {}}
585
-{'execute':'drive-mirror', 'arguments':{ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'existing', 'sync': 'full'}}
586
+{'execute':'drive-mirror', 'arguments':{
587
+ 'device': 'src', 'target': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT',
588
+ 'mode': 'existing', 'sync': 'full'}}
589
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "src"}}
590
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "src"}}
591
{"return": {}}
592
diff --git a/tests/qemu-iotests/117.out b/tests/qemu-iotests/117.out
593
index XXXXXXX..XXXXXXX 100644
594
--- a/tests/qemu-iotests/117.out
595
+++ b/tests/qemu-iotests/117.out
596
@@ -XXX,XX +XXX,XX @@ QA output created by 117
597
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=65536
598
{ 'execute': 'qmp_capabilities' }
599
{"return": {}}
600
-{ 'execute': 'blockdev-add', 'arguments': { 'node-name': 'protocol', 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' } }
601
+{ 'execute': 'blockdev-add',
602
+ 'arguments': { 'node-name': 'protocol',
603
+ 'driver': 'file',
604
+ 'filename': 'TEST_DIR/t.IMGFMT' } }
605
{"return": {}}
606
-{ 'execute': 'blockdev-add', 'arguments': { 'node-name': 'format', 'driver': 'IMGFMT', 'file': 'protocol' } }
607
+{ 'execute': 'blockdev-add',
608
+ 'arguments': { 'node-name': 'format',
609
+ 'driver': 'IMGFMT',
610
+ 'file': 'protocol' } }
611
{"return": {}}
612
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io format "write -P 42 0 64k"' } }
613
+{ 'execute': 'human-monitor-command',
614
+ 'arguments': { 'command-line': 'qemu-io format "write -P 42 0 64k"' } }
615
wrote 65536/65536 bytes at offset 0
616
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
617
{"return": ""}
618
diff --git a/tests/qemu-iotests/127.out b/tests/qemu-iotests/127.out
619
index XXXXXXX..XXXXXXX 100644
620
--- a/tests/qemu-iotests/127.out
621
+++ b/tests/qemu-iotests/127.out
622
@@ -XXX,XX +XXX,XX @@ wrote 42/42 bytes at offset 0
623
42 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
624
{ 'execute': 'qmp_capabilities' }
625
{"return": {}}
626
-{ 'execute': 'drive-mirror', 'arguments': { 'job-id': 'mirror', 'device': 'source', 'target': 'TEST_DIR/t.IMGFMT.overlay1', 'mode': 'existing', 'sync': 'top' } }
627
+{ 'execute': 'drive-mirror',
628
+ 'arguments': {
629
+ 'job-id': 'mirror',
630
+ 'device': 'source',
631
+ 'target': 'TEST_DIR/t.IMGFMT.overlay1',
632
+ 'mode': 'existing',
633
+ 'sync': 'top'
634
+ } }
635
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "mirror"}}
636
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "mirror"}}
637
{"return": {}}
638
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "mirror"}}
639
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "mirror", "len": 65536, "offset": 65536, "speed": 0, "type": "mirror"}}
640
-{ 'execute': 'block-job-complete', 'arguments': { 'device': 'mirror' } }
641
+{ 'execute': 'block-job-complete',
642
+ 'arguments': { 'device': 'mirror' } }
643
{"return": {}}
644
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "mirror"}}
645
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "mirror"}}
646
diff --git a/tests/qemu-iotests/140.out b/tests/qemu-iotests/140.out
647
index XXXXXXX..XXXXXXX 100644
648
--- a/tests/qemu-iotests/140.out
649
+++ b/tests/qemu-iotests/140.out
650
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 0
651
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
652
{ 'execute': 'qmp_capabilities' }
653
{"return": {}}
654
-{ 'execute': 'nbd-server-start', 'arguments': { 'addr': { 'type': 'unix', 'data': { 'path': 'SOCK_DIR/nbd' }}}}
655
+{ 'execute': 'nbd-server-start',
656
+ 'arguments': { 'addr': { 'type': 'unix',
657
+ 'data': { 'path': 'SOCK_DIR/nbd' }}}}
658
{"return": {}}
659
-{ 'execute': 'nbd-server-add', 'arguments': { 'device': 'drv' }}
660
+{ 'execute': 'nbd-server-add',
661
+ 'arguments': { 'device': 'drv' }}
662
{"return": {}}
663
read 65536/65536 bytes at offset 0
664
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
665
-{ 'execute': 'eject', 'arguments': { 'device': 'drv' }}
666
+{ 'execute': 'eject',
667
+ 'arguments': { 'device': 'drv' }}
668
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "drv"}}
669
qemu-io: can't open device nbd+unix:///drv?socket=SOCK_DIR/nbd: Requested export not available
670
server reported: export 'drv' not present
671
diff --git a/tests/qemu-iotests/141.out b/tests/qemu-iotests/141.out
672
index XXXXXXX..XXXXXXX 100644
673
--- a/tests/qemu-iotests/141.out
674
+++ b/tests/qemu-iotests/141.out
675
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/m.
676
677
=== Testing drive-backup ===
678
679
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
680
-{"return": {}}
681
-{'execute': 'drive-backup', 'arguments': {'job-id': 'job0', 'device': 'drv0', 'target': 'TEST_DIR/o.IMGFMT', 'format': 'IMGFMT', 'sync': 'none'}}
682
+{'execute': 'blockdev-add',
683
+ 'arguments': {
684
+ 'node-name': 'drv0',
685
+ 'driver': 'IMGFMT',
686
+ 'file': {
687
+ 'driver': 'file',
688
+ 'filename': 'TEST_DIR/t.IMGFMT'
689
+ }}}
690
+{"return": {}}
691
+{'execute': 'drive-backup',
692
+'arguments': {'job-id': 'job0',
693
+'device': 'drv0',
694
+'target': 'TEST_DIR/o.IMGFMT',
695
+'format': 'IMGFMT',
696
+'sync': 'none'}}
697
Formatting 'TEST_DIR/o.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
698
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
699
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
700
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "job0"}}
701
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
702
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
703
+{'execute': 'blockdev-del',
704
+ 'arguments': {'node-name': 'drv0'}}
705
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: node is used as backing hd of 'NODE_NAME'"}}
706
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
707
+{'execute': 'block-job-cancel',
708
+ 'arguments': {'device': 'job0'}}
709
{"return": {}}
710
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
711
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "job0", "len": 1048576, "offset": 0, "speed": 0, "type": "backup"}}
712
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
713
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
714
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
715
+{'execute': 'blockdev-del',
716
+ 'arguments': {'node-name': 'drv0'}}
717
{"return": {}}
718
719
=== Testing drive-mirror ===
720
721
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
722
-{"return": {}}
723
-{'execute': 'drive-mirror', 'arguments': {'job-id': 'job0', 'device': 'drv0', 'target': 'TEST_DIR/o.IMGFMT', 'format': 'IMGFMT', 'sync': 'none'}}
724
+{'execute': 'blockdev-add',
725
+ 'arguments': {
726
+ 'node-name': 'drv0',
727
+ 'driver': 'IMGFMT',
728
+ 'file': {
729
+ 'driver': 'file',
730
+ 'filename': 'TEST_DIR/t.IMGFMT'
731
+ }}}
732
+{"return": {}}
733
+{'execute': 'drive-mirror',
734
+'arguments': {'job-id': 'job0',
735
+'device': 'drv0',
736
+'target': 'TEST_DIR/o.IMGFMT',
737
+'format': 'IMGFMT',
738
+'sync': 'none'}}
739
Formatting 'TEST_DIR/o.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
740
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
741
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
742
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "job0"}}
743
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "mirror"}}
744
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
745
+{'execute': 'blockdev-del',
746
+ 'arguments': {'node-name': 'drv0'}}
747
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: mirror"}}
748
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
749
+{'execute': 'block-job-cancel',
750
+ 'arguments': {'device': 'job0'}}
751
{"return": {}}
752
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "job0"}}
753
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "job0"}}
754
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "mirror"}}
755
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
756
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
757
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
758
+{'execute': 'blockdev-del',
759
+ 'arguments': {'node-name': 'drv0'}}
760
{"return": {}}
761
762
=== Testing active block-commit ===
763
764
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
765
-{"return": {}}
766
-{'execute': 'block-commit', 'arguments': {'job-id': 'job0', 'device': 'drv0'}}
767
+{'execute': 'blockdev-add',
768
+ 'arguments': {
769
+ 'node-name': 'drv0',
770
+ 'driver': 'IMGFMT',
771
+ 'file': {
772
+ 'driver': 'file',
773
+ 'filename': 'TEST_DIR/t.IMGFMT'
774
+ }}}
775
+{"return": {}}
776
+{'execute': 'block-commit',
777
+'arguments': {'job-id': 'job0', 'device': 'drv0'}}
778
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
779
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
780
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "job0"}}
781
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}
782
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
783
+{'execute': 'blockdev-del',
784
+ 'arguments': {'node-name': 'drv0'}}
785
{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: commit"}}
786
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
787
+{'execute': 'block-job-cancel',
788
+ 'arguments': {'device': 'job0'}}
789
{"return": {}}
790
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "job0"}}
791
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "job0"}}
792
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "job0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}
793
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
794
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
795
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
796
+{'execute': 'blockdev-del',
797
+ 'arguments': {'node-name': 'drv0'}}
798
{"return": {}}
799
800
=== Testing non-active block-commit ===
801
802
wrote 1048576/1048576 bytes at offset 0
803
1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
804
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
805
-{"return": {}}
806
-{'execute': 'block-commit', 'arguments': {'job-id': 'job0', 'device': 'drv0', 'top': 'TEST_DIR/m.IMGFMT', 'speed': 1}}
807
+{'execute': 'blockdev-add',
808
+ 'arguments': {
809
+ 'node-name': 'drv0',
810
+ 'driver': 'IMGFMT',
811
+ 'file': {
812
+ 'driver': 'file',
813
+ 'filename': 'TEST_DIR/t.IMGFMT'
814
+ }}}
815
+{"return": {}}
816
+{'execute': 'block-commit',
817
+'arguments': {'job-id': 'job0',
818
+'device': 'drv0',
819
+'top': 'TEST_DIR/m.IMGFMT',
820
+'speed': 1}}
821
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
822
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
823
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
824
+{'execute': 'blockdev-del',
825
+ 'arguments': {'node-name': 'drv0'}}
826
{"error": {"class": "GenericError", "desc": "Node drv0 is in use"}}
827
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
828
+{'execute': 'block-job-cancel',
829
+ 'arguments': {'device': 'job0'}}
830
{"return": {}}
831
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
832
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "job0", "len": 1048576, "offset": 524288, "speed": 1, "type": "commit"}}
833
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
834
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
835
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
836
+{'execute': 'blockdev-del',
837
+ 'arguments': {'node-name': 'drv0'}}
838
{"return": {}}
839
840
=== Testing block-stream ===
841
842
wrote 1048576/1048576 bytes at offset 0
843
1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
844
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'drv0', 'driver': 'IMGFMT', 'file': { 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT' }}}
845
-{"return": {}}
846
-{'execute': 'block-stream', 'arguments': {'job-id': 'job0', 'device': 'drv0', 'speed': 1}}
847
+{'execute': 'blockdev-add',
848
+ 'arguments': {
849
+ 'node-name': 'drv0',
850
+ 'driver': 'IMGFMT',
851
+ 'file': {
852
+ 'driver': 'file',
853
+ 'filename': 'TEST_DIR/t.IMGFMT'
854
+ }}}
855
+{"return": {}}
856
+{'execute': 'block-stream',
857
+'arguments': {'job-id': 'job0',
858
+'device': 'drv0',
859
+'speed': 1}}
860
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
861
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
862
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
863
+{'execute': 'blockdev-del',
864
+ 'arguments': {'node-name': 'drv0'}}
865
{"error": {"class": "GenericError", "desc": "Node drv0 is in use"}}
866
-{'execute': 'block-job-cancel', 'arguments': {'device': 'job0'}}
867
+{'execute': 'block-job-cancel',
868
+ 'arguments': {'device': 'job0'}}
869
{"return": {}}
870
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
871
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "job0", "len": 1048576, "offset": 524288, "speed": 1, "type": "stream"}}
872
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "job0"}}
873
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
874
-{'execute': 'blockdev-del', 'arguments': {'node-name': 'drv0'}}
875
+{'execute': 'blockdev-del',
876
+ 'arguments': {'node-name': 'drv0'}}
877
{"return": {}}
878
*** done
879
diff --git a/tests/qemu-iotests/143.out b/tests/qemu-iotests/143.out
880
index XXXXXXX..XXXXXXX 100644
881
--- a/tests/qemu-iotests/143.out
882
+++ b/tests/qemu-iotests/143.out
883
@@ -XXX,XX +XXX,XX @@
884
QA output created by 143
885
{ 'execute': 'qmp_capabilities' }
886
{"return": {}}
887
-{ 'execute': 'nbd-server-start', 'arguments': { 'addr': { 'type': 'unix', 'data': { 'path': 'SOCK_DIR/nbd' }}}}
888
+{ 'execute': 'nbd-server-start',
889
+ 'arguments': { 'addr': { 'type': 'unix',
890
+ 'data': { 'path': 'SOCK_DIR/nbd' }}}}
891
{"return": {}}
892
qemu-io: can't open device nbd+unix:///no_such_export?socket=SOCK_DIR/nbd: Requested export not available
893
server reported: export 'no_such_export' not present
894
diff --git a/tests/qemu-iotests/144.out b/tests/qemu-iotests/144.out
895
index XXXXXXX..XXXXXXX 100644
896
--- a/tests/qemu-iotests/144.out
897
+++ b/tests/qemu-iotests/144.out
898
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=536870912
899
900
{ 'execute': 'qmp_capabilities' }
901
{"return": {}}
902
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'virtio0', 'snapshot-file':'TEST_DIR/tmp.IMGFMT', 'format': 'IMGFMT' } }
903
+{ 'execute': 'blockdev-snapshot-sync',
904
+ 'arguments': {
905
+ 'device': 'virtio0',
906
+ 'snapshot-file':'TEST_DIR/tmp.IMGFMT',
907
+ 'format': 'IMGFMT'
908
+ }
909
+ }
910
Formatting 'TEST_DIR/tmp.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=536870912 backing_file=TEST_DIR/t.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
911
{"return": {}}
912
913
=== Performing block-commit on active layer ===
914
915
-{ 'execute': 'block-commit', 'arguments': { 'device': 'virtio0' } }
916
+{ 'execute': 'block-commit',
917
+ 'arguments': {
918
+ 'device': 'virtio0'
919
+ }
920
+ }
921
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "virtio0"}}
922
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "virtio0"}}
923
{"return": {}}
924
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "virtio0"}}
925
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "virtio0", "len": 0, "offset": 0, "speed": 0, "type": "commit"}}
926
-{ 'execute': 'block-job-complete', 'arguments': { 'device': 'virtio0' } }
927
+{ 'execute': 'block-job-complete',
928
+ 'arguments': {
929
+ 'device': 'virtio0'
930
+ }
931
+ }
932
{"return": {}}
933
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "virtio0"}}
934
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "virtio0"}}
935
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/tmp.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off co
936
937
=== Performing Live Snapshot 2 ===
938
939
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'virtio0', 'snapshot-file':'TEST_DIR/tmp2.IMGFMT', 'format': 'IMGFMT' } }
940
+{ 'execute': 'blockdev-snapshot-sync',
941
+ 'arguments': {
942
+ 'device': 'virtio0',
943
+ 'snapshot-file':'TEST_DIR/tmp2.IMGFMT',
944
+ 'format': 'IMGFMT'
945
+ }
946
+ }
947
Formatting 'TEST_DIR/tmp2.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=536870912 backing_file=TEST_DIR/t.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
948
{"return": {}}
949
*** done
950
diff --git a/tests/qemu-iotests/153.out b/tests/qemu-iotests/153.out
951
index XXXXXXX..XXXXXXX 100644
952
--- a/tests/qemu-iotests/153.out
953
+++ b/tests/qemu-iotests/153.out
954
@@ -XXX,XX +XXX,XX @@ _qemu_img_wrapper commit -b TEST_DIR/t.qcow2.b TEST_DIR/t.qcow2.c
955
{ 'execute': 'qmp_capabilities' }
956
{"return": {}}
957
Adding drive
958
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_add 0 if=none,id=d0,file=TEST_DIR/t.IMGFMT' } }
959
+{ 'execute': 'human-monitor-command',
960
+ 'arguments': { 'command-line': 'drive_add 0 if=none,id=d0,file=TEST_DIR/t.IMGFMT' } }
961
{"return": "OKrn"}
962
963
_qemu_io_wrapper TEST_DIR/t.qcow2 -c write 0 512
964
@@ -XXX,XX +XXX,XX @@ Creating overlay with qemu-img when the guest is running should be allowed
965
966
_qemu_img_wrapper create -f qcow2 -b TEST_DIR/t.qcow2 -F qcow2 TEST_DIR/t.qcow2.overlay
967
== Closing an image should unlock it ==
968
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_del d0' } }
969
+{ 'execute': 'human-monitor-command',
970
+ 'arguments': { 'command-line': 'drive_del d0' } }
971
{"return": ""}
972
973
_qemu_io_wrapper TEST_DIR/t.qcow2 -c write 0 512
974
Adding two and closing one
975
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_add 0 if=none,id=d0,file=TEST_DIR/t.IMGFMT,readonly=on' } }
976
+{ 'execute': 'human-monitor-command',
977
+ 'arguments': { 'command-line': 'drive_add 0 if=none,id=d0,file=TEST_DIR/t.IMGFMT,readonly=on' } }
978
{"return": "OKrn"}
979
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_add 0 if=none,id=d1,file=TEST_DIR/t.IMGFMT,readonly=on' } }
980
+{ 'execute': 'human-monitor-command',
981
+ 'arguments': { 'command-line': 'drive_add 0 if=none,id=d1,file=TEST_DIR/t.IMGFMT,readonly=on' } }
982
{"return": "OKrn"}
983
984
_qemu_img_wrapper info TEST_DIR/t.qcow2
985
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_del d0' } }
986
+{ 'execute': 'human-monitor-command',
987
+ 'arguments': { 'command-line': 'drive_del d0' } }
988
{"return": ""}
989
990
_qemu_io_wrapper TEST_DIR/t.qcow2 -c write 0 512
991
qemu-io: can't open device TEST_DIR/t.qcow2: Failed to get "write" lock
992
Is another process using the image [TEST_DIR/t.qcow2]?
993
Closing the other
994
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'drive_del d1' } }
995
+{ 'execute': 'human-monitor-command',
996
+ 'arguments': { 'command-line': 'drive_del d1' } }
997
{"return": ""}
998
999
_qemu_io_wrapper TEST_DIR/t.qcow2 -c write 0 512
1000
diff --git a/tests/qemu-iotests/156.out b/tests/qemu-iotests/156.out
1001
index XXXXXXX..XXXXXXX 100644
1002
--- a/tests/qemu-iotests/156.out
1003
+++ b/tests/qemu-iotests/156.out
1004
@@ -XXX,XX +XXX,XX @@ wrote 196608/196608 bytes at offset 65536
1005
{ 'execute': 'qmp_capabilities' }
1006
{"return": {}}
1007
Formatting 'TEST_DIR/t.IMGFMT.overlay', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
1008
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'source', 'snapshot-file': 'TEST_DIR/t.IMGFMT.overlay', 'format': 'IMGFMT', 'mode': 'existing' } }
1009
+{ 'execute': 'blockdev-snapshot-sync',
1010
+ 'arguments': { 'device': 'source',
1011
+ 'snapshot-file': 'TEST_DIR/t.IMGFMT.overlay',
1012
+ 'format': 'IMGFMT',
1013
+ 'mode': 'existing' } }
1014
{"return": {}}
1015
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "write -P 3 128k 128k"' } }
1016
+{ 'execute': 'human-monitor-command',
1017
+ 'arguments': { 'command-line':
1018
+ 'qemu-io source "write -P 3 128k 128k"' } }
1019
wrote 131072/131072 bytes at offset 131072
1020
128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1021
{"return": ""}
1022
Formatting 'TEST_DIR/t.IMGFMT.target.overlay', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT.target backing_fmt=IMGFMT
1023
-{ 'execute': 'drive-mirror', 'arguments': { 'device': 'source', 'target': 'TEST_DIR/t.IMGFMT.target.overlay', 'mode': 'existing', 'sync': 'top' } }
1024
+{ 'execute': 'drive-mirror',
1025
+ 'arguments': { 'device': 'source',
1026
+ 'target': 'TEST_DIR/t.IMGFMT.target.overlay',
1027
+ 'mode': 'existing',
1028
+ 'sync': 'top' } }
1029
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "source"}}
1030
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "source"}}
1031
{"return": {}}
1032
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "source"}}
1033
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "source", "len": 131072, "offset": 131072, "speed": 0, "type": "mirror"}}
1034
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "write -P 4 192k 64k"' } }
1035
+{ 'execute': 'human-monitor-command',
1036
+ 'arguments': { 'command-line':
1037
+ 'qemu-io source "write -P 4 192k 64k"' } }
1038
wrote 65536/65536 bytes at offset 196608
1039
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1040
{"return": ""}
1041
-{ 'execute': 'block-job-complete', 'arguments': { 'device': 'source' } }
1042
+{ 'execute': 'block-job-complete',
1043
+ 'arguments': { 'device': 'source' } }
1044
{"return": {}}
1045
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "source"}}
1046
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "source"}}
1047
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 196608
1048
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "source"}}
1049
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "source"}}
1050
1051
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "read -P 1 0k 64k"' } }
1052
+{ 'execute': 'human-monitor-command',
1053
+ 'arguments': { 'command-line':
1054
+ 'qemu-io source "read -P 1 0k 64k"' } }
1055
read 65536/65536 bytes at offset 0
1056
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1057
{"return": ""}
1058
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "read -P 2 64k 64k"' } }
1059
+{ 'execute': 'human-monitor-command',
1060
+ 'arguments': { 'command-line':
1061
+ 'qemu-io source "read -P 2 64k 64k"' } }
1062
read 65536/65536 bytes at offset 65536
1063
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1064
{"return": ""}
1065
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "read -P 3 128k 64k"' } }
1066
+{ 'execute': 'human-monitor-command',
1067
+ 'arguments': { 'command-line':
1068
+ 'qemu-io source "read -P 3 128k 64k"' } }
1069
read 65536/65536 bytes at offset 131072
1070
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1071
{"return": ""}
1072
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io source "read -P 4 192k 64k"' } }
1073
+{ 'execute': 'human-monitor-command',
1074
+ 'arguments': { 'command-line':
1075
+ 'qemu-io source "read -P 4 192k 64k"' } }
1076
read 65536/65536 bytes at offset 196608
1077
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1078
{"return": ""}
1079
diff --git a/tests/qemu-iotests/161.out b/tests/qemu-iotests/161.out
1080
index XXXXXXX..XXXXXXX 100644
1081
--- a/tests/qemu-iotests/161.out
1082
+++ b/tests/qemu-iotests/161.out
1083
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.
1084
1085
{ 'execute': 'qmp_capabilities' }
1086
{"return": {}}
1087
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1088
+{ 'execute': 'human-monitor-command',
1089
+ 'arguments': { 'command-line':
1090
+ 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1091
{"return": ""}
1092
1093
*** Stream and then change an option on the backing file
1094
1095
{ 'execute': 'qmp_capabilities' }
1096
{"return": {}}
1097
-{ 'execute': 'block-stream', 'arguments': { 'device': 'none0', 'base': 'TEST_DIR/t.IMGFMT.base' } }
1098
+{ 'execute': 'block-stream', 'arguments': { 'device': 'none0',
1099
+ 'base': 'TEST_DIR/t.IMGFMT.base' } }
1100
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "none0"}}
1101
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "none0"}}
1102
{"return": {}}
1103
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1104
+{ 'execute': 'human-monitor-command',
1105
+ 'arguments': { 'command-line':
1106
+ 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1107
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "none0"}}
1108
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "none0"}}
1109
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "none0", "len": 1048576, "offset": 1048576, "speed": 0, "type": "stream"}}
1110
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT.int', fmt=IMGFMT size=1048576 backing_file=TEST_DI
1111
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT.int backing_fmt=IMGFMT
1112
{ 'execute': 'qmp_capabilities' }
1113
{"return": {}}
1114
-{ 'execute': 'block-commit', 'arguments': { 'device': 'none0', 'top': 'TEST_DIR/t.IMGFMT.int' } }
1115
+{ 'execute': 'block-commit', 'arguments': { 'device': 'none0',
1116
+ 'top': 'TEST_DIR/t.IMGFMT.int' } }
1117
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "none0"}}
1118
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "none0"}}
1119
{"return": {}}
1120
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1121
+{ 'execute': 'human-monitor-command',
1122
+ 'arguments': { 'command-line':
1123
+ 'qemu-io none0 "reopen -o backing.detect-zeroes=on"' } }
1124
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "none0"}}
1125
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "none0"}}
1126
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "none0", "len": 1048576, "offset": 1048576, "speed": 0, "type": "commit"}}
1127
diff --git a/tests/qemu-iotests/173.out b/tests/qemu-iotests/173.out
1128
index XXXXXXX..XXXXXXX 100644
1129
--- a/tests/qemu-iotests/173.out
1130
+++ b/tests/qemu-iotests/173.out
1131
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/image.snp1', fmt=IMGFMT size=104857600
1132
1133
{ 'execute': 'qmp_capabilities' }
1134
{"return": {}}
1135
-{ 'arguments': { 'device': 'disk2', 'format': 'IMGFMT', 'mode': 'existing', 'snapshot-file': 'TEST_DIR/image.snp1', 'snapshot-node-name': 'snp1' }, 'execute': 'blockdev-snapshot-sync' }
1136
+{ 'arguments': {
1137
+ 'device': 'disk2',
1138
+ 'format': 'IMGFMT',
1139
+ 'mode': 'existing',
1140
+ 'snapshot-file': 'TEST_DIR/image.snp1',
1141
+ 'snapshot-node-name': 'snp1'
1142
+ },
1143
+ 'execute': 'blockdev-snapshot-sync'
1144
+ }
1145
{"return": {}}
1146
-{ 'arguments': { 'backing-file': 'image.base', 'device': 'disk2', 'image-node-name': 'snp1' }, 'execute': 'change-backing-file' }
1147
+{ 'arguments': {
1148
+ 'backing-file': 'image.base',
1149
+ 'device': 'disk2',
1150
+ 'image-node-name': 'snp1'
1151
+ },
1152
+ 'execute': 'change-backing-file'
1153
+ }
1154
{"return": {}}
1155
-{ 'arguments': { 'base': 'TEST_DIR/image.base', 'device': 'disk2' }, 'execute': 'block-stream' }
1156
+{ 'arguments': {
1157
+ 'base': 'TEST_DIR/image.base',
1158
+ 'device': 'disk2'
1159
+ },
1160
+ 'execute': 'block-stream'
1161
+ }
1162
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk2"}}
1163
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk2"}}
1164
{"return": {}}
1165
diff --git a/tests/qemu-iotests/182.out b/tests/qemu-iotests/182.out
1166
index XXXXXXX..XXXXXXX 100644
1167
--- a/tests/qemu-iotests/182.out
1168
+++ b/tests/qemu-iotests/182.out
1169
@@ -XXX,XX +XXX,XX @@ Is another process using the image [TEST_DIR/t.qcow2]?
1170
1171
{'execute': 'qmp_capabilities'}
1172
{"return": {}}
1173
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'node0', 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT', 'locking': 'on' } }
1174
-{"return": {}}
1175
-{'execute': 'blockdev-snapshot-sync', 'arguments': { 'node-name': 'node0', 'snapshot-file': 'TEST_DIR/t.IMGFMT.overlay', 'snapshot-node-name': 'node1' } }
1176
+{'execute': 'blockdev-add',
1177
+ 'arguments': {
1178
+ 'node-name': 'node0',
1179
+ 'driver': 'file',
1180
+ 'filename': 'TEST_DIR/t.IMGFMT',
1181
+ 'locking': 'on'
1182
+ } }
1183
+{"return": {}}
1184
+{'execute': 'blockdev-snapshot-sync',
1185
+ 'arguments': {
1186
+ 'node-name': 'node0',
1187
+ 'snapshot-file': 'TEST_DIR/t.IMGFMT.overlay',
1188
+ 'snapshot-node-name': 'node1'
1189
+ } }
1190
Formatting 'TEST_DIR/t.qcow2.overlay', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=197120 backing_file=TEST_DIR/t.qcow2 backing_fmt=file lazy_refcounts=off refcount_bits=16
1191
{"return": {}}
1192
-{'execute': 'blockdev-add', 'arguments': { 'node-name': 'node1', 'driver': 'file', 'filename': 'TEST_DIR/t.IMGFMT', 'locking': 'on' } }
1193
-{"return": {}}
1194
-{'execute': 'nbd-server-start', 'arguments': { 'addr': { 'type': 'unix', 'data': { 'path': 'SOCK_DIR/nbd.socket' } } } }
1195
-{"return": {}}
1196
-{'execute': 'nbd-server-add', 'arguments': { 'device': 'node1' } }
1197
+{'execute': 'blockdev-add',
1198
+ 'arguments': {
1199
+ 'node-name': 'node1',
1200
+ 'driver': 'file',
1201
+ 'filename': 'TEST_DIR/t.IMGFMT',
1202
+ 'locking': 'on'
1203
+ } }
1204
+{"return": {}}
1205
+{'execute': 'nbd-server-start',
1206
+ 'arguments': {
1207
+ 'addr': {
1208
+ 'type': 'unix',
1209
+ 'data': {
1210
+ 'path': 'SOCK_DIR/nbd.socket'
1211
+ } } } }
1212
+{"return": {}}
1213
+{'execute': 'nbd-server-add',
1214
+ 'arguments': {
1215
+ 'device': 'node1'
1216
+ } }
1217
{"return": {}}
1218
1219
=== Testing failure to loosen restrictions ===
1220
diff --git a/tests/qemu-iotests/183.out b/tests/qemu-iotests/183.out
1221
index XXXXXXX..XXXXXXX 100644
1222
--- a/tests/qemu-iotests/183.out
1223
+++ b/tests/qemu-iotests/183.out
1224
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT.dest', fmt=IMGFMT size=67108864
1225
1226
=== Write something on the source ===
1227
1228
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "write -P 0x55 0 64k"' } }
1229
+{ 'execute': 'human-monitor-command',
1230
+ 'arguments': { 'command-line':
1231
+ 'qemu-io disk "write -P 0x55 0 64k"' } }
1232
wrote 65536/65536 bytes at offset 0
1233
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1234
{"return": ""}
1235
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "read -P 0x55 0 64k"' } }
1236
+{ 'execute': 'human-monitor-command',
1237
+ 'arguments': { 'command-line':
1238
+ 'qemu-io disk "read -P 0x55 0 64k"' } }
1239
read 65536/65536 bytes at offset 0
1240
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1241
{"return": ""}
1242
1243
=== Do block migration to destination ===
1244
1245
-{ 'execute': 'migrate', 'arguments': { 'uri': 'unix:SOCK_DIR/migrate', 'blk': true } }
1246
+{ 'execute': 'migrate',
1247
+ 'arguments': { 'uri': 'unix:SOCK_DIR/migrate', 'blk': true } }
1248
{"return": {}}
1249
{ 'execute': 'query-status' }
1250
{"return": {"status": "postmigrate", "singlestep": false, "running": false}}
1251
@@ -XXX,XX +XXX,XX @@ read 65536/65536 bytes at offset 0
1252
{ 'execute': 'query-status' }
1253
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "RESUME"}
1254
{"return": {"status": "running", "singlestep": false, "running": true}}
1255
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "read -P 0x55 0 64k"' } }
1256
+{ 'execute': 'human-monitor-command',
1257
+ 'arguments': { 'command-line':
1258
+ 'qemu-io disk "read -P 0x55 0 64k"' } }
1259
read 65536/65536 bytes at offset 0
1260
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1261
{"return": ""}
1262
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "write -P 0x66 1M 64k"' } }
1263
+{ 'execute': 'human-monitor-command',
1264
+ 'arguments': { 'command-line':
1265
+ 'qemu-io disk "write -P 0x66 1M 64k"' } }
1266
wrote 65536/65536 bytes at offset 1048576
1267
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1268
{"return": ""}
1269
diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
1270
index XXXXXXX..XXXXXXX 100644
1271
--- a/tests/qemu-iotests/185.out
1272
+++ b/tests/qemu-iotests/185.out
1273
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=67108864
1274
1275
=== Creating backing chain ===
1276
1277
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'disk', 'snapshot-file': 'TEST_DIR/t.IMGFMT.mid', 'format': 'IMGFMT', 'mode': 'absolute-paths' } }
1278
+{ 'execute': 'blockdev-snapshot-sync',
1279
+ 'arguments': { 'device': 'disk',
1280
+ 'snapshot-file': 'TEST_DIR/t.IMGFMT.mid',
1281
+ 'format': 'IMGFMT',
1282
+ 'mode': 'absolute-paths' } }
1283
Formatting 'TEST_DIR/t.qcow2.mid', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 backing_file=TEST_DIR/t.qcow2.base backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
1284
{"return": {}}
1285
-{ 'execute': 'human-monitor-command', 'arguments': { 'command-line': 'qemu-io disk "write 0 4M"' } }
1286
+{ 'execute': 'human-monitor-command',
1287
+ 'arguments': { 'command-line':
1288
+ 'qemu-io disk "write 0 4M"' } }
1289
wrote 4194304/4194304 bytes at offset 0
1290
4 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1291
{"return": ""}
1292
-{ 'execute': 'blockdev-snapshot-sync', 'arguments': { 'device': 'disk', 'snapshot-file': 'TEST_DIR/t.IMGFMT', 'format': 'IMGFMT', 'mode': 'absolute-paths' } }
1293
+{ 'execute': 'blockdev-snapshot-sync',
1294
+ 'arguments': { 'device': 'disk',
1295
+ 'snapshot-file': 'TEST_DIR/t.IMGFMT',
1296
+ 'format': 'IMGFMT',
1297
+ 'mode': 'absolute-paths' } }
1298
Formatting 'TEST_DIR/t.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 backing_file=TEST_DIR/t.qcow2.mid backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
1299
{"return": {}}
1300
1301
=== Start commit job and exit qemu ===
1302
1303
-{ 'execute': 'block-commit', 'arguments': { 'device': 'disk', 'base':'TEST_DIR/t.IMGFMT.base', 'top': 'TEST_DIR/t.IMGFMT.mid', 'speed': 65536 } }
1304
+{ 'execute': 'block-commit',
1305
+ 'arguments': { 'device': 'disk',
1306
+ 'base':'TEST_DIR/t.IMGFMT.base',
1307
+ 'top': 'TEST_DIR/t.IMGFMT.mid',
1308
+ 'speed': 65536 } }
1309
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1310
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1311
{"return": {}}
1312
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off comp
1313
1314
{ 'execute': 'qmp_capabilities' }
1315
{"return": {}}
1316
-{ 'execute': 'block-commit', 'arguments': { 'device': 'disk', 'base':'TEST_DIR/t.IMGFMT.base', 'speed': 65536 } }
1317
+{ 'execute': 'block-commit',
1318
+ 'arguments': { 'device': 'disk',
1319
+ 'base':'TEST_DIR/t.IMGFMT.base',
1320
+ 'speed': 65536 } }
1321
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1322
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1323
{"return": {}}
1324
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off comp
1325
1326
{ 'execute': 'qmp_capabilities' }
1327
{"return": {}}
1328
-{ 'execute': 'drive-mirror', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536 } }
1329
+{ 'execute': 'drive-mirror',
1330
+ 'arguments': { 'device': 'disk',
1331
+ 'target': 'TEST_DIR/t.IMGFMT.copy',
1332
+ 'format': 'IMGFMT',
1333
+ 'sync': 'full',
1334
+ 'speed': 65536 } }
1335
Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 lazy_refcounts=off refcount_bits=16
1336
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1337
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1338
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off
1339
1340
{ 'execute': 'qmp_capabilities' }
1341
{"return": {}}
1342
-{ 'execute': 'drive-backup', 'arguments': { 'device': 'disk', 'target': 'TEST_DIR/t.IMGFMT.copy', 'format': 'IMGFMT', 'sync': 'full', 'speed': 65536 } }
1343
+{ 'execute': 'drive-backup',
1344
+ 'arguments': { 'device': 'disk',
1345
+ 'target': 'TEST_DIR/t.IMGFMT.copy',
1346
+ 'format': 'IMGFMT',
1347
+ 'sync': 'full',
1348
+ 'speed': 65536 } }
1349
Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 lazy_refcounts=off refcount_bits=16
1350
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1351
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1352
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off
1353
1354
{ 'execute': 'qmp_capabilities' }
1355
{"return": {}}
1356
-{ 'execute': 'block-stream', 'arguments': { 'device': 'disk', 'speed': 65536 } }
1357
+{ 'execute': 'block-stream',
1358
+ 'arguments': { 'device': 'disk',
1359
+ 'speed': 65536 } }
1360
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
1361
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
1362
{"return": {}}
1363
diff --git a/tests/qemu-iotests/191.out b/tests/qemu-iotests/191.out
1364
index XXXXXXX..XXXXXXX 100644
1365
--- a/tests/qemu-iotests/191.out
1366
+++ b/tests/qemu-iotests/191.out
1367
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 1048576
1368
1369
=== Perform commit job ===
1370
1371
-{ 'execute': 'block-commit', 'arguments': { 'job-id': 'commit0', 'device': 'top', 'base':'TEST_DIR/t.IMGFMT.base', 'top': 'TEST_DIR/t.IMGFMT.mid' } }
1372
+{ 'execute': 'block-commit',
1373
+ 'arguments': { 'job-id': 'commit0',
1374
+ 'device': 'top',
1375
+ 'base':'TEST_DIR/t.IMGFMT.base',
1376
+ 'top': 'TEST_DIR/t.IMGFMT.mid' } }
1377
{
1378
"timestamp": {
1379
"seconds": TIMESTAMP,
1380
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 1048576
1381
1382
=== Perform commit job ===
1383
1384
-{ 'execute': 'block-commit', 'arguments': { 'job-id': 'commit0', 'device': 'top', 'base':'TEST_DIR/t.IMGFMT.base', 'top': 'TEST_DIR/t.IMGFMT.mid' } }
1385
+{ 'execute': 'block-commit',
1386
+ 'arguments': { 'job-id': 'commit0',
1387
+ 'device': 'top',
1388
+ 'base':'TEST_DIR/t.IMGFMT.base',
1389
+ 'top': 'TEST_DIR/t.IMGFMT.mid' } }
1390
{
1391
"timestamp": {
1392
"seconds": TIMESTAMP,
1393
diff --git a/tests/qemu-iotests/223.out b/tests/qemu-iotests/223.out
1394
index XXXXXXX..XXXXXXX 100644
1395
--- a/tests/qemu-iotests/223.out
1396
+++ b/tests/qemu-iotests/223.out
1397
@@ -XXX,XX +XXX,XX @@ wrote 2097152/2097152 bytes at offset 2097152
1398
1399
{"execute":"qmp_capabilities"}
1400
{"return": {}}
1401
-{"execute":"blockdev-add", "arguments":{"driver":"IMGFMT", "node-name":"n", "file":{"driver":"file", "filename":"TEST_DIR/t.IMGFMT"}}}
1402
+{"execute":"blockdev-add",
1403
+ "arguments":{"driver":"IMGFMT", "node-name":"n",
1404
+ "file":{"driver":"file", "filename":"TEST_DIR/t.IMGFMT"}}}
1405
{"return": {}}
1406
-{"execute":"block-dirty-bitmap-disable", "arguments":{"node":"n", "name":"b"}}
1407
+{"execute":"block-dirty-bitmap-disable",
1408
+ "arguments":{"node":"n", "name":"b"}}
1409
{"return": {}}
1410
1411
=== Set up NBD with normal access ===
1412
1413
-{"execute":"nbd-server-add", "arguments":{"device":"n"}}
1414
+{"execute":"nbd-server-add",
1415
+ "arguments":{"device":"n"}}
1416
{"error": {"class": "GenericError", "desc": "NBD server not running"}}
1417
-{"execute":"nbd-server-start", "arguments":{"addr":{"type":"unix", "data":{"path":"SOCK_DIR/nbd"}}}}
1418
+{"execute":"nbd-server-start",
1419
+ "arguments":{"addr":{"type":"unix",
1420
+ "data":{"path":"SOCK_DIR/nbd"}}}}
1421
{"return": {}}
1422
-{"execute":"nbd-server-start", "arguments":{"addr":{"type":"unix", "data":{"path":"SOCK_DIR/nbd1"}}}}
1423
+{"execute":"nbd-server-start",
1424
+ "arguments":{"addr":{"type":"unix",
1425
+ "data":{"path":"SOCK_DIR/nbd1"}}}}
1426
{"error": {"class": "GenericError", "desc": "NBD server already running"}}
1427
exports available: 0
1428
-{"execute":"nbd-server-add", "arguments":{"device":"n", "bitmap":"b"}}
1429
+{"execute":"nbd-server-add",
1430
+ "arguments":{"device":"n", "bitmap":"b"}}
1431
{"return": {}}
1432
-{"execute":"nbd-server-add", "arguments":{"device":"nosuch"}}
1433
+{"execute":"nbd-server-add",
1434
+ "arguments":{"device":"nosuch"}}
1435
{"error": {"class": "GenericError", "desc": "Cannot find device=nosuch nor node_name=nosuch"}}
1436
-{"execute":"nbd-server-add", "arguments":{"device":"n"}}
1437
+{"execute":"nbd-server-add",
1438
+ "arguments":{"device":"n"}}
1439
{"error": {"class": "GenericError", "desc": "Block export id 'n' is already in use"}}
1440
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "bitmap":"b2"}}
1441
+{"execute":"nbd-server-add",
1442
+ "arguments":{"device":"n", "name":"n2",
1443
+ "bitmap":"b2"}}
1444
{"error": {"class": "GenericError", "desc": "Enabled bitmap 'b2' incompatible with readonly export"}}
1445
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "bitmap":"b3"}}
1446
+{"execute":"nbd-server-add",
1447
+ "arguments":{"device":"n", "name":"n2",
1448
+ "bitmap":"b3"}}
1449
{"error": {"class": "GenericError", "desc": "Bitmap 'b3' is not found"}}
1450
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "writable":true, "description":"some text", "bitmap":"b2"}}
1451
+{"execute":"nbd-server-add",
1452
+ "arguments":{"device":"n", "name":"n2", "writable":true,
1453
+ "description":"some text", "bitmap":"b2"}}
1454
{"return": {}}
1455
exports available: 2
1456
export: 'n'
1457
@@ -XXX,XX +XXX,XX @@ read 2097152/2097152 bytes at offset 2097152
1458
1459
=== End qemu NBD server ===
1460
1461
-{"execute":"nbd-server-remove", "arguments":{"name":"n"}}
1462
+{"execute":"nbd-server-remove",
1463
+ "arguments":{"name":"n"}}
1464
{"return": {}}
1465
-{"execute":"nbd-server-remove", "arguments":{"name":"n2"}}
1466
+{"execute":"nbd-server-remove",
1467
+ "arguments":{"name":"n2"}}
1468
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "n"}}
1469
{"return": {}}
1470
-{"execute":"nbd-server-remove", "arguments":{"name":"n2"}}
1471
+{"execute":"nbd-server-remove",
1472
+ "arguments":{"name":"n2"}}
1473
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "n2"}}
1474
{"error": {"class": "GenericError", "desc": "Export 'n2' is not found"}}
1475
{"execute":"nbd-server-stop"}
1476
@@ -XXX,XX +XXX,XX @@ read 2097152/2097152 bytes at offset 2097152
1477
1478
=== Set up NBD with iothread access ===
1479
1480
-{"execute":"x-blockdev-set-iothread", "arguments":{"node-name":"n", "iothread":"io0"}}
1481
+{"execute":"x-blockdev-set-iothread",
1482
+ "arguments":{"node-name":"n", "iothread":"io0"}}
1483
{"return": {}}
1484
-{"execute":"nbd-server-add", "arguments":{"device":"n"}}
1485
+{"execute":"nbd-server-add",
1486
+ "arguments":{"device":"n"}}
1487
{"error": {"class": "GenericError", "desc": "NBD server not running"}}
1488
-{"execute":"nbd-server-start", "arguments":{"addr":{"type":"unix", "data":{"path":"SOCK_DIR/nbd"}}}}
1489
+{"execute":"nbd-server-start",
1490
+ "arguments":{"addr":{"type":"unix",
1491
+ "data":{"path":"SOCK_DIR/nbd"}}}}
1492
{"return": {}}
1493
-{"execute":"nbd-server-start", "arguments":{"addr":{"type":"unix", "data":{"path":"SOCK_DIR/nbd1"}}}}
1494
+{"execute":"nbd-server-start",
1495
+ "arguments":{"addr":{"type":"unix",
1496
+ "data":{"path":"SOCK_DIR/nbd1"}}}}
1497
{"error": {"class": "GenericError", "desc": "NBD server already running"}}
1498
exports available: 0
1499
-{"execute":"nbd-server-add", "arguments":{"device":"n", "bitmap":"b"}}
1500
+{"execute":"nbd-server-add",
1501
+ "arguments":{"device":"n", "bitmap":"b"}}
1502
{"return": {}}
1503
-{"execute":"nbd-server-add", "arguments":{"device":"nosuch"}}
1504
+{"execute":"nbd-server-add",
1505
+ "arguments":{"device":"nosuch"}}
1506
{"error": {"class": "GenericError", "desc": "Cannot find device=nosuch nor node_name=nosuch"}}
1507
-{"execute":"nbd-server-add", "arguments":{"device":"n"}}
1508
+{"execute":"nbd-server-add",
1509
+ "arguments":{"device":"n"}}
1510
{"error": {"class": "GenericError", "desc": "Block export id 'n' is already in use"}}
1511
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "bitmap":"b2"}}
1512
+{"execute":"nbd-server-add",
1513
+ "arguments":{"device":"n", "name":"n2",
1514
+ "bitmap":"b2"}}
1515
{"error": {"class": "GenericError", "desc": "Enabled bitmap 'b2' incompatible with readonly export"}}
1516
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "bitmap":"b3"}}
1517
+{"execute":"nbd-server-add",
1518
+ "arguments":{"device":"n", "name":"n2",
1519
+ "bitmap":"b3"}}
1520
{"error": {"class": "GenericError", "desc": "Bitmap 'b3' is not found"}}
1521
-{"execute":"nbd-server-add", "arguments":{"device":"n", "name":"n2", "writable":true, "description":"some text", "bitmap":"b2"}}
1522
+{"execute":"nbd-server-add",
1523
+ "arguments":{"device":"n", "name":"n2", "writable":true,
1524
+ "description":"some text", "bitmap":"b2"}}
1525
{"return": {}}
1526
exports available: 2
1527
export: 'n'
1528
@@ -XXX,XX +XXX,XX @@ read 2097152/2097152 bytes at offset 2097152
1529
1530
=== End qemu NBD server ===
1531
1532
-{"execute":"nbd-server-remove", "arguments":{"name":"n"}}
1533
+{"execute":"nbd-server-remove",
1534
+ "arguments":{"name":"n"}}
1535
{"return": {}}
1536
-{"execute":"nbd-server-remove", "arguments":{"name":"n2"}}
1537
+{"execute":"nbd-server-remove",
1538
+ "arguments":{"name":"n2"}}
1539
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "n"}}
1540
{"return": {}}
1541
-{"execute":"nbd-server-remove", "arguments":{"name":"n2"}}
1542
+{"execute":"nbd-server-remove",
1543
+ "arguments":{"name":"n2"}}
1544
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "n2"}}
1545
{"error": {"class": "GenericError", "desc": "Export 'n2' is not found"}}
1546
{"execute":"nbd-server-stop"}
1547
diff --git a/tests/qemu-iotests/229.out b/tests/qemu-iotests/229.out
1548
index XXXXXXX..XXXXXXX 100644
1549
--- a/tests/qemu-iotests/229.out
1550
+++ b/tests/qemu-iotests/229.out
1551
@@ -XXX,XX +XXX,XX @@ wrote 2097152/2097152 bytes at offset 0
1552
1553
=== Starting drive-mirror, causing error & stop ===
1554
1555
-{'execute': 'drive-mirror', 'arguments': {'device': 'testdisk', 'format': 'IMGFMT', 'target': 'blkdebug:TEST_DIR/blkdebug.conf:TEST_DIR/t.IMGFMT.dest', 'sync': 'full', 'mode': 'existing', 'on-source-error': 'stop', 'on-target-error': 'stop' }}
1556
+{'execute': 'drive-mirror',
1557
+ 'arguments': {'device': 'testdisk',
1558
+ 'format': 'IMGFMT',
1559
+ 'target': 'blkdebug:TEST_DIR/blkdebug.conf:TEST_DIR/t.IMGFMT.dest',
1560
+ 'sync': 'full',
1561
+ 'mode': 'existing',
1562
+ 'on-source-error': 'stop',
1563
+ 'on-target-error': 'stop' }}
1564
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "testdisk"}}
1565
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "testdisk"}}
1566
{"return": {}}
1567
@@ -XXX,XX +XXX,XX @@ wrote 2097152/2097152 bytes at offset 0
1568
1569
=== Force cancel job paused in error state ===
1570
1571
-{'execute': 'block-job-cancel', 'arguments': { 'device': 'testdisk', 'force': true}}
1572
+{'execute': 'block-job-cancel',
1573
+ 'arguments': { 'device': 'testdisk',
1574
+ 'force': true}}
1575
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "testdisk"}}
1576
{"return": {}}
1577
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "testdisk"}}
1578
diff --git a/tests/qemu-iotests/249.out b/tests/qemu-iotests/249.out
1579
index XXXXXXX..XXXXXXX 100644
1580
--- a/tests/qemu-iotests/249.out
1581
+++ b/tests/qemu-iotests/249.out
1582
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.
1583
1584
=== Send a write command to a drive opened in read-only mode (1)
1585
1586
-{ 'execute': 'human-monitor-command', 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1587
+{ 'execute': 'human-monitor-command',
1588
+ 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1589
{"return": "Block node is read-only\r\n"}
1590
1591
=== Run block-commit on base using an invalid filter node name
1592
1593
-{ 'execute': 'block-commit', 'arguments': {'job-id': 'job0', 'device': 'none1', 'top-node': 'int', 'filter-node-name': '1234'}}
1594
+{ 'execute': 'block-commit',
1595
+ 'arguments': {'job-id': 'job0', 'device': 'none1', 'top-node': 'int',
1596
+ 'filter-node-name': '1234'}}
1597
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
1598
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "job0"}}
1599
{"error": {"class": "GenericError", "desc": "Invalid node name"}}
1600
1601
=== Send a write command to a drive opened in read-only mode (2)
1602
1603
-{ 'execute': 'human-monitor-command', 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1604
+{ 'execute': 'human-monitor-command',
1605
+ 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1606
{"return": "Block node is read-only\r\n"}
1607
1608
=== Run block-commit on base using the default filter node name
1609
1610
-{ 'execute': 'block-commit', 'arguments': {'job-id': 'job0', 'device': 'none1', 'top-node': 'int'}}
1611
+{ 'execute': 'block-commit',
1612
+ 'arguments': {'job-id': 'job0', 'device': 'none1', 'top-node': 'int'}}
1613
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
1614
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
1615
{"return": {}}
1616
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.
1617
1618
=== Send a write command to a drive opened in read-only mode (3)
1619
1620
-{ 'execute': 'human-monitor-command', 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1621
+{ 'execute': 'human-monitor-command',
1622
+ 'arguments': {'command-line': 'qemu-io none0 "aio_write 0 2k"'}}
1623
{"return": "Block node is read-only\r\n"}
1624
*** done
1625
diff --git a/tests/qemu-iotests/308.out b/tests/qemu-iotests/308.out
1626
index XXXXXXX..XXXXXXX 100644
1627
--- a/tests/qemu-iotests/308.out
1628
+++ b/tests/qemu-iotests/308.out
1629
@@ -XXX,XX +XXX,XX @@ wrote 67108864/67108864 bytes at offset 0
1630
64 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1631
{'execute': 'qmp_capabilities'}
1632
{"return": {}}
1633
-{'execute': 'blockdev-add', 'arguments': { 'driver': 'file', 'node-name': 'node-protocol', 'filename': 'TEST_DIR/t.IMGFMT' } }
1634
+{'execute': 'blockdev-add',
1635
+ 'arguments': {
1636
+ 'driver': 'file',
1637
+ 'node-name': 'node-protocol',
1638
+ 'filename': 'TEST_DIR/t.IMGFMT'
1639
+ } }
1640
{"return": {}}
1641
-{'execute': 'blockdev-add', 'arguments': { 'driver': 'IMGFMT', 'node-name': 'node-format', 'file': 'node-protocol' } }
1642
+{'execute': 'blockdev-add',
1643
+ 'arguments': {
1644
+ 'driver': 'IMGFMT',
1645
+ 'node-name': 'node-format',
1646
+ 'file': 'node-protocol'
1647
+ } }
1648
{"return": {}}
1649
1650
=== Mountpoint not present ===
1651
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-err', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse' } }
1652
+{'execute': 'block-export-add',
1653
+ 'arguments': {
1654
+ 'type': 'fuse',
1655
+ 'id': 'export-err',
1656
+ 'node-name': 'node-format',
1657
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse'
1658
+ } }
1659
{"error": {"class": "GenericError", "desc": "Failed to stat 'TEST_DIR/t.IMGFMT.fuse': No such file or directory"}}
1660
1661
=== Mountpoint is a directory ===
1662
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-err', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse' } }
1663
+{'execute': 'block-export-add',
1664
+ 'arguments': {
1665
+ 'type': 'fuse',
1666
+ 'id': 'export-err',
1667
+ 'node-name': 'node-format',
1668
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse'
1669
+ } }
1670
{"error": {"class": "GenericError", "desc": "'TEST_DIR/t.IMGFMT.fuse' is not a regular file"}}
1671
1672
=== Mountpoint is a regular file ===
1673
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-mp', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse' } }
1674
+{'execute': 'block-export-add',
1675
+ 'arguments': {
1676
+ 'type': 'fuse',
1677
+ 'id': 'export-mp',
1678
+ 'node-name': 'node-format',
1679
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse'
1680
+ } }
1681
{"return": {}}
1682
Images are identical.
1683
1684
=== Mount over existing file ===
1685
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-img', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT' } }
1686
+{'execute': 'block-export-add',
1687
+ 'arguments': {
1688
+ 'type': 'fuse',
1689
+ 'id': 'export-img',
1690
+ 'node-name': 'node-format',
1691
+ 'mountpoint': 'TEST_DIR/t.IMGFMT'
1692
+ } }
1693
{"return": {}}
1694
Images are identical.
1695
1696
=== Double export ===
1697
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-err', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse' } }
1698
+{'execute': 'block-export-add',
1699
+ 'arguments': {
1700
+ 'type': 'fuse',
1701
+ 'id': 'export-err',
1702
+ 'node-name': 'node-format',
1703
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse'
1704
+ } }
1705
{"error": {"class": "GenericError", "desc": "There already is a FUSE export on 'TEST_DIR/t.IMGFMT.fuse'"}}
1706
1707
=== Remove export ===
1708
virtual size: 64 MiB (67108864 bytes)
1709
-{'execute': 'block-export-del', 'arguments': { 'id': 'export-mp' } }
1710
+{'execute': 'block-export-del',
1711
+ 'arguments': {
1712
+ 'id': 'export-mp'
1713
+ } }
1714
{"return": {}}
1715
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "export-mp"}}
1716
virtual size: 0 B (0 bytes)
1717
1718
=== Writable export ===
1719
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-mp', 'node-name': 'node-format', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true } }
1720
+{'execute': 'block-export-add',
1721
+ 'arguments': {
1722
+ 'type': 'fuse',
1723
+ 'id': 'export-mp',
1724
+ 'node-name': 'node-format',
1725
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true
1726
+ } }
1727
{"return": {}}
1728
write failed: Permission denied
1729
wrote 65536/65536 bytes at offset 1048576
1730
@@ -XXX,XX +XXX,XX @@ wrote 65536/65536 bytes at offset 1048576
1731
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
1732
1733
=== Resizing exports ===
1734
-{'execute': 'block-export-del', 'arguments': { 'id': 'export-mp' } }
1735
+{'execute': 'block-export-del',
1736
+ 'arguments': {
1737
+ 'id': 'export-mp'
1738
+ } }
1739
{"return": {}}
1740
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "export-mp"}}
1741
-{'execute': 'block-export-del', 'arguments': { 'id': 'export-img' } }
1742
+{'execute': 'block-export-del',
1743
+ 'arguments': {
1744
+ 'id': 'export-img'
1745
+ } }
1746
{"return": {}}
1747
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "export-img"}}
1748
-{'execute': 'blockdev-del', 'arguments': { 'node-name': 'node-format' } }
1749
+{'execute': 'blockdev-del',
1750
+ 'arguments': {
1751
+ 'node-name': 'node-format'
1752
+ } }
1753
{"return": {}}
1754
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-mp', 'node-name': 'node-protocol', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true } }
1755
+{'execute': 'block-export-add',
1756
+ 'arguments': {
1757
+ 'type': 'fuse',
1758
+ 'id': 'export-mp',
1759
+ 'node-name': 'node-protocol',
1760
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true
1761
+ } }
1762
{"return": {}}
1763
1764
--- Try growing non-growable export ---
1765
@@ -XXX,XX +XXX,XX @@ OK: Post-truncate image size is as expected
1766
OK: Disk usage grew with fallocate
1767
1768
--- Try growing growable export ---
1769
-{'execute': 'block-export-del', 'arguments': { 'id': 'export-mp' } }
1770
+{'execute': 'block-export-del',
1771
+ 'arguments': {
1772
+ 'id': 'export-mp'
1773
+ } }
1774
{"return": {}}
1775
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_EXPORT_DELETED", "data": {"id": "export-mp"}}
1776
-{'execute': 'block-export-add', 'arguments': { 'type': 'fuse', 'id': 'export-mp', 'node-name': 'node-protocol', 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true, 'growable': true } }
1777
+{'execute': 'block-export-add',
1778
+ 'arguments': {
1779
+ 'type': 'fuse',
1780
+ 'id': 'export-mp',
1781
+ 'node-name': 'node-protocol',
1782
+ 'mountpoint': 'TEST_DIR/t.IMGFMT.fuse', 'writable': true, 'growable': true
1783
+ } }
1784
{"return": {}}
1785
65536+0 records in
1786
65536+0 records out
1787
diff --git a/tests/qemu-iotests/312.out b/tests/qemu-iotests/312.out
1788
index XXXXXXX..XXXXXXX 100644
1789
--- a/tests/qemu-iotests/312.out
1790
+++ b/tests/qemu-iotests/312.out
1791
@@ -XXX,XX +XXX,XX @@ read 65536/65536 bytes at offset 2424832
1792
1793
{ 'execute': 'qmp_capabilities' }
1794
{"return": {}}
1795
-{'execute': 'drive-mirror', 'arguments': {'device': 'virtio0', 'format': 'IMGFMT', 'target': 'TEST_DIR/t.IMGFMT.3', 'sync': 'full', 'mode': 'existing' }}
1796
+{'execute': 'drive-mirror',
1797
+ 'arguments': {'device': 'virtio0',
1798
+ 'format': 'IMGFMT',
1799
+ 'target': 'TEST_DIR/t.IMGFMT.3',
1800
+ 'sync': 'full',
1801
+ 'mode': 'existing' }}
1802
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "virtio0"}}
1803
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "virtio0"}}
1804
{"return": {}}
1805
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "virtio0"}}
1806
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "virtio0", "len": 10485760, "offset": 10485760, "speed": 0, "type": "mirror"}}
1807
-{ 'execute': 'block-job-complete', 'arguments': { 'device': 'virtio0' } }
1808
+{ 'execute': 'block-job-complete',
1809
+ 'arguments': { 'device': 'virtio0' } }
1810
{"return": {}}
1811
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "virtio0"}}
1812
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "virtio0"}}
1813
diff --git a/tests/qemu-iotests/common.qemu b/tests/qemu-iotests/common.qemu
1814
index XXXXXXX..XXXXXXX 100644
1815
--- a/tests/qemu-iotests/common.qemu
1816
+++ b/tests/qemu-iotests/common.qemu
1817
@@ -XXX,XX +XXX,XX @@ _send_qemu_cmd()
1818
count=${qemu_cmd_repeat}
1819
use_error="no"
1820
fi
1821
- # This array element extraction is done to accommodate pathnames with spaces
1822
- if [ -z "${success_or_failure}" ]; then
1823
- cmd=${@: 1:${#@}-1}
1824
- shift $(($# - 1))
1825
- else
1826
- cmd=${@: 1:${#@}-2}
1827
- shift $(($# - 2))
1828
- fi
1829
+
1830
+ cmd=$1
1831
+ shift
1832
1833
# Display QMP being sent, but not HMP (since HMP already echoes its
1834
# input back to output); decide based on leading '{'
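
(Illustration only, not part of the patch: the hunk above reduces the
argument handling to "cmd=$1; shift", so a caller has to pass the whole
command as a single quoted argument, followed by the string it waits for.
The stand-alone sketch below only demonstrates that calling convention;
the function and variable names are invented for the example and do not
exist in common.qemu.)

#!/usr/bin/env bash
# Hypothetical sketch of the "one quoted command argument" convention.
send_cmd_sketch()
{
    local cmd expected
    cmd=$1          # the entire command string, embedded spaces included
    shift
    expected=$1     # the string a real helper would wait for in the reply
    printf 'would send  : %s\n' "$cmd"
    printf 'would expect: %s\n' "$expected"
}

# Example: one quoted argument for the QMP command, one for the reply.
send_cmd_sketch "{ 'execute': 'qmp_capabilities' }" 'return'
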
50
--
1835
--
51
2.13.5
1836
2.29.2
52
1837
53
1838