The following changes since commit 9d662a6b22a0838a85c5432385f35db2488a33a5:

  Merge remote-tracking branch 'remotes/legoater/tags/pull-ppc-20220305' into staging (2022-03-05 18:03:15 +0000)

are available in the Git repository at:

  https://gitlab.com/hreitz/qemu.git tags/pull-block-2022-03-07

for you to fetch changes up to 743da0b401cdc3ee94bc519975e339a3cdbe0ad1:

  iotests/image-fleecing: test push backup with fleecing (2022-03-07 09:33:31 +0100)

----------------------------------------------------------------
Block patches for 7.0-rc0:
- New fleecing backup scheme
- iotest fixes
- Fixes for the curl block driver
- Fix for the preallocate block driver
- IDE fix for zero-length TRIM requests

----------------------------------------------------------------
Hanna Reitz (2):
  ide: Increment BB in-flight counter for TRIM BH
  iotests: Write test output to TEST_DIR

Peter Maydell (2):
  block/curl.c: Set error message string if curl_init_state() fails
  block/curl.c: Check error return from curl_easy_setopt()

Thomas Huth (2):
  tests/qemu-iotests/040: Skip TestCommitWithFilters without 'throttle'
  tests/qemu-iotests/testrunner: Quote "case not run" lines in TAP mode

Vladimir Sementsov-Ogievskiy (17):
  block: fix preallocate filter: don't do unaligned preallocate requests
  block/block-copy: move copy_bitmap initialization to block_copy_state_new()
  block/dirty-bitmap: bdrv_merge_dirty_bitmap(): add return value
  block/block-copy: block_copy_state_new(): add bitmap parameter
  block/copy-before-write: add bitmap open parameter
  block/block-copy: add block_copy_reset()
  block: intoduce reqlist
  block/reqlist: reqlist_find_conflict(): use ranges_overlap()
  block/dirty-bitmap: introduce bdrv_dirty_bitmap_status()
  block/reqlist: add reqlist_wait_all()
  block/io: introduce block driver snapshot-access API
  block: introduce snapshot-access block driver
  block: copy-before-write: realize snapshot-access API
  iotests/image-fleecing: add test-case for fleecing format node
  iotests.py: add qemu_io_pipe_and_status()
  iotests/image-fleecing: add test case with bitmap
  iotests/image-fleecing: test push backup with fleecing

 qapi/block-core.json | 14 +-
 include/block/block-common.h | 3 +-
 include/block/block-copy.h | 2 +
 include/block/block_int-common.h | 24 ++
 include/block/block_int-io.h | 9 +
 include/block/dirty-bitmap.h | 4 +-
 include/block/reqlist.h | 75 ++++++
 include/qemu/hbitmap.h | 12 +
 block/block-copy.c | 150 +++++------
 block/copy-before-write.c | 265 +++++++++++++++++++-
 block/curl.c | 92 ++++---
 block/dirty-bitmap.c | 15 +-
 block/io.c | 76 ++++++
 block/monitor/bitmap-qmp-cmds.c | 5 +-
 block/preallocate.c | 15 +-
 block/reqlist.c | 85 +++++++
 block/snapshot-access.c | 132 ++++++++++
 hw/ide/core.c | 7 +
 util/hbitmap.c | 33 +++
 MAINTAINERS | 5 +-
 block/meson.build | 2 +
 tests/qemu-iotests/040 | 1 +
 tests/qemu-iotests/257.out | 224 +++++++++++++++++
 tests/qemu-iotests/common.rc | 6 +-
 tests/qemu-iotests/iotests.py | 8 +-
 tests/qemu-iotests/testenv.py | 5 +-
 tests/qemu-iotests/testrunner.py | 19 +-
 tests/qemu-iotests/tests/image-fleecing | 185 +++++++++++---
 tests/qemu-iotests/tests/image-fleecing.out | 221 +++++++++++++++-
 29 files changed, 1499 insertions(+), 195 deletions(-)
 create mode 100644 include/block/reqlist.h
 create mode 100644 block/reqlist.c
 create mode 100644 block/snapshot-access.c

--
2.34.1
diff view generated by jsdifflib
When we still have an AIOCB registered for DMA operations, we try to
settle the respective operation by draining the BlockBackend associated
with the IDE device.

However, this assumes that every DMA operation is associated with an
increment of the BlockBackend’s in-flight counter (e.g. through some
ongoing I/O operation), so that draining the BB until its in-flight
counter reaches 0 will settle all DMA operations. That is not the case:
For TRIM, the guest can issue a zero-length operation that will not
result in any I/O operation forwarded to the BlockBackend, and also not
increment the in-flight counter in any other way. In such a case,
blk_drain() will be a no-op if no other operations are in flight.

It is clear that if blk_drain() is a no-op, the value of
s->bus->dma->aiocb will not change between checking it in the `if`
condition and asserting that it is NULL after blk_drain().

The particular problem is that ide_issue_trim() creates a BH
(ide_trim_bh_cb()) to settle the TRIM request: iocb->common.cb() is
ide_dma_cb(), which will either create a new request, or find the
transfer to be done and call ide_set_inactive(), which clears
s->bus->dma->aiocb. Therefore, the blk_drain() must wait for
ide_trim_bh_cb() to run, which currently it will not always do.

To fix this issue, we increment the BlockBackend's in-flight counter
when the TRIM operation begins (in ide_issue_trim(), when the
ide_trim_bh_cb() BH is created) and decrement it when ide_trim_bh_cb()
is done.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2029980
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220120142259.120189-1-hreitz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
---
 hw/ide/core.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/hw/ide/core.c b/hw/ide/core.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -XXX,XX +XXX,XX @@ static const AIOCBInfo trim_aiocb_info = {
 static void ide_trim_bh_cb(void *opaque)
 {
     TrimAIOCB *iocb = opaque;
+    BlockBackend *blk = iocb->s->blk;

     iocb->common.cb(iocb->common.opaque, iocb->ret);

     qemu_bh_delete(iocb->bh);
     iocb->bh = NULL;
     qemu_aio_unref(iocb);
+
+    /* Paired with an increment in ide_issue_trim() */
+    blk_dec_in_flight(blk);
 }

 static void ide_issue_trim_cb(void *opaque, int ret)
@@ -XXX,XX +XXX,XX @@ BlockAIOCB *ide_issue_trim(
     IDEState *s = opaque;
     TrimAIOCB *iocb;

+    /* Paired with a decrement in ide_trim_bh_cb() */
+    blk_inc_in_flight(s->blk);
+
     iocb = blk_aio_get(&trim_aiocb_info, s->blk, cb, cb_opaque);
     iocb->s = s;
     iocb->bh = qemu_bh_new(ide_trim_bh_cb, iocb);
--
2.34.1
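The core of the fix is a bracketing pattern worth calling out on its own: any completion work that is deferred to a bottom half on behalf of a BlockBackend must itself be covered by the in-flight counter, otherwise blk_drain() can return before the BH has run. The sketch below distills only that pattern; MyRequest, complete_request() and start_deferred_done() are hypothetical names invented for this illustration (they are not QEMU code), and the block is meant to be read in the context of the QEMU tree rather than built stand-alone.

/* Sketch of the pattern applied by the patch above -- not hw/ide code. */
#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "sysemu/block-backend.h"

typedef struct MyRequest {      /* hypothetical request type */
    BlockBackend *blk;
    QEMUBH *bh;
} MyRequest;

static void complete_request(MyRequest *req)
{
    /* hypothetical: deliver the result to the guest-facing layer */
}

static void deferred_done_bh(void *opaque)
{
    MyRequest *req = opaque;
    BlockBackend *blk = req->blk;

    complete_request(req);

    qemu_bh_delete(req->bh);
    req->bh = NULL;

    /* Paired with the increment made when the BH was scheduled */
    blk_dec_in_flight(blk);
}

static void start_deferred_done(MyRequest *req)
{
    /* Keep blk_drain() waiting until deferred_done_bh() has run */
    blk_inc_in_flight(req->blk);
    req->bh = qemu_bh_new(deferred_done_bh, req);
    qemu_bh_schedule(req->bh);
}

Without the increment, a zero-length TRIM leaves the counter at zero, which is exactly the window the commit message describes.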
diff view generated by jsdifflib
From: Peter Maydell <peter.maydell@linaro.org>

In curl_open(), the 'out' label assumes that the state->errmsg string
has been set (either by curl_easy_perform() or by manually copying a
string into it); however if curl_init_state() fails we will jump to
that label without setting the string. Add the missing error string
setup.

(We can't be specific about the cause of failure: the documentation
of curl_easy_init() just says "If this function returns NULL,
something went wrong".)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220222152341.850419-2-peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 block/curl.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/curl.c b/block/curl.c
index XXXXXXX..XXXXXXX 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -XXX,XX +XXX,XX @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
     // Get file size

     if (curl_init_state(s, state) < 0) {
+        pstrcpy(state->errmsg, CURL_ERROR_SIZE,
+                "curl library initialization failed.");
         goto out;
     }
--
2.34.1
diff view generated by jsdifflib
From: Peter Maydell <peter.maydell@linaro.org>

Coverity points out that we aren't checking the return value
from curl_easy_setopt() for any of the calls to it we make
in block/curl.c.

Some of these options are documented as always succeeding (e.g.
CURLOPT_VERBOSE) but others have documented failure cases (e.g.
CURLOPT_URL). For consistency we check every call, even the ones
that theoretically cannot fail.

Fixes: Coverity CID 1459336, 1459482, 1460331
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220222152341.850419-3-peter.maydell@linaro.org>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 block/curl.c | 90 +++++++++++++++++++++++++++++++++-------------------
 1 file changed, 57 insertions(+), 33 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index XXXXXXX..XXXXXXX 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -XXX,XX +XXX,XX @@ static int curl_init_state(BDRVCURLState *s, CURLState *state)
         if (!state->curl) {
             return -EIO;
         }
-        curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
-        curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYPEER,
-                         (long) s->sslverify);
-        curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYHOST,
-                         s->sslverify ? 2L : 0L);
+        if (curl_easy_setopt(state->curl, CURLOPT_URL, s->url) ||
+            curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYPEER,
+                             (long) s->sslverify) ||
+            curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYHOST,
+                             s->sslverify ? 2L : 0L)) {
+            goto err;
+        }
         if (s->cookie) {
-            curl_easy_setopt(state->curl, CURLOPT_COOKIE, s->cookie);
+            if (curl_easy_setopt(state->curl, CURLOPT_COOKIE, s->cookie)) {
+                goto err;
+            }
+        }
+        if (curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, (long)s->timeout) ||
+            curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION,
+                             (void *)curl_read_cb) ||
+            curl_easy_setopt(state->curl, CURLOPT_WRITEDATA, (void *)state) ||
+            curl_easy_setopt(state->curl, CURLOPT_PRIVATE, (void *)state) ||
+            curl_easy_setopt(state->curl, CURLOPT_AUTOREFERER, 1) ||
+            curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1) ||
+            curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1) ||
+            curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg) ||
+            curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1)) {
+            goto err;
         }
-        curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, (long)s->timeout);
-        curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION,
-                         (void *)curl_read_cb);
-        curl_easy_setopt(state->curl, CURLOPT_WRITEDATA, (void *)state);
-        curl_easy_setopt(state->curl, CURLOPT_PRIVATE, (void *)state);
-        curl_easy_setopt(state->curl, CURLOPT_AUTOREFERER, 1);
-        curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
-        curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
-        curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
-        curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
-
         if (s->username) {
-            curl_easy_setopt(state->curl, CURLOPT_USERNAME, s->username);
+            if (curl_easy_setopt(state->curl, CURLOPT_USERNAME, s->username)) {
+                goto err;
+            }
         }
         if (s->password) {
-            curl_easy_setopt(state->curl, CURLOPT_PASSWORD, s->password);
+            if (curl_easy_setopt(state->curl, CURLOPT_PASSWORD, s->password)) {
+                goto err;
+            }
         }
         if (s->proxyusername) {
-            curl_easy_setopt(state->curl,
-                             CURLOPT_PROXYUSERNAME, s->proxyusername);
+            if (curl_easy_setopt(state->curl,
+                                 CURLOPT_PROXYUSERNAME, s->proxyusername)) {
+                goto err;
+            }
         }
         if (s->proxypassword) {
-            curl_easy_setopt(state->curl,
-                             CURLOPT_PROXYPASSWORD, s->proxypassword);
+            if (curl_easy_setopt(state->curl,
+                                 CURLOPT_PROXYPASSWORD, s->proxypassword)) {
+                goto err;
+            }
         }

         /* Restrict supported protocols to avoid security issues in the more
@@ -XXX,XX +XXX,XX @@ static int curl_init_state(BDRVCURLState *s, CURLState *state)
          * Restricting protocols is only supported from 7.19.4 upwards.
          */
 #if LIBCURL_VERSION_NUM >= 0x071304
-        curl_easy_setopt(state->curl, CURLOPT_PROTOCOLS, PROTOCOLS);
-        curl_easy_setopt(state->curl, CURLOPT_REDIR_PROTOCOLS, PROTOCOLS);
+        if (curl_easy_setopt(state->curl, CURLOPT_PROTOCOLS, PROTOCOLS) ||
+            curl_easy_setopt(state->curl, CURLOPT_REDIR_PROTOCOLS, PROTOCOLS)) {
+            goto err;
+        }
 #endif

 #ifdef DEBUG_VERBOSE
-        curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
+        if (curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1)) {
+            goto err;
+        }
 #endif
     }

     state->s = s;

     return 0;
+
+err:
+    curl_easy_cleanup(state->curl);
+    state->curl = NULL;
+    return -EIO;
 }

 /* Called with s->mutex held. */
@@ -XXX,XX +XXX,XX @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
     }

     s->accept_range = false;
-    curl_easy_setopt(state->curl, CURLOPT_NOBODY, 1);
-    curl_easy_setopt(state->curl, CURLOPT_HEADERFUNCTION,
-                     curl_header_cb);
-    curl_easy_setopt(state->curl, CURLOPT_HEADERDATA, s);
+    if (curl_easy_setopt(state->curl, CURLOPT_NOBODY, 1) ||
+        curl_easy_setopt(state->curl, CURLOPT_HEADERFUNCTION, curl_header_cb) ||
+        curl_easy_setopt(state->curl, CURLOPT_HEADERDATA, s)) {
+        pstrcpy(state->errmsg, CURL_ERROR_SIZE,
+                "curl library initialization failed.");
+        goto out;
+    }
     if (curl_easy_perform(state->curl))
         goto out;
     if (curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &d)) {
@@ -XXX,XX +XXX,XX @@ static void curl_setup_preadv(BlockDriverState *bs, CURLAIOCB *acb)

     snprintf(state->range, 127, "%" PRIu64 "-%" PRIu64, start, end);
     trace_curl_setup_preadv(acb->bytes, start, state->range);
-    curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
-
-    if (curl_multi_add_handle(s->multi, state->curl) != CURLM_OK) {
+    if (curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range) ||
+        curl_multi_add_handle(s->multi, state->curl) != CURLM_OK) {
         state->acb[0] = NULL;
         acb->ret = -EIO;

--
2.34.1
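The change leans on the fact that curl_easy_setopt() returns CURLE_OK (0) on success, so a series of calls can be chained with || and the first failure short-circuits to the error path. A minimal stand-alone illustration of that idiom, independent of QEMU (the URL and option set here are arbitrary examples, not block/curl.c's):

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl) {
        return 1;
    }

    /* Every curl_easy_setopt() return value is checked; the handle is
     * cleaned up on the first failure. */
    if (curl_easy_setopt(curl, CURLOPT_URL, "https://example.org/") ||
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L) ||
        curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L)) {
        fprintf(stderr, "setting curl options failed\n");
        curl_easy_cleanup(curl);
        return 1;
    }

    /* ... the configured handle would be used here ... */

    curl_easy_cleanup(curl);
    return 0;
}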
diff view generated by jsdifflib
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

There is a bug in handling BDRV_REQ_NO_WAIT flag: we still may wait in
wait_serialising_requests() if request is unaligned. And this is
possible for the only user of this flag (preallocate filter) if
underlying file is unaligned to its request_alignment on start.

So, we have to fix preallocate filter to do only aligned preallocate
requests.

Next, we should fix generic block/io.c somehow. Keeping in mind that
preallocate is the only user of BDRV_REQ_NO_WAIT and that we have to
fix its behavior now, it seems more safe to just assert that we never
use BDRV_REQ_NO_WAIT with unaligned requests and add corresponding
comment. Let's do so.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Message-Id: <20220215121609.38570-1-vsementsov@virtuozzo.com>
[hreitz: Rebased on block GS/IO split]
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 include/block/block-common.h | 3 ++-
 block/io.c | 4 ++++
 block/preallocate.c | 15 ++++++++++++---
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/include/block/block-common.h b/include/block/block-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block-common.h
+++ b/include/block/block-common.h
@@ -XXX,XX +XXX,XX @@ typedef enum {

     /*
      * If we need to wait for other requests, just fail immediately. Used
-     * only together with BDRV_REQ_SERIALISING.
+     * only together with BDRV_REQ_SERIALISING. Used only with requests aligned
+     * to request_alignment (corresponding assertions are in block/io.c).
      */
     BDRV_REQ_NO_WAIT = 0x400,

diff --git a/block/io.c b/block/io.c
index XXXXXXX..XXXXXXX 100644
--- a/block/io.c
+++ b/block/io.c
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_co_do_zero_pwritev(BdrvChild *child,

     padding = bdrv_init_padding(bs, offset, bytes, &pad);
     if (padding) {
+        assert(!(flags & BDRV_REQ_NO_WAIT));
         bdrv_make_request_serialising(req, align);

         bdrv_padding_rmw_read(child, req, &pad, true);
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_pwritev_part(BdrvChild *child,
          * serialize the request to prevent interactions of the
          * widened region with other transactions.
          */
+        assert(!(flags & BDRV_REQ_NO_WAIT));
         bdrv_make_request_serialising(&req, align);
         bdrv_padding_rmw_read(child, &req, &pad, false);
     }
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_co_copy_range_internal(
     /* TODO We can support BDRV_REQ_NO_FALLBACK here */
     assert(!(read_flags & BDRV_REQ_NO_FALLBACK));
     assert(!(write_flags & BDRV_REQ_NO_FALLBACK));
+    assert(!(read_flags & BDRV_REQ_NO_WAIT));
+    assert(!(write_flags & BDRV_REQ_NO_WAIT));

     if (!dst || !dst->bs || !bdrv_is_inserted(dst->bs)) {
         return -ENOMEDIUM;
diff --git a/block/preallocate.c b/block/preallocate.c
index XXXXXXX..XXXXXXX 100644
--- a/block/preallocate.c
+++ b/block/preallocate.c
@@ -XXX,XX +XXX,XX @@ static bool coroutine_fn handle_write(BlockDriverState *bs, int64_t offset,
     int64_t end = offset + bytes;
     int64_t prealloc_start, prealloc_end;
     int ret;
+    uint32_t file_align = bs->file->bs->bl.request_alignment;
+    uint32_t prealloc_align = MAX(s->opts.prealloc_align, file_align);
+
+    assert(QEMU_IS_ALIGNED(prealloc_align, file_align));

     if (!has_prealloc_perms(bs)) {
         /* We don't have state neither should try to recover it */
@@ -XXX,XX +XXX,XX @@ static bool coroutine_fn handle_write(BlockDriverState *bs, int64_t offset,

     /* Now we want new preallocation, as request writes beyond s->file_end. */

-    prealloc_start = want_merge_zero ? MIN(offset, s->file_end) : s->file_end;
-    prealloc_end = QEMU_ALIGN_UP(end + s->opts.prealloc_size,
-                                 s->opts.prealloc_align);
+    prealloc_start = QEMU_ALIGN_UP(
+        want_merge_zero ? MIN(offset, s->file_end) : s->file_end,
+        file_align);
+    prealloc_end = QEMU_ALIGN_UP(
+        MAX(prealloc_start, end) + s->opts.prealloc_size,
+        prealloc_align);
+
+    want_merge_zero = want_merge_zero && (prealloc_start <= offset);

     ret = bdrv_co_pwrite_zeroes(
         bs->file, prealloc_start, prealloc_end - prealloc_start,
--
2.34.1
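To see why the new rounding keeps BDRV_REQ_NO_WAIT safe, it helps to run the arithmetic once with concrete numbers. The stand-alone sketch below mirrors QEMU's QEMU_ALIGN_UP()/MAX() with local macro definitions; the file offsets, preallocation size and alignments are illustrative values chosen for this example, not the filter's actual defaults.

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* Local stand-ins mirroring QEMU's QEMU_ALIGN_UP() and MAX() */
#define ALIGN_UP(n, a)  (((n) + (a) - 1) / (a) * (a))
#define MAX(a, b)       ((a) > (b) ? (a) : (b))

int main(void)
{
    /* Illustrative numbers only */
    int64_t file_end       = 10000;       /* underlying file is unaligned on open */
    int64_t request_end    = 150000;      /* end of the guest write */
    int64_t prealloc_size  = 128 * 1024;
    int64_t file_align     = 4096;
    int64_t prealloc_align = MAX((int64_t)1024 * 1024, file_align);

    int64_t prealloc_start = ALIGN_UP(file_end, file_align);
    int64_t prealloc_end   = ALIGN_UP(MAX(prealloc_start, request_end) + prealloc_size,
                                      prealloc_align);

    /* Prints "prealloc: [12288, 1048576)"; both bounds are multiples of
     * file_align, so the zero-write issued with BDRV_REQ_NO_WAIT never
     * takes the unaligned padding path that could block. */
    printf("prealloc: [%" PRId64 ", %" PRId64 ")\n", prealloc_start, prealloc_end);
    return 0;
}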
diff view generated by jsdifflib
From: Thomas Huth <thuth@redhat.com>

iotest 040 already has some checks for the availability of the 'throttle'
driver, but some new code has been added in the course of time that
depends on 'throttle' but does not check for its availability. Add
a check to the TestCommitWithFilters class so that this iotest now
also passes again if 'throttle' has not been enabled in the QEMU
binaries.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220223123127.3206042-1-thuth@redhat.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 tests/qemu-iotests/040 | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/qemu-iotests/040 b/tests/qemu-iotests/040
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/040
+++ b/tests/qemu-iotests/040
@@ -XXX,XX +XXX,XX @@ class TestCommitWithFilters(iotests.QMPTestCase):
                                   pattern_file)
         self.assertFalse('Pattern verification failed' in result)

+    @iotests.skip_if_unsupported(['throttle'])
     def setUp(self):
         qemu_img('create', '-f', iotests.imgfmt, self.img0, '64M')
         qemu_img('create', '-f', iotests.imgfmt, self.img1, '64M')
--
2.34.1
diff view generated by jsdifflib
From: Thomas Huth <thuth@redhat.com>

In TAP mode, the stdout is reserved for the TAP protocol, so we
have to make sure to mark other lines with a comment '#' character
at the beginning to avoid that the TAP parser at the other end
gets confused.

To test this condition, run "configure" for example with:

 --block-drv-rw-whitelist=copy-before-write,qcow2,raw,file,host_device,blkdebug,null-co,copy-on-read

so that iotest 041 will report that some tests are not run due to
the missing "quorum" driver. Without this change, "make check-block"
fails since the meson tap parser gets confused by these messages.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220223124353.3273898-1-thuth@redhat.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 tests/qemu-iotests/testrunner.py | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/testrunner.py b/tests/qemu-iotests/testrunner.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/testrunner.py
+++ b/tests/qemu-iotests/testrunner.py
@@ -XXX,XX +XXX,XX @@ def run_test(self, test: str,
                           description=res.description)

         if res.casenotrun:
-            print(res.casenotrun)
+            if self.tap:
+                print('#' + res.casenotrun.replace('\n', '\n#'))
+            else:
+                print(res.casenotrun)

         return res

--
2.34.1
diff view generated by jsdifflib
Drop the use of OUTPUT_DIR (test/qemu-iotests under the build
directory), and instead write test output files (.out.bad, .notrun, and
.casenotrun) to TEST_DIR.

With this, the same test can be run concurrently without the separate
instances interfering, because they will need separate TEST_DIRs anyway.
Running the same test separately is useful when running the iotests with
various format/protocol combinations in parallel, or when you just want
to aggressively exercise a single test (e.g. when it fails only
sporadically).

Putting this output into TEST_DIR means that it will stick around for
inspection after the test run is done (though running the same test in
the same TEST_DIR will overwrite it, just as it used to be); but given
that TEST_DIR is a scratch directory, it should be clear that users can
delete all of its content at any point. (And if TEST_DIR is on tmpfs,
it will just disappear on shutdown.) Contrarily, alternative approaches
that would put these output files into OUTPUT_DIR with some prefix to
differentiate between separate test runs might easily lead to cluttering
OUTPUT_DIR.

(This change means OUTPUT_DIR is no longer written to by the iotests, so
we can drop its usage altogether.)

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220221172909.762858-1-hreitz@redhat.com>
[hreitz: Simplified `Path(os.path.join(x, y))` to `Path(x, y)`, as
         suggested by Vladimir; and rebased on 9086c7639822b6
         ("tests/qemu-iotests: Rework the checks and spots using GNU
         sed")]
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/common.rc | 6 +++---
 tests/qemu-iotests/iotests.py | 5 ++---
 tests/qemu-iotests/testenv.py | 5 +----
 tests/qemu-iotests/testrunner.py | 14 ++++++++------
 4 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/tests/qemu-iotests/common.rc b/tests/qemu-iotests/common.rc
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/common.rc
+++ b/tests/qemu-iotests/common.rc
@@ -XXX,XX +XXX,XX @@
 # bail out, setting up .notrun file
 _notrun()
 {
-    echo "$*" >"$OUTPUT_DIR/$seq.notrun"
+    echo "$*" >"$TEST_DIR/$seq.notrun"
     echo "$seq not run: $*"
     status=0
     exit
@@ -XXX,XX +XXX,XX @@ _img_info()
 #
 _casenotrun()
 {
-    echo " [case not run] $*" >>"$OUTPUT_DIR/$seq.casenotrun"
+    echo " [case not run] $*" >>"$TEST_DIR/$seq.casenotrun"
 }

 # just plain bail out
 #
 _fail()
 {
-    echo "$*" | tee -a "$OUTPUT_DIR/$seq.full"
+    echo "$*" | tee -a "$TEST_DIR/$seq.full"
     echo "(see $seq.full for details)"
     status=1
     exit 1
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -XXX,XX +XXX,XX @@

 imgfmt = os.environ.get('IMGFMT', 'raw')
 imgproto = os.environ.get('IMGPROTO', 'file')
-output_dir = os.environ.get('OUTPUT_DIR', '.')

 try:
     test_dir = os.environ['TEST_DIR']
@@ -XXX,XX +XXX,XX @@ def notrun(reason):
     # Each test in qemu-iotests has a number ("seq")
     seq = os.path.basename(sys.argv[0])

-    with open('%s/%s.notrun' % (output_dir, seq), 'w', encoding='utf-8') \
+    with open('%s/%s.notrun' % (test_dir, seq), 'w', encoding='utf-8') \
             as outfile:
         outfile.write(reason + '\n')
     logger.warning("%s not run: %s", seq, reason)
@@ -XXX,XX +XXX,XX @@ def case_notrun(reason):
     # Each test in qemu-iotests has a number ("seq")
     seq = os.path.basename(sys.argv[0])

-    with open('%s/%s.casenotrun' % (output_dir, seq), 'a', encoding='utf-8') \
+    with open('%s/%s.casenotrun' % (test_dir, seq), 'a', encoding='utf-8') \
             as outfile:
         outfile.write(' [case not run] ' + reason + '\n')

diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -XXX,XX +XXX,XX @@ class TestEnv(ContextManager['TestEnv']):
     # pylint: disable=too-many-instance-attributes

     env_variables = ['PYTHONPATH', 'TEST_DIR', 'SOCK_DIR', 'SAMPLE_IMG_DIR',
-                     'OUTPUT_DIR', 'PYTHON', 'QEMU_PROG', 'QEMU_IMG_PROG',
+                     'PYTHON', 'QEMU_PROG', 'QEMU_IMG_PROG',
                      'QEMU_IO_PROG', 'QEMU_NBD_PROG', 'QSD_PROG',
                      'QEMU_OPTIONS', 'QEMU_IMG_OPTIONS',
                      'QEMU_IO_OPTIONS', 'QEMU_IO_OPTIONS_NO_FMT',
@@ -XXX,XX +XXX,XX @@ def init_directories(self) -> None:
              TEST_DIR
              SOCK_DIR
              SAMPLE_IMG_DIR
-             OUTPUT_DIR
         """

         # Path where qemu goodies live in this source tree.
@@ -XXX,XX +XXX,XX @@ def init_directories(self) -> None:
                                            os.path.join(self.source_iotests,
                                                         'sample_images'))

-        self.output_dir = os.getcwd()  # OUTPUT_DIR
-
     def init_binaries(self) -> None:
         """Init binary path variables:
              PYTHON (for bash tests)
diff --git a/tests/qemu-iotests/testrunner.py b/tests/qemu-iotests/testrunner.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/testrunner.py
+++ b/tests/qemu-iotests/testrunner.py
@@ -XXX,XX +XXX,XX @@ def do_run_test(self, test: str, mp: bool) -> TestResult:
         """

         f_test = Path(test)
-        f_bad = Path(f_test.name + '.out.bad')
-        f_notrun = Path(f_test.name + '.notrun')
-        f_casenotrun = Path(f_test.name + '.casenotrun')
         f_reference = Path(self.find_reference(test))

         if not f_test.exists():
@@ -XXX,XX +XXX,XX @@ def do_run_test(self, test: str, mp: bool) -> TestResult:
                               description='No qualified output '
                                           f'(expected {f_reference})')

-        for p in (f_bad, f_notrun, f_casenotrun):
-            silent_unlink(p)
-
         args = [str(f_test.resolve())]
         env = self.env.prepare_subprocess(args)
         if mp:
@@ -XXX,XX +XXX,XX @@ def do_run_test(self, test: str, mp: bool) -> TestResult:
                 env[d] = os.path.join(env[d], f_test.name)
                 Path(env[d]).mkdir(parents=True, exist_ok=True)

+        test_dir = env['TEST_DIR']
+        f_bad = Path(test_dir, f_test.name + '.out.bad')
+        f_notrun = Path(test_dir, f_test.name + '.notrun')
+        f_casenotrun = Path(test_dir, f_test.name + '.casenotrun')
+
+        for p in (f_notrun, f_casenotrun):
+            silent_unlink(p)
+
         t0 = time.time()
         with f_bad.open('w', encoding="utf-8") as f:
             with subprocess.Popen(args, cwd=str(f_test.parent), env=env,
--
2.34.1
diff view generated by jsdifflib
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

We are going to complicate bitmap initialization in the further
commit. And in future, backup job will be able to work without filter
(when source is immutable), so we'll need same bitmap initialization in
copy-before-write filter and in backup job. So, it's reasonable to do
it in block-copy.

Note that for now cbw_open() is the only caller of
block_copy_state_new().

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303194349.2304213-2-vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 block/block-copy.c | 1 +
 block/copy-before-write.c | 4 ----
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index XXXXXXX..XXXXXXX 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
         return NULL;
     }
     bdrv_disable_dirty_bitmap(copy_bitmap);
+    bdrv_set_dirty_bitmap(copy_bitmap, 0, bdrv_dirty_bitmap_size(copy_bitmap));

     /*
      * If source is in backing chain of target assume that target is going to be
diff --git a/block/copy-before-write.c b/block/copy-before-write.c
index XXXXXXX..XXXXXXX 100644
--- a/block/copy-before-write.c
+++ b/block/copy-before-write.c
@@ -XXX,XX +XXX,XX @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
                     Error **errp)
 {
     BDRVCopyBeforeWriteState *s = bs->opaque;
-    BdrvDirtyBitmap *copy_bitmap;

     bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
                                BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
@@ -XXX,XX +XXX,XX @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
         return -EINVAL;
     }

-    copy_bitmap = block_copy_dirty_bitmap(s->bcs);
-    bdrv_set_dirty_bitmap(copy_bitmap, 0, bdrv_dirty_bitmap_size(copy_bitmap));
-
     return 0;
 }

--
2.34.1
diff view generated by jsdifflib
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

That simplifies handling failure in existing code and in further new
usage of bdrv_merge_dirty_bitmap().

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303194349.2304213-3-vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 include/block/dirty-bitmap.h | 2 +-
 block/dirty-bitmap.c | 9 +++++++--
 block/monitor/bitmap-qmp-cmds.c | 5 +----
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/include/block/dirty-bitmap.h b/include/block/dirty-bitmap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/dirty-bitmap.h
+++ b/include/block/dirty-bitmap.h
@@ -XXX,XX +XXX,XX @@ void bdrv_dirty_bitmap_set_persistence(BdrvDirtyBitmap *bitmap,
                                        bool persistent);
 void bdrv_dirty_bitmap_set_inconsistent(BdrvDirtyBitmap *bitmap);
 void bdrv_dirty_bitmap_set_busy(BdrvDirtyBitmap *bitmap, bool busy);
-void bdrv_merge_dirty_bitmap(BdrvDirtyBitmap *dest, const BdrvDirtyBitmap *src,
+bool bdrv_merge_dirty_bitmap(BdrvDirtyBitmap *dest, const BdrvDirtyBitmap *src,
                              HBitmap **backup, Error **errp);
 void bdrv_dirty_bitmap_skip_store(BdrvDirtyBitmap *bitmap, bool skip);
 bool bdrv_dirty_bitmap_get(BdrvDirtyBitmap *bitmap, int64_t offset);
diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index XXXXXXX..XXXXXXX 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -XXX,XX +XXX,XX @@ bool bdrv_dirty_bitmap_next_dirty_area(BdrvDirtyBitmap *bitmap,
  * Ensures permissions on bitmaps are reasonable; use for public API.
  *
  * @backup: If provided, make a copy of dest here prior to merge.
+ *
+ * Returns true on success, false on failure. In case of failure bitmaps are
+ * untouched.
  */
-void bdrv_merge_dirty_bitmap(BdrvDirtyBitmap *dest, const BdrvDirtyBitmap *src,
+bool bdrv_merge_dirty_bitmap(BdrvDirtyBitmap *dest, const BdrvDirtyBitmap *src,
                              HBitmap **backup, Error **errp)
 {
-    bool ret;
+    bool ret = false;

     bdrv_dirty_bitmaps_lock(dest->bs);
     if (src->bs != dest->bs) {
@@ -XXX,XX +XXX,XX @@ out:
     if (src->bs != dest->bs) {
         bdrv_dirty_bitmaps_unlock(src->bs);
     }
+
+    return ret;
 }

 /**
diff --git a/block/monitor/bitmap-qmp-cmds.c b/block/monitor/bitmap-qmp-cmds.c
index XXXXXXX..XXXXXXX 100644
--- a/block/monitor/bitmap-qmp-cmds.c
+++ b/block/monitor/bitmap-qmp-cmds.c
@@ -XXX,XX +XXX,XX @@ BdrvDirtyBitmap *block_dirty_bitmap_merge(const char *node, const char *target,
     BlockDriverState *bs;
     BdrvDirtyBitmap *dst, *src, *anon;
     BlockDirtyBitmapMergeSourceList *lst;
-    Error *local_err = NULL;

     GLOBAL_STATE_CODE();

@@ -XXX,XX +XXX,XX @@ BdrvDirtyBitmap *block_dirty_bitmap_merge(const char *node, const char *target,
             abort();
         }

-        bdrv_merge_dirty_bitmap(anon, src, NULL, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        if (!bdrv_merge_dirty_bitmap(anon, src, NULL, errp)) {
             dst = NULL;
             goto out;
         }
--
2.34.1
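For callers, the change boils down to replacing the local-Error round trip with a direct check of the return value, which the QMP command above already demonstrates. The fragment below is an excerpt-style illustration written for this note: merge_into() is a hypothetical wrapper, not a QEMU function, and the snippet assumes the QEMU error API plus the new signature; it is not meant to build outside the QEMU tree.

/* Illustrative caller only -- contrasts the old and new failure handling. */
static bool merge_into(BdrvDirtyBitmap *dst, const BdrvDirtyBitmap *src,
                       Error **errp)
{
#if 0
    /* Old pattern: a local Error round trip just to detect failure */
    Error *local_err = NULL;

    bdrv_merge_dirty_bitmap(dst, src, NULL, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return false;
    }
#endif

    /* New pattern: the boolean return carries success/failure directly */
    if (!bdrv_merge_dirty_bitmap(dst, src, NULL, errp)) {
        return false;
    }

    return true;
}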
diff view generated by jsdifflib
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

This will be used in the following commit to bring "incremental" mode
to copy-before-write filter.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303194349.2304213-4-vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 include/block/block-copy.h | 1 +
 block/block-copy.c | 14 +++++++++++++-
 block/copy-before-write.c | 2 +-
 3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/block/block-copy.h b/include/block/block-copy.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block-copy.h
+++ b/include/block/block-copy.h
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyState BlockCopyState;
 typedef struct BlockCopyCallState BlockCopyCallState;

 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
+                                     const BdrvDirtyBitmap *bitmap,
                                      Error **errp);

 /* Function should be called prior any actual copy request */
diff --git a/block/block-copy.c b/block/block-copy.c
index XXXXXXX..XXXXXXX 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -XXX,XX +XXX,XX @@ static int64_t block_copy_calculate_cluster_size(BlockDriverState *target,
 }

 BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
+                                     const BdrvDirtyBitmap *bitmap,
                                      Error **errp)
 {
+    ERRP_GUARD();
     BlockCopyState *s;
     int64_t cluster_size;
     BdrvDirtyBitmap *copy_bitmap;
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
         return NULL;
     }
     bdrv_disable_dirty_bitmap(copy_bitmap);
-    bdrv_set_dirty_bitmap(copy_bitmap, 0, bdrv_dirty_bitmap_size(copy_bitmap));
+    if (bitmap) {
+        if (!bdrv_merge_dirty_bitmap(copy_bitmap, bitmap, NULL, errp)) {
+            error_prepend(errp, "Failed to merge bitmap '%s' to internal "
+                          "copy-bitmap: ", bdrv_dirty_bitmap_name(bitmap));
+            bdrv_release_dirty_bitmap(copy_bitmap);
+            return NULL;
+        }
+    } else {
+        bdrv_set_dirty_bitmap(copy_bitmap, 0,
+                              bdrv_dirty_bitmap_size(copy_bitmap));
+    }

     /*
      * If source is in backing chain of target assume that target is going to be
diff --git a/block/copy-before-write.c b/block/copy-before-write.c
index XXXXXXX..XXXXXXX 100644
--- a/block/copy-before-write.c
+++ b/block/copy-before-write.c
@@ -XXX,XX +XXX,XX @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
         ((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
          bs->file->bs->supported_zero_flags);

-    s->bcs = block_copy_state_new(bs->file, s->target, errp);
+    s->bcs = block_copy_state_new(bs->file, s->target, NULL, errp);
     if (!s->bcs) {
         error_prepend(errp, "Cannot create block-copy-state: ");
         return -EINVAL;
--
2.34.1
diff view generated by jsdifflib
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

This brings "incremental" mode to copy-before-write filter: user can
specify bitmap so that filter will copy only "dirty" areas.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20220303194349.2304213-5-vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 qapi/block-core.json | 10 +++++++-
 block/copy-before-write.c | 51 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 59 insertions(+), 2 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -XXX,XX +XXX,XX @@
 #
 # @target: The target for copy-before-write operations.
 #
+# @bitmap: If specified, copy-before-write filter will do
+#          copy-before-write operations only for dirty regions of the
+#          bitmap. Bitmap size must be equal to length of file and
+#          target child of the filter. Note also, that bitmap is used
+#          only to initialize internal bitmap of the process, so further
+#          modifications (or removing) of specified bitmap doesn't
+#          influence the filter. (Since 7.0)
+#
 # Since: 6.2
 ##
 { 'struct': 'BlockdevOptionsCbw',
   'base': 'BlockdevOptionsGenericFormat',
-  'data': { 'target': 'BlockdevRef' } }
+  'data': { 'target': 'BlockdevRef', '*bitmap': 'BlockDirtyBitmap' } }

 ##
 # @BlockdevOptions:
diff --git a/block/copy-before-write.c b/block/copy-before-write.c
index XXXXXXX..XXXXXXX 100644
--- a/block/copy-before-write.c
+++ b/block/copy-before-write.c
@@ -XXX,XX +XXX,XX @@

 #include "block/copy-before-write.h"

+#include "qapi/qapi-visit-block-core.h"
+
 typedef struct BDRVCopyBeforeWriteState {
     BlockCopyState *bcs;
     BdrvChild *target;
@@ -XXX,XX +XXX,XX @@ static void cbw_child_perm(BlockDriverState *bs, BdrvChild *c,
     }
 }

+static bool cbw_parse_bitmap_option(QDict *options, BdrvDirtyBitmap **bitmap,
+                                    Error **errp)
+{
+    QDict *bitmap_qdict = NULL;
+    BlockDirtyBitmap *bmp_param = NULL;
+    Visitor *v = NULL;
+    bool ret = false;
+
+    *bitmap = NULL;
+
+    qdict_extract_subqdict(options, &bitmap_qdict, "bitmap.");
+    if (!qdict_size(bitmap_qdict)) {
+        ret = true;
+        goto out;
+    }
+
+    v = qobject_input_visitor_new_flat_confused(bitmap_qdict, errp);
+    if (!v) {
+        goto out;
+    }
+
+    visit_type_BlockDirtyBitmap(v, NULL, &bmp_param, errp);
+    if (!bmp_param) {
+        goto out;
+    }
+
+    *bitmap = block_dirty_bitmap_lookup(bmp_param->node, bmp_param->name, NULL,
+                                        errp);
+    if (!*bitmap) {
+        goto out;
+    }
+
+    ret = true;
+
+out:
+    qapi_free_BlockDirtyBitmap(bmp_param);
+    visit_free(v);
+    qobject_unref(bitmap_qdict);
+
+    return ret;
+}
+
 static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
                     Error **errp)
 {
     BDRVCopyBeforeWriteState *s = bs->opaque;
+    BdrvDirtyBitmap *bitmap = NULL;

     bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
                                BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
@@ -XXX,XX +XXX,XX @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
         return -EINVAL;
     }

+    if (!cbw_parse_bitmap_option(options, &bitmap, errp)) {
+        return -EINVAL;
+    }
+
     bs->total_sectors = bs->file->bs->total_sectors;
     bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
         (BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
@@ -XXX,XX +XXX,XX @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
         ((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
          bs->file->bs->supported_zero_flags);

-    s->bcs = block_copy_state_new(bs->file, s->target, NULL, errp);
+    s->bcs = block_copy_state_new(bs->file, s->target, bitmap, errp);
     if (!s->bcs) {
         error_prepend(errp, "Cannot create block-copy-state: ");
         return -EINVAL;
--
2.34.1
diff view generated by jsdifflib
1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
All block jobs are using block_job_defer_to_main_loop as the final
3
Split block_copy_reset() out of block_copy_reset_unallocated() to be
4
step just before the coroutine terminates. At this point,
4
used separately later.
5
block_job_enter should do nothing, but currently it restarts
6
the freed coroutine.
7
5
8
Now, the job->co states should probably be changed to an enum
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
9
(e.g. BEFORE_START, STARTED, YIELDED, COMPLETED) subsuming
7
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
10
block_job_started, job->deferred_to_main_loop and job->busy.
8
Message-Id: <20220303194349.2304213-6-vsementsov@virtuozzo.com>
11
For now, this patch eliminates the problematic reenter by
9
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
12
removing the reset of job->deferred_to_main_loop (which served
10
---
13
no purpose, as far as I could see) and checking the flag in
11
include/block/block-copy.h | 1 +
14
block_job_enter.
12
block/block-copy.c | 21 +++++++++++++--------
13
2 files changed, 14 insertions(+), 8 deletions(-)
15
14
16
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
15
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
17
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18
Message-id: 20170508141310.8674-12-pbonzini@redhat.com
19
Signed-off-by: Jeff Cody <jcody@redhat.com>
20
---
21
blockjob.c | 10 ++++++++--
22
include/block/blockjob_int.h | 3 ++-
23
2 files changed, 10 insertions(+), 3 deletions(-)
24
25
diff --git a/blockjob.c b/blockjob.c
26
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
27
--- a/blockjob.c
17
--- a/include/block/block-copy.h
28
+++ b/blockjob.c
18
+++ b/include/block/block-copy.h
29
@@ -XXX,XX +XXX,XX @@ void block_job_resume_all(void)
19
@@ -XXX,XX +XXX,XX @@ void block_copy_set_progress_meter(BlockCopyState *s, ProgressMeter *pm);
30
20
31
void block_job_enter(BlockJob *job)
21
void block_copy_state_free(BlockCopyState *s);
32
{
22
33
- if (job->co && !job->busy) {
23
+void block_copy_reset(BlockCopyState *s, int64_t offset, int64_t bytes);
34
+ if (!block_job_started(job)) {
24
int64_t block_copy_reset_unallocated(BlockCopyState *s,
35
+ return;
25
int64_t offset, int64_t *count);
36
+ }
26
37
+ if (job->deferred_to_main_loop) {
27
diff --git a/block/block-copy.c b/block/block-copy.c
38
+ return;
28
index XXXXXXX..XXXXXXX 100644
39
+ }
29
--- a/block/block-copy.c
40
+
30
+++ b/block/block-copy.c
41
+ if (!job->busy) {
31
@@ -XXX,XX +XXX,XX @@ static int block_copy_is_cluster_allocated(BlockCopyState *s, int64_t offset,
42
bdrv_coroutine_enter(blk_bs(job->blk), job->co);
43
}
32
}
44
}
33
}
45
@@ -XXX,XX +XXX,XX @@ static void block_job_defer_to_main_loop_bh(void *opaque)
34
46
aio_context_acquire(aio_context);
35
+void block_copy_reset(BlockCopyState *s, int64_t offset, int64_t bytes)
36
+{
37
+ QEMU_LOCK_GUARD(&s->lock);
38
+
39
+ bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, bytes);
40
+ if (s->progress) {
41
+ progress_set_remaining(s->progress,
42
+ bdrv_get_dirty_count(s->copy_bitmap) +
43
+ s->in_flight_bytes);
44
+ }
45
+}
46
+
47
/*
48
* Reset bits in copy_bitmap starting at offset if they represent unallocated
49
* data in the image. May reset subsequent contiguous bits.
50
@@ -XXX,XX +XXX,XX @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
51
bytes = clusters * s->cluster_size;
52
53
if (!ret) {
54
- qemu_co_mutex_lock(&s->lock);
55
- bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, bytes);
56
- if (s->progress) {
57
- progress_set_remaining(s->progress,
58
- bdrv_get_dirty_count(s->copy_bitmap) +
59
- s->in_flight_bytes);
60
- }
61
- qemu_co_mutex_unlock(&s->lock);
62
+ block_copy_reset(s, offset, bytes);
47
}
63
}
48
64
49
- data->job->deferred_to_main_loop = false;
65
*count = bytes;
50
data->fn(data->job, data->opaque);
51
52
if (aio_context != data->aio_context) {
53
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
54
index XXXXXXX..XXXXXXX 100644
55
--- a/include/block/blockjob_int.h
56
+++ b/include/block/blockjob_int.h
57
@@ -XXX,XX +XXX,XX @@ typedef void BlockJobDeferToMainLoopFn(BlockJob *job, void *opaque);
58
* @fn: The function to run in the main loop
59
* @opaque: The opaque value that is passed to @fn
60
*
61
- * Execute a given function in the main loop with the BlockDriverState
62
+ * This function must be called by the main job coroutine just before it
63
+ * returns. @fn is executed in the main loop with the BlockDriverState
64
* AioContext acquired. Block jobs must call bdrv_unref(), bdrv_close(), and
65
* anything that uses bdrv_drain_all() in the main loop.
66
*
67
--
66
--
68
2.9.3
67
2.34.1
69
70
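
To make the newly exported entry point concrete, a hedged caller sketch; the function name is made up, block_copy_reset() has the signature added to include/block/block-copy.h above:

#include "qemu/osdep.h"
#include "block/block-copy.h"

/*
 * Hypothetical caller, not part of the patch: a byte range is known not
 * to need copying any more, so drop it from the copy bitmap.
 * block_copy_reset() takes its own lock and re-computes the remaining
 * progress, exactly as block_copy_reset_unallocated() now does after
 * the split.
 */
static void example_drop_from_copy(BlockCopyState *bcs,
                                   int64_t offset, int64_t bytes)
{
    block_copy_reset(bcs, offset, bytes);
}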
1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
We have two different headers for block job operations, blockjob.h
3
Split intersecting-requests functionality out of block-copy to be
4
and blockjob_int.h. The former contains APIs called by the monitor,
4
reused in copy-before-write filter.
5
the latter contains APIs called by the block job drivers and the
6
block layer itself.
7
5
8
Keep the two APIs separate in the blockjob.c file too. This will
6
Note: while at it, fix a tiny typo in MAINTAINERS.
9
be useful when transitioning away from the AioContext lock, because
10
there will be locking policies for the two categories, too---the
11
monitor will have to call new block_job_lock/unlock APIs, while blockjob
12
APIs will take care of this for the users.
13
7
14
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
15
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
9
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
16
Message-id: 20170508141310.8674-6-pbonzini@redhat.com
10
Message-Id: <20220303194349.2304213-7-vsementsov@virtuozzo.com>
17
Signed-off-by: Jeff Cody <jcody@redhat.com>
11
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
18
---
12
---
19
blockjob.c | 390 ++++++++++++++++++++++++++++++++-----------------------------
13
include/block/reqlist.h | 67 +++++++++++++++++++++++
20
1 file changed, 205 insertions(+), 185 deletions(-)
14
block/block-copy.c | 116 +++++++++++++---------------------------
15
block/reqlist.c | 76 ++++++++++++++++++++++++++
16
MAINTAINERS | 4 +-
17
block/meson.build | 1 +
18
5 files changed, 184 insertions(+), 80 deletions(-)
19
create mode 100644 include/block/reqlist.h
20
create mode 100644 block/reqlist.c
21
21
22
diff --git a/blockjob.c b/blockjob.c
22
diff --git a/include/block/reqlist.h b/include/block/reqlist.h
23
new file mode 100644
24
index XXXXXXX..XXXXXXX
25
--- /dev/null
26
+++ b/include/block/reqlist.h
27
@@ -XXX,XX +XXX,XX @@
28
+/*
29
+ * reqlist API
30
+ *
31
+ * Copyright (C) 2013 Proxmox Server Solutions
32
+ * Copyright (c) 2021 Virtuozzo International GmbH.
33
+ *
34
+ * Authors:
35
+ * Dietmar Maurer (dietmar@proxmox.com)
36
+ * Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
37
+ *
38
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
39
+ * See the COPYING file in the top-level directory.
40
+ */
41
+
42
+#ifndef REQLIST_H
43
+#define REQLIST_H
44
+
45
+#include "qemu/coroutine.h"
46
+
47
+/*
48
+ * The API is not thread-safe and shouldn't be. The struct is public to be part
49
+ * of other structures and protected by third-party locks, see
50
+ * block/block-copy.c for example.
51
+ */
52
+
53
+typedef struct BlockReq {
54
+ int64_t offset;
55
+ int64_t bytes;
56
+
57
+ CoQueue wait_queue; /* coroutines blocked on this req */
58
+ QLIST_ENTRY(BlockReq) list;
59
+} BlockReq;
60
+
61
+typedef QLIST_HEAD(, BlockReq) BlockReqList;
62
+
63
+/*
64
+ * Initialize new request and add it to the list. Caller must be sure that
65
+ * there are no conflicting requests in the list.
66
+ */
67
+void reqlist_init_req(BlockReqList *reqs, BlockReq *req, int64_t offset,
68
+ int64_t bytes);
69
+/* Search for request in the list intersecting with @offset/@bytes area. */
70
+BlockReq *reqlist_find_conflict(BlockReqList *reqs, int64_t offset,
71
+ int64_t bytes);
72
+
73
+/*
74
+ * If there are no intersecting requests return false. Otherwise, wait for the
75
+ * first found intersecting request to finish and return true.
76
+ *
77
+ * @lock is passed to qemu_co_queue_wait()
78
+ * False return value proves that lock was released at no point.
79
+ */
80
+bool coroutine_fn reqlist_wait_one(BlockReqList *reqs, int64_t offset,
81
+ int64_t bytes, CoMutex *lock);
82
+
83
+/*
84
+ * Shrink request and wake all waiting coroutines (maybe some of them are not
85
+ * intersecting with shrunk request).
86
+ */
87
+void coroutine_fn reqlist_shrink_req(BlockReq *req, int64_t new_bytes);
88
+
89
+/*
90
+ * Remove request and wake all waiting coroutines. Do not release any memory.
91
+ */
92
+void coroutine_fn reqlist_remove_req(BlockReq *req);
93
+
94
+#endif /* REQLIST_H */
95
diff --git a/block/block-copy.c b/block/block-copy.c
23
index XXXXXXX..XXXXXXX 100644
96
index XXXXXXX..XXXXXXX 100644
24
--- a/blockjob.c
97
--- a/block/block-copy.c
25
+++ b/blockjob.c
98
+++ b/block/block-copy.c
26
@@ -XXX,XX +XXX,XX @@ struct BlockJobTxn {
99
@@ -XXX,XX +XXX,XX @@
27
100
#include "trace.h"
28
static QLIST_HEAD(, BlockJob) block_jobs = QLIST_HEAD_INITIALIZER(block_jobs);
101
#include "qapi/error.h"
29
102
#include "block/block-copy.h"
30
+/*
103
+#include "block/reqlist.h"
31
+ * The block job API is composed of two categories of functions.
104
#include "sysemu/block-backend.h"
32
+ *
105
#include "qemu/units.h"
33
+ * The first includes functions used by the monitor. The monitor is
106
#include "qemu/coroutine.h"
34
+ * peculiar in that it accesses the block job list with block_job_get, and
107
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyTask {
35
+ * therefore needs consistency across block_job_get and the actual operation
108
*/
36
+ * (e.g. block_job_set_speed). The consistency is achieved with
109
BlockCopyState *s;
37
+ * aio_context_acquire/release. These functions are declared in blockjob.h.
110
BlockCopyCallState *call_state;
38
+ *
111
- int64_t offset;
39
+ * The second includes functions used by the block job drivers and sometimes
112
/*
40
+ * by the core block layer. These do not care about locking, because the
113
* @method can also be set again in the while loop of
41
+ * whole coroutine runs under the AioContext lock, and are declared in
114
* block_copy_dirty_clusters(), but it is never accessed concurrently
42
+ * blockjob_int.h.
115
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyTask {
43
+ */
116
BlockCopyMethod method;
44
+
117
45
BlockJob *block_job_next(BlockJob *job)
118
/*
119
- * Fields whose state changes throughout the execution
120
- * Protected by lock in BlockCopyState.
121
+ * Generally, req is protected by lock in BlockCopyState, Still req.offset
122
+ * is only set on task creation, so may be read concurrently after creation.
123
+ * req.bytes is changed at most once, and need only protecting the case of
124
+ * parallel read while updating @bytes value in block_copy_task_shrink().
125
*/
126
- CoQueue wait_queue; /* coroutines blocked on this task */
127
- /*
128
- * Only protect the case of parallel read while updating @bytes
129
- * value in block_copy_task_shrink().
130
- */
131
- int64_t bytes;
132
- QLIST_ENTRY(BlockCopyTask) list;
133
+ BlockReq req;
134
} BlockCopyTask;
135
136
static int64_t task_end(BlockCopyTask *task)
46
{
137
{
47
if (!job) {
138
- return task->offset + task->bytes;
48
@@ -XXX,XX +XXX,XX @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
139
+ return task->req.offset + task->req.bytes;
49
return 0;
50
}
140
}
51
141
52
-void *block_job_create(const char *job_id, const BlockJobDriver *driver,
142
typedef struct BlockCopyState {
53
- BlockDriverState *bs, uint64_t perm,
143
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyState {
54
- uint64_t shared_perm, int64_t speed, int flags,
144
CoMutex lock;
55
- BlockCompletionFunc *cb, void *opaque, Error **errp)
145
int64_t in_flight_bytes;
146
BlockCopyMethod method;
147
- QLIST_HEAD(, BlockCopyTask) tasks; /* All tasks from all block-copy calls */
148
+ BlockReqList reqs;
149
QLIST_HEAD(, BlockCopyCallState) calls;
150
/*
151
* skip_unallocated:
152
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyState {
153
RateLimit rate_limit;
154
} BlockCopyState;
155
156
-/* Called with lock held */
157
-static BlockCopyTask *find_conflicting_task(BlockCopyState *s,
158
- int64_t offset, int64_t bytes)
56
-{
159
-{
57
- BlockBackend *blk;
160
- BlockCopyTask *t;
58
- BlockJob *job;
161
-
59
- int ret;
162
- QLIST_FOREACH(t, &s->tasks, list) {
60
-
163
- if (offset + bytes > t->offset && offset < t->offset + t->bytes) {
61
- if (bs->job) {
164
- return t;
62
- error_setg(errp, QERR_DEVICE_IN_USE, bdrv_get_device_name(bs));
63
- return NULL;
64
- }
65
-
66
- if (job_id == NULL && !(flags & BLOCK_JOB_INTERNAL)) {
67
- job_id = bdrv_get_device_name(bs);
68
- if (!*job_id) {
69
- error_setg(errp, "An explicit job ID is required for this node");
70
- return NULL;
71
- }
165
- }
72
- }
166
- }
73
-
167
-
74
- if (job_id) {
168
- return NULL;
75
- if (flags & BLOCK_JOB_INTERNAL) {
169
-}
76
- error_setg(errp, "Cannot specify job ID for internal block job");
170
-
77
- return NULL;
171
-/*
78
- }
172
- * If there are no intersecting tasks return false. Otherwise, wait for the
79
-
173
- * first found intersecting tasks to finish and return true.
80
- if (!id_wellformed(job_id)) {
174
- *
81
- error_setg(errp, "Invalid job ID '%s'", job_id);
175
- * Called with lock held. May temporary release the lock.
82
- return NULL;
176
- * Return value of 0 proves that lock was NOT released.
83
- }
177
- */
84
-
178
-static bool coroutine_fn block_copy_wait_one(BlockCopyState *s, int64_t offset,
85
- if (block_job_get(job_id)) {
179
- int64_t bytes)
86
- error_setg(errp, "Job ID '%s' already in use", job_id);
180
-{
87
- return NULL;
181
- BlockCopyTask *task = find_conflicting_task(s, offset, bytes);
88
- }
182
-
183
- if (!task) {
184
- return false;
89
- }
185
- }
90
-
186
-
91
- blk = blk_new(perm, shared_perm);
187
- qemu_co_queue_wait(&task->wait_queue, &s->lock);
92
- ret = blk_insert_bs(blk, bs, errp);
188
-
93
- if (ret < 0) {
189
- return true;
94
- blk_unref(blk);
95
- return NULL;
96
- }
97
-
98
- job = g_malloc0(driver->instance_size);
99
- job->driver = driver;
100
- job->id = g_strdup(job_id);
101
- job->blk = blk;
102
- job->cb = cb;
103
- job->opaque = opaque;
104
- job->busy = false;
105
- job->paused = true;
106
- job->pause_count = 1;
107
- job->refcnt = 1;
108
-
109
- error_setg(&job->blocker, "block device is in use by block job: %s",
110
- BlockJobType_lookup[driver->job_type]);
111
- block_job_add_bdrv(job, "main node", bs, 0, BLK_PERM_ALL, &error_abort);
112
- bs->job = job;
113
-
114
- blk_set_dev_ops(blk, &block_job_dev_ops, job);
115
- bdrv_op_unblock(bs, BLOCK_OP_TYPE_DATAPLANE, job->blocker);
116
-
117
- QLIST_INSERT_HEAD(&block_jobs, job, job_list);
118
-
119
- blk_add_aio_context_notifier(blk, block_job_attached_aio_context,
120
- block_job_detach_aio_context, job);
121
-
122
- /* Only set speed when necessary to avoid NotSupported error */
123
- if (speed != 0) {
124
- Error *local_err = NULL;
125
-
126
- block_job_set_speed(job, speed, &local_err);
127
- if (local_err) {
128
- block_job_unref(job);
129
- error_propagate(errp, local_err);
130
- return NULL;
131
- }
132
- }
133
- return job;
134
-}
190
-}
135
-
191
-
136
bool block_job_is_internal(BlockJob *job)
192
/* Called with lock held */
193
static int64_t block_copy_chunk_size(BlockCopyState *s)
137
{
194
{
138
return (job->id == NULL);
195
@@ -XXX,XX +XXX,XX @@ block_copy_task_create(BlockCopyState *s, BlockCopyCallState *call_state,
139
@@ -XXX,XX +XXX,XX @@ void block_job_start(BlockJob *job)
196
bytes = QEMU_ALIGN_UP(bytes, s->cluster_size);
140
bdrv_coroutine_enter(blk_bs(job->blk), job->co);
197
198
/* region is dirty, so no existent tasks possible in it */
199
- assert(!find_conflicting_task(s, offset, bytes));
200
+ assert(!reqlist_find_conflict(&s->reqs, offset, bytes));
201
202
bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, bytes);
203
s->in_flight_bytes += bytes;
204
@@ -XXX,XX +XXX,XX @@ block_copy_task_create(BlockCopyState *s, BlockCopyCallState *call_state,
205
.task.func = block_copy_task_entry,
206
.s = s,
207
.call_state = call_state,
208
- .offset = offset,
209
- .bytes = bytes,
210
.method = s->method,
211
};
212
- qemu_co_queue_init(&task->wait_queue);
213
- QLIST_INSERT_HEAD(&s->tasks, task, list);
214
+ reqlist_init_req(&s->reqs, &task->req, offset, bytes);
215
216
return task;
141
}
217
}
142
218
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn block_copy_task_shrink(BlockCopyTask *task,
143
-void block_job_early_fail(BlockJob *job)
219
int64_t new_bytes)
144
-{
145
- block_job_unref(job);
146
-}
147
-
148
static void block_job_completed_single(BlockJob *job)
149
{
220
{
150
if (!job->ret) {
221
QEMU_LOCK_GUARD(&task->s->lock);
151
@@ -XXX,XX +XXX,XX @@ static void block_job_completed_txn_success(BlockJob *job)
222
- if (new_bytes == task->bytes) {
223
+ if (new_bytes == task->req.bytes) {
224
return;
152
}
225
}
226
227
- assert(new_bytes > 0 && new_bytes < task->bytes);
228
+ assert(new_bytes > 0 && new_bytes < task->req.bytes);
229
230
- task->s->in_flight_bytes -= task->bytes - new_bytes;
231
+ task->s->in_flight_bytes -= task->req.bytes - new_bytes;
232
bdrv_set_dirty_bitmap(task->s->copy_bitmap,
233
- task->offset + new_bytes, task->bytes - new_bytes);
234
+ task->req.offset + new_bytes,
235
+ task->req.bytes - new_bytes);
236
237
- task->bytes = new_bytes;
238
- qemu_co_queue_restart_all(&task->wait_queue);
239
+ reqlist_shrink_req(&task->req, new_bytes);
153
}
240
}
154
241
155
-void block_job_completed(BlockJob *job, int ret)
242
static void coroutine_fn block_copy_task_end(BlockCopyTask *task, int ret)
156
-{
157
- assert(blk_bs(job->blk)->job == job);
158
- assert(!job->completed);
159
- job->completed = true;
160
- job->ret = ret;
161
- if (!job->txn) {
162
- block_job_completed_single(job);
163
- } else if (ret < 0 || block_job_is_cancelled(job)) {
164
- block_job_completed_txn_abort(job);
165
- } else {
166
- block_job_completed_txn_success(job);
167
- }
168
-}
169
-
170
void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
171
{
243
{
172
Error *local_err = NULL;
244
QEMU_LOCK_GUARD(&task->s->lock);
173
@@ -XXX,XX +XXX,XX @@ void block_job_user_pause(BlockJob *job)
245
- task->s->in_flight_bytes -= task->bytes;
174
block_job_pause(job);
246
+ task->s->in_flight_bytes -= task->req.bytes;
247
if (ret < 0) {
248
- bdrv_set_dirty_bitmap(task->s->copy_bitmap, task->offset, task->bytes);
249
+ bdrv_set_dirty_bitmap(task->s->copy_bitmap, task->req.offset,
250
+ task->req.bytes);
251
}
252
- QLIST_REMOVE(task, list);
253
if (task->s->progress) {
254
progress_set_remaining(task->s->progress,
255
bdrv_get_dirty_count(task->s->copy_bitmap) +
256
task->s->in_flight_bytes);
257
}
258
- qemu_co_queue_restart_all(&task->wait_queue);
259
+ reqlist_remove_req(&task->req);
175
}
260
}
176
261
177
-static bool block_job_should_pause(BlockJob *job)
262
void block_copy_state_free(BlockCopyState *s)
178
-{
263
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
179
- return job->pause_count > 0;
264
180
-}
265
ratelimit_init(&s->rate_limit);
181
-
266
qemu_co_mutex_init(&s->lock);
182
bool block_job_user_paused(BlockJob *job)
267
- QLIST_INIT(&s->tasks);
183
{
268
+ QLIST_INIT(&s->reqs);
184
return job->user_paused;
269
QLIST_INIT(&s->calls);
185
}
270
186
271
return s;
187
-void coroutine_fn block_job_pause_point(BlockJob *job)
272
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int block_copy_task_run(AioTaskPool *pool,
188
-{
273
189
- assert(job && block_job_started(job));
274
aio_task_pool_wait_slot(pool);
190
-
275
if (aio_task_pool_status(pool) < 0) {
191
- if (!block_job_should_pause(job)) {
276
- co_put_to_shres(task->s->mem, task->bytes);
192
- return;
277
+ co_put_to_shres(task->s->mem, task->req.bytes);
193
- }
278
block_copy_task_end(task, -ECANCELED);
194
- if (block_job_is_cancelled(job)) {
279
g_free(task);
195
- return;
280
return -ECANCELED;
196
- }
281
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
197
-
282
BlockCopyMethod method = t->method;
198
- if (job->driver->pause) {
283
int ret;
199
- job->driver->pause(job);
284
200
- }
285
- ret = block_copy_do_copy(s, t->offset, t->bytes, &method, &error_is_read);
201
-
286
+ ret = block_copy_do_copy(s, t->req.offset, t->req.bytes, &method,
202
- if (block_job_should_pause(job) && !block_job_is_cancelled(job)) {
287
+ &error_is_read);
203
- job->paused = true;
288
204
- job->busy = false;
289
WITH_QEMU_LOCK_GUARD(&s->lock) {
205
- qemu_coroutine_yield(); /* wait for block_job_resume() */
290
if (s->method == t->method) {
206
- job->busy = true;
291
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
207
- job->paused = false;
292
t->call_state->error_is_read = error_is_read;
208
- }
293
}
209
-
294
} else if (s->progress) {
210
- if (job->driver->resume) {
295
- progress_work_done(s->progress, t->bytes);
211
- job->driver->resume(job);
296
+ progress_work_done(s->progress, t->req.bytes);
212
- }
297
}
213
-}
214
-
215
void block_job_user_resume(BlockJob *job)
216
{
217
if (job && job->user_paused && job->pause_count > 0) {
218
@@ -XXX,XX +XXX,XX @@ void block_job_user_resume(BlockJob *job)
219
}
298
}
220
}
299
- co_put_to_shres(s->mem, t->bytes);
221
300
+ co_put_to_shres(s->mem, t->req.bytes);
222
-void block_job_enter(BlockJob *job)
301
block_copy_task_end(t, ret);
223
-{
302
224
- if (job->co && !job->busy) {
303
return ret;
225
- bdrv_coroutine_enter(blk_bs(job->blk), job->co);
304
@@ -XXX,XX +XXX,XX @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
226
- }
305
trace_block_copy_skip_range(s, offset, bytes);
227
-}
306
break;
228
-
307
}
229
void block_job_cancel(BlockJob *job)
308
- if (task->offset > offset) {
230
{
309
- trace_block_copy_skip_range(s, offset, task->offset - offset);
231
if (block_job_started(job)) {
310
+ if (task->req.offset > offset) {
232
@@ -XXX,XX +XXX,XX @@ void block_job_cancel(BlockJob *job)
311
+ trace_block_copy_skip_range(s, offset, task->req.offset - offset);
233
}
312
}
234
}
313
235
314
found_dirty = true;
236
-bool block_job_is_cancelled(BlockJob *job)
315
237
-{
316
- ret = block_copy_block_status(s, task->offset, task->bytes,
238
- return job->cancelled;
317
+ ret = block_copy_block_status(s, task->req.offset, task->req.bytes,
239
-}
318
&status_bytes);
240
-
319
assert(ret >= 0); /* never fail */
241
void block_job_iostatus_reset(BlockJob *job)
320
- if (status_bytes < task->bytes) {
242
{
321
+ if (status_bytes < task->req.bytes) {
243
job->iostatus = BLOCK_DEVICE_IO_STATUS_OK;
322
block_copy_task_shrink(task, status_bytes);
244
@@ -XXX,XX +XXX,XX @@ int block_job_complete_sync(BlockJob *job, Error **errp)
323
}
245
return block_job_finish_sync(job, &block_job_complete, errp);
324
if (qatomic_read(&s->skip_unallocated) &&
246
}
325
!(ret & BDRV_BLOCK_ALLOCATED)) {
247
326
block_copy_task_end(task, 0);
248
-void block_job_sleep_ns(BlockJob *job, QEMUClockType type, int64_t ns)
327
- trace_block_copy_skip_range(s, task->offset, task->bytes);
249
-{
328
+ trace_block_copy_skip_range(s, task->req.offset, task->req.bytes);
250
- assert(job->busy);
329
offset = task_end(task);
251
-
330
bytes = end - offset;
252
- /* Check cancellation *before* setting busy = false, too! */
331
g_free(task);
253
- if (block_job_is_cancelled(job)) {
332
@@ -XXX,XX +XXX,XX @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
254
- return;
333
}
255
- }
334
}
256
-
335
257
- job->busy = false;
336
- ratelimit_calculate_delay(&s->rate_limit, task->bytes);
258
- if (!block_job_should_pause(job)) {
337
+ ratelimit_calculate_delay(&s->rate_limit, task->req.bytes);
259
- co_aio_sleep_ns(blk_get_aio_context(job->blk), type, ns);
338
260
- }
339
- trace_block_copy_process(s, task->offset);
261
- job->busy = true;
340
+ trace_block_copy_process(s, task->req.offset);
262
-
341
263
- block_job_pause_point(job);
342
- co_get_from_shres(s->mem, task->bytes);
264
-}
343
+ co_get_from_shres(s->mem, task->req.bytes);
265
-
344
266
-void block_job_yield(BlockJob *job)
345
offset = task_end(task);
267
-{
346
bytes = end - offset;
268
- assert(job->busy);
347
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
269
-
348
* Check that there is no task we still need to
270
- /* Check cancellation *before* setting busy = false, too! */
349
* wait to complete
271
- if (block_job_is_cancelled(job)) {
350
*/
272
- return;
351
- ret = block_copy_wait_one(s, call_state->offset,
273
- }
352
- call_state->bytes);
274
-
353
+ ret = reqlist_wait_one(&s->reqs, call_state->offset,
275
- job->busy = false;
354
+ call_state->bytes, &s->lock);
276
- if (!block_job_should_pause(job)) {
355
if (ret == 0) {
277
- qemu_coroutine_yield();
356
/*
278
- }
357
* No pending tasks, but check again the bitmap in this
279
- job->busy = true;
358
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
280
-
359
* between this and the critical section in
281
- block_job_pause_point(job);
360
* block_copy_dirty_clusters().
282
-}
361
*
283
-
362
- * block_copy_wait_one return value 0 also means that it
284
BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
363
+ * reqlist_wait_one return value 0 also means that it
285
{
364
* didn't release the lock. So, we are still in the same
286
BlockJobInfo *info;
365
* critical section, not interrupted by any concurrent
287
@@ -XXX,XX +XXX,XX @@ static void block_job_event_completed(BlockJob *job, const char *msg)
366
* access to state.
288
&error_abort);
367
diff --git a/block/reqlist.c b/block/reqlist.c
289
}
368
new file mode 100644
290
369
index XXXXXXX..XXXXXXX
291
+/*
370
--- /dev/null
292
+ * API for block job drivers and the block layer. These functions are
371
+++ b/block/reqlist.c
293
+ * declared in blockjob_int.h.
372
@@ -XXX,XX +XXX,XX @@
294
+ */
373
+/*
295
+
374
+ * reqlist API
296
+void *block_job_create(const char *job_id, const BlockJobDriver *driver,
375
+ *
297
+ BlockDriverState *bs, uint64_t perm,
376
+ * Copyright (C) 2013 Proxmox Server Solutions
298
+ uint64_t shared_perm, int64_t speed, int flags,
377
+ * Copyright (c) 2021 Virtuozzo International GmbH.
299
+ BlockCompletionFunc *cb, void *opaque, Error **errp)
378
+ *
379
+ * Authors:
380
+ * Dietmar Maurer (dietmar@proxmox.com)
381
+ * Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
382
+ *
383
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
384
+ * See the COPYING file in the top-level directory.
385
+ */
386
+
387
+#include "qemu/osdep.h"
388
+
389
+#include "block/reqlist.h"
390
+
391
+void reqlist_init_req(BlockReqList *reqs, BlockReq *req, int64_t offset,
392
+ int64_t bytes)
300
+{
393
+{
301
+ BlockBackend *blk;
394
+ assert(!reqlist_find_conflict(reqs, offset, bytes));
302
+ BlockJob *job;
395
+
303
+ int ret;
396
+ *req = (BlockReq) {
304
+
397
+ .offset = offset,
305
+ if (bs->job) {
398
+ .bytes = bytes,
306
+ error_setg(errp, QERR_DEVICE_IN_USE, bdrv_get_device_name(bs));
399
+ };
307
+ return NULL;
400
+ qemu_co_queue_init(&req->wait_queue);
308
+ }
401
+ QLIST_INSERT_HEAD(reqs, req, list);
309
+
402
+}
310
+ if (job_id == NULL && !(flags & BLOCK_JOB_INTERNAL)) {
403
+
311
+ job_id = bdrv_get_device_name(bs);
404
+BlockReq *reqlist_find_conflict(BlockReqList *reqs, int64_t offset,
312
+ if (!*job_id) {
405
+ int64_t bytes)
313
+ error_setg(errp, "An explicit job ID is required for this node");
406
+{
314
+ return NULL;
407
+ BlockReq *r;
408
+
409
+ QLIST_FOREACH(r, reqs, list) {
410
+ if (offset + bytes > r->offset && offset < r->offset + r->bytes) {
411
+ return r;
315
+ }
412
+ }
316
+ }
413
+ }
317
+
414
+
318
+ if (job_id) {
415
+ return NULL;
319
+ if (flags & BLOCK_JOB_INTERNAL) {
416
+}
320
+ error_setg(errp, "Cannot specify job ID for internal block job");
417
+
321
+ return NULL;
418
+bool coroutine_fn reqlist_wait_one(BlockReqList *reqs, int64_t offset,
322
+ }
419
+ int64_t bytes, CoMutex *lock)
323
+
420
+{
324
+ if (!id_wellformed(job_id)) {
421
+ BlockReq *r = reqlist_find_conflict(reqs, offset, bytes);
325
+ error_setg(errp, "Invalid job ID '%s'", job_id);
422
+
326
+ return NULL;
423
+ if (!r) {
327
+ }
424
+ return false;
328
+
329
+ if (block_job_get(job_id)) {
330
+ error_setg(errp, "Job ID '%s' already in use", job_id);
331
+ return NULL;
332
+ }
333
+ }
425
+ }
334
+
426
+
335
+ blk = blk_new(perm, shared_perm);
427
+ qemu_co_queue_wait(&r->wait_queue, lock);
336
+ ret = blk_insert_bs(blk, bs, errp);
428
+
337
+ if (ret < 0) {
429
+ return true;
338
+ blk_unref(blk);
339
+ return NULL;
340
+ }
341
+
342
+ job = g_malloc0(driver->instance_size);
343
+ job->driver = driver;
344
+ job->id = g_strdup(job_id);
345
+ job->blk = blk;
346
+ job->cb = cb;
347
+ job->opaque = opaque;
348
+ job->busy = false;
349
+ job->paused = true;
350
+ job->pause_count = 1;
351
+ job->refcnt = 1;
352
+
353
+ error_setg(&job->blocker, "block device is in use by block job: %s",
354
+ BlockJobType_lookup[driver->job_type]);
355
+ block_job_add_bdrv(job, "main node", bs, 0, BLK_PERM_ALL, &error_abort);
356
+ bs->job = job;
357
+
358
+ blk_set_dev_ops(blk, &block_job_dev_ops, job);
359
+ bdrv_op_unblock(bs, BLOCK_OP_TYPE_DATAPLANE, job->blocker);
360
+
361
+ QLIST_INSERT_HEAD(&block_jobs, job, job_list);
362
+
363
+ blk_add_aio_context_notifier(blk, block_job_attached_aio_context,
364
+ block_job_detach_aio_context, job);
365
+
366
+ /* Only set speed when necessary to avoid NotSupported error */
367
+ if (speed != 0) {
368
+ Error *local_err = NULL;
369
+
370
+ block_job_set_speed(job, speed, &local_err);
371
+ if (local_err) {
372
+ block_job_unref(job);
373
+ error_propagate(errp, local_err);
374
+ return NULL;
375
+ }
376
+ }
377
+ return job;
378
+}
430
+}
379
+
431
+
380
void block_job_pause_all(void)
432
+void coroutine_fn reqlist_shrink_req(BlockReq *req, int64_t new_bytes)
381
{
382
BlockJob *job = NULL;
383
@@ -XXX,XX +XXX,XX @@ void block_job_pause_all(void)
384
}
385
}
386
387
+void block_job_early_fail(BlockJob *job)
388
+{
433
+{
389
+ block_job_unref(job);
434
+ if (new_bytes == req->bytes) {
390
+}
391
+
392
+void block_job_completed(BlockJob *job, int ret)
393
+{
394
+ assert(blk_bs(job->blk)->job == job);
395
+ assert(!job->completed);
396
+ job->completed = true;
397
+ job->ret = ret;
398
+ if (!job->txn) {
399
+ block_job_completed_single(job);
400
+ } else if (ret < 0 || block_job_is_cancelled(job)) {
401
+ block_job_completed_txn_abort(job);
402
+ } else {
403
+ block_job_completed_txn_success(job);
404
+ }
405
+}
406
+
407
+static bool block_job_should_pause(BlockJob *job)
408
+{
409
+ return job->pause_count > 0;
410
+}
411
+
412
+void coroutine_fn block_job_pause_point(BlockJob *job)
413
+{
414
+ assert(job && block_job_started(job));
415
+
416
+ if (!block_job_should_pause(job)) {
417
+ return;
435
+ return;
418
+ }
436
+ }
419
+ if (block_job_is_cancelled(job)) {
437
+
420
+ return;
438
+ assert(new_bytes > 0 && new_bytes < req->bytes);
421
+ }
439
+
422
+
440
+ req->bytes = new_bytes;
423
+ if (job->driver->pause) {
441
+ qemu_co_queue_restart_all(&req->wait_queue);
424
+ job->driver->pause(job);
425
+ }
426
+
427
+ if (block_job_should_pause(job) && !block_job_is_cancelled(job)) {
428
+ job->paused = true;
429
+ job->busy = false;
430
+ qemu_coroutine_yield(); /* wait for block_job_resume() */
431
+ job->busy = true;
432
+ job->paused = false;
433
+ }
434
+
435
+ if (job->driver->resume) {
436
+ job->driver->resume(job);
437
+ }
438
+}
442
+}
439
+
443
+
440
void block_job_resume_all(void)
444
+void coroutine_fn reqlist_remove_req(BlockReq *req)
441
{
442
BlockJob *job = NULL;
443
@@ -XXX,XX +XXX,XX @@ void block_job_resume_all(void)
444
}
445
}
446
447
+void block_job_enter(BlockJob *job)
448
+{
445
+{
449
+ if (job->co && !job->busy) {
446
+ QLIST_REMOVE(req, list);
450
+ bdrv_coroutine_enter(blk_bs(job->blk), job->co);
447
+ qemu_co_queue_restart_all(&req->wait_queue);
451
+ }
452
+}
448
+}
453
+
449
diff --git a/MAINTAINERS b/MAINTAINERS
454
+bool block_job_is_cancelled(BlockJob *job)
450
index XXXXXXX..XXXXXXX 100644
455
+{
451
--- a/MAINTAINERS
456
+ return job->cancelled;
452
+++ b/MAINTAINERS
457
+}
453
@@ -XXX,XX +XXX,XX @@ F: block/stream.c
458
+
454
F: block/mirror.c
459
+void block_job_sleep_ns(BlockJob *job, QEMUClockType type, int64_t ns)
455
F: qapi/job.json
460
+{
456
F: block/block-copy.c
461
+ assert(job->busy);
457
-F: include/block/block-copy.c
462
+
458
+F: include/block/block-copy.h
463
+ /* Check cancellation *before* setting busy = false, too! */
459
+F: block/reqlist.c
464
+ if (block_job_is_cancelled(job)) {
460
+F: include/block/reqlist.h
465
+ return;
461
F: block/copy-before-write.h
466
+ }
462
F: block/copy-before-write.c
467
+
463
F: include/block/aio_task.h
468
+ job->busy = false;
464
diff --git a/block/meson.build b/block/meson.build
469
+ if (!block_job_should_pause(job)) {
465
index XXXXXXX..XXXXXXX 100644
470
+ co_aio_sleep_ns(blk_get_aio_context(job->blk), type, ns);
466
--- a/block/meson.build
471
+ }
467
+++ b/block/meson.build
472
+ job->busy = true;
468
@@ -XXX,XX +XXX,XX @@ block_ss.add(files(
473
+
469
'qcow2.c',
474
+ block_job_pause_point(job);
470
'quorum.c',
475
+}
471
'raw-format.c',
476
+
472
+ 'reqlist.c',
477
+void block_job_yield(BlockJob *job)
473
'snapshot.c',
478
+{
474
'throttle-groups.c',
479
+ assert(job->busy);
475
'throttle.c',
480
+
481
+ /* Check cancellation *before* setting busy = false, too! */
482
+ if (block_job_is_cancelled(job)) {
483
+ return;
484
+ }
485
+
486
+ job->busy = false;
487
+ if (!block_job_should_pause(job)) {
488
+ qemu_coroutine_yield();
489
+ }
490
+ job->busy = true;
491
+
492
+ block_job_pause_point(job);
493
+}
494
+
495
void block_job_event_ready(BlockJob *job)
496
{
497
job->ready = true;
498
--
476
--
499
2.9.3
477
2.34.1
500
501
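
A hedged usage sketch of the reqlist API introduced above: the state struct and function names are hypothetical, the reqlist_*() calls and their signatures are the ones declared in include/block/reqlist.h, and the call pattern follows how block-copy uses the list under its CoMutex:

#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "block/reqlist.h"

/* Hypothetical user state; s->lock and s->reqs are assumed to have been
 * initialized with qemu_co_mutex_init() and QLIST_INIT() respectively. */
typedef struct ExampleState {
    CoMutex lock;
    BlockReqList reqs;
} ExampleState;

static int coroutine_fn example_do_serialized_io(ExampleState *s,
                                                 int64_t offset, int64_t bytes)
{
    BlockReq req;

    qemu_co_mutex_lock(&s->lock);
    /* Wait until no in-flight request intersects [offset, offset + bytes). */
    while (reqlist_wait_one(&s->reqs, offset, bytes, &s->lock)) {
        /* the lock was dropped and re-taken, so re-check from scratch */
    }
    /* Safe now: reqlist_wait_one() returned false without releasing the lock. */
    reqlist_init_req(&s->reqs, &req, offset, bytes);
    qemu_co_mutex_unlock(&s->lock);

    /* ... perform the actual I/O outside the lock ... */

    qemu_co_mutex_lock(&s->lock);
    reqlist_remove_req(&req);   /* wakes any coroutine waiting on this request */
    qemu_co_mutex_unlock(&s->lock);

    return 0;
}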
New patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Let's reuse convenient helper.
4
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
7
Message-Id: <20220303194349.2304213-8-vsementsov@virtuozzo.com>
8
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
9
---
10
block/reqlist.c | 3 ++-
11
1 file changed, 2 insertions(+), 1 deletion(-)
12
13
diff --git a/block/reqlist.c b/block/reqlist.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/block/reqlist.c
16
+++ b/block/reqlist.c
17
@@ -XXX,XX +XXX,XX @@
18
*/
19
20
#include "qemu/osdep.h"
21
+#include "qemu/range.h"
22
23
#include "block/reqlist.h"
24
25
@@ -XXX,XX +XXX,XX @@ BlockReq *reqlist_find_conflict(BlockReqList *reqs, int64_t offset,
26
BlockReq *r;
27
28
QLIST_FOREACH(r, reqs, list) {
29
- if (offset + bytes > r->offset && offset < r->offset + r->bytes) {
30
+ if (ranges_overlap(offset, bytes, r->offset, r->bytes)) {
31
return r;
32
}
33
}
34
--
35
2.34.1
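
For reference, the open-coded predicate that ranges_overlap() replaces here; this is only an illustration of the equivalence, not new code in the tree:

/* Half-open intervals [a, a + a_len) and [b, b + b_len) intersect iff: */
static inline bool example_ranges_overlap(int64_t a, int64_t a_len,
                                          int64_t b, int64_t b_len)
{
    return a + a_len > b && a < b + b_len;
}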
1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
The new function helps respect the invariant that the coroutine
3
Add a convenient function, similar to bdrv_block_status(), to get
4
is entered with false user_resume, zero pause count and no error
4
the status of a dirty bitmap.
5
recorded in the iostatus.
6
5
7
Resetting the iostatus is now common to all of block_job_cancel_async,
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
block_job_user_resume and block_job_iostatus_reset, albeit with slight
7
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
9
differences:
8
Message-Id: <20220303194349.2304213-9-vsementsov@virtuozzo.com>
9
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
10
---
11
include/block/dirty-bitmap.h | 2 ++
12
include/qemu/hbitmap.h | 12 ++++++++++++
13
block/dirty-bitmap.c | 6 ++++++
14
util/hbitmap.c | 33 +++++++++++++++++++++++++++++++++
15
4 files changed, 53 insertions(+)
10
16
11
- block_job_cancel_async resets the iostatus, and resumes the job if
17
diff --git a/include/block/dirty-bitmap.h b/include/block/dirty-bitmap.h
12
there was an error, but the coroutine is not restarted immediately.
13
For example the caller may continue with a call to block_job_finish_sync.
14
15
- block_job_user_resume resets the iostatus. It wants to resume the job
16
unconditionally, even if there was no error.
17
18
- block_job_iostatus_reset doesn't resume the job at all. Maybe that's
19
a bug but it should be fixed separately.
20
21
block_job_iostatus_reset does the least common denominator, so add some
22
checking but otherwise leave it as the entry point for resetting the
23
iostatus.
24
25
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
26
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
27
Message-id: 20170508141310.8674-8-pbonzini@redhat.com
28
Signed-off-by: Jeff Cody <jcody@redhat.com>
29
---
30
blockjob.c | 24 ++++++++++++++++++++----
31
1 file changed, 20 insertions(+), 4 deletions(-)
32
33
diff --git a/blockjob.c b/blockjob.c
34
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
35
--- a/blockjob.c
19
--- a/include/block/dirty-bitmap.h
36
+++ b/blockjob.c
20
+++ b/include/block/dirty-bitmap.h
37
@@ -XXX,XX +XXX,XX @@ static void block_job_completed_single(BlockJob *job)
21
@@ -XXX,XX +XXX,XX @@ int64_t bdrv_dirty_bitmap_next_zero(BdrvDirtyBitmap *bitmap, int64_t offset,
38
block_job_unref(job);
22
bool bdrv_dirty_bitmap_next_dirty_area(BdrvDirtyBitmap *bitmap,
23
int64_t start, int64_t end, int64_t max_dirty_count,
24
int64_t *dirty_start, int64_t *dirty_count);
25
+bool bdrv_dirty_bitmap_status(BdrvDirtyBitmap *bitmap, int64_t offset,
26
+ int64_t bytes, int64_t *count);
27
BdrvDirtyBitmap *bdrv_reclaim_dirty_bitmap_locked(BdrvDirtyBitmap *bitmap,
28
Error **errp);
29
30
diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
31
index XXXXXXX..XXXXXXX 100644
32
--- a/include/qemu/hbitmap.h
33
+++ b/include/qemu/hbitmap.h
34
@@ -XXX,XX +XXX,XX @@ bool hbitmap_next_dirty_area(const HBitmap *hb, int64_t start, int64_t end,
35
int64_t max_dirty_count,
36
int64_t *dirty_start, int64_t *dirty_count);
37
38
+/*
39
+ * bdrv_dirty_bitmap_status:
40
+ * @hb: The HBitmap to operate on
41
+ * @start: The bit to start from
42
+ * @count: Number of bits to proceed
43
+ * @pnum: Out-parameter. How many bits has same value starting from @start
44
+ *
45
+ * Returns true if bitmap is dirty at @start, false otherwise.
46
+ */
47
+bool hbitmap_status(const HBitmap *hb, int64_t start, int64_t count,
48
+ int64_t *pnum);
49
+
50
/**
51
* hbitmap_iter_next:
52
* @hbi: HBitmapIter to operate on.
53
diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/block/dirty-bitmap.c
56
+++ b/block/dirty-bitmap.c
57
@@ -XXX,XX +XXX,XX @@ bool bdrv_dirty_bitmap_next_dirty_area(BdrvDirtyBitmap *bitmap,
58
dirty_start, dirty_count);
39
}
59
}
40
60
41
+static void block_job_cancel_async(BlockJob *job)
61
+bool bdrv_dirty_bitmap_status(BdrvDirtyBitmap *bitmap, int64_t offset,
62
+ int64_t bytes, int64_t *count)
42
+{
63
+{
43
+ if (job->iostatus != BLOCK_DEVICE_IO_STATUS_OK) {
64
+ return hbitmap_status(bitmap->bitmap, offset, bytes, count);
44
+ block_job_iostatus_reset(job);
45
+ }
46
+ if (job->user_paused) {
47
+ /* Do not call block_job_enter here, the caller will handle it. */
48
+ job->user_paused = false;
49
+ job->pause_count--;
50
+ }
51
+ job->cancelled = true;
52
+}
65
+}
53
+
66
+
54
static void block_job_completed_txn_abort(BlockJob *job)
67
/**
68
* bdrv_merge_dirty_bitmap: merge src into dest.
69
* Ensures permissions on bitmaps are reasonable; use for public API.
70
diff --git a/util/hbitmap.c b/util/hbitmap.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/util/hbitmap.c
73
+++ b/util/hbitmap.c
74
@@ -XXX,XX +XXX,XX @@ bool hbitmap_next_dirty_area(const HBitmap *hb, int64_t start, int64_t end,
75
return true;
76
}
77
78
+bool hbitmap_status(const HBitmap *hb, int64_t start, int64_t count,
79
+ int64_t *pnum)
80
+{
81
+ int64_t next_dirty, next_zero;
82
+
83
+ assert(start >= 0);
84
+ assert(count > 0);
85
+ assert(start + count <= hb->orig_size);
86
+
87
+ next_dirty = hbitmap_next_dirty(hb, start, count);
88
+ if (next_dirty == -1) {
89
+ *pnum = count;
90
+ return false;
91
+ }
92
+
93
+ if (next_dirty > start) {
94
+ *pnum = next_dirty - start;
95
+ return false;
96
+ }
97
+
98
+ assert(next_dirty == start);
99
+
100
+ next_zero = hbitmap_next_zero(hb, start, count);
101
+ if (next_zero == -1) {
102
+ *pnum = count;
103
+ return true;
104
+ }
105
+
106
+ assert(next_zero > start);
107
+ *pnum = next_zero - start;
108
+ return false;
109
+}
110
+
111
bool hbitmap_empty(const HBitmap *hb)
55
{
112
{
56
AioContext *ctx;
113
return hb->count == 0;
57
@@ -XXX,XX +XXX,XX @@ static void block_job_completed_txn_abort(BlockJob *job)
58
* them; this job, however, may or may not be cancelled, depending
59
* on the caller, so leave it. */
60
if (other_job != job) {
61
- other_job->cancelled = true;
62
+ block_job_cancel_async(other_job);
63
}
64
continue;
65
}
66
@@ -XXX,XX +XXX,XX @@ bool block_job_user_paused(BlockJob *job)
67
void block_job_user_resume(BlockJob *job)
68
{
69
if (job && job->user_paused && job->pause_count > 0) {
70
- job->user_paused = false;
71
block_job_iostatus_reset(job);
72
+ job->user_paused = false;
73
block_job_resume(job);
74
}
75
}
76
@@ -XXX,XX +XXX,XX @@ void block_job_user_resume(BlockJob *job)
77
void block_job_cancel(BlockJob *job)
78
{
79
if (block_job_started(job)) {
80
- job->cancelled = true;
81
- block_job_iostatus_reset(job);
82
+ block_job_cancel_async(job);
83
block_job_enter(job);
84
} else {
85
block_job_completed(job, -ECANCELED);
86
@@ -XXX,XX +XXX,XX @@ void block_job_yield(BlockJob *job)
87
88
void block_job_iostatus_reset(BlockJob *job)
89
{
90
+ if (job->iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
91
+ return;
92
+ }
93
+ assert(job->user_paused && job->pause_count > 0);
94
job->iostatus = BLOCK_DEVICE_IO_STATUS_OK;
95
}
96
97
--
114
--
98
2.9.3
115
2.34.1
99
100
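
A hedged usage sketch for bdrv_dirty_bitmap_status() as declared above: walk a range of the bitmap in maximal same-value chunks, much like a bdrv_block_status() loop. The walker function itself is hypothetical:

#include "qemu/osdep.h"
#include "block/dirty-bitmap.h"

/* Hypothetical helper: count how many bytes of [offset, offset + bytes)
 * are dirty in @bitmap. */
static int64_t example_count_dirty(BdrvDirtyBitmap *bitmap,
                                   int64_t offset, int64_t bytes)
{
    int64_t dirty_bytes = 0;

    while (bytes > 0) {
        int64_t count;

        if (bdrv_dirty_bitmap_status(bitmap, offset, bytes, &count)) {
            /* [offset, offset + count) is uniformly dirty */
            dirty_bytes += count;
        }
        /* otherwise [offset, offset + count) is uniformly clean */
        offset += count;
        bytes -= count;
    }

    return dirty_bytes;
}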
1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
Outside blockjob.c, block_job_unref is only used when a block job fails
3
Add function to wait for all intersecting requests.
4
to start, and block_job_ref is not used at all. The reference counting
4
To be used in a following commit.
5
thus is pretty well hidden. Introduce a separate function to be used
6
by block jobs; because block_job_ref and block_job_unref now become
7
static, move them earlier in blockjob.c.
8
5
9
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
10
Reviewed-by: John Snow <jsnow@redhat.com>
7
Reviewed-by: Nikita Lapshin <nikita.lapshin@virtuozzo.com>
11
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
8
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
12
Reviewed-by: Jeff Cody <jcody@redhat.com>
9
Message-Id: <20220303194349.2304213-10-vsementsov@virtuozzo.com>
13
Message-id: 20170508141310.8674-4-pbonzini@redhat.com
10
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
14
Signed-off-by: Jeff Cody <jcody@redhat.com>
15
---
11
---
16
block/backup.c | 2 +-
12
include/block/reqlist.h | 8 ++++++++
17
block/commit.c | 2 +-
13
block/reqlist.c | 8 ++++++++
18
block/mirror.c | 2 +-
14
2 files changed, 16 insertions(+)
19
blockjob.c | 47 ++++++++++++++++++++++++++------------------
20
include/block/blockjob_int.h | 15 +++-----------
21
tests/test-blockjob.c | 10 +++++-----
22
6 files changed, 39 insertions(+), 39 deletions(-)
23
15
24
diff --git a/block/backup.c b/block/backup.c
16
diff --git a/include/block/reqlist.h b/include/block/reqlist.h
25
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
26
--- a/block/backup.c
18
--- a/include/block/reqlist.h
27
+++ b/block/backup.c
19
+++ b/include/block/reqlist.h
28
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
20
@@ -XXX,XX +XXX,XX @@ BlockReq *reqlist_find_conflict(BlockReqList *reqs, int64_t offset,
29
}
21
bool coroutine_fn reqlist_wait_one(BlockReqList *reqs, int64_t offset,
30
if (job) {
22
int64_t bytes, CoMutex *lock);
31
backup_clean(&job->common);
23
32
- block_job_unref(&job->common);
24
+/*
33
+ block_job_early_fail(&job->common);
25
+ * Wait for all intersecting requests. It just calls reqlist_wait_one() in a
34
}
26
+ * loop, caller is responsible to stop producing new requests in this region
35
27
+ * in parallel, otherwise reqlist_wait_all() may never return.
36
return NULL;
28
+ */
37
diff --git a/block/commit.c b/block/commit.c
29
+void coroutine_fn reqlist_wait_all(BlockReqList *reqs, int64_t offset,
30
+ int64_t bytes, CoMutex *lock);
31
+
32
/*
33
* Shrink request and wake all waiting coroutines (maybe some of them are not
34
* intersecting with shrunk request).
35
diff --git a/block/reqlist.c b/block/reqlist.c
38
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
39
--- a/block/commit.c
37
--- a/block/reqlist.c
40
+++ b/block/commit.c
38
+++ b/block/reqlist.c
41
@@ -XXX,XX +XXX,XX @@ fail:
39
@@ -XXX,XX +XXX,XX @@ bool coroutine_fn reqlist_wait_one(BlockReqList *reqs, int64_t offset,
42
if (commit_top_bs) {
40
return true;
43
bdrv_set_backing_hd(overlay_bs, top, &error_abort);
44
}
45
- block_job_unref(&s->common);
46
+ block_job_early_fail(&s->common);
47
}
41
}
48
42
49
43
+void coroutine_fn reqlist_wait_all(BlockReqList *reqs, int64_t offset,
50
diff --git a/block/mirror.c b/block/mirror.c
44
+ int64_t bytes, CoMutex *lock)
51
index XXXXXXX..XXXXXXX 100644
52
--- a/block/mirror.c
53
+++ b/block/mirror.c
54
@@ -XXX,XX +XXX,XX @@ fail:
55
56
g_free(s->replaces);
57
blk_unref(s->target);
58
- block_job_unref(&s->common);
59
+ block_job_early_fail(&s->common);
60
}
61
62
bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
63
diff --git a/blockjob.c b/blockjob.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/blockjob.c
66
+++ b/blockjob.c
67
@@ -XXX,XX +XXX,XX @@ BlockJob *block_job_get(const char *id)
68
return NULL;
69
}
70
71
+static void block_job_ref(BlockJob *job)
72
+{
45
+{
73
+ ++job->refcnt;
46
+ while (reqlist_wait_one(reqs, offset, bytes, lock)) {
74
+}
47
+ /* continue */
75
+
76
+static void block_job_attached_aio_context(AioContext *new_context,
77
+ void *opaque);
78
+static void block_job_detach_aio_context(void *opaque);
79
+
80
+static void block_job_unref(BlockJob *job)
81
+{
82
+ if (--job->refcnt == 0) {
83
+ BlockDriverState *bs = blk_bs(job->blk);
84
+ bs->job = NULL;
85
+ block_job_remove_all_bdrv(job);
86
+ blk_remove_aio_context_notifier(job->blk,
87
+ block_job_attached_aio_context,
88
+ block_job_detach_aio_context, job);
89
+ blk_unref(job->blk);
90
+ error_free(job->blocker);
91
+ g_free(job->id);
92
+ QLIST_REMOVE(job, job_list);
93
+ g_free(job);
94
+ }
48
+ }
95
+}
49
+}
96
+
50
+
97
static void block_job_attached_aio_context(AioContext *new_context,
51
void coroutine_fn reqlist_shrink_req(BlockReq *req, int64_t new_bytes)
98
void *opaque)
99
{
52
{
100
@@ -XXX,XX +XXX,XX @@ void block_job_start(BlockJob *job)
53
if (new_bytes == req->bytes) {
101
bdrv_coroutine_enter(blk_bs(job->blk), job->co);
102
}
103
104
-void block_job_ref(BlockJob *job)
105
+void block_job_early_fail(BlockJob *job)
106
{
107
- ++job->refcnt;
108
-}
109
-
110
-void block_job_unref(BlockJob *job)
111
-{
112
- if (--job->refcnt == 0) {
113
- BlockDriverState *bs = blk_bs(job->blk);
114
- bs->job = NULL;
115
- block_job_remove_all_bdrv(job);
116
- blk_remove_aio_context_notifier(job->blk,
117
- block_job_attached_aio_context,
118
- block_job_detach_aio_context, job);
119
- blk_unref(job->blk);
120
- error_free(job->blocker);
121
- g_free(job->id);
122
- QLIST_REMOVE(job, job_list);
123
- g_free(job);
124
- }
125
+ block_job_unref(job);
126
}
127
128
static void block_job_completed_single(BlockJob *job)
129
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
130
index XXXXXXX..XXXXXXX 100644
131
--- a/include/block/blockjob_int.h
132
+++ b/include/block/blockjob_int.h
133
@@ -XXX,XX +XXX,XX @@ void block_job_sleep_ns(BlockJob *job, QEMUClockType type, int64_t ns);
134
void block_job_yield(BlockJob *job);
135
136
/**
137
- * block_job_ref:
138
+ * block_job_early_fail:
139
* @bs: The block device.
140
*
141
- * Grab a reference to the block job. Should be paired with block_job_unref.
142
+ * The block job could not be started, free it.
143
*/
144
-void block_job_ref(BlockJob *job);
145
-
146
-/**
147
- * block_job_unref:
148
- * @bs: The block device.
149
- *
150
- * Release reference to the block job and release resources if it is the last
151
- * reference.
152
- */
153
-void block_job_unref(BlockJob *job);
154
+void block_job_early_fail(BlockJob *job);
155
156
/**
157
* block_job_completed:
158
diff --git a/tests/test-blockjob.c b/tests/test-blockjob.c
159
index XXXXXXX..XXXXXXX 100644
160
--- a/tests/test-blockjob.c
161
+++ b/tests/test-blockjob.c
162
@@ -XXX,XX +XXX,XX @@ static void test_job_ids(void)
163
job[1] = do_test_id(blk[1], "id0", false);
164
165
/* But once job[0] finishes we can reuse its ID */
166
- block_job_unref(job[0]);
167
+ block_job_early_fail(job[0]);
168
job[1] = do_test_id(blk[1], "id0", true);
169
170
/* No job ID specified, defaults to the backend name ('drive1') */
171
- block_job_unref(job[1]);
172
+ block_job_early_fail(job[1]);
173
job[1] = do_test_id(blk[1], NULL, true);
174
175
/* Duplicate job ID */
176
@@ -XXX,XX +XXX,XX @@ static void test_job_ids(void)
177
/* This one is valid */
178
job[2] = do_test_id(blk[2], "id_2", true);
179
180
- block_job_unref(job[0]);
181
- block_job_unref(job[1]);
182
- block_job_unref(job[2]);
183
+ block_job_early_fail(job[0]);
184
+ block_job_early_fail(job[1]);
185
+ block_job_early_fail(job[2]);
186
187
destroy_blk(blk[0]);
188
destroy_blk(blk[1]);
189
--
54
--
190
2.9.3
55
2.34.1
191
192
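
A hedged sketch of the intended call pattern; the draining function is hypothetical, reqlist_wait_all() is the helper added above:

#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "block/reqlist.h"

/*
 * Hypothetical caller: before invalidating a region, drain every request
 * that intersects it.  @lock must be held, and the caller must not start
 * new conflicting requests itself, as the header comment warns, or
 * reqlist_wait_all() may never return.
 */
static void coroutine_fn example_drain_region(BlockReqList *reqs,
                                              int64_t offset, int64_t bytes,
                                              CoMutex *lock)
{
    reqlist_wait_all(reqs, offset, bytes, lock);
    /* No request intersects [offset, offset + bytes) at this point. */
}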
1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
Remove use of block_job_pause/resume from outside blockjob.c, thus
3
Add new block driver handlers and corresponding generic wrappers.
4
making them static. The new functions are used by the block layer,
4
It will be used to allow the copy-before-write filter to provide
5
so place them in blockjob_int.h.
5
a rich fleecing interface in a following commit.
6
6
7
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
7
In future this approach may be used to allow reading qcow2 internal
8
Reviewed-by: John Snow <jsnow@redhat.com>
8
snapshots, for example to export them through NBD.
9
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
9
10
Reviewed-by: Jeff Cody <jcody@redhat.com>
10
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
11
Message-id: 20170508141310.8674-5-pbonzini@redhat.com
11
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
12
Signed-off-by: Jeff Cody <jcody@redhat.com>
12
Message-Id: <20220303194349.2304213-11-vsementsov@virtuozzo.com>
13
[hreitz: Rebased on block GS/IO split]
14
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
13
---
15
---
14
block/io.c | 19 ++------
16
include/block/block_int-common.h | 18 ++++++++
15
blockjob.c | 114 ++++++++++++++++++++++++++-----------------
17
include/block/block_int-io.h | 9 ++++
16
include/block/blockjob.h | 16 ------
18
block/io.c | 72 ++++++++++++++++++++++++++++++++
17
include/block/blockjob_int.h | 14 ++++++
19
3 files changed, 99 insertions(+)
18
4 files changed, 86 insertions(+), 77 deletions(-)
19
20
21
diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/include/block/block_int-common.h
24
+++ b/include/block/block_int-common.h
25
@@ -XXX,XX +XXX,XX @@ struct BlockDriver {
26
bool want_zero, int64_t offset, int64_t bytes, int64_t *pnum,
27
int64_t *map, BlockDriverState **file);
28
29
+ /*
30
+ * Snapshot-access API.
31
+ *
32
+ * Block-driver may provide snapshot-access API: special functions to access
33
+ * some internal "snapshot". The functions are similar with normal
34
+ * read/block_status/discard handler, but don't have any specific handling
35
+ * in generic block-layer: no serializing, no alignment, no tracked
36
+ * requests. So, block-driver that realizes these APIs is fully responsible
37
+ * for synchronization between snapshot-access API and normal IO requests.
38
+ */
39
+ int coroutine_fn (*bdrv_co_preadv_snapshot)(BlockDriverState *bs,
40
+ int64_t offset, int64_t bytes, QEMUIOVector *qiov, size_t qiov_offset);
41
+ int coroutine_fn (*bdrv_co_snapshot_block_status)(BlockDriverState *bs,
42
+ bool want_zero, int64_t offset, int64_t bytes, int64_t *pnum,
43
+ int64_t *map, BlockDriverState **file);
44
+ int coroutine_fn (*bdrv_co_pdiscard_snapshot)(BlockDriverState *bs,
45
+ int64_t offset, int64_t bytes);
46
+
47
/*
48
* Invalidate any cached meta-data.
49
*/
50
diff --git a/include/block/block_int-io.h b/include/block/block_int-io.h
51
index XXXXXXX..XXXXXXX 100644
52
--- a/include/block/block_int-io.h
53
+++ b/include/block/block_int-io.h
54
@@ -XXX,XX +XXX,XX @@
55
* the I/O API.
56
*/
57
58
+int coroutine_fn bdrv_co_preadv_snapshot(BdrvChild *child,
59
+ int64_t offset, int64_t bytes, QEMUIOVector *qiov, size_t qiov_offset);
60
+int coroutine_fn bdrv_co_snapshot_block_status(BlockDriverState *bs,
61
+ bool want_zero, int64_t offset, int64_t bytes, int64_t *pnum,
62
+ int64_t *map, BlockDriverState **file);
63
+int coroutine_fn bdrv_co_pdiscard_snapshot(BlockDriverState *bs,
64
+ int64_t offset, int64_t bytes);
65
+
66
+
67
int coroutine_fn bdrv_co_preadv(BdrvChild *child,
68
int64_t offset, int64_t bytes, QEMUIOVector *qiov,
69
BdrvRequestFlags flags);
20
diff --git a/block/io.c b/block/io.c
70
diff --git a/block/io.c b/block/io.c
21
index XXXXXXX..XXXXXXX 100644
71
index XXXXXXX..XXXXXXX 100644
22
--- a/block/io.c
72
--- a/block/io.c
23
+++ b/block/io.c
73
+++ b/block/io.c
24
@@ -XXX,XX +XXX,XX @@
74
@@ -XXX,XX +XXX,XX @@ void bdrv_cancel_in_flight(BlockDriverState *bs)
25
#include "trace.h"
75
bs->drv->bdrv_cancel_in_flight(bs);
26
#include "sysemu/block-backend.h"
27
#include "block/blockjob.h"
28
+#include "block/blockjob_int.h"
29
#include "block/block_int.h"
30
#include "qemu/cutils.h"
31
#include "qapi/error.h"
32
@@ -XXX,XX +XXX,XX @@ void bdrv_drain_all_begin(void)
33
bool waited = true;
34
BlockDriverState *bs;
35
BdrvNextIterator it;
36
- BlockJob *job = NULL;
37
GSList *aio_ctxs = NULL, *ctx;
38
39
- while ((job = block_job_next(job))) {
40
- AioContext *aio_context = blk_get_aio_context(job->blk);
41
-
42
- aio_context_acquire(aio_context);
43
- block_job_pause(job);
44
- aio_context_release(aio_context);
45
- }
46
+ block_job_pause_all();
47
48
for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
49
AioContext *aio_context = bdrv_get_aio_context(bs);
50
@@ -XXX,XX +XXX,XX @@ void bdrv_drain_all_end(void)
51
{
52
BlockDriverState *bs;
53
BdrvNextIterator it;
54
 }
+
+int coroutine_fn
+bdrv_co_preadv_snapshot(BdrvChild *child, int64_t offset, int64_t bytes,
+                        QEMUIOVector *qiov, size_t qiov_offset)
+{
+    BlockDriverState *bs = child->bs;
+    BlockDriver *drv = bs->drv;
+    int ret;
+    IO_CODE();
+
+    if (!drv) {
+        return -ENOMEDIUM;
+    }
+
+    if (!drv->bdrv_co_preadv_snapshot) {
+        return -ENOTSUP;
+    }
+
+    bdrv_inc_in_flight(bs);
+    ret = drv->bdrv_co_preadv_snapshot(bs, offset, bytes, qiov, qiov_offset);
+    bdrv_dec_in_flight(bs);
+
+    return ret;
+}
+
+int coroutine_fn
+bdrv_co_snapshot_block_status(BlockDriverState *bs,
+                              bool want_zero, int64_t offset, int64_t bytes,
+                              int64_t *pnum, int64_t *map,
+                              BlockDriverState **file)
+{
+    BlockDriver *drv = bs->drv;
+    int ret;
+    IO_CODE();
+
+    if (!drv) {
+        return -ENOMEDIUM;
+    }
+
+    if (!drv->bdrv_co_snapshot_block_status) {
+        return -ENOTSUP;
+    }
+
+    bdrv_inc_in_flight(bs);
+    ret = drv->bdrv_co_snapshot_block_status(bs, want_zero, offset, bytes,
+                                             pnum, map, file);
+    bdrv_dec_in_flight(bs);
+
+    return ret;
+}
+
+int coroutine_fn
+bdrv_co_pdiscard_snapshot(BlockDriverState *bs, int64_t offset, int64_t bytes)
+{
+    BlockDriver *drv = bs->drv;
+    int ret;
+    IO_CODE();
+
+    if (!drv) {
+        return -ENOMEDIUM;
+    }
+
+    if (!drv->bdrv_co_pdiscard_snapshot) {
+        return -ENOTSUP;
+    }
+
+    bdrv_inc_in_flight(bs);
+    ret = drv->bdrv_co_pdiscard_snapshot(bs, offset, bytes);
+    bdrv_dec_in_flight(bs);
+
+    return ret;
+}
--
2.34.1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

The new block driver simply utilizes the snapshot-access API of the
underlying block node.

In further patches we want to use it like this:

[guest]                   [NBD export]
   |                           |
   | root                      | root
   v                 file      v
[copy-before-write]<------[snapshot-access]
   |                           |
   | file                      | target
   v                           v
[active-disk]              [temp.img]

This way, the NBD client will be able to read the snapshotted state of
the active disk while the active disk continues to be written to by the
guest. This is known as "fleecing", and currently uses another scheme
based on a temporary qcow2 image whose backing file is active-disk. The
new scheme comes with benefits - see the next commit.

The other possible application is exporting internal snapshots of
qcow2, like this:

[guest]          [NBD export]
   |                  |
   | root             | root
   v        file      v
[qcow2]<---------[snapshot-access]

For this, we'll need to implement snapshot-access API handlers in the
qcow2 driver, and improve the snapshot-access block driver (and API) to
make it possible to select a snapshot by name. Another thing to improve
is the size of the snapshot. For now, for simplicity, we just use the
size of bs->file, which is OK for backup, but for qcow2 snapshot export
we'll need to improve the snapshot-access API to get the size of the
snapshot.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20220303194349.2304213-12-vsementsov@virtuozzo.com>
[hreitz: Rebased on block GS/IO split]
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
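
As an illustration (not part of this patch), the fleecing graph above
can be assembled over QMP roughly the way the image-fleecing iotest
later in this pull request does it; the node names, the qom path and
the socket path below are made-up placeholders:

# Sketch in iotests style; assumes an iotests.VM() instance `vm`, an
# active-disk node 'node0', a scratch node 'tmp' and the guest device's
# QOM path `qom_path` already exist.
vm.qmp('blockdev-add', {
    'driver': 'copy-before-write',
    'node-name': 'fl-cbw',
    'file': 'node0',
    'target': 'tmp',
})
vm.qmp('qom-set', path=qom_path, property='drive', value='fl-cbw')

# Expose the frozen point-in-time view through the new driver.
vm.qmp('blockdev-add', {
    'driver': 'snapshot-access',
    'node-name': 'fl-access',
    'file': 'fl-cbw',
})

# Export it for the fleecing client.
vm.qmp('nbd-server-start', {'addr': {'type': 'unix',
                                     'data': {'path': nbd_sock_path}}})
vm.qmp('nbd-server-add', device='fl-access')
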
45
qapi/block-core.json | 4 +-
46
include/block/block_int-common.h | 6 ++
47
block/snapshot-access.c | 132 +++++++++++++++++++++++++++++++
48
MAINTAINERS | 1 +
49
block/meson.build | 1 +
50
5 files changed, 143 insertions(+), 1 deletion(-)
51
create mode 100644 block/snapshot-access.c
52
53
diff --git a/qapi/block-core.json b/qapi/block-core.json
54
index XXXXXXX..XXXXXXX 100644
55
--- a/qapi/block-core.json
56
+++ b/qapi/block-core.json
57
@@ -XXX,XX +XXX,XX @@
58
# @blkreplay: Since 4.2
59
# @compress: Since 5.0
60
# @copy-before-write: Since 6.2
61
+# @snapshot-access: Since 7.0
62
#
63
# Since: 2.9
64
##
65
{ 'enum': 'BlockdevDriver',
66
'data': [ 'blkdebug', 'blklogwrites', 'blkreplay', 'blkverify', 'bochs',
67
'cloop', 'compress', 'copy-before-write', 'copy-on-read', 'dmg',
68
- 'file', 'ftp', 'ftps', 'gluster',
69
+ 'file', 'snapshot-access', 'ftp', 'ftps', 'gluster',
70
{'name': 'host_cdrom', 'if': 'HAVE_HOST_BLOCK_DEVICE' },
71
{'name': 'host_device', 'if': 'HAVE_HOST_BLOCK_DEVICE' },
72
'http', 'https', 'iscsi',
73
@@ -XXX,XX +XXX,XX @@
74
'rbd': 'BlockdevOptionsRbd',
75
'replication': { 'type': 'BlockdevOptionsReplication',
76
'if': 'CONFIG_REPLICATION' },
77
+ 'snapshot-access': 'BlockdevOptionsGenericFormat',
78
'ssh': 'BlockdevOptionsSsh',
79
'throttle': 'BlockdevOptionsThrottle',
80
'vdi': 'BlockdevOptionsGenericFormat',
81
diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
82
index XXXXXXX..XXXXXXX 100644
83
--- a/include/block/block_int-common.h
84
+++ b/include/block/block_int-common.h
85
@@ -XXX,XX +XXX,XX @@ struct BlockDriver {
86
* in generic block-layer: no serializing, no alignment, no tracked
87
* requests. So, block-driver that realizes these APIs is fully responsible
88
* for synchronization between snapshot-access API and normal IO requests.
89
+ *
90
+ * TODO: To be able to support qcow2's internal snapshots, this API will
91
+ * need to be extended to:
92
+ * - be able to select a specific snapshot
93
+ * - receive the snapshot's actual length (which may differ from bs's
94
+ * length)
95
*/
96
int coroutine_fn (*bdrv_co_preadv_snapshot)(BlockDriverState *bs,
97
int64_t offset, int64_t bytes, QEMUIOVector *qiov, size_t qiov_offset);
98
diff --git a/block/snapshot-access.c b/block/snapshot-access.c
99
new file mode 100644
100
index XXXXXXX..XXXXXXX
101
--- /dev/null
102
+++ b/block/snapshot-access.c
103
@@ -XXX,XX +XXX,XX @@
104
+/*
105
+ * snapshot_access block driver
106
+ *
107
+ * Copyright (c) 2022 Virtuozzo International GmbH.
108
+ *
109
+ * Author:
110
+ * Sementsov-Ogievskiy Vladimir <vsementsov@virtuozzo.com>
111
+ *
112
+ * This program is free software; you can redistribute it and/or modify
113
+ * it under the terms of the GNU General Public License as published by
114
+ * the Free Software Foundation; either version 2 of the License, or
115
+ * (at your option) any later version.
116
+ *
117
+ * This program is distributed in the hope that it will be useful,
118
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
119
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
120
+ * GNU General Public License for more details.
121
+ *
122
+ * You should have received a copy of the GNU General Public License
123
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
124
+ */
125
+
126
+#include "qemu/osdep.h"
127
+
128
+#include "sysemu/block-backend.h"
129
+#include "qemu/cutils.h"
130
+#include "block/block_int.h"
131
+
132
+static coroutine_fn int
133
+snapshot_access_co_preadv_part(BlockDriverState *bs,
134
+ int64_t offset, int64_t bytes,
135
+ QEMUIOVector *qiov, size_t qiov_offset,
136
+ BdrvRequestFlags flags)
137
+{
138
+ if (flags) {
139
+ return -ENOTSUP;
140
+ }
141
+
142
+ return bdrv_co_preadv_snapshot(bs->file, offset, bytes, qiov, qiov_offset);
143
+}
144
+
145
+static int coroutine_fn
146
+snapshot_access_co_block_status(BlockDriverState *bs,
147
+ bool want_zero, int64_t offset,
148
+ int64_t bytes, int64_t *pnum,
149
+ int64_t *map, BlockDriverState **file)
150
+{
151
+ return bdrv_co_snapshot_block_status(bs->file->bs, want_zero, offset,
152
+ bytes, pnum, map, file);
153
+}
154
+
155
+static int coroutine_fn snapshot_access_co_pdiscard(BlockDriverState *bs,
156
+ int64_t offset, int64_t bytes)
157
+{
158
+ return bdrv_co_pdiscard_snapshot(bs->file->bs, offset, bytes);
159
+}
160
+
161
+static int coroutine_fn
162
+snapshot_access_co_pwrite_zeroes(BlockDriverState *bs,
163
+ int64_t offset, int64_t bytes,
164
+ BdrvRequestFlags flags)
165
+{
166
+ return -ENOTSUP;
167
+}
168
+
169
+static coroutine_fn int
170
+snapshot_access_co_pwritev_part(BlockDriverState *bs,
171
+ int64_t offset, int64_t bytes,
172
+ QEMUIOVector *qiov, size_t qiov_offset,
173
+ BdrvRequestFlags flags)
174
+{
175
+ return -ENOTSUP;
176
+}
177
+
178
+
179
+static void snapshot_access_refresh_filename(BlockDriverState *bs)
180
+{
181
+ pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
182
+ bs->file->bs->filename);
183
+}
184
+
185
+static int snapshot_access_open(BlockDriverState *bs, QDict *options, int flags,
186
+ Error **errp)
187
+{
188
+ bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
189
+ BDRV_CHILD_DATA | BDRV_CHILD_PRIMARY,
190
+ false, errp);
191
+ if (!bs->file) {
192
+ return -EINVAL;
193
+ }
194
+
195
+ bs->total_sectors = bs->file->bs->total_sectors;
196
+
197
+ return 0;
198
+}
199
+
200
+static void snapshot_access_child_perm(BlockDriverState *bs, BdrvChild *c,
201
+ BdrvChildRole role,
202
+ BlockReopenQueue *reopen_queue,
203
+ uint64_t perm, uint64_t shared,
204
+ uint64_t *nperm, uint64_t *nshared)
205
+{
206
+ /*
207
+ * Currently, we don't need any permissions. If bs->file provides
208
+ * snapshot-access API, we can use it.
209
+ */
210
+ *nperm = 0;
211
+ *nshared = BLK_PERM_ALL;
212
+}
213
+
214
+BlockDriver bdrv_snapshot_access_drv = {
215
+ .format_name = "snapshot-access",
216
+
217
+ .bdrv_open = snapshot_access_open,
218
+
219
+ .bdrv_co_preadv_part = snapshot_access_co_preadv_part,
220
+ .bdrv_co_pwritev_part = snapshot_access_co_pwritev_part,
221
+ .bdrv_co_pwrite_zeroes = snapshot_access_co_pwrite_zeroes,
222
+ .bdrv_co_pdiscard = snapshot_access_co_pdiscard,
223
+ .bdrv_co_block_status = snapshot_access_co_block_status,
224
+
225
+ .bdrv_refresh_filename = snapshot_access_refresh_filename,
226
+
227
+ .bdrv_child_perm = snapshot_access_child_perm,
228
+};
229
+
230
+static void snapshot_access_init(void)
231
+{
232
+ bdrv_register(&bdrv_snapshot_access_drv);
233
+}
234
+
235
+block_init(snapshot_access_init);
236
diff --git a/MAINTAINERS b/MAINTAINERS
237
index XXXXXXX..XXXXXXX 100644
238
--- a/MAINTAINERS
239
+++ b/MAINTAINERS
240
@@ -XXX,XX +XXX,XX @@ F: block/reqlist.c
241
F: include/block/reqlist.h
242
F: block/copy-before-write.h
243
F: block/copy-before-write.c
244
+F: block/snapshot-access.c
245
F: include/block/aio_task.h
246
F: block/aio_task.c
247
F: util/qemu-co-shared-resource.c
248
diff --git a/block/meson.build b/block/meson.build
249
index XXXXXXX..XXXXXXX 100644
250
--- a/block/meson.build
251
+++ b/block/meson.build
252
@@ -XXX,XX +XXX,XX @@ block_ss.add(files(
253
'raw-format.c',
254
'reqlist.c',
255
'snapshot.c',
256
+ 'snapshot-access.c',
257
'throttle-groups.c',
258
'throttle.c',
259
'vhdx-endian.c',
260
--
261
2.34.1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

The current scheme of image fleecing looks like this:

[guest]                    [NBD export]
   |                            |
   | root                       | root
   v                            v
[copy-before-write] -----> [temp.qcow2]
   |           target           |
   | file                       | backing
   v                            |
[active disk] <-----------------+

- On guest writes, the copy-before-write filter copies old data from
  the active disk to temp.qcow2. The fleecing client (NBD export) then
  reads changed regions from the temp.qcow2 image and unchanged regions
  from the active disk through the backing link.

This patch makes possible a new image fleecing scheme:

[guest]                   [NBD export]
   |                           |
   | root                      | root
   v                 file      v
[copy-before-write]<------[snapshot-access]
   |                           |
   | file                      | target
   v                           v
[active-disk]              [temp.img]

- copy-before-write does CBW operations and also provides the
  snapshot-access API. The API may be accessed through the
  snapshot-access driver.

Benefits of the new scheme:

1. Access control: if the remote client tries to read data that is not
   covered by the original dirty bitmap used at copy-before-write open,
   the client gets -EACCES.

2. Discard support: if the remote client does DISCARD, then in addition
   to discarding data in temp.img, this informs the block-copy process
   not to copy these clusters. The next read from a discarded area will
   return -EACCES. This is significant: when the fleecing user has read
   data that was not yet copied to temp.img, we can avoid copying it on
   a further guest write.

3. Synchronisation between client reads and block-copy writes is more
   efficient. In the old scheme we just rely on the BDRV_REQ_SERIALISING
   flag used for writes to temp.qcow2. The new scheme is less blocking:
   - fleecing reads are never blocked: if the data region is untouched
     or in flight, we just read from active-disk, otherwise we read
     from temp.img
   - writes to temp.img are not blocked by fleecing reads
   - still, guest writes are of course blocked by in-flight fleecing
     reads that currently read from active-disk - that is the minimum
     necessary blocking

4. The temporary image may be of any format, as we don't rely on the
   backing feature.

5. Permission relations are simplified. With the old scheme we have to
   share the write permission on the target child of copy-before-write,
   otherwise the backing link conflicts with the copy-before-write file
   child's write permissions. With the new scheme we don't have a
   backing link, and the copy-before-write node may have unshared
   access to the temporary node. (Not realized in this commit, will be
   done in the future.)

6. Having control over fleecing reads, we'll be able to implement
   alternative behavior for failed copy-before-write operations.
   Currently we just fail the guest request (that's the historical
   behavior of backup). But in some scenarios that's bad behavior:
   better to mark the backup as failed but not break the guest request.
   With the new scheme we can simply unset some bits in a bitmap on CBW
   failure, and further fleecing reads will return -EACCES, or
   something like this. (Not implemented in this commit, will be done
   in the future.) An additional application of this is implementing a
   timeout for CBW operations.

Iotest 257 output is updated, as two more bitmaps now live in the
copy-before-write filter.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20220303194349.2304213-13-vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 block/copy-before-write.c  | 212 ++++++++++++++++++++++++++++++++++-
 tests/qemu-iotests/257.out | 224 +++++++++++++++++++++++++++++++++++++
 2 files changed, 435 insertions(+), 1 deletion(-)
diff --git a/blockjob.c b/blockjob.c
91
92
diff --git a/block/copy-before-write.c b/block/copy-before-write.c
14
index XXXXXXX..XXXXXXX 100644
93
index XXXXXXX..XXXXXXX 100644
15
--- a/blockjob.c
94
--- a/block/copy-before-write.c
16
+++ b/blockjob.c
95
+++ b/block/copy-before-write.c
17
@@ -XXX,XX +XXX,XX @@ BlockJob *block_job_get(const char *id)
96
@@ -XXX,XX +XXX,XX @@
18
return NULL;
97
#include "block/block-copy.h"
98
99
#include "block/copy-before-write.h"
100
+#include "block/reqlist.h"
101
102
#include "qapi/qapi-visit-block-core.h"
103
104
typedef struct BDRVCopyBeforeWriteState {
105
BlockCopyState *bcs;
106
BdrvChild *target;
107
+
108
+ /*
109
+ * @lock: protects access to @access_bitmap, @done_bitmap and
110
+ * @frozen_read_reqs
111
+ */
112
+ CoMutex lock;
113
+
114
+ /*
115
+ * @access_bitmap: represents areas allowed for reading by fleecing user.
116
+ * Reading from non-dirty areas leads to -EACCES.
117
+ */
118
+ BdrvDirtyBitmap *access_bitmap;
119
+
120
+ /*
121
+ * @done_bitmap: represents areas that was successfully copied to @target by
122
+ * copy-before-write operations.
123
+ */
124
+ BdrvDirtyBitmap *done_bitmap;
125
+
126
+ /*
127
+ * @frozen_read_reqs: current read requests for fleecing user in bs->file
128
+ * node. These areas must not be rewritten by guest.
129
+ */
130
+ BlockReqList frozen_read_reqs;
131
} BDRVCopyBeforeWriteState;
132
133
static coroutine_fn int cbw_co_preadv(
134
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int cbw_co_preadv(
135
return bdrv_co_preadv(bs->file, offset, bytes, qiov, flags);
19
}
136
}
20
137
21
+BlockJobTxn *block_job_txn_new(void)
138
+/*
139
+ * Do copy-before-write operation.
140
+ *
141
+ * On failure guest request must be failed too.
142
+ *
143
+ * On success, we also wait for all in-flight fleecing read requests in source
144
+ * node, and it's guaranteed that after cbw_do_copy_before_write() successful
145
+ * return there are no such requests and they will never appear.
146
+ */
147
static coroutine_fn int cbw_do_copy_before_write(BlockDriverState *bs,
148
uint64_t offset, uint64_t bytes, BdrvRequestFlags flags)
149
{
150
BDRVCopyBeforeWriteState *s = bs->opaque;
151
+ int ret;
152
uint64_t off, end;
153
int64_t cluster_size = block_copy_cluster_size(s->bcs);
154
155
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int cbw_do_copy_before_write(BlockDriverState *bs,
156
off = QEMU_ALIGN_DOWN(offset, cluster_size);
157
end = QEMU_ALIGN_UP(offset + bytes, cluster_size);
158
159
- return block_copy(s->bcs, off, end - off, true);
160
+ ret = block_copy(s->bcs, off, end - off, true);
161
+ if (ret < 0) {
162
+ return ret;
163
+ }
164
+
165
+ WITH_QEMU_LOCK_GUARD(&s->lock) {
166
+ bdrv_set_dirty_bitmap(s->done_bitmap, off, end - off);
167
+ reqlist_wait_all(&s->frozen_read_reqs, off, end - off, &s->lock);
168
+ }
169
+
170
+ return 0;
171
}
172
173
static int coroutine_fn cbw_co_pdiscard(BlockDriverState *bs,
174
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn cbw_co_flush(BlockDriverState *bs)
175
return bdrv_co_flush(bs->file->bs);
176
}
177
178
+/*
179
+ * If @offset not accessible - return NULL.
180
+ *
181
+ * Otherwise, set @pnum to some bytes that accessible from @file (@file is set
182
+ * to bs->file or to s->target). Return newly allocated BlockReq object that
183
+ * should be than passed to cbw_snapshot_read_unlock().
184
+ *
185
+ * It's guaranteed that guest writes will not interact in the region until
186
+ * cbw_snapshot_read_unlock() called.
187
+ */
188
+static BlockReq *cbw_snapshot_read_lock(BlockDriverState *bs,
189
+ int64_t offset, int64_t bytes,
190
+ int64_t *pnum, BdrvChild **file)
22
+{
191
+{
23
+ BlockJobTxn *txn = g_new0(BlockJobTxn, 1);
192
+ BDRVCopyBeforeWriteState *s = bs->opaque;
24
+ QLIST_INIT(&txn->jobs);
193
+ BlockReq *req = g_new(BlockReq, 1);
25
+ txn->refcnt = 1;
194
+ bool done;
26
+ return txn;
195
+
196
+ QEMU_LOCK_GUARD(&s->lock);
197
+
198
+ if (bdrv_dirty_bitmap_next_zero(s->access_bitmap, offset, bytes) != -1) {
199
+ g_free(req);
200
+ return NULL;
201
+ }
202
+
203
+ done = bdrv_dirty_bitmap_status(s->done_bitmap, offset, bytes, pnum);
204
+ if (done) {
205
+ /*
206
+ * Special invalid BlockReq, that is handled in
207
+ * cbw_snapshot_read_unlock(). We don't need to lock something to read
208
+ * from s->target.
209
+ */
210
+ *req = (BlockReq) {.offset = -1, .bytes = -1};
211
+ *file = s->target;
212
+ } else {
213
+ reqlist_init_req(&s->frozen_read_reqs, req, offset, bytes);
214
+ *file = bs->file;
215
+ }
216
+
217
+ return req;
27
+}
218
+}
28
+
219
+
29
+static void block_job_txn_ref(BlockJobTxn *txn)
220
+static void cbw_snapshot_read_unlock(BlockDriverState *bs, BlockReq *req)
30
+{
221
+{
31
+ txn->refcnt++;
222
+ BDRVCopyBeforeWriteState *s = bs->opaque;
223
+
224
+ if (req->offset == -1 && req->bytes == -1) {
225
+ g_free(req);
226
+ return;
227
+ }
228
+
229
+ QEMU_LOCK_GUARD(&s->lock);
230
+
231
+ reqlist_remove_req(req);
232
+ g_free(req);
32
+}
233
+}
33
+
234
+
34
+void block_job_txn_unref(BlockJobTxn *txn)
235
+static coroutine_fn int
236
+cbw_co_preadv_snapshot(BlockDriverState *bs, int64_t offset, int64_t bytes,
237
+ QEMUIOVector *qiov, size_t qiov_offset)
35
+{
238
+{
36
+ if (txn && --txn->refcnt == 0) {
239
+ BlockReq *req;
37
+ g_free(txn);
240
+ BdrvChild *file;
38
+ }
241
+ int ret;
242
+
243
+ /* TODO: upgrade to async loop using AioTask */
244
+ while (bytes) {
245
+ int64_t cur_bytes;
246
+
247
+ req = cbw_snapshot_read_lock(bs, offset, bytes, &cur_bytes, &file);
248
+ if (!req) {
249
+ return -EACCES;
250
+ }
251
+
252
+ ret = bdrv_co_preadv_part(file, offset, cur_bytes,
253
+ qiov, qiov_offset, 0);
254
+ cbw_snapshot_read_unlock(bs, req);
255
+ if (ret < 0) {
256
+ return ret;
257
+ }
258
+
259
+ bytes -= cur_bytes;
260
+ offset += cur_bytes;
261
+ qiov_offset += cur_bytes;
262
+ }
263
+
264
+ return 0;
39
+}
265
+}
40
+
266
+
41
+void block_job_txn_add_job(BlockJobTxn *txn, BlockJob *job)
267
+static int coroutine_fn
268
+cbw_co_snapshot_block_status(BlockDriverState *bs,
269
+ bool want_zero, int64_t offset, int64_t bytes,
270
+ int64_t *pnum, int64_t *map,
271
+ BlockDriverState **file)
42
+{
272
+{
43
+ if (!txn) {
273
+ BDRVCopyBeforeWriteState *s = bs->opaque;
44
+ return;
274
+ BlockReq *req;
45
+ }
46
+
47
+ assert(!job->txn);
48
+ job->txn = txn;
49
+
50
+ QLIST_INSERT_HEAD(&txn->jobs, job, txn_list);
51
+ block_job_txn_ref(txn);
52
+}
53
+
54
static void block_job_pause(BlockJob *job)
55
{
56
job->pause_count++;
57
@@ -XXX,XX +XXX,XX @@ static void block_job_cancel_async(BlockJob *job)
58
job->cancelled = true;
59
}
60
61
+static int block_job_finish_sync(BlockJob *job,
62
+ void (*finish)(BlockJob *, Error **errp),
63
+ Error **errp)
64
+{
65
+ Error *local_err = NULL;
66
+ int ret;
275
+ int ret;
67
+
276
+ int64_t cur_bytes;
68
+ assert(blk_bs(job->blk)->job == job);
277
+ BdrvChild *child;
69
+
278
+
70
+ block_job_ref(job);
279
+ req = cbw_snapshot_read_lock(bs, offset, bytes, &cur_bytes, &child);
71
+
280
+ if (!req) {
72
+ finish(job, &local_err);
281
+ return -EACCES;
73
+ if (local_err) {
282
+ }
74
+ error_propagate(errp, local_err);
283
+
75
+ block_job_unref(job);
284
+ ret = bdrv_block_status(child->bs, offset, cur_bytes, pnum, map, file);
76
+ return -EBUSY;
285
+ if (child == s->target) {
77
+ }
286
+ /*
78
+ /* block_job_drain calls block_job_enter, and it should be enough to
287
+ * We refer to s->target only for areas that we've written to it.
79
+ * induce progress until the job completes or moves to the main thread.
288
+ * And we can not report unallocated blocks in s->target: this will
80
+ */
289
+ * break generic block-status-above logic, that will go to
81
+ while (!job->deferred_to_main_loop && !job->completed) {
290
+ * copy-before-write filtered child in this case.
82
+ block_job_drain(job);
291
+ */
83
+ }
292
+ assert(ret & BDRV_BLOCK_ALLOCATED);
84
+ while (!job->completed) {
293
+ }
85
+ aio_poll(qemu_get_aio_context(), true);
294
+
86
+ }
295
+ cbw_snapshot_read_unlock(bs, req);
87
+ ret = (job->cancelled && job->ret == 0) ? -ECANCELED : job->ret;
296
+
88
+ block_job_unref(job);
89
+ return ret;
297
+ return ret;
90
+}
298
+}
91
+
299
+
92
static void block_job_completed_txn_abort(BlockJob *job)
300
+static int coroutine_fn cbw_co_pdiscard_snapshot(BlockDriverState *bs,
93
{
301
+ int64_t offset, int64_t bytes)
94
AioContext *ctx;
302
+{
95
@@ -XXX,XX +XXX,XX @@ void block_job_cancel(BlockJob *job)
303
+ BDRVCopyBeforeWriteState *s = bs->opaque;
304
+
305
+ WITH_QEMU_LOCK_GUARD(&s->lock) {
306
+ bdrv_reset_dirty_bitmap(s->access_bitmap, offset, bytes);
307
+ }
308
+
309
+ block_copy_reset(s->bcs, offset, bytes);
310
+
311
+ return bdrv_co_pdiscard(s->target, offset, bytes);
312
+}
313
+
314
static void cbw_refresh_filename(BlockDriverState *bs)
315
{
316
pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
317
@@ -XXX,XX +XXX,XX @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
318
{
319
BDRVCopyBeforeWriteState *s = bs->opaque;
320
BdrvDirtyBitmap *bitmap = NULL;
321
+ int64_t cluster_size;
322
323
bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
324
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
325
@@ -XXX,XX +XXX,XX @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
326
return -EINVAL;
96
}
327
}
328
329
+ cluster_size = block_copy_cluster_size(s->bcs);
330
+
331
+ s->done_bitmap = bdrv_create_dirty_bitmap(bs, cluster_size, NULL, errp);
332
+ if (!s->done_bitmap) {
333
+ return -EINVAL;
334
+ }
335
+ bdrv_disable_dirty_bitmap(s->done_bitmap);
336
+
337
+ /* s->access_bitmap starts equal to bcs bitmap */
338
+ s->access_bitmap = bdrv_create_dirty_bitmap(bs, cluster_size, NULL, errp);
339
+ if (!s->access_bitmap) {
340
+ return -EINVAL;
341
+ }
342
+ bdrv_disable_dirty_bitmap(s->access_bitmap);
343
+ bdrv_dirty_bitmap_merge_internal(s->access_bitmap,
344
+ block_copy_dirty_bitmap(s->bcs), NULL,
345
+ true);
346
+
347
+ qemu_co_mutex_init(&s->lock);
348
+ QLIST_INIT(&s->frozen_read_reqs);
349
+
350
return 0;
97
}
351
}
98
352
99
-static int block_job_finish_sync(BlockJob *job,
353
@@ -XXX,XX +XXX,XX @@ static void cbw_close(BlockDriverState *bs)
100
- void (*finish)(BlockJob *, Error **errp),
354
{
101
- Error **errp)
355
BDRVCopyBeforeWriteState *s = bs->opaque;
102
-{
356
103
- Error *local_err = NULL;
357
+ bdrv_release_dirty_bitmap(s->access_bitmap);
104
- int ret;
358
+ bdrv_release_dirty_bitmap(s->done_bitmap);
105
-
359
+
106
- assert(blk_bs(job->blk)->job == job);
360
block_copy_state_free(s->bcs);
107
-
361
s->bcs = NULL;
108
- block_job_ref(job);
109
-
110
- finish(job, &local_err);
111
- if (local_err) {
112
- error_propagate(errp, local_err);
113
- block_job_unref(job);
114
- return -EBUSY;
115
- }
116
- /* block_job_drain calls block_job_enter, and it should be enough to
117
- * induce progress until the job completes or moves to the main thread.
118
- */
119
- while (!job->deferred_to_main_loop && !job->completed) {
120
- block_job_drain(job);
121
- }
122
- while (!job->completed) {
123
- aio_poll(qemu_get_aio_context(), true);
124
- }
125
- ret = (job->cancelled && job->ret == 0) ? -ECANCELED : job->ret;
126
- block_job_unref(job);
127
- return ret;
128
-}
129
-
130
/* A wrapper around block_job_cancel() taking an Error ** parameter so it may be
131
* used with block_job_finish_sync() without the need for (rather nasty)
132
* function pointer casts there. */
133
@@ -XXX,XX +XXX,XX @@ void block_job_defer_to_main_loop(BlockJob *job,
134
aio_bh_schedule_oneshot(qemu_get_aio_context(),
135
block_job_defer_to_main_loop_bh, data);
136
}
362
}
137
-
363
@@ -XXX,XX +XXX,XX @@ BlockDriver bdrv_cbw_filter = {
138
-BlockJobTxn *block_job_txn_new(void)
364
.bdrv_co_pdiscard = cbw_co_pdiscard,
139
-{
365
.bdrv_co_flush = cbw_co_flush,
140
- BlockJobTxn *txn = g_new0(BlockJobTxn, 1);
366
141
- QLIST_INIT(&txn->jobs);
367
+ .bdrv_co_preadv_snapshot = cbw_co_preadv_snapshot,
142
- txn->refcnt = 1;
368
+ .bdrv_co_pdiscard_snapshot = cbw_co_pdiscard_snapshot,
143
- return txn;
369
+ .bdrv_co_snapshot_block_status = cbw_co_snapshot_block_status,
144
-}
370
+
145
-
371
.bdrv_refresh_filename = cbw_refresh_filename,
146
-static void block_job_txn_ref(BlockJobTxn *txn)
372
147
-{
373
.bdrv_child_perm = cbw_child_perm,
148
- txn->refcnt++;
374
diff --git a/tests/qemu-iotests/257.out b/tests/qemu-iotests/257.out
149
-}
375
index XXXXXXX..XXXXXXX 100644
150
-
376
--- a/tests/qemu-iotests/257.out
151
-void block_job_txn_unref(BlockJobTxn *txn)
377
+++ b/tests/qemu-iotests/257.out
152
-{
378
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
153
- if (txn && --txn->refcnt == 0) {
379
{"return": ""}
154
- g_free(txn);
380
{
155
- }
381
"bitmaps": {
156
-}
382
+ "backup-top": [
157
-
383
+ {
158
-void block_job_txn_add_job(BlockJobTxn *txn, BlockJob *job)
384
+ "busy": false,
159
-{
385
+ "count": 67108864,
160
- if (!txn) {
386
+ "granularity": 65536,
161
- return;
387
+ "persistent": false,
162
- }
388
+ "recording": false
163
-
389
+ },
164
- assert(!job->txn);
390
+ {
165
- job->txn = txn;
391
+ "busy": false,
166
-
392
+ "count": 458752,
167
- QLIST_INSERT_HEAD(&txn->jobs, job, txn_list);
393
+ "granularity": 65536,
168
- block_job_txn_ref(txn);
394
+ "persistent": false,
169
-}
395
+ "recording": false
396
+ }
397
+ ],
398
"drive0": [
399
{
400
"busy": false,
401
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
402
{"return": ""}
403
{
404
"bitmaps": {
405
+ "backup-top": [
406
+ {
407
+ "busy": false,
408
+ "count": 67108864,
409
+ "granularity": 65536,
410
+ "persistent": false,
411
+ "recording": false
412
+ },
413
+ {
414
+ "busy": false,
415
+ "count": 458752,
416
+ "granularity": 65536,
417
+ "persistent": false,
418
+ "recording": false
419
+ }
420
+ ],
421
"drive0": [
422
{
423
"busy": false,
424
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
425
{"return": ""}
426
{
427
"bitmaps": {
428
+ "backup-top": [
429
+ {
430
+ "busy": false,
431
+ "count": 67108864,
432
+ "granularity": 65536,
433
+ "persistent": false,
434
+ "recording": false
435
+ },
436
+ {
437
+ "busy": false,
438
+ "count": 458752,
439
+ "granularity": 65536,
440
+ "persistent": false,
441
+ "recording": false
442
+ }
443
+ ],
444
"drive0": [
445
{
446
"busy": false,
447
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
448
{"return": ""}
449
{
450
"bitmaps": {
451
+ "backup-top": [
452
+ {
453
+ "busy": false,
454
+ "count": 67108864,
455
+ "granularity": 65536,
456
+ "persistent": false,
457
+ "recording": false
458
+ },
459
+ {
460
+ "busy": false,
461
+ "count": 458752,
462
+ "granularity": 65536,
463
+ "persistent": false,
464
+ "recording": false
465
+ }
466
+ ],
467
"drive0": [
468
{
469
"busy": false,
470
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
471
{"return": ""}
472
{
473
"bitmaps": {
474
+ "backup-top": [
475
+ {
476
+ "busy": false,
477
+ "count": 67108864,
478
+ "granularity": 65536,
479
+ "persistent": false,
480
+ "recording": false
481
+ },
482
+ {
483
+ "busy": false,
484
+ "count": 458752,
485
+ "granularity": 65536,
486
+ "persistent": false,
487
+ "recording": false
488
+ }
489
+ ],
490
"drive0": [
491
{
492
"busy": false,
493
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
494
{"return": ""}
495
{
496
"bitmaps": {
497
+ "backup-top": [
498
+ {
499
+ "busy": false,
500
+ "count": 67108864,
501
+ "granularity": 65536,
502
+ "persistent": false,
503
+ "recording": false
504
+ },
505
+ {
506
+ "busy": false,
507
+ "count": 458752,
508
+ "granularity": 65536,
509
+ "persistent": false,
510
+ "recording": false
511
+ }
512
+ ],
513
"drive0": [
514
{
515
"busy": false,
516
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
517
{"return": ""}
518
{
519
"bitmaps": {
520
+ "backup-top": [
521
+ {
522
+ "busy": false,
523
+ "count": 67108864,
524
+ "granularity": 65536,
525
+ "persistent": false,
526
+ "recording": false
527
+ },
528
+ {
529
+ "busy": false,
530
+ "count": 458752,
531
+ "granularity": 65536,
532
+ "persistent": false,
533
+ "recording": false
534
+ }
535
+ ],
536
"drive0": [
537
{
538
"busy": false,
539
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
540
{"return": ""}
541
{
542
"bitmaps": {
543
+ "backup-top": [
544
+ {
545
+ "busy": false,
546
+ "count": 67108864,
547
+ "granularity": 65536,
548
+ "persistent": false,
549
+ "recording": false
550
+ },
551
+ {
552
+ "busy": false,
553
+ "count": 458752,
554
+ "granularity": 65536,
555
+ "persistent": false,
556
+ "recording": false
557
+ }
558
+ ],
559
"drive0": [
560
{
561
"busy": false,
562
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
563
{"return": ""}
564
{
565
"bitmaps": {
566
+ "backup-top": [
567
+ {
568
+ "busy": false,
569
+ "count": 67108864,
570
+ "granularity": 65536,
571
+ "persistent": false,
572
+ "recording": false
573
+ },
574
+ {
575
+ "busy": false,
576
+ "count": 458752,
577
+ "granularity": 65536,
578
+ "persistent": false,
579
+ "recording": false
580
+ }
581
+ ],
582
"drive0": [
583
{
584
"busy": false,
585
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
586
{"return": ""}
587
{
588
"bitmaps": {
589
+ "backup-top": [
590
+ {
591
+ "busy": false,
592
+ "count": 67108864,
593
+ "granularity": 65536,
594
+ "persistent": false,
595
+ "recording": false
596
+ },
597
+ {
598
+ "busy": false,
599
+ "count": 458752,
600
+ "granularity": 65536,
601
+ "persistent": false,
602
+ "recording": false
603
+ }
604
+ ],
605
"drive0": [
606
{
607
"busy": false,
608
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
609
{"return": ""}
610
{
611
"bitmaps": {
612
+ "backup-top": [
613
+ {
614
+ "busy": false,
615
+ "count": 67108864,
616
+ "granularity": 65536,
617
+ "persistent": false,
618
+ "recording": false
619
+ },
620
+ {
621
+ "busy": false,
622
+ "count": 458752,
623
+ "granularity": 65536,
624
+ "persistent": false,
625
+ "recording": false
626
+ }
627
+ ],
628
"drive0": [
629
{
630
"busy": false,
631
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
632
{"return": ""}
633
{
634
"bitmaps": {
635
+ "backup-top": [
636
+ {
637
+ "busy": false,
638
+ "count": 67108864,
639
+ "granularity": 65536,
640
+ "persistent": false,
641
+ "recording": false
642
+ },
643
+ {
644
+ "busy": false,
645
+ "count": 458752,
646
+ "granularity": 65536,
647
+ "persistent": false,
648
+ "recording": false
649
+ }
650
+ ],
651
"drive0": [
652
{
653
"busy": false,
654
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
655
{"return": ""}
656
{
657
"bitmaps": {
658
+ "backup-top": [
659
+ {
660
+ "busy": false,
661
+ "count": 67108864,
662
+ "granularity": 65536,
663
+ "persistent": false,
664
+ "recording": false
665
+ },
666
+ {
667
+ "busy": false,
668
+ "count": 458752,
669
+ "granularity": 65536,
670
+ "persistent": false,
671
+ "recording": false
672
+ }
673
+ ],
674
"drive0": [
675
{
676
"busy": false,
677
@@ -XXX,XX +XXX,XX @@ write -P0x67 0x3fe0000 0x20000
678
{"return": ""}
679
{
680
"bitmaps": {
681
+ "backup-top": [
682
+ {
683
+ "busy": false,
684
+ "count": 67108864,
685
+ "granularity": 65536,
686
+ "persistent": false,
687
+ "recording": false
688
+ },
689
+ {
690
+ "busy": false,
691
+ "count": 458752,
692
+ "granularity": 65536,
693
+ "persistent": false,
694
+ "recording": false
695
+ }
696
+ ],
697
"drive0": [
698
{
699
"busy": false,
170
--
700
--
171
2.9.3
701
2.34.1
172
173
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
4
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
5
Message-Id: <20220303194349.2304213-14-vsementsov@virtuozzo.com>
6
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
7
---
8
tests/qemu-iotests/tests/image-fleecing | 64 +++++++++++++-----
9
tests/qemu-iotests/tests/image-fleecing.out | 74 ++++++++++++++++++++-
10
2 files changed, 119 insertions(+), 19 deletions(-)
11
12
diff --git a/tests/qemu-iotests/tests/image-fleecing b/tests/qemu-iotests/tests/image-fleecing
13
index XXXXXXX..XXXXXXX 100755
14
--- a/tests/qemu-iotests/tests/image-fleecing
15
+++ b/tests/qemu-iotests/tests/image-fleecing
16
@@ -XXX,XX +XXX,XX @@ remainder = [('0xd5', '0x108000', '32k'), # Right-end of partial-left [1]
17
('0xdc', '32M', '32k'), # Left-end of partial-right [2]
18
('0xcd', '0x3ff0000', '64k')] # patterns[3]
19
20
-def do_test(use_cbw, base_img_path, fleece_img_path, nbd_sock_path, vm):
21
+def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
22
+ fleece_img_path, nbd_sock_path, vm):
23
log('--- Setting up images ---')
24
log('')
25
26
assert qemu_img('create', '-f', iotests.imgfmt, base_img_path, '64M') == 0
27
- assert qemu_img('create', '-f', 'qcow2', fleece_img_path, '64M') == 0
28
+ if use_snapshot_access_filter:
29
+ assert use_cbw
30
+ assert qemu_img('create', '-f', 'raw', fleece_img_path, '64M') == 0
31
+ else:
32
+ assert qemu_img('create', '-f', 'qcow2', fleece_img_path, '64M') == 0
33
34
for p in patterns:
35
qemu_io('-f', iotests.imgfmt,
36
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, base_img_path, fleece_img_path, nbd_sock_path, vm):
37
log('')
38
39
40
- # create tmp_node backed by src_node
41
- log(vm.qmp('blockdev-add', {
42
- 'driver': 'qcow2',
43
- 'node-name': tmp_node,
44
- 'file': {
45
+ if use_snapshot_access_filter:
46
+ log(vm.qmp('blockdev-add', {
47
+ 'node-name': tmp_node,
48
'driver': 'file',
49
'filename': fleece_img_path,
50
- },
51
- 'backing': src_node,
52
- }))
53
+ }))
54
+ else:
55
+ # create tmp_node backed by src_node
56
+ log(vm.qmp('blockdev-add', {
57
+ 'driver': 'qcow2',
58
+ 'node-name': tmp_node,
59
+ 'file': {
60
+ 'driver': 'file',
61
+ 'filename': fleece_img_path,
62
+ },
63
+ 'backing': src_node,
64
+ }))
65
66
# Establish CBW from source to fleecing node
67
if use_cbw:
68
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, base_img_path, fleece_img_path, nbd_sock_path, vm):
69
}))
70
71
log(vm.qmp('qom-set', path=qom_path, property='drive', value='fl-cbw'))
72
+
73
+ if use_snapshot_access_filter:
74
+ log(vm.qmp('blockdev-add', {
75
+ 'driver': 'snapshot-access',
76
+ 'node-name': 'fl-access',
77
+ 'file': 'fl-cbw',
78
+ }))
79
else:
80
log(vm.qmp('blockdev-backup',
81
job_id='fleecing',
82
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, base_img_path, fleece_img_path, nbd_sock_path, vm):
83
target=tmp_node,
84
sync='none'))
85
86
+ export_node = 'fl-access' if use_snapshot_access_filter else tmp_node
87
+
88
log('')
89
log('--- Setting up NBD Export ---')
90
log('')
91
92
- nbd_uri = 'nbd+unix:///%s?socket=%s' % (tmp_node, nbd_sock_path)
93
+ nbd_uri = 'nbd+unix:///%s?socket=%s' % (export_node, nbd_sock_path)
94
log(vm.qmp('nbd-server-start',
95
{'addr': {'type': 'unix',
96
'data': {'path': nbd_sock_path}}}))
97
98
- log(vm.qmp('nbd-server-add', device=tmp_node))
99
+ log(vm.qmp('nbd-server-add', device=export_node))
100
101
log('')
102
log('--- Sanity Check ---')
103
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, base_img_path, fleece_img_path, nbd_sock_path, vm):
104
log('--- Cleanup ---')
105
log('')
106
107
+ log(vm.qmp('nbd-server-stop'))
108
+
109
if use_cbw:
110
+ if use_snapshot_access_filter:
111
+ log(vm.qmp('blockdev-del', node_name='fl-access'))
112
log(vm.qmp('qom-set', path=qom_path, property='drive', value=src_node))
113
log(vm.qmp('blockdev-del', node_name='fl-cbw'))
114
else:
115
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, base_img_path, fleece_img_path, nbd_sock_path, vm):
116
assert e is not None
117
log(e, filters=[iotests.filter_qmp_event])
118
119
- log(vm.qmp('nbd-server-stop'))
120
log(vm.qmp('blockdev-del', node_name=tmp_node))
121
vm.shutdown()
122
123
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, base_img_path, fleece_img_path, nbd_sock_path, vm):
124
log('Done')
125
126
127
-def test(use_cbw):
128
+def test(use_cbw, use_snapshot_access_filter):
129
with iotests.FilePath('base.img') as base_img_path, \
130
iotests.FilePath('fleece.img') as fleece_img_path, \
131
iotests.FilePath('nbd.sock',
132
base_dir=iotests.sock_dir) as nbd_sock_path, \
133
iotests.VM() as vm:
134
- do_test(use_cbw, base_img_path, fleece_img_path, nbd_sock_path, vm)
135
+ do_test(use_cbw, use_snapshot_access_filter, base_img_path,
136
+ fleece_img_path, nbd_sock_path, vm)
137
138
139
log('=== Test backup(sync=none) based fleecing ===\n')
140
-test(False)
141
+test(False, False)
142
+
143
+log('=== Test cbw-filter based fleecing ===\n')
144
+test(True, False)
145
146
-log('=== Test filter based fleecing ===\n')
147
-test(True)
148
+log('=== Test fleecing-format based fleecing ===\n')
149
+test(True, True)
150
diff --git a/tests/qemu-iotests/tests/image-fleecing.out b/tests/qemu-iotests/tests/image-fleecing.out
151
index XXXXXXX..XXXXXXX 100644
152
--- a/tests/qemu-iotests/tests/image-fleecing.out
153
+++ b/tests/qemu-iotests/tests/image-fleecing.out
154
@@ -XXX,XX +XXX,XX @@ read -P0 0x3fe0000 64k
155
156
--- Cleanup ---
157
158
+{"return": {}}
159
{"return": {}}
160
{"data": {"device": "fleecing", "len": 67108864, "offset": 393216, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
161
{"return": {}}
162
+
163
+--- Confirming writes ---
164
+
165
+read -P0xab 0 64k
166
+read -P0xad 0x00f8000 64k
167
+read -P0x1d 0x2008000 64k
168
+read -P0xea 0x3fe0000 64k
169
+read -P0xd5 0x108000 32k
170
+read -P0xdc 32M 32k
171
+read -P0xcd 0x3ff0000 64k
172
+
173
+Done
174
+=== Test cbw-filter based fleecing ===
175
+
176
+--- Setting up images ---
177
+
178
+Done
179
+
180
+--- Launching VM ---
181
+
182
+Done
183
+
184
+--- Setting up Fleecing Graph ---
185
+
186
+{"return": {}}
187
+{"return": {}}
188
+{"return": {}}
189
+
190
+--- Setting up NBD Export ---
191
+
192
+{"return": {}}
193
+{"return": {}}
194
+
195
+--- Sanity Check ---
196
+
197
+read -P0x5d 0 64k
198
+read -P0xd5 1M 64k
199
+read -P0xdc 32M 64k
200
+read -P0xcd 0x3ff0000 64k
201
+read -P0 0x00f8000 32k
202
+read -P0 0x2010000 32k
203
+read -P0 0x3fe0000 64k
204
+
205
+--- Testing COW ---
206
+
207
+write -P0xab 0 64k
208
+{"return": ""}
209
+write -P0xad 0x00f8000 64k
210
+{"return": ""}
211
+write -P0x1d 0x2008000 64k
212
+{"return": ""}
213
+write -P0xea 0x3fe0000 64k
214
+{"return": ""}
215
+
216
+--- Verifying Data ---
217
+
218
+read -P0x5d 0 64k
219
+read -P0xd5 1M 64k
220
+read -P0xdc 32M 64k
221
+read -P0xcd 0x3ff0000 64k
222
+read -P0 0x00f8000 32k
223
+read -P0 0x2010000 32k
224
+read -P0 0x3fe0000 64k
225
+
226
+--- Cleanup ---
227
+
228
+{"return": {}}
229
+{"return": {}}
230
+{"return": {}}
231
{"return": {}}
232
233
--- Confirming writes ---
234
@@ -XXX,XX +XXX,XX @@ read -P0xdc 32M 32k
235
read -P0xcd 0x3ff0000 64k
236
237
Done
238
-=== Test filter based fleecing ===
239
+=== Test fleecing-format based fleecing ===
240
241
--- Setting up images ---
242
243
@@ -XXX,XX +XXX,XX @@ Done
244
{"return": {}}
245
{"return": {}}
246
{"return": {}}
247
+{"return": {}}
248
249
--- Setting up NBD Export ---
250
251
@@ -XXX,XX +XXX,XX @@ read -P0 0x3fe0000 64k
252
{"return": {}}
253
{"return": {}}
254
{"return": {}}
255
+{"return": {}}
256
257
--- Confirming writes ---
258
259
--
260
2.34.1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Add a helper that returns both status and output, to be used in the
following commit.
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20220303194349.2304213-15-vsementsov@virtuozzo.com>
8
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
9
---
10
tests/qemu-iotests/iotests.py | 3 +++
11
1 file changed, 3 insertions(+)
12
13
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
14
index XXXXXXX..XXXXXXX 100644
15
--- a/tests/qemu-iotests/iotests.py
16
+++ b/tests/qemu-iotests/iotests.py
17
@@ -XXX,XX +XXX,XX @@ def qemu_io(*args):
18
'''Run qemu-io and return the stdout data'''
19
return qemu_tool_pipe_and_status('qemu-io', qemu_io_wrap_args(args))[0]
20
21
+def qemu_io_pipe_and_status(*args):
22
+ return qemu_tool_pipe_and_status('qemu-io', qemu_io_wrap_args(args))
23
+
24
def qemu_io_log(*args):
25
result = qemu_io(*args)
26
log(result, filters=[filter_testfiles, filter_qemu_io])
27
--
28
2.34.1
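
A usage sketch of the new helper (the URI and pattern are illustrative;
the next patch uses it much like this to print qemu-io output only on
failure):

# Hypothetical example: read a pattern back over NBD and dump the
# qemu-io output only if the command failed.
out, ret = qemu_io_pipe_and_status('-r', '-f', 'raw',
                                   '-c', 'read -P0x5d 0 64k',
                                   'nbd+unix:///fl-access?socket=/tmp/nbd.sock')
if ret != 0:
    print(out)
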
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Note that reading zero areas (i.e. areas not dirty in the bitmap)
fails; that's correct.
5
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Message-Id: <20220303194349.2304213-16-vsementsov@virtuozzo.com>
8
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
9
---
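
For context (illustrative only, mirroring what the updated test below
sets up), the bitmap is passed as an option when the copy-before-write
filter is opened; the node and bitmap names are placeholders:

# Sketch: restrict the fleecing snapshot to a dirty bitmap.
vm.qmp('blockdev-add', {
    'driver': 'copy-before-write',
    'node-name': 'fl-cbw',
    'file': 'node0',
    'target': 'tmp',
    'bitmap': {'node': 'node0', 'name': 'bitmap0'},
})
# Fleecing reads through snapshot-access of areas not covered by
# bitmap0 are rejected, which is what the failing reads in the test
# output below demonstrate.
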
10
tests/qemu-iotests/tests/image-fleecing | 38 +++++++---
11
tests/qemu-iotests/tests/image-fleecing.out | 84 +++++++++++++++++++++
12
2 files changed, 113 insertions(+), 9 deletions(-)
13
14
diff --git a/tests/qemu-iotests/tests/image-fleecing b/tests/qemu-iotests/tests/image-fleecing
15
index XXXXXXX..XXXXXXX 100755
16
--- a/tests/qemu-iotests/tests/image-fleecing
17
+++ b/tests/qemu-iotests/tests/image-fleecing
18
@@ -XXX,XX +XXX,XX @@
19
# Creator/Owner: John Snow <jsnow@redhat.com>
20
21
import iotests
22
-from iotests import log, qemu_img, qemu_io, qemu_io_silent
23
+from iotests import log, qemu_img, qemu_io, qemu_io_silent, \
24
+ qemu_io_pipe_and_status
25
26
iotests.script_initialize(
27
- supported_fmts=['qcow2', 'qcow', 'qed', 'vmdk', 'vhdx', 'raw'],
28
+ supported_fmts=['qcow2'],
29
supported_platforms=['linux'],
30
required_fmts=['copy-before-write'],
31
+ unsupported_imgopts=['compat']
32
)
33
34
patterns = [('0x5d', '0', '64k'),
35
@@ -XXX,XX +XXX,XX @@ remainder = [('0xd5', '0x108000', '32k'), # Right-end of partial-left [1]
36
('0xcd', '0x3ff0000', '64k')] # patterns[3]
37
38
def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
39
- fleece_img_path, nbd_sock_path, vm):
40
+ fleece_img_path, nbd_sock_path, vm,
41
+ bitmap=False):
42
log('--- Setting up images ---')
43
log('')
44
45
assert qemu_img('create', '-f', iotests.imgfmt, base_img_path, '64M') == 0
46
+ if bitmap:
47
+ assert qemu_img('bitmap', '--add', base_img_path, 'bitmap0') == 0
48
+
49
if use_snapshot_access_filter:
50
assert use_cbw
51
assert qemu_img('create', '-f', 'raw', fleece_img_path, '64M') == 0
52
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
53
54
# Establish CBW from source to fleecing node
55
if use_cbw:
56
- log(vm.qmp('blockdev-add', {
57
+ fl_cbw = {
58
'driver': 'copy-before-write',
59
'node-name': 'fl-cbw',
60
'file': src_node,
61
'target': tmp_node
62
- }))
63
+ }
64
+
65
+ if bitmap:
66
+ fl_cbw['bitmap'] = {'node': src_node, 'name': 'bitmap0'}
67
+
68
+ log(vm.qmp('blockdev-add', fl_cbw))
69
70
log(vm.qmp('qom-set', path=qom_path, property='drive', value='fl-cbw'))
71
72
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
73
for p in patterns + zeroes:
74
cmd = 'read -P%s %s %s' % p
75
log(cmd)
76
- assert qemu_io_silent('-r', '-f', 'raw', '-c', cmd, nbd_uri) == 0
77
+ out, ret = qemu_io_pipe_and_status('-r', '-f', 'raw', '-c', cmd,
78
+ nbd_uri)
79
+ if ret != 0:
80
+ print(out)
81
82
log('')
83
log('--- Testing COW ---')
84
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
85
for p in patterns + zeroes:
86
cmd = 'read -P%s %s %s' % p
87
log(cmd)
88
- assert qemu_io_silent('-r', '-f', 'raw', '-c', cmd, nbd_uri) == 0
89
+ out, ret = qemu_io_pipe_and_status('-r', '-f', 'raw', '-c', cmd,
90
+ nbd_uri)
91
+ if ret != 0:
92
+ print(out)
93
94
log('')
95
log('--- Cleanup ---')
96
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
97
log('Done')
98
99
100
-def test(use_cbw, use_snapshot_access_filter):
101
+def test(use_cbw, use_snapshot_access_filter, bitmap=False):
102
with iotests.FilePath('base.img') as base_img_path, \
103
iotests.FilePath('fleece.img') as fleece_img_path, \
104
iotests.FilePath('nbd.sock',
105
base_dir=iotests.sock_dir) as nbd_sock_path, \
106
iotests.VM() as vm:
107
do_test(use_cbw, use_snapshot_access_filter, base_img_path,
108
- fleece_img_path, nbd_sock_path, vm)
109
+ fleece_img_path, nbd_sock_path, vm, bitmap=bitmap)
110
111
112
log('=== Test backup(sync=none) based fleecing ===\n')
113
@@ -XXX,XX +XXX,XX @@ test(True, False)
114
115
log('=== Test fleecing-format based fleecing ===\n')
116
test(True, True)
117
+
118
+log('=== Test fleecing-format based fleecing with bitmap ===\n')
119
+test(True, True, bitmap=True)
120
diff --git a/tests/qemu-iotests/tests/image-fleecing.out b/tests/qemu-iotests/tests/image-fleecing.out
121
index XXXXXXX..XXXXXXX 100644
122
--- a/tests/qemu-iotests/tests/image-fleecing.out
123
+++ b/tests/qemu-iotests/tests/image-fleecing.out
124
@@ -XXX,XX +XXX,XX @@ read -P0 0x00f8000 32k
125
read -P0 0x2010000 32k
126
read -P0 0x3fe0000 64k
127
128
+--- Cleanup ---
129
+
130
+{"return": {}}
131
+{"return": {}}
132
+{"return": {}}
133
+{"return": {}}
134
+{"return": {}}
135
+
136
+--- Confirming writes ---
137
+
138
+read -P0xab 0 64k
139
+read -P0xad 0x00f8000 64k
140
+read -P0x1d 0x2008000 64k
141
+read -P0xea 0x3fe0000 64k
142
+read -P0xd5 0x108000 32k
143
+read -P0xdc 32M 32k
144
+read -P0xcd 0x3ff0000 64k
145
+
146
+Done
147
+=== Test fleecing-format based fleecing with bitmap ===
148
+
149
+--- Setting up images ---
150
+
151
+Done
152
+
153
+--- Launching VM ---
154
+
155
+Done
156
+
157
+--- Setting up Fleecing Graph ---
158
+
159
+{"return": {}}
160
+{"return": {}}
161
+{"return": {}}
162
+{"return": {}}
163
+
164
+--- Setting up NBD Export ---
165
+
166
+{"return": {}}
167
+{"return": {}}
168
+
169
+--- Sanity Check ---
170
+
171
+read -P0x5d 0 64k
172
+read -P0xd5 1M 64k
173
+read -P0xdc 32M 64k
174
+read -P0xcd 0x3ff0000 64k
175
+read -P0 0x00f8000 32k
176
+read failed: Invalid argument
177
+
178
+read -P0 0x2010000 32k
179
+read failed: Invalid argument
180
+
181
+read -P0 0x3fe0000 64k
182
+read failed: Invalid argument
183
+
184
+
185
+--- Testing COW ---
186
+
187
+write -P0xab 0 64k
188
+{"return": ""}
189
+write -P0xad 0x00f8000 64k
190
+{"return": ""}
191
+write -P0x1d 0x2008000 64k
192
+{"return": ""}
193
+write -P0xea 0x3fe0000 64k
194
+{"return": ""}
195
+
196
+--- Verifying Data ---
197
+
198
+read -P0x5d 0 64k
199
+read -P0xd5 1M 64k
200
+read -P0xdc 32M 64k
201
+read -P0xcd 0x3ff0000 64k
202
+read -P0 0x00f8000 32k
203
+read failed: Invalid argument
204
+
205
+read -P0 0x2010000 32k
206
+read failed: Invalid argument
207
+
208
+read -P0 0x3fe0000 64k
209
+read failed: Invalid argument
210
+
211
+
212
--- Cleanup ---
213
214
{"return": {}}
215
--
216
2.34.1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
2
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
4
Message-Id: <20220303194349.2304213-17-vsementsov@virtuozzo.com>
5
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
6
---
7
tests/qemu-iotests/tests/image-fleecing | 125 +++++++++++++++-----
8
tests/qemu-iotests/tests/image-fleecing.out | 63 ++++++++++
9
2 files changed, 156 insertions(+), 32 deletions(-)
10
11
diff --git a/tests/qemu-iotests/tests/image-fleecing b/tests/qemu-iotests/tests/image-fleecing
12
index XXXXXXX..XXXXXXX 100755
13
--- a/tests/qemu-iotests/tests/image-fleecing
14
+++ b/tests/qemu-iotests/tests/image-fleecing
15
@@ -XXX,XX +XXX,XX @@ remainder = [('0xd5', '0x108000', '32k'), # Right-end of partial-left [1]
16
('0xdc', '32M', '32k'), # Left-end of partial-right [2]
17
('0xcd', '0x3ff0000', '64k')] # patterns[3]
18
19
-def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
20
- fleece_img_path, nbd_sock_path, vm,
21
+def do_test(vm, use_cbw, use_snapshot_access_filter, base_img_path,
22
+ fleece_img_path, nbd_sock_path=None,
23
+ target_img_path=None,
24
bitmap=False):
25
+ push_backup = target_img_path is not None
26
+ assert (nbd_sock_path is not None) != push_backup
27
+ if push_backup:
28
+ assert use_cbw
29
+
30
log('--- Setting up images ---')
31
log('')
32
33
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
34
else:
35
assert qemu_img('create', '-f', 'qcow2', fleece_img_path, '64M') == 0
36
37
+ if push_backup:
38
+ assert qemu_img('create', '-f', 'qcow2', target_img_path, '64M') == 0
39
+
40
for p in patterns:
41
qemu_io('-f', iotests.imgfmt,
42
'-c', 'write -P%s %s %s' % p, base_img_path)
43
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
44
45
export_node = 'fl-access' if use_snapshot_access_filter else tmp_node
46
47
- log('')
48
- log('--- Setting up NBD Export ---')
49
- log('')
50
+ if push_backup:
51
+ log('')
52
+ log('--- Starting actual backup ---')
53
+ log('')
54
55
- nbd_uri = 'nbd+unix:///%s?socket=%s' % (export_node, nbd_sock_path)
56
- log(vm.qmp('nbd-server-start',
57
- {'addr': {'type': 'unix',
58
- 'data': {'path': nbd_sock_path}}}))
59
+ log(vm.qmp('blockdev-add', **{
60
+ 'driver': iotests.imgfmt,
61
+ 'node-name': 'target',
62
+ 'file': {
63
+ 'driver': 'file',
64
+ 'filename': target_img_path
65
+ }
66
+ }))
67
+ log(vm.qmp('blockdev-backup', device=export_node,
68
+ sync='full', target='target',
69
+ job_id='push-backup', speed=1))
70
+ else:
71
+ log('')
72
+ log('--- Setting up NBD Export ---')
73
+ log('')
74
75
- log(vm.qmp('nbd-server-add', device=export_node))
76
+ nbd_uri = 'nbd+unix:///%s?socket=%s' % (export_node, nbd_sock_path)
77
+ log(vm.qmp('nbd-server-start',
78
+ {'addr': { 'type': 'unix',
79
+ 'data': { 'path': nbd_sock_path } } }))
80
81
- log('')
82
- log('--- Sanity Check ---')
83
- log('')
84
+ log(vm.qmp('nbd-server-add', device=export_node))
85
86
- for p in patterns + zeroes:
87
- cmd = 'read -P%s %s %s' % p
88
- log(cmd)
89
- out, ret = qemu_io_pipe_and_status('-r', '-f', 'raw', '-c', cmd,
90
- nbd_uri)
91
- if ret != 0:
92
- print(out)
93
+ log('')
94
+ log('--- Sanity Check ---')
95
+ log('')
96
+
97
+ for p in patterns + zeroes:
98
+ cmd = 'read -P%s %s %s' % p
99
+ log(cmd)
100
+ out, ret = qemu_io_pipe_and_status('-r', '-f', 'raw', '-c', cmd,
101
+ nbd_uri)
102
+ if ret != 0:
103
+ print(out)
104
105
log('')
106
log('--- Testing COW ---')
107
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
108
log(cmd)
109
log(vm.hmp_qemu_io(qom_path, cmd, qdev=True))
110
111
+ if push_backup:
112
+ # Check that previous operations were done during backup, not after
113
+ # If backup is already finished, it's possible that it was finished
114
+ # even before hmp qemu_io write, and we didn't actually test
115
+ # copy-before-write operation. This should not happen, as we use
116
+ # speed=1. But worth checking.
117
+ result = vm.qmp('query-block-jobs')
118
+ assert len(result['return']) == 1
119
+
120
+ result = vm.qmp('block-job-set-speed', device='push-backup', speed=0)
121
+ assert result == {'return': {}}
122
+
123
+ log(vm.event_wait(name='BLOCK_JOB_COMPLETED',
124
+ match={'data': {'device': 'push-backup'}}),
125
+ filters=[iotests.filter_qmp_event])
126
+ log(vm.qmp('blockdev-del', node_name='target'))
127
+
128
log('')
129
log('--- Verifying Data ---')
130
log('')
131
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
132
for p in patterns + zeroes:
133
cmd = 'read -P%s %s %s' % p
134
log(cmd)
135
- out, ret = qemu_io_pipe_and_status('-r', '-f', 'raw', '-c', cmd,
136
- nbd_uri)
137
+ args = ['-r', '-c', cmd]
138
+ if push_backup:
139
+ args += [target_img_path]
140
+ else:
141
+ args += ['-f', 'raw', nbd_uri]
142
+ out, ret = qemu_io_pipe_and_status(*args)
143
if ret != 0:
144
print(out)
145
146
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
147
log('--- Cleanup ---')
148
log('')
149
150
- log(vm.qmp('nbd-server-stop'))
151
+ if not push_backup:
152
+ log(vm.qmp('nbd-server-stop'))
153
154
if use_cbw:
155
if use_snapshot_access_filter:
156
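Verification and cleanup also differ per mode, as the verification and cleanup hunks above show: the push variant reads the data back straight from the qcow2 target image and never started an NBD server, while the pull variants read through the NBD export and have to stop the server afterwards. A small sketch of that selection, reusing the qemu_io_pipe_and_status() helper from the iotests harness as the test does; the paths and URI stand in for the patch's own variables.

def verify_and_cleanup(vm, push_backup, target_img_path, nbd_uri, cmd):
    # Read one pattern back, either from the target image (push) or
    # through the NBD export of the fleecing node (pull).
    args = ['-r', '-c', cmd]
    if push_backup:
        args += [target_img_path]
    else:
        args += ['-f', 'raw', nbd_uri]
    out, ret = qemu_io_pipe_and_status(*args)
    if ret != 0:
        print(out)

    # Only the pull variants started an NBD server that needs stopping.
    if not push_backup:
        vm.qmp('nbd-server-stop')
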
@@ -XXX,XX +XXX,XX @@ def do_test(use_cbw, use_snapshot_access_filter, base_img_path,
157
log('Done')
158
159
160
-def test(use_cbw, use_snapshot_access_filter, bitmap=False):
161
+def test(use_cbw, use_snapshot_access_filter,
162
+ nbd_sock_path=None, target_img_path=None, bitmap=False):
163
with iotests.FilePath('base.img') as base_img_path, \
164
iotests.FilePath('fleece.img') as fleece_img_path, \
165
- iotests.FilePath('nbd.sock',
166
- base_dir=iotests.sock_dir) as nbd_sock_path, \
167
iotests.VM() as vm:
168
- do_test(use_cbw, use_snapshot_access_filter, base_img_path,
169
- fleece_img_path, nbd_sock_path, vm, bitmap=bitmap)
170
+ do_test(vm, use_cbw, use_snapshot_access_filter, base_img_path,
171
+ fleece_img_path, nbd_sock_path, target_img_path,
172
+ bitmap=bitmap)
173
+
174
+def test_pull(use_cbw, use_snapshot_access_filter, bitmap=False):
175
+ with iotests.FilePath('nbd.sock',
176
+ base_dir=iotests.sock_dir) as nbd_sock_path:
177
+ test(use_cbw, use_snapshot_access_filter, nbd_sock_path, None,
178
+ bitmap=bitmap)
179
+
180
+def test_push():
181
+ with iotests.FilePath('target.img') as target_img_path:
182
+ test(True, True, None, target_img_path)
183
184
185
log('=== Test backup(sync=none) based fleecing ===\n')
186
-test(False, False)
187
+test_pull(False, False)
188
189
log('=== Test cbw-filter based fleecing ===\n')
190
-test(True, False)
191
+test_pull(True, False)
192
193
log('=== Test fleecing-format based fleecing ===\n')
194
-test(True, True)
195
+test_pull(True, True)
196
197
log('=== Test fleecing-format based fleecing with bitmap ===\n')
198
-test(True, True, bitmap=True)
199
+test_pull(True, True, bitmap=True)
200
+
201
+log('=== Test push backup with fleecing ===\n')
202
+test_push()
203
diff --git a/tests/qemu-iotests/tests/image-fleecing.out b/tests/qemu-iotests/tests/image-fleecing.out
204
index XXXXXXX..XXXXXXX 100644
205
--- a/tests/qemu-iotests/tests/image-fleecing.out
206
+++ b/tests/qemu-iotests/tests/image-fleecing.out
207
@@ -XXX,XX +XXX,XX @@ read -P0xdc 32M 32k
208
read -P0xcd 0x3ff0000 64k
209
210
Done
211
+=== Test push backup with fleecing ===
212
+
213
+--- Setting up images ---
214
+
215
+Done
216
+
217
+--- Launching VM ---
218
+
219
+Done
220
+
221
+--- Setting up Fleecing Graph ---
222
+
223
+{"return": {}}
224
+{"return": {}}
225
+{"return": {}}
226
+{"return": {}}
227
+
228
+--- Starting actual backup ---
229
+
230
+{"return": {}}
231
+{"return": {}}
232
+
233
+--- Testing COW ---
234
+
235
+write -P0xab 0 64k
236
+{"return": ""}
237
+write -P0xad 0x00f8000 64k
238
+{"return": ""}
239
+write -P0x1d 0x2008000 64k
240
+{"return": ""}
241
+write -P0xea 0x3fe0000 64k
242
+{"return": ""}
243
+{"data": {"device": "push-backup", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
244
+{"return": {}}
245
+
246
+--- Verifying Data ---
247
+
248
+read -P0x5d 0 64k
249
+read -P0xd5 1M 64k
250
+read -P0xdc 32M 64k
251
+read -P0xcd 0x3ff0000 64k
252
+read -P0 0x00f8000 32k
253
+read -P0 0x2010000 32k
254
+read -P0 0x3fe0000 64k
255
+
256
+--- Cleanup ---
257
+
258
+{"return": {}}
259
+{"return": {}}
260
+{"return": {}}
261
+{"return": {}}
262
+
263
+--- Confirming writes ---
264
+
265
+read -P0xab 0 64k
266
+read -P0xad 0x00f8000 64k
267
+read -P0x1d 0x2008000 64k
268
+read -P0xea 0x3fe0000 64k
269
+read -P0xd5 0x108000 32k
270
+read -P0xdc 32M 32k
271
+read -P0xcd 0x3ff0000 64k
272
+
273
+Done
274
--
275
2.34.1