The following changes since commit 31ee895047bdcf7387e3570cbd2a473c6f744b08:

  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2021-01-25 15:56:13 +0000)

are available in the Git repository at:

  https://github.com/XanClic/qemu.git tags/pull-block-2021-01-26

for you to fetch changes up to bb24cdc5efee580e81f71c5ff0fd980f2cc179d0:

  iotests/178: Pass value to invalid option (2021-01-26 14:36:37 +0100)

----------------------------------------------------------------
Block patches:
- Make backup block jobs use asynchronous requests with the block-copy
  module
- Use COR filter node for stream block jobs
- Make coroutine-sigaltstack’s qemu_coroutine_new() function thread-safe
- Report error string when file locking fails with an unexpected errno
- iotest fixes, additions, and some refactoring
----------------------------------------------------------------

The following changes since commit 56f9e46b841c7be478ca038d8d4085d776ab4b0d:

  Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2017-02-20' into staging (2017-02-20 17:42:47 +0000)

are available in the git repository at:

  git://github.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to a7b91d35bab97a2d3e779d0c64c9b837b52a6cf7:

  coroutine-lock: make CoRwlock thread-safe and fair (2017-02-21 11:39:40 +0000)

----------------------------------------------------------------
Pull request

v2:
 * Rebased to resolve scsi conflicts
----------------------------------------------------------------
23
Alberto Garcia (1):
24
iotests: Add test for the regression fixed in c8bf9a9169
25
20
26
Andrey Shinkevich (10):
21
Paolo Bonzini (24):
27
copy-on-read: support preadv/pwritev_part functions
22
block: move AioContext, QEMUTimer, main-loop to libqemuutil
28
block: add API function to insert a node
23
aio: introduce aio_co_schedule and aio_co_wake
29
copy-on-read: add filter drop function
24
block-backend: allow blk_prw from coroutine context
30
qapi: add filter-node-name to block-stream
25
test-thread-pool: use generic AioContext infrastructure
31
qapi: copy-on-read filter: add 'bottom' option
26
io: add methods to set I/O handlers on AioContext
32
iotests: add #310 to test bottom node in COR driver
27
io: make qio_channel_yield aware of AioContexts
33
block: include supported_read_flags into BDS structure
28
nbd: convert to use qio_channel_yield
34
copy-on-read: skip non-guest reads if no copy needed
29
coroutine-lock: reschedule coroutine on the AioContext it was running
35
stream: rework backing-file changing
30
on
36
block: apply COR-filter to block-stream jobs
31
blkdebug: reschedule coroutine on the AioContext it is running on
32
qed: introduce qed_aio_start_io and qed_aio_next_io_cb
33
aio: push aio_context_acquire/release down to dispatching
34
block: explicitly acquire aiocontext in timers that need it
35
block: explicitly acquire aiocontext in callbacks that need it
36
block: explicitly acquire aiocontext in bottom halves that need it
37
block: explicitly acquire aiocontext in aio callbacks that need it
38
aio-posix: partially inline aio_dispatch into aio_poll
39
async: remove unnecessary inc/dec pairs
40
block: document fields protected by AioContext lock
41
coroutine-lock: make CoMutex thread-safe
42
coroutine-lock: add limited spinning to CoMutex
43
test-aio-multithread: add performance comparison with thread-based
44
mutexes
45
coroutine-lock: place CoMutex before CoQueue in header
46
coroutine-lock: add mutex argument to CoQueue APIs
47
coroutine-lock: make CoRwlock thread-safe and fair
37
48
38
David Edmondson (1):
49
Makefile.objs | 4 -
39
block: report errno when flock fcntl fails
50
stubs/Makefile.objs | 1 +
40
51
tests/Makefile.include | 19 +-
41
Max Reitz (14):
52
util/Makefile.objs | 6 +-
42
iotests.py: Assume a couple of variables as given
53
block/nbd-client.h | 2 +-
43
iotests/297: Rewrite in Python and extend reach
54
block/qed.h | 3 +
44
iotests: Move try_remove to iotests.py
55
include/block/aio.h | 38 ++-
45
iotests/129: Remove test images in tearDown()
56
include/block/block_int.h | 64 +++--
46
iotests/129: Do not check @busy
57
include/io/channel.h | 72 +++++-
47
iotests/129: Use throttle node
58
include/qemu/coroutine.h | 84 ++++---
48
iotests/129: Actually test a commit job
59
include/qemu/coroutine_int.h | 11 +-
49
iotests/129: Limit mirror job's buffer size
60
include/sysemu/block-backend.h | 14 +-
50
iotests/129: Clean up pylint and mypy complaints
61
tests/iothread.h | 25 ++
51
iotests/300: Clean up pylint and mypy complaints
62
block/backup.c | 2 +-
52
coroutine-sigaltstack: Add SIGUSR2 mutex
63
block/blkdebug.c | 9 +-
53
iotests/129: Limit backup's max-chunk/max-workers
64
block/blkreplay.c | 2 +-
54
iotests/118: Drop 'change' test
65
block/block-backend.c | 13 +-
55
iotests/178: Pass value to invalid option
66
block/curl.c | 44 +++-
56
67
block/gluster.c | 9 +-
57
Vladimir Sementsov-Ogievskiy (27):
68
block/io.c | 42 +---
58
iotests: fix _check_o_direct
69
block/iscsi.c | 15 +-
59
qapi: block-stream: add "bottom" argument
70
block/linux-aio.c | 10 +-
60
iotests: 30: prepare to COR filter insertion by stream job
71
block/mirror.c | 12 +-
61
block/stream: add s->target_bs
72
block/nbd-client.c | 119 +++++----
62
qapi: backup: add perf.use-copy-range parameter
73
block/nfs.c | 9 +-
63
block/block-copy: More explicit call_state
74
block/qcow2-cluster.c | 4 +-
64
block/block-copy: implement block_copy_async
75
block/qed-cluster.c | 2 +
65
block/block-copy: add max_chunk and max_workers parameters
76
block/qed-table.c | 12 +-
66
block/block-copy: add list of all call-states
77
block/qed.c | 58 +++--
67
block/block-copy: add ratelimit to block-copy
78
block/sheepdog.c | 31 +--
68
block/block-copy: add block_copy_cancel
79
block/ssh.c | 29 +--
69
blockjob: add set_speed to BlockJobDriver
80
block/throttle-groups.c | 4 +-
70
job: call job_enter from job_pause
81
block/win32-aio.c | 9 +-
71
qapi: backup: add max-chunk and max-workers to x-perf struct
82
dma-helpers.c | 2 +
72
iotests: 56: prepare for backup over block-copy
83
hw/9pfs/9p.c | 2 +-
73
iotests: 185: prepare for backup over block-copy
84
hw/block/virtio-blk.c | 19 +-
74
iotests: 219: prepare for backup over block-copy
85
hw/scsi/scsi-bus.c | 2 +
75
iotests: 257: prepare for backup over block-copy
86
hw/scsi/scsi-disk.c | 15 ++
76
block/block-copy: make progress_bytes_callback optional
87
hw/scsi/scsi-generic.c | 20 +-
77
block/backup: drop extra gotos from backup_run()
88
hw/scsi/virtio-scsi.c | 7 +
78
backup: move to block-copy
89
io/channel-command.c | 13 +
79
qapi: backup: disable copy_range by default
90
io/channel-file.c | 11 +
80
block/block-copy: drop unused block_copy_set_progress_callback()
91
io/channel-socket.c | 16 +-
81
block/block-copy: drop unused argument of block_copy()
92
io/channel-tls.c | 12 +
82
simplebench/bench_block_job: use correct shebang line with python3
93
io/channel-watch.c | 6 +
83
simplebench: bench_block_job: add cmd_options argument
94
io/channel.c | 97 ++++++--
84
simplebench: add bench-backup.py
95
nbd/client.c | 2 +-
85
96
nbd/common.c | 9 +-
86
qapi/block-core.json | 66 +++++-
97
nbd/server.c | 94 +++-----
87
block/backup-top.h | 1 +
98
stubs/linux-aio.c | 32 +++
88
block/copy-on-read.h | 32 +++
99
stubs/set-fd-handler.c | 11 -
89
include/block/block-copy.h | 61 ++++-
100
tests/iothread.c | 91 +++++++
90
include/block/block.h | 10 +-
101
tests/test-aio-multithread.c | 463 ++++++++++++++++++++++++++++++++++++
91
include/block/block_int.h | 15 +-
102
tests/test-thread-pool.c | 12 +-
92
include/block/blockjob_int.h | 2 +
103
aio-posix.c => util/aio-posix.c | 62 ++---
93
block.c | 25 ++
104
aio-win32.c => util/aio-win32.c | 30 +--
94
block/backup-top.c | 6 +-
105
util/aiocb.c | 55 +++++
95
block/backup.c | 233 ++++++++++++-------
106
async.c => util/async.c | 84 ++++++-
96
block/block-copy.c | 227 +++++++++++++++---
107
iohandler.c => util/iohandler.c | 0
97
block/copy-on-read.c | 184 ++++++++++++++-
108
main-loop.c => util/main-loop.c | 0
98
block/file-posix.c | 38 ++-
109
util/qemu-coroutine-lock.c | 254 ++++++++++++++++++--
99
block/io.c | 10 +-
110
util/qemu-coroutine-sleep.c | 2 +-
100
block/monitor/block-hmp-cmds.c | 7 +-
111
util/qemu-coroutine.c | 8 +
101
block/replication.c | 2 +
112
qemu-timer.c => util/qemu-timer.c | 0
102
block/stream.c | 185 +++++++++------
113
thread-pool.c => util/thread-pool.c | 8 +-
103
blockdev.c | 83 +++++--
114
trace-events | 11 -
104
blockjob.c | 6 +
115
util/trace-events | 17 +-
105
job.c | 3 +
116
67 files changed, 1712 insertions(+), 533 deletions(-)
106
util/coroutine-sigaltstack.c | 9 +
117
create mode 100644 tests/iothread.h
107
scripts/simplebench/bench-backup.py | 167 ++++++++++++++
118
create mode 100644 stubs/linux-aio.c
108
scripts/simplebench/bench-example.py | 2 +-
119
create mode 100644 tests/iothread.c
109
scripts/simplebench/bench_block_job.py | 13 +-
120
create mode 100644 tests/test-aio-multithread.c
110
tests/qemu-iotests/030 | 12 +-
121
rename aio-posix.c => util/aio-posix.c (94%)
111
tests/qemu-iotests/056 | 9 +-
122
rename aio-win32.c => util/aio-win32.c (95%)
112
tests/qemu-iotests/109.out | 24 ++
123
create mode 100644 util/aiocb.c
113
tests/qemu-iotests/118 | 20 +-
124
rename async.c => util/async.c (82%)
114
tests/qemu-iotests/118.out | 4 +-
125
rename iohandler.c => util/iohandler.c (100%)
115
tests/qemu-iotests/124 | 8 +-
126
rename main-loop.c => util/main-loop.c (100%)
116
tests/qemu-iotests/129 | 79 ++++---
127
rename qemu-timer.c => util/qemu-timer.c (100%)
117
tests/qemu-iotests/141.out | 2 +-
128
rename thread-pool.c => util/thread-pool.c (97%)
118
tests/qemu-iotests/178 | 2 +-
119
tests/qemu-iotests/178.out.qcow2 | 2 +-
120
tests/qemu-iotests/178.out.raw | 2 +-
121
tests/qemu-iotests/185 | 3 +-
122
tests/qemu-iotests/185.out | 3 +-
123
tests/qemu-iotests/219 | 13 +-
124
tests/qemu-iotests/245 | 20 +-
125
tests/qemu-iotests/257 | 1 +
126
tests/qemu-iotests/257.out | 306 ++++++++++++-------------
127
tests/qemu-iotests/297 | 112 +++++++--
128
tests/qemu-iotests/297.out | 5 +-
129
tests/qemu-iotests/300 | 19 +-
130
tests/qemu-iotests/310 | 117 ++++++++++
131
tests/qemu-iotests/310.out | 15 ++
132
tests/qemu-iotests/313 | 104 +++++++++
133
tests/qemu-iotests/313.out | 29 +++
134
tests/qemu-iotests/common.rc | 7 +-
135
tests/qemu-iotests/group | 2 +
136
tests/qemu-iotests/iotests.py | 37 +--
137
51 files changed, 1797 insertions(+), 547 deletions(-)
138
create mode 100644 block/copy-on-read.h
139
create mode 100755 scripts/simplebench/bench-backup.py
140
create mode 100755 tests/qemu-iotests/310
141
create mode 100644 tests/qemu-iotests/310.out
142
create mode 100755 tests/qemu-iotests/313
143
create mode 100644 tests/qemu-iotests/313.out
144
--
2.29.2

--
2.9.3
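
As a side note on the first bullet of the cover letter (backup switching to
asynchronous requests via the block-copy module): the series introduces
block_copy_async() together with a small block_copy_call_*() helper set;
their prototypes appear in the block-copy patch below. The following is only
an illustrative sketch of how a caller might drive one asynchronous call and
collect its status; the real backup job runs inside the job/coroutine
infrastructure rather than a polling loop like this.

    /*
     * Illustrative sketch only: start one asynchronous block-copy call and
     * wait for its completion callback.  "s" is a BlockCopyState created
     * elsewhere with block_copy_state_new().
     */
    #include "qemu/osdep.h"
    #include "block/aio.h"
    #include "block/block-copy.h"

    typedef struct CopyDone {
        bool done;
    } CopyDone;

    static void copy_done_cb(void *opaque)
    {
        CopyDone *d = opaque;
        /* The call is marked finished before the callback is invoked. */
        d->done = true;
    }

    static int copy_one_chunk(BlockCopyState *s, int64_t offset, int64_t bytes)
    {
        CopyDone d = { .done = false };
        BlockCopyCallState *call = block_copy_async(s, offset, bytes,
                                                    copy_done_cb, &d);
        bool error_is_read;
        int ret;

        while (!d.done) {
            aio_poll(qemu_get_aio_context(), true); /* let the coroutine run */
        }

        assert(block_copy_call_finished(call));
        ret = block_copy_call_status(call, &error_is_read);
        block_copy_call_free(call);
        return ret;
    }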
diff view generated by jsdifflib
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Provide an API for COR-filter removal. Also, drop the filter child
permissions for an inactive state when the filter node is being
removed.
To insert the filter, the block generic layer function
bdrv_insert_node() can be used.
The new function bdrv_cor_filter_drop() may be considered an
intermediate solution until the QEMU permission update system has been
overhauled. Then we will be able to implement the API function
bdrv_remove_node() on the block generic layer.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-4-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---

From: Paolo Bonzini <pbonzini@redhat.com>

AioContext is fairly self-contained; the only dependency is QEMUTimer,
but that in turn doesn't need anything else. So move them out of
block-obj-y to avoid introducing a dependency from io/ to block-obj-y.

main-loop and its dependency iohandler also need to be moved, because
later in this series io/ will call iohandler_get_aio_context.

[Changed copyright "the QEMU team" to "other QEMU contributors" as
suggested by Daniel Berrange and agreed by Paolo.
--Stefan]

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20170213135235.12274-2-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
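
For illustration, the two entry points discussed above end up being used as a
pair by the stream job: bdrv_insert_node() (added earlier in this series) puts
the copy-on-read filter on top of a node, and bdrv_cor_filter_drop() removes
it again. The sketch below is not part of this patch; the bdrv_insert_node()
signature, the QDict option names, and the ownership of the options dictionary
are assumptions based on the rest of the series.

    /*
     * Rough sketch: insert a copy-on-read filter above "bs" and later
     * remove it.  Option names mirror the copy-on-read driver's options.
     */
    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "qapi/qmp/qdict.h"
    #include "block/block.h"
    #include "block/copy-on-read.h"

    static BlockDriverState *insert_cor_filter(BlockDriverState *bs,
                                               const char *filter_node_name,
                                               Error **errp)
    {
        QDict *opts = qdict_new();

        qdict_put_str(opts, "driver", "copy-on-read");
        qdict_put_str(opts, "node-name", filter_node_name);
        /* "file" names the node that the filter is inserted above. */
        qdict_put_str(opts, "file", bdrv_get_node_name(bs));

        /* Assumed to take ownership of opts and return the filter node. */
        return bdrv_insert_node(bs, opts, BDRV_O_RDWR, errp);
    }

    static void remove_cor_filter(BlockDriverState *cor_filter_bs)
    {
        /* Drops the filter's permissions, then replaces it with its child. */
        bdrv_cor_filter_drop(cor_filter_bs);
    }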
19
block/copy-on-read.h | 32 +++++++++++++++++++++++++
19
Makefile.objs | 4 ---
20
block/copy-on-read.c | 56 ++++++++++++++++++++++++++++++++++++++++++++
20
stubs/Makefile.objs | 1 +
21
2 files changed, 88 insertions(+)
21
tests/Makefile.include | 11 ++++----
22
create mode 100644 block/copy-on-read.h
22
util/Makefile.objs | 6 +++-
23
23
block/io.c | 29 -------------------
24
diff --git a/block/copy-on-read.h b/block/copy-on-read.h
24
stubs/linux-aio.c | 32 +++++++++++++++++++++
25
stubs/set-fd-handler.c | 11 --------
26
aio-posix.c => util/aio-posix.c | 2 +-
27
aio-win32.c => util/aio-win32.c | 0
28
util/aiocb.c | 55 +++++++++++++++++++++++++++++++++++++
29
async.c => util/async.c | 3 +-
30
iohandler.c => util/iohandler.c | 0
31
main-loop.c => util/main-loop.c | 0
32
qemu-timer.c => util/qemu-timer.c | 0
33
thread-pool.c => util/thread-pool.c | 2 +-
34
trace-events | 11 --------
35
util/trace-events | 11 ++++++++
36
17 files changed, 114 insertions(+), 64 deletions(-)
37
create mode 100644 stubs/linux-aio.c
38
rename aio-posix.c => util/aio-posix.c (99%)
39
rename aio-win32.c => util/aio-win32.c (100%)
40
create mode 100644 util/aiocb.c
41
rename async.c => util/async.c (99%)
42
rename iohandler.c => util/iohandler.c (100%)
43
rename main-loop.c => util/main-loop.c (100%)
44
rename qemu-timer.c => util/qemu-timer.c (100%)
45
rename thread-pool.c => util/thread-pool.c (99%)
46
47
diff --git a/Makefile.objs b/Makefile.objs
48
index XXXXXXX..XXXXXXX 100644
49
--- a/Makefile.objs
50
+++ b/Makefile.objs
51
@@ -XXX,XX +XXX,XX @@ chardev-obj-y = chardev/
52
#######################################################################
53
# block-obj-y is code used by both qemu system emulation and qemu-img
54
55
-block-obj-y = async.o thread-pool.o
56
block-obj-y += nbd/
57
block-obj-y += block.o blockjob.o
58
-block-obj-y += main-loop.o iohandler.o qemu-timer.o
59
-block-obj-$(CONFIG_POSIX) += aio-posix.o
60
-block-obj-$(CONFIG_WIN32) += aio-win32.o
61
block-obj-y += block/
62
block-obj-y += qemu-io-cmds.o
63
block-obj-$(CONFIG_REPLICATION) += replication.o
64
diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
65
index XXXXXXX..XXXXXXX 100644
66
--- a/stubs/Makefile.objs
67
+++ b/stubs/Makefile.objs
68
@@ -XXX,XX +XXX,XX @@ stub-obj-y += get-vm-name.o
69
stub-obj-y += iothread.o
70
stub-obj-y += iothread-lock.o
71
stub-obj-y += is-daemonized.o
72
+stub-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
73
stub-obj-y += machine-init-done.o
74
stub-obj-y += migr-blocker.o
75
stub-obj-y += monitor.o
76
diff --git a/tests/Makefile.include b/tests/Makefile.include
77
index XXXXXXX..XXXXXXX 100644
78
--- a/tests/Makefile.include
79
+++ b/tests/Makefile.include
80
@@ -XXX,XX +XXX,XX @@ check-unit-y += tests/test-visitor-serialization$(EXESUF)
81
check-unit-y += tests/test-iov$(EXESUF)
82
gcov-files-test-iov-y = util/iov.c
83
check-unit-y += tests/test-aio$(EXESUF)
84
+gcov-files-test-aio-y = util/async.c util/qemu-timer.o
85
+gcov-files-test-aio-$(CONFIG_WIN32) += util/aio-win32.c
86
+gcov-files-test-aio-$(CONFIG_POSIX) += util/aio-posix.c
87
check-unit-y += tests/test-throttle$(EXESUF)
88
gcov-files-test-aio-$(CONFIG_WIN32) = aio-win32.c
89
gcov-files-test-aio-$(CONFIG_POSIX) = aio-posix.c
90
@@ -XXX,XX +XXX,XX @@ tests/check-qjson$(EXESUF): tests/check-qjson.o $(test-util-obj-y)
91
tests/check-qom-interface$(EXESUF): tests/check-qom-interface.o $(test-qom-obj-y)
92
tests/check-qom-proplist$(EXESUF): tests/check-qom-proplist.o $(test-qom-obj-y)
93
94
-tests/test-char$(EXESUF): tests/test-char.o qemu-timer.o \
95
-    $(test-util-obj-y) $(qtest-obj-y) $(test-block-obj-y) $(chardev-obj-y)
96
+tests/test-char$(EXESUF): tests/test-char.o $(test-util-obj-y) $(qtest-obj-y) $(test-io-obj-y) $(chardev-obj-y)
97
tests/test-coroutine$(EXESUF): tests/test-coroutine.o $(test-block-obj-y)
98
tests/test-aio$(EXESUF): tests/test-aio.o $(test-block-obj-y)
99
tests/test-throttle$(EXESUF): tests/test-throttle.o $(test-block-obj-y)
100
@@ -XXX,XX +XXX,XX @@ tests/test-vmstate$(EXESUF): tests/test-vmstate.o \
101
    migration/vmstate.o migration/qemu-file.o \
102
migration/qemu-file-channel.o migration/qjson.o \
103
    $(test-io-obj-y)
104
-tests/test-timed-average$(EXESUF): tests/test-timed-average.o qemu-timer.o \
105
-    $(test-util-obj-y)
106
+tests/test-timed-average$(EXESUF): tests/test-timed-average.o $(test-util-obj-y)
107
tests/test-base64$(EXESUF): tests/test-base64.o \
108
    libqemuutil.a libqemustub.a
109
tests/ptimer-test$(EXESUF): tests/ptimer-test.o tests/ptimer-test-stubs.o hw/core/ptimer.o libqemustub.a
110
@@ -XXX,XX +XXX,XX @@ tests/usb-hcd-ehci-test$(EXESUF): tests/usb-hcd-ehci-test.o $(libqos-usb-obj-y)
111
tests/usb-hcd-xhci-test$(EXESUF): tests/usb-hcd-xhci-test.o $(libqos-usb-obj-y)
112
tests/pc-cpu-test$(EXESUF): tests/pc-cpu-test.o
113
tests/postcopy-test$(EXESUF): tests/postcopy-test.o
114
-tests/vhost-user-test$(EXESUF): tests/vhost-user-test.o qemu-timer.o \
115
+tests/vhost-user-test$(EXESUF): tests/vhost-user-test.o $(test-util-obj-y) \
116
    $(qtest-obj-y) $(test-io-obj-y) $(libqos-virtio-obj-y) $(libqos-pc-obj-y) \
117
    $(chardev-obj-y)
118
tests/qemu-iotests/socket_scm_helper$(EXESUF): tests/qemu-iotests/socket_scm_helper.o
119
diff --git a/util/Makefile.objs b/util/Makefile.objs
120
index XXXXXXX..XXXXXXX 100644
121
--- a/util/Makefile.objs
122
+++ b/util/Makefile.objs
123
@@ -XXX,XX +XXX,XX @@
124
util-obj-y = osdep.o cutils.o unicode.o qemu-timer-common.o
125
util-obj-y += bufferiszero.o
126
util-obj-y += lockcnt.o
127
+util-obj-y += aiocb.o async.o thread-pool.o qemu-timer.o
128
+util-obj-y += main-loop.o iohandler.o
129
+util-obj-$(CONFIG_POSIX) += aio-posix.o
130
util-obj-$(CONFIG_POSIX) += compatfd.o
131
util-obj-$(CONFIG_POSIX) += event_notifier-posix.o
132
util-obj-$(CONFIG_POSIX) += mmap-alloc.o
133
util-obj-$(CONFIG_POSIX) += oslib-posix.o
134
util-obj-$(CONFIG_POSIX) += qemu-openpty.o
135
util-obj-$(CONFIG_POSIX) += qemu-thread-posix.o
136
-util-obj-$(CONFIG_WIN32) += event_notifier-win32.o
137
util-obj-$(CONFIG_POSIX) += memfd.o
138
+util-obj-$(CONFIG_WIN32) += aio-win32.o
139
+util-obj-$(CONFIG_WIN32) += event_notifier-win32.o
140
util-obj-$(CONFIG_WIN32) += oslib-win32.o
141
util-obj-$(CONFIG_WIN32) += qemu-thread-win32.o
142
util-obj-y += envlist.o path.o module.o
143
diff --git a/block/io.c b/block/io.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/block/io.c
146
+++ b/block/io.c
147
@@ -XXX,XX +XXX,XX @@ BlockAIOCB *bdrv_aio_flush(BlockDriverState *bs,
148
return &acb->common;
149
}
150
151
-void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
152
- BlockCompletionFunc *cb, void *opaque)
153
-{
154
- BlockAIOCB *acb;
155
-
156
- acb = g_malloc(aiocb_info->aiocb_size);
157
- acb->aiocb_info = aiocb_info;
158
- acb->bs = bs;
159
- acb->cb = cb;
160
- acb->opaque = opaque;
161
- acb->refcnt = 1;
162
- return acb;
163
-}
164
-
165
-void qemu_aio_ref(void *p)
166
-{
167
- BlockAIOCB *acb = p;
168
- acb->refcnt++;
169
-}
170
-
171
-void qemu_aio_unref(void *p)
172
-{
173
- BlockAIOCB *acb = p;
174
- assert(acb->refcnt > 0);
175
- if (--acb->refcnt == 0) {
176
- g_free(acb);
177
- }
178
-}
179
-
180
/**************************************************************/
181
/* Coroutine block device emulation */
182
183
diff --git a/stubs/linux-aio.c b/stubs/linux-aio.c
25
new file mode 100644
184
new file mode 100644
26
index XXXXXXX..XXXXXXX
185
index XXXXXXX..XXXXXXX
27
--- /dev/null
186
--- /dev/null
28
+++ b/block/copy-on-read.h
187
+++ b/stubs/linux-aio.c
29
@@ -XXX,XX +XXX,XX @@
188
@@ -XXX,XX +XXX,XX @@
30
+/*
189
+/*
31
+ * Copy-on-read filter block driver
190
+ * Linux native AIO support.
32
+ *
191
+ *
33
+ * The filter driver performs Copy-On-Read (COR) operations
192
+ * Copyright (C) 2009 IBM, Corp.
34
+ *
193
+ * Copyright (C) 2009 Red Hat, Inc.
35
+ * Copyright (c) 2018-2020 Virtuozzo International GmbH.
194
+ *
36
+ *
195
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
37
+ * Author:
196
+ * See the COPYING file in the top-level directory.
38
+ * Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
39
+ *
40
+ * This program is free software; you can redistribute it and/or modify
41
+ * it under the terms of the GNU General Public License as published by
42
+ * the Free Software Foundation; either version 2 of the License, or
43
+ * (at your option) any later version.
44
+ *
45
+ * This program is distributed in the hope that it will be useful,
46
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
47
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
48
+ * GNU General Public License for more details.
49
+ *
50
+ * You should have received a copy of the GNU General Public License
51
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
52
+ */
197
+ */
53
+
198
+#include "qemu/osdep.h"
54
+#ifndef BLOCK_COPY_ON_READ
199
+#include "block/aio.h"
55
+#define BLOCK_COPY_ON_READ
200
+#include "block/raw-aio.h"
56
+
201
+
57
+#include "block/block_int.h"
202
+void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
58
+
203
+{
59
+void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);
204
+ abort();
60
+
205
+}
61
+#endif /* BLOCK_COPY_ON_READ */
206
+
62
diff --git a/block/copy-on-read.c b/block/copy-on-read.c
207
+void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
63
index XXXXXXX..XXXXXXX 100644
208
+{
64
--- a/block/copy-on-read.c
209
+ abort();
65
+++ b/block/copy-on-read.c
210
+}
66
@@ -XXX,XX +XXX,XX @@
211
+
67
#include "qemu/osdep.h"
212
+LinuxAioState *laio_init(void)
68
#include "block/block_int.h"
213
+{
69
#include "qemu/module.h"
214
+ abort();
70
+#include "qapi/error.h"
215
+}
71
+#include "block/copy-on-read.h"
216
+
72
+
217
+void laio_cleanup(LinuxAioState *s)
73
+
218
+{
74
+typedef struct BDRVStateCOR {
219
+ abort();
75
+ bool active;
220
+}
76
+} BDRVStateCOR;
221
diff --git a/stubs/set-fd-handler.c b/stubs/set-fd-handler.c
77
222
index XXXXXXX..XXXXXXX 100644
78
223
--- a/stubs/set-fd-handler.c
79
static int cor_open(BlockDriverState *bs, QDict *options, int flags,
224
+++ b/stubs/set-fd-handler.c
80
Error **errp)
225
@@ -XXX,XX +XXX,XX @@ void qemu_set_fd_handler(int fd,
81
{
226
{
82
+ BDRVStateCOR *state = bs->opaque;
227
abort();
83
+
84
bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
85
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
86
false, errp);
87
@@ -XXX,XX +XXX,XX @@ static int cor_open(BlockDriverState *bs, QDict *options, int flags,
88
((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
89
bs->file->bs->supported_zero_flags);
90
91
+ state->active = true;
92
+
93
+ /*
94
+ * We don't need to call bdrv_child_refresh_perms() now as the permissions
95
+ * will be updated later when the filter node gets its parent.
96
+ */
97
+
98
return 0;
99
}
228
}
100
229
-
101
@@ -XXX,XX +XXX,XX @@ static void cor_child_perm(BlockDriverState *bs, BdrvChild *c,
230
-void aio_set_fd_handler(AioContext *ctx,
102
uint64_t perm, uint64_t shared,
231
- int fd,
103
uint64_t *nperm, uint64_t *nshared)
232
- bool is_external,
104
{
233
- IOHandler *io_read,
105
+ BDRVStateCOR *s = bs->opaque;
234
- IOHandler *io_write,
106
+
235
- AioPollFn *io_poll,
107
+ if (!s->active) {
236
- void *opaque)
108
+ /*
237
-{
109
+ * While the filter is being removed
238
- abort();
110
+ */
239
-}
111
+ *nperm = 0;
240
diff --git a/aio-posix.c b/util/aio-posix.c
112
+ *nshared = BLK_PERM_ALL;
241
similarity index 99%
113
+ return;
242
rename from aio-posix.c
243
rename to util/aio-posix.c
244
index XXXXXXX..XXXXXXX 100644
245
--- a/aio-posix.c
246
+++ b/util/aio-posix.c
247
@@ -XXX,XX +XXX,XX @@
248
#include "qemu/rcu_queue.h"
249
#include "qemu/sockets.h"
250
#include "qemu/cutils.h"
251
-#include "trace-root.h"
252
+#include "trace.h"
253
#ifdef CONFIG_EPOLL_CREATE1
254
#include <sys/epoll.h>
255
#endif
256
diff --git a/aio-win32.c b/util/aio-win32.c
257
similarity index 100%
258
rename from aio-win32.c
259
rename to util/aio-win32.c
260
diff --git a/util/aiocb.c b/util/aiocb.c
261
new file mode 100644
262
index XXXXXXX..XXXXXXX
263
--- /dev/null
264
+++ b/util/aiocb.c
265
@@ -XXX,XX +XXX,XX @@
266
+/*
267
+ * BlockAIOCB allocation
268
+ *
269
+ * Copyright (c) 2003-2017 Fabrice Bellard and other QEMU contributors
270
+ *
271
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
272
+ * of this software and associated documentation files (the "Software"), to deal
273
+ * in the Software without restriction, including without limitation the rights
274
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
275
+ * copies of the Software, and to permit persons to whom the Software is
276
+ * furnished to do so, subject to the following conditions:
277
+ *
278
+ * The above copyright notice and this permission notice shall be included in
279
+ * all copies or substantial portions of the Software.
280
+ *
281
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
282
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
283
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
284
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
285
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
286
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
287
+ * THE SOFTWARE.
288
+ */
289
+
290
+#include "qemu/osdep.h"
291
+#include "block/aio.h"
292
+
293
+void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
294
+ BlockCompletionFunc *cb, void *opaque)
295
+{
296
+ BlockAIOCB *acb;
297
+
298
+ acb = g_malloc(aiocb_info->aiocb_size);
299
+ acb->aiocb_info = aiocb_info;
300
+ acb->bs = bs;
301
+ acb->cb = cb;
302
+ acb->opaque = opaque;
303
+ acb->refcnt = 1;
304
+ return acb;
305
+}
306
+
307
+void qemu_aio_ref(void *p)
308
+{
309
+ BlockAIOCB *acb = p;
310
+ acb->refcnt++;
311
+}
312
+
313
+void qemu_aio_unref(void *p)
314
+{
315
+ BlockAIOCB *acb = p;
316
+ assert(acb->refcnt > 0);
317
+ if (--acb->refcnt == 0) {
318
+ g_free(acb);
114
+ }
319
+ }
115
+
320
+}
116
*nperm = perm & PERM_PASSTHROUGH;
321
diff --git a/async.c b/util/async.c
117
*nshared = (shared & PERM_PASSTHROUGH) | PERM_UNCHANGED;
322
similarity index 99%
118
323
rename from async.c
119
@@ -XXX,XX +XXX,XX @@ static void cor_lock_medium(BlockDriverState *bs, bool locked)
324
rename to util/async.c
120
325
index XXXXXXX..XXXXXXX 100644
121
static BlockDriver bdrv_copy_on_read = {
326
--- a/async.c
122
.format_name = "copy-on-read",
327
+++ b/util/async.c
123
+ .instance_size = sizeof(BDRVStateCOR),
328
@@ -XXX,XX +XXX,XX @@
124
329
/*
125
.bdrv_open = cor_open,
330
- * QEMU System Emulator
126
.bdrv_child_perm = cor_child_perm,
331
+ * Data plane event loop
127
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_copy_on_read = {
332
*
128
.is_filter = true,
333
* Copyright (c) 2003-2008 Fabrice Bellard
129
};
334
+ * Copyright (c) 2009-2017 QEMU contributors
130
335
*
131
+
336
* Permission is hereby granted, free of charge, to any person obtaining a copy
132
+void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
337
* of this software and associated documentation files (the "Software"), to deal
133
+{
338
diff --git a/iohandler.c b/util/iohandler.c
134
+ BdrvChild *child;
339
similarity index 100%
135
+ BlockDriverState *bs;
340
rename from iohandler.c
136
+ BDRVStateCOR *s = cor_filter_bs->opaque;
341
rename to util/iohandler.c
137
+
342
diff --git a/main-loop.c b/util/main-loop.c
138
+ child = bdrv_filter_child(cor_filter_bs);
343
similarity index 100%
139
+ if (!child) {
344
rename from main-loop.c
140
+ return;
345
rename to util/main-loop.c
141
+ }
346
diff --git a/qemu-timer.c b/util/qemu-timer.c
142
+ bs = child->bs;
347
similarity index 100%
143
+
348
rename from qemu-timer.c
144
+ /* Retain the BDS until we complete the graph change. */
349
rename to util/qemu-timer.c
145
+ bdrv_ref(bs);
350
diff --git a/thread-pool.c b/util/thread-pool.c
146
+ /* Hold a guest back from writing while permissions are being reset. */
351
similarity index 99%
147
+ bdrv_drained_begin(bs);
352
rename from thread-pool.c
148
+ /* Drop permissions before the graph change. */
353
rename to util/thread-pool.c
149
+ s->active = false;
354
index XXXXXXX..XXXXXXX 100644
150
+ bdrv_child_refresh_perms(cor_filter_bs, child, &error_abort);
355
--- a/thread-pool.c
151
+ bdrv_replace_node(cor_filter_bs, bs, &error_abort);
356
+++ b/util/thread-pool.c
152
+
357
@@ -XXX,XX +XXX,XX @@
153
+ bdrv_drained_end(bs);
358
#include "qemu/queue.h"
154
+ bdrv_unref(bs);
359
#include "qemu/thread.h"
155
+ bdrv_unref(cor_filter_bs);
360
#include "qemu/coroutine.h"
156
+}
361
-#include "trace-root.h"
157
+
362
+#include "trace.h"
158
+
363
#include "block/thread-pool.h"
159
static void bdrv_copy_on_read_init(void)
364
#include "qemu/main-loop.h"
160
{
365
161
bdrv_register(&bdrv_copy_on_read);
366
diff --git a/trace-events b/trace-events
367
index XXXXXXX..XXXXXXX 100644
368
--- a/trace-events
369
+++ b/trace-events
370
@@ -XXX,XX +XXX,XX @@
371
#
372
# The <format-string> should be a sprintf()-compatible format string.
373
374
-# aio-posix.c
375
-run_poll_handlers_begin(void *ctx, int64_t max_ns) "ctx %p max_ns %"PRId64
376
-run_poll_handlers_end(void *ctx, bool progress) "ctx %p progress %d"
377
-poll_shrink(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
378
-poll_grow(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
379
-
380
-# thread-pool.c
381
-thread_pool_submit(void *pool, void *req, void *opaque) "pool %p req %p opaque %p"
382
-thread_pool_complete(void *pool, void *req, void *opaque, int ret) "pool %p req %p opaque %p ret %d"
383
-thread_pool_cancel(void *req, void *opaque) "req %p opaque %p"
384
-
385
# ioport.c
386
cpu_in(unsigned int addr, char size, unsigned int val) "addr %#x(%c) value %u"
387
cpu_out(unsigned int addr, char size, unsigned int val) "addr %#x(%c) value %u"
388
diff --git a/util/trace-events b/util/trace-events
389
index XXXXXXX..XXXXXXX 100644
390
--- a/util/trace-events
391
+++ b/util/trace-events
392
@@ -XXX,XX +XXX,XX @@
393
# See docs/tracing.txt for syntax documentation.
394
395
+# util/aio-posix.c
396
+run_poll_handlers_begin(void *ctx, int64_t max_ns) "ctx %p max_ns %"PRId64
397
+run_poll_handlers_end(void *ctx, bool progress) "ctx %p progress %d"
398
+poll_shrink(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
399
+poll_grow(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
400
+
401
+# util/thread-pool.c
402
+thread_pool_submit(void *pool, void *req, void *opaque) "pool %p req %p opaque %p"
403
+thread_pool_complete(void *pool, void *req, void *opaque, int ret) "pool %p req %p opaque %p ret %d"
404
+thread_pool_cancel(void *req, void *opaque) "req %p opaque %p"
405
+
406
# util/buffer.c
407
buffer_resize(const char *buf, size_t olen, size_t len) "%s: old %zd, new %zd"
408
buffer_move_empty(const char *buf, size_t len, const char *from) "%s: %zd bytes from %s"
162
--
409
--
163
2.29.2
410
2.9.3
164
411
165
412
diff view generated by jsdifflib
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

We'll need asynchronous block-copy invocation so that backup can use it
directly.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-4-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---

From: Paolo Bonzini <pbonzini@redhat.com>

aio_co_wake provides the infrastructure to start a coroutine on a "home"
AioContext. It will be used by CoMutex and CoQueue, so that coroutines
don't jump from one context to another when they go to sleep on a
mutex or waitqueue. However, it can also be used as a more efficient
alternative to one-shot bottom halves, and saves the effort of tracking
which AioContext a coroutine is running on.

aio_co_schedule is the part of aio_co_wake that starts a coroutine
on a remote AioContext, but it is also useful to implement e.g.
bdrv_set_aio_context callbacks.

The implementation of aio_co_schedule is based on a lock-free
multiple-producer, single-consumer queue. The multiple producers use
cmpxchg to add to a LIFO stack. The consumer (a per-AioContext bottom
half) grabs all items added so far, inverts the list to make it FIFO,
and goes through it one item at a time until it's empty. The data
structure was inspired by OSv, which uses it in the very code we'll
"port" to QEMU for the thread-safe CoMutex.

Most of the new code is really tests.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20170213135235.12274-3-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
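
The lock-free multiple-producer, single-consumer queue described above is
small enough to sketch in full. The stand-alone example below uses plain C11
atomics instead of the QSLIST_INSERT_HEAD_ATOMIC/QSLIST_MOVE_ATOMIC macros
that util/async.c actually uses; it only illustrates the push and
pop-all-and-reverse pattern.

    /*
     * Producers push onto a LIFO list with compare-and-swap; the single
     * consumer detaches the whole list at once and reverses it into FIFO
     * order before walking it.
     */
    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct Node {
        struct Node *next;
    } Node;

    static _Atomic(Node *) head;

    /* Any thread may call this (the aio_co_schedule() side). */
    static void mpsc_push(Node *n)
    {
        Node *old = atomic_load(&head);
        do {
            n->next = old;
        } while (!atomic_compare_exchange_weak(&head, &old, n));
    }

    /* Only the consumer (the per-context bottom half) calls this:
     * grab everything pushed so far and return it in FIFO order. */
    static Node *mpsc_pop_all_fifo(void)
    {
        Node *lifo = atomic_exchange(&head, (Node *) NULL);
        Node *fifo = NULL;

        while (lifo) {
            Node *next = lifo->next;
            lifo->next = fifo;  /* reverse the list */
            fifo = lifo;
            lifo = next;
        }
        return fifo;
    }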
10
include/block/block-copy.h | 29 ++++++++++++++
29
tests/Makefile.include | 8 +-
11
block/block-copy.c | 81 ++++++++++++++++++++++++++++++++++++--
30
include/block/aio.h | 32 +++++++
12
2 files changed, 106 insertions(+), 4 deletions(-)
31
include/qemu/coroutine_int.h | 11 ++-
13
32
tests/iothread.h | 25 +++++
14
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
33
tests/iothread.c | 91 ++++++++++++++++++
34
tests/test-aio-multithread.c | 213 +++++++++++++++++++++++++++++++++++++++++++
35
util/async.c | 65 +++++++++++++
36
util/qemu-coroutine.c | 8 ++
37
util/trace-events | 4 +
38
9 files changed, 453 insertions(+), 4 deletions(-)
39
create mode 100644 tests/iothread.h
40
create mode 100644 tests/iothread.c
41
create mode 100644 tests/test-aio-multithread.c
42
43
diff --git a/tests/Makefile.include b/tests/Makefile.include
15
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
16
--- a/include/block/block-copy.h
45
--- a/tests/Makefile.include
17
+++ b/include/block/block-copy.h
46
+++ b/tests/Makefile.include
47
@@ -XXX,XX +XXX,XX @@ check-unit-y += tests/test-aio$(EXESUF)
48
gcov-files-test-aio-y = util/async.c util/qemu-timer.o
49
gcov-files-test-aio-$(CONFIG_WIN32) += util/aio-win32.c
50
gcov-files-test-aio-$(CONFIG_POSIX) += util/aio-posix.c
51
+check-unit-y += tests/test-aio-multithread$(EXESUF)
52
+gcov-files-test-aio-multithread-y = $(gcov-files-test-aio-y)
53
+gcov-files-test-aio-multithread-y += util/qemu-coroutine.c tests/iothread.c
54
check-unit-y += tests/test-throttle$(EXESUF)
55
-gcov-files-test-aio-$(CONFIG_WIN32) = aio-win32.c
56
-gcov-files-test-aio-$(CONFIG_POSIX) = aio-posix.c
57
check-unit-y += tests/test-thread-pool$(EXESUF)
58
gcov-files-test-thread-pool-y = thread-pool.c
59
gcov-files-test-hbitmap-y = util/hbitmap.c
60
@@ -XXX,XX +XXX,XX @@ test-qapi-obj-y = tests/test-qapi-visit.o tests/test-qapi-types.o \
61
    $(test-qom-obj-y)
62
test-crypto-obj-y = $(crypto-obj-y) $(test-qom-obj-y)
63
test-io-obj-y = $(io-obj-y) $(test-crypto-obj-y)
64
-test-block-obj-y = $(block-obj-y) $(test-io-obj-y)
65
+test-block-obj-y = $(block-obj-y) $(test-io-obj-y) tests/iothread.o
66
67
tests/check-qint$(EXESUF): tests/check-qint.o $(test-util-obj-y)
68
tests/check-qstring$(EXESUF): tests/check-qstring.o $(test-util-obj-y)
69
@@ -XXX,XX +XXX,XX @@ tests/check-qom-proplist$(EXESUF): tests/check-qom-proplist.o $(test-qom-obj-y)
70
tests/test-char$(EXESUF): tests/test-char.o $(test-util-obj-y) $(qtest-obj-y) $(test-io-obj-y) $(chardev-obj-y)
71
tests/test-coroutine$(EXESUF): tests/test-coroutine.o $(test-block-obj-y)
72
tests/test-aio$(EXESUF): tests/test-aio.o $(test-block-obj-y)
73
+tests/test-aio-multithread$(EXESUF): tests/test-aio-multithread.o $(test-block-obj-y)
74
tests/test-throttle$(EXESUF): tests/test-throttle.o $(test-block-obj-y)
75
tests/test-blockjob$(EXESUF): tests/test-blockjob.o $(test-block-obj-y) $(test-util-obj-y)
76
tests/test-blockjob-txn$(EXESUF): tests/test-blockjob-txn.o $(test-block-obj-y) $(test-util-obj-y)
77
diff --git a/include/block/aio.h b/include/block/aio.h
78
index XXXXXXX..XXXXXXX 100644
79
--- a/include/block/aio.h
80
+++ b/include/block/aio.h
81
@@ -XXX,XX +XXX,XX @@ typedef void QEMUBHFunc(void *opaque);
82
typedef bool AioPollFn(void *opaque);
83
typedef void IOHandler(void *opaque);
84
85
+struct Coroutine;
86
struct ThreadPool;
87
struct LinuxAioState;
88
89
@@ -XXX,XX +XXX,XX @@ struct AioContext {
90
bool notified;
91
EventNotifier notifier;
92
93
+ QSLIST_HEAD(, Coroutine) scheduled_coroutines;
94
+ QEMUBH *co_schedule_bh;
95
+
96
/* Thread pool for performing work and receiving completion callbacks.
97
* Has its own locking.
98
*/
99
@@ -XXX,XX +XXX,XX @@ static inline bool aio_node_check(AioContext *ctx, bool is_external)
100
}
101
102
/**
103
+ * aio_co_schedule:
104
+ * @ctx: the aio context
105
+ * @co: the coroutine
106
+ *
107
+ * Start a coroutine on a remote AioContext.
108
+ *
109
+ * The coroutine must not be entered by anyone else while aio_co_schedule()
110
+ * is active. In addition the coroutine must have yielded unless ctx
111
+ * is the context in which the coroutine is running (i.e. the value of
112
+ * qemu_get_current_aio_context() from the coroutine itself).
113
+ */
114
+void aio_co_schedule(AioContext *ctx, struct Coroutine *co);
115
+
116
+/**
117
+ * aio_co_wake:
118
+ * @co: the coroutine
119
+ *
120
+ * Restart a coroutine on the AioContext where it was running last, thus
121
+ * preventing coroutines from jumping from one context to another when they
122
+ * go to sleep.
123
+ *
124
+ * aio_co_wake may be executed either in coroutine or non-coroutine
125
+ * context. The coroutine must not be entered by anyone else while
126
+ * aio_co_wake() is active.
127
+ */
128
+void aio_co_wake(struct Coroutine *co);
129
+
130
+/**
131
* Return the AioContext whose event loop runs in the current thread.
132
*
133
* If called from an IOThread this will be the IOThread's AioContext. If
134
diff --git a/include/qemu/coroutine_int.h b/include/qemu/coroutine_int.h
135
index XXXXXXX..XXXXXXX 100644
136
--- a/include/qemu/coroutine_int.h
137
+++ b/include/qemu/coroutine_int.h
138
@@ -XXX,XX +XXX,XX @@ struct Coroutine {
139
CoroutineEntry *entry;
140
void *entry_arg;
141
Coroutine *caller;
142
+
143
+ /* Only used when the coroutine has terminated. */
144
QSLIST_ENTRY(Coroutine) pool_next;
145
+
146
size_t locks_held;
147
148
- /* Coroutines that should be woken up when we yield or terminate */
149
+ /* Coroutines that should be woken up when we yield or terminate.
150
+ * Only used when the coroutine is running.
151
+ */
152
QSIMPLEQ_HEAD(, Coroutine) co_queue_wakeup;
153
+
154
+ /* Only used when the coroutine has yielded. */
155
+ AioContext *ctx;
156
QSIMPLEQ_ENTRY(Coroutine) co_queue_next;
157
+ QSLIST_ENTRY(Coroutine) co_scheduled_next;
158
};
159
160
Coroutine *qemu_coroutine_new(void);
161
diff --git a/tests/iothread.h b/tests/iothread.h
162
new file mode 100644
163
index XXXXXXX..XXXXXXX
164
--- /dev/null
165
+++ b/tests/iothread.h
18
@@ -XXX,XX +XXX,XX @@
166
@@ -XXX,XX +XXX,XX @@
19
#include "qemu/co-shared-resource.h"
20
21
typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
22
+typedef void (*BlockCopyAsyncCallbackFunc)(void *opaque);
23
typedef struct BlockCopyState BlockCopyState;
24
+typedef struct BlockCopyCallState BlockCopyCallState;
25
26
BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
27
int64_t cluster_size, bool use_copy_range,
28
@@ -XXX,XX +XXX,XX @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
29
int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
30
bool *error_is_read);
31
32
+/*
167
+/*
33
+ * Run block-copy in a coroutine, create corresponding BlockCopyCallState
168
+ * Event loop thread implementation for unit tests
34
+ * object and return pointer to it. Never returns NULL.
169
+ *
35
+ *
170
+ * Copyright Red Hat Inc., 2013, 2016
36
+ * Caller is responsible to call block_copy_call_free() to free
171
+ *
37
+ * BlockCopyCallState object.
172
+ * Authors:
173
+ * Stefan Hajnoczi <stefanha@redhat.com>
174
+ * Paolo Bonzini <pbonzini@redhat.com>
175
+ *
176
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
177
+ * See the COPYING file in the top-level directory.
38
+ */
178
+ */
39
+BlockCopyCallState *block_copy_async(BlockCopyState *s,
179
+#ifndef TEST_IOTHREAD_H
40
+ int64_t offset, int64_t bytes,
180
+#define TEST_IOTHREAD_H
41
+ BlockCopyAsyncCallbackFunc cb,
181
+
42
+ void *cb_opaque);
182
+#include "block/aio.h"
43
+
183
+#include "qemu/thread.h"
184
+
185
+typedef struct IOThread IOThread;
186
+
187
+IOThread *iothread_new(void);
188
+void iothread_join(IOThread *iothread);
189
+AioContext *iothread_get_aio_context(IOThread *iothread);
190
+
191
+#endif
192
diff --git a/tests/iothread.c b/tests/iothread.c
193
new file mode 100644
194
index XXXXXXX..XXXXXXX
195
--- /dev/null
196
+++ b/tests/iothread.c
197
@@ -XXX,XX +XXX,XX @@
44
+/*
198
+/*
45
+ * Free finished BlockCopyCallState. Trying to free running
199
+ * Event loop thread implementation for unit tests
46
+ * block-copy will crash.
200
+ *
201
+ * Copyright Red Hat Inc., 2013, 2016
202
+ *
203
+ * Authors:
204
+ * Stefan Hajnoczi <stefanha@redhat.com>
205
+ * Paolo Bonzini <pbonzini@redhat.com>
206
+ *
207
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
208
+ * See the COPYING file in the top-level directory.
209
+ *
47
+ */
210
+ */
48
+void block_copy_call_free(BlockCopyCallState *call_state);
211
+
49
+
212
+#include "qemu/osdep.h"
213
+#include "qapi/error.h"
214
+#include "block/aio.h"
215
+#include "qemu/main-loop.h"
216
+#include "qemu/rcu.h"
217
+#include "iothread.h"
218
+
219
+struct IOThread {
220
+ AioContext *ctx;
221
+
222
+ QemuThread thread;
223
+ QemuMutex init_done_lock;
224
+ QemuCond init_done_cond; /* is thread initialization done? */
225
+ bool stopping;
226
+};
227
+
228
+static __thread IOThread *my_iothread;
229
+
230
+AioContext *qemu_get_current_aio_context(void)
231
+{
232
+ return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
233
+}
234
+
235
+static void *iothread_run(void *opaque)
236
+{
237
+ IOThread *iothread = opaque;
238
+
239
+ rcu_register_thread();
240
+
241
+ my_iothread = iothread;
242
+ qemu_mutex_lock(&iothread->init_done_lock);
243
+ iothread->ctx = aio_context_new(&error_abort);
244
+ qemu_cond_signal(&iothread->init_done_cond);
245
+ qemu_mutex_unlock(&iothread->init_done_lock);
246
+
247
+ while (!atomic_read(&iothread->stopping)) {
248
+ aio_poll(iothread->ctx, true);
249
+ }
250
+
251
+ rcu_unregister_thread();
252
+ return NULL;
253
+}
254
+
255
+void iothread_join(IOThread *iothread)
256
+{
257
+ iothread->stopping = true;
258
+ aio_notify(iothread->ctx);
259
+ qemu_thread_join(&iothread->thread);
260
+ qemu_cond_destroy(&iothread->init_done_cond);
261
+ qemu_mutex_destroy(&iothread->init_done_lock);
262
+ aio_context_unref(iothread->ctx);
263
+ g_free(iothread);
264
+}
265
+
266
+IOThread *iothread_new(void)
267
+{
268
+ IOThread *iothread = g_new0(IOThread, 1);
269
+
270
+ qemu_mutex_init(&iothread->init_done_lock);
271
+ qemu_cond_init(&iothread->init_done_cond);
272
+ qemu_thread_create(&iothread->thread, NULL, iothread_run,
273
+ iothread, QEMU_THREAD_JOINABLE);
274
+
275
+ /* Wait for initialization to complete */
276
+ qemu_mutex_lock(&iothread->init_done_lock);
277
+ while (iothread->ctx == NULL) {
278
+ qemu_cond_wait(&iothread->init_done_cond,
279
+ &iothread->init_done_lock);
280
+ }
281
+ qemu_mutex_unlock(&iothread->init_done_lock);
282
+ return iothread;
283
+}
284
+
285
+AioContext *iothread_get_aio_context(IOThread *iothread)
286
+{
287
+ return iothread->ctx;
288
+}
289
diff --git a/tests/test-aio-multithread.c b/tests/test-aio-multithread.c
290
new file mode 100644
291
index XXXXXXX..XXXXXXX
292
--- /dev/null
293
+++ b/tests/test-aio-multithread.c
294
@@ -XXX,XX +XXX,XX @@
50
+/*
295
+/*
51
+ * Note, that block-copy call is marked finished prior to calling
296
+ * AioContext multithreading tests
52
+ * the callback.
297
+ *
298
+ * Copyright Red Hat, Inc. 2016
299
+ *
300
+ * Authors:
301
+ * Paolo Bonzini <pbonzini@redhat.com>
302
+ *
303
+ * This work is licensed under the terms of the GNU LGPL, version 2 or later.
304
+ * See the COPYING.LIB file in the top-level directory.
53
+ */
305
+ */
54
+bool block_copy_call_finished(BlockCopyCallState *call_state);
306
+
55
+bool block_copy_call_succeeded(BlockCopyCallState *call_state);
307
+#include "qemu/osdep.h"
56
+bool block_copy_call_failed(BlockCopyCallState *call_state);
308
+#include <glib.h>
57
+int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read);
309
+#include "block/aio.h"
58
+
310
+#include "qapi/error.h"
59
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
311
+#include "qemu/coroutine.h"
60
void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
312
+#include "qemu/thread.h"
61
313
+#include "qemu/error-report.h"
62
diff --git a/block/block-copy.c b/block/block-copy.c
314
+#include "iothread.h"
315
+
316
+/* AioContext management */
317
+
318
+#define NUM_CONTEXTS 5
319
+
320
+static IOThread *threads[NUM_CONTEXTS];
321
+static AioContext *ctx[NUM_CONTEXTS];
322
+static __thread int id = -1;
323
+
324
+static QemuEvent done_event;
325
+
326
+/* Run a function synchronously on a remote iothread. */
327
+
328
+typedef struct CtxRunData {
329
+ QEMUBHFunc *cb;
330
+ void *arg;
331
+} CtxRunData;
332
+
333
+static void ctx_run_bh_cb(void *opaque)
334
+{
335
+ CtxRunData *data = opaque;
336
+
337
+ data->cb(data->arg);
338
+ qemu_event_set(&done_event);
339
+}
340
+
341
+static void ctx_run(int i, QEMUBHFunc *cb, void *opaque)
342
+{
343
+ CtxRunData data = {
344
+ .cb = cb,
345
+ .arg = opaque
346
+ };
347
+
348
+ qemu_event_reset(&done_event);
349
+ aio_bh_schedule_oneshot(ctx[i], ctx_run_bh_cb, &data);
350
+ qemu_event_wait(&done_event);
351
+}
352
+
353
+/* Starting the iothreads. */
354
+
355
+static void set_id_cb(void *opaque)
356
+{
357
+ int *i = opaque;
358
+
359
+ id = *i;
360
+}
361
+
362
+static void create_aio_contexts(void)
363
+{
364
+ int i;
365
+
366
+ for (i = 0; i < NUM_CONTEXTS; i++) {
367
+ threads[i] = iothread_new();
368
+ ctx[i] = iothread_get_aio_context(threads[i]);
369
+ }
370
+
371
+ qemu_event_init(&done_event, false);
372
+ for (i = 0; i < NUM_CONTEXTS; i++) {
373
+ ctx_run(i, set_id_cb, &i);
374
+ }
375
+}
376
+
377
+/* Stopping the iothreads. */
378
+
379
+static void join_aio_contexts(void)
380
+{
381
+ int i;
382
+
383
+ for (i = 0; i < NUM_CONTEXTS; i++) {
384
+ aio_context_ref(ctx[i]);
385
+ }
386
+ for (i = 0; i < NUM_CONTEXTS; i++) {
387
+ iothread_join(threads[i]);
388
+ }
389
+ for (i = 0; i < NUM_CONTEXTS; i++) {
390
+ aio_context_unref(ctx[i]);
391
+ }
392
+ qemu_event_destroy(&done_event);
393
+}
394
+
395
+/* Basic test for the stuff above. */
396
+
397
+static void test_lifecycle(void)
398
+{
399
+ create_aio_contexts();
400
+ join_aio_contexts();
401
+}
402
+
403
+/* aio_co_schedule test. */
404
+
405
+static Coroutine *to_schedule[NUM_CONTEXTS];
406
+
407
+static bool now_stopping;
408
+
409
+static int count_retry;
410
+static int count_here;
411
+static int count_other;
412
+
413
+static bool schedule_next(int n)
414
+{
415
+ Coroutine *co;
416
+
417
+ co = atomic_xchg(&to_schedule[n], NULL);
418
+ if (!co) {
419
+ atomic_inc(&count_retry);
420
+ return false;
421
+ }
422
+
423
+ if (n == id) {
424
+ atomic_inc(&count_here);
425
+ } else {
426
+ atomic_inc(&count_other);
427
+ }
428
+
429
+ aio_co_schedule(ctx[n], co);
430
+ return true;
431
+}
432
+
433
+static void finish_cb(void *opaque)
434
+{
435
+ schedule_next(id);
436
+}
437
+
438
+static coroutine_fn void test_multi_co_schedule_entry(void *opaque)
439
+{
440
+ g_assert(to_schedule[id] == NULL);
441
+ atomic_mb_set(&to_schedule[id], qemu_coroutine_self());
442
+
443
+ while (!atomic_mb_read(&now_stopping)) {
444
+ int n;
445
+
446
+ n = g_test_rand_int_range(0, NUM_CONTEXTS);
447
+ schedule_next(n);
448
+ qemu_coroutine_yield();
449
+
450
+ g_assert(to_schedule[id] == NULL);
451
+ atomic_mb_set(&to_schedule[id], qemu_coroutine_self());
452
+ }
453
+}
454
+
455
+
456
+static void test_multi_co_schedule(int seconds)
457
+{
458
+ int i;
459
+
460
+ count_here = count_other = count_retry = 0;
461
+ now_stopping = false;
462
+
463
+ create_aio_contexts();
464
+ for (i = 0; i < NUM_CONTEXTS; i++) {
465
+ Coroutine *co1 = qemu_coroutine_create(test_multi_co_schedule_entry, NULL);
466
+ aio_co_schedule(ctx[i], co1);
467
+ }
468
+
469
+ g_usleep(seconds * 1000000);
470
+
471
+ atomic_mb_set(&now_stopping, true);
472
+ for (i = 0; i < NUM_CONTEXTS; i++) {
473
+ ctx_run(i, finish_cb, NULL);
474
+ to_schedule[i] = NULL;
475
+ }
476
+
477
+ join_aio_contexts();
478
+ g_test_message("scheduled %d, queued %d, retry %d, total %d\n",
479
+ count_other, count_here, count_retry,
480
+ count_here + count_other + count_retry);
481
+}
482
+
483
+static void test_multi_co_schedule_1(void)
484
+{
485
+ test_multi_co_schedule(1);
486
+}
487
+
488
+static void test_multi_co_schedule_10(void)
489
+{
490
+ test_multi_co_schedule(10);
491
+}
492
+
493
+/* End of tests. */
494
+
495
+int main(int argc, char **argv)
496
+{
497
+ init_clocks();
498
+
499
+ g_test_init(&argc, &argv, NULL);
500
+ g_test_add_func("/aio/multi/lifecycle", test_lifecycle);
501
+ if (g_test_quick()) {
502
+ g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_1);
503
+ } else {
504
+ g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_10);
505
+ }
506
+ return g_test_run();
507
+}
508
diff --git a/util/async.c b/util/async.c
63
index XXXXXXX..XXXXXXX 100644
509
index XXXXXXX..XXXXXXX 100644
64
--- a/block/block-copy.c
510
--- a/util/async.c
65
+++ b/block/block-copy.c
511
+++ b/util/async.c
66
@@ -XXX,XX +XXX,XX @@
512
@@ -XXX,XX +XXX,XX @@
67
static coroutine_fn int block_copy_task_entry(AioTask *task);
513
#include "qemu/main-loop.h"
68
514
#include "qemu/atomic.h"
69
typedef struct BlockCopyCallState {
515
#include "block/raw-aio.h"
70
- /* IN parameters */
516
+#include "qemu/coroutine_int.h"
71
+ /* IN parameters. Initialized in block_copy_async() and never changed. */
517
+#include "trace.h"
72
BlockCopyState *s;
518
73
int64_t offset;
519
/***********************************************************/
74
int64_t bytes;
520
/* bottom halves (can be seen as timers which expire ASAP) */
75
+ BlockCopyAsyncCallbackFunc cb;
521
@@ -XXX,XX +XXX,XX @@ aio_ctx_finalize(GSource *source)
76
+ void *cb_opaque;
522
}
77
+
523
#endif
78
+ /* Coroutine where async block-copy is running */
524
79
+ Coroutine *co;
525
+ assert(QSLIST_EMPTY(&ctx->scheduled_coroutines));
80
526
+ qemu_bh_delete(ctx->co_schedule_bh);
81
/* State */
527
+
82
- bool failed;
528
qemu_lockcnt_lock(&ctx->list_lock);
83
+ int ret;
529
assert(!qemu_lockcnt_count(&ctx->list_lock));
84
+ bool finished;
530
while (ctx->first_bh) {
85
531
@@ -XXX,XX +XXX,XX @@ static bool event_notifier_poll(void *opaque)
86
/* OUT parameters */
532
return atomic_read(&ctx->notified);
87
bool error_is_read;
88
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
89
90
ret = block_copy_do_copy(t->s, t->offset, t->bytes, t->zeroes,
91
&error_is_read);
92
- if (ret < 0 && !t->call_state->failed) {
93
- t->call_state->failed = true;
94
+ if (ret < 0 && !t->call_state->ret) {
95
+ t->call_state->ret = ret;
96
t->call_state->error_is_read = error_is_read;
97
} else {
98
progress_work_done(t->s->progress, t->bytes);
99
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
100
*/
101
} while (ret > 0);
102
103
+ call_state->finished = true;
104
+
105
+ if (call_state->cb) {
106
+ call_state->cb(call_state->cb_opaque);
107
+ }
108
+
109
return ret;
110
}
533
}
111
534
112
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
535
+static void co_schedule_bh_cb(void *opaque)
113
return ret;
536
+{
537
+ AioContext *ctx = opaque;
538
+ QSLIST_HEAD(, Coroutine) straight, reversed;
539
+
540
+ QSLIST_MOVE_ATOMIC(&reversed, &ctx->scheduled_coroutines);
541
+ QSLIST_INIT(&straight);
542
+
543
+ while (!QSLIST_EMPTY(&reversed)) {
544
+ Coroutine *co = QSLIST_FIRST(&reversed);
545
+ QSLIST_REMOVE_HEAD(&reversed, co_scheduled_next);
546
+ QSLIST_INSERT_HEAD(&straight, co, co_scheduled_next);
547
+ }
548
+
549
+ while (!QSLIST_EMPTY(&straight)) {
550
+ Coroutine *co = QSLIST_FIRST(&straight);
551
+ QSLIST_REMOVE_HEAD(&straight, co_scheduled_next);
552
+ trace_aio_co_schedule_bh_cb(ctx, co);
553
+ qemu_coroutine_enter(co);
554
+ }
555
+}
556
+
557
AioContext *aio_context_new(Error **errp)
558
{
559
int ret;
560
@@ -XXX,XX +XXX,XX @@ AioContext *aio_context_new(Error **errp)
561
}
562
g_source_set_can_recurse(&ctx->source, true);
563
qemu_lockcnt_init(&ctx->list_lock);
564
+
565
+ ctx->co_schedule_bh = aio_bh_new(ctx, co_schedule_bh_cb, ctx);
566
+ QSLIST_INIT(&ctx->scheduled_coroutines);
567
+
568
aio_set_event_notifier(ctx, &ctx->notifier,
569
false,
570
(EventNotifierHandler *)
571
@@ -XXX,XX +XXX,XX @@ fail:
572
return NULL;
114
}
573
}
115
574
116
+static void coroutine_fn block_copy_async_co_entry(void *opaque)
575
+void aio_co_schedule(AioContext *ctx, Coroutine *co)
117
+{
576
+{
118
+ block_copy_common(opaque);
577
+ trace_aio_co_schedule(ctx, co);
119
+}
578
+ QSLIST_INSERT_HEAD_ATOMIC(&ctx->scheduled_coroutines,
120
+
579
+ co, co_scheduled_next);
121
+BlockCopyCallState *block_copy_async(BlockCopyState *s,
580
+ qemu_bh_schedule(ctx->co_schedule_bh);
122
+ int64_t offset, int64_t bytes,
581
+}
123
+ BlockCopyAsyncCallbackFunc cb,
582
+
124
+ void *cb_opaque)
583
+void aio_co_wake(struct Coroutine *co)
125
+{
584
+{
126
+ BlockCopyCallState *call_state = g_new(BlockCopyCallState, 1);
585
+ AioContext *ctx;
127
+
586
+
128
+ *call_state = (BlockCopyCallState) {
587
+ /* Read coroutine before co->ctx. Matches smp_wmb in
129
+ .s = s,
588
+ * qemu_coroutine_enter.
130
+ .offset = offset,
589
+ */
131
+ .bytes = bytes,
590
+ smp_read_barrier_depends();
132
+ .cb = cb,
591
+ ctx = atomic_read(&co->ctx);
133
+ .cb_opaque = cb_opaque,
592
+
134
+
593
+ if (ctx != qemu_get_current_aio_context()) {
135
+ .co = qemu_coroutine_create(block_copy_async_co_entry, call_state),
594
+ aio_co_schedule(ctx, co);
136
+ };
137
+
138
+ qemu_coroutine_enter(call_state->co);
139
+
140
+ return call_state;
141
+}
142
+
143
+void block_copy_call_free(BlockCopyCallState *call_state)
144
+{
145
+ if (!call_state) {
146
+ return;
595
+ return;
147
+ }
596
+ }
148
+
597
+
149
+ assert(call_state->finished);
598
+ if (qemu_in_coroutine()) {
150
+ g_free(call_state);
599
+ Coroutine *self = qemu_coroutine_self();
151
+}
600
+ assert(self != co);
152
+
601
+ QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, co, co_queue_next);
153
+bool block_copy_call_finished(BlockCopyCallState *call_state)
602
+ } else {
154
+{
603
+ aio_context_acquire(ctx);
155
+ return call_state->finished;
604
+ qemu_coroutine_enter(co);
156
+}
605
+ aio_context_release(ctx);
157
+
606
+ }
158
+bool block_copy_call_succeeded(BlockCopyCallState *call_state)
607
+}
159
+{
608
+
160
+ return call_state->finished && call_state->ret == 0;
609
void aio_context_ref(AioContext *ctx)
161
+}
162
+
163
+bool block_copy_call_failed(BlockCopyCallState *call_state)
164
+{
165
+ return call_state->finished && call_state->ret < 0;
166
+}
167
+
168
+int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read)
169
+{
170
+ assert(call_state->finished);
171
+ if (error_is_read) {
172
+ *error_is_read = call_state->error_is_read;
173
+ }
174
+ return call_state->ret;
175
+}
176
+
177
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
178
{
610
{
179
return s->copy_bitmap;
611
g_source_ref(&ctx->source);
612
diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
613
index XXXXXXX..XXXXXXX 100644
614
--- a/util/qemu-coroutine.c
615
+++ b/util/qemu-coroutine.c
616
@@ -XXX,XX +XXX,XX @@
617
#include "qemu/atomic.h"
618
#include "qemu/coroutine.h"
619
#include "qemu/coroutine_int.h"
620
+#include "block/aio.h"
621
622
enum {
623
POOL_BATCH_SIZE = 64,
624
@@ -XXX,XX +XXX,XX @@ void qemu_coroutine_enter(Coroutine *co)
625
}
626
627
co->caller = self;
628
+ co->ctx = qemu_get_current_aio_context();
629
+
630
+ /* Store co->ctx before anything that stores co. Matches
631
+ * barrier in aio_co_wake.
632
+ */
633
+ smp_wmb();
634
+
635
ret = qemu_coroutine_switch(self, co, COROUTINE_ENTER);
636
637
qemu_co_queue_run_restart(co);
638
diff --git a/util/trace-events b/util/trace-events
639
index XXXXXXX..XXXXXXX 100644
640
--- a/util/trace-events
641
+++ b/util/trace-events
642
@@ -XXX,XX +XXX,XX @@ run_poll_handlers_end(void *ctx, bool progress) "ctx %p progress %d"
643
poll_shrink(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
644
poll_grow(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
645
646
+# util/async.c
647
+aio_co_schedule(void *ctx, void *co) "ctx %p co %p"
648
+aio_co_schedule_bh_cb(void *ctx, void *co) "ctx %p co %p"
649
+
650
# util/thread-pool.c
651
thread_pool_submit(void *pool, void *req, void *opaque) "pool %p req %p opaque %p"
652
thread_pool_complete(void *pool, void *req, void *opaque, int ret) "pool %p req %p opaque %p ret %d"
180
--
2.29.2
--
2.9.3
diff view generated by jsdifflib
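As a quick reference for the aio_co_schedule()/aio_co_wake() primitives added above, a minimal sketch of scheduling a coroutine onto an IOThread's AioContext could look as follows (the entry point, helper name and the use of an IOThread are illustrative assumptions, not code from this series):

    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"
    #include "block/aio.h"
    #include "sysemu/iothread.h"

    /* Illustrative coroutine entry point; the real work would go here. */
    static void coroutine_fn my_co_entry(void *opaque)
    {
    }

    static void run_in_iothread(IOThread *iothread)
    {
        AioContext *ctx = iothread_get_aio_context(iothread);
        Coroutine *co = qemu_coroutine_create(my_co_entry, NULL);

        /*
         * aio_co_schedule() queues the coroutine on the context's
         * scheduled_coroutines list and kicks co_schedule_bh, so the
         * coroutine runs in the IOThread's event loop rather than in
         * the calling thread.  aio_co_wake(co) would instead resume
         * @co on whatever AioContext it last ran on.
         */
        aio_co_schedule(ctx, co);
    }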
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
Unfortunately commit "iotests: handle tmpfs" breaks running iotests
3
qcow2_create2 calls this. Do not run a nested event loop, as that
4
with -nbd -nocache, as _check_o_direct tries to create
4
breaks when aio_co_wake tries to queue the coroutine on the co_queue_wakeup
5
$TEST_IMG.test_o_direct, but in the case of NBD, TEST_IMG is something like
5
list of the currently running one.
6
nbd+unix:///..., and the test fails with the message
7
6
8
qemu-img: nbd+unix:///?socket[...]test_o_direct: Protocol driver
7
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
9
'nbd' does not support image creation, and opening the image
8
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10
failed: Failed to connect to '/tmp/tmp.[...]/nbd/test_o_direct': No
9
Reviewed-by: Fam Zheng <famz@redhat.com>
11
such file or directory
10
Message-id: 20170213135235.12274-4-pbonzini@redhat.com
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
---
13
block/block-backend.c | 12 ++++++++----
14
1 file changed, 8 insertions(+), 4 deletions(-)
12
15
13
Use TEST_DIR instead.
16
diff --git a/block/block-backend.c b/block/block-backend.c
14
15
Fixes: cfdca2b9f9d4ca26bb2b2dfe8de3149092e39170
16
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
17
Message-Id: <20201218182012.47607-1-vsementsov@virtuozzo.com>
18
Signed-off-by: Max Reitz <mreitz@redhat.com>
19
---
20
tests/qemu-iotests/common.rc | 7 ++++---
21
1 file changed, 4 insertions(+), 3 deletions(-)
22
23
diff --git a/tests/qemu-iotests/common.rc b/tests/qemu-iotests/common.rc
24
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
25
--- a/tests/qemu-iotests/common.rc
18
--- a/block/block-backend.c
26
+++ b/tests/qemu-iotests/common.rc
19
+++ b/block/block-backend.c
27
@@ -XXX,XX +XXX,XX @@ _supported_cache_modes()
20
@@ -XXX,XX +XXX,XX @@ static int blk_prw(BlockBackend *blk, int64_t offset, uint8_t *buf,
28
# Check whether the filesystem supports O_DIRECT
29
_check_o_direct()
30
{
21
{
31
- $QEMU_IMG create -f raw "$TEST_IMG".test_o_direct 1M > /dev/null
22
QEMUIOVector qiov;
32
- out=$($QEMU_IO -f raw -t none -c quit "$TEST_IMG".test_o_direct 2>&1)
23
struct iovec iov;
33
- rm -f "$TEST_IMG".test_o_direct
24
- Coroutine *co;
34
+ testfile="$TEST_DIR"/_check_o_direct
25
BlkRwCo rwco;
35
+ $QEMU_IMG create -f raw "$testfile" 1M > /dev/null
26
36
+ out=$($QEMU_IO -f raw -t none -c quit "$testfile" 2>&1)
27
iov = (struct iovec) {
37
+ rm -f "$testfile"
28
@@ -XXX,XX +XXX,XX @@ static int blk_prw(BlockBackend *blk, int64_t offset, uint8_t *buf,
38
29
.ret = NOT_DONE,
39
[[ "$out" != *"O_DIRECT"* ]]
30
};
31
32
- co = qemu_coroutine_create(co_entry, &rwco);
33
- qemu_coroutine_enter(co);
34
- BDRV_POLL_WHILE(blk_bs(blk), rwco.ret == NOT_DONE);
35
+ if (qemu_in_coroutine()) {
36
+ /* Fast-path if already in coroutine context */
37
+ co_entry(&rwco);
38
+ } else {
39
+ Coroutine *co = qemu_coroutine_create(co_entry, &rwco);
40
+ qemu_coroutine_enter(co);
41
+ BDRV_POLL_WHILE(blk_bs(blk), rwco.ret == NOT_DONE);
42
+ }
43
44
return rwco.ret;
40
}
45
}
41
--
46
--
42
2.29.2
47
2.9.3
43
48
44
49
diff view generated by jsdifflib
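The blk_prw() change above is an instance of a common block-layer shape: run the body directly when already in coroutine context, otherwise spawn a coroutine and poll until it finishes. A condensed, hypothetical sketch of that shape (the names and the completion sentinel are made up for the example):

    #define MY_IN_PROGRESS 0x7fffffff   /* hypothetical sentinel value */

    typedef struct {
        BlockBackend *blk;
        int ret;
    } MySyncData;

    static void coroutine_fn my_sync_entry(void *opaque)
    {
        MySyncData *data = opaque;
        data->ret = 0;                  /* the real coroutine_fn work goes here */
    }

    static int my_sync_call(BlockBackend *blk)
    {
        MySyncData data = { .blk = blk, .ret = MY_IN_PROGRESS };

        if (qemu_in_coroutine()) {
            /* Fast path: already in coroutine context, no nested event loop. */
            my_sync_entry(&data);
        } else {
            Coroutine *co = qemu_coroutine_create(my_sync_entry, &data);
            qemu_coroutine_enter(co);
            BDRV_POLL_WHILE(blk_bs(blk), data.ret == MY_IN_PROGRESS);
        }
        return data.ret;
    }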
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
Refactor the common path to take a BlockCopyCallState pointer as its parameter, to
3
Once the thread pool starts using aio_co_wake, it will also need
4
prepare it for use in asynchronous block-copy (at least, we'll need to
4
qemu_get_current_aio_context(). Make test-thread-pool create
5
run block-copy in a coroutine, passing all of the parameters as one
5
an AioContext with qemu_init_main_loop, so that stubs/iothread.c
6
pointer).
6
and tests/iothread.c can provide the rest.
7
7
8
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
9
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10
Message-Id: <20210116214705.822267-3-vsementsov@virtuozzo.com>
10
Reviewed-by: Fam Zheng <famz@redhat.com>
11
Signed-off-by: Max Reitz <mreitz@redhat.com>
11
Message-id: 20170213135235.12274-5-pbonzini@redhat.com
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
---
13
---
13
block/block-copy.c | 51 ++++++++++++++++++++++++++++++++++------------
14
tests/test-thread-pool.c | 12 +++---------
14
1 file changed, 38 insertions(+), 13 deletions(-)
15
1 file changed, 3 insertions(+), 9 deletions(-)
15
16
16
diff --git a/block/block-copy.c b/block/block-copy.c
17
diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/block/block-copy.c
19
--- a/tests/test-thread-pool.c
19
+++ b/block/block-copy.c
20
+++ b/tests/test-thread-pool.c
20
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@
21
static coroutine_fn int block_copy_task_entry(AioTask *task);
22
#include "qapi/error.h"
22
23
#include "qemu/timer.h"
23
typedef struct BlockCopyCallState {
24
#include "qemu/error-report.h"
24
+ /* IN parameters */
25
+#include "qemu/main-loop.h"
25
+ BlockCopyState *s;
26
26
+ int64_t offset;
27
static AioContext *ctx;
27
+ int64_t bytes;
28
static ThreadPool *pool;
28
+
29
@@ -XXX,XX +XXX,XX @@ static void test_cancel_async(void)
29
+ /* State */
30
int main(int argc, char **argv)
30
bool failed;
31
+
32
+ /* OUT parameters */
33
bool error_is_read;
34
} BlockCopyCallState;
35
36
@@ -XXX,XX +XXX,XX @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
37
* Returns 1 if dirty clusters found and successfully copied, 0 if no dirty
38
* clusters found and -errno on failure.
39
*/
40
-static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
41
- int64_t offset, int64_t bytes,
42
- bool *error_is_read)
43
+static int coroutine_fn
44
+block_copy_dirty_clusters(BlockCopyCallState *call_state)
45
{
46
+ BlockCopyState *s = call_state->s;
47
+ int64_t offset = call_state->offset;
48
+ int64_t bytes = call_state->bytes;
49
+
50
int ret = 0;
51
bool found_dirty = false;
52
int64_t end = offset + bytes;
53
AioTaskPool *aio = NULL;
54
- BlockCopyCallState call_state = {false, false};
55
56
/*
57
* block_copy() user is responsible for keeping source and target in same
58
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
59
BlockCopyTask *task;
60
int64_t status_bytes;
61
62
- task = block_copy_task_create(s, &call_state, offset, bytes);
63
+ task = block_copy_task_create(s, call_state, offset, bytes);
64
if (!task) {
65
/* No more dirty bits in the bitmap */
66
trace_block_copy_skip_range(s, offset, bytes);
67
@@ -XXX,XX +XXX,XX @@ out:
68
69
aio_task_pool_free(aio);
70
}
71
- if (error_is_read && ret < 0) {
72
- *error_is_read = call_state.error_is_read;
73
- }
74
75
return ret < 0 ? ret : found_dirty;
76
}
77
78
/*
79
- * block_copy
80
+ * block_copy_common
81
*
82
* Copy requested region, accordingly to dirty bitmap.
83
* Collaborate with parallel block_copy requests: if they succeed it will help
84
@@ -XXX,XX +XXX,XX @@ out:
85
* it means that some I/O operation failed in context of _this_ block_copy call,
86
* not some parallel operation.
87
*/
88
-int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
89
- bool *error_is_read)
90
+static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
91
{
31
{
92
int ret;
32
int ret;
93
33
- Error *local_error = NULL;
94
do {
34
95
- ret = block_copy_dirty_clusters(s, offset, bytes, error_is_read);
35
- init_clocks();
96
+ ret = block_copy_dirty_clusters(call_state);
36
-
97
37
- ctx = aio_context_new(&local_error);
98
if (ret == 0) {
38
- if (!ctx) {
99
- ret = block_copy_wait_one(s, offset, bytes);
39
- error_reportf_err(local_error, "Failed to create AIO Context: ");
100
+ ret = block_copy_wait_one(call_state->s, call_state->offset,
40
- exit(1);
101
+ call_state->bytes);
41
- }
102
}
42
+ qemu_init_main_loop(&error_abort);
103
43
+ ctx = qemu_get_current_aio_context();
104
/*
44
pool = aio_get_thread_pool(ctx);
105
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
45
46
g_test_init(&argc, &argv, NULL);
47
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
48
49
ret = g_test_run();
50
51
- aio_context_unref(ctx);
106
return ret;
52
return ret;
107
}
53
}
108
109
+int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
110
+ bool *error_is_read)
111
+{
112
+ BlockCopyCallState call_state = {
113
+ .s = s,
114
+ .offset = start,
115
+ .bytes = bytes,
116
+ };
117
+
118
+ int ret = block_copy_common(&call_state);
119
+
120
+ if (error_is_read && ret < 0) {
121
+ *error_is_read = call_state.error_is_read;
122
+ }
123
+
124
+ return ret;
125
+}
126
+
127
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
128
{
129
return s->copy_bitmap;
130
--
54
--
131
2.29.2
55
2.9.3
132
56
133
57
diff view generated by jsdifflib
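The point of funnelling everything through BlockCopyCallState becomes clearer with the asynchronous entry point added earlier in this series. A rough caller could look like the sketch below; the callback, the polling loop and the completion flag are illustrative, and the exact block_copy_async() signature differs between revisions of the series:

    static void my_copy_done(void *opaque)
    {
        bool *done = opaque;            /* hypothetical completion flag */
        *done = true;
    }

    static int copy_region(BlockCopyState *bcs, int64_t offset, int64_t bytes)
    {
        bool done = false;
        bool error_is_read;
        BlockCopyCallState *call;
        int ret;

        call = block_copy_async(bcs, offset, bytes, my_copy_done, &done);

        /* A real caller would yield and be woken by the callback; polling
         * the current AioContext just keeps the sketch short. */
        while (!block_copy_call_finished(call)) {
            aio_poll(qemu_get_current_aio_context(), true);
        }

        ret = block_copy_call_status(call, &error_is_read);
        block_copy_call_free(call);
        return ret;
    }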
1
Signed-off-by: Max Reitz <mreitz@redhat.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
Reviewed-by: Eric Blake <eblake@redhat.com>
2
3
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
3
This is in preparation for making qio_channel_yield work on
4
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
4
AioContexts other than the main one.
5
Message-Id: <20210118105720.14824-4-mreitz@redhat.com>
5
6
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
7
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
9
Reviewed-by: Fam Zheng <famz@redhat.com>
10
Message-id: 20170213135235.12274-6-pbonzini@redhat.com
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
6
---
12
---
7
tests/qemu-iotests/124 | 8 +-------
13
include/io/channel.h | 25 +++++++++++++++++++++++++
8
tests/qemu-iotests/iotests.py | 11 +++++++----
14
io/channel-command.c | 13 +++++++++++++
9
2 files changed, 8 insertions(+), 11 deletions(-)
15
io/channel-file.c | 11 +++++++++++
10
16
io/channel-socket.c | 16 +++++++++++-----
11
diff --git a/tests/qemu-iotests/124 b/tests/qemu-iotests/124
17
io/channel-tls.c | 12 ++++++++++++
12
index XXXXXXX..XXXXXXX 100755
18
io/channel-watch.c | 6 ++++++
13
--- a/tests/qemu-iotests/124
19
io/channel.c | 11 +++++++++++
14
+++ b/tests/qemu-iotests/124
20
7 files changed, 89 insertions(+), 5 deletions(-)
21
22
diff --git a/include/io/channel.h b/include/io/channel.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/include/io/channel.h
25
+++ b/include/io/channel.h
15
@@ -XXX,XX +XXX,XX @@
26
@@ -XXX,XX +XXX,XX @@
16
27
17
import os
28
#include "qemu-common.h"
18
import iotests
29
#include "qom/object.h"
19
+from iotests import try_remove
30
+#include "block/aio.h"
20
31
21
32
#define TYPE_QIO_CHANNEL "qio-channel"
22
def io_write_patterns(img, patterns):
33
#define QIO_CHANNEL(obj) \
23
@@ -XXX,XX +XXX,XX @@ def io_write_patterns(img, patterns):
34
@@ -XXX,XX +XXX,XX @@ struct QIOChannelClass {
24
iotests.qemu_io('-c', 'write -P%s %s %s' % pattern, img)
35
off_t offset,
25
36
int whence,
26
37
Error **errp);
27
-def try_remove(img):
38
+ void (*io_set_aio_fd_handler)(QIOChannel *ioc,
28
- try:
39
+ AioContext *ctx,
29
- os.remove(img)
40
+ IOHandler *io_read,
30
- except OSError:
41
+ IOHandler *io_write,
31
- pass
42
+ void *opaque);
32
-
43
};
33
-
44
34
def transaction_action(action, **kwargs):
45
/* General I/O handling functions */
35
return {
46
@@ -XXX,XX +XXX,XX @@ void qio_channel_yield(QIOChannel *ioc,
36
'type': action,
47
void qio_channel_wait(QIOChannel *ioc,
37
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
48
GIOCondition condition);
38
index XXXXXXX..XXXXXXX 100644
49
39
--- a/tests/qemu-iotests/iotests.py
50
+/**
40
+++ b/tests/qemu-iotests/iotests.py
51
+ * qio_channel_set_aio_fd_handler:
41
@@ -XXX,XX +XXX,XX @@ class FilePath:
52
+ * @ioc: the channel object
42
return False
53
+ * @ctx: the AioContext to set the handlers on
43
54
+ * @io_read: the read handler
44
55
+ * @io_write: the write handler
45
+def try_remove(img):
56
+ * @opaque: the opaque value passed to the handler
46
+ try:
57
+ *
47
+ os.remove(img)
58
+ * This is used internally by qio_channel_yield(). It can
48
+ except OSError:
59
+ * be used by channel implementations to forward the handlers
49
+ pass
60
+ * to another channel (e.g. from #QIOChannelTLS to the
50
+
61
+ * underlying socket).
51
def file_path_remover():
62
+ */
52
for path in reversed(file_path_remover.paths):
63
+void qio_channel_set_aio_fd_handler(QIOChannel *ioc,
53
- try:
64
+ AioContext *ctx,
54
- os.remove(path)
65
+ IOHandler *io_read,
55
- except OSError:
66
+ IOHandler *io_write,
56
- pass
67
+ void *opaque);
57
+ try_remove(path)
68
+
58
69
#endif /* QIO_CHANNEL_H */
59
70
diff --git a/io/channel-command.c b/io/channel-command.c
60
def file_path(*names, base_dir=test_dir):
71
index XXXXXXX..XXXXXXX 100644
72
--- a/io/channel-command.c
73
+++ b/io/channel-command.c
74
@@ -XXX,XX +XXX,XX @@ static int qio_channel_command_close(QIOChannel *ioc,
75
}
76
77
78
+static void qio_channel_command_set_aio_fd_handler(QIOChannel *ioc,
79
+ AioContext *ctx,
80
+ IOHandler *io_read,
81
+ IOHandler *io_write,
82
+ void *opaque)
83
+{
84
+ QIOChannelCommand *cioc = QIO_CHANNEL_COMMAND(ioc);
85
+ aio_set_fd_handler(ctx, cioc->readfd, false, io_read, NULL, NULL, opaque);
86
+ aio_set_fd_handler(ctx, cioc->writefd, false, NULL, io_write, NULL, opaque);
87
+}
88
+
89
+
90
static GSource *qio_channel_command_create_watch(QIOChannel *ioc,
91
GIOCondition condition)
92
{
93
@@ -XXX,XX +XXX,XX @@ static void qio_channel_command_class_init(ObjectClass *klass,
94
ioc_klass->io_set_blocking = qio_channel_command_set_blocking;
95
ioc_klass->io_close = qio_channel_command_close;
96
ioc_klass->io_create_watch = qio_channel_command_create_watch;
97
+ ioc_klass->io_set_aio_fd_handler = qio_channel_command_set_aio_fd_handler;
98
}
99
100
static const TypeInfo qio_channel_command_info = {
101
diff --git a/io/channel-file.c b/io/channel-file.c
102
index XXXXXXX..XXXXXXX 100644
103
--- a/io/channel-file.c
104
+++ b/io/channel-file.c
105
@@ -XXX,XX +XXX,XX @@ static int qio_channel_file_close(QIOChannel *ioc,
106
}
107
108
109
+static void qio_channel_file_set_aio_fd_handler(QIOChannel *ioc,
110
+ AioContext *ctx,
111
+ IOHandler *io_read,
112
+ IOHandler *io_write,
113
+ void *opaque)
114
+{
115
+ QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
116
+ aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write, NULL, opaque);
117
+}
118
+
119
static GSource *qio_channel_file_create_watch(QIOChannel *ioc,
120
GIOCondition condition)
121
{
122
@@ -XXX,XX +XXX,XX @@ static void qio_channel_file_class_init(ObjectClass *klass,
123
ioc_klass->io_seek = qio_channel_file_seek;
124
ioc_klass->io_close = qio_channel_file_close;
125
ioc_klass->io_create_watch = qio_channel_file_create_watch;
126
+ ioc_klass->io_set_aio_fd_handler = qio_channel_file_set_aio_fd_handler;
127
}
128
129
static const TypeInfo qio_channel_file_info = {
130
diff --git a/io/channel-socket.c b/io/channel-socket.c
131
index XXXXXXX..XXXXXXX 100644
132
--- a/io/channel-socket.c
133
+++ b/io/channel-socket.c
134
@@ -XXX,XX +XXX,XX @@ qio_channel_socket_set_blocking(QIOChannel *ioc,
135
qemu_set_block(sioc->fd);
136
} else {
137
qemu_set_nonblock(sioc->fd);
138
-#ifdef WIN32
139
- WSAEventSelect(sioc->fd, ioc->event,
140
- FD_READ | FD_ACCEPT | FD_CLOSE |
141
- FD_CONNECT | FD_WRITE | FD_OOB);
142
-#endif
143
}
144
return 0;
145
}
146
@@ -XXX,XX +XXX,XX @@ qio_channel_socket_shutdown(QIOChannel *ioc,
147
return 0;
148
}
149
150
+static void qio_channel_socket_set_aio_fd_handler(QIOChannel *ioc,
151
+ AioContext *ctx,
152
+ IOHandler *io_read,
153
+ IOHandler *io_write,
154
+ void *opaque)
155
+{
156
+ QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
157
+ aio_set_fd_handler(ctx, sioc->fd, false, io_read, io_write, NULL, opaque);
158
+}
159
+
160
static GSource *qio_channel_socket_create_watch(QIOChannel *ioc,
161
GIOCondition condition)
162
{
163
@@ -XXX,XX +XXX,XX @@ static void qio_channel_socket_class_init(ObjectClass *klass,
164
ioc_klass->io_set_cork = qio_channel_socket_set_cork;
165
ioc_klass->io_set_delay = qio_channel_socket_set_delay;
166
ioc_klass->io_create_watch = qio_channel_socket_create_watch;
167
+ ioc_klass->io_set_aio_fd_handler = qio_channel_socket_set_aio_fd_handler;
168
}
169
170
static const TypeInfo qio_channel_socket_info = {
171
diff --git a/io/channel-tls.c b/io/channel-tls.c
172
index XXXXXXX..XXXXXXX 100644
173
--- a/io/channel-tls.c
174
+++ b/io/channel-tls.c
175
@@ -XXX,XX +XXX,XX @@ static int qio_channel_tls_close(QIOChannel *ioc,
176
return qio_channel_close(tioc->master, errp);
177
}
178
179
+static void qio_channel_tls_set_aio_fd_handler(QIOChannel *ioc,
180
+ AioContext *ctx,
181
+ IOHandler *io_read,
182
+ IOHandler *io_write,
183
+ void *opaque)
184
+{
185
+ QIOChannelTLS *tioc = QIO_CHANNEL_TLS(ioc);
186
+
187
+ qio_channel_set_aio_fd_handler(tioc->master, ctx, io_read, io_write, opaque);
188
+}
189
+
190
static GSource *qio_channel_tls_create_watch(QIOChannel *ioc,
191
GIOCondition condition)
192
{
193
@@ -XXX,XX +XXX,XX @@ static void qio_channel_tls_class_init(ObjectClass *klass,
194
ioc_klass->io_close = qio_channel_tls_close;
195
ioc_klass->io_shutdown = qio_channel_tls_shutdown;
196
ioc_klass->io_create_watch = qio_channel_tls_create_watch;
197
+ ioc_klass->io_set_aio_fd_handler = qio_channel_tls_set_aio_fd_handler;
198
}
199
200
static const TypeInfo qio_channel_tls_info = {
201
diff --git a/io/channel-watch.c b/io/channel-watch.c
202
index XXXXXXX..XXXXXXX 100644
203
--- a/io/channel-watch.c
204
+++ b/io/channel-watch.c
205
@@ -XXX,XX +XXX,XX @@ GSource *qio_channel_create_socket_watch(QIOChannel *ioc,
206
GSource *source;
207
QIOChannelSocketSource *ssource;
208
209
+#ifdef WIN32
210
+ WSAEventSelect(socket, ioc->event,
211
+ FD_READ | FD_ACCEPT | FD_CLOSE |
212
+ FD_CONNECT | FD_WRITE | FD_OOB);
213
+#endif
214
+
215
source = g_source_new(&qio_channel_socket_source_funcs,
216
sizeof(QIOChannelSocketSource));
217
ssource = (QIOChannelSocketSource *)source;
218
diff --git a/io/channel.c b/io/channel.c
219
index XXXXXXX..XXXXXXX 100644
220
--- a/io/channel.c
221
+++ b/io/channel.c
222
@@ -XXX,XX +XXX,XX @@ GSource *qio_channel_create_watch(QIOChannel *ioc,
223
}
224
225
226
+void qio_channel_set_aio_fd_handler(QIOChannel *ioc,
227
+ AioContext *ctx,
228
+ IOHandler *io_read,
229
+ IOHandler *io_write,
230
+ void *opaque)
231
+{
232
+ QIOChannelClass *klass = QIO_CHANNEL_GET_CLASS(ioc);
233
+
234
+ klass->io_set_aio_fd_handler(ioc, ctx, io_read, io_write, opaque);
235
+}
236
+
237
guint qio_channel_add_watch(QIOChannel *ioc,
238
GIOCondition condition,
239
QIOChannelFunc func,
61
--
240
--
62
2.29.2
241
2.9.3
63
242
64
243
diff view generated by jsdifflib
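Every io_set_aio_fd_handler implementation above reduces to one aio_set_fd_handler() call per file descriptor. For a hypothetical channel type wrapping a single fd (not one of the real QIOChannel subclasses), the hook would look roughly like this:

    typedef struct MyChannel {
        QIOChannel parent;
        int fd;
    } MyChannel;

    static void my_channel_set_aio_fd_handler(QIOChannel *ioc,
                                              AioContext *ctx,
                                              IOHandler *io_read,
                                              IOHandler *io_write,
                                              void *opaque)
    {
        MyChannel *mioc = (MyChannel *)ioc;

        /* Arguments: context, fd, is_external=false (the handler is not
         * disabled by aio_disable_external()), read handler, write handler,
         * no poll handler, opaque passed back to the handlers. */
        aio_set_fd_handler(ctx, mioc->fd, false, io_read, io_write, NULL, opaque);
    }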
1
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
Add support for the recently introduced functions
3
Support separate coroutines for reading and writing, and place the
4
bdrv_co_preadv_part()
4
read/write handlers on the AioContext that the QIOChannel is registered
5
and
5
with.
6
bdrv_co_pwritev_part()
6
7
to the COR-filter driver.
7
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
8
8
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
9
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
9
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
10
Reviewed-by: Fam Zheng <famz@redhat.com>
11
Message-Id: <20201216061703.70908-2-vsementsov@virtuozzo.com>
11
Message-id: 20170213135235.12274-7-pbonzini@redhat.com
12
Signed-off-by: Max Reitz <mreitz@redhat.com>
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
13
---
13
---
14
block/copy-on-read.c | 28 ++++++++++++++++------------
14
include/io/channel.h | 47 ++++++++++++++++++++++++++--
15
1 file changed, 16 insertions(+), 12 deletions(-)
15
io/channel.c | 86 +++++++++++++++++++++++++++++++++++++++-------------
16
16
2 files changed, 109 insertions(+), 24 deletions(-)
17
diff --git a/block/copy-on-read.c b/block/copy-on-read.c
17
18
diff --git a/include/io/channel.h b/include/io/channel.h
18
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
19
--- a/block/copy-on-read.c
20
--- a/include/io/channel.h
20
+++ b/block/copy-on-read.c
21
+++ b/include/io/channel.h
21
@@ -XXX,XX +XXX,XX @@ static int64_t cor_getlength(BlockDriverState *bs)
22
@@ -XXX,XX +XXX,XX @@
23
24
#include "qemu-common.h"
25
#include "qom/object.h"
26
+#include "qemu/coroutine.h"
27
#include "block/aio.h"
28
29
#define TYPE_QIO_CHANNEL "qio-channel"
30
@@ -XXX,XX +XXX,XX @@ struct QIOChannel {
31
Object parent;
32
unsigned int features; /* bitmask of QIOChannelFeatures */
33
char *name;
34
+ AioContext *ctx;
35
+ Coroutine *read_coroutine;
36
+ Coroutine *write_coroutine;
37
#ifdef _WIN32
38
HANDLE event; /* For use with GSource on Win32 */
39
#endif
40
@@ -XXX,XX +XXX,XX @@ guint qio_channel_add_watch(QIOChannel *ioc,
41
42
43
/**
44
+ * qio_channel_attach_aio_context:
45
+ * @ioc: the channel object
46
+ * @ctx: the #AioContext to set the handlers on
47
+ *
48
+ * Request that qio_channel_yield() sets I/O handlers on
49
+ * the given #AioContext. If @ctx is %NULL, qio_channel_yield()
50
+ * uses QEMU's main thread event loop.
51
+ *
52
+ * You can move a #QIOChannel from one #AioContext to another even if
53
+ * I/O handlers are set for a coroutine. However, #QIOChannel provides
54
+ * no synchronization between the calls to qio_channel_yield() and
55
+ * qio_channel_attach_aio_context().
56
+ *
57
+ * Therefore you should first call qio_channel_detach_aio_context()
58
+ * to ensure that the coroutine is not entered concurrently. Then,
59
+ * while the coroutine has yielded, call qio_channel_attach_aio_context(),
60
+ * and then aio_co_schedule() to place the coroutine on the new
61
+ * #AioContext. The calls to qio_channel_detach_aio_context()
62
+ * and qio_channel_attach_aio_context() should be protected with
63
+ * aio_context_acquire() and aio_context_release().
64
+ */
65
+void qio_channel_attach_aio_context(QIOChannel *ioc,
66
+ AioContext *ctx);
67
+
68
+/**
69
+ * qio_channel_detach_aio_context:
70
+ * @ioc: the channel object
71
+ *
72
+ * Disable any I/O handlers set by qio_channel_yield(). With the
73
+ * help of aio_co_schedule(), this allows moving a coroutine that was
74
+ * paused by qio_channel_yield() to another context.
75
+ */
76
+void qio_channel_detach_aio_context(QIOChannel *ioc);
77
+
78
+/**
79
* qio_channel_yield:
80
* @ioc: the channel object
81
* @condition: the I/O condition to wait for
82
*
83
- * Yields execution from the current coroutine until
84
- * the condition indicated by @condition becomes
85
- * available.
86
+ * Yields execution from the current coroutine until the condition
87
+ * indicated by @condition becomes available. @condition must
88
+ * be either %G_IO_IN or %G_IO_OUT; it cannot contain both. In
89
+ * addition, no two coroutine can be waiting on the same condition
90
+ * and channel at the same time.
91
*
92
* This must only be called from coroutine context
93
*/
94
diff --git a/io/channel.c b/io/channel.c
95
index XXXXXXX..XXXXXXX 100644
96
--- a/io/channel.c
97
+++ b/io/channel.c
98
@@ -XXX,XX +XXX,XX @@
99
#include "qemu/osdep.h"
100
#include "io/channel.h"
101
#include "qapi/error.h"
102
-#include "qemu/coroutine.h"
103
+#include "qemu/main-loop.h"
104
105
bool qio_channel_has_feature(QIOChannel *ioc,
106
QIOChannelFeature feature)
107
@@ -XXX,XX +XXX,XX @@ off_t qio_channel_io_seek(QIOChannel *ioc,
22
}
108
}
23
109
24
110
25
-static int coroutine_fn cor_co_preadv(BlockDriverState *bs,
111
-typedef struct QIOChannelYieldData QIOChannelYieldData;
26
- uint64_t offset, uint64_t bytes,
112
-struct QIOChannelYieldData {
27
- QEMUIOVector *qiov, int flags)
113
- QIOChannel *ioc;
28
+static int coroutine_fn cor_co_preadv_part(BlockDriverState *bs,
114
- Coroutine *co;
29
+ uint64_t offset, uint64_t bytes,
115
-};
30
+ QEMUIOVector *qiov,
116
+static void qio_channel_set_aio_fd_handlers(QIOChannel *ioc);
31
+ size_t qiov_offset,
117
32
+ int flags)
118
+static void qio_channel_restart_read(void *opaque)
119
+{
120
+ QIOChannel *ioc = opaque;
121
+ Coroutine *co = ioc->read_coroutine;
122
+
123
+ ioc->read_coroutine = NULL;
124
+ qio_channel_set_aio_fd_handlers(ioc);
125
+ aio_co_wake(co);
126
+}
127
128
-static gboolean qio_channel_yield_enter(QIOChannel *ioc,
129
- GIOCondition condition,
130
- gpointer opaque)
131
+static void qio_channel_restart_write(void *opaque)
33
{
132
{
34
- return bdrv_co_preadv(bs->file, offset, bytes, qiov,
133
- QIOChannelYieldData *data = opaque;
35
- flags | BDRV_REQ_COPY_ON_READ);
134
- qemu_coroutine_enter(data->co);
36
+ return bdrv_co_preadv_part(bs->file, offset, bytes, qiov, qiov_offset,
135
- return FALSE;
37
+ flags | BDRV_REQ_COPY_ON_READ);
136
+ QIOChannel *ioc = opaque;
137
+ Coroutine *co = ioc->write_coroutine;
138
+
139
+ ioc->write_coroutine = NULL;
140
+ qio_channel_set_aio_fd_handlers(ioc);
141
+ aio_co_wake(co);
38
}
142
}
39
143
40
144
+static void qio_channel_set_aio_fd_handlers(QIOChannel *ioc)
41
-static int coroutine_fn cor_co_pwritev(BlockDriverState *bs,
145
+{
42
- uint64_t offset, uint64_t bytes,
146
+ IOHandler *rd_handler = NULL, *wr_handler = NULL;
43
- QEMUIOVector *qiov, int flags)
147
+ AioContext *ctx;
44
+static int coroutine_fn cor_co_pwritev_part(BlockDriverState *bs,
148
+
45
+ uint64_t offset,
149
+ if (ioc->read_coroutine) {
46
+ uint64_t bytes,
150
+ rd_handler = qio_channel_restart_read;
47
+ QEMUIOVector *qiov,
151
+ }
48
+ size_t qiov_offset, int flags)
152
+ if (ioc->write_coroutine) {
153
+ wr_handler = qio_channel_restart_write;
154
+ }
155
+
156
+ ctx = ioc->ctx ? ioc->ctx : iohandler_get_aio_context();
157
+ qio_channel_set_aio_fd_handler(ioc, ctx, rd_handler, wr_handler, ioc);
158
+}
159
+
160
+void qio_channel_attach_aio_context(QIOChannel *ioc,
161
+ AioContext *ctx)
162
+{
163
+ AioContext *old_ctx;
164
+ if (ioc->ctx == ctx) {
165
+ return;
166
+ }
167
+
168
+ old_ctx = ioc->ctx ? ioc->ctx : iohandler_get_aio_context();
169
+ qio_channel_set_aio_fd_handler(ioc, old_ctx, NULL, NULL, NULL);
170
+ ioc->ctx = ctx;
171
+ qio_channel_set_aio_fd_handlers(ioc);
172
+}
173
+
174
+void qio_channel_detach_aio_context(QIOChannel *ioc)
175
+{
176
+ ioc->read_coroutine = NULL;
177
+ ioc->write_coroutine = NULL;
178
+ qio_channel_set_aio_fd_handlers(ioc);
179
+ ioc->ctx = NULL;
180
+}
181
182
void coroutine_fn qio_channel_yield(QIOChannel *ioc,
183
GIOCondition condition)
49
{
184
{
185
- QIOChannelYieldData data;
50
-
186
-
51
- return bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
187
assert(qemu_in_coroutine());
52
+ return bdrv_co_pwritev_part(bs->file, offset, bytes, qiov, qiov_offset,
188
- data.ioc = ioc;
53
+ flags);
189
- data.co = qemu_coroutine_self();
190
- qio_channel_add_watch(ioc,
191
- condition,
192
- qio_channel_yield_enter,
193
- &data,
194
- NULL);
195
+ if (condition == G_IO_IN) {
196
+ assert(!ioc->read_coroutine);
197
+ ioc->read_coroutine = qemu_coroutine_self();
198
+ } else if (condition == G_IO_OUT) {
199
+ assert(!ioc->write_coroutine);
200
+ ioc->write_coroutine = qemu_coroutine_self();
201
+ } else {
202
+ abort();
203
+ }
204
+ qio_channel_set_aio_fd_handlers(ioc);
205
qemu_coroutine_yield();
54
}
206
}
55
207
56
57
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_copy_on_read = {
58
59
.bdrv_getlength = cor_getlength,
60
61
- .bdrv_co_preadv = cor_co_preadv,
62
- .bdrv_co_pwritev = cor_co_pwritev,
63
+ .bdrv_co_preadv_part = cor_co_preadv_part,
64
+ .bdrv_co_pwritev_part = cor_co_pwritev_part,
65
.bdrv_co_pwrite_zeroes = cor_co_pwrite_zeroes,
66
.bdrv_co_pdiscard = cor_co_pdiscard,
67
.bdrv_co_pwritev_compressed = cor_co_pwritev_compressed,
68
--
208
--
69
2.29.2
209
2.9.3
70
210
71
211
diff view generated by jsdifflib
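With the AioContext-aware qio_channel_yield() above, a blocking-style read in a coroutine and a move of the channel to another AioContext follow the protocol described in the qio_channel_attach_aio_context() comment. A hedged sketch, with illustrative function names:

    /* Coroutine side: wait until the channel is readable, then read. */
    static ssize_t coroutine_fn my_co_read(QIOChannel *ioc, char *buf, size_t len)
    {
        for (;;) {
            ssize_t ret = qio_channel_read(ioc, buf, len, NULL);
            if (ret == QIO_CHANNEL_ERR_BLOCK) {
                qio_channel_yield(ioc, G_IO_IN);   /* resumed by the fd handler */
                continue;
            }
            return ret;
        }
    }

    /* Move a channel whose coroutine is currently yielded in
     * qio_channel_yield() from @old_ctx to @new_ctx. */
    static void my_move_channel(QIOChannel *ioc, Coroutine *co,
                                AioContext *old_ctx, AioContext *new_ctx)
    {
        aio_context_acquire(old_ctx);
        qio_channel_detach_aio_context(ioc);       /* stop entering @co */
        aio_context_release(old_ctx);

        aio_context_acquire(new_ctx);
        qio_channel_attach_aio_context(ioc, new_ctx);
        aio_co_schedule(new_ctx, co);              /* resume @co over there */
        aio_context_release(new_ctx);
    }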
Deleted patch
1
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
2
1
3
Provide an API for inserting a node into a backing chain.
4
5
Suggested-by: Max Reitz <mreitz@redhat.com>
6
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
7
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Message-Id: <20201216061703.70908-3-vsementsov@virtuozzo.com>
10
Signed-off-by: Max Reitz <mreitz@redhat.com>
11
---
12
include/block/block.h | 2 ++
13
block.c | 25 +++++++++++++++++++++++++
14
2 files changed, 27 insertions(+)
15
16
diff --git a/include/block/block.h b/include/block/block.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/block/block.h
19
+++ b/include/block/block.h
20
@@ -XXX,XX +XXX,XX @@ void bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top,
21
Error **errp);
22
void bdrv_replace_node(BlockDriverState *from, BlockDriverState *to,
23
Error **errp);
24
+BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *node_options,
25
+ int flags, Error **errp);
26
27
int bdrv_parse_aio(const char *mode, int *flags);
28
int bdrv_parse_cache_mode(const char *mode, int *flags, bool *writethrough);
29
diff --git a/block.c b/block.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/block.c
32
+++ b/block.c
33
@@ -XXX,XX +XXX,XX @@ static void bdrv_delete(BlockDriverState *bs)
34
g_free(bs);
35
}
36
37
+BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *node_options,
38
+ int flags, Error **errp)
39
+{
40
+ BlockDriverState *new_node_bs;
41
+ Error *local_err = NULL;
42
+
43
+ new_node_bs = bdrv_open(NULL, NULL, node_options, flags, errp);
44
+ if (new_node_bs == NULL) {
45
+ error_prepend(errp, "Could not create node: ");
46
+ return NULL;
47
+ }
48
+
49
+ bdrv_drained_begin(bs);
50
+ bdrv_replace_node(bs, new_node_bs, &local_err);
51
+ bdrv_drained_end(bs);
52
+
53
+ if (local_err) {
54
+ bdrv_unref(new_node_bs);
55
+ error_propagate(errp, local_err);
56
+ return NULL;
57
+ }
58
+
59
+ return new_node_bs;
60
+}
61
+
62
/*
63
* Run consistency checks on an image
64
*
65
--
66
2.29.2
67
68
diff view generated by jsdifflib
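The bdrv_insert_node() helper above takes the same kind of options QDict that blockdev-add uses. A hedged sketch of inserting a copy-on-read filter on top of an existing node (node name and flags are illustrative choices, error handling trimmed):

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "qapi/qmp/qdict.h"
    #include "block/block.h"

    static BlockDriverState *insert_cor_filter(BlockDriverState *bs, Error **errp)
    {
        QDict *opts = qdict_new();

        qdict_put_str(opts, "driver", "copy-on-read");
        qdict_put_str(opts, "node-name", "cor-filter");       /* illustrative */
        qdict_put_str(opts, "file", bdrv_get_node_name(bs));  /* child by reference */

        /* Opens the filter node and replaces @bs with it in all of
         * @bs's parents; the options QDict is consumed. */
        return bdrv_insert_node(bs, opts, BDRV_O_RDWR, errp);
    }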
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
This brings async request handling and block-status driven chunk sizes
3
In the client, read the reply headers from a coroutine, switching the
4
to backup out of the box, which improves backup performance.
4
read side between the "read header" coroutine and the I/O coroutine that
5
reads the body of the reply.
5
6
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
In the server, if the server can read more requests it will create a new
7
Reviewed-by: Max Reitz <mreitz@redhat.com>
8
"read request" coroutine as soon as a request has been read. Otherwise,
8
Message-Id: <20210116214705.822267-18-vsementsov@virtuozzo.com>
9
the new coroutine is created in nbd_request_put.
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
10
11
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
12
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
13
Reviewed-by: Fam Zheng <famz@redhat.com>
14
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
15
Message-id: 20170213135235.12274-8-pbonzini@redhat.com
16
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
10
---
17
---
11
block/backup.c | 187 +++++++++++++++++++++++++++++++------------------
18
block/nbd-client.h | 2 +-
12
1 file changed, 120 insertions(+), 67 deletions(-)
19
block/nbd-client.c | 117 ++++++++++++++++++++++++-----------------------------
20
nbd/client.c | 2 +-
21
nbd/common.c | 9 +----
22
nbd/server.c | 94 +++++++++++++-----------------------------
23
5 files changed, 83 insertions(+), 141 deletions(-)
13
24
14
diff --git a/block/backup.c b/block/backup.c
25
diff --git a/block/nbd-client.h b/block/nbd-client.h
15
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
16
--- a/block/backup.c
27
--- a/block/nbd-client.h
17
+++ b/block/backup.c
28
+++ b/block/nbd-client.h
29
@@ -XXX,XX +XXX,XX @@ typedef struct NBDClientSession {
30
31
CoMutex send_mutex;
32
CoQueue free_sema;
33
- Coroutine *send_coroutine;
34
+ Coroutine *read_reply_co;
35
int in_flight;
36
37
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
38
diff --git a/block/nbd-client.c b/block/nbd-client.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/block/nbd-client.c
41
+++ b/block/nbd-client.c
18
@@ -XXX,XX +XXX,XX @@
42
@@ -XXX,XX +XXX,XX @@
19
#include "block/block-copy.h"
43
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
20
#include "qapi/error.h"
44
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
21
#include "qapi/qmp/qerror.h"
45
22
-#include "qemu/ratelimit.h"
46
-static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
23
#include "qemu/cutils.h"
47
+static void nbd_recv_coroutines_enter_all(BlockDriverState *bs)
24
#include "sysemu/block-backend.h"
48
{
25
#include "qemu/bitmap.h"
49
+ NBDClientSession *s = nbd_get_client_session(bs);
26
@@ -XXX,XX +XXX,XX @@ typedef struct BackupBlockJob {
50
int i;
27
BlockdevOnError on_source_error;
51
28
BlockdevOnError on_target_error;
52
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
29
uint64_t len;
53
@@ -XXX,XX +XXX,XX @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s)
30
- uint64_t bytes_read;
54
qemu_coroutine_enter(s->recv_coroutine[i]);
31
int64_t cluster_size;
55
}
32
BackupPerf perf;
56
}
33
57
+ BDRV_POLL_WHILE(bs, s->read_reply_co);
34
BlockCopyState *bcs;
58
}
59
60
static void nbd_teardown_connection(BlockDriverState *bs)
61
@@ -XXX,XX +XXX,XX @@ static void nbd_teardown_connection(BlockDriverState *bs)
62
qio_channel_shutdown(client->ioc,
63
QIO_CHANNEL_SHUTDOWN_BOTH,
64
NULL);
65
- nbd_recv_coroutines_enter_all(client);
66
+ nbd_recv_coroutines_enter_all(bs);
67
68
nbd_client_detach_aio_context(bs);
69
object_unref(OBJECT(client->sioc));
70
@@ -XXX,XX +XXX,XX @@ static void nbd_teardown_connection(BlockDriverState *bs)
71
client->ioc = NULL;
72
}
73
74
-static void nbd_reply_ready(void *opaque)
75
+static coroutine_fn void nbd_read_reply_entry(void *opaque)
76
{
77
- BlockDriverState *bs = opaque;
78
- NBDClientSession *s = nbd_get_client_session(bs);
79
+ NBDClientSession *s = opaque;
80
uint64_t i;
81
int ret;
82
83
- if (!s->ioc) { /* Already closed */
84
- return;
85
- }
86
-
87
- if (s->reply.handle == 0) {
88
- /* No reply already in flight. Fetch a header. It is possible
89
- * that another thread has done the same thing in parallel, so
90
- * the socket is not readable anymore.
91
- */
92
+ for (;;) {
93
+ assert(s->reply.handle == 0);
94
ret = nbd_receive_reply(s->ioc, &s->reply);
95
- if (ret == -EAGAIN) {
96
- return;
97
- }
98
if (ret < 0) {
99
- s->reply.handle = 0;
100
- goto fail;
101
+ break;
102
}
103
- }
104
105
- /* There's no need for a mutex on the receive side, because the
106
- * handler acts as a synchronization point and ensures that only
107
- * one coroutine is called until the reply finishes. */
108
- i = HANDLE_TO_INDEX(s, s->reply.handle);
109
- if (i >= MAX_NBD_REQUESTS) {
110
- goto fail;
111
- }
112
+ /* There's no need for a mutex on the receive side, because the
113
+ * handler acts as a synchronization point and ensures that only
114
+ * one coroutine is called until the reply finishes.
115
+ */
116
+ i = HANDLE_TO_INDEX(s, s->reply.handle);
117
+ if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) {
118
+ break;
119
+ }
120
121
- if (s->recv_coroutine[i]) {
122
- qemu_coroutine_enter(s->recv_coroutine[i]);
123
- return;
124
+ /* We're woken up by the recv_coroutine itself. Note that there
125
+ * is no race between yielding and reentering read_reply_co. This
126
+ * is because:
127
+ *
128
+ * - if recv_coroutine[i] runs on the same AioContext, it is only
129
+ * entered after we yield
130
+ *
131
+ * - if recv_coroutine[i] runs on a different AioContext, reentering
132
+ * read_reply_co happens through a bottom half, which can only
133
+ * run after we yield.
134
+ */
135
+ aio_co_wake(s->recv_coroutine[i]);
136
+ qemu_coroutine_yield();
137
}
138
-
139
-fail:
140
- nbd_teardown_connection(bs);
141
-}
142
-
143
-static void nbd_restart_write(void *opaque)
144
-{
145
- BlockDriverState *bs = opaque;
146
-
147
- qemu_coroutine_enter(nbd_get_client_session(bs)->send_coroutine);
148
+ s->read_reply_co = NULL;
149
}
150
151
static int nbd_co_send_request(BlockDriverState *bs,
152
@@ -XXX,XX +XXX,XX @@ static int nbd_co_send_request(BlockDriverState *bs,
153
QEMUIOVector *qiov)
154
{
155
NBDClientSession *s = nbd_get_client_session(bs);
156
- AioContext *aio_context;
157
int rc, ret, i;
158
159
qemu_co_mutex_lock(&s->send_mutex);
160
@@ -XXX,XX +XXX,XX @@ static int nbd_co_send_request(BlockDriverState *bs,
161
return -EPIPE;
162
}
163
164
- s->send_coroutine = qemu_coroutine_self();
165
- aio_context = bdrv_get_aio_context(bs);
166
-
167
- aio_set_fd_handler(aio_context, s->sioc->fd, false,
168
- nbd_reply_ready, nbd_restart_write, NULL, bs);
169
if (qiov) {
170
qio_channel_set_cork(s->ioc, true);
171
rc = nbd_send_request(s->ioc, request);
172
@@ -XXX,XX +XXX,XX @@ static int nbd_co_send_request(BlockDriverState *bs,
173
} else {
174
rc = nbd_send_request(s->ioc, request);
175
}
176
- aio_set_fd_handler(aio_context, s->sioc->fd, false,
177
- nbd_reply_ready, NULL, NULL, bs);
178
- s->send_coroutine = NULL;
179
qemu_co_mutex_unlock(&s->send_mutex);
180
return rc;
181
}
182
@@ -XXX,XX +XXX,XX @@ static void nbd_co_receive_reply(NBDClientSession *s,
183
{
184
int ret;
185
186
- /* Wait until we're woken up by the read handler. TODO: perhaps
187
- * peek at the next reply and avoid yielding if it's ours? */
188
+ /* Wait until we're woken up by nbd_read_reply_entry. */
189
qemu_coroutine_yield();
190
*reply = s->reply;
191
if (reply->handle != request->handle ||
192
@@ -XXX,XX +XXX,XX @@ static void nbd_coroutine_start(NBDClientSession *s,
193
/* s->recv_coroutine[i] is set as soon as we get the send_lock. */
194
}
195
196
-static void nbd_coroutine_end(NBDClientSession *s,
197
+static void nbd_coroutine_end(BlockDriverState *bs,
198
NBDRequest *request)
199
{
200
+ NBDClientSession *s = nbd_get_client_session(bs);
201
int i = HANDLE_TO_INDEX(s, request->handle);
35
+
202
+
36
+ bool wait;
203
s->recv_coroutine[i] = NULL;
37
+ BlockCopyCallState *bg_bcs_call;
204
- if (s->in_flight-- == MAX_NBD_REQUESTS) {
38
} BackupBlockJob;
205
- qemu_co_queue_next(&s->free_sema);
39
206
+ s->in_flight--;
40
static const BlockJobDriver backup_job_driver;
207
+ qemu_co_queue_next(&s->free_sema);
41
208
+
42
-static void backup_progress_bytes_callback(int64_t bytes, void *opaque)
209
+ /* Kick the read_reply_co to get the next reply. */
210
+ if (s->read_reply_co) {
211
+ aio_co_wake(s->read_reply_co);
212
}
213
}
214
215
@@ -XXX,XX +XXX,XX @@ int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
216
} else {
217
nbd_co_receive_reply(client, &request, &reply, qiov);
218
}
219
- nbd_coroutine_end(client, &request);
220
+ nbd_coroutine_end(bs, &request);
221
return -reply.error;
222
}
223
224
@@ -XXX,XX +XXX,XX @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
225
} else {
226
nbd_co_receive_reply(client, &request, &reply, NULL);
227
}
228
- nbd_coroutine_end(client, &request);
229
+ nbd_coroutine_end(bs, &request);
230
return -reply.error;
231
}
232
233
@@ -XXX,XX +XXX,XX @@ int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
234
} else {
235
nbd_co_receive_reply(client, &request, &reply, NULL);
236
}
237
- nbd_coroutine_end(client, &request);
238
+ nbd_coroutine_end(bs, &request);
239
return -reply.error;
240
}
241
242
@@ -XXX,XX +XXX,XX @@ int nbd_client_co_flush(BlockDriverState *bs)
243
} else {
244
nbd_co_receive_reply(client, &request, &reply, NULL);
245
}
246
- nbd_coroutine_end(client, &request);
247
+ nbd_coroutine_end(bs, &request);
248
return -reply.error;
249
}
250
251
@@ -XXX,XX +XXX,XX @@ int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int count)
252
} else {
253
nbd_co_receive_reply(client, &request, &reply, NULL);
254
}
255
- nbd_coroutine_end(client, &request);
256
+ nbd_coroutine_end(bs, &request);
257
return -reply.error;
258
259
}
260
261
void nbd_client_detach_aio_context(BlockDriverState *bs)
262
{
263
- aio_set_fd_handler(bdrv_get_aio_context(bs),
264
- nbd_get_client_session(bs)->sioc->fd,
265
- false, NULL, NULL, NULL, NULL);
266
+ NBDClientSession *client = nbd_get_client_session(bs);
267
+ qio_channel_detach_aio_context(QIO_CHANNEL(client->sioc));
268
}
269
270
void nbd_client_attach_aio_context(BlockDriverState *bs,
271
AioContext *new_context)
272
{
273
- aio_set_fd_handler(new_context, nbd_get_client_session(bs)->sioc->fd,
274
- false, nbd_reply_ready, NULL, NULL, bs);
275
+ NBDClientSession *client = nbd_get_client_session(bs);
276
+ qio_channel_attach_aio_context(QIO_CHANNEL(client->sioc), new_context);
277
+ aio_co_schedule(new_context, client->read_reply_co);
278
}
279
280
void nbd_client_close(BlockDriverState *bs)
281
@@ -XXX,XX +XXX,XX @@ int nbd_client_init(BlockDriverState *bs,
282
/* Now that we're connected, set the socket to be non-blocking and
283
* kick the reply mechanism. */
284
qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
285
-
286
+ client->read_reply_co = qemu_coroutine_create(nbd_read_reply_entry, client);
287
nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
288
289
logout("Established connection with NBD server\n");
290
diff --git a/nbd/client.c b/nbd/client.c
291
index XXXXXXX..XXXXXXX 100644
292
--- a/nbd/client.c
293
+++ b/nbd/client.c
294
@@ -XXX,XX +XXX,XX @@ ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply)
295
ssize_t ret;
296
297
ret = read_sync(ioc, buf, sizeof(buf));
298
- if (ret < 0) {
299
+ if (ret <= 0) {
300
return ret;
301
}
302
303
diff --git a/nbd/common.c b/nbd/common.c
304
index XXXXXXX..XXXXXXX 100644
305
--- a/nbd/common.c
306
+++ b/nbd/common.c
307
@@ -XXX,XX +XXX,XX @@ ssize_t nbd_wr_syncv(QIOChannel *ioc,
308
}
309
if (len == QIO_CHANNEL_ERR_BLOCK) {
310
if (qemu_in_coroutine()) {
311
- /* XXX figure out if we can create a variant on
312
- * qio_channel_yield() that works with AIO contexts
313
- * and consider using that in this branch */
314
- qemu_coroutine_yield();
315
- } else if (done) {
316
- /* XXX this is needed by nbd_reply_ready. */
317
- qio_channel_wait(ioc,
318
- do_read ? G_IO_IN : G_IO_OUT);
319
+ qio_channel_yield(ioc, do_read ? G_IO_IN : G_IO_OUT);
320
} else {
321
return -EAGAIN;
322
}
323
diff --git a/nbd/server.c b/nbd/server.c
324
index XXXXXXX..XXXXXXX 100644
325
--- a/nbd/server.c
326
+++ b/nbd/server.c
327
@@ -XXX,XX +XXX,XX @@ struct NBDClient {
328
CoMutex send_lock;
329
Coroutine *send_coroutine;
330
331
- bool can_read;
332
-
333
QTAILQ_ENTRY(NBDClient) next;
334
int nb_requests;
335
bool closing;
336
@@ -XXX,XX +XXX,XX @@ struct NBDClient {
337
338
/* That's all folks */
339
340
-static void nbd_set_handlers(NBDClient *client);
341
-static void nbd_unset_handlers(NBDClient *client);
342
-static void nbd_update_can_read(NBDClient *client);
343
+static void nbd_client_receive_next_request(NBDClient *client);
344
345
static gboolean nbd_negotiate_continue(QIOChannel *ioc,
346
GIOCondition condition,
347
@@ -XXX,XX +XXX,XX @@ void nbd_client_put(NBDClient *client)
348
*/
349
assert(client->closing);
350
351
- nbd_unset_handlers(client);
352
+ qio_channel_detach_aio_context(client->ioc);
353
object_unref(OBJECT(client->sioc));
354
object_unref(OBJECT(client->ioc));
355
if (client->tlscreds) {
356
@@ -XXX,XX +XXX,XX @@ static NBDRequestData *nbd_request_get(NBDClient *client)
357
358
assert(client->nb_requests <= MAX_NBD_REQUESTS - 1);
359
client->nb_requests++;
360
- nbd_update_can_read(client);
361
362
req = g_new0(NBDRequestData, 1);
363
nbd_client_get(client);
364
@@ -XXX,XX +XXX,XX @@ static void nbd_request_put(NBDRequestData *req)
365
g_free(req);
366
367
client->nb_requests--;
368
- nbd_update_can_read(client);
369
+ nbd_client_receive_next_request(client);
370
+
371
nbd_client_put(client);
372
}
373
374
@@ -XXX,XX +XXX,XX @@ static void blk_aio_attached(AioContext *ctx, void *opaque)
375
exp->ctx = ctx;
376
377
QTAILQ_FOREACH(client, &exp->clients, next) {
378
- nbd_set_handlers(client);
379
+ qio_channel_attach_aio_context(client->ioc, ctx);
380
+ if (client->recv_coroutine) {
381
+ aio_co_schedule(ctx, client->recv_coroutine);
382
+ }
383
+ if (client->send_coroutine) {
384
+ aio_co_schedule(ctx, client->send_coroutine);
385
+ }
386
}
387
}
388
389
@@ -XXX,XX +XXX,XX @@ static void blk_aio_detach(void *opaque)
390
TRACE("Export %s: Detaching clients from AIO context %p\n", exp->name, exp->ctx);
391
392
QTAILQ_FOREACH(client, &exp->clients, next) {
393
- nbd_unset_handlers(client);
394
+ qio_channel_detach_aio_context(client->ioc);
395
}
396
397
exp->ctx = NULL;
398
@@ -XXX,XX +XXX,XX @@ static ssize_t nbd_co_send_reply(NBDRequestData *req, NBDReply *reply,
399
g_assert(qemu_in_coroutine());
400
qemu_co_mutex_lock(&client->send_lock);
401
client->send_coroutine = qemu_coroutine_self();
402
- nbd_set_handlers(client);
403
404
if (!len) {
405
rc = nbd_send_reply(client->ioc, reply);
406
@@ -XXX,XX +XXX,XX @@ static ssize_t nbd_co_send_reply(NBDRequestData *req, NBDReply *reply,
407
}
408
409
client->send_coroutine = NULL;
410
- nbd_set_handlers(client);
411
qemu_co_mutex_unlock(&client->send_lock);
412
return rc;
413
}
414
@@ -XXX,XX +XXX,XX @@ static ssize_t nbd_co_receive_request(NBDRequestData *req,
415
ssize_t rc;
416
417
g_assert(qemu_in_coroutine());
418
- client->recv_coroutine = qemu_coroutine_self();
419
- nbd_update_can_read(client);
420
-
421
+ assert(client->recv_coroutine == qemu_coroutine_self());
422
rc = nbd_receive_request(client->ioc, request);
423
if (rc < 0) {
424
if (rc != -EAGAIN) {
425
@@ -XXX,XX +XXX,XX @@ static ssize_t nbd_co_receive_request(NBDRequestData *req,
426
427
out:
428
client->recv_coroutine = NULL;
429
- nbd_update_can_read(client);
430
+ nbd_client_receive_next_request(client);
431
432
return rc;
433
}
434
435
-static void nbd_trip(void *opaque)
436
+/* Owns a reference to the NBDClient passed as opaque. */
437
+static coroutine_fn void nbd_trip(void *opaque)
438
{
439
NBDClient *client = opaque;
440
NBDExport *exp = client->exp;
441
NBDRequestData *req;
442
- NBDRequest request;
443
+ NBDRequest request = { 0 }; /* GCC thinks it can be used uninitialized */
444
NBDReply reply;
445
ssize_t ret;
446
int flags;
447
448
TRACE("Reading request.");
449
if (client->closing) {
450
+ nbd_client_put(client);
451
return;
452
}
453
454
@@ -XXX,XX +XXX,XX @@ static void nbd_trip(void *opaque)
455
456
done:
457
nbd_request_put(req);
458
+ nbd_client_put(client);
459
return;
460
461
out:
462
nbd_request_put(req);
463
client_close(client);
464
+ nbd_client_put(client);
465
}
466
467
-static void nbd_read(void *opaque)
468
+static void nbd_client_receive_next_request(NBDClient *client)
469
{
470
- NBDClient *client = opaque;
471
-
472
- if (client->recv_coroutine) {
473
- qemu_coroutine_enter(client->recv_coroutine);
474
- } else {
475
- qemu_coroutine_enter(qemu_coroutine_create(nbd_trip, client));
476
- }
477
-}
478
-
479
-static void nbd_restart_write(void *opaque)
43
-{
480
-{
44
- BackupBlockJob *s = opaque;
481
- NBDClient *client = opaque;
45
-
482
-
46
- s->bytes_read += bytes;
483
- qemu_coroutine_enter(client->send_coroutine);
47
-}
484
-}
48
-
485
-
49
-static int coroutine_fn backup_do_cow(BackupBlockJob *job,
486
-static void nbd_set_handlers(NBDClient *client)
50
- int64_t offset, uint64_t bytes,
51
- bool *error_is_read)
52
-{
487
-{
53
- int ret = 0;
488
- if (client->exp && client->exp->ctx) {
54
- int64_t start, end; /* bytes */
489
- aio_set_fd_handler(client->exp->ctx, client->sioc->fd, true,
55
-
490
- client->can_read ? nbd_read : NULL,
56
- start = QEMU_ALIGN_DOWN(offset, job->cluster_size);
491
- client->send_coroutine ? nbd_restart_write : NULL,
57
- end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
492
- NULL, client);
58
-
493
- }
59
- trace_backup_do_cow_enter(job, start, offset, bytes);
60
-
61
- ret = block_copy(job->bcs, start, end - start, true, error_is_read);
62
-
63
- trace_backup_do_cow_return(job, offset, bytes, ret);
64
-
65
- return ret;
66
-}
494
-}
67
-
495
-
68
static void backup_cleanup_sync_bitmap(BackupBlockJob *job, int ret)
496
-static void nbd_unset_handlers(NBDClient *client)
69
{
497
-{
70
BdrvDirtyBitmap *bm;
498
- if (client->exp && client->exp->ctx) {
71
@@ -XXX,XX +XXX,XX @@ static BlockErrorAction backup_error_action(BackupBlockJob *job,
499
- aio_set_fd_handler(client->exp->ctx, client->sioc->fd, true, NULL,
72
}
500
- NULL, NULL, NULL);
73
}
74
75
-static bool coroutine_fn yield_and_check(BackupBlockJob *job)
76
+static void coroutine_fn backup_block_copy_callback(void *opaque)
77
{
78
- uint64_t delay_ns;
79
-
80
- if (job_is_cancelled(&job->common.job)) {
81
- return true;
82
- }
501
- }
83
-
502
-}
84
- /*
503
-
85
- * We need to yield even for delay_ns = 0 so that bdrv_drain_all() can
504
-static void nbd_update_can_read(NBDClient *client)
86
- * return. Without a yield, the VM would not reboot.
505
-{
87
- */
506
- bool can_read = client->recv_coroutine ||
88
- delay_ns = block_job_ratelimit_get_delay(&job->common, job->bytes_read);
507
- client->nb_requests < MAX_NBD_REQUESTS;
89
- job->bytes_read = 0;
508
-
90
- job_sleep_ns(&job->common.job, delay_ns);
509
- if (can_read != client->can_read) {
91
+ BackupBlockJob *s = opaque;
510
- client->can_read = can_read;
92
511
- nbd_set_handlers(client);
93
- if (job_is_cancelled(&job->common.job)) {
512
-
94
- return true;
513
- /* There is no need to invoke aio_notify(), since aio_set_fd_handler()
95
+ if (s->wait) {
514
- * in nbd_set_handlers() will have taken care of that */
96
+ s->wait = false;
515
+ if (!client->recv_coroutine && client->nb_requests < MAX_NBD_REQUESTS) {
97
+ aio_co_wake(s->common.job.co);
516
+ nbd_client_get(client);
98
+ } else {
517
+ client->recv_coroutine = qemu_coroutine_create(nbd_trip, client);
99
+ job_enter(&s->common.job);
518
+ aio_co_schedule(client->exp->ctx, client->recv_coroutine);
100
}
519
}
101
-
520
}
102
- return false;
521
103
}
522
@@ -XXX,XX +XXX,XX @@ static coroutine_fn void nbd_co_client_start(void *opaque)
104
523
goto out;
105
static int coroutine_fn backup_loop(BackupBlockJob *job)
524
}
106
{
525
qemu_co_mutex_init(&client->send_lock);
107
- bool error_is_read;
526
- nbd_set_handlers(client);
108
- int64_t offset;
527
109
- BdrvDirtyBitmapIter *bdbi;
528
if (exp) {
110
+ BlockCopyCallState *s = NULL;
529
QTAILQ_INSERT_TAIL(&exp->clients, client, next);
111
int ret = 0;
530
}
112
+ bool error_is_read;
113
+ BlockErrorAction act;
114
+
531
+
115
+ while (true) { /* retry loop */
532
+ nbd_client_receive_next_request(client);
116
+ job->bg_bcs_call = s = block_copy_async(job->bcs, 0,
117
+ QEMU_ALIGN_UP(job->len, job->cluster_size),
118
+ job->perf.max_workers, job->perf.max_chunk,
119
+ backup_block_copy_callback, job);
120
+
533
+
121
+ while (!block_copy_call_finished(s) &&
534
out:
122
+ !job_is_cancelled(&job->common.job))
535
g_free(data);
123
+ {
536
}
124
+ job_yield(&job->common.job);
537
@@ -XXX,XX +XXX,XX @@ void nbd_client_new(NBDExport *exp,
125
+ }
538
object_ref(OBJECT(client->sioc));
126
539
client->ioc = QIO_CHANNEL(sioc);
127
- bdbi = bdrv_dirty_iter_new(block_copy_dirty_bitmap(job->bcs));
540
object_ref(OBJECT(client->ioc));
128
- while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
541
- client->can_read = true;
129
- do {
542
client->close = close_fn;
130
- if (yield_and_check(job)) {
543
131
- goto out;
544
data->client = client;
132
- }
133
- ret = backup_do_cow(job, offset, job->cluster_size, &error_is_read);
134
- if (ret < 0 && backup_error_action(job, error_is_read, -ret) ==
135
- BLOCK_ERROR_ACTION_REPORT)
136
- {
137
- goto out;
138
- }
139
- } while (ret < 0);
140
+ if (!block_copy_call_finished(s)) {
141
+ assert(job_is_cancelled(&job->common.job));
142
+ /*
143
+ * Note that we can't use job_yield() here, as it doesn't work for
144
+ * cancelled job.
145
+ */
146
+ block_copy_call_cancel(s);
147
+ job->wait = true;
148
+ qemu_coroutine_yield();
149
+ assert(block_copy_call_finished(s));
150
+ ret = 0;
151
+ goto out;
152
+ }
153
+
154
+ if (job_is_cancelled(&job->common.job) ||
155
+ block_copy_call_succeeded(s))
156
+ {
157
+ ret = 0;
158
+ goto out;
159
+ }
160
+
161
+ if (block_copy_call_cancelled(s)) {
162
+ /*
163
+ * Job is not cancelled but only block-copy call. This is possible
164
+ * after job pause. Now the pause is finished, start new block-copy
165
+ * iteration.
166
+ */
167
+ block_copy_call_free(s);
168
+ continue;
169
+ }
170
+
171
+ /* The only remaining case is failed block-copy call. */
172
+ assert(block_copy_call_failed(s));
173
+
174
+ ret = block_copy_call_status(s, &error_is_read);
175
+ act = backup_error_action(job, error_is_read, -ret);
176
+ switch (act) {
177
+ case BLOCK_ERROR_ACTION_REPORT:
178
+ goto out;
179
+ case BLOCK_ERROR_ACTION_STOP:
180
+ /*
181
+ * Go to pause prior to starting new block-copy call on the next
182
+ * iteration.
183
+ */
184
+ job_pause_point(&job->common.job);
185
+ break;
186
+ case BLOCK_ERROR_ACTION_IGNORE:
187
+ /* Proceed to new block-copy call to retry. */
188
+ break;
189
+ default:
190
+ abort();
191
+ }
192
+
193
+ block_copy_call_free(s);
194
}
195
196
- out:
197
- bdrv_dirty_iter_free(bdbi);
198
+out:
199
+ block_copy_call_free(s);
200
+ job->bg_bcs_call = NULL;
201
return ret;
202
}
203
204
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_run(Job *job, Error **errp)
205
int64_t count;
206
207
for (offset = 0; offset < s->len; ) {
208
- if (yield_and_check(s)) {
209
+ if (job_is_cancelled(job)) {
210
+ return -ECANCELED;
211
+ }
212
+
213
+ job_pause_point(job);
214
+
215
+ if (job_is_cancelled(job)) {
216
return -ECANCELED;
217
}
218
219
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_run(Job *job, Error **errp)
220
return 0;
221
}
222
223
+static void coroutine_fn backup_pause(Job *job)
224
+{
225
+ BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
226
+
227
+ if (s->bg_bcs_call && !block_copy_call_finished(s->bg_bcs_call)) {
228
+ block_copy_call_cancel(s->bg_bcs_call);
229
+ s->wait = true;
230
+ qemu_coroutine_yield();
231
+ }
232
+}
233
+
234
+static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
235
+{
236
+ BackupBlockJob *s = container_of(job, BackupBlockJob, common);
237
+
238
+ /*
239
+ * block_job_set_speed() is called first from block_job_create(), when we
240
+ * don't yet have s->bcs.
241
+ */
242
+ if (s->bcs) {
243
+ block_copy_set_speed(s->bcs, speed);
244
+ if (s->bg_bcs_call) {
245
+ block_copy_kick(s->bg_bcs_call);
246
+ }
247
+ }
248
+}
249
+
250
static const BlockJobDriver backup_job_driver = {
251
.job_driver = {
252
.instance_size = sizeof(BackupBlockJob),
253
@@ -XXX,XX +XXX,XX @@ static const BlockJobDriver backup_job_driver = {
254
.commit = backup_commit,
255
.abort = backup_abort,
256
.clean = backup_clean,
257
- }
258
+ .pause = backup_pause,
259
+ },
260
+ .set_speed = backup_set_speed,
261
};
262
263
static int64_t backup_calculate_cluster_size(BlockDriverState *target,
264
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
265
job->len = len;
266
job->perf = *perf;
267
268
- block_copy_set_progress_callback(bcs, backup_progress_bytes_callback, job);
269
block_copy_set_progress_meter(bcs, &job->common.job.progress);
270
+ block_copy_set_speed(bcs, speed);
271
272
/* Required permissions are already taken by backup-top target */
273
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
274
--
545
--
275
2.29.2
546
2.9.3
276
547
277
548
ccd3b3b8112 has deprecated short-hand boolean options (i.e., options
without values). All options without values are interpreted as boolean
options, so this includes the invalid option "snapshot.foo" used in
iotest 178.

So after ccd3b3b8112, 178 fails with:

+qemu-img: warning: short-form boolean option 'snapshot.foo' deprecated
+Please use snapshot.foo=on instead

Suppress that deprecation warning by passing some value to it (it does
not matter which, because the option is invalid anyway).

Fixes: ccd3b3b8112b670fdccf8a392b8419b173ffccb4
       ("qemu-option: warn for short-form boolean options")
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210126123834.115915-1-mreitz@redhat.com>
---
 tests/qemu-iotests/178           | 2 +-
 tests/qemu-iotests/178.out.qcow2 | 2 +-
 tests/qemu-iotests/178.out.raw   | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

From: Paolo Bonzini <pbonzini@redhat.com>

As a small step towards the introduction of multiqueue, we want
coroutines to remain on the same AioContext that started them,
unless they are moved explicitly with e.g. aio_co_schedule. This patch
avoids coroutines switching AioContext when they use a CoMutex.
For now it does not make much of a difference, because the CoMutex
is not thread-safe and the AioContext itself is used to protect the
CoMutex from concurrent access. However, this is going to change.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-9-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/qemu-coroutine-lock.c | 5 ++---
 util/trace-events          | 1 -
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
diff --git a/tests/qemu-iotests/178 b/tests/qemu-iotests/178
25
index XXXXXXX..XXXXXXX 100755
26
--- a/tests/qemu-iotests/178
27
+++ b/tests/qemu-iotests/178
28
@@ -XXX,XX +XXX,XX @@ $QEMU_IMG measure --image-opts # missing filename
29
$QEMU_IMG measure -f qcow2 # missing filename
30
$QEMU_IMG measure -l snap1 # missing filename
31
$QEMU_IMG measure -o , # invalid option list
32
-$QEMU_IMG measure -l snapshot.foo # invalid snapshot option
33
+$QEMU_IMG measure -l snapshot.foo=bar # invalid snapshot option
34
$QEMU_IMG measure --output foo # invalid output format
35
$QEMU_IMG measure --size -1 # invalid image size
36
$QEMU_IMG measure -O foo "$TEST_IMG" # unknown image file format
37
diff --git a/tests/qemu-iotests/178.out.qcow2 b/tests/qemu-iotests/178.out.qcow2
38
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
39
--- a/tests/qemu-iotests/178.out.qcow2
24
--- a/util/qemu-coroutine-lock.c
40
+++ b/tests/qemu-iotests/178.out.qcow2
25
+++ b/util/qemu-coroutine-lock.c
41
@@ -XXX,XX +XXX,XX @@ qemu-img: --image-opts, -f, and -l require a filename argument.
26
@@ -XXX,XX +XXX,XX @@
42
qemu-img: --image-opts, -f, and -l require a filename argument.
27
#include "qemu/coroutine.h"
43
qemu-img: Invalid option list: ,
28
#include "qemu/coroutine_int.h"
44
qemu-img: Invalid parameter 'snapshot.foo'
29
#include "qemu/queue.h"
45
-qemu-img: Failed in parsing snapshot param 'snapshot.foo'
30
+#include "block/aio.h"
46
+qemu-img: Failed in parsing snapshot param 'snapshot.foo=bar'
31
#include "trace.h"
47
qemu-img: --output must be used with human or json as argument.
32
48
qemu-img: Invalid image size specified. Must be between 0 and 9223372036854775807.
33
void qemu_co_queue_init(CoQueue *queue)
49
qemu-img: Unknown file format 'foo'
34
@@ -XXX,XX +XXX,XX @@ void qemu_co_queue_run_restart(Coroutine *co)
50
diff --git a/tests/qemu-iotests/178.out.raw b/tests/qemu-iotests/178.out.raw
35
36
static bool qemu_co_queue_do_restart(CoQueue *queue, bool single)
37
{
38
- Coroutine *self = qemu_coroutine_self();
39
Coroutine *next;
40
41
if (QSIMPLEQ_EMPTY(&queue->entries)) {
42
@@ -XXX,XX +XXX,XX @@ static bool qemu_co_queue_do_restart(CoQueue *queue, bool single)
43
44
while ((next = QSIMPLEQ_FIRST(&queue->entries)) != NULL) {
45
QSIMPLEQ_REMOVE_HEAD(&queue->entries, co_queue_next);
46
- QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, next, co_queue_next);
47
- trace_qemu_co_queue_next(next);
48
+ aio_co_wake(next);
49
if (single) {
50
break;
51
}
52
diff --git a/util/trace-events b/util/trace-events
51
index XXXXXXX..XXXXXXX 100644
53
index XXXXXXX..XXXXXXX 100644
52
--- a/tests/qemu-iotests/178.out.raw
54
--- a/util/trace-events
53
+++ b/tests/qemu-iotests/178.out.raw
55
+++ b/util/trace-events
54
@@ -XXX,XX +XXX,XX @@ qemu-img: --image-opts, -f, and -l require a filename argument.
56
@@ -XXX,XX +XXX,XX @@ qemu_coroutine_terminate(void *co) "self %p"
55
qemu-img: --image-opts, -f, and -l require a filename argument.
57
56
qemu-img: Invalid option list: ,
58
# util/qemu-coroutine-lock.c
57
qemu-img: Invalid parameter 'snapshot.foo'
59
qemu_co_queue_run_restart(void *co) "co %p"
58
-qemu-img: Failed in parsing snapshot param 'snapshot.foo'
60
-qemu_co_queue_next(void *nxt) "next %p"
59
+qemu-img: Failed in parsing snapshot param 'snapshot.foo=bar'
61
qemu_co_mutex_lock_entry(void *mutex, void *self) "mutex %p self %p"
60
qemu-img: --output must be used with human or json as argument.
62
qemu_co_mutex_lock_return(void *mutex, void *self) "mutex %p self %p"
61
qemu-img: Invalid image size specified. Must be between 0 and 9223372036854775807.
63
qemu_co_mutex_unlock_entry(void *mutex, void *self) "mutex %p self %p"
62
qemu-img: Unknown file format 'foo'
63
--
64
--
64
2.29.2
65
2.9.3
65
66
66
67
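A condensed sketch of the new CoQueue wakeup path from the coroutine-lock patch above (the same calls as in the hunk, shown standalone; surrounding QEMU headers are assumed):

    /* Wake one queued waiter in the AioContext it was last running in,
     * instead of chaining it onto the current coroutine's wakeup list. */
    static void wake_one_waiter(CoQueue *queue)
    {
        Coroutine *next = QSIMPLEQ_FIRST(&queue->entries);

        if (next) {
            QSIMPLEQ_REMOVE_HEAD(&queue->entries, co_queue_next);
            aio_co_wake(next);
        }
    }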
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Drop unused code.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-20-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h |  6 ------
 block/block-copy.c         | 15 ---------------
 2 files changed, 21 deletions(-)

From: Paolo Bonzini <pbonzini@redhat.com>

Keep the coroutine on the same AioContext. Without this change,
there would be a race between yielding the coroutine and reentering it.
While the race cannot happen now, because the code only runs from a single
AioContext, this will change with multiqueue support in the block layer.

While doing the change, replace custom bottom half with aio_co_schedule.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-10-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/blkdebug.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)
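The blkdebug hunks below boil down to the following pattern for suspending and resuming a coroutine on its own AioContext (sketch only; assumes QEMU's aio and coroutine headers):

    static void coroutine_fn suspend_request(void)
    {
        /* Schedule ourselves on the context we are already running in ... */
        aio_co_schedule(qemu_get_current_aio_context(), qemu_coroutine_self());
        /* ... and yield; the scheduled entry reenters us on that same context. */
        qemu_coroutine_yield();
    }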
14
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
20
diff --git a/block/blkdebug.c b/block/blkdebug.c
15
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
16
--- a/include/block/block-copy.h
22
--- a/block/blkdebug.c
17
+++ b/include/block/block-copy.h
23
+++ b/block/blkdebug.c
18
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@ out:
19
#include "block/block.h"
25
return ret;
20
#include "qemu/co-shared-resource.h"
21
22
-typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
23
typedef void (*BlockCopyAsyncCallbackFunc)(void *opaque);
24
typedef struct BlockCopyState BlockCopyState;
25
typedef struct BlockCopyCallState BlockCopyCallState;
26
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
27
BdrvRequestFlags write_flags,
28
Error **errp);
29
30
-void block_copy_set_progress_callback(
31
- BlockCopyState *s,
32
- ProgressBytesCallbackFunc progress_bytes_callback,
33
- void *progress_opaque);
34
-
35
void block_copy_set_progress_meter(BlockCopyState *s, ProgressMeter *pm);
36
37
void block_copy_state_free(BlockCopyState *s);
38
diff --git a/block/block-copy.c b/block/block-copy.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/block/block-copy.c
41
+++ b/block/block-copy.c
42
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyState {
43
bool skip_unallocated;
44
45
ProgressMeter *progress;
46
- /* progress_bytes_callback: called when some copying progress is done. */
47
- ProgressBytesCallbackFunc progress_bytes_callback;
48
- void *progress_opaque;
49
50
SharedResource *mem;
51
52
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
53
return s;
54
}
26
}
55
27
56
-void block_copy_set_progress_callback(
28
-static void error_callback_bh(void *opaque)
57
- BlockCopyState *s,
58
- ProgressBytesCallbackFunc progress_bytes_callback,
59
- void *progress_opaque)
60
-{
29
-{
61
- s->progress_bytes_callback = progress_bytes_callback;
30
- Coroutine *co = opaque;
62
- s->progress_opaque = progress_opaque;
31
- qemu_coroutine_enter(co);
63
-}
32
-}
64
-
33
-
65
void block_copy_set_progress_meter(BlockCopyState *s, ProgressMeter *pm)
34
static int inject_error(BlockDriverState *bs, BlkdebugRule *rule)
66
{
35
{
67
s->progress = pm;
36
BDRVBlkdebugState *s = bs->opaque;
68
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
37
@@ -XXX,XX +XXX,XX @@ static int inject_error(BlockDriverState *bs, BlkdebugRule *rule)
69
t->call_state->error_is_read = error_is_read;
70
} else {
71
progress_work_done(t->s->progress, t->bytes);
72
- if (t->s->progress_bytes_callback) {
73
- t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
74
- }
75
}
38
}
76
co_put_to_shres(t->s->mem, t->bytes);
39
77
block_copy_task_end(t, ret);
40
if (!immediately) {
41
- aio_bh_schedule_oneshot(bdrv_get_aio_context(bs), error_callback_bh,
42
- qemu_coroutine_self());
43
+ aio_co_schedule(qemu_get_current_aio_context(), qemu_coroutine_self());
44
qemu_coroutine_yield();
45
}
46
78
--
47
--
79
2.29.2
48
2.9.3
80
49
81
50
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Add a function to cancel a running async block-copy call. It will be
used in backup.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-8-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/block/block-copy.h | 13 +++++++++++++
 block/block-copy.c         | 24 +++++++++++++++++++-----
 2 files changed, 32 insertions(+), 5 deletions(-)

From: Paolo Bonzini <pbonzini@redhat.com>

qed_aio_start_io and qed_aio_next_io will not have to acquire/release
the AioContext, while qed_aio_next_io_cb will. Split the functionality
and gain a little type-safety in the process.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-11-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/qed.c | 39 +++++++++++++++++++++++++--------------
 1 file changed, 25 insertions(+), 14 deletions(-)
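A sketch of the pause/resume idiom that block_copy_call_cancel() enables (it mirrors backup_pause() from the earlier backup patch; the wake-up on completion is assumed to come from the call's completion callback):

    static void coroutine_fn pause_background_copy(BlockCopyCallState *call)
    {
        if (call && !block_copy_call_finished(call)) {
            block_copy_call_cancel(call);          /* async: only marks it */
            while (!block_copy_call_finished(call)) {
                qemu_coroutine_yield();            /* woken when it finishes */
            }
        }
        /* The caller later frees the call and restarts block-copy with the
         * same parameters to resume; dirty bits stay correct across cancel. */
    }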
15
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
17
diff --git a/block/qed.c b/block/qed.c
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/include/block/block-copy.h
19
--- a/block/qed.c
18
+++ b/include/block/block-copy.h
20
+++ b/block/qed.c
19
@@ -XXX,XX +XXX,XX @@ void block_copy_call_free(BlockCopyCallState *call_state);
21
@@ -XXX,XX +XXX,XX @@ static CachedL2Table *qed_new_l2_table(BDRVQEDState *s)
20
bool block_copy_call_finished(BlockCopyCallState *call_state);
22
return l2_table;
21
bool block_copy_call_succeeded(BlockCopyCallState *call_state);
23
}
22
bool block_copy_call_failed(BlockCopyCallState *call_state);
24
23
+bool block_copy_call_cancelled(BlockCopyCallState *call_state);
25
-static void qed_aio_next_io(void *opaque, int ret);
24
int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read);
26
+static void qed_aio_next_io(QEDAIOCB *acb, int ret);
25
26
void block_copy_set_speed(BlockCopyState *s, uint64_t speed);
27
void block_copy_kick(BlockCopyCallState *call_state);
28
29
+/*
30
+ * Cancel running block-copy call.
31
+ *
32
+ * Cancel leaves block-copy state valid: dirty bits are correct and you may use
33
+ * cancel + <run block_copy with same parameters> to emulate pause/resume.
34
+ *
35
+ * Note also, that the cancel is async: it only marks block-copy call to be
36
+ * cancelled. So, the call may be cancelled (block_copy_call_cancelled() reports
37
+ * true) but not yet finished (block_copy_call_finished() reports false).
38
+ */
39
+void block_copy_call_cancel(BlockCopyCallState *call_state);
40
+
27
+
41
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
28
+static void qed_aio_start_io(QEDAIOCB *acb)
42
void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
29
+{
43
30
+ qed_aio_next_io(acb, 0);
44
diff --git a/block/block-copy.c b/block/block-copy.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/block/block-copy.c
47
+++ b/block/block-copy.c
48
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
49
int ret;
50
bool finished;
51
QemuCoSleepState *sleep_state;
52
+ bool cancelled;
53
54
/* OUT parameters */
55
bool error_is_read;
56
@@ -XXX,XX +XXX,XX @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
57
assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
58
assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
59
60
- while (bytes && aio_task_pool_status(aio) == 0) {
61
+ while (bytes && aio_task_pool_status(aio) == 0 && !call_state->cancelled) {
62
BlockCopyTask *task;
63
int64_t status_bytes;
64
65
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
66
do {
67
ret = block_copy_dirty_clusters(call_state);
68
69
- if (ret == 0) {
70
+ if (ret == 0 && !call_state->cancelled) {
71
ret = block_copy_wait_one(call_state->s, call_state->offset,
72
call_state->bytes);
73
}
74
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
75
* 2. We have waited for some intersecting block-copy request
76
* It may have failed and produced new dirty bits.
77
*/
78
- } while (ret > 0);
79
+ } while (ret > 0 && !call_state->cancelled);
80
81
call_state->finished = true;
82
83
@@ -XXX,XX +XXX,XX @@ bool block_copy_call_finished(BlockCopyCallState *call_state)
84
85
bool block_copy_call_succeeded(BlockCopyCallState *call_state)
86
{
87
- return call_state->finished && call_state->ret == 0;
88
+ return call_state->finished && !call_state->cancelled &&
89
+ call_state->ret == 0;
90
}
91
92
bool block_copy_call_failed(BlockCopyCallState *call_state)
93
{
94
- return call_state->finished && call_state->ret < 0;
95
+ return call_state->finished && !call_state->cancelled &&
96
+ call_state->ret < 0;
97
+}
31
+}
98
+
32
+
99
+bool block_copy_call_cancelled(BlockCopyCallState *call_state)
33
+static void qed_aio_next_io_cb(void *opaque, int ret)
100
+{
34
+{
101
+ return call_state->cancelled;
35
+ QEDAIOCB *acb = opaque;
36
+
37
+ qed_aio_next_io(acb, ret);
38
+}
39
40
static void qed_plug_allocating_write_reqs(BDRVQEDState *s)
41
{
42
@@ -XXX,XX +XXX,XX @@ static void qed_unplug_allocating_write_reqs(BDRVQEDState *s)
43
44
acb = QSIMPLEQ_FIRST(&s->allocating_write_reqs);
45
if (acb) {
46
- qed_aio_next_io(acb, 0);
47
+ qed_aio_start_io(acb);
48
}
102
}
49
}
103
50
104
int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read)
51
@@ -XXX,XX +XXX,XX @@ static void qed_aio_complete(QEDAIOCB *acb, int ret)
105
@@ -XXX,XX +XXX,XX @@ int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read)
52
QSIMPLEQ_REMOVE_HEAD(&s->allocating_write_reqs, next);
106
return call_state->ret;
53
acb = QSIMPLEQ_FIRST(&s->allocating_write_reqs);
54
if (acb) {
55
- qed_aio_next_io(acb, 0);
56
+ qed_aio_start_io(acb);
57
} else if (s->header.features & QED_F_NEED_CHECK) {
58
qed_start_need_check_timer(s);
59
}
60
@@ -XXX,XX +XXX,XX @@ static void qed_commit_l2_update(void *opaque, int ret)
61
acb->request.l2_table = qed_find_l2_cache_entry(&s->l2_cache, l2_offset);
62
assert(acb->request.l2_table != NULL);
63
64
- qed_aio_next_io(opaque, ret);
65
+ qed_aio_next_io(acb, ret);
107
}
66
}
108
67
109
+void block_copy_call_cancel(BlockCopyCallState *call_state)
68
/**
110
+{
69
@@ -XXX,XX +XXX,XX @@ static void qed_aio_write_l2_update(QEDAIOCB *acb, int ret, uint64_t offset)
111
+ call_state->cancelled = true;
70
if (need_alloc) {
112
+ block_copy_kick(call_state);
71
/* Write out the whole new L2 table */
113
+}
72
qed_write_l2_table(s, &acb->request, 0, s->table_nelems, true,
114
+
73
- qed_aio_write_l1_update, acb);
115
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s)
74
+ qed_aio_write_l1_update, acb);
75
} else {
76
/* Write out only the updated part of the L2 table */
77
qed_write_l2_table(s, &acb->request, index, acb->cur_nclusters, false,
78
- qed_aio_next_io, acb);
79
+ qed_aio_next_io_cb, acb);
80
}
81
return;
82
83
@@ -XXX,XX +XXX,XX @@ static void qed_aio_write_main(void *opaque, int ret)
84
}
85
86
if (acb->find_cluster_ret == QED_CLUSTER_FOUND) {
87
- next_fn = qed_aio_next_io;
88
+ next_fn = qed_aio_next_io_cb;
89
} else {
90
if (s->bs->backing) {
91
next_fn = qed_aio_write_flush_before_l2_update;
92
@@ -XXX,XX +XXX,XX @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
93
if (acb->flags & QED_AIOCB_ZERO) {
94
/* Skip ahead if the clusters are already zero */
95
if (acb->find_cluster_ret == QED_CLUSTER_ZERO) {
96
- qed_aio_next_io(acb, 0);
97
+ qed_aio_start_io(acb);
98
return;
99
}
100
101
@@ -XXX,XX +XXX,XX @@ static void qed_aio_read_data(void *opaque, int ret,
102
/* Handle zero cluster and backing file reads */
103
if (ret == QED_CLUSTER_ZERO) {
104
qemu_iovec_memset(&acb->cur_qiov, 0, 0, acb->cur_qiov.size);
105
- qed_aio_next_io(acb, 0);
106
+ qed_aio_start_io(acb);
107
return;
108
} else if (ret != QED_CLUSTER_FOUND) {
109
qed_read_backing_file(s, acb->cur_pos, &acb->cur_qiov,
110
- &acb->backing_qiov, qed_aio_next_io, acb);
111
+ &acb->backing_qiov, qed_aio_next_io_cb, acb);
112
return;
113
}
114
115
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
116
bdrv_aio_readv(bs->file, offset / BDRV_SECTOR_SIZE,
117
&acb->cur_qiov, acb->cur_qiov.size / BDRV_SECTOR_SIZE,
118
- qed_aio_next_io, acb);
119
+ qed_aio_next_io_cb, acb);
120
return;
121
122
err:
123
@@ -XXX,XX +XXX,XX @@ err:
124
/**
125
* Begin next I/O or complete the request
126
*/
127
-static void qed_aio_next_io(void *opaque, int ret)
128
+static void qed_aio_next_io(QEDAIOCB *acb, int ret)
116
{
129
{
117
return s->copy_bitmap;
130
- QEDAIOCB *acb = opaque;
131
BDRVQEDState *s = acb_to_s(acb);
132
QEDFindClusterFunc *io_fn = (acb->flags & QED_AIOCB_WRITE) ?
133
qed_aio_write_data : qed_aio_read_data;
134
@@ -XXX,XX +XXX,XX @@ static BlockAIOCB *qed_aio_setup(BlockDriverState *bs,
135
qemu_iovec_init(&acb->cur_qiov, qiov->niov);
136
137
/* Start request */
138
- qed_aio_next_io(acb, 0);
139
+ qed_aio_start_io(acb);
140
return &acb->common;
141
}
142
118
--
143
--
119
2.29.2
144
2.9.3
120
145
121
146
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Add a direct link to the target bs for convenience and to simplify the
following commit, which will insert the COR filter above the target bs.

This is a part of the original commit written by Andrey.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-13-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/stream.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

From: Paolo Bonzini <pbonzini@redhat.com>

The AioContext data structures are now protected by list_lock and/or
they are walked with FOREACH_RCU primitives. There is no need anymore
to acquire the AioContext for the entire duration of aio_dispatch.
Instead, just acquire it before and after invoking the callbacks.
The next step is then to push it further down.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-12-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/aio-posix.c | 25 +++++++++++--------------
 util/aio-win32.c | 15 +++++++--------
 util/async.c     |  2 ++
 3 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
diff --git a/block/stream.c b/block/stream.c
17
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
18
--- a/block/stream.c
23
--- a/util/aio-posix.c
19
+++ b/block/stream.c
24
+++ b/util/aio-posix.c
20
@@ -XXX,XX +XXX,XX @@ typedef struct StreamBlockJob {
25
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx)
21
BlockJob common;
26
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
22
BlockDriverState *base_overlay; /* COW overlay (stream from this) */
27
aio_node_check(ctx, node->is_external) &&
23
BlockDriverState *above_base; /* Node directly above the base */
28
node->io_read) {
24
+ BlockDriverState *target_bs;
29
+ aio_context_acquire(ctx);
25
BlockdevOnError on_error;
30
node->io_read(node->opaque);
26
char *backing_file_str;
31
+ aio_context_release(ctx);
27
bool bs_read_only;
32
28
@@ -XXX,XX +XXX,XX @@ static void stream_abort(Job *job)
33
/* aio_notify() does not count as progress */
29
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
34
if (node->opaque != &ctx->notifier) {
30
35
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx)
31
if (s->chain_frozen) {
36
(revents & (G_IO_OUT | G_IO_ERR)) &&
32
- BlockJob *bjob = &s->common;
37
aio_node_check(ctx, node->is_external) &&
33
- bdrv_unfreeze_backing_chain(blk_bs(bjob->blk), s->above_base);
38
node->io_write) {
34
+ bdrv_unfreeze_backing_chain(s->target_bs, s->above_base);
39
+ aio_context_acquire(ctx);
40
node->io_write(node->opaque);
41
+ aio_context_release(ctx);
42
progress = true;
43
}
44
45
@@ -XXX,XX +XXX,XX @@ bool aio_dispatch(AioContext *ctx, bool dispatch_fds)
35
}
46
}
47
48
/* Run our timers */
49
+ aio_context_acquire(ctx);
50
progress |= timerlistgroup_run_timers(&ctx->tlg);
51
+ aio_context_release(ctx);
52
53
return progress;
36
}
54
}
37
55
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
38
static int stream_prepare(Job *job)
56
int64_t timeout;
39
{
57
int64_t start = 0;
40
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
58
41
- BlockJob *bjob = &s->common;
59
- aio_context_acquire(ctx);
42
- BlockDriverState *bs = blk_bs(bjob->blk);
60
- progress = false;
43
- BlockDriverState *unfiltered_bs = bdrv_skip_filters(bs);
61
-
44
+ BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
62
/* aio_notify can avoid the expensive event_notifier_set if
45
BlockDriverState *base = bdrv_filter_or_cow_bs(s->above_base);
63
* everything (file descriptors, bottom halves, timers) will
46
BlockDriverState *unfiltered_base = bdrv_skip_filters(base);
64
* be re-evaluated before the next blocking poll(). This is
47
Error *local_err = NULL;
65
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
48
int ret = 0;
66
start = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
49
50
- bdrv_unfreeze_backing_chain(bs, s->above_base);
51
+ bdrv_unfreeze_backing_chain(s->target_bs, s->above_base);
52
s->chain_frozen = false;
53
54
if (bdrv_cow_child(unfiltered_bs)) {
55
@@ -XXX,XX +XXX,XX @@ static void stream_clean(Job *job)
56
{
57
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
58
BlockJob *bjob = &s->common;
59
- BlockDriverState *bs = blk_bs(bjob->blk);
60
61
/* Reopen the image back in read-only mode if necessary */
62
if (s->bs_read_only) {
63
/* Give up write permissions before making it read-only */
64
blk_set_perm(bjob->blk, 0, BLK_PERM_ALL, &error_abort);
65
- bdrv_reopen_set_read_only(bs, true, NULL);
66
+ bdrv_reopen_set_read_only(s->target_bs, true, NULL);
67
}
67
}
68
68
69
g_free(s->backing_file_str);
69
- if (try_poll_mode(ctx, blocking)) {
70
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
70
- progress = true;
71
{
71
- } else {
72
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
72
+ aio_context_acquire(ctx);
73
BlockBackend *blk = s->common.blk;
73
+ progress = try_poll_mode(ctx, blocking);
74
- BlockDriverState *bs = blk_bs(blk);
74
+ aio_context_release(ctx);
75
- BlockDriverState *unfiltered_bs = bdrv_skip_filters(bs);
75
+
76
+ BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
76
+ if (!progress) {
77
bool enable_cor = !bdrv_cow_child(s->base_overlay);
77
assert(npfd == 0);
78
int64_t len;
78
79
int64_t offset = 0;
79
/* fill pollfds */
80
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
80
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
81
return 0;
81
timeout = blocking ? aio_compute_timeout(ctx) : 0;
82
83
/* wait until next event */
84
- if (timeout) {
85
- aio_context_release(ctx);
86
- }
87
if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) {
88
AioHandler epoll_handler;
89
90
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
91
} else {
92
ret = qemu_poll_ns(pollfds, npfd, timeout);
93
}
94
- if (timeout) {
95
- aio_context_acquire(ctx);
96
- }
82
}
97
}
83
98
84
- len = bdrv_getlength(bs);
99
if (blocking) {
85
+ len = bdrv_getlength(s->target_bs);
100
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
86
if (len < 0) {
101
progress = true;
87
return len;
88
}
102
}
89
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
103
90
* account.
104
- aio_context_release(ctx);
91
*/
105
-
92
if (enable_cor) {
106
return progress;
93
- bdrv_enable_copy_on_read(bs);
107
}
94
+ bdrv_enable_copy_on_read(s->target_bs);
108
95
}
109
diff --git a/util/aio-win32.c b/util/aio-win32.c
96
110
index XXXXXXX..XXXXXXX 100644
97
for ( ; offset < len; offset += n) {
111
--- a/util/aio-win32.c
98
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
112
+++ b/util/aio-win32.c
99
}
113
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
100
114
(revents || event_notifier_get_handle(node->e) == event) &&
101
if (enable_cor) {
115
node->io_notify) {
102
- bdrv_disable_copy_on_read(bs);
116
node->pfd.revents = 0;
103
+ bdrv_disable_copy_on_read(s->target_bs);
117
+ aio_context_acquire(ctx);
104
}
118
node->io_notify(node->e);
105
119
+ aio_context_release(ctx);
106
/* Do not remove the backing file if an error was there but ignored. */
120
107
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
121
/* aio_notify() does not count as progress */
108
s->base_overlay = base_overlay;
122
if (node->e != &ctx->notifier) {
109
s->above_base = above_base;
123
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
110
s->backing_file_str = g_strdup(backing_file_str);
124
(node->io_read || node->io_write)) {
111
+ s->target_bs = bs;
125
node->pfd.revents = 0;
112
s->bs_read_only = bs_read_only;
126
if ((revents & G_IO_IN) && node->io_read) {
113
s->chain_frozen = true;
127
+ aio_context_acquire(ctx);
114
128
node->io_read(node->opaque);
129
+ aio_context_release(ctx);
130
progress = true;
131
}
132
if ((revents & G_IO_OUT) && node->io_write) {
133
+ aio_context_acquire(ctx);
134
node->io_write(node->opaque);
135
+ aio_context_release(ctx);
136
progress = true;
137
}
138
139
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
140
int count;
141
int timeout;
142
143
- aio_context_acquire(ctx);
144
progress = false;
145
146
/* aio_notify can avoid the expensive event_notifier_set if
147
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
148
149
timeout = blocking && !have_select_revents
150
? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
151
- if (timeout) {
152
- aio_context_release(ctx);
153
- }
154
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
155
if (blocking) {
156
assert(first);
157
atomic_sub(&ctx->notify_me, 2);
158
}
159
- if (timeout) {
160
- aio_context_acquire(ctx);
161
- }
162
163
if (first) {
164
aio_notify_accept(ctx);
165
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
166
progress |= aio_dispatch_handlers(ctx, event);
167
} while (count > 0);
168
169
+ aio_context_acquire(ctx);
170
progress |= timerlistgroup_run_timers(&ctx->tlg);
171
-
172
aio_context_release(ctx);
173
return progress;
174
}
175
diff --git a/util/async.c b/util/async.c
176
index XXXXXXX..XXXXXXX 100644
177
--- a/util/async.c
178
+++ b/util/async.c
179
@@ -XXX,XX +XXX,XX @@ int aio_bh_poll(AioContext *ctx)
180
ret = 1;
181
}
182
bh->idle = 0;
183
+ aio_context_acquire(ctx);
184
aio_bh_call(bh);
185
+ aio_context_release(ctx);
186
}
187
if (bh->deleted) {
188
deleted = true;
115
--
189
--
116
2.29.2
190
2.9.3
117
191
118
192
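The aio-posix/aio-win32 hunks above reduce to this dispatch-time locking pattern (sketch only; AioHandler is the internal handler type of util/aio-posix.c):

    /* Take the AioContext lock only around the user callback, not for the
     * whole of aio_poll()/aio_dispatch(). */
    static void dispatch_read_handler(AioContext *ctx, AioHandler *node)
    {
        aio_context_acquire(ctx);
        node->io_read(node->opaque);
        aio_context_release(ctx);
    }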
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Add an option to limit copy-on-read operations to a specified sub-chain
of the backing chain, to make the copy-on-read filter useful for the
block-stream job.

Suggested-by: Max Reitz <mreitz@redhat.com>
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: change subject, modified to freeze the chain,
             do some fixes]
Message-Id: <20201216061703.70908-6-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 qapi/block-core.json | 20 ++++++++-
 block/copy-on-read.c | 98 +++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 115 insertions(+), 3 deletions(-)

From: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-13-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/qed.h                 |  3 +++
 block/curl.c                |  2 ++
 block/io.c                  |  5 +++++
 block/iscsi.c               |  8 ++++++--
 block/null.c                |  4 ++++
 block/qed.c                 | 12 ++++++++++++
 block/throttle-groups.c     |  2 ++
 util/aio-posix.c            |  2 --
 util/aio-win32.c            |  2 --
 util/qemu-coroutine-sleep.c |  2 +-
 10 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -XXX,XX +XXX,XX @@
     'data': { 'throttle-group': 'str',
               'file' : 'BlockdevRef'
   } }
28
+
22
diff --git a/block/qed.h b/block/qed.h
29
+##
23
index XXXXXXX..XXXXXXX 100644
30
+# @BlockdevOptionsCor:
24
--- a/block/qed.h
31
+#
25
+++ b/block/qed.h
32
+# Driver specific block device options for the copy-on-read driver.
26
@@ -XXX,XX +XXX,XX @@ enum {
33
+#
27
*/
34
+# @bottom: The name of a non-filter node (allocation-bearing layer) that
28
typedef void QEDFindClusterFunc(void *opaque, int ret, uint64_t offset, size_t len);
35
+# limits the COR operations in the backing chain (inclusive), so
29
36
+# that no data below this node will be copied by this filter.
30
+void qed_acquire(BDRVQEDState *s);
37
+# If option is absent, the limit is not applied, so that data
31
+void qed_release(BDRVQEDState *s);
38
+# from all backing layers may be copied.
32
+
39
+#
33
/**
40
+# Since: 6.0
34
* Generic callback for chaining async callbacks
41
+##
35
*/
42
+{ 'struct': 'BlockdevOptionsCor',
36
diff --git a/block/curl.c b/block/curl.c
43
+ 'base': 'BlockdevOptionsGenericFormat',
37
index XXXXXXX..XXXXXXX 100644
44
+ 'data': { '*bottom': 'str' } }
38
--- a/block/curl.c
45
+
39
+++ b/block/curl.c
46
##
40
@@ -XXX,XX +XXX,XX @@ static void curl_multi_timeout_do(void *arg)
47
# @BlockdevOptions:
41
return;
48
#
42
}
49
@@ -XXX,XX +XXX,XX @@
43
50
'bochs': 'BlockdevOptionsGenericFormat',
44
+ aio_context_acquire(s->aio_context);
51
'cloop': 'BlockdevOptionsGenericFormat',
45
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
52
'compress': 'BlockdevOptionsGenericFormat',
46
53
- 'copy-on-read':'BlockdevOptionsGenericFormat',
47
curl_multi_check_completion(s);
54
+ 'copy-on-read':'BlockdevOptionsCor',
48
+ aio_context_release(s->aio_context);
55
'dmg': 'BlockdevOptionsGenericFormat',
49
#else
56
'file': 'BlockdevOptionsFile',
50
abort();
57
'ftp': 'BlockdevOptionsCurlFtp',
51
#endif
58
diff --git a/block/copy-on-read.c b/block/copy-on-read.c
52
diff --git a/block/io.c b/block/io.c
59
index XXXXXXX..XXXXXXX 100644
53
index XXXXXXX..XXXXXXX 100644
60
--- a/block/copy-on-read.c
54
--- a/block/io.c
61
+++ b/block/copy-on-read.c
55
+++ b/block/io.c
62
@@ -XXX,XX +XXX,XX @@
56
@@ -XXX,XX +XXX,XX @@ void bdrv_aio_cancel(BlockAIOCB *acb)
63
#include "block/block_int.h"
57
if (acb->aiocb_info->get_aio_context) {
64
#include "qemu/module.h"
58
aio_poll(acb->aiocb_info->get_aio_context(acb), true);
65
#include "qapi/error.h"
59
} else if (acb->bs) {
66
+#include "qapi/qmp/qdict.h"
60
+ /* qemu_aio_ref and qemu_aio_unref are not thread-safe, so
67
#include "block/copy-on-read.h"
61
+ * assert that we're not using an I/O thread. Thread-safe
68
62
+ * code should use bdrv_aio_cancel_async exclusively.
69
63
+ */
70
typedef struct BDRVStateCOR {
64
+ assert(bdrv_get_aio_context(acb->bs) == qemu_get_aio_context());
71
bool active;
65
aio_poll(bdrv_get_aio_context(acb->bs), true);
72
+ BlockDriverState *bottom_bs;
66
} else {
73
+ bool chain_frozen;
67
abort();
74
} BDRVStateCOR;
68
diff --git a/block/iscsi.c b/block/iscsi.c
75
69
index XXXXXXX..XXXXXXX 100644
76
70
--- a/block/iscsi.c
77
static int cor_open(BlockDriverState *bs, QDict *options, int flags,
71
+++ b/block/iscsi.c
78
Error **errp)
72
@@ -XXX,XX +XXX,XX @@ static void iscsi_retry_timer_expired(void *opaque)
73
struct IscsiTask *iTask = opaque;
74
iTask->complete = 1;
75
if (iTask->co) {
76
- qemu_coroutine_enter(iTask->co);
77
+ aio_co_wake(iTask->co);
78
}
79
}
80
81
@@ -XXX,XX +XXX,XX @@ static void iscsi_nop_timed_event(void *opaque)
79
{
82
{
80
+ BlockDriverState *bottom_bs = NULL;
83
IscsiLun *iscsilun = opaque;
81
BDRVStateCOR *state = bs->opaque;
84
82
+ /* Find a bottom node name, if any */
85
+ aio_context_acquire(iscsilun->aio_context);
83
+ const char *bottom_node = qdict_get_try_str(options, "bottom");
86
if (iscsi_get_nops_in_flight(iscsilun->iscsi) >= MAX_NOP_FAILURES) {
84
87
error_report("iSCSI: NOP timeout. Reconnecting...");
85
bs->file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
88
iscsilun->request_timed_out = true;
86
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
89
} else if (iscsi_nop_out_async(iscsilun->iscsi, NULL, NULL, 0, NULL) != 0) {
87
@@ -XXX,XX +XXX,XX @@ static int cor_open(BlockDriverState *bs, QDict *options, int flags,
90
error_report("iSCSI: failed to sent NOP-Out. Disabling NOP messages.");
88
((BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK) &
91
- return;
89
bs->file->bs->supported_zero_flags);
92
+ goto out;
90
93
}
91
+ if (bottom_node) {
94
92
+ bottom_bs = bdrv_find_node(bottom_node);
95
timer_mod(iscsilun->nop_timer, qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + NOP_INTERVAL);
93
+ if (!bottom_bs) {
96
iscsi_set_events(iscsilun);
94
+ error_setg(errp, "Bottom node '%s' not found", bottom_node);
97
+
95
+ qdict_del(options, "bottom");
98
+out:
96
+ return -EINVAL;
99
+ aio_context_release(iscsilun->aio_context);
97
+ }
100
}
98
+ qdict_del(options, "bottom");
101
99
+
102
static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp)
100
+ if (!bottom_bs->drv) {
103
diff --git a/block/null.c b/block/null.c
101
+ error_setg(errp, "Bottom node '%s' not opened", bottom_node);
104
index XXXXXXX..XXXXXXX 100644
102
+ return -EINVAL;
105
--- a/block/null.c
103
+ }
106
+++ b/block/null.c
104
+
107
@@ -XXX,XX +XXX,XX @@ static void null_bh_cb(void *opaque)
105
+ if (bottom_bs->drv->is_filter) {
108
static void null_timer_cb(void *opaque)
106
+ error_setg(errp, "Bottom node '%s' is a filter", bottom_node);
107
+ return -EINVAL;
108
+ }
109
+
110
+ if (bdrv_freeze_backing_chain(bs, bottom_bs, errp) < 0) {
111
+ return -EINVAL;
112
+ }
113
+ state->chain_frozen = true;
114
+
115
+ /*
116
+ * We do freeze the chain, so it shouldn't be removed. Still, storing a
117
+ * pointer worth bdrv_ref().
118
+ */
119
+ bdrv_ref(bottom_bs);
120
+ }
121
state->active = true;
122
+ state->bottom_bs = bottom_bs;
123
124
/*
125
* We don't need to call bdrv_child_refresh_perms() now as the permissions
126
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn cor_co_preadv_part(BlockDriverState *bs,
127
size_t qiov_offset,
128
int flags)
129
{
109
{
130
- return bdrv_co_preadv_part(bs->file, offset, bytes, qiov, qiov_offset,
110
NullAIOCB *acb = opaque;
131
- flags | BDRV_REQ_COPY_ON_READ);
111
+ AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
132
+ int64_t n;
112
+
133
+ int local_flags;
113
+ aio_context_acquire(ctx);
134
+ int ret;
114
acb->common.cb(acb->common.opaque, 0);
135
+ BDRVStateCOR *state = bs->opaque;
115
+ aio_context_release(ctx);
136
+
116
timer_deinit(&acb->timer);
137
+ if (!state->bottom_bs) {
117
qemu_aio_unref(acb);
138
+ return bdrv_co_preadv_part(bs->file, offset, bytes, qiov, qiov_offset,
118
}
139
+ flags | BDRV_REQ_COPY_ON_READ);
119
diff --git a/block/qed.c b/block/qed.c
140
+ }
120
index XXXXXXX..XXXXXXX 100644
141
+
121
--- a/block/qed.c
142
+ while (bytes) {
122
+++ b/block/qed.c
143
+ local_flags = flags;
123
@@ -XXX,XX +XXX,XX @@ static void qed_need_check_timer_cb(void *opaque)
144
+
124
145
+ /* In case of failure, try to copy-on-read anyway */
125
trace_qed_need_check_timer_cb(s);
146
+ ret = bdrv_is_allocated(bs->file->bs, offset, bytes, &n);
126
147
+ if (ret <= 0) {
127
+ qed_acquire(s);
148
+ ret = bdrv_is_allocated_above(bdrv_backing_chain_next(bs->file->bs),
128
qed_plug_allocating_write_reqs(s);
149
+ state->bottom_bs, true, offset,
129
150
+ n, &n);
130
/* Ensure writes are on disk before clearing flag */
151
+ if (ret > 0 || ret < 0) {
131
bdrv_aio_flush(s->bs->file->bs, qed_clear_need_check, s);
152
+ local_flags |= BDRV_REQ_COPY_ON_READ;
132
+ qed_release(s);
153
+ }
133
+}
154
+ /* Finish earlier if the end of a backing file has been reached */
134
+
155
+ if (n == 0) {
135
+void qed_acquire(BDRVQEDState *s)
156
+ break;
157
+ }
158
+ }
159
+
160
+ ret = bdrv_co_preadv_part(bs->file, offset, n, qiov, qiov_offset,
161
+ local_flags);
162
+ if (ret < 0) {
163
+ return ret;
164
+ }
165
+
166
+ offset += n;
167
+ qiov_offset += n;
168
+ bytes -= n;
169
+ }
170
+
171
+ return 0;
172
}
173
174
175
@@ -XXX,XX +XXX,XX @@ static void cor_lock_medium(BlockDriverState *bs, bool locked)
176
}
177
178
179
+static void cor_close(BlockDriverState *bs)
180
+{
136
+{
181
+ BDRVStateCOR *s = bs->opaque;
137
+ aio_context_acquire(bdrv_get_aio_context(s->bs));
182
+
183
+ if (s->chain_frozen) {
184
+ s->chain_frozen = false;
185
+ bdrv_unfreeze_backing_chain(bs, s->bottom_bs);
186
+ }
187
+
188
+ bdrv_unref(s->bottom_bs);
189
+}
138
+}
190
+
139
+
191
+
140
+void qed_release(BDRVQEDState *s)
192
static BlockDriver bdrv_copy_on_read = {
141
+{
193
.format_name = "copy-on-read",
142
+ aio_context_release(bdrv_get_aio_context(s->bs));
194
.instance_size = sizeof(BDRVStateCOR),
143
}
195
144
196
.bdrv_open = cor_open,
145
static void qed_start_need_check_timer(BDRVQEDState *s)
197
+ .bdrv_close = cor_close,
146
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
198
.bdrv_child_perm = cor_child_perm,
147
index XXXXXXX..XXXXXXX 100644
199
148
--- a/block/throttle-groups.c
200
.bdrv_getlength = cor_getlength,
149
+++ b/block/throttle-groups.c
201
@@ -XXX,XX +XXX,XX @@ void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
150
@@ -XXX,XX +XXX,XX @@ static void timer_cb(BlockBackend *blk, bool is_write)
202
bdrv_drained_begin(bs);
151
qemu_mutex_unlock(&tg->lock);
203
/* Drop permissions before the graph change. */
152
204
s->active = false;
153
/* Run the request that was waiting for this timer */
205
+ /* unfreeze, as otherwise bdrv_replace_node() will fail */
154
+ aio_context_acquire(blk_get_aio_context(blk));
206
+ if (s->chain_frozen) {
155
empty_queue = !qemu_co_enter_next(&blkp->throttled_reqs[is_write]);
207
+ s->chain_frozen = false;
156
+ aio_context_release(blk_get_aio_context(blk));
208
+ bdrv_unfreeze_backing_chain(cor_filter_bs, s->bottom_bs);
157
209
+ }
158
/* If the request queue was empty then we have to take care of
210
bdrv_child_refresh_perms(cor_filter_bs, child, &error_abort);
159
* scheduling the next one */
211
bdrv_replace_node(cor_filter_bs, bs, &error_abort);
160
diff --git a/util/aio-posix.c b/util/aio-posix.c
212
161
index XXXXXXX..XXXXXXX 100644
162
--- a/util/aio-posix.c
163
+++ b/util/aio-posix.c
164
@@ -XXX,XX +XXX,XX @@ bool aio_dispatch(AioContext *ctx, bool dispatch_fds)
165
}
166
167
/* Run our timers */
168
- aio_context_acquire(ctx);
169
progress |= timerlistgroup_run_timers(&ctx->tlg);
170
- aio_context_release(ctx);
171
172
return progress;
173
}
174
diff --git a/util/aio-win32.c b/util/aio-win32.c
175
index XXXXXXX..XXXXXXX 100644
176
--- a/util/aio-win32.c
177
+++ b/util/aio-win32.c
178
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
179
progress |= aio_dispatch_handlers(ctx, event);
180
} while (count > 0);
181
182
- aio_context_acquire(ctx);
183
progress |= timerlistgroup_run_timers(&ctx->tlg);
184
- aio_context_release(ctx);
185
return progress;
186
}
187
188
diff --git a/util/qemu-coroutine-sleep.c b/util/qemu-coroutine-sleep.c
189
index XXXXXXX..XXXXXXX 100644
190
--- a/util/qemu-coroutine-sleep.c
191
+++ b/util/qemu-coroutine-sleep.c
192
@@ -XXX,XX +XXX,XX @@ static void co_sleep_cb(void *opaque)
193
{
194
CoSleepCB *sleep_cb = opaque;
195
196
- qemu_coroutine_enter(sleep_cb->co);
197
+ aio_co_wake(sleep_cb->co);
198
}
199
200
void coroutine_fn co_aio_sleep_ns(AioContext *ctx, QEMUClockType type,
213
--
201
--
214
2.29.2
202
2.9.3
215
203
216
204
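A condensed sketch of the allocation check behind the new 'bottom' option in the copy-on-read hunks above: data already present in the top layer is read normally, and only data allocated somewhere in the sub-chain ending (inclusively) at the bottom node triggers copy-on-read. Illustration only; the pnum handling is simplified and, as in the patch, errors are treated as "copy anyway":

    static bool coroutine_fn cor_needed(BlockDriverState *bs,
                                        BlockDriverState *bottom_bs,
                                        int64_t offset, int64_t bytes,
                                        int64_t *pnum)
    {
        int ret = bdrv_is_allocated(bs->file->bs, offset, bytes, pnum);
        if (ret > 0) {
            return false;   /* already allocated in the top layer */
        }
        ret = bdrv_is_allocated_above(bdrv_backing_chain_next(bs->file->bs),
                                      bottom_bs, true, offset, *pnum, pnum);
        return ret != 0;    /* allocated in the limited sub-chain, or error */
    }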
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

It simplifies debugging.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-6-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/block-copy.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

From: Paolo Bonzini <pbonzini@redhat.com>

This covers both file descriptor callbacks and polling callbacks,
since they execute related code.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-14-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/curl.c          | 16 +++++++++++++---
 block/iscsi.c         |  4 ++++
 block/linux-aio.c     |  4 ++++
 block/nfs.c           |  6 ++++++
 block/sheepdog.c      | 29 +++++++++++++++--------------
 block/ssh.c           | 29 +++++++++--------------------
 block/win32-aio.c     | 10 ++++++----
 hw/block/virtio-blk.c |  5 ++++-
 hw/scsi/virtio-scsi.c |  7 +++++++
 util/aio-posix.c      |  7 -------
 util/aio-win32.c      |  6 ------
 11 files changed, 68 insertions(+), 55 deletions(-)
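The point of keeping every BlockCopyCallState on s->calls is that in-flight calls become enumerable, e.g. from a debugger or an assertion. A sketch, assuming only the list field and list head added in the hunk below:

    /* Calls are removed from s->calls once block_copy_common() returns,
     * so walking the list visits the calls that are still in flight. */
    static int count_inflight_calls(BlockCopyState *s)
    {
        BlockCopyCallState *call_state;
        int n = 0;

        QLIST_FOREACH(call_state, &s->calls, list) {
            n++;
        }
        return n;
    }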
13
diff --git a/block/block-copy.c b/block/block-copy.c
26
diff --git a/block/curl.c b/block/curl.c
14
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
15
--- a/block/block-copy.c
28
--- a/block/curl.c
16
+++ b/block/block-copy.c
29
+++ b/block/curl.c
17
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
30
@@ -XXX,XX +XXX,XX @@ static void curl_multi_check_completion(BDRVCURLState *s)
18
/* Coroutine where async block-copy is running */
31
}
19
Coroutine *co;
32
}
20
33
21
+ /* To reference all call states from BlockCopyState */
34
-static void curl_multi_do(void *arg)
22
+ QLIST_ENTRY(BlockCopyCallState) list;
35
+static void curl_multi_do_locked(CURLState *s)
23
+
36
{
24
/* State */
37
- CURLState *s = (CURLState *)arg;
38
CURLSocket *socket, *next_socket;
39
int running;
40
int r;
41
@@ -XXX,XX +XXX,XX @@ static void curl_multi_do(void *arg)
42
}
43
}
44
45
+static void curl_multi_do(void *arg)
46
+{
47
+ CURLState *s = (CURLState *)arg;
48
+
49
+ aio_context_acquire(s->s->aio_context);
50
+ curl_multi_do_locked(s);
51
+ aio_context_release(s->s->aio_context);
52
+}
53
+
54
static void curl_multi_read(void *arg)
55
{
56
CURLState *s = (CURLState *)arg;
57
58
- curl_multi_do(arg);
59
+ aio_context_acquire(s->s->aio_context);
60
+ curl_multi_do_locked(s);
61
curl_multi_check_completion(s->s);
62
+ aio_context_release(s->s->aio_context);
63
}
64
65
static void curl_multi_timeout_do(void *arg)
66
diff --git a/block/iscsi.c b/block/iscsi.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/block/iscsi.c
69
+++ b/block/iscsi.c
70
@@ -XXX,XX +XXX,XX @@ iscsi_process_read(void *arg)
71
IscsiLun *iscsilun = arg;
72
struct iscsi_context *iscsi = iscsilun->iscsi;
73
74
+ aio_context_acquire(iscsilun->aio_context);
75
iscsi_service(iscsi, POLLIN);
76
iscsi_set_events(iscsilun);
77
+ aio_context_release(iscsilun->aio_context);
78
}
79
80
static void
81
@@ -XXX,XX +XXX,XX @@ iscsi_process_write(void *arg)
82
IscsiLun *iscsilun = arg;
83
struct iscsi_context *iscsi = iscsilun->iscsi;
84
85
+ aio_context_acquire(iscsilun->aio_context);
86
iscsi_service(iscsi, POLLOUT);
87
iscsi_set_events(iscsilun);
88
+ aio_context_release(iscsilun->aio_context);
89
}
90
91
static int64_t sector_lun2qemu(int64_t sector, IscsiLun *iscsilun)
92
diff --git a/block/linux-aio.c b/block/linux-aio.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/block/linux-aio.c
95
+++ b/block/linux-aio.c
96
@@ -XXX,XX +XXX,XX @@ static void qemu_laio_completion_cb(EventNotifier *e)
97
LinuxAioState *s = container_of(e, LinuxAioState, e);
98
99
if (event_notifier_test_and_clear(&s->e)) {
100
+ aio_context_acquire(s->aio_context);
101
qemu_laio_process_completions_and_submit(s);
102
+ aio_context_release(s->aio_context);
103
}
104
}
105
106
@@ -XXX,XX +XXX,XX @@ static bool qemu_laio_poll_cb(void *opaque)
107
return false;
108
}
109
110
+ aio_context_acquire(s->aio_context);
111
qemu_laio_process_completions_and_submit(s);
112
+ aio_context_release(s->aio_context);
113
return true;
114
}
115
116
diff --git a/block/nfs.c b/block/nfs.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/block/nfs.c
119
+++ b/block/nfs.c
120
@@ -XXX,XX +XXX,XX @@ static void nfs_set_events(NFSClient *client)
121
static void nfs_process_read(void *arg)
122
{
123
NFSClient *client = arg;
124
+
125
+ aio_context_acquire(client->aio_context);
126
nfs_service(client->context, POLLIN);
127
nfs_set_events(client);
128
+ aio_context_release(client->aio_context);
129
}
130
131
static void nfs_process_write(void *arg)
132
{
133
NFSClient *client = arg;
134
+
135
+ aio_context_acquire(client->aio_context);
136
nfs_service(client->context, POLLOUT);
137
nfs_set_events(client);
138
+ aio_context_release(client->aio_context);
139
}
140
141
static void nfs_co_init_task(BlockDriverState *bs, NFSRPC *task)
142
diff --git a/block/sheepdog.c b/block/sheepdog.c
143
index XXXXXXX..XXXXXXX 100644
144
--- a/block/sheepdog.c
145
+++ b/block/sheepdog.c
146
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int send_co_req(int sockfd, SheepdogReq *hdr, void *data,
147
return ret;
148
}
149
150
-static void restart_co_req(void *opaque)
151
-{
152
- Coroutine *co = opaque;
153
-
154
- qemu_coroutine_enter(co);
155
-}
156
-
157
typedef struct SheepdogReqCo {
158
int sockfd;
159
BlockDriverState *bs;
160
@@ -XXX,XX +XXX,XX @@ typedef struct SheepdogReqCo {
161
unsigned int *rlen;
25
int ret;
162
int ret;
26
bool finished;
163
bool finished;
27
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyState {
164
+ Coroutine *co;
28
bool use_copy_range;
165
} SheepdogReqCo;
29
int64_t copy_size;
166
30
uint64_t len;
167
+static void restart_co_req(void *opaque)
31
- QLIST_HEAD(, BlockCopyTask) tasks;
168
+{
32
+ QLIST_HEAD(, BlockCopyTask) tasks; /* All tasks from all block-copy calls */
169
+ SheepdogReqCo *srco = opaque;
33
+ QLIST_HEAD(, BlockCopyCallState) calls;
170
+
34
171
+ aio_co_wake(srco->co);
35
BdrvRequestFlags write_flags;
172
+}
36
173
+
37
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
174
static coroutine_fn void do_co_req(void *opaque)
38
}
39
40
QLIST_INIT(&s->tasks);
41
+ QLIST_INIT(&s->calls);
42
43
return s;
44
}
45
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
46
{
175
{
47
int ret;
176
int ret;
48
177
- Coroutine *co;
49
+ QLIST_INSERT_HEAD(&call_state->s->calls, call_state, list);
178
SheepdogReqCo *srco = opaque;
50
+
179
int sockfd = srco->sockfd;
180
SheepdogReq *hdr = srco->hdr;
181
@@ -XXX,XX +XXX,XX @@ static coroutine_fn void do_co_req(void *opaque)
182
unsigned int *wlen = srco->wlen;
183
unsigned int *rlen = srco->rlen;
184
185
- co = qemu_coroutine_self();
186
+ srco->co = qemu_coroutine_self();
187
aio_set_fd_handler(srco->aio_context, sockfd, false,
188
- NULL, restart_co_req, NULL, co);
189
+ NULL, restart_co_req, NULL, srco);
190
191
ret = send_co_req(sockfd, hdr, data, wlen);
192
if (ret < 0) {
193
@@ -XXX,XX +XXX,XX @@ static coroutine_fn void do_co_req(void *opaque)
194
}
195
196
aio_set_fd_handler(srco->aio_context, sockfd, false,
197
- restart_co_req, NULL, NULL, co);
198
+ restart_co_req, NULL, NULL, srco);
199
200
ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
201
if (ret != sizeof(*hdr)) {
202
@@ -XXX,XX +XXX,XX @@ out:
203
aio_set_fd_handler(srco->aio_context, sockfd, false,
204
NULL, NULL, NULL, NULL);
205
206
+ srco->co = NULL;
207
srco->ret = ret;
208
srco->finished = true;
209
if (srco->bs) {
210
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn aio_read_response(void *opaque)
211
* We've finished all requests which belong to the AIOCB, so
212
* we can switch back to sd_co_readv/writev now.
213
*/
214
- qemu_coroutine_enter(acb->coroutine);
215
+ aio_co_wake(acb->coroutine);
216
}
217
218
return;
219
@@ -XXX,XX +XXX,XX @@ static void co_read_response(void *opaque)
220
s->co_recv = qemu_coroutine_create(aio_read_response, opaque);
221
}
222
223
- qemu_coroutine_enter(s->co_recv);
224
+ aio_co_wake(s->co_recv);
225
}
226
227
static void co_write_request(void *opaque)
228
{
229
BDRVSheepdogState *s = opaque;
230
231
- qemu_coroutine_enter(s->co_send);
232
+ aio_co_wake(s->co_send);
233
}
234
235
/*
236
diff --git a/block/ssh.c b/block/ssh.c
237
index XXXXXXX..XXXXXXX 100644
238
--- a/block/ssh.c
239
+++ b/block/ssh.c
240
@@ -XXX,XX +XXX,XX @@ static void restart_coroutine(void *opaque)
241
242
DPRINTF("co=%p", co);
243
244
- qemu_coroutine_enter(co);
245
+ aio_co_wake(co);
246
}
247
248
-static coroutine_fn void set_fd_handler(BDRVSSHState *s, BlockDriverState *bs)
249
+/* A non-blocking call returned EAGAIN, so yield, ensuring the
250
+ * handlers are set up so that we'll be rescheduled when there is an
251
+ * interesting event on the socket.
252
+ */
253
+static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
254
{
255
int r;
256
IOHandler *rd_handler = NULL, *wr_handler = NULL;
257
@@ -XXX,XX +XXX,XX @@ static coroutine_fn void set_fd_handler(BDRVSSHState *s, BlockDriverState *bs)
258
259
aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock,
260
false, rd_handler, wr_handler, NULL, co);
261
-}
262
-
263
-static coroutine_fn void clear_fd_handler(BDRVSSHState *s,
264
- BlockDriverState *bs)
265
-{
266
- DPRINTF("s->sock=%d", s->sock);
267
- aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock,
268
- false, NULL, NULL, NULL, NULL);
269
-}
270
-
271
-/* A non-blocking call returned EAGAIN, so yield, ensuring the
272
- * handlers are set up so that we'll be rescheduled when there is an
273
- * interesting event on the socket.
274
- */
275
-static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
276
-{
277
- set_fd_handler(s, bs);
278
qemu_coroutine_yield();
279
- clear_fd_handler(s, bs);
280
+ DPRINTF("s->sock=%d - back", s->sock);
281
+ aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock, false,
282
+ NULL, NULL, NULL, NULL);
283
}
284
285
/* SFTP has a function `libssh2_sftp_seek64' which seeks to a position
286
diff --git a/block/win32-aio.c b/block/win32-aio.c
287
index XXXXXXX..XXXXXXX 100644
288
--- a/block/win32-aio.c
289
+++ b/block/win32-aio.c
290
@@ -XXX,XX +XXX,XX @@ struct QEMUWin32AIOState {
291
HANDLE hIOCP;
292
EventNotifier e;
293
int count;
294
- bool is_aio_context_attached;
295
+ AioContext *aio_ctx;
296
};
297
298
typedef struct QEMUWin32AIOCB {
299
@@ -XXX,XX +XXX,XX @@ static void win32_aio_process_completion(QEMUWin32AIOState *s,
300
}
301
302
303
+ aio_context_acquire(s->aio_ctx);
304
waiocb->common.cb(waiocb->common.opaque, ret);
305
+ aio_context_release(s->aio_ctx);
306
qemu_aio_unref(waiocb);
307
}
308
309
@@ -XXX,XX +XXX,XX @@ void win32_aio_detach_aio_context(QEMUWin32AIOState *aio,
310
AioContext *old_context)
311
{
312
aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL);
313
- aio->is_aio_context_attached = false;
314
+ aio->aio_ctx = NULL;
315
}
316
317
void win32_aio_attach_aio_context(QEMUWin32AIOState *aio,
318
AioContext *new_context)
319
{
320
- aio->is_aio_context_attached = true;
321
+ aio->aio_ctx = new_context;
322
aio_set_event_notifier(new_context, &aio->e, false,
323
win32_aio_completion_cb, NULL);
324
}
325
@@ -XXX,XX +XXX,XX @@ out_free_state:
326
327
void win32_aio_cleanup(QEMUWin32AIOState *aio)
328
{
329
- assert(!aio->is_aio_context_attached);
330
+ assert(!aio->aio_ctx);
331
CloseHandle(aio->hIOCP);
332
event_notifier_cleanup(&aio->e);
333
g_free(aio);
334
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
335
index XXXXXXX..XXXXXXX 100644
336
--- a/hw/block/virtio-blk.c
337
+++ b/hw/block/virtio-blk.c
338
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_ioctl_complete(void *opaque, int status)
339
{
340
VirtIOBlockIoctlReq *ioctl_req = opaque;
341
VirtIOBlockReq *req = ioctl_req->req;
342
- VirtIODevice *vdev = VIRTIO_DEVICE(req->dev);
343
+ VirtIOBlock *s = req->dev;
344
+ VirtIODevice *vdev = VIRTIO_DEVICE(s);
345
struct virtio_scsi_inhdr *scsi;
346
struct sg_io_hdr *hdr;
347
348
@@ -XXX,XX +XXX,XX @@ bool virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
349
MultiReqBuffer mrb = {};
350
bool progress = false;
351
352
+ aio_context_acquire(blk_get_aio_context(s->blk));
353
blk_io_plug(s->blk);
354
51
do {
355
do {
52
ret = block_copy_dirty_clusters(call_state);
356
@@ -XXX,XX +XXX,XX @@ bool virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
53
357
}
54
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
358
55
call_state->cb(call_state->cb_opaque);
359
blk_io_unplug(s->blk);
56
}
360
+ aio_context_release(blk_get_aio_context(s->blk));
57
361
return progress;
58
+ QLIST_REMOVE(call_state, list);
362
}
59
+
363
60
return ret;
364
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
61
}
365
index XXXXXXX..XXXXXXX 100644
366
--- a/hw/scsi/virtio-scsi.c
367
+++ b/hw/scsi/virtio-scsi.c
368
@@ -XXX,XX +XXX,XX @@ bool virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
369
VirtIOSCSIReq *req;
370
bool progress = false;
371
372
+ virtio_scsi_acquire(s);
373
while ((req = virtio_scsi_pop_req(s, vq))) {
374
progress = true;
375
virtio_scsi_handle_ctrl_req(s, req);
376
}
377
+ virtio_scsi_release(s);
378
return progress;
379
}
380
381
@@ -XXX,XX +XXX,XX @@ bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
382
383
QTAILQ_HEAD(, VirtIOSCSIReq) reqs = QTAILQ_HEAD_INITIALIZER(reqs);
384
385
+ virtio_scsi_acquire(s);
386
do {
387
virtio_queue_set_notification(vq, 0);
388
389
@@ -XXX,XX +XXX,XX @@ bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
390
QTAILQ_FOREACH_SAFE(req, &reqs, next, next) {
391
virtio_scsi_handle_cmd_req_submit(s, req);
392
}
393
+ virtio_scsi_release(s);
394
return progress;
395
}
396
397
@@ -XXX,XX +XXX,XX @@ out:
398
399
bool virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
400
{
401
+ virtio_scsi_acquire(s);
402
if (s->events_dropped) {
403
virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
404
+ virtio_scsi_release(s);
405
return true;
406
}
407
+ virtio_scsi_release(s);
408
return false;
409
}
410
411
diff --git a/util/aio-posix.c b/util/aio-posix.c
412
index XXXXXXX..XXXXXXX 100644
413
--- a/util/aio-posix.c
414
+++ b/util/aio-posix.c
415
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx)
416
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
417
aio_node_check(ctx, node->is_external) &&
418
node->io_read) {
419
- aio_context_acquire(ctx);
420
node->io_read(node->opaque);
421
- aio_context_release(ctx);
422
423
/* aio_notify() does not count as progress */
424
if (node->opaque != &ctx->notifier) {
425
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx)
426
(revents & (G_IO_OUT | G_IO_ERR)) &&
427
aio_node_check(ctx, node->is_external) &&
428
node->io_write) {
429
- aio_context_acquire(ctx);
430
node->io_write(node->opaque);
431
- aio_context_release(ctx);
432
progress = true;
433
}
434
435
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
436
start = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
437
}
438
439
- aio_context_acquire(ctx);
440
progress = try_poll_mode(ctx, blocking);
441
- aio_context_release(ctx);
442
-
443
if (!progress) {
444
assert(npfd == 0);
445
446
diff --git a/util/aio-win32.c b/util/aio-win32.c
447
index XXXXXXX..XXXXXXX 100644
448
--- a/util/aio-win32.c
449
+++ b/util/aio-win32.c
450
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
451
(revents || event_notifier_get_handle(node->e) == event) &&
452
node->io_notify) {
453
node->pfd.revents = 0;
454
- aio_context_acquire(ctx);
455
node->io_notify(node->e);
456
- aio_context_release(ctx);
457
458
/* aio_notify() does not count as progress */
459
if (node->e != &ctx->notifier) {
460
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
461
(node->io_read || node->io_write)) {
462
node->pfd.revents = 0;
463
if ((revents & G_IO_IN) && node->io_read) {
464
- aio_context_acquire(ctx);
465
node->io_read(node->opaque);
466
- aio_context_release(ctx);
467
progress = true;
468
}
469
if ((revents & G_IO_OUT) && node->io_write) {
470
- aio_context_acquire(ctx);
471
node->io_write(node->opaque);
472
- aio_context_release(ctx);
473
progress = true;
474
}
62
475
63
--
476
--
64
2.29.2
477
2.9.3
65
478
66
479
diff view generated by jsdifflib
1
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
Allow passing the 'filter-node-name' parameter to the
3
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
4
block-stream job, as is already done for the commit block job.
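
A minimal usage sketch (not part of the patch itself): with this change an
iotests-style caller can name the inserted filter explicitly. The node and
job names below ('top', 'job0', 'stream-filter') are invented for the
example; vm.qmp() is the usual iotests helper, which converts underscores
in keyword arguments into dashes.

    # Hypothetical example: start a stream job and pin the node name of
    # the filter that the job inserts above 'top'.
    result = vm.qmp('block-stream',
                    job_id='job0',
                    device='top',
                    filter_node_name='stream-filter')
    log(result)
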
4
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5
Reviewed-by: Fam Zheng <famz@redhat.com>
6
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
7
Message-id: 20170213135235.12274-15-pbonzini@redhat.com
8
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
---
10
block/archipelago.c | 3 +++
11
block/blkreplay.c | 2 +-
12
block/block-backend.c | 6 ++++++
13
block/curl.c | 26 ++++++++++++++++++--------
14
block/gluster.c | 9 +--------
15
block/io.c | 6 +++++-
16
block/iscsi.c | 6 +++++-
17
block/linux-aio.c | 15 +++++++++------
18
block/nfs.c | 3 ++-
19
block/null.c | 4 ++++
20
block/qed.c | 3 +++
21
block/rbd.c | 4 ++++
22
dma-helpers.c | 2 ++
23
hw/block/virtio-blk.c | 2 ++
24
hw/scsi/scsi-bus.c | 2 ++
25
util/async.c | 4 ++--
26
util/thread-pool.c | 2 ++
27
17 files changed, 71 insertions(+), 28 deletions(-)
5
28
6
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
29
diff --git a/block/archipelago.c b/block/archipelago.c
7
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
30
index XXXXXXX..XXXXXXX 100644
8
[vsementsov: comment indentation, s/Since: 5.2/Since: 6.0/]
31
--- a/block/archipelago.c
9
Reviewed-by: Max Reitz <mreitz@redhat.com>
32
+++ b/block/archipelago.c
10
Message-Id: <20201216061703.70908-5-vsementsov@virtuozzo.com>
33
@@ -XXX,XX +XXX,XX @@ static void qemu_archipelago_complete_aio(void *opaque)
11
[mreitz: s/commit/stream/]
34
{
12
Signed-off-by: Max Reitz <mreitz@redhat.com>
35
AIORequestData *reqdata = (AIORequestData *) opaque;
13
---
36
ArchipelagoAIOCB *aio_cb = (ArchipelagoAIOCB *) reqdata->aio_cb;
14
qapi/block-core.json | 6 ++++++
37
+ AioContext *ctx = bdrv_get_aio_context(aio_cb->common.bs);
15
include/block/block_int.h | 7 ++++++-
38
16
block/monitor/block-hmp-cmds.c | 4 ++--
39
+ aio_context_acquire(ctx);
17
block/stream.c | 4 +++-
40
aio_cb->common.cb(aio_cb->common.opaque, aio_cb->ret);
18
blockdev.c | 4 +++-
41
+ aio_context_release(ctx);
19
5 files changed, 20 insertions(+), 5 deletions(-)
42
aio_cb->status = 0;
20
43
21
diff --git a/qapi/block-core.json b/qapi/block-core.json
44
qemu_aio_unref(aio_cb);
22
index XXXXXXX..XXXXXXX 100644
45
diff --git a/block/blkreplay.c b/block/blkreplay.c
23
--- a/qapi/block-core.json
46
index XXXXXXX..XXXXXXX 100755
24
+++ b/qapi/block-core.json
47
--- a/block/blkreplay.c
25
@@ -XXX,XX +XXX,XX @@
48
+++ b/block/blkreplay.c
26
# 'stop' and 'enospc' can only be used if the block device
49
@@ -XXX,XX +XXX,XX @@ static int64_t blkreplay_getlength(BlockDriverState *bs)
27
# supports io-status (see BlockInfo). Since 1.3.
50
static void blkreplay_bh_cb(void *opaque)
28
#
51
{
29
+# @filter-node-name: the node name that should be assigned to the
52
Request *req = opaque;
30
+# filter driver that the stream job inserts into the graph
53
- qemu_coroutine_enter(req->co);
31
+# above @device. If this option is not given, a node name is
54
+ aio_co_wake(req->co);
32
+# autogenerated. (Since: 6.0)
55
qemu_bh_delete(req->bh);
33
+#
56
g_free(req);
34
# @auto-finalize: When false, this job will wait in a PENDING state after it has
57
}
35
# finished its work, waiting for @block-job-finalize before
58
diff --git a/block/block-backend.c b/block/block-backend.c
36
# making any block graph changes.
59
index XXXXXXX..XXXXXXX 100644
37
@@ -XXX,XX +XXX,XX @@
60
--- a/block/block-backend.c
38
'data': { '*job-id': 'str', 'device': 'str', '*base': 'str',
61
+++ b/block/block-backend.c
39
'*base-node': 'str', '*backing-file': 'str', '*speed': 'int',
62
@@ -XXX,XX +XXX,XX @@ int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags)
40
'*on-error': 'BlockdevOnError',
63
static void error_callback_bh(void *opaque)
41
+ '*filter-node-name': 'str',
64
{
42
'*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
65
struct BlockBackendAIOCB *acb = opaque;
43
66
+ AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
44
##
67
45
diff --git a/include/block/block_int.h b/include/block/block_int.h
68
bdrv_dec_in_flight(acb->common.bs);
46
index XXXXXXX..XXXXXXX 100644
69
+ aio_context_acquire(ctx);
47
--- a/include/block/block_int.h
70
acb->common.cb(acb->common.opaque, acb->ret);
48
+++ b/include/block/block_int.h
71
+ aio_context_release(ctx);
49
@@ -XXX,XX +XXX,XX @@ int is_windows_drive(const char *filename);
72
qemu_aio_unref(acb);
50
* See @BlockJobCreateFlags
73
}
51
* @speed: The maximum speed, in bytes per second, or 0 for unlimited.
74
52
* @on_error: The action to take upon error.
75
@@ -XXX,XX +XXX,XX @@ static void blk_aio_complete(BlkAioEmAIOCB *acb)
53
+ * @filter_node_name: The node name that should be assigned to the filter
76
static void blk_aio_complete_bh(void *opaque)
54
+ * driver that the stream job inserts into the graph above
77
{
55
+ * @bs. NULL means that a node name should be autogenerated.
78
BlkAioEmAIOCB *acb = opaque;
56
* @errp: Error object.
79
+ AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
57
*
80
58
* Start a streaming operation on @bs. Clusters that are unallocated
81
assert(acb->has_returned);
59
@@ -XXX,XX +XXX,XX @@ int is_windows_drive(const char *filename);
82
+ aio_context_acquire(ctx);
60
void stream_start(const char *job_id, BlockDriverState *bs,
83
blk_aio_complete(acb);
61
BlockDriverState *base, const char *backing_file_str,
84
+ aio_context_release(ctx);
62
int creation_flags, int64_t speed,
85
}
63
- BlockdevOnError on_error, Error **errp);
86
64
+ BlockdevOnError on_error,
87
static BlockAIOCB *blk_aio_prwv(BlockBackend *blk, int64_t offset, int bytes,
65
+ const char *filter_node_name,
88
diff --git a/block/curl.c b/block/curl.c
66
+ Error **errp);
89
index XXXXXXX..XXXXXXX 100644
90
--- a/block/curl.c
91
+++ b/block/curl.c
92
@@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p)
93
{
94
CURLState *state;
95
int running;
96
+ int ret = -EINPROGRESS;
97
98
CURLAIOCB *acb = p;
99
- BDRVCURLState *s = acb->common.bs->opaque;
100
+ BlockDriverState *bs = acb->common.bs;
101
+ BDRVCURLState *s = bs->opaque;
102
+ AioContext *ctx = bdrv_get_aio_context(bs);
103
104
size_t start = acb->sector_num * BDRV_SECTOR_SIZE;
105
size_t end;
106
107
+ aio_context_acquire(ctx);
108
+
109
// In case we have the requested data already (e.g. read-ahead),
110
// we can just call the callback and be done.
111
switch (curl_find_buf(s, start, acb->nb_sectors * BDRV_SECTOR_SIZE, acb)) {
112
@@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p)
113
qemu_aio_unref(acb);
114
// fall through
115
case FIND_RET_WAIT:
116
- return;
117
+ goto out;
118
default:
119
break;
120
}
121
@@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p)
122
// No cache found, so let's start a new request
123
state = curl_init_state(acb->common.bs, s);
124
if (!state) {
125
- acb->common.cb(acb->common.opaque, -EIO);
126
- qemu_aio_unref(acb);
127
- return;
128
+ ret = -EIO;
129
+ goto out;
130
}
131
132
acb->start = 0;
133
@@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p)
134
state->orig_buf = g_try_malloc(state->buf_len);
135
if (state->buf_len && state->orig_buf == NULL) {
136
curl_clean_state(state);
137
- acb->common.cb(acb->common.opaque, -ENOMEM);
138
- qemu_aio_unref(acb);
139
- return;
140
+ ret = -ENOMEM;
141
+ goto out;
142
}
143
state->acb[0] = acb;
144
145
@@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p)
146
147
/* Tell curl it needs to kick things off */
148
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
149
+
150
+out:
151
+ if (ret != -EINPROGRESS) {
152
+ acb->common.cb(acb->common.opaque, ret);
153
+ qemu_aio_unref(acb);
154
+ }
155
+ aio_context_release(ctx);
156
}
157
158
static BlockAIOCB *curl_aio_readv(BlockDriverState *bs,
159
diff --git a/block/gluster.c b/block/gluster.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/block/gluster.c
162
+++ b/block/gluster.c
163
@@ -XXX,XX +XXX,XX @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
164
return qemu_gluster_glfs_init(gconf, errp);
165
}
166
167
-static void qemu_gluster_complete_aio(void *opaque)
168
-{
169
- GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
170
-
171
- qemu_coroutine_enter(acb->coroutine);
172
-}
173
-
174
/*
175
* AIO callback routine called from GlusterFS thread.
176
*/
177
@@ -XXX,XX +XXX,XX @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
178
acb->ret = -EIO; /* Partial read/write - fail it */
179
}
180
181
- aio_bh_schedule_oneshot(acb->aio_context, qemu_gluster_complete_aio, acb);
182
+ aio_co_schedule(acb->aio_context, acb->coroutine);
183
}
184
185
static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
186
diff --git a/block/io.c b/block/io.c
187
index XXXXXXX..XXXXXXX 100644
188
--- a/block/io.c
189
+++ b/block/io.c
190
@@ -XXX,XX +XXX,XX @@ static void bdrv_co_drain_bh_cb(void *opaque)
191
bdrv_dec_in_flight(bs);
192
bdrv_drained_begin(bs);
193
data->done = true;
194
- qemu_coroutine_enter(co);
195
+ aio_co_wake(co);
196
}
197
198
static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs)
199
@@ -XXX,XX +XXX,XX @@ static void bdrv_co_complete(BlockAIOCBCoroutine *acb)
200
static void bdrv_co_em_bh(void *opaque)
201
{
202
BlockAIOCBCoroutine *acb = opaque;
203
+ BlockDriverState *bs = acb->common.bs;
204
+ AioContext *ctx = bdrv_get_aio_context(bs);
205
206
assert(!acb->need_bh);
207
+ aio_context_acquire(ctx);
208
bdrv_co_complete(acb);
209
+ aio_context_release(ctx);
210
}
211
212
static void bdrv_co_maybe_schedule_bh(BlockAIOCBCoroutine *acb)
213
diff --git a/block/iscsi.c b/block/iscsi.c
214
index XXXXXXX..XXXXXXX 100644
215
--- a/block/iscsi.c
216
+++ b/block/iscsi.c
217
@@ -XXX,XX +XXX,XX @@ static void
218
iscsi_bh_cb(void *p)
219
{
220
IscsiAIOCB *acb = p;
221
+ AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
222
223
qemu_bh_delete(acb->bh);
224
225
g_free(acb->buf);
226
acb->buf = NULL;
227
228
+ aio_context_acquire(ctx);
229
acb->common.cb(acb->common.opaque, acb->status);
230
+ aio_context_release(ctx);
231
232
if (acb->task != NULL) {
233
scsi_free_scsi_task(acb->task);
234
@@ -XXX,XX +XXX,XX @@ iscsi_schedule_bh(IscsiAIOCB *acb)
235
static void iscsi_co_generic_bh_cb(void *opaque)
236
{
237
struct IscsiTask *iTask = opaque;
238
+
239
iTask->complete = 1;
240
- qemu_coroutine_enter(iTask->co);
241
+ aio_co_wake(iTask->co);
242
}
243
244
static void iscsi_retry_timer_expired(void *opaque)
245
diff --git a/block/linux-aio.c b/block/linux-aio.c
246
index XXXXXXX..XXXXXXX 100644
247
--- a/block/linux-aio.c
248
+++ b/block/linux-aio.c
249
@@ -XXX,XX +XXX,XX @@ struct LinuxAioState {
250
io_context_t ctx;
251
EventNotifier e;
252
253
- /* io queue for submit at batch */
254
+ /* io queue for submit at batch. Protected by AioContext lock. */
255
LaioQueue io_q;
256
257
- /* I/O completion processing */
258
+ /* I/O completion processing. Only runs in I/O thread. */
259
QEMUBH *completion_bh;
260
int event_idx;
261
int event_max;
262
@@ -XXX,XX +XXX,XX @@ static inline ssize_t io_event_ret(struct io_event *ev)
263
*/
264
static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
265
{
266
+ LinuxAioState *s = laiocb->ctx;
267
int ret;
268
269
ret = laiocb->ret;
270
@@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
271
}
272
273
laiocb->ret = ret;
274
+ aio_context_acquire(s->aio_context);
275
if (laiocb->co) {
276
/* If the coroutine is already entered it must be in ioq_submit() and
277
* will notice laio->ret has been filled in when it eventually runs
278
@@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
279
laiocb->common.cb(laiocb->common.opaque, ret);
280
qemu_aio_unref(laiocb);
281
}
282
+ aio_context_release(s->aio_context);
283
}
67
284
68
/**
285
/**
69
* commit_start:
286
@@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completions(LinuxAioState *s)
70
diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
287
static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
71
index XXXXXXX..XXXXXXX 100644
288
{
72
--- a/block/monitor/block-hmp-cmds.c
289
qemu_laio_process_completions(s);
73
+++ b/block/monitor/block-hmp-cmds.c
290
+
74
@@ -XXX,XX +XXX,XX @@ void hmp_block_stream(Monitor *mon, const QDict *qdict)
291
+ aio_context_acquire(s->aio_context);
75
292
if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
76
qmp_block_stream(true, device, device, base != NULL, base, false, NULL,
293
ioq_submit(s);
77
false, NULL, qdict_haskey(qdict, "speed"), speed, true,
294
}
78
- BLOCKDEV_ON_ERROR_REPORT, false, false, false, false,
295
+ aio_context_release(s->aio_context);
79
- &error);
296
}
80
+ BLOCKDEV_ON_ERROR_REPORT, false, NULL, false, false, false,
297
81
+ false, &error);
298
static void qemu_laio_completion_bh(void *opaque)
82
299
@@ -XXX,XX +XXX,XX @@ static void qemu_laio_completion_cb(EventNotifier *e)
83
hmp_handle_error(mon, error);
300
LinuxAioState *s = container_of(e, LinuxAioState, e);
84
}
301
85
diff --git a/block/stream.c b/block/stream.c
302
if (event_notifier_test_and_clear(&s->e)) {
86
index XXXXXXX..XXXXXXX 100644
303
- aio_context_acquire(s->aio_context);
87
--- a/block/stream.c
304
qemu_laio_process_completions_and_submit(s);
88
+++ b/block/stream.c
305
- aio_context_release(s->aio_context);
89
@@ -XXX,XX +XXX,XX @@ static const BlockJobDriver stream_job_driver = {
306
}
90
void stream_start(const char *job_id, BlockDriverState *bs,
307
}
91
BlockDriverState *base, const char *backing_file_str,
308
92
int creation_flags, int64_t speed,
309
@@ -XXX,XX +XXX,XX @@ static bool qemu_laio_poll_cb(void *opaque)
93
- BlockdevOnError on_error, Error **errp)
310
return false;
94
+ BlockdevOnError on_error,
311
}
95
+ const char *filter_node_name,
312
96
+ Error **errp)
313
- aio_context_acquire(s->aio_context);
97
{
314
qemu_laio_process_completions_and_submit(s);
98
StreamBlockJob *s;
315
- aio_context_release(s->aio_context);
99
BlockDriverState *iter;
316
return true;
100
diff --git a/blockdev.c b/blockdev.c
317
}
101
index XXXXXXX..XXXXXXX 100644
318
102
--- a/blockdev.c
319
@@ -XXX,XX +XXX,XX @@ void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
103
+++ b/blockdev.c
320
{
104
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
321
aio_set_event_notifier(old_context, &s->e, false, NULL, NULL);
105
bool has_backing_file, const char *backing_file,
322
qemu_bh_delete(s->completion_bh);
106
bool has_speed, int64_t speed,
323
+ s->aio_context = NULL;
107
bool has_on_error, BlockdevOnError on_error,
324
}
108
+ bool has_filter_node_name, const char *filter_node_name,
325
109
bool has_auto_finalize, bool auto_finalize,
326
void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
110
bool has_auto_dismiss, bool auto_dismiss,
327
diff --git a/block/nfs.c b/block/nfs.c
111
Error **errp)
328
index XXXXXXX..XXXXXXX 100644
112
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
329
--- a/block/nfs.c
113
}
330
+++ b/block/nfs.c
114
331
@@ -XXX,XX +XXX,XX @@ static void nfs_co_init_task(BlockDriverState *bs, NFSRPC *task)
115
stream_start(has_job_id ? job_id : NULL, bs, base_bs, base_name,
332
static void nfs_co_generic_bh_cb(void *opaque)
116
- job_flags, has_speed ? speed : 0, on_error, &local_err);
333
{
117
+ job_flags, has_speed ? speed : 0, on_error,
334
NFSRPC *task = opaque;
118
+ filter_node_name, &local_err);
335
+
119
if (local_err) {
336
task->complete = 1;
120
error_propagate(errp, local_err);
337
- qemu_coroutine_enter(task->co);
121
goto out;
338
+ aio_co_wake(task->co);
339
}
340
341
static void
342
diff --git a/block/null.c b/block/null.c
343
index XXXXXXX..XXXXXXX 100644
344
--- a/block/null.c
345
+++ b/block/null.c
346
@@ -XXX,XX +XXX,XX @@ static const AIOCBInfo null_aiocb_info = {
347
static void null_bh_cb(void *opaque)
348
{
349
NullAIOCB *acb = opaque;
350
+ AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
351
+
352
+ aio_context_acquire(ctx);
353
acb->common.cb(acb->common.opaque, 0);
354
+ aio_context_release(ctx);
355
qemu_aio_unref(acb);
356
}
357
358
diff --git a/block/qed.c b/block/qed.c
359
index XXXXXXX..XXXXXXX 100644
360
--- a/block/qed.c
361
+++ b/block/qed.c
362
@@ -XXX,XX +XXX,XX @@ static void qed_update_l2_table(BDRVQEDState *s, QEDTable *table, int index,
363
static void qed_aio_complete_bh(void *opaque)
364
{
365
QEDAIOCB *acb = opaque;
366
+ BDRVQEDState *s = acb_to_s(acb);
367
BlockCompletionFunc *cb = acb->common.cb;
368
void *user_opaque = acb->common.opaque;
369
int ret = acb->bh_ret;
370
@@ -XXX,XX +XXX,XX @@ static void qed_aio_complete_bh(void *opaque)
371
qemu_aio_unref(acb);
372
373
/* Invoke callback */
374
+ qed_acquire(s);
375
cb(user_opaque, ret);
376
+ qed_release(s);
377
}
378
379
static void qed_aio_complete(QEDAIOCB *acb, int ret)
380
diff --git a/block/rbd.c b/block/rbd.c
381
index XXXXXXX..XXXXXXX 100644
382
--- a/block/rbd.c
383
+++ b/block/rbd.c
384
@@ -XXX,XX +XXX,XX @@ shutdown:
385
static void qemu_rbd_complete_aio(RADOSCB *rcb)
386
{
387
RBDAIOCB *acb = rcb->acb;
388
+ AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
389
int64_t r;
390
391
r = rcb->ret;
392
@@ -XXX,XX +XXX,XX @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
393
qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
394
}
395
qemu_vfree(acb->bounce);
396
+
397
+ aio_context_acquire(ctx);
398
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
399
+ aio_context_release(ctx);
400
401
qemu_aio_unref(acb);
402
}
403
diff --git a/dma-helpers.c b/dma-helpers.c
404
index XXXXXXX..XXXXXXX 100644
405
--- a/dma-helpers.c
406
+++ b/dma-helpers.c
407
@@ -XXX,XX +XXX,XX @@ static void dma_blk_cb(void *opaque, int ret)
408
QEMU_ALIGN_DOWN(dbs->iov.size, dbs->align));
409
}
410
411
+ aio_context_acquire(dbs->ctx);
412
dbs->acb = dbs->io_func(dbs->offset, &dbs->iov,
413
dma_blk_cb, dbs, dbs->io_func_opaque);
414
+ aio_context_release(dbs->ctx);
415
assert(dbs->acb);
416
}
417
418
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
419
index XXXXXXX..XXXXXXX 100644
420
--- a/hw/block/virtio-blk.c
421
+++ b/hw/block/virtio-blk.c
422
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_dma_restart_bh(void *opaque)
423
424
s->rq = NULL;
425
426
+ aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
427
while (req) {
428
VirtIOBlockReq *next = req->next;
429
if (virtio_blk_handle_request(req, &mrb)) {
430
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_dma_restart_bh(void *opaque)
431
if (mrb.num_reqs) {
432
virtio_blk_submit_multireq(s->blk, &mrb);
433
}
434
+ aio_context_release(blk_get_aio_context(s->conf.conf.blk));
435
}
436
437
static void virtio_blk_dma_restart_cb(void *opaque, int running,
438
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
439
index XXXXXXX..XXXXXXX 100644
440
--- a/hw/scsi/scsi-bus.c
441
+++ b/hw/scsi/scsi-bus.c
442
@@ -XXX,XX +XXX,XX @@ static void scsi_dma_restart_bh(void *opaque)
443
qemu_bh_delete(s->bh);
444
s->bh = NULL;
445
446
+ aio_context_acquire(blk_get_aio_context(s->conf.blk));
447
QTAILQ_FOREACH_SAFE(req, &s->requests, next, next) {
448
scsi_req_ref(req);
449
if (req->retry) {
450
@@ -XXX,XX +XXX,XX @@ static void scsi_dma_restart_bh(void *opaque)
451
}
452
scsi_req_unref(req);
453
}
454
+ aio_context_release(blk_get_aio_context(s->conf.blk));
455
}
456
457
void scsi_req_retry(SCSIRequest *req)
458
diff --git a/util/async.c b/util/async.c
459
index XXXXXXX..XXXXXXX 100644
460
--- a/util/async.c
461
+++ b/util/async.c
462
@@ -XXX,XX +XXX,XX @@ int aio_bh_poll(AioContext *ctx)
463
ret = 1;
464
}
465
bh->idle = 0;
466
- aio_context_acquire(ctx);
467
aio_bh_call(bh);
468
- aio_context_release(ctx);
469
}
470
if (bh->deleted) {
471
deleted = true;
472
@@ -XXX,XX +XXX,XX @@ static void co_schedule_bh_cb(void *opaque)
473
Coroutine *co = QSLIST_FIRST(&straight);
474
QSLIST_REMOVE_HEAD(&straight, co_scheduled_next);
475
trace_aio_co_schedule_bh_cb(ctx, co);
476
+ aio_context_acquire(ctx);
477
qemu_coroutine_enter(co);
478
+ aio_context_release(ctx);
479
}
480
}
481
482
diff --git a/util/thread-pool.c b/util/thread-pool.c
483
index XXXXXXX..XXXXXXX 100644
484
--- a/util/thread-pool.c
485
+++ b/util/thread-pool.c
486
@@ -XXX,XX +XXX,XX @@ static void thread_pool_completion_bh(void *opaque)
487
ThreadPool *pool = opaque;
488
ThreadPoolElement *elem, *next;
489
490
+ aio_context_acquire(pool->ctx);
491
restart:
492
QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
493
if (elem->state != THREAD_DONE) {
494
@@ -XXX,XX +XXX,XX @@ restart:
495
qemu_aio_unref(elem);
496
}
497
}
498
+ aio_context_release(pool->ctx);
499
}
500
501
static void thread_pool_cancel(BlockAIOCB *acb)
122
--
502
--
123
2.29.2
503
2.9.3
124
504
125
505
diff view generated by jsdifflib
Deleted patch
1
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
2
1
3
The test case #310 is similar to #216 by Max Reitz. The difference is
4
that test #310 passes a bottom node to the COR filter driver.
5
6
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
7
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
[vsementsov: detach backing to test reads from top, limit to qcow2]
9
Message-Id: <20201216061703.70908-7-vsementsov@virtuozzo.com>
10
[mreitz: Add "# group:" line]
11
Signed-off-by: Max Reitz <mreitz@redhat.com>
12
---
13
tests/qemu-iotests/310 | 117 +++++++++++++++++++++++++++++++++++++
14
tests/qemu-iotests/310.out | 15 +++++
15
tests/qemu-iotests/group | 1 +
16
3 files changed, 133 insertions(+)
17
create mode 100755 tests/qemu-iotests/310
18
create mode 100644 tests/qemu-iotests/310.out
19
20
diff --git a/tests/qemu-iotests/310 b/tests/qemu-iotests/310
21
new file mode 100755
22
index XXXXXXX..XXXXXXX
23
--- /dev/null
24
+++ b/tests/qemu-iotests/310
25
@@ -XXX,XX +XXX,XX @@
26
+#!/usr/bin/env python3
27
+# group: rw quick
28
+#
29
+# Copy-on-read tests using a COR filter with a bottom node
30
+#
31
+# Copyright (C) 2018 Red Hat, Inc.
32
+# Copyright (c) 2020 Virtuozzo International GmbH
33
+#
34
+# This program is free software; you can redistribute it and/or modify
35
+# it under the terms of the GNU General Public License as published by
36
+# the Free Software Foundation; either version 2 of the License, or
37
+# (at your option) any later version.
38
+#
39
+# This program is distributed in the hope that it will be useful,
40
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
41
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
42
+# GNU General Public License for more details.
43
+#
44
+# You should have received a copy of the GNU General Public License
45
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
46
+#
47
+
48
+import iotests
49
+from iotests import log, qemu_img, qemu_io_silent
50
+
51
+# Need backing file support
52
+iotests.script_initialize(supported_fmts=['qcow2'],
53
+ supported_platforms=['linux'])
54
+
55
+log('')
56
+log('=== Copy-on-read across nodes ===')
57
+log('')
58
+
59
+# This test is similar to the 216 one by Max Reitz <mreitz@redhat.com>
60
+# The difference is that this test case involves a bottom node to the
61
+# COR filter driver.
62
+
63
+with iotests.FilePath('base.img') as base_img_path, \
64
+ iotests.FilePath('mid.img') as mid_img_path, \
65
+ iotests.FilePath('top.img') as top_img_path, \
66
+ iotests.VM() as vm:
67
+
68
+ log('--- Setting up images ---')
69
+ log('')
70
+
71
+ assert qemu_img('create', '-f', iotests.imgfmt, base_img_path, '64M') == 0
72
+ assert qemu_io_silent(base_img_path, '-c', 'write -P 1 0M 1M') == 0
73
+ assert qemu_io_silent(base_img_path, '-c', 'write -P 1 3M 1M') == 0
74
+ assert qemu_img('create', '-f', iotests.imgfmt, '-b', base_img_path,
75
+ '-F', iotests.imgfmt, mid_img_path) == 0
76
+ assert qemu_io_silent(mid_img_path, '-c', 'write -P 3 2M 1M') == 0
77
+ assert qemu_io_silent(mid_img_path, '-c', 'write -P 3 4M 1M') == 0
78
+ assert qemu_img('create', '-f', iotests.imgfmt, '-b', mid_img_path,
79
+ '-F', iotests.imgfmt, top_img_path) == 0
80
+ assert qemu_io_silent(top_img_path, '-c', 'write -P 2 1M 1M') == 0
81
+
82
+# 0 1 2 3 4
83
+# top 2
84
+# mid 3 3
85
+# base 1 1
86
+
87
+ log('Done')
88
+
89
+ log('')
90
+ log('--- Doing COR ---')
91
+ log('')
92
+
93
+ vm.launch()
94
+
95
+ log(vm.qmp('blockdev-add',
96
+ node_name='node0',
97
+ driver='copy-on-read',
98
+ bottom='node2',
99
+ file={
100
+ 'driver': iotests.imgfmt,
101
+ 'file': {
102
+ 'driver': 'file',
103
+ 'filename': top_img_path
104
+ },
105
+ 'backing': {
106
+ 'node-name': 'node2',
107
+ 'driver': iotests.imgfmt,
108
+ 'file': {
109
+ 'driver': 'file',
110
+ 'filename': mid_img_path
111
+ },
112
+ 'backing': {
113
+ 'driver': iotests.imgfmt,
114
+ 'file': {
115
+ 'driver': 'file',
116
+ 'filename': base_img_path
117
+ }
118
+ },
119
+ }
120
+ }))
121
+
122
+ # Trigger COR
123
+ log(vm.qmp('human-monitor-command',
124
+ command_line='qemu-io node0 "read 0 5M"'))
125
+
126
+ vm.shutdown()
127
+
128
+ log('')
129
+ log('--- Checking COR result ---')
130
+ log('')
131
+
132
+ # Detach backing to check that we can read the data from the top level now
133
+ assert qemu_img('rebase', '-u', '-b', '', '-f', iotests.imgfmt,
134
+ top_img_path) == 0
135
+
136
+ assert qemu_io_silent(top_img_path, '-c', 'read -P 0 0 1M') == 0
137
+ assert qemu_io_silent(top_img_path, '-c', 'read -P 2 1M 1M') == 0
138
+ assert qemu_io_silent(top_img_path, '-c', 'read -P 3 2M 1M') == 0
139
+ assert qemu_io_silent(top_img_path, '-c', 'read -P 0 3M 1M') == 0
140
+ assert qemu_io_silent(top_img_path, '-c', 'read -P 3 4M 1M') == 0
141
+
142
+ log('Done')
143
diff --git a/tests/qemu-iotests/310.out b/tests/qemu-iotests/310.out
144
new file mode 100644
145
index XXXXXXX..XXXXXXX
146
--- /dev/null
147
+++ b/tests/qemu-iotests/310.out
148
@@ -XXX,XX +XXX,XX @@
149
+
150
+=== Copy-on-read across nodes ===
151
+
152
+--- Setting up images ---
153
+
154
+Done
155
+
156
+--- Doing COR ---
157
+
158
+{"return": {}}
159
+{"return": ""}
160
+
161
+--- Checking COR result ---
162
+
163
+Done
164
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
165
index XXXXXXX..XXXXXXX 100644
166
--- a/tests/qemu-iotests/group
167
+++ b/tests/qemu-iotests/group
168
@@ -XXX,XX +XXX,XX @@
169
307 rw quick export
170
308 rw
171
309 rw auto quick
172
+310 rw quick
173
312 rw quick
174
--
175
2.29.2
176
177
diff view generated by jsdifflib
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
We are going to use async block-copy call in backup, so we'll need to
3
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
4
passthrough setting backup speed to block-copy call.
4
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5
Reviewed-by: Fam Zheng <famz@redhat.com>
6
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
7
Message-id: 20170213135235.12274-16-pbonzini@redhat.com
8
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
---
10
block/archipelago.c | 3 ---
11
block/block-backend.c | 7 -------
12
block/curl.c | 2 +-
13
block/io.c | 6 +-----
14
block/iscsi.c | 3 ---
15
block/linux-aio.c | 5 +----
16
block/mirror.c | 12 +++++++++---
17
block/null.c | 8 --------
18
block/qed-cluster.c | 2 ++
19
block/qed-table.c | 12 ++++++++++--
20
block/qed.c | 4 ++--
21
block/rbd.c | 4 ----
22
block/win32-aio.c | 3 ---
23
hw/block/virtio-blk.c | 12 +++++++++++-
24
hw/scsi/scsi-disk.c | 15 +++++++++++++++
25
hw/scsi/scsi-generic.c | 20 +++++++++++++++++---
26
util/thread-pool.c | 4 +++-
27
17 files changed, 72 insertions(+), 50 deletions(-)
5
28
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
29
diff --git a/block/archipelago.c b/block/archipelago.c
7
Reviewed-by: Max Reitz <mreitz@redhat.com>
30
index XXXXXXX..XXXXXXX 100644
8
Message-Id: <20210116214705.822267-9-vsementsov@virtuozzo.com>
31
--- a/block/archipelago.c
9
Signed-off-by: Max Reitz <mreitz@redhat.com>
32
+++ b/block/archipelago.c
10
---
33
@@ -XXX,XX +XXX,XX @@ static void qemu_archipelago_complete_aio(void *opaque)
11
include/block/blockjob_int.h | 2 ++
34
{
12
blockjob.c | 6 ++++++
35
AIORequestData *reqdata = (AIORequestData *) opaque;
13
2 files changed, 8 insertions(+)
36
ArchipelagoAIOCB *aio_cb = (ArchipelagoAIOCB *) reqdata->aio_cb;
14
37
- AioContext *ctx = bdrv_get_aio_context(aio_cb->common.bs);
15
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
38
16
index XXXXXXX..XXXXXXX 100644
39
- aio_context_acquire(ctx);
17
--- a/include/block/blockjob_int.h
40
aio_cb->common.cb(aio_cb->common.opaque, aio_cb->ret);
18
+++ b/include/block/blockjob_int.h
41
- aio_context_release(ctx);
19
@@ -XXX,XX +XXX,XX @@ struct BlockJobDriver {
42
aio_cb->status = 0;
20
* besides job->blk to the new AioContext.
43
21
*/
44
qemu_aio_unref(aio_cb);
22
void (*attached_aio_context)(BlockJob *job, AioContext *new_context);
45
diff --git a/block/block-backend.c b/block/block-backend.c
23
+
46
index XXXXXXX..XXXXXXX 100644
24
+ void (*set_speed)(BlockJob *job, int64_t speed);
47
--- a/block/block-backend.c
25
};
48
+++ b/block/block-backend.c
49
@@ -XXX,XX +XXX,XX @@ int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags)
50
static void error_callback_bh(void *opaque)
51
{
52
struct BlockBackendAIOCB *acb = opaque;
53
- AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
54
55
bdrv_dec_in_flight(acb->common.bs);
56
- aio_context_acquire(ctx);
57
acb->common.cb(acb->common.opaque, acb->ret);
58
- aio_context_release(ctx);
59
qemu_aio_unref(acb);
60
}
61
62
@@ -XXX,XX +XXX,XX @@ static void blk_aio_complete(BlkAioEmAIOCB *acb)
63
static void blk_aio_complete_bh(void *opaque)
64
{
65
BlkAioEmAIOCB *acb = opaque;
66
- AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
67
-
68
assert(acb->has_returned);
69
- aio_context_acquire(ctx);
70
blk_aio_complete(acb);
71
- aio_context_release(ctx);
72
}
73
74
static BlockAIOCB *blk_aio_prwv(BlockBackend *blk, int64_t offset, int bytes,
75
diff --git a/block/curl.c b/block/curl.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/block/curl.c
78
+++ b/block/curl.c
79
@@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p)
80
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
81
82
out:
83
+ aio_context_release(ctx);
84
if (ret != -EINPROGRESS) {
85
acb->common.cb(acb->common.opaque, ret);
86
qemu_aio_unref(acb);
87
}
88
- aio_context_release(ctx);
89
}
90
91
static BlockAIOCB *curl_aio_readv(BlockDriverState *bs,
92
diff --git a/block/io.c b/block/io.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/block/io.c
95
+++ b/block/io.c
96
@@ -XXX,XX +XXX,XX @@ static void bdrv_co_io_em_complete(void *opaque, int ret)
97
CoroutineIOCompletion *co = opaque;
98
99
co->ret = ret;
100
- qemu_coroutine_enter(co->coroutine);
101
+ aio_co_wake(co->coroutine);
102
}
103
104
static int coroutine_fn bdrv_driver_preadv(BlockDriverState *bs,
105
@@ -XXX,XX +XXX,XX @@ static void bdrv_co_complete(BlockAIOCBCoroutine *acb)
106
static void bdrv_co_em_bh(void *opaque)
107
{
108
BlockAIOCBCoroutine *acb = opaque;
109
- BlockDriverState *bs = acb->common.bs;
110
- AioContext *ctx = bdrv_get_aio_context(bs);
111
112
assert(!acb->need_bh);
113
- aio_context_acquire(ctx);
114
bdrv_co_complete(acb);
115
- aio_context_release(ctx);
116
}
117
118
static void bdrv_co_maybe_schedule_bh(BlockAIOCBCoroutine *acb)
119
diff --git a/block/iscsi.c b/block/iscsi.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/block/iscsi.c
122
+++ b/block/iscsi.c
123
@@ -XXX,XX +XXX,XX @@ static void
124
iscsi_bh_cb(void *p)
125
{
126
IscsiAIOCB *acb = p;
127
- AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
128
129
qemu_bh_delete(acb->bh);
130
131
g_free(acb->buf);
132
acb->buf = NULL;
133
134
- aio_context_acquire(ctx);
135
acb->common.cb(acb->common.opaque, acb->status);
136
- aio_context_release(ctx);
137
138
if (acb->task != NULL) {
139
scsi_free_scsi_task(acb->task);
140
diff --git a/block/linux-aio.c b/block/linux-aio.c
141
index XXXXXXX..XXXXXXX 100644
142
--- a/block/linux-aio.c
143
+++ b/block/linux-aio.c
144
@@ -XXX,XX +XXX,XX @@ static inline ssize_t io_event_ret(struct io_event *ev)
145
*/
146
static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
147
{
148
- LinuxAioState *s = laiocb->ctx;
149
int ret;
150
151
ret = laiocb->ret;
152
@@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
153
}
154
155
laiocb->ret = ret;
156
- aio_context_acquire(s->aio_context);
157
if (laiocb->co) {
158
/* If the coroutine is already entered it must be in ioq_submit() and
159
* will notice laio->ret has been filled in when it eventually runs
160
@@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
161
* that!
162
*/
163
if (!qemu_coroutine_entered(laiocb->co)) {
164
- qemu_coroutine_enter(laiocb->co);
165
+ aio_co_wake(laiocb->co);
166
}
167
} else {
168
laiocb->common.cb(laiocb->common.opaque, ret);
169
qemu_aio_unref(laiocb);
170
}
171
- aio_context_release(s->aio_context);
172
}
26
173
27
/**
174
/**
28
diff --git a/blockjob.c b/blockjob.c
175
diff --git a/block/mirror.c b/block/mirror.c
29
index XXXXXXX..XXXXXXX 100644
176
index XXXXXXX..XXXXXXX 100644
30
--- a/blockjob.c
177
--- a/block/mirror.c
31
+++ b/blockjob.c
178
+++ b/block/mirror.c
32
@@ -XXX,XX +XXX,XX @@ static bool job_timer_pending(Job *job)
179
@@ -XXX,XX +XXX,XX @@ static void mirror_write_complete(void *opaque, int ret)
33
180
{
34
void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
181
MirrorOp *op = opaque;
35
{
182
MirrorBlockJob *s = op->s;
36
+ const BlockJobDriver *drv = block_job_driver(job);
183
+
37
int64_t old_speed = job->speed;
184
+ aio_context_acquire(blk_get_aio_context(s->common.blk));
38
185
if (ret < 0) {
39
if (job_apply_verb(&job->job, JOB_VERB_SET_SPEED, errp)) {
186
BlockErrorAction action;
40
@@ -XXX,XX +XXX,XX @@ void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
187
41
ratelimit_set_speed(&job->limit, speed, BLOCK_JOB_SLICE_TIME);
188
@@ -XXX,XX +XXX,XX @@ static void mirror_write_complete(void *opaque, int ret)
42
189
}
43
job->speed = speed;
190
}
44
+
191
mirror_iteration_done(op, ret);
45
+ if (drv->set_speed) {
192
+ aio_context_release(blk_get_aio_context(s->common.blk));
46
+ drv->set_speed(job, speed);
193
}
47
+ }
194
48
+
195
static void mirror_read_complete(void *opaque, int ret)
49
if (speed && speed <= old_speed) {
196
{
197
MirrorOp *op = opaque;
198
MirrorBlockJob *s = op->s;
199
+
200
+ aio_context_acquire(blk_get_aio_context(s->common.blk));
201
if (ret < 0) {
202
BlockErrorAction action;
203
204
@@ -XXX,XX +XXX,XX @@ static void mirror_read_complete(void *opaque, int ret)
205
}
206
207
mirror_iteration_done(op, ret);
208
- return;
209
+ } else {
210
+ blk_aio_pwritev(s->target, op->sector_num * BDRV_SECTOR_SIZE, &op->qiov,
211
+ 0, mirror_write_complete, op);
212
}
213
- blk_aio_pwritev(s->target, op->sector_num * BDRV_SECTOR_SIZE, &op->qiov,
214
- 0, mirror_write_complete, op);
215
+ aio_context_release(blk_get_aio_context(s->common.blk));
216
}
217
218
static inline void mirror_clip_sectors(MirrorBlockJob *s,
219
diff --git a/block/null.c b/block/null.c
220
index XXXXXXX..XXXXXXX 100644
221
--- a/block/null.c
222
+++ b/block/null.c
223
@@ -XXX,XX +XXX,XX @@ static const AIOCBInfo null_aiocb_info = {
224
static void null_bh_cb(void *opaque)
225
{
226
NullAIOCB *acb = opaque;
227
- AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
228
-
229
- aio_context_acquire(ctx);
230
acb->common.cb(acb->common.opaque, 0);
231
- aio_context_release(ctx);
232
qemu_aio_unref(acb);
233
}
234
235
static void null_timer_cb(void *opaque)
236
{
237
NullAIOCB *acb = opaque;
238
- AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
239
-
240
- aio_context_acquire(ctx);
241
acb->common.cb(acb->common.opaque, 0);
242
- aio_context_release(ctx);
243
timer_deinit(&acb->timer);
244
qemu_aio_unref(acb);
245
}
246
diff --git a/block/qed-cluster.c b/block/qed-cluster.c
247
index XXXXXXX..XXXXXXX 100644
248
--- a/block/qed-cluster.c
249
+++ b/block/qed-cluster.c
250
@@ -XXX,XX +XXX,XX @@ static void qed_find_cluster_cb(void *opaque, int ret)
251
unsigned int index;
252
unsigned int n;
253
254
+ qed_acquire(s);
255
if (ret) {
256
goto out;
257
}
258
@@ -XXX,XX +XXX,XX @@ static void qed_find_cluster_cb(void *opaque, int ret)
259
260
out:
261
find_cluster_cb->cb(find_cluster_cb->opaque, ret, offset, len);
262
+ qed_release(s);
263
g_free(find_cluster_cb);
264
}
265
266
diff --git a/block/qed-table.c b/block/qed-table.c
267
index XXXXXXX..XXXXXXX 100644
268
--- a/block/qed-table.c
269
+++ b/block/qed-table.c
270
@@ -XXX,XX +XXX,XX @@ static void qed_read_table_cb(void *opaque, int ret)
271
{
272
QEDReadTableCB *read_table_cb = opaque;
273
QEDTable *table = read_table_cb->table;
274
+ BDRVQEDState *s = read_table_cb->s;
275
int noffsets = read_table_cb->qiov.size / sizeof(uint64_t);
276
int i;
277
278
@@ -XXX,XX +XXX,XX @@ static void qed_read_table_cb(void *opaque, int ret)
279
}
280
281
/* Byteswap offsets */
282
+ qed_acquire(s);
283
for (i = 0; i < noffsets; i++) {
284
table->offsets[i] = le64_to_cpu(table->offsets[i]);
285
}
286
+ qed_release(s);
287
288
out:
289
/* Completion */
290
- trace_qed_read_table_cb(read_table_cb->s, read_table_cb->table, ret);
291
+ trace_qed_read_table_cb(s, read_table_cb->table, ret);
292
gencb_complete(&read_table_cb->gencb, ret);
293
}
294
295
@@ -XXX,XX +XXX,XX @@ typedef struct {
296
static void qed_write_table_cb(void *opaque, int ret)
297
{
298
QEDWriteTableCB *write_table_cb = opaque;
299
+ BDRVQEDState *s = write_table_cb->s;
300
301
- trace_qed_write_table_cb(write_table_cb->s,
302
+ trace_qed_write_table_cb(s,
303
write_table_cb->orig_table,
304
write_table_cb->flush,
305
ret);
306
@@ -XXX,XX +XXX,XX @@ static void qed_write_table_cb(void *opaque, int ret)
307
if (write_table_cb->flush) {
308
/* We still need to flush first */
309
write_table_cb->flush = false;
310
+ qed_acquire(s);
311
bdrv_aio_flush(write_table_cb->s->bs, qed_write_table_cb,
312
write_table_cb);
313
+ qed_release(s);
50
return;
314
return;
51
}
315
}
316
317
@@ -XXX,XX +XXX,XX @@ static void qed_read_l2_table_cb(void *opaque, int ret)
318
CachedL2Table *l2_table = request->l2_table;
319
uint64_t l2_offset = read_l2_table_cb->l2_offset;
320
321
+ qed_acquire(s);
322
if (ret) {
323
/* can't trust loaded L2 table anymore */
324
qed_unref_l2_cache_entry(l2_table);
325
@@ -XXX,XX +XXX,XX @@ static void qed_read_l2_table_cb(void *opaque, int ret)
326
request->l2_table = qed_find_l2_cache_entry(&s->l2_cache, l2_offset);
327
assert(request->l2_table != NULL);
328
}
329
+ qed_release(s);
330
331
gencb_complete(&read_l2_table_cb->gencb, ret);
332
}
333
diff --git a/block/qed.c b/block/qed.c
334
index XXXXXXX..XXXXXXX 100644
335
--- a/block/qed.c
336
+++ b/block/qed.c
337
@@ -XXX,XX +XXX,XX @@ static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t l
338
}
339
340
if (cb->co) {
341
- qemu_coroutine_enter(cb->co);
342
+ aio_co_wake(cb->co);
343
}
344
}
345
346
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn qed_co_pwrite_zeroes_cb(void *opaque, int ret)
347
cb->done = true;
348
cb->ret = ret;
349
if (cb->co) {
350
- qemu_coroutine_enter(cb->co);
351
+ aio_co_wake(cb->co);
352
}
353
}
354
355
diff --git a/block/rbd.c b/block/rbd.c
356
index XXXXXXX..XXXXXXX 100644
357
--- a/block/rbd.c
358
+++ b/block/rbd.c
359
@@ -XXX,XX +XXX,XX @@ shutdown:
360
static void qemu_rbd_complete_aio(RADOSCB *rcb)
361
{
362
RBDAIOCB *acb = rcb->acb;
363
- AioContext *ctx = bdrv_get_aio_context(acb->common.bs);
364
int64_t r;
365
366
r = rcb->ret;
367
@@ -XXX,XX +XXX,XX @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
368
qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
369
}
370
qemu_vfree(acb->bounce);
371
-
372
- aio_context_acquire(ctx);
373
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
374
- aio_context_release(ctx);
375
376
qemu_aio_unref(acb);
377
}
378
diff --git a/block/win32-aio.c b/block/win32-aio.c
379
index XXXXXXX..XXXXXXX 100644
380
--- a/block/win32-aio.c
381
+++ b/block/win32-aio.c
382
@@ -XXX,XX +XXX,XX @@ static void win32_aio_process_completion(QEMUWin32AIOState *s,
383
qemu_vfree(waiocb->buf);
384
}
385
386
-
387
- aio_context_acquire(s->aio_ctx);
388
waiocb->common.cb(waiocb->common.opaque, ret);
389
- aio_context_release(s->aio_ctx);
390
qemu_aio_unref(waiocb);
391
}
392
393
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
394
index XXXXXXX..XXXXXXX 100644
395
--- a/hw/block/virtio-blk.c
396
+++ b/hw/block/virtio-blk.c
397
@@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_rw_error(VirtIOBlockReq *req, int error,
398
static void virtio_blk_rw_complete(void *opaque, int ret)
399
{
400
VirtIOBlockReq *next = opaque;
401
+ VirtIOBlock *s = next->dev;
402
403
+ aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
404
while (next) {
405
VirtIOBlockReq *req = next;
406
next = req->mr_next;
407
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_rw_complete(void *opaque, int ret)
408
block_acct_done(blk_get_stats(req->dev->blk), &req->acct);
409
virtio_blk_free_request(req);
410
}
411
+ aio_context_release(blk_get_aio_context(s->conf.conf.blk));
412
}
413
414
static void virtio_blk_flush_complete(void *opaque, int ret)
415
{
416
VirtIOBlockReq *req = opaque;
417
+ VirtIOBlock *s = req->dev;
418
419
+ aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
420
if (ret) {
421
if (virtio_blk_handle_rw_error(req, -ret, 0)) {
422
- return;
423
+ goto out;
424
}
425
}
426
427
virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
428
block_acct_done(blk_get_stats(req->dev->blk), &req->acct);
429
virtio_blk_free_request(req);
430
+
431
+out:
432
+ aio_context_release(blk_get_aio_context(s->conf.conf.blk));
433
}
434
435
#ifdef __linux__
436
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_ioctl_complete(void *opaque, int status)
437
virtio_stl_p(vdev, &scsi->data_len, hdr->dxfer_len);
438
439
out:
440
+ aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
441
virtio_blk_req_complete(req, status);
442
virtio_blk_free_request(req);
443
+ aio_context_release(blk_get_aio_context(s->conf.conf.blk));
444
g_free(ioctl_req);
445
}
446
447
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
448
index XXXXXXX..XXXXXXX 100644
449
--- a/hw/scsi/scsi-disk.c
450
+++ b/hw/scsi/scsi-disk.c
451
@@ -XXX,XX +XXX,XX @@ static void scsi_aio_complete(void *opaque, int ret)
452
453
assert(r->req.aiocb != NULL);
454
r->req.aiocb = NULL;
455
+ aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk));
456
if (scsi_disk_req_check_error(r, ret, true)) {
457
goto done;
458
}
459
@@ -XXX,XX +XXX,XX @@ static void scsi_aio_complete(void *opaque, int ret)
460
scsi_req_complete(&r->req, GOOD);
461
462
done:
463
+ aio_context_release(blk_get_aio_context(s->qdev.conf.blk));
464
scsi_req_unref(&r->req);
465
}
466
467
@@ -XXX,XX +XXX,XX @@ static void scsi_dma_complete(void *opaque, int ret)
468
assert(r->req.aiocb != NULL);
469
r->req.aiocb = NULL;
470
471
+ aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk));
472
if (ret < 0) {
473
block_acct_failed(blk_get_stats(s->qdev.conf.blk), &r->acct);
474
} else {
475
block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct);
476
}
477
scsi_dma_complete_noio(r, ret);
478
+ aio_context_release(blk_get_aio_context(s->qdev.conf.blk));
479
}
480
481
static void scsi_read_complete(void * opaque, int ret)
482
@@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret)
483
484
assert(r->req.aiocb != NULL);
485
r->req.aiocb = NULL;
486
+ aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk));
487
if (scsi_disk_req_check_error(r, ret, true)) {
488
goto done;
489
}
490
@@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret)
491
492
done:
493
scsi_req_unref(&r->req);
494
+ aio_context_release(blk_get_aio_context(s->qdev.conf.blk));
495
}
496
497
/* Actually issue a read to the block device. */
498
@@ -XXX,XX +XXX,XX @@ static void scsi_do_read_cb(void *opaque, int ret)
499
assert (r->req.aiocb != NULL);
500
r->req.aiocb = NULL;
501
502
+ aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk));
503
if (ret < 0) {
504
block_acct_failed(blk_get_stats(s->qdev.conf.blk), &r->acct);
505
} else {
506
block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct);
507
}
508
scsi_do_read(opaque, ret);
509
+ aio_context_release(blk_get_aio_context(s->qdev.conf.blk));
510
}
511
512
/* Read more data from scsi device into buffer. */
513
@@ -XXX,XX +XXX,XX @@ static void scsi_write_complete(void * opaque, int ret)
514
assert (r->req.aiocb != NULL);
515
r->req.aiocb = NULL;
516
517
+ aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk));
518
if (ret < 0) {
519
block_acct_failed(blk_get_stats(s->qdev.conf.blk), &r->acct);
520
} else {
521
block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct);
522
}
523
scsi_write_complete_noio(r, ret);
524
+ aio_context_release(blk_get_aio_context(s->qdev.conf.blk));
525
}
526
527
static void scsi_write_data(SCSIRequest *req)
528
@@ -XXX,XX +XXX,XX @@ static void scsi_unmap_complete(void *opaque, int ret)
529
{
530
UnmapCBData *data = opaque;
531
SCSIDiskReq *r = data->r;
532
+ SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev);
533
534
assert(r->req.aiocb != NULL);
535
r->req.aiocb = NULL;
536
537
+ aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk));
538
scsi_unmap_complete_noio(data, ret);
539
+ aio_context_release(blk_get_aio_context(s->qdev.conf.blk));
540
}
541
542
static void scsi_disk_emulate_unmap(SCSIDiskReq *r, uint8_t *inbuf)
543
@@ -XXX,XX +XXX,XX @@ static void scsi_write_same_complete(void *opaque, int ret)
544
545
assert(r->req.aiocb != NULL);
546
r->req.aiocb = NULL;
547
+ aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk));
548
if (scsi_disk_req_check_error(r, ret, true)) {
549
goto done;
550
}
551
@@ -XXX,XX +XXX,XX @@ done:
552
scsi_req_unref(&r->req);
553
qemu_vfree(data->iov.iov_base);
554
g_free(data);
555
+ aio_context_release(blk_get_aio_context(s->qdev.conf.blk));
556
}
557
558
static void scsi_disk_emulate_write_same(SCSIDiskReq *r, uint8_t *inbuf)
559
diff --git a/hw/scsi/scsi-generic.c b/hw/scsi/scsi-generic.c
560
index XXXXXXX..XXXXXXX 100644
561
--- a/hw/scsi/scsi-generic.c
562
+++ b/hw/scsi/scsi-generic.c
563
@@ -XXX,XX +XXX,XX @@ done:
564
static void scsi_command_complete(void *opaque, int ret)
565
{
566
SCSIGenericReq *r = (SCSIGenericReq *)opaque;
567
+ SCSIDevice *s = r->req.dev;
568
569
assert(r->req.aiocb != NULL);
570
r->req.aiocb = NULL;
571
+
572
+ aio_context_acquire(blk_get_aio_context(s->conf.blk));
573
scsi_command_complete_noio(r, ret);
574
+ aio_context_release(blk_get_aio_context(s->conf.blk));
575
}
576
577
static int execute_command(BlockBackend *blk,
578
@@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret)
579
assert(r->req.aiocb != NULL);
580
r->req.aiocb = NULL;
581
582
+ aio_context_acquire(blk_get_aio_context(s->conf.blk));
583
+
584
if (ret || r->req.io_canceled) {
585
scsi_command_complete_noio(r, ret);
586
- return;
587
+ goto done;
588
}
589
590
len = r->io_header.dxfer_len - r->io_header.resid;
591
@@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret)
592
r->len = -1;
593
if (len == 0) {
594
scsi_command_complete_noio(r, 0);
595
- return;
596
+ goto done;
597
}
598
599
/* Snoop READ CAPACITY output to set the blocksize. */
600
@@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret)
601
}
602
scsi_req_data(&r->req, len);
603
scsi_req_unref(&r->req);
604
+
605
+done:
606
+ aio_context_release(blk_get_aio_context(s->conf.blk));
607
}
608
609
/* Read more data from scsi device into buffer. */
610
@@ -XXX,XX +XXX,XX @@ static void scsi_write_complete(void * opaque, int ret)
611
assert(r->req.aiocb != NULL);
612
r->req.aiocb = NULL;
613
614
+ aio_context_acquire(blk_get_aio_context(s->conf.blk));
615
+
616
if (ret || r->req.io_canceled) {
617
scsi_command_complete_noio(r, ret);
618
- return;
619
+ goto done;
620
}
621
622
if (r->req.cmd.buf[0] == MODE_SELECT && r->req.cmd.buf[4] == 12 &&
623
@@ -XXX,XX +XXX,XX @@ static void scsi_write_complete(void * opaque, int ret)
624
}
625
626
scsi_command_complete_noio(r, ret);
627
+
628
+done:
629
+ aio_context_release(blk_get_aio_context(s->conf.blk));
630
}
631
632
/* Write data to a scsi device. Returns nonzero on failure.
633
diff --git a/util/thread-pool.c b/util/thread-pool.c
634
index XXXXXXX..XXXXXXX 100644
635
--- a/util/thread-pool.c
636
+++ b/util/thread-pool.c
637
@@ -XXX,XX +XXX,XX @@ restart:
638
*/
639
qemu_bh_schedule(pool->completion_bh);
640
641
+ aio_context_release(pool->ctx);
642
elem->common.cb(elem->common.opaque, elem->ret);
643
+ aio_context_acquire(pool->ctx);
644
qemu_aio_unref(elem);
645
goto restart;
646
} else {
647
@@ -XXX,XX +XXX,XX @@ static void thread_pool_co_cb(void *opaque, int ret)
648
ThreadPoolCo *co = opaque;
649
650
co->ret = ret;
651
- qemu_coroutine_enter(co->co);
652
+ aio_co_wake(co->co);
653
}
654
655
int coroutine_fn thread_pool_submit_co(ThreadPool *pool, ThreadPoolFunc *func,
52
--
656
--
53
2.29.2
657
2.9.3
54
658
55
659
diff view generated by jsdifflib
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
The code already doesn't freeze the base node, and we try to keep it prepared
3
This patch prepares for the removal of unnecessary lockcnt inc/dec pairs.
4
for the situation when the base node is changed during the operation. In
4
Extract the dispatching loop for file descriptor handlers into a new
5
other words, block-stream doesn't own the base node.
5
function aio_dispatch_handlers, and then inline aio_dispatch into
6
aio_poll.
6
7
7
Let's introduce a new interface which should replace the current one,
8
aio_dispatch can now become void.
8
and better match the code. Specifying a bottom node
9
instead of a base, and requiring it to be a non-filter node, gives us the
10
following benefits (a usage sketch follows the list):
11
9
12
- drop the difference between above_base and base_overlay, which will be
10
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
13
renamed to just 'bottom' once the old interface is dropped
11
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
12
Reviewed-by: Fam Zheng <famz@redhat.com>
13
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
14
Message-id: 20170213135235.12274-17-pbonzini@redhat.com
15
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
16
---
17
include/block/aio.h | 6 +-----
18
util/aio-posix.c | 44 ++++++++++++++------------------------------
19
util/aio-win32.c | 13 ++++---------
20
util/async.c | 2 +-
21
4 files changed, 20 insertions(+), 45 deletions(-)
14
22
15
- a clean way to work with parallel streams/commits on the same backing
23
diff --git a/include/block/aio.h b/include/block/aio.h
16
chain, which otherwise becomes a problem when we introduce a filter
17
for the stream job
18
19
- a cleaner interface. Nobody will be surprised that the base node may
20
disappear during block-stream, given that there is no mention of "base" in
21
the interface.
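
A usage sketch of the new parameter (the node names 'top' and 'mid' are
invented; vm.qmp() is the iotests helper also used by test 310 below):

    # Hypothetical example: copy data from 'mid', and from any nodes
    # between 'mid' and 'top', into 'top'; nodes below 'mid' are left
    # untouched and no base node has to be named.
    log(vm.qmp('block-stream',
               job_id='stream0',
               device='top',
               bottom='mid'))
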
22
23
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
24
Message-Id: <20201216061703.70908-11-vsementsov@virtuozzo.com>
25
Reviewed-by: Max Reitz <mreitz@redhat.com>
26
Signed-off-by: Max Reitz <mreitz@redhat.com>
27
---
28
qapi/block-core.json | 12 ++++---
29
include/block/block_int.h | 1 +
30
block/monitor/block-hmp-cmds.c | 3 +-
31
block/stream.c | 50 +++++++++++++++++++---------
32
blockdev.c | 59 ++++++++++++++++++++++++++++------
33
5 files changed, 94 insertions(+), 31 deletions(-)
34
35
diff --git a/qapi/block-core.json b/qapi/block-core.json
36
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
37
--- a/qapi/block-core.json
25
--- a/include/block/aio.h
38
+++ b/qapi/block-core.json
26
+++ b/include/block/aio.h
39
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@ bool aio_pending(AioContext *ctx);
40
# @device: the device or node name of the top image
28
/* Dispatch any pending callbacks from the GSource attached to the AioContext.
41
#
29
*
42
# @base: the common backing file name.
30
* This is used internally in the implementation of the GSource.
43
-# It cannot be set if @base-node is also set.
31
- *
44
+# It cannot be set if @base-node or @bottom is also set.
32
- * @dispatch_fds: true to process fds, false to skip them
45
#
33
- * (can be used as an optimization by callers that know there
46
# @base-node: the node name of the backing file.
34
- * are no fds ready)
47
-# It cannot be set if @base is also set. (Since 2.8)
35
*/
48
+# It cannot be set if @base or @bottom is also set. (Since 2.8)
36
-bool aio_dispatch(AioContext *ctx, bool dispatch_fds);
49
+#
37
+void aio_dispatch(AioContext *ctx);
50
+# @bottom: the last node in the chain that should be streamed into
38
51
+# top. It cannot be set if @base or @base-node is also set.
39
/* Progress in completing AIO work to occur. This can issue new pending
52
+# It cannot be filter node. (Since 6.0)
40
* aio as a result of executing I/O completion or bh callbacks.
53
#
41
diff --git a/util/aio-posix.c b/util/aio-posix.c
54
# @backing-file: The backing file string to write into the top
55
# image. This filename is not validated.
56
@@ -XXX,XX +XXX,XX @@
57
##
58
{ 'command': 'block-stream',
59
'data': { '*job-id': 'str', 'device': 'str', '*base': 'str',
60
- '*base-node': 'str', '*backing-file': 'str', '*speed': 'int',
61
- '*on-error': 'BlockdevOnError',
62
+ '*base-node': 'str', '*backing-file': 'str', '*bottom': 'str',
63
+ '*speed': 'int', '*on-error': 'BlockdevOnError',
64
'*filter-node-name': 'str',
65
'*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
66
67
diff --git a/include/block/block_int.h b/include/block/block_int.h
68
index XXXXXXX..XXXXXXX 100644
42
index XXXXXXX..XXXXXXX 100644
69
--- a/include/block/block_int.h
43
--- a/util/aio-posix.c
70
+++ b/include/block/block_int.h
44
+++ b/util/aio-posix.c
71
@@ -XXX,XX +XXX,XX @@ int is_windows_drive(const char *filename);
45
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx)
72
*/
46
AioHandler *node, *tmp;
73
void stream_start(const char *job_id, BlockDriverState *bs,
47
bool progress = false;
74
BlockDriverState *base, const char *backing_file_str,
75
+ BlockDriverState *bottom,
76
int creation_flags, int64_t speed,
77
BlockdevOnError on_error,
78
const char *filter_node_name,
79
diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/block/monitor/block-hmp-cmds.c
82
+++ b/block/monitor/block-hmp-cmds.c
83
@@ -XXX,XX +XXX,XX @@ void hmp_block_stream(Monitor *mon, const QDict *qdict)
84
int64_t speed = qdict_get_try_int(qdict, "speed", 0);
85
86
qmp_block_stream(true, device, device, base != NULL, base, false, NULL,
87
- false, NULL, qdict_haskey(qdict, "speed"), speed, true,
88
+ false, NULL, false, NULL,
89
+ qdict_haskey(qdict, "speed"), speed, true,
90
BLOCKDEV_ON_ERROR_REPORT, false, NULL, false, false, false,
91
false, &error);
92
93
diff --git a/block/stream.c b/block/stream.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/block/stream.c
96
+++ b/block/stream.c
97
@@ -XXX,XX +XXX,XX @@ static const BlockJobDriver stream_job_driver = {
98
99
void stream_start(const char *job_id, BlockDriverState *bs,
100
BlockDriverState *base, const char *backing_file_str,
101
+ BlockDriverState *bottom,
102
int creation_flags, int64_t speed,
103
BlockdevOnError on_error,
104
const char *filter_node_name,
105
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
106
BlockDriverState *iter;
107
bool bs_read_only;
108
int basic_flags = BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED;
109
- BlockDriverState *base_overlay = bdrv_find_overlay(bs, base);
110
+ BlockDriverState *base_overlay;
111
BlockDriverState *above_base;
112
113
- if (!base_overlay) {
114
- error_setg(errp, "'%s' is not in the backing chain of '%s'",
115
- base->node_name, bs->node_name);
116
- return;
117
- }
118
+ assert(!(base && bottom));
119
+ assert(!(backing_file_str && bottom));
120
+
121
+ if (bottom) {
122
+ /*
123
+ * New simple interface. The code is written in terms of old interface
124
+ * with @base parameter (still, it doesn't freeze link to base, so in
125
+ * this mean old code is correct for new interface). So, for now, just
126
+ * emulate base_overlay and above_base. Still, when old interface
127
+ * finally removed, we should refactor code to use only "bottom", but
128
+ * not "*base*" things.
129
+ */
130
+ assert(!bottom->drv->is_filter);
131
+ base_overlay = above_base = bottom;
132
+ } else {
133
+ base_overlay = bdrv_find_overlay(bs, base);
134
+ if (!base_overlay) {
135
+ error_setg(errp, "'%s' is not in the backing chain of '%s'",
136
+ base->node_name, bs->node_name);
137
+ return;
138
+ }
139
48
140
- /*
49
- /*
141
- * Find the node directly above @base. @base_overlay is a COW overlay, so
50
- * We have to walk very carefully in case aio_set_fd_handler is
142
- * it must have a bdrv_cow_child(), but it is the immediate overlay of
51
- * called while we're walking.
143
- * @base, so between the two there can only be filters.
144
- */
52
- */
145
- above_base = base_overlay;
53
- qemu_lockcnt_inc(&ctx->list_lock);
146
- if (bdrv_cow_bs(above_base) != base) {
54
-
147
- above_base = bdrv_cow_bs(above_base);
55
QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
148
- while (bdrv_filter_bs(above_base) != base) {
56
int revents;
149
- above_base = bdrv_filter_bs(above_base);
57
150
+ /*
58
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx)
151
+ * Find the node directly above @base. @base_overlay is a COW overlay,
152
+ * so it must have a bdrv_cow_child(), but it is the immediate overlay
153
+ * of @base, so between the two there can only be filters.
154
+ */
155
+ above_base = base_overlay;
156
+ if (bdrv_cow_bs(above_base) != base) {
157
+ above_base = bdrv_cow_bs(above_base);
158
+ while (bdrv_filter_bs(above_base) != base) {
159
+ above_base = bdrv_filter_bs(above_base);
160
+ }
161
}
59
}
162
}
60
}
163
61
164
diff --git a/blockdev.c b/blockdev.c
62
- qemu_lockcnt_dec(&ctx->list_lock);
165
index XXXXXXX..XXXXXXX 100644
63
return progress;
166
--- a/blockdev.c
64
}
167
+++ b/blockdev.c
65
168
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
66
-/*
169
bool has_base, const char *base,
67
- * Note that dispatch_fds == false has the side-effect of post-poning the
170
bool has_base_node, const char *base_node,
68
- * freeing of deleted handlers.
171
bool has_backing_file, const char *backing_file,
69
- */
172
+ bool has_bottom, const char *bottom,
70
-bool aio_dispatch(AioContext *ctx, bool dispatch_fds)
173
bool has_speed, int64_t speed,
71
+void aio_dispatch(AioContext *ctx)
174
bool has_on_error, BlockdevOnError on_error,
175
bool has_filter_node_name, const char *filter_node_name,
176
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
177
bool has_auto_dismiss, bool auto_dismiss,
178
Error **errp)
179
{
72
{
180
- BlockDriverState *bs, *iter;
73
- bool progress;
181
+ BlockDriverState *bs, *iter, *iter_end;
74
+ aio_bh_poll(ctx);
182
BlockDriverState *base_bs = NULL;
75
183
+ BlockDriverState *bottom_bs = NULL;
76
- /*
184
AioContext *aio_context;
77
- * If there are callbacks left that have been queued, we need to call them.
185
Error *local_err = NULL;
78
- * Do not call select in this case, because it is possible that the caller
186
int job_flags = JOB_DEFAULT;
79
- * does not need a complete flush (as is the case for aio_poll loops).
187
80
- */
188
+ if (has_base && has_base_node) {
81
- progress = aio_bh_poll(ctx);
189
+ error_setg(errp, "'base' and 'base-node' cannot be specified "
82
+ qemu_lockcnt_inc(&ctx->list_lock);
190
+ "at the same time");
83
+ aio_dispatch_handlers(ctx);
191
+ return;
84
+ qemu_lockcnt_dec(&ctx->list_lock);
192
+ }
85
193
+
86
- if (dispatch_fds) {
194
+ if (has_base && has_bottom) {
87
- progress |= aio_dispatch_handlers(ctx);
195
+ error_setg(errp, "'base' and 'bottom' cannot be specified "
196
+ "at the same time");
197
+ return;
198
+ }
199
+
200
+ if (has_bottom && has_base_node) {
201
+ error_setg(errp, "'bottom' and 'base-node' cannot be specified "
202
+ "at the same time");
203
+ return;
204
+ }
205
+
206
if (!has_on_error) {
207
on_error = BLOCKDEV_ON_ERROR_REPORT;
208
}
209
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
210
aio_context = bdrv_get_aio_context(bs);
211
aio_context_acquire(aio_context);
212
213
- if (has_base && has_base_node) {
214
- error_setg(errp, "'base' and 'base-node' cannot be specified "
215
- "at the same time");
216
- goto out;
217
- }
88
- }
218
-
89
-
219
if (has_base) {
90
- /* Run our timers */
220
base_bs = bdrv_find_backing_image(bs, base);
91
- progress |= timerlistgroup_run_timers(&ctx->tlg);
221
if (base_bs == NULL) {
92
-
222
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
93
- return progress;
223
bdrv_refresh_filename(base_bs);
94
+ timerlistgroup_run_timers(&ctx->tlg);
95
}
96
97
/* These thread-local variables are used only in a small part of aio_poll
98
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
99
npfd = 0;
100
qemu_lockcnt_dec(&ctx->list_lock);
101
102
- /* Run dispatch even if there were no readable fds to run timers */
103
- if (aio_dispatch(ctx, ret > 0)) {
104
- progress = true;
105
+ progress |= aio_bh_poll(ctx);
106
+
107
+ if (ret > 0) {
108
+ qemu_lockcnt_inc(&ctx->list_lock);
109
+ progress |= aio_dispatch_handlers(ctx);
110
+ qemu_lockcnt_dec(&ctx->list_lock);
224
}
111
}
225
112
226
- /* Check for op blockers in the whole chain between bs and base */
113
+ progress |= timerlistgroup_run_timers(&ctx->tlg);
227
- for (iter = bs; iter && iter != base_bs;
228
+ if (has_bottom) {
229
+ bottom_bs = bdrv_lookup_bs(NULL, bottom, errp);
230
+ if (!bottom_bs) {
231
+ goto out;
232
+ }
233
+ if (!bottom_bs->drv) {
234
+ error_setg(errp, "Node '%s' is not open", bottom);
235
+ goto out;
236
+ }
237
+ if (bottom_bs->drv->is_filter) {
238
+ error_setg(errp, "Node '%s' is a filter, use a non-filter node "
239
+ "as 'bottom'", bottom);
240
+ goto out;
241
+ }
242
+ if (!bdrv_chain_contains(bs, bottom_bs)) {
243
+ error_setg(errp, "Node '%s' is not in a chain starting from '%s'",
244
+ bottom, device);
245
+ goto out;
246
+ }
247
+ assert(bdrv_get_aio_context(bottom_bs) == aio_context);
248
+ }
249
+
114
+
250
+ /*
115
return progress;
251
+ * Check for op blockers in the whole chain between bs and base (or bottom)
116
}
252
+ */
117
253
+ iter_end = has_bottom ? bdrv_filter_or_cow_bs(bottom_bs) : base_bs;
118
diff --git a/util/aio-win32.c b/util/aio-win32.c
254
+ for (iter = bs; iter && iter != iter_end;
119
index XXXXXXX..XXXXXXX 100644
255
iter = bdrv_filter_or_cow_bs(iter))
120
--- a/util/aio-win32.c
256
{
121
+++ b/util/aio-win32.c
257
if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_STREAM, errp)) {
122
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
258
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
123
return progress;
259
}
124
}
260
125
261
stream_start(has_job_id ? job_id : NULL, bs, base_bs, backing_file,
126
-bool aio_dispatch(AioContext *ctx, bool dispatch_fds)
262
- job_flags, has_speed ? speed : 0, on_error,
127
+void aio_dispatch(AioContext *ctx)
263
+ bottom_bs, job_flags, has_speed ? speed : 0, on_error,
128
{
264
filter_node_name, &local_err);
129
- bool progress;
265
if (local_err) {
130
-
266
error_propagate(errp, local_err);
131
- progress = aio_bh_poll(ctx);
132
- if (dispatch_fds) {
133
- progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
134
- }
135
- progress |= timerlistgroup_run_timers(&ctx->tlg);
136
- return progress;
137
+ aio_bh_poll(ctx);
138
+ aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
139
+ timerlistgroup_run_timers(&ctx->tlg);
140
}
141
142
bool aio_poll(AioContext *ctx, bool blocking)
143
diff --git a/util/async.c b/util/async.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/util/async.c
146
+++ b/util/async.c
147
@@ -XXX,XX +XXX,XX @@ aio_ctx_dispatch(GSource *source,
148
AioContext *ctx = (AioContext *) source;
149
150
assert(callback == NULL);
151
- aio_dispatch(ctx, true);
152
+ aio_dispatch(ctx);
153
return true;
154
}
155
267
--
156
--
268
2.29.2
157
2.9.3
269
158
270
159
diff view generated by jsdifflib
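To illustrate the interface added above, here is a minimal iotests-style sketch of starting a stream job with the new @bottom argument instead of @base. The node names ('node3', 'node1'), the job id and the image name are hypothetical, not taken from the patch; it only shows the shape of the call and the documented exclusivity rules.

    # Hypothetical example in the style of tests/qemu-iotests: 'node3' is the
    # active layer, 'node1' is the last node whose data should be copied up.
    result = self.vm.qmp('block-stream', device='node3', job_id='stream-node3',
                         bottom='node1')
    self.assert_qmp(result, 'return', {})

    # 'bottom' is mutually exclusive with 'base' and 'base-node', so the old
    # style of invocation would have to drop the 'bottom' argument:
    #   self.vm.qmp('block-stream', device='node3', base='base.img')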
1
From: Paolo Bonzini <pbonzini@redhat.com>

Pull the increment/decrement pair out of aio_bh_poll and into the
callers.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-18-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
util/aio-posix.c | 8 +++-----
util/aio-win32.c | 8 ++++----
util/async.c | 12 ++++++------
3 files changed, 13 insertions(+), 15 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c

From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

This patch completes the series with the COR-filter applied to
block-stream operations.

Adding the filter makes it possible to implement, in the future,
discarding of copied regions in backing files during the block-stream
job, to reduce disk overuse (we need control over permissions for that).

Also, the filter is now smart enough to do copy-on-read with a specified
base, so guest reads benefit even when block-stream covers only part of
the backing chain.

Several iotests are slightly modified due to the filter insertion.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20201216061703.70908-14-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
22
block/stream.c | 105 ++++++++++++++++++++++---------------
23
tests/qemu-iotests/030 | 8 +--
24
tests/qemu-iotests/141.out | 2 +-
25
tests/qemu-iotests/245 | 20 ++++---
26
4 files changed, 80 insertions(+), 55 deletions(-)
27
28
diff --git a/block/stream.c b/block/stream.c
29
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
30
--- a/block/stream.c
20
--- a/util/aio-posix.c
31
+++ b/block/stream.c
21
+++ b/util/aio-posix.c
32
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx)
33
#include "block/blockjob_int.h"
23
34
#include "qapi/error.h"
24
void aio_dispatch(AioContext *ctx)
35
#include "qapi/qmp/qerror.h"
25
{
36
+#include "qapi/qmp/qdict.h"
26
+ qemu_lockcnt_inc(&ctx->list_lock);
37
#include "qemu/ratelimit.h"
27
aio_bh_poll(ctx);
38
#include "sysemu/block-backend.h"
28
-
39
+#include "block/copy-on-read.h"
29
- qemu_lockcnt_inc(&ctx->list_lock);
40
30
aio_dispatch_handlers(ctx);
41
enum {
31
qemu_lockcnt_dec(&ctx->list_lock);
32
33
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
34
}
35
36
npfd = 0;
37
- qemu_lockcnt_dec(&ctx->list_lock);
38
39
progress |= aio_bh_poll(ctx);
40
41
if (ret > 0) {
42
- qemu_lockcnt_inc(&ctx->list_lock);
43
progress |= aio_dispatch_handlers(ctx);
44
- qemu_lockcnt_dec(&ctx->list_lock);
45
}
46
47
+ qemu_lockcnt_dec(&ctx->list_lock);
48
+
49
progress |= timerlistgroup_run_timers(&ctx->tlg);
50
51
return progress;
52
diff --git a/util/aio-win32.c b/util/aio-win32.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/util/aio-win32.c
55
+++ b/util/aio-win32.c
56
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
57
bool progress = false;
58
AioHandler *tmp;
59
60
- qemu_lockcnt_inc(&ctx->list_lock);
61
-
42
/*
62
/*
43
@@ -XXX,XX +XXX,XX @@ typedef struct StreamBlockJob {
63
* We have to walk very carefully in case aio_set_fd_handler is
44
BlockJob common;
64
* called while we're walking.
45
BlockDriverState *base_overlay; /* COW overlay (stream from this) */
65
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
46
BlockDriverState *above_base; /* Node directly above the base */
47
+ BlockDriverState *cor_filter_bs;
48
BlockDriverState *target_bs;
49
BlockdevOnError on_error;
50
char *backing_file_str;
51
bool bs_read_only;
52
- bool chain_frozen;
53
} StreamBlockJob;
54
55
static int coroutine_fn stream_populate(BlockBackend *blk,
56
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_populate(BlockBackend *blk,
57
{
58
assert(bytes < SIZE_MAX);
59
60
- return blk_co_preadv(blk, offset, bytes, NULL,
61
- BDRV_REQ_COPY_ON_READ | BDRV_REQ_PREFETCH);
62
-}
63
-
64
-static void stream_abort(Job *job)
65
-{
66
- StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
67
-
68
- if (s->chain_frozen) {
69
- bdrv_unfreeze_backing_chain(s->target_bs, s->above_base);
70
- }
71
+ return blk_co_preadv(blk, offset, bytes, NULL, BDRV_REQ_PREFETCH);
72
}
73
74
static int stream_prepare(Job *job)
75
@@ -XXX,XX +XXX,XX @@ static int stream_prepare(Job *job)
76
Error *local_err = NULL;
77
int ret = 0;
78
79
- bdrv_unfreeze_backing_chain(s->target_bs, s->above_base);
80
- s->chain_frozen = false;
81
+ /* We should drop filter at this point, as filter hold the backing chain */
82
+ bdrv_cor_filter_drop(s->cor_filter_bs);
83
+ s->cor_filter_bs = NULL;
84
85
if (bdrv_cow_child(unfiltered_bs)) {
86
const char *base_id = NULL, *base_fmt = NULL;
87
@@ -XXX,XX +XXX,XX @@ static void stream_clean(Job *job)
88
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
89
BlockJob *bjob = &s->common;
90
91
+ if (s->cor_filter_bs) {
92
+ bdrv_cor_filter_drop(s->cor_filter_bs);
93
+ s->cor_filter_bs = NULL;
94
+ }
95
+
96
/* Reopen the image back in read-only mode if necessary */
97
if (s->bs_read_only) {
98
/* Give up write permissions before making it read-only */
99
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
100
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
101
BlockBackend *blk = s->common.blk;
102
BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
103
- bool enable_cor = !bdrv_cow_child(s->base_overlay);
104
int64_t len;
105
int64_t offset = 0;
106
uint64_t delay_ns = 0;
107
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
108
}
109
job_progress_set_remaining(&s->common.job, len);
110
111
- /* Turn on copy-on-read for the whole block device so that guest read
112
- * requests help us make progress. Only do this when copying the entire
113
- * backing chain since the copy-on-read operation does not take base into
114
- * account.
115
- */
116
- if (enable_cor) {
117
- bdrv_enable_copy_on_read(s->target_bs);
118
- }
119
-
120
for ( ; offset < len; offset += n) {
121
bool copy;
122
int ret;
123
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn stream_run(Job *job, Error **errp)
124
}
66
}
125
}
67
}
126
68
127
- if (enable_cor) {
69
- qemu_lockcnt_dec(&ctx->list_lock);
128
- bdrv_disable_copy_on_read(s->target_bs);
70
return progress;
129
- }
130
-
131
/* Do not remove the backing file if an error was there but ignored. */
132
return error;
133
}
71
}
134
@@ -XXX,XX +XXX,XX @@ static const BlockJobDriver stream_job_driver = {
72
135
.free = block_job_free,
73
void aio_dispatch(AioContext *ctx)
136
.run = stream_run,
74
{
137
.prepare = stream_prepare,
75
+ qemu_lockcnt_inc(&ctx->list_lock);
138
- .abort = stream_abort,
76
aio_bh_poll(ctx);
139
.clean = stream_clean,
77
aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
140
.user_resume = block_job_user_resume,
78
+ qemu_lockcnt_dec(&ctx->list_lock);
141
},
79
timerlistgroup_run_timers(&ctx->tlg);
142
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
80
}
143
bool bs_read_only;
81
144
int basic_flags = BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED;
82
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
145
BlockDriverState *base_overlay;
146
+ BlockDriverState *cor_filter_bs = NULL;
147
BlockDriverState *above_base;
148
+ QDict *opts;
149
150
assert(!(base && bottom));
151
assert(!(backing_file_str && bottom));
152
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
153
}
83
}
154
}
84
}
155
85
156
- if (bdrv_freeze_backing_chain(bs, above_base, errp) < 0) {
86
- qemu_lockcnt_dec(&ctx->list_lock);
157
- return;
87
first = true;
158
- }
88
89
/* ctx->notifier is always registered. */
90
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
91
progress |= aio_dispatch_handlers(ctx, event);
92
} while (count > 0);
93
94
+ qemu_lockcnt_dec(&ctx->list_lock);
95
+
96
progress |= timerlistgroup_run_timers(&ctx->tlg);
97
return progress;
98
}
99
diff --git a/util/async.c b/util/async.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/util/async.c
102
+++ b/util/async.c
103
@@ -XXX,XX +XXX,XX @@ void aio_bh_call(QEMUBH *bh)
104
bh->cb(bh->opaque);
105
}
106
107
-/* Multiple occurrences of aio_bh_poll cannot be called concurrently */
108
+/* Multiple occurrences of aio_bh_poll cannot be called concurrently.
109
+ * The count in ctx->list_lock is incremented before the call, and is
110
+ * not affected by the call.
111
+ */
112
int aio_bh_poll(AioContext *ctx)
113
{
114
QEMUBH *bh, **bhp, *next;
115
int ret;
116
bool deleted = false;
117
118
- qemu_lockcnt_inc(&ctx->list_lock);
159
-
119
-
160
/* Make sure that the image is opened in read-write mode */
120
ret = 0;
161
bs_read_only = bdrv_is_read_only(bs);
121
for (bh = atomic_rcu_read(&ctx->first_bh); bh; bh = next) {
162
if (bs_read_only) {
122
next = atomic_rcu_read(&bh->next);
163
- if (bdrv_reopen_set_read_only(bs, false, errp) != 0) {
123
@@ -XXX,XX +XXX,XX @@ int aio_bh_poll(AioContext *ctx)
164
- bs_read_only = false;
124
165
- goto fail;
125
/* remove deleted bhs */
166
+ int ret;
126
if (!deleted) {
167
+ /* Hold the chain during reopen */
127
- qemu_lockcnt_dec(&ctx->list_lock);
168
+ if (bdrv_freeze_backing_chain(bs, above_base, errp) < 0) {
128
return ret;
169
+ return;
129
}
170
+ }
130
171
+
131
- if (qemu_lockcnt_dec_and_lock(&ctx->list_lock)) {
172
+ ret = bdrv_reopen_set_read_only(bs, false, errp);
132
+ if (qemu_lockcnt_dec_if_lock(&ctx->list_lock)) {
173
+
133
bhp = &ctx->first_bh;
174
+ /* failure, or cor-filter will hold the chain */
134
while (*bhp) {
175
+ bdrv_unfreeze_backing_chain(bs, above_base);
135
bh = *bhp;
176
+
136
@@ -XXX,XX +XXX,XX @@ int aio_bh_poll(AioContext *ctx)
177
+ if (ret < 0) {
137
bhp = &bh->next;
178
+ return;
138
}
179
}
139
}
140
- qemu_lockcnt_unlock(&ctx->list_lock);
141
+ qemu_lockcnt_inc_and_unlock(&ctx->list_lock);
180
}
142
}
181
143
return ret;
182
- /* Prevent concurrent jobs trying to modify the graph structure here, we
183
- * already have our own plans. Also don't allow resize as the image size is
184
- * queried only at the job start and then cached. */
185
- s = block_job_create(job_id, &stream_job_driver, NULL, bs,
186
- basic_flags | BLK_PERM_GRAPH_MOD,
187
+ opts = qdict_new();
188
+
189
+ qdict_put_str(opts, "driver", "copy-on-read");
190
+ qdict_put_str(opts, "file", bdrv_get_node_name(bs));
191
+ /* Pass the base_overlay node name as 'bottom' to COR driver */
192
+ qdict_put_str(opts, "bottom", base_overlay->node_name);
193
+ if (filter_node_name) {
194
+ qdict_put_str(opts, "node-name", filter_node_name);
195
+ }
196
+
197
+ cor_filter_bs = bdrv_insert_node(bs, opts, BDRV_O_RDWR, errp);
198
+ if (!cor_filter_bs) {
199
+ goto fail;
200
+ }
201
+
202
+ if (!filter_node_name) {
203
+ cor_filter_bs->implicit = true;
204
+ }
205
+
206
+ s = block_job_create(job_id, &stream_job_driver, NULL, cor_filter_bs,
207
+ BLK_PERM_CONSISTENT_READ,
208
basic_flags | BLK_PERM_WRITE,
209
speed, creation_flags, NULL, NULL, errp);
210
if (!s) {
211
goto fail;
212
}
213
214
+ /*
215
+ * Prevent concurrent jobs trying to modify the graph structure here, we
216
+ * already have our own plans. Also don't allow resize as the image size is
217
+ * queried only at the job start and then cached.
218
+ */
219
+ if (block_job_add_bdrv(&s->common, "active node", bs, 0,
220
+ basic_flags | BLK_PERM_WRITE, &error_abort)) {
221
+ goto fail;
222
+ }
223
+
224
/* Block all intermediate nodes between bs and base, because they will
225
* disappear from the chain after this operation. The streaming job reads
226
* every block only once, assuming that it doesn't change, so forbid writes
227
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
228
s->base_overlay = base_overlay;
229
s->above_base = above_base;
230
s->backing_file_str = g_strdup(backing_file_str);
231
+ s->cor_filter_bs = cor_filter_bs;
232
s->target_bs = bs;
233
s->bs_read_only = bs_read_only;
234
- s->chain_frozen = true;
235
236
s->on_error = on_error;
237
trace_stream_start(bs, base, s);
238
@@ -XXX,XX +XXX,XX @@ void stream_start(const char *job_id, BlockDriverState *bs,
239
return;
240
241
fail:
242
+ if (cor_filter_bs) {
243
+ bdrv_cor_filter_drop(cor_filter_bs);
244
+ }
245
if (bs_read_only) {
246
bdrv_reopen_set_read_only(bs, true, NULL);
247
}
248
- bdrv_unfreeze_backing_chain(bs, above_base);
249
}
144
}
250
diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
251
index XXXXXXX..XXXXXXX 100755
252
--- a/tests/qemu-iotests/030
253
+++ b/tests/qemu-iotests/030
254
@@ -XXX,XX +XXX,XX @@ class TestParallelOps(iotests.QMPTestCase):
255
self.assert_no_active_block_jobs()
256
257
# Set a speed limit to make sure that this job blocks the rest
258
- result = self.vm.qmp('block-stream', device='node4', job_id='stream-node4', base=self.imgs[1], speed=1024*1024)
259
+ result = self.vm.qmp('block-stream', device='node4',
260
+ job_id='stream-node4', base=self.imgs[1],
261
+ filter_node_name='stream-filter', speed=1024*1024)
262
self.assert_qmp(result, 'return', {})
263
264
result = self.vm.qmp('block-stream', device='node5', job_id='stream-node5', base=self.imgs[2])
265
self.assert_qmp(result, 'error/desc',
266
- "Node 'node4' is busy: block device is in use by block job: stream")
267
+ "Node 'stream-filter' is busy: block device is in use by block job: stream")
268
269
result = self.vm.qmp('block-stream', device='node3', job_id='stream-node3', base=self.imgs[2])
270
self.assert_qmp(result, 'error/desc',
271
@@ -XXX,XX +XXX,XX @@ class TestParallelOps(iotests.QMPTestCase):
272
# block-commit should also fail if it touches nodes used by the stream job
273
result = self.vm.qmp('block-commit', device='drive0', base=self.imgs[4], job_id='commit-node4')
274
self.assert_qmp(result, 'error/desc',
275
- "Node 'node4' is busy: block device is in use by block job: stream")
276
+ "Node 'stream-filter' is busy: block device is in use by block job: stream")
277
278
result = self.vm.qmp('block-commit', device='drive0', base=self.imgs[1], top=self.imgs[3], job_id='commit-node1')
279
self.assert_qmp(result, 'error/desc',
280
diff --git a/tests/qemu-iotests/141.out b/tests/qemu-iotests/141.out
281
index XXXXXXX..XXXXXXX 100644
282
--- a/tests/qemu-iotests/141.out
283
+++ b/tests/qemu-iotests/141.out
284
@@ -XXX,XX +XXX,XX @@ wrote 1048576/1048576 bytes at offset 0
285
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
286
{'execute': 'blockdev-del',
287
'arguments': {'node-name': 'drv0'}}
288
-{"error": {"class": "GenericError", "desc": "Node drv0 is in use"}}
289
+{"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: block device is in use by block job: stream"}}
290
{'execute': 'block-job-cancel',
291
'arguments': {'device': 'job0'}}
292
{"return": {}}
293
diff --git a/tests/qemu-iotests/245 b/tests/qemu-iotests/245
294
index XXXXXXX..XXXXXXX 100755
295
--- a/tests/qemu-iotests/245
296
+++ b/tests/qemu-iotests/245
297
@@ -XXX,XX +XXX,XX @@ class TestBlockdevReopen(iotests.QMPTestCase):
298
299
# hd1 <- hd0
300
result = self.vm.qmp('block-stream', conv_keys = True, job_id = 'stream0',
301
- device = 'hd1', auto_finalize = False)
302
+ device = 'hd1', filter_node_name='cor',
303
+ auto_finalize = False)
304
self.assert_qmp(result, 'return', {})
305
306
- # We can't reopen with the original options because that would
307
- # make hd1 read-only and block-stream requires it to be read-write
308
- # (Which error message appears depends on whether the stream job is
309
- # already done with copying at this point.)
310
+ # We can't reopen with the original options because there is a filter
311
+ # inserted by stream job above hd1.
312
self.reopen(opts, {},
313
- ["Can't set node 'hd1' to r/o with copy-on-read enabled",
314
- "Cannot make block node read-only, there is a writer on it"])
315
+ "Cannot change the option 'backing.backing.file.node-name'")
316
+
317
+ # We can't reopen hd1 to read-only, as block-stream requires it to be
318
+ # read-write
319
+ self.reopen(opts['backing'], {'read-only': True},
320
+ "Cannot make block node read-only, there is a writer on it")
321
322
# We can't remove hd2 while the stream job is ongoing
323
opts['backing']['backing'] = None
324
- self.reopen(opts, {'backing.read-only': False}, "Cannot change 'backing' link from 'hd1' to 'hd2'")
325
+ self.reopen(opts['backing'], {'read-only': False},
326
+ "Cannot change 'backing' link from 'hd1' to 'hd2'")
327
328
# We can detach hd1 from hd0 because it doesn't affect the stream job
329
opts['backing'] = None
330
--
145
--
331
2.29.2
146
2.9.3
332
147
333
148
diff view generated by jsdifflib
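As a usage note for the filter insertion shown above, the sketch below names the implicit copy-on-read filter explicitly and shows how it then appears in conflict errors, following the setup of iotest 030 that this patch modifies. The node names, image list and speed value are assumptions taken over from that test, not a new API.

    # Hypothetical sketch: give the stream job's copy-on-read filter an
    # explicit node name so it is visible in the graph and in error messages.
    result = self.vm.qmp('block-stream', device='node4', job_id='stream-node4',
                         base=self.imgs[1], filter_node_name='stream-filter',
                         speed=1024 * 1024)
    self.assert_qmp(result, 'return', {})

    # While the job runs, the filter node (not 'node4') holds the blocker,
    # so a conflicting job fails with a message naming the filter:
    result = self.vm.qmp('block-commit', device='drive0', base=self.imgs[4],
                         job_id='commit-node4')
    self.assert_qmp(result, 'error/desc',
                    "Node 'stream-filter' is busy: block device is in use "
                    "by block job: stream")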
1
From: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-19-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
include/block/block_int.h | 64 +++++++++++++++++++++++++-----------------
include/sysemu/block-backend.h | 14 ++++++---
2 files changed, 49 insertions(+), 29 deletions(-)

From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Add the new member supported_read_flags to the BlockDriverState
structure. It will control the flags set for copy-on-read operations.
Make the generic block layer evaluate the supported read flags before
they go to a block driver.

Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: use assert instead of abort]
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-8-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
include/block/block_int.h | 4 ++++
block/io.c | 10 ++++++++--
2 files changed, 12 insertions(+), 2 deletions(-)
19
13
20
diff --git a/include/block/block_int.h b/include/block/block_int.h
14
diff --git a/include/block/block_int.h b/include/block/block_int.h
21
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
22
--- a/include/block/block_int.h
16
--- a/include/block/block_int.h
23
+++ b/include/block/block_int.h
17
+++ b/include/block/block_int.h
18
@@ -XXX,XX +XXX,XX @@ struct BdrvChild {
19
* copied as well.
20
*/
21
struct BlockDriverState {
22
- int64_t total_sectors; /* if we are reading a disk image, give its
23
- size in sectors */
24
+ /* Protected by big QEMU lock or read-only after opening. No special
25
+ * locking needed during I/O...
26
+ */
27
int open_flags; /* flags used to open the file, re-used for re-open */
28
bool read_only; /* if true, the media is read only */
29
bool encrypted; /* if true, the media is encrypted */
24
@@ -XXX,XX +XXX,XX @@ struct BlockDriverState {
30
@@ -XXX,XX +XXX,XX @@ struct BlockDriverState {
31
bool sg; /* if true, the device is a /dev/sg* */
32
bool probed; /* if true, format was probed rather than specified */
33
34
- int copy_on_read; /* if nonzero, copy read backing sectors into image.
35
- note this is a reference count */
36
-
37
- CoQueue flush_queue; /* Serializing flush queue */
38
- bool active_flush_req; /* Flush request in flight? */
39
- unsigned int write_gen; /* Current data generation */
40
- unsigned int flushed_gen; /* Flushed write generation */
41
-
42
BlockDriver *drv; /* NULL means no media */
43
void *opaque;
44
45
@@ -XXX,XX +XXX,XX @@ struct BlockDriverState {
46
BdrvChild *backing;
47
BdrvChild *file;
48
49
- /* Callback before write request is processed */
50
- NotifierWithReturnList before_write_notifiers;
51
-
52
- /* number of in-flight requests; overall and serialising */
53
- unsigned int in_flight;
54
- unsigned int serialising_in_flight;
55
-
56
- bool wakeup;
57
-
58
- /* Offset after the highest byte written to */
59
- uint64_t wr_highest_offset;
60
-
25
/* I/O Limits */
61
/* I/O Limits */
26
BlockLimits bl;
62
BlockLimits bl;
27
63
28
+ /*
64
@@ -XXX,XX +XXX,XX @@ struct BlockDriverState {
29
+ * Flags honored during pread
65
QTAILQ_ENTRY(BlockDriverState) bs_list;
66
/* element of the list of monitor-owned BDS */
67
QTAILQ_ENTRY(BlockDriverState) monitor_list;
68
- QLIST_HEAD(, BdrvDirtyBitmap) dirty_bitmaps;
69
int refcnt;
70
71
- QLIST_HEAD(, BdrvTrackedRequest) tracked_requests;
72
-
73
/* operation blockers */
74
QLIST_HEAD(, BdrvOpBlocker) op_blockers[BLOCK_OP_TYPE_MAX];
75
76
@@ -XXX,XX +XXX,XX @@ struct BlockDriverState {
77
/* The error object in use for blocking operations on backing_hd */
78
Error *backing_blocker;
79
80
+ /* Protected by AioContext lock */
81
+
82
+ /* If true, copy read backing sectors into image. Can be >1 if more
83
+ * than one client has requested copy-on-read.
30
+ */
84
+ */
31
+ unsigned int supported_read_flags;
85
+ int copy_on_read;
32
/* Flags honored during pwrite (so far: BDRV_REQ_FUA,
86
+
33
* BDRV_REQ_WRITE_UNCHANGED).
87
+ /* If we are reading a disk image, give its size in sectors.
34
* If a driver does not support BDRV_REQ_WRITE_UNCHANGED, those
88
+ * Generally read-only; it is written to by load_vmstate and save_vmstate,
35
diff --git a/block/io.c b/block/io.c
89
+ * but the block layer is quiescent during those.
90
+ */
91
+ int64_t total_sectors;
92
+
93
+ /* Callback before write request is processed */
94
+ NotifierWithReturnList before_write_notifiers;
95
+
96
+ /* number of in-flight requests; overall and serialising */
97
+ unsigned int in_flight;
98
+ unsigned int serialising_in_flight;
99
+
100
+ bool wakeup;
101
+
102
+ /* Offset after the highest byte written to */
103
+ uint64_t wr_highest_offset;
104
+
105
/* threshold limit for writes, in bytes. "High water mark". */
106
uint64_t write_threshold_offset;
107
NotifierWithReturn write_threshold_notifier;
108
@@ -XXX,XX +XXX,XX @@ struct BlockDriverState {
109
/* counter for nested bdrv_io_plug */
110
unsigned io_plugged;
111
112
+ QLIST_HEAD(, BdrvTrackedRequest) tracked_requests;
113
+ CoQueue flush_queue; /* Serializing flush queue */
114
+ bool active_flush_req; /* Flush request in flight? */
115
+ unsigned int write_gen; /* Current data generation */
116
+ unsigned int flushed_gen; /* Flushed write generation */
117
+
118
+ QLIST_HEAD(, BdrvDirtyBitmap) dirty_bitmaps;
119
+
120
+ /* do we need to tell the quest if we have a volatile write cache? */
121
+ int enable_write_cache;
122
+
123
int quiesce_counter;
124
};
125
126
diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
36
index XXXXXXX..XXXXXXX 100644
127
index XXXXXXX..XXXXXXX 100644
37
--- a/block/io.c
128
--- a/include/sysemu/block-backend.h
38
+++ b/block/io.c
129
+++ b/include/sysemu/block-backend.h
39
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild *child,
130
@@ -XXX,XX +XXX,XX @@ typedef struct BlockDevOps {
40
if (flags & BDRV_REQ_COPY_ON_READ) {
131
* fields that must be public. This is in particular for QLIST_ENTRY() and
41
int64_t pnum;
132
* friends so that BlockBackends can be kept in lists outside block-backend.c */
42
133
typedef struct BlockBackendPublic {
43
+ /* The flag BDRV_REQ_COPY_ON_READ has reached its addressee */
134
- /* I/O throttling.
44
+ flags &= ~BDRV_REQ_COPY_ON_READ;
135
- * throttle_state tells us if this BlockBackend has I/O limits configured.
136
- * io_limits_disabled tells us if they are currently being enforced */
137
+ /* I/O throttling has its own locking, but also some fields are
138
+ * protected by the AioContext lock.
139
+ */
45
+
140
+
46
ret = bdrv_is_allocated(bs, offset, bytes, &pnum);
141
+ /* Protected by AioContext lock. */
47
if (ret < 0) {
142
CoQueue throttled_reqs[2];
48
goto out;
49
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild *child,
50
goto out;
51
}
52
53
+ assert(!(flags & ~bs->supported_read_flags));
54
+
143
+
55
max_bytes = ROUND_UP(MAX(0, total_bytes - offset), align);
144
+ /* Nonzero if the I/O limits are currently being ignored; generally
56
if (bytes <= max_bytes && bytes <= max_transfer) {
145
+ * it is zero. */
57
- ret = bdrv_driver_preadv(bs, offset, bytes, qiov, qiov_offset, 0);
146
unsigned int io_limits_disabled;
58
+ ret = bdrv_driver_preadv(bs, offset, bytes, qiov, qiov_offset, flags);
147
59
goto out;
148
/* The following fields are protected by the ThrottleGroup lock.
60
}
149
- * See the ThrottleGroup documentation for details. */
61
150
+ * See the ThrottleGroup documentation for details.
62
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild *child,
151
+ * throttle_state tells us if I/O limits are configured. */
63
152
ThrottleState *throttle_state;
64
ret = bdrv_driver_preadv(bs, offset + bytes - bytes_remaining,
153
ThrottleTimers throttle_timers;
65
num, qiov,
154
unsigned pending_reqs[2];
66
- qiov_offset + bytes - bytes_remaining, 0);
67
+ qiov_offset + bytes - bytes_remaining,
68
+ flags);
69
max_bytes -= num;
70
} else {
71
num = bytes_remaining;
72
--
155
--
73
2.29.2
156
2.9.3
74
157
75
158
diff view generated by jsdifflib
Deleted patch
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

If the BDRV_REQ_PREFETCH flag is set, skip the idle read/write
operations in the COR driver. This can be taken into account when
optimizing the COR algorithms. At the moment, that check is made
during the block-stream job.

Add the BDRV_REQ_PREFETCH flag to the supported_read_flags of the
COR-filter.

block: Modify the comment for the flag BDRV_REQ_PREFETCH as we are
going to use it on its own and pass it to the COR-filter driver for
further processing.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-9-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
include/block/block.h | 8 +++++---
block/copy-on-read.c | 14 ++++++++++----
2 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/include/block/block.h b/include/block/block.h
index XXXXXXX..XXXXXXX 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -XXX,XX +XXX,XX @@ typedef enum {
 BDRV_REQ_NO_FALLBACK = 0x100,

 /*
- * BDRV_REQ_PREFETCH may be used only together with BDRV_REQ_COPY_ON_READ
- * on read request and means that caller doesn't really need data to be
- * written to qiov parameter which may be NULL.
+ * BDRV_REQ_PREFETCH makes sense only in the context of copy-on-read
+ * (i.e., together with the BDRV_REQ_COPY_ON_READ flag or when a COR
+ * filter is involved), in which case it signals that the COR operation
+ * need not read the data into memory (qiov) but only ensure they are
+ * copied to the top layer (i.e., that COR operation is done).
 */
 BDRV_REQ_PREFETCH = 0x200,

diff --git a/block/copy-on-read.c b/block/copy-on-read.c
index XXXXXXX..XXXXXXX 100644
--- a/block/copy-on-read.c
+++ b/block/copy-on-read.c
@@ -XXX,XX +XXX,XX @@ static int cor_open(BlockDriverState *bs, QDict *options, int flags,
 return -EINVAL;
 }

+ bs->supported_read_flags = BDRV_REQ_PREFETCH;
+
 bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
 (BDRV_REQ_FUA & bs->file->bs->supported_write_flags);

@@ -XXX,XX +XXX,XX @@ static int coroutine_fn cor_co_preadv_part(BlockDriverState *bs,
 }
 }

- ret = bdrv_co_preadv_part(bs->file, offset, n, qiov, qiov_offset,
- local_flags);
- if (ret < 0) {
- return ret;
+ /* Skip if neither read nor write are needed */
+ if ((local_flags & (BDRV_REQ_PREFETCH | BDRV_REQ_COPY_ON_READ)) !=
+ BDRV_REQ_PREFETCH) {
+ ret = bdrv_co_preadv_part(bs->file, offset, n, qiov, qiov_offset,
+ local_flags);
+ if (ret < 0) {
+ return ret;
+ }
 }

 offset += n;
--
2.29.2
78
79
diff view generated by jsdifflib
Deleted patch
From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

In stream_prepare, the stream job calls bdrv_change_backing_file() to
change the backing file recorded in the metadata of bs.

It may use either the backing-file parameter given by the user or just
take the filename of the base at job start.

The backing file format is determined from the base at job finish.

There are some problems with this design; this patch solves only two of
them:

1. Consider the scenario where backing-file is unset. The current
concept of stream supports changing the base during the job (we don't
freeze the link to the base). So we should not save the base filename
at job start,

- let's determine the name of the base at job finish.

2. Using the direct base to determine the filename and format is not
very good: the base node may be a filter, so its filename may be JSON,
and its format_name is not suitable for storing into qcow2 metadata as
the backing file format.

- let's use unfiltered_base

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[vsementsov: change commit subject, change logic in stream_prepare]
Message-Id: <20201216061703.70908-10-vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
33
block/stream.c | 9 +++++----
34
blockdev.c | 8 +-------
35
2 files changed, 6 insertions(+), 11 deletions(-)
36
37
diff --git a/block/stream.c b/block/stream.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/block/stream.c
40
+++ b/block/stream.c
41
@@ -XXX,XX +XXX,XX @@ static int stream_prepare(Job *job)
42
BlockDriverState *bs = blk_bs(bjob->blk);
43
BlockDriverState *unfiltered_bs = bdrv_skip_filters(bs);
44
BlockDriverState *base = bdrv_filter_or_cow_bs(s->above_base);
45
+ BlockDriverState *unfiltered_base = bdrv_skip_filters(base);
46
Error *local_err = NULL;
47
int ret = 0;
48
49
@@ -XXX,XX +XXX,XX @@ static int stream_prepare(Job *job)
50
51
if (bdrv_cow_child(unfiltered_bs)) {
52
const char *base_id = NULL, *base_fmt = NULL;
53
- if (base) {
54
- base_id = s->backing_file_str;
55
- if (base->drv) {
56
- base_fmt = base->drv->format_name;
57
+ if (unfiltered_base) {
58
+ base_id = s->backing_file_str ?: unfiltered_base->filename;
59
+ if (unfiltered_base->drv) {
60
+ base_fmt = unfiltered_base->drv->format_name;
61
}
62
}
63
bdrv_set_backing_hd(unfiltered_bs, base, &local_err);
64
diff --git a/blockdev.c b/blockdev.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/blockdev.c
67
+++ b/blockdev.c
68
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
69
BlockDriverState *base_bs = NULL;
70
AioContext *aio_context;
71
Error *local_err = NULL;
72
- const char *base_name = NULL;
73
int job_flags = JOB_DEFAULT;
74
75
if (!has_on_error) {
76
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
77
goto out;
78
}
79
assert(bdrv_get_aio_context(base_bs) == aio_context);
80
- base_name = base;
81
}
82
83
if (has_base_node) {
84
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
85
}
86
assert(bdrv_get_aio_context(base_bs) == aio_context);
87
bdrv_refresh_filename(base_bs);
88
- base_name = base_bs->filename;
89
}
90
91
/* Check for op blockers in the whole chain between bs and base */
92
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
93
goto out;
94
}
95
96
- /* backing_file string overrides base bs filename */
97
- base_name = has_backing_file ? backing_file : base_name;
98
-
99
if (has_auto_finalize && !auto_finalize) {
100
job_flags |= JOB_MANUAL_FINALIZE;
101
}
102
@@ -XXX,XX +XXX,XX @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
103
job_flags |= JOB_MANUAL_DISMISS;
104
}
105
106
- stream_start(has_job_id ? job_id : NULL, bs, base_bs, base_name,
107
+ stream_start(has_job_id ? job_id : NULL, bs, base_bs, backing_file,
108
job_flags, has_speed ? speed : 0, on_error,
109
filter_node_name, &local_err);
110
if (local_err) {
111
--
112
2.29.2
113
114
diff view generated by jsdifflib
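As a usage note for the behaviour described above: the recorded backing file string can still be overridden by the caller, otherwise it is derived from the (unfiltered) base when the job finishes. The sketch below is hypothetical; the node names and the relative path are assumptions, it only shows which argument controls the string written into the image metadata.

    # Hypothetical sketch: 'backing-file' is the exact string that ends up in
    # the top image's metadata on completion; without it, the name is taken
    # from the unfiltered base at job finish.
    result = self.vm.qmp('block-stream', device='top', job_id='stream0',
                         base_node='base',
                         backing_file='relative/path/to/base.qcow2')
    self.assert_qmp(result, 'return', {})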
Deleted patch
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

test_stream_parallel runs parallel stream jobs that intersect, so that
the top of one is the base of another. That is fine now, but it would
become a problem once we insert the filter, as one job would want to
use another job's filter as its above_base node.

The correct thing to do is to move to the new interface: a "bottom"
argument instead of base. This guarantees that the jobs' actions do not
intersect.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20201216061703.70908-12-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
tests/qemu-iotests/030 | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -XXX,XX +XXX,XX @@ class TestParallelOps(iotests.QMPTestCase):
 node_name = 'node%d' % i
 job_id = 'stream-%s' % node_name
 pending_jobs.append(job_id)
- result = self.vm.qmp('block-stream', device=node_name, job_id=job_id, base=self.imgs[i-2], speed=1024)
+ result = self.vm.qmp('block-stream', device=node_name,
+ job_id=job_id, bottom=f'node{i-1}',
+ speed=1024)
 self.assert_qmp(result, 'return', {})

 for job in pending_jobs:
--
2.29.2
36
37
diff view generated by jsdifflib
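The sketch below spells out the non-intersecting pattern the test above switches to; the chain layout and node names (base <- node1 <- node2 <- node3 <- node4) are assumptions, it only illustrates why 'bottom' keeps parallel jobs apart.

    # Hypothetical sketch of two parallel, non-intersecting stream jobs:
    result = self.vm.qmp('block-stream', device='node2', job_id='stream-node2',
                         bottom='node1')
    self.assert_qmp(result, 'return', {})
    result = self.vm.qmp('block-stream', device='node4', job_id='stream-node4',
                         bottom='node3')
    self.assert_qmp(result, 'return', {})
    # Each job only touches the nodes between its device and its bottom node,
    # so neither needs the other job's filter as its above_base node.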
Deleted patch
1
There are a couple of environment variables that we fetch with
2
os.environ.get() without supplying a default. Clearly they are required
3
and expected to be set by the ./check script (as evidenced by
4
execute_setup_common(), which checks for test_dir and
5
qemu_default_machine to be set, and aborts if they are not).
6
1
7
Using .get() this way has the disadvantage of returning an Optional[str]
8
type, which mypy will complain about when tests just assume these values
9
to be str.
10
11
Use [] instead, which raises a KeyError for environment variables that
12
are not set. When this exception is raised, catch it and move the abort
13
code from execute_setup_common() there.
14
15
Drop the 'assert iotests.sock_dir is not None' from iotest 300, because
16
that sort of thing is precisely what this patch wants to prevent.
17
18
Signed-off-by: Max Reitz <mreitz@redhat.com>
19
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
20
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
21
Message-Id: <20210118105720.14824-2-mreitz@redhat.com>
22
---
23
tests/qemu-iotests/300 | 1 -
24
tests/qemu-iotests/iotests.py | 26 +++++++++++++-------------
25
2 files changed, 13 insertions(+), 14 deletions(-)
26
27
diff --git a/tests/qemu-iotests/300 b/tests/qemu-iotests/300
28
index XXXXXXX..XXXXXXX 100755
29
--- a/tests/qemu-iotests/300
30
+++ b/tests/qemu-iotests/300
31
@@ -XXX,XX +XXX,XX @@ import qemu
32
33
BlockBitmapMapping = List[Dict[str, Union[str, List[Dict[str, str]]]]]
34
35
-assert iotests.sock_dir is not None
36
mig_sock = os.path.join(iotests.sock_dir, 'mig_sock')
37
38
39
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
40
index XXXXXXX..XXXXXXX 100644
41
--- a/tests/qemu-iotests/iotests.py
42
+++ b/tests/qemu-iotests/iotests.py
43
@@ -XXX,XX +XXX,XX @@ qemu_opts = os.environ.get('QEMU_OPTIONS', '').strip().split(' ')
44
45
imgfmt = os.environ.get('IMGFMT', 'raw')
46
imgproto = os.environ.get('IMGPROTO', 'file')
47
-test_dir = os.environ.get('TEST_DIR')
48
-sock_dir = os.environ.get('SOCK_DIR')
49
output_dir = os.environ.get('OUTPUT_DIR', '.')
50
-cachemode = os.environ.get('CACHEMODE')
51
-aiomode = os.environ.get('AIOMODE')
52
-qemu_default_machine = os.environ.get('QEMU_DEFAULT_MACHINE')
53
+
54
+try:
55
+ test_dir = os.environ['TEST_DIR']
56
+ sock_dir = os.environ['SOCK_DIR']
57
+ cachemode = os.environ['CACHEMODE']
58
+ aiomode = os.environ['AIOMODE']
59
+ qemu_default_machine = os.environ['QEMU_DEFAULT_MACHINE']
60
+except KeyError:
61
+ # We are using these variables as proxies to indicate that we're
62
+ # not being run via "check". There may be other things set up by
63
+ # "check" that individual test cases rely on.
64
+ sys.stderr.write('Please run this test via the "check" script\n')
65
+ sys.exit(os.EX_USAGE)
66
67
socket_scm_helper = os.environ.get('SOCKET_SCM_HELPER', 'socket_scm_helper')
68
69
@@ -XXX,XX +XXX,XX @@ def execute_setup_common(supported_fmts: Sequence[str] = (),
70
"""
71
# Note: Python 3.6 and pylint do not like 'Collection' so use 'Sequence'.
72
73
- # We are using TEST_DIR and QEMU_DEFAULT_MACHINE as proxies to
74
- # indicate that we're not being run via "check". There may be
75
- # other things set up by "check" that individual test cases rely
76
- # on.
77
- if test_dir is None or qemu_default_machine is None:
78
- sys.stderr.write('Please run this test via the "check" script\n')
79
- sys.exit(os.EX_USAGE)
80
-
81
debug = '-d' in sys.argv
82
if debug:
83
sys.argv.remove('-d')
84
--
85
2.29.2
86
87
diff view generated by jsdifflib
Deleted patch
1
Instead of checking iotests.py only, check all Python files in the
2
qemu-iotests/ directory. Of course, most of them do not pass, so there
3
is an extensive skip list for now. (The only files that do pass are
4
209, 254, 283, and iotests.py.)
5
1
6
(Alternatively, we could have the opposite, i.e. an explicit list of
7
files that we do want to check, but I think it is better to check files
8
by default.)
9
10
Unless started in debug mode (./check -d), the output has no information
11
on which files are tested, so we will not have a problem e.g. with
12
backports, where some files may be missing when compared to upstream.
13
14
Besides the technical rewrite, some more things are changed:
15
16
- For the pylint invocation, PYTHONPATH is adjusted. This mirrors
17
setting MYPYPATH for mypy.
18
19
- Also, MYPYPATH is now derived from PYTHONPATH, so that we include
20
paths set by the environment. Maybe at some point we want to let the
21
check script add '../../python/' to PYTHONPATH so that iotests.py does
22
not need to do that.
23
24
- Passing --notes=FIXME,XXX to pylint suppresses warnings for TODO
25
comments. TODO is fine, we do not need 297 to complain about such
26
comments.
27
28
- The "Success" line from mypy's output is suppressed, because (A) it
29
does not add useful information, and (B) it would leak information
30
about the files having been tested to the reference output, which we
31
decidedly do not want.
32
33
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
34
Signed-off-by: Max Reitz <mreitz@redhat.com>
35
Message-Id: <20210118105720.14824-3-mreitz@redhat.com>
36
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
37
---
38
tests/qemu-iotests/297 | 112 +++++++++++++++++++++++++++++--------
39
tests/qemu-iotests/297.out | 5 +-
40
2 files changed, 92 insertions(+), 25 deletions(-)
41
42
diff --git a/tests/qemu-iotests/297 b/tests/qemu-iotests/297
43
index XXXXXXX..XXXXXXX 100755
44
--- a/tests/qemu-iotests/297
45
+++ b/tests/qemu-iotests/297
46
@@ -XXX,XX +XXX,XX @@
47
-#!/usr/bin/env bash
48
+#!/usr/bin/env python3
49
# group: meta
50
#
51
# Copyright (C) 2020 Red Hat, Inc.
52
@@ -XXX,XX +XXX,XX @@
53
# You should have received a copy of the GNU General Public License
54
# along with this program. If not, see <http://www.gnu.org/licenses/>.
55
56
-seq=$(basename $0)
57
-echo "QA output created by $seq"
58
+import os
59
+import re
60
+import shutil
61
+import subprocess
62
+import sys
63
64
-status=1    # failure is the default!
65
+import iotests
66
67
-# get standard environment
68
-. ./common.rc
69
70
-if ! type -p "pylint-3" > /dev/null; then
71
- _notrun "pylint-3 not found"
72
-fi
73
-if ! type -p "mypy" > /dev/null; then
74
- _notrun "mypy not found"
75
-fi
76
+# TODO: Empty this list!
77
+SKIP_FILES = (
78
+ '030', '040', '041', '044', '045', '055', '056', '057', '065', '093',
79
+ '096', '118', '124', '129', '132', '136', '139', '147', '148', '149',
80
+ '151', '152', '155', '163', '165', '169', '194', '196', '199', '202',
81
+ '203', '205', '206', '207', '208', '210', '211', '212', '213', '216',
82
+ '218', '219', '222', '224', '228', '234', '235', '236', '237', '238',
83
+ '240', '242', '245', '246', '248', '255', '256', '257', '258', '260',
84
+ '262', '264', '266', '274', '277', '280', '281', '295', '296', '298',
85
+ '299', '300', '302', '303', '304', '307',
86
+ 'nbd-fault-injector.py', 'qcow2.py', 'qcow2_format.py', 'qed.py'
87
+)
88
89
-pylint-3 --score=n iotests.py
90
91
-MYPYPATH=../../python/ mypy --warn-unused-configs --disallow-subclassing-any \
92
- --disallow-any-generics --disallow-incomplete-defs \
93
- --disallow-untyped-decorators --no-implicit-optional \
94
- --warn-redundant-casts --warn-unused-ignores \
95
- --no-implicit-reexport iotests.py
96
+def is_python_file(filename):
97
+ if not os.path.isfile(filename):
98
+ return False
99
100
-# success, all done
101
-echo "*** done"
102
-rm -f $seq.full
103
-status=0
104
+ if filename.endswith('.py'):
105
+ return True
106
+
107
+ with open(filename) as f:
108
+ try:
109
+ first_line = f.readline()
110
+ return re.match('^#!.*python', first_line) is not None
111
+ except UnicodeDecodeError: # Ignore binary files
112
+ return False
113
+
114
+
115
+def run_linters():
116
+ files = [filename for filename in (set(os.listdir('.')) - set(SKIP_FILES))
117
+ if is_python_file(filename)]
118
+
119
+ iotests.logger.debug('Files to be checked:')
120
+ iotests.logger.debug(', '.join(sorted(files)))
121
+
122
+ print('=== pylint ===')
123
+ sys.stdout.flush()
124
+
125
+ # Todo notes are fine, but fixme's or xxx's should probably just be
126
+ # fixed (in tests, at least)
127
+ env = os.environ.copy()
128
+ qemu_module_path = os.path.join(os.path.dirname(__file__),
129
+ '..', '..', 'python')
130
+ try:
131
+ env['PYTHONPATH'] += os.pathsep + qemu_module_path
132
+ except KeyError:
133
+ env['PYTHONPATH'] = qemu_module_path
134
+ subprocess.run(('pylint-3', '--score=n', '--notes=FIXME,XXX', *files),
135
+ env=env, check=False)
136
+
137
+ print('=== mypy ===')
138
+ sys.stdout.flush()
139
+
140
+ # We have to call mypy separately for each file. Otherwise, it
141
+ # will interpret all given files as belonging together (i.e., they
142
+ # may not both define the same classes, etc.; most notably, they
143
+ # must not both define the __main__ module).
144
+ env['MYPYPATH'] = env['PYTHONPATH']
145
+ for filename in files:
146
+ p = subprocess.run(('mypy',
147
+ '--warn-unused-configs',
148
+ '--disallow-subclassing-any',
149
+ '--disallow-any-generics',
150
+ '--disallow-incomplete-defs',
151
+ '--disallow-untyped-decorators',
152
+ '--no-implicit-optional',
153
+ '--warn-redundant-casts',
154
+ '--warn-unused-ignores',
155
+ '--no-implicit-reexport',
156
+ filename),
157
+ env=env,
158
+ check=False,
159
+ stdout=subprocess.PIPE,
160
+ stderr=subprocess.STDOUT,
161
+ universal_newlines=True)
162
+
163
+ if p.returncode != 0:
164
+ print(p.stdout)
165
+
166
+
167
+for linter in ('pylint-3', 'mypy'):
168
+ if shutil.which(linter) is None:
169
+ iotests.notrun(f'{linter} not found')
170
+
171
+iotests.script_main(run_linters)
172
diff --git a/tests/qemu-iotests/297.out b/tests/qemu-iotests/297.out
173
index XXXXXXX..XXXXXXX 100644
174
--- a/tests/qemu-iotests/297.out
175
+++ b/tests/qemu-iotests/297.out
176
@@ -XXX,XX +XXX,XX @@
177
-QA output created by 297
178
-Success: no issues found in 1 source file
179
-*** done
180
+=== pylint ===
181
+=== mypy ===
182
--
183
2.29.2
184
185
diff view generated by jsdifflib
Deleted patch
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-5-mreitz@redhat.com>
---
tests/qemu-iotests/129 | 2 ++
1 file changed, 2 insertions(+)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
 result = self.vm.qmp("block_set_io_throttle", conv_keys=False,
 **params)
 self.vm.shutdown()
+ for img in (self.test_img, self.target_img, self.base_img):
+ iotests.try_remove(img)

 def do_test_stop(self, cmd, **args):
 """Test 'stop' while block job is running on a throttled drive.
--
2.29.2
25
26
diff view generated by jsdifflib
Deleted patch
@busy is false when the job is paused, which happens all the time
because that is how jobs yield (e.g. for mirror at least since commit
565ac01f8d3).

Back when 129 was added (2015), perhaps there was no better way of
checking whether the job was still actually running. Now we have the
@status field (as of 58b295ba52c, i.e. 2018), which can give us exactly
that information.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-6-mreitz@redhat.com>
---
tests/qemu-iotests/129 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
 result = self.vm.qmp("stop")
 self.assert_qmp(result, 'return', {})
 result = self.vm.qmp("query-block-jobs")
- self.assert_qmp(result, 'return[0]/busy', True)
+ self.assert_qmp(result, 'return[0]/status', 'running')
 self.assert_qmp(result, 'return[0]/ready', False)

 def test_drive_mirror(self):
--
2.29.2
33
2.29.2
34
35
Throttling on the BB has not affected block jobs in a while, so it is
possible that one of the jobs in 129 finishes before the VM is stopped.
We can fix that by running the job from a throttle node.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-7-mreitz@redhat.com>
---
 tests/qemu-iotests/129 | 37 +++++++++++++------------------------
 1 file changed, 13 insertions(+), 24 deletions(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
         iotests.qemu_img('create', '-f', iotests.imgfmt, self.test_img,
                          "-b", self.base_img, '-F', iotests.imgfmt)
         iotests.qemu_io('-f', iotests.imgfmt, '-c', 'write -P0x5d 1M 128M', self.test_img)
-        self.vm = iotests.VM().add_drive(self.test_img)
+        self.vm = iotests.VM()
+        self.vm.add_object('throttle-group,id=tg0,x-bps-total=1024')
+
+        source_drive = 'driver=throttle,' \
+                       'throttle-group=tg0,' \
+                       f'file.driver={iotests.imgfmt},' \
+                       f'file.file.filename={self.test_img}'
+
+        self.vm.add_drive(None, source_drive)
         self.vm.launch()

     def tearDown(self):
-        params = {"device": "drive0",
-                  "bps": 0,
-                  "bps_rd": 0,
-                  "bps_wr": 0,
-                  "iops": 0,
-                  "iops_rd": 0,
-                  "iops_wr": 0,
-                 }
-        result = self.vm.qmp("block_set_io_throttle", conv_keys=False,
-                             **params)
         self.vm.shutdown()
         for img in (self.test_img, self.target_img, self.base_img):
             iotests.try_remove(img)
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
     def do_test_stop(self, cmd, **args):
         """Test 'stop' while block job is running on a throttled drive.
         The 'stop' command shouldn't drain the job"""
-        params = {"device": "drive0",
-                  "bps": 1024,
-                  "bps_rd": 0,
-                  "bps_wr": 0,
-                  "iops": 0,
-                  "iops_rd": 0,
-                  "iops_wr": 0,
-                 }
-        result = self.vm.qmp("block_set_io_throttle", conv_keys=False,
-                             **params)
-        self.assert_qmp(result, 'return', {})
         result = self.vm.qmp(cmd, **args)
         self.assert_qmp(result, 'return', {})
+
         result = self.vm.qmp("stop")
         self.assert_qmp(result, 'return', {})
         result = self.vm.qmp("query-block-jobs")
+
         self.assert_qmp(result, 'return[0]/status', 'running')
         self.assert_qmp(result, 'return[0]/ready', False)

     def test_drive_mirror(self):
         self.do_test_stop("drive-mirror", device="drive0",
-                          target=self.target_img,
+                          target=self.target_img, format=iotests.imgfmt,
                           sync="full")

     def test_drive_backup(self):
         self.do_test_stop("drive-backup", device="drive0",
-                          target=self.target_img,
+                          target=self.target_img, format=iotests.imgfmt,
                           sync="full")

     def test_block_commit(self):
--
2.29.2

Before this patch, test_block_commit() performs an active commit, which
under the hood is a mirror job.  If we want to test various different
block jobs, we should perhaps run an actual commit job instead.

Doing so requires adding an overlay above the source node before the
commit is done (and then specifying the source node as the top node for
the commit job).

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-8-mreitz@redhat.com>
---
 tests/qemu-iotests/129 | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
     test_img = os.path.join(iotests.test_dir, 'test.img')
     target_img = os.path.join(iotests.test_dir, 'target.img')
     base_img = os.path.join(iotests.test_dir, 'base.img')
+    overlay_img = os.path.join(iotests.test_dir, 'overlay.img')

     def setUp(self):
         iotests.qemu_img('create', '-f', iotests.imgfmt, self.base_img, "1G")
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
         self.vm.add_object('throttle-group,id=tg0,x-bps-total=1024')

         source_drive = 'driver=throttle,' \
+                       'node-name=source,' \
                        'throttle-group=tg0,' \
                        f'file.driver={iotests.imgfmt},' \
                        f'file.file.filename={self.test_img}'
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):

     def tearDown(self):
         self.vm.shutdown()
-        for img in (self.test_img, self.target_img, self.base_img):
+        for img in (self.test_img, self.target_img, self.base_img,
+                    self.overlay_img):
             iotests.try_remove(img)

     def do_test_stop(self, cmd, **args):
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
                           sync="full")

     def test_block_commit(self):
-        self.do_test_stop("block-commit", device="drive0")
+        # Add overlay above the source node so that we actually use a
+        # commit job instead of a mirror job
+
+        iotests.qemu_img('create', '-f', iotests.imgfmt, self.overlay_img,
+                         '1G')
+
+        result = self.vm.qmp('blockdev-add', **{
+            'node-name': 'overlay',
+            'driver': iotests.imgfmt,
+            'file': {
+                'driver': 'file',
+                'filename': self.overlay_img
+            }
+        })
+        self.assert_qmp(result, 'return', {})
+
+        result = self.vm.qmp('blockdev-snapshot',
+                             node='source', overlay='overlay')
+        self.assert_qmp(result, 'return', {})
+
+        self.do_test_stop('block-commit', device='drive0', top_node='source')

 if __name__ == '__main__':
     iotests.main(supported_fmts=["qcow2"],
--
2.29.2

Issuing 'stop' on the VM drains all nodes.  If the mirror job has many
large requests in flight, this may lead to significant I/O that looks a
bit like 'stop' would make the job try to complete (which is what 129
should verify not to happen).

We can limit the I/O in flight by limiting the buffer size, so mirror
will make very little progress during the 'stop' drain.

(We do not need to do anything about commit, which has a buffer size of
512 kB by default; or backup, which goes cluster by cluster.  Once we
have asynchronous requests for backup, that will change, but then we can
fine-tune the backup job to only perform a single request on a very
small chunk, too.)

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-9-mreitz@redhat.com>
---
 tests/qemu-iotests/129 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
     def test_drive_mirror(self):
         self.do_test_stop("drive-mirror", device="drive0",
                           target=self.target_img, format=iotests.imgfmt,
-                          sync="full")
+                          sync="full", buf_size=65536)

     def test_drive_backup(self):
         self.do_test_stop("drive-backup", device="drive0",
--
2.29.2

And consequentially drop it from 297's skip list.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210118105720.14824-10-mreitz@redhat.com>
---
 tests/qemu-iotests/129 | 4 ++--
 tests/qemu-iotests/297 | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/129
+++ b/tests/qemu-iotests/129
@@ -XXX,XX +XXX,XX @@

 import os
 import iotests
-import time

 class TestStopWithBlockJob(iotests.QMPTestCase):
     test_img = os.path.join(iotests.test_dir, 'test.img')
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
         iotests.qemu_img('create', '-f', iotests.imgfmt, self.base_img, "1G")
         iotests.qemu_img('create', '-f', iotests.imgfmt, self.test_img,
                          "-b", self.base_img, '-F', iotests.imgfmt)
-        iotests.qemu_io('-f', iotests.imgfmt, '-c', 'write -P0x5d 1M 128M', self.test_img)
+        iotests.qemu_io('-f', iotests.imgfmt, '-c', 'write -P0x5d 1M 128M',
+                        self.test_img)
         self.vm = iotests.VM()
         self.vm.add_object('throttle-group,id=tg0,x-bps-total=1024')

diff --git a/tests/qemu-iotests/297 b/tests/qemu-iotests/297
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/297
+++ b/tests/qemu-iotests/297
@@ -XXX,XX +XXX,XX @@ import iotests
 # TODO: Empty this list!
 SKIP_FILES = (
     '030', '040', '041', '044', '045', '055', '056', '057', '065', '093',
-    '096', '118', '124', '129', '132', '136', '139', '147', '148', '149',
+    '096', '118', '124', '132', '136', '139', '147', '148', '149',
     '151', '152', '155', '163', '165', '169', '194', '196', '199', '202',
     '203', '205', '206', '207', '208', '210', '211', '212', '213', '216',
     '218', '219', '222', '224', '228', '234', '235', '236', '237', '238',
--
2.29.2

And consequentially drop it from 297's skip list.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210118105720.14824-11-mreitz@redhat.com>
---
 tests/qemu-iotests/297 |  2 +-
 tests/qemu-iotests/300 | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/tests/qemu-iotests/297 b/tests/qemu-iotests/297
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/297
+++ b/tests/qemu-iotests/297
@@ -XXX,XX +XXX,XX @@ SKIP_FILES = (
     '218', '219', '222', '224', '228', '234', '235', '236', '237', '238',
     '240', '242', '245', '246', '248', '255', '256', '257', '258', '260',
     '262', '264', '266', '274', '277', '280', '281', '295', '296', '298',
-    '299', '300', '302', '303', '304', '307',
+    '299', '302', '303', '304', '307',
     'nbd-fault-injector.py', 'qcow2.py', 'qcow2_format.py', 'qed.py'
 )

diff --git a/tests/qemu-iotests/300 b/tests/qemu-iotests/300
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/300
+++ b/tests/qemu-iotests/300
@@ -XXX,XX +XXX,XX @@ import os
 import random
 import re
 from typing import Dict, List, Optional, Union
+
 import iotests
+
+# Import qemu after iotests.py has amended sys.path
+# pylint: disable=wrong-import-order
 import qemu

 BlockBitmapMapping = List[Dict[str, Union[str, List[Dict[str, str]]]]]
@@ -XXX,XX +XXX,XX @@ class TestDirtyBitmapMigration(iotests.QMPTestCase):
         If @msg is None, check that there has not been any error.
         """
         self.vm_b.shutdown()
+
+        log = self.vm_b.get_log()
+        assert log is not None  # Loaded after shutdown
+
         if msg is None:
-            self.assertNotIn('qemu-system-', self.vm_b.get_log())
+            self.assertNotIn('qemu-system-', log)
         else:
-            self.assertIn(msg, self.vm_b.get_log())
+            self.assertIn(msg, log)

     @staticmethod
     def mapping(node_name: str, node_alias: str,
@@ -XXX,XX +XXX,XX @@ class TestBlockBitmapMappingErrors(TestDirtyBitmapMigration):

         # Check for the error in the source's log
         self.vm_a.shutdown()
+
+        log = self.vm_a.get_log()
+        assert log is not None  # Loaded after shutdown
+
         self.assertIn(f"Cannot migrate bitmap '{name}' on node "
                       f"'{self.src_node_name}': Name is longer than 255 bytes",
-                      self.vm_a.get_log())
+                      log)

         # Expect abnormal shutdown of the destination VM because of
         # the failed migration
--
2.29.2

Disposition (action) for any given signal is global for the process.
When two threads run coroutine-sigaltstack's qemu_coroutine_new()
concurrently, they may interfere with each other: One of them may revert
the SIGUSR2 handler to SIG_DFL, between the other thread (a) setting up
coroutine_trampoline() as the handler and (b) raising SIGUSR2.  That
SIGUSR2 will then terminate the QEMU process abnormally.

We have to ensure that only one thread at a time can modify the
process-global SIGUSR2 handler.  To do so, wrap the whole section where
that is done in a mutex.

Alternatively, we could for example have the SIGUSR2 handler always be
coroutine_trampoline(), so there would be no need to invoke sigaction()
in qemu_coroutine_new().  Laszlo has posted a patch to do so here:

https://lists.nongnu.org/archive/html/qemu-devel/2021-01/msg05962.html

However, given that coroutine-sigaltstack is more of a fallback
implementation for platforms that do not support ucontext, that change
may be a bit too invasive to be comfortable with it.  The mutex proposed
here may negatively impact performance, but the change is much simpler.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210125120305.19520-1-mreitz@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 util/coroutine-sigaltstack.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/util/coroutine-sigaltstack.c b/util/coroutine-sigaltstack.c
index XXXXXXX..XXXXXXX 100644
--- a/util/coroutine-sigaltstack.c
+++ b/util/coroutine-sigaltstack.c
@@ -XXX,XX +XXX,XX @@ Coroutine *qemu_coroutine_new(void)
     sigset_t sigs;
     sigset_t osigs;
     sigjmp_buf old_env;
+    static pthread_mutex_t sigusr2_mutex = PTHREAD_MUTEX_INITIALIZER;

     /* The way to manipulate stack is with the sigaltstack function. We
      * prepare a stack, with it delivering a signal to ourselves and then
@@ -XXX,XX +XXX,XX @@ Coroutine *qemu_coroutine_new(void)
     sa.sa_handler = coroutine_trampoline;
     sigfillset(&sa.sa_mask);
     sa.sa_flags = SA_ONSTACK;
+
+    /*
+     * sigaction() is a process-global operation.  We must not run
+     * this code in multiple threads at once.
+     */
+    pthread_mutex_lock(&sigusr2_mutex);
     if (sigaction(SIGUSR2, &sa, &osa) != 0) {
         abort();
     }
@@ -XXX,XX +XXX,XX @@ Coroutine *qemu_coroutine_new(void)
      * Restore the old SIGUSR2 signal handler and mask
      */
     sigaction(SIGUSR2, &osa, NULL);
+    pthread_mutex_unlock(&sigusr2_mutex);
+
     pthread_sigmask(SIG_SETMASK, &osigs, NULL);

     /*
--
2.29.2

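The race fixed above is easier to see in a standalone example.  The sketch
below is an illustration only, not QEMU code: trampoline() stands in for
coroutine_trampoline(), and install_raise_restore() is a made-up name for
the install/raise/restore section of qemu_coroutine_new().  Without the
mutex, a second thread running the same section can restore the old
SIGUSR2 disposition (typically SIG_DFL) between this thread's sigaction()
and raise(), so the raised SIGUSR2 terminates the process.

    #include <pthread.h>
    #include <signal.h>

    /* Illustration of the serialization pattern above; not QEMU code. */
    static void trampoline(int sig)
    {
        (void)sig;
    }

    static pthread_mutex_t sigusr2_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void install_raise_restore(void)
    {
        struct sigaction sa = { 0 };
        struct sigaction osa;

        sa.sa_handler = trampoline;
        sa.sa_flags = SA_ONSTACK;
        sigfillset(&sa.sa_mask);

        /*
         * Serialize the whole install/raise/restore section; otherwise
         * another thread may put SIG_DFL back between our sigaction()
         * and raise(), and the signal kills the process.
         */
        pthread_mutex_lock(&sigusr2_mutex);
        if (sigaction(SIGUSR2, &sa, &osa) == 0) {
            raise(SIGUSR2);
            sigaction(SIGUSR2, &osa, NULL);
        }
        pthread_mutex_unlock(&sigusr2_mutex);
    }
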
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-17-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

From: Paolo Bonzini <pbonzini@redhat.com>

This uses the lock-free mutex described in the paper '"Blocking without
Locking", or LFTHREADS: A lock-free thread library' by Gidenstam and
Papatriantafilou.  The same technique is used in OSv, and in fact
the code is essentially a conversion to C of OSv's code.

[Added missing coroutine_fn in tests/test-aio-multithread.c.
--Stefan]

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20170213181244.16297-2-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
7
---
15
---
8
block/backup.c | 12 +++++-------
16
include/qemu/coroutine.h | 17 ++++-
9
1 file changed, 5 insertions(+), 7 deletions(-)
17
tests/test-aio-multithread.c | 86 ++++++++++++++++++++++++
10
18
util/qemu-coroutine-lock.c | 155 ++++++++++++++++++++++++++++++++++++++++---
11
diff --git a/block/backup.c b/block/backup.c
19
util/trace-events | 1 +
20
4 files changed, 246 insertions(+), 13 deletions(-)
21
22
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
12
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
13
--- a/block/backup.c
24
--- a/include/qemu/coroutine.h
14
+++ b/block/backup.c
25
+++ b/include/qemu/coroutine.h
15
@@ -XXX,XX +XXX,XX @@ static void backup_init_bcs_bitmap(BackupBlockJob *job)
26
@@ -XXX,XX +XXX,XX @@ bool qemu_co_queue_empty(CoQueue *queue);
16
static int coroutine_fn backup_run(Job *job, Error **errp)
27
/**
28
* Provides a mutex that can be used to synchronise coroutines
29
*/
30
+struct CoWaitRecord;
31
typedef struct CoMutex {
32
- bool locked;
33
+ /* Count of pending lockers; 0 for a free mutex, 1 for an
34
+ * uncontended mutex.
35
+ */
36
+ unsigned locked;
37
+
38
+ /* A queue of waiters. Elements are added atomically in front of
39
+ * from_push. to_pop is only populated, and popped from, by whoever
40
+ * is in charge of the next wakeup. This can be an unlocker or,
41
+ * through the handoff protocol, a locker that is about to go to sleep.
42
+ */
43
+ QSLIST_HEAD(, CoWaitRecord) from_push, to_pop;
44
+
45
+ unsigned handoff, sequence;
46
+
47
Coroutine *holder;
48
- CoQueue queue;
49
} CoMutex;
50
51
/**
52
diff --git a/tests/test-aio-multithread.c b/tests/test-aio-multithread.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/tests/test-aio-multithread.c
55
+++ b/tests/test-aio-multithread.c
56
@@ -XXX,XX +XXX,XX @@ static void test_multi_co_schedule_10(void)
57
test_multi_co_schedule(10);
58
}
59
60
+/* CoMutex thread-safety. */
61
+
62
+static uint32_t atomic_counter;
63
+static uint32_t running;
64
+static uint32_t counter;
65
+static CoMutex comutex;
66
+
67
+static void coroutine_fn test_multi_co_mutex_entry(void *opaque)
68
+{
69
+ while (!atomic_mb_read(&now_stopping)) {
70
+ qemu_co_mutex_lock(&comutex);
71
+ counter++;
72
+ qemu_co_mutex_unlock(&comutex);
73
+
74
+ /* Increase atomic_counter *after* releasing the mutex. Otherwise
75
+ * there is a chance (it happens about 1 in 3 runs) that the iothread
76
+ * exits before the coroutine is woken up, causing a spurious
77
+ * assertion failure.
78
+ */
79
+ atomic_inc(&atomic_counter);
80
+ }
81
+ atomic_dec(&running);
82
+}
83
+
84
+static void test_multi_co_mutex(int threads, int seconds)
85
+{
86
+ int i;
87
+
88
+ qemu_co_mutex_init(&comutex);
89
+ counter = 0;
90
+ atomic_counter = 0;
91
+ now_stopping = false;
92
+
93
+ create_aio_contexts();
94
+ assert(threads <= NUM_CONTEXTS);
95
+ running = threads;
96
+ for (i = 0; i < threads; i++) {
97
+ Coroutine *co1 = qemu_coroutine_create(test_multi_co_mutex_entry, NULL);
98
+ aio_co_schedule(ctx[i], co1);
99
+ }
100
+
101
+ g_usleep(seconds * 1000000);
102
+
103
+ atomic_mb_set(&now_stopping, true);
104
+ while (running > 0) {
105
+ g_usleep(100000);
106
+ }
107
+
108
+ join_aio_contexts();
109
+ g_test_message("%d iterations/second\n", counter / seconds);
110
+ g_assert_cmpint(counter, ==, atomic_counter);
111
+}
112
+
113
+/* Testing with NUM_CONTEXTS threads focuses on the queue. The mutex however
114
+ * is too contended (and the threads spend too much time in aio_poll)
115
+ * to actually stress the handoff protocol.
116
+ */
117
+static void test_multi_co_mutex_1(void)
118
+{
119
+ test_multi_co_mutex(NUM_CONTEXTS, 1);
120
+}
121
+
122
+static void test_multi_co_mutex_10(void)
123
+{
124
+ test_multi_co_mutex(NUM_CONTEXTS, 10);
125
+}
126
+
127
+/* Testing with fewer threads stresses the handoff protocol too. Still, the
128
+ * case where the locker _can_ pick up a handoff is very rare, happening
129
+ * about 10 times in 1 million, so increase the runtime a bit compared to
130
+ * other "quick" testcases that only run for 1 second.
131
+ */
132
+static void test_multi_co_mutex_2_3(void)
133
+{
134
+ test_multi_co_mutex(2, 3);
135
+}
136
+
137
+static void test_multi_co_mutex_2_30(void)
138
+{
139
+ test_multi_co_mutex(2, 30);
140
+}
141
+
142
/* End of tests. */
143
144
int main(int argc, char **argv)
145
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
146
g_test_add_func("/aio/multi/lifecycle", test_lifecycle);
147
if (g_test_quick()) {
148
g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_1);
149
+ g_test_add_func("/aio/multi/mutex/contended", test_multi_co_mutex_1);
150
+ g_test_add_func("/aio/multi/mutex/handoff", test_multi_co_mutex_2_3);
151
} else {
152
g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_10);
153
+ g_test_add_func("/aio/multi/mutex/contended", test_multi_co_mutex_10);
154
+ g_test_add_func("/aio/multi/mutex/handoff", test_multi_co_mutex_2_30);
155
}
156
return g_test_run();
157
}
158
diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
159
index XXXXXXX..XXXXXXX 100644
160
--- a/util/qemu-coroutine-lock.c
161
+++ b/util/qemu-coroutine-lock.c
162
@@ -XXX,XX +XXX,XX @@
163
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
164
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
165
* THE SOFTWARE.
166
+ *
167
+ * The lock-free mutex implementation is based on OSv
168
+ * (core/lfmutex.cc, include/lockfree/mutex.hh).
169
+ * Copyright (C) 2013 Cloudius Systems, Ltd.
170
*/
171
172
#include "qemu/osdep.h"
173
@@ -XXX,XX +XXX,XX @@ bool qemu_co_queue_empty(CoQueue *queue)
174
return QSIMPLEQ_FIRST(&queue->entries) == NULL;
175
}
176
177
+/* The wait records are handled with a multiple-producer, single-consumer
178
+ * lock-free queue. There cannot be two concurrent pop_waiter() calls
179
+ * because pop_waiter() can only be called while mutex->handoff is zero.
180
+ * This can happen in three cases:
181
+ * - in qemu_co_mutex_unlock, before the hand-off protocol has started.
182
+ * In this case, qemu_co_mutex_lock will see mutex->handoff == 0 and
183
+ * not take part in the handoff.
184
+ * - in qemu_co_mutex_lock, if it steals the hand-off responsibility from
185
+ * qemu_co_mutex_unlock. In this case, qemu_co_mutex_unlock will fail
186
+ * the cmpxchg (it will see either 0 or the next sequence value) and
187
+ * exit. The next hand-off cannot begin until qemu_co_mutex_lock has
188
+ * woken up someone.
189
+ * - in qemu_co_mutex_unlock, if it takes the hand-off token itself.
190
+ * In this case another iteration starts with mutex->handoff == 0;
191
+ * a concurrent qemu_co_mutex_lock will fail the cmpxchg, and
192
+ * qemu_co_mutex_unlock will go back to case (1).
193
+ *
194
+ * The following functions manage this queue.
195
+ */
196
+typedef struct CoWaitRecord {
197
+ Coroutine *co;
198
+ QSLIST_ENTRY(CoWaitRecord) next;
199
+} CoWaitRecord;
200
+
201
+static void push_waiter(CoMutex *mutex, CoWaitRecord *w)
202
+{
203
+ w->co = qemu_coroutine_self();
204
+ QSLIST_INSERT_HEAD_ATOMIC(&mutex->from_push, w, next);
205
+}
206
+
207
+static void move_waiters(CoMutex *mutex)
208
+{
209
+ QSLIST_HEAD(, CoWaitRecord) reversed;
210
+ QSLIST_MOVE_ATOMIC(&reversed, &mutex->from_push);
211
+ while (!QSLIST_EMPTY(&reversed)) {
212
+ CoWaitRecord *w = QSLIST_FIRST(&reversed);
213
+ QSLIST_REMOVE_HEAD(&reversed, next);
214
+ QSLIST_INSERT_HEAD(&mutex->to_pop, w, next);
215
+ }
216
+}
217
+
218
+static CoWaitRecord *pop_waiter(CoMutex *mutex)
219
+{
220
+ CoWaitRecord *w;
221
+
222
+ if (QSLIST_EMPTY(&mutex->to_pop)) {
223
+ move_waiters(mutex);
224
+ if (QSLIST_EMPTY(&mutex->to_pop)) {
225
+ return NULL;
226
+ }
227
+ }
228
+ w = QSLIST_FIRST(&mutex->to_pop);
229
+ QSLIST_REMOVE_HEAD(&mutex->to_pop, next);
230
+ return w;
231
+}
232
+
233
+static bool has_waiters(CoMutex *mutex)
234
+{
235
+ return QSLIST_EMPTY(&mutex->to_pop) || QSLIST_EMPTY(&mutex->from_push);
236
+}
237
+
238
void qemu_co_mutex_init(CoMutex *mutex)
17
{
239
{
18
BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
240
memset(mutex, 0, sizeof(*mutex));
19
- int ret = 0;
241
- qemu_co_queue_init(&mutex->queue);
20
+ int ret;
242
}
21
243
22
backup_init_bcs_bitmap(s);
244
-void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex)
23
245
+static void coroutine_fn qemu_co_mutex_lock_slowpath(CoMutex *mutex)
24
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_run(Job *job, Error **errp)
246
{
25
247
Coroutine *self = qemu_coroutine_self();
26
for (offset = 0; offset < s->len; ) {
248
+ CoWaitRecord w;
27
if (yield_and_check(s)) {
249
+ unsigned old_handoff;
28
- ret = -ECANCELED;
250
29
- goto out;
251
trace_qemu_co_mutex_lock_entry(mutex, self);
30
+ return -ECANCELED;
252
+ w.co = self;
31
}
253
+ push_waiter(mutex, &w);
32
254
33
ret = block_copy_reset_unallocated(s->bcs, offset, &count);
255
- while (mutex->locked) {
34
if (ret < 0) {
256
- qemu_co_queue_wait(&mutex->queue);
35
- goto out;
257
+ /* This is the "Responsibility Hand-Off" protocol; a lock() picks from
36
+ return ret;
258
+ * a concurrent unlock() the responsibility of waking somebody up.
37
}
259
+ */
38
260
+ old_handoff = atomic_mb_read(&mutex->handoff);
39
offset += count;
261
+ if (old_handoff &&
40
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_run(Job *job, Error **errp)
262
+ has_waiters(mutex) &&
41
job_yield(job);
263
+ atomic_cmpxchg(&mutex->handoff, old_handoff, 0) == old_handoff) {
42
}
264
+ /* There can be no concurrent pops, because there can be only
43
} else {
265
+ * one active handoff at a time.
44
- ret = backup_loop(s);
266
+ */
45
+ return backup_loop(s);
267
+ CoWaitRecord *to_wake = pop_waiter(mutex);
268
+ Coroutine *co = to_wake->co;
269
+ if (co == self) {
270
+ /* We got the lock ourselves! */
271
+ assert(to_wake == &w);
272
+ return;
273
+ }
274
+
275
+ aio_co_wake(co);
46
}
276
}
47
277
48
- out:
278
- mutex->locked = true;
49
- return ret;
279
- mutex->holder = self;
50
+ return 0;
280
- self->locks_held++;
51
}
281
-
52
282
+ qemu_coroutine_yield();
53
static const BlockJobDriver backup_job_driver = {
283
trace_qemu_co_mutex_lock_return(mutex, self);
284
}
285
286
+void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex)
287
+{
288
+ Coroutine *self = qemu_coroutine_self();
289
+
290
+ if (atomic_fetch_inc(&mutex->locked) == 0) {
291
+ /* Uncontended. */
292
+ trace_qemu_co_mutex_lock_uncontended(mutex, self);
293
+ } else {
294
+ qemu_co_mutex_lock_slowpath(mutex);
295
+ }
296
+ mutex->holder = self;
297
+ self->locks_held++;
298
+}
299
+
300
void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
301
{
302
Coroutine *self = qemu_coroutine_self();
303
304
trace_qemu_co_mutex_unlock_entry(mutex, self);
305
306
- assert(mutex->locked == true);
307
+ assert(mutex->locked);
308
assert(mutex->holder == self);
309
assert(qemu_in_coroutine());
310
311
- mutex->locked = false;
312
mutex->holder = NULL;
313
self->locks_held--;
314
- qemu_co_queue_next(&mutex->queue);
315
+ if (atomic_fetch_dec(&mutex->locked) == 1) {
316
+ /* No waiting qemu_co_mutex_lock(). Pfew, that was easy! */
317
+ return;
318
+ }
319
+
320
+ for (;;) {
321
+ CoWaitRecord *to_wake = pop_waiter(mutex);
322
+ unsigned our_handoff;
323
+
324
+ if (to_wake) {
325
+ Coroutine *co = to_wake->co;
326
+ aio_co_wake(co);
327
+ break;
328
+ }
329
+
330
+ /* Some concurrent lock() is in progress (we know this because
331
+ * mutex->locked was >1) but it hasn't yet put itself on the wait
332
+ * queue. Pick a sequence number for the handoff protocol (not 0).
333
+ */
334
+ if (++mutex->sequence == 0) {
335
+ mutex->sequence = 1;
336
+ }
337
+
338
+ our_handoff = mutex->sequence;
339
+ atomic_mb_set(&mutex->handoff, our_handoff);
340
+ if (!has_waiters(mutex)) {
341
+ /* The concurrent lock has not added itself yet, so it
342
+ * will be able to pick our handoff.
343
+ */
344
+ break;
345
+ }
346
+
347
+ /* Try to do the handoff protocol ourselves; if somebody else has
348
+ * already taken it, however, we're done and they're responsible.
349
+ */
350
+ if (atomic_cmpxchg(&mutex->handoff, our_handoff, 0) != our_handoff) {
351
+ break;
352
+ }
353
+ }
354
355
trace_qemu_co_mutex_unlock_return(mutex, self);
356
}
357
diff --git a/util/trace-events b/util/trace-events
358
index XXXXXXX..XXXXXXX 100644
359
--- a/util/trace-events
360
+++ b/util/trace-events
361
@@ -XXX,XX +XXX,XX @@ qemu_coroutine_terminate(void *co) "self %p"
362
363
# util/qemu-coroutine-lock.c
364
qemu_co_queue_run_restart(void *co) "co %p"
365
+qemu_co_mutex_lock_uncontended(void *mutex, void *self) "mutex %p self %p"
366
qemu_co_mutex_lock_entry(void *mutex, void *self) "mutex %p self %p"
367
qemu_co_mutex_lock_return(void *mutex, void *self) "mutex %p self %p"
368
qemu_co_mutex_unlock_entry(void *mutex, void *self) "mutex %p self %p"
54
--
369
--
55
2.29.2
370
2.9.3
56
371
57
372
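The fast-path/slow-path split that qemu_co_mutex_lock() in the patch above
relies on -- take the lock with a single atomic operation when it is
uncontended, and sleep only when the counter shows contention -- can be
illustrated in isolation with a classic benaphore.  The sketch below is a
simplified, unfair analogue for illustration only; it uses a POSIX
semaphore instead of the wait-record and hand-off machinery of the patch
and is not the algorithm QEMU uses.

    #include <semaphore.h>
    #include <stdatomic.h>

    typedef struct {
        atomic_int users;   /* 0 = free, 1 = locked, >1 = contended */
        sem_t sem;          /* slow path: contended lockers sleep here */
    } Benaphore;

    static void benaphore_init(Benaphore *b)
    {
        atomic_init(&b->users, 0);
        sem_init(&b->sem, 0, 0);
    }

    static void benaphore_lock(Benaphore *b)
    {
        /* Fast path: the first locker only pays for one atomic add. */
        if (atomic_fetch_add(&b->users, 1) > 0) {
            sem_wait(&b->sem);      /* contended: wait for the unlocker */
        }
    }

    static void benaphore_unlock(Benaphore *b)
    {
        /*
         * Someone incremented the counter while we held the lock:
         * hand the lock off by waking exactly one waiter.
         */
        if (atomic_fetch_sub(&b->users, 1) > 1) {
            sem_post(&b->sem);
        }
    }
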
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
We are going to directly use one async block-copy operation for backup
3
Running a very small critical section on pthread_mutex_t and CoMutex
4
job, so we need rate limiter.
4
shows that pthread_mutex_t is much faster because it doesn't actually
5
go to sleep. What happens is that the critical section is shorter
6
than the latency of entering the kernel and thus FUTEX_WAIT always
7
fails. With CoMutex there is no such latency but you still want to
8
avoid wait and wakeup. So introduce it artificially.
5
9
6
We want to maintain current backup behavior: only background copying is
10
This only works with one waiters; because CoMutex is fair, it will
7
limited and copy-before-write operations only participate in limit
11
always have more waits and wakeups than a pthread_mutex_t.
8
calculation. Therefore we need one rate limiter for block-copy state
9
and boolean flag for block-copy call state for actual limitation.
10
12
11
Note, that we can't just calculate each chunk in limiter after
13
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
12
successful copying: it will not save us from starting a lot of async
14
Reviewed-by: Fam Zheng <famz@redhat.com>
13
sub-requests which will exceed limit too much. Instead let's use the
15
Message-id: 20170213181244.16297-3-pbonzini@redhat.com
14
following scheme on sub-request creation:
16
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
15
1. If at the moment limit is not exceeded, create the request and
17
---
16
account it immediately.
18
include/qemu/coroutine.h | 5 +++++
17
2. If at the moment limit is already exceeded, drop create sub-request
19
util/qemu-coroutine-lock.c | 51 ++++++++++++++++++++++++++++++++++++++++------
18
and handle limit instead (by sleep).
20
util/qemu-coroutine.c | 2 +-
19
With this approach we'll never exceed the limit more than by one
21
3 files changed, 51 insertions(+), 7 deletions(-)
20
sub-request (which pretty much matches current backup behavior).
21
22
22
Note also, that if there is in-flight block-copy async call,
23
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
23
block_copy_kick() should be used after set-speed to apply new setup
24
faster. For that block_copy_kick() published in this patch.
25
26
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
27
Reviewed-by: Max Reitz <mreitz@redhat.com>
28
Message-Id: <20210116214705.822267-7-vsementsov@virtuozzo.com>
29
Signed-off-by: Max Reitz <mreitz@redhat.com>
30
---
31
include/block/block-copy.h | 5 ++++-
32
block/backup-top.c | 2 +-
33
block/backup.c | 2 +-
34
block/block-copy.c | 46 +++++++++++++++++++++++++++++++++++++-
35
4 files changed, 51 insertions(+), 4 deletions(-)
36
37
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
38
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
39
--- a/include/block/block-copy.h
25
--- a/include/qemu/coroutine.h
40
+++ b/include/block/block-copy.h
26
+++ b/include/qemu/coroutine.h
41
@@ -XXX,XX +XXX,XX @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
27
@@ -XXX,XX +XXX,XX @@ typedef struct CoMutex {
42
int64_t offset, int64_t *count);
28
*/
43
29
unsigned locked;
44
int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
30
45
- bool *error_is_read);
31
+ /* Context that is holding the lock. Useful to avoid spinning
46
+ bool ignore_ratelimit, bool *error_is_read);
32
+ * when two coroutines on the same AioContext try to get the lock. :)
47
33
+ */
48
/*
34
+ AioContext *ctx;
49
* Run block-copy in a coroutine, create corresponding BlockCopyCallState
50
@@ -XXX,XX +XXX,XX @@ bool block_copy_call_succeeded(BlockCopyCallState *call_state);
51
bool block_copy_call_failed(BlockCopyCallState *call_state);
52
int block_copy_call_status(BlockCopyCallState *call_state, bool *error_is_read);
53
54
+void block_copy_set_speed(BlockCopyState *s, uint64_t speed);
55
+void block_copy_kick(BlockCopyCallState *call_state);
56
+
35
+
57
BdrvDirtyBitmap *block_copy_dirty_bitmap(BlockCopyState *s);
36
/* A queue of waiters. Elements are added atomically in front of
58
void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip);
37
* from_push. to_pop is only populated, and popped from, by whoever
59
38
* is in charge of the next wakeup. This can be an unlocker or,
60
diff --git a/block/backup-top.c b/block/backup-top.c
39
diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
61
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
62
--- a/block/backup-top.c
41
--- a/util/qemu-coroutine-lock.c
63
+++ b/block/backup-top.c
42
+++ b/util/qemu-coroutine-lock.c
64
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int backup_top_cbw(BlockDriverState *bs, uint64_t offset,
43
@@ -XXX,XX +XXX,XX @@
65
off = QEMU_ALIGN_DOWN(offset, s->cluster_size);
44
#include "qemu-common.h"
66
end = QEMU_ALIGN_UP(offset + bytes, s->cluster_size);
45
#include "qemu/coroutine.h"
67
46
#include "qemu/coroutine_int.h"
68
- return block_copy(s->bcs, off, end - off, NULL);
47
+#include "qemu/processor.h"
69
+ return block_copy(s->bcs, off, end - off, true, NULL);
48
#include "qemu/queue.h"
49
#include "block/aio.h"
50
#include "trace.h"
51
@@ -XXX,XX +XXX,XX @@ void qemu_co_mutex_init(CoMutex *mutex)
52
memset(mutex, 0, sizeof(*mutex));
70
}
53
}
71
54
72
static int coroutine_fn backup_top_co_pdiscard(BlockDriverState *bs,
55
-static void coroutine_fn qemu_co_mutex_lock_slowpath(CoMutex *mutex)
73
diff --git a/block/backup.c b/block/backup.c
56
+static void coroutine_fn qemu_co_mutex_wake(CoMutex *mutex, Coroutine *co)
74
index XXXXXXX..XXXXXXX 100644
75
--- a/block/backup.c
76
+++ b/block/backup.c
77
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
78
79
trace_backup_do_cow_enter(job, start, offset, bytes);
80
81
- ret = block_copy(job->bcs, start, end - start, error_is_read);
82
+ ret = block_copy(job->bcs, start, end - start, true, error_is_read);
83
84
trace_backup_do_cow_return(job, offset, bytes, ret);
85
86
diff --git a/block/block-copy.c b/block/block-copy.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/block/block-copy.c
89
+++ b/block/block-copy.c
90
@@ -XXX,XX +XXX,XX @@
91
#define BLOCK_COPY_MAX_BUFFER (1 * MiB)
92
#define BLOCK_COPY_MAX_MEM (128 * MiB)
93
#define BLOCK_COPY_MAX_WORKERS 64
94
+#define BLOCK_COPY_SLICE_TIME 100000000ULL /* ns */
95
96
static coroutine_fn int block_copy_task_entry(AioTask *task);
97
98
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
99
int64_t bytes;
100
int max_workers;
101
int64_t max_chunk;
102
+ bool ignore_ratelimit;
103
BlockCopyAsyncCallbackFunc cb;
104
void *cb_opaque;
105
106
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
107
/* State */
108
int ret;
109
bool finished;
110
+ QemuCoSleepState *sleep_state;
111
112
/* OUT parameters */
113
bool error_is_read;
114
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyState {
115
void *progress_opaque;
116
117
SharedResource *mem;
118
+
119
+ uint64_t speed;
120
+ RateLimit rate_limit;
121
} BlockCopyState;
122
123
static BlockCopyTask *find_conflicting_task(BlockCopyState *s,
124
@@ -XXX,XX +XXX,XX @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
125
}
126
task->zeroes = ret & BDRV_BLOCK_ZERO;
127
128
+ if (s->speed) {
129
+ if (!call_state->ignore_ratelimit) {
130
+ uint64_t ns = ratelimit_calculate_delay(&s->rate_limit, 0);
131
+ if (ns > 0) {
132
+ block_copy_task_end(task, -EAGAIN);
133
+ g_free(task);
134
+ qemu_co_sleep_ns_wakeable(QEMU_CLOCK_REALTIME, ns,
135
+ &call_state->sleep_state);
136
+ continue;
137
+ }
138
+ }
139
+
140
+ ratelimit_calculate_delay(&s->rate_limit, task->bytes);
141
+ }
142
+
143
trace_block_copy_process(s, task->offset);
144
145
co_get_from_shres(s->mem, task->bytes);
146
@@ -XXX,XX +XXX,XX @@ out:
147
return ret < 0 ? ret : found_dirty;
148
}
149
150
+void block_copy_kick(BlockCopyCallState *call_state)
151
+{
57
+{
152
+ if (call_state->sleep_state) {
58
+ /* Read co before co->ctx; pairs with smp_wmb() in
153
+ qemu_co_sleep_wake(call_state->sleep_state);
59
+ * qemu_coroutine_enter().
154
+ }
60
+ */
61
+ smp_read_barrier_depends();
62
+ mutex->ctx = co->ctx;
63
+ aio_co_wake(co);
155
+}
64
+}
156
+
65
+
157
/*
66
+static void coroutine_fn qemu_co_mutex_lock_slowpath(AioContext *ctx,
158
* block_copy_common
67
+ CoMutex *mutex)
159
*
160
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
161
}
162
163
int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
164
- bool *error_is_read)
165
+ bool ignore_ratelimit, bool *error_is_read)
166
{
68
{
167
BlockCopyCallState call_state = {
69
Coroutine *self = qemu_coroutine_self();
168
.s = s,
70
CoWaitRecord w;
169
.offset = start,
71
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn qemu_co_mutex_lock_slowpath(CoMutex *mutex)
170
.bytes = bytes,
72
if (co == self) {
171
+ .ignore_ratelimit = ignore_ratelimit,
73
/* We got the lock ourselves! */
172
.max_workers = BLOCK_COPY_MAX_WORKERS,
74
assert(to_wake == &w);
173
};
75
+ mutex->ctx = ctx;
174
76
return;
175
@@ -XXX,XX +XXX,XX @@ void block_copy_set_skip_unallocated(BlockCopyState *s, bool skip)
77
}
78
79
- aio_co_wake(co);
80
+ qemu_co_mutex_wake(mutex, co);
81
}
82
83
qemu_coroutine_yield();
84
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn qemu_co_mutex_lock_slowpath(CoMutex *mutex)
85
86
void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex)
176
{
87
{
177
s->skip_unallocated = skip;
88
+ AioContext *ctx = qemu_get_current_aio_context();
178
}
89
Coroutine *self = qemu_coroutine_self();
179
+
90
+ int waiters, i;
180
+void block_copy_set_speed(BlockCopyState *s, uint64_t speed)
91
181
+{
92
- if (atomic_fetch_inc(&mutex->locked) == 0) {
182
+ s->speed = speed;
93
+ /* Running a very small critical section on pthread_mutex_t and CoMutex
183
+ if (speed > 0) {
94
+ * shows that pthread_mutex_t is much faster because it doesn't actually
184
+ ratelimit_set_speed(&s->rate_limit, speed, BLOCK_COPY_SLICE_TIME);
95
+ * go to sleep. What happens is that the critical section is shorter
96
+ * than the latency of entering the kernel and thus FUTEX_WAIT always
97
+ * fails. With CoMutex there is no such latency but you still want to
98
+ * avoid wait and wakeup. So introduce it artificially.
99
+ */
100
+ i = 0;
101
+retry_fast_path:
102
+ waiters = atomic_cmpxchg(&mutex->locked, 0, 1);
103
+ if (waiters != 0) {
104
+ while (waiters == 1 && ++i < 1000) {
105
+ if (atomic_read(&mutex->ctx) == ctx) {
106
+ break;
107
+ }
108
+ if (atomic_read(&mutex->locked) == 0) {
109
+ goto retry_fast_path;
110
+ }
111
+ cpu_relax();
112
+ }
113
+ waiters = atomic_fetch_inc(&mutex->locked);
185
+ }
114
+ }
186
+
115
+
187
+ /*
116
+ if (waiters == 0) {
188
+ * Note: it's good to kick all call states from here, but it should be done
117
/* Uncontended. */
189
+ * only from a coroutine, to not crash if s->calls list changed while
118
trace_qemu_co_mutex_lock_uncontended(mutex, self);
190
+ * entering one call. So for now, the only user of this function kicks its
119
+ mutex->ctx = ctx;
191
+ * only one call_state by hand.
120
} else {
192
+ */
121
- qemu_co_mutex_lock_slowpath(mutex);
193
+}
122
+ qemu_co_mutex_lock_slowpath(ctx, mutex);
123
}
124
mutex->holder = self;
125
self->locks_held++;
126
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
127
assert(mutex->holder == self);
128
assert(qemu_in_coroutine());
129
130
+ mutex->ctx = NULL;
131
mutex->holder = NULL;
132
self->locks_held--;
133
if (atomic_fetch_dec(&mutex->locked) == 1) {
134
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
135
unsigned our_handoff;
136
137
if (to_wake) {
138
- Coroutine *co = to_wake->co;
139
- aio_co_wake(co);
140
+ qemu_co_mutex_wake(mutex, to_wake->co);
141
break;
142
}
143
144
diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
145
index XXXXXXX..XXXXXXX 100644
146
--- a/util/qemu-coroutine.c
147
+++ b/util/qemu-coroutine.c
148
@@ -XXX,XX +XXX,XX @@ void qemu_coroutine_enter(Coroutine *co)
149
co->ctx = qemu_get_current_aio_context();
150
151
/* Store co->ctx before anything that stores co. Matches
152
- * barrier in aio_co_wake.
153
+ * barrier in aio_co_wake and qemu_co_mutex_wake.
154
*/
155
smp_wmb();
156
194
--
157
--
195
2.29.2
158
2.9.3
196
159
197
160
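The rate-limiting scheme described in the block-copy commit message above
(check the limiter before creating a sub-request; if the limit is already
exceeded, sleep instead of creating it; otherwise account the whole
sub-request immediately) boils down to one decision per sub-request.  The
following toy model is for illustration only -- it is not QEMU's RateLimit
code, and the struct and function names are made up:

    #include <stdint.h>

    /* Toy limiter: bytes that may still be started in the current slice. */
    typedef struct {
        int64_t budget;     /* remaining bytes allowed in this slice */
        int64_t slice_ns;   /* how long to sleep when over the limit */
    } ToyLimit;

    /*
     * Returns 0 if a sub-request of 'bytes' may be started now (and
     * accounts it immediately), or a delay in nanoseconds if the caller
     * should sleep and retry instead of creating the sub-request.
     * Because a sub-request is accounted in full before it runs, the
     * limit is never exceeded by more than one in-flight sub-request.
     */
    static int64_t toy_limit_account(ToyLimit *l, int64_t bytes)
    {
        if (l->budget <= 0) {
            return l->slice_ns;     /* over the limit: sleep, do not start */
        }
        l->budget -= bytes;         /* under the limit: account up front */
        return 0;
    }
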
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
If main job coroutine called job_yield (while some background process
3
Add two implementations of the same benchmark as the previous patch,
4
is in progress), we should give it a chance to call job_pause_point().
4
but using pthreads. One uses a normal QemuMutex, the other is Linux
5
It will be used in backup, when moved on async block-copy.
5
only and implements a fair mutex based on MCS locks and futexes.
6
6
This shows that the slower performance of the 5-thread case is due to
7
Note, that job_user_pause is not enough: we want to handle
7
the fairness of CoMutex, rather than to coroutines. If fairness does
8
child_job_drained_begin() as well, which call job_pause().
8
not matter, as is the case with two threads, CoMutex can actually be
9
9
faster than pthreads.
10
Still, if job is already in job_do_yield() in job_pause_point() we
10
11
should not enter it.
11
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
12
12
Reviewed-by: Fam Zheng <famz@redhat.com>
13
iotest 109 output is modified: on stop we do bdrv_drain_all() which now
13
Message-id: 20170213181244.16297-4-pbonzini@redhat.com
14
triggers job pause immediately (and pause after ready is standby).
14
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
15
16
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
17
Message-Id: <20210116214705.822267-10-vsementsov@virtuozzo.com>
18
Reviewed-by: Max Reitz <mreitz@redhat.com>
19
Signed-off-by: Max Reitz <mreitz@redhat.com>
20
---
15
---
21
job.c | 3 +++
16
tests/test-aio-multithread.c | 164 +++++++++++++++++++++++++++++++++++++++++++
22
tests/qemu-iotests/109.out | 24 ++++++++++++++++++++++++
17
1 file changed, 164 insertions(+)
23
2 files changed, 27 insertions(+)
18
24
19
diff --git a/tests/test-aio-multithread.c b/tests/test-aio-multithread.c
25
diff --git a/job.c b/job.c
26
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
27
--- a/job.c
21
--- a/tests/test-aio-multithread.c
28
+++ b/job.c
22
+++ b/tests/test-aio-multithread.c
29
@@ -XXX,XX +XXX,XX @@ static bool job_timer_not_pending(Job *job)
23
@@ -XXX,XX +XXX,XX @@ static void test_multi_co_mutex_2_30(void)
30
void job_pause(Job *job)
24
test_multi_co_mutex(2, 30);
31
{
32
job->pause_count++;
33
+ if (!job->paused) {
34
+ job_enter(job);
35
+ }
36
}
25
}
37
26
38
void job_resume(Job *job)
27
+/* Same test with fair mutexes, for performance comparison. */
39
diff --git a/tests/qemu-iotests/109.out b/tests/qemu-iotests/109.out
28
+
40
index XXXXXXX..XXXXXXX 100644
29
+#ifdef CONFIG_LINUX
41
--- a/tests/qemu-iotests/109.out
30
+#include "qemu/futex.h"
42
+++ b/tests/qemu-iotests/109.out
31
+
43
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
32
+/* The nodes for the mutex reside in this structure (on which we try to avoid
44
{"execute":"quit"}
33
+ * false sharing). The head of the mutex is in the "mutex_head" variable.
45
{"return": {}}
34
+ */
46
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
35
+static struct {
47
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
36
+ int next, locked;
48
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
37
+ int padding[14];
49
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
38
+} nodes[NUM_CONTEXTS] __attribute__((__aligned__(64)));
50
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
39
+
51
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 1024, "offset": 1024, "speed": 0, "type": "mirror"}}
40
+static int mutex_head = -1;
52
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
41
+
53
{"execute":"quit"}
42
+static void mcs_mutex_lock(void)
54
{"return": {}}
43
+{
55
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
44
+ int prev;
56
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
45
+
57
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
46
+ nodes[id].next = -1;
58
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
47
+ nodes[id].locked = 1;
59
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
48
+ prev = atomic_xchg(&mutex_head, id);
60
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 197120, "offset": 197120, "speed": 0, "type": "mirror"}}
49
+ if (prev != -1) {
61
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
50
+ atomic_set(&nodes[prev].next, id);
62
{"execute":"quit"}
51
+ qemu_futex_wait(&nodes[id].locked, 1);
63
{"return": {}}
52
+ }
64
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
53
+}
65
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
54
+
66
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
55
+static void mcs_mutex_unlock(void)
67
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
56
+{
68
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
57
+ int next;
69
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 327680, "offset": 327680, "speed": 0, "type": "mirror"}}
58
+ if (nodes[id].next == -1) {
70
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
59
+ if (atomic_read(&mutex_head) == id &&
71
{"execute":"quit"}
60
+ atomic_cmpxchg(&mutex_head, id, -1) == id) {
72
{"return": {}}
61
+ /* Last item in the list, exit. */
73
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
62
+ return;
74
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
63
+ }
75
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
64
+ while (atomic_read(&nodes[id].next) == -1) {
76
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
65
+ /* mcs_mutex_lock did the xchg, but has not updated
77
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
66
+ * nodes[prev].next yet.
78
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 1024, "offset": 1024, "speed": 0, "type": "mirror"}}
67
+ */
79
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
68
+ }
80
{"execute":"quit"}
69
+ }
81
{"return": {}}
70
+
82
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
71
+ /* Wake up the next in line. */
83
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
72
+ next = nodes[id].next;
84
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
73
+ nodes[next].locked = 0;
85
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
74
+ qemu_futex_wake(&nodes[next].locked, 1);
86
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
75
+}
87
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 65536, "offset": 65536, "speed": 0, "type": "mirror"}}
76
+
88
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
77
+static void test_multi_fair_mutex_entry(void *opaque)
89
{"execute":"quit"}
78
+{
90
{"return": {}}
79
+ while (!atomic_mb_read(&now_stopping)) {
91
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
80
+ mcs_mutex_lock();
92
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
81
+ counter++;
93
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
82
+ mcs_mutex_unlock();
94
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
83
+ atomic_inc(&atomic_counter);
95
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
84
+ }
96
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 2560, "offset": 2560, "speed": 0, "type": "mirror"}}
85
+ atomic_dec(&running);
97
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
86
+}
98
{"execute":"quit"}
87
+
99
{"return": {}}
88
+static void test_multi_fair_mutex(int threads, int seconds)
100
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
89
+{
101
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
90
+ int i;
102
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
91
+
103
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
92
+ assert(mutex_head == -1);
104
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
93
+ counter = 0;
105
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 2560, "offset": 2560, "speed": 0, "type": "mirror"}}
94
+ atomic_counter = 0;
106
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
95
+ now_stopping = false;
107
{"execute":"quit"}
96
+
108
{"return": {}}
97
+ create_aio_contexts();
109
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
98
+ assert(threads <= NUM_CONTEXTS);
110
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
99
+ running = threads;
111
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
100
+ for (i = 0; i < threads; i++) {
112
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
101
+ Coroutine *co1 = qemu_coroutine_create(test_multi_fair_mutex_entry, NULL);
113
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
102
+ aio_co_schedule(ctx[i], co1);
114
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 31457280, "offset": 31457280, "speed": 0, "type": "mirror"}}
103
+ }
115
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
104
+
116
{"execute":"quit"}
105
+ g_usleep(seconds * 1000000);
117
{"return": {}}
106
+
118
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
107
+ atomic_mb_set(&now_stopping, true);
119
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
108
+ while (running > 0) {
120
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
109
+ g_usleep(100000);
121
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
110
+ }
122
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
111
+
123
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 327680, "offset": 327680, "speed": 0, "type": "mirror"}}
112
+ join_aio_contexts();
124
@@ -XXX,XX +XXX,XX @@ read 512/512 bytes at offset 0
113
+ g_test_message("%d iterations/second\n", counter / seconds);
125
{"execute":"quit"}
114
+ g_assert_cmpint(counter, ==, atomic_counter);
126
{"return": {}}
115
+}
127
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
116
+
128
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
117
+static void test_multi_fair_mutex_1(void)
129
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
118
+{
130
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
119
+ test_multi_fair_mutex(NUM_CONTEXTS, 1);
131
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
120
+}
132
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 2048, "offset": 2048, "speed": 0, "type": "mirror"}}
121
+
133
@@ -XXX,XX +XXX,XX @@ WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed
122
+static void test_multi_fair_mutex_10(void)
134
{"execute":"quit"}
123
+{
135
{"return": {}}
124
+ test_multi_fair_mutex(NUM_CONTEXTS, 10);
136
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
125
+}
137
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
126
+#endif
138
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
127
+
139
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
128
+/* Same test with pthread mutexes, for performance comparison and
140
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
129
+ * portability. */
141
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 512, "offset": 512, "speed": 0, "type": "mirror"}}
130
+
142
@@ -XXX,XX +XXX,XX @@ Images are identical.
131
+static QemuMutex mutex;
143
{"execute":"quit"}
132
+
144
{"return": {}}
133
+static void test_multi_mutex_entry(void *opaque)
145
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
134
+{
146
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "src"}}
135
+ while (!atomic_mb_read(&now_stopping)) {
147
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "src"}}
136
+ qemu_mutex_lock(&mutex);
148
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "src"}}
137
+ counter++;
149
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "src"}}
138
+ qemu_mutex_unlock(&mutex);
150
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "src", "len": 512, "offset": 512, "speed": 0, "type": "mirror"}}
139
+ atomic_inc(&atomic_counter);
140
+ }
141
+ atomic_dec(&running);
142
+}
143
+
144
+static void test_multi_mutex(int threads, int seconds)
145
+{
146
+ int i;
147
+
148
+ qemu_mutex_init(&mutex);
149
+ counter = 0;
150
+ atomic_counter = 0;
151
+ now_stopping = false;
152
+
153
+ create_aio_contexts();
154
+ assert(threads <= NUM_CONTEXTS);
155
+ running = threads;
156
+ for (i = 0; i < threads; i++) {
157
+ Coroutine *co1 = qemu_coroutine_create(test_multi_mutex_entry, NULL);
158
+ aio_co_schedule(ctx[i], co1);
159
+ }
160
+
161
+ g_usleep(seconds * 1000000);
162
+
163
+ atomic_mb_set(&now_stopping, true);
164
+ while (running > 0) {
165
+ g_usleep(100000);
166
+ }
167
+
168
+ join_aio_contexts();
169
+ g_test_message("%d iterations/second\n", counter / seconds);
170
+ g_assert_cmpint(counter, ==, atomic_counter);
171
+}
172
+
173
+static void test_multi_mutex_1(void)
174
+{
175
+ test_multi_mutex(NUM_CONTEXTS, 1);
176
+}
177
+
178
+static void test_multi_mutex_10(void)
179
+{
180
+ test_multi_mutex(NUM_CONTEXTS, 10);
181
+}
182
+
183
/* End of tests. */
184
185
int main(int argc, char **argv)
186
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
187
g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_1);
188
g_test_add_func("/aio/multi/mutex/contended", test_multi_co_mutex_1);
189
g_test_add_func("/aio/multi/mutex/handoff", test_multi_co_mutex_2_3);
190
+#ifdef CONFIG_LINUX
191
+ g_test_add_func("/aio/multi/mutex/mcs", test_multi_fair_mutex_1);
192
+#endif
193
+ g_test_add_func("/aio/multi/mutex/pthread", test_multi_mutex_1);
194
} else {
195
g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_10);
196
g_test_add_func("/aio/multi/mutex/contended", test_multi_co_mutex_10);
197
g_test_add_func("/aio/multi/mutex/handoff", test_multi_co_mutex_2_30);
198
+#ifdef CONFIG_LINUX
199
+ g_test_add_func("/aio/multi/mutex/mcs", test_multi_fair_mutex_10);
200
+#endif
201
+ g_test_add_func("/aio/multi/mutex/pthread", test_multi_mutex_10);
202
}
203
return g_test_run();
204
}
151
--
205
--
152
2.29.2
206
2.9.3
153
207
154
208
diff view generated by jsdifflib
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
Add script to benchmark new backup architecture.
3
This will avoid forward references in the next patch. It is also
more logical because CoQueue is no longer the basic primitive.
4
5
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6
Message-Id: <20210116214705.822267-24-vsementsov@virtuozzo.com>
7
Reviewed-by: Fam Zheng <famz@redhat.com>
7
[mreitz: s/not unsupported/not supported/]
8
Message-id: 20170213181244.16297-5-pbonzini@redhat.com
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
---
10
---
10
scripts/simplebench/bench-backup.py | 167 ++++++++++++++++++++++++++++
11
include/qemu/coroutine.h | 89 ++++++++++++++++++++++++------------------------
11
1 file changed, 167 insertions(+)
12
1 file changed, 44 insertions(+), 45 deletions(-)
12
create mode 100755 scripts/simplebench/bench-backup.py
13
13
14
diff --git a/scripts/simplebench/bench-backup.py b/scripts/simplebench/bench-backup.py
14
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
15
new file mode 100755
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX
16
--- a/include/qemu/coroutine.h
17
--- /dev/null
17
+++ b/include/qemu/coroutine.h
18
+++ b/scripts/simplebench/bench-backup.py
18
@@ -XXX,XX +XXX,XX @@ bool qemu_in_coroutine(void);
19
@@ -XXX,XX +XXX,XX @@
19
*/
20
+#!/usr/bin/env python3
20
bool qemu_coroutine_entered(Coroutine *co);
21
+#
21
22
+# Bench backup block-job
22
-
23
+#
23
-/**
24
+# Copyright (c) 2020 Virtuozzo International GmbH.
24
- * CoQueues are a mechanism to queue coroutines in order to continue executing
25
+#
25
- * them later. They provide the fundamental primitives on which coroutine locks
26
+# This program is free software; you can redistribute it and/or modify
26
- * are built.
27
+# it under the terms of the GNU General Public License as published by
27
- */
28
+# the Free Software Foundation; either version 2 of the License, or
28
-typedef struct CoQueue {
29
+# (at your option) any later version.
29
- QSIMPLEQ_HEAD(, Coroutine) entries;
30
+#
30
-} CoQueue;
31
+# This program is distributed in the hope that it will be useful,
31
-
32
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
32
-/**
33
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
33
- * Initialise a CoQueue. This must be called before any other operation is used
34
+# GNU General Public License for more details.
34
- * on the CoQueue.
35
+#
35
- */
36
+# You should have received a copy of the GNU General Public License
36
-void qemu_co_queue_init(CoQueue *queue);
37
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
37
-
38
+#
38
-/**
39
- * Adds the current coroutine to the CoQueue and transfers control to the
40
- * caller of the coroutine.
41
- */
42
-void coroutine_fn qemu_co_queue_wait(CoQueue *queue);
43
-
44
-/**
45
- * Restarts the next coroutine in the CoQueue and removes it from the queue.
46
- *
47
- * Returns true if a coroutine was restarted, false if the queue is empty.
48
- */
49
-bool coroutine_fn qemu_co_queue_next(CoQueue *queue);
50
-
51
-/**
52
- * Restarts all coroutines in the CoQueue and leaves the queue empty.
53
- */
54
-void coroutine_fn qemu_co_queue_restart_all(CoQueue *queue);
55
-
56
-/**
57
- * Enter the next coroutine in the queue
58
- */
59
-bool qemu_co_enter_next(CoQueue *queue);
60
-
61
-/**
62
- * Checks if the CoQueue is empty.
63
- */
64
-bool qemu_co_queue_empty(CoQueue *queue);
65
-
66
-
67
/**
68
* Provides a mutex that can be used to synchronise coroutines
69
*/
70
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex);
71
*/
72
void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex);
73
39
+
74
+
40
+import argparse
75
+/**
41
+import json
76
+ * CoQueues are a mechanism to queue coroutines in order to continue executing
77
+ * them later.
78
+ */
79
+typedef struct CoQueue {
80
+ QSIMPLEQ_HEAD(, Coroutine) entries;
81
+} CoQueue;
42
+
82
+
43
+import simplebench
83
+/**
44
+from results_to_text import results_to_text
84
+ * Initialise a CoQueue. This must be called before any other operation is used
45
+from bench_block_job import bench_block_copy, drv_file, drv_nbd
85
+ * on the CoQueue.
86
+ */
87
+void qemu_co_queue_init(CoQueue *queue);
88
+
89
+/**
90
+ * Adds the current coroutine to the CoQueue and transfers control to the
91
+ * caller of the coroutine.
92
+ */
93
+void coroutine_fn qemu_co_queue_wait(CoQueue *queue);
94
+
95
+/**
96
+ * Restarts the next coroutine in the CoQueue and removes it from the queue.
97
+ *
98
+ * Returns true if a coroutine was restarted, false if the queue is empty.
99
+ */
100
+bool coroutine_fn qemu_co_queue_next(CoQueue *queue);
101
+
102
+/**
103
+ * Restarts all coroutines in the CoQueue and leaves the queue empty.
104
+ */
105
+void coroutine_fn qemu_co_queue_restart_all(CoQueue *queue);
106
+
107
+/**
108
+ * Enter the next coroutine in the queue
109
+ */
110
+bool qemu_co_enter_next(CoQueue *queue);
111
+
112
+/**
113
+ * Checks if the CoQueue is empty.
114
+ */
115
+bool qemu_co_queue_empty(CoQueue *queue);
46
+
116
+
47
+
117
+
48
+def bench_func(env, case):
118
typedef struct CoRwlock {
49
+ """ Handle one "cell" of benchmarking table. """
119
bool writer;
50
+ cmd_options = env['cmd-options'] if 'cmd-options' in env else {}
120
int reader;
51
+ return bench_block_copy(env['qemu-binary'], env['cmd'],
52
+ cmd_options,
53
+ case['source'], case['target'])
54
+
55
+
56
+def bench(args):
57
+ test_cases = []
58
+
59
+ sources = {}
60
+ targets = {}
61
+ for d in args.dir:
62
+ label, path = d.split(':') # paths with colon not supported
63
+ sources[label] = drv_file(path + '/test-source')
64
+ targets[label] = drv_file(path + '/test-target')
65
+
66
+ if args.nbd:
67
+ nbd = args.nbd.split(':')
68
+ host = nbd[0]
69
+ port = '10809' if len(nbd) == 1 else nbd[1]
70
+ drv = drv_nbd(host, port)
71
+ sources['nbd'] = drv
72
+ targets['nbd'] = drv
73
+
74
+ for t in args.test:
75
+ src, dst = t.split(':')
76
+
77
+ test_cases.append({
78
+ 'id': t,
79
+ 'source': sources[src],
80
+ 'target': targets[dst]
81
+ })
82
+
83
+ binaries = [] # list of (<label>, <path>, [<options>])
84
+ for i, q in enumerate(args.env):
85
+ name_path = q.split(':')
86
+ if len(name_path) == 1:
87
+ label = f'q{i}'
88
+ path_opts = name_path[0].split(',')
89
+ else:
90
+ assert len(name_path) == 2 # paths with colon not supported
91
+ label = name_path[0]
92
+ path_opts = name_path[1].split(',')
93
+
94
+ binaries.append((label, path_opts[0], path_opts[1:]))
95
+
96
+ test_envs = []
97
+
98
+ bin_paths = {}
99
+ for i, q in enumerate(args.env):
100
+ opts = q.split(',')
101
+ label_path = opts[0]
102
+ opts = opts[1:]
103
+
104
+ if ':' in label_path:
105
+ # path with colon inside is not supported
106
+ label, path = label_path.split(':')
107
+ bin_paths[label] = path
108
+ elif label_path in bin_paths:
109
+ label = label_path
110
+ path = bin_paths[label]
111
+ else:
112
+ path = label_path
113
+ label = f'q{i}'
114
+ bin_paths[label] = path
115
+
116
+ x_perf = {}
117
+ is_mirror = False
118
+ for opt in opts:
119
+ if opt == 'mirror':
120
+ is_mirror = True
121
+ elif opt == 'copy-range=on':
122
+ x_perf['use-copy-range'] = True
123
+ elif opt == 'copy-range=off':
124
+ x_perf['use-copy-range'] = False
125
+ elif opt.startswith('max-workers='):
126
+ x_perf['max-workers'] = int(opt.split('=')[1])
127
+
128
+ if is_mirror:
129
+ assert not x_perf
130
+ test_envs.append({
131
+ 'id': f'mirror({label})',
132
+ 'cmd': 'blockdev-mirror',
133
+ 'qemu-binary': path
134
+ })
135
+ else:
136
+ test_envs.append({
137
+ 'id': f'backup({label})\n' + '\n'.join(opts),
138
+ 'cmd': 'blockdev-backup',
139
+ 'cmd-options': {'x-perf': x_perf} if x_perf else {},
140
+ 'qemu-binary': path
141
+ })
142
+
143
+ result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
144
+ with open('results.json', 'w') as f:
145
+ json.dump(result, f, indent=4)
146
+ print(results_to_text(result))
147
+
148
+
149
+class ExtendAction(argparse.Action):
150
+ def __call__(self, parser, namespace, values, option_string=None):
151
+ items = getattr(namespace, self.dest) or []
152
+ items.extend(values)
153
+ setattr(namespace, self.dest, items)
154
+
155
+
156
+if __name__ == '__main__':
157
+ p = argparse.ArgumentParser('Backup benchmark', epilog='''
158
+ENV format
159
+
160
+ (LABEL:PATH|LABEL|PATH)[,max-workers=N][,use-copy-range=(on|off)][,mirror]
161
+
162
+ LABEL short name for the binary
163
+ PATH path to the binary
164
+ max-workers set x-perf.max-workers of backup job
165
+ use-copy-range set x-perf.use-copy-range of backup job
166
+ mirror use mirror job instead of backup''',
167
+ formatter_class=argparse.RawTextHelpFormatter)
168
+ p.add_argument('--env', nargs='+', help='''\
169
+Qemu binaries with labels and options, see below
170
+"ENV format" section''',
171
+ action=ExtendAction)
172
+ p.add_argument('--dir', nargs='+', help='''\
173
+Directories, each containing "test-source" and/or
174
+"test-target" files, raw images to used in
175
+benchmarking. File path with label, like
176
+label:/path/to/directory''',
177
+ action=ExtendAction)
178
+ p.add_argument('--nbd', help='''\
179
+host:port for remote NBD image, (or just host, for
180
+default port 10809). Use it in tests, label is "nbd"
181
+(but you cannot create test nbd:nbd).''')
182
+ p.add_argument('--test', nargs='+', help='''\
183
+Tests, in form source-dir-label:target-dir-label''',
184
+ action=ExtendAction)
185
+
186
+ bench(p.parse_args())
187
--
121
--
188
2.29.2
122
2.9.3
189
123
190
124
diff view generated by jsdifflib
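For orientation, an invocation of the bench-backup.py script added above might look like the following sketch; every path, label, and option value here is hypothetical and only illustrates the --env/--dir/--nbd/--test format described in the script's epilog.

    # Sketch, not part of the patch: compare a backup job limited to 16 workers
    # against a mirror job, using a local scratch directory (which must contain
    # "test-source"/"test-target" raw images) and an NBD export on localhost.
    import subprocess

    subprocess.run([
        'python3', 'scripts/simplebench/bench-backup.py',
        '--env', 'new:/path/to/qemu-system-x86_64,max-workers=16',
                 'old:/path/to/other/qemu-system-x86_64,mirror',
        '--dir', 'ssd:/mnt/ssd-scratch',
        '--nbd', 'localhost:10809',
        '--test', 'ssd:ssd', 'ssd:nbd',
    ], check=True)

    # Results are printed as a table and also dumped to results.json.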
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
Experiments show that copy_range does not always make things faster.
3
All that CoQueue needs in order to become thread-safe is help
4
So, to make experimentation simpler, let's add a parameter. Some more
4
from an external mutex. Add this to the API.
5
perf parameters will be added soon, so here is a new struct.
5
6
6
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
7
For now, add new backup qmp parameter with x- prefix for the following
7
Reviewed-by: Fam Zheng <famz@redhat.com>
8
reasons:
8
Message-id: 20170213181244.16297-6-pbonzini@redhat.com
9
9
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
10
- We are going to add more performance parameters, some related to the
  whole block-copy process, some only to the background copying in
  backup (ignored for copy-before-write operations).
- On the other hand, we are going to use the block-copy interface in
  other block jobs, which will need performance options as well. And it
  should be the same structure, or at least somehow related.

So, there are still too many unclear things about the interface, and
for now we need the new options mostly for testing. Let's keep them
experimental for a while.

In do_backup_common(), the new x-perf parameter is handled in a way
that makes adding further options simpler.

We add use-copy-range with default=true, and we'll change the default
in a further patch, after moving backup to use block-copy.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
28
Reviewed-by: Max Reitz <mreitz@redhat.com>
29
Message-Id: <20210116214705.822267-2-vsementsov@virtuozzo.com>
30
[mreitz: s/5\.2/6.0/]
31
Signed-off-by: Max Reitz <mreitz@redhat.com>
32
---
10
---
33
qapi/block-core.json | 17 ++++++++++++++++-
11
include/qemu/coroutine.h | 8 +++++---
34
block/backup-top.h | 1 +
12
block/backup.c | 2 +-
35
include/block/block-copy.h | 2 +-
13
block/io.c | 4 ++--
36
include/block/block_int.h | 3 +++
14
block/nbd-client.c | 2 +-
37
block/backup-top.c | 4 +++-
15
block/qcow2-cluster.c | 4 +---
38
block/backup.c | 6 +++++-
16
block/sheepdog.c | 2 +-
39
block/block-copy.c | 4 ++--
17
block/throttle-groups.c | 2 +-
40
block/replication.c | 2 ++
18
hw/9pfs/9p.c | 2 +-
41
blockdev.c | 8 ++++++++
19
util/qemu-coroutine-lock.c | 24 +++++++++++++++++++++---
42
9 files changed, 41 insertions(+), 6 deletions(-)
20
9 files changed, 34 insertions(+), 16 deletions(-)
43
21
44
diff --git a/qapi/block-core.json b/qapi/block-core.json
22
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
45
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
46
--- a/qapi/block-core.json
24
--- a/include/qemu/coroutine.h
47
+++ b/qapi/block-core.json
25
+++ b/include/qemu/coroutine.h
48
@@ -XXX,XX +XXX,XX @@
26
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex);
49
{ 'struct': 'BlockdevSnapshot',
27
50
'data': { 'node': 'str', 'overlay': 'str' } }
28
/**
51
29
* CoQueues are a mechanism to queue coroutines in order to continue executing
52
+##
30
- * them later.
53
+# @BackupPerf:
31
+ * them later. They are similar to condition variables, but they need help
54
+#
32
+ * from an external mutex in order to maintain thread-safety.
55
+# Optional parameters for backup. These parameters don't affect
33
*/
56
+# functionality, but may significantly affect performance.
34
typedef struct CoQueue {
57
+#
35
QSIMPLEQ_HEAD(, Coroutine) entries;
58
+# @use-copy-range: Use copy offloading. Default true.
36
@@ -XXX,XX +XXX,XX @@ void qemu_co_queue_init(CoQueue *queue);
59
+#
37
60
+# Since: 6.0
38
/**
61
+##
39
* Adds the current coroutine to the CoQueue and transfers control to the
62
+{ 'struct': 'BackupPerf',
40
- * caller of the coroutine.
63
+ 'data': { '*use-copy-range': 'bool' }}
41
+ * caller of the coroutine. The mutex is unlocked during the wait and
64
+
42
+ * locked again afterwards.
65
##
43
*/
66
# @BackupCommon:
44
-void coroutine_fn qemu_co_queue_wait(CoQueue *queue);
67
#
45
+void coroutine_fn qemu_co_queue_wait(CoQueue *queue, CoMutex *mutex);
68
@@ -XXX,XX +XXX,XX @@
46
69
# above node specified by @drive. If this option is not given,
47
/**
70
# a node name is autogenerated. (Since: 4.2)
48
* Restarts the next coroutine in the CoQueue and removes it from the queue.
71
#
72
+# @x-perf: Performance options. (Since 6.0)
73
+#
74
# Note: @on-source-error and @on-target-error only affect background
75
# I/O. If an error occurs during a guest write request, the device's
76
# rerror/werror actions will be used.
77
@@ -XXX,XX +XXX,XX @@
78
'*on-source-error': 'BlockdevOnError',
79
'*on-target-error': 'BlockdevOnError',
80
'*auto-finalize': 'bool', '*auto-dismiss': 'bool',
81
- '*filter-node-name': 'str' } }
82
+ '*filter-node-name': 'str', '*x-perf': 'BackupPerf' } }
83
84
##
85
# @DriveBackup:
86
diff --git a/block/backup-top.h b/block/backup-top.h
87
index XXXXXXX..XXXXXXX 100644
88
--- a/block/backup-top.h
89
+++ b/block/backup-top.h
90
@@ -XXX,XX +XXX,XX @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
91
BlockDriverState *target,
92
const char *filter_node_name,
93
uint64_t cluster_size,
94
+ BackupPerf *perf,
95
BdrvRequestFlags write_flags,
96
BlockCopyState **bcs,
97
Error **errp);
98
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
99
index XXXXXXX..XXXXXXX 100644
100
--- a/include/block/block-copy.h
101
+++ b/include/block/block-copy.h
102
@@ -XXX,XX +XXX,XX @@ typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
103
typedef struct BlockCopyState BlockCopyState;
104
105
BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
106
- int64_t cluster_size,
107
+ int64_t cluster_size, bool use_copy_range,
108
BdrvRequestFlags write_flags,
109
Error **errp);
110
111
diff --git a/include/block/block_int.h b/include/block/block_int.h
112
index XXXXXXX..XXXXXXX 100644
113
--- a/include/block/block_int.h
114
+++ b/include/block/block_int.h
115
@@ -XXX,XX +XXX,XX @@ void mirror_start(const char *job_id, BlockDriverState *bs,
116
* @sync_mode: What parts of the disk image should be copied to the destination.
117
* @sync_bitmap: The dirty bitmap if sync_mode is 'bitmap' or 'incremental'
118
* @bitmap_mode: The bitmap synchronization policy to use.
119
+ * @perf: Performance options. All actual fields assumed to be present,
120
+ * all ".has_*" fields are ignored.
121
* @on_source_error: The action to take upon error reading from the source.
122
* @on_target_error: The action to take upon error writing to the target.
123
* @creation_flags: Flags that control the behavior of the Job lifetime.
124
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
125
BitmapSyncMode bitmap_mode,
126
bool compress,
127
const char *filter_node_name,
128
+ BackupPerf *perf,
129
BlockdevOnError on_source_error,
130
BlockdevOnError on_target_error,
131
int creation_flags,
132
diff --git a/block/backup-top.c b/block/backup-top.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/block/backup-top.c
135
+++ b/block/backup-top.c
136
@@ -XXX,XX +XXX,XX @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
137
BlockDriverState *target,
138
const char *filter_node_name,
139
uint64_t cluster_size,
140
+ BackupPerf *perf,
141
BdrvRequestFlags write_flags,
142
BlockCopyState **bcs,
143
Error **errp)
144
@@ -XXX,XX +XXX,XX @@ BlockDriverState *bdrv_backup_top_append(BlockDriverState *source,
145
146
state->cluster_size = cluster_size;
147
state->bcs = block_copy_state_new(top->backing, state->target,
148
- cluster_size, write_flags, &local_err);
149
+ cluster_size, perf->use_copy_range,
150
+ write_flags, &local_err);
151
if (local_err) {
152
error_prepend(&local_err, "Cannot create block-copy-state: ");
153
goto fail;
154
diff --git a/block/backup.c b/block/backup.c
49
diff --git a/block/backup.c b/block/backup.c
155
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
156
--- a/block/backup.c
51
--- a/block/backup.c
157
+++ b/block/backup.c
52
+++ b/block/backup.c
158
@@ -XXX,XX +XXX,XX @@ typedef struct BackupBlockJob {
53
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn wait_for_overlapping_requests(BackupBlockJob *job,
159
uint64_t len;
54
retry = false;
160
uint64_t bytes_read;
55
QLIST_FOREACH(req, &job->inflight_reqs, list) {
161
int64_t cluster_size;
56
if (end > req->start && start < req->end) {
162
+ BackupPerf perf;
57
- qemu_co_queue_wait(&req->wait_queue);
163
58
+ qemu_co_queue_wait(&req->wait_queue, NULL);
164
BlockCopyState *bcs;
59
retry = true;
165
} BackupBlockJob;
60
break;
166
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
61
}
167
BitmapSyncMode bitmap_mode,
62
diff --git a/block/io.c b/block/io.c
168
bool compress,
63
index XXXXXXX..XXXXXXX 100644
169
const char *filter_node_name,
64
--- a/block/io.c
170
+ BackupPerf *perf,
65
+++ b/block/io.c
171
BlockdevOnError on_source_error,
66
@@ -XXX,XX +XXX,XX @@ static bool coroutine_fn wait_serialising_requests(BdrvTrackedRequest *self)
172
BlockdevOnError on_target_error,
67
* (instead of producing a deadlock in the former case). */
173
int creation_flags,
68
if (!req->waiting_for) {
174
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
69
self->waiting_for = req;
175
(compress ? BDRV_REQ_WRITE_COMPRESSED : 0),
70
- qemu_co_queue_wait(&req->wait_queue);
176
71
+ qemu_co_queue_wait(&req->wait_queue, NULL);
177
backup_top = bdrv_backup_top_append(bs, target, filter_node_name,
72
self->waiting_for = NULL;
178
- cluster_size, write_flags, &bcs, errp);
73
retry = true;
179
+ cluster_size, perf,
74
waited = true;
180
+ write_flags, &bcs, errp);
75
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
181
if (!backup_top) {
76
182
goto error;
77
/* Wait until any previous flushes are completed */
183
}
78
while (bs->active_flush_req) {
184
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
79
- qemu_co_queue_wait(&bs->flush_queue);
185
job->bcs = bcs;
80
+ qemu_co_queue_wait(&bs->flush_queue, NULL);
186
job->cluster_size = cluster_size;
81
}
187
job->len = len;
82
188
+ job->perf = *perf;
83
bs->active_flush_req = true;
189
84
diff --git a/block/nbd-client.c b/block/nbd-client.c
190
block_copy_set_progress_callback(bcs, backup_progress_bytes_callback, job);
85
index XXXXXXX..XXXXXXX 100644
191
block_copy_set_progress_meter(bcs, &job->common.job.progress);
86
--- a/block/nbd-client.c
192
diff --git a/block/block-copy.c b/block/block-copy.c
87
+++ b/block/nbd-client.c
193
index XXXXXXX..XXXXXXX 100644
88
@@ -XXX,XX +XXX,XX @@ static void nbd_coroutine_start(NBDClientSession *s,
194
--- a/block/block-copy.c
89
/* Poor man semaphore. The free_sema is locked when no other request
195
+++ b/block/block-copy.c
90
* can be accepted, and unlocked after receiving one reply. */
196
@@ -XXX,XX +XXX,XX @@ static uint32_t block_copy_max_transfer(BdrvChild *source, BdrvChild *target)
91
if (s->in_flight == MAX_NBD_REQUESTS) {
92
- qemu_co_queue_wait(&s->free_sema);
93
+ qemu_co_queue_wait(&s->free_sema, NULL);
94
assert(s->in_flight < MAX_NBD_REQUESTS);
95
}
96
s->in_flight++;
97
diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
98
index XXXXXXX..XXXXXXX 100644
99
--- a/block/qcow2-cluster.c
100
+++ b/block/qcow2-cluster.c
101
@@ -XXX,XX +XXX,XX @@ static int handle_dependencies(BlockDriverState *bs, uint64_t guest_offset,
102
if (bytes == 0) {
103
/* Wait for the dependency to complete. We need to recheck
104
* the free/allocated clusters when we continue. */
105
- qemu_co_mutex_unlock(&s->lock);
106
- qemu_co_queue_wait(&old_alloc->dependent_requests);
107
- qemu_co_mutex_lock(&s->lock);
108
+ qemu_co_queue_wait(&old_alloc->dependent_requests, &s->lock);
109
return -EAGAIN;
110
}
111
}
112
diff --git a/block/sheepdog.c b/block/sheepdog.c
113
index XXXXXXX..XXXXXXX 100644
114
--- a/block/sheepdog.c
115
+++ b/block/sheepdog.c
116
@@ -XXX,XX +XXX,XX @@ static void wait_for_overlapping_aiocb(BDRVSheepdogState *s, SheepdogAIOCB *acb)
117
retry:
118
QLIST_FOREACH(cb, &s->inflight_aiocb_head, aiocb_siblings) {
119
if (AIOCBOverlapping(acb, cb)) {
120
- qemu_co_queue_wait(&s->overlapping_queue);
121
+ qemu_co_queue_wait(&s->overlapping_queue, NULL);
122
goto retry;
123
}
124
}
125
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
126
index XXXXXXX..XXXXXXX 100644
127
--- a/block/throttle-groups.c
128
+++ b/block/throttle-groups.c
129
@@ -XXX,XX +XXX,XX @@ void coroutine_fn throttle_group_co_io_limits_intercept(BlockBackend *blk,
130
if (must_wait || blkp->pending_reqs[is_write]) {
131
blkp->pending_reqs[is_write]++;
132
qemu_mutex_unlock(&tg->lock);
133
- qemu_co_queue_wait(&blkp->throttled_reqs[is_write]);
134
+ qemu_co_queue_wait(&blkp->throttled_reqs[is_write], NULL);
135
qemu_mutex_lock(&tg->lock);
136
blkp->pending_reqs[is_write]--;
137
}
138
diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
139
index XXXXXXX..XXXXXXX 100644
140
--- a/hw/9pfs/9p.c
141
+++ b/hw/9pfs/9p.c
142
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn v9fs_flush(void *opaque)
143
/*
144
* Wait for pdu to complete.
145
*/
146
- qemu_co_queue_wait(&cancel_pdu->complete);
147
+ qemu_co_queue_wait(&cancel_pdu->complete, NULL);
148
cancel_pdu->cancelled = 0;
149
pdu_free(cancel_pdu);
150
}
151
diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
152
index XXXXXXX..XXXXXXX 100644
153
--- a/util/qemu-coroutine-lock.c
154
+++ b/util/qemu-coroutine-lock.c
155
@@ -XXX,XX +XXX,XX @@ void qemu_co_queue_init(CoQueue *queue)
156
QSIMPLEQ_INIT(&queue->entries);
197
}
157
}
198
158
199
BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
159
-void coroutine_fn qemu_co_queue_wait(CoQueue *queue)
200
- int64_t cluster_size,
160
+void coroutine_fn qemu_co_queue_wait(CoQueue *queue, CoMutex *mutex)
201
+ int64_t cluster_size, bool use_copy_range,
202
BdrvRequestFlags write_flags, Error **errp)
203
{
161
{
204
BlockCopyState *s;
162
Coroutine *self = qemu_coroutine_self();
205
@@ -XXX,XX +XXX,XX @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
163
QSIMPLEQ_INSERT_TAIL(&queue->entries, self, co_queue_next);
206
* We enable copy-range, but keep small copy_size, until first
164
+
207
* successful copy_range (look at block_copy_do_copy).
165
+ if (mutex) {
208
*/
166
+ qemu_co_mutex_unlock(mutex);
209
- s->use_copy_range = true;
210
+ s->use_copy_range = use_copy_range;
211
s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
212
}
213
214
diff --git a/block/replication.c b/block/replication.c
215
index XXXXXXX..XXXXXXX 100644
216
--- a/block/replication.c
217
+++ b/block/replication.c
218
@@ -XXX,XX +XXX,XX @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
219
int64_t active_length, hidden_length, disk_length;
220
AioContext *aio_context;
221
Error *local_err = NULL;
222
+ BackupPerf perf = { .use_copy_range = true };
223
224
aio_context = bdrv_get_aio_context(bs);
225
aio_context_acquire(aio_context);
226
@@ -XXX,XX +XXX,XX @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
227
s->backup_job = backup_job_create(
228
NULL, s->secondary_disk->bs, s->hidden_disk->bs,
229
0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
230
+ &perf,
231
BLOCKDEV_ON_ERROR_REPORT,
232
BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL,
233
backup_job_completed, bs, NULL, &local_err);
234
diff --git a/blockdev.c b/blockdev.c
235
index XXXXXXX..XXXXXXX 100644
236
--- a/blockdev.c
237
+++ b/blockdev.c
238
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
239
{
240
BlockJob *job = NULL;
241
BdrvDirtyBitmap *bmap = NULL;
242
+ BackupPerf perf = { .use_copy_range = true };
243
int job_flags = JOB_DEFAULT;
244
245
if (!backup->has_speed) {
246
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
247
backup->compress = false;
248
}
249
250
+ if (backup->x_perf) {
251
+ if (backup->x_perf->has_use_copy_range) {
252
+ perf.use_copy_range = backup->x_perf->use_copy_range;
253
+ }
254
+ }
167
+ }
255
+
168
+
256
if ((backup->sync == MIRROR_SYNC_MODE_BITMAP) ||
169
+ /* There is no race condition here. Other threads will call
257
(backup->sync == MIRROR_SYNC_MODE_INCREMENTAL)) {
170
+ * aio_co_schedule on our AioContext, which can reenter this
258
/* done before desugaring 'incremental' to print the right message */
171
+ * coroutine but only after this yield and after the main loop
259
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
172
+ * has gone through the next iteration.
260
backup->sync, bmap, backup->bitmap_mode,
173
+ */
261
backup->compress,
174
qemu_coroutine_yield();
262
backup->filter_node_name,
175
assert(qemu_in_coroutine());
263
+ &perf,
176
+
264
backup->on_source_error,
177
+ /* TODO: OSv implements wait morphing here, where the wakeup
265
backup->on_target_error,
178
+ * primitive automatically places the woken coroutine on the
266
job_flags, NULL, NULL, txn, errp);
179
+ * mutex's queue. This avoids the thundering herd effect.
180
+ */
181
+ if (mutex) {
182
+ qemu_co_mutex_lock(mutex);
183
+ }
184
}
185
186
/**
187
@@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_rdlock(CoRwlock *lock)
188
Coroutine *self = qemu_coroutine_self();
189
190
while (lock->writer) {
191
- qemu_co_queue_wait(&lock->queue);
192
+ qemu_co_queue_wait(&lock->queue, NULL);
193
}
194
lock->reader++;
195
self->locks_held++;
196
@@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_wrlock(CoRwlock *lock)
197
Coroutine *self = qemu_coroutine_self();
198
199
while (lock->writer || lock->reader) {
200
- qemu_co_queue_wait(&lock->queue);
201
+ qemu_co_queue_wait(&lock->queue, NULL);
202
}
203
lock->writer = true;
204
self->locks_held++;
267
--
205
--
268
2.29.2
206
2.9.3
269
207
270
208
diff view generated by jsdifflib
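As a rough illustration of the new knob (not taken from the series; the node names and job id are made up, and vm stands for an iotests.VM instance whose qmp() helper converts x_perf to the wire name x-perf), a backup could be started with copy offloading disabled like this:

    # Illustrative only: start a full backup of node 'drive0' into node
    # 'target0' with copy offloading turned off via the experimental x-perf.
    result = vm.qmp('blockdev-backup',
                    job_id='backup0',
                    device='drive0',
                    target='target0',
                    sync='full',
                    x_perf={'use-copy-range': False})
    assert result == {'return': {}}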
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
They will be used for backup.
4
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Reviewed-by: Max Reitz <mreitz@redhat.com>
7
Message-Id: <20210116214705.822267-5-vsementsov@virtuozzo.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
---
10
include/block/block-copy.h | 6 ++++++
11
block/block-copy.c | 11 +++++++++--
12
2 files changed, 15 insertions(+), 2 deletions(-)
13
14
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/block/block-copy.h
17
+++ b/include/block/block-copy.h
18
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
19
*
20
* Caller is responsible to call block_copy_call_free() to free
21
* BlockCopyCallState object.
22
+ *
23
+ * @max_workers means maximum of parallel coroutines to execute sub-requests,
24
+ * must be > 0.
25
+ *
26
+ * @max_chunk means maximum length for one IO operation. Zero means unlimited.
27
*/
28
BlockCopyCallState *block_copy_async(BlockCopyState *s,
29
int64_t offset, int64_t bytes,
30
+ int max_workers, int64_t max_chunk,
31
BlockCopyAsyncCallbackFunc cb,
32
void *cb_opaque);
33
34
diff --git a/block/block-copy.c b/block/block-copy.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/block/block-copy.c
37
+++ b/block/block-copy.c
38
@@ -XXX,XX +XXX,XX @@ typedef struct BlockCopyCallState {
39
BlockCopyState *s;
40
int64_t offset;
41
int64_t bytes;
42
+ int max_workers;
43
+ int64_t max_chunk;
44
BlockCopyAsyncCallbackFunc cb;
45
void *cb_opaque;
46
47
@@ -XXX,XX +XXX,XX @@ static BlockCopyTask *block_copy_task_create(BlockCopyState *s,
48
int64_t offset, int64_t bytes)
49
{
50
BlockCopyTask *task;
51
+ int64_t max_chunk = MIN_NON_ZERO(s->copy_size, call_state->max_chunk);
52
53
if (!bdrv_dirty_bitmap_next_dirty_area(s->copy_bitmap,
54
offset, offset + bytes,
55
- s->copy_size, &offset, &bytes))
56
+ max_chunk, &offset, &bytes))
57
{
58
return NULL;
59
}
60
@@ -XXX,XX +XXX,XX @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)
61
bytes = end - offset;
62
63
if (!aio && bytes) {
64
- aio = aio_task_pool_new(BLOCK_COPY_MAX_WORKERS);
65
+ aio = aio_task_pool_new(call_state->max_workers);
66
}
67
68
ret = block_copy_task_run(aio, task);
69
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
70
.s = s,
71
.offset = start,
72
.bytes = bytes,
73
+ .max_workers = BLOCK_COPY_MAX_WORKERS,
74
};
75
76
int ret = block_copy_common(&call_state);
77
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn block_copy_async_co_entry(void *opaque)
78
79
BlockCopyCallState *block_copy_async(BlockCopyState *s,
80
int64_t offset, int64_t bytes,
81
+ int max_workers, int64_t max_chunk,
82
BlockCopyAsyncCallbackFunc cb,
83
void *cb_opaque)
84
{
85
@@ -XXX,XX +XXX,XX @@ BlockCopyCallState *block_copy_async(BlockCopyState *s,
86
.s = s,
87
.offset = offset,
88
.bytes = bytes,
89
+ .max_workers = max_workers,
90
+ .max_chunk = max_chunk,
91
.cb = cb,
92
.cb_opaque = cb_opaque,
93
94
--
95
2.29.2
96
97
diff view generated by jsdifflib
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Add new parameters to configure future backup features. The patch
4
doesn't introduce aio backup requests (so we actually have only one
5
worker) neither requests larger than one cluster. Still, formally we
6
satisfy these maximums anyway, so add the parameters now, to facilitate
7
further patch which will really change backup job behavior.
8
9
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
10
Reviewed-by: Max Reitz <mreitz@redhat.com>
11
Message-Id: <20210116214705.822267-11-vsementsov@virtuozzo.com>
12
Signed-off-by: Max Reitz <mreitz@redhat.com>
13
---
14
qapi/block-core.json | 13 ++++++++++++-
15
block/backup.c | 28 +++++++++++++++++++++++-----
16
block/replication.c | 2 +-
17
blockdev.c | 8 +++++++-
18
4 files changed, 43 insertions(+), 8 deletions(-)
19
20
diff --git a/qapi/block-core.json b/qapi/block-core.json
21
index XXXXXXX..XXXXXXX 100644
22
--- a/qapi/block-core.json
23
+++ b/qapi/block-core.json
24
@@ -XXX,XX +XXX,XX @@
25
#
26
# @use-copy-range: Use copy offloading. Default true.
27
#
28
+# @max-workers: Maximum number of parallel requests for the sustained background
29
+# copying process. Doesn't influence copy-before-write operations.
30
+# Default 64.
31
+#
32
+# @max-chunk: Maximum request length for the sustained background copying
33
+# process. Doesn't influence copy-before-write operations.
34
+# 0 means unlimited. If max-chunk is non-zero then it should not be
35
+# less than job cluster size which is calculated as maximum of
36
+# target image cluster size and 64k. Default 0.
37
+#
38
# Since: 6.0
39
##
40
{ 'struct': 'BackupPerf',
41
- 'data': { '*use-copy-range': 'bool' }}
42
+ 'data': { '*use-copy-range': 'bool',
43
+ '*max-workers': 'int', '*max-chunk': 'int64' } }
44
45
##
46
# @BackupCommon:
47
diff --git a/block/backup.c b/block/backup.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/block/backup.c
50
+++ b/block/backup.c
51
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
52
return NULL;
53
}
54
55
+ cluster_size = backup_calculate_cluster_size(target, errp);
56
+ if (cluster_size < 0) {
57
+ goto error;
58
+ }
59
+
60
+ if (perf->max_workers < 1) {
61
+ error_setg(errp, "max-workers must be greater than zero");
62
+ return NULL;
63
+ }
64
+
65
+ if (perf->max_chunk < 0) {
66
+ error_setg(errp, "max-chunk must be zero (which means no limit) or "
67
+ "positive");
68
+ return NULL;
69
+ }
70
+
71
+ if (perf->max_chunk && perf->max_chunk < cluster_size) {
72
+ error_setg(errp, "Required max-chunk (%" PRIi64 ") is less than backup "
73
+ "cluster size (%" PRIi64 ")", perf->max_chunk, cluster_size);
74
+ return NULL;
75
+ }
76
+
77
+
78
if (sync_bitmap) {
79
/* If we need to write to this bitmap, check that we can: */
80
if (bitmap_mode != BITMAP_SYNC_MODE_NEVER &&
81
@@ -XXX,XX +XXX,XX @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
82
goto error;
83
}
84
85
- cluster_size = backup_calculate_cluster_size(target, errp);
86
- if (cluster_size < 0) {
87
- goto error;
88
- }
89
-
90
/*
91
* If source is in backing chain of target assume that target is going to be
92
* used for "image fleecing", i.e. it should represent a kind of snapshot of
93
diff --git a/block/replication.c b/block/replication.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/block/replication.c
96
+++ b/block/replication.c
97
@@ -XXX,XX +XXX,XX @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
98
int64_t active_length, hidden_length, disk_length;
99
AioContext *aio_context;
100
Error *local_err = NULL;
101
- BackupPerf perf = { .use_copy_range = true };
102
+ BackupPerf perf = { .use_copy_range = true, .max_workers = 1 };
103
104
aio_context = bdrv_get_aio_context(bs);
105
aio_context_acquire(aio_context);
106
diff --git a/blockdev.c b/blockdev.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/blockdev.c
109
+++ b/blockdev.c
110
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
111
{
112
BlockJob *job = NULL;
113
BdrvDirtyBitmap *bmap = NULL;
114
- BackupPerf perf = { .use_copy_range = true };
115
+ BackupPerf perf = { .use_copy_range = true, .max_workers = 64 };
116
int job_flags = JOB_DEFAULT;
117
118
if (!backup->has_speed) {
119
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
120
if (backup->x_perf->has_use_copy_range) {
121
perf.use_copy_range = backup->x_perf->use_copy_range;
122
}
123
+ if (backup->x_perf->has_max_workers) {
124
+ perf.max_workers = backup->x_perf->max_workers;
125
+ }
126
+ if (backup->x_perf->has_max_chunk) {
127
+ perf.max_chunk = backup->x_perf->max_chunk;
128
+ }
129
}
130
131
if ((backup->sync == MIRROR_SYNC_MODE_BITMAP) ||
132
--
133
2.29.2
134
135
diff view generated by jsdifflib
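A hypothetical caller honouring the constraints above (max-workers at least 1; max-chunk either 0 or no smaller than the job cluster size, which is at least 64k) might look like this sketch, again with made-up node names and an iotests.VM instance vm:

    # Illustrative only: allow up to 8 parallel background requests of at
    # most 1 MiB each for the backup job.
    result = vm.qmp('blockdev-backup',
                    job_id='backup0',
                    device='drive0',
                    target='target0',
                    sync='full',
                    x_perf={'max-workers': 8, 'max-chunk': 1024 * 1024})
    assert result == {'return': {}}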
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
After introducing parallel async copy requests instead of the plain
cluster-by-cluster copying loop, we'll have to wait for the paused
status, as we need to wait for several parallel requests. So, let's
wait gently instead of just asserting that the job has already paused.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
9
Reviewed-by: Max Reitz <mreitz@redhat.com>
10
Message-Id: <20210116214705.822267-12-vsementsov@virtuozzo.com>
11
Signed-off-by: Max Reitz <mreitz@redhat.com>
12
---
13
tests/qemu-iotests/056 | 9 +++++++--
14
1 file changed, 7 insertions(+), 2 deletions(-)
15
16
diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
17
index XXXXXXX..XXXXXXX 100755
18
--- a/tests/qemu-iotests/056
19
+++ b/tests/qemu-iotests/056
20
@@ -XXX,XX +XXX,XX @@ class BackupTest(iotests.QMPTestCase):
21
event = self.vm.event_wait(name="BLOCK_JOB_ERROR",
22
match={'data': {'device': 'drive0'}})
23
self.assertNotEqual(event, None)
24
- # OK, job should be wedged
25
- res = self.vm.qmp('query-block-jobs')
26
+ # OK, job should pause, but it can't do it immediately, as it can't
27
+ # cancel other parallel requests (which didn't fail)
28
+ with iotests.Timeout(60, "Timeout waiting for backup actually paused"):
29
+ while True:
30
+ res = self.vm.qmp('query-block-jobs')
31
+ if res['return'][0]['status'] == 'paused':
32
+ break
33
self.assert_qmp(res, 'return[0]/status', 'paused')
34
res = self.vm.qmp('block-job-dismiss', id='drive0')
35
self.assert_qmp(res, 'error/desc',
36
--
37
2.29.2
38
39
diff view generated by jsdifflib
Deleted patch
1
Right now, this does not change anything, because backup ignores
2
max-chunk and max-workers. However, as soon as backup is switched over
3
to block-copy for the background copying process, we will need it to
4
keep 129 passing.
5
1
6
Signed-off-by: Max Reitz <mreitz@redhat.com>
7
Message-Id: <20210120102043.28346-1-mreitz@redhat.com>
8
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
9
---
10
tests/qemu-iotests/129 | 7 ++++++-
11
1 file changed, 6 insertions(+), 1 deletion(-)
12
13
diff --git a/tests/qemu-iotests/129 b/tests/qemu-iotests/129
14
index XXXXXXX..XXXXXXX 100755
15
--- a/tests/qemu-iotests/129
16
+++ b/tests/qemu-iotests/129
17
@@ -XXX,XX +XXX,XX @@ class TestStopWithBlockJob(iotests.QMPTestCase):
18
sync="full", buf_size=65536)
19
20
def test_drive_backup(self):
21
+ # Limit max-chunk and max-workers so that block-copy will not
22
+ # launch so many workers working on so much data each that
23
+ # stop's bdrv_drain_all() would finish the job
24
self.do_test_stop("drive-backup", device="drive0",
25
target=self.target_img, format=iotests.imgfmt,
26
- sync="full")
27
+ sync="full",
28
+ x_perf={ 'max-chunk': 65536,
29
+ 'max-workers': 8 })
30
31
def test_block_commit(self):
32
# Add overlay above the source node so that we actually use a
33
--
34
2.29.2
35
36
diff view generated by jsdifflib
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
The upcoming change of moving backup to a single block-copy call will
make the copying chunk size and the cluster size two separate things.
So, even with a 64k-cluster qcow2 image, the default chunk would be 1M.
Test 185, however, assumes that with the speed limited to 64K, one
iteration results in offset=64K. That will change, as the first
iteration would result in offset=1M independently of the speed.

So, let's explicitly specify what the test wants: set max-chunk to 64K,
so that one iteration is 64K. Note that we don't need to limit
max-workers, as the block-copy rate limiter will handle the situation
and won't start new workers when the speed limit is already reached.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
16
Reviewed-by: Max Reitz <mreitz@redhat.com>
17
Message-Id: <20210116214705.822267-13-vsementsov@virtuozzo.com>
18
Signed-off-by: Max Reitz <mreitz@redhat.com>
19
---
20
tests/qemu-iotests/185 | 3 ++-
21
tests/qemu-iotests/185.out | 3 ++-
22
2 files changed, 4 insertions(+), 2 deletions(-)
23
24
diff --git a/tests/qemu-iotests/185 b/tests/qemu-iotests/185
25
index XXXXXXX..XXXXXXX 100755
26
--- a/tests/qemu-iotests/185
27
+++ b/tests/qemu-iotests/185
28
@@ -XXX,XX +XXX,XX @@ _send_qemu_cmd $h \
29
'target': '$TEST_IMG.copy',
30
'format': '$IMGFMT',
31
'sync': 'full',
32
- 'speed': 65536 } }" \
33
+ 'speed': 65536,
34
+ 'x-perf': {'max-chunk': 65536} } }" \
35
"return"
36
37
# If we don't sleep here 'quit' command races with disk I/O
38
diff --git a/tests/qemu-iotests/185.out b/tests/qemu-iotests/185.out
39
index XXXXXXX..XXXXXXX 100644
40
--- a/tests/qemu-iotests/185.out
41
+++ b/tests/qemu-iotests/185.out
42
@@ -XXX,XX +XXX,XX @@ Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off
43
'target': 'TEST_DIR/t.IMGFMT.copy',
44
'format': 'IMGFMT',
45
'sync': 'full',
46
- 'speed': 65536 } }
47
+ 'speed': 65536,
48
+ 'x-perf': { 'max-chunk': 65536 } } }
49
Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=67108864 lazy_refcounts=off refcount_bits=16
50
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "disk"}}
51
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "disk"}}
52
--
53
2.29.2
54
55
diff view generated by jsdifflib
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
The upcoming change of moving backup to a single block-copy call will
make the copying chunk size and the cluster size two separate things.
So, even with a 64k-cluster qcow2 image, the default chunk would be 1M.
Test 219 depends on the chunk size being specified, so update it to use
an explicit chunk size for backup just as it does for mirror.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
10
Reviewed-by: Max Reitz <mreitz@redhat.com>
11
Message-Id: <20210116214705.822267-14-vsementsov@virtuozzo.com>
12
Signed-off-by: Max Reitz <mreitz@redhat.com>
13
---
14
tests/qemu-iotests/219 | 13 +++++++------
15
1 file changed, 7 insertions(+), 6 deletions(-)
16
17
diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
18
index XXXXXXX..XXXXXXX 100755
19
--- a/tests/qemu-iotests/219
20
+++ b/tests/qemu-iotests/219
21
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('disk.img') as disk_path, \
22
# but related to this also automatic state transitions like job
23
# completion), but still get pause points often enough to avoid making this
24
# test very slow, it's important to have the right ratio between speed and
25
- # buf_size.
26
+ # copy-chunk-size.
27
#
28
- # For backup, buf_size is hard-coded to the source image cluster size (64k),
29
- # so we'll pick the same for mirror. The slice time, i.e. the granularity
30
- # of the rate limiting is 100ms. With a speed of 256k per second, we can
31
- # get four pause points per second. This gives us 250ms per iteration,
32
- # which should be enough to stay deterministic.
33
+ # Chose 64k copy-chunk-size both for mirror (by buf_size) and backup (by
34
+ # x-max-chunk). The slice time, i.e. the granularity of the rate limiting
35
+ # is 100ms. With a speed of 256k per second, we can get four pause points
36
+ # per second. This gives us 250ms per iteration, which should be enough to
37
+ # stay deterministic.
38
39
test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
40
'device': 'drive0-node',
41
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('disk.img') as disk_path, \
42
'target': copy_path,
43
'sync': 'full',
44
'speed': 262144,
45
+ 'x-perf': {'max-chunk': 65536},
46
'auto-finalize': auto_finalize,
47
'auto-dismiss': auto_dismiss,
48
})
49
--
50
2.29.2
51
52
diff view generated by jsdifflib
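The timing reasoning in the updated comment can be double-checked with a few lines of arithmetic (constants as quoted in the patch; this is only an illustration and not part of the test):

    # 256 KiB/s job speed, 64 KiB copy chunks, 100 ms rate-limit slices.
    speed = 262144                         # bytes per second
    chunk = 65536                          # bytes per copy chunk
    assert speed // chunk == 4             # four pause points per second
    assert 1000 * chunk // speed == 250    # 250 ms per iteration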
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Iotest 257 dumps a lot of in-progress information about the backup job,
such as the offset and bitmap dirtiness. A further commit will move
backup to a single block-copy call, which will introduce async parallel
requests instead of plain cluster-by-cluster copying. To keep things
deterministic, allow only one worker (only one copy request at a time)
for this test.
9
10
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
11
Reviewed-by: Max Reitz <mreitz@redhat.com>
12
Message-Id: <20210116214705.822267-15-vsementsov@virtuozzo.com>
13
Signed-off-by: Max Reitz <mreitz@redhat.com>
14
---
15
tests/qemu-iotests/257 | 1 +
16
tests/qemu-iotests/257.out | 306 ++++++++++++++++++-------------------
17
2 files changed, 154 insertions(+), 153 deletions(-)
18
19
diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
20
index XXXXXXX..XXXXXXX 100755
21
--- a/tests/qemu-iotests/257
22
+++ b/tests/qemu-iotests/257
23
@@ -XXX,XX +XXX,XX @@ def blockdev_backup(vm, device, target, sync, **kwargs):
24
target=target,
25
sync=sync,
26
filter_node_name='backup-top',
27
+ x_perf={'max-workers': 1},
28
**kwargs)
29
return result
30
31
diff --git a/tests/qemu-iotests/257.out b/tests/qemu-iotests/257.out
32
index XXXXXXX..XXXXXXX 100644
33
--- a/tests/qemu-iotests/257.out
34
+++ b/tests/qemu-iotests/257.out
35
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
36
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
37
{"return": {}}
38
{}
39
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
40
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
41
{"return": {}}
42
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
43
44
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
45
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
46
{"return": {}}
47
{}
48
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
49
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
50
{"return": {}}
51
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
52
53
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
54
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
55
{"return": {}}
56
{}
57
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
58
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
59
{"return": {}}
60
61
--- Write #2 ---
62
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
63
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
64
{"return": {}}
65
{}
66
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
67
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
68
{"return": {}}
69
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
70
71
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
72
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
73
{"return": {}}
74
{}
75
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
76
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
77
{"return": {}}
78
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
79
{"return": {}}
80
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
81
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
82
{"return": {}}
83
{}
84
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
85
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
86
{"return": {}}
87
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
88
89
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
90
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
91
{"return": {}}
92
{}
93
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
94
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
95
{"return": {}}
96
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
97
98
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
99
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
100
{"return": {}}
101
{}
102
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
103
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
104
{"return": {}}
105
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
106
{"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
107
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
108
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
109
{"return": {}}
110
{}
111
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
112
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
113
{"return": {}}
114
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
115
116
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
117
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
118
{"return": {}}
119
{}
120
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
121
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
122
{"return": {}}
123
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
124
{"return": {}}
125
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
126
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
127
{"return": {}}
128
{}
129
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
130
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
131
{"return": {}}
132
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
133
134
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
135
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
136
{"return": {}}
137
{}
138
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
139
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
140
{"return": {}}
141
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
142
143
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
144
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
145
{"return": {}}
146
{}
147
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
148
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
149
{"return": {}}
150
151
--- Write #2 ---
152
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
153
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
154
{"return": {}}
155
{}
156
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
157
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
158
{"return": {}}
159
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
160
161
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
162
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
163
{"return": {}}
164
{}
165
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
166
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
167
{"return": {}}
168
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
169
{"return": {}}
170
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
171
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
172
{"return": {}}
173
{}
174
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
175
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
176
{"return": {}}
177
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
178
179
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
180
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
181
{"return": {}}
182
{}
183
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
184
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
185
{"return": {}}
186
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
187
188
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
189
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
190
{"return": {}}
191
{}
192
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
193
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
194
{"return": {}}
195
196
--- Write #2 ---
197
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
198
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
199
{"return": {}}
200
{}
201
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
202
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
203
{"return": {}}
204
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
205
206
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
207
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
208
{"return": {}}
209
{}
210
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
211
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
212
{"return": {}}
213
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
214
{"return": {}}
215
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
216
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
217
{"return": {}}
218
{}
219
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
220
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
221
{"return": {}}
222
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
223
224
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
225
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
226
{"return": {}}
227
{}
228
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
229
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
230
{"return": {}}
231
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
232
233
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
234
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
235
{"return": {}}
236
{}
237
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
238
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
239
{"return": {}}
240
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
241
{"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
242
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
243
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
244
{"return": {}}
245
{}
246
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
247
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
248
{"return": {}}
249
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
250
251
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
252
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
253
{"return": {}}
254
{}
255
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
256
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
257
{"return": {}}
258
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
259
{"return": {}}
260
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
261
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
262
{"return": {}}
263
{}
264
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
265
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
266
{"return": {}}
267
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
268
269
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
270
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
271
{"return": {}}
272
{}
273
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
274
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
275
{"return": {}}
276
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
277
278
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
279
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
280
{"return": {}}
281
{}
282
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
283
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
284
{"return": {}}
285
286
--- Write #2 ---
287
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
288
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
289
{"return": {}}
290
{}
291
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
292
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
293
{"return": {}}
294
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
295
296
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
297
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
298
{"return": {}}
299
{}
300
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
301
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
302
{"return": {}}
303
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
304
{"return": {}}
305
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
306
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
307
{"return": {}}
308
{}
309
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
310
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
311
{"return": {}}
312
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
313
314
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
315
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
316
{"return": {}}
317
{}
318
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
319
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
320
{"return": {}}
321
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
322
323
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
324
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
325
{"return": {}}
326
{}
327
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
328
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
329
{"return": {}}
330
331
--- Write #2 ---
332
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
333
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
334
{"return": {}}
335
{}
336
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
337
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
338
{"return": {}}
339
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
340
341
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
342
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
343
{"return": {}}
344
{}
345
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
346
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
347
{"return": {}}
348
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
349
{"return": {}}
350
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
351
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
352
{"return": {}}
353
{}
354
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
355
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
356
{"return": {}}
357
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
358
359
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
360
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
361
{"return": {}}
362
{}
363
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
364
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
365
{"return": {}}
366
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
367
368
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
369
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
370
{"return": {}}
371
{}
372
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
373
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
374
{"return": {}}
375
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
376
{"data": {"device": "backup_1", "error": "Input/output error", "len": 393216, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
377
@@ -XXX,XX +XXX,XX @@ expecting 13 dirty sectors; have 13. OK!
378
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
379
{"return": {}}
380
{}
381
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
382
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
383
{"return": {}}
384
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
385
386
@@ -XXX,XX +XXX,XX @@ expecting 13 dirty sectors; have 13. OK!
387
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
388
{"return": {}}
389
{}
390
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
391
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
392
{"return": {}}
393
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
394
{"return": {}}
395
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
396
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
397
{"return": {}}
398
{}
399
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
400
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
401
{"return": {}}
402
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
403
404
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
405
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
406
{"return": {}}
407
{}
408
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
409
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
410
{"return": {}}
411
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
412
413
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
414
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
415
{"return": {}}
416
{}
417
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1"}}
418
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "bitmap", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
419
{"return": {}}
420
421
--- Write #2 ---
422
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
423
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
424
{"return": {}}
425
{}
426
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
427
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
428
{"return": {}}
429
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
430
431
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
432
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
433
{"return": {}}
434
{}
435
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
436
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
437
{"return": {}}
438
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
439
{"return": {}}
440
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
441
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
442
{"return": {}}
443
{}
444
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
445
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
446
{"return": {}}
447
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
448
449
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
450
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
451
{"return": {}}
452
{}
453
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
454
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
455
{"return": {}}
456
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
457
458
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
459
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
460
{"return": {}}
461
{}
462
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
463
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
464
{"return": {}}
465
466
--- Write #2 ---
467
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
468
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
469
{"return": {}}
470
{}
471
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
472
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
473
{"return": {}}
474
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
475
476
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
477
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
478
{"return": {}}
479
{}
480
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
481
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
482
{"return": {}}
483
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
484
{"return": {}}
485
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
486
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
487
{"return": {}}
488
{}
489
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
490
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
491
{"return": {}}
492
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
493
494
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
495
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
496
{"return": {}}
497
{}
498
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
499
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
500
{"return": {}}
501
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
502
503
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
504
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
505
{"return": {}}
506
{}
507
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
508
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
509
{"return": {}}
510
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
511
{"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
512
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
513
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
514
{"return": {}}
515
{}
516
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
517
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
518
{"return": {}}
519
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
520
521
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
522
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
523
{"return": {}}
524
{}
525
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
526
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
527
{"return": {}}
528
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
529
{"return": {}}
530
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
531
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
532
{"return": {}}
533
{}
534
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
535
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
536
{"return": {}}
537
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
538
539
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
540
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
541
{"return": {}}
542
{}
543
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
544
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
545
{"return": {}}
546
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
547
548
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
549
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
550
{"return": {}}
551
{}
552
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
553
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
554
{"return": {}}
555
556
--- Write #2 ---
557
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
558
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
559
{"return": {}}
560
{}
561
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
562
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
563
{"return": {}}
564
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
565
566
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
567
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
568
{"return": {}}
569
{}
570
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
571
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
572
{"return": {}}
573
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
574
{"return": {}}
575
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
576
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
577
{"return": {}}
578
{}
579
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
580
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
581
{"return": {}}
582
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
583
584
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
585
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
586
{"return": {}}
587
{}
588
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
589
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
590
{"return": {}}
591
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
592
593
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
594
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
595
{"return": {}}
596
{}
597
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
598
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
599
{"return": {}}
600
601
--- Write #2 ---
602
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
603
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
604
{"return": {}}
605
{}
606
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
607
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
608
{"return": {}}
609
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
610
611
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
612
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
613
{"return": {}}
614
{}
615
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
616
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
617
{"return": {}}
618
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
619
{"return": {}}
620
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
621
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
622
{"return": {}}
623
{}
624
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
625
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
626
{"return": {}}
627
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
628
629
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
630
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
631
{"return": {}}
632
{}
633
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
634
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
635
{"return": {}}
636
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
637
638
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
639
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
640
{"return": {}}
641
{}
642
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
643
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
644
{"return": {}}
645
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
646
{"data": {"device": "backup_1", "error": "Input/output error", "len": 67108864, "offset": 983040, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
647
@@ -XXX,XX +XXX,XX @@ expecting 1014 dirty sectors; have 1014. OK!
648
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
649
{"return": {}}
650
{}
651
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
652
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
653
{"return": {}}
654
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
655
656
@@ -XXX,XX +XXX,XX @@ expecting 1014 dirty sectors; have 1014. OK!
657
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
658
{"return": {}}
659
{}
660
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
661
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
662
{"return": {}}
663
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
664
{"return": {}}
665
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
666
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
667
{"return": {}}
668
{}
669
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
670
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
671
{"return": {}}
672
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
673
674
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
675
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
676
{"return": {}}
677
{}
678
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
679
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
680
{"return": {}}
681
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
682
683
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
684
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
685
{"return": {}}
686
{}
687
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1"}}
688
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "full", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
689
{"return": {}}
690
691
--- Write #2 ---
692
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
693
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
694
{"return": {}}
695
{}
696
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
697
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
698
{"return": {}}
699
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
700
701
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
702
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
703
{"return": {}}
704
{}
705
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
706
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
707
{"return": {}}
708
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
709
{"return": {}}
710
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
711
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
712
{"return": {}}
713
{}
714
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
715
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
716
{"return": {}}
717
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
718
719
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
720
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
721
{"return": {}}
722
{}
723
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
724
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
725
{"return": {}}
726
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
727
728
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
729
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
730
{"return": {}}
731
{}
732
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
733
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
734
{"return": {}}
735
736
--- Write #2 ---
737
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
738
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
739
{"return": {}}
740
{}
741
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
742
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
743
{"return": {}}
744
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
745
746
@@ -XXX,XX +XXX,XX @@ expecting 15 dirty sectors; have 15. OK!
747
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
748
{"return": {}}
749
{}
750
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
751
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
752
{"return": {}}
753
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
754
{"return": {}}
755
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
756
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
757
{"return": {}}
758
{}
759
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
760
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
761
{"return": {}}
762
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
763
764
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
765
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
766
{"return": {}}
767
{}
768
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
769
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
770
{"return": {}}
771
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
772
773
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
774
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
775
{"return": {}}
776
{}
777
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
778
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
779
{"return": {}}
780
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
781
{"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
782
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
783
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
784
{"return": {}}
785
{}
786
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
787
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
788
{"return": {}}
789
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
790
791
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
792
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
793
{"return": {}}
794
{}
795
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
796
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
797
{"return": {}}
798
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
799
{"return": {}}
800
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
801
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
802
{"return": {}}
803
{}
804
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
805
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
806
{"return": {}}
807
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
808
809
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
810
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
811
{"return": {}}
812
{}
813
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
814
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
815
{"return": {}}
816
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
817
818
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
819
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
820
{"return": {}}
821
{}
822
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
823
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
824
{"return": {}}
825
826
--- Write #2 ---
827
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
828
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
829
{"return": {}}
830
{}
831
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
832
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
833
{"return": {}}
834
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
835
836
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
837
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
838
{"return": {}}
839
{}
840
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
841
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
842
{"return": {}}
843
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
844
{"return": {}}
845
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
846
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
847
{"return": {}}
848
{}
849
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
850
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
851
{"return": {}}
852
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
853
854
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
855
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
856
{"return": {}}
857
{}
858
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
859
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
860
{"return": {}}
861
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
862
863
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
864
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
865
{"return": {}}
866
{}
867
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
868
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
869
{"return": {}}
870
871
--- Write #2 ---
872
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
873
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
874
{"return": {}}
875
{}
876
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
877
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
878
{"return": {}}
879
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
880
881
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
882
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
883
{"return": {}}
884
{}
885
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
886
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
887
{"return": {}}
888
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
889
{"return": {}}
890
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
891
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
892
{"return": {}}
893
{}
894
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
895
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
896
{"return": {}}
897
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
898
899
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
900
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
901
{"return": {}}
902
{}
903
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
904
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
905
{"return": {}}
906
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
907
908
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
909
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
910
{"return": {}}
911
{}
912
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
913
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
914
{"return": {}}
915
{"data": {"action": "report", "device": "backup_1", "operation": "read"}, "event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
916
{"data": {"device": "backup_1", "error": "Input/output error", "len": 458752, "offset": 65536, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
917
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
918
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
919
{"return": {}}
920
{}
921
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
922
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
923
{"return": {}}
924
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
925
926
@@ -XXX,XX +XXX,XX @@ expecting 14 dirty sectors; have 14. OK!
927
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
928
{"return": {}}
929
{}
930
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
931
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
932
{"return": {}}
933
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
934
{"return": {}}
935
@@ -XXX,XX +XXX,XX @@ write -P0x76 0x3ff0000 0x10000
936
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
937
{"return": {}}
938
{}
939
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0"}}
940
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_0", "sync": "full", "target": "ref_target_0", "x-perf": {"max-workers": 1}}}
941
{"return": {}}
942
{"data": {"device": "ref_backup_0", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
943
944
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
945
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
946
{"return": {}}
947
{}
948
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1"}}
949
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_1", "sync": "full", "target": "ref_target_1", "x-perf": {"max-workers": 1}}}
950
{"return": {}}
951
{"data": {"device": "ref_backup_1", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
952
953
@@ -XXX,XX +XXX,XX @@ expecting 6 dirty sectors; have 6. OK!
954
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
955
{"return": {}}
956
{}
957
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1"}}
958
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_1", "sync": "top", "target": "backup_target_1", "x-perf": {"max-workers": 1}}}
959
{"return": {}}
960
961
--- Write #2 ---
962
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
963
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
964
{"return": {}}
965
{}
966
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2"}}
967
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "ref_backup_2", "sync": "full", "target": "ref_target_2", "x-perf": {"max-workers": 1}}}
968
{"return": {}}
969
{"data": {"device": "ref_backup_2", "len": 67108864, "offset": 67108864, "speed": 0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
970
971
@@ -XXX,XX +XXX,XX @@ expecting 12 dirty sectors; have 12. OK!
972
{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
973
{"return": {}}
974
{}
975
-{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2"}}
976
+{"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "backup_2", "sync": "bitmap", "target": "backup_target_2", "x-perf": {"max-workers": 1}}}
977
{"return": {}}
978
{"execute": "job-finalize", "arguments": {"id": "backup_2"}}
979
{"return": {}}
980
@@ -XXX,XX +XXX,XX @@ qemu_img compare "TEST_DIR/PID-img" "TEST_DIR/PID-fbackup2" ==> Identical, OK!
981
982
-- Sync mode incremental tests --
983
984
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
985
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
986
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
987
988
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
989
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
990
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
991
992
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
993
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
994
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
995
996
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
997
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
998
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}
999
1000
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1001
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1002
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1003
1004
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1005
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1006
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
1007
1008
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1009
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1010
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
1011
1012
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1013
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1014
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1015
1016
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1017
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1018
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
1019
1020
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target"}}
1021
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "incremental", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1022
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be 'on-success' when using sync mode 'incremental'"}}
1023
1024
-- Sync mode bitmap tests --
1025
1026
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1027
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1028
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
1029
1030
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1031
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1032
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
1033
1034
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1035
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1036
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
1037
1038
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1039
+{"execute": "blockdev-backup", "arguments": {"device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1040
{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'bitmap' sync mode"}}
1041
1042
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1043
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1044
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1045
1046
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1047
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1048
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1049
1050
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1051
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1052
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1053
1054
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1055
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1056
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1057
1058
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target"}}
1059
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "bitmap", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1060
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
1061
1062
-- Sync mode full tests --
1063
1064
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1065
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1066
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1067
1068
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1069
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1070
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1071
1072
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1073
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1074
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1075
1076
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1077
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1078
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1079
1080
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1081
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1082
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1083
1084
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1085
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1086
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1087
1088
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1089
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1090
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1091
1092
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1093
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1094
{"error": {"class": "GenericError", "desc": "Bitmap sync mode 'never' has no meaningful effect when combined with sync mode 'full'"}}
1095
1096
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target"}}
1097
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "full", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1098
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
1099
1100
-- Sync mode top tests --
1101
1102
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1103
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1104
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1105
1106
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1107
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1108
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1109
1110
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1111
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1112
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1113
1114
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1115
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1116
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1117
1118
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1119
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1120
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1121
1122
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1123
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1124
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1125
1126
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1127
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1128
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1129
1130
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1131
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1132
{"error": {"class": "GenericError", "desc": "Bitmap sync mode 'never' has no meaningful effect when combined with sync mode 'top'"}}
1133
1134
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target"}}
1135
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "top", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1136
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
1137
1138
-- Sync mode none tests --
1139
1140
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1141
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1142
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1143
1144
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1145
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1146
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1147
1148
-{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1149
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1150
{"error": {"class": "GenericError", "desc": "Cannot specify bitmap sync mode without a bitmap"}}
1151
1152
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1153
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1154
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1155
1156
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1157
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1158
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1159
1160
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1161
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1162
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1163
1164
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1165
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap404", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1166
{"error": {"class": "GenericError", "desc": "Bitmap 'bitmap404' could not be found"}}
1167
1168
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1169
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "on-success", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1170
{"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
1171
1172
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1173
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "always", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1174
{"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
1175
1176
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1177
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "bitmap-mode": "never", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1178
{"error": {"class": "GenericError", "desc": "sync mode 'none' does not produce meaningful bitmap outputs"}}
1179
1180
-{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target"}}
1181
+{"execute": "blockdev-backup", "arguments": {"bitmap": "bitmap0", "device": "drive0", "filter-node-name": "backup-top", "job-id": "api_job", "sync": "none", "target": "backup_target", "x-perf": {"max-workers": 1}}}
1182
{"error": {"class": "GenericError", "desc": "Bitmap sync mode must be given when providing a bitmap"}}
1183
1184
--
1185
2.29.2
1186
1187
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
We are going to stop using this callback in the following commit.
4
Still, the callback handling code will be dropped in a separate commit.
5
So, for now, let's make it optional.
6
7
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
8
Reviewed-by: Max Reitz <mreitz@redhat.com>
9
Message-Id: <20210116214705.822267-16-vsementsov@virtuozzo.com>
10
Signed-off-by: Max Reitz <mreitz@redhat.com>
11
---
12
block/block-copy.c | 4 +++-
13
1 file changed, 3 insertions(+), 1 deletion(-)
14
15
diff --git a/block/block-copy.c b/block/block-copy.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/block/block-copy.c
18
+++ b/block/block-copy.c
19
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
20
t->call_state->error_is_read = error_is_read;
21
} else {
22
progress_work_done(t->s->progress, t->bytes);
23
- t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
24
+ if (t->s->progress_bytes_callback) {
25
+ t->s->progress_bytes_callback(t->bytes, t->s->progress_opaque);
26
+ }
27
}
28
co_put_to_shres(t->s->mem, t->bytes);
29
block_copy_task_end(t, ret);
30
--
31
2.29.2
32
33
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
A further commit will add a benchmark
4
(scripts/simplebench/bench-backup.py), which will show that backup
5
works better with async parallel requests (previous commit) and with
6
copy_range disabled. So, let's disable copy_range by default.
7
8
Note: the option was added several commits ago with its default set to true
9
to preserve the old behavior (the feature was enabled unconditionally);
10
only now are we changing the default.
11
12
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
13
Reviewed-by: Max Reitz <mreitz@redhat.com>
14
Message-Id: <20210116214705.822267-19-vsementsov@virtuozzo.com>
15
Signed-off-by: Max Reitz <mreitz@redhat.com>
16
---
17
qapi/block-core.json | 2 +-
18
blockdev.c | 2 +-
19
2 files changed, 2 insertions(+), 2 deletions(-)
20
21
diff --git a/qapi/block-core.json b/qapi/block-core.json
22
index XXXXXXX..XXXXXXX 100644
23
--- a/qapi/block-core.json
24
+++ b/qapi/block-core.json
25
@@ -XXX,XX +XXX,XX @@
26
# Optional parameters for backup. These parameters don't affect
27
# functionality, but may significantly affect performance.
28
#
29
-# @use-copy-range: Use copy offloading. Default true.
30
+# @use-copy-range: Use copy offloading. Default false.
31
#
32
# @max-workers: Maximum number of parallel requests for the sustained background
33
# copying process. Doesn't influence copy-before-write operations.
34
diff --git a/blockdev.c b/blockdev.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/blockdev.c
37
+++ b/blockdev.c
38
@@ -XXX,XX +XXX,XX @@ static BlockJob *do_backup_common(BackupCommon *backup,
39
{
40
BlockJob *job = NULL;
41
BdrvDirtyBitmap *bmap = NULL;
42
- BackupPerf perf = { .use_copy_range = true, .max_workers = 64 };
43
+ BackupPerf perf = { .max_workers = 64 };
44
int job_flags = JOB_DEFAULT;
45
46
if (!backup->has_speed) {
47
--
48
2.29.2
49
50
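
For illustration only: since @use-copy-range now defaults to false, a QMP client that still wants copy offloading has to request it explicitly through @x-perf. The following sketch builds such a request; the node names, job id and worker count are made-up values, not taken from the series.

import json

# Hypothetical QMP request: with use-copy-range now defaulting to false,
# copy offloading has to be enabled explicitly via x-perf.
backup_cmd = {
    "execute": "blockdev-backup",
    "arguments": {
        "device": "drive0",          # made-up node name
        "target": "target0",         # made-up node name
        "job-id": "backup0",         # made-up job id
        "sync": "full",
        "x-perf": {
            "use-copy-range": True,  # opt back in to copy offloading
            "max-workers": 64,       # matches the built-in default above
        },
    },
}

# The resulting JSON is what would be sent over the QMP socket.
print(json.dumps(backup_cmd, indent=2))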
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
1
From: Paolo Bonzini <pbonzini@redhat.com>
2
2
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
3
This adds a CoMutex around the existing CoQueue. Because the write-side
4
Reviewed-by: Max Reitz <mreitz@redhat.com>
4
can just take CoMutex, the old "writer" field is not necessary anymore.
5
Message-Id: <20210116214705.822267-21-vsementsov@virtuozzo.com>
5
Instead of removing it altogether, count the number of pending writers
6
Signed-off-by: Max Reitz <mreitz@redhat.com>
6
during a read-side critical section and forbid further readers from
7
entering.
8
9
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10
Reviewed-by: Fam Zheng <famz@redhat.com>
11
Message-id: 20170213181244.16297-7-pbonzini@redhat.com
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
7
---
13
---
8
include/block/block-copy.h | 2 +-
14
include/qemu/coroutine.h | 3 ++-
9
block/backup-top.c | 2 +-
15
util/qemu-coroutine-lock.c | 35 ++++++++++++++++++++++++-----------
10
block/block-copy.c | 10 ++--------
16
2 files changed, 26 insertions(+), 12 deletions(-)
11
3 files changed, 4 insertions(+), 10 deletions(-)
12
17
13
diff --git a/include/block/block-copy.h b/include/block/block-copy.h
18
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
14
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
15
--- a/include/block/block-copy.h
20
--- a/include/qemu/coroutine.h
16
+++ b/include/block/block-copy.h
21
+++ b/include/qemu/coroutine.h
17
@@ -XXX,XX +XXX,XX @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
22
@@ -XXX,XX +XXX,XX @@ bool qemu_co_queue_empty(CoQueue *queue);
18
int64_t offset, int64_t *count);
23
19
24
20
int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
25
typedef struct CoRwlock {
21
- bool ignore_ratelimit, bool *error_is_read);
26
- bool writer;
22
+ bool ignore_ratelimit);
27
+ int pending_writer;
23
28
int reader;
24
/*
29
+ CoMutex mutex;
25
* Run block-copy in a coroutine, create corresponding BlockCopyCallState
30
CoQueue queue;
26
diff --git a/block/backup-top.c b/block/backup-top.c
31
} CoRwlock;
32
33
diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
27
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
28
--- a/block/backup-top.c
35
--- a/util/qemu-coroutine-lock.c
29
+++ b/block/backup-top.c
36
+++ b/util/qemu-coroutine-lock.c
30
@@ -XXX,XX +XXX,XX @@ static coroutine_fn int backup_top_cbw(BlockDriverState *bs, uint64_t offset,
37
@@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_init(CoRwlock *lock)
31
off = QEMU_ALIGN_DOWN(offset, s->cluster_size);
38
{
32
end = QEMU_ALIGN_UP(offset + bytes, s->cluster_size);
39
memset(lock, 0, sizeof(*lock));
33
40
qemu_co_queue_init(&lock->queue);
34
- return block_copy(s->bcs, off, end - off, true, NULL);
41
+ qemu_co_mutex_init(&lock->mutex);
35
+ return block_copy(s->bcs, off, end - off, true);
36
}
42
}
37
43
38
static int coroutine_fn backup_top_co_pdiscard(BlockDriverState *bs,
44
void qemu_co_rwlock_rdlock(CoRwlock *lock)
39
diff --git a/block/block-copy.c b/block/block-copy.c
45
{
40
index XXXXXXX..XXXXXXX 100644
46
Coroutine *self = qemu_coroutine_self();
41
--- a/block/block-copy.c
47
42
+++ b/block/block-copy.c
48
- while (lock->writer) {
43
@@ -XXX,XX +XXX,XX @@ static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
49
- qemu_co_queue_wait(&lock->queue, NULL);
50
+ qemu_co_mutex_lock(&lock->mutex);
51
+ /* For fairness, wait if a writer is in line. */
52
+ while (lock->pending_writer) {
53
+ qemu_co_queue_wait(&lock->queue, &lock->mutex);
54
}
55
lock->reader++;
56
+ qemu_co_mutex_unlock(&lock->mutex);
57
+
58
+ /* The rest of the read-side critical section is run without the mutex. */
59
self->locks_held++;
44
}
60
}
45
61
46
int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
62
@@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_unlock(CoRwlock *lock)
47
- bool ignore_ratelimit, bool *error_is_read)
63
Coroutine *self = qemu_coroutine_self();
48
+ bool ignore_ratelimit)
64
65
assert(qemu_in_coroutine());
66
- if (lock->writer) {
67
- lock->writer = false;
68
+ if (!lock->reader) {
69
+ /* The critical section started in qemu_co_rwlock_wrlock. */
70
qemu_co_queue_restart_all(&lock->queue);
71
} else {
72
+ self->locks_held--;
73
+
74
+ qemu_co_mutex_lock(&lock->mutex);
75
lock->reader--;
76
assert(lock->reader >= 0);
77
/* Wakeup only one waiting writer */
78
@@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_unlock(CoRwlock *lock)
79
qemu_co_queue_next(&lock->queue);
80
}
81
}
82
- self->locks_held--;
83
+ qemu_co_mutex_unlock(&lock->mutex);
84
}
85
86
void qemu_co_rwlock_wrlock(CoRwlock *lock)
49
{
87
{
50
BlockCopyCallState call_state = {
88
- Coroutine *self = qemu_coroutine_self();
51
.s = s,
52
@@ -XXX,XX +XXX,XX @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
53
.max_workers = BLOCK_COPY_MAX_WORKERS,
54
};
55
56
- int ret = block_copy_common(&call_state);
57
-
89
-
58
- if (error_is_read && ret < 0) {
90
- while (lock->writer || lock->reader) {
59
- *error_is_read = call_state.error_is_read;
91
- qemu_co_queue_wait(&lock->queue, NULL);
60
- }
92
+ qemu_co_mutex_lock(&lock->mutex);
61
-
93
+ lock->pending_writer++;
62
- return ret;
94
+ while (lock->reader) {
63
+ return block_copy_common(&call_state);
95
+ qemu_co_queue_wait(&lock->queue, &lock->mutex);
96
}
97
- lock->writer = true;
98
- self->locks_held++;
99
+ lock->pending_writer--;
100
+
101
+ /* The rest of the write-side critical section is run with
102
+ * the mutex taken, so that lock->reader remains zero.
103
+ * There is no need to update self->locks_held.
104
+ */
64
}
105
}
65
66
static void coroutine_fn block_copy_async_co_entry(void *opaque)
67
--
106
--
68
2.29.2
107
2.9.3
69
108
70
109
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
4
Reviewed-by: Max Reitz <mreitz@redhat.com>
5
Message-Id: <20210116214705.822267-22-vsementsov@virtuozzo.com>
6
Signed-off-by: Max Reitz <mreitz@redhat.com>
7
---
8
scripts/simplebench/bench_block_job.py | 2 +-
9
1 file changed, 1 insertion(+), 1 deletion(-)
10
11
diff --git a/scripts/simplebench/bench_block_job.py b/scripts/simplebench/bench_block_job.py
12
index XXXXXXX..XXXXXXX 100755
13
--- a/scripts/simplebench/bench_block_job.py
14
+++ b/scripts/simplebench/bench_block_job.py
15
@@ -XXX,XX +XXX,XX @@
16
-#!/usr/bin/env python
17
+#!/usr/bin/env python3
18
#
19
# Benchmark block jobs
20
#
21
--
22
2.29.2
23
24
Deleted patch
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
1
3
Add an argument to allow passing additional block-job options.
4
5
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
6
Reviewed-by: Max Reitz <mreitz@redhat.com>
7
Message-Id: <20210116214705.822267-23-vsementsov@virtuozzo.com>
8
Signed-off-by: Max Reitz <mreitz@redhat.com>
9
---
10
scripts/simplebench/bench-example.py | 2 +-
11
scripts/simplebench/bench_block_job.py | 11 +++++++----
12
2 files changed, 8 insertions(+), 5 deletions(-)
13
14
diff --git a/scripts/simplebench/bench-example.py b/scripts/simplebench/bench-example.py
15
index XXXXXXX..XXXXXXX 100644
16
--- a/scripts/simplebench/bench-example.py
17
+++ b/scripts/simplebench/bench-example.py
18
@@ -XXX,XX +XXX,XX @@ from bench_block_job import bench_block_copy, drv_file, drv_nbd
19
20
def bench_func(env, case):
21
""" Handle one "cell" of benchmarking table. """
22
- return bench_block_copy(env['qemu_binary'], env['cmd'],
23
+ return bench_block_copy(env['qemu_binary'], env['cmd'], {},
24
case['source'], case['target'])
25
26
27
diff --git a/scripts/simplebench/bench_block_job.py b/scripts/simplebench/bench_block_job.py
28
index XXXXXXX..XXXXXXX 100755
29
--- a/scripts/simplebench/bench_block_job.py
30
+++ b/scripts/simplebench/bench_block_job.py
31
@@ -XXX,XX +XXX,XX @@ def bench_block_job(cmd, cmd_args, qemu_args):
32
33
34
# Bench backup or mirror
35
-def bench_block_copy(qemu_binary, cmd, source, target):
36
+def bench_block_copy(qemu_binary, cmd, cmd_options, source, target):
37
"""Helper to run bench_block_job() for mirror or backup"""
38
assert cmd in ('blockdev-backup', 'blockdev-mirror')
39
40
source['node-name'] = 'source'
41
target['node-name'] = 'target'
42
43
- return bench_block_job(cmd,
44
- {'job-id': 'job0', 'device': 'source',
45
- 'target': 'target', 'sync': 'full'},
46
+ cmd_options['job-id'] = 'job0'
47
+ cmd_options['device'] = 'source'
48
+ cmd_options['target'] = 'target'
49
+ cmd_options['sync'] = 'full'
50
+
51
+ return bench_block_job(cmd, cmd_options,
52
[qemu_binary,
53
'-blockdev', json.dumps(source),
54
'-blockdev', json.dumps(target)])
55
--
56
2.29.2
57
58
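
For reference, a hedged usage sketch of the extended helper: file names and the QEMU binary name are placeholders, and the x-perf setting is only an example, but the call shape follows the new signature above. bench-example.py passes an empty dict here, so existing behaviour is unchanged.

# Sketch only: file names and the qemu binary name are placeholders;
# bench_block_copy() is the helper extended above to accept cmd_options.
from bench_block_job import bench_block_copy

source = {'driver': 'raw',
          'file': {'driver': 'file', 'filename': '/tmp/src.raw'}}
target = {'driver': 'raw',
          'file': {'driver': 'file', 'filename': '/tmp/dst.raw'}}

result = bench_block_copy('qemu-system-x86_64',           # assumed binary name
                          'blockdev-backup',
                          {'x-perf': {'max-workers': 1}},  # extra job options
                          source, target)
print(result)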
Deleted patch
1
From: David Edmondson <david.edmondson@oracle.com>
2
1
3
When a call to fcntl(2) for the purpose of adding file locks fails
4
with an error other than EAGAIN or EACCES, report the error returned
5
by fcntl.
6
7
EAGAIN or EACCES are elided as they are considered to be common
8
failures, indicating that a conflicting lock is held by another
9
process.
10
11
No errors are elided when removing file locks.
12
13
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
14
Message-Id: <20210113164447.2545785-1-david.edmondson@oracle.com>
15
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
16
Signed-off-by: Max Reitz <mreitz@redhat.com>
17
---
18
block/file-posix.c | 38 ++++++++++++++++++++++++++++----------
1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index XXXXXXX..XXXXXXX 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -XXX,XX +XXX,XX @@ typedef struct RawPosixAIOData {
static int cdrom_reopen(BlockDriverState *bs);
#endif

+/*
+ * Elide EAGAIN and EACCES details when failing to lock, as this
+ * indicates that the specified file region is already locked by
+ * another process, which is considered a common scenario.
+ */
+#define raw_lock_error_setg_errno(errp, err, fmt, ...) \
+ do { \
+ if ((err) == EAGAIN || (err) == EACCES) { \
+ error_setg((errp), (fmt), ## __VA_ARGS__); \
+ } else { \
+ error_setg_errno((errp), (err), (fmt), ## __VA_ARGS__); \
+ } \
+ } while (0)
+
#if defined(__NetBSD__)
static int raw_normalize_devicepath(const char **filename, Error **errp)
{
@@ -XXX,XX +XXX,XX @@ static int raw_apply_lock_bytes(BDRVRawState *s, int fd,
if ((perm_lock_bits & bit) && !(locked_perm & bit)) {
ret = qemu_lock_fd(fd, off, 1, false);
if (ret) {
- error_setg(errp, "Failed to lock byte %d", off);
+ raw_lock_error_setg_errno(errp, -ret, "Failed to lock byte %d",
+ off);
return ret;
} else if (s) {
s->locked_perm |= bit;
@@ -XXX,XX +XXX,XX @@ static int raw_apply_lock_bytes(BDRVRawState *s, int fd,
} else if (unlock && (locked_perm & bit) && !(perm_lock_bits & bit)) {
ret = qemu_unlock_fd(fd, off, 1);
if (ret) {
- error_setg(errp, "Failed to unlock byte %d", off);
+ error_setg_errno(errp, -ret, "Failed to unlock byte %d", off);
return ret;
} else if (s) {
s->locked_perm &= ~bit;
@@ -XXX,XX +XXX,XX @@ static int raw_apply_lock_bytes(BDRVRawState *s, int fd,
if ((shared_perm_lock_bits & bit) && !(locked_shared_perm & bit)) {
ret = qemu_lock_fd(fd, off, 1, false);
if (ret) {
- error_setg(errp, "Failed to lock byte %d", off);
+ raw_lock_error_setg_errno(errp, -ret, "Failed to lock byte %d",
+ off);
return ret;
} else if (s) {
s->locked_shared_perm |= bit;
@@ -XXX,XX +XXX,XX @@ static int raw_apply_lock_bytes(BDRVRawState *s, int fd,
!(shared_perm_lock_bits & bit)) {
ret = qemu_unlock_fd(fd, off, 1);
if (ret) {
- error_setg(errp, "Failed to unlock byte %d", off);
+ error_setg_errno(errp, -ret, "Failed to unlock byte %d", off);
return ret;
} else if (s) {
s->locked_shared_perm &= ~bit;
@@ -XXX,XX +XXX,XX @@ static int raw_check_lock_bytes(int fd, uint64_t perm, uint64_t shared_perm,
ret = qemu_lock_fd_test(fd, off, 1, true);
if (ret) {
char *perm_name = bdrv_perm_names(p);
- error_setg(errp,
- "Failed to get \"%s\" lock",
- perm_name);
+
+ raw_lock_error_setg_errno(errp, -ret,
+ "Failed to get \"%s\" lock",
+ perm_name);
g_free(perm_name);
return ret;
}
@@ -XXX,XX +XXX,XX @@ static int raw_check_lock_bytes(int fd, uint64_t perm, uint64_t shared_perm,
ret = qemu_lock_fd_test(fd, off, 1, true);
if (ret) {
char *perm_name = bdrv_perm_names(p);
- error_setg(errp,
- "Failed to get shared \"%s\" lock",
- perm_name);
+
+ raw_lock_error_setg_errno(errp, -ret,
+ "Failed to get shared \"%s\" lock",
+ perm_name);
g_free(perm_name);
return ret;
}
--
2.29.2

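To make the user-visible effect concrete, here is a small sketch (in Python, purely illustrative; the byte offset and errno values are made up) of the message selection that the raw_lock_error_setg_errno() macro above performs, relying on the fact that error_setg_errno() appends the strerror() text:

  import errno, os

  def lock_error_message(err, off):
      if err in (errno.EAGAIN, errno.EACCES):
          # Common case: another process holds the lock; keep the short message.
          return "Failed to lock byte %d" % off
      # Unexpected failure: append the errno detail, as error_setg_errno() does.
      return "Failed to lock byte %d: %s" % (off, os.strerror(err))

  print(lock_error_message(errno.EACCES, 100))  # Failed to lock byte 100
  print(lock_error_message(errno.ENOLCK, 100))  # Failed to lock byte 100: No locks available
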
From: Alberto Garcia <berto@igalia.com>

Signed-off-by: Alberto Garcia <berto@igalia.com>
Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20210112170540.2912-1-berto@igalia.com>
[mreitz: Add "# group:" line]
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
tests/qemu-iotests/313 | 104 +++++++++++++++++++++++++++++++++++++
tests/qemu-iotests/313.out | 29 +++++++++++
tests/qemu-iotests/group | 1 +
3 files changed, 134 insertions(+)
create mode 100755 tests/qemu-iotests/313
create mode 100644 tests/qemu-iotests/313.out

diff --git a/tests/qemu-iotests/313 b/tests/qemu-iotests/313
new file mode 100755
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/qemu-iotests/313
@@ -XXX,XX +XXX,XX @@
+#!/usr/bin/env bash
+# group: rw auto quick
+#
+# Test for the regression fixed in commit c8bf9a9169
+#
+# Copyright (C) 2020 Igalia, S.L.
+# Author: Alberto Garcia <berto@igalia.com>
+# Based on a test case by Maxim Levitsky <mlevitsk@redhat.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+
+# creator
+owner=berto@igalia.com
+
+seq=`basename $0`
+echo "QA output created by $seq"
+
+status=1 # failure is the default!
+
+_cleanup()
+{
+ _cleanup_test_img
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+. ./common.rc
+. ./common.filter
+
+_supported_fmt qcow2
+_supported_proto file
+_supported_os Linux
+_unsupported_imgopts cluster_size refcount_bits extended_l2 compat=0.10 data_file
+
+# The cluster size must be at least the granularity of the mirror job (4KB)
+# Note that larger cluster sizes will produce very large images (several GBs)
+cluster_size=4096
+refcount_bits=64 # Make it equal to the L2 entry size for convenience
+options="cluster_size=${cluster_size},refcount_bits=${refcount_bits}"
+
+# Number of refcount entries per refcount blocks
+ref_entries=$(( ${cluster_size} * 8 / ${refcount_bits} ))
+
+# Number of data clusters needed to fill a refcount block
+# Equals ${ref_entries} minus two (one L2 table and one refcount block)
+data_clusters_per_refblock=$(( ${ref_entries} - 2 ))
+
+# Number of entries in the refcount cache
+ref_blocks=4
+
+# Write enough data clusters to fill the refcount cache and allocate
+# one more refcount block.
+# Subtract 3 clusters from the total: qcow2 header, refcount table, L1 table
+total_data_clusters=$(( ${data_clusters_per_refblock} * ${ref_blocks} + 1 - 3 ))
+
+# Total size to write in bytes
+total_size=$(( ${total_data_clusters} * ${cluster_size} ))
+
+echo
+echo '### Create the image'
+echo
+TEST_IMG_FILE=$TEST_IMG.base _make_test_img -o $options $total_size | _filter_img_create_size
+
+echo
+echo '### Write data to allocate more refcount blocks than the cache can hold'
+echo
+$QEMU_IO -c "write -P 1 0 $total_size" $TEST_IMG.base | _filter_qemu_io
+
+echo
+echo '### Create an overlay'
+echo
+_make_test_img -F $IMGFMT -b $TEST_IMG.base -o $options | _filter_img_create_size
+
+echo
+echo '### Fill the overlay with zeroes'
+echo
+$QEMU_IO -c "write -z 0 $total_size" $TEST_IMG | _filter_qemu_io
+
+echo
+echo '### Commit changes to the base image'
+echo
+$QEMU_IMG commit $TEST_IMG
+
+echo
+echo '### Check the base image'
+echo
+$QEMU_IMG check $TEST_IMG.base
+
+# success, all done
+echo "*** done"
+rm -f $seq.full
+status=0
diff --git a/tests/qemu-iotests/313.out b/tests/qemu-iotests/313.out
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/qemu-iotests/313.out
@@ -XXX,XX +XXX,XX @@
+QA output created by 313
+
+### Create the image
+
+Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=SIZE
+
+### Write data to allocate more refcount blocks than the cache can hold
+
+wrote 8347648/8347648 bytes at offset 0
+7.961 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+### Create an overlay
+
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=SIZE backing_file=TEST_DIR/t.IMGFMT.base backing_fmt=IMGFMT
+
+### Fill the overlay with zeroes
+
+wrote 8347648/8347648 bytes at offset 0
+7.961 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+### Commit changes to the base image
+
+Image committed.
+
+### Check the base image
+
+No errors were found on the image.
+Image end offset: 8396800
+*** done
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -XXX,XX +XXX,XX @@
309 rw auto quick
310 rw quick
312 rw quick
+313 rw auto quick
--
2.29.2

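For reference, the sizes used by the test work out as follows; this is just the script's shell arithmetic redone as a quick Python sketch (no new values, only what the test above already defines):

  cluster_size = 4096
  refcount_bits = 64
  ref_entries = cluster_size * 8 // refcount_bits             # 512 refcount entries per refblock
  data_clusters_per_refblock = ref_entries - 2                 # 510
  ref_blocks = 4                                               # refcount cache entries
  total_data_clusters = data_clusters_per_refblock * ref_blocks + 1 - 3   # 2038
  total_size = total_data_clusters * cluster_size
  print(total_size)  # 8347648, matching the "wrote 8347648/8347648 bytes" lines in 313.out
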
Commit 0afec75734331 removed the 'change' QMP command, so we can no
longer test it in 118.

Fixes: 0afec75734331a0b52fa3aa4235220eda8c7846f
('qmp: remove deprecated "change" command')
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210126104833.57026-1-mreitz@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
tests/qemu-iotests/118 | 20 +-------------------
tests/qemu-iotests/118.out | 4 ++--
2 files changed, 3 insertions(+), 21 deletions(-)

diff --git a/tests/qemu-iotests/118 b/tests/qemu-iotests/118
index XXXXXXX..XXXXXXX 100755
--- a/tests/qemu-iotests/118
+++ b/tests/qemu-iotests/118
@@ -XXX,XX +XXX,XX @@
#!/usr/bin/env python3
# group: rw
#
-# Test case for the QMP 'change' command and all other associated
-# commands
+# Test case for media change monitor commands
#
# Copyright (C) 2015 Red Hat, Inc.
#
@@ -XXX,XX +XXX,XX @@ class ChangeBaseClass(iotests.QMPTestCase):

class GeneralChangeTestsBaseClass(ChangeBaseClass):

- def test_change(self):
- # 'change' requires a drive name, so skip the test for blockdev
- if not self.use_drive:
- return
-
- result = self.vm.qmp('change', device='drive0', target=new_img,
- arg=iotests.imgfmt)
- self.assert_qmp(result, 'return', {})
-
- self.wait_for_open()
- self.wait_for_close()
-
- result = self.vm.qmp('query-block')
- if self.has_real_tray:
- self.assert_qmp(result, 'return[0]/tray_open', False)
- self.assert_qmp(result, 'return[0]/inserted/image/filename', new_img)
-
def test_blockdev_change_medium(self):
result = self.vm.qmp('blockdev-change-medium',
id=self.device_name, filename=new_img,
diff --git a/tests/qemu-iotests/118.out b/tests/qemu-iotests/118.out
index XXXXXXX..XXXXXXX 100644
--- a/tests/qemu-iotests/118.out
+++ b/tests/qemu-iotests/118.out
@@ -XXX,XX +XXX,XX @@
-.......................................................................................................................................................................
+...........................................................................................................................................................
----------------------------------------------------------------------
-Ran 167 tests
+Ran 155 tests

OK
--
2.29.2

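For completeness, the modern replacement that the remaining test cases exercise looks roughly like this (a sketch only; 'vm' stands for a launched iotests.VM() and the device id and image path are placeholders, not taken from the test):

  result = vm.qmp('blockdev-change-medium',
                  id='drive0-device',           # id of the removable device
                  filename='/path/to/new.img',  # image to insert
                  format='qcow2')
  # On success QMP returns an empty dictionary: {'return': {}}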