The following changes since commit ac793156f650ae2d77834932d72224175ee69086:

  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20201020-1' into staging (2020-10-20 21:11:35 +0100)

are available in the Git repository at:

  https://gitlab.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 32a3fd65e7e3551337fd26bfc0e2f899d70c028c:

  iotests: add commit top->base cases to 274 (2020-10-22 09:55:39 +0100)

----------------------------------------------------------------
Pull request

v2:
 * Fix format string issues on 32-bit hosts [Peter]
 * Fix qemu-nbd.c CONFIG_POSIX ifdef issue [Eric]
 * Fix missing eventfd.h header on macOS [Peter]
 * Drop unreliable vhost-user-blk test (will send a new patch when ready) [Peter]

This pull request contains the vhost-user-blk server by Coiby Xu along with my
additions, block/nvme.c alignment and hardware error statistics by Philippe
Mathieu-Daudé, and bdrv_co_block_status_above() fixes by Vladimir
Sementsov-Ogievskiy.
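
For readers new to the feature, here is a rough sketch of how the new
vhost-user-blk export can be started over QMP once the series is applied
(the node name and socket path are illustrative only; see the
qapi/block-export.json changes in this series for the authoritative schema):

  { "execute": "block-export-add",
    "arguments": {
        "type": "vhost-user-blk",
        "id": "export0",
        "node-name": "disk0",
        "addr": { "type": "unix", "path": "/tmp/vhost-user-blk.sock" }
    } }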
----------------------------------------------------------------

Coiby Xu (6):
  libvhost-user: Allow vu_message_read to be replaced
  libvhost-user: remove watch for kick_fd when de-initialize vu-dev
  util/vhost-user-server: generic vhost user server
  block: move logical block size check function to a common utility
    function
  block/export: vhost-user block device backend server
  MAINTAINERS: Add vhost-user block device backend server maintainer

Philippe Mathieu-Daudé (1):
  block/nvme: Add driver statistics for access alignment and hw errors

Stefan Hajnoczi (16):
  util/vhost-user-server: s/fileds/fields/ typo fix
  util/vhost-user-server: drop unnecessary QOM cast
  util/vhost-user-server: drop unnecessary watch deletion
  block/export: consolidate request structs into VuBlockReq
  util/vhost-user-server: drop unused DevicePanicNotifier
  util/vhost-user-server: fix memory leak in vu_message_read()
  util/vhost-user-server: check EOF when reading payload
  util/vhost-user-server: rework vu_client_trip() coroutine lifecycle
  block/export: report flush errors
  block/export: convert vhost-user-blk server to block export API
  util/vhost-user-server: move header to include/
  util/vhost-user-server: use static library in meson.build
  qemu-storage-daemon: avoid compiling blockdev_ss twice
  block: move block exports to libblockdev
  block/export: add iothread and fixed-iothread options
  block/export: add vhost-user-blk multi-queue support

Vladimir Sementsov-Ogievskiy (5):
  block/io: fix bdrv_co_block_status_above
  block/io: bdrv_common_block_status_above: support include_base
  block/io: bdrv_common_block_status_above: support bs == base
  block/io: fix bdrv_is_allocated_above
  iotests: add commit top->base cases to 274

 MAINTAINERS                                |   9 +
 qapi/block-core.json                       |  24 +-
 qapi/block-export.json                     |  36 +-
 block/coroutines.h                         |   2 +
 block/export/vhost-user-blk-server.h       |  19 +
 contrib/libvhost-user/libvhost-user.h      |  21 +
 include/qemu/vhost-user-server.h           |  65 +++
 util/block-helpers.h                       |  19 +
 block/export/export.c                      |  37 +-
 block/export/vhost-user-blk-server.c       | 431 ++++++++++++++++++++
 block/io.c                                 | 132 +++---
 block/nvme.c                               |  27 ++
 block/qcow2.c                              |  16 +-
 contrib/libvhost-user/libvhost-user-glib.c |   2 +-
 contrib/libvhost-user/libvhost-user.c      |  15 +-
 hw/core/qdev-properties-system.c           |  31 +-
 nbd/server.c                               |   2 -
 qemu-nbd.c                                 |  21 +-
 softmmu/vl.c                               |   4 +
 stubs/blk-exp-close-all.c                  |   7 +
 tests/vhost-user-bridge.c                  |   2 +
 tools/virtiofsd/fuse_virtio.c              |   4 +-
 util/block-helpers.c                       |  46 +++
 util/vhost-user-server.c                   | 446 +++++++++++++++++++++
 block/export/meson.build                   |   3 +-
 contrib/libvhost-user/meson.build          |   1 +
 meson.build                                |  22 +-
 nbd/meson.build                            |   2 +
 storage-daemon/meson.build                 |   3 +-
 stubs/meson.build                          |   1 +
 tests/qemu-iotests/274                     |  20 +
 tests/qemu-iotests/274.out                 |  68 ++++
 util/meson.build                           |   4 +
 33 files changed, 1420 insertions(+), 122 deletions(-)
 create mode 100644 block/export/vhost-user-blk-server.h
 create mode 100644 include/qemu/vhost-user-server.h
 create mode 100644 util/block-helpers.h
 create mode 100644 block/export/vhost-user-blk-server.c
 create mode 100644 stubs/blk-exp-close-all.c
 create mode 100644 util/block-helpers.c
 create mode 100644 util/vhost-user-server.c

--
2.26.2

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Keep statistics of some hardware errors, and number of
aligned/unaligned I/O accesses.

QMP example booting a full RHEL 8.3 aarch64 guest:

{ "execute": "query-blockstats" }
{
    "return": [
        {
            "device": "",
            "node-name": "drive0",
            "stats": {
                "flush_total_time_ns": 6026948,
                "wr_highest_offset": 3383991230464,
                "wr_total_time_ns": 807450995,
                "failed_wr_operations": 0,
                "failed_rd_operations": 0,
                "wr_merged": 3,
                "wr_bytes": 50133504,
                "failed_unmap_operations": 0,
                "failed_flush_operations": 0,
                "account_invalid": false,
                "rd_total_time_ns": 1846979900,
                "flush_operations": 130,
                "wr_operations": 659,
                "rd_merged": 1192,
                "rd_bytes": 218244096,
                "account_failed": false,
                "idle_time_ns": 2678641497,
                "rd_operations": 7406,
            },
            "driver-specific": {
                "driver": "nvme",
                "completion-errors": 0,
                "unaligned-accesses": 2959,
                "aligned-accesses": 4477
            },
            "qdev": "/machine/peripheral-anon/device[0]/virtio-backend"
        }
    ]
}

Suggested-by: Stefan Hajnoczi <stefanha@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Message-id: 20201001162939.1567915-1-philmd@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 qapi/block-core.json | 24 +++++++++++++++++++++++-
 block/nvme.c         | 27 +++++++++++++++++++++++++++
 2 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -XXX,XX +XXX,XX @@
     'discard-nb-failed': 'uint64',
     'discard-bytes-ok': 'uint64' } }
 
+##
+# @BlockStatsSpecificNvme:
+#
+# NVMe driver statistics
+#
+# @completion-errors: The number of completion errors.
+#
+# @aligned-accesses: The number of aligned accesses performed by
+#                    the driver.
+#
+# @unaligned-accesses: The number of unaligned accesses performed by
+#                      the driver.
+#
+# Since: 5.2
+##
+{ 'struct': 'BlockStatsSpecificNvme',
+  'data': {
+    'completion-errors': 'uint64',
+    'aligned-accesses': 'uint64',
+    'unaligned-accesses': 'uint64' } }
+
 ##
 # @BlockStatsSpecific:
 #
@@ -XXX,XX +XXX,XX @@
   'discriminator': 'driver',
   'data': {
       'file': 'BlockStatsSpecificFile',
-      'host_device': 'BlockStatsSpecificFile' } }
+      'host_device': 'BlockStatsSpecificFile',
+      'nvme': 'BlockStatsSpecificNvme' } }
 
 ##
 # @BlockStats:
diff --git a/block/nvme.c b/block/nvme.c
index XXXXXXX..XXXXXXX 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -XXX,XX +XXX,XX @@ struct BDRVNVMeState {
 
     /* PCI address (required for nvme_refresh_filename()) */
     char *device;
+
+    struct {
+        uint64_t completion_errors;
+        uint64_t aligned_accesses;
+        uint64_t unaligned_accesses;
+    } stats;
 };
 
 #define NVME_BLOCK_OPT_DEVICE "device"
@@ -XXX,XX +XXX,XX @@ static bool nvme_process_completion(NVMeQueuePair *q)
             break;
         }
         ret = nvme_translate_error(c);
+        if (ret) {
+            s->stats.completion_errors++;
+        }
         q->cq.head = (q->cq.head + 1) % NVME_QUEUE_SIZE;
         if (!q->cq.head) {
             q->cq_phase = !q->cq_phase;
@@ -XXX,XX +XXX,XX @@ static int nvme_co_prw(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
     assert(QEMU_IS_ALIGNED(bytes, s->page_size));
     assert(bytes <= s->max_transfer);
     if (nvme_qiov_aligned(bs, qiov)) {
+        s->stats.aligned_accesses++;
         return nvme_co_prw_aligned(bs, offset, bytes, qiov, is_write, flags);
     }
+    s->stats.unaligned_accesses++;
     trace_nvme_prw_buffered(s, offset, bytes, qiov->niov, is_write);
     buf = qemu_try_memalign(s->page_size, bytes);
 
@@ -XXX,XX +XXX,XX @@ static void nvme_unregister_buf(BlockDriverState *bs, void *host)
     qemu_vfio_dma_unmap(s->vfio, host);
 }
 
+static BlockStatsSpecific *nvme_get_specific_stats(BlockDriverState *bs)
+{
+    BlockStatsSpecific *stats = g_new(BlockStatsSpecific, 1);
+    BDRVNVMeState *s = bs->opaque;
+
+    stats->driver = BLOCKDEV_DRIVER_NVME;
+    stats->u.nvme = (BlockStatsSpecificNvme) {
+        .completion_errors = s->stats.completion_errors,
+        .aligned_accesses = s->stats.aligned_accesses,
+        .unaligned_accesses = s->stats.unaligned_accesses,
+    };
+
+    return stats;
+}
+
 static const char *const nvme_strong_runtime_opts[] = {
     NVME_BLOCK_OPT_DEVICE,
     NVME_BLOCK_OPT_NAMESPACE,
@@ -XXX,XX +XXX,XX @@ static BlockDriver bdrv_nvme = {
     .bdrv_refresh_filename    = nvme_refresh_filename,
     .bdrv_refresh_limits      = nvme_refresh_limits,
     .strong_runtime_opts      = nvme_strong_runtime_opts,
+    .bdrv_get_specific_stats  = nvme_get_specific_stats,
 
     .bdrv_detach_aio_context  = nvme_detach_aio_context,
     .bdrv_attach_aio_context  = nvme_attach_aio_context,
--
2.26.2

From: Coiby Xu <coiby.xu@gmail.com>

Allow vu_message_read to be replaced by one which will make use of the
QIOChannel functions. Thus reading vhost-user message won't stall the
guest. For slave channel, we still use the default vu_message_read.

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200918080912.321299-2-coiby.xu@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 contrib/libvhost-user/libvhost-user.h      | 21 +++++++++++++++++++++
 contrib/libvhost-user/libvhost-user-glib.c |  2 +-
 contrib/libvhost-user/libvhost-user.c      | 14 +++++++-------
 tests/vhost-user-bridge.c                  |  2 ++
 tools/virtiofsd/fuse_virtio.c              |  4 ++--
 5 files changed, 33 insertions(+), 10 deletions(-)
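
A minimal sketch of the resulting vu_init() call, for illustration only
(the my_* callbacks are hypothetical stubs, not part of this patch; passing
NULL for read_msg keeps the built-in vu_message_read_default() reader):

#include "libvhost-user.h"

/* Illustrative stubs only -- a real backend supplies working callbacks. */
static void my_panic(VuDev *dev, const char *err) { /* log and tear down */ }
static void my_set_watch(VuDev *dev, int fd, int condition,
                         vu_watch_cb cb, void *data) { /* add fd to event loop */ }
static void my_remove_watch(VuDev *dev, int fd) { /* drop fd from event loop */ }

/* Custom reader, e.g. one that uses QIOChannel and yields in a coroutine
 * instead of blocking on the socket. */
static bool my_read_msg(VuDev *dev, int sock, VhostUserMsg *vmsg)
{
    return false; /* stub */
}

static const VuDevIface my_iface = { 0 };

static bool start_backend(VuDev *dev, int socket_fd)
{
    return vu_init(dev, 1 /* max_queues */, socket_fd, my_panic,
                   my_read_msg /* or NULL for the default reader */,
                   my_set_watch, my_remove_watch, &my_iface);
}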

diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -XXX,XX +XXX,XX @@
  */
 #define VHOST_USER_MAX_RAM_SLOTS 32
 
+#define VHOST_USER_HDR_SIZE offsetof(VhostUserMsg, payload.u64)
+
 typedef enum VhostSetConfigType {
     VHOST_SET_CONFIG_TYPE_MASTER = 0,
     VHOST_SET_CONFIG_TYPE_MIGRATION = 1,
@@ -XXX,XX +XXX,XX @@ typedef uint64_t (*vu_get_features_cb) (VuDev *dev);
 typedef void (*vu_set_features_cb) (VuDev *dev, uint64_t features);
 typedef int (*vu_process_msg_cb) (VuDev *dev, VhostUserMsg *vmsg,
                                   int *do_reply);
+typedef bool (*vu_read_msg_cb) (VuDev *dev, int sock, VhostUserMsg *vmsg);
 typedef void (*vu_queue_set_started_cb) (VuDev *dev, int qidx, bool started);
 typedef bool (*vu_queue_is_processed_in_order_cb) (VuDev *dev, int qidx);
 typedef int (*vu_get_config_cb) (VuDev *dev, uint8_t *config, uint32_t len);
@@ -XXX,XX +XXX,XX @@ struct VuDev {
     bool broken;
     uint16_t max_queues;
 
+    /* @read_msg: custom method to read vhost-user message
+     *
+     * Read data from vhost_user socket fd and fill up
+     * the passed VhostUserMsg *vmsg struct.
+     *
+     * If reading fails, it should close the received set of file
+     * descriptors as socket message's auxiliary data.
+     *
+     * For the details, please refer to vu_message_read in libvhost-user.c
+     * which will be used by default if not custom method is provided when
+     * calling vu_init
+     *
+     * Returns: true if vhost-user message successfully received,
+     *          otherwise return false.
+     *
+     */
+    vu_read_msg_cb read_msg;
     /* @set_watch: add or update the given fd to the watch set,
      * call cb when condition is met */
     vu_set_watch_cb set_watch;
@@ -XXX,XX +XXX,XX @@ bool vu_init(VuDev *dev,
              uint16_t max_queues,
              int socket,
              vu_panic_cb panic,
+             vu_read_msg_cb read_msg,
              vu_set_watch_cb set_watch,
              vu_remove_watch_cb remove_watch,
              const VuDevIface *iface);
diff --git a/contrib/libvhost-user/libvhost-user-glib.c b/contrib/libvhost-user/libvhost-user-glib.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/libvhost-user-glib.c
+++ b/contrib/libvhost-user/libvhost-user-glib.c
@@ -XXX,XX +XXX,XX @@ vug_init(VugDev *dev, uint16_t max_queues, int socket,
     g_assert(dev);
     g_assert(iface);
 
-    if (!vu_init(&dev->parent, max_queues, socket, panic, set_watch,
+    if (!vu_init(&dev->parent, max_queues, socket, panic, NULL, set_watch,
                  remove_watch, iface)) {
         return false;
     }
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -XXX,XX +XXX,XX @@
 /* The version of inflight buffer */
 #define INFLIGHT_VERSION 1
 
-#define VHOST_USER_HDR_SIZE offsetof(VhostUserMsg, payload.u64)
-
 /* The version of the protocol we support */
 #define VHOST_USER_VERSION 1
 #define LIBVHOST_USER_DEBUG 0
@@ -XXX,XX +XXX,XX @@ have_userfault(void)
 }
 
 static bool
-vu_message_read(VuDev *dev, int conn_fd, VhostUserMsg *vmsg)
+vu_message_read_default(VuDev *dev, int conn_fd, VhostUserMsg *vmsg)
 {
     char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * sizeof(int))] = {};
     struct iovec iov = {
@@ -XXX,XX +XXX,XX @@ vu_process_message_reply(VuDev *dev, const VhostUserMsg *vmsg)
         goto out;
     }
 
-    if (!vu_message_read(dev, dev->slave_fd, &msg_reply)) {
+    if (!vu_message_read_default(dev, dev->slave_fd, &msg_reply)) {
         goto out;
     }
 
@@ -XXX,XX +XXX,XX @@ vu_set_mem_table_exec_postcopy(VuDev *dev, VhostUserMsg *vmsg)
     /* Wait for QEMU to confirm that it's registered the handler for the
      * faults.
      */
-    if (!vu_message_read(dev, dev->sock, vmsg) ||
+    if (!dev->read_msg(dev, dev->sock, vmsg) ||
         vmsg->size != sizeof(vmsg->payload.u64) ||
         vmsg->payload.u64 != 0) {
         vu_panic(dev, "failed to receive valid ack for postcopy set-mem-table");
@@ -XXX,XX +XXX,XX @@ vu_dispatch(VuDev *dev)
     int reply_requested;
     bool need_reply, success = false;
 
-    if (!vu_message_read(dev, dev->sock, &vmsg)) {
+    if (!dev->read_msg(dev, dev->sock, &vmsg)) {
         goto end;
     }
 
@@ -XXX,XX +XXX,XX @@ vu_init(VuDev *dev,
         uint16_t max_queues,
         int socket,
         vu_panic_cb panic,
+        vu_read_msg_cb read_msg,
         vu_set_watch_cb set_watch,
         vu_remove_watch_cb remove_watch,
         const VuDevIface *iface)
@@ -XXX,XX +XXX,XX @@ vu_init(VuDev *dev,
 
     dev->sock = socket;
     dev->panic = panic;
+    dev->read_msg = read_msg ? read_msg : vu_message_read_default;
     dev->set_watch = set_watch;
     dev->remove_watch = remove_watch;
     dev->iface = iface;
@@ -XXX,XX +XXX,XX @@ static void _vu_queue_notify(VuDev *dev, VuVirtq *vq, bool sync)
 
     vu_message_write(dev, dev->slave_fd, &vmsg);
     if (ack) {
-        vu_message_read(dev, dev->slave_fd, &vmsg);
+        vu_message_read_default(dev, dev->slave_fd, &vmsg);
     }
     return;
 }
diff --git a/tests/vhost-user-bridge.c b/tests/vhost-user-bridge.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/vhost-user-bridge.c
+++ b/tests/vhost-user-bridge.c
@@ -XXX,XX +XXX,XX @@ vubr_accept_cb(int sock, void *ctx)
                  VHOST_USER_BRIDGE_MAX_QUEUES,
                  conn_fd,
                  vubr_panic,
+                 NULL,
                  vubr_set_watch,
                  vubr_remove_watch,
                  &vuiface)) {
@@ -XXX,XX +XXX,XX @@ vubr_new(const char *path, bool client)
                  VHOST_USER_BRIDGE_MAX_QUEUES,
                  dev->sock,
                  vubr_panic,
+                 NULL,
                  vubr_set_watch,
                  vubr_remove_watch,
                  &vuiface)) {
diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index XXXXXXX..XXXXXXX 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -XXX,XX +XXX,XX @@ int virtio_session_mount(struct fuse_session *se)
     se->vu_socketfd = data_sock;
     se->virtio_dev->se = se;
     pthread_rwlock_init(&se->virtio_dev->vu_dispatch_rwlock, NULL);
-    vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, fv_set_watch,
-            fv_remove_watch, &fv_iface);
+    vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, NULL,
+            fv_set_watch, fv_remove_watch, &fv_iface);
 
     return 0;
 }
--
2.26.2

From: Coiby Xu <coiby.xu@gmail.com>

When the client is running in gdb and quit command is run in gdb,
QEMU will still dispatch the event which will cause segment fault in
the callback function.

Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20200918080912.321299-3-coiby.xu@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 contrib/libvhost-user/libvhost-user.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -XXX,XX +XXX,XX @@ vu_deinit(VuDev *dev)
     }
 
     if (vq->kick_fd != -1) {
+        dev->remove_watch(dev, vq->kick_fd);
         close(vq->kick_fd);
         vq->kick_fd = -1;
     }
--
2.26.2

From: Coiby Xu <coiby.xu@gmail.com>

Sharing QEMU devices via vhost-user protocol.

Only one vhost-user client can connect to the server one time.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20200918080912.321299-4-coiby.xu@gmail.com
[Fixed size_t %lu -> %zu format string compiler error.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.h |  65 ++++++
 util/vhost-user-server.c | 428 +++++++++++++++++++++++++++++++++++++++
 util/meson.build         |   1 +
 3 files changed, 494 insertions(+)
 create mode 100644 util/vhost-user-server.h
 create mode 100644 util/vhost-user-server.c
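
As a rough illustration of the API this patch introduces (not part of the
patch itself; the callback table and the caller below are placeholders), an
export wires up a VuDevIface and hands it to the server like this:

#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "qapi/error.h"
#include "vhost-user-server.h"

/* Placeholder device callbacks; a real backend fills in VuDevIface. */
static const VuDevIface my_vu_iface = { 0 };

static bool start_my_export(VuServer *server, SocketAddress *unix_addr,
                            Error **errp)
{
    /*
     * The server listens on unix_addr and accepts a single vhost-user
     * client at a time; message processing runs in the given AioContext.
     */
    return vhost_user_server_start(server, unix_addr, qemu_get_aio_context(),
                                   1 /* max_queues */,
                                   NULL /* device_panic_notifier */,
                                   &my_vu_iface, errp);
}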
25
diff --git a/MAINTAINERS b/MAINTAINERS
24
diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
26
index XXXXXXX..XXXXXXX 100644
27
--- a/MAINTAINERS
28
+++ b/MAINTAINERS
29
@@ -XXX,XX +XXX,XX @@ F: qtest.c
30
F: accel/qtest.c
31
F: tests/qtest/
32
33
+Device Fuzzing
34
+M: Alexander Bulekov <alxndr@bu.edu>
35
+R: Paolo Bonzini <pbonzini@redhat.com>
36
+R: Bandan Das <bsd@redhat.com>
37
+R: Stefan Hajnoczi <stefanha@redhat.com>
38
+S: Maintained
39
+F: tests/qtest/fuzz/
40
+
41
Register API
42
M: Alistair Francis <alistair@alistair23.me>
43
S: Maintained
44
diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include
45
new file mode 100644
25
new file mode 100644
46
index XXXXXXX..XXXXXXX
26
index XXXXXXX..XXXXXXX
47
--- /dev/null
27
--- /dev/null
48
+++ b/tests/qtest/fuzz/Makefile.include
28
+++ b/util/vhost-user-server.h
49
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@
50
+QEMU_PROG_FUZZ=qemu-fuzz-$(TARGET_NAME)$(EXESUF)
30
+/*
51
+
31
+ * Sharing QEMU devices via vhost-user protocol
52
+fuzz-obj-y += tests/qtest/libqtest.o
32
+ *
53
+fuzz-obj-y += tests/qtest/fuzz/fuzz.o # Fuzzer skeleton
33
+ * Copyright (c) Coiby Xu <coiby.xu@gmail.com>.
54
+
34
+ * Copyright (c) 2020 Red Hat, Inc.
55
+FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest
35
+ *
56
diff --git a/tests/qtest/fuzz/fuzz.c b/tests/qtest/fuzz/fuzz.c
36
+ * This work is licensed under the terms of the GNU GPL, version 2 or
37
+ * later. See the COPYING file in the top-level directory.
38
+ */
39
+
40
+#ifndef VHOST_USER_SERVER_H
41
+#define VHOST_USER_SERVER_H
42
+
43
+#include "contrib/libvhost-user/libvhost-user.h"
44
+#include "io/channel-socket.h"
45
+#include "io/channel-file.h"
46
+#include "io/net-listener.h"
47
+#include "qemu/error-report.h"
48
+#include "qapi/error.h"
49
+#include "standard-headers/linux/virtio_blk.h"
50
+
51
+typedef struct VuFdWatch {
52
+ VuDev *vu_dev;
53
+ int fd; /*kick fd*/
54
+ void *pvt;
55
+ vu_watch_cb cb;
56
+ bool processing;
57
+ QTAILQ_ENTRY(VuFdWatch) next;
58
+} VuFdWatch;
59
+
60
+typedef struct VuServer VuServer;
61
+typedef void DevicePanicNotifierFn(VuServer *server);
62
+
63
+struct VuServer {
64
+ QIONetListener *listener;
65
+ AioContext *ctx;
66
+ DevicePanicNotifierFn *device_panic_notifier;
67
+ int max_queues;
68
+ const VuDevIface *vu_iface;
69
+ VuDev vu_dev;
70
+ QIOChannel *ioc; /* The I/O channel with the client */
71
+ QIOChannelSocket *sioc; /* The underlying data channel with the client */
72
+ /* IOChannel for fd provided via VHOST_USER_SET_SLAVE_REQ_FD */
73
+ QIOChannel *ioc_slave;
74
+ QIOChannelSocket *sioc_slave;
75
+ Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
76
+ QTAILQ_HEAD(, VuFdWatch) vu_fd_watches;
77
+ /* restart coroutine co_trip if AIOContext is changed */
78
+ bool aio_context_changed;
79
+ bool processing_msg;
80
+};
81
+
82
+bool vhost_user_server_start(VuServer *server,
83
+ SocketAddress *unix_socket,
84
+ AioContext *ctx,
85
+ uint16_t max_queues,
86
+ DevicePanicNotifierFn *device_panic_notifier,
87
+ const VuDevIface *vu_iface,
88
+ Error **errp);
89
+
90
+void vhost_user_server_stop(VuServer *server);
91
+
92
+void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx);
93
+
94
+#endif /* VHOST_USER_SERVER_H */
95
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
57
new file mode 100644
96
new file mode 100644
58
index XXXXXXX..XXXXXXX
97
index XXXXXXX..XXXXXXX
59
--- /dev/null
98
--- /dev/null
60
+++ b/tests/qtest/fuzz/fuzz.c
99
+++ b/util/vhost-user-server.c
61
@@ -XXX,XX +XXX,XX @@
100
@@ -XXX,XX +XXX,XX @@
62
+/*
101
+/*
63
+ * fuzzing driver
102
+ * Sharing QEMU devices via vhost-user protocol
64
+ *
103
+ *
65
+ * Copyright Red Hat Inc., 2019
104
+ * Copyright (c) Coiby Xu <coiby.xu@gmail.com>.
105
+ * Copyright (c) 2020 Red Hat, Inc.
66
+ *
106
+ *
67
+ * Authors:
107
+ * This work is licensed under the terms of the GNU GPL, version 2 or
68
+ * Alexander Bulekov <alxndr@bu.edu>
108
+ * later. See the COPYING file in the top-level directory.
109
+ */
110
+#include "qemu/osdep.h"
111
+#include "qemu/main-loop.h"
112
+#include "vhost-user-server.h"
113
+
114
+static void vmsg_close_fds(VhostUserMsg *vmsg)
115
+{
116
+ int i;
117
+ for (i = 0; i < vmsg->fd_num; i++) {
118
+ close(vmsg->fds[i]);
119
+ }
120
+}
121
+
122
+static void vmsg_unblock_fds(VhostUserMsg *vmsg)
123
+{
124
+ int i;
125
+ for (i = 0; i < vmsg->fd_num; i++) {
126
+ qemu_set_nonblock(vmsg->fds[i]);
127
+ }
128
+}
129
+
130
+static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
131
+ gpointer opaque);
132
+
133
+static void close_client(VuServer *server)
134
+{
135
+ /*
136
+ * Before closing the client
137
+ *
138
+ * 1. Let vu_client_trip stop processing new vhost-user msg
139
+ *
140
+ * 2. remove kick_handler
141
+ *
142
+ * 3. wait for the kick handler to be finished
143
+ *
144
+ * 4. wait for the current vhost-user msg to be finished processing
145
+ */
146
+
147
+ QIOChannelSocket *sioc = server->sioc;
148
+ /* When this is set vu_client_trip will stop new processing vhost-user message */
149
+ server->sioc = NULL;
150
+
151
+ VuFdWatch *vu_fd_watch, *next;
152
+ QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
153
+ aio_set_fd_handler(server->ioc->ctx, vu_fd_watch->fd, true, NULL,
154
+ NULL, NULL, NULL);
155
+ }
156
+
157
+ while (!QTAILQ_EMPTY(&server->vu_fd_watches)) {
158
+ QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
159
+ if (!vu_fd_watch->processing) {
160
+ QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
161
+ g_free(vu_fd_watch);
162
+ }
163
+ }
164
+ }
165
+
166
+ while (server->processing_msg) {
167
+ if (server->ioc->read_coroutine) {
168
+ server->ioc->read_coroutine = NULL;
169
+ qio_channel_set_aio_fd_handler(server->ioc, server->ioc->ctx, NULL,
170
+ NULL, server->ioc);
171
+ server->processing_msg = false;
172
+ }
173
+ }
174
+
175
+ vu_deinit(&server->vu_dev);
176
+ object_unref(OBJECT(sioc));
177
+ object_unref(OBJECT(server->ioc));
178
+}
179
+
180
+static void panic_cb(VuDev *vu_dev, const char *buf)
181
+{
182
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
183
+
184
+ /* avoid while loop in close_client */
185
+ server->processing_msg = false;
186
+
187
+ if (buf) {
188
+ error_report("vu_panic: %s", buf);
189
+ }
190
+
191
+ if (server->sioc) {
192
+ close_client(server);
193
+ }
194
+
195
+ if (server->device_panic_notifier) {
196
+ server->device_panic_notifier(server);
197
+ }
198
+
199
+ /*
200
+ * Set the callback function for network listener so another
201
+ * vhost-user client can connect to this server
202
+ */
203
+ qio_net_listener_set_client_func(server->listener,
204
+ vu_accept,
205
+ server,
206
+ NULL);
207
+}
208
+
209
+static bool coroutine_fn
210
+vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
211
+{
212
+ struct iovec iov = {
213
+ .iov_base = (char *)vmsg,
214
+ .iov_len = VHOST_USER_HDR_SIZE,
215
+ };
216
+ int rc, read_bytes = 0;
217
+ Error *local_err = NULL;
218
+ /*
219
+ * Store fds/nfds returned from qio_channel_readv_full into
220
+ * temporary variables.
221
+ *
222
+ * VhostUserMsg is a packed structure, gcc will complain about passing
223
+ * pointer to a packed structure member if we pass &VhostUserMsg.fd_num
224
+ * and &VhostUserMsg.fds directly when calling qio_channel_readv_full,
225
+ * thus two temporary variables nfds and fds are used here.
226
+ */
227
+ size_t nfds = 0, nfds_t = 0;
228
+ const size_t max_fds = G_N_ELEMENTS(vmsg->fds);
229
+ int *fds_t = NULL;
230
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
231
+ QIOChannel *ioc = server->ioc;
232
+
233
+ if (!ioc) {
234
+ error_report_err(local_err);
235
+ goto fail;
236
+ }
237
+
238
+ assert(qemu_in_coroutine());
239
+ do {
240
+ /*
241
+ * qio_channel_readv_full may have short reads, keeping calling it
242
+ * until getting VHOST_USER_HDR_SIZE or 0 bytes in total
243
+ */
244
+ rc = qio_channel_readv_full(ioc, &iov, 1, &fds_t, &nfds_t, &local_err);
245
+ if (rc < 0) {
246
+ if (rc == QIO_CHANNEL_ERR_BLOCK) {
247
+ qio_channel_yield(ioc, G_IO_IN);
248
+ continue;
249
+ } else {
250
+ error_report_err(local_err);
251
+ return false;
252
+ }
253
+ }
254
+ read_bytes += rc;
255
+ if (nfds_t > 0) {
256
+ if (nfds + nfds_t > max_fds) {
257
+ error_report("A maximum of %zu fds are allowed, "
258
+ "however got %zu fds now",
259
+ max_fds, nfds + nfds_t);
260
+ goto fail;
261
+ }
262
+ memcpy(vmsg->fds + nfds, fds_t,
263
+ nfds_t *sizeof(vmsg->fds[0]));
264
+ nfds += nfds_t;
265
+ g_free(fds_t);
266
+ }
267
+ if (read_bytes == VHOST_USER_HDR_SIZE || rc == 0) {
268
+ break;
269
+ }
270
+ iov.iov_base = (char *)vmsg + read_bytes;
271
+ iov.iov_len = VHOST_USER_HDR_SIZE - read_bytes;
272
+ } while (true);
273
+
274
+ vmsg->fd_num = nfds;
275
+ /* qio_channel_readv_full will make socket fds blocking, unblock them */
276
+ vmsg_unblock_fds(vmsg);
277
+ if (vmsg->size > sizeof(vmsg->payload)) {
278
+ error_report("Error: too big message request: %d, "
279
+ "size: vmsg->size: %u, "
280
+ "while sizeof(vmsg->payload) = %zu",
281
+ vmsg->request, vmsg->size, sizeof(vmsg->payload));
282
+ goto fail;
283
+ }
284
+
285
+ struct iovec iov_payload = {
286
+ .iov_base = (char *)&vmsg->payload,
287
+ .iov_len = vmsg->size,
288
+ };
289
+ if (vmsg->size) {
290
+ rc = qio_channel_readv_all_eof(ioc, &iov_payload, 1, &local_err);
291
+ if (rc == -1) {
292
+ error_report_err(local_err);
293
+ goto fail;
294
+ }
295
+ }
296
+
297
+ return true;
298
+
299
+fail:
300
+ vmsg_close_fds(vmsg);
301
+
302
+ return false;
303
+}
304
+
305
+
306
+static void vu_client_start(VuServer *server);
307
+static coroutine_fn void vu_client_trip(void *opaque)
308
+{
309
+ VuServer *server = opaque;
310
+
311
+ while (!server->aio_context_changed && server->sioc) {
312
+ server->processing_msg = true;
313
+ vu_dispatch(&server->vu_dev);
314
+ server->processing_msg = false;
315
+ }
316
+
317
+ if (server->aio_context_changed && server->sioc) {
318
+ server->aio_context_changed = false;
319
+ vu_client_start(server);
320
+ }
321
+}
322
+
323
+static void vu_client_start(VuServer *server)
324
+{
325
+ server->co_trip = qemu_coroutine_create(vu_client_trip, server);
326
+ aio_co_enter(server->ctx, server->co_trip);
327
+}
328
+
329
+/*
330
+ * a wrapper for vu_kick_cb
69
+ *
331
+ *
70
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
332
+ * since aio_dispatch can only pass one user data pointer to the
71
+ * See the COPYING file in the top-level directory.
333
+ * callback function, pack VuDev and pvt into a struct. Then unpack it
72
+ *
334
+ * and pass them to vu_kick_cb
73
+ */
335
+ */
74
+
336
+static void kick_handler(void *opaque)
75
+#include "qemu/osdep.h"
337
+{
76
+
338
+ VuFdWatch *vu_fd_watch = opaque;
77
+#include <wordexp.h>
339
+ vu_fd_watch->processing = true;
78
+
340
+ vu_fd_watch->cb(vu_fd_watch->vu_dev, 0, vu_fd_watch->pvt);
79
+#include "sysemu/qtest.h"
341
+ vu_fd_watch->processing = false;
80
+#include "sysemu/runstate.h"
342
+}
81
+#include "sysemu/sysemu.h"
343
+
82
+#include "qemu/main-loop.h"
344
+
83
+#include "tests/qtest/libqtest.h"
345
+static VuFdWatch *find_vu_fd_watch(VuServer *server, int fd)
84
+#include "tests/qtest/libqos/qgraph.h"
346
+{
85
+#include "fuzz.h"
347
+
86
+
348
+ VuFdWatch *vu_fd_watch, *next;
87
+#define MAX_EVENT_LOOPS 10
349
+ QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
88
+
350
+ if (vu_fd_watch->fd == fd) {
89
+typedef struct FuzzTargetState {
351
+ return vu_fd_watch;
90
+ FuzzTarget *target;
91
+ QSLIST_ENTRY(FuzzTargetState) target_list;
92
+} FuzzTargetState;
93
+
94
+typedef QSLIST_HEAD(, FuzzTargetState) FuzzTargetList;
95
+
96
+static const char *fuzz_arch = TARGET_NAME;
97
+
98
+static FuzzTargetList *fuzz_target_list;
99
+static FuzzTarget *fuzz_target;
100
+static QTestState *fuzz_qts;
101
+
102
+
103
+
104
+void flush_events(QTestState *s)
105
+{
106
+ int i = MAX_EVENT_LOOPS;
107
+ while (g_main_context_pending(NULL) && i-- > 0) {
108
+ main_loop_wait(false);
109
+ }
110
+}
111
+
112
+static QTestState *qtest_setup(void)
113
+{
114
+ qtest_server_set_send_handler(&qtest_client_inproc_recv, &fuzz_qts);
115
+ return qtest_inproc_init(&fuzz_qts, false, fuzz_arch,
116
+ &qtest_server_inproc_recv);
117
+}
118
+
119
+void fuzz_add_target(const FuzzTarget *target)
120
+{
121
+ FuzzTargetState *tmp;
122
+ FuzzTargetState *target_state;
123
+ if (!fuzz_target_list) {
124
+ fuzz_target_list = g_new0(FuzzTargetList, 1);
125
+ }
126
+
127
+ QSLIST_FOREACH(tmp, fuzz_target_list, target_list) {
128
+ if (g_strcmp0(tmp->target->name, target->name) == 0) {
129
+ fprintf(stderr, "Error: Fuzz target name %s already in use\n",
130
+ target->name);
131
+ abort();
132
+ }
133
+ }
134
+ target_state = g_new0(FuzzTargetState, 1);
135
+ target_state->target = g_new0(FuzzTarget, 1);
136
+ *(target_state->target) = *target;
137
+ QSLIST_INSERT_HEAD(fuzz_target_list, target_state, target_list);
138
+}
139
+
140
+
141
+
142
+static void usage(char *path)
143
+{
144
+ printf("Usage: %s --fuzz-target=FUZZ_TARGET [LIBFUZZER ARGUMENTS]\n", path);
145
+ printf("where FUZZ_TARGET is one of:\n");
146
+ FuzzTargetState *tmp;
147
+ if (!fuzz_target_list) {
148
+ fprintf(stderr, "Fuzz target list not initialized\n");
149
+ abort();
150
+ }
151
+ QSLIST_FOREACH(tmp, fuzz_target_list, target_list) {
152
+ printf(" * %s : %s\n", tmp->target->name,
153
+ tmp->target->description);
154
+ }
155
+ exit(0);
156
+}
157
+
158
+static FuzzTarget *fuzz_get_target(char* name)
159
+{
160
+ FuzzTargetState *tmp;
161
+ if (!fuzz_target_list) {
162
+ fprintf(stderr, "Fuzz target list not initialized\n");
163
+ abort();
164
+ }
165
+
166
+ QSLIST_FOREACH(tmp, fuzz_target_list, target_list) {
167
+ if (strcmp(tmp->target->name, name) == 0) {
168
+ return tmp->target;
169
+ }
352
+ }
170
+ }
353
+ }
171
+ return NULL;
354
+ return NULL;
172
+}
355
+}
173
+
356
+
174
+
357
+static void
175
+/* Executed for each fuzzing-input */
358
+set_watch(VuDev *vu_dev, int fd, int vu_evt,
176
+int LLVMFuzzerTestOneInput(const unsigned char *Data, size_t Size)
359
+ vu_watch_cb cb, void *pvt)
177
+{
360
+{
361
+
362
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
363
+ g_assert(vu_dev);
364
+ g_assert(fd >= 0);
365
+ g_assert(cb);
366
+
367
+ VuFdWatch *vu_fd_watch = find_vu_fd_watch(server, fd);
368
+
369
+ if (!vu_fd_watch) {
370
+ VuFdWatch *vu_fd_watch = g_new0(VuFdWatch, 1);
371
+
372
+ QTAILQ_INSERT_TAIL(&server->vu_fd_watches, vu_fd_watch, next);
373
+
374
+ vu_fd_watch->fd = fd;
375
+ vu_fd_watch->cb = cb;
376
+ qemu_set_nonblock(fd);
377
+ aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
378
+ NULL, NULL, vu_fd_watch);
379
+ vu_fd_watch->vu_dev = vu_dev;
380
+ vu_fd_watch->pvt = pvt;
381
+ }
382
+}
383
+
384
+
385
+static void remove_watch(VuDev *vu_dev, int fd)
386
+{
387
+ VuServer *server;
388
+ g_assert(vu_dev);
389
+ g_assert(fd >= 0);
390
+
391
+ server = container_of(vu_dev, VuServer, vu_dev);
392
+
393
+ VuFdWatch *vu_fd_watch = find_vu_fd_watch(server, fd);
394
+
395
+ if (!vu_fd_watch) {
396
+ return;
397
+ }
398
+ aio_set_fd_handler(server->ioc->ctx, fd, true, NULL, NULL, NULL, NULL);
399
+
400
+ QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
401
+ g_free(vu_fd_watch);
402
+}
403
+
404
+
405
+static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
406
+ gpointer opaque)
407
+{
408
+ VuServer *server = opaque;
409
+
410
+ if (server->sioc) {
411
+ warn_report("Only one vhost-user client is allowed to "
412
+ "connect the server one time");
413
+ return;
414
+ }
415
+
416
+ if (!vu_init(&server->vu_dev, server->max_queues, sioc->fd, panic_cb,
417
+ vu_message_read, set_watch, remove_watch, server->vu_iface)) {
418
+ error_report("Failed to initialize libvhost-user");
419
+ return;
420
+ }
421
+
178
+ /*
422
+ /*
179
+ * Do the pre-fuzz-initialization before the first fuzzing iteration,
423
+ * Unset the callback function for network listener to make another
180
+ * instead of before the actual fuzz loop. This is needed since libfuzzer
424
+ * vhost-user client keeping waiting until this client disconnects
181
+ * may fork off additional workers, prior to the fuzzing loop, and if
182
+ * pre_fuzz() sets up e.g. shared memory, this should be done for the
183
+ * individual worker processes
184
+ */
425
+ */
185
+ static int pre_fuzz_done;
426
+ qio_net_listener_set_client_func(server->listener,
186
+ if (!pre_fuzz_done && fuzz_target->pre_fuzz) {
427
+ NULL,
187
+ fuzz_target->pre_fuzz(fuzz_qts);
428
+ NULL,
188
+ pre_fuzz_done = true;
429
+ NULL);
189
+ }
430
+ server->sioc = sioc;
190
+
191
+ fuzz_target->fuzz(fuzz_qts, Data, Size);
192
+ return 0;
193
+}
194
+
195
+/* Executed once, prior to fuzzing */
196
+int LLVMFuzzerInitialize(int *argc, char ***argv, char ***envp)
197
+{
198
+
199
+ char *target_name;
200
+
201
+ /* Initialize qgraph and modules */
202
+ qos_graph_init();
203
+ module_call_init(MODULE_INIT_FUZZ_TARGET);
204
+ module_call_init(MODULE_INIT_QOM);
205
+ module_call_init(MODULE_INIT_LIBQOS);
206
+
207
+ if (*argc <= 1) {
208
+ usage(**argv);
209
+ }
210
+
211
+ /* Identify the fuzz target */
212
+ target_name = (*argv)[1];
213
+ if (!strstr(target_name, "--fuzz-target=")) {
214
+ usage(**argv);
215
+ }
216
+
217
+ target_name += strlen("--fuzz-target=");
218
+
219
+ fuzz_target = fuzz_get_target(target_name);
220
+ if (!fuzz_target) {
221
+ usage(**argv);
222
+ }
223
+
224
+ fuzz_qts = qtest_setup();
225
+
226
+ if (fuzz_target->pre_vm_init) {
227
+ fuzz_target->pre_vm_init();
228
+ }
229
+
230
+ /* Run QEMU's softmmu main with the fuzz-target dependent arguments */
231
+ const char *init_cmdline = fuzz_target->get_init_cmdline(fuzz_target);
232
+
233
+ /* Split the runcmd into an argv and argc */
234
+ wordexp_t result;
235
+ wordexp(init_cmdline, &result, 0);
236
+
237
+ qemu_init(result.we_wordc, result.we_wordv, NULL);
238
+
239
+ return 0;
240
+}
241
diff --git a/tests/qtest/fuzz/fuzz.h b/tests/qtest/fuzz/fuzz.h
242
new file mode 100644
243
index XXXXXXX..XXXXXXX
244
--- /dev/null
245
+++ b/tests/qtest/fuzz/fuzz.h
246
@@ -XXX,XX +XXX,XX @@
247
+/*
248
+ * fuzzing driver
249
+ *
250
+ * Copyright Red Hat Inc., 2019
251
+ *
252
+ * Authors:
253
+ * Alexander Bulekov <alxndr@bu.edu>
254
+ *
255
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
256
+ * See the COPYING file in the top-level directory.
257
+ *
258
+ */
259
+
260
+#ifndef FUZZER_H_
261
+#define FUZZER_H_
262
+
263
+#include "qemu/osdep.h"
264
+#include "qemu/units.h"
265
+#include "qapi/error.h"
266
+
267
+#include "tests/qtest/libqtest.h"
268
+
269
+/**
270
+ * A libfuzzer fuzzing target
271
+ *
272
+ * The QEMU fuzzing binary is built with all available targets, each
273
+ * with a unique @name that can be specified on the command-line to
274
+ * select which target should run.
275
+ *
276
+ * A target must implement ->fuzz() to process a random input. If QEMU
277
+ * crashes in ->fuzz() then libfuzzer will record a failure.
278
+ *
279
+ * Fuzzing targets are registered with fuzz_add_target():
280
+ *
281
+ * static const FuzzTarget fuzz_target = {
282
+ * .name = "my-device-fifo",
283
+ * .description = "Fuzz the FIFO buffer registers of my-device",
284
+ * ...
285
+ * };
286
+ *
287
+ * static void register_fuzz_target(void)
288
+ * {
289
+ * fuzz_add_target(&fuzz_target);
290
+ * }
291
+ * fuzz_target_init(register_fuzz_target);
292
+ */
293
+typedef struct FuzzTarget {
294
+ const char *name; /* target identifier (passed to --fuzz-target=)*/
295
+ const char *description; /* help text */
296
+
297
+
298
+ /*
431
+ /*
299
+ * returns the arg-list that is passed to qemu/softmmu init()
432
+ * Increase the object reference, so sioc will not freed by
300
+ * Cannot be NULL
433
+ * qio_net_listener_channel_func which will call object_unref(OBJECT(sioc))
301
+ */
434
+ */
302
+ const char* (*get_init_cmdline)(struct FuzzTarget *);
435
+ object_ref(OBJECT(server->sioc));
303
+
436
+ qio_channel_set_name(QIO_CHANNEL(sioc), "vhost-user client");
304
+ /*
437
+ server->ioc = QIO_CHANNEL(sioc);
305
+ * will run once, prior to running qemu/softmmu init.
438
+ object_ref(OBJECT(server->ioc));
306
+ * eg: set up shared-memory for communication with the child-process
439
+ qio_channel_attach_aio_context(server->ioc, server->ctx);
307
+ * Can be NULL
440
+ qio_channel_set_blocking(QIO_CHANNEL(server->sioc), false, NULL);
308
+ */
441
+ vu_client_start(server);
309
+ void(*pre_vm_init)(void);
442
+}
310
+
443
+
311
+ /*
444
+
312
+ * will run once, after QEMU has been initialized, prior to the fuzz-loop.
445
+void vhost_user_server_stop(VuServer *server)
313
+ * eg: detect the memory map
446
+{
314
+ * Can be NULL
447
+ if (server->sioc) {
315
+ */
448
+ close_client(server);
316
+ void(*pre_fuzz)(QTestState *);
449
+ }
317
+
450
+
318
+ /*
451
+ if (server->listener) {
319
+ * accepts and executes an input from libfuzzer. This is repeatedly
452
+ qio_net_listener_disconnect(server->listener);
320
+ * executed during the fuzzing loop. It should handle setup, input
453
+ object_unref(OBJECT(server->listener));
321
+ * execution and cleanup.
454
+ }
322
+ * Cannot be NULL
455
+
323
+ */
456
+}
324
+ void(*fuzz)(QTestState *, const unsigned char *, size_t);
457
+
325
+
458
+void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx)
326
+} FuzzTarget;
459
+{
327
+
460
+ VuFdWatch *vu_fd_watch, *next;
328
+void flush_events(QTestState *);
461
+ void *opaque = NULL;
329
+void reboot(QTestState *);
462
+ IOHandler *io_read = NULL;
330
+
463
+ bool attach;
331
+/*
464
+
332
+ * makes a copy of *target and adds it to the target-list.
465
+ server->ctx = ctx ? ctx : qemu_get_aio_context();
333
+ * i.e. fine to set up target on the caller's stack
466
+
334
+ */
467
+ if (!server->sioc) {
335
+void fuzz_add_target(const FuzzTarget *target);
468
+ /* not yet serving any client*/
336
+
469
+ return;
337
+int LLVMFuzzerTestOneInput(const unsigned char *Data, size_t Size);
470
+ }
338
+int LLVMFuzzerInitialize(int *argc, char ***argv, char ***envp);
471
+
339
+
472
+ if (ctx) {
340
+#endif
473
+ qio_channel_attach_aio_context(server->ioc, ctx);
341
+
474
+ server->aio_context_changed = true;
475
+ io_read = kick_handler;
476
+ attach = true;
477
+ } else {
478
+ qio_channel_detach_aio_context(server->ioc);
479
+ /* server->ioc->ctx keeps the old AioContext */
480
+ ctx = server->ioc->ctx;
481
+ attach = false;
482
+ }
483
+
484
+ QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
485
+ if (vu_fd_watch->cb) {
486
+ opaque = attach ? vu_fd_watch : NULL;
487
+ aio_set_fd_handler(ctx, vu_fd_watch->fd, true,
488
+ io_read, NULL, NULL,
489
+ opaque);
490
+ }
491
+ }
492
+}
493
+
494
+
495
+bool vhost_user_server_start(VuServer *server,
496
+ SocketAddress *socket_addr,
497
+ AioContext *ctx,
498
+ uint16_t max_queues,
499
+ DevicePanicNotifierFn *device_panic_notifier,
500
+ const VuDevIface *vu_iface,
501
+ Error **errp)
502
+{
503
+ QIONetListener *listener = qio_net_listener_new();
504
+ if (qio_net_listener_open_sync(listener, socket_addr, 1,
505
+ errp) < 0) {
506
+ object_unref(OBJECT(listener));
507
+ return false;
508
+ }
509
+
510
+ /* zero out unspecified fileds */
511
+ *server = (VuServer) {
512
+ .listener = listener,
513
+ .vu_iface = vu_iface,
514
+ .max_queues = max_queues,
515
+ .ctx = ctx,
516
+ .device_panic_notifier = device_panic_notifier,
517
+ };
518
+
519
+ qio_net_listener_set_name(server->listener, "vhost-user-backend-listener");
520
+
521
+ qio_net_listener_set_client_func(server->listener,
522
+ vu_accept,
523
+ server,
524
+ NULL);
525
+
526
+ QTAILQ_INIT(&server->vu_fd_watches);
527
+ return true;
528
+}
529
diff --git a/util/meson.build b/util/meson.build
530
index XXXXXXX..XXXXXXX 100644
531
--- a/util/meson.build
532
+++ b/util/meson.build
533
@@ -XXX,XX +XXX,XX @@ if have_block
534
util_ss.add(files('main-loop.c'))
535
util_ss.add(files('nvdimm-utils.c'))
536
util_ss.add(files('qemu-coroutine.c', 'qemu-coroutine-lock.c', 'qemu-coroutine-io.c'))
537
+ util_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-server.c'))
538
util_ss.add(files('qemu-coroutine-sleep.c'))
539
util_ss.add(files('qemu-co-shared-resource.c'))
540
util_ss.add(files('thread-pool.c', 'qemu-timer.c'))
342
--
541
--
343
2.24.1
542
2.26.2
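For reference, a minimal sketch of how a caller might drive the vhost_user_server_*() API added above. The VuDevIface (my_iface) and the IOThread context (iothread_ctx) are assumed to exist in the caller, and the socket path is made up:

    VuServer server;
    Error *local_err = NULL;
    SocketAddress *addr = g_new0(SocketAddress, 1);

    addr->type = SOCKET_ADDRESS_TYPE_UNIX;
    addr->u.q_unix.path = g_strdup("/tmp/vhost-user.sock");  /* hypothetical path */

    /* my_iface is a caller-provided VuDevIface; serve one request queue */
    if (!vhost_user_server_start(&server, addr, qemu_get_aio_context(),
                                 1, NULL, &my_iface, &local_err)) {
        error_report_err(local_err);
    } else {
        /* optionally move processing into an IOThread's AioContext later */
        vhost_user_server_set_aio_context(&server, iothread_ctx);

        /* tear down the listener and any connected client */
        vhost_user_server_stop(&server);
    }
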
344
543
diff view generated by jsdifflib
1
From: Alexander Bulekov <alxndr@bu.edu>
1
From: Coiby Xu <coiby.xu@gmail.com>
2
2
3
These three targets should simply fuzz reads/writes to a couple of ioports,
3
Move the constants from hw/core/qdev-properties.c to
4
but they mostly serve as examples of different ways to write targets.
4
util/block-helpers.h so that knowledge of the min/max values is
5
They demonstrate using qtest and qos for fuzzing, as well as using
6
rebooting and forking to reset state, or not resetting it at all.
7
5
8
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
6
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
9
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
10
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
9
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
11
Message-id: 20200220041118.23264-20-alxndr@bu.edu
10
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
11
Message-id: 20200918080912.321299-5-coiby.xu@gmail.com
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
13
---
13
---
14
tests/qtest/fuzz/Makefile.include | 3 +
14
util/block-helpers.h | 19 +++++++++++++
15
tests/qtest/fuzz/i440fx_fuzz.c | 193 ++++++++++++++++++++++++++++++
15
hw/core/qdev-properties-system.c | 31 ++++-----------------
16
2 files changed, 196 insertions(+)
16
util/block-helpers.c | 46 ++++++++++++++++++++++++++++++++
17
create mode 100644 tests/qtest/fuzz/i440fx_fuzz.c
17
util/meson.build | 1 +
18
4 files changed, 71 insertions(+), 26 deletions(-)
19
create mode 100644 util/block-helpers.h
20
create mode 100644 util/block-helpers.c
18
21
19
diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include
22
diff --git a/util/block-helpers.h b/util/block-helpers.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/tests/qtest/fuzz/Makefile.include
22
+++ b/tests/qtest/fuzz/Makefile.include
23
@@ -XXX,XX +XXX,XX @@ fuzz-obj-y += tests/qtest/fuzz/fuzz.o # Fuzzer skeleton
24
fuzz-obj-y += tests/qtest/fuzz/fork_fuzz.o
25
fuzz-obj-y += tests/qtest/fuzz/qos_fuzz.o
26
27
+# Targets
28
+fuzz-obj-y += tests/qtest/fuzz/i440fx_fuzz.o
29
+
30
FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest
31
32
# Linker Script to force coverage-counters into known regions which we can mark
33
diff --git a/tests/qtest/fuzz/i440fx_fuzz.c b/tests/qtest/fuzz/i440fx_fuzz.c
34
new file mode 100644
23
new file mode 100644
35
index XXXXXXX..XXXXXXX
24
index XXXXXXX..XXXXXXX
36
--- /dev/null
25
--- /dev/null
37
+++ b/tests/qtest/fuzz/i440fx_fuzz.c
26
+++ b/util/block-helpers.h
27
@@ -XXX,XX +XXX,XX @@
28
+#ifndef BLOCK_HELPERS_H
29
+#define BLOCK_HELPERS_H
30
+
31
+#include "qemu/units.h"
32
+
33
+/* lower limit is sector size */
34
+#define MIN_BLOCK_SIZE INT64_C(512)
35
+#define MIN_BLOCK_SIZE_STR "512 B"
36
+/*
37
+ * upper limit is arbitrary, 2 MiB looks sufficient for all sensible uses, and
38
+ * matches qcow2 cluster size limit
39
+ */
40
+#define MAX_BLOCK_SIZE (2 * MiB)
41
+#define MAX_BLOCK_SIZE_STR "2 MiB"
42
+
43
+void check_block_size(const char *id, const char *name, int64_t value,
44
+ Error **errp);
45
+
46
+#endif /* BLOCK_HELPERS_H */
47
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/hw/core/qdev-properties-system.c
50
+++ b/hw/core/qdev-properties-system.c
51
@@ -XXX,XX +XXX,XX @@
52
#include "sysemu/blockdev.h"
53
#include "net/net.h"
54
#include "hw/pci/pci.h"
55
+#include "util/block-helpers.h"
56
57
static bool check_prop_still_unset(DeviceState *dev, const char *name,
58
const void *old_val, const char *new_val,
59
@@ -XXX,XX +XXX,XX @@ const PropertyInfo qdev_prop_losttickpolicy = {
60
61
/* --- blocksize --- */
62
63
-/* lower limit is sector size */
64
-#define MIN_BLOCK_SIZE 512
65
-#define MIN_BLOCK_SIZE_STR "512 B"
66
-/*
67
- * upper limit is arbitrary, 2 MiB looks sufficient for all sensible uses, and
68
- * matches qcow2 cluster size limit
69
- */
70
-#define MAX_BLOCK_SIZE (2 * MiB)
71
-#define MAX_BLOCK_SIZE_STR "2 MiB"
72
-
73
static void set_blocksize(Object *obj, Visitor *v, const char *name,
74
void *opaque, Error **errp)
75
{
76
@@ -XXX,XX +XXX,XX @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
77
Property *prop = opaque;
78
uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
79
uint64_t value;
80
+ Error *local_err = NULL;
81
82
if (dev->realized) {
83
qdev_prop_set_after_realize(dev, name, errp);
84
@@ -XXX,XX +XXX,XX @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
85
if (!visit_type_size(v, name, &value, errp)) {
86
return;
87
}
88
- /* value of 0 means "unset" */
89
- if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
90
- error_setg(errp,
91
- "Property %s.%s doesn't take value %" PRIu64
92
- " (minimum: " MIN_BLOCK_SIZE_STR
93
- ", maximum: " MAX_BLOCK_SIZE_STR ")",
94
- dev->id ? : "", name, value);
95
+ check_block_size(dev->id ? : "", name, value, &local_err);
96
+ if (local_err) {
97
+ error_propagate(errp, local_err);
98
return;
99
}
100
-
101
- /* We rely on power-of-2 blocksizes for bitmasks */
102
- if ((value & (value - 1)) != 0) {
103
- error_setg(errp,
104
- "Property %s.%s doesn't take value '%" PRId64 "', "
105
- "it's not a power of 2", dev->id ?: "", name, (int64_t)value);
106
- return;
107
- }
108
-
109
*ptr = value;
110
}
111
112
diff --git a/util/block-helpers.c b/util/block-helpers.c
113
new file mode 100644
114
index XXXXXXX..XXXXXXX
115
--- /dev/null
116
+++ b/util/block-helpers.c
38
@@ -XXX,XX +XXX,XX @@
117
@@ -XXX,XX +XXX,XX @@
39
+/*
118
+/*
40
+ * I440FX Fuzzing Target
119
+ * Block utility functions
41
+ *
120
+ *
42
+ * Copyright Red Hat Inc., 2019
121
+ * Copyright IBM, Corp. 2011
43
+ *
122
+ * Copyright (c) 2020 Coiby Xu <coiby.xu@gmail.com>
44
+ * Authors:
45
+ * Alexander Bulekov <alxndr@bu.edu>
46
+ *
123
+ *
47
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
124
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
48
+ * See the COPYING file in the top-level directory.
125
+ * See the COPYING file in the top-level directory.
49
+ */
126
+ */
50
+
127
+
51
+#include "qemu/osdep.h"
128
+#include "qemu/osdep.h"
129
+#include "qapi/error.h"
130
+#include "qapi/qmp/qerror.h"
131
+#include "block-helpers.h"
52
+
132
+
53
+#include "qemu/main-loop.h"
133
+/**
54
+#include "tests/qtest/libqtest.h"
134
+ * check_block_size:
55
+#include "tests/qtest/libqos/pci.h"
135
+ * @id: The unique ID of the object
56
+#include "tests/qtest/libqos/pci-pc.h"
136
+ * @name: The name of the property being validated
57
+#include "fuzz.h"
137
+ * @value: The block size in bytes
58
+#include "fuzz/qos_fuzz.h"
138
+ * @errp: A pointer to an area to store an error
59
+#include "fuzz/fork_fuzz.h"
139
+ *
60
+
140
+ * This function checks that the block size meets the following conditions:
61
+
141
+ * 1. At least MIN_BLOCK_SIZE
62
+#define I440FX_PCI_HOST_BRIDGE_CFG 0xcf8
142
+ * 2. No larger than MAX_BLOCK_SIZE
63
+#define I440FX_PCI_HOST_BRIDGE_DATA 0xcfc
143
+ * 3. A power of 2
64
+
65
+/*
66
+ * the input to the fuzzing functions below is a buffer of random bytes. we
67
+ * want to convert these bytes into a sequence of qtest or qos calls. to do
68
+ * this we define some opcodes:
69
+ */
144
+ */
70
+enum action_id {
145
+void check_block_size(const char *id, const char *name, int64_t value,
71
+ WRITEB,
146
+ Error **errp)
72
+ WRITEW,
147
+{
73
+ WRITEL,
148
+ /* value of 0 means "unset" */
74
+ READB,
149
+ if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
75
+ READW,
150
+ error_setg(errp, QERR_PROPERTY_VALUE_OUT_OF_RANGE,
76
+ READL,
151
+ id, name, value, MIN_BLOCK_SIZE, MAX_BLOCK_SIZE);
77
+ ACTION_MAX
152
+ return;
78
+};
79
+
80
+static void i440fx_fuzz_qtest(QTestState *s,
81
+ const unsigned char *Data, size_t Size) {
82
+ /*
83
+ * loop over the Data, breaking it up into actions. each action has an
84
+ * opcode, address offset and value
85
+ */
86
+ typedef struct QTestFuzzAction {
87
+ uint8_t opcode;
88
+ uint8_t addr;
89
+ uint32_t value;
90
+ } QTestFuzzAction;
91
+ QTestFuzzAction a;
92
+
93
+ while (Size >= sizeof(a)) {
94
+ /* make a copy of the action so we can normalize the values in-place */
95
+ memcpy(&a, Data, sizeof(a));
96
+ /* select between two i440fx Port IO addresses */
97
+ uint16_t addr = a.addr % 2 ? I440FX_PCI_HOST_BRIDGE_CFG :
98
+ I440FX_PCI_HOST_BRIDGE_DATA;
99
+ switch (a.opcode % ACTION_MAX) {
100
+ case WRITEB:
101
+ qtest_outb(s, addr, (uint8_t)a.value);
102
+ break;
103
+ case WRITEW:
104
+ qtest_outw(s, addr, (uint16_t)a.value);
105
+ break;
106
+ case WRITEL:
107
+ qtest_outl(s, addr, (uint32_t)a.value);
108
+ break;
109
+ case READB:
110
+ qtest_inb(s, addr);
111
+ break;
112
+ case READW:
113
+ qtest_inw(s, addr);
114
+ break;
115
+ case READL:
116
+ qtest_inl(s, addr);
117
+ break;
118
+ }
119
+ /* Move to the next operation */
120
+ Size -= sizeof(a);
121
+ Data += sizeof(a);
122
+ }
123
+ flush_events(s);
124
+}
125
+
126
+static void i440fx_fuzz_qos(QTestState *s,
127
+ const unsigned char *Data, size_t Size) {
128
+ /*
129
+ * Same as i440fx_fuzz_qtest, but using QOS. devfn is incorporated into the
130
+ * value written over Port IO
131
+ */
132
+ typedef struct QOSFuzzAction {
133
+ uint8_t opcode;
134
+ uint8_t offset;
135
+ int devfn;
136
+ uint32_t value;
137
+ } QOSFuzzAction;
138
+
139
+ static QPCIBus *bus;
140
+ if (!bus) {
141
+ bus = qpci_new_pc(s, fuzz_qos_alloc);
142
+ }
153
+ }
143
+
154
+
144
+ QOSFuzzAction a;
155
+ /* We rely on power-of-2 blocksizes for bitmasks */
145
+ while (Size >= sizeof(a)) {
156
+ if ((value & (value - 1)) != 0) {
146
+ memcpy(&a, Data, sizeof(a));
157
+ error_setg(errp,
147
+ switch (a.opcode % ACTION_MAX) {
158
+ "Property %s.%s doesn't take value '%" PRId64
148
+ case WRITEB:
159
+ "', it's not a power of 2",
149
+ bus->config_writeb(bus, a.devfn, a.offset, (uint8_t)a.value);
160
+ id, name, value);
150
+ break;
161
+ return;
151
+ case WRITEW:
152
+ bus->config_writew(bus, a.devfn, a.offset, (uint16_t)a.value);
153
+ break;
154
+ case WRITEL:
155
+ bus->config_writel(bus, a.devfn, a.offset, (uint32_t)a.value);
156
+ break;
157
+ case READB:
158
+ bus->config_readb(bus, a.devfn, a.offset);
159
+ break;
160
+ case READW:
161
+ bus->config_readw(bus, a.devfn, a.offset);
162
+ break;
163
+ case READL:
164
+ bus->config_readl(bus, a.devfn, a.offset);
165
+ break;
166
+ }
167
+ Size -= sizeof(a);
168
+ Data += sizeof(a);
169
+ }
170
+ flush_events(s);
171
+}
172
+
173
+static void i440fx_fuzz_qos_fork(QTestState *s,
174
+ const unsigned char *Data, size_t Size) {
175
+ if (fork() == 0) {
176
+ i440fx_fuzz_qos(s, Data, Size);
177
+ _Exit(0);
178
+ } else {
179
+ wait(NULL);
180
+ }
162
+ }
181
+}
163
+}
182
+
164
diff --git a/util/meson.build b/util/meson.build
183
+static const char *i440fx_qtest_argv = TARGET_NAME " -machine accel=qtest"
165
index XXXXXXX..XXXXXXX 100644
184
+ "-m 0 -display none";
166
--- a/util/meson.build
185
+static const char *i440fx_argv(FuzzTarget *t)
167
+++ b/util/meson.build
186
+{
168
@@ -XXX,XX +XXX,XX @@ if have_block
187
+ return i440fx_qtest_argv;
169
util_ss.add(files('nvdimm-utils.c'))
188
+}
170
util_ss.add(files('qemu-coroutine.c', 'qemu-coroutine-lock.c', 'qemu-coroutine-io.c'))
189
+
171
util_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-server.c'))
190
+static void fork_init(void)
172
+ util_ss.add(files('block-helpers.c'))
191
+{
173
util_ss.add(files('qemu-coroutine-sleep.c'))
192
+ counter_shm_init();
174
util_ss.add(files('qemu-co-shared-resource.c'))
193
+}
175
util_ss.add(files('thread-pool.c', 'qemu-timer.c'))
194
+
195
+static void register_pci_fuzz_targets(void)
196
+{
197
+ /* Uses simple qtest commands and reboots to reset state */
198
+ fuzz_add_target(&(FuzzTarget){
199
+ .name = "i440fx-qtest-reboot-fuzz",
200
+ .description = "Fuzz the i440fx using raw qtest commands and"
201
+ "rebooting after each run",
202
+ .get_init_cmdline = i440fx_argv,
203
+ .fuzz = i440fx_fuzz_qtest});
204
+
205
+ /* Uses libqos and forks to prevent state leakage */
206
+ fuzz_add_qos_target(&(FuzzTarget){
207
+ .name = "i440fx-qos-fork-fuzz",
208
+ .description = "Fuzz the i440fx using raw qtest commands and"
209
+ "rebooting after each run",
210
+ .pre_vm_init = &fork_init,
211
+ .fuzz = i440fx_fuzz_qos_fork,},
212
+ "i440FX-pcihost",
213
+ &(QOSGraphTestOptions){}
214
+ );
215
+
216
+ /*
217
+ * Uses libqos. Doesn't do anything to reset state. Note that if we were to
218
+ * reboot after each run, we would also have to redo the qos-related
219
+ * initialization (qos_init_path)
220
+ */
221
+ fuzz_add_qos_target(&(FuzzTarget){
222
+ .name = "i440fx-qos-noreset-fuzz",
223
+ .description = "Fuzz the i440fx using raw qtest commands and"
224
+ "rebooting after each run",
225
+ .fuzz = i440fx_fuzz_qos,},
226
+ "i440FX-pcihost",
227
+ &(QOSGraphTestOptions){}
228
+ );
229
+}
230
+
231
+fuzz_target_init(register_pci_fuzz_targets);
232
--
176
--
233
2.24.1
177
2.26.2
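To make the shared helper's contract concrete, a small sketch of check_block_size() behaviour; the caller code and the "disk0" id are hypothetical:

    Error *err = NULL;

    check_block_size("disk0", "logical_block_size", 4096, &err);
    assert(err == NULL);     /* power of 2 within [512 B, 2 MiB] */

    check_block_size("disk0", "logical_block_size", 4097, &err);
    assert(err != NULL);     /* rejected: not a power of 2 */
    error_free(err);
    err = NULL;

    check_block_size("disk0", "logical_block_size", 0, &err);
    assert(err == NULL);     /* 0 is accepted and means "unset" */
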
234
178
diff view generated by jsdifflib
1
From: Alexander Bulekov <alxndr@bu.edu>
1
From: Coiby Xu <coiby.xu@gmail.com>
2
2
3
fork() is a simple way to ensure that state does not leak in between
3
By making use of libvhost-user, a block device can be shared with
4
fuzzing runs. Unfortunately, the fuzzer mutation engine relies on
4
the connected vhost-user client. Only one client can connect to the
5
bitmaps which contain coverage information for each fuzzing run, and
5
server at a time.
6
these bitmaps should be copied from the child to the parent (where the
7
mutation occurs). These bitmaps are created through compile-time
8
instrumentation and they are not shared with fork()-ed processes, by
9
default. To address this, we create a shared memory region, adjust its
10
size and map it _over_ the counter region. Furthermore, libfuzzer
11
doesn't generally expose the globals that specify the location of the
12
counters/coverage bitmap. As a workaround, we rely on a custom linker
13
script which forces all of the bitmaps we care about to be placed in a
14
contiguous region, which is easy to locate and mmap over.
15
6
16
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
7
Since vhost-user-server needs a block drive to be created first, delay
8
the creation of this object.
9
10
Suggested-by: Kevin Wolf <kwolf@redhat.com>
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
17
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
13
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
18
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
14
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
19
Message-id: 20200220041118.23264-16-alxndr@bu.edu
15
Message-id: 20200918080912.321299-6-coiby.xu@gmail.com
16
[Shorten "vhost_user_blk_server" string to "vhost_user_blk" to avoid the
17
following compiler warning:
18
../block/export/vhost-user-blk-server.c:178:50: error: ‘%s’ directive output truncated writing 21 bytes into a region of size 20 [-Werror=format-truncation=]
19
and fix "Invalid size %ld ..." ssize_t format string arguments for
20
32-bit hosts.
21
--Stefan]
20
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
22
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
21
---
23
---
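For context, a hypothetical command line exercising the new object (property names as registered in class_init below; the machine type, node name, socket path and image file are made up):

    qemu-system-x86_64 -machine q35 \
        -blockdev driver=qcow2,node-name=disk0,file.driver=file,file.filename=disk.qcow2 \
        -object vhost-user-blk-server,id=vubs0,node-name=disk0,unix-socket=/tmp/vhost-user-blk.sock,writable=on

The vl.c hunk below delays creation of the object until after -blockdev processing, so node-name can resolve.
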
22
tests/qtest/fuzz/Makefile.include | 5 +++
24
block/export/vhost-user-blk-server.h | 36 ++
23
tests/qtest/fuzz/fork_fuzz.c | 55 +++++++++++++++++++++++++++++++
25
block/export/vhost-user-blk-server.c | 661 +++++++++++++++++++++++++++
24
tests/qtest/fuzz/fork_fuzz.h | 23 +++++++++++++
26
softmmu/vl.c | 4 +
25
tests/qtest/fuzz/fork_fuzz.ld | 37 +++++++++++++++++++++
27
block/meson.build | 1 +
26
4 files changed, 120 insertions(+)
28
4 files changed, 702 insertions(+)
27
create mode 100644 tests/qtest/fuzz/fork_fuzz.c
29
create mode 100644 block/export/vhost-user-blk-server.h
28
create mode 100644 tests/qtest/fuzz/fork_fuzz.h
30
create mode 100644 block/export/vhost-user-blk-server.c
29
create mode 100644 tests/qtest/fuzz/fork_fuzz.ld
30
31
31
diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include
32
diff --git a/block/export/vhost-user-blk-server.h b/block/export/vhost-user-blk-server.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/tests/qtest/fuzz/Makefile.include
34
+++ b/tests/qtest/fuzz/Makefile.include
35
@@ -XXX,XX +XXX,XX @@ QEMU_PROG_FUZZ=qemu-fuzz-$(TARGET_NAME)$(EXESUF)
36
37
fuzz-obj-y += tests/qtest/libqtest.o
38
fuzz-obj-y += tests/qtest/fuzz/fuzz.o # Fuzzer skeleton
39
+fuzz-obj-y += tests/qtest/fuzz/fork_fuzz.o
40
41
FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest
42
+
43
+# Linker Script to force coverage-counters into known regions which we can mark
44
+# shared
45
+FUZZ_LDFLAGS += -Xlinker -T$(SRC_PATH)/tests/qtest/fuzz/fork_fuzz.ld
46
diff --git a/tests/qtest/fuzz/fork_fuzz.c b/tests/qtest/fuzz/fork_fuzz.c
47
new file mode 100644
33
new file mode 100644
48
index XXXXXXX..XXXXXXX
34
index XXXXXXX..XXXXXXX
49
--- /dev/null
35
--- /dev/null
50
+++ b/tests/qtest/fuzz/fork_fuzz.c
36
+++ b/block/export/vhost-user-blk-server.h
51
@@ -XXX,XX +XXX,XX @@
37
@@ -XXX,XX +XXX,XX @@
52
+/*
38
+/*
53
+ * Fork-based fuzzing helpers
39
+ * Sharing QEMU block devices via vhost-user protocal
54
+ *
40
+ *
55
+ * Copyright Red Hat Inc., 2019
41
+ * Copyright (c) Coiby Xu <coiby.xu@gmail.com>.
42
+ * Copyright (c) 2020 Red Hat, Inc.
56
+ *
43
+ *
57
+ * Authors:
44
+ * This work is licensed under the terms of the GNU GPL, version 2 or
58
+ * Alexander Bulekov <alxndr@bu.edu>
45
+ * later. See the COPYING file in the top-level directory.
59
+ *
60
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
61
+ * See the COPYING file in the top-level directory.
62
+ *
63
+ */
46
+ */
64
+
47
+
65
+#include "qemu/osdep.h"
48
+#ifndef VHOST_USER_BLK_SERVER_H
66
+#include "fork_fuzz.h"
49
+#define VHOST_USER_BLK_SERVER_H
67
+
50
+#include "util/vhost-user-server.h"
68
+
51
+
69
+void counter_shm_init(void)
52
+typedef struct VuBlockDev VuBlockDev;
70
+{
53
+#define TYPE_VHOST_USER_BLK_SERVER "vhost-user-blk-server"
71
+ char *shm_path = g_strdup_printf("/qemu-fuzz-cntrs.%d", getpid());
54
+#define VHOST_USER_BLK_SERVER(obj) \
72
+ int fd = shm_open(shm_path, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
55
+ OBJECT_CHECK(VuBlockDev, obj, TYPE_VHOST_USER_BLK_SERVER)
73
+ g_free(shm_path);
56
+
74
+
57
+/* vhost user block device */
75
+ if (fd == -1) {
58
+struct VuBlockDev {
76
+ perror("Error: ");
59
+ Object parent_obj;
77
+ exit(1);
60
+ char *node_name;
78
+ }
61
+ SocketAddress *addr;
79
+ if (ftruncate(fd, &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START) == -1) {
62
+ AioContext *ctx;
80
+ perror("Error: ");
63
+ VuServer vu_server;
81
+ exit(1);
64
+ bool running;
82
+ }
65
+ uint32_t blk_size;
83
+ /* Copy what's in the counter region to the shm.. */
66
+ BlockBackend *backend;
84
+ void *rptr = mmap(NULL ,
67
+ QIOChannelSocket *sioc;
85
+ &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START,
68
+ QTAILQ_ENTRY(VuBlockDev) next;
86
+ PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
69
+ struct virtio_blk_config blkcfg;
87
+ memcpy(rptr,
70
+ bool writable;
88
+ &__FUZZ_COUNTERS_START,
71
+};
89
+ &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START);
72
+
90
+
73
+#endif /* VHOST_USER_BLK_SERVER_H */
91
+ munmap(rptr, &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START);
74
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
92
+
93
+ /* And map the shm over the counter region */
94
+ rptr = mmap(&__FUZZ_COUNTERS_START,
95
+ &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START,
96
+ PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
97
+
98
+ close(fd);
99
+
100
+ if (!rptr) {
101
+ perror("Error: ");
102
+ exit(1);
103
+ }
104
+}
105
+
106
+
107
diff --git a/tests/qtest/fuzz/fork_fuzz.h b/tests/qtest/fuzz/fork_fuzz.h
108
new file mode 100644
75
new file mode 100644
109
index XXXXXXX..XXXXXXX
76
index XXXXXXX..XXXXXXX
110
--- /dev/null
77
--- /dev/null
111
+++ b/tests/qtest/fuzz/fork_fuzz.h
78
+++ b/block/export/vhost-user-blk-server.c
112
@@ -XXX,XX +XXX,XX @@
79
@@ -XXX,XX +XXX,XX @@
113
+/*
80
+/*
114
+ * Fork-based fuzzing helpers
81
+ * Sharing QEMU block devices via vhost-user protocol
115
+ *
82
+ *
116
+ * Copyright Red Hat Inc., 2019
83
+ * Parts of the code based on nbd/server.c.
117
+ *
84
+ *
118
+ * Authors:
85
+ * Copyright (c) Coiby Xu <coiby.xu@gmail.com>.
119
+ * Alexander Bulekov <alxndr@bu.edu>
86
+ * Copyright (c) 2020 Red Hat, Inc.
120
+ *
87
+ *
121
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
88
+ * This work is licensed under the terms of the GNU GPL, version 2 or
122
+ * See the COPYING file in the top-level directory.
89
+ * later. See the COPYING file in the top-level directory.
90
+ */
91
+#include "qemu/osdep.h"
92
+#include "block/block.h"
93
+#include "vhost-user-blk-server.h"
94
+#include "qapi/error.h"
95
+#include "qom/object_interfaces.h"
96
+#include "sysemu/block-backend.h"
97
+#include "util/block-helpers.h"
98
+
99
+enum {
100
+ VHOST_USER_BLK_MAX_QUEUES = 1,
101
+};
102
+struct virtio_blk_inhdr {
103
+ unsigned char status;
104
+};
105
+
106
+typedef struct VuBlockReq {
107
+ VuVirtqElement *elem;
108
+ int64_t sector_num;
109
+ size_t size;
110
+ struct virtio_blk_inhdr *in;
111
+ struct virtio_blk_outhdr out;
112
+ VuServer *server;
113
+ struct VuVirtq *vq;
114
+} VuBlockReq;
115
+
116
+static void vu_block_req_complete(VuBlockReq *req)
117
+{
118
+ VuDev *vu_dev = &req->server->vu_dev;
119
+
120
+ /* IO size with 1 extra status byte */
121
+ vu_queue_push(vu_dev, req->vq, req->elem, req->size + 1);
122
+ vu_queue_notify(vu_dev, req->vq);
123
+
124
+ if (req->elem) {
125
+ free(req->elem);
126
+ }
127
+
128
+ g_free(req);
129
+}
130
+
131
+static VuBlockDev *get_vu_block_device_by_server(VuServer *server)
132
+{
133
+ return container_of(server, VuBlockDev, vu_server);
134
+}
135
+
136
+static int coroutine_fn
137
+vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
138
+ uint32_t iovcnt, uint32_t type)
139
+{
140
+ struct virtio_blk_discard_write_zeroes desc;
141
+ ssize_t size = iov_to_buf(iov, iovcnt, 0, &desc, sizeof(desc));
142
+ if (unlikely(size != sizeof(desc))) {
143
+ error_report("Invalid size %zd, expect %zu", size, sizeof(desc));
144
+ return -EINVAL;
145
+ }
146
+
147
+ VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
148
+ uint64_t range[2] = { le64_to_cpu(desc.sector) << 9,
149
+ le32_to_cpu(desc.num_sectors) << 9 };
150
+ if (type == VIRTIO_BLK_T_DISCARD) {
151
+ if (blk_co_pdiscard(vdev_blk->backend, range[0], range[1]) == 0) {
152
+ return 0;
153
+ }
154
+ } else if (type == VIRTIO_BLK_T_WRITE_ZEROES) {
155
+ if (blk_co_pwrite_zeroes(vdev_blk->backend,
156
+ range[0], range[1], 0) == 0) {
157
+ return 0;
158
+ }
159
+ }
160
+
161
+ return -EINVAL;
162
+}
163
+
164
+static void coroutine_fn vu_block_flush(VuBlockReq *req)
165
+{
166
+ VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
167
+ BlockBackend *backend = vdev_blk->backend;
168
+ blk_co_flush(backend);
169
+}
170
+
171
+struct req_data {
172
+ VuServer *server;
173
+ VuVirtq *vq;
174
+ VuVirtqElement *elem;
175
+};
176
+
177
+static void coroutine_fn vu_block_virtio_process_req(void *opaque)
178
+{
179
+ struct req_data *data = opaque;
180
+ VuServer *server = data->server;
181
+ VuVirtq *vq = data->vq;
182
+ VuVirtqElement *elem = data->elem;
183
+ uint32_t type;
184
+ VuBlockReq *req;
185
+
186
+ VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
187
+ BlockBackend *backend = vdev_blk->backend;
188
+
189
+ struct iovec *in_iov = elem->in_sg;
190
+ struct iovec *out_iov = elem->out_sg;
191
+ unsigned in_num = elem->in_num;
192
+ unsigned out_num = elem->out_num;
193
+ /* refer to hw/block/virtio_blk.c */
194
+ if (elem->out_num < 1 || elem->in_num < 1) {
195
+ error_report("virtio-blk request missing headers");
196
+ free(elem);
197
+ return;
198
+ }
199
+
200
+ req = g_new0(VuBlockReq, 1);
201
+ req->server = server;
202
+ req->vq = vq;
203
+ req->elem = elem;
204
+
205
+ if (unlikely(iov_to_buf(out_iov, out_num, 0, &req->out,
206
+ sizeof(req->out)) != sizeof(req->out))) {
207
+ error_report("virtio-blk request outhdr too short");
208
+ goto err;
209
+ }
210
+
211
+ iov_discard_front(&out_iov, &out_num, sizeof(req->out));
212
+
213
+ if (in_iov[in_num - 1].iov_len < sizeof(struct virtio_blk_inhdr)) {
214
+ error_report("virtio-blk request inhdr too short");
215
+ goto err;
216
+ }
217
+
218
+ /* We always touch the last byte, so just see how big in_iov is. */
219
+ req->in = (void *)in_iov[in_num - 1].iov_base
220
+ + in_iov[in_num - 1].iov_len
221
+ - sizeof(struct virtio_blk_inhdr);
222
+ iov_discard_back(in_iov, &in_num, sizeof(struct virtio_blk_inhdr));
223
+
224
+ type = le32_to_cpu(req->out.type);
225
+ switch (type & ~VIRTIO_BLK_T_BARRIER) {
226
+ case VIRTIO_BLK_T_IN:
227
+ case VIRTIO_BLK_T_OUT: {
228
+ ssize_t ret = 0;
229
+ bool is_write = type & VIRTIO_BLK_T_OUT;
230
+ req->sector_num = le64_to_cpu(req->out.sector);
231
+
232
+ int64_t offset = req->sector_num * vdev_blk->blk_size;
233
+ QEMUIOVector qiov;
234
+ if (is_write) {
235
+ qemu_iovec_init_external(&qiov, out_iov, out_num);
236
+ ret = blk_co_pwritev(backend, offset, qiov.size,
237
+ &qiov, 0);
238
+ } else {
239
+ qemu_iovec_init_external(&qiov, in_iov, in_num);
240
+ ret = blk_co_preadv(backend, offset, qiov.size,
241
+ &qiov, 0);
242
+ }
243
+ if (ret >= 0) {
244
+ req->in->status = VIRTIO_BLK_S_OK;
245
+ } else {
246
+ req->in->status = VIRTIO_BLK_S_IOERR;
247
+ }
248
+ break;
249
+ }
250
+ case VIRTIO_BLK_T_FLUSH:
251
+ vu_block_flush(req);
252
+ req->in->status = VIRTIO_BLK_S_OK;
253
+ break;
254
+ case VIRTIO_BLK_T_GET_ID: {
255
+ size_t size = MIN(iov_size(&elem->in_sg[0], in_num),
256
+ VIRTIO_BLK_ID_BYTES);
257
+ snprintf(elem->in_sg[0].iov_base, size, "%s", "vhost_user_blk");
258
+ req->in->status = VIRTIO_BLK_S_OK;
259
+ req->size = elem->in_sg[0].iov_len;
260
+ break;
261
+ }
262
+ case VIRTIO_BLK_T_DISCARD:
263
+ case VIRTIO_BLK_T_WRITE_ZEROES: {
264
+ int rc;
265
+ rc = vu_block_discard_write_zeroes(req, &elem->out_sg[1],
266
+ out_num, type);
267
+ if (rc == 0) {
268
+ req->in->status = VIRTIO_BLK_S_OK;
269
+ } else {
270
+ req->in->status = VIRTIO_BLK_S_IOERR;
271
+ }
272
+ break;
273
+ }
274
+ default:
275
+ req->in->status = VIRTIO_BLK_S_UNSUPP;
276
+ break;
277
+ }
278
+
279
+ vu_block_req_complete(req);
280
+ return;
281
+
282
+err:
283
+ free(elem);
284
+ g_free(req);
285
+ return;
286
+}
287
+
288
+static void vu_block_process_vq(VuDev *vu_dev, int idx)
289
+{
290
+ VuServer *server;
291
+ VuVirtq *vq;
292
+ struct req_data *req_data;
293
+
294
+ server = container_of(vu_dev, VuServer, vu_dev);
295
+ assert(server);
296
+
297
+ vq = vu_get_queue(vu_dev, idx);
298
+ assert(vq);
299
+ VuVirtqElement *elem;
300
+ while (1) {
301
+ elem = vu_queue_pop(vu_dev, vq, sizeof(VuVirtqElement) +
302
+ sizeof(VuBlockReq));
303
+ if (elem) {
304
+ req_data = g_new0(struct req_data, 1);
305
+ req_data->server = server;
306
+ req_data->vq = vq;
307
+ req_data->elem = elem;
308
+ Coroutine *co = qemu_coroutine_create(vu_block_virtio_process_req,
309
+ req_data);
310
+ aio_co_enter(server->ioc->ctx, co);
311
+ } else {
312
+ break;
313
+ }
314
+ }
315
+}
316
+
317
+static void vu_block_queue_set_started(VuDev *vu_dev, int idx, bool started)
318
+{
319
+ VuVirtq *vq;
320
+
321
+ assert(vu_dev);
322
+
323
+ vq = vu_get_queue(vu_dev, idx);
324
+ vu_set_queue_handler(vu_dev, vq, started ? vu_block_process_vq : NULL);
325
+}
326
+
327
+static uint64_t vu_block_get_features(VuDev *dev)
328
+{
329
+ uint64_t features;
330
+ VuServer *server = container_of(dev, VuServer, vu_dev);
331
+ VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
332
+ features = 1ull << VIRTIO_BLK_F_SIZE_MAX |
333
+ 1ull << VIRTIO_BLK_F_SEG_MAX |
334
+ 1ull << VIRTIO_BLK_F_TOPOLOGY |
335
+ 1ull << VIRTIO_BLK_F_BLK_SIZE |
336
+ 1ull << VIRTIO_BLK_F_FLUSH |
337
+ 1ull << VIRTIO_BLK_F_DISCARD |
338
+ 1ull << VIRTIO_BLK_F_WRITE_ZEROES |
339
+ 1ull << VIRTIO_BLK_F_CONFIG_WCE |
340
+ 1ull << VIRTIO_F_VERSION_1 |
341
+ 1ull << VIRTIO_RING_F_INDIRECT_DESC |
342
+ 1ull << VIRTIO_RING_F_EVENT_IDX |
343
+ 1ull << VHOST_USER_F_PROTOCOL_FEATURES;
344
+
345
+ if (!vdev_blk->writable) {
346
+ features |= 1ull << VIRTIO_BLK_F_RO;
347
+ }
348
+
349
+ return features;
350
+}
351
+
352
+static uint64_t vu_block_get_protocol_features(VuDev *dev)
353
+{
354
+ return 1ull << VHOST_USER_PROTOCOL_F_CONFIG |
355
+ 1ull << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD;
356
+}
357
+
358
+static int
359
+vu_block_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
360
+{
361
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
362
+ VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
363
+ memcpy(config, &vdev_blk->blkcfg, len);
364
+
365
+ return 0;
366
+}
367
+
368
+static int
369
+vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
370
+ uint32_t offset, uint32_t size, uint32_t flags)
371
+{
372
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
373
+ VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
374
+ uint8_t wce;
375
+
376
+ /* don't support live migration */
377
+ if (flags != VHOST_SET_CONFIG_TYPE_MASTER) {
378
+ return -EINVAL;
379
+ }
380
+
381
+ if (offset != offsetof(struct virtio_blk_config, wce) ||
382
+ size != 1) {
383
+ return -EINVAL;
384
+ }
385
+
386
+ wce = *data;
387
+ vdev_blk->blkcfg.wce = wce;
388
+ blk_set_enable_write_cache(vdev_blk->backend, wce);
389
+ return 0;
390
+}
391
+
392
+/*
393
+ * When the client disconnects, it sends a VHOST_USER_NONE request
394
+ * and vu_process_message will simply call exit which causes the VM
395
+ * to exit abruptly.
396
+ * To avoid this issue, process the VHOST_USER_NONE request ahead
397
+ * of vu_process_message.
123
+ *
398
+ *
124
+ */
399
+ */
125
+
400
+static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
126
+#ifndef FORK_FUZZ_H
401
+{
127
+#define FORK_FUZZ_H
402
+ if (vmsg->request == VHOST_USER_NONE) {
128
+
403
+ dev->panic(dev, "disconnect");
129
+extern uint8_t __FUZZ_COUNTERS_START;
404
+ return true;
130
+extern uint8_t __FUZZ_COUNTERS_END;
405
+ }
131
+
406
+ return false;
132
+void counter_shm_init(void);
407
+}
133
+
408
+
134
+#endif
409
+static const VuDevIface vu_block_iface = {
135
+
410
+ .get_features = vu_block_get_features,
136
diff --git a/tests/qtest/fuzz/fork_fuzz.ld b/tests/qtest/fuzz/fork_fuzz.ld
411
+ .queue_set_started = vu_block_queue_set_started,
137
new file mode 100644
412
+ .get_protocol_features = vu_block_get_protocol_features,
138
index XXXXXXX..XXXXXXX
413
+ .get_config = vu_block_get_config,
139
--- /dev/null
414
+ .set_config = vu_block_set_config,
140
+++ b/tests/qtest/fuzz/fork_fuzz.ld
415
+ .process_msg = vu_block_process_msg,
141
@@ -XXX,XX +XXX,XX @@
416
+};
142
+/* We use a linker script modification to place all of the state that needs to
417
+
143
+ * persist across fuzzing runs into a contiguous section of memory. Then, it is
418
+static void blk_aio_attached(AioContext *ctx, void *opaque)
144
+ * easy to re-map the counter-related memory as shared.
419
+{
145
+*/
420
+ VuBlockDev *vub_dev = opaque;
146
+
421
+ aio_context_acquire(ctx);
147
+SECTIONS
422
+ vhost_user_server_set_aio_context(&vub_dev->vu_server, ctx);
148
+{
423
+ aio_context_release(ctx);
149
+ .data.fuzz_start : ALIGN(4K)
424
+}
150
+ {
425
+
151
+ __FUZZ_COUNTERS_START = .;
426
+static void blk_aio_detach(void *opaque)
152
+ __start___sancov_cntrs = .;
427
+{
153
+ *(_*sancov_cntrs);
428
+ VuBlockDev *vub_dev = opaque;
154
+ __stop___sancov_cntrs = .;
429
+ AioContext *ctx = vub_dev->vu_server.ctx;
155
+
430
+ aio_context_acquire(ctx);
156
+ /* Lowest stack counter */
431
+ vhost_user_server_set_aio_context(&vub_dev->vu_server, NULL);
157
+ *(__sancov_lowest_stack);
432
+ aio_context_release(ctx);
158
+ }
433
+}
159
+ .data.fuzz_ordered :
434
+
160
+ {
435
+static void
161
+ /* Coverage counters. They're not necessary for fuzzing, but are useful
436
+vu_block_initialize_config(BlockDriverState *bs,
162
+ * for analyzing the fuzzing performance
437
+ struct virtio_blk_config *config, uint32_t blk_size)
163
+ */
438
+{
164
+ __start___llvm_prf_cnts = .;
439
+ config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
165
+ *(*llvm_prf_cnts);
440
+ config->blk_size = blk_size;
166
+ __stop___llvm_prf_cnts = .;
441
+ config->size_max = 0;
167
+
442
+ config->seg_max = 128 - 2;
168
+ /* Internal Libfuzzer TracePC object which contains the ValueProfileMap */
443
+ config->min_io_size = 1;
169
+ FuzzerTracePC*(.bss*);
444
+ config->opt_io_size = 1;
170
+ }
445
+ config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
171
+ .data.fuzz_end : ALIGN(4K)
446
+ config->max_discard_sectors = 32768;
172
+ {
447
+ config->max_discard_seg = 1;
173
+ __FUZZ_COUNTERS_END = .;
448
+ config->discard_sector_alignment = config->blk_size >> 9;
174
+ }
449
+ config->max_write_zeroes_sectors = 32768;
175
+}
450
+ config->max_write_zeroes_seg = 1;
176
+/* Don't overwrite the SECTIONS in the default linker script. Instead insert the
451
+}
177
+ * above into the default script */
452
+
178
+INSERT AFTER .data;
453
+static VuBlockDev *vu_block_init(VuBlockDev *vu_block_device, Error **errp)
454
+{
455
+
456
+ BlockBackend *blk;
457
+ Error *local_error = NULL;
458
+ const char *node_name = vu_block_device->node_name;
459
+ bool writable = vu_block_device->writable;
460
+ uint64_t perm = BLK_PERM_CONSISTENT_READ;
461
+ int ret;
462
+
463
+ AioContext *ctx;
464
+
465
+ BlockDriverState *bs = bdrv_lookup_bs(node_name, node_name, &local_error);
466
+
467
+ if (!bs) {
468
+ error_propagate(errp, local_error);
469
+ return NULL;
470
+ }
471
+
472
+ if (bdrv_is_read_only(bs)) {
473
+ writable = false;
474
+ }
475
+
476
+ if (writable) {
477
+ perm |= BLK_PERM_WRITE;
478
+ }
479
+
480
+ ctx = bdrv_get_aio_context(bs);
481
+ aio_context_acquire(ctx);
482
+ bdrv_invalidate_cache(bs, NULL);
483
+ aio_context_release(ctx);
484
+
485
+ /*
486
+ * Don't allow resize while the vhost user server is running,
487
+ * otherwise we don't care what happens with the node.
488
+ */
489
+ blk = blk_new(bdrv_get_aio_context(bs), perm,
490
+ BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
491
+ BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
492
+ ret = blk_insert_bs(blk, bs, errp);
493
+
494
+ if (ret < 0) {
495
+ goto fail;
496
+ }
497
+
498
+ blk_set_enable_write_cache(blk, false);
499
+
500
+ blk_set_allow_aio_context_change(blk, true);
501
+
502
+ vu_block_device->blkcfg.wce = 0;
503
+ vu_block_device->backend = blk;
504
+ if (!vu_block_device->blk_size) {
505
+ vu_block_device->blk_size = BDRV_SECTOR_SIZE;
506
+ }
507
+ vu_block_device->blkcfg.blk_size = vu_block_device->blk_size;
508
+ blk_set_guest_block_size(blk, vu_block_device->blk_size);
509
+ vu_block_initialize_config(bs, &vu_block_device->blkcfg,
510
+ vu_block_device->blk_size);
511
+ return vu_block_device;
512
+
513
+fail:
514
+ blk_unref(blk);
515
+ return NULL;
516
+}
517
+
518
+static void vu_block_deinit(VuBlockDev *vu_block_device)
519
+{
520
+ if (vu_block_device->backend) {
521
+ blk_remove_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
522
+ blk_aio_detach, vu_block_device);
523
+ }
524
+
525
+ blk_unref(vu_block_device->backend);
526
+}
527
+
528
+static void vhost_user_blk_server_stop(VuBlockDev *vu_block_device)
529
+{
530
+ vhost_user_server_stop(&vu_block_device->vu_server);
531
+ vu_block_deinit(vu_block_device);
532
+}
533
+
534
+static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
535
+ Error **errp)
536
+{
537
+ AioContext *ctx;
538
+ SocketAddress *addr = vu_block_device->addr;
539
+
540
+ if (!vu_block_init(vu_block_device, errp)) {
541
+ return;
542
+ }
543
+
544
+ ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));
545
+
546
+ if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
547
+ VHOST_USER_BLK_MAX_QUEUES,
548
+ NULL, &vu_block_iface,
549
+ errp)) {
550
+ goto error;
551
+ }
552
+
553
+ blk_add_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
554
+ blk_aio_detach, vu_block_device);
555
+ vu_block_device->running = true;
556
+ return;
557
+
558
+ error:
559
+ vu_block_deinit(vu_block_device);
560
+}
561
+
562
+static bool vu_prop_modifiable(VuBlockDev *vus, Error **errp)
563
+{
564
+ if (vus->running) {
565
+ error_setg(errp, "The property can't be modified "
566
+ "while the server is running");
567
+ return false;
568
+ }
569
+ return true;
570
+}
571
+
572
+static void vu_set_node_name(Object *obj, const char *value, Error **errp)
573
+{
574
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
575
+
576
+ if (!vu_prop_modifiable(vus, errp)) {
577
+ return;
578
+ }
579
+
580
+ if (vus->node_name) {
581
+ g_free(vus->node_name);
582
+ }
583
+
584
+ vus->node_name = g_strdup(value);
585
+}
586
+
587
+static char *vu_get_node_name(Object *obj, Error **errp)
588
+{
589
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
590
+ return g_strdup(vus->node_name);
591
+}
592
+
593
+static void free_socket_addr(SocketAddress *addr)
594
+{
595
+ g_free(addr->u.q_unix.path);
596
+ g_free(addr);
597
+}
598
+
599
+static void vu_set_unix_socket(Object *obj, const char *value,
600
+ Error **errp)
601
+{
602
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
603
+
604
+ if (!vu_prop_modifiable(vus, errp)) {
605
+ return;
606
+ }
607
+
608
+ if (vus->addr) {
609
+ free_socket_addr(vus->addr);
610
+ }
611
+
612
+ SocketAddress *addr = g_new0(SocketAddress, 1);
613
+ addr->type = SOCKET_ADDRESS_TYPE_UNIX;
614
+ addr->u.q_unix.path = g_strdup(value);
615
+ vus->addr = addr;
616
+}
617
+
618
+static char *vu_get_unix_socket(Object *obj, Error **errp)
619
+{
620
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
621
+ return g_strdup(vus->addr->u.q_unix.path);
622
+}
623
+
624
+static bool vu_get_block_writable(Object *obj, Error **errp)
625
+{
626
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
627
+ return vus->writable;
628
+}
629
+
630
+static void vu_set_block_writable(Object *obj, bool value, Error **errp)
631
+{
632
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
633
+
634
+ if (!vu_prop_modifiable(vus, errp)) {
635
+ return;
636
+ }
637
+
638
+ vus->writable = value;
639
+}
640
+
641
+static void vu_get_blk_size(Object *obj, Visitor *v, const char *name,
642
+ void *opaque, Error **errp)
643
+{
644
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
645
+ uint32_t value = vus->blk_size;
646
+
647
+ visit_type_uint32(v, name, &value, errp);
648
+}
649
+
650
+static void vu_set_blk_size(Object *obj, Visitor *v, const char *name,
651
+ void *opaque, Error **errp)
652
+{
653
+ VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
654
+
655
+ Error *local_err = NULL;
656
+ uint32_t value;
657
+
658
+ if (!vu_prop_modifiable(vus, errp)) {
659
+ return;
660
+ }
661
+
662
+ visit_type_uint32(v, name, &value, &local_err);
663
+ if (local_err) {
664
+ goto out;
665
+ }
666
+
667
+ check_block_size(object_get_typename(obj), name, value, &local_err);
668
+ if (local_err) {
669
+ goto out;
670
+ }
671
+
672
+ vus->blk_size = value;
673
+
674
+out:
675
+ error_propagate(errp, local_err);
676
+}
677
+
678
+static void vhost_user_blk_server_instance_finalize(Object *obj)
679
+{
680
+ VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
681
+
682
+ vhost_user_blk_server_stop(vub);
683
+
684
+ /*
685
+ * Unlike object_property_add_str, object_class_property_add_str
686
+ * doesn't have a release method. Thus manual memory freeing is
687
+ * needed.
688
+ */
689
+ free_socket_addr(vub->addr);
690
+ g_free(vub->node_name);
691
+}
692
+
693
+static void vhost_user_blk_server_complete(UserCreatable *obj, Error **errp)
694
+{
695
+ VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
696
+
697
+ vhost_user_blk_server_start(vub, errp);
698
+}
699
+
700
+static void vhost_user_blk_server_class_init(ObjectClass *klass,
701
+ void *class_data)
702
+{
703
+ UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
704
+ ucc->complete = vhost_user_blk_server_complete;
705
+
706
+ object_class_property_add_bool(klass, "writable",
707
+ vu_get_block_writable,
708
+ vu_set_block_writable);
709
+
710
+ object_class_property_add_str(klass, "node-name",
711
+ vu_get_node_name,
712
+ vu_set_node_name);
713
+
714
+ object_class_property_add_str(klass, "unix-socket",
715
+ vu_get_unix_socket,
716
+ vu_set_unix_socket);
717
+
718
+ object_class_property_add(klass, "logical-block-size", "uint32",
719
+ vu_get_blk_size, vu_set_blk_size,
720
+ NULL, NULL);
721
+}
722
+
723
+static const TypeInfo vhost_user_blk_server_info = {
724
+ .name = TYPE_VHOST_USER_BLK_SERVER,
725
+ .parent = TYPE_OBJECT,
726
+ .instance_size = sizeof(VuBlockDev),
727
+ .instance_finalize = vhost_user_blk_server_instance_finalize,
728
+ .class_init = vhost_user_blk_server_class_init,
729
+ .interfaces = (InterfaceInfo[]) {
730
+ {TYPE_USER_CREATABLE},
731
+ {}
732
+ },
733
+};
734
+
735
+static void vhost_user_blk_server_register_types(void)
736
+{
737
+ type_register_static(&vhost_user_blk_server_info);
738
+}
739
+
740
+type_init(vhost_user_blk_server_register_types)
741
diff --git a/softmmu/vl.c b/softmmu/vl.c
742
index XXXXXXX..XXXXXXX 100644
743
--- a/softmmu/vl.c
744
+++ b/softmmu/vl.c
745
@@ -XXX,XX +XXX,XX @@ static bool object_create_initial(const char *type, QemuOpts *opts)
746
}
747
#endif
748
749
+ /* Reason: vhost-user-blk-server property "node-name" */
750
+ if (g_str_equal(type, "vhost-user-blk-server")) {
751
+ return false;
752
+ }
753
/*
754
* Reason: filter-* property "netdev" etc.
755
*/
756
diff --git a/block/meson.build b/block/meson.build
757
index XXXXXXX..XXXXXXX 100644
758
--- a/block/meson.build
759
+++ b/block/meson.build
760
@@ -XXX,XX +XXX,XX @@ block_ss.add(when: 'CONFIG_WIN32', if_true: files('file-win32.c', 'win32-aio.c')
761
block_ss.add(when: 'CONFIG_POSIX', if_true: [files('file-posix.c'), coref, iokit])
762
block_ss.add(when: 'CONFIG_LIBISCSI', if_true: files('iscsi-opts.c'))
763
block_ss.add(when: 'CONFIG_LINUX', if_true: files('nvme.c'))
764
+block_ss.add(when: 'CONFIG_LINUX', if_true: files('export/vhost-user-blk-server.c', '../contrib/libvhost-user/libvhost-user.c'))
765
block_ss.add(when: 'CONFIG_REPLICATION', if_true: files('replication.c'))
766
block_ss.add(when: 'CONFIG_SHEEPDOG', if_true: files('sheepdog.c'))
767
block_ss.add(when: ['CONFIG_LINUX_AIO', libaio], if_true: files('linux-aio.c'))
179
--
768
--
180
2.24.1
769
2.26.2
181
770
diff view generated by jsdifflib
1
From: Alexander Bulekov <alxndr@bu.edu>
1
From: Coiby Xu <coiby.xu@gmail.com>
2
2
3
A program might rely on functions implemented in vl.c, but implement its
3
Suggested-by: Stefano Garzarella <sgarzare@redhat.com>
4
own main(). By placing main into a separate source file, there are no
4
Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
5
complaints about duplicate main()s when linking against vl.o. For
6
example, the virtual-device fuzzer uses a main() provided by libfuzzer,
7
and needs to perform some initialization before running the softmmu
8
initialization. Now, main simply calls three vl.c functions which
9
handle the guest initialization, main loop and cleanup.
10
11
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
12
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
5
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
13
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
6
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
14
Message-id: 20200220041118.23264-3-alxndr@bu.edu
7
Message-id: 20200918080912.321299-8-coiby.xu@gmail.com
8
[Removed reference to vhost-user-blk-test.c, it will be sent in a
9
separate pull request.
10
--Stefan]
15
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
16
---
12
---
17
MAINTAINERS | 1 +
13
MAINTAINERS | 7 +++++++
18
Makefile.target | 2 +-
14
1 file changed, 7 insertions(+)
19
include/sysemu/sysemu.h | 4 ++++
20
softmmu/Makefile.objs | 1 +
21
softmmu/main.c | 53 +++++++++++++++++++++++++++++++++++++++++
22
softmmu/vl.c | 36 +++++++---------------------
23
6 files changed, 69 insertions(+), 28 deletions(-)
24
create mode 100644 softmmu/main.c
25
15
26
diff --git a/MAINTAINERS b/MAINTAINERS
16
diff --git a/MAINTAINERS b/MAINTAINERS
27
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
28
--- a/MAINTAINERS
18
--- a/MAINTAINERS
29
+++ b/MAINTAINERS
19
+++ b/MAINTAINERS
30
@@ -XXX,XX +XXX,XX @@ F: include/sysemu/runstate.h
20
@@ -XXX,XX +XXX,XX @@ L: qemu-block@nongnu.org
31
F: util/main-loop.c
21
S: Supported
32
F: util/qemu-timer.c
22
F: tests/image-fuzzer/
33
F: softmmu/vl.c
23
34
+F: softmmu/main.c
24
+Vhost-user block device backend server
35
F: qapi/run-state.json
25
+M: Coiby Xu <Coiby.Xu@gmail.com>
36
26
+S: Maintained
37
Human Monitor (HMP)
27
+F: block/export/vhost-user-blk-server.c
38
diff --git a/Makefile.target b/Makefile.target
28
+F: util/vhost-user-server.c
39
index XXXXXXX..XXXXXXX 100644
29
+F: tests/qtest/libqos/vhost-user-blk.c
40
--- a/Makefile.target
41
+++ b/Makefile.target
42
@@ -XXX,XX +XXX,XX @@ endif
43
COMMON_LDADDS = ../libqemuutil.a
44
45
# build either PROG or PROGW
46
-$(QEMU_PROG_BUILD): $(all-obj-y) $(COMMON_LDADDS)
47
+$(QEMU_PROG_BUILD): $(all-obj-y) $(COMMON_LDADDS) $(softmmu-main-y)
48
    $(call LINK, $(filter-out %.mak, $^))
49
ifdef CONFIG_DARWIN
50
    $(call quiet-command,Rez -append $(SRC_PATH)/pc-bios/qemu.rsrc -o $@,"REZ","$(TARGET_DIR)$@")
51
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
52
index XXXXXXX..XXXXXXX 100644
53
--- a/include/sysemu/sysemu.h
54
+++ b/include/sysemu/sysemu.h
55
@@ -XXX,XX +XXX,XX @@ QemuOpts *qemu_get_machine_opts(void);
56
57
bool defaults_enabled(void);
58
59
+void qemu_init(int argc, char **argv, char **envp);
60
+void qemu_main_loop(void);
61
+void qemu_cleanup(void);
62
+
30
+
63
extern QemuOptsList qemu_legacy_drive_opts;
31
Replication
64
extern QemuOptsList qemu_common_drive_opts;
32
M: Wen Congyang <wencongyang2@huawei.com>
65
extern QemuOptsList qemu_drive_opts;
33
M: Xie Changlong <xiechanglong.d@gmail.com>
66
diff --git a/softmmu/Makefile.objs b/softmmu/Makefile.objs
67
index XXXXXXX..XXXXXXX 100644
68
--- a/softmmu/Makefile.objs
69
+++ b/softmmu/Makefile.objs
70
@@ -XXX,XX +XXX,XX @@
71
+softmmu-main-y = softmmu/main.o
72
obj-y += vl.o
73
vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
74
diff --git a/softmmu/main.c b/softmmu/main.c
75
new file mode 100644
76
index XXXXXXX..XXXXXXX
77
--- /dev/null
78
+++ b/softmmu/main.c
79
@@ -XXX,XX +XXX,XX @@
80
+/*
81
+ * QEMU System Emulator
82
+ *
83
+ * Copyright (c) 2003-2020 Fabrice Bellard
84
+ *
85
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
86
+ * of this software and associated documentation files (the "Software"), to deal
87
+ * in the Software without restriction, including without limitation the rights
88
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
89
+ * copies of the Software, and to permit persons to whom the Software is
90
+ * furnished to do so, subject to the following conditions:
91
+ *
92
+ * The above copyright notice and this permission notice shall be included in
93
+ * all copies or substantial portions of the Software.
94
+ *
95
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
96
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
97
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
98
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
99
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
100
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
101
+ * THE SOFTWARE.
102
+ */
103
+
104
+#include "qemu/osdep.h"
105
+#include "qemu-common.h"
106
+#include "sysemu/sysemu.h"
107
+
108
+#ifdef CONFIG_SDL
109
+#if defined(__APPLE__) || defined(main)
110
+#include <SDL.h>
111
+int main(int argc, char **argv)
112
+{
113
+ return qemu_main(argc, argv, NULL);
114
+}
115
+#undef main
116
+#define main qemu_main
117
+#endif
118
+#endif /* CONFIG_SDL */
119
+
120
+#ifdef CONFIG_COCOA
121
+#undef main
122
+#define main qemu_main
123
+#endif /* CONFIG_COCOA */
124
+
125
+int main(int argc, char **argv, char **envp)
126
+{
127
+ qemu_init(argc, argv, envp);
128
+ qemu_main_loop();
129
+ qemu_cleanup();
130
+
131
+ return 0;
132
+}
133
diff --git a/softmmu/vl.c b/softmmu/vl.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/softmmu/vl.c
136
+++ b/softmmu/vl.c
137
@@ -XXX,XX +XXX,XX @@
138
#include "sysemu/seccomp.h"
139
#include "sysemu/tcg.h"
140
141
-#ifdef CONFIG_SDL
142
-#if defined(__APPLE__) || defined(main)
143
-#include <SDL.h>
144
-int qemu_main(int argc, char **argv, char **envp);
145
-int main(int argc, char **argv)
146
-{
147
- return qemu_main(argc, argv, NULL);
148
-}
149
-#undef main
150
-#define main qemu_main
151
-#endif
152
-#endif /* CONFIG_SDL */
153
-
154
-#ifdef CONFIG_COCOA
155
-#undef main
156
-#define main qemu_main
157
-#endif /* CONFIG_COCOA */
158
-
159
-
160
#include "qemu/error-report.h"
161
#include "qemu/sockets.h"
162
#include "sysemu/accel.h"
163
@@ -XXX,XX +XXX,XX @@ static bool main_loop_should_exit(void)
164
return false;
165
}
166
167
-static void main_loop(void)
168
+void qemu_main_loop(void)
169
{
170
#ifdef CONFIG_PROFILER
171
int64_t ti;
172
@@ -XXX,XX +XXX,XX @@ static void configure_accelerators(const char *progname)
173
}
174
}
175
176
-int main(int argc, char **argv, char **envp)
177
+void qemu_init(int argc, char **argv, char **envp)
178
{
179
int i;
180
int snapshot, linux_boot;
181
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
182
case QEMU_OPTION_watchdog:
183
if (watchdog) {
184
error_report("only one watchdog option may be given");
185
- return 1;
186
+ exit(1);
187
}
188
watchdog = optarg;
189
break;
190
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
191
parse_numa_opts(current_machine);
192
193
/* do monitor/qmp handling at preconfig state if requested */
194
- main_loop();
195
+ qemu_main_loop();
196
197
audio_init_audiodevs();
198
199
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
200
if (vmstate_dump_file) {
201
/* dump and exit */
202
dump_vmstate_json_to_file(vmstate_dump_file);
203
- return 0;
204
+ exit(0);
205
}
206
207
if (incoming) {
208
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
209
accel_setup_post(current_machine);
210
os_setup_post();
211
212
- main_loop();
213
+ return;
214
+}
215
216
+void qemu_cleanup(void)
217
+{
218
gdbserver_cleanup();
219
220
/*
221
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
222
qemu_chr_cleanup();
223
user_creatable_cleanup();
224
/* TODO: unref root container, check all devices are ok */
225
-
226
- return 0;
227
}
228
--
34
--
229
2.24.1
35
2.26.2
230
36
1
From: Alexander Bulekov <alxndr@bu.edu>
1
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2
2
Message-id: 20200924151549.913737-3-stefanha@redhat.com
3
The qtest-based fuzzer makes use of forking to reset state between
4
tests. Keep the callback enabled, so the call_rcu thread gets created
5
within the child process.
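
As background (an illustrative sketch only, not QEMU code, and the helper
names are invented), a pthread_atfork() child handler is the mechanism that
lets a helper thread such as the call_rcu thread be recreated inside a
forked child:

    #include <pthread.h>

    static void *helper_thread(void *opaque)
    {
        /* ... service work for this process ... */
        return NULL;
    }

    /* Runs in the child immediately after fork(); threads do not survive
     * fork(), so the helper must be started again here.
     */
    static void start_helper_in_child(void)
    {
        pthread_t tid;

        pthread_create(&tid, NULL, helper_thread, NULL);
    }

    static void register_atfork_handlers(void)
    {
        /* no prepare/parent handlers are needed for this sketch */
        pthread_atfork(NULL, NULL, start_helper_in_child);
    }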
6
7
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
8
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
9
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
10
Message-id: 20200220041118.23264-15-alxndr@bu.edu
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
3
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
---
4
---
13
softmmu/vl.c | 12 +++++++++++-
5
util/vhost-user-server.c | 2 +-
14
1 file changed, 11 insertions(+), 1 deletion(-)
6
1 file changed, 1 insertion(+), 1 deletion(-)
15
7
16
diff --git a/softmmu/vl.c b/softmmu/vl.c
8
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
17
index XXXXXXX..XXXXXXX 100644
9
index XXXXXXX..XXXXXXX 100644
18
--- a/softmmu/vl.c
10
--- a/util/vhost-user-server.c
19
+++ b/softmmu/vl.c
11
+++ b/util/vhost-user-server.c
20
@@ -XXX,XX +XXX,XX @@ void qemu_init(int argc, char **argv, char **envp)
12
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
21
set_memory_options(&ram_slots, &maxram_size, machine_class);
13
return false;
22
14
}
23
os_daemonize();
15
24
- rcu_disable_atfork();
16
- /* zero out unspecified fileds */
25
+
17
+ /* zero out unspecified fields */
26
+ /*
18
*server = (VuServer) {
27
+ * If QTest is enabled, keep the rcu_atfork enabled, since system processes
19
.listener = listener,
28
+ * may be forked for testing purposes (e.g. fork-server based fuzzing). The fork
20
.vu_iface = vu_iface,
29
+ * should happen before a single cpu instruction is executed, to prevent
30
+ * deadlocks. See commit 73c6e40, rcu: "completely disable pthread_atfork
31
+ * callbacks as soon as possible"
32
+ */
33
+ if (!qtest_enabled()) {
34
+ rcu_disable_atfork();
35
+ }
36
37
if (pid_file && !qemu_write_pidfile(pid_file, &err)) {
38
error_reportf_err(err, "cannot create PID file: ");
39
--
21
--
40
2.24.1
22
2.26.2
41
23
1
epoll_handler is a stack variable and must not be accessed after it goes
1
We already have access to the value with the correct type (ioc and sioc
2
out of scope:
2
are the same QIOChannel).
3
4
if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) {
5
AioHandler epoll_handler;
6
...
7
add_pollfd(&epoll_handler);
8
ret = aio_epoll(ctx, pollfds, npfd, timeout);
9
} ...
10
11
...
12
13
/* if we have any readable fds, dispatch event */
14
if (ret > 0) {
15
for (i = 0; i < npfd; i++) {
16
nodes[i]->pfd.revents = pollfds[i].revents;
17
}
18
}
19
20
nodes[0] is &epoll_handler, which has already gone out of scope.
21
22
There is no need to use pollfds[] for epoll. We don't need an
23
AioHandler for the epoll fd.
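
For illustration only (not part of this patch), the bug class in miniature:
a pointer to a block-scoped variable is stored somewhere that outlives the
block:

    int main(void)
    {
        int *nodes[1];

        {
            int epoll_handler = 42;     /* block-scoped, like the AioHandler */
            nodes[0] = &epoll_handler;  /* the pointer outlives the block */
        }

        /* epoll_handler's lifetime has ended; dereferencing nodes[0] here
         * would be undefined behavior, which is what happened to
         * nodes[i]->pfd.revents in aio_poll().
         */
        return 0;
    }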
24
3
25
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
4
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
26
Reviewed-by: Sergio Lopez <slp@redhat.com>
5
Message-id: 20200924151549.913737-4-stefanha@redhat.com
27
Message-id: 20200214171712.541358-2-stefanha@redhat.com
28
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
6
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
29
---
7
---
30
util/aio-posix.c | 20 ++++++++------------
8
util/vhost-user-server.c | 2 +-
31
1 file changed, 8 insertions(+), 12 deletions(-)
9
1 file changed, 1 insertion(+), 1 deletion(-)
32
10
33
diff --git a/util/aio-posix.c b/util/aio-posix.c
11
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
34
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
35
--- a/util/aio-posix.c
13
--- a/util/vhost-user-server.c
36
+++ b/util/aio-posix.c
14
+++ b/util/vhost-user-server.c
37
@@ -XXX,XX +XXX,XX @@ static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
15
@@ -XXX,XX +XXX,XX @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
38
}
16
server->ioc = QIO_CHANNEL(sioc);
17
object_ref(OBJECT(server->ioc));
18
qio_channel_attach_aio_context(server->ioc, server->ctx);
19
- qio_channel_set_blocking(QIO_CHANNEL(server->sioc), false, NULL);
20
+ qio_channel_set_blocking(server->ioc, false, NULL);
21
vu_client_start(server);
39
}
22
}
40
23
41
-static int aio_epoll(AioContext *ctx, GPollFD *pfds,
42
- unsigned npfd, int64_t timeout)
43
+static int aio_epoll(AioContext *ctx, int64_t timeout)
44
{
45
+ GPollFD pfd = {
46
+ .fd = ctx->epollfd,
47
+ .events = G_IO_IN | G_IO_OUT | G_IO_HUP | G_IO_ERR,
48
+ };
49
AioHandler *node;
50
int i, ret = 0;
51
struct epoll_event events[128];
52
53
- assert(npfd == 1);
54
- assert(pfds[0].fd == ctx->epollfd);
55
if (timeout > 0) {
56
- ret = qemu_poll_ns(pfds, npfd, timeout);
57
+ ret = qemu_poll_ns(&pfd, 1, timeout);
58
}
59
if (timeout <= 0 || ret > 0) {
60
ret = epoll_wait(ctx->epollfd, events,
61
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
62
63
/* wait until next event */
64
if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) {
65
- AioHandler epoll_handler;
66
-
67
- epoll_handler.pfd.fd = ctx->epollfd;
68
- epoll_handler.pfd.events = G_IO_IN | G_IO_OUT | G_IO_HUP | G_IO_ERR;
69
- npfd = 0;
70
- add_pollfd(&epoll_handler);
71
- ret = aio_epoll(ctx, pollfds, npfd, timeout);
72
+ npfd = 0; /* pollfds[] is not being used */
73
+ ret = aio_epoll(ctx, timeout);
74
} else {
75
ret = qemu_poll_ns(pollfds, npfd, timeout);
76
}
77
--
24
--
78
2.24.1
25
2.26.2
79
26
1
It is not necessary to scan all AioHandlers for deletion. Keep a list
1
Explicitly deleting watches is not necessary since libvhost-user calls
2
of deleted handlers instead of scanning the full list of all handlers.
2
remove_watch() during vu_deinit(). Add an assertion to check this
3
3
though.
4
The AioHandler->deleted field can be dropped. Let's check if the
5
handler has been inserted into the deleted list instead. Add a new
6
QLIST_IS_INSERTED() API for this check.
7
4
8
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
5
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
Reviewed-by: Sergio Lopez <slp@redhat.com>
6
Message-id: 20200924151549.913737-5-stefanha@redhat.com
10
Message-id: 20200214171712.541358-5-stefanha@redhat.com
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
---
8
---
13
include/block/aio.h | 6 ++++-
9
util/vhost-user-server.c | 19 ++++---------------
14
include/qemu/queue.h | 3 +++
10
1 file changed, 4 insertions(+), 15 deletions(-)
15
util/aio-posix.c | 53 +++++++++++++++++++++++++++++---------------
16
3 files changed, 43 insertions(+), 19 deletions(-)
17
11
18
diff --git a/include/block/aio.h b/include/block/aio.h
12
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
19
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
20
--- a/include/block/aio.h
14
--- a/util/vhost-user-server.c
21
+++ b/include/block/aio.h
15
+++ b/util/vhost-user-server.c
22
@@ -XXX,XX +XXX,XX @@ void qemu_aio_unref(void *p);
16
@@ -XXX,XX +XXX,XX @@ static void close_client(VuServer *server)
23
void qemu_aio_ref(void *p);
17
/* When this is set vu_client_trip will stop new processing vhost-user message */
24
18
server->sioc = NULL;
25
typedef struct AioHandler AioHandler;
19
26
+typedef QLIST_HEAD(, AioHandler) AioHandlerList;
20
- VuFdWatch *vu_fd_watch, *next;
27
typedef void QEMUBHFunc(void *opaque);
21
- QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
28
typedef bool AioPollFn(void *opaque);
22
- aio_set_fd_handler(server->ioc->ctx, vu_fd_watch->fd, true, NULL,
29
typedef void IOHandler(void *opaque);
23
- NULL, NULL, NULL);
30
@@ -XXX,XX +XXX,XX @@ struct AioContext {
24
- }
31
QemuRecMutex lock;
32
33
/* The list of registered AIO handlers. Protected by ctx->list_lock. */
34
- QLIST_HEAD(, AioHandler) aio_handlers;
35
+ AioHandlerList aio_handlers;
36
+
37
+ /* The list of AIO handlers to be deleted. Protected by ctx->list_lock. */
38
+ AioHandlerList deleted_aio_handlers;
39
40
/* Used to avoid unnecessary event_notifier_set calls in aio_notify;
41
* accessed with atomic primitives. If this field is 0, everything
42
diff --git a/include/qemu/queue.h b/include/qemu/queue.h
43
index XXXXXXX..XXXXXXX 100644
44
--- a/include/qemu/queue.h
45
+++ b/include/qemu/queue.h
46
@@ -XXX,XX +XXX,XX @@ struct { \
47
} \
48
} while (/*CONSTCOND*/0)
49
50
+/* Is elm in a list? */
51
+#define QLIST_IS_INSERTED(elm, field) ((elm)->field.le_prev != NULL)
52
+
53
#define QLIST_FOREACH(var, head, field) \
54
for ((var) = ((head)->lh_first); \
55
(var); \
56
diff --git a/util/aio-posix.c b/util/aio-posix.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/util/aio-posix.c
59
+++ b/util/aio-posix.c
60
@@ -XXX,XX +XXX,XX @@ struct AioHandler
61
AioPollFn *io_poll;
62
IOHandler *io_poll_begin;
63
IOHandler *io_poll_end;
64
- int deleted;
65
void *opaque;
66
bool is_external;
67
QLIST_ENTRY(AioHandler) node;
68
+ QLIST_ENTRY(AioHandler) node_deleted;
69
};
70
71
#ifdef CONFIG_EPOLL_CREATE1
72
@@ -XXX,XX +XXX,XX @@ static bool aio_epoll_try_enable(AioContext *ctx)
73
74
QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
75
int r;
76
- if (node->deleted || !node->pfd.events) {
77
+ if (QLIST_IS_INSERTED(node, node_deleted) || !node->pfd.events) {
78
continue;
79
}
80
event.events = epoll_events_from_pfd(node->pfd.events);
81
@@ -XXX,XX +XXX,XX @@ static AioHandler *find_aio_handler(AioContext *ctx, int fd)
82
AioHandler *node;
83
84
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
85
- if (node->pfd.fd == fd)
86
- if (!node->deleted)
87
+ if (node->pfd.fd == fd) {
88
+ if (!QLIST_IS_INSERTED(node, node_deleted)) {
89
return node;
90
+ }
91
+ }
92
}
93
94
return NULL;
95
@@ -XXX,XX +XXX,XX @@ static bool aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
96
97
/* If a read is in progress, just mark the node as deleted */
98
if (qemu_lockcnt_count(&ctx->list_lock)) {
99
- node->deleted = 1;
100
+ QLIST_INSERT_HEAD_RCU(&ctx->deleted_aio_handlers, node, node_deleted);
101
node->pfd.revents = 0;
102
return false;
103
}
104
@@ -XXX,XX +XXX,XX @@ static void poll_set_started(AioContext *ctx, bool started)
105
QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
106
IOHandler *fn;
107
108
- if (node->deleted) {
109
+ if (QLIST_IS_INSERTED(node, node_deleted)) {
110
continue;
111
}
112
113
@@ -XXX,XX +XXX,XX @@ bool aio_pending(AioContext *ctx)
114
return result;
115
}
116
117
+static void aio_free_deleted_handlers(AioContext *ctx)
118
+{
119
+ AioHandler *node;
120
+
121
+ if (QLIST_EMPTY_RCU(&ctx->deleted_aio_handlers)) {
122
+ return;
123
+ }
124
+ if (!qemu_lockcnt_dec_if_lock(&ctx->list_lock)) {
125
+ return; /* we are nested, let the parent do the freeing */
126
+ }
127
+
128
+ while ((node = QLIST_FIRST_RCU(&ctx->deleted_aio_handlers))) {
129
+ QLIST_REMOVE(node, node);
130
+ QLIST_REMOVE(node, node_deleted);
131
+ g_free(node);
132
+ }
133
+
134
+ qemu_lockcnt_inc_and_unlock(&ctx->list_lock);
135
+}
136
+
137
static bool aio_dispatch_handlers(AioContext *ctx)
138
{
139
AioHandler *node, *tmp;
140
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx)
141
revents = node->pfd.revents & node->pfd.events;
142
node->pfd.revents = 0;
143
144
- if (!node->deleted &&
145
+ if (!QLIST_IS_INSERTED(node, node_deleted) &&
146
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
147
aio_node_check(ctx, node->is_external) &&
148
node->io_read) {
149
@@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx)
150
progress = true;
151
}
152
}
153
- if (!node->deleted &&
154
+ if (!QLIST_IS_INSERTED(node, node_deleted) &&
155
(revents & (G_IO_OUT | G_IO_ERR)) &&
156
aio_node_check(ctx, node->is_external) &&
157
node->io_write) {
158
node->io_write(node->opaque);
159
progress = true;
160
}
161
-
25
-
162
- if (node->deleted) {
26
- while (!QTAILQ_EMPTY(&server->vu_fd_watches)) {
163
- if (qemu_lockcnt_dec_if_lock(&ctx->list_lock)) {
27
- QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
164
- QLIST_REMOVE(node, node);
28
- if (!vu_fd_watch->processing) {
165
- g_free(node);
29
- QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
166
- qemu_lockcnt_inc_and_unlock(&ctx->list_lock);
30
- g_free(vu_fd_watch);
167
- }
31
- }
168
- }
32
- }
33
- }
34
-
35
while (server->processing_msg) {
36
if (server->ioc->read_coroutine) {
37
server->ioc->read_coroutine = NULL;
38
@@ -XXX,XX +XXX,XX @@ static void close_client(VuServer *server)
169
}
39
}
170
40
171
return progress;
41
vu_deinit(&server->vu_dev);
172
@@ -XXX,XX +XXX,XX @@ void aio_dispatch(AioContext *ctx)
173
qemu_lockcnt_inc(&ctx->list_lock);
174
aio_bh_poll(ctx);
175
aio_dispatch_handlers(ctx);
176
+ aio_free_deleted_handlers(ctx);
177
qemu_lockcnt_dec(&ctx->list_lock);
178
179
timerlistgroup_run_timers(&ctx->tlg);
180
@@ -XXX,XX +XXX,XX @@ static bool run_poll_handlers_once(AioContext *ctx, int64_t *timeout)
181
RCU_READ_LOCK_GUARD();
182
183
QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
184
- if (!node->deleted && node->io_poll &&
185
+ if (!QLIST_IS_INSERTED(node, node_deleted) && node->io_poll &&
186
aio_node_check(ctx, node->is_external) &&
187
node->io_poll(node->opaque)) {
188
/*
189
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
190
191
if (!aio_epoll_enabled(ctx)) {
192
QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
193
- if (!node->deleted && node->pfd.events
194
+ if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events
195
&& aio_node_check(ctx, node->is_external)) {
196
add_pollfd(node);
197
}
198
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
199
progress |= aio_dispatch_handlers(ctx);
200
}
201
202
+ aio_free_deleted_handlers(ctx);
203
+
42
+
204
qemu_lockcnt_dec(&ctx->list_lock);
43
+ /* vu_deinit() should have called remove_watch() */
205
44
+ assert(QTAILQ_EMPTY(&server->vu_fd_watches));
206
progress |= timerlistgroup_run_timers(&ctx->tlg);
45
+
46
object_unref(OBJECT(sioc));
47
object_unref(OBJECT(server->ioc));
48
}
207
--
49
--
208
2.24.1
50
2.26.2
209
51
1
The ctx->first_bh list contains all created BHs, including those that
1
Only one struct is needed per request. Drop req_data and the separate
2
are not scheduled. The list is iterated by the event loop and therefore
2
VuBlockReq instance. Instead let vu_queue_pop() allocate everything at
3
has O(n) time complexity with respect to the number of created BHs.
3
once.
4
4
5
Rewrite BHs so that only scheduled or deleted BHs are enqueued.
5
This fixes the req_data memory leak in vu_block_virtio_process_req().
6
Only BHs that actually require action will be iterated.
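
As a sketch of the new design (illustrative only, with invented helper
names; the real logic lives in aio_bh_poll()), the per-BH decision boils
down to:

    enum {
        BH_PENDING   = 1 << 0,  /* enqueued, waiting for aio_bh_poll() */
        BH_SCHEDULED = 1 << 1,  /* invoke the callback */
        BH_DELETED   = 1 << 2,  /* delete without invoking the callback */
        BH_ONESHOT   = 1 << 3,  /* delete after invoking the callback */
        BH_IDLE      = 1 << 4,  /* schedule periodically while idle */
    };

    /* Run the callback only if the BH is scheduled and not deleted */
    static int bh_should_run(unsigned flags)
    {
        return (flags & (BH_SCHEDULED | BH_DELETED)) == BH_SCHEDULED;
    }

    /* Free the BH once it has been deleted or was a one-shot */
    static int bh_should_free(unsigned flags)
    {
        return (flags & (BH_DELETED | BH_ONESHOT)) != 0;
    }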
7
8
One semantic change is required: qemu_bh_delete() enqueues the BH and
9
therefore invokes aio_notify(). The
10
tests/test-aio.c:test_source_bh_delete_from_cb() test case assumed that
11
g_main_context_iteration(NULL, false) returns false after
12
qemu_bh_delete() but it now returns true for one iteration. Fix up the
13
test case.
14
15
This patch makes aio_compute_timeout() and aio_bh_poll() drop from a CPU
16
profile reported by perf-top(1). Previously they combined to 9% CPU
17
utilization when AioContext polling is commented out and the guest has 2
18
virtio-blk,num-queues=1 and 99 virtio-blk,num-queues=32 devices.
19
6
20
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
21
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
8
Message-id: 20200924151549.913737-6-stefanha@redhat.com
22
Message-id: 20200221093951.1414693-1-stefanha@redhat.com
23
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
24
---
10
---
25
include/block/aio.h | 20 +++-
11
block/export/vhost-user-blk-server.c | 68 +++++++++-------------------
26
tests/test-aio.c | 3 +-
12
1 file changed, 21 insertions(+), 47 deletions(-)
27
util/async.c | 237 ++++++++++++++++++++++++++------------------
28
3 files changed, 158 insertions(+), 102 deletions(-)
29
13
30
diff --git a/include/block/aio.h b/include/block/aio.h
14
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
31
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
32
--- a/include/block/aio.h
16
--- a/block/export/vhost-user-blk-server.c
33
+++ b/include/block/aio.h
17
+++ b/block/export/vhost-user-blk-server.c
34
@@ -XXX,XX +XXX,XX @@ struct ThreadPool;
18
@@ -XXX,XX +XXX,XX @@ struct virtio_blk_inhdr {
35
struct LinuxAioState;
36
struct LuringState;
37
38
+/*
39
+ * Each aio_bh_poll() call carves off a slice of the BH list, so that newly
40
+ * scheduled BHs are not processed until the next aio_bh_poll() call. All
41
+ * active aio_bh_poll() calls chain their slices together in a list, so that
42
+ * nested aio_bh_poll() calls process all scheduled bottom halves.
43
+ */
44
+typedef QSLIST_HEAD(, QEMUBH) BHList;
45
+typedef struct BHListSlice BHListSlice;
46
+struct BHListSlice {
47
+ BHList bh_list;
48
+ QSIMPLEQ_ENTRY(BHListSlice) next;
49
+};
50
+
51
struct AioContext {
52
GSource source;
53
54
@@ -XXX,XX +XXX,XX @@ struct AioContext {
55
*/
56
QemuLockCnt list_lock;
57
58
- /* Anchor of the list of Bottom Halves belonging to the context */
59
- struct QEMUBH *first_bh;
60
+ /* Bottom Halves pending aio_bh_poll() processing */
61
+ BHList bh_list;
62
+
63
+ /* Chained BH list slices for each nested aio_bh_poll() call */
64
+ QSIMPLEQ_HEAD(, BHListSlice) bh_slice_list;
65
66
/* Used by aio_notify.
67
*
68
diff --git a/tests/test-aio.c b/tests/test-aio.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/tests/test-aio.c
71
+++ b/tests/test-aio.c
72
@@ -XXX,XX +XXX,XX @@ static void test_source_bh_delete_from_cb(void)
73
g_assert_cmpint(data1.n, ==, data1.max);
74
g_assert(data1.bh == NULL);
75
76
- g_assert(!g_main_context_iteration(NULL, false));
77
+ assert(g_main_context_iteration(NULL, false));
78
+ assert(!g_main_context_iteration(NULL, false));
79
}
80
81
static void test_source_bh_delete_from_cb_many(void)
82
diff --git a/util/async.c b/util/async.c
83
index XXXXXXX..XXXXXXX 100644
84
--- a/util/async.c
85
+++ b/util/async.c
86
@@ -XXX,XX +XXX,XX @@
87
#include "block/thread-pool.h"
88
#include "qemu/main-loop.h"
89
#include "qemu/atomic.h"
90
+#include "qemu/rcu_queue.h"
91
#include "block/raw-aio.h"
92
#include "qemu/coroutine_int.h"
93
#include "trace.h"
94
@@ -XXX,XX +XXX,XX @@
95
/***********************************************************/
96
/* bottom halves (can be seen as timers which expire ASAP) */
97
98
+/* QEMUBH::flags values */
99
+enum {
100
+ /* Already enqueued and waiting for aio_bh_poll() */
101
+ BH_PENDING = (1 << 0),
102
+
103
+ /* Invoke the callback */
104
+ BH_SCHEDULED = (1 << 1),
105
+
106
+ /* Delete without invoking callback */
107
+ BH_DELETED = (1 << 2),
108
+
109
+ /* Delete after invoking callback */
110
+ BH_ONESHOT = (1 << 3),
111
+
112
+ /* Schedule periodically when the event loop is idle */
113
+ BH_IDLE = (1 << 4),
114
+};
115
+
116
struct QEMUBH {
117
AioContext *ctx;
118
QEMUBHFunc *cb;
119
void *opaque;
120
- QEMUBH *next;
121
- bool scheduled;
122
- bool idle;
123
- bool deleted;
124
+ QSLIST_ENTRY(QEMUBH) next;
125
+ unsigned flags;
126
};
19
};
127
20
128
+/* Called concurrently from any thread */
21
typedef struct VuBlockReq {
129
+static void aio_bh_enqueue(QEMUBH *bh, unsigned new_flags)
22
- VuVirtqElement *elem;
130
+{
23
+ VuVirtqElement elem;
131
+ AioContext *ctx = bh->ctx;
24
int64_t sector_num;
132
+ unsigned old_flags;
25
size_t size;
133
+
26
struct virtio_blk_inhdr *in;
134
+ /*
27
@@ -XXX,XX +XXX,XX @@ static void vu_block_req_complete(VuBlockReq *req)
135
+ * The memory barrier implicit in atomic_fetch_or makes sure that:
28
VuDev *vu_dev = &req->server->vu_dev;
136
+ * 1. idle & any writes needed by the callback are done before the
29
137
+ * locations are read in the aio_bh_poll.
30
/* IO size with 1 extra status byte */
138
+ * 2. ctx is loaded before the callback has a chance to execute and bh
31
- vu_queue_push(vu_dev, req->vq, req->elem, req->size + 1);
139
+ * could be freed.
32
+ vu_queue_push(vu_dev, req->vq, &req->elem, req->size + 1);
140
+ */
33
vu_queue_notify(vu_dev, req->vq);
141
+ old_flags = atomic_fetch_or(&bh->flags, BH_PENDING | new_flags);
34
142
+ if (!(old_flags & BH_PENDING)) {
35
- if (req->elem) {
143
+ QSLIST_INSERT_HEAD_ATOMIC(&ctx->bh_list, bh, next);
36
- free(req->elem);
144
+ }
145
+
146
+ aio_notify(ctx);
147
+}
148
+
149
+/* Only called from aio_bh_poll() and aio_ctx_finalize() */
150
+static QEMUBH *aio_bh_dequeue(BHList *head, unsigned *flags)
151
+{
152
+ QEMUBH *bh = QSLIST_FIRST_RCU(head);
153
+
154
+ if (!bh) {
155
+ return NULL;
156
+ }
157
+
158
+ QSLIST_REMOVE_HEAD(head, next);
159
+
160
+ /*
161
+ * The atomic_and is paired with aio_bh_enqueue(). The implicit memory
162
+ * barrier ensures that the callback sees all writes done by the scheduling
163
+ * thread. It also ensures that the scheduling thread sees the cleared
164
+ * flag before bh->cb has run, and thus will call aio_notify again if
165
+ * necessary.
166
+ */
167
+ *flags = atomic_fetch_and(&bh->flags,
168
+ ~(BH_PENDING | BH_SCHEDULED | BH_IDLE));
169
+ return bh;
170
+}
171
+
172
void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
173
{
174
QEMUBH *bh;
175
@@ -XXX,XX +XXX,XX @@ void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
176
.cb = cb,
177
.opaque = opaque,
178
};
179
- qemu_lockcnt_lock(&ctx->list_lock);
180
- bh->next = ctx->first_bh;
181
- bh->scheduled = 1;
182
- bh->deleted = 1;
183
- /* Make sure that the members are ready before putting bh into list */
184
- smp_wmb();
185
- ctx->first_bh = bh;
186
- qemu_lockcnt_unlock(&ctx->list_lock);
187
- aio_notify(ctx);
188
+ aio_bh_enqueue(bh, BH_SCHEDULED | BH_ONESHOT);
189
}
190
191
QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
192
@@ -XXX,XX +XXX,XX @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
193
.cb = cb,
194
.opaque = opaque,
195
};
196
- qemu_lockcnt_lock(&ctx->list_lock);
197
- bh->next = ctx->first_bh;
198
- /* Make sure that the members are ready before putting bh into list */
199
- smp_wmb();
200
- ctx->first_bh = bh;
201
- qemu_lockcnt_unlock(&ctx->list_lock);
202
return bh;
203
}
204
205
@@ -XXX,XX +XXX,XX @@ void aio_bh_call(QEMUBH *bh)
206
bh->cb(bh->opaque);
207
}
208
209
-/* Multiple occurrences of aio_bh_poll cannot be called concurrently.
210
- * The count in ctx->list_lock is incremented before the call, and is
211
- * not affected by the call.
212
- */
213
+/* Multiple occurrences of aio_bh_poll cannot be called concurrently. */
214
int aio_bh_poll(AioContext *ctx)
215
{
216
- QEMUBH *bh, **bhp, *next;
217
- int ret;
218
- bool deleted = false;
219
-
220
- ret = 0;
221
- for (bh = atomic_rcu_read(&ctx->first_bh); bh; bh = next) {
222
- next = atomic_rcu_read(&bh->next);
223
- /* The atomic_xchg is paired with the one in qemu_bh_schedule. The
224
- * implicit memory barrier ensures that the callback sees all writes
225
- * done by the scheduling thread. It also ensures that the scheduling
226
- * thread sees the zero before bh->cb has run, and thus will call
227
- * aio_notify again if necessary.
228
- */
229
- if (atomic_xchg(&bh->scheduled, 0)) {
230
+ BHListSlice slice;
231
+ BHListSlice *s;
232
+ int ret = 0;
233
+
234
+ QSLIST_MOVE_ATOMIC(&slice.bh_list, &ctx->bh_list);
235
+ QSIMPLEQ_INSERT_TAIL(&ctx->bh_slice_list, &slice, next);
236
+
237
+ while ((s = QSIMPLEQ_FIRST(&ctx->bh_slice_list))) {
238
+ QEMUBH *bh;
239
+ unsigned flags;
240
+
241
+ bh = aio_bh_dequeue(&s->bh_list, &flags);
242
+ if (!bh) {
243
+ QSIMPLEQ_REMOVE_HEAD(&ctx->bh_slice_list, next);
244
+ continue;
245
+ }
246
+
247
+ if ((flags & (BH_SCHEDULED | BH_DELETED)) == BH_SCHEDULED) {
248
/* Idle BHs don't count as progress */
249
- if (!bh->idle) {
250
+ if (!(flags & BH_IDLE)) {
251
ret = 1;
252
}
253
- bh->idle = 0;
254
aio_bh_call(bh);
255
}
256
- if (bh->deleted) {
257
- deleted = true;
258
+ if (flags & (BH_DELETED | BH_ONESHOT)) {
259
+ g_free(bh);
260
}
261
}
262
263
- /* remove deleted bhs */
264
- if (!deleted) {
265
- return ret;
266
- }
37
- }
267
-
38
-
268
- if (qemu_lockcnt_dec_if_lock(&ctx->list_lock)) {
39
- g_free(req);
269
- bhp = &ctx->first_bh;
40
+ free(req);
270
- while (*bhp) {
271
- bh = *bhp;
272
- if (bh->deleted && !bh->scheduled) {
273
- *bhp = bh->next;
274
- g_free(bh);
275
- } else {
276
- bhp = &bh->next;
277
- }
278
- }
279
- qemu_lockcnt_inc_and_unlock(&ctx->list_lock);
280
- }
281
return ret;
282
}
41
}
283
42
284
void qemu_bh_schedule_idle(QEMUBH *bh)
43
static VuBlockDev *get_vu_block_device_by_server(VuServer *server)
44
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_flush(VuBlockReq *req)
45
blk_co_flush(backend);
46
}
47
48
-struct req_data {
49
- VuServer *server;
50
- VuVirtq *vq;
51
- VuVirtqElement *elem;
52
-};
53
-
54
static void coroutine_fn vu_block_virtio_process_req(void *opaque)
285
{
55
{
286
- bh->idle = 1;
56
- struct req_data *data = opaque;
287
- /* Make sure that idle & any writes needed by the callback are done
57
- VuServer *server = data->server;
288
- * before the locations are read in the aio_bh_poll.
58
- VuVirtq *vq = data->vq;
289
- */
59
- VuVirtqElement *elem = data->elem;
290
- atomic_mb_set(&bh->scheduled, 1);
60
+ VuBlockReq *req = opaque;
291
+ aio_bh_enqueue(bh, BH_SCHEDULED | BH_IDLE);
61
+ VuServer *server = req->server;
62
+ VuVirtqElement *elem = &req->elem;
63
uint32_t type;
64
- VuBlockReq *req;
65
66
VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
67
BlockBackend *backend = vdev_blk->backend;
68
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
69
struct iovec *out_iov = elem->out_sg;
70
unsigned in_num = elem->in_num;
71
unsigned out_num = elem->out_num;
72
+
73
/* refer to hw/block/virtio_blk.c */
74
if (elem->out_num < 1 || elem->in_num < 1) {
75
error_report("virtio-blk request missing headers");
76
- free(elem);
77
- return;
78
+ goto err;
79
}
80
81
- req = g_new0(VuBlockReq, 1);
82
- req->server = server;
83
- req->vq = vq;
84
- req->elem = elem;
85
-
86
if (unlikely(iov_to_buf(out_iov, out_num, 0, &req->out,
87
sizeof(req->out)) != sizeof(req->out))) {
88
error_report("virtio-blk request outhdr too short");
89
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
90
91
err:
92
free(elem);
93
- g_free(req);
94
- return;
292
}
95
}
293
96
294
void qemu_bh_schedule(QEMUBH *bh)
97
static void vu_block_process_vq(VuDev *vu_dev, int idx)
295
{
98
{
296
- AioContext *ctx;
99
- VuServer *server;
100
- VuVirtq *vq;
101
- struct req_data *req_data;
102
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
103
+ VuVirtq *vq = vu_get_queue(vu_dev, idx);
104
105
- server = container_of(vu_dev, VuServer, vu_dev);
106
- assert(server);
297
-
107
-
298
- ctx = bh->ctx;
108
- vq = vu_get_queue(vu_dev, idx);
299
- bh->idle = 0;
109
- assert(vq);
300
- /* The memory barrier implicit in atomic_xchg makes sure that:
110
- VuVirtqElement *elem;
301
- * 1. idle & any writes needed by the callback are done before the
111
while (1) {
302
- * locations are read in the aio_bh_poll.
112
- elem = vu_queue_pop(vu_dev, vq, sizeof(VuVirtqElement) +
303
- * 2. ctx is loaded before scheduled is set and the callback has a chance
113
- sizeof(VuBlockReq));
304
- * to execute.
114
- if (elem) {
305
- */
115
- req_data = g_new0(struct req_data, 1);
306
- if (atomic_xchg(&bh->scheduled, 1) == 0) {
116
- req_data->server = server;
307
- aio_notify(ctx);
117
- req_data->vq = vq;
308
- }
118
- req_data->elem = elem;
309
+ aio_bh_enqueue(bh, BH_SCHEDULED);
119
- Coroutine *co = qemu_coroutine_create(vu_block_virtio_process_req,
120
- req_data);
121
- aio_co_enter(server->ioc->ctx, co);
122
- } else {
123
+ VuBlockReq *req;
124
+
125
+ req = vu_queue_pop(vu_dev, vq, sizeof(VuBlockReq));
126
+ if (!req) {
127
break;
128
}
129
+
130
+ req->server = server;
131
+ req->vq = vq;
132
+
133
+ Coroutine *co =
134
+ qemu_coroutine_create(vu_block_virtio_process_req, req);
135
+ qemu_coroutine_enter(co);
136
}
310
}
137
}
311
138
312
-
313
/* This func is async.
314
*/
315
void qemu_bh_cancel(QEMUBH *bh)
316
{
317
- atomic_mb_set(&bh->scheduled, 0);
318
+ atomic_and(&bh->flags, ~BH_SCHEDULED);
319
}
320
321
/* This func is async.The bottom half will do the delete action at the finial
322
@@ -XXX,XX +XXX,XX @@ void qemu_bh_cancel(QEMUBH *bh)
323
*/
324
void qemu_bh_delete(QEMUBH *bh)
325
{
326
- bh->scheduled = 0;
327
- bh->deleted = 1;
328
+ aio_bh_enqueue(bh, BH_DELETED);
329
}
330
331
-int64_t
332
-aio_compute_timeout(AioContext *ctx)
333
+static int64_t aio_compute_bh_timeout(BHList *head, int timeout)
334
{
335
- int64_t deadline;
336
- int timeout = -1;
337
QEMUBH *bh;
338
339
- for (bh = atomic_rcu_read(&ctx->first_bh); bh;
340
- bh = atomic_rcu_read(&bh->next)) {
341
- if (bh->scheduled) {
342
- if (bh->idle) {
343
+ QSLIST_FOREACH_RCU(bh, head, next) {
344
+ if ((bh->flags & (BH_SCHEDULED | BH_DELETED)) == BH_SCHEDULED) {
345
+ if (bh->flags & BH_IDLE) {
346
/* idle bottom halves will be polled at least
347
* every 10ms */
348
timeout = 10000000;
349
@@ -XXX,XX +XXX,XX @@ aio_compute_timeout(AioContext *ctx)
350
}
351
}
352
353
+ return timeout;
354
+}
355
+
356
+int64_t
357
+aio_compute_timeout(AioContext *ctx)
358
+{
359
+ BHListSlice *s;
360
+ int64_t deadline;
361
+ int timeout = -1;
362
+
363
+ timeout = aio_compute_bh_timeout(&ctx->bh_list, timeout);
364
+ if (timeout == 0) {
365
+ return 0;
366
+ }
367
+
368
+ QSIMPLEQ_FOREACH(s, &ctx->bh_slice_list, next) {
369
+ timeout = aio_compute_bh_timeout(&s->bh_list, timeout);
370
+ if (timeout == 0) {
371
+ return 0;
372
+ }
373
+ }
374
+
375
deadline = timerlistgroup_deadline_ns(&ctx->tlg);
376
if (deadline == 0) {
377
return 0;
378
@@ -XXX,XX +XXX,XX @@ aio_ctx_check(GSource *source)
379
{
380
AioContext *ctx = (AioContext *) source;
381
QEMUBH *bh;
382
+ BHListSlice *s;
383
384
atomic_and(&ctx->notify_me, ~1);
385
aio_notify_accept(ctx);
386
387
- for (bh = ctx->first_bh; bh; bh = bh->next) {
388
- if (bh->scheduled) {
389
+ QSLIST_FOREACH_RCU(bh, &ctx->bh_list, next) {
390
+ if ((bh->flags & (BH_SCHEDULED | BH_DELETED)) == BH_SCHEDULED) {
391
return true;
392
}
393
}
394
+
395
+ QSIMPLEQ_FOREACH(s, &ctx->bh_slice_list, next) {
396
+ QSLIST_FOREACH_RCU(bh, &s->bh_list, next) {
397
+ if ((bh->flags & (BH_SCHEDULED | BH_DELETED)) == BH_SCHEDULED) {
398
+ return true;
399
+ }
400
+ }
401
+ }
402
return aio_pending(ctx) || (timerlistgroup_deadline_ns(&ctx->tlg) == 0);
403
}
404
405
@@ -XXX,XX +XXX,XX @@ static void
406
aio_ctx_finalize(GSource *source)
407
{
408
AioContext *ctx = (AioContext *) source;
409
+ QEMUBH *bh;
410
+ unsigned flags;
411
412
thread_pool_free(ctx->thread_pool);
413
414
@@ -XXX,XX +XXX,XX @@ aio_ctx_finalize(GSource *source)
415
assert(QSLIST_EMPTY(&ctx->scheduled_coroutines));
416
qemu_bh_delete(ctx->co_schedule_bh);
417
418
- qemu_lockcnt_lock(&ctx->list_lock);
419
- assert(!qemu_lockcnt_count(&ctx->list_lock));
420
- while (ctx->first_bh) {
421
- QEMUBH *next = ctx->first_bh->next;
422
+ /* There must be no aio_bh_poll() calls going on */
423
+ assert(QSIMPLEQ_EMPTY(&ctx->bh_slice_list));
424
425
+ while ((bh = aio_bh_dequeue(&ctx->bh_list, &flags))) {
426
/* qemu_bh_delete() must have been called on BHs in this AioContext */
427
- assert(ctx->first_bh->deleted);
428
+ assert(flags & BH_DELETED);
429
430
- g_free(ctx->first_bh);
431
- ctx->first_bh = next;
432
+ g_free(bh);
433
}
434
- qemu_lockcnt_unlock(&ctx->list_lock);
435
436
aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL);
437
event_notifier_cleanup(&ctx->notifier);
438
@@ -XXX,XX +XXX,XX @@ AioContext *aio_context_new(Error **errp)
439
AioContext *ctx;
440
441
ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
442
+ QSLIST_INIT(&ctx->bh_list);
443
+ QSIMPLEQ_INIT(&ctx->bh_slice_list);
444
aio_context_setup(ctx);
445
446
ret = event_notifier_init(&ctx->notifier, false);
447
--
139
--
448
2.24.1
140
2.26.2
449
141
1
From: Alexander Bulekov <alxndr@bu.edu>
1
The device panic notifier callback is not used. Drop it.
2
2
3
Most qos-related objects were specified in the qos-test-obj-y variable.
3
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
4
qos-test-obj-y also included qos-test.o which defines a main().
4
Message-id: 20200924151549.913737-7-stefanha@redhat.com
5
This made it difficult to repurpose qos-test-obj-y to link anything
6
besides tests/qos-test against libqos. This change separates objects that
7
are libqos-specific and ones that are qos-test specific into different
8
variables.
9
10
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
11
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
12
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
13
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
14
Message-id: 20200220041118.23264-11-alxndr@bu.edu
15
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
5
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
16
---
6
---
17
tests/qtest/Makefile.include | 71 ++++++++++++++++++------------------
7
util/vhost-user-server.h | 3 ---
18
1 file changed, 36 insertions(+), 35 deletions(-)
8
block/export/vhost-user-blk-server.c | 3 +--
9
util/vhost-user-server.c | 6 ------
10
3 files changed, 1 insertion(+), 11 deletions(-)
19
11
20
diff --git a/tests/qtest/Makefile.include b/tests/qtest/Makefile.include
12
diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
21
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
22
--- a/tests/qtest/Makefile.include
14
--- a/util/vhost-user-server.h
23
+++ b/tests/qtest/Makefile.include
15
+++ b/util/vhost-user-server.h
24
@@ -XXX,XX +XXX,XX @@ check-qtest-s390x-y += migration-test
16
@@ -XXX,XX +XXX,XX @@ typedef struct VuFdWatch {
25
# libqos / qgraph :
17
} VuFdWatch;
26
libqgraph-obj-y = tests/qtest/libqos/qgraph.o
18
27
19
typedef struct VuServer VuServer;
28
-libqos-obj-y = $(libqgraph-obj-y) tests/qtest/libqos/pci.o tests/qtest/libqos/fw_cfg.o
20
-typedef void DevicePanicNotifierFn(VuServer *server);
29
-libqos-obj-y += tests/qtest/libqos/malloc.o
21
30
-libqos-obj-y += tests/qtest/libqos/libqos.o
22
struct VuServer {
31
-libqos-spapr-obj-y = $(libqos-obj-y) tests/qtest/libqos/malloc-spapr.o
23
QIONetListener *listener;
32
+libqos-core-obj-y = $(libqgraph-obj-y) tests/qtest/libqos/pci.o tests/qtest/libqos/fw_cfg.o
24
AioContext *ctx;
33
+libqos-core-obj-y += tests/qtest/libqos/malloc.o
25
- DevicePanicNotifierFn *device_panic_notifier;
34
+libqos-core-obj-y += tests/qtest/libqos/libqos.o
26
int max_queues;
35
+libqos-spapr-obj-y = $(libqos-core-obj-y) tests/qtest/libqos/malloc-spapr.o
27
const VuDevIface *vu_iface;
36
libqos-spapr-obj-y += tests/qtest/libqos/libqos-spapr.o
28
VuDev vu_dev;
37
libqos-spapr-obj-y += tests/qtest/libqos/rtas.o
29
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
38
libqos-spapr-obj-y += tests/qtest/libqos/pci-spapr.o
30
SocketAddress *unix_socket,
39
-libqos-pc-obj-y = $(libqos-obj-y) tests/qtest/libqos/pci-pc.o
31
AioContext *ctx,
40
+libqos-pc-obj-y = $(libqos-core-obj-y) tests/qtest/libqos/pci-pc.o
32
uint16_t max_queues,
41
libqos-pc-obj-y += tests/qtest/libqos/malloc-pc.o tests/qtest/libqos/libqos-pc.o
33
- DevicePanicNotifierFn *device_panic_notifier,
42
libqos-pc-obj-y += tests/qtest/libqos/ahci.o
34
const VuDevIface *vu_iface,
43
libqos-usb-obj-y = $(libqos-spapr-obj-y) $(libqos-pc-obj-y) tests/qtest/libqos/usb.o
35
Error **errp);
44
36
45
# qos devices:
37
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
46
-qos-test-obj-y = tests/qtest/qos-test.o $(libqgraph-obj-y)
38
index XXXXXXX..XXXXXXX 100644
47
-qos-test-obj-y += $(libqos-pc-obj-y) $(libqos-spapr-obj-y)
39
--- a/block/export/vhost-user-blk-server.c
48
-qos-test-obj-y += tests/qtest/libqos/e1000e.o
40
+++ b/block/export/vhost-user-blk-server.c
49
-qos-test-obj-y += tests/qtest/libqos/i2c.o
41
@@ -XXX,XX +XXX,XX @@ static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
50
-qos-test-obj-y += tests/qtest/libqos/i2c-imx.o
42
ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));
51
-qos-test-obj-y += tests/qtest/libqos/i2c-omap.o
43
52
-qos-test-obj-y += tests/qtest/libqos/sdhci.o
44
if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
53
-qos-test-obj-y += tests/qtest/libqos/tpci200.o
45
- VHOST_USER_BLK_MAX_QUEUES,
54
-qos-test-obj-y += tests/qtest/libqos/virtio.o
46
- NULL, &vu_block_iface,
55
-qos-test-obj-$(CONFIG_VIRTFS) += tests/qtest/libqos/virtio-9p.o
47
+ VHOST_USER_BLK_MAX_QUEUES, &vu_block_iface,
56
-qos-test-obj-y += tests/qtest/libqos/virtio-balloon.o
48
errp)) {
57
-qos-test-obj-y += tests/qtest/libqos/virtio-blk.o
49
goto error;
58
-qos-test-obj-y += tests/qtest/libqos/virtio-mmio.o
50
}
59
-qos-test-obj-y += tests/qtest/libqos/virtio-net.o
51
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
60
-qos-test-obj-y += tests/qtest/libqos/virtio-pci.o
52
index XXXXXXX..XXXXXXX 100644
61
-qos-test-obj-y += tests/qtest/libqos/virtio-pci-modern.o
53
--- a/util/vhost-user-server.c
62
-qos-test-obj-y += tests/qtest/libqos/virtio-rng.o
54
+++ b/util/vhost-user-server.c
63
-qos-test-obj-y += tests/qtest/libqos/virtio-scsi.o
55
@@ -XXX,XX +XXX,XX @@ static void panic_cb(VuDev *vu_dev, const char *buf)
64
-qos-test-obj-y += tests/qtest/libqos/virtio-serial.o
56
close_client(server);
65
+libqos-obj-y = $(libqgraph-obj-y)
57
}
66
+libqos-obj-y += $(libqos-pc-obj-y) $(libqos-spapr-obj-y)
58
67
+libqos-obj-y += tests/qtest/libqos/e1000e.o
59
- if (server->device_panic_notifier) {
68
+libqos-obj-y += tests/qtest/libqos/i2c.o
60
- server->device_panic_notifier(server);
69
+libqos-obj-y += tests/qtest/libqos/i2c-imx.o
61
- }
70
+libqos-obj-y += tests/qtest/libqos/i2c-omap.o
62
-
71
+libqos-obj-y += tests/qtest/libqos/sdhci.o
63
/*
72
+libqos-obj-y += tests/qtest/libqos/tpci200.o
64
* Set the callback function for network listener so another
73
+libqos-obj-y += tests/qtest/libqos/virtio.o
65
* vhost-user client can connect to this server
74
+libqos-obj-$(CONFIG_VIRTFS) += tests/qtest/libqos/virtio-9p.o
66
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
75
+libqos-obj-y += tests/qtest/libqos/virtio-balloon.o
67
SocketAddress *socket_addr,
76
+libqos-obj-y += tests/qtest/libqos/virtio-blk.o
68
AioContext *ctx,
77
+libqos-obj-y += tests/qtest/libqos/virtio-mmio.o
69
uint16_t max_queues,
78
+libqos-obj-y += tests/qtest/libqos/virtio-net.o
70
- DevicePanicNotifierFn *device_panic_notifier,
79
+libqos-obj-y += tests/qtest/libqos/virtio-pci.o
71
const VuDevIface *vu_iface,
80
+libqos-obj-y += tests/qtest/libqos/virtio-pci-modern.o
72
Error **errp)
81
+libqos-obj-y += tests/qtest/libqos/virtio-rng.o
73
{
82
+libqos-obj-y += tests/qtest/libqos/virtio-scsi.o
74
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
83
+libqos-obj-y += tests/qtest/libqos/virtio-serial.o
75
.vu_iface = vu_iface,
84
76
.max_queues = max_queues,
85
# qos machines:
77
.ctx = ctx,
86
-qos-test-obj-y += tests/qtest/libqos/aarch64-xlnx-zcu102-machine.o
78
- .device_panic_notifier = device_panic_notifier,
87
-qos-test-obj-y += tests/qtest/libqos/arm-imx25-pdk-machine.o
79
};
88
-qos-test-obj-y += tests/qtest/libqos/arm-n800-machine.o
80
89
-qos-test-obj-y += tests/qtest/libqos/arm-raspi2-machine.o
81
qio_net_listener_set_name(server->listener, "vhost-user-backend-listener");
90
-qos-test-obj-y += tests/qtest/libqos/arm-sabrelite-machine.o
91
-qos-test-obj-y += tests/qtest/libqos/arm-smdkc210-machine.o
92
-qos-test-obj-y += tests/qtest/libqos/arm-virt-machine.o
93
-qos-test-obj-y += tests/qtest/libqos/arm-xilinx-zynq-a9-machine.o
94
-qos-test-obj-y += tests/qtest/libqos/ppc64_pseries-machine.o
95
-qos-test-obj-y += tests/qtest/libqos/x86_64_pc-machine.o
96
+libqos-obj-y += tests/qtest/libqos/aarch64-xlnx-zcu102-machine.o
97
+libqos-obj-y += tests/qtest/libqos/arm-imx25-pdk-machine.o
98
+libqos-obj-y += tests/qtest/libqos/arm-n800-machine.o
99
+libqos-obj-y += tests/qtest/libqos/arm-raspi2-machine.o
100
+libqos-obj-y += tests/qtest/libqos/arm-sabrelite-machine.o
101
+libqos-obj-y += tests/qtest/libqos/arm-smdkc210-machine.o
102
+libqos-obj-y += tests/qtest/libqos/arm-virt-machine.o
103
+libqos-obj-y += tests/qtest/libqos/arm-xilinx-zynq-a9-machine.o
104
+libqos-obj-y += tests/qtest/libqos/ppc64_pseries-machine.o
105
+libqos-obj-y += tests/qtest/libqos/x86_64_pc-machine.o
106
107
# qos tests:
108
+qos-test-obj-y += tests/qtest/qos-test.o
109
qos-test-obj-y += tests/qtest/ac97-test.o
110
qos-test-obj-y += tests/qtest/ds1338-test.o
111
qos-test-obj-y += tests/qtest/e1000-test.o
112
@@ -XXX,XX +XXX,XX @@ check-unit-y += tests/test-qgraph$(EXESUF)
113
tests/test-qgraph$(EXESUF): tests/test-qgraph.o $(libqgraph-obj-y)
114
115
check-qtest-generic-y += qos-test
116
-tests/qtest/qos-test$(EXESUF): $(qos-test-obj-y)
117
+tests/qtest/qos-test$(EXESUF): $(qos-test-obj-y) $(libqos-obj-y)
118
119
# QTest dependencies:
120
tests/qtest/qmp-test$(EXESUF): tests/qtest/qmp-test.o
121
--
82
--
122
2.24.1
83
2.26.2
123
84
1
QLIST_REMOVE() assumes the element is in a list. It also leaves the
1
fds[] is leaked when qio_channel_readv_full() fails.
2
element's linked list pointers dangling.
3
2
4
Introduce a safe version of QLIST_REMOVE() and convert open-coded
3
Use vmsg->fds[] instead of keeping a local fds[] array. Then we can
5
instances of this pattern.
4
reuse goto fail to clean up fds. vmsg->fd_num must be zeroed before the
5
loop to make this safe.
6
6
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Reviewed-by: Sergio Lopez <slp@redhat.com>
8
Message-id: 20200924151549.913737-8-stefanha@redhat.com
9
Message-id: 20200214171712.541358-4-stefanha@redhat.com
10
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
11
---
10
---
12
block.c | 5 +----
11
util/vhost-user-server.c | 50 ++++++++++++++++++----------------------
13
chardev/spice.c | 4 +---
12
1 file changed, 23 insertions(+), 27 deletions(-)
14
include/qemu/queue.h | 14 ++++++++++++++
15
3 files changed, 16 insertions(+), 7 deletions(-)
16
13
17
diff --git a/block.c b/block.c
14
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
18
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
19
--- a/block.c
16
--- a/util/vhost-user-server.c
20
+++ b/block.c
17
+++ b/util/vhost-user-server.c
21
@@ -XXX,XX +XXX,XX @@ BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
18
@@ -XXX,XX +XXX,XX @@ vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
22
19
};
23
static void bdrv_detach_child(BdrvChild *child)
20
int rc, read_bytes = 0;
24
{
21
Error *local_err = NULL;
25
- if (child->next.le_prev) {
22
- /*
26
- QLIST_REMOVE(child, next);
23
- * Store fds/nfds returned from qio_channel_readv_full into
27
- child->next.le_prev = NULL;
24
- * temporary variables.
28
- }
25
- *
29
+ QLIST_SAFE_REMOVE(child, next);
26
- * VhostUserMsg is a packed structure, gcc will complain about passing
30
27
- * pointer to a packed structure member if we pass &VhostUserMsg.fd_num
31
bdrv_replace_child(child, NULL);
28
- * and &VhostUserMsg.fds directly when calling qio_channel_readv_full,
32
29
- * thus two temporary variables nfds and fds are used here.
33
diff --git a/chardev/spice.c b/chardev/spice.c
30
- */
34
index XXXXXXX..XXXXXXX 100644
31
- size_t nfds = 0, nfds_t = 0;
35
--- a/chardev/spice.c
32
const size_t max_fds = G_N_ELEMENTS(vmsg->fds);
36
+++ b/chardev/spice.c
33
- int *fds_t = NULL;
37
@@ -XXX,XX +XXX,XX @@ static void char_spice_finalize(Object *obj)
34
VuServer *server = container_of(vu_dev, VuServer, vu_dev);
38
35
QIOChannel *ioc = server->ioc;
39
vmc_unregister_interface(s);
36
40
37
+ vmsg->fd_num = 0;
41
- if (s->next.le_prev) {
38
if (!ioc) {
42
- QLIST_REMOVE(s, next);
39
error_report_err(local_err);
43
- }
40
goto fail;
44
+ QLIST_SAFE_REMOVE(s, next);
41
@@ -XXX,XX +XXX,XX @@ vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
45
42
46
g_free((char *)s->sin.subtype);
43
assert(qemu_in_coroutine());
47
g_free((char *)s->sin.portname);
44
do {
48
diff --git a/include/qemu/queue.h b/include/qemu/queue.h
45
+ size_t nfds = 0;
49
index XXXXXXX..XXXXXXX 100644
46
+ int *fds = NULL;
50
--- a/include/qemu/queue.h
51
+++ b/include/qemu/queue.h
52
@@ -XXX,XX +XXX,XX @@ struct { \
53
*(elm)->field.le_prev = (elm)->field.le_next; \
54
} while (/*CONSTCOND*/0)
55
56
+/*
57
+ * Like QLIST_REMOVE() but safe to call when elm is not in a list
58
+ */
59
+#define QLIST_SAFE_REMOVE(elm, field) do { \
60
+ if ((elm)->field.le_prev != NULL) { \
61
+ if ((elm)->field.le_next != NULL) \
62
+ (elm)->field.le_next->field.le_prev = \
63
+ (elm)->field.le_prev; \
64
+ *(elm)->field.le_prev = (elm)->field.le_next; \
65
+ (elm)->field.le_next = NULL; \
66
+ (elm)->field.le_prev = NULL; \
67
+ } \
68
+} while (/*CONSTCOND*/0)
69
+
47
+
70
#define QLIST_FOREACH(var, head, field) \
48
/*
71
for ((var) = ((head)->lh_first); \
49
* qio_channel_readv_full may have short reads, keeping calling it
72
(var); \
50
* until getting VHOST_USER_HDR_SIZE or 0 bytes in total
51
*/
52
- rc = qio_channel_readv_full(ioc, &iov, 1, &fds_t, &nfds_t, &local_err);
53
+ rc = qio_channel_readv_full(ioc, &iov, 1, &fds, &nfds, &local_err);
54
if (rc < 0) {
55
if (rc == QIO_CHANNEL_ERR_BLOCK) {
56
+ assert(local_err == NULL);
57
qio_channel_yield(ioc, G_IO_IN);
58
continue;
59
} else {
60
error_report_err(local_err);
61
- return false;
62
+ goto fail;
63
}
64
}
65
- read_bytes += rc;
66
- if (nfds_t > 0) {
67
- if (nfds + nfds_t > max_fds) {
68
+
69
+ if (nfds > 0) {
70
+ if (vmsg->fd_num + nfds > max_fds) {
71
error_report("A maximum of %zu fds are allowed, "
72
"however got %zu fds now",
73
- max_fds, nfds + nfds_t);
74
+ max_fds, vmsg->fd_num + nfds);
75
+ g_free(fds);
76
goto fail;
77
}
78
- memcpy(vmsg->fds + nfds, fds_t,
79
- nfds_t *sizeof(vmsg->fds[0]));
80
- nfds += nfds_t;
81
- g_free(fds_t);
82
+ memcpy(vmsg->fds + vmsg->fd_num, fds, nfds * sizeof(vmsg->fds[0]));
83
+ vmsg->fd_num += nfds;
84
+ g_free(fds);
85
}
86
- if (read_bytes == VHOST_USER_HDR_SIZE || rc == 0) {
87
- break;
88
+
89
+ if (rc == 0) { /* socket closed */
90
+ goto fail;
91
}
92
- iov.iov_base = (char *)vmsg + read_bytes;
93
- iov.iov_len = VHOST_USER_HDR_SIZE - read_bytes;
94
- } while (true);
95
96
- vmsg->fd_num = nfds;
97
+ iov.iov_base += rc;
98
+ iov.iov_len -= rc;
99
+ read_bytes += rc;
100
+ } while (read_bytes != VHOST_USER_HDR_SIZE);
101
+
102
/* qio_channel_readv_full will make socket fds blocking, unblock them */
103
vmsg_unblock_fds(vmsg);
104
if (vmsg->size > sizeof(vmsg->payload)) {
73
--
105
--
74
2.24.1
106
2.26.2
75
107
1
Don't pass the nanosecond timeout into epoll_wait(), which expects
1
Unexpected EOF is an error that must be reported.
2
milliseconds.
3
4
The epoll_wait() timeout value does not matter if qemu_poll_ns()
5
determined that the poll fd is ready, but passing a value in the wrong
6
units is still ugly. Pass a 0 timeout to epoll_wait() instead.
7
2
8
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
3
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
Reviewed-by: Sergio Lopez <slp@redhat.com>
4
Message-id: 20200924151549.913737-9-stefanha@redhat.com
10
Message-id: 20200214171712.541358-3-stefanha@redhat.com
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
5
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
12
---
6
---
13
util/aio-posix.c | 3 +++
7
util/vhost-user-server.c | 6 ++++--
14
1 file changed, 3 insertions(+)
8
1 file changed, 4 insertions(+), 2 deletions(-)
15
9
16
diff --git a/util/aio-posix.c b/util/aio-posix.c
10
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
17
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
18
--- a/util/aio-posix.c
12
--- a/util/vhost-user-server.c
19
+++ b/util/aio-posix.c
13
+++ b/util/vhost-user-server.c
20
@@ -XXX,XX +XXX,XX @@ static int aio_epoll(AioContext *ctx, int64_t timeout)
14
@@ -XXX,XX +XXX,XX @@ vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
21
15
};
22
if (timeout > 0) {
16
if (vmsg->size) {
23
ret = qemu_poll_ns(&pfd, 1, timeout);
17
rc = qio_channel_readv_all_eof(ioc, &iov_payload, 1, &local_err);
24
+ if (ret > 0) {
18
- if (rc == -1) {
25
+ timeout = 0;
19
- error_report_err(local_err);
26
+ }
20
+ if (rc != 1) {
21
+ if (local_err) {
22
+ error_report_err(local_err);
23
+ }
24
goto fail;
25
}
27
}
26
}
28
if (timeout <= 0 || ret > 0) {
29
ret = epoll_wait(ctx->epollfd, events,
30
--
27
--
31
2.24.1
28
2.26.2
32
29
1
From: Alexander Bulekov <alxndr@bu.edu>
1
The vu_client_trip() coroutine is leaked during AioContext switching. It
2
is also unsafe to destroy the vu_dev in panic_cb() since its callers
3
still access it in some cases.
2
4
3
The virtio-scsi fuzz target sets up and fuzzes the available virtio-scsi
5
Rework the lifecycle to solve these safety issues.
4
queues. After an element is placed on a queue, the fuzzer can select
5
whether to perform a kick, or continue adding elements.
6
6
7
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
8
Message-id: 20200924151549.913737-10-stefanha@redhat.com
9
Message-id: 20200220041118.23264-22-alxndr@bu.edu
10
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
11
---
10
---
12
tests/qtest/fuzz/Makefile.include | 1 +
11
util/vhost-user-server.h | 29 ++--
13
tests/qtest/fuzz/virtio_scsi_fuzz.c | 213 ++++++++++++++++++++++++++++
12
block/export/vhost-user-blk-server.c | 9 +-
14
2 files changed, 214 insertions(+)
13
util/vhost-user-server.c | 245 +++++++++++++++------------
15
create mode 100644 tests/qtest/fuzz/virtio_scsi_fuzz.c
14
3 files changed, 155 insertions(+), 128 deletions(-)
16
15
17
diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include
16
diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/tests/qtest/fuzz/Makefile.include
18
--- a/util/vhost-user-server.h
20
+++ b/tests/qtest/fuzz/Makefile.include
19
+++ b/util/vhost-user-server.h
21
@@ -XXX,XX +XXX,XX @@ fuzz-obj-y += tests/qtest/fuzz/qos_fuzz.o
22
# Targets
23
fuzz-obj-y += tests/qtest/fuzz/i440fx_fuzz.o
24
fuzz-obj-y += tests/qtest/fuzz/virtio_net_fuzz.o
25
+fuzz-obj-y += tests/qtest/fuzz/virtio_scsi_fuzz.o
26
27
FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest
28
29
diff --git a/tests/qtest/fuzz/virtio_scsi_fuzz.c b/tests/qtest/fuzz/virtio_scsi_fuzz.c
30
new file mode 100644
31
index XXXXXXX..XXXXXXX
32
--- /dev/null
33
+++ b/tests/qtest/fuzz/virtio_scsi_fuzz.c
34
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@
21
#include "qapi/error.h"
22
#include "standard-headers/linux/virtio_blk.h"
23
24
+/* A kick fd that we monitor on behalf of libvhost-user */
25
typedef struct VuFdWatch {
26
VuDev *vu_dev;
27
int fd; /*kick fd*/
28
void *pvt;
29
vu_watch_cb cb;
30
- bool processing;
31
QTAILQ_ENTRY(VuFdWatch) next;
32
} VuFdWatch;
33
34
-typedef struct VuServer VuServer;
35
-
36
-struct VuServer {
37
+/**
38
+ * VuServer:
39
+ * A vhost-user server instance with user-defined VuDevIface callbacks.
40
+ * Vhost-user device backends can be implemented using VuServer. VuDevIface
41
+ * callbacks and virtqueue kicks run in the given AioContext.
42
+ */
43
+typedef struct {
44
QIONetListener *listener;
45
+ QEMUBH *restart_listener_bh;
46
AioContext *ctx;
47
int max_queues;
48
const VuDevIface *vu_iface;
49
+
50
+ /* Protected by ctx lock */
51
VuDev vu_dev;
52
QIOChannel *ioc; /* The I/O channel with the client */
53
QIOChannelSocket *sioc; /* The underlying data channel with the client */
54
- /* IOChannel for fd provided via VHOST_USER_SET_SLAVE_REQ_FD */
55
- QIOChannel *ioc_slave;
56
- QIOChannelSocket *sioc_slave;
57
- Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
58
QTAILQ_HEAD(, VuFdWatch) vu_fd_watches;
59
- /* restart coroutine co_trip if AIOContext is changed */
60
- bool aio_context_changed;
61
- bool processing_msg;
62
-};
63
+
64
+ Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
65
+} VuServer;
66
67
bool vhost_user_server_start(VuServer *server,
68
SocketAddress *unix_socket,
69
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
70
71
void vhost_user_server_stop(VuServer *server);
72
73
-void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx);
74
+void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
75
+void vhost_user_server_detach_aio_context(VuServer *server);
76
77
#endif /* VHOST_USER_SERVER_H */
78
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/block/export/vhost-user-blk-server.c
81
+++ b/block/export/vhost-user-blk-server.c
82
@@ -XXX,XX +XXX,XX @@ static const VuDevIface vu_block_iface = {
83
static void blk_aio_attached(AioContext *ctx, void *opaque)
84
{
85
VuBlockDev *vub_dev = opaque;
86
- aio_context_acquire(ctx);
87
- vhost_user_server_set_aio_context(&vub_dev->vu_server, ctx);
88
- aio_context_release(ctx);
89
+ vhost_user_server_attach_aio_context(&vub_dev->vu_server, ctx);
90
}
91
92
static void blk_aio_detach(void *opaque)
93
{
94
VuBlockDev *vub_dev = opaque;
95
- AioContext *ctx = vub_dev->vu_server.ctx;
96
- aio_context_acquire(ctx);
97
- vhost_user_server_set_aio_context(&vub_dev->vu_server, NULL);
98
- aio_context_release(ctx);
99
+ vhost_user_server_detach_aio_context(&vub_dev->vu_server);
100
}
101
102
static void
103
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
104
index XXXXXXX..XXXXXXX 100644
105
--- a/util/vhost-user-server.c
106
+++ b/util/vhost-user-server.c
107
@@ -XXX,XX +XXX,XX @@
108
*/
109
#include "qemu/osdep.h"
110
#include "qemu/main-loop.h"
111
+#include "block/aio-wait.h"
112
#include "vhost-user-server.h"
113
35
+/*
114
+/*
36
+ * virtio-serial Fuzzing Target
115
+ * Theory of operation:
37
+ *
116
+ *
38
+ * Copyright Red Hat Inc., 2019
117
+ * VuServer is started and stopped by vhost_user_server_start() and
39
+ *
118
+ * vhost_user_server_stop() from the main loop thread. Starting the server
40
+ * Authors:
119
+ * opens a vhost-user UNIX domain socket and listens for incoming connections.
41
+ * Alexander Bulekov <alxndr@bu.edu>
120
+ * Only one connection is allowed at a time.
42
+ *
121
+ *
43
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
122
+ * The connection is handled by the vu_client_trip() coroutine in the
44
+ * See the COPYING file in the top-level directory.
123
+ * VuServer->ctx AioContext. The coroutine consists of a vu_dispatch() loop
124
+ * where libvhost-user calls vu_message_read() to receive the next vhost-user
125
+ * protocol messages over the UNIX domain socket.
126
+ *
127
+ * When virtqueues are set up libvhost-user calls set_watch() to monitor kick
128
+ * fds. These fds are also handled in the VuServer->ctx AioContext.
129
+ *
130
+ * Both vu_client_trip() and kick fd monitoring can be stopped by shutting down
131
+ * the socket connection. Shutting down the socket connection causes
132
+ * vu_message_read() to fail since no more data can be received from the socket.
133
+ * After vu_dispatch() fails, vu_client_trip() calls vu_deinit() to stop
134
+ * libvhost-user before terminating the coroutine. vu_deinit() calls
135
+ * remove_watch() to stop monitoring kick fds and this stops virtqueue
136
+ * processing.
137
+ *
138
+ * When vu_client_trip() has finished cleaning up it schedules a BH in the main
139
+ * loop thread to accept the next client connection.
140
+ *
141
+ * When libvhost-user detects an error it calls panic_cb() and sets the
142
+ * dev->broken flag. Both vu_client_trip() and kick fd processing stop when
143
+ * the dev->broken flag is set.
144
+ *
145
+ * It is possible to switch AioContexts using
146
+ * vhost_user_server_detach_aio_context() and
147
+ * vhost_user_server_attach_aio_context(). They stop monitoring fds in the old
148
+ * AioContext and resume monitoring in the new AioContext. The vu_client_trip()
149
+ * coroutine remains in a yielded state during the switch. This is made
150
+ * possible by QIOChannel's support for spurious coroutine re-entry in
151
+ * qio_channel_yield(). The coroutine will restart I/O when re-entered from the
152
+ * new AioContext.
45
+ */
153
+ */
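+/*
+ * Illustration only (not part of this patch): the shutdown mechanism
+ * described above can also be triggered from outside the coroutine by
+ * shutting down the socket. vu_message_read() then fails, vu_dispatch()
+ * returns false and vu_client_trip() performs its cleanup. A hypothetical
+ * helper would be just:
+ *
+ *     static void my_force_disconnect(VuServer *server)
+ *     {
+ *         qio_channel_shutdown(server->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
+ *     }
+ */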
46
+
154
+
47
+#include "qemu/osdep.h"
155
static void vmsg_close_fds(VhostUserMsg *vmsg)
48
+
156
{
49
+#include "tests/qtest/libqtest.h"
157
int i;
50
+#include "libqos/virtio-scsi.h"
158
@@ -XXX,XX +XXX,XX @@ static void vmsg_unblock_fds(VhostUserMsg *vmsg)
51
+#include "libqos/virtio.h"
159
}
52
+#include "libqos/virtio-pci.h"
160
}
53
+#include "standard-headers/linux/virtio_ids.h"
161
54
+#include "standard-headers/linux/virtio_pci.h"
162
-static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
55
+#include "standard-headers/linux/virtio_scsi.h"
163
- gpointer opaque);
56
+#include "fuzz.h"
164
-
57
+#include "fork_fuzz.h"
165
-static void close_client(VuServer *server)
58
+#include "qos_fuzz.h"
166
-{
59
+
167
- /*
60
+#define PCI_SLOT 0x02
168
- * Before closing the client
61
+#define PCI_FN 0x00
169
- *
62
+#define QVIRTIO_SCSI_TIMEOUT_US (1 * 1000 * 1000)
170
- * 1. Let vu_client_trip stop processing new vhost-user msg
63
+
171
- *
64
+#define MAX_NUM_QUEUES 64
172
- * 2. remove kick_handler
65
+
173
- *
66
+/* Based on tests/virtio-scsi-test.c */
174
- * 3. wait for the kick handler to be finished
67
+typedef struct {
175
- *
68
+ int num_queues;
176
- * 4. wait for the current vhost-user msg to be finished processing
69
+ QVirtQueue *vq[MAX_NUM_QUEUES + 2];
177
- */
70
+} QVirtioSCSIQueues;
178
-
71
+
179
- QIOChannelSocket *sioc = server->sioc;
72
+static QVirtioSCSIQueues *qvirtio_scsi_init(QVirtioDevice *dev, uint64_t mask)
180
- /* When this is set vu_client_trip will stop new processing vhost-user message */
181
- server->sioc = NULL;
182
-
183
- while (server->processing_msg) {
184
- if (server->ioc->read_coroutine) {
185
- server->ioc->read_coroutine = NULL;
186
- qio_channel_set_aio_fd_handler(server->ioc, server->ioc->ctx, NULL,
187
- NULL, server->ioc);
188
- server->processing_msg = false;
189
- }
190
- }
191
-
192
- vu_deinit(&server->vu_dev);
193
-
194
- /* vu_deinit() should have called remove_watch() */
195
- assert(QTAILQ_EMPTY(&server->vu_fd_watches));
196
-
197
- object_unref(OBJECT(sioc));
198
- object_unref(OBJECT(server->ioc));
199
-}
200
-
201
static void panic_cb(VuDev *vu_dev, const char *buf)
202
{
203
- VuServer *server = container_of(vu_dev, VuServer, vu_dev);
204
-
205
- /* avoid while loop in close_client */
206
- server->processing_msg = false;
207
-
208
- if (buf) {
209
- error_report("vu_panic: %s", buf);
210
- }
211
-
212
- if (server->sioc) {
213
- close_client(server);
214
- }
215
-
216
- /*
217
- * Set the callback function for network listener so another
218
- * vhost-user client can connect to this server
219
- */
220
- qio_net_listener_set_client_func(server->listener,
221
- vu_accept,
222
- server,
223
- NULL);
224
+ error_report("vu_panic: %s", buf);
225
}
226
227
static bool coroutine_fn
228
@@ -XXX,XX +XXX,XX @@ fail:
229
return false;
230
}
231
232
-
233
-static void vu_client_start(VuServer *server);
234
static coroutine_fn void vu_client_trip(void *opaque)
235
{
236
VuServer *server = opaque;
237
+ VuDev *vu_dev = &server->vu_dev;
238
239
- while (!server->aio_context_changed && server->sioc) {
240
- server->processing_msg = true;
241
- vu_dispatch(&server->vu_dev);
242
- server->processing_msg = false;
243
+ while (!vu_dev->broken && vu_dispatch(vu_dev)) {
244
+ /* Keep running */
245
}
246
247
- if (server->aio_context_changed && server->sioc) {
248
- server->aio_context_changed = false;
249
- vu_client_start(server);
250
- }
251
-}
252
+ vu_deinit(vu_dev);
253
+
254
+ /* vu_deinit() should have called remove_watch() */
255
+ assert(QTAILQ_EMPTY(&server->vu_fd_watches));
256
+
257
+ object_unref(OBJECT(server->sioc));
258
+ server->sioc = NULL;
259
260
-static void vu_client_start(VuServer *server)
261
-{
262
- server->co_trip = qemu_coroutine_create(vu_client_trip, server);
263
- aio_co_enter(server->ctx, server->co_trip);
264
+ object_unref(OBJECT(server->ioc));
265
+ server->ioc = NULL;
266
+
267
+ server->co_trip = NULL;
268
+ if (server->restart_listener_bh) {
269
+ qemu_bh_schedule(server->restart_listener_bh);
270
+ }
271
+ aio_wait_kick();
272
}
273
274
/*
275
@@ -XXX,XX +XXX,XX @@ static void vu_client_start(VuServer *server)
276
static void kick_handler(void *opaque)
277
{
278
VuFdWatch *vu_fd_watch = opaque;
279
- vu_fd_watch->processing = true;
280
- vu_fd_watch->cb(vu_fd_watch->vu_dev, 0, vu_fd_watch->pvt);
281
- vu_fd_watch->processing = false;
282
+ VuDev *vu_dev = vu_fd_watch->vu_dev;
283
+
284
+ vu_fd_watch->cb(vu_dev, 0, vu_fd_watch->pvt);
285
+
286
+ /* Stop vu_client_trip() if an error occurred in vu_fd_watch->cb() */
287
+ if (vu_dev->broken) {
288
+ VuServer *server = container_of(vu_dev, VuServer, vu_dev);
289
+
290
+ qio_channel_shutdown(server->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
291
+ }
292
}
293
294
-
295
static VuFdWatch *find_vu_fd_watch(VuServer *server, int fd)
296
{
297
298
@@ -XXX,XX +XXX,XX @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
299
qio_channel_set_name(QIO_CHANNEL(sioc), "vhost-user client");
300
server->ioc = QIO_CHANNEL(sioc);
301
object_ref(OBJECT(server->ioc));
302
- qio_channel_attach_aio_context(server->ioc, server->ctx);
303
+
304
+ /* TODO vu_message_write() spins if non-blocking! */
305
qio_channel_set_blocking(server->ioc, false, NULL);
306
- vu_client_start(server);
307
+
308
+ server->co_trip = qemu_coroutine_create(vu_client_trip, server);
309
+
310
+ aio_context_acquire(server->ctx);
311
+ vhost_user_server_attach_aio_context(server, server->ctx);
312
+ aio_context_release(server->ctx);
313
}
314
315
-
316
void vhost_user_server_stop(VuServer *server)
317
{
318
+ aio_context_acquire(server->ctx);
319
+
320
+ qemu_bh_delete(server->restart_listener_bh);
321
+ server->restart_listener_bh = NULL;
322
+
323
if (server->sioc) {
324
- close_client(server);
325
+ VuFdWatch *vu_fd_watch;
326
+
327
+ QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
328
+ aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
329
+ NULL, NULL, NULL, vu_fd_watch);
330
+ }
331
+
332
+ qio_channel_shutdown(server->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
333
+
334
+ AIO_WAIT_WHILE(server->ctx, server->co_trip);
335
}
336
337
+ aio_context_release(server->ctx);
338
+
339
if (server->listener) {
340
qio_net_listener_disconnect(server->listener);
341
object_unref(OBJECT(server->listener));
342
}
343
+}
344
+
345
+/*
346
+ * Allow the next client to connect to the server. Called from a BH in the main
347
+ * loop.
348
+ */
349
+static void restart_listener_bh(void *opaque)
73
+{
350
+{
74
+ QVirtioSCSIQueues *vs;
351
+ VuServer *server = opaque;
75
+ uint64_t feat;
352
76
+ int i;
353
+ qio_net_listener_set_client_func(server->listener, vu_accept, server,
77
+
354
+ NULL);
78
+ vs = g_new0(QVirtioSCSIQueues, 1);
355
}
79
+
356
80
+ feat = qvirtio_get_features(dev);
357
-void vhost_user_server_set_aio_context(VuServer *server, AioContext *ctx)
81
+ if (mask) {
358
+/* Called with ctx acquired */
82
+ feat &= ~QVIRTIO_F_BAD_FEATURE | mask;
359
+void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
83
+ } else {
360
{
84
+ feat &= ~(QVIRTIO_F_BAD_FEATURE | (1ull << VIRTIO_RING_F_EVENT_IDX));
361
- VuFdWatch *vu_fd_watch, *next;
362
- void *opaque = NULL;
363
- IOHandler *io_read = NULL;
364
- bool attach;
365
+ VuFdWatch *vu_fd_watch;
366
367
- server->ctx = ctx ? ctx : qemu_get_aio_context();
368
+ server->ctx = ctx;
369
370
if (!server->sioc) {
371
- /* not yet serving any client*/
372
return;
373
}
374
375
- if (ctx) {
376
- qio_channel_attach_aio_context(server->ioc, ctx);
377
- server->aio_context_changed = true;
378
- io_read = kick_handler;
379
- attach = true;
380
- } else {
381
+ qio_channel_attach_aio_context(server->ioc, ctx);
382
+
383
+ QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
384
+ aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
385
+ NULL, vu_fd_watch);
85
+ }
386
+ }
86
+ qvirtio_set_features(dev, feat);
387
+
87
+
388
+ aio_co_schedule(ctx, server->co_trip);
88
+ vs->num_queues = qvirtio_config_readl(dev, 0);
89
+
90
+ for (i = 0; i < vs->num_queues + 2; i++) {
91
+ vs->vq[i] = qvirtqueue_setup(dev, fuzz_qos_alloc, i);
92
+ }
93
+
94
+ qvirtio_set_driver_ok(dev);
95
+
96
+ return vs;
97
+}
389
+}
98
+
390
+
99
+static void virtio_scsi_fuzz(QTestState *s, QVirtioSCSIQueues* queues,
391
+/* Called with server->ctx acquired */
100
+ const unsigned char *Data, size_t Size)
392
+void vhost_user_server_detach_aio_context(VuServer *server)
101
+{
393
+{
102
+ /*
394
+ if (server->sioc) {
103
+ * Data is a sequence of random bytes. We split them up into "actions",
395
+ VuFdWatch *vu_fd_watch;
104
+ * followed by data:
396
+
105
+ * [vqa][dddddddd][vqa][dddd][vqa][dddddddddddd] ...
397
+ QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
106
+ * The length of the data is specified by the preceding vqa.length
398
+ aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
107
+ */
399
+ NULL, NULL, NULL, vu_fd_watch);
108
+ typedef struct vq_action {
109
+ uint8_t queue;
110
+ uint8_t length;
111
+ uint8_t write;
112
+ uint8_t next;
113
+ uint8_t kick;
114
+ } vq_action;
115
+
116
+ /* Keep track of the free head for each queue we interact with */
117
+ bool vq_touched[MAX_NUM_QUEUES + 2] = {0};
118
+ uint32_t free_head[MAX_NUM_QUEUES + 2];
119
+
120
+ QGuestAllocator *t_alloc = fuzz_qos_alloc;
121
+
122
+ QVirtioSCSI *scsi = fuzz_qos_obj;
123
+ QVirtioDevice *dev = scsi->vdev;
124
+ QVirtQueue *q;
125
+ vq_action vqa;
126
+ while (Size >= sizeof(vqa)) {
127
+ /* Copy the action, so we can normalize length, queue and flags */
128
+ memcpy(&vqa, Data, sizeof(vqa));
129
+
130
+ Data += sizeof(vqa);
131
+ Size -= sizeof(vqa);
132
+
133
+ vqa.queue = vqa.queue % queues->num_queues;
134
+ /* Cap length at the number of remaining bytes in data */
135
+ vqa.length = vqa.length >= Size ? Size : vqa.length;
136
+ vqa.write = vqa.write & 1;
137
+ vqa.next = vqa.next & 1;
138
+ vqa.kick = vqa.kick & 1;
139
+
140
+
141
+ q = queues->vq[vqa.queue];
142
+
143
+ /* Copy the data into ram, and place it on the virtqueue */
144
+ uint64_t req_addr = guest_alloc(t_alloc, vqa.length);
145
+ qtest_memwrite(s, req_addr, Data, vqa.length);
146
+ if (vq_touched[vqa.queue] == 0) {
147
+ vq_touched[vqa.queue] = 1;
148
+ free_head[vqa.queue] = qvirtqueue_add(s, q, req_addr, vqa.length,
149
+ vqa.write, vqa.next);
150
+ } else {
151
+ qvirtqueue_add(s, q, req_addr, vqa.length, vqa.write, vqa.next);
152
+ }
400
+ }
153
+
401
+
154
+ if (vqa.kick) {
402
qio_channel_detach_aio_context(server->ioc);
155
+ qvirtqueue_kick(s, dev, q, free_head[vqa.queue]);
403
- /* server->ioc->ctx keeps the old AioConext */
156
+ free_head[vqa.queue] = 0;
404
- ctx = server->ioc->ctx;
157
+ }
405
- attach = false;
158
+ Data += vqa.length;
406
}
159
+ Size -= vqa.length;
407
160
+ }
408
- QTAILQ_FOREACH_SAFE(vu_fd_watch, &server->vu_fd_watches, next, next) {
161
+ /* In the end, kick each queue we interacted with */
409
- if (vu_fd_watch->cb) {
162
+ for (int i = 0; i < MAX_NUM_QUEUES + 2; i++) {
410
- opaque = attach ? vu_fd_watch : NULL;
163
+ if (vq_touched[i]) {
411
- aio_set_fd_handler(ctx, vu_fd_watch->fd, true,
164
+ qvirtqueue_kick(s, dev, queues->vq[i], free_head[i]);
412
- io_read, NULL, NULL,
165
+ }
413
- opaque);
166
+ }
414
- }
167
+}
415
- }
168
+
416
+ server->ctx = NULL;
169
+static void virtio_scsi_fork_fuzz(QTestState *s,
417
}
170
+ const unsigned char *Data, size_t Size)
418
171
+{
419
-
172
+ QVirtioSCSI *scsi = fuzz_qos_obj;
420
bool vhost_user_server_start(VuServer *server,
173
+ static QVirtioSCSIQueues *queues;
421
SocketAddress *socket_addr,
174
+ if (!queues) {
422
AioContext *ctx,
175
+ queues = qvirtio_scsi_init(scsi->vdev, 0);
423
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
176
+ }
424
const VuDevIface *vu_iface,
177
+ if (fork() == 0) {
425
Error **errp)
178
+ virtio_scsi_fuzz(s, queues, Data, Size);
426
{
179
+ flush_events(s);
427
+ QEMUBH *bh;
180
+ _Exit(0);
428
QIONetListener *listener = qio_net_listener_new();
181
+ } else {
429
if (qio_net_listener_open_sync(listener, socket_addr, 1,
182
+ wait(NULL);
430
errp) < 0) {
183
+ }
431
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
184
+}
432
return false;
185
+
433
}
186
+static void virtio_scsi_with_flag_fuzz(QTestState *s,
434
187
+ const unsigned char *Data, size_t Size)
435
+ bh = qemu_bh_new(restart_listener_bh, server);
188
+{
436
+
189
+ QVirtioSCSI *scsi = fuzz_qos_obj;
437
/* zero out unspecified fields */
190
+ static QVirtioSCSIQueues *queues;
438
*server = (VuServer) {
191
+
439
.listener = listener,
192
+ if (fork() == 0) {
440
+ .restart_listener_bh = bh,
193
+ if (Size >= sizeof(uint64_t)) {
441
.vu_iface = vu_iface,
194
+ queues = qvirtio_scsi_init(scsi->vdev, *(uint64_t *)Data);
442
.max_queues = max_queues,
195
+ virtio_scsi_fuzz(s, queues,
443
.ctx = ctx,
196
+ Data + sizeof(uint64_t), Size - sizeof(uint64_t));
197
+ flush_events(s);
198
+ }
199
+ _Exit(0);
200
+ } else {
201
+ wait(NULL);
202
+ }
203
+}
204
+
205
+static void virtio_scsi_pre_fuzz(QTestState *s)
206
+{
207
+ qos_init_path(s);
208
+ counter_shm_init();
209
+}
210
+
211
+static void *virtio_scsi_test_setup(GString *cmd_line, void *arg)
212
+{
213
+ g_string_append(cmd_line,
214
+ " -drive file=blkdebug::null-co://,"
215
+ "file.image.read-zeroes=on,"
216
+ "if=none,id=dr1,format=raw,file.align=4k "
217
+ "-device scsi-hd,drive=dr1,lun=0,scsi-id=1");
218
+ return arg;
219
+}
220
+
221
+
222
+static void register_virtio_scsi_fuzz_targets(void)
223
+{
224
+ fuzz_add_qos_target(&(FuzzTarget){
225
+ .name = "virtio-scsi-fuzz",
226
+ .description = "Fuzz the virtio-scsi virtual queues, forking"
227
+ "for each fuzz run",
228
+ .pre_vm_init = &counter_shm_init,
229
+ .pre_fuzz = &virtio_scsi_pre_fuzz,
230
+ .fuzz = virtio_scsi_fork_fuzz,},
231
+ "virtio-scsi",
232
+ &(QOSGraphTestOptions){.before = virtio_scsi_test_setup}
233
+ );
234
+
235
+ fuzz_add_qos_target(&(FuzzTarget){
236
+ .name = "virtio-scsi-flags-fuzz",
237
+ .description = "Fuzz the virtio-scsi virtual queues, forking"
238
+ "for each fuzz run (also fuzzes the virtio flags)",
239
+ .pre_vm_init = &counter_shm_init,
240
+ .pre_fuzz = &virtio_scsi_pre_fuzz,
241
+ .fuzz = virtio_scsi_with_flag_fuzz,},
242
+ "virtio-scsi",
243
+ &(QOSGraphTestOptions){.before = virtio_scsi_test_setup}
244
+ );
245
+}
246
+
247
+fuzz_target_init(register_virtio_scsi_fuzz_targets);
248
--
444
--
249
2.24.1
445
2.26.2
250
446
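The attach/detach pair introduced above is what lets the vhost-user server follow its BlockBackend between AioContexts. As a rough sketch of the calling pattern, using only interfaces that appear in this series (the MyExport type and the my_* names are invented for illustration, and 1 is a placeholder queue count):

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "sysemu/block-backend.h"
#include "util/vhost-user-server.h"

typedef struct {
    VuServer vu_server;
    BlockBackend *blk;
} MyExport;

/* Called by the block layer when the BlockBackend moves to a new AioContext */
static void my_aio_attached(AioContext *ctx, void *opaque)
{
    MyExport *exp = opaque;
    vhost_user_server_attach_aio_context(&exp->vu_server, ctx);
}

static void my_aio_detach(void *opaque)
{
    MyExport *exp = opaque;
    vhost_user_server_detach_aio_context(&exp->vu_server);
}

static bool my_export_start(MyExport *exp, SocketAddress *addr,
                            const VuDevIface *iface, Error **errp)
{
    AioContext *ctx = blk_get_aio_context(exp->blk);

    if (!vhost_user_server_start(&exp->vu_server, addr, ctx,
                                 1 /* max_queues */, iface, errp)) {
        return false;
    }

    /* Keep the server's fd handlers in the BlockBackend's AioContext */
    blk_add_aio_context_notifier(exp->blk, my_aio_attached, my_aio_detach, exp);
    return true;
}

Teardown is the mirror image: blk_remove_aio_context_notifier() followed by vhost_user_server_stop().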
1
From: Alexander Bulekov <alxndr@bu.edu>
1
Propagate the flush return value since errors are possible.
2
2
3
qtest_server_send is a function pointer specifying the handler used to
3
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
4
transmit data to the qtest client. In the standard configuration, this
4
Message-id: 20200924151549.913737-11-stefanha@redhat.com
5
calls the CharBackend handler, but now it is possible for other types of
6
handlers, e.g. direct function calls if the qtest client and server
7
exist within the same process (inproc)
8
9
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
10
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
11
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
12
Acked-by: Thomas Huth <thuth@redhat.com>
13
Message-id: 20200220041118.23264-6-alxndr@bu.edu
14
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
5
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
15
---
6
---
16
include/sysemu/qtest.h | 3 +++
7
block/export/vhost-user-blk-server.c | 11 +++++++----
17
qtest.c | 18 ++++++++++++++++--
8
1 file changed, 7 insertions(+), 4 deletions(-)
18
2 files changed, 19 insertions(+), 2 deletions(-)
19
9
20
diff --git a/include/sysemu/qtest.h b/include/sysemu/qtest.h
10
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
21
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
22
--- a/include/sysemu/qtest.h
12
--- a/block/export/vhost-user-blk-server.c
23
+++ b/include/sysemu/qtest.h
13
+++ b/block/export/vhost-user-blk-server.c
24
@@ -XXX,XX +XXX,XX @@ bool qtest_driver(void);
14
@@ -XXX,XX +XXX,XX @@ vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
25
15
return -EINVAL;
26
void qtest_server_init(const char *qtest_chrdev, const char *qtest_log, Error **errp);
27
28
+void qtest_server_set_send_handler(void (*send)(void *, const char *),
29
+ void *opaque);
30
+
31
#endif
32
diff --git a/qtest.c b/qtest.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/qtest.c
35
+++ b/qtest.c
36
@@ -XXX,XX +XXX,XX @@ static GString *inbuf;
37
static int irq_levels[MAX_IRQ];
38
static qemu_timeval start_time;
39
static bool qtest_opened;
40
+static void (*qtest_server_send)(void*, const char*);
41
+static void *qtest_server_send_opaque;
42
43
#define FMT_timeval "%ld.%06ld"
44
45
@@ -XXX,XX +XXX,XX @@ static void GCC_FMT_ATTR(1, 2) qtest_log_send(const char *fmt, ...)
46
va_end(ap);
47
}
16
}
48
17
49
-static void do_qtest_send(CharBackend *chr, const char *str, size_t len)
18
-static void coroutine_fn vu_block_flush(VuBlockReq *req)
50
+static void qtest_server_char_be_send(void *opaque, const char *str)
19
+static int coroutine_fn vu_block_flush(VuBlockReq *req)
51
{
20
{
52
+ size_t len = strlen(str);
21
VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
53
+ CharBackend* chr = (CharBackend *)opaque;
22
BlockBackend *backend = vdev_blk->backend;
54
qemu_chr_fe_write_all(chr, (uint8_t *)str, len);
23
- blk_co_flush(backend);
55
if (qtest_log_fp && qtest_opened) {
24
+ return blk_co_flush(backend);
56
fprintf(qtest_log_fp, "%s", str);
57
@@ -XXX,XX +XXX,XX @@ static void do_qtest_send(CharBackend *chr, const char *str, size_t len)
58
59
static void qtest_send(CharBackend *chr, const char *str)
60
{
61
- do_qtest_send(chr, str, strlen(str));
62
+ qtest_server_send(qtest_server_send_opaque, str);
63
}
25
}
64
26
65
static void GCC_FMT_ATTR(2, 3) qtest_sendf(CharBackend *chr,
27
static void coroutine_fn vu_block_virtio_process_req(void *opaque)
66
@@ -XXX,XX +XXX,XX @@ void qtest_server_init(const char *qtest_chrdev, const char *qtest_log, Error **
28
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
67
qemu_chr_fe_set_echo(&qtest_chr, true);
29
break;
68
30
}
69
inbuf = g_string_new("");
31
case VIRTIO_BLK_T_FLUSH:
70
+
32
- vu_block_flush(req);
71
+ if (!qtest_server_send) {
33
- req->in->status = VIRTIO_BLK_S_OK;
72
+ qtest_server_set_send_handler(qtest_server_char_be_send, &qtest_chr);
34
+ if (vu_block_flush(req) == 0) {
73
+ }
35
+ req->in->status = VIRTIO_BLK_S_OK;
74
+}
36
+ } else {
75
+
37
+ req->in->status = VIRTIO_BLK_S_IOERR;
76
+void qtest_server_set_send_handler(void (*send)(void*, const char*), void *opaque)
38
+ }
77
+{
39
break;
78
+ qtest_server_send = send;
40
case VIRTIO_BLK_T_GET_ID: {
79
+ qtest_server_send_opaque = opaque;
41
size_t size = MIN(iov_size(&elem->in_sg[0], in_num),
80
}
81
82
bool qtest_driver(void)
83
--
42
--
84
2.24.1
43
2.26.2
85
44
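On the qtest side of the diff above, qtest_server_set_send_handler() is what lets an in-process client receive server responses without going through a chardev. A hypothetical sketch of registering such a handler (the handler name and the GString buffer are illustrative, not part of the patch):

#include "qemu/osdep.h"
#include "sysemu/qtest.h"

/* Collect qtest responses in memory instead of writing to a CharBackend */
static void inproc_qtest_send(void *opaque, const char *str)
{
    GString *buf = opaque;      /* registered below as the opaque pointer */
    g_string_append(buf, str);
}

static GString *inproc_recv_buf;

static void inproc_qtest_setup(void)
{
    inproc_recv_buf = g_string_new("");
    qtest_server_set_send_handler(inproc_qtest_send, inproc_recv_buf);
}

Within the series the in-process fuzzer supplies its own handler; the sketch above only shows the registration hook added here.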
1
From: Alexander Bulekov <alxndr@bu.edu>
1
Use the new QAPI block exports API instead of defining our own QOM
2
2
objects.
3
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
3
4
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
4
This is a large change because the lifecycle of VuBlockDev needs to
5
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
5
follow BlockExportDriver. QOM properties are replaced by QAPI options
6
Message-id: 20200220041118.23264-17-alxndr@bu.edu
6
objects.
7
8
VuBlockDev is renamed VuBlkExport and contains a BlockExport field.
9
Several fields can be dropped since BlockExport already has equivalents.
10
11
The file names and meson build integration will be adjusted in a future
12
patch. libvhost-user should probably be built as a static library that
13
is linked into QEMU instead of as a .c file that results in duplicate
14
compilation.
15
16
The new command-line syntax is:
17
18
$ qemu-storage-daemon \
19
--blockdev file,node-name=drive0,filename=test.img \
20
--export vhost-user-blk,node-name=drive0,id=export0,unix-socket=/tmp/vhost-user-blk.sock
21
22
Note that unix-socket is optional because we may wish to accept chardevs
23
too in the future.
24
25
Markus noted that supported address families are not explicit in the
26
QAPI schema. It is unlikely that support for more address families will
27
be added since file descriptor passing is required and few address
28
families support it. If a new address family needs to be added, then the
29
QAPI 'features' syntax can be used to advertize them.
30
31
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
32
Acked-by: Markus Armbruster <armbru@redhat.com>
33
Message-id: 20200924151549.913737-12-stefanha@redhat.com
34
[Skip test on big-endian host architectures because this device doesn't
35
support them yet (as already mentioned in a code comment).
36
--Stefan]
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
37
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
---
38
---
9
tests/qtest/fuzz/Makefile.include | 2 +
39
qapi/block-export.json | 21 +-
10
tests/qtest/fuzz/qos_fuzz.c | 234 ++++++++++++++++++++++++++++++
40
block/export/vhost-user-blk-server.h | 23 +-
11
tests/qtest/fuzz/qos_fuzz.h | 33 +++++
41
block/export/export.c | 6 +
12
3 files changed, 269 insertions(+)
42
block/export/vhost-user-blk-server.c | 452 +++++++--------------------
13
create mode 100644 tests/qtest/fuzz/qos_fuzz.c
43
util/vhost-user-server.c | 10 +-
14
create mode 100644 tests/qtest/fuzz/qos_fuzz.h
44
block/export/meson.build | 1 +
15
45
block/meson.build | 1 -
16
diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include
46
7 files changed, 156 insertions(+), 358 deletions(-)
47
48
diff --git a/qapi/block-export.json b/qapi/block-export.json
17
index XXXXXXX..XXXXXXX 100644
49
index XXXXXXX..XXXXXXX 100644
18
--- a/tests/qtest/fuzz/Makefile.include
50
--- a/qapi/block-export.json
19
+++ b/tests/qtest/fuzz/Makefile.include
51
+++ b/qapi/block-export.json
20
@@ -XXX,XX +XXX,XX @@
52
@@ -XXX,XX +XXX,XX @@
21
QEMU_PROG_FUZZ=qemu-fuzz-$(TARGET_NAME)$(EXESUF)
53
'data': { '*name': 'str', '*description': 'str',
22
54
'*bitmap': 'str' } }
23
fuzz-obj-y += tests/qtest/libqtest.o
55
24
+fuzz-obj-y += $(libqos-obj-y)
56
+##
25
fuzz-obj-y += tests/qtest/fuzz/fuzz.o # Fuzzer skeleton
57
+# @BlockExportOptionsVhostUserBlk:
26
fuzz-obj-y += tests/qtest/fuzz/fork_fuzz.o
58
+#
27
+fuzz-obj-y += tests/qtest/fuzz/qos_fuzz.o
59
+# A vhost-user-blk block export.
28
60
+#
29
FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest
61
+# @addr: The vhost-user socket on which to listen. Both 'unix' and 'fd'
30
62
+# SocketAddress types are supported. Passed fds must be UNIX domain
31
diff --git a/tests/qtest/fuzz/qos_fuzz.c b/tests/qtest/fuzz/qos_fuzz.c
63
+# sockets.
32
new file mode 100644
64
+# @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
33
index XXXXXXX..XXXXXXX
65
+#
34
--- /dev/null
66
+# Since: 5.2
35
+++ b/tests/qtest/fuzz/qos_fuzz.c
67
+##
68
+{ 'struct': 'BlockExportOptionsVhostUserBlk',
69
+ 'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
70
+
71
##
72
# @NbdServerAddOptions:
73
#
36
@@ -XXX,XX +XXX,XX @@
74
@@ -XXX,XX +XXX,XX @@
37
+/*
75
# An enumeration of block export types
38
+ * QOS-assisted fuzzing helpers
76
#
39
+ *
77
# @nbd: NBD export
40
+ * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
78
+# @vhost-user-blk: vhost-user-blk export (since 5.2)
41
+ *
79
#
42
+ * This library is free software; you can redistribute it and/or
80
# Since: 4.2
43
+ * modify it under the terms of the GNU Lesser General Public
81
##
44
+ * License version 2 as published by the Free Software Foundation.
82
{ 'enum': 'BlockExportType',
45
+ *
83
- 'data': [ 'nbd' ] }
46
+ * This library is distributed in the hope that it will be useful,
84
+ 'data': [ 'nbd', 'vhost-user-blk' ] }
47
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
85
48
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
86
##
49
+ * Lesser General Public License for more details.
87
# @BlockExportOptions:
50
+ *
88
@@ -XXX,XX +XXX,XX @@
51
+ * You should have received a copy of the GNU Lesser General Public
89
'*writethrough': 'bool' },
52
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
90
'discriminator': 'type',
53
+ */
91
'data': {
92
- 'nbd': 'BlockExportOptionsNbd'
93
+ 'nbd': 'BlockExportOptionsNbd',
94
+ 'vhost-user-blk': 'BlockExportOptionsVhostUserBlk'
95
} }
96
97
##
98
diff --git a/block/export/vhost-user-blk-server.h b/block/export/vhost-user-blk-server.h
99
index XXXXXXX..XXXXXXX 100644
100
--- a/block/export/vhost-user-blk-server.h
101
+++ b/block/export/vhost-user-blk-server.h
102
@@ -XXX,XX +XXX,XX @@
103
104
#ifndef VHOST_USER_BLK_SERVER_H
105
#define VHOST_USER_BLK_SERVER_H
106
-#include "util/vhost-user-server.h"
107
108
-typedef struct VuBlockDev VuBlockDev;
109
-#define TYPE_VHOST_USER_BLK_SERVER "vhost-user-blk-server"
110
-#define VHOST_USER_BLK_SERVER(obj) \
111
- OBJECT_CHECK(VuBlockDev, obj, TYPE_VHOST_USER_BLK_SERVER)
112
+#include "block/export.h"
113
114
-/* vhost user block device */
115
-struct VuBlockDev {
116
- Object parent_obj;
117
- char *node_name;
118
- SocketAddress *addr;
119
- AioContext *ctx;
120
- VuServer vu_server;
121
- bool running;
122
- uint32_t blk_size;
123
- BlockBackend *backend;
124
- QIOChannelSocket *sioc;
125
- QTAILQ_ENTRY(VuBlockDev) next;
126
- struct virtio_blk_config blkcfg;
127
- bool writable;
128
-};
129
+/* For block/export/export.c */
130
+extern const BlockExportDriver blk_exp_vhost_user_blk;
131
132
#endif /* VHOST_USER_BLK_SERVER_H */
133
diff --git a/block/export/export.c b/block/export/export.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/block/export/export.c
136
+++ b/block/export/export.c
137
@@ -XXX,XX +XXX,XX @@
138
#include "sysemu/block-backend.h"
139
#include "block/export.h"
140
#include "block/nbd.h"
141
+#ifdef CONFIG_LINUX
142
+#include "block/export/vhost-user-blk-server.h"
143
+#endif
144
#include "qapi/error.h"
145
#include "qapi/qapi-commands-block-export.h"
146
#include "qapi/qapi-events-block-export.h"
147
@@ -XXX,XX +XXX,XX @@
148
149
static const BlockExportDriver *blk_exp_drivers[] = {
150
&blk_exp_nbd,
151
+#ifdef CONFIG_LINUX
152
+ &blk_exp_vhost_user_blk,
153
+#endif
154
};
155
156
/* Only accessed from the main thread */
157
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/block/export/vhost-user-blk-server.c
160
+++ b/block/export/vhost-user-blk-server.c
161
@@ -XXX,XX +XXX,XX @@
162
*/
163
#include "qemu/osdep.h"
164
#include "block/block.h"
165
+#include "contrib/libvhost-user/libvhost-user.h"
166
+#include "standard-headers/linux/virtio_blk.h"
167
+#include "util/vhost-user-server.h"
168
#include "vhost-user-blk-server.h"
169
#include "qapi/error.h"
170
#include "qom/object_interfaces.h"
171
@@ -XXX,XX +XXX,XX @@ struct virtio_blk_inhdr {
172
unsigned char status;
173
};
174
175
-typedef struct VuBlockReq {
176
+typedef struct VuBlkReq {
177
VuVirtqElement elem;
178
int64_t sector_num;
179
size_t size;
180
@@ -XXX,XX +XXX,XX @@ typedef struct VuBlockReq {
181
struct virtio_blk_outhdr out;
182
VuServer *server;
183
struct VuVirtq *vq;
184
-} VuBlockReq;
185
+} VuBlkReq;
186
187
-static void vu_block_req_complete(VuBlockReq *req)
188
+/* vhost user block device */
189
+typedef struct {
190
+ BlockExport export;
191
+ VuServer vu_server;
192
+ uint32_t blk_size;
193
+ QIOChannelSocket *sioc;
194
+ struct virtio_blk_config blkcfg;
195
+ bool writable;
196
+} VuBlkExport;
54
+
197
+
55
+#include "qemu/osdep.h"
198
+static void vu_blk_req_complete(VuBlkReq *req)
56
+#include "qemu/units.h"
199
{
57
+#include "qapi/error.h"
200
VuDev *vu_dev = &req->server->vu_dev;
58
+#include "qemu-common.h"
201
59
+#include "exec/memory.h"
202
@@ -XXX,XX +XXX,XX @@ static void vu_block_req_complete(VuBlockReq *req)
60
+#include "exec/address-spaces.h"
203
free(req);
61
+#include "sysemu/sysemu.h"
204
}
62
+#include "qemu/main-loop.h"
205
63
+
206
-static VuBlockDev *get_vu_block_device_by_server(VuServer *server)
64
+#include "tests/qtest/libqtest.h"
207
-{
65
+#include "tests/qtest/libqos/malloc.h"
208
- return container_of(server, VuBlockDev, vu_server);
66
+#include "tests/qtest/libqos/qgraph.h"
209
-}
67
+#include "tests/qtest/libqos/qgraph_internal.h"
210
-
68
+#include "tests/qtest/libqos/qos_external.h"
211
static int coroutine_fn
69
+
212
-vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
70
+#include "fuzz.h"
213
- uint32_t iovcnt, uint32_t type)
71
+#include "qos_fuzz.h"
214
+vu_blk_discard_write_zeroes(BlockBackend *blk, struct iovec *iov,
72
+
215
+ uint32_t iovcnt, uint32_t type)
73
+#include "qapi/qapi-commands-machine.h"
216
{
74
+#include "qapi/qapi-commands-qom.h"
217
struct virtio_blk_discard_write_zeroes desc;
75
+#include "qapi/qmp/qlist.h"
218
ssize_t size = iov_to_buf(iov, iovcnt, 0, &desc, sizeof(desc));
76
+
219
@@ -XXX,XX +XXX,XX @@ vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
77
+
220
return -EINVAL;
78
+void *fuzz_qos_obj;
221
}
79
+QGuestAllocator *fuzz_qos_alloc;
222
80
+
223
- VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
81
+static const char *fuzz_target_name;
224
uint64_t range[2] = { le64_to_cpu(desc.sector) << 9,
82
+static char **fuzz_path_vec;
225
le32_to_cpu(desc.num_sectors) << 9 };
83
+
226
if (type == VIRTIO_BLK_T_DISCARD) {
84
+/*
227
- if (blk_co_pdiscard(vdev_blk->backend, range[0], range[1]) == 0) {
85
+ * Replaced the qmp commands with direct qmp_marshal calls.
228
+ if (blk_co_pdiscard(blk, range[0], range[1]) == 0) {
86
+ * Probably there is a better way to do this
229
return 0;
87
+ */
230
}
88
+static void qos_set_machines_devices_available(void)
231
} else if (type == VIRTIO_BLK_T_WRITE_ZEROES) {
89
+{
232
- if (blk_co_pwrite_zeroes(vdev_blk->backend,
90
+ QDict *req = qdict_new();
233
- range[0], range[1], 0) == 0) {
91
+ QObject *response;
234
+ if (blk_co_pwrite_zeroes(blk, range[0], range[1], 0) == 0) {
92
+ QDict *args = qdict_new();
235
return 0;
93
+ QList *lst;
236
}
94
+ Error *err = NULL;
237
}
95
+
238
@@ -XXX,XX +XXX,XX @@ vu_block_discard_write_zeroes(VuBlockReq *req, struct iovec *iov,
96
+ qmp_marshal_query_machines(NULL, &response, &err);
239
return -EINVAL;
97
+ assert(!err);
240
}
98
+ lst = qobject_to(QList, response);
241
99
+ apply_to_qlist(lst, true);
242
-static int coroutine_fn vu_block_flush(VuBlockReq *req)
100
+
243
+static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
101
+ qobject_unref(response);
244
{
102
+
245
- VuBlockDev *vdev_blk = get_vu_block_device_by_server(req->server);
103
+
246
- BlockBackend *backend = vdev_blk->backend;
104
+ qdict_put_str(req, "execute", "qom-list-types");
247
- return blk_co_flush(backend);
105
+ qdict_put_str(args, "implements", "device");
248
-}
106
+ qdict_put_bool(args, "abstract", true);
249
-
107
+ qdict_put_obj(req, "arguments", (QObject *) args);
250
-static void coroutine_fn vu_block_virtio_process_req(void *opaque)
108
+
251
-{
109
+ qmp_marshal_qom_list_types(args, &response, &err);
252
- VuBlockReq *req = opaque;
110
+ assert(!err);
253
+ VuBlkReq *req = opaque;
111
+ lst = qobject_to(QList, response);
254
VuServer *server = req->server;
112
+ apply_to_qlist(lst, false);
255
VuVirtqElement *elem = &req->elem;
113
+ qobject_unref(response);
256
uint32_t type;
114
+ qobject_unref(req);
257
115
+}
258
- VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
116
+
259
- BlockBackend *backend = vdev_blk->backend;
117
+static char **current_path;
260
+ VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
118
+
261
+ BlockBackend *blk = vexp->export.blk;
119
+void *qos_allocate_objects(QTestState *qts, QGuestAllocator **p_alloc)
262
120
+{
263
struct iovec *in_iov = elem->in_sg;
121
+ return allocate_objects(qts, current_path + 1, p_alloc);
264
struct iovec *out_iov = elem->out_sg;
122
+}
265
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
123
+
266
bool is_write = type & VIRTIO_BLK_T_OUT;
124
+static const char *qos_build_main_args(void)
267
req->sector_num = le64_to_cpu(req->out.sector);
125
+{
268
126
+ char **path = fuzz_path_vec;
269
- int64_t offset = req->sector_num * vdev_blk->blk_size;
127
+ QOSGraphNode *test_node;
270
+ if (is_write && !vexp->writable) {
128
+ GString *cmd_line = g_string_new(path[0]);
271
+ req->in->status = VIRTIO_BLK_S_IOERR;
129
+ void *test_arg;
130
+
131
+ if (!path) {
132
+ fprintf(stderr, "QOS Path not found\n");
133
+ abort();
134
+ }
135
+
136
+ /* Before test */
137
+ current_path = path;
138
+ test_node = qos_graph_get_node(path[(g_strv_length(path) - 1)]);
139
+ test_arg = test_node->u.test.arg;
140
+ if (test_node->u.test.before) {
141
+ test_arg = test_node->u.test.before(cmd_line, test_arg);
142
+ }
143
+ /* Prepend the arguments that we need */
144
+ g_string_prepend(cmd_line,
145
+ TARGET_NAME " -display none -machine accel=qtest -m 64 ");
146
+ return cmd_line->str;
147
+}
148
+
149
+/*
150
+ * This function is largely a copy of qos-test.c:walk_path. Since walk_path
151
+ * is itself a callback, it's a little annoying to add another argument/layer of
152
+ * indirection
153
+ */
154
+static void walk_path(QOSGraphNode *orig_path, int len)
155
+{
156
+ QOSGraphNode *path;
157
+ QOSGraphEdge *edge;
158
+
159
+ /* etype set to QEDGE_CONSUMED_BY so that machine can add to the command line */
160
+ QOSEdgeType etype = QEDGE_CONSUMED_BY;
161
+
162
+ /* twice QOS_PATH_MAX_ELEMENT_SIZE since each edge can have its arg */
163
+ char **path_vec = g_new0(char *, (QOS_PATH_MAX_ELEMENT_SIZE * 2));
164
+ int path_vec_size = 0;
165
+
166
+ char *after_cmd, *before_cmd, *after_device;
167
+ GString *after_device_str = g_string_new("");
168
+ char *node_name = orig_path->name, *path_str;
169
+
170
+ GString *cmd_line = g_string_new("");
171
+ GString *cmd_line2 = g_string_new("");
172
+
173
+ path = qos_graph_get_node(node_name); /* root */
174
+ node_name = qos_graph_edge_get_dest(path->path_edge); /* machine name */
175
+
176
+ path_vec[path_vec_size++] = node_name;
177
+ path_vec[path_vec_size++] = qos_get_machine_type(node_name);
178
+
179
+ for (;;) {
180
+ path = qos_graph_get_node(node_name);
181
+ if (!path->path_edge) {
182
+ break;
272
+ break;
183
+ }
273
+ }
184
+
274
+
185
+ node_name = qos_graph_edge_get_dest(path->path_edge);
275
+ int64_t offset = req->sector_num * vexp->blk_size;
276
QEMUIOVector qiov;
277
if (is_write) {
278
qemu_iovec_init_external(&qiov, out_iov, out_num);
279
- ret = blk_co_pwritev(backend, offset, qiov.size,
280
- &qiov, 0);
281
+ ret = blk_co_pwritev(blk, offset, qiov.size, &qiov, 0);
282
} else {
283
qemu_iovec_init_external(&qiov, in_iov, in_num);
284
- ret = blk_co_preadv(backend, offset, qiov.size,
285
- &qiov, 0);
286
+ ret = blk_co_preadv(blk, offset, qiov.size, &qiov, 0);
287
}
288
if (ret >= 0) {
289
req->in->status = VIRTIO_BLK_S_OK;
290
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
291
break;
292
}
293
case VIRTIO_BLK_T_FLUSH:
294
- if (vu_block_flush(req) == 0) {
295
+ if (blk_co_flush(blk) == 0) {
296
req->in->status = VIRTIO_BLK_S_OK;
297
} else {
298
req->in->status = VIRTIO_BLK_S_IOERR;
299
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
300
case VIRTIO_BLK_T_DISCARD:
301
case VIRTIO_BLK_T_WRITE_ZEROES: {
302
int rc;
303
- rc = vu_block_discard_write_zeroes(req, &elem->out_sg[1],
304
- out_num, type);
186
+
305
+
187
+ /* append node command line + previous edge command line */
306
+ if (!vexp->writable) {
188
+ if (path->command_line && etype == QEDGE_CONSUMED_BY) {
307
+ req->in->status = VIRTIO_BLK_S_IOERR;
189
+ g_string_append(cmd_line, path->command_line);
308
+ break;
190
+ g_string_append(cmd_line, after_device_str->str);
191
+ g_string_truncate(after_device_str, 0);
192
+ }
309
+ }
193
+
310
+
194
+ path_vec[path_vec_size++] = qos_graph_edge_get_name(path->path_edge);
311
+ rc = vu_blk_discard_write_zeroes(blk, &elem->out_sg[1], out_num, type);
195
+ /* detect if edge has command line args */
312
if (rc == 0) {
196
+ after_cmd = qos_graph_edge_get_after_cmd_line(path->path_edge);
313
req->in->status = VIRTIO_BLK_S_OK;
197
+ after_device = qos_graph_edge_get_extra_device_opts(path->path_edge);
314
} else {
198
+ before_cmd = qos_graph_edge_get_before_cmd_line(path->path_edge);
315
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn vu_block_virtio_process_req(void *opaque)
199
+ edge = qos_graph_get_edge(path->name, node_name);
316
break;
200
+ etype = qos_graph_edge_get_type(edge);
317
}
318
319
- vu_block_req_complete(req);
320
+ vu_blk_req_complete(req);
321
return;
322
323
err:
324
- free(elem);
325
+ free(req);
326
}
327
328
-static void vu_block_process_vq(VuDev *vu_dev, int idx)
329
+static void vu_blk_process_vq(VuDev *vu_dev, int idx)
330
{
331
VuServer *server = container_of(vu_dev, VuServer, vu_dev);
332
VuVirtq *vq = vu_get_queue(vu_dev, idx);
333
334
while (1) {
335
- VuBlockReq *req;
336
+ VuBlkReq *req;
337
338
- req = vu_queue_pop(vu_dev, vq, sizeof(VuBlockReq));
339
+ req = vu_queue_pop(vu_dev, vq, sizeof(VuBlkReq));
340
if (!req) {
341
break;
342
}
343
@@ -XXX,XX +XXX,XX @@ static void vu_block_process_vq(VuDev *vu_dev, int idx)
344
req->vq = vq;
345
346
Coroutine *co =
347
- qemu_coroutine_create(vu_block_virtio_process_req, req);
348
+ qemu_coroutine_create(vu_blk_virtio_process_req, req);
349
qemu_coroutine_enter(co);
350
}
351
}
352
353
-static void vu_block_queue_set_started(VuDev *vu_dev, int idx, bool started)
354
+static void vu_blk_queue_set_started(VuDev *vu_dev, int idx, bool started)
355
{
356
VuVirtq *vq;
357
358
assert(vu_dev);
359
360
vq = vu_get_queue(vu_dev, idx);
361
- vu_set_queue_handler(vu_dev, vq, started ? vu_block_process_vq : NULL);
362
+ vu_set_queue_handler(vu_dev, vq, started ? vu_blk_process_vq : NULL);
363
}
364
365
-static uint64_t vu_block_get_features(VuDev *dev)
366
+static uint64_t vu_blk_get_features(VuDev *dev)
367
{
368
uint64_t features;
369
VuServer *server = container_of(dev, VuServer, vu_dev);
370
- VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
371
+ VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
372
features = 1ull << VIRTIO_BLK_F_SIZE_MAX |
373
1ull << VIRTIO_BLK_F_SEG_MAX |
374
1ull << VIRTIO_BLK_F_TOPOLOGY |
375
@@ -XXX,XX +XXX,XX @@ static uint64_t vu_block_get_features(VuDev *dev)
376
1ull << VIRTIO_RING_F_EVENT_IDX |
377
1ull << VHOST_USER_F_PROTOCOL_FEATURES;
378
379
- if (!vdev_blk->writable) {
380
+ if (!vexp->writable) {
381
features |= 1ull << VIRTIO_BLK_F_RO;
382
}
383
384
return features;
385
}
386
387
-static uint64_t vu_block_get_protocol_features(VuDev *dev)
388
+static uint64_t vu_blk_get_protocol_features(VuDev *dev)
389
{
390
return 1ull << VHOST_USER_PROTOCOL_F_CONFIG |
391
1ull << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD;
392
}
393
394
static int
395
-vu_block_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
396
+vu_blk_get_config(VuDev *vu_dev, uint8_t *config, uint32_t len)
397
{
398
+ /* TODO blkcfg must be little-endian for VIRTIO 1.0 */
399
VuServer *server = container_of(vu_dev, VuServer, vu_dev);
400
- VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
401
- memcpy(config, &vdev_blk->blkcfg, len);
402
-
403
+ VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
404
+ memcpy(config, &vexp->blkcfg, len);
405
return 0;
406
}
407
408
static int
409
-vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
410
+vu_blk_set_config(VuDev *vu_dev, const uint8_t *data,
411
uint32_t offset, uint32_t size, uint32_t flags)
412
{
413
VuServer *server = container_of(vu_dev, VuServer, vu_dev);
414
- VuBlockDev *vdev_blk = get_vu_block_device_by_server(server);
415
+ VuBlkExport *vexp = container_of(server, VuBlkExport, vu_server);
416
uint8_t wce;
417
418
/* don't support live migration */
419
@@ -XXX,XX +XXX,XX @@ vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
420
}
421
422
wce = *data;
423
- vdev_blk->blkcfg.wce = wce;
424
- blk_set_enable_write_cache(vdev_blk->backend, wce);
425
+ vexp->blkcfg.wce = wce;
426
+ blk_set_enable_write_cache(vexp->export.blk, wce);
427
return 0;
428
}
429
430
@@ -XXX,XX +XXX,XX @@ vu_block_set_config(VuDev *vu_dev, const uint8_t *data,
431
* of vu_process_message.
432
*
433
*/
434
-static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
435
+static int vu_blk_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
436
{
437
if (vmsg->request == VHOST_USER_NONE) {
438
dev->panic(dev, "disconnect");
439
@@ -XXX,XX +XXX,XX @@ static int vu_block_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
440
return false;
441
}
442
443
-static const VuDevIface vu_block_iface = {
444
- .get_features = vu_block_get_features,
445
- .queue_set_started = vu_block_queue_set_started,
446
- .get_protocol_features = vu_block_get_protocol_features,
447
- .get_config = vu_block_get_config,
448
- .set_config = vu_block_set_config,
449
- .process_msg = vu_block_process_msg,
450
+static const VuDevIface vu_blk_iface = {
451
+ .get_features = vu_blk_get_features,
452
+ .queue_set_started = vu_blk_queue_set_started,
453
+ .get_protocol_features = vu_blk_get_protocol_features,
454
+ .get_config = vu_blk_get_config,
455
+ .set_config = vu_blk_set_config,
456
+ .process_msg = vu_blk_process_msg,
457
};
458
459
static void blk_aio_attached(AioContext *ctx, void *opaque)
460
{
461
- VuBlockDev *vub_dev = opaque;
462
- vhost_user_server_attach_aio_context(&vub_dev->vu_server, ctx);
463
+ VuBlkExport *vexp = opaque;
464
+ vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
465
}
466
467
static void blk_aio_detach(void *opaque)
468
{
469
- VuBlockDev *vub_dev = opaque;
470
- vhost_user_server_detach_aio_context(&vub_dev->vu_server);
471
+ VuBlkExport *vexp = opaque;
472
+ vhost_user_server_detach_aio_context(&vexp->vu_server);
473
}
474
475
static void
476
-vu_block_initialize_config(BlockDriverState *bs,
477
+vu_blk_initialize_config(BlockDriverState *bs,
478
struct virtio_blk_config *config, uint32_t blk_size)
479
{
480
config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
481
@@ -XXX,XX +XXX,XX @@ vu_block_initialize_config(BlockDriverState *bs,
482
config->max_write_zeroes_seg = 1;
483
}
484
485
-static VuBlockDev *vu_block_init(VuBlockDev *vu_block_device, Error **errp)
486
+static void vu_blk_exp_request_shutdown(BlockExport *exp)
487
{
488
+ VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
489
490
- BlockBackend *blk;
491
- Error *local_error = NULL;
492
- const char *node_name = vu_block_device->node_name;
493
- bool writable = vu_block_device->writable;
494
- uint64_t perm = BLK_PERM_CONSISTENT_READ;
495
- int ret;
496
-
497
- AioContext *ctx;
498
-
499
- BlockDriverState *bs = bdrv_lookup_bs(node_name, node_name, &local_error);
500
-
501
- if (!bs) {
502
- error_propagate(errp, local_error);
503
- return NULL;
504
- }
505
-
506
- if (bdrv_is_read_only(bs)) {
507
- writable = false;
508
- }
509
-
510
- if (writable) {
511
- perm |= BLK_PERM_WRITE;
512
- }
513
-
514
- ctx = bdrv_get_aio_context(bs);
515
- aio_context_acquire(ctx);
516
- bdrv_invalidate_cache(bs, NULL);
517
- aio_context_release(ctx);
518
-
519
- /*
520
- * Don't allow resize while the vhost user server is running,
521
- * otherwise we don't care what happens with the node.
522
- */
523
- blk = blk_new(bdrv_get_aio_context(bs), perm,
524
- BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
525
- BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
526
- ret = blk_insert_bs(blk, bs, errp);
527
-
528
- if (ret < 0) {
529
- goto fail;
530
- }
531
-
532
- blk_set_enable_write_cache(blk, false);
533
-
534
- blk_set_allow_aio_context_change(blk, true);
535
-
536
- vu_block_device->blkcfg.wce = 0;
537
- vu_block_device->backend = blk;
538
- if (!vu_block_device->blk_size) {
539
- vu_block_device->blk_size = BDRV_SECTOR_SIZE;
540
- }
541
- vu_block_device->blkcfg.blk_size = vu_block_device->blk_size;
542
- blk_set_guest_block_size(blk, vu_block_device->blk_size);
543
- vu_block_initialize_config(bs, &vu_block_device->blkcfg,
544
- vu_block_device->blk_size);
545
- return vu_block_device;
546
-
547
-fail:
548
- blk_unref(blk);
549
- return NULL;
550
-}
551
-
552
-static void vu_block_deinit(VuBlockDev *vu_block_device)
553
-{
554
- if (vu_block_device->backend) {
555
- blk_remove_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
556
- blk_aio_detach, vu_block_device);
557
- }
558
-
559
- blk_unref(vu_block_device->backend);
560
-}
561
-
562
-static void vhost_user_blk_server_stop(VuBlockDev *vu_block_device)
563
-{
564
- vhost_user_server_stop(&vu_block_device->vu_server);
565
- vu_block_deinit(vu_block_device);
566
-}
567
-
568
-static void vhost_user_blk_server_start(VuBlockDev *vu_block_device,
569
- Error **errp)
570
-{
571
- AioContext *ctx;
572
- SocketAddress *addr = vu_block_device->addr;
573
-
574
- if (!vu_block_init(vu_block_device, errp)) {
575
- return;
576
- }
577
-
578
- ctx = bdrv_get_aio_context(blk_bs(vu_block_device->backend));
579
-
580
- if (!vhost_user_server_start(&vu_block_device->vu_server, addr, ctx,
581
- VHOST_USER_BLK_MAX_QUEUES, &vu_block_iface,
582
- errp)) {
583
- goto error;
584
- }
585
-
586
- blk_add_aio_context_notifier(vu_block_device->backend, blk_aio_attached,
587
- blk_aio_detach, vu_block_device);
588
- vu_block_device->running = true;
589
- return;
590
-
591
- error:
592
- vu_block_deinit(vu_block_device);
593
-}
594
-
595
-static bool vu_prop_modifiable(VuBlockDev *vus, Error **errp)
596
-{
597
- if (vus->running) {
598
- error_setg(errp, "The property can't be modified "
599
- "while the server is running");
600
- return false;
601
- }
602
- return true;
603
-}
604
-
605
-static void vu_set_node_name(Object *obj, const char *value, Error **errp)
606
-{
607
- VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
608
-
609
- if (!vu_prop_modifiable(vus, errp)) {
610
- return;
611
- }
612
-
613
- if (vus->node_name) {
614
- g_free(vus->node_name);
615
- }
616
-
617
- vus->node_name = g_strdup(value);
618
-}
619
-
620
-static char *vu_get_node_name(Object *obj, Error **errp)
621
-{
622
- VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
623
- return g_strdup(vus->node_name);
624
-}
625
-
626
-static void free_socket_addr(SocketAddress *addr)
627
-{
628
- g_free(addr->u.q_unix.path);
629
- g_free(addr);
630
-}
631
-
632
-static void vu_set_unix_socket(Object *obj, const char *value,
633
- Error **errp)
634
-{
635
- VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
636
-
637
- if (!vu_prop_modifiable(vus, errp)) {
638
- return;
639
- }
640
-
641
- if (vus->addr) {
642
- free_socket_addr(vus->addr);
643
- }
644
-
645
- SocketAddress *addr = g_new0(SocketAddress, 1);
646
- addr->type = SOCKET_ADDRESS_TYPE_UNIX;
647
- addr->u.q_unix.path = g_strdup(value);
648
- vus->addr = addr;
649
+ vhost_user_server_stop(&vexp->vu_server);
650
}
651
652
-static char *vu_get_unix_socket(Object *obj, Error **errp)
653
+static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
654
+ Error **errp)
655
{
656
- VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
657
- return g_strdup(vus->addr->u.q_unix.path);
658
-}
659
-
660
-static bool vu_get_block_writable(Object *obj, Error **errp)
661
-{
662
- VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
663
- return vus->writable;
664
-}
665
-
666
-static void vu_set_block_writable(Object *obj, bool value, Error **errp)
667
-{
668
- VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
669
-
670
- if (!vu_prop_modifiable(vus, errp)) {
671
- return;
672
- }
673
-
674
- vus->writable = value;
675
-}
676
-
677
-static void vu_get_blk_size(Object *obj, Visitor *v, const char *name,
678
- void *opaque, Error **errp)
679
-{
680
- VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
681
- uint32_t value = vus->blk_size;
682
-
683
- visit_type_uint32(v, name, &value, errp);
684
-}
685
-
686
-static void vu_set_blk_size(Object *obj, Visitor *v, const char *name,
687
- void *opaque, Error **errp)
688
-{
689
- VuBlockDev *vus = VHOST_USER_BLK_SERVER(obj);
690
-
691
+ VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
692
+ BlockExportOptionsVhostUserBlk *vu_opts = &opts->u.vhost_user_blk;
693
Error *local_err = NULL;
694
- uint32_t value;
695
+ uint64_t logical_block_size;
696
697
- if (!vu_prop_modifiable(vus, errp)) {
698
- return;
699
- }
700
+ vexp->writable = opts->writable;
701
+ vexp->blkcfg.wce = 0;
702
703
- visit_type_uint32(v, name, &value, &local_err);
704
- if (local_err) {
705
- goto out;
706
+ if (vu_opts->has_logical_block_size) {
707
+ logical_block_size = vu_opts->logical_block_size;
708
+ } else {
709
+ logical_block_size = BDRV_SECTOR_SIZE;
710
}
711
-
712
- check_block_size(object_get_typename(obj), name, value, &local_err);
713
+ check_block_size(exp->id, "logical-block-size", logical_block_size,
714
+ &local_err);
715
if (local_err) {
716
- goto out;
717
+ error_propagate(errp, local_err);
718
+ return -EINVAL;
719
+ }
720
+ vexp->blk_size = logical_block_size;
721
+ blk_set_guest_block_size(exp->blk, logical_block_size);
722
+ vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
723
+ logical_block_size);
201
+
724
+
202
+ if (before_cmd) {
725
+ blk_set_allow_aio_context_change(exp->blk, true);
203
+ g_string_append(cmd_line, before_cmd);
726
+ blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
204
+ }
727
+ vexp);
205
+ if (after_cmd) {
728
+
206
+ g_string_append(cmd_line2, after_cmd);
729
+ if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
207
+ }
730
+ VHOST_USER_BLK_MAX_QUEUES, &vu_blk_iface,
208
+ if (after_device) {
731
+ errp)) {
209
+ g_string_append(after_device_str, after_device);
732
+ blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
210
+ }
733
+ blk_aio_detach, vexp);
734
+ return -EADDRNOTAVAIL;
735
}
736
737
- vus->blk_size = value;
738
-
739
-out:
740
- error_propagate(errp, local_err);
741
-}
742
-
743
-static void vhost_user_blk_server_instance_finalize(Object *obj)
744
-{
745
- VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
746
-
747
- vhost_user_blk_server_stop(vub);
748
-
749
- /*
750
- * Unlike object_property_add_str, object_class_property_add_str
751
- * doesn't have a release method. Thus manual memory freeing is
752
- * needed.
753
- */
754
- free_socket_addr(vub->addr);
755
- g_free(vub->node_name);
756
-}
757
-
758
-static void vhost_user_blk_server_complete(UserCreatable *obj, Error **errp)
759
-{
760
- VuBlockDev *vub = VHOST_USER_BLK_SERVER(obj);
761
-
762
- vhost_user_blk_server_start(vub, errp);
763
+ return 0;
764
}
765
766
-static void vhost_user_blk_server_class_init(ObjectClass *klass,
767
- void *class_data)
768
+static void vu_blk_exp_delete(BlockExport *exp)
769
{
770
- UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
771
- ucc->complete = vhost_user_blk_server_complete;
772
-
773
- object_class_property_add_bool(klass, "writable",
774
- vu_get_block_writable,
775
- vu_set_block_writable);
776
-
777
- object_class_property_add_str(klass, "node-name",
778
- vu_get_node_name,
779
- vu_set_node_name);
780
-
781
- object_class_property_add_str(klass, "unix-socket",
782
- vu_get_unix_socket,
783
- vu_set_unix_socket);
784
+ VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
785
786
- object_class_property_add(klass, "logical-block-size", "uint32",
787
- vu_get_blk_size, vu_set_blk_size,
788
- NULL, NULL);
789
+ blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
790
+ vexp);
791
}
792
793
-static const TypeInfo vhost_user_blk_server_info = {
794
- .name = TYPE_VHOST_USER_BLK_SERVER,
795
- .parent = TYPE_OBJECT,
796
- .instance_size = sizeof(VuBlockDev),
797
- .instance_finalize = vhost_user_blk_server_instance_finalize,
798
- .class_init = vhost_user_blk_server_class_init,
799
- .interfaces = (InterfaceInfo[]) {
800
- {TYPE_USER_CREATABLE},
801
- {}
802
- },
803
+const BlockExportDriver blk_exp_vhost_user_blk = {
804
+ .type = BLOCK_EXPORT_TYPE_VHOST_USER_BLK,
805
+ .instance_size = sizeof(VuBlkExport),
806
+ .create = vu_blk_exp_create,
807
+ .delete = vu_blk_exp_delete,
808
+ .request_shutdown = vu_blk_exp_request_shutdown,
809
};
810
-
811
-static void vhost_user_blk_server_register_types(void)
812
-{
813
- type_register_static(&vhost_user_blk_server_info);
814
-}
815
-
816
-type_init(vhost_user_blk_server_register_types)
817
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
818
index XXXXXXX..XXXXXXX 100644
819
--- a/util/vhost-user-server.c
820
+++ b/util/vhost-user-server.c
821
@@ -XXX,XX +XXX,XX @@ bool vhost_user_server_start(VuServer *server,
822
Error **errp)
823
{
824
QEMUBH *bh;
825
- QIONetListener *listener = qio_net_listener_new();
826
+ QIONetListener *listener;
827
+
828
+ if (socket_addr->type != SOCKET_ADDRESS_TYPE_UNIX &&
829
+ socket_addr->type != SOCKET_ADDRESS_TYPE_FD) {
830
+ error_setg(errp, "Only socket address types 'unix' and 'fd' are supported");
831
+ return false;
211
+ }
832
+ }
212
+
833
+
213
+ path_vec[path_vec_size++] = NULL;
834
+ listener = qio_net_listener_new();
214
+ g_string_append(cmd_line, after_device_str->str);
835
if (qio_net_listener_open_sync(listener, socket_addr, 1,
215
+ g_string_free(after_device_str, true);
836
errp) < 0) {
216
+
837
object_unref(OBJECT(listener));
217
+ g_string_append(cmd_line, cmd_line2->str);
838
diff --git a/block/export/meson.build b/block/export/meson.build
218
+ g_string_free(cmd_line2, true);
839
index XXXXXXX..XXXXXXX 100644
219
+
840
--- a/block/export/meson.build
220
+ /*
841
+++ b/block/export/meson.build
221
+ * here position 0 has <arch>/<machine>, position 1 has <machine>.
842
@@ -1 +1,2 @@
222
+ * The path must not have the <arch>, qtest_add_data_func adds it.
843
block_ss.add(files('export.c'))
223
+ */
844
+block_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-blk-server.c', '../../contrib/libvhost-user/libvhost-user.c'))
224
+ path_str = g_strjoinv("/", path_vec + 1);
845
diff --git a/block/meson.build b/block/meson.build
225
+
846
index XXXXXXX..XXXXXXX 100644
226
+ /* Check that this is the test we care about: */
847
--- a/block/meson.build
227
+ char *test_name = strrchr(path_str, '/') + 1;
848
+++ b/block/meson.build
228
+ if (strcmp(test_name, fuzz_target_name) == 0) {
849
@@ -XXX,XX +XXX,XX @@ block_ss.add(when: 'CONFIG_WIN32', if_true: files('file-win32.c', 'win32-aio.c')
229
+ /*
850
block_ss.add(when: 'CONFIG_POSIX', if_true: [files('file-posix.c'), coref, iokit])
230
+ * put arch/machine in position 1 so run_one_test can do its work
851
block_ss.add(when: 'CONFIG_LIBISCSI', if_true: files('iscsi-opts.c'))
231
+ * and add the command line at position 0.
852
block_ss.add(when: 'CONFIG_LINUX', if_true: files('nvme.c'))
232
+ */
853
-block_ss.add(when: 'CONFIG_LINUX', if_true: files('export/vhost-user-blk-server.c', '../contrib/libvhost-user/libvhost-user.c'))
233
+ path_vec[1] = path_vec[0];
854
block_ss.add(when: 'CONFIG_REPLICATION', if_true: files('replication.c'))
234
+ path_vec[0] = g_string_free(cmd_line, false);
855
block_ss.add(when: 'CONFIG_SHEEPDOG', if_true: files('sheepdog.c'))
235
+
856
block_ss.add(when: ['CONFIG_LINUX_AIO', libaio], if_true: files('linux-aio.c'))
236
+ fuzz_path_vec = path_vec;
237
+ } else {
238
+ g_free(path_vec);
239
+ }
240
+
241
+ g_free(path_str);
242
+}
243
+
244
+static const char *qos_get_cmdline(FuzzTarget *t)
245
+{
246
+ /*
247
+ * Set a global variable that we use to identify the qos_path for our
248
+ * fuzz_target
249
+ */
250
+ fuzz_target_name = t->name;
251
+ qos_set_machines_devices_available();
252
+ qos_graph_foreach_test_path(walk_path);
253
+ return qos_build_main_args();
254
+}
255
+
256
+void fuzz_add_qos_target(
257
+ FuzzTarget *fuzz_opts,
258
+ const char *interface,
259
+ QOSGraphTestOptions *opts
260
+ )
261
+{
262
+ qos_add_test(fuzz_opts->name, interface, NULL, opts);
263
+ fuzz_opts->get_init_cmdline = qos_get_cmdline;
264
+ fuzz_add_target(fuzz_opts);
265
+}
266
+
267
+void qos_init_path(QTestState *s)
268
+{
269
+ fuzz_qos_obj = qos_allocate_objects(s , &fuzz_qos_alloc);
270
+}
271
diff --git a/tests/qtest/fuzz/qos_fuzz.h b/tests/qtest/fuzz/qos_fuzz.h
272
new file mode 100644
273
index XXXXXXX..XXXXXXX
274
--- /dev/null
275
+++ b/tests/qtest/fuzz/qos_fuzz.h
276
@@ -XXX,XX +XXX,XX @@
277
+/*
278
+ * QOS-assisted fuzzing helpers
279
+ *
280
+ * Copyright Red Hat Inc., 2019
281
+ *
282
+ * Authors:
283
+ * Alexander Bulekov <alxndr@bu.edu>
284
+ *
285
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
286
+ * See the COPYING file in the top-level directory.
287
+ */
288
+
289
+#ifndef _QOS_FUZZ_H_
290
+#define _QOS_FUZZ_H_
291
+
292
+#include "tests/qtest/fuzz/fuzz.h"
293
+#include "tests/qtest/libqos/qgraph.h"
294
+
295
+int qos_fuzz(const unsigned char *Data, size_t Size);
296
+void qos_setup(void);
297
+
298
+extern void *fuzz_qos_obj;
299
+extern QGuestAllocator *fuzz_qos_alloc;
300
+
301
+void fuzz_add_qos_target(
302
+ FuzzTarget *fuzz_opts,
303
+ const char *interface,
304
+ QOSGraphTestOptions *opts
305
+ );
306
+
307
+void qos_init_path(QTestState *);
308
+
309
+#endif
310
--
857
--
311
2.24.1
858
2.26.2
312
859
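
Not part of the series, but as an illustration of how the qos_fuzz.h API shown above is meant to be consumed: a device fuzz target registers itself through fuzz_add_qos_target() and can use qos_init_path() as its pre-fuzz hook. The device name, the example callback and the FuzzTarget fields other than .name are assumptions made for this sketch, not code from the pull request.

    /* Hypothetical registration of a QOS-assisted fuzz target. */
    static void example_fuzz(QTestState *s, const unsigned char *Data,
                             size_t Size)
    {
        /* Interpret Data as a stream of operations against the device. */
    }

    static void register_example_fuzz_target(void)
    {
        static FuzzTarget target = {
            .name = "example-device-fuzz",
            .description = "Fuzz an example qos-managed device",
            .pre_fuzz = qos_init_path,  /* allocate qos objects for the path */
            .fuzz = example_fuzz,
        };
        static QOSGraphTestOptions opts = { 0 };

        fuzz_add_qos_target(&target, "example-device", &opts);
    }

    fuzz_target_init(register_example_fuzz_target);
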
1
From: Alexander Bulekov <alxndr@bu.edu>

Move vl.c to a separate directory, similar to linux-user/

Update the checkpatch and get_maintainer scripts, since they relied on
/vl.c for top_of_tree checks.

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Message-id: 20200220041118.23264-2-alxndr@bu.edu
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---

Headers used by other subsystems are located in include/. Also add the
vhost-user-server and vhost-user-blk-server headers to MAINTAINERS.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200924151549.913737-13-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
12
MAINTAINERS | 2 +-
8
MAINTAINERS | 4 +++-
13
Makefile.objs | 2 --
9
{util => include/qemu}/vhost-user-server.h | 0
14
Makefile.target | 1 +
10
block/export/vhost-user-blk-server.c | 2 +-
15
scripts/checkpatch.pl | 2 +-
11
util/vhost-user-server.c | 2 +-
16
scripts/get_maintainer.pl | 3 ++-
12
4 files changed, 5 insertions(+), 3 deletions(-)
17
softmmu/Makefile.objs | 2 ++
13
rename {util => include/qemu}/vhost-user-server.h (100%)
18
vl.c => softmmu/vl.c | 0
19
7 files changed, 7 insertions(+), 5 deletions(-)
20
create mode 100644 softmmu/Makefile.objs
21
rename vl.c => softmmu/vl.c (100%)
22
14
23
diff --git a/MAINTAINERS b/MAINTAINERS
15
diff --git a/MAINTAINERS b/MAINTAINERS
24
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
25
--- a/MAINTAINERS
17
--- a/MAINTAINERS
26
+++ b/MAINTAINERS
18
+++ b/MAINTAINERS
27
@@ -XXX,XX +XXX,XX @@ F: include/qemu/main-loop.h
19
@@ -XXX,XX +XXX,XX @@ Vhost-user block device backend server
28
F: include/sysemu/runstate.h
20
M: Coiby Xu <Coiby.Xu@gmail.com>
29
F: util/main-loop.c
21
S: Maintained
30
F: util/qemu-timer.c
22
F: block/export/vhost-user-blk-server.c
31
-F: vl.c
23
-F: util/vhost-user-server.c
32
+F: softmmu/vl.c
24
+F: block/export/vhost-user-blk-server.h
33
F: qapi/run-state.json
25
+F: include/qemu/vhost-user-server.h
34
26
F: tests/qtest/libqos/vhost-user-blk.c
35
Human Monitor (HMP)
27
+F: util/vhost-user-server.c
36
diff --git a/Makefile.objs b/Makefile.objs
28
29
Replication
30
M: Wen Congyang <wencongyang2@huawei.com>
31
diff --git a/util/vhost-user-server.h b/include/qemu/vhost-user-server.h
32
similarity index 100%
33
rename from util/vhost-user-server.h
34
rename to include/qemu/vhost-user-server.h
35
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
37
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
38
--- a/Makefile.objs
37
--- a/block/export/vhost-user-blk-server.c
39
+++ b/Makefile.objs
38
+++ b/block/export/vhost-user-blk-server.c
40
@@ -XXX,XX +XXX,XX @@ common-obj-y += ui/
39
@@ -XXX,XX +XXX,XX @@
41
common-obj-m += ui/
40
#include "block/block.h"
42
41
#include "contrib/libvhost-user/libvhost-user.h"
43
common-obj-y += dma-helpers.o
42
#include "standard-headers/linux/virtio_blk.h"
44
-common-obj-y += vl.o
43
-#include "util/vhost-user-server.h"
45
-vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
44
+#include "qemu/vhost-user-server.h"
46
common-obj-$(CONFIG_TPM) += tpm.o
45
#include "vhost-user-blk-server.h"
47
46
#include "qapi/error.h"
48
common-obj-y += backends/
47
#include "qom/object_interfaces.h"
49
diff --git a/Makefile.target b/Makefile.target
48
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
50
index XXXXXXX..XXXXXXX 100644
49
index XXXXXXX..XXXXXXX 100644
51
--- a/Makefile.target
50
--- a/util/vhost-user-server.c
52
+++ b/Makefile.target
51
+++ b/util/vhost-user-server.c
53
@@ -XXX,XX +XXX,XX @@ obj-y += qapi/
54
obj-y += memory.o
55
obj-y += memory_mapping.o
56
obj-y += migration/ram.o
57
+obj-y += softmmu/
58
LIBS := $(libs_softmmu) $(LIBS)
59
60
# Hardware support
61
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
62
index XXXXXXX..XXXXXXX 100755
63
--- a/scripts/checkpatch.pl
64
+++ b/scripts/checkpatch.pl
65
@@ -XXX,XX +XXX,XX @@ sub top_of_kernel_tree {
66
    my @tree_check = (
67
        "COPYING", "MAINTAINERS", "Makefile",
68
        "README.rst", "docs", "VERSION",
69
-        "vl.c"
70
+        "linux-user", "softmmu"
71
    );
72
73
    foreach my $check (@tree_check) {
74
diff --git a/scripts/get_maintainer.pl b/scripts/get_maintainer.pl
75
index XXXXXXX..XXXXXXX 100755
76
--- a/scripts/get_maintainer.pl
77
+++ b/scripts/get_maintainer.pl
78
@@ -XXX,XX +XXX,XX @@ sub top_of_tree {
79
&& (-f "${lk_path}Makefile")
80
&& (-d "${lk_path}docs")
81
&& (-f "${lk_path}VERSION")
82
- && (-f "${lk_path}vl.c")) {
83
+ && (-d "${lk_path}linux-user/")
84
+ && (-d "${lk_path}softmmu/")) {
85
    return 1;
86
}
87
return 0;
88
diff --git a/softmmu/Makefile.objs b/softmmu/Makefile.objs
89
new file mode 100644
90
index XXXXXXX..XXXXXXX
91
--- /dev/null
92
+++ b/softmmu/Makefile.objs
93
@@ -XXX,XX +XXX,XX @@
52
@@ -XXX,XX +XXX,XX @@
94
+obj-y += vl.o
53
*/
95
+vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
54
#include "qemu/osdep.h"
96
diff --git a/vl.c b/softmmu/vl.c
55
#include "qemu/main-loop.h"
97
similarity index 100%
56
+#include "qemu/vhost-user-server.h"
98
rename from vl.c
57
#include "block/aio-wait.h"
99
rename to softmmu/vl.c
58
-#include "vhost-user-server.h"
59
60
/*
61
* Theory of operation:
100
--
62
--
101
2.24.1
63
2.26.2
102
64
1
From: Alexander Bulekov <alxndr@bu.edu>
1
Don't compile contrib/libvhost-user/libvhost-user.c again. Instead build
2
the static library once and then reuse it throughout QEMU.
2
3
3
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
4
Also switch from CONFIG_LINUX to CONFIG_VHOST_USER, which is what the
4
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
5
vhost-user tools (vhost-user-gpu, etc) do.
5
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
6
6
Message-id: 20200220041118.23264-18-alxndr@bu.edu
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Message-id: 20200924151549.913737-14-stefanha@redhat.com
9
[Added CONFIG_LINUX again because libvhost-user doesn't build on macOS.
10
--Stefan]
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
11
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
---
12
---
9
Makefile | 15 ++++++++++++++-
13
block/export/export.c | 8 ++++----
10
Makefile.target | 16 ++++++++++++++++
14
block/export/meson.build | 2 +-
11
2 files changed, 30 insertions(+), 1 deletion(-)
15
contrib/libvhost-user/meson.build | 1 +
16
meson.build | 6 +++++-
17
util/meson.build | 4 +++-
18
5 files changed, 14 insertions(+), 7 deletions(-)
12
19
13
diff --git a/Makefile b/Makefile
20
diff --git a/block/export/export.c b/block/export/export.c
14
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
15
--- a/Makefile
22
--- a/block/export/export.c
16
+++ b/Makefile
23
+++ b/block/export/export.c
17
@@ -XXX,XX +XXX,XX @@ config-host.h-timestamp: config-host.mak
24
@@ -XXX,XX +XXX,XX @@
18
qemu-options.def: $(SRC_PATH)/qemu-options.hx $(SRC_PATH)/scripts/hxtool
25
#include "sysemu/block-backend.h"
19
    $(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@,"GEN","$@")
26
#include "block/export.h"
20
27
#include "block/nbd.h"
21
-TARGET_DIRS_RULES := $(foreach t, all clean install, $(addsuffix /$(t), $(TARGET_DIRS)))
28
-#if CONFIG_LINUX
22
+TARGET_DIRS_RULES := $(foreach t, all fuzz clean install, $(addsuffix /$(t), $(TARGET_DIRS)))
29
-#include "block/export/vhost-user-blk-server.h"
23
30
-#endif
24
SOFTMMU_ALL_RULES=$(filter %-softmmu/all, $(TARGET_DIRS_RULES))
31
#include "qapi/error.h"
25
$(SOFTMMU_ALL_RULES): $(authz-obj-y)
32
#include "qapi/qapi-commands-block-export.h"
26
@@ -XXX,XX +XXX,XX @@ ifdef DECOMPRESS_EDK2_BLOBS
33
#include "qapi/qapi-events-block-export.h"
27
$(SOFTMMU_ALL_RULES): $(edk2-decompressed)
34
#include "qemu/id.h"
28
endif
35
+#ifdef CONFIG_VHOST_USER
29
36
+#include "vhost-user-blk-server.h"
30
+SOFTMMU_FUZZ_RULES=$(filter %-softmmu/fuzz, $(TARGET_DIRS_RULES))
37
+#endif
31
+$(SOFTMMU_FUZZ_RULES): $(authz-obj-y)
38
32
+$(SOFTMMU_FUZZ_RULES): $(block-obj-y)
39
static const BlockExportDriver *blk_exp_drivers[] = {
33
+$(SOFTMMU_FUZZ_RULES): $(chardev-obj-y)
40
&blk_exp_nbd,
34
+$(SOFTMMU_FUZZ_RULES): $(crypto-obj-y)
41
-#if CONFIG_LINUX
35
+$(SOFTMMU_FUZZ_RULES): $(io-obj-y)
42
+#ifdef CONFIG_VHOST_USER
36
+$(SOFTMMU_FUZZ_RULES): config-all-devices.mak
43
&blk_exp_vhost_user_blk,
37
+$(SOFTMMU_FUZZ_RULES): $(edk2-decompressed)
44
#endif
38
+
45
};
39
.PHONY: $(TARGET_DIRS_RULES)
46
diff --git a/block/export/meson.build b/block/export/meson.build
40
# The $(TARGET_DIRS_RULES) are of the form SUBDIR/GOAL, so that
41
# $(dir $@) yields the sub-directory, and $(notdir $@) yields the sub-goal
42
@@ -XXX,XX +XXX,XX @@ subdir-slirp: slirp/all
43
$(filter %/all, $(TARGET_DIRS_RULES)): libqemuutil.a $(common-obj-y) \
44
    $(qom-obj-y)
45
46
+$(filter %/fuzz, $(TARGET_DIRS_RULES)): libqemuutil.a $(common-obj-y) \
47
+    $(qom-obj-y) $(crypto-user-obj-$(CONFIG_USER_ONLY))
48
+
49
ROM_DIRS = $(addprefix pc-bios/, $(ROMS))
50
ROM_DIRS_RULES=$(foreach t, all clean, $(addsuffix /$(t), $(ROM_DIRS)))
51
# Only keep -O and -g cflags
52
@@ -XXX,XX +XXX,XX @@ $(ROM_DIRS_RULES):
53
54
.PHONY: recurse-all recurse-clean recurse-install
55
recurse-all: $(addsuffix /all, $(TARGET_DIRS) $(ROM_DIRS))
56
+recurse-fuzz: $(addsuffix /fuzz, $(TARGET_DIRS) $(ROM_DIRS))
57
recurse-clean: $(addsuffix /clean, $(TARGET_DIRS) $(ROM_DIRS))
58
recurse-install: $(addsuffix /install, $(TARGET_DIRS))
59
$(addsuffix /install, $(TARGET_DIRS)): all
60
diff --git a/Makefile.target b/Makefile.target
61
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
62
--- a/Makefile.target
48
--- a/block/export/meson.build
63
+++ b/Makefile.target
49
+++ b/block/export/meson.build
64
@@ -XXX,XX +XXX,XX @@ ifdef CONFIG_TRACE_SYSTEMTAP
50
@@ -XXX,XX +XXX,XX @@
65
    rm -f *.stp
51
block_ss.add(files('export.c'))
66
endif
52
-block_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-blk-server.c', '../../contrib/libvhost-user/libvhost-user.c'))
67
53
+block_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: files('vhost-user-blk-server.c'))
68
+ifdef CONFIG_FUZZ
54
diff --git a/contrib/libvhost-user/meson.build b/contrib/libvhost-user/meson.build
69
+include $(SRC_PATH)/tests/qtest/fuzz/Makefile.include
55
index XXXXXXX..XXXXXXX 100644
70
+include $(SRC_PATH)/tests/qtest/Makefile.include
56
--- a/contrib/libvhost-user/meson.build
71
+
57
+++ b/contrib/libvhost-user/meson.build
72
+fuzz: fuzz-vars
58
@@ -XXX,XX +XXX,XX @@
73
+fuzz-vars: QEMU_CFLAGS := $(FUZZ_CFLAGS) $(QEMU_CFLAGS)
59
libvhost_user = static_library('vhost-user',
74
+fuzz-vars: QEMU_LDFLAGS := $(FUZZ_LDFLAGS) $(QEMU_LDFLAGS)
60
files('libvhost-user.c', 'libvhost-user-glib.c'),
75
+fuzz-vars: $(QEMU_PROG_FUZZ)
61
build_by_default: false)
76
+dummy := $(call unnest-vars,, fuzz-obj-y)
62
+vhost_user = declare_dependency(link_with: libvhost_user)
77
+
63
diff --git a/meson.build b/meson.build
78
+
64
index XXXXXXX..XXXXXXX 100644
79
+$(QEMU_PROG_FUZZ): config-devices.mak $(all-obj-y) $(COMMON_LDADDS) $(fuzz-obj-y)
65
--- a/meson.build
80
+    $(call LINK, $(filter-out %.mak, $^))
66
+++ b/meson.build
81
+
67
@@ -XXX,XX +XXX,XX @@ trace_events_subdirs += [
68
'util',
69
]
70
71
+vhost_user = not_found
72
+if 'CONFIG_VHOST_USER' in config_host
73
+ subdir('contrib/libvhost-user')
82
+endif
74
+endif
83
+
75
+
84
install: all
76
subdir('qapi')
85
ifneq ($(PROGS),)
77
subdir('qobject')
86
    $(call install-prog,$(PROGS),$(DESTDIR)$(bindir))
78
subdir('stubs')
79
@@ -XXX,XX +XXX,XX @@ if have_tools
80
install: true)
81
82
if 'CONFIG_VHOST_USER' in config_host
83
- subdir('contrib/libvhost-user')
84
subdir('contrib/vhost-user-blk')
85
subdir('contrib/vhost-user-gpu')
86
subdir('contrib/vhost-user-input')
87
diff --git a/util/meson.build b/util/meson.build
88
index XXXXXXX..XXXXXXX 100644
89
--- a/util/meson.build
90
+++ b/util/meson.build
91
@@ -XXX,XX +XXX,XX @@ if have_block
92
util_ss.add(files('main-loop.c'))
93
util_ss.add(files('nvdimm-utils.c'))
94
util_ss.add(files('qemu-coroutine.c', 'qemu-coroutine-lock.c', 'qemu-coroutine-io.c'))
95
- util_ss.add(when: 'CONFIG_LINUX', if_true: files('vhost-user-server.c'))
96
+ util_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: [
97
+ files('vhost-user-server.c'), vhost_user
98
+ ])
99
util_ss.add(files('block-helpers.c'))
100
util_ss.add(files('qemu-coroutine-sleep.c'))
101
util_ss.add(files('qemu-co-shared-resource.c'))
87
--
102
--
88
2.24.1
103
2.26.2
89
104
1
From: Alexander Bulekov <alxndr@bu.edu>
1
Introduce libblkdev.fa to avoid recompiling blockdev_ss twice.
2
2
3
The names i2c_send and i2c_recv collide with functions defined in
3
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
4
hw/i2c/core.c. This causes an error when linking against libqos and
4
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
5
softmmu simultaneously (for example when using qtest inproc). Rename the
5
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
6
libqos functions to avoid this.
6
Message-id: 20200929125516.186715-3-stefanha@redhat.com
7
8
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
9
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
10
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
11
Acked-by: Thomas Huth <thuth@redhat.com>
12
Message-id: 20200220041118.23264-10-alxndr@bu.edu
13
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
14
---
8
---
15
tests/qtest/libqos/i2c.c | 10 +++++-----
9
meson.build | 12 ++++++++++--
16
tests/qtest/libqos/i2c.h | 4 ++--
10
storage-daemon/meson.build | 3 +--
17
tests/qtest/pca9552-test.c | 10 +++++-----
11
2 files changed, 11 insertions(+), 4 deletions(-)
18
3 files changed, 12 insertions(+), 12 deletions(-)
19
12
20
diff --git a/tests/qtest/libqos/i2c.c b/tests/qtest/libqos/i2c.c
13
diff --git a/meson.build b/meson.build
21
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
22
--- a/tests/qtest/libqos/i2c.c
15
--- a/meson.build
23
+++ b/tests/qtest/libqos/i2c.c
16
+++ b/meson.build
17
@@ -XXX,XX +XXX,XX @@ blockdev_ss.add(files(
18
# os-win32.c does not
19
blockdev_ss.add(when: 'CONFIG_POSIX', if_true: files('os-posix.c'))
20
softmmu_ss.add(when: 'CONFIG_WIN32', if_true: [files('os-win32.c')])
21
-softmmu_ss.add_all(blockdev_ss)
22
23
common_ss.add(files('cpus-common.c'))
24
25
@@ -XXX,XX +XXX,XX @@ block = declare_dependency(link_whole: [libblock],
26
link_args: '@block.syms',
27
dependencies: [crypto, io])
28
29
+blockdev_ss = blockdev_ss.apply(config_host, strict: false)
30
+libblockdev = static_library('blockdev', blockdev_ss.sources() + genh,
31
+ dependencies: blockdev_ss.dependencies(),
32
+ name_suffix: 'fa',
33
+ build_by_default: false)
34
+
35
+blockdev = declare_dependency(link_whole: [libblockdev],
36
+ dependencies: [block])
37
+
38
qmp_ss = qmp_ss.apply(config_host, strict: false)
39
libqmp = static_library('qmp', qmp_ss.sources() + genh,
40
dependencies: qmp_ss.dependencies(),
41
@@ -XXX,XX +XXX,XX @@ foreach m : block_mods + softmmu_mods
42
install_dir: config_host['qemu_moddir'])
43
endforeach
44
45
-softmmu_ss.add(authz, block, chardev, crypto, io, qmp)
46
+softmmu_ss.add(authz, blockdev, chardev, crypto, io, qmp)
47
common_ss.add(qom, qemuutil)
48
49
common_ss.add_all(when: 'CONFIG_SOFTMMU', if_true: [softmmu_ss])
50
diff --git a/storage-daemon/meson.build b/storage-daemon/meson.build
51
index XXXXXXX..XXXXXXX 100644
52
--- a/storage-daemon/meson.build
53
+++ b/storage-daemon/meson.build
24
@@ -XXX,XX +XXX,XX @@
54
@@ -XXX,XX +XXX,XX @@
25
#include "libqos/i2c.h"
55
qsd_ss = ss.source_set()
26
#include "libqtest.h"
56
qsd_ss.add(files('qemu-storage-daemon.c'))
27
57
-qsd_ss.add(block, chardev, qmp, qom, qemuutil)
28
-void i2c_send(QI2CDevice *i2cdev, const uint8_t *buf, uint16_t len)
58
-qsd_ss.add_all(blockdev_ss)
29
+void qi2c_send(QI2CDevice *i2cdev, const uint8_t *buf, uint16_t len)
59
+qsd_ss.add(blockdev, chardev, qmp, qom, qemuutil)
30
{
60
31
i2cdev->bus->send(i2cdev->bus, i2cdev->addr, buf, len);
61
subdir('qapi')
32
}
33
34
-void i2c_recv(QI2CDevice *i2cdev, uint8_t *buf, uint16_t len)
35
+void qi2c_recv(QI2CDevice *i2cdev, uint8_t *buf, uint16_t len)
36
{
37
i2cdev->bus->recv(i2cdev->bus, i2cdev->addr, buf, len);
38
}
39
@@ -XXX,XX +XXX,XX @@ void i2c_recv(QI2CDevice *i2cdev, uint8_t *buf, uint16_t len)
40
void i2c_read_block(QI2CDevice *i2cdev, uint8_t reg,
41
uint8_t *buf, uint16_t len)
42
{
43
- i2c_send(i2cdev, &reg, 1);
44
- i2c_recv(i2cdev, buf, len);
45
+ qi2c_send(i2cdev, &reg, 1);
46
+ qi2c_recv(i2cdev, buf, len);
47
}
48
49
void i2c_write_block(QI2CDevice *i2cdev, uint8_t reg,
50
@@ -XXX,XX +XXX,XX @@ void i2c_write_block(QI2CDevice *i2cdev, uint8_t reg,
51
uint8_t *cmd = g_malloc(len + 1);
52
cmd[0] = reg;
53
memcpy(&cmd[1], buf, len);
54
- i2c_send(i2cdev, cmd, len + 1);
55
+ qi2c_send(i2cdev, cmd, len + 1);
56
g_free(cmd);
57
}
58
59
diff --git a/tests/qtest/libqos/i2c.h b/tests/qtest/libqos/i2c.h
60
index XXXXXXX..XXXXXXX 100644
61
--- a/tests/qtest/libqos/i2c.h
62
+++ b/tests/qtest/libqos/i2c.h
63
@@ -XXX,XX +XXX,XX @@ struct QI2CDevice {
64
void *i2c_device_create(void *i2c_bus, QGuestAllocator *alloc, void *addr);
65
void add_qi2c_address(QOSGraphEdgeOptions *opts, QI2CAddress *addr);
66
67
-void i2c_send(QI2CDevice *dev, const uint8_t *buf, uint16_t len);
68
-void i2c_recv(QI2CDevice *dev, uint8_t *buf, uint16_t len);
69
+void qi2c_send(QI2CDevice *dev, const uint8_t *buf, uint16_t len);
70
+void qi2c_recv(QI2CDevice *dev, uint8_t *buf, uint16_t len);
71
72
void i2c_read_block(QI2CDevice *dev, uint8_t reg,
73
uint8_t *buf, uint16_t len);
74
diff --git a/tests/qtest/pca9552-test.c b/tests/qtest/pca9552-test.c
75
index XXXXXXX..XXXXXXX 100644
76
--- a/tests/qtest/pca9552-test.c
77
+++ b/tests/qtest/pca9552-test.c
78
@@ -XXX,XX +XXX,XX @@ static void receive_autoinc(void *obj, void *data, QGuestAllocator *alloc)
79
80
pca9552_init(i2cdev);
81
82
- i2c_send(i2cdev, &reg, 1);
83
+ qi2c_send(i2cdev, &reg, 1);
84
85
/* PCA9552_LS0 */
86
- i2c_recv(i2cdev, &resp, 1);
87
+ qi2c_recv(i2cdev, &resp, 1);
88
g_assert_cmphex(resp, ==, 0x54);
89
90
/* PCA9552_LS1 */
91
- i2c_recv(i2cdev, &resp, 1);
92
+ qi2c_recv(i2cdev, &resp, 1);
93
g_assert_cmphex(resp, ==, 0x55);
94
95
/* PCA9552_LS2 */
96
- i2c_recv(i2cdev, &resp, 1);
97
+ qi2c_recv(i2cdev, &resp, 1);
98
g_assert_cmphex(resp, ==, 0x55);
99
100
/* PCA9552_LS3 */
101
- i2c_recv(i2cdev, &resp, 1);
102
+ qi2c_recv(i2cdev, &resp, 1);
103
g_assert_cmphex(resp, ==, 0x54);
104
}
105
62
106
--
63
--
107
2.24.1
64
2.26.2
108
65
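
The renamed libqos helpers keep their original signatures, so callers only change the prefix. A trivial, made-up usage example (not part of the series):

    /* Read a single register via the renamed libqos i2c helpers. */
    static uint8_t example_read_reg(QI2CDevice *i2cdev, uint8_t reg)
    {
        uint8_t val;

        qi2c_send(i2cdev, &reg, 1);
        qi2c_recv(i2cdev, &val, 1);
        return val;
    }
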
1
From: Paolo Bonzini <pbonzini@redhat.com>

QSLIST is the only family of lists for which we do not have RCU-friendly
accessors; add them.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200220103828.24525-1-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---

Block exports are used by softmmu, qemu-storage-daemon, and qemu-nbd.
They are not used by other programs and are not otherwise needed in
libblock.

Undo the recent move of blockdev-nbd.c from blockdev_ss into block_ss.
Since bdrv_close_all() (libblock) calls blk_exp_close_all()
(libblockdev), a stub function is required.

Make qemu-nbd.c use signal handling utility functions instead of
duplicating the code. This helps because os-posix.c is in libblockdev
and it depends on a qemu_system_killed() symbol that qemu-nbd.c lacks.
Once we use the signal handling utility functions we also end up
providing the necessary symbol.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20200929125516.186715-4-stefanha@redhat.com
[Fixed s/ndb/nbd/ typo in commit description as suggested by Eric Blake
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
11
include/qemu/queue.h | 15 +++++++++++--
23
qemu-nbd.c | 21 ++++++++-------------
12
include/qemu/rcu_queue.h | 47 ++++++++++++++++++++++++++++++++++++++++
24
stubs/blk-exp-close-all.c | 7 +++++++
13
tests/Makefile.include | 2 ++
25
block/export/meson.build | 4 ++--
14
tests/test-rcu-list.c | 16 ++++++++++++++
26
meson.build | 4 ++--
15
tests/test-rcu-slist.c | 2 ++
27
nbd/meson.build | 2 ++
16
5 files changed, 80 insertions(+), 2 deletions(-)
28
stubs/meson.build | 1 +
17
create mode 100644 tests/test-rcu-slist.c
29
6 files changed, 22 insertions(+), 17 deletions(-)
30
create mode 100644 stubs/blk-exp-close-all.c
18
31
19
diff --git a/include/qemu/queue.h b/include/qemu/queue.h
32
diff --git a/qemu-nbd.c b/qemu-nbd.c
20
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
21
--- a/include/qemu/queue.h
34
--- a/qemu-nbd.c
22
+++ b/include/qemu/queue.h
35
+++ b/qemu-nbd.c
23
@@ -XXX,XX +XXX,XX @@ struct { \
36
@@ -XXX,XX +XXX,XX @@
24
(head)->slh_first = (head)->slh_first->field.sle_next; \
37
#include "qapi/error.h"
25
} while (/*CONSTCOND*/0)
38
#include "qemu/cutils.h"
26
39
#include "sysemu/block-backend.h"
27
-#define QSLIST_REMOVE_AFTER(slistelm, field) do { \
40
+#include "sysemu/runstate.h" /* for qemu_system_killed() prototype */
28
+#define QSLIST_REMOVE_AFTER(slistelm, field) do { \
41
#include "block/block_int.h"
29
(slistelm)->field.sle_next = \
42
#include "block/nbd.h"
30
- QSLIST_NEXT(QSLIST_NEXT((slistelm), field), field); \
43
#include "qemu/main-loop.h"
31
+ QSLIST_NEXT(QSLIST_NEXT((slistelm), field), field); \
44
@@ -XXX,XX +XXX,XX @@ QEMU_COPYRIGHT "\n"
32
+} while (/*CONSTCOND*/0)
45
}
33
+
46
34
+#define QSLIST_REMOVE(head, elm, type, field) do { \
47
#ifdef CONFIG_POSIX
35
+ if ((head)->slh_first == (elm)) { \
48
-static void termsig_handler(int signum)
36
+ QSLIST_REMOVE_HEAD((head), field); \
37
+ } else { \
38
+ struct type *curelm = (head)->slh_first; \
39
+ while (curelm->field.sle_next != (elm)) \
40
+ curelm = curelm->field.sle_next; \
41
+ curelm->field.sle_next = curelm->field.sle_next->field.sle_next; \
42
+ } \
43
} while (/*CONSTCOND*/0)
44
45
#define QSLIST_FOREACH(var, head, field) \
46
diff --git a/include/qemu/rcu_queue.h b/include/qemu/rcu_queue.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/include/qemu/rcu_queue.h
49
+++ b/include/qemu/rcu_queue.h
50
@@ -XXX,XX +XXX,XX @@ extern "C" {
51
(var) && ((next) = atomic_rcu_read(&(var)->field.tqe_next), 1); \
52
(var) = (next))
53
54
+/*
49
+/*
55
+ * RCU singly-linked list
50
+ * The client thread uses SIGTERM to interrupt the server. A signal
51
+ * handler ensures that "qemu-nbd -v -c" exits with a nice status code.
56
+ */
52
+ */
57
+
53
+void qemu_system_killed(int signum, pid_t pid)
58
+/* Singly-linked list access methods */
54
{
59
+#define QSLIST_EMPTY_RCU(head) (atomic_read(&(head)->slh_first) == NULL)
55
qatomic_cmpxchg(&state, RUNNING, TERMINATE);
60
+#define QSLIST_FIRST_RCU(head) atomic_rcu_read(&(head)->slh_first)
56
qemu_notify_event();
61
+#define QSLIST_NEXT_RCU(elm, field) atomic_rcu_read(&(elm)->field.sle_next)
57
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
62
+
58
BlockExportOptions *export_opts;
63
+/* Singly-linked list functions */
59
64
+#define QSLIST_INSERT_HEAD_RCU(head, elm, field) do { \
60
#ifdef CONFIG_POSIX
65
+ (elm)->field.sle_next = (head)->slh_first; \
61
- /*
66
+ atomic_rcu_set(&(head)->slh_first, (elm)); \
62
- * Exit gracefully on various signals, which includes SIGTERM used
67
+} while (/*CONSTCOND*/0)
63
- * by 'qemu-nbd -v -c'.
68
+
64
- */
69
+#define QSLIST_INSERT_AFTER_RCU(head, listelm, elm, field) do { \
65
- struct sigaction sa_sigterm;
70
+ (elm)->field.sle_next = (listelm)->field.sle_next; \
66
- memset(&sa_sigterm, 0, sizeof(sa_sigterm));
71
+ atomic_rcu_set(&(listelm)->field.sle_next, (elm)); \
67
- sa_sigterm.sa_handler = termsig_handler;
72
+} while (/*CONSTCOND*/0)
68
- sigaction(SIGTERM, &sa_sigterm, NULL);
73
+
69
- sigaction(SIGINT, &sa_sigterm, NULL);
74
+#define QSLIST_REMOVE_HEAD_RCU(head, field) do { \
70
- sigaction(SIGHUP, &sa_sigterm, NULL);
75
+ atomic_set(&(head)->slh_first, (head)->slh_first->field.sle_next); \
71
-
76
+} while (/*CONSTCOND*/0)
72
- signal(SIGPIPE, SIG_IGN);
77
+
73
+ os_setup_early_signal_handling();
78
+#define QSLIST_REMOVE_RCU(head, elm, type, field) do { \
74
+ os_setup_signal_handling();
79
+ if ((head)->slh_first == (elm)) { \
80
+ QSLIST_REMOVE_HEAD_RCU((head), field); \
81
+ } else { \
82
+ struct type *curr = (head)->slh_first; \
83
+ while (curr->field.sle_next != (elm)) { \
84
+ curr = curr->field.sle_next; \
85
+ } \
86
+ atomic_set(&curr->field.sle_next, \
87
+ curr->field.sle_next->field.sle_next); \
88
+ } \
89
+} while (/*CONSTCOND*/0)
90
+
91
+#define QSLIST_FOREACH_RCU(var, head, field) \
92
+ for ((var) = atomic_rcu_read(&(head)->slh_first); \
93
+ (var); \
94
+ (var) = atomic_rcu_read(&(var)->field.sle_next))
95
+
96
+#define QSLIST_FOREACH_SAFE_RCU(var, head, field, next) \
97
+ for ((var) = atomic_rcu_read(&(head)->slh_first); \
98
+ (var) && ((next) = atomic_rcu_read(&(var)->field.sle_next), 1); \
99
+ (var) = (next))
100
+
101
#ifdef __cplusplus
102
}
103
#endif
75
#endif
104
diff --git a/tests/Makefile.include b/tests/Makefile.include
76
105
index XXXXXXX..XXXXXXX 100644
77
socket_init();
106
--- a/tests/Makefile.include
78
diff --git a/stubs/blk-exp-close-all.c b/stubs/blk-exp-close-all.c
107
+++ b/tests/Makefile.include
108
@@ -XXX,XX +XXX,XX @@ check-unit-y += tests/rcutorture$(EXESUF)
109
check-unit-y += tests/test-rcu-list$(EXESUF)
110
check-unit-y += tests/test-rcu-simpleq$(EXESUF)
111
check-unit-y += tests/test-rcu-tailq$(EXESUF)
112
+check-unit-y += tests/test-rcu-slist$(EXESUF)
113
check-unit-y += tests/test-qdist$(EXESUF)
114
check-unit-y += tests/test-qht$(EXESUF)
115
check-unit-y += tests/test-qht-par$(EXESUF)
116
@@ -XXX,XX +XXX,XX @@ tests/rcutorture$(EXESUF): tests/rcutorture.o $(test-util-obj-y)
117
tests/test-rcu-list$(EXESUF): tests/test-rcu-list.o $(test-util-obj-y)
118
tests/test-rcu-simpleq$(EXESUF): tests/test-rcu-simpleq.o $(test-util-obj-y)
119
tests/test-rcu-tailq$(EXESUF): tests/test-rcu-tailq.o $(test-util-obj-y)
120
+tests/test-rcu-slist$(EXESUF): tests/test-rcu-slist.o $(test-util-obj-y)
121
tests/test-qdist$(EXESUF): tests/test-qdist.o $(test-util-obj-y)
122
tests/test-qht$(EXESUF): tests/test-qht.o $(test-util-obj-y)
123
tests/test-qht-par$(EXESUF): tests/test-qht-par.o tests/qht-bench$(EXESUF) $(test-util-obj-y)
124
diff --git a/tests/test-rcu-list.c b/tests/test-rcu-list.c
125
index XXXXXXX..XXXXXXX 100644
126
--- a/tests/test-rcu-list.c
127
+++ b/tests/test-rcu-list.c
128
@@ -XXX,XX +XXX,XX @@ struct list_element {
129
QSIMPLEQ_ENTRY(list_element) entry;
130
#elif TEST_LIST_TYPE == 3
131
QTAILQ_ENTRY(list_element) entry;
132
+#elif TEST_LIST_TYPE == 4
133
+ QSLIST_ENTRY(list_element) entry;
134
#else
135
#error Invalid TEST_LIST_TYPE
136
#endif
137
@@ -XXX,XX +XXX,XX @@ static QTAILQ_HEAD(, list_element) Q_list_head;
138
#define TEST_LIST_INSERT_HEAD_RCU QTAILQ_INSERT_HEAD_RCU
139
#define TEST_LIST_FOREACH_RCU QTAILQ_FOREACH_RCU
140
#define TEST_LIST_FOREACH_SAFE_RCU QTAILQ_FOREACH_SAFE_RCU
141
+
142
+#elif TEST_LIST_TYPE == 4
143
+static QSLIST_HEAD(, list_element) Q_list_head;
144
+
145
+#define TEST_NAME "qslist"
146
+#define TEST_LIST_REMOVE_RCU(el, f) \
147
+     QSLIST_REMOVE_RCU(&Q_list_head, el, list_element, f)
148
+
149
+#define TEST_LIST_INSERT_AFTER_RCU(list_el, el, f) \
150
+ QSLIST_INSERT_AFTER_RCU(&Q_list_head, list_el, el, f)
151
+
152
+#define TEST_LIST_INSERT_HEAD_RCU QSLIST_INSERT_HEAD_RCU
153
+#define TEST_LIST_FOREACH_RCU QSLIST_FOREACH_RCU
154
+#define TEST_LIST_FOREACH_SAFE_RCU QSLIST_FOREACH_SAFE_RCU
155
#else
156
#error Invalid TEST_LIST_TYPE
157
#endif
158
diff --git a/tests/test-rcu-slist.c b/tests/test-rcu-slist.c
159
new file mode 100644
79
new file mode 100644
160
index XXXXXXX..XXXXXXX
80
index XXXXXXX..XXXXXXX
161
--- /dev/null
81
--- /dev/null
162
+++ b/tests/test-rcu-slist.c
82
+++ b/stubs/blk-exp-close-all.c
163
@@ -XXX,XX +XXX,XX @@
83
@@ -XXX,XX +XXX,XX @@
164
+#define TEST_LIST_TYPE 4
84
+#include "qemu/osdep.h"
165
+#include "test-rcu-list.c"
85
+#include "block/export.h"
86
+
87
+/* Only used in programs that support block exports (libblockdev.fa) */
88
+void blk_exp_close_all(void)
89
+{
90
+}
91
diff --git a/block/export/meson.build b/block/export/meson.build
92
index XXXXXXX..XXXXXXX 100644
93
--- a/block/export/meson.build
94
+++ b/block/export/meson.build
95
@@ -XXX,XX +XXX,XX @@
96
-block_ss.add(files('export.c'))
97
-block_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: files('vhost-user-blk-server.c'))
98
+blockdev_ss.add(files('export.c'))
99
+blockdev_ss.add(when: ['CONFIG_LINUX', 'CONFIG_VHOST_USER'], if_true: files('vhost-user-blk-server.c'))
100
diff --git a/meson.build b/meson.build
101
index XXXXXXX..XXXXXXX 100644
102
--- a/meson.build
103
+++ b/meson.build
104
@@ -XXX,XX +XXX,XX @@ subdir('dump')
105
106
block_ss.add(files(
107
'block.c',
108
- 'blockdev-nbd.c',
109
'blockjob.c',
110
'job.c',
111
'qemu-io-cmds.c',
112
@@ -XXX,XX +XXX,XX @@ subdir('block')
113
114
blockdev_ss.add(files(
115
'blockdev.c',
116
+ 'blockdev-nbd.c',
117
'iothread.c',
118
'job-qmp.c',
119
))
120
@@ -XXX,XX +XXX,XX @@ if have_tools
121
qemu_io = executable('qemu-io', files('qemu-io.c'),
122
dependencies: [block, qemuutil], install: true)
123
qemu_nbd = executable('qemu-nbd', files('qemu-nbd.c'),
124
- dependencies: [block, qemuutil], install: true)
125
+ dependencies: [blockdev, qemuutil], install: true)
126
127
subdir('storage-daemon')
128
subdir('contrib/rdmacm-mux')
129
diff --git a/nbd/meson.build b/nbd/meson.build
130
index XXXXXXX..XXXXXXX 100644
131
--- a/nbd/meson.build
132
+++ b/nbd/meson.build
133
@@ -XXX,XX +XXX,XX @@
134
block_ss.add(files(
135
'client.c',
136
'common.c',
137
+))
138
+blockdev_ss.add(files(
139
'server.c',
140
))
141
diff --git a/stubs/meson.build b/stubs/meson.build
142
index XXXXXXX..XXXXXXX 100644
143
--- a/stubs/meson.build
144
+++ b/stubs/meson.build
145
@@ -XXX,XX +XXX,XX @@
146
stub_ss.add(files('arch_type.c'))
147
stub_ss.add(files('bdrv-next-monitor-owned.c'))
148
stub_ss.add(files('blk-commit-all.c'))
149
+stub_ss.add(files('blk-exp-close-all.c'))
150
stub_ss.add(files('blockdev-close-all-bdrv-states.c'))
151
stub_ss.add(files('change-state-handler.c'))
152
stub_ss.add(files('cmos.c'))
166
--
153
--
167
2.24.1
154
2.26.2
168
155
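
To make the new rcu_queue.h macros above concrete, here is a minimal usage sketch. The Node type, the list and the locking comments are invented for illustration; they are not part of the series:

    /* Assumes "qemu/queue.h", "qemu/rcu.h" and "qemu/rcu_queue.h". */
    typedef struct Node {
        int value;
        QSLIST_ENTRY(Node) next;
    } Node;

    static QSLIST_HEAD(, Node) node_list =
        QSLIST_HEAD_INITIALIZER(node_list);

    /* Writer side: updates serialized against other writers by a lock. */
    static void node_add(Node *n)
    {
        QSLIST_INSERT_HEAD_RCU(&node_list, n, next);
    }

    static void node_del(Node *n)
    {
        QSLIST_REMOVE_RCU(&node_list, n, Node, next);
        /* Free n only after a grace period, e.g. via call_rcu(). */
    }

    /* Reader side: called between rcu_read_lock() and rcu_read_unlock(). */
    static int node_sum(void)
    {
        Node *n;
        int sum = 0;

        QSLIST_FOREACH_RCU(n, &node_list, next) {
            sum += n->value;
        }
        return sum;
    }
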
1
File descriptor monitoring is O(1) with epoll(7), but
aio_dispatch_handlers() still scans all AioHandlers instead of
dispatching just those that are ready. This makes aio_poll() O(n) with
respect to the total number of registered handlers.

Add a local ready_list to aio_poll() so that each nested aio_poll()
builds a list of handlers ready to be dispatched. Since file descriptor
polling is level-triggered, nested aio_poll() calls also see fds that
were ready in the parent but not yet dispatched. This guarantees that
nested aio_poll() invocations will dispatch all fds, even those that
became ready before the nested invocation.

Since only handlers ready to be dispatched are placed onto the
ready_list, the new aio_dispatch_ready_handlers() function provides O(1)
dispatch.

Note that AioContext polling is still O(n) and currently cannot be fully
disabled. This still needs to be fixed before aio_poll() is fully O(1).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20200214171712.541358-6-stefanha@redhat.com
[Fix compilation error on macOS where there is no epoll(7). The
aio_epoll() prototype was out of date and aio_add_ready_list() needed to
be moved outside the ifdef.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---

Make it possible to specify the iothread where the export will run. By
default the block node can be moved to other AioContexts later and the
export will follow. The fixed-iothread option forces strict behavior
that prevents changing AioContext while the export is active. See the
QAPI docs for details.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20200929125516.186715-5-stefanha@redhat.com
[Fix stray '#' character in block-export.json and add missing "(since:
5.2)" as suggested by Eric Blake.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
29
util/aio-posix.c | 110 +++++++++++++++++++++++++++++++++--------------
14
qapi/block-export.json | 11 ++++++++++
30
1 file changed, 78 insertions(+), 32 deletions(-)
15
block/export/export.c | 31 +++++++++++++++++++++++++++-
16
block/export/vhost-user-blk-server.c | 5 ++++-
17
nbd/server.c | 2 --
18
4 files changed, 45 insertions(+), 4 deletions(-)
31
19
32
diff --git a/util/aio-posix.c b/util/aio-posix.c
20
diff --git a/qapi/block-export.json b/qapi/block-export.json
33
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
34
--- a/util/aio-posix.c
22
--- a/qapi/block-export.json
35
+++ b/util/aio-posix.c
23
+++ b/qapi/block-export.json
36
@@ -XXX,XX +XXX,XX @@ struct AioHandler
24
@@ -XXX,XX +XXX,XX @@
37
void *opaque;
25
# export before completion is signalled. (since: 5.2;
38
bool is_external;
26
# default: false)
39
QLIST_ENTRY(AioHandler) node;
27
#
40
+ QLIST_ENTRY(AioHandler) node_ready; /* only used during aio_poll() */
28
+# @iothread: The name of the iothread object where the export will run. The
41
QLIST_ENTRY(AioHandler) node_deleted;
29
+# default is to use the thread currently associated with the
42
};
30
+# block node. (since: 5.2)
43
31
+#
44
+/* Add a handler to a ready list */
32
+# @fixed-iothread: True prevents the block node from being moved to another
45
+static void add_ready_handler(AioHandlerList *ready_list,
33
+# thread while the export is active. If true and @iothread is
46
+ AioHandler *node,
34
+# given, export creation fails if the block node cannot be
47
+ int revents)
35
+# moved to the iothread. The default is false. (since: 5.2)
48
+{
36
+#
49
+ QLIST_SAFE_REMOVE(node, node_ready); /* remove from nested parent's list */
37
# Since: 4.2
50
+ node->pfd.revents = revents;
38
##
51
+ QLIST_INSERT_HEAD(ready_list, node, node_ready);
39
{ 'union': 'BlockExportOptions',
52
+}
40
'base': { 'type': 'BlockExportType',
41
'id': 'str',
42
+     '*fixed-iothread': 'bool',
43
+     '*iothread': 'str',
44
'node-name': 'str',
45
'*writable': 'bool',
46
'*writethrough': 'bool' },
47
diff --git a/block/export/export.c b/block/export/export.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/block/export/export.c
50
+++ b/block/export/export.c
51
@@ -XXX,XX +XXX,XX @@
52
53
#include "block/block.h"
54
#include "sysemu/block-backend.h"
55
+#include "sysemu/iothread.h"
56
#include "block/export.h"
57
#include "block/nbd.h"
58
#include "qapi/error.h"
59
@@ -XXX,XX +XXX,XX @@ static const BlockExportDriver *blk_exp_find_driver(BlockExportType type)
60
61
BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
62
{
63
+ bool fixed_iothread = export->has_fixed_iothread && export->fixed_iothread;
64
const BlockExportDriver *drv;
65
BlockExport *exp = NULL;
66
BlockDriverState *bs;
67
- BlockBackend *blk;
68
+ BlockBackend *blk = NULL;
69
AioContext *ctx;
70
uint64_t perm;
71
int ret;
72
@@ -XXX,XX +XXX,XX @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
73
ctx = bdrv_get_aio_context(bs);
74
aio_context_acquire(ctx);
75
76
+ if (export->has_iothread) {
77
+ IOThread *iothread;
78
+ AioContext *new_ctx;
53
+
79
+
54
#ifdef CONFIG_EPOLL_CREATE1
80
+ iothread = iothread_by_id(export->iothread);
55
81
+ if (!iothread) {
56
/* The fd number threshold to switch to epoll */
82
+ error_setg(errp, "iothread \"%s\" not found", export->iothread);
57
@@ -XXX,XX +XXX,XX @@ static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
83
+ goto fail;
58
}
84
+ }
59
}
60
61
-static int aio_epoll(AioContext *ctx, int64_t timeout)
62
+static int aio_epoll(AioContext *ctx, AioHandlerList *ready_list,
63
+ int64_t timeout)
64
{
65
GPollFD pfd = {
66
.fd = ctx->epollfd,
67
@@ -XXX,XX +XXX,XX @@ static int aio_epoll(AioContext *ctx, int64_t timeout)
68
}
69
for (i = 0; i < ret; i++) {
70
int ev = events[i].events;
71
+ int revents = (ev & EPOLLIN ? G_IO_IN : 0) |
72
+ (ev & EPOLLOUT ? G_IO_OUT : 0) |
73
+ (ev & EPOLLHUP ? G_IO_HUP : 0) |
74
+ (ev & EPOLLERR ? G_IO_ERR : 0);
75
+
85
+
76
node = events[i].data.ptr;
86
+ new_ctx = iothread_get_aio_context(iothread);
77
- node->pfd.revents = (ev & EPOLLIN ? G_IO_IN : 0) |
87
+
78
- (ev & EPOLLOUT ? G_IO_OUT : 0) |
88
+ ret = bdrv_try_set_aio_context(bs, new_ctx, errp);
79
- (ev & EPOLLHUP ? G_IO_HUP : 0) |
89
+ if (ret == 0) {
80
- (ev & EPOLLERR ? G_IO_ERR : 0);
90
+ aio_context_release(ctx);
81
+ add_ready_handler(ready_list, node, revents);
91
+ aio_context_acquire(new_ctx);
82
}
92
+ ctx = new_ctx;
83
}
93
+ } else if (fixed_iothread) {
84
out:
94
+ goto fail;
85
@@ -XXX,XX +XXX,XX @@ static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
95
+ }
86
{
87
}
88
89
-static int aio_epoll(AioContext *ctx, GPollFD *pfds,
90
- unsigned npfd, int64_t timeout)
91
+static int aio_epoll(AioContext *ctx, AioHandlerList *ready_list,
92
+ int64_t timeout)
93
{
94
assert(false);
95
}
96
@@ -XXX,XX +XXX,XX @@ static void aio_free_deleted_handlers(AioContext *ctx)
97
qemu_lockcnt_inc_and_unlock(&ctx->list_lock);
98
}
99
100
-static bool aio_dispatch_handlers(AioContext *ctx)
101
+static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
102
{
103
- AioHandler *node, *tmp;
104
bool progress = false;
105
+ int revents;
106
107
- QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
108
- int revents;
109
+ revents = node->pfd.revents & node->pfd.events;
110
+ node->pfd.revents = 0;
111
112
- revents = node->pfd.revents & node->pfd.events;
113
- node->pfd.revents = 0;
114
+ if (!QLIST_IS_INSERTED(node, node_deleted) &&
115
+ (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
116
+ aio_node_check(ctx, node->is_external) &&
117
+ node->io_read) {
118
+ node->io_read(node->opaque);
119
120
- if (!QLIST_IS_INSERTED(node, node_deleted) &&
121
- (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
122
- aio_node_check(ctx, node->is_external) &&
123
- node->io_read) {
124
- node->io_read(node->opaque);
125
-
126
- /* aio_notify() does not count as progress */
127
- if (node->opaque != &ctx->notifier) {
128
- progress = true;
129
- }
130
- }
131
- if (!QLIST_IS_INSERTED(node, node_deleted) &&
132
- (revents & (G_IO_OUT | G_IO_ERR)) &&
133
- aio_node_check(ctx, node->is_external) &&
134
- node->io_write) {
135
- node->io_write(node->opaque);
136
+ /* aio_notify() does not count as progress */
137
+ if (node->opaque != &ctx->notifier) {
138
progress = true;
139
}
140
}
141
+ if (!QLIST_IS_INSERTED(node, node_deleted) &&
142
+ (revents & (G_IO_OUT | G_IO_ERR)) &&
143
+ aio_node_check(ctx, node->is_external) &&
144
+ node->io_write) {
145
+ node->io_write(node->opaque);
146
+ progress = true;
147
+ }
96
+ }
148
+
97
+
149
+ return progress;
98
/*
150
+}
99
* Block exports are used for non-shared storage migration. Make sure
100
* that BDRV_O_INACTIVE is cleared and the image is ready for write
101
@@ -XXX,XX +XXX,XX @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
102
}
103
104
blk = blk_new(ctx, perm, BLK_PERM_ALL);
151
+
105
+
152
+/*
106
+ if (!fixed_iothread) {
153
+ * If we have a list of ready handlers then this is more efficient than
107
+ blk_set_allow_aio_context_change(blk, true);
154
+ * scanning all handlers with aio_dispatch_handlers().
155
+ */
156
+static bool aio_dispatch_ready_handlers(AioContext *ctx,
157
+ AioHandlerList *ready_list)
158
+{
159
+ bool progress = false;
160
+ AioHandler *node;
161
+
162
+ while ((node = QLIST_FIRST(ready_list))) {
163
+ QLIST_SAFE_REMOVE(node, node_ready);
164
+ progress = aio_dispatch_handler(ctx, node) || progress;
165
+ }
108
+ }
166
+
109
+
167
+ return progress;
110
ret = blk_insert_bs(blk, bs, errp);
168
+}
111
if (ret < 0) {
112
goto fail;
113
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
114
index XXXXXXX..XXXXXXX 100644
115
--- a/block/export/vhost-user-blk-server.c
116
+++ b/block/export/vhost-user-blk-server.c
117
@@ -XXX,XX +XXX,XX @@ static const VuDevIface vu_blk_iface = {
118
static void blk_aio_attached(AioContext *ctx, void *opaque)
119
{
120
VuBlkExport *vexp = opaque;
169
+
121
+
170
+/* Slower than aio_dispatch_ready_handlers() but only used via glib */
122
+ vexp->export.ctx = ctx;
171
+static bool aio_dispatch_handlers(AioContext *ctx)
123
vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
172
+{
124
}
173
+ AioHandler *node, *tmp;
125
174
+ bool progress = false;
126
static void blk_aio_detach(void *opaque)
127
{
128
VuBlkExport *vexp = opaque;
175
+
129
+
176
+ QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
130
vhost_user_server_detach_aio_context(&vexp->vu_server);
177
+ progress = aio_dispatch_handler(ctx, node) || progress;
131
+ vexp->export.ctx = NULL;
178
+ }
179
180
return progress;
181
}
132
}
182
@@ -XXX,XX +XXX,XX @@ static bool try_poll_mode(AioContext *ctx, int64_t *timeout)
133
183
134
static void
184
bool aio_poll(AioContext *ctx, bool blocking)
135
@@ -XXX,XX +XXX,XX @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
185
{
136
vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
186
+ AioHandlerList ready_list = QLIST_HEAD_INITIALIZER(ready_list);
137
logical_block_size);
187
AioHandler *node;
138
188
int i;
139
- blk_set_allow_aio_context_change(exp->blk, true);
189
int ret = 0;
140
blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
190
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
141
vexp);
191
/* wait until next event */
142
192
if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) {
143
diff --git a/nbd/server.c b/nbd/server.c
193
npfd = 0; /* pollfds[] is not being used */
144
index XXXXXXX..XXXXXXX 100644
194
- ret = aio_epoll(ctx, timeout);
145
--- a/nbd/server.c
195
+ ret = aio_epoll(ctx, &ready_list, timeout);
146
+++ b/nbd/server.c
196
} else {
147
@@ -XXX,XX +XXX,XX @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
197
ret = qemu_poll_ns(pollfds, npfd, timeout);
148
return ret;
198
}
199
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
200
/* if we have any readable fds, dispatch event */
201
if (ret > 0) {
202
for (i = 0; i < npfd; i++) {
203
- nodes[i]->pfd.revents = pollfds[i].revents;
204
+ int revents = pollfds[i].revents;
205
+
206
+ if (revents) {
207
+ add_ready_handler(&ready_list, nodes[i], revents);
208
+ }
209
}
210
}
149
}
211
150
212
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
151
- blk_set_allow_aio_context_change(blk, true);
213
progress |= aio_bh_poll(ctx);
152
-
214
153
QTAILQ_INIT(&exp->clients);
215
if (ret > 0) {
154
exp->name = g_strdup(arg->name);
216
- progress |= aio_dispatch_handlers(ctx);
155
exp->description = g_strdup(arg->description);
217
+ progress |= aio_dispatch_ready_handlers(ctx, &ready_list);
218
}
219
220
aio_free_deleted_handlers(ctx);
221
--
156
--
222
2.24.1
157
2.26.2
223
158
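
The ready-list scheme described in the aio-posix commit message above can be sketched in isolation as follows. This is illustrative only, not the QEMU implementation; the Handler type and its callback are invented, and only "qemu/queue.h" style list macros are assumed:

    typedef struct Handler Handler;
    struct Handler {
        int fd;
        int revents;                      /* filled in when the fd is ready */
        void (*cb)(Handler *h);
        QLIST_ENTRY(Handler) node;        /* on the list of all handlers */
        QLIST_ENTRY(Handler) node_ready;  /* only while on a ready list */
    };

    typedef QLIST_HEAD(, Handler) HandlerList;

    /* Called for each fd that poll()/epoll_wait() reported as ready. */
    static void add_ready(HandlerList *ready_list, Handler *h, int revents)
    {
        QLIST_SAFE_REMOVE(h, node_ready); /* steal from a nested parent's list */
        h->revents = revents;
        QLIST_INSERT_HEAD(ready_list, h, node_ready);
    }

    /* O(number of ready fds), not O(number of registered handlers). */
    static bool dispatch_ready(HandlerList *ready_list)
    {
        bool progress = false;
        Handler *h;

        while ((h = QLIST_FIRST(ready_list))) {
            QLIST_SAFE_REMOVE(h, node_ready);
            h->cb(h);
            progress = true;
        }
        return progress;
    }
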
1
The first rcu_read_lock/unlock() is expensive. Nested calls are cheap.
1
Allow the number of queues to be configured using --export
2
vhost-user-blk,num-queues=N. This setting should match the QEMU --device
3
vhost-user-blk-pci,num-queues=N setting but QEMU vhost-user-blk.c lowers
4
its own value if the vhost-user-blk backend offers fewer queues than
5
QEMU.
2
6
3
This optimization increases IOPS from 73k to 162k with a Linux guest
7
The vhost-user-blk-server.c code is already capable of multi-queue. All
4
that has 2 virtio-blk,num-queues=1 and 99 virtio-blk,num-queues=32
8
virtqueue processing runs in the same AioContext. No new locking is
5
devices.
9
needed.
10
11
Add the num-queues=N option and set the VIRTIO_BLK_F_MQ feature bit.
12
Note that the feature bit only announces the presence of the num_queues
13
configuration space field. It does not promise that there is more than 1
14
virtqueue, so we can set it unconditionally.
15
16
I tested multi-queue by running a random read fio test with numjobs=4 on
17
an -smp 4 guest. After the benchmark finished the guest /proc/interrupts
18
file showed activity on all 4 virtio-blk MSI-X. The /sys/block/vda/mq/
19
directory shows that Linux blk-mq has 4 queues configured.
20
21
An automated test is included in the next commit.
6
22
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
23
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
24
Acked-by: Markus Armbruster <armbru@redhat.com>
9
Message-id: 20200218182708.914552-1-stefanha@redhat.com
25
Message-id: 20201001144604.559733-2-stefanha@redhat.com
26
[Fixed accidental tab characters as suggested by Markus Armbruster
27
--Stefan]
10
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
28
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
11
---
29
---
12
util/aio-posix.c | 11 +++++++++++
30
qapi/block-export.json | 10 +++++++---
13
1 file changed, 11 insertions(+)
31
block/export/vhost-user-blk-server.c | 24 ++++++++++++++++++------
32
2 files changed, 25 insertions(+), 9 deletions(-)
14
33
15
diff --git a/util/aio-posix.c b/util/aio-posix.c
34
diff --git a/qapi/block-export.json b/qapi/block-export.json
16
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
17
--- a/util/aio-posix.c
36
--- a/qapi/block-export.json
18
+++ b/util/aio-posix.c
37
+++ b/qapi/block-export.json
19
@@ -XXX,XX +XXX,XX @@
38
@@ -XXX,XX +XXX,XX @@
20
39
# SocketAddress types are supported. Passed fds must be UNIX domain
21
#include "qemu/osdep.h"
40
# sockets.
22
#include "block/block.h"
41
# @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
23
+#include "qemu/rcu.h"
42
+# @num-queues: Number of request virtqueues. Must be greater than 0. Defaults
24
#include "qemu/rcu_queue.h"
43
+# to 1.
25
#include "qemu/sockets.h"
44
#
26
#include "qemu/cutils.h"
45
# Since: 5.2
27
@@ -XXX,XX +XXX,XX @@ static bool run_poll_handlers_once(AioContext *ctx, int64_t *timeout)
46
##
28
bool progress = false;
47
{ 'struct': 'BlockExportOptionsVhostUserBlk',
29
AioHandler *node;
48
- 'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
30
49
+ 'data': { 'addr': 'SocketAddress',
31
+ /*
50
+     '*logical-block-size': 'size',
32
+ * Optimization: ->io_poll() handlers often contain RCU read critical
51
+ '*num-queues': 'uint16'} }
33
+ * sections and we therefore see many rcu_read_lock() -> rcu_read_unlock()
52
34
+ * -> rcu_read_lock() -> ... sequences with expensive memory
53
##
35
+ * synchronization primitives. Make the entire polling loop an RCU
54
# @NbdServerAddOptions:
36
+ * critical section because nested rcu_read_lock()/rcu_read_unlock() calls
55
@@ -XXX,XX +XXX,XX @@
37
+ * are cheap.
56
{ 'union': 'BlockExportOptions',
38
+ */
57
'base': { 'type': 'BlockExportType',
39
+ RCU_READ_LOCK_GUARD();
58
'id': 'str',
59
-     '*fixed-iothread': 'bool',
60
-     '*iothread': 'str',
61
+ '*fixed-iothread': 'bool',
62
+ '*iothread': 'str',
63
'node-name': 'str',
64
'*writable': 'bool',
65
'*writethrough': 'bool' },
66
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/block/export/vhost-user-blk-server.c
69
+++ b/block/export/vhost-user-blk-server.c
70
@@ -XXX,XX +XXX,XX @@
71
#include "util/block-helpers.h"
72
73
enum {
74
- VHOST_USER_BLK_MAX_QUEUES = 1,
75
+ VHOST_USER_BLK_NUM_QUEUES_DEFAULT = 1,
76
};
77
struct virtio_blk_inhdr {
78
unsigned char status;
79
@@ -XXX,XX +XXX,XX @@ static uint64_t vu_blk_get_features(VuDev *dev)
80
1ull << VIRTIO_BLK_F_DISCARD |
81
1ull << VIRTIO_BLK_F_WRITE_ZEROES |
82
1ull << VIRTIO_BLK_F_CONFIG_WCE |
83
+ 1ull << VIRTIO_BLK_F_MQ |
84
1ull << VIRTIO_F_VERSION_1 |
85
1ull << VIRTIO_RING_F_INDIRECT_DESC |
86
1ull << VIRTIO_RING_F_EVENT_IDX |
87
@@ -XXX,XX +XXX,XX @@ static void blk_aio_detach(void *opaque)
88
89
static void
90
vu_blk_initialize_config(BlockDriverState *bs,
91
- struct virtio_blk_config *config, uint32_t blk_size)
92
+ struct virtio_blk_config *config,
93
+ uint32_t blk_size,
94
+ uint16_t num_queues)
95
{
96
config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
97
config->blk_size = blk_size;
98
@@ -XXX,XX +XXX,XX @@ vu_blk_initialize_config(BlockDriverState *bs,
99
config->seg_max = 128 - 2;
100
config->min_io_size = 1;
101
config->opt_io_size = 1;
102
- config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
103
+ config->num_queues = num_queues;
104
config->max_discard_sectors = 32768;
105
config->max_discard_seg = 1;
106
config->discard_sector_alignment = config->blk_size >> 9;
107
@@ -XXX,XX +XXX,XX @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
108
BlockExportOptionsVhostUserBlk *vu_opts = &opts->u.vhost_user_blk;
109
Error *local_err = NULL;
110
uint64_t logical_block_size;
111
+ uint16_t num_queues = VHOST_USER_BLK_NUM_QUEUES_DEFAULT;
112
113
vexp->writable = opts->writable;
114
vexp->blkcfg.wce = 0;
115
@@ -XXX,XX +XXX,XX @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
116
}
117
vexp->blk_size = logical_block_size;
118
blk_set_guest_block_size(exp->blk, logical_block_size);
40
+
119
+
41
QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
120
+ if (vu_opts->has_num_queues) {
42
if (!node->deleted && node->io_poll &&
121
+ num_queues = vu_opts->num_queues;
43
aio_node_check(ctx, node->is_external) &&
122
+ }
123
+ if (num_queues == 0) {
124
+ error_setg(errp, "num-queues must be greater than 0");
125
+ return -EINVAL;
126
+ }
127
+
128
vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
129
- logical_block_size);
130
+ logical_block_size, num_queues);
131
132
blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
133
vexp);
134
135
if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
136
- VHOST_USER_BLK_MAX_QUEUES, &vu_blk_iface,
137
- errp)) {
138
+ num_queues, &vu_blk_iface, errp)) {
139
blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
140
blk_aio_detach, vexp);
141
return -EADDRNOTAVAIL;
44
--
142
--
45
2.24.1
143
2.26.2
46
144
diff view generated by jsdifflib
1
From: Alexander Bulekov <alxndr@bu.edu>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
The virtual-device fuzzer must initialize QOM, prior to running
3
bdrv_co_block_status_above has several design problems with handling
4
vl:qemu_init, so that it can use the qos_graph to identify the arguments
4
short backing files:
5
required to initialize a guest for libqos-assisted fuzzing. This change
6
prevents errors when vl:qemu_init tries to (re)initialize the previously
7
initialized QOM module.
8
5
9
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
6
1. With want_zeros=true, it may return ret with BDRV_BLOCK_ZERO but
10
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
7
without BDRV_BLOCK_ALLOCATED flag, when actually short backing file
11
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
8
which produces these after-EOF zeros is inside requested backing
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
sequence.
13
Message-id: 20200220041118.23264-4-alxndr@bu.edu
10
11
2. With want_zero=false, it may return pnum=0 prior to actual EOF,
12
because of EOF of short backing file.
13
14
Fix these things, making logic about short backing files clearer.
15
16
With fixed bdrv_block_status_above we also have to improve is_zero in
17
qcow2 code, otherwise iotest 154 will fail, because with this patch we
18
stop to merge zeros of different types (produced by fully unallocated
19
in the whole backing chain regions vs produced by short backing files).
20
21
Note also, that this patch leaves for another day the general problem
22
around block-status: misuse of BDRV_BLOCK_ALLOCATED as is-fs-allocated
23
vs go-to-backing.
24
25
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
26
Reviewed-by: Alberto Garcia <berto@igalia.com>
27
Reviewed-by: Eric Blake <eblake@redhat.com>
28
Message-id: 20200924194003.22080-2-vsementsov@virtuozzo.com
29
[Fix s/comes/come/ as suggested by Eric Blake
30
--Stefan]
14
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
31
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
15
---
32
---
16
util/module.c | 7 +++++++
33
block/io.c | 68 ++++++++++++++++++++++++++++++++++++++++-----------
17
1 file changed, 7 insertions(+)
34
block/qcow2.c | 16 ++++++++++--
35
2 files changed, 68 insertions(+), 16 deletions(-)
18
36
19
diff --git a/util/module.c b/util/module.c
37
diff --git a/block/io.c b/block/io.c
20
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
21
--- a/util/module.c
39
--- a/block/io.c
22
+++ b/util/module.c
40
+++ b/block/io.c
23
@@ -XXX,XX +XXX,XX @@ typedef struct ModuleEntry
41
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
24
typedef QTAILQ_HEAD(, ModuleEntry) ModuleTypeList;
42
int64_t *map,
25
43
BlockDriverState **file)
26
static ModuleTypeList init_type_list[MODULE_INIT_MAX];
44
{
27
+static bool modules_init_done[MODULE_INIT_MAX];
45
+ int ret;
28
46
BlockDriverState *p;
29
static ModuleTypeList dso_init_list;
47
- int ret = 0;
30
48
- bool first = true;
31
@@ -XXX,XX +XXX,XX @@ void module_call_init(module_init_type type)
49
+ int64_t eof = 0;
32
ModuleTypeList *l;
50
33
ModuleEntry *e;
51
assert(bs != base);
34
52
- for (p = bs; p != base; p = bdrv_filter_or_cow_bs(p)) {
35
+ if (modules_init_done[type]) {
53
+
36
+ return;
54
+ ret = bdrv_co_block_status(bs, want_zero, offset, bytes, pnum, map, file);
55
+ if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED) {
56
+ return ret;
37
+ }
57
+ }
38
+
58
+
39
l = find_type(type);
59
+ if (ret & BDRV_BLOCK_EOF) {
40
60
+ eof = offset + *pnum;
41
QTAILQ_FOREACH(e, l, node) {
61
+ }
42
e->init();
62
+
63
+ assert(*pnum <= bytes);
64
+ bytes = *pnum;
65
+
66
+ for (p = bdrv_filter_or_cow_bs(bs); p != base;
67
+ p = bdrv_filter_or_cow_bs(p))
68
+ {
69
ret = bdrv_co_block_status(p, want_zero, offset, bytes, pnum, map,
70
file);
71
if (ret < 0) {
72
- break;
73
+ return ret;
74
}
75
- if (ret & BDRV_BLOCK_ZERO && ret & BDRV_BLOCK_EOF && !first) {
76
+ if (*pnum == 0) {
77
/*
78
- * Reading beyond the end of the file continues to read
79
- * zeroes, but we can only widen the result to the
80
- * unallocated length we learned from an earlier
81
- * iteration.
82
+ * The top layer deferred to this layer, and because this layer is
83
+ * short, any zeroes that we synthesize beyond EOF behave as if they
84
+ * were allocated at this layer.
85
+ *
86
+ * We don't include BDRV_BLOCK_EOF into ret, as upper layer may be
87
+ * larger. We'll add BDRV_BLOCK_EOF if needed at function end, see
88
+ * below.
89
*/
90
+ assert(ret & BDRV_BLOCK_EOF);
91
*pnum = bytes;
92
+ if (file) {
93
+ *file = p;
94
+ }
95
+ ret = BDRV_BLOCK_ZERO | BDRV_BLOCK_ALLOCATED;
96
+ break;
97
}
98
- if (ret & (BDRV_BLOCK_ZERO | BDRV_BLOCK_DATA)) {
99
+ if (ret & BDRV_BLOCK_ALLOCATED) {
100
+ /*
101
+ * We've found the node and the status, we must break.
102
+ *
103
+ * Drop BDRV_BLOCK_EOF, as it's not for upper layer, which may be
104
+ * larger. We'll add BDRV_BLOCK_EOF if needed at function end, see
105
+ * below.
106
+ */
107
+ ret &= ~BDRV_BLOCK_EOF;
108
break;
109
}
110
- /* [offset, pnum] unallocated on this layer, which could be only
111
- * the first part of [offset, bytes]. */
112
- bytes = MIN(bytes, *pnum);
113
- first = false;
114
+
115
+ /*
116
+ * OK, [offset, offset + *pnum) region is unallocated on this layer,
117
+ * let's continue the diving.
118
+ */
119
+ assert(*pnum <= bytes);
120
+ bytes = *pnum;
121
+ }
122
+
123
+ if (offset + *pnum == eof) {
124
+ ret |= BDRV_BLOCK_EOF;
43
}
125
}
44
+
126
+
45
+ modules_init_done[type] = true;
127
return ret;
46
}
128
}
47
129
48
#ifdef CONFIG_MODULES
130
diff --git a/block/qcow2.c b/block/qcow2.c
131
index XXXXXXX..XXXXXXX 100644
132
--- a/block/qcow2.c
133
+++ b/block/qcow2.c
134
@@ -XXX,XX +XXX,XX @@ static bool is_zero(BlockDriverState *bs, int64_t offset, int64_t bytes)
135
if (!bytes) {
136
return true;
137
}
138
- res = bdrv_block_status_above(bs, NULL, offset, bytes, &nr, NULL, NULL);
139
- return res >= 0 && (res & BDRV_BLOCK_ZERO) && nr == bytes;
140
+
141
+ /*
142
+ * bdrv_block_status_above doesn't merge different types of zeros, for
143
+ * example, zeros which come from the region which is unallocated in
144
+ * the whole backing chain, and zeros which come because of a short
145
+ * backing file. So, we need a loop.
146
+ */
147
+ do {
148
+ res = bdrv_block_status_above(bs, NULL, offset, bytes, &nr, NULL, NULL);
149
+ offset += nr;
150
+ bytes -= nr;
151
+ } while (res >= 0 && (res & BDRV_BLOCK_ZERO) && nr && bytes);
152
+
153
+ return res >= 0 && (res & BDRV_BLOCK_ZERO) && bytes == 0;
154
}
155
156
static coroutine_fn int qcow2_co_pwrite_zeroes(BlockDriverState *bs,
49
--
157
--
50
2.24.1
158
2.26.2
51
159
diff view generated by jsdifflib
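The qcow2 hunk above needs a loop because two adjacent extents can both read as zeroes yet be reported by separate status calls: one unallocated throughout the backing chain, the other synthesized past a short backing file. The following self-contained sketch models only that looping pattern; mock_block_status(), the BLK_ZERO flag and the extent layout are invented stand-ins, not QEMU APIs.

/*
 * Illustrative only: the do/while pattern added to qcow2's is_zero(),
 * modelled with a toy status function.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLK_ZERO 1  /* hypothetical "reads as zero" flag */

/* Two adjacent zero extents of different provenance that a single
 * status query does not merge: [0, 512) and [512, 1024). */
static int mock_block_status(int64_t offset, int64_t bytes, int64_t *pnum)
{
    int64_t end = offset < 512 ? 512 : 1024;

    if (offset >= 1024) {
        *pnum = bytes;
        return 0;               /* beyond our toy extents: not zero */
    }
    *pnum = (end - offset) < bytes ? (end - offset) : bytes;
    return BLK_ZERO;
}

/* Keep querying until the range is exhausted or a non-zero extent
 * appears, mirroring the loop added to is_zero(). */
static bool is_zero(int64_t offset, int64_t bytes)
{
    int64_t pnum;
    int res;

    do {
        res = mock_block_status(offset, bytes, &pnum);
        offset += pnum;
        bytes -= pnum;
    } while (res >= 0 && (res & BLK_ZERO) && pnum && bytes);

    return res >= 0 && (res & BLK_ZERO) && bytes == 0;
}

int main(void)
{
    printf("[0,1024) zero? %d\n", is_zero(0, 1024));   /* 1: spans both extents */
    printf("[0,2048) zero? %d\n", is_zero(0, 2048));   /* 0: runs past the extents */
    return 0;
}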
Deleted patch
1
From: Alexander Bulekov <alxndr@bu.edu>
2
1
3
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
4
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
5
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
6
Message-id: 20200220041118.23264-5-alxndr@bu.edu
7
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
8
---
9
include/qemu/module.h | 4 +++-
10
1 file changed, 3 insertions(+), 1 deletion(-)
11
12
diff --git a/include/qemu/module.h b/include/qemu/module.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/include/qemu/module.h
15
+++ b/include/qemu/module.h
16
@@ -XXX,XX +XXX,XX @@ typedef enum {
17
MODULE_INIT_TRACE,
18
MODULE_INIT_XEN_BACKEND,
19
MODULE_INIT_LIBQOS,
20
+ MODULE_INIT_FUZZ_TARGET,
21
MODULE_INIT_MAX
22
} module_init_type;
23
24
@@ -XXX,XX +XXX,XX @@ typedef enum {
25
#define xen_backend_init(function) module_init(function, \
26
MODULE_INIT_XEN_BACKEND)
27
#define libqos_init(function) module_init(function, MODULE_INIT_LIBQOS)
28
-
29
+#define fuzz_target_init(function) module_init(function, \
30
+ MODULE_INIT_FUZZ_TARGET)
31
#define block_module_load_one(lib) module_load_one("block-", lib)
32
#define ui_module_load_one(lib) module_load_one("ui-", lib)
33
#define audio_module_load_one(lib) module_load_one("audio-", lib)
34
--
35
2.24.1
36
diff view generated by jsdifflib
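Both module patches above hinge on QEMU's module_init() registration lists: init functions are queued per type, and module_call_init() now runs each list at most once. A stripped-down model of that pattern, with invented names and none of the real constructor-attribute plumbing, looks roughly like this:

/*
 * Illustrative only: per-type init registration with an init-once guard.
 * The type names and helpers are simplified stand-ins for util/module.c.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { INIT_TRACE, INIT_FUZZ_TARGET, INIT_MAX } init_type;
typedef void (*init_fn)(void);

#define MAX_PER_TYPE 8
static init_fn init_lists[INIT_MAX][MAX_PER_TYPE];
static int init_counts[INIT_MAX];
static bool init_done[INIT_MAX];

static void register_init(init_type t, init_fn fn)
{
    init_lists[t][init_counts[t]++] = fn;
}

/* Safe to call more than once: later calls for the same type are no-ops,
 * mirroring the modules_init_done[] check added to module_call_init(). */
static void call_init(init_type t)
{
    if (init_done[t]) {
        return;
    }
    for (int i = 0; i < init_counts[t]; i++) {
        init_lists[t][i]();
    }
    init_done[t] = true;
}

static void register_my_fuzz_target(void)
{
    printf("fuzz target registered\n");
}

int main(void)
{
    register_init(INIT_FUZZ_TARGET, register_my_fuzz_target);
    call_init(INIT_FUZZ_TARGET);   /* runs the registered hook */
    call_init(INIT_FUZZ_TARGET);   /* second call does nothing */
    return 0;
}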
Deleted patch
1
From: Alexander Bulekov <alxndr@bu.edu>
2
1
3
This makes it simple to swap the transport functions for qtest commands
4
to and from the qtest client. For example, now it is possible to
5
directly pass qtest commands to a server handler that exists within the
6
same process, without the standard way of writing to a file descriptor.
7
8
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
9
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
10
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
11
Message-id: 20200220041118.23264-7-alxndr@bu.edu
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
13
---
14
tests/qtest/libqtest.c | 48 ++++++++++++++++++++++++++++++++++--------
15
1 file changed, 39 insertions(+), 9 deletions(-)
16
17
diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/tests/qtest/libqtest.c
20
+++ b/tests/qtest/libqtest.c
21
@@ -XXX,XX +XXX,XX @@
22
#define SOCKET_TIMEOUT 50
23
#define SOCKET_MAX_FDS 16
24
25
+
26
+typedef void (*QTestSendFn)(QTestState *s, const char *buf);
27
+typedef GString* (*QTestRecvFn)(QTestState *);
28
+
29
+typedef struct QTestClientTransportOps {
30
+ QTestSendFn send; /* for sending qtest commands */
31
+ QTestRecvFn recv_line; /* for receiving qtest command responses */
32
+} QTestTransportOps;
33
+
34
struct QTestState
35
{
36
int fd;
37
@@ -XXX,XX +XXX,XX @@ struct QTestState
38
bool big_endian;
39
bool irq_level[MAX_IRQ];
40
GString *rx;
41
+ QTestTransportOps ops;
42
};
43
44
static GHookList abrt_hooks;
45
@@ -XXX,XX +XXX,XX @@ static struct sigaction sigact_old;
46
47
static int qtest_query_target_endianness(QTestState *s);
48
49
+static void qtest_client_socket_send(QTestState*, const char *buf);
50
+static void socket_send(int fd, const char *buf, size_t size);
51
+
52
+static GString *qtest_client_socket_recv_line(QTestState *);
53
+
54
+static void qtest_client_set_tx_handler(QTestState *s, QTestSendFn send);
55
+static void qtest_client_set_rx_handler(QTestState *s, QTestRecvFn recv);
56
+
57
static int init_socket(const char *socket_path)
58
{
59
struct sockaddr_un addr;
60
@@ -XXX,XX +XXX,XX @@ QTestState *qtest_init_without_qmp_handshake(const char *extra_args)
61
sock = init_socket(socket_path);
62
qmpsock = init_socket(qmp_socket_path);
63
64
+ qtest_client_set_rx_handler(s, qtest_client_socket_recv_line);
65
+ qtest_client_set_tx_handler(s, qtest_client_socket_send);
66
+
67
qtest_add_abrt_handler(kill_qemu_hook_func, s);
68
69
command = g_strdup_printf("exec %s "
70
@@ -XXX,XX +XXX,XX @@ static void socket_send(int fd, const char *buf, size_t size)
71
}
72
}
73
74
-static void socket_sendf(int fd, const char *fmt, va_list ap)
75
+static void qtest_client_socket_send(QTestState *s, const char *buf)
76
{
77
- gchar *str = g_strdup_vprintf(fmt, ap);
78
- size_t size = strlen(str);
79
-
80
- socket_send(fd, str, size);
81
- g_free(str);
82
+ socket_send(s->fd, buf, strlen(buf));
83
}
84
85
static void GCC_FMT_ATTR(2, 3) qtest_sendf(QTestState *s, const char *fmt, ...)
86
@@ -XXX,XX +XXX,XX @@ static void GCC_FMT_ATTR(2, 3) qtest_sendf(QTestState *s, const char *fmt, ...)
87
va_list ap;
88
89
va_start(ap, fmt);
90
- socket_sendf(s->fd, fmt, ap);
91
+ gchar *str = g_strdup_vprintf(fmt, ap);
92
va_end(ap);
93
+
94
+ s->ops.send(s, str);
95
+ g_free(str);
96
}
97
98
/* Sends a message and file descriptors to the socket.
99
@@ -XXX,XX +XXX,XX @@ static void socket_send_fds(int socket_fd, int *fds, size_t fds_num,
100
g_assert_cmpint(ret, >, 0);
101
}
102
103
-static GString *qtest_recv_line(QTestState *s)
104
+static GString *qtest_client_socket_recv_line(QTestState *s)
105
{
106
GString *line;
107
size_t offset;
108
@@ -XXX,XX +XXX,XX @@ static gchar **qtest_rsp(QTestState *s, int expected_args)
109
int i;
110
111
redo:
112
- line = qtest_recv_line(s);
113
+ line = s->ops.recv_line(s);
114
words = g_strsplit(line->str, " ", 0);
115
g_string_free(line, TRUE);
116
117
@@ -XXX,XX +XXX,XX @@ void qmp_assert_error_class(QDict *rsp, const char *class)
118
119
qobject_unref(rsp);
120
}
121
+
122
+static void qtest_client_set_tx_handler(QTestState *s,
123
+ QTestSendFn send)
124
+{
125
+ s->ops.send = send;
126
+}
127
+static void qtest_client_set_rx_handler(QTestState *s, QTestRecvFn recv)
128
+{
129
+ s->ops.recv_line = recv;
130
+}
131
--
132
2.24.1
133
diff view generated by jsdifflib
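The libqtest change above hides the client's transport behind a small ops table so that a socket or an in-process handler can be plugged in interchangeably. A minimal sketch of that function-pointer pattern, using invented names rather than the real QTestState/QTestTransportOps definitions, might look like this:

/*
 * Illustrative only: a pluggable send/recv transport behind an ops struct.
 */
#include <stdio.h>

typedef struct Client Client;

typedef void (*SendFn)(Client *c, const char *buf);
typedef const char *(*RecvLineFn)(Client *c);

typedef struct {
    SendFn send;          /* how to get a command to the server */
    RecvLineFn recv_line; /* how to get one response line back */
} TransportOps;

struct Client {
    TransportOps ops;
    char last_cmd[128];   /* stands in for a socket or shared buffer */
};

/* One possible transport: "loopback", echoing the command as the reply. */
static void loopback_send(Client *c, const char *buf)
{
    snprintf(c->last_cmd, sizeof(c->last_cmd), "OK %s", buf);
}

static const char *loopback_recv_line(Client *c)
{
    return c->last_cmd;
}

/* Callers only ever go through the ops table, so an in-process transport
 * can be dropped in without touching them. */
static const char *run_command(Client *c, const char *cmd)
{
    c->ops.send(c, cmd);
    return c->ops.recv_line(c);
}

int main(void)
{
    Client c = { .ops = { loopback_send, loopback_recv_line } };
    printf("%s\n", run_command(&c, "readb 0x100"));
    return 0;
}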
1
From: Alexander Bulekov <alxndr@bu.edu>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
The virtio-net fuzz target feeds inputs to all three virtio-net
3
In order to reuse bdrv_common_block_status_above in
4
virtqueues, and uses forking to avoid leaking state between fuzz runs.
4
bdrv_is_allocated_above, let's support include_base parameter.
5
5
6
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Reviewed-by: Alberto Garcia <berto@igalia.com>
8
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
8
Reviewed-by: Eric Blake <eblake@redhat.com>
9
Message-id: 20200220041118.23264-21-alxndr@bu.edu
9
Message-id: 20200924194003.22080-3-vsementsov@virtuozzo.com
10
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
10
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
11
---
11
---
12
tests/qtest/fuzz/Makefile.include | 1 +
12
block/coroutines.h | 2 ++
13
tests/qtest/fuzz/virtio_net_fuzz.c | 198 +++++++++++++++++++++++++++++
13
block/io.c | 21 ++++++++++++++-------
14
2 files changed, 199 insertions(+)
14
2 files changed, 16 insertions(+), 7 deletions(-)
15
create mode 100644 tests/qtest/fuzz/virtio_net_fuzz.c
16
15
17
diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include
16
diff --git a/block/coroutines.h b/block/coroutines.h
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/tests/qtest/fuzz/Makefile.include
18
--- a/block/coroutines.h
20
+++ b/tests/qtest/fuzz/Makefile.include
19
+++ b/block/coroutines.h
21
@@ -XXX,XX +XXX,XX @@ fuzz-obj-y += tests/qtest/fuzz/qos_fuzz.o
20
@@ -XXX,XX +XXX,XX @@ bdrv_pwritev(BdrvChild *child, int64_t offset, unsigned int bytes,
22
21
int coroutine_fn
23
# Targets
22
bdrv_co_common_block_status_above(BlockDriverState *bs,
24
fuzz-obj-y += tests/qtest/fuzz/i440fx_fuzz.o
23
BlockDriverState *base,
25
+fuzz-obj-y += tests/qtest/fuzz/virtio_net_fuzz.o
24
+ bool include_base,
26
25
bool want_zero,
27
FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest
26
int64_t offset,
28
27
int64_t bytes,
29
diff --git a/tests/qtest/fuzz/virtio_net_fuzz.c b/tests/qtest/fuzz/virtio_net_fuzz.c
28
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
30
new file mode 100644
29
int generated_co_wrapper
31
index XXXXXXX..XXXXXXX
30
bdrv_common_block_status_above(BlockDriverState *bs,
32
--- /dev/null
31
BlockDriverState *base,
33
+++ b/tests/qtest/fuzz/virtio_net_fuzz.c
32
+ bool include_base,
34
@@ -XXX,XX +XXX,XX @@
33
bool want_zero,
35
+/*
34
int64_t offset,
36
+ * virtio-net Fuzzing Target
35
int64_t bytes,
37
+ *
36
diff --git a/block/io.c b/block/io.c
38
+ * Copyright Red Hat Inc., 2019
37
index XXXXXXX..XXXXXXX 100644
39
+ *
38
--- a/block/io.c
40
+ * Authors:
39
+++ b/block/io.c
41
+ * Alexander Bulekov <alxndr@bu.edu>
40
@@ -XXX,XX +XXX,XX @@ early_out:
42
+ *
41
int coroutine_fn
43
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
42
bdrv_co_common_block_status_above(BlockDriverState *bs,
44
+ * See the COPYING file in the top-level directory.
43
BlockDriverState *base,
45
+ */
44
+ bool include_base,
46
+
45
bool want_zero,
47
+#include "qemu/osdep.h"
46
int64_t offset,
48
+
47
int64_t bytes,
49
+#include "standard-headers/linux/virtio_config.h"
48
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
50
+#include "tests/qtest/libqtest.h"
49
BlockDriverState *p;
51
+#include "tests/qtest/libqos/virtio-net.h"
50
int64_t eof = 0;
52
+#include "fuzz.h"
51
53
+#include "fork_fuzz.h"
52
- assert(bs != base);
54
+#include "qos_fuzz.h"
53
+ assert(include_base || bs != base);
55
+
54
+ assert(!include_base || base); /* Can't include NULL base */
56
+
55
57
+#define QVIRTIO_NET_TIMEOUT_US (30 * 1000 * 1000)
56
ret = bdrv_co_block_status(bs, want_zero, offset, bytes, pnum, map, file);
58
+#define QVIRTIO_RX_VQ 0
57
- if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED) {
59
+#define QVIRTIO_TX_VQ 1
58
+ if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED || bs == base) {
60
+#define QVIRTIO_CTRL_VQ 2
59
return ret;
61
+
60
}
62
+static int sockfds[2];
61
63
+static bool sockfds_initialized;
62
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
64
+
63
assert(*pnum <= bytes);
65
+static void virtio_net_fuzz_multi(QTestState *s,
64
bytes = *pnum;
66
+ const unsigned char *Data, size_t Size, bool check_used)
65
67
+{
66
- for (p = bdrv_filter_or_cow_bs(bs); p != base;
68
+ typedef struct vq_action {
67
+ for (p = bdrv_filter_or_cow_bs(bs); include_base || p != base;
69
+ uint8_t queue;
68
p = bdrv_filter_or_cow_bs(p))
70
+ uint8_t length;
69
{
71
+ uint8_t write;
70
ret = bdrv_co_block_status(p, want_zero, offset, bytes, pnum, map,
72
+ uint8_t next;
71
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
73
+ uint8_t rx;
72
break;
74
+ } vq_action;
73
}
75
+
74
76
+ uint32_t free_head = 0;
75
+ if (p == base) {
77
+
76
+ assert(include_base);
78
+ QGuestAllocator *t_alloc = fuzz_qos_alloc;
77
+ break;
79
+
80
+ QVirtioNet *net_if = fuzz_qos_obj;
81
+ QVirtioDevice *dev = net_if->vdev;
82
+ QVirtQueue *q;
83
+ vq_action vqa;
84
+ while (Size >= sizeof(vqa)) {
85
+ memcpy(&vqa, Data, sizeof(vqa));
86
+ Data += sizeof(vqa);
87
+ Size -= sizeof(vqa);
88
+
89
+ q = net_if->queues[vqa.queue % 3];
90
+
91
+ vqa.length = vqa.length >= Size ? Size : vqa.length;
92
+
93
+ /*
94
+ * Only attempt to write incoming packets, when using the socket
95
+ * backend. Otherwise, always place the input on a virtqueue.
96
+ */
97
+ if (vqa.rx && sockfds_initialized) {
98
+ write(sockfds[0], Data, vqa.length);
99
+ } else {
100
+ vqa.rx = 0;
101
+ uint64_t req_addr = guest_alloc(t_alloc, vqa.length);
102
+ /*
103
+ * If checking used ring, ensure that the fuzzer doesn't trigger
104
+ * trivial assertion failure on a zero-sized buffer
105
+ */
106
+ qtest_memwrite(s, req_addr, Data, vqa.length);
107
+
108
+
109
+ free_head = qvirtqueue_add(s, q, req_addr, vqa.length,
110
+ vqa.write, vqa.next);
111
+ qvirtqueue_add(s, q, req_addr, vqa.length, vqa.write , vqa.next);
112
+ qvirtqueue_kick(s, dev, q, free_head);
113
+ }
78
+ }
114
+
79
+
115
+ /* Run the main loop */
80
/*
116
+ qtest_clock_step(s, 100);
81
* OK, [offset, offset + *pnum) region is unallocated on this layer,
117
+ flush_events(s);
82
* let's continue the diving.
118
+
83
@@ -XXX,XX +XXX,XX @@ int bdrv_block_status_above(BlockDriverState *bs, BlockDriverState *base,
119
+ /* Wait on used descriptors */
84
int64_t offset, int64_t bytes, int64_t *pnum,
120
+ if (check_used && !vqa.rx) {
85
int64_t *map, BlockDriverState **file)
121
+ gint64 start_time = g_get_monotonic_time();
86
{
122
+ /*
87
- return bdrv_common_block_status_above(bs, base, true, offset, bytes,
123
+ * normally, we could just use qvirtio_wait_used_elem, but since we
88
+ return bdrv_common_block_status_above(bs, base, false, true, offset, bytes,
124
+ * must manually run the main-loop for all the bhs to run, we use
89
pnum, map, file);
125
+ * this hack with flush_events(), to run the main_loop
90
}
126
+ */
91
127
+ while (!vqa.rx && q != net_if->queues[QVIRTIO_RX_VQ]) {
92
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_is_allocated(BlockDriverState *bs, int64_t offset,
128
+ uint32_t got_desc_idx;
93
int ret;
129
+ /* Input led to a virtio_error */
94
int64_t dummy;
130
+ if (dev->bus->get_status(dev) & VIRTIO_CONFIG_S_NEEDS_RESET) {
95
131
+ break;
96
- ret = bdrv_common_block_status_above(bs, bdrv_filter_or_cow_bs(bs), false,
132
+ }
97
- offset, bytes, pnum ? pnum : &dummy,
133
+ if (dev->bus->get_queue_isr_status(dev, q) &&
98
- NULL, NULL);
134
+ qvirtqueue_get_buf(s, q, &got_desc_idx, NULL)) {
99
+ ret = bdrv_common_block_status_above(bs, bs, true, false, offset,
135
+ g_assert_cmpint(got_desc_idx, ==, free_head);
100
+ bytes, pnum ? pnum : &dummy, NULL,
136
+ break;
101
+ NULL);
137
+ }
102
if (ret < 0) {
138
+ g_assert(g_get_monotonic_time() - start_time
103
return ret;
139
+ <= QVIRTIO_NET_TIMEOUT_US);
104
}
140
+
141
+ /* Run the main loop */
142
+ qtest_clock_step(s, 100);
143
+ flush_events(s);
144
+ }
145
+ }
146
+ Data += vqa.length;
147
+ Size -= vqa.length;
148
+ }
149
+}
150
+
151
+static void virtio_net_fork_fuzz(QTestState *s,
152
+ const unsigned char *Data, size_t Size)
153
+{
154
+ if (fork() == 0) {
155
+ virtio_net_fuzz_multi(s, Data, Size, false);
156
+ flush_events(s);
157
+ _Exit(0);
158
+ } else {
159
+ wait(NULL);
160
+ }
161
+}
162
+
163
+static void virtio_net_fork_fuzz_check_used(QTestState *s,
164
+ const unsigned char *Data, size_t Size)
165
+{
166
+ if (fork() == 0) {
167
+ virtio_net_fuzz_multi(s, Data, Size, true);
168
+ flush_events(s);
169
+ _Exit(0);
170
+ } else {
171
+ wait(NULL);
172
+ }
173
+}
174
+
175
+static void virtio_net_pre_fuzz(QTestState *s)
176
+{
177
+ qos_init_path(s);
178
+ counter_shm_init();
179
+}
180
+
181
+static void *virtio_net_test_setup_socket(GString *cmd_line, void *arg)
182
+{
183
+ int ret = socketpair(PF_UNIX, SOCK_STREAM, 0, sockfds);
184
+ g_assert_cmpint(ret, !=, -1);
185
+ fcntl(sockfds[0], F_SETFL, O_NONBLOCK);
186
+ sockfds_initialized = true;
187
+ g_string_append_printf(cmd_line, " -netdev socket,fd=%d,id=hs0 ",
188
+ sockfds[1]);
189
+ return arg;
190
+}
191
+
192
+static void *virtio_net_test_setup_user(GString *cmd_line, void *arg)
193
+{
194
+ g_string_append_printf(cmd_line, " -netdev user,id=hs0 ");
195
+ return arg;
196
+}
197
+
198
+static void register_virtio_net_fuzz_targets(void)
199
+{
200
+ fuzz_add_qos_target(&(FuzzTarget){
201
+ .name = "virtio-net-socket",
202
+ .description = "Fuzz the virtio-net virtual queues. Fuzz incoming "
203
+ "traffic using the socket backend",
204
+ .pre_fuzz = &virtio_net_pre_fuzz,
205
+ .fuzz = virtio_net_fork_fuzz,},
206
+ "virtio-net",
207
+ &(QOSGraphTestOptions){.before = virtio_net_test_setup_socket}
208
+ );
209
+
210
+ fuzz_add_qos_target(&(FuzzTarget){
211
+ .name = "virtio-net-socket-check-used",
212
+ .description = "Fuzz the virtio-net virtual queues. Wait for the "
213
+ "descriptors to be used. Timeout may indicate improperly handled "
214
+ "input",
215
+ .pre_fuzz = &virtio_net_pre_fuzz,
216
+ .fuzz = virtio_net_fork_fuzz_check_used,},
217
+ "virtio-net",
218
+ &(QOSGraphTestOptions){.before = virtio_net_test_setup_socket}
219
+ );
220
+ fuzz_add_qos_target(&(FuzzTarget){
221
+ .name = "virtio-net-slirp",
222
+ .description = "Fuzz the virtio-net virtual queues with the slirp "
223
+ " backend. Warning: May result in network traffic emitted from the "
224
+ " process. Run in an isolated network environment.",
225
+ .pre_fuzz = &virtio_net_pre_fuzz,
226
+ .fuzz = virtio_net_fork_fuzz,},
227
+ "virtio-net",
228
+ &(QOSGraphTestOptions){.before = virtio_net_test_setup_user}
229
+ );
230
+}
231
+
232
+fuzz_target_init(register_virtio_net_fuzz_targets);
233
--
105
--
234
2.24.1
106
2.26.2
235
107
diff view generated by jsdifflib
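The include_base parameter above only changes where the backing-chain walk terminates: with it set, base itself is queried before the loop stops. The toy chain below illustrates just that termination rule (it assumes base is actually reachable in the chain); ToyNode and walk_above() are invented, not BlockDriverState APIs.

/*
 * Illustrative only: loop termination with and without include_base.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct ToyNode {
    const char *name;
    struct ToyNode *backing;   /* stands in for bdrv_filter_or_cow_bs() */
} ToyNode;

/* Visit every layer strictly above base, or base too if include_base. */
static void walk_above(ToyNode *top, ToyNode *base, bool include_base)
{
    for (ToyNode *p = top; include_base || p != base; p = p->backing) {
        printf("  query %s\n", p->name);
        if (p == base) {
            /* only reachable when include_base is true */
            break;
        }
    }
}

int main(void)
{
    ToyNode base = { "base", NULL };
    ToyNode mid  = { "mid", &base };
    ToyNode top  = { "top", &mid };

    printf("exclusive walk (top..mid):\n");
    walk_above(&top, &base, false);

    printf("inclusive walk (top..base):\n");
    walk_above(&top, &base, true);
    return 0;
}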
1
From: Alexander Bulekov <alxndr@bu.edu>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
When using qtest "in-process" communication, qtest_sendf directly calls
3
We are going to reuse bdrv_common_block_status_above in
4
a function in the server (qtest.c). Previously, bufwrite used
4
bdrv_is_allocated_above. bdrv_is_allocated_above may be called with
5
socket_send, which bypasses the TransportOps enabling the call into
5
include_base == false and still bs == base (e.g. from img_rebase()).
6
qtest.c. This change replaces the socket_send calls with ops->send,
7
maintaining the benefits of the direct socket_send call, while adding
8
support for in-process qtest calls.
9
6
10
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
7
So, support this corner case.
11
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
8
12
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
9
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
13
Message-id: 20200220041118.23264-8-alxndr@bu.edu
10
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
11
Reviewed-by: Eric Blake <eblake@redhat.com>
12
Reviewed-by: Alberto Garcia <berto@igalia.com>
13
Message-id: 20200924194003.22080-4-vsementsov@virtuozzo.com
14
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
14
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
15
---
15
---
16
tests/qtest/libqtest.c | 71 ++++++++++++++++++++++++++++++++++++++++--
16
block/io.c | 6 +++++-
17
tests/qtest/libqtest.h | 4 +++
17
1 file changed, 5 insertions(+), 1 deletion(-)
18
2 files changed, 73 insertions(+), 2 deletions(-)
19
18
20
diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
19
diff --git a/block/io.c b/block/io.c
21
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
22
--- a/tests/qtest/libqtest.c
21
--- a/block/io.c
23
+++ b/tests/qtest/libqtest.c
22
+++ b/block/io.c
24
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@ bdrv_co_common_block_status_above(BlockDriverState *bs,
25
24
BlockDriverState *p;
26
25
int64_t eof = 0;
27
typedef void (*QTestSendFn)(QTestState *s, const char *buf);
26
28
+typedef void (*ExternalSendFn)(void *s, const char *buf);
27
- assert(include_base || bs != base);
29
typedef GString* (*QTestRecvFn)(QTestState *);
28
assert(!include_base || base); /* Can't include NULL base */
30
29
31
typedef struct QTestClientTransportOps {
30
+ if (!include_base && bs == base) {
32
QTestSendFn send; /* for sending qtest commands */
31
+ *pnum = bytes;
33
+
32
+ return 0;
34
+ /*
35
+ * use external_send to send qtest command strings through functions which
36
+ * do not accept a QTestState as the first parameter.
37
+ */
38
+ ExternalSendFn external_send;
39
+
40
QTestRecvFn recv_line; /* for receiving qtest command responses */
41
} QTestTransportOps;
42
43
@@ -XXX,XX +XXX,XX @@ void qtest_bufwrite(QTestState *s, uint64_t addr, const void *data, size_t size)
44
45
bdata = g_base64_encode(data, size);
46
qtest_sendf(s, "b64write 0x%" PRIx64 " 0x%zx ", addr, size);
47
- socket_send(s->fd, bdata, strlen(bdata));
48
- socket_send(s->fd, "\n", 1);
49
+ s->ops.send(s, bdata);
50
+ s->ops.send(s, "\n");
51
qtest_rsp(s, 0);
52
g_free(bdata);
53
}
54
@@ -XXX,XX +XXX,XX @@ static void qtest_client_set_rx_handler(QTestState *s, QTestRecvFn recv)
55
{
56
s->ops.recv_line = recv;
57
}
58
+/* A type-safe wrapper for s->send() */
59
+static void send_wrapper(QTestState *s, const char *buf)
60
+{
61
+ s->ops.external_send(s, buf);
62
+}
63
+
64
+static GString *qtest_client_inproc_recv_line(QTestState *s)
65
+{
66
+ GString *line;
67
+ size_t offset;
68
+ char *eol;
69
+
70
+ eol = strchr(s->rx->str, '\n');
71
+ offset = eol - s->rx->str;
72
+ line = g_string_new_len(s->rx->str, offset);
73
+ g_string_erase(s->rx, 0, offset + 1);
74
+ return line;
75
+}
76
+
77
+QTestState *qtest_inproc_init(QTestState **s, bool log, const char* arch,
78
+ void (*send)(void*, const char*))
79
+{
80
+ QTestState *qts;
81
+ qts = g_new0(QTestState, 1);
82
+ *s = qts; /* Expose qts early on, since the query endianness relies on it */
83
+ qts->wstatus = 0;
84
+ for (int i = 0; i < MAX_IRQ; i++) {
85
+ qts->irq_level[i] = false;
86
+ }
33
+ }
87
+
34
+
88
+ qtest_client_set_rx_handler(qts, qtest_client_inproc_recv_line);
35
ret = bdrv_co_block_status(bs, want_zero, offset, bytes, pnum, map, file);
89
+
36
if (ret < 0 || *pnum == 0 || ret & BDRV_BLOCK_ALLOCATED || bs == base) {
90
+ /* send() may not have a matching prototype, so use a type-safe wrapper */
37
return ret;
91
+ qts->ops.external_send = send;
92
+ qtest_client_set_tx_handler(qts, send_wrapper);
93
+
94
+ qts->big_endian = qtest_query_target_endianness(qts);
95
+
96
+ /*
97
+ * Set a dummy path for QTEST_QEMU_BINARY. Doesn't need to exist, but this
98
+ * way, qtest_get_arch works for inproc qtest.
99
+ */
100
+ gchar *bin_path = g_strconcat("/qemu-system-", arch, NULL);
101
+ setenv("QTEST_QEMU_BINARY", bin_path, 0);
102
+ g_free(bin_path);
103
+
104
+ return qts;
105
+}
106
+
107
+void qtest_client_inproc_recv(void *opaque, const char *str)
108
+{
109
+ QTestState *qts = *(QTestState **)opaque;
110
+
111
+ if (!qts->rx) {
112
+ qts->rx = g_string_new(NULL);
113
+ }
114
+ g_string_append(qts->rx, str);
115
+ return;
116
+}
117
diff --git a/tests/qtest/libqtest.h b/tests/qtest/libqtest.h
118
index XXXXXXX..XXXXXXX 100644
119
--- a/tests/qtest/libqtest.h
120
+++ b/tests/qtest/libqtest.h
121
@@ -XXX,XX +XXX,XX @@ bool qtest_probe_child(QTestState *s);
122
*/
123
void qtest_set_expected_status(QTestState *s, int status);
124
125
+QTestState *qtest_inproc_init(QTestState **s, bool log, const char* arch,
126
+ void (*send)(void*, const char*));
127
+
128
+void qtest_client_inproc_recv(void *opaque, const char *str);
129
#endif
130
--
38
--
131
2.24.1
39
2.26.2
132
40
diff view generated by jsdifflib
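The corner case above amounts to an early return placed before the chain walk: asking about the range strictly above base while standing on base trivially yields "nothing allocated above". A tiny sketch of that guard, with an invented toy_status_above() in place of the real function:

/*
 * Illustrative only: the bs == base early-out, on a toy node type.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct ToyNode {
    struct ToyNode *backing;
} ToyNode;

/* Return 0 ("nothing allocated above base") without walking the chain
 * when the query starts at base itself and base is excluded. */
static int toy_status_above(ToyNode *bs, ToyNode *base, bool include_base,
                            long bytes, long *pnum)
{
    if (!include_base && bs == base) {
        *pnum = bytes;
        return 0;
    }
    /* ... the normal backing-chain walk would continue here ... */
    *pnum = bytes;
    return 1;
}

int main(void)
{
    ToyNode base = { NULL };
    long pnum;

    /* img_rebase()-style call: top and base are the same node. */
    printf("%d\n", toy_status_above(&base, &base, false, 4096, &pnum));
    return 0;
}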
1
From: Alexander Bulekov <alxndr@bu.edu>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
The moved functions are not specific to qos-test and might be useful
3
bdrv_is_allocated_above wrongly handles short backing files: it reports
4
elsewhere. For example the virtual-device fuzzer makes use of them for
4
after-EOF space as UNALLOCATED which is wrong, as on read the data is
5
qos-assisted fuzz-targets.
5
generated on the level of short backing file (if all overlays have
6
unallocated areas at that place).
6
7
7
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
8
Reusing bdrv_common_block_status_above fixes the issue and unifies code
8
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
9
path.
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
10
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
11
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
11
Message-id: 20200220041118.23264-12-alxndr@bu.edu
12
Reviewed-by: Eric Blake <eblake@redhat.com>
13
Reviewed-by: Alberto Garcia <berto@igalia.com>
14
Message-id: 20200924194003.22080-5-vsementsov@virtuozzo.com
15
[Fix s/has/have/ as suggested by Eric Blake. Fix s/area/areas/.
16
--Stefan]
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
17
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
13
---
18
---
14
tests/qtest/Makefile.include | 1 +
19
block/io.c | 43 +++++--------------------------------------
15
tests/qtest/libqos/qos_external.c | 168 ++++++++++++++++++++++++++++++
20
1 file changed, 5 insertions(+), 38 deletions(-)
16
tests/qtest/libqos/qos_external.h | 28 +++++
17
tests/qtest/qos-test.c | 132 +----------------------
18
4 files changed, 198 insertions(+), 131 deletions(-)
19
create mode 100644 tests/qtest/libqos/qos_external.c
20
create mode 100644 tests/qtest/libqos/qos_external.h
21
21
22
diff --git a/tests/qtest/Makefile.include b/tests/qtest/Makefile.include
22
diff --git a/block/io.c b/block/io.c
23
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
24
--- a/tests/qtest/Makefile.include
24
--- a/block/io.c
25
+++ b/tests/qtest/Makefile.include
25
+++ b/block/io.c
26
@@ -XXX,XX +XXX,XX @@ libqos-usb-obj-y = $(libqos-spapr-obj-y) $(libqos-pc-obj-y) tests/qtest/libqos/u
26
@@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_is_allocated(BlockDriverState *bs, int64_t offset,
27
# qos devices:
27
* at 'offset + *pnum' may return the same allocation status (in other
28
libqos-obj-y = $(libqgraph-obj-y)
28
* words, the result is not necessarily the maximum possible range);
29
libqos-obj-y += $(libqos-pc-obj-y) $(libqos-spapr-obj-y)
29
* but 'pnum' will only be 0 when end of file is reached.
30
+libqos-obj-y += tests/qtest/libqos/qos_external.o
31
libqos-obj-y += tests/qtest/libqos/e1000e.o
32
libqos-obj-y += tests/qtest/libqos/i2c.o
33
libqos-obj-y += tests/qtest/libqos/i2c-imx.o
34
diff --git a/tests/qtest/libqos/qos_external.c b/tests/qtest/libqos/qos_external.c
35
new file mode 100644
36
index XXXXXXX..XXXXXXX
37
--- /dev/null
38
+++ b/tests/qtest/libqos/qos_external.c
39
@@ -XXX,XX +XXX,XX @@
40
+/*
41
+ * libqos driver framework
42
+ *
43
+ * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
44
+ *
45
+ * This library is free software; you can redistribute it and/or
46
+ * modify it under the terms of the GNU Lesser General Public
47
+ * License version 2 as published by the Free Software Foundation.
48
+ *
49
+ * This library is distributed in the hope that it will be useful,
50
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
51
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
52
+ * Lesser General Public License for more details.
53
+ *
54
+ * You should have received a copy of the GNU Lesser General Public
55
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
56
+ */
57
+
58
+#include "qemu/osdep.h"
59
+#include <getopt.h>
60
+#include "libqtest.h"
61
+#include "qapi/qmp/qdict.h"
62
+#include "qapi/qmp/qbool.h"
63
+#include "qapi/qmp/qstring.h"
64
+#include "qemu/module.h"
65
+#include "qapi/qmp/qlist.h"
66
+#include "libqos/malloc.h"
67
+#include "libqos/qgraph.h"
68
+#include "libqos/qgraph_internal.h"
69
+#include "libqos/qos_external.h"
70
+
71
+
72
+
73
+void apply_to_node(const char *name, bool is_machine, bool is_abstract)
74
+{
75
+ char *machine_name = NULL;
76
+ if (is_machine) {
77
+ const char *arch = qtest_get_arch();
78
+ machine_name = g_strconcat(arch, "/", name, NULL);
79
+ name = machine_name;
80
+ }
81
+ qos_graph_node_set_availability(name, true);
82
+ if (is_abstract) {
83
+ qos_delete_cmd_line(name);
84
+ }
85
+ g_free(machine_name);
86
+}
87
+
88
+/**
89
+ * apply_to_qlist(): using QMP queries QEMU for a list of
90
+ * machines and devices available, and sets the respective node
91
+ * as true. If a node is found, also all its produced and contained
92
+ * children are marked available.
93
+ *
94
+ * See qos_graph_node_set_availability() for more info
95
+ */
96
+void apply_to_qlist(QList *list, bool is_machine)
97
+{
98
+ const QListEntry *p;
99
+ const char *name;
100
+ bool abstract;
101
+ QDict *minfo;
102
+ QObject *qobj;
103
+ QString *qstr;
104
+ QBool *qbool;
105
+
106
+ for (p = qlist_first(list); p; p = qlist_next(p)) {
107
+ minfo = qobject_to(QDict, qlist_entry_obj(p));
108
+ qobj = qdict_get(minfo, "name");
109
+ qstr = qobject_to(QString, qobj);
110
+ name = qstring_get_str(qstr);
111
+
112
+ qobj = qdict_get(minfo, "abstract");
113
+ if (qobj) {
114
+ qbool = qobject_to(QBool, qobj);
115
+ abstract = qbool_get_bool(qbool);
116
+ } else {
117
+ abstract = false;
118
+ }
119
+
120
+ apply_to_node(name, is_machine, abstract);
121
+ qobj = qdict_get(minfo, "alias");
122
+ if (qobj) {
123
+ qstr = qobject_to(QString, qobj);
124
+ name = qstring_get_str(qstr);
125
+ apply_to_node(name, is_machine, abstract);
126
+ }
127
+ }
128
+}
129
+
130
+QGuestAllocator *get_machine_allocator(QOSGraphObject *obj)
131
+{
132
+ return obj->get_driver(obj, "memory");
133
+}
134
+
135
+/**
136
+ * allocate_objects(): given an array of nodes @arg,
137
+ * walks the path invoking all constructors and
138
+ * passing the corresponding parameter in order to
139
+ * continue the objects allocation.
140
+ * Once the test is reached, return the object it consumes.
141
+ *
142
+ * Since the machine and QEDGE_CONSUMED_BY nodes allocate
143
+ * memory in the constructor, g_test_queue_destroy is used so
144
+ * that after execution they can be safely free'd. (The test's
145
+ * ->before callback is also welcome to use g_test_queue_destroy).
146
+ *
147
+ * Note: as specified in walk_path() too, @arg is an array of
148
+ * char *, where arg[0] is a pointer to the command line
149
+ * string that will be used to properly start QEMU when executing
150
+ * the test, and the remaining elements represent the actual objects
151
+ * that will be allocated.
152
+ */
153
+void *allocate_objects(QTestState *qts, char **path, QGuestAllocator **p_alloc)
154
+{
155
+ int current = 0;
156
+ QGuestAllocator *alloc;
157
+ QOSGraphObject *parent = NULL;
158
+ QOSGraphEdge *edge;
159
+ QOSGraphNode *node;
160
+ void *edge_arg;
161
+ void *obj;
162
+
163
+ node = qos_graph_get_node(path[current]);
164
+ g_assert(node->type == QNODE_MACHINE);
165
+
166
+ obj = qos_machine_new(node, qts);
167
+ qos_object_queue_destroy(obj);
168
+
169
+ alloc = get_machine_allocator(obj);
170
+ if (p_alloc) {
171
+ *p_alloc = alloc;
172
+ }
173
+
174
+ for (;;) {
175
+ if (node->type != QNODE_INTERFACE) {
176
+ qos_object_start_hw(obj);
177
+ parent = obj;
178
+ }
179
+
180
+ /* follow edge and get object for next node constructor */
181
+ current++;
182
+ edge = qos_graph_get_edge(path[current - 1], path[current]);
183
+ node = qos_graph_get_node(path[current]);
184
+
185
+ if (node->type == QNODE_TEST) {
186
+ g_assert(qos_graph_edge_get_type(edge) == QEDGE_CONSUMED_BY);
187
+ return obj;
188
+ }
189
+
190
+ switch (qos_graph_edge_get_type(edge)) {
191
+ case QEDGE_PRODUCES:
192
+ obj = parent->get_driver(parent, path[current]);
193
+ break;
194
+
195
+ case QEDGE_CONSUMED_BY:
196
+ edge_arg = qos_graph_edge_get_arg(edge);
197
+ obj = qos_driver_new(node, obj, alloc, edge_arg);
198
+ qos_object_queue_destroy(obj);
199
+ break;
200
+
201
+ case QEDGE_CONTAINS:
202
+ obj = parent->get_device(parent, path[current]);
203
+ break;
204
+ }
205
+ }
206
+}
207
+
208
diff --git a/tests/qtest/libqos/qos_external.h b/tests/qtest/libqos/qos_external.h
209
new file mode 100644
210
index XXXXXXX..XXXXXXX
211
--- /dev/null
212
+++ b/tests/qtest/libqos/qos_external.h
213
@@ -XXX,XX +XXX,XX @@
214
+/*
215
+ * libqos driver framework
216
+ *
217
+ * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
218
+ *
219
+ * This library is free software; you can redistribute it and/or
220
+ * modify it under the terms of the GNU Lesser General Public
221
+ * License version 2 as published by the Free Software Foundation.
222
+ *
223
+ * This library is distributed in the hope that it will be useful,
224
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
225
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
226
+ * Lesser General Public License for more details.
227
+ *
228
+ * You should have received a copy of the GNU Lesser General Public
229
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
230
+ */
231
+
232
+#ifndef QOS_EXTERNAL_H
233
+#define QOS_EXTERNAL_H
234
+#include "libqos/qgraph.h"
235
+
236
+void apply_to_node(const char *name, bool is_machine, bool is_abstract);
237
+void apply_to_qlist(QList *list, bool is_machine);
238
+QGuestAllocator *get_machine_allocator(QOSGraphObject *obj);
239
+void *allocate_objects(QTestState *qts, char **path, QGuestAllocator **p_alloc);
240
+
241
+#endif
242
diff --git a/tests/qtest/qos-test.c b/tests/qtest/qos-test.c
243
index XXXXXXX..XXXXXXX 100644
244
--- a/tests/qtest/qos-test.c
245
+++ b/tests/qtest/qos-test.c
246
@@ -XXX,XX +XXX,XX @@
247
#include "libqos/malloc.h"
248
#include "libqos/qgraph.h"
249
#include "libqos/qgraph_internal.h"
250
+#include "libqos/qos_external.h"
251
252
static char *old_path;
253
254
-static void apply_to_node(const char *name, bool is_machine, bool is_abstract)
255
-{
256
- char *machine_name = NULL;
257
- if (is_machine) {
258
- const char *arch = qtest_get_arch();
259
- machine_name = g_strconcat(arch, "/", name, NULL);
260
- name = machine_name;
261
- }
262
- qos_graph_node_set_availability(name, true);
263
- if (is_abstract) {
264
- qos_delete_cmd_line(name);
265
- }
266
- g_free(machine_name);
267
-}
268
269
-/**
270
- * apply_to_qlist(): using QMP queries QEMU for a list of
271
- * machines and devices available, and sets the respective node
272
- * as true. If a node is found, also all its produced and contained
273
- * child are marked available.
274
- *
30
- *
275
- * See qos_graph_node_set_availability() for more info
31
*/
276
- */
32
int bdrv_is_allocated_above(BlockDriverState *top,
277
-static void apply_to_qlist(QList *list, bool is_machine)
33
BlockDriverState *base,
278
-{
34
bool include_base, int64_t offset,
279
- const QListEntry *p;
35
int64_t bytes, int64_t *pnum)
280
- const char *name;
36
{
281
- bool abstract;
37
- BlockDriverState *intermediate;
282
- QDict *minfo;
38
- int ret;
283
- QObject *qobj;
39
- int64_t n = bytes;
284
- QString *qstr;
285
- QBool *qbool;
286
-
40
-
287
- for (p = qlist_first(list); p; p = qlist_next(p)) {
41
- assert(base || !include_base);
288
- minfo = qobject_to(QDict, qlist_entry_obj(p));
289
- qobj = qdict_get(minfo, "name");
290
- qstr = qobject_to(QString, qobj);
291
- name = qstring_get_str(qstr);
292
-
42
-
293
- qobj = qdict_get(minfo, "abstract");
43
- intermediate = top;
294
- if (qobj) {
44
- while (include_base || intermediate != base) {
295
- qbool = qobject_to(QBool, qobj);
45
- int64_t pnum_inter;
296
- abstract = qbool_get_bool(qbool);
46
- int64_t size_inter;
297
- } else {
47
-
298
- abstract = false;
48
- assert(intermediate);
49
- ret = bdrv_is_allocated(intermediate, offset, bytes, &pnum_inter);
50
- if (ret < 0) {
51
- return ret;
52
- }
53
- if (ret) {
54
- *pnum = pnum_inter;
55
- return 1;
299
- }
56
- }
300
-
57
-
301
- apply_to_node(name, is_machine, abstract);
58
- size_inter = bdrv_getlength(intermediate);
302
- qobj = qdict_get(minfo, "alias");
59
- if (size_inter < 0) {
303
- if (qobj) {
60
- return size_inter;
304
- qstr = qobject_to(QString, qobj);
305
- name = qstring_get_str(qstr);
306
- apply_to_node(name, is_machine, abstract);
307
- }
61
- }
308
- }
62
- if (n > pnum_inter &&
309
-}
63
- (intermediate == top || offset + pnum_inter < size_inter)) {
310
64
- n = pnum_inter;
311
/**
312
* qos_set_machines_devices_available(): sets availability of qgraph
313
@@ -XXX,XX +XXX,XX @@ static void qos_set_machines_devices_available(void)
314
qobject_unref(response);
315
}
316
317
-static QGuestAllocator *get_machine_allocator(QOSGraphObject *obj)
318
-{
319
- return obj->get_driver(obj, "memory");
320
-}
321
322
static void restart_qemu_or_continue(char *path)
323
{
324
@@ -XXX,XX +XXX,XX @@ void qos_invalidate_command_line(void)
325
old_path = NULL;
326
}
327
328
-/**
329
- * allocate_objects(): given an array of nodes @arg,
330
- * walks the path invoking all constructors and
331
- * passing the corresponding parameter in order to
332
- * continue the objects allocation.
333
- * Once the test is reached, return the object it consumes.
334
- *
335
- * Since the machine and QEDGE_CONSUMED_BY nodes allocate
336
- * memory in the constructor, g_test_queue_destroy is used so
337
- * that after execution they can be safely free'd. (The test's
338
- * ->before callback is also welcome to use g_test_queue_destroy).
339
- *
340
- * Note: as specified in walk_path() too, @arg is an array of
341
- * char *, where arg[0] is a pointer to the command line
342
- * string that will be used to properly start QEMU when executing
343
- * the test, and the remaining elements represent the actual objects
344
- * that will be allocated.
345
- */
346
-static void *allocate_objects(QTestState *qts, char **path, QGuestAllocator **p_alloc)
347
-{
348
- int current = 0;
349
- QGuestAllocator *alloc;
350
- QOSGraphObject *parent = NULL;
351
- QOSGraphEdge *edge;
352
- QOSGraphNode *node;
353
- void *edge_arg;
354
- void *obj;
355
-
356
- node = qos_graph_get_node(path[current]);
357
- g_assert(node->type == QNODE_MACHINE);
358
-
359
- obj = qos_machine_new(node, qts);
360
- qos_object_queue_destroy(obj);
361
-
362
- alloc = get_machine_allocator(obj);
363
- if (p_alloc) {
364
- *p_alloc = alloc;
365
- }
366
-
367
- for (;;) {
368
- if (node->type != QNODE_INTERFACE) {
369
- qos_object_start_hw(obj);
370
- parent = obj;
371
- }
65
- }
372
-
66
-
373
- /* follow edge and get object for next node constructor */
67
- if (intermediate == base) {
374
- current++;
68
- break;
375
- edge = qos_graph_get_edge(path[current - 1], path[current]);
376
- node = qos_graph_get_node(path[current]);
377
-
378
- if (node->type == QNODE_TEST) {
379
- g_assert(qos_graph_edge_get_type(edge) == QEDGE_CONSUMED_BY);
380
- return obj;
381
- }
69
- }
382
-
70
-
383
- switch (qos_graph_edge_get_type(edge)) {
71
- intermediate = bdrv_filter_or_cow_bs(intermediate);
384
- case QEDGE_PRODUCES:
72
+ int ret = bdrv_common_block_status_above(top, base, include_base, false,
385
- obj = parent->get_driver(parent, path[current]);
73
+ offset, bytes, pnum, NULL, NULL);
386
- break;
74
+ if (ret < 0) {
387
-
75
+ return ret;
388
- case QEDGE_CONSUMED_BY:
76
}
389
- edge_arg = qos_graph_edge_get_arg(edge);
77
390
- obj = qos_driver_new(node, obj, alloc, edge_arg);
78
- *pnum = n;
391
- qos_object_queue_destroy(obj);
79
- return 0;
392
- break;
80
+ return !!(ret & BDRV_BLOCK_ALLOCATED);
393
-
81
}
394
- case QEDGE_CONTAINS:
82
395
- obj = parent->get_device(parent, path[current]);
83
int coroutine_fn
396
- break;
397
- }
398
- }
399
-}
400
401
/* The argument to run_one_test, which is the test function that is registered
402
* with GTest, is a vector of strings. The first item is the initial command
403
--
84
--
404
2.24.1
85
2.26.2
405
86
diff view generated by jsdifflib
1
From: Alexander Bulekov <alxndr@bu.edu>
1
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2
2
3
The handler allows a qtest client to send commands to the server by
3
These cases are fixed by previous patches around block_status and
4
directly calling a function, rather than using a file/CharBackend
4
is_allocated.
5
5
6
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
6
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
7
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
7
Reviewed-by: Eric Blake <eblake@redhat.com>
8
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
8
Reviewed-by: Alberto Garcia <berto@igalia.com>
9
Message-id: 20200220041118.23264-9-alxndr@bu.edu
9
Message-id: 20200924194003.22080-6-vsementsov@virtuozzo.com
10
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
10
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
11
---
11
---
12
include/sysemu/qtest.h | 1 +
12
tests/qemu-iotests/274 | 20 +++++++++++
13
qtest.c | 13 +++++++++++++
13
tests/qemu-iotests/274.out | 68 ++++++++++++++++++++++++++++++++++++++
14
2 files changed, 14 insertions(+)
14
2 files changed, 88 insertions(+)
15
15
16
diff --git a/include/sysemu/qtest.h b/include/sysemu/qtest.h
16
diff --git a/tests/qemu-iotests/274 b/tests/qemu-iotests/274
17
index XXXXXXX..XXXXXXX 100755
18
--- a/tests/qemu-iotests/274
19
+++ b/tests/qemu-iotests/274
20
@@ -XXX,XX +XXX,XX @@ with iotests.FilePath('base') as base, \
21
iotests.qemu_io_log('-c', 'read -P 1 0 %d' % size_short, mid)
22
iotests.qemu_io_log('-c', 'read -P 0 %d %d' % (size_short, size_diff), mid)
23
24
+ iotests.log('=== Testing qemu-img commit (top -> base) ===')
25
+
26
+ create_chain()
27
+ iotests.qemu_img_log('commit', '-b', base, top)
28
+ iotests.img_info_log(base)
29
+ iotests.qemu_io_log('-c', 'read -P 1 0 %d' % size_short, base)
30
+ iotests.qemu_io_log('-c', 'read -P 0 %d %d' % (size_short, size_diff), base)
31
+
32
+ iotests.log('=== Testing QMP active commit (top -> base) ===')
33
+
34
+ create_chain()
35
+ with create_vm() as vm:
36
+ vm.launch()
37
+ vm.qmp_log('block-commit', device='top', base_node='base',
38
+ job_id='job0', auto_dismiss=False)
39
+ vm.run_job('job0', wait=5)
40
+
41
+ iotests.img_info_log(mid)
42
+ iotests.qemu_io_log('-c', 'read -P 1 0 %d' % size_short, base)
43
+ iotests.qemu_io_log('-c', 'read -P 0 %d %d' % (size_short, size_diff), base)
44
45
iotests.log('== Resize tests ==')
46
47
diff --git a/tests/qemu-iotests/274.out b/tests/qemu-iotests/274.out
17
index XXXXXXX..XXXXXXX 100644
48
index XXXXXXX..XXXXXXX 100644
18
--- a/include/sysemu/qtest.h
49
--- a/tests/qemu-iotests/274.out
19
+++ b/include/sysemu/qtest.h
50
+++ b/tests/qemu-iotests/274.out
20
@@ -XXX,XX +XXX,XX @@ void qtest_server_init(const char *qtest_chrdev, const char *qtest_log, Error **
51
@@ -XXX,XX +XXX,XX @@ read 1048576/1048576 bytes at offset 0
21
52
read 1048576/1048576 bytes at offset 1048576
22
void qtest_server_set_send_handler(void (*send)(void *, const char *),
53
1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
23
void *opaque);
54
24
+void qtest_server_inproc_recv(void *opaque, const char *buf);
55
+=== Testing qemu-img commit (top -> base) ===
25
56
+Formatting 'TEST_DIR/PID-base', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2097152 lazy_refcounts=off refcount_bits=16
26
#endif
27
diff --git a/qtest.c b/qtest.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/qtest.c
30
+++ b/qtest.c
31
@@ -XXX,XX +XXX,XX @@ bool qtest_driver(void)
32
{
33
return qtest_chr.chr != NULL;
34
}
35
+
57
+
36
+void qtest_server_inproc_recv(void *dummy, const char *buf)
58
+Formatting 'TEST_DIR/PID-mid', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=1048576 backing_file=TEST_DIR/PID-base backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
37
+{
59
+
38
+ static GString *gstr;
60
+Formatting 'TEST_DIR/PID-top', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2097152 backing_file=TEST_DIR/PID-mid backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
39
+ if (!gstr) {
61
+
40
+ gstr = g_string_new(NULL);
62
+wrote 2097152/2097152 bytes at offset 0
41
+ }
63
+2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
42
+ g_string_append(gstr, buf);
64
+
43
+ if (gstr->str[gstr->len - 1] == '\n') {
65
+Image committed.
44
+ qtest_process_inbuf(NULL, gstr);
66
+
45
+ g_string_truncate(gstr, 0);
67
+image: TEST_IMG
46
+ }
68
+file format: IMGFMT
47
+}
69
+virtual size: 2 MiB (2097152 bytes)
70
+cluster_size: 65536
71
+Format specific information:
72
+ compat: 1.1
73
+ compression type: zlib
74
+ lazy refcounts: false
75
+ refcount bits: 16
76
+ corrupt: false
77
+ extended l2: false
78
+
79
+read 1048576/1048576 bytes at offset 0
80
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
81
+
82
+read 1048576/1048576 bytes at offset 1048576
83
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
84
+
85
+=== Testing QMP active commit (top -> base) ===
86
+Formatting 'TEST_DIR/PID-base', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2097152 lazy_refcounts=off refcount_bits=16
87
+
88
+Formatting 'TEST_DIR/PID-mid', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=1048576 backing_file=TEST_DIR/PID-base backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
89
+
90
+Formatting 'TEST_DIR/PID-top', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2097152 backing_file=TEST_DIR/PID-mid backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
91
+
92
+wrote 2097152/2097152 bytes at offset 0
93
+2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
94
+
95
+{"execute": "block-commit", "arguments": {"auto-dismiss": false, "base-node": "base", "device": "top", "job-id": "job0"}}
96
+{"return": {}}
97
+{"execute": "job-complete", "arguments": {"id": "job0"}}
98
+{"return": {}}
99
+{"data": {"device": "job0", "len": 1048576, "offset": 1048576, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_READY", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
100
+{"data": {"device": "job0", "len": 1048576, "offset": 1048576, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
101
+{"execute": "job-dismiss", "arguments": {"id": "job0"}}
102
+{"return": {}}
103
+image: TEST_IMG
104
+file format: IMGFMT
105
+virtual size: 1 MiB (1048576 bytes)
106
+cluster_size: 65536
107
+backing file: TEST_DIR/PID-base
108
+backing file format: IMGFMT
109
+Format specific information:
110
+ compat: 1.1
111
+ compression type: zlib
112
+ lazy refcounts: false
113
+ refcount bits: 16
114
+ corrupt: false
115
+ extended l2: false
116
+
117
+read 1048576/1048576 bytes at offset 0
118
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
119
+
120
+read 1048576/1048576 bytes at offset 1048576
121
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
122
+
123
== Resize tests ==
124
=== preallocation=off ===
125
Formatting 'TEST_DIR/PID-base', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=6442450944 lazy_refcounts=off refcount_bits=16
48
--
126
--
49
2.24.1
127
2.26.2
50
128
diff view generated by jsdifflib
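The in-process receive handler above buffers incoming characters until a full newline-terminated command is available and only then hands it to the command processor. A self-contained sketch of that line-buffering idea, without GLib and with invented names:

/*
 * Illustrative only: newline-buffered dispatch of commands that may
 * arrive split across several calls.
 */
#include <stdio.h>
#include <string.h>

static void process_line(const char *line)
{
    printf("dispatch: %s\n", line);
}

/* Accumulate arbitrary chunks; dispatch each time a full line arrives. */
static void inproc_recv(const char *chunk)
{
    static char buf[256];
    static size_t len;

    for (const char *p = chunk; *p; p++) {
        if (*p == '\n') {
            buf[len] = '\0';
            process_line(buf);
            len = 0;
        } else if (len < sizeof(buf) - 1) {
            buf[len++] = *p;
        }
    }
}

int main(void)
{
    /* A command may arrive split across several calls. */
    inproc_recv("writeb 0x100");
    inproc_recv(" 0xab\n");
    inproc_recv("readb 0x100\nreadb 0x101\n");
    return 0;
}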