The following changes since commit 9ac5df20f51fabcba0d902025df4bd7ea987c158:

  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20200221-1' into staging (2020-02-21 16:18:38 +0000)

are available in the Git repository at:

  https://github.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to e5c59355ae9f724777c61c859292ec9db2c8c2ab:

  fuzz: add documentation to docs/devel/ (2020-02-22 08:26:48 +0000)

----------------------------------------------------------------
Pull request

This pull request contains a virtio-blk/scsi performance optimization, event
loop scalability improvements, and a qtest-based device fuzzing framework. I
am including the fuzzing patches because I have reviewed them and Thomas Huth
is currently away on leave.

----------------------------------------------------------------

Alexander Bulekov (22):
  softmmu: move vl.c to softmmu/
  softmmu: split off vl.c:main() into main.c
  module: check module wasn't already initialized
  fuzz: add FUZZ_TARGET module type
  qtest: add qtest_server_send abstraction
  libqtest: add a layer of abstraction to send/recv
  libqtest: make bufwrite rely on the TransportOps
  qtest: add in-process incoming command handler
  libqos: rename i2c_send and i2c_recv
  libqos: split qos-test and libqos makefile vars
  libqos: move useful qos-test funcs to qos_external
  fuzz: add fuzzer skeleton
  exec: keep ram block across fork when using qtest
  main: keep rcu_atfork callback enabled for qtest
  fuzz: support for fork-based fuzzing.
  fuzz: add support for qos-assisted fuzz targets
  fuzz: add target/fuzz makefile rules
  fuzz: add configure flag --enable-fuzzing
  fuzz: add i440fx fuzz targets
  fuzz: add virtio-net fuzz target
  fuzz: add virtio-scsi fuzz target
  fuzz: add documentation to docs/devel/

Denis Plotnikov (1):
  virtio: increase virtqueue size for virtio-scsi and virtio-blk

Paolo Bonzini (1):
  rcu_queue: add QSLIST functions

Stefan Hajnoczi (7):
  aio-posix: avoid reacquiring rcu_read_lock() when polling
  util/async: make bh_aio_poll() O(1)
  aio-posix: fix use after leaving scope in aio_poll()
  aio-posix: don't pass ns timeout to epoll_wait()
  qemu/queue.h: add QLIST_SAFE_REMOVE()
  aio-posix: make AioHandler deletion O(1)
  aio-posix: make AioHandler dispatch O(1) with epoll

 MAINTAINERS                         |  11 +-
 Makefile                            |  15 +-
 Makefile.objs                       |   2 -
 Makefile.target                     |  19 ++-
 block.c                             |   5 +-
 chardev/spice.c                     |   4 +-
 configure                           |  39 +++++
 docs/devel/fuzzing.txt              | 116 ++++++++++++++
 exec.c                              |  12 +-
 hw/block/virtio-blk.c               |   2 +-
 hw/core/machine.c                   |   2 +
 hw/scsi/virtio-scsi.c               |   2 +-
 include/block/aio.h                 |  26 ++-
 include/qemu/module.h               |   4 +-
 include/qemu/queue.h                |  32 +++-
 include/qemu/rcu_queue.h            |  47 ++++++
 include/sysemu/qtest.h              |   4 +
 include/sysemu/sysemu.h             |   4 +
 qtest.c                             |  31 +++-
 scripts/checkpatch.pl               |   2 +-
 scripts/get_maintainer.pl           |   3 +-
 softmmu/Makefile.objs               |   3 +
 softmmu/main.c                      |  53 +++++++
 vl.c => softmmu/vl.c                |  48 +++---
 tests/Makefile.include              |   2 +
 tests/qtest/Makefile.include        |  72 +++++----
 tests/qtest/fuzz/Makefile.include   |  18 +++
 tests/qtest/fuzz/fork_fuzz.c        |  55 +++++++
 tests/qtest/fuzz/fork_fuzz.h        |  23 +++
 tests/qtest/fuzz/fork_fuzz.ld       |  37 +++++
 tests/qtest/fuzz/fuzz.c             | 179 +++++++++++++++++++++
 tests/qtest/fuzz/fuzz.h             |  95 +++++++++++
 tests/qtest/fuzz/i440fx_fuzz.c      | 193 ++++++++++++++++++++++
 tests/qtest/fuzz/qos_fuzz.c         | 234 +++++++++++++++++++++++++++
 tests/qtest/fuzz/qos_fuzz.h         |  33 ++++
 tests/qtest/fuzz/virtio_net_fuzz.c  | 198 +++++++++++++++++++++++
 tests/qtest/fuzz/virtio_scsi_fuzz.c | 213 +++++++++++++++++++++++++
 tests/qtest/libqos/i2c.c            |  10 +-
 tests/qtest/libqos/i2c.h            |   4 +-
 tests/qtest/libqos/qos_external.c   | 168 ++++++++++++++++++++
 tests/qtest/libqos/qos_external.h   |  28 ++++
 tests/qtest/libqtest.c              | 119 ++++++++++++--
 tests/qtest/libqtest.h              |   4 +
 tests/qtest/pca9552-test.c          |  10 +-
 tests/qtest/qos-test.c              | 132 +---------------
 tests/test-aio.c                    |   3 +-
 tests/test-rcu-list.c               |  16 ++
 tests/test-rcu-slist.c              |   2 +
 util/aio-posix.c                    | 187 +++++++++++++++-------
 util/async.c                        | 237 ++++++++++++++++------------
 util/module.c                       |   7 +
 51 files changed, 2365 insertions(+), 400 deletions(-)
 create mode 100644 docs/devel/fuzzing.txt
 create mode 100644 softmmu/Makefile.objs
 create mode 100644 softmmu/main.c
 rename vl.c => softmmu/vl.c (99%)
 create mode 100644 tests/qtest/fuzz/Makefile.include
 create mode 100644 tests/qtest/fuzz/fork_fuzz.c
 create mode 100644 tests/qtest/fuzz/fork_fuzz.h
 create mode 100644 tests/qtest/fuzz/fork_fuzz.ld
 create mode 100644 tests/qtest/fuzz/fuzz.c
 create mode 100644 tests/qtest/fuzz/fuzz.h
 create mode 100644 tests/qtest/fuzz/i440fx_fuzz.c
 create mode 100644 tests/qtest/fuzz/qos_fuzz.c
 create mode 100644 tests/qtest/fuzz/qos_fuzz.h
 create mode 100644 tests/qtest/fuzz/virtio_net_fuzz.c
 create mode 100644 tests/qtest/fuzz/virtio_scsi_fuzz.c
 create mode 100644 tests/qtest/libqos/qos_external.c
 create mode 100644 tests/qtest/libqos/qos_external.h
 create mode 100644 tests/test-rcu-slist.c

--
2.24.1

The following changes since commit 56f9e46b841c7be478ca038d8d4085d776ab4b0d:

  Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2017-02-20' into staging (2017-02-20 17:42:47 +0000)

are available in the git repository at:

  git://github.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to a7b91d35bab97a2d3e779d0c64c9b837b52a6cf7:

  coroutine-lock: make CoRwlock thread-safe and fair (2017-02-21 11:39:40 +0000)

----------------------------------------------------------------
Pull request

v2:
 * Rebased to resolve scsi conflicts

----------------------------------------------------------------

Paolo Bonzini (24):
  block: move AioContext, QEMUTimer, main-loop to libqemuutil
  aio: introduce aio_co_schedule and aio_co_wake
  block-backend: allow blk_prw from coroutine context
  test-thread-pool: use generic AioContext infrastructure
  io: add methods to set I/O handlers on AioContext
  io: make qio_channel_yield aware of AioContexts
  nbd: convert to use qio_channel_yield
  coroutine-lock: reschedule coroutine on the AioContext it was running on
  blkdebug: reschedule coroutine on the AioContext it is running on
  qed: introduce qed_aio_start_io and qed_aio_next_io_cb
  aio: push aio_context_acquire/release down to dispatching
  block: explicitly acquire aiocontext in timers that need it
  block: explicitly acquire aiocontext in callbacks that need it
  block: explicitly acquire aiocontext in bottom halves that need it
  block: explicitly acquire aiocontext in aio callbacks that need it
  aio-posix: partially inline aio_dispatch into aio_poll
  async: remove unnecessary inc/dec pairs
  block: document fields protected by AioContext lock
  coroutine-lock: make CoMutex thread-safe
  coroutine-lock: add limited spinning to CoMutex
  test-aio-multithread: add performance comparison with thread-based mutexes
  coroutine-lock: place CoMutex before CoQueue in header
  coroutine-lock: add mutex argument to CoQueue APIs
  coroutine-lock: make CoRwlock thread-safe and fair

 Makefile.objs                       |   4 -
 stubs/Makefile.objs                 |   1 +
 tests/Makefile.include              |  19 +-
 util/Makefile.objs                  |   6 +-
 block/nbd-client.h                  |   2 +-
 block/qed.h                         |   3 +
 include/block/aio.h                 |  38 ++-
 include/block/block_int.h           |  64 +++--
 include/io/channel.h                |  72 +++++-
 include/qemu/coroutine.h            |  84 ++++---
 include/qemu/coroutine_int.h        |  11 +-
 include/sysemu/block-backend.h      |  14 +-
 tests/iothread.h                    |  25 ++
 block/backup.c                      |   2 +-
 block/blkdebug.c                    |   9 +-
 block/blkreplay.c                   |   2 +-
 block/block-backend.c               |  13 +-
 block/curl.c                        |  44 +++-
 block/gluster.c                     |   9 +-
 block/io.c                          |  42 +---
 block/iscsi.c                       |  15 +-
 block/linux-aio.c                   |  10 +-
 block/mirror.c                      |  12 +-
 block/nbd-client.c                  | 119 +++++----
 block/nfs.c                         |   9 +-
 block/qcow2-cluster.c               |   4 +-
 block/qed-cluster.c                 |   2 +
 block/qed-table.c                   |  12 +-
 block/qed.c                         |  58 +++--
 block/sheepdog.c                    |  31 +--
 block/ssh.c                         |  29 +--
 block/throttle-groups.c             |   4 +-
 block/win32-aio.c                   |   9 +-
 dma-helpers.c                       |   2 +
 hw/9pfs/9p.c                        |   2 +-
 hw/block/virtio-blk.c               |  19 +-
 hw/scsi/scsi-bus.c                  |   2 +
 hw/scsi/scsi-disk.c                 |  15 ++
 hw/scsi/scsi-generic.c              |  20 +-
 hw/scsi/virtio-scsi.c               |   7 +
 io/channel-command.c                |  13 +
 io/channel-file.c                   |  11 +
 io/channel-socket.c                 |  16 +-
 io/channel-tls.c                    |  12 +
 io/channel-watch.c                  |   6 +
 io/channel.c                        |  97 ++++++--
 nbd/client.c                        |   2 +-
 nbd/common.c                        |   9 +-
 nbd/server.c                        |  94 +++----
 stubs/linux-aio.c                   |  32 +++
 stubs/set-fd-handler.c              |  11 -
 tests/iothread.c                    |  91 +++++++
 tests/test-aio-multithread.c        | 463 ++++++++++++++++++++++++++++++
 tests/test-thread-pool.c            |  12 +-
 aio-posix.c => util/aio-posix.c     |  62 ++---
 aio-win32.c => util/aio-win32.c     |  30 +--
 util/aiocb.c                        |  55 +++++
 async.c => util/async.c             |  84 ++++++-
 iohandler.c => util/iohandler.c     |   0
 main-loop.c => util/main-loop.c     |   0
 util/qemu-coroutine-lock.c          | 254 ++++++++++++++++++--
 util/qemu-coroutine-sleep.c         |   2 +-
 util/qemu-coroutine.c               |   8 +
 qemu-timer.c => util/qemu-timer.c   |   0
 thread-pool.c => util/thread-pool.c |   8 +-
 trace-events                        |  11 -
 util/trace-events                   |  17 +-
 67 files changed, 1712 insertions(+), 533 deletions(-)
 create mode 100644 tests/iothread.h
 create mode 100644 stubs/linux-aio.c
 create mode 100644 tests/iothread.c
 create mode 100644 tests/test-aio-multithread.c
 rename aio-posix.c => util/aio-posix.c (94%)
 rename aio-win32.c => util/aio-win32.c (95%)
 create mode 100644 util/aiocb.c
 rename async.c => util/async.c (82%)
 rename iohandler.c => util/iohandler.c (100%)
 rename main-loop.c => util/main-loop.c (100%)
 rename qemu-timer.c => util/qemu-timer.c (100%)
 rename thread-pool.c => util/thread-pool.c (97%)

--
2.9.3
From: Alexander Bulekov <alxndr@bu.edu>

A program might rely on functions implemented in vl.c, but implement its
own main(). By placing main() in a separate source file, there are no
complaints about duplicate main()s when linking against vl.o. For
example, the virtual-device fuzzer uses a main() provided by libfuzzer,
and needs to perform some initialization before running the softmmu
initialization. Now, main() simply calls three vl.c functions, which
handle the guest initialization, main loop, and cleanup.

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Message-id: 20200220041118.23264-3-alxndr@bu.edu
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 MAINTAINERS             |  1 +
 Makefile.target         |  2 +-
 include/sysemu/sysemu.h |  4 ++++
 softmmu/Makefile.objs   |  1 +
 softmmu/main.c          | 53 +++++++++++++++++++++++++++++++++++++++++
 softmmu/vl.c            | 36 +++++++--------------------
 6 files changed, 69 insertions(+), 28 deletions(-)
 create mode 100644 softmmu/main.c

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/sysemu/runstate.h
 F: util/main-loop.c
 F: util/qemu-timer.c
 F: softmmu/vl.c
+F: softmmu/main.c
 F: qapi/run-state.json
 
 Human Monitor (HMP)
diff --git a/Makefile.target b/Makefile.target
index XXXXXXX..XXXXXXX 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -XXX,XX +XXX,XX @@ endif
 COMMON_LDADDS = ../libqemuutil.a
 
 # build either PROG or PROGW
-$(QEMU_PROG_BUILD): $(all-obj-y) $(COMMON_LDADDS)
+$(QEMU_PROG_BUILD): $(all-obj-y) $(COMMON_LDADDS) $(softmmu-main-y)
 	$(call LINK, $(filter-out %.mak, $^))
 ifdef CONFIG_DARWIN
 	$(call quiet-command,Rez -append $(SRC_PATH)/pc-bios/qemu.rsrc -o $@,"REZ","$(TARGET_DIR)$@")
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -XXX,XX +XXX,XX @@ QemuOpts *qemu_get_machine_opts(void);
 
 bool defaults_enabled(void);
 
+void qemu_init(int argc, char **argv, char **envp);
+void qemu_main_loop(void);
+void qemu_cleanup(void);
+
 extern QemuOptsList qemu_legacy_drive_opts;
 extern QemuOptsList qemu_common_drive_opts;
 extern QemuOptsList qemu_drive_opts;
diff --git a/softmmu/Makefile.objs b/softmmu/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/Makefile.objs
+++ b/softmmu/Makefile.objs
@@ -XXX,XX +XXX,XX @@
+softmmu-main-y = softmmu/main.o
 obj-y += vl.o
 vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
diff --git a/softmmu/main.c b/softmmu/main.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/softmmu/main.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU System Emulator
+ *
+ * Copyright (c) 2003-2020 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
...
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "sysemu/sysemu.h"
+
+#ifdef CONFIG_SDL
+#if defined(__APPLE__) || defined(main)
+#include <SDL.h>
+int main(int argc, char **argv)
+{
+    return qemu_main(argc, argv, NULL);
+}
+#undef main
+#define main qemu_main
+#endif
+#endif /* CONFIG_SDL */
+
+#ifdef CONFIG_COCOA
+#undef main
+#define main qemu_main
+#endif /* CONFIG_COCOA */
+
+int main(int argc, char **argv, char **envp)
+{
+    qemu_init(argc, argv, envp);
+    qemu_main_loop();
+    qemu_cleanup();
+
+    return 0;
+}
diff --git a/softmmu/vl.c b/softmmu/vl.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/seccomp.h"
 #include "sysemu/tcg.h"
 
-#ifdef CONFIG_SDL
-#if defined(__APPLE__) || defined(main)
-#include <SDL.h>
-int qemu_main(int argc, char **argv, char **envp);
-int main(int argc, char **argv)
-{
-    return qemu_main(argc, argv, NULL);
-}
-#undef main
-#define main qemu_main
-#endif
-#endif /* CONFIG_SDL */
-
-#ifdef CONFIG_COCOA
-#undef main
-#define main qemu_main
-#endif /* CONFIG_COCOA */
-
-
 #include "qemu/error-report.h"
 #include "qemu/sockets.h"
 #include "sysemu/accel.h"
@@ -XXX,XX +XXX,XX @@ static bool main_loop_should_exit(void)
     return false;
 }
 
-static void main_loop(void)
+void qemu_main_loop(void)
 {
 #ifdef CONFIG_PROFILER
     int64_t ti;
@@ -XXX,XX +XXX,XX @@ static void configure_accelerators(const char *progname)
     }
 }
 
-int main(int argc, char **argv, char **envp)
+void qemu_init(int argc, char **argv, char **envp)
 {
     int i;
     int snapshot, linux_boot;
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
         case QEMU_OPTION_watchdog:
             if (watchdog) {
                 error_report("only one watchdog option may be given");
-                return 1;
+                exit(1);
             }
             watchdog = optarg;
             break;
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
     parse_numa_opts(current_machine);
 
     /* do monitor/qmp handling at preconfig state if requested */
-    main_loop();
+    qemu_main_loop();
 
     audio_init_audiodevs();
 
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
     if (vmstate_dump_file) {
         /* dump and exit */
         dump_vmstate_json_to_file(vmstate_dump_file);
-        return 0;
+        exit(0);
     }
 
     if (incoming) {
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
     accel_setup_post(current_machine);
     os_setup_post();
 
-    main_loop();
+    return;
+}
 
+void qemu_cleanup(void)
+{
     gdbserver_cleanup();
 
     /*
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
     qemu_chr_cleanup();
     user_creatable_cleanup();
     /* TODO: unref root container, check all devices are ok */
-
-    return 0;
 }
--
2.24.1

From: Paolo Bonzini <pbonzini@redhat.com>

AioContext is fairly self-contained; the only dependency is QEMUTimer,
but that in turn doesn't need anything else. So move them out of block-obj-y
to avoid introducing a dependency from io/ to block-obj-y.

main-loop and its dependency iohandler also need to be moved, because
later in this series io/ will call iohandler_get_aio_context.

[Changed copyright "the QEMU team" to "other QEMU contributors" as
suggested by Daniel Berrange and agreed by Paolo.
--Stefan]

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20170213135235.12274-2-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 Makefile.objs                       |  4 ---
 stubs/Makefile.objs                 |  1 +
 tests/Makefile.include              | 11 ++++---
 util/Makefile.objs                  |  6 +++-
 block/io.c                          | 29 -------------------
 stubs/linux-aio.c                   | 32 +++++++++++++++++++++
 stubs/set-fd-handler.c              | 11 --------
 aio-posix.c => util/aio-posix.c     |  2 +-
 aio-win32.c => util/aio-win32.c     |  0
 util/aiocb.c                        | 55 +++++++++++++++++++++++++++++++++++++
 async.c => util/async.c             |  3 +-
 iohandler.c => util/iohandler.c     |  0
 main-loop.c => util/main-loop.c     |  0
 qemu-timer.c => util/qemu-timer.c   |  0
 thread-pool.c => util/thread-pool.c |  2 +-
 trace-events                        | 11 -
 util/trace-events                   | 11 ++++++++
 17 files changed, 114 insertions(+), 64 deletions(-)
 create mode 100644 stubs/linux-aio.c
 rename aio-posix.c => util/aio-posix.c (99%)
 rename aio-win32.c => util/aio-win32.c (100%)
 create mode 100644 util/aiocb.c
 rename async.c => util/async.c (99%)
 rename iohandler.c => util/iohandler.c (100%)
 rename main-loop.c => util/main-loop.c (100%)
 rename qemu-timer.c => util/qemu-timer.c (100%)
 rename thread-pool.c => util/thread-pool.c (99%)

diff --git a/Makefile.objs b/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -XXX,XX +XXX,XX @@ chardev-obj-y = chardev/
 #######################################################################
 # block-obj-y is code used by both qemu system emulation and qemu-img
 
-block-obj-y = async.o thread-pool.o
 block-obj-y += nbd/
 block-obj-y += block.o blockjob.o
-block-obj-y += main-loop.o iohandler.o qemu-timer.o
-block-obj-$(CONFIG_POSIX) += aio-posix.o
-block-obj-$(CONFIG_WIN32) += aio-win32.o
 block-obj-y += block/
 block-obj-y += qemu-io-cmds.o
 block-obj-$(CONFIG_REPLICATION) += replication.o
diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/stubs/Makefile.objs
+++ b/stubs/Makefile.objs
@@ -XXX,XX +XXX,XX @@ stub-obj-y += get-vm-name.o
 stub-obj-y += iothread.o
 stub-obj-y += iothread-lock.o
 stub-obj-y += is-daemonized.o
+stub-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
 stub-obj-y += machine-init-done.o
 stub-obj-y += migr-blocker.o
 stub-obj-y += monitor.o
diff --git a/tests/Makefile.include b/tests/Makefile.include
index XXXXXXX..XXXXXXX 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -XXX,XX +XXX,XX @@ check-unit-y += tests/test-visitor-serialization$(EXESUF)
 check-unit-y += tests/test-iov$(EXESUF)
 gcov-files-test-iov-y = util/iov.c
 check-unit-y += tests/test-aio$(EXESUF)
+gcov-files-test-aio-y = util/async.c util/qemu-timer.o
+gcov-files-test-aio-$(CONFIG_WIN32) += util/aio-win32.c
+gcov-files-test-aio-$(CONFIG_POSIX) += util/aio-posix.c
 check-unit-y += tests/test-throttle$(EXESUF)
 gcov-files-test-aio-$(CONFIG_WIN32) = aio-win32.c
 gcov-files-test-aio-$(CONFIG_POSIX) = aio-posix.c
@@ -XXX,XX +XXX,XX @@ tests/check-qjson$(EXESUF): tests/check-qjson.o $(test-util-obj-y)
 tests/check-qom-interface$(EXESUF): tests/check-qom-interface.o $(test-qom-obj-y)
 tests/check-qom-proplist$(EXESUF): tests/check-qom-proplist.o $(test-qom-obj-y)
 
-tests/test-char$(EXESUF): tests/test-char.o qemu-timer.o \
-	$(test-util-obj-y) $(qtest-obj-y) $(test-block-obj-y) $(chardev-obj-y)
+tests/test-char$(EXESUF): tests/test-char.o $(test-util-obj-y) $(qtest-obj-y) $(test-io-obj-y) $(chardev-obj-y)
 tests/test-coroutine$(EXESUF): tests/test-coroutine.o $(test-block-obj-y)
 tests/test-aio$(EXESUF): tests/test-aio.o $(test-block-obj-y)
 tests/test-throttle$(EXESUF): tests/test-throttle.o $(test-block-obj-y)
@@ -XXX,XX +XXX,XX @@ tests/test-vmstate$(EXESUF): tests/test-vmstate.o \
 	migration/vmstate.o migration/qemu-file.o \
 	migration/qemu-file-channel.o migration/qjson.o \
 	$(test-io-obj-y)
-tests/test-timed-average$(EXESUF): tests/test-timed-average.o qemu-timer.o \
-	$(test-util-obj-y)
+tests/test-timed-average$(EXESUF): tests/test-timed-average.o $(test-util-obj-y)
 tests/test-base64$(EXESUF): tests/test-base64.o \
 	libqemuutil.a libqemustub.a
 tests/ptimer-test$(EXESUF): tests/ptimer-test.o tests/ptimer-test-stubs.o hw/core/ptimer.o libqemustub.a
@@ -XXX,XX +XXX,XX @@ tests/usb-hcd-ehci-test$(EXESUF): tests/usb-hcd-ehci-test.o $(libqos-usb-obj-y)
 tests/usb-hcd-xhci-test$(EXESUF): tests/usb-hcd-xhci-test.o $(libqos-usb-obj-y)
 tests/pc-cpu-test$(EXESUF): tests/pc-cpu-test.o
 tests/postcopy-test$(EXESUF): tests/postcopy-test.o
-tests/vhost-user-test$(EXESUF): tests/vhost-user-test.o qemu-timer.o \
+tests/vhost-user-test$(EXESUF): tests/vhost-user-test.o $(test-util-obj-y) \
 	$(qtest-obj-y) $(test-io-obj-y) $(libqos-virtio-obj-y) $(libqos-pc-obj-y) \
 	$(chardev-obj-y)
 tests/qemu-iotests/socket_scm_helper$(EXESUF): tests/qemu-iotests/socket_scm_helper.o
diff --git a/util/Makefile.objs b/util/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/util/Makefile.objs
+++ b/util/Makefile.objs
@@ -XXX,XX +XXX,XX @@
 util-obj-y = osdep.o cutils.o unicode.o qemu-timer-common.o
 util-obj-y += bufferiszero.o
 util-obj-y += lockcnt.o
+util-obj-y += aiocb.o async.o thread-pool.o qemu-timer.o
+util-obj-y += main-loop.o iohandler.o
+util-obj-$(CONFIG_POSIX) += aio-posix.o
 util-obj-$(CONFIG_POSIX) += compatfd.o
 util-obj-$(CONFIG_POSIX) += event_notifier-posix.o
 util-obj-$(CONFIG_POSIX) += mmap-alloc.o
 util-obj-$(CONFIG_POSIX) += oslib-posix.o
 util-obj-$(CONFIG_POSIX) += qemu-openpty.o
 util-obj-$(CONFIG_POSIX) += qemu-thread-posix.o
-util-obj-$(CONFIG_WIN32) += event_notifier-win32.o
 util-obj-$(CONFIG_POSIX) += memfd.o
+util-obj-$(CONFIG_WIN32) += aio-win32.o
+util-obj-$(CONFIG_WIN32) += event_notifier-win32.o
 util-obj-$(CONFIG_WIN32) += oslib-win32.o
 util-obj-$(CONFIG_WIN32) += qemu-thread-win32.o
 util-obj-y += envlist.o path.o module.o
diff --git a/block/io.c b/block/io.c
index XXXXXXX..XXXXXXX 100644
--- a/block/io.c
+++ b/block/io.c
@@ -XXX,XX +XXX,XX @@ BlockAIOCB *bdrv_aio_flush(BlockDriverState *bs,
     return &acb->common;
 }
 
-void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
-                   BlockCompletionFunc *cb, void *opaque)
-{
-    BlockAIOCB *acb;
-
-    acb = g_malloc(aiocb_info->aiocb_size);
-    acb->aiocb_info = aiocb_info;
-    acb->bs = bs;
-    acb->cb = cb;
-    acb->opaque = opaque;
-    acb->refcnt = 1;
-    return acb;
-}
-
-void qemu_aio_ref(void *p)
-{
-    BlockAIOCB *acb = p;
-    acb->refcnt++;
-}
-
-void qemu_aio_unref(void *p)
-{
-    BlockAIOCB *acb = p;
-    assert(acb->refcnt > 0);
-    if (--acb->refcnt == 0) {
-        g_free(acb);
-    }
-}
-
 /**************************************************************/
 /* Coroutine block device emulation */
 
diff --git a/stubs/linux-aio.c b/stubs/linux-aio.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/stubs/linux-aio.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Linux native AIO support.
+ *
+ * Copyright (C) 2009 IBM, Corp.
+ * Copyright (C) 2009 Red Hat, Inc.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#include "qemu/osdep.h"
+#include "block/aio.h"
+#include "block/raw-aio.h"
+
+void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
+{
+    abort();
+}
+
+void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
+{
+    abort();
+}
+
+LinuxAioState *laio_init(void)
+{
+    abort();
+}
+
+void laio_cleanup(LinuxAioState *s)
+{
+    abort();
+}
diff --git a/stubs/set-fd-handler.c b/stubs/set-fd-handler.c
index XXXXXXX..XXXXXXX 100644
--- a/stubs/set-fd-handler.c
+++ b/stubs/set-fd-handler.c
@@ -XXX,XX +XXX,XX @@ void qemu_set_fd_handler(int fd,
 {
     abort();
 }
-
-void aio_set_fd_handler(AioContext *ctx,
-                        int fd,
-                        bool is_external,
-                        IOHandler *io_read,
-                        IOHandler *io_write,
-                        AioPollFn *io_poll,
-                        void *opaque)
-{
-    abort();
-}
diff --git a/aio-posix.c b/util/aio-posix.c
similarity index 99%
rename from aio-posix.c
rename to util/aio-posix.c
index XXXXXXX..XXXXXXX 100644
--- a/aio-posix.c
+++ b/util/aio-posix.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/rcu_queue.h"
 #include "qemu/sockets.h"
 #include "qemu/cutils.h"
-#include "trace-root.h"
+#include "trace.h"
 #ifdef CONFIG_EPOLL_CREATE1
 #include <sys/epoll.h>
 #endif
diff --git a/aio-win32.c b/util/aio-win32.c
similarity index 100%
rename from aio-win32.c
rename to util/aio-win32.c
diff --git a/util/aiocb.c b/util/aiocb.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/util/aiocb.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * BlockAIOCB allocation
+ *
+ * Copyright (c) 2003-2017 Fabrice Bellard and other QEMU contributors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
...
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "block/aio.h"
+
+void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
+                   BlockCompletionFunc *cb, void *opaque)
+{
+    BlockAIOCB *acb;
+
+    acb = g_malloc(aiocb_info->aiocb_size);
+    acb->aiocb_info = aiocb_info;
+    acb->bs = bs;
+    acb->cb = cb;
+    acb->opaque = opaque;
+    acb->refcnt = 1;
+    return acb;
+}
+
+void qemu_aio_ref(void *p)
+{
+    BlockAIOCB *acb = p;
+    acb->refcnt++;
+}
+
+void qemu_aio_unref(void *p)
+{
+    BlockAIOCB *acb = p;
+    assert(acb->refcnt > 0);
+    if (--acb->refcnt == 0) {
+        g_free(acb);
+    }
+}
diff --git a/async.c b/util/async.c
similarity index 99%
rename from async.c
rename to util/async.c
index XXXXXXX..XXXXXXX 100644
--- a/async.c
+++ b/util/async.c
@@ -XXX,XX +XXX,XX @@
 /*
- * QEMU System Emulator
+ * Data plane event loop
  *
  * Copyright (c) 2003-2008 Fabrice Bellard
+ * Copyright (c) 2009-2017 QEMU contributors
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to deal
diff --git a/iohandler.c b/util/iohandler.c
similarity index 100%
rename from iohandler.c
rename to util/iohandler.c
diff --git a/main-loop.c b/util/main-loop.c
similarity index 100%
rename from main-loop.c
rename to util/main-loop.c
diff --git a/qemu-timer.c b/util/qemu-timer.c
similarity index 100%
rename from qemu-timer.c
rename to util/qemu-timer.c
diff --git a/thread-pool.c b/util/thread-pool.c
similarity index 99%
rename from thread-pool.c
rename to util/thread-pool.c
index XXXXXXX..XXXXXXX 100644
--- a/thread-pool.c
+++ b/util/thread-pool.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/queue.h"
 #include "qemu/thread.h"
 #include "qemu/coroutine.h"
-#include "trace-root.h"
+#include "trace.h"
 #include "block/thread-pool.h"
 #include "qemu/main-loop.h"
 
diff --git a/trace-events b/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/trace-events
+++ b/trace-events
@@ -XXX,XX +XXX,XX @@
 #
 # The <format-string> should be a sprintf()-compatible format string.
 
-# aio-posix.c
-run_poll_handlers_begin(void *ctx, int64_t max_ns) "ctx %p max_ns %"PRId64
-run_poll_handlers_end(void *ctx, bool progress) "ctx %p progress %d"
-poll_shrink(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
-poll_grow(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
-
-# thread-pool.c
-thread_pool_submit(void *pool, void *req, void *opaque) "pool %p req %p opaque %p"
-thread_pool_complete(void *pool, void *req, void *opaque, int ret) "pool %p req %p opaque %p ret %d"
-thread_pool_cancel(void *req, void *opaque) "req %p opaque %p"
-
 # ioport.c
 cpu_in(unsigned int addr, char size, unsigned int val) "addr %#x(%c) value %u"
 cpu_out(unsigned int addr, char size, unsigned int val) "addr %#x(%c) value %u"
diff --git a/util/trace-events b/util/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/util/trace-events
+++ b/util/trace-events
@@ -XXX,XX +XXX,XX @@
 # See docs/tracing.txt for syntax documentation.
 
+# util/aio-posix.c
+run_poll_handlers_begin(void *ctx, int64_t max_ns) "ctx %p max_ns %"PRId64
+run_poll_handlers_end(void *ctx, bool progress) "ctx %p progress %d"
+poll_shrink(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
+poll_grow(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
+
+# util/thread-pool.c
+thread_pool_submit(void *pool, void *req, void *opaque) "pool %p req %p opaque %p"
+thread_pool_complete(void *pool, void *req, void *opaque, int ret) "pool %p req %p opaque %p ret %d"
+thread_pool_cancel(void *req, void *opaque) "req %p opaque %p"
+
 # util/buffer.c
 buffer_resize(const char *buf, size_t olen, size_t len) "%s: old %zd, new %zd"
 buffer_move_empty(const char *buf, size_t len, const char *from) "%s: %zd bytes from %s"
--
2.9.3
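
The practical effect of the libqemuutil move above is that the event loop no
longer drags in the block layer. As a rough illustration only (not part of
the series; build wiring and error handling are assumed), a standalone
program can now drive an AioContext using just the utility code, with the
same aio_context_new()/aio_poll() calls that tests/iothread.c uses later in
this series:

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "block/aio.h"

    static bool fired;

    /* Bottom half: runs once, inside the aio_poll() loop below. */
    static void hello_bh(void *opaque)
    {
        fired = true;
    }

    int main(void)
    {
        /* After this patch, AioContext comes from libqemuutil alone. */
        AioContext *ctx = aio_context_new(&error_abort);

        aio_bh_schedule_oneshot(ctx, hello_bh, NULL);
        while (!fired) {
            aio_poll(ctx, true);    /* block until the bottom half has run */
        }

        aio_context_unref(ctx);
        return 0;
    }
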
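Taken together with "softmmu: split off vl.c:main() into main.c" above, an
embedder can now wrap its own logic around the three exported entry points.
A hedged sketch of such a custom main() — my_tool_setup() is a made-up
placeholder for whatever pre-initialization the embedder needs:

    #include "qemu/osdep.h"
    #include "sysemu/sysemu.h"

    /* Placeholder: embedder-specific setup would go here. */
    static void my_tool_setup(void)
    {
    }

    /* Hypothetical embedder entry point, linked against vl.o. */
    int main(int argc, char **argv, char **envp)
    {
        my_tool_setup();
        qemu_init(argc, argv, envp);  /* option parsing, machine creation */
        qemu_main_loop();             /* runs until a shutdown request */
        qemu_cleanup();               /* orderly teardown */
        return 0;
    }

The fuzzer patch that follows does essentially this, except that libfuzzer
owns main() and qemu_init() is instead called from LLVMFuzzerInitialize().
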
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | tests/fuzz/fuzz.c serves as the entry point for the virtual-device | 3 | aio_co_wake provides the infrastructure to start a coroutine on a "home" |
4 | fuzzer. Namely, libfuzzer invokes the LLVMFuzzerInitialize and | 4 | AioContext. It will be used by CoMutex and CoQueue, so that coroutines |
5 | LLVMFuzzerTestOneInput functions, both of which are defined in this | 5 | don't jump from one context to another when they go to sleep on a |
6 | file. This change adds a "FuzzTarget" struct, along with the | 6 | mutex or waitqueue. However, it can also be used as a more efficient |
7 | fuzz_add_target function, which should be used to define new fuzz | 7 | alternative to one-shot bottom halves, and saves the effort of tracking |
8 | targets. | 8 | which AioContext a coroutine is running on. |
9 | 9 | ||
10 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 10 | aio_co_schedule is the part of aio_co_wake that starts a coroutine |
11 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 11 | on a remove AioContext, but it is also useful to implement e.g. |
12 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 12 | bdrv_set_aio_context callbacks. |
13 | Message-id: 20200220041118.23264-13-alxndr@bu.edu | 13 | |
14 | The implementation of aio_co_schedule is based on a lock-free | ||
15 | multiple-producer, single-consumer queue. The multiple producers use | ||
16 | cmpxchg to add to a LIFO stack. The consumer (a per-AioContext bottom | ||
17 | half) grabs all items added so far, inverts the list to make it FIFO, | ||
18 | and goes through it one item at a time until it's empty. The data | ||
19 | structure was inspired by OSv, which uses it in the very code we'll | ||
20 | "port" to QEMU for the thread-safe CoMutex. | ||
21 | |||
22 | Most of the new code is really tests. | ||
23 | |||
24 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | ||
25 | Reviewed-by: Fam Zheng <famz@redhat.com> | ||
26 | Message-id: 20170213135235.12274-3-pbonzini@redhat.com | ||
14 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 27 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
15 | --- | 28 | --- |
16 | MAINTAINERS | 8 ++ | 29 | tests/Makefile.include | 8 +- |
17 | tests/qtest/fuzz/Makefile.include | 6 + | 30 | include/block/aio.h | 32 +++++++ |
18 | tests/qtest/fuzz/fuzz.c | 179 ++++++++++++++++++++++++++++++ | 31 | include/qemu/coroutine_int.h | 11 ++- |
19 | tests/qtest/fuzz/fuzz.h | 95 ++++++++++++++++ | 32 | tests/iothread.h | 25 +++++ |
20 | 4 files changed, 288 insertions(+) | 33 | tests/iothread.c | 91 ++++++++++++++++++ |
21 | create mode 100644 tests/qtest/fuzz/Makefile.include | 34 | tests/test-aio-multithread.c | 213 +++++++++++++++++++++++++++++++++++++++++++ |
22 | create mode 100644 tests/qtest/fuzz/fuzz.c | 35 | util/async.c | 65 +++++++++++++ |
23 | create mode 100644 tests/qtest/fuzz/fuzz.h | 36 | util/qemu-coroutine.c | 8 ++ |
24 | 37 | util/trace-events | 4 + | |
25 | diff --git a/MAINTAINERS b/MAINTAINERS | 38 | 9 files changed, 453 insertions(+), 4 deletions(-) |
39 | create mode 100644 tests/iothread.h | ||
40 | create mode 100644 tests/iothread.c | ||
41 | create mode 100644 tests/test-aio-multithread.c | ||
42 | |||
43 | diff --git a/tests/Makefile.include b/tests/Makefile.include | ||
26 | index XXXXXXX..XXXXXXX 100644 | 44 | index XXXXXXX..XXXXXXX 100644 |
27 | --- a/MAINTAINERS | 45 | --- a/tests/Makefile.include |
28 | +++ b/MAINTAINERS | 46 | +++ b/tests/Makefile.include |
29 | @@ -XXX,XX +XXX,XX @@ F: qtest.c | 47 | @@ -XXX,XX +XXX,XX @@ check-unit-y += tests/test-aio$(EXESUF) |
30 | F: accel/qtest.c | 48 | gcov-files-test-aio-y = util/async.c util/qemu-timer.o |
31 | F: tests/qtest/ | 49 | gcov-files-test-aio-$(CONFIG_WIN32) += util/aio-win32.c |
32 | 50 | gcov-files-test-aio-$(CONFIG_POSIX) += util/aio-posix.c | |
33 | +Device Fuzzing | 51 | +check-unit-y += tests/test-aio-multithread$(EXESUF) |
34 | +M: Alexander Bulekov <alxndr@bu.edu> | 52 | +gcov-files-test-aio-multithread-y = $(gcov-files-test-aio-y) |
35 | +R: Paolo Bonzini <pbonzini@redhat.com> | 53 | +gcov-files-test-aio-multithread-y += util/qemu-coroutine.c tests/iothread.c |
36 | +R: Bandan Das <bsd@redhat.com> | 54 | check-unit-y += tests/test-throttle$(EXESUF) |
37 | +R: Stefan Hajnoczi <stefanha@redhat.com> | 55 | -gcov-files-test-aio-$(CONFIG_WIN32) = aio-win32.c |
38 | +S: Maintained | 56 | -gcov-files-test-aio-$(CONFIG_POSIX) = aio-posix.c |
39 | +F: tests/qtest/fuzz/ | 57 | check-unit-y += tests/test-thread-pool$(EXESUF) |
40 | + | 58 | gcov-files-test-thread-pool-y = thread-pool.c |
41 | Register API | 59 | gcov-files-test-hbitmap-y = util/hbitmap.c |
42 | M: Alistair Francis <alistair@alistair23.me> | 60 | @@ -XXX,XX +XXX,XX @@ test-qapi-obj-y = tests/test-qapi-visit.o tests/test-qapi-types.o \ |
43 | S: Maintained | 61 | $(test-qom-obj-y) |
44 | diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include | 62 | test-crypto-obj-y = $(crypto-obj-y) $(test-qom-obj-y) |
63 | test-io-obj-y = $(io-obj-y) $(test-crypto-obj-y) | ||
64 | -test-block-obj-y = $(block-obj-y) $(test-io-obj-y) | ||
65 | +test-block-obj-y = $(block-obj-y) $(test-io-obj-y) tests/iothread.o | ||
66 | |||
67 | tests/check-qint$(EXESUF): tests/check-qint.o $(test-util-obj-y) | ||
68 | tests/check-qstring$(EXESUF): tests/check-qstring.o $(test-util-obj-y) | ||
69 | @@ -XXX,XX +XXX,XX @@ tests/check-qom-proplist$(EXESUF): tests/check-qom-proplist.o $(test-qom-obj-y) | ||
70 | tests/test-char$(EXESUF): tests/test-char.o $(test-util-obj-y) $(qtest-obj-y) $(test-io-obj-y) $(chardev-obj-y) | ||
71 | tests/test-coroutine$(EXESUF): tests/test-coroutine.o $(test-block-obj-y) | ||
72 | tests/test-aio$(EXESUF): tests/test-aio.o $(test-block-obj-y) | ||
73 | +tests/test-aio-multithread$(EXESUF): tests/test-aio-multithread.o $(test-block-obj-y) | ||
74 | tests/test-throttle$(EXESUF): tests/test-throttle.o $(test-block-obj-y) | ||
75 | tests/test-blockjob$(EXESUF): tests/test-blockjob.o $(test-block-obj-y) $(test-util-obj-y) | ||
76 | tests/test-blockjob-txn$(EXESUF): tests/test-blockjob-txn.o $(test-block-obj-y) $(test-util-obj-y) | ||
77 | diff --git a/include/block/aio.h b/include/block/aio.h | ||
78 | index XXXXXXX..XXXXXXX 100644 | ||
79 | --- a/include/block/aio.h | ||
80 | +++ b/include/block/aio.h | ||
81 | @@ -XXX,XX +XXX,XX @@ typedef void QEMUBHFunc(void *opaque); | ||
82 | typedef bool AioPollFn(void *opaque); | ||
83 | typedef void IOHandler(void *opaque); | ||
84 | |||
85 | +struct Coroutine; | ||
86 | struct ThreadPool; | ||
87 | struct LinuxAioState; | ||
88 | |||
89 | @@ -XXX,XX +XXX,XX @@ struct AioContext { | ||
90 | bool notified; | ||
91 | EventNotifier notifier; | ||
92 | |||
93 | + QSLIST_HEAD(, Coroutine) scheduled_coroutines; | ||
94 | + QEMUBH *co_schedule_bh; | ||
95 | + | ||
96 | /* Thread pool for performing work and receiving completion callbacks. | ||
97 | * Has its own locking. | ||
98 | */ | ||
99 | @@ -XXX,XX +XXX,XX @@ static inline bool aio_node_check(AioContext *ctx, bool is_external) | ||
100 | } | ||
101 | |||
102 | /** | ||
103 | + * aio_co_schedule: | ||
104 | + * @ctx: the aio context | ||
105 | + * @co: the coroutine | ||
106 | + * | ||
107 | + * Start a coroutine on a remote AioContext. | ||
108 | + * | ||
109 | + * The coroutine must not be entered by anyone else while aio_co_schedule() | ||
110 | + * is active. In addition the coroutine must have yielded unless ctx | ||
111 | + * is the context in which the coroutine is running (i.e. the value of | ||
112 | + * qemu_get_current_aio_context() from the coroutine itself). | ||
113 | + */ | ||
114 | +void aio_co_schedule(AioContext *ctx, struct Coroutine *co); | ||
115 | + | ||
116 | +/** | ||
117 | + * aio_co_wake: | ||
118 | + * @co: the coroutine | ||
119 | + * | ||
120 | + * Restart a coroutine on the AioContext where it was running last, thus | ||
121 | + * preventing coroutines from jumping from one context to another when they | ||
122 | + * go to sleep. | ||
123 | + * | ||
124 | + * aio_co_wake may be executed either in coroutine or non-coroutine | ||
125 | + * context. The coroutine must not be entered by anyone else while | ||
126 | + * aio_co_wake() is active. | ||
127 | + */ | ||
128 | +void aio_co_wake(struct Coroutine *co); | ||
129 | + | ||
130 | +/** | ||
131 | * Return the AioContext whose event loop runs in the current thread. | ||
132 | * | ||
133 | * If called from an IOThread this will be the IOThread's AioContext. If | ||
134 | diff --git a/include/qemu/coroutine_int.h b/include/qemu/coroutine_int.h | ||
135 | index XXXXXXX..XXXXXXX 100644 | ||
136 | --- a/include/qemu/coroutine_int.h | ||
137 | +++ b/include/qemu/coroutine_int.h | ||
138 | @@ -XXX,XX +XXX,XX @@ struct Coroutine { | ||
139 | CoroutineEntry *entry; | ||
140 | void *entry_arg; | ||
141 | Coroutine *caller; | ||
142 | + | ||
143 | + /* Only used when the coroutine has terminated. */ | ||
144 | QSLIST_ENTRY(Coroutine) pool_next; | ||
145 | + | ||
146 | size_t locks_held; | ||
147 | |||
148 | - /* Coroutines that should be woken up when we yield or terminate */ | ||
149 | + /* Coroutines that should be woken up when we yield or terminate. | ||
150 | + * Only used when the coroutine is running. | ||
151 | + */ | ||
152 | QSIMPLEQ_HEAD(, Coroutine) co_queue_wakeup; | ||
153 | + | ||
154 | + /* Only used when the coroutine has yielded. */ | ||
155 | + AioContext *ctx; | ||
156 | QSIMPLEQ_ENTRY(Coroutine) co_queue_next; | ||
157 | + QSLIST_ENTRY(Coroutine) co_scheduled_next; | ||
158 | }; | ||
159 | |||
160 | Coroutine *qemu_coroutine_new(void); | ||
161 | diff --git a/tests/iothread.h b/tests/iothread.h | ||
45 | new file mode 100644 | 162 | new file mode 100644 |
46 | index XXXXXXX..XXXXXXX | 163 | index XXXXXXX..XXXXXXX |
47 | --- /dev/null | 164 | --- /dev/null |
48 | +++ b/tests/qtest/fuzz/Makefile.include | 165 | +++ b/tests/iothread.h |
49 | @@ -XXX,XX +XXX,XX @@ | 166 | @@ -XXX,XX +XXX,XX @@ |
50 | +QEMU_PROG_FUZZ=qemu-fuzz-$(TARGET_NAME)$(EXESUF) | 167 | +/* |
51 | + | 168 | + * Event loop thread implementation for unit tests |
52 | +fuzz-obj-y += tests/qtest/libqtest.o | 169 | + * |
53 | +fuzz-obj-y += tests/qtest/fuzz/fuzz.o # Fuzzer skeleton | 170 | + * Copyright Red Hat Inc., 2013, 2016 |
54 | + | 171 | + * |
55 | +FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest | 172 | + * Authors: |
56 | diff --git a/tests/qtest/fuzz/fuzz.c b/tests/qtest/fuzz/fuzz.c | 173 | + * Stefan Hajnoczi <stefanha@redhat.com> |
174 | + * Paolo Bonzini <pbonzini@redhat.com> | ||
175 | + * | ||
176 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. | ||
177 | + * See the COPYING file in the top-level directory. | ||
178 | + */ | ||
179 | +#ifndef TEST_IOTHREAD_H | ||
180 | +#define TEST_IOTHREAD_H | ||
181 | + | ||
182 | +#include "block/aio.h" | ||
183 | +#include "qemu/thread.h" | ||
184 | + | ||
185 | +typedef struct IOThread IOThread; | ||
186 | + | ||
187 | +IOThread *iothread_new(void); | ||
188 | +void iothread_join(IOThread *iothread); | ||
189 | +AioContext *iothread_get_aio_context(IOThread *iothread); | ||
190 | + | ||
191 | +#endif | ||
192 | diff --git a/tests/iothread.c b/tests/iothread.c | ||
57 | new file mode 100644 | 193 | new file mode 100644 |
58 | index XXXXXXX..XXXXXXX | 194 | index XXXXXXX..XXXXXXX |
59 | --- /dev/null | 195 | --- /dev/null |
60 | +++ b/tests/qtest/fuzz/fuzz.c | 196 | +++ b/tests/iothread.c |
61 | @@ -XXX,XX +XXX,XX @@ | 197 | @@ -XXX,XX +XXX,XX @@ |
62 | +/* | 198 | +/* |
63 | + * fuzzing driver | 199 | + * Event loop thread implementation for unit tests |
64 | + * | 200 | + * |
65 | + * Copyright Red Hat Inc., 2019 | 201 | + * Copyright Red Hat Inc., 2013, 2016 |
66 | + * | 202 | + * |
67 | + * Authors: | 203 | + * Authors: |
68 | + * Alexander Bulekov <alxndr@bu.edu> | 204 | + * Stefan Hajnoczi <stefanha@redhat.com> |
205 | + * Paolo Bonzini <pbonzini@redhat.com> | ||
69 | + * | 206 | + * |
70 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. | 207 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. |
71 | + * See the COPYING file in the top-level directory. | 208 | + * See the COPYING file in the top-level directory. |
72 | + * | 209 | + * |
73 | + */ | 210 | + */ |
74 | + | 211 | + |
75 | +#include "qemu/osdep.h" | 212 | +#include "qemu/osdep.h" |
76 | + | 213 | +#include "qapi/error.h" |
77 | +#include <wordexp.h> | 214 | +#include "block/aio.h" |
78 | + | ||
79 | +#include "sysemu/qtest.h" | ||
80 | +#include "sysemu/runstate.h" | ||
81 | +#include "sysemu/sysemu.h" | ||
82 | +#include "qemu/main-loop.h" | 215 | +#include "qemu/main-loop.h" |
83 | +#include "tests/qtest/libqtest.h" | 216 | +#include "qemu/rcu.h" |
84 | +#include "tests/qtest/libqos/qgraph.h" | 217 | +#include "iothread.h" |
85 | +#include "fuzz.h" | 218 | + |
86 | + | 219 | +struct IOThread { |
87 | +#define MAX_EVENT_LOOPS 10 | 220 | + AioContext *ctx; |
88 | + | 221 | + |
89 | +typedef struct FuzzTargetState { | 222 | + QemuThread thread; |
90 | + FuzzTarget *target; | 223 | + QemuMutex init_done_lock; |
91 | + QSLIST_ENTRY(FuzzTargetState) target_list; | 224 | + QemuCond init_done_cond; /* is thread initialization done? */ |
92 | +} FuzzTargetState; | 225 | + bool stopping; |
93 | + | 226 | +}; |
94 | +typedef QSLIST_HEAD(, FuzzTargetState) FuzzTargetList; | 227 | + |
95 | + | 228 | +static __thread IOThread *my_iothread; |
96 | +static const char *fuzz_arch = TARGET_NAME; | 229 | + |
97 | + | 230 | +AioContext *qemu_get_current_aio_context(void) |
98 | +static FuzzTargetList *fuzz_target_list; | 231 | +{ |
99 | +static FuzzTarget *fuzz_target; | 232 | + return my_iothread ? my_iothread->ctx : qemu_get_aio_context(); |
100 | +static QTestState *fuzz_qts; | 233 | +} |
101 | + | 234 | + |
102 | + | 235 | +static void *iothread_run(void *opaque) |
103 | + | 236 | +{ |
104 | +void flush_events(QTestState *s) | 237 | + IOThread *iothread = opaque; |
105 | +{ | 238 | + |
106 | + int i = MAX_EVENT_LOOPS; | 239 | + rcu_register_thread(); |
107 | + while (g_main_context_pending(NULL) && i-- > 0) { | 240 | + |
108 | + main_loop_wait(false); | 241 | + my_iothread = iothread; |
109 | + } | 242 | + qemu_mutex_lock(&iothread->init_done_lock); |
110 | +} | 243 | + iothread->ctx = aio_context_new(&error_abort); |
111 | + | 244 | + qemu_cond_signal(&iothread->init_done_cond); |
112 | +static QTestState *qtest_setup(void) | 245 | + qemu_mutex_unlock(&iothread->init_done_lock); |
113 | +{ | 246 | + |
114 | + qtest_server_set_send_handler(&qtest_client_inproc_recv, &fuzz_qts); | 247 | + while (!atomic_read(&iothread->stopping)) { |
115 | + return qtest_inproc_init(&fuzz_qts, false, fuzz_arch, | 248 | + aio_poll(iothread->ctx, true); |
116 | + &qtest_server_inproc_recv); | 249 | + } |
117 | +} | 250 | + |
118 | + | 251 | + rcu_unregister_thread(); |
119 | +void fuzz_add_target(const FuzzTarget *target) | ||
120 | +{ | ||
121 | + FuzzTargetState *tmp; | ||
122 | + FuzzTargetState *target_state; | ||
123 | + if (!fuzz_target_list) { | ||
124 | + fuzz_target_list = g_new0(FuzzTargetList, 1); | ||
125 | + } | ||
126 | + | ||
127 | + QSLIST_FOREACH(tmp, fuzz_target_list, target_list) { | ||
128 | + if (g_strcmp0(tmp->target->name, target->name) == 0) { | ||
129 | + fprintf(stderr, "Error: Fuzz target name %s already in use\n", | ||
130 | + target->name); | ||
131 | + abort(); | ||
132 | + } | ||
133 | + } | ||
134 | + target_state = g_new0(FuzzTargetState, 1); | ||
135 | + target_state->target = g_new0(FuzzTarget, 1); | ||
136 | + *(target_state->target) = *target; | ||
137 | + QSLIST_INSERT_HEAD(fuzz_target_list, target_state, target_list); | ||
138 | +} | ||
139 | + | ||
140 | + | ||
141 | + | ||
142 | +static void usage(char *path) | ||
143 | +{ | ||
144 | + printf("Usage: %s --fuzz-target=FUZZ_TARGET [LIBFUZZER ARGUMENTS]\n", path); | ||
145 | + printf("where FUZZ_TARGET is one of:\n"); | ||
146 | + FuzzTargetState *tmp; | ||
147 | + if (!fuzz_target_list) { | ||
148 | + fprintf(stderr, "Fuzz target list not initialized\n"); | ||
149 | + abort(); | ||
150 | + } | ||
151 | + QSLIST_FOREACH(tmp, fuzz_target_list, target_list) { | ||
152 | + printf(" * %s : %s\n", tmp->target->name, | ||
153 | + tmp->target->description); | ||
154 | + } | ||
155 | + exit(0); | ||
156 | +} | ||
157 | + | ||
158 | +static FuzzTarget *fuzz_get_target(char* name) | ||
159 | +{ | ||
160 | + FuzzTargetState *tmp; | ||
161 | + if (!fuzz_target_list) { | ||
162 | + fprintf(stderr, "Fuzz target list not initialized\n"); | ||
163 | + abort(); | ||
164 | + } | ||
165 | + | ||
166 | + QSLIST_FOREACH(tmp, fuzz_target_list, target_list) { | ||
167 | + if (strcmp(tmp->target->name, name) == 0) { | ||
168 | + return tmp->target; | ||
169 | + } | ||
170 | + } | ||
171 | + return NULL; | 252 | + return NULL; |
172 | +} | 253 | +} |
173 | + | 254 | + |
174 | + | 255 | +void iothread_join(IOThread *iothread) |
175 | +/* Executed for each fuzzing-input */ | 256 | +{ |
176 | +int LLVMFuzzerTestOneInput(const unsigned char *Data, size_t Size) | 257 | + iothread->stopping = true; |
177 | +{ | 258 | + aio_notify(iothread->ctx); |
178 | + /* | 259 | + qemu_thread_join(&iothread->thread); |
179 | + * Do the pre-fuzz-initialization before the first fuzzing iteration, | 260 | + qemu_cond_destroy(&iothread->init_done_cond); |
180 | + * instead of before the actual fuzz loop. This is needed since libfuzzer | 261 | + qemu_mutex_destroy(&iothread->init_done_lock); |
181 | + * may fork off additional workers, prior to the fuzzing loop, and if | 262 | + aio_context_unref(iothread->ctx); |
182 | + * pre_fuzz() sets up e.g. shared memory, this should be done for the | 263 | + g_free(iothread); |
183 | + * individual worker processes | 264 | +} |
184 | + */ | 265 | + |
185 | + static int pre_fuzz_done; | 266 | +IOThread *iothread_new(void) |
186 | + if (!pre_fuzz_done && fuzz_target->pre_fuzz) { | 267 | +{ |
187 | + fuzz_target->pre_fuzz(fuzz_qts); | 268 | + IOThread *iothread = g_new0(IOThread, 1); |
188 | + pre_fuzz_done = true; | 269 | + |
189 | + } | 270 | + qemu_mutex_init(&iothread->init_done_lock); |
190 | + | 271 | + qemu_cond_init(&iothread->init_done_cond); |
191 | + fuzz_target->fuzz(fuzz_qts, Data, Size); | 272 | + qemu_thread_create(&iothread->thread, NULL, iothread_run, |
192 | + return 0; | 273 | + iothread, QEMU_THREAD_JOINABLE); |
193 | +} | 274 | + |
194 | + | 275 | + /* Wait for initialization to complete */ |
195 | +/* Executed once, prior to fuzzing */ | 276 | + qemu_mutex_lock(&iothread->init_done_lock); |
196 | +int LLVMFuzzerInitialize(int *argc, char ***argv, char ***envp) | 277 | + while (iothread->ctx == NULL) { |
197 | +{ | 278 | + qemu_cond_wait(&iothread->init_done_cond, |
198 | + | 279 | + &iothread->init_done_lock); |
199 | + char *target_name; | 280 | + } |
200 | + | 281 | + qemu_mutex_unlock(&iothread->init_done_lock); |
201 | + /* Initialize qgraph and modules */ | 282 | + return iothread; |
202 | + qos_graph_init(); | 283 | +} |
203 | + module_call_init(MODULE_INIT_FUZZ_TARGET); | 284 | + |
204 | + module_call_init(MODULE_INIT_QOM); | 285 | +AioContext *iothread_get_aio_context(IOThread *iothread) |
205 | + module_call_init(MODULE_INIT_LIBQOS); | 286 | +{ |
206 | + | 287 | + return iothread->ctx; |
207 | + if (*argc <= 1) { | 288 | +} |
208 | + usage(**argv); | 289 | diff --git a/tests/test-aio-multithread.c b/tests/test-aio-multithread.c |
209 | + } | ||
210 | + | ||
211 | + /* Identify the fuzz target */ | ||
212 | + target_name = (*argv)[1]; | ||
213 | + if (!strstr(target_name, "--fuzz-target=")) { | ||
214 | + usage(**argv); | ||
215 | + } | ||
216 | + | ||
217 | + target_name += strlen("--fuzz-target="); | ||
218 | + | ||
219 | + fuzz_target = fuzz_get_target(target_name); | ||
220 | + if (!fuzz_target) { | ||
221 | + usage(**argv); | ||
222 | + } | ||
223 | + | ||
224 | + fuzz_qts = qtest_setup(); | ||
225 | + | ||
226 | + if (fuzz_target->pre_vm_init) { | ||
227 | + fuzz_target->pre_vm_init(); | ||
228 | + } | ||
229 | + | ||
230 | + /* Run QEMU's softmmu main with the fuzz-target dependent arguments */ | ||
231 | + const char *init_cmdline = fuzz_target->get_init_cmdline(fuzz_target); | ||
232 | + | ||
233 | + /* Split the runcmd into an argv and argc */ | ||
234 | + wordexp_t result; | ||
235 | + wordexp(init_cmdline, &result, 0); | ||
236 | + | ||
237 | + qemu_init(result.we_wordc, result.we_wordv, NULL); | ||
238 | + | ||
239 | + return 0; | ||
240 | +} | ||
241 | diff --git a/tests/qtest/fuzz/fuzz.h b/tests/qtest/fuzz/fuzz.h | ||
242 | new file mode 100644 | 290 | new file mode 100644 |
243 | index XXXXXXX..XXXXXXX | 291 | index XXXXXXX..XXXXXXX |
244 | --- /dev/null | 292 | --- /dev/null |
245 | +++ b/tests/qtest/fuzz/fuzz.h | 293 | +++ b/tests/test-aio-multithread.c |
246 | @@ -XXX,XX +XXX,XX @@ | 294 | @@ -XXX,XX +XXX,XX @@ |
247 | +/* | 295 | +/* |
248 | + * fuzzing driver | 296 | + * AioContext multithreading tests |
249 | + * | 297 | + * |
250 | + * Copyright Red Hat Inc., 2019 | 298 | + * Copyright Red Hat, Inc. 2016 |
251 | + * | 299 | + * |
252 | + * Authors: | 300 | + * Authors: |
253 | + * Alexander Bulekov <alxndr@bu.edu> | 301 | + * Paolo Bonzini <pbonzini@redhat.com> |
254 | + * | 302 | + * |
255 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. | 303 | + * This work is licensed under the terms of the GNU LGPL, version 2 or later. |
256 | + * See the COPYING file in the top-level directory. | 304 | + * See the COPYING.LIB file in the top-level directory. |
257 | + * | ||
258 | + */ | 305 | + */ |
259 | + | 306 | + |
260 | +#ifndef FUZZER_H_ | ||
261 | +#define FUZZER_H_ | ||
262 | + | ||
263 | +#include "qemu/osdep.h" | 307 | +#include "qemu/osdep.h" |
264 | +#include "qemu/units.h" | 308 | +#include <glib.h> |
309 | +#include "block/aio.h" | ||
265 | +#include "qapi/error.h" | 310 | +#include "qapi/error.h" |
266 | + | 311 | +#include "qemu/coroutine.h" |
267 | +#include "tests/qtest/libqtest.h" | 312 | +#include "qemu/thread.h" |
268 | + | 313 | +#include "qemu/error-report.h" |
269 | +/** | 314 | +#include "iothread.h" |
270 | + * A libfuzzer fuzzing target | 315 | + |
271 | + * | 316 | +/* AioContext management */ |
272 | + * The QEMU fuzzing binary is built with all available targets, each | 317 | + |
273 | + * with a unique @name that can be specified on the command-line to | 318 | +#define NUM_CONTEXTS 5 |
274 | + * select which target should run. | 319 | + |
275 | + * | 320 | +static IOThread *threads[NUM_CONTEXTS]; |
276 | + * A target must implement ->fuzz() to process a random input. If QEMU | 321 | +static AioContext *ctx[NUM_CONTEXTS]; |
277 | + * crashes in ->fuzz() then libfuzzer will record a failure. | 322 | +static __thread int id = -1; |
278 | + * | 323 | + |
279 | + * Fuzzing targets are registered with fuzz_add_target(): | 324 | +static QemuEvent done_event; |
280 | + * | 325 | + |
281 | + * static const FuzzTarget fuzz_target = { | 326 | +/* Run a function synchronously on a remote iothread. */ |
282 | + * .name = "my-device-fifo", | 327 | + |
283 | + * .description = "Fuzz the FIFO buffer registers of my-device", | 328 | +typedef struct CtxRunData { |
284 | + * ... | 329 | + QEMUBHFunc *cb; |
285 | + * }; | 330 | + void *arg; |
286 | + * | 331 | +} CtxRunData; |
287 | + * static void register_fuzz_target(void) | 332 | + |
288 | + * { | 333 | +static void ctx_run_bh_cb(void *opaque) |
289 | + * fuzz_add_target(&fuzz_target); | 334 | +{ |
290 | + * } | 335 | + CtxRunData *data = opaque; |
291 | + * fuzz_target_init(register_fuzz_target); | 336 | + |
292 | + */ | 337 | + data->cb(data->arg); |
293 | +typedef struct FuzzTarget { | 338 | + qemu_event_set(&done_event); |
294 | + const char *name; /* target identifier (passed to --fuzz-target=) */ | 339 | +}
295 | + const char *description; /* help text */ | 340 | + |
296 | + | 341 | +static void ctx_run(int i, QEMUBHFunc *cb, void *opaque) |
297 | + | 342 | +{ |
298 | + /* | 343 | + CtxRunData data = { |
299 | + * returns the arg-list that is passed to qemu/softmmu init() | 344 | + .cb = cb, |
300 | + * Cannot be NULL | 345 | + .arg = opaque |
346 | + }; | ||
347 | + | ||
348 | + qemu_event_reset(&done_event); | ||
349 | + aio_bh_schedule_oneshot(ctx[i], ctx_run_bh_cb, &data); | ||
350 | + qemu_event_wait(&done_event); | ||
351 | +} | ||
352 | + | ||
353 | +/* Starting the iothreads. */ | ||
354 | + | ||
355 | +static void set_id_cb(void *opaque) | ||
356 | +{ | ||
357 | + int *i = opaque; | ||
358 | + | ||
359 | + id = *i; | ||
360 | +} | ||
361 | + | ||
362 | +static void create_aio_contexts(void) | ||
363 | +{ | ||
364 | + int i; | ||
365 | + | ||
366 | + for (i = 0; i < NUM_CONTEXTS; i++) { | ||
367 | + threads[i] = iothread_new(); | ||
368 | + ctx[i] = iothread_get_aio_context(threads[i]); | ||
369 | + } | ||
370 | + | ||
371 | + qemu_event_init(&done_event, false); | ||
372 | + for (i = 0; i < NUM_CONTEXTS; i++) { | ||
373 | + ctx_run(i, set_id_cb, &i); | ||
374 | + } | ||
375 | +} | ||
376 | + | ||
377 | +/* Stopping the iothreads. */ | ||
378 | + | ||
379 | +static void join_aio_contexts(void) | ||
380 | +{ | ||
381 | + int i; | ||
382 | + | ||
383 | + for (i = 0; i < NUM_CONTEXTS; i++) { | ||
384 | + aio_context_ref(ctx[i]); | ||
385 | + } | ||
386 | + for (i = 0; i < NUM_CONTEXTS; i++) { | ||
387 | + iothread_join(threads[i]); | ||
388 | + } | ||
389 | + for (i = 0; i < NUM_CONTEXTS; i++) { | ||
390 | + aio_context_unref(ctx[i]); | ||
391 | + } | ||
392 | + qemu_event_destroy(&done_event); | ||
393 | +} | ||
394 | + | ||
395 | +/* Basic test for the stuff above. */ | ||
396 | + | ||
397 | +static void test_lifecycle(void) | ||
398 | +{ | ||
399 | + create_aio_contexts(); | ||
400 | + join_aio_contexts(); | ||
401 | +} | ||
402 | + | ||
403 | +/* aio_co_schedule test. */ | ||
404 | + | ||
405 | +static Coroutine *to_schedule[NUM_CONTEXTS]; | ||
406 | + | ||
407 | +static bool now_stopping; | ||
408 | + | ||
409 | +static int count_retry; | ||
410 | +static int count_here; | ||
411 | +static int count_other; | ||
412 | + | ||
413 | +static bool schedule_next(int n) | ||
414 | +{ | ||
415 | + Coroutine *co; | ||
416 | + | ||
417 | + co = atomic_xchg(&to_schedule[n], NULL); | ||
418 | + if (!co) { | ||
419 | + atomic_inc(&count_retry); | ||
420 | + return false; | ||
421 | + } | ||
422 | + | ||
423 | + if (n == id) { | ||
424 | + atomic_inc(&count_here); | ||
425 | + } else { | ||
426 | + atomic_inc(&count_other); | ||
427 | + } | ||
428 | + | ||
429 | + aio_co_schedule(ctx[n], co); | ||
430 | + return true; | ||
431 | +} | ||
432 | + | ||
433 | +static void finish_cb(void *opaque) | ||
434 | +{ | ||
435 | + schedule_next(id); | ||
436 | +} | ||
437 | + | ||
438 | +static coroutine_fn void test_multi_co_schedule_entry(void *opaque) | ||
439 | +{ | ||
440 | + g_assert(to_schedule[id] == NULL); | ||
441 | + atomic_mb_set(&to_schedule[id], qemu_coroutine_self()); | ||
442 | + | ||
443 | + while (!atomic_mb_read(&now_stopping)) { | ||
444 | + int n; | ||
445 | + | ||
446 | + n = g_test_rand_int_range(0, NUM_CONTEXTS); | ||
447 | + schedule_next(n); | ||
448 | + qemu_coroutine_yield(); | ||
449 | + | ||
450 | + g_assert(to_schedule[id] == NULL); | ||
451 | + atomic_mb_set(&to_schedule[id], qemu_coroutine_self()); | ||
452 | + } | ||
453 | +} | ||
454 | + | ||
455 | + | ||
456 | +static void test_multi_co_schedule(int seconds) | ||
457 | +{ | ||
458 | + int i; | ||
459 | + | ||
460 | + count_here = count_other = count_retry = 0; | ||
461 | + now_stopping = false; | ||
462 | + | ||
463 | + create_aio_contexts(); | ||
464 | + for (i = 0; i < NUM_CONTEXTS; i++) { | ||
465 | + Coroutine *co1 = qemu_coroutine_create(test_multi_co_schedule_entry, NULL); | ||
466 | + aio_co_schedule(ctx[i], co1); | ||
467 | + } | ||
468 | + | ||
469 | + g_usleep(seconds * 1000000); | ||
470 | + | ||
471 | + atomic_mb_set(&now_stopping, true); | ||
472 | + for (i = 0; i < NUM_CONTEXTS; i++) { | ||
473 | + ctx_run(i, finish_cb, NULL); | ||
474 | + to_schedule[i] = NULL; | ||
475 | + } | ||
476 | + | ||
477 | + join_aio_contexts(); | ||
478 | + g_test_message("scheduled %d, queued %d, retry %d, total %d\n", | ||
479 | + count_other, count_here, count_retry, | ||
480 | + count_here + count_other + count_retry); | ||
481 | +} | ||
482 | + | ||
483 | +static void test_multi_co_schedule_1(void) | ||
484 | +{ | ||
485 | + test_multi_co_schedule(1); | ||
486 | +} | ||
487 | + | ||
488 | +static void test_multi_co_schedule_10(void) | ||
489 | +{ | ||
490 | + test_multi_co_schedule(10); | ||
491 | +} | ||
492 | + | ||
493 | +/* End of tests. */ | ||
494 | + | ||
495 | +int main(int argc, char **argv) | ||
496 | +{ | ||
497 | + init_clocks(); | ||
498 | + | ||
499 | + g_test_init(&argc, &argv, NULL); | ||
500 | + g_test_add_func("/aio/multi/lifecycle", test_lifecycle); | ||
501 | + if (g_test_quick()) { | ||
502 | + g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_1); | ||
503 | + } else { | ||
504 | + g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_10); | ||
505 | + } | ||
506 | + return g_test_run(); | ||
507 | +} | ||
508 | diff --git a/util/async.c b/util/async.c | ||
509 | index XXXXXXX..XXXXXXX 100644 | ||
510 | --- a/util/async.c | ||
511 | +++ b/util/async.c | ||
512 | @@ -XXX,XX +XXX,XX @@ | ||
513 | #include "qemu/main-loop.h" | ||
514 | #include "qemu/atomic.h" | ||
515 | #include "block/raw-aio.h" | ||
516 | +#include "qemu/coroutine_int.h" | ||
517 | +#include "trace.h" | ||
518 | |||
519 | /***********************************************************/ | ||
520 | /* bottom halves (can be seen as timers which expire ASAP) */ | ||
521 | @@ -XXX,XX +XXX,XX @@ aio_ctx_finalize(GSource *source) | ||
522 | } | ||
523 | #endif | ||
524 | |||
525 | + assert(QSLIST_EMPTY(&ctx->scheduled_coroutines)); | ||
526 | + qemu_bh_delete(ctx->co_schedule_bh); | ||
527 | + | ||
528 | qemu_lockcnt_lock(&ctx->list_lock); | ||
529 | assert(!qemu_lockcnt_count(&ctx->list_lock)); | ||
530 | while (ctx->first_bh) { | ||
531 | @@ -XXX,XX +XXX,XX @@ static bool event_notifier_poll(void *opaque) | ||
532 | return atomic_read(&ctx->notified); | ||
533 | } | ||
534 | |||
535 | +static void co_schedule_bh_cb(void *opaque) | ||
536 | +{ | ||
537 | + AioContext *ctx = opaque; | ||
538 | + QSLIST_HEAD(, Coroutine) straight, reversed; | ||
539 | + | ||
540 | + QSLIST_MOVE_ATOMIC(&reversed, &ctx->scheduled_coroutines); | ||
541 | + QSLIST_INIT(&straight); | ||
542 | + | ||
543 | + while (!QSLIST_EMPTY(&reversed)) { | ||
544 | + Coroutine *co = QSLIST_FIRST(&reversed); | ||
545 | + QSLIST_REMOVE_HEAD(&reversed, co_scheduled_next); | ||
546 | + QSLIST_INSERT_HEAD(&straight, co, co_scheduled_next); | ||
547 | + } | ||
548 | + | ||
549 | + while (!QSLIST_EMPTY(&straight)) { | ||
550 | + Coroutine *co = QSLIST_FIRST(&straight); | ||
551 | + QSLIST_REMOVE_HEAD(&straight, co_scheduled_next); | ||
552 | + trace_aio_co_schedule_bh_cb(ctx, co); | ||
553 | + qemu_coroutine_enter(co); | ||
554 | + } | ||
555 | +} | ||
556 | + | ||
557 | AioContext *aio_context_new(Error **errp) | ||
558 | { | ||
559 | int ret; | ||
560 | @@ -XXX,XX +XXX,XX @@ AioContext *aio_context_new(Error **errp) | ||
561 | } | ||
562 | g_source_set_can_recurse(&ctx->source, true); | ||
563 | qemu_lockcnt_init(&ctx->list_lock); | ||
564 | + | ||
565 | + ctx->co_schedule_bh = aio_bh_new(ctx, co_schedule_bh_cb, ctx); | ||
566 | + QSLIST_INIT(&ctx->scheduled_coroutines); | ||
567 | + | ||
568 | aio_set_event_notifier(ctx, &ctx->notifier, | ||
569 | false, | ||
570 | (EventNotifierHandler *) | ||
571 | @@ -XXX,XX +XXX,XX @@ fail: | ||
572 | return NULL; | ||
573 | } | ||
574 | |||
575 | +void aio_co_schedule(AioContext *ctx, Coroutine *co) | ||
576 | +{ | ||
577 | + trace_aio_co_schedule(ctx, co); | ||
578 | + QSLIST_INSERT_HEAD_ATOMIC(&ctx->scheduled_coroutines, | ||
579 | + co, co_scheduled_next); | ||
580 | + qemu_bh_schedule(ctx->co_schedule_bh); | ||
581 | +} | ||
582 | + | ||
583 | +void aio_co_wake(struct Coroutine *co) | ||
584 | +{ | ||
585 | + AioContext *ctx; | ||
586 | + | ||
587 | + /* Read coroutine before co->ctx. Matches smp_wmb in | ||
588 | + * qemu_coroutine_enter. | ||
301 | + */ | 589 | + */ |
302 | + const char* (*get_init_cmdline)(struct FuzzTarget *); | 590 | + smp_read_barrier_depends(); |
303 | + | 591 | + ctx = atomic_read(&co->ctx); |
304 | + /* | 592 | + |
305 | + * will run once, prior to running qemu/softmmu init. | 593 | + if (ctx != qemu_get_current_aio_context()) { |
306 | + * eg: set up shared-memory for communication with the child-process | 594 | + aio_co_schedule(ctx, co); |
307 | + * Can be NULL | 595 | + return; |
596 | + } | ||
597 | + | ||
598 | + if (qemu_in_coroutine()) { | ||
599 | + Coroutine *self = qemu_coroutine_self(); | ||
600 | + assert(self != co); | ||
601 | + QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, co, co_queue_next); | ||
602 | + } else { | ||
603 | + aio_context_acquire(ctx); | ||
604 | + qemu_coroutine_enter(co); | ||
605 | + aio_context_release(ctx); | ||
606 | + } | ||
607 | +} | ||
608 | + | ||
609 | void aio_context_ref(AioContext *ctx) | ||
610 | { | ||
611 | g_source_ref(&ctx->source); | ||
612 | diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c | ||
613 | index XXXXXXX..XXXXXXX 100644 | ||
614 | --- a/util/qemu-coroutine.c | ||
615 | +++ b/util/qemu-coroutine.c | ||
616 | @@ -XXX,XX +XXX,XX @@ | ||
617 | #include "qemu/atomic.h" | ||
618 | #include "qemu/coroutine.h" | ||
619 | #include "qemu/coroutine_int.h" | ||
620 | +#include "block/aio.h" | ||
621 | |||
622 | enum { | ||
623 | POOL_BATCH_SIZE = 64, | ||
624 | @@ -XXX,XX +XXX,XX @@ void qemu_coroutine_enter(Coroutine *co) | ||
625 | } | ||
626 | |||
627 | co->caller = self; | ||
628 | + co->ctx = qemu_get_current_aio_context(); | ||
629 | + | ||
630 | + /* Store co->ctx before anything that stores co. Matches | ||
631 | + * barrier in aio_co_wake. | ||
308 | + */ | 632 | + */ |
309 | + void(*pre_vm_init)(void); | 633 | + smp_wmb(); |
310 | + | 634 | + |
311 | + /* | 635 | ret = qemu_coroutine_switch(self, co, COROUTINE_ENTER); |
312 | + * will run once, after QEMU has been initialized, prior to the fuzz-loop. | 636 | |
313 | + * eg: detect the memory map | 637 | qemu_co_queue_run_restart(co); |
314 | + * Can be NULL | 638 | diff --git a/util/trace-events b/util/trace-events |
315 | + */ | 639 | index XXXXXXX..XXXXXXX 100644 |
316 | + void(*pre_fuzz)(QTestState *); | 640 | --- a/util/trace-events |
317 | + | 641 | +++ b/util/trace-events |
318 | + /* | 642 | @@ -XXX,XX +XXX,XX @@ run_poll_handlers_end(void *ctx, bool progress) "ctx %p progress %d" |
319 | + * accepts and executes an input from libfuzzer. This is repeatedly | 643 | poll_shrink(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
320 | + * executed during the fuzzing loop. It should handle setup, input | 644 | poll_grow(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
321 | + * execution and cleanup. | 645 | |
322 | + * Cannot be NULL | 646 | +# util/async.c |
323 | + */ | 647 | +aio_co_schedule(void *ctx, void *co) "ctx %p co %p" |
324 | + void(*fuzz)(QTestState *, const unsigned char *, size_t); | 648 | +aio_co_schedule_bh_cb(void *ctx, void *co) "ctx %p co %p" |
325 | + | 649 | + |
326 | +} FuzzTarget; | 650 | # util/thread-pool.c |
327 | + | 651 | thread_pool_submit(void *pool, void *req, void *opaque) "pool %p req %p opaque %p" |
328 | +void flush_events(QTestState *); | 652 | thread_pool_complete(void *pool, void *req, void *opaque, int ret) "pool %p req %p opaque %p ret %d" |
329 | +void reboot(QTestState *); | ||
330 | + | ||
331 | +/* | ||
332 | + * makes a copy of *target and adds it to the target-list. | ||
333 | + * i.e. fine to set up target on the caller's stack | ||
334 | + */ | ||
335 | +void fuzz_add_target(const FuzzTarget *target); | ||
336 | + | ||
337 | +int LLVMFuzzerTestOneInput(const unsigned char *Data, size_t Size); | ||
338 | +int LLVMFuzzerInitialize(int *argc, char ***argv, char ***envp); | ||
339 | + | ||
340 | +#endif | ||
341 | + | ||
342 | -- | 653 | -- |
343 | 2.24.1 | 654 | 2.9.3 |
344 | 655 | ||
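As a concrete illustration of the FuzzTarget interface declared in fuzz.h above, here is a minimal sketch of a hypothetical target. The device name, command line and MMIO base are invented for illustration and are not part of the series:

#include "qemu/osdep.h"
#include "tests/qtest/fuzz/fuzz.h"

/* Arguments handed to qemu_init() before the fuzz loop starts */
static const char *mydev_get_init_cmdline(FuzzTarget *t)
{
    return "-machine none -device mydev";            /* hypothetical device */
}

/* Interpret the input as a sequence of (offset, value) register writes */
static void mydev_fuzz(QTestState *s, const unsigned char *Data, size_t Size)
{
    size_t i;

    for (i = 0; i + 1 < Size; i += 2) {
        qtest_writeb(s, 0x10000000 + Data[i], Data[i + 1]); /* assumed MMIO base */
    }
}

static void register_mydev_fuzz_target(void)
{
    FuzzTarget t = {
        .name = "mydev-fuzz",
        .description = "Fuzz the registers of the hypothetical mydev device",
        .get_init_cmdline = mydev_get_init_cmdline,
        .fuzz = mydev_fuzz,
    };

    /* fuzz_add_target() copies *target, so a stack variable is fine */
    fuzz_add_target(&t);
}
fuzz_target_init(register_mydev_fuzz_target);

At run time the target would be selected with --fuzz-target=mydev-fuzz, matching the dispatch in LLVMFuzzerInitialize() above.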
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | The qtest-based fuzzer makes use of forking to reset state between | 3 | qcow2_create2 calls this. Do not run a nested event loop, as that
4 | tests. Keep the callback enabled, so the call_rcu thread gets created | 4 | breaks when aio_co_wake tries to queue the coroutine on the co_queue_wakeup |
5 | within the child process. | 5 | list of the currently running one. |
6 | 6 | ||
7 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 7 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
8 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 8 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
9 | Acked-by: Stefan Hajnoczi <stefanha@redhat.com> | 9 | Reviewed-by: Fam Zheng <famz@redhat.com> |
10 | Message-id: 20200220041118.23264-15-alxndr@bu.edu | 10 | Message-id: 20170213135235.12274-4-pbonzini@redhat.com |
11 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 11 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
12 | --- | 12 | --- |
13 | softmmu/vl.c | 12 +++++++++++- | 13 | block/block-backend.c | 12 ++++++++---- |
14 | 1 file changed, 11 insertions(+), 1 deletion(-) | 14 | 1 file changed, 8 insertions(+), 4 deletions(-) |
15 | 15 | ||
16 | diff --git a/softmmu/vl.c b/softmmu/vl.c | 16 | diff --git a/block/block-backend.c b/block/block-backend.c |
17 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
18 | --- a/softmmu/vl.c | 18 | --- a/block/block-backend.c |
19 | +++ b/softmmu/vl.c | 19 | +++ b/block/block-backend.c |
20 | @@ -XXX,XX +XXX,XX @@ void qemu_init(int argc, char **argv, char **envp) | 20 | @@ -XXX,XX +XXX,XX @@ static int blk_prw(BlockBackend *blk, int64_t offset, uint8_t *buf, |
21 | set_memory_options(&ram_slots, &maxram_size, machine_class); | 21 | { |
22 | 22 | QEMUIOVector qiov; | |
23 | os_daemonize(); | 23 | struct iovec iov; |
24 | - rcu_disable_atfork(); | 24 | - Coroutine *co; |
25 | + | 25 | BlkRwCo rwco; |
26 | + /* | 26 | |
27 | + * If QTest is enabled, keep the rcu_atfork enabled, since system processes | 27 | iov = (struct iovec) { |
28 | + * may be forked for testing purposes (e.g. fork-server based fuzzing). The | 29 | .ret = NOT_DONE,
29 | + * fork should happen before a single CPU instruction is executed, to prevent | 30 | };
30 | + * deadlocks. See commit 73c6e40, rcu: "completely disable pthread_atfork | 30 | }; |
31 | + * callbacks as soon as possible" | 31 | |
32 | + */ | 32 | - co = qemu_coroutine_create(co_entry, &rwco); |
33 | + if (!qtest_enabled()) { | 33 | - qemu_coroutine_enter(co); |
34 | + rcu_disable_atfork(); | 34 | - BDRV_POLL_WHILE(blk_bs(blk), rwco.ret == NOT_DONE); |
35 | + if (qemu_in_coroutine()) { | ||
36 | + /* Fast-path if already in coroutine context */ | ||
37 | + co_entry(&rwco); | ||
38 | + } else { | ||
39 | + Coroutine *co = qemu_coroutine_create(co_entry, &rwco); | ||
40 | + qemu_coroutine_enter(co); | ||
41 | + BDRV_POLL_WHILE(blk_bs(blk), rwco.ret == NOT_DONE); | ||
35 | + } | 42 | + } |
36 | 43 | ||
37 | if (pid_file && !qemu_write_pidfile(pid_file, &err)) { | 44 | return rwco.ret; |
38 | error_reportf_err(err, "cannot create PID file: "); | 45 | } |
39 | -- | 46 | -- |
40 | 2.24.1 | 47 | 2.9.3 |
41 | 48 | ||
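The fork-based reset that this patch prepares for amounts to running every input in a throwaway child process. A minimal sketch of the idea; run_input() is a hypothetical helper, not a function from the series:

#include <sys/wait.h>
#include <unistd.h>

static void fuzz_one_forked(QTestState *s, const unsigned char *Data, size_t Size)
{
    pid_t pid = fork();

    if (pid == 0) {
        run_input(s, Data, Size);   /* hypothetical: exercise the device */
        _exit(0);                   /* mutated QEMU state dies with the child */
    }
    waitpid(pid, NULL, 0);          /* parent resumes from pristine state */
}

This only works if the call_rcu thread exists inside the child, which is exactly why the rcu_atfork callback is left enabled when running under qtest.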
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | RAM blocks were marked MADV_DONTFORK, breaking fuzzing tests, which | 3 | Once the thread pool starts using aio_co_wake, it will also need
4 | execute each test input in a forked process. | 4 | qemu_get_current_aio_context(). Make test-thread-pool create
5 | an AioContext with qemu_init_main_loop, so that stubs/iothread.c | ||
6 | and tests/iothread.c can provide the rest. | ||
5 | 7 | ||
6 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
7 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 8 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
8 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 9 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
9 | Message-id: 20200220041118.23264-14-alxndr@bu.edu | 10 | Reviewed-by: Fam Zheng <famz@redhat.com> |
11 | Message-id: 20170213135235.12274-5-pbonzini@redhat.com | ||
10 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 12 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
11 | --- | 13 | --- |
12 | exec.c | 12 ++++++++++-- | 14 | tests/test-thread-pool.c | 12 +++--------- |
13 | 1 file changed, 10 insertions(+), 2 deletions(-) | 15 | 1 file changed, 3 insertions(+), 9 deletions(-) |
14 | 16 | ||
15 | diff --git a/exec.c b/exec.c | 17 | diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c |
16 | index XXXXXXX..XXXXXXX 100644 | 18 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/exec.c | 19 | --- a/tests/test-thread-pool.c |
18 | +++ b/exec.c | 20 | +++ b/tests/test-thread-pool.c |
19 | @@ -XXX,XX +XXX,XX @@ | 21 | @@ -XXX,XX +XXX,XX @@ |
20 | #include "sysemu/kvm.h" | 22 | #include "qapi/error.h" |
21 | #include "sysemu/sysemu.h" | ||
22 | #include "sysemu/tcg.h" | ||
23 | +#include "sysemu/qtest.h" | ||
24 | #include "qemu/timer.h" | 23 | #include "qemu/timer.h" |
25 | #include "qemu/config-file.h" | ||
26 | #include "qemu/error-report.h" | 24 | #include "qemu/error-report.h" |
27 | @@ -XXX,XX +XXX,XX @@ static void ram_block_add(RAMBlock *new_block, Error **errp, bool shared) | 25 | +#include "qemu/main-loop.h" |
28 | if (new_block->host) { | 26 | |
29 | qemu_ram_setup_dump(new_block->host, new_block->max_length); | 27 | static AioContext *ctx; |
30 | qemu_madvise(new_block->host, new_block->max_length, QEMU_MADV_HUGEPAGE); | 28 | static ThreadPool *pool; |
31 | - /* MADV_DONTFORK is also needed by KVM in absence of synchronous MMU */ | 29 | @@ -XXX,XX +XXX,XX @@ static void test_cancel_async(void) |
32 | - qemu_madvise(new_block->host, new_block->max_length, QEMU_MADV_DONTFORK); | 30 | int main(int argc, char **argv) |
33 | + /* | 31 | { |
34 | + * MADV_DONTFORK is also needed by KVM in absence of synchronous MMU | 32 | int ret; |
35 | + * Configure it unless the machine is a qtest server, in which case | 33 | - Error *local_error = NULL; |
36 | + * KVM is not used and it may be forked (eg for fuzzing purposes). | 34 | |
37 | + */ | 35 | - init_clocks(); |
38 | + if (!qtest_enabled()) { | 36 | - |
39 | + qemu_madvise(new_block->host, new_block->max_length, | 37 | - ctx = aio_context_new(&local_error); |
40 | + QEMU_MADV_DONTFORK); | 38 | - if (!ctx) { |
41 | + } | 39 | - error_reportf_err(local_error, "Failed to create AIO Context: "); |
42 | ram_block_notify_add(new_block->host, new_block->max_length); | 40 | - exit(1); |
43 | } | 41 | - } |
42 | + qemu_init_main_loop(&error_abort); | ||
43 | + ctx = qemu_get_current_aio_context(); | ||
44 | pool = aio_get_thread_pool(ctx); | ||
45 | |||
46 | g_test_init(&argc, &argv, NULL); | ||
47 | @@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv) | ||
48 | |||
49 | ret = g_test_run(); | ||
50 | |||
51 | - aio_context_unref(ctx); | ||
52 | return ret; | ||
44 | } | 53 | } |
45 | -- | 54 | -- |
46 | 2.24.1 | 55 | 2.9.3 |
47 | 56 | ||
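Why MADV_DONTFORK and fork-based fuzzing cannot coexist is visible in a few lines of standalone Linux code (a sketch, independent of the patch):

#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t len = 4096;
    char *ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    madvise(ram, len, MADV_DONTFORK);   /* range is omitted from children */

    if (fork() == 0) {
        ram[0] = 1;     /* no mapping in the child: faults with SIGSEGV */
        _exit(0);
    }
    wait(NULL);
    return 0;
}

Guest RAM marked this way simply does not exist in the forked fuzzing child, so the patch skips the madvise() call whenever a qtest server is in use.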
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | fork() is a simple way to ensure that state does not leak in between | 3 | This is in preparation for making qio_channel_yield work on |
4 | fuzzing runs. Unfortunately, the fuzzer mutation engine relies on | 4 | AioContexts other than the main one. |
5 | bitmaps which contain coverage information for each fuzzing run, and | 5 | |
6 | these bitmaps should be copied from the child to the parent(where the | 6 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> |
7 | mutation occurs). These bitmaps are created through compile-time | ||
8 | instrumentation and they are not shared with fork()-ed processes, by | ||
9 | default. To address this, we create a shared memory region, adjust its | ||
10 | size and map it _over_ the counter region. Furthermore, libfuzzer | ||
11 | doesn't generally expose the globals that specify the location of the | ||
12 | counters/coverage bitmap. As a workaround, we rely on a custom linker | ||
13 | script which forces all of the bitmaps we care about to be placed in a | ||
14 | contiguous region, which is easy to locate and mmap over. | ||
15 | |||
16 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
17 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 7 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
18 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 8 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
19 | Message-id: 20200220041118.23264-16-alxndr@bu.edu | 9 | Reviewed-by: Fam Zheng <famz@redhat.com> |
10 | Message-id: 20170213135235.12274-6-pbonzini@redhat.com | ||
20 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 11 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
21 | --- | 12 | --- |
22 | tests/qtest/fuzz/Makefile.include | 5 +++ | 13 | include/io/channel.h | 25 +++++++++++++++++++++++++ |
23 | tests/qtest/fuzz/fork_fuzz.c | 55 +++++++++++++++++++++++++++++++ | 14 | io/channel-command.c | 13 +++++++++++++ |
24 | tests/qtest/fuzz/fork_fuzz.h | 23 +++++++++++++ | 15 | io/channel-file.c | 11 +++++++++++ |
25 | tests/qtest/fuzz/fork_fuzz.ld | 37 +++++++++++++++++++++ | 16 | io/channel-socket.c | 16 +++++++++++----- |
26 | 4 files changed, 120 insertions(+) | 17 | io/channel-tls.c | 12 ++++++++++++ |
27 | create mode 100644 tests/qtest/fuzz/fork_fuzz.c | 18 | io/channel-watch.c | 6 ++++++ |
28 | create mode 100644 tests/qtest/fuzz/fork_fuzz.h | 19 | io/channel.c | 11 +++++++++++ |
29 | create mode 100644 tests/qtest/fuzz/fork_fuzz.ld | 20 | 7 files changed, 89 insertions(+), 5 deletions(-) |
30 | 21 | ||
31 | diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include | 22 | diff --git a/include/io/channel.h b/include/io/channel.h |
32 | index XXXXXXX..XXXXXXX 100644 | 23 | index XXXXXXX..XXXXXXX 100644 |
33 | --- a/tests/qtest/fuzz/Makefile.include | 24 | --- a/include/io/channel.h |
34 | +++ b/tests/qtest/fuzz/Makefile.include | 25 | +++ b/include/io/channel.h |
35 | @@ -XXX,XX +XXX,XX @@ QEMU_PROG_FUZZ=qemu-fuzz-$(TARGET_NAME)$(EXESUF) | ||
36 | |||
37 | fuzz-obj-y += tests/qtest/libqtest.o | ||
38 | fuzz-obj-y += tests/qtest/fuzz/fuzz.o # Fuzzer skeleton | ||
39 | +fuzz-obj-y += tests/qtest/fuzz/fork_fuzz.o | ||
40 | |||
41 | FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest | ||
42 | + | ||
43 | +# Linker Script to force coverage-counters into known regions which we can mark | ||
44 | +# shared | ||
45 | +FUZZ_LDFLAGS += -Xlinker -T$(SRC_PATH)/tests/qtest/fuzz/fork_fuzz.ld | ||
46 | diff --git a/tests/qtest/fuzz/fork_fuzz.c b/tests/qtest/fuzz/fork_fuzz.c | ||
47 | new file mode 100644 | ||
48 | index XXXXXXX..XXXXXXX | ||
49 | --- /dev/null | ||
50 | +++ b/tests/qtest/fuzz/fork_fuzz.c | ||
51 | @@ -XXX,XX +XXX,XX @@ | 26 | @@ -XXX,XX +XXX,XX @@ |
52 | +/* | 27 | |
53 | + * Fork-based fuzzing helpers | 28 | #include "qemu-common.h" |
29 | #include "qom/object.h" | ||
30 | +#include "block/aio.h" | ||
31 | |||
32 | #define TYPE_QIO_CHANNEL "qio-channel" | ||
33 | #define QIO_CHANNEL(obj) \ | ||
34 | @@ -XXX,XX +XXX,XX @@ struct QIOChannelClass { | ||
35 | off_t offset, | ||
36 | int whence, | ||
37 | Error **errp); | ||
38 | + void (*io_set_aio_fd_handler)(QIOChannel *ioc, | ||
39 | + AioContext *ctx, | ||
40 | + IOHandler *io_read, | ||
41 | + IOHandler *io_write, | ||
42 | + void *opaque); | ||
43 | }; | ||
44 | |||
45 | /* General I/O handling functions */ | ||
46 | @@ -XXX,XX +XXX,XX @@ void qio_channel_yield(QIOChannel *ioc, | ||
47 | void qio_channel_wait(QIOChannel *ioc, | ||
48 | GIOCondition condition); | ||
49 | |||
50 | +/** | ||
51 | + * qio_channel_set_aio_fd_handler: | ||
52 | + * @ioc: the channel object | ||
53 | + * @ctx: the AioContext to set the handlers on | ||
54 | + * @io_read: the read handler | ||
55 | + * @io_write: the write handler | ||
56 | + * @opaque: the opaque value passed to the handler | ||
54 | + * | 57 | + * |
55 | + * Copyright Red Hat Inc., 2019 | 58 | + * This is used internally by qio_channel_yield(). It can |
56 | + * | 59 | + * be used by channel implementations to forward the handlers |
57 | + * Authors: | 60 | + * to another channel (e.g. from #QIOChannelTLS to the |
58 | + * Alexander Bulekov <alxndr@bu.edu> | 61 | + * underlying socket). |
59 | + * | ||
60 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. | ||
61 | + * See the COPYING file in the top-level directory. | ||
62 | + * | ||
63 | + */ | 62 | + */ |
64 | + | 63 | +void qio_channel_set_aio_fd_handler(QIOChannel *ioc, |
65 | +#include "qemu/osdep.h" | 64 | + AioContext *ctx, |
66 | +#include "fork_fuzz.h" | 65 | + IOHandler *io_read, |
67 | + | 66 | + IOHandler *io_write, |
68 | + | 67 | + void *opaque); |
69 | +void counter_shm_init(void) | 68 | + |
70 | +{ | 69 | #endif /* QIO_CHANNEL_H */ |
71 | + char *shm_path = g_strdup_printf("/qemu-fuzz-cntrs.%d", getpid()); | 70 | diff --git a/io/channel-command.c b/io/channel-command.c |
72 | + int fd = shm_open(shm_path, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR); | 71 | index XXXXXXX..XXXXXXX 100644 |
73 | + g_free(shm_path); | 72 | --- a/io/channel-command.c |
74 | + | 73 | +++ b/io/channel-command.c |
75 | + if (fd == -1) { | 74 | @@ -XXX,XX +XXX,XX @@ static int qio_channel_command_close(QIOChannel *ioc, |
76 | + perror("Error: "); | 75 | } |
77 | + exit(1); | 76 | |
78 | + } | 77 | |
79 | + if (ftruncate(fd, &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START) == -1) { | 78 | +static void qio_channel_command_set_aio_fd_handler(QIOChannel *ioc, |
80 | + perror("Error: "); | 79 | + AioContext *ctx, |
81 | + exit(1); | 80 | + IOHandler *io_read, |
82 | + } | 81 | + IOHandler *io_write, |
83 | + /* Copy what's in the counter region to the shm.. */ | 82 | + void *opaque) |
84 | + void *rptr = mmap(NULL , | 83 | +{ |
85 | + &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START, | 84 | + QIOChannelCommand *cioc = QIO_CHANNEL_COMMAND(ioc); |
86 | + PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); | 85 | + aio_set_fd_handler(ctx, cioc->readfd, false, io_read, NULL, NULL, opaque); |
87 | + memcpy(rptr, | 86 | + aio_set_fd_handler(ctx, cioc->writefd, false, NULL, io_write, NULL, opaque); |
88 | + &__FUZZ_COUNTERS_START, | 87 | +} |
89 | + &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START); | 88 | + |
90 | + | 89 | + |
91 | + munmap(rptr, &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START); | 90 | static GSource *qio_channel_command_create_watch(QIOChannel *ioc, |
92 | + | 91 | GIOCondition condition) |
93 | + /* And map the shm over the counter region */ | 92 | { |
94 | + rptr = mmap(&__FUZZ_COUNTERS_START, | 93 | @@ -XXX,XX +XXX,XX @@ static void qio_channel_command_class_init(ObjectClass *klass, |
95 | + &__FUZZ_COUNTERS_END - &__FUZZ_COUNTERS_START, | 94 | ioc_klass->io_set_blocking = qio_channel_command_set_blocking; |
96 | + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0); | 95 | ioc_klass->io_close = qio_channel_command_close; |
97 | + | 96 | ioc_klass->io_create_watch = qio_channel_command_create_watch; |
98 | + close(fd); | 97 | + ioc_klass->io_set_aio_fd_handler = qio_channel_command_set_aio_fd_handler; |
99 | + | 98 | } |
100 | + if (!rptr) { | 99 | |
101 | + perror("Error: "); | 100 | static const TypeInfo qio_channel_command_info = { |
102 | + exit(1); | 101 | diff --git a/io/channel-file.c b/io/channel-file.c |
103 | + } | 102 | index XXXXXXX..XXXXXXX 100644 |
104 | +} | 103 | --- a/io/channel-file.c |
105 | + | 104 | +++ b/io/channel-file.c |
106 | + | 105 | @@ -XXX,XX +XXX,XX @@ static int qio_channel_file_close(QIOChannel *ioc, |
107 | diff --git a/tests/qtest/fuzz/fork_fuzz.h b/tests/qtest/fuzz/fork_fuzz.h | 106 | } |
108 | new file mode 100644 | 107 | |
109 | index XXXXXXX..XXXXXXX | 108 | |
110 | --- /dev/null | 109 | +static void qio_channel_file_set_aio_fd_handler(QIOChannel *ioc, |
111 | +++ b/tests/qtest/fuzz/fork_fuzz.h | 110 | + AioContext *ctx, |
112 | @@ -XXX,XX +XXX,XX @@ | 111 | + IOHandler *io_read, |
113 | +/* | 112 | + IOHandler *io_write, |
114 | + * Fork-based fuzzing helpers | 113 | + void *opaque) |
115 | + * | 114 | +{ |
116 | + * Copyright Red Hat Inc., 2019 | 115 | + QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc); |
117 | + * | 116 | + aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write, NULL, opaque); |
118 | + * Authors: | 117 | +} |
119 | + * Alexander Bulekov <alxndr@bu.edu> | 118 | + |
120 | + * | 119 | static GSource *qio_channel_file_create_watch(QIOChannel *ioc, |
121 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. | 120 | GIOCondition condition) |
122 | + * See the COPYING file in the top-level directory. | 121 | { |
123 | + * | 122 | @@ -XXX,XX +XXX,XX @@ static void qio_channel_file_class_init(ObjectClass *klass, |
124 | + */ | 123 | ioc_klass->io_seek = qio_channel_file_seek; |
125 | + | 124 | ioc_klass->io_close = qio_channel_file_close; |
126 | +#ifndef FORK_FUZZ_H | 125 | ioc_klass->io_create_watch = qio_channel_file_create_watch; |
127 | +#define FORK_FUZZ_H | 126 | + ioc_klass->io_set_aio_fd_handler = qio_channel_file_set_aio_fd_handler; |
128 | + | 127 | } |
129 | +extern uint8_t __FUZZ_COUNTERS_START; | 128 | |
130 | +extern uint8_t __FUZZ_COUNTERS_END; | 129 | static const TypeInfo qio_channel_file_info = { |
131 | + | 130 | diff --git a/io/channel-socket.c b/io/channel-socket.c |
132 | +void counter_shm_init(void); | 131 | index XXXXXXX..XXXXXXX 100644 |
133 | + | 132 | --- a/io/channel-socket.c |
133 | +++ b/io/channel-socket.c | ||
134 | @@ -XXX,XX +XXX,XX @@ qio_channel_socket_set_blocking(QIOChannel *ioc, | ||
135 | qemu_set_block(sioc->fd); | ||
136 | } else { | ||
137 | qemu_set_nonblock(sioc->fd); | ||
138 | -#ifdef WIN32 | ||
139 | - WSAEventSelect(sioc->fd, ioc->event, | ||
140 | - FD_READ | FD_ACCEPT | FD_CLOSE | | ||
141 | - FD_CONNECT | FD_WRITE | FD_OOB); | ||
142 | -#endif | ||
143 | } | ||
144 | return 0; | ||
145 | } | ||
146 | @@ -XXX,XX +XXX,XX @@ qio_channel_socket_shutdown(QIOChannel *ioc, | ||
147 | return 0; | ||
148 | } | ||
149 | |||
150 | +static void qio_channel_socket_set_aio_fd_handler(QIOChannel *ioc, | ||
151 | + AioContext *ctx, | ||
152 | + IOHandler *io_read, | ||
153 | + IOHandler *io_write, | ||
154 | + void *opaque) | ||
155 | +{ | ||
156 | + QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc); | ||
157 | + aio_set_fd_handler(ctx, sioc->fd, false, io_read, io_write, NULL, opaque); | ||
158 | +} | ||
159 | + | ||
160 | static GSource *qio_channel_socket_create_watch(QIOChannel *ioc, | ||
161 | GIOCondition condition) | ||
162 | { | ||
163 | @@ -XXX,XX +XXX,XX @@ static void qio_channel_socket_class_init(ObjectClass *klass, | ||
164 | ioc_klass->io_set_cork = qio_channel_socket_set_cork; | ||
165 | ioc_klass->io_set_delay = qio_channel_socket_set_delay; | ||
166 | ioc_klass->io_create_watch = qio_channel_socket_create_watch; | ||
167 | + ioc_klass->io_set_aio_fd_handler = qio_channel_socket_set_aio_fd_handler; | ||
168 | } | ||
169 | |||
170 | static const TypeInfo qio_channel_socket_info = { | ||
171 | diff --git a/io/channel-tls.c b/io/channel-tls.c | ||
172 | index XXXXXXX..XXXXXXX 100644 | ||
173 | --- a/io/channel-tls.c | ||
174 | +++ b/io/channel-tls.c | ||
175 | @@ -XXX,XX +XXX,XX @@ static int qio_channel_tls_close(QIOChannel *ioc, | ||
176 | return qio_channel_close(tioc->master, errp); | ||
177 | } | ||
178 | |||
179 | +static void qio_channel_tls_set_aio_fd_handler(QIOChannel *ioc, | ||
180 | + AioContext *ctx, | ||
181 | + IOHandler *io_read, | ||
182 | + IOHandler *io_write, | ||
183 | + void *opaque) | ||
184 | +{ | ||
185 | + QIOChannelTLS *tioc = QIO_CHANNEL_TLS(ioc); | ||
186 | + | ||
187 | + qio_channel_set_aio_fd_handler(tioc->master, ctx, io_read, io_write, opaque); | ||
188 | +} | ||
189 | + | ||
190 | static GSource *qio_channel_tls_create_watch(QIOChannel *ioc, | ||
191 | GIOCondition condition) | ||
192 | { | ||
193 | @@ -XXX,XX +XXX,XX @@ static void qio_channel_tls_class_init(ObjectClass *klass, | ||
194 | ioc_klass->io_close = qio_channel_tls_close; | ||
195 | ioc_klass->io_shutdown = qio_channel_tls_shutdown; | ||
196 | ioc_klass->io_create_watch = qio_channel_tls_create_watch; | ||
197 | + ioc_klass->io_set_aio_fd_handler = qio_channel_tls_set_aio_fd_handler; | ||
198 | } | ||
199 | |||
200 | static const TypeInfo qio_channel_tls_info = { | ||
201 | diff --git a/io/channel-watch.c b/io/channel-watch.c | ||
202 | index XXXXXXX..XXXXXXX 100644 | ||
203 | --- a/io/channel-watch.c | ||
204 | +++ b/io/channel-watch.c | ||
205 | @@ -XXX,XX +XXX,XX @@ GSource *qio_channel_create_socket_watch(QIOChannel *ioc, | ||
206 | GSource *source; | ||
207 | QIOChannelSocketSource *ssource; | ||
208 | |||
209 | +#ifdef WIN32 | ||
210 | + WSAEventSelect(socket, ioc->event, | ||
211 | + FD_READ | FD_ACCEPT | FD_CLOSE | | ||
212 | + FD_CONNECT | FD_WRITE | FD_OOB); | ||
134 | +#endif | 213 | +#endif |
135 | + | 214 | + |
136 | diff --git a/tests/qtest/fuzz/fork_fuzz.ld b/tests/qtest/fuzz/fork_fuzz.ld | 215 | source = g_source_new(&qio_channel_socket_source_funcs, |
137 | new file mode 100644 | 216 | sizeof(QIOChannelSocketSource)); |
138 | index XXXXXXX..XXXXXXX | 217 | ssource = (QIOChannelSocketSource *)source; |
139 | --- /dev/null | 218 | diff --git a/io/channel.c b/io/channel.c |
140 | +++ b/tests/qtest/fuzz/fork_fuzz.ld | 219 | index XXXXXXX..XXXXXXX 100644 |
141 | @@ -XXX,XX +XXX,XX @@ | 220 | --- a/io/channel.c |
142 | +/* We adjust linker script modification to place all of the stuff that needs to | 221 | +++ b/io/channel.c |
143 | + * persist across fuzzing runs into a contiguous seciton of memory. Then, it is | 222 | @@ -XXX,XX +XXX,XX @@ GSource *qio_channel_create_watch(QIOChannel *ioc, |
144 | + * easy to re-map the counter-related memory as shared. | 223 | } |
145 | +*/ | 224 | |
146 | + | 225 | |
147 | +SECTIONS | 226 | +void qio_channel_set_aio_fd_handler(QIOChannel *ioc, |
148 | +{ | 227 | + AioContext *ctx, |
149 | + .data.fuzz_start : ALIGN(4K) | 228 | + IOHandler *io_read, |
150 | + { | 229 | + IOHandler *io_write, |
151 | + __FUZZ_COUNTERS_START = .; | 230 | + void *opaque) |
152 | + __start___sancov_cntrs = .; | 231 | +{ |
153 | + *(_*sancov_cntrs); | 232 | + QIOChannelClass *klass = QIO_CHANNEL_GET_CLASS(ioc); |
154 | + __stop___sancov_cntrs = .; | 233 | + |
155 | + | 234 | + klass->io_set_aio_fd_handler(ioc, ctx, io_read, io_write, opaque); |
156 | + /* Lowest stack counter */ | 235 | +} |
157 | + *(__sancov_lowest_stack); | 236 | + |
158 | + } | 237 | guint qio_channel_add_watch(QIOChannel *ioc, |
159 | + .data.fuzz_ordered : | 238 | GIOCondition condition, |
160 | + { | 239 | QIOChannelFunc func, |
161 | + /* Coverage counters. They're not necessary for fuzzing, but are useful | ||
162 | + * for analyzing the fuzzing performance | ||
163 | + */ | ||
164 | + __start___llvm_prf_cnts = .; | ||
165 | + *(*llvm_prf_cnts); | ||
166 | + __stop___llvm_prf_cnts = .; | ||
167 | + | ||
168 | + /* Internal Libfuzzer TracePC object which contains the ValueProfileMap */ | ||
169 | + FuzzerTracePC*(.bss*); | ||
170 | + } | ||
171 | + .data.fuzz_end : ALIGN(4K) | ||
172 | + { | ||
173 | + __FUZZ_COUNTERS_END = .; | ||
174 | + } | ||
175 | +} | ||
176 | +/* Dont overwrite the SECTIONS in the default linker script. Instead insert the | ||
177 | + * above into the default script */ | ||
178 | +INSERT AFTER .data; | ||
179 | -- | 240 | -- |
180 | 2.24.1 | 241 | 2.9.3 |
181 | 242 | ||
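A short usage sketch of the new hook: steering a channel's fd callbacks onto an iothread's AioContext. ready_to_read() and watch_on_iothread() are invented names; the functions they call come from the patches above:

static void ready_to_read(void *opaque)
{
    QIOChannel *ioc = opaque;   /* hypothetical: drain the channel here */
    (void)ioc;
}

static void watch_on_iothread(QIOChannel *ioc, IOThread *iothread)
{
    AioContext *ctx = iothread_get_aio_context(iothread);

    /* read handler only, no write handler; opaque is the channel itself */
    qio_channel_set_aio_fd_handler(ioc, ctx, ready_to_read, NULL, ioc);
}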
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | The virtual-device fuzzer must initialize QOM, prior to running | 3 | Support separate coroutines for reading and writing, and place the |
4 | vl:qemu_init, so that it can use the qos_graph to identify the arguments | 4 | read/write handlers on the AioContext that the QIOChannel is registered |
5 | required to initialize a guest for libqos-assisted fuzzing. This change | 5 | with. |
6 | prevents errors when vl:qemu_init tries to (re)initialize the previously | 6 | |
7 | initialized QOM module. | 7 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> |
8 | |||
9 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
10 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 8 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
11 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 9 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
12 | Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> | 10 | Reviewed-by: Fam Zheng <famz@redhat.com> |
13 | Message-id: 20200220041118.23264-4-alxndr@bu.edu | 11 | Message-id: 20170213135235.12274-7-pbonzini@redhat.com |
14 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 12 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
15 | --- | 13 | --- |
16 | util/module.c | 7 +++++++ | 14 | include/io/channel.h | 47 ++++++++++++++++++++++++++-- |
17 | 1 file changed, 7 insertions(+) | 15 | io/channel.c | 86 +++++++++++++++++++++++++++++++++++++++------------- |
18 | 16 | 2 files changed, 109 insertions(+), 24 deletions(-) | |
19 | diff --git a/util/module.c b/util/module.c | 17 | |
18 | diff --git a/include/io/channel.h b/include/io/channel.h | ||
20 | index XXXXXXX..XXXXXXX 100644 | 19 | index XXXXXXX..XXXXXXX 100644 |
21 | --- a/util/module.c | 20 | --- a/include/io/channel.h |
22 | +++ b/util/module.c | 21 | +++ b/include/io/channel.h |
23 | @@ -XXX,XX +XXX,XX @@ typedef struct ModuleEntry | 22 | @@ -XXX,XX +XXX,XX @@ |
24 | typedef QTAILQ_HEAD(, ModuleEntry) ModuleTypeList; | 23 | |
25 | 24 | #include "qemu-common.h" | |
26 | static ModuleTypeList init_type_list[MODULE_INIT_MAX]; | 25 | #include "qom/object.h" |
27 | +static bool modules_init_done[MODULE_INIT_MAX]; | 26 | +#include "qemu/coroutine.h" |
28 | 27 | #include "block/aio.h" | |
29 | static ModuleTypeList dso_init_list; | 28 | |
30 | 29 | #define TYPE_QIO_CHANNEL "qio-channel" | |
31 | @@ -XXX,XX +XXX,XX @@ void module_call_init(module_init_type type) | 30 | @@ -XXX,XX +XXX,XX @@ struct QIOChannel { |
32 | ModuleTypeList *l; | 31 | Object parent; |
33 | ModuleEntry *e; | 32 | unsigned int features; /* bitmask of QIOChannelFeatures */ |
34 | 33 | char *name; | |
35 | + if (modules_init_done[type]) { | 34 | + AioContext *ctx; |
35 | + Coroutine *read_coroutine; | ||
36 | + Coroutine *write_coroutine; | ||
37 | #ifdef _WIN32 | ||
38 | HANDLE event; /* For use with GSource on Win32 */ | ||
39 | #endif | ||
40 | @@ -XXX,XX +XXX,XX @@ guint qio_channel_add_watch(QIOChannel *ioc, | ||
41 | |||
42 | |||
43 | /** | ||
44 | + * qio_channel_attach_aio_context: | ||
45 | + * @ioc: the channel object | ||
46 | + * @ctx: the #AioContext to set the handlers on | ||
47 | + * | ||
48 | + * Request that qio_channel_yield() sets I/O handlers on | ||
49 | + * the given #AioContext. If @ctx is %NULL, qio_channel_yield() | ||
50 | + * uses QEMU's main thread event loop. | ||
51 | + * | ||
52 | + * You can move a #QIOChannel from one #AioContext to another even if | ||
53 | + * I/O handlers are set for a coroutine. However, #QIOChannel provides | ||
54 | + * no synchronization between the calls to qio_channel_yield() and | ||
55 | + * qio_channel_attach_aio_context(). | ||
56 | + * | ||
57 | + * Therefore you should first call qio_channel_detach_aio_context() | ||
58 | + * to ensure that the coroutine is not entered concurrently. Then, | ||
59 | + * while the coroutine has yielded, call qio_channel_attach_aio_context(), | ||
60 | + * and then aio_co_schedule() to place the coroutine on the new | ||
61 | + * #AioContext. The calls to qio_channel_detach_aio_context() | ||
62 | + * and qio_channel_attach_aio_context() should be protected with | ||
63 | + * aio_context_acquire() and aio_context_release(). | ||
64 | + */ | ||
65 | +void qio_channel_attach_aio_context(QIOChannel *ioc, | ||
66 | + AioContext *ctx); | ||
67 | + | ||
68 | +/** | ||
69 | + * qio_channel_detach_aio_context: | ||
70 | + * @ioc: the channel object | ||
71 | + * | ||
72 | + * Disable any I/O handlers set by qio_channel_yield(). With the | ||
73 | + * help of aio_co_schedule(), this allows moving a coroutine that was | ||
74 | + * paused by qio_channel_yield() to another context. | ||
75 | + */ | ||
76 | +void qio_channel_detach_aio_context(QIOChannel *ioc); | ||
77 | + | ||
78 | +/** | ||
79 | * qio_channel_yield: | ||
80 | * @ioc: the channel object | ||
81 | * @condition: the I/O condition to wait for | ||
82 | * | ||
83 | - * Yields execution from the current coroutine until | ||
84 | - * the condition indicated by @condition becomes | ||
85 | - * available. | ||
86 | + * Yields execution from the current coroutine until the condition | ||
87 | + * indicated by @condition becomes available. @condition must | ||
88 | + * be either %G_IO_IN or %G_IO_OUT; it cannot contain both. In | ||
89 | + * addition, no two coroutine can be waiting on the same condition | ||
90 | + * and channel at the same time. | ||
91 | * | ||
92 | * This must only be called from coroutine context | ||
93 | */ | ||
94 | diff --git a/io/channel.c b/io/channel.c | ||
95 | index XXXXXXX..XXXXXXX 100644 | ||
96 | --- a/io/channel.c | ||
97 | +++ b/io/channel.c | ||
98 | @@ -XXX,XX +XXX,XX @@ | ||
99 | #include "qemu/osdep.h" | ||
100 | #include "io/channel.h" | ||
101 | #include "qapi/error.h" | ||
102 | -#include "qemu/coroutine.h" | ||
103 | +#include "qemu/main-loop.h" | ||
104 | |||
105 | bool qio_channel_has_feature(QIOChannel *ioc, | ||
106 | QIOChannelFeature feature) | ||
107 | @@ -XXX,XX +XXX,XX @@ off_t qio_channel_io_seek(QIOChannel *ioc, | ||
108 | } | ||
109 | |||
110 | |||
111 | -typedef struct QIOChannelYieldData QIOChannelYieldData; | ||
112 | -struct QIOChannelYieldData { | ||
113 | - QIOChannel *ioc; | ||
114 | - Coroutine *co; | ||
115 | -}; | ||
116 | +static void qio_channel_set_aio_fd_handlers(QIOChannel *ioc); | ||
117 | |||
118 | +static void qio_channel_restart_read(void *opaque) | ||
119 | +{ | ||
120 | + QIOChannel *ioc = opaque; | ||
121 | + Coroutine *co = ioc->read_coroutine; | ||
122 | + | ||
123 | + ioc->read_coroutine = NULL; | ||
124 | + qio_channel_set_aio_fd_handlers(ioc); | ||
125 | + aio_co_wake(co); | ||
126 | +} | ||
127 | |||
128 | -static gboolean qio_channel_yield_enter(QIOChannel *ioc, | ||
129 | - GIOCondition condition, | ||
130 | - gpointer opaque) | ||
131 | +static void qio_channel_restart_write(void *opaque) | ||
132 | { | ||
133 | - QIOChannelYieldData *data = opaque; | ||
134 | - qemu_coroutine_enter(data->co); | ||
135 | - return FALSE; | ||
136 | + QIOChannel *ioc = opaque; | ||
137 | + Coroutine *co = ioc->write_coroutine; | ||
138 | + | ||
139 | + ioc->write_coroutine = NULL; | ||
140 | + qio_channel_set_aio_fd_handlers(ioc); | ||
141 | + aio_co_wake(co); | ||
142 | } | ||
143 | |||
144 | +static void qio_channel_set_aio_fd_handlers(QIOChannel *ioc) | ||
145 | +{ | ||
146 | + IOHandler *rd_handler = NULL, *wr_handler = NULL; | ||
147 | + AioContext *ctx; | ||
148 | + | ||
149 | + if (ioc->read_coroutine) { | ||
150 | + rd_handler = qio_channel_restart_read; | ||
151 | + } | ||
152 | + if (ioc->write_coroutine) { | ||
153 | + wr_handler = qio_channel_restart_write; | ||
154 | + } | ||
155 | + | ||
156 | + ctx = ioc->ctx ? ioc->ctx : iohandler_get_aio_context(); | ||
157 | + qio_channel_set_aio_fd_handler(ioc, ctx, rd_handler, wr_handler, ioc); | ||
158 | +} | ||
159 | + | ||
160 | +void qio_channel_attach_aio_context(QIOChannel *ioc, | ||
161 | + AioContext *ctx) | ||
162 | +{ | ||
163 | + AioContext *old_ctx; | ||
164 | + if (ioc->ctx == ctx) { | ||
36 | + return; | 165 | + return; |
37 | + } | 166 | + } |
38 | + | 167 | + |
39 | l = find_type(type); | 168 | + old_ctx = ioc->ctx ? ioc->ctx : iohandler_get_aio_context(); |
40 | 169 | + qio_channel_set_aio_fd_handler(ioc, old_ctx, NULL, NULL, NULL); | |
41 | QTAILQ_FOREACH(e, l, node) { | 170 | + ioc->ctx = ctx; |
42 | e->init(); | 171 | + qio_channel_set_aio_fd_handlers(ioc); |
43 | } | 172 | +} |
44 | + | 173 | + |
45 | + modules_init_done[type] = true; | 174 | +void qio_channel_detach_aio_context(QIOChannel *ioc) |
175 | +{ | ||
176 | + ioc->read_coroutine = NULL; | ||
177 | + ioc->write_coroutine = NULL; | ||
178 | + qio_channel_set_aio_fd_handlers(ioc); | ||
179 | + ioc->ctx = NULL; | ||
180 | +} | ||
181 | |||
182 | void coroutine_fn qio_channel_yield(QIOChannel *ioc, | ||
183 | GIOCondition condition) | ||
184 | { | ||
185 | - QIOChannelYieldData data; | ||
186 | - | ||
187 | assert(qemu_in_coroutine()); | ||
188 | - data.ioc = ioc; | ||
189 | - data.co = qemu_coroutine_self(); | ||
190 | - qio_channel_add_watch(ioc, | ||
191 | - condition, | ||
192 | - qio_channel_yield_enter, | ||
193 | - &data, | ||
194 | - NULL); | ||
195 | + if (condition == G_IO_IN) { | ||
196 | + assert(!ioc->read_coroutine); | ||
197 | + ioc->read_coroutine = qemu_coroutine_self(); | ||
198 | + } else if (condition == G_IO_OUT) { | ||
199 | + assert(!ioc->write_coroutine); | ||
200 | + ioc->write_coroutine = qemu_coroutine_self(); | ||
201 | + } else { | ||
202 | + abort(); | ||
203 | + } | ||
204 | + qio_channel_set_aio_fd_handlers(ioc); | ||
205 | qemu_coroutine_yield(); | ||
46 | } | 206 | } |
47 | 207 | ||
48 | #ifdef CONFIG_MODULES | ||
49 | -- | 208 | -- |
50 | 2.24.1 | 209 | 2.9.3 |
51 | 210 | ||
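The coroutine-side counterpart of these handlers is the pattern the nbd patch below adopts: attempt a non-blocking read and yield until the fd handler wakes the coroutine again. A hedged sketch; read_exactly() is an invented helper:

static coroutine_fn ssize_t read_exactly(QIOChannel *ioc, char *buf, size_t len)
{
    size_t done = 0;

    while (done < len) {
        ssize_t n = qio_channel_read(ioc, buf + done, len - done, NULL);

        if (n == QIO_CHANNEL_ERR_BLOCK) {
            qio_channel_yield(ioc, G_IO_IN);  /* woken by qio_channel_restart_read */
            continue;
        }
        if (n <= 0) {
            return -1;     /* error or EOF */
        }
        done += n;
    }
    return done;
}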
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | The moved functions are not specific to qos-test and might be useful | 3 | In the client, read the reply headers from a coroutine, switching the |
4 | elsewhere. For example the virtual-device fuzzer makes use of them for | 4 | read side between the "read header" coroutine and the I/O coroutine that |
5 | qos-assisted fuzz-targets. | 5 | reads the body of the reply. |
6 | 6 | ||
7 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 7 | In the server, if the server can read more requests it will create a new |
8 | "read request" coroutine as soon as a request has been read. Otherwise, | ||
9 | the new coroutine is created in nbd_request_put. | ||
10 | |||
8 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 11 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
9 | Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> | 12 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
10 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 13 | Reviewed-by: Fam Zheng <famz@redhat.com> |
11 | Message-id: 20200220041118.23264-12-alxndr@bu.edu | 14 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> |
15 | Message-id: 20170213135235.12274-8-pbonzini@redhat.com | ||
12 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 16 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
13 | --- | 17 | --- |
14 | tests/qtest/Makefile.include | 1 + | 18 | block/nbd-client.h | 2 +- |
15 | tests/qtest/libqos/qos_external.c | 168 ++++++++++++++++++++++++++++++ | 19 | block/nbd-client.c | 117 ++++++++++++++++++++++++----------------------------- |
16 | tests/qtest/libqos/qos_external.h | 28 +++++ | 20 | nbd/client.c | 2 +- |
17 | tests/qtest/qos-test.c | 132 +---------------------- | 21 | nbd/common.c | 9 +---- |
18 | 4 files changed, 198 insertions(+), 131 deletions(-) | 22 | nbd/server.c | 94 +++++++++++++----------------------------- |
19 | create mode 100644 tests/qtest/libqos/qos_external.c | 23 | 5 files changed, 83 insertions(+), 141 deletions(-) |
20 | create mode 100644 tests/qtest/libqos/qos_external.h | ||
21 | 24 | ||
22 | diff --git a/tests/qtest/Makefile.include b/tests/qtest/Makefile.include | 25 | diff --git a/block/nbd-client.h b/block/nbd-client.h |
23 | index XXXXXXX..XXXXXXX 100644 | 26 | index XXXXXXX..XXXXXXX 100644 |
24 | --- a/tests/qtest/Makefile.include | 27 | --- a/block/nbd-client.h |
25 | +++ b/tests/qtest/Makefile.include | 28 | +++ b/block/nbd-client.h |
26 | @@ -XXX,XX +XXX,XX @@ libqos-usb-obj-y = $(libqos-spapr-obj-y) $(libqos-pc-obj-y) tests/qtest/libqos/u | 29 | @@ -XXX,XX +XXX,XX @@ typedef struct NBDClientSession { |
27 | # qos devices: | 30 | |
28 | libqos-obj-y = $(libqgraph-obj-y) | 31 | CoMutex send_mutex; |
29 | libqos-obj-y += $(libqos-pc-obj-y) $(libqos-spapr-obj-y) | 32 | CoQueue free_sema; |
30 | +libqos-obj-y += tests/qtest/libqos/qos_external.o | 33 | - Coroutine *send_coroutine; |
31 | libqos-obj-y += tests/qtest/libqos/e1000e.o | 34 | + Coroutine *read_reply_co; |
32 | libqos-obj-y += tests/qtest/libqos/i2c.o | 35 | int in_flight; |
33 | libqos-obj-y += tests/qtest/libqos/i2c-imx.o | 36 | |
34 | diff --git a/tests/qtest/libqos/qos_external.c b/tests/qtest/libqos/qos_external.c | 37 | Coroutine *recv_coroutine[MAX_NBD_REQUESTS]; |
35 | new file mode 100644 | 38 | diff --git a/block/nbd-client.c b/block/nbd-client.c |
36 | index XXXXXXX..XXXXXXX | 39 | index XXXXXXX..XXXXXXX 100644 |
37 | --- /dev/null | 40 | --- a/block/nbd-client.c |
38 | +++ b/tests/qtest/libqos/qos_external.c | 41 | +++ b/block/nbd-client.c |
39 | @@ -XXX,XX +XXX,XX @@ | 42 | @@ -XXX,XX +XXX,XX @@ |
40 | +/* | 43 | #define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs)) |
41 | + * libqos driver framework | 44 | #define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs)) |
42 | + * | 45 | |
43 | + * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com> | 46 | -static void nbd_recv_coroutines_enter_all(NBDClientSession *s) |
44 | + * | 47 | +static void nbd_recv_coroutines_enter_all(BlockDriverState *bs) |
45 | + * This library is free software; you can redistribute it and/or | 48 | { |
46 | + * modify it under the terms of the GNU Lesser General Public | 49 | + NBDClientSession *s = nbd_get_client_session(bs); |
47 | + * License version 2 as published by the Free Software Foundation. | 50 | int i; |
48 | + * | 51 | |
49 | + * This library is distributed in the hope that it will be useful, | 52 | for (i = 0; i < MAX_NBD_REQUESTS; i++) { |
50 | + * but WITHOUT ANY WARRANTY; without even the implied warranty of | 53 | @@ -XXX,XX +XXX,XX @@ static void nbd_recv_coroutines_enter_all(NBDClientSession *s) |
51 | + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU | 54 | qemu_coroutine_enter(s->recv_coroutine[i]); |
52 | + * Lesser General Public License for more details. | 55 | } |
53 | + * | 56 | } |
54 | + * You should have received a copy of the GNU Lesser General Public | 57 | + BDRV_POLL_WHILE(bs, s->read_reply_co); |
55 | + * License along with this library; if not, see <http://www.gnu.org/licenses/> | 58 | } |
56 | + */ | 59 | |
57 | + | 60 | static void nbd_teardown_connection(BlockDriverState *bs) |
58 | +#include "qemu/osdep.h" | 61 | @@ -XXX,XX +XXX,XX @@ static void nbd_teardown_connection(BlockDriverState *bs) |
59 | +#include <getopt.h> | 62 | qio_channel_shutdown(client->ioc, |
60 | +#include "libqtest.h" | 63 | QIO_CHANNEL_SHUTDOWN_BOTH, |
61 | +#include "qapi/qmp/qdict.h" | 64 | NULL); |
62 | +#include "qapi/qmp/qbool.h" | 65 | - nbd_recv_coroutines_enter_all(client); |
63 | +#include "qapi/qmp/qstring.h" | 66 | + nbd_recv_coroutines_enter_all(bs); |
64 | +#include "qemu/module.h" | 67 | |
65 | +#include "qapi/qmp/qlist.h" | 68 | nbd_client_detach_aio_context(bs); |
66 | +#include "libqos/malloc.h" | 69 | object_unref(OBJECT(client->sioc)); |
67 | +#include "libqos/qgraph.h" | 70 | @@ -XXX,XX +XXX,XX @@ static void nbd_teardown_connection(BlockDriverState *bs) |
68 | +#include "libqos/qgraph_internal.h" | 71 | client->ioc = NULL; |
69 | +#include "libqos/qos_external.h" | 72 | } |
70 | + | 73 | |
71 | + | 74 | -static void nbd_reply_ready(void *opaque) |
72 | + | 75 | +static coroutine_fn void nbd_read_reply_entry(void *opaque) |
73 | +void apply_to_node(const char *name, bool is_machine, bool is_abstract) | 76 | { |
74 | +{ | 77 | - BlockDriverState *bs = opaque; |
75 | + char *machine_name = NULL; | 78 | - NBDClientSession *s = nbd_get_client_session(bs); |
76 | + if (is_machine) { | 79 | + NBDClientSession *s = opaque; |
77 | + const char *arch = qtest_get_arch(); | 80 | uint64_t i; |
78 | + machine_name = g_strconcat(arch, "/", name, NULL); | 81 | int ret; |
79 | + name = machine_name; | 82 | |
80 | + } | 83 | - if (!s->ioc) { /* Already closed */ |
81 | + qos_graph_node_set_availability(name, true); | 84 | - return; |
82 | + if (is_abstract) { | 85 | - } |
83 | + qos_delete_cmd_line(name); | 86 | - |
84 | + } | 87 | - if (s->reply.handle == 0) { |
85 | + g_free(machine_name); | 88 | - /* No reply already in flight. Fetch a header. It is possible |
86 | +} | 89 | - * that another thread has done the same thing in parallel, so |
87 | + | 90 | - * the socket is not readable anymore. |
88 | +/** | 91 | - */ |
89 | + * apply_to_qlist(): using QMP queries QEMU for a list of | ||
90 | + * machines and devices available, and sets the respective node | ||
91 | + * as true. If a node is found, also all its produced and contained | ||
92 | + * child are marked available. | ||
93 | + * | ||
94 | + * See qos_graph_node_set_availability() for more info | ||
95 | + */ | ||
96 | +void apply_to_qlist(QList *list, bool is_machine) | ||
97 | +{ | ||
98 | + const QListEntry *p; | ||
99 | + const char *name; | ||
100 | + bool abstract; | ||
101 | + QDict *minfo; | ||
102 | + QObject *qobj; | ||
103 | + QString *qstr; | ||
104 | + QBool *qbool; | ||
105 | + | ||
106 | + for (p = qlist_first(list); p; p = qlist_next(p)) { | ||
107 | + minfo = qobject_to(QDict, qlist_entry_obj(p)); | ||
108 | + qobj = qdict_get(minfo, "name"); | ||
109 | + qstr = qobject_to(QString, qobj); | ||
110 | + name = qstring_get_str(qstr); | ||
111 | + | ||
112 | + qobj = qdict_get(minfo, "abstract"); | ||
113 | + if (qobj) { | ||
114 | + qbool = qobject_to(QBool, qobj); | ||
115 | + abstract = qbool_get_bool(qbool); | ||
116 | + } else { | ||
117 | + abstract = false; | ||
118 | + } | ||
119 | + | ||
120 | + apply_to_node(name, is_machine, abstract); | ||
121 | + qobj = qdict_get(minfo, "alias"); | ||
122 | + if (qobj) { | ||
123 | + qstr = qobject_to(QString, qobj); | ||
124 | + name = qstring_get_str(qstr); | ||
125 | + apply_to_node(name, is_machine, abstract); | ||
126 | + } | ||
127 | + } | ||
128 | +} | ||
129 | + | ||
130 | +QGuestAllocator *get_machine_allocator(QOSGraphObject *obj) | ||
131 | +{ | ||
132 | + return obj->get_driver(obj, "memory"); | ||
133 | +} | ||
134 | + | ||
135 | +/** | ||
136 | + * allocate_objects(): given an array of nodes @arg, | ||
137 | + * walks the path invoking all constructors and | ||
138 | + * passing the corresponding parameter in order to | ||
139 | + * continue allocating the objects.
140 | + * Once the test is reached, return the object it consumes. | ||
141 | + * | ||
142 | + * Since the machine and QEDGE_CONSUMED_BY nodes allocate | ||
143 | + * memory in the constructor, g_test_queue_destroy is used so | ||
144 | + * that after execution they can be safely free'd. (The test's | ||
145 | + * ->before callback is also welcome to use g_test_queue_destroy). | ||
146 | + * | ||
147 | + * Note: as specified in walk_path() too, @arg is an array of | ||
148 | + * char *, where arg[0] is a pointer to the command line | ||
149 | + * string that will be used to properly start QEMU when executing | ||
150 | + * the test, and the remaining elements represent the actual objects | ||
151 | + * that will be allocated. | ||
152 | + */ | ||
153 | +void *allocate_objects(QTestState *qts, char **path, QGuestAllocator **p_alloc) | ||
154 | +{ | ||
155 | + int current = 0; | ||
156 | + QGuestAllocator *alloc; | ||
157 | + QOSGraphObject *parent = NULL; | ||
158 | + QOSGraphEdge *edge; | ||
159 | + QOSGraphNode *node; | ||
160 | + void *edge_arg; | ||
161 | + void *obj; | ||
162 | + | ||
163 | + node = qos_graph_get_node(path[current]); | ||
164 | + g_assert(node->type == QNODE_MACHINE); | ||
165 | + | ||
166 | + obj = qos_machine_new(node, qts); | ||
167 | + qos_object_queue_destroy(obj); | ||
168 | + | ||
169 | + alloc = get_machine_allocator(obj); | ||
170 | + if (p_alloc) { | ||
171 | + *p_alloc = alloc; | ||
172 | + } | ||
173 | + | ||
174 | + for (;;) { | 92 | + for (;;) { |
175 | + if (node->type != QNODE_INTERFACE) { | 93 | + assert(s->reply.handle == 0); |
176 | + qos_object_start_hw(obj); | 94 | ret = nbd_receive_reply(s->ioc, &s->reply); |
177 | + parent = obj; | 95 | - if (ret == -EAGAIN) { |
178 | + } | 96 | - return; |
179 | + | 97 | - } |
180 | + /* follow edge and get object for next node constructor */ | 98 | if (ret < 0) { |
181 | + current++; | 99 | - s->reply.handle = 0; |
182 | + edge = qos_graph_get_edge(path[current - 1], path[current]); | 100 | - goto fail; |
183 | + node = qos_graph_get_node(path[current]); | ||
184 | + | ||
185 | + if (node->type == QNODE_TEST) { | ||
186 | + g_assert(qos_graph_edge_get_type(edge) == QEDGE_CONSUMED_BY); | ||
187 | + return obj; | ||
188 | + } | ||
189 | + | ||
190 | + switch (qos_graph_edge_get_type(edge)) { | ||
191 | + case QEDGE_PRODUCES: | ||
192 | + obj = parent->get_driver(parent, path[current]); | ||
193 | + break; | 101 | + break; |
194 | + | 102 | } |
195 | + case QEDGE_CONSUMED_BY: | 103 | - } |
196 | + edge_arg = qos_graph_edge_get_arg(edge); | 104 | |
197 | + obj = qos_driver_new(node, obj, alloc, edge_arg); | 105 | - /* There's no need for a mutex on the receive side, because the |
198 | + qos_object_queue_destroy(obj); | 106 | - * handler acts as a synchronization point and ensures that only |
199 | + break; | 107 | - * one coroutine is called until the reply finishes. */ |
200 | + | 108 | - i = HANDLE_TO_INDEX(s, s->reply.handle); |
201 | + case QEDGE_CONTAINS: | 109 | - if (i >= MAX_NBD_REQUESTS) { |
202 | + obj = parent->get_device(parent, path[current]); | 110 | - goto fail; |
111 | - } | ||
112 | + /* There's no need for a mutex on the receive side, because the | ||
113 | + * handler acts as a synchronization point and ensures that only | ||
114 | + * one coroutine is called until the reply finishes. | ||
115 | + */ | ||
116 | + i = HANDLE_TO_INDEX(s, s->reply.handle); | ||
117 | + if (i >= MAX_NBD_REQUESTS || !s->recv_coroutine[i]) { | ||
203 | + break; | 118 | + break; |
204 | + } | 119 | + } |
205 | + } | 120 | |
206 | +} | 121 | - if (s->recv_coroutine[i]) { |
122 | - qemu_coroutine_enter(s->recv_coroutine[i]); | ||
123 | - return; | ||
124 | + /* We're woken up by the recv_coroutine itself. Note that there | ||
125 | + * is no race between yielding and reentering read_reply_co. This | ||
126 | + * is because: | ||
127 | + * | ||
128 | + * - if recv_coroutine[i] runs on the same AioContext, it is only | ||
129 | + * entered after we yield | ||
130 | + * | ||
131 | + * - if recv_coroutine[i] runs on a different AioContext, reentering | ||
132 | + * read_reply_co happens through a bottom half, which can only | ||
133 | + * run after we yield. | ||
134 | + */ | ||
135 | + aio_co_wake(s->recv_coroutine[i]); | ||
136 | + qemu_coroutine_yield(); | ||
137 | } | ||
138 | - | ||
139 | -fail: | ||
140 | - nbd_teardown_connection(bs); | ||
141 | -} | ||
142 | - | ||
143 | -static void nbd_restart_write(void *opaque) | ||
144 | -{ | ||
145 | - BlockDriverState *bs = opaque; | ||
146 | - | ||
147 | - qemu_coroutine_enter(nbd_get_client_session(bs)->send_coroutine); | ||
148 | + s->read_reply_co = NULL; | ||
149 | } | ||
150 | |||
151 | static int nbd_co_send_request(BlockDriverState *bs, | ||
152 | @@ -XXX,XX +XXX,XX @@ static int nbd_co_send_request(BlockDriverState *bs, | ||
153 | QEMUIOVector *qiov) | ||
154 | { | ||
155 | NBDClientSession *s = nbd_get_client_session(bs); | ||
156 | - AioContext *aio_context; | ||
157 | int rc, ret, i; | ||
158 | |||
159 | qemu_co_mutex_lock(&s->send_mutex); | ||
160 | @@ -XXX,XX +XXX,XX @@ static int nbd_co_send_request(BlockDriverState *bs, | ||
161 | return -EPIPE; | ||
162 | } | ||
163 | |||
164 | - s->send_coroutine = qemu_coroutine_self(); | ||
165 | - aio_context = bdrv_get_aio_context(bs); | ||
166 | - | ||
167 | - aio_set_fd_handler(aio_context, s->sioc->fd, false, | ||
168 | - nbd_reply_ready, nbd_restart_write, NULL, bs); | ||
169 | if (qiov) { | ||
170 | qio_channel_set_cork(s->ioc, true); | ||
171 | rc = nbd_send_request(s->ioc, request); | ||
172 | @@ -XXX,XX +XXX,XX @@ static int nbd_co_send_request(BlockDriverState *bs, | ||
173 | } else { | ||
174 | rc = nbd_send_request(s->ioc, request); | ||
175 | } | ||
176 | - aio_set_fd_handler(aio_context, s->sioc->fd, false, | ||
177 | - nbd_reply_ready, NULL, NULL, bs); | ||
178 | - s->send_coroutine = NULL; | ||
179 | qemu_co_mutex_unlock(&s->send_mutex); | ||
180 | return rc; | ||
181 | } | ||
182 | @@ -XXX,XX +XXX,XX @@ static void nbd_co_receive_reply(NBDClientSession *s, | ||
183 | { | ||
184 | int ret; | ||
185 | |||
186 | - /* Wait until we're woken up by the read handler. TODO: perhaps | ||
187 | - * peek at the next reply and avoid yielding if it's ours? */ | ||
188 | + /* Wait until we're woken up by nbd_read_reply_entry. */ | ||
189 | qemu_coroutine_yield(); | ||
190 | *reply = s->reply; | ||
191 | if (reply->handle != request->handle || | ||
192 | @@ -XXX,XX +XXX,XX @@ static void nbd_coroutine_start(NBDClientSession *s, | ||
193 | /* s->recv_coroutine[i] is set as soon as we get the send_lock. */ | ||
194 | } | ||
195 | |||
196 | -static void nbd_coroutine_end(NBDClientSession *s, | ||
197 | +static void nbd_coroutine_end(BlockDriverState *bs, | ||
198 | NBDRequest *request) | ||
199 | { | ||
200 | + NBDClientSession *s = nbd_get_client_session(bs); | ||
201 | int i = HANDLE_TO_INDEX(s, request->handle); | ||
207 | + | 202 | + |
208 | diff --git a/tests/qtest/libqos/qos_external.h b/tests/qtest/libqos/qos_external.h | 203 | s->recv_coroutine[i] = NULL; |
209 | new file mode 100644 | 204 | - if (s->in_flight-- == MAX_NBD_REQUESTS) { |
210 | index XXXXXXX..XXXXXXX | 205 | - qemu_co_queue_next(&s->free_sema); |
211 | --- /dev/null | 206 | + s->in_flight--; |
212 | +++ b/tests/qtest/libqos/qos_external.h | 207 | + qemu_co_queue_next(&s->free_sema); |
213 | @@ -XXX,XX +XXX,XX @@ | ||
214 | +/* | ||
215 | + * libqos driver framework | ||
216 | + * | ||
217 | + * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com> | ||
218 | + * | ||
219 | + * This library is free software; you can redistribute it and/or | ||
220 | + * modify it under the terms of the GNU Lesser General Public | ||
221 | + * License version 2 as published by the Free Software Foundation. | ||
222 | + * | ||
223 | + * This library is distributed in the hope that it will be useful, | ||
224 | + * but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
225 | + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU | ||
226 | + * Lesser General Public License for more details. | ||
227 | + * | ||
228 | + * You should have received a copy of the GNU Lesser General Public | ||
229 | + * License along with this library; if not, see <http://www.gnu.org/licenses/> | ||
230 | + */ | ||
231 | + | 208 | + |
232 | +#ifndef QOS_EXTERNAL_H | 209 | + /* Kick the read_reply_co to get the next reply. */ |
233 | +#define QOS_EXTERNAL_H | 210 | + if (s->read_reply_co) { |
234 | +#include "libqos/qgraph.h" | 211 | + aio_co_wake(s->read_reply_co); |
212 | } | ||
213 | } | ||
214 | |||
215 | @@ -XXX,XX +XXX,XX @@ int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset, | ||
216 | } else { | ||
217 | nbd_co_receive_reply(client, &request, &reply, qiov); | ||
218 | } | ||
219 | - nbd_coroutine_end(client, &request); | ||
220 | + nbd_coroutine_end(bs, &request); | ||
221 | return -reply.error; | ||
222 | } | ||
223 | |||
224 | @@ -XXX,XX +XXX,XX @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset, | ||
225 | } else { | ||
226 | nbd_co_receive_reply(client, &request, &reply, NULL); | ||
227 | } | ||
228 | - nbd_coroutine_end(client, &request); | ||
229 | + nbd_coroutine_end(bs, &request); | ||
230 | return -reply.error; | ||
231 | } | ||
232 | |||
233 | @@ -XXX,XX +XXX,XX @@ int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset, | ||
234 | } else { | ||
235 | nbd_co_receive_reply(client, &request, &reply, NULL); | ||
236 | } | ||
237 | - nbd_coroutine_end(client, &request); | ||
238 | + nbd_coroutine_end(bs, &request); | ||
239 | return -reply.error; | ||
240 | } | ||
241 | |||
242 | @@ -XXX,XX +XXX,XX @@ int nbd_client_co_flush(BlockDriverState *bs) | ||
243 | } else { | ||
244 | nbd_co_receive_reply(client, &request, &reply, NULL); | ||
245 | } | ||
246 | - nbd_coroutine_end(client, &request); | ||
247 | + nbd_coroutine_end(bs, &request); | ||
248 | return -reply.error; | ||
249 | } | ||
250 | |||
251 | @@ -XXX,XX +XXX,XX @@ int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int count) | ||
252 | } else { | ||
253 | nbd_co_receive_reply(client, &request, &reply, NULL); | ||
254 | } | ||
255 | - nbd_coroutine_end(client, &request); | ||
256 | + nbd_coroutine_end(bs, &request); | ||
257 | return -reply.error; | ||
258 | |||
259 | } | ||
260 | |||
261 | void nbd_client_detach_aio_context(BlockDriverState *bs) | ||
262 | { | ||
263 | - aio_set_fd_handler(bdrv_get_aio_context(bs), | ||
264 | - nbd_get_client_session(bs)->sioc->fd, | ||
265 | - false, NULL, NULL, NULL, NULL); | ||
266 | + NBDClientSession *client = nbd_get_client_session(bs); | ||
267 | + qio_channel_detach_aio_context(QIO_CHANNEL(client->sioc)); | ||
268 | } | ||
269 | |||
270 | void nbd_client_attach_aio_context(BlockDriverState *bs, | ||
271 | AioContext *new_context) | ||
272 | { | ||
273 | - aio_set_fd_handler(new_context, nbd_get_client_session(bs)->sioc->fd, | ||
274 | - false, nbd_reply_ready, NULL, NULL, bs); | ||
275 | + NBDClientSession *client = nbd_get_client_session(bs); | ||
276 | + qio_channel_attach_aio_context(QIO_CHANNEL(client->sioc), new_context); | ||
277 | + aio_co_schedule(new_context, client->read_reply_co); | ||
278 | } | ||
279 | |||
280 | void nbd_client_close(BlockDriverState *bs) | ||
281 | @@ -XXX,XX +XXX,XX @@ int nbd_client_init(BlockDriverState *bs, | ||
282 | /* Now that we're connected, set the socket to be non-blocking and | ||
283 | * kick the reply mechanism. */ | ||
284 | qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL); | ||
285 | - | ||
286 | + client->read_reply_co = qemu_coroutine_create(nbd_read_reply_entry, client); | ||
287 | nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs)); | ||
288 | |||
289 | logout("Established connection with NBD server\n"); | ||
290 | diff --git a/nbd/client.c b/nbd/client.c | ||
291 | index XXXXXXX..XXXXXXX 100644 | ||
292 | --- a/nbd/client.c | ||
293 | +++ b/nbd/client.c | ||
294 | @@ -XXX,XX +XXX,XX @@ ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply) | ||
295 | ssize_t ret; | ||
296 | |||
297 | ret = read_sync(ioc, buf, sizeof(buf)); | ||
298 | - if (ret < 0) { | ||
299 | + if (ret <= 0) { | ||
300 | return ret; | ||
301 | } | ||
302 | |||
303 | diff --git a/nbd/common.c b/nbd/common.c | ||
304 | index XXXXXXX..XXXXXXX 100644 | ||
305 | --- a/nbd/common.c | ||
306 | +++ b/nbd/common.c | ||
307 | @@ -XXX,XX +XXX,XX @@ ssize_t nbd_wr_syncv(QIOChannel *ioc, | ||
308 | } | ||
309 | if (len == QIO_CHANNEL_ERR_BLOCK) { | ||
310 | if (qemu_in_coroutine()) { | ||
311 | - /* XXX figure out if we can create a variant on | ||
312 | - * qio_channel_yield() that works with AIO contexts | ||
313 | - * and consider using that in this branch */ | ||
314 | - qemu_coroutine_yield(); | ||
315 | - } else if (done) { | ||
316 | - /* XXX this is needed by nbd_reply_ready. */ | ||
317 | - qio_channel_wait(ioc, | ||
318 | - do_read ? G_IO_IN : G_IO_OUT); | ||
319 | + qio_channel_yield(ioc, do_read ? G_IO_IN : G_IO_OUT); | ||
320 | } else { | ||
321 | return -EAGAIN; | ||
322 | } | ||
323 | diff --git a/nbd/server.c b/nbd/server.c | ||
324 | index XXXXXXX..XXXXXXX 100644 | ||
325 | --- a/nbd/server.c | ||
326 | +++ b/nbd/server.c | ||
327 | @@ -XXX,XX +XXX,XX @@ struct NBDClient { | ||
328 | CoMutex send_lock; | ||
329 | Coroutine *send_coroutine; | ||
330 | |||
331 | - bool can_read; | ||
332 | - | ||
333 | QTAILQ_ENTRY(NBDClient) next; | ||
334 | int nb_requests; | ||
335 | bool closing; | ||
336 | @@ -XXX,XX +XXX,XX @@ struct NBDClient { | ||
337 | |||
338 | /* That's all folks */ | ||
339 | |||
340 | -static void nbd_set_handlers(NBDClient *client); | ||
341 | -static void nbd_unset_handlers(NBDClient *client); | ||
342 | -static void nbd_update_can_read(NBDClient *client); | ||
343 | +static void nbd_client_receive_next_request(NBDClient *client); | ||
344 | |||
345 | static gboolean nbd_negotiate_continue(QIOChannel *ioc, | ||
346 | GIOCondition condition, | ||
347 | @@ -XXX,XX +XXX,XX @@ void nbd_client_put(NBDClient *client) | ||
348 | */ | ||
349 | assert(client->closing); | ||
350 | |||
351 | - nbd_unset_handlers(client); | ||
352 | + qio_channel_detach_aio_context(client->ioc); | ||
353 | object_unref(OBJECT(client->sioc)); | ||
354 | object_unref(OBJECT(client->ioc)); | ||
355 | if (client->tlscreds) { | ||
356 | @@ -XXX,XX +XXX,XX @@ static NBDRequestData *nbd_request_get(NBDClient *client) | ||
357 | |||
358 | assert(client->nb_requests <= MAX_NBD_REQUESTS - 1); | ||
359 | client->nb_requests++; | ||
360 | - nbd_update_can_read(client); | ||
361 | |||
362 | req = g_new0(NBDRequestData, 1); | ||
363 | nbd_client_get(client); | ||
364 | @@ -XXX,XX +XXX,XX @@ static void nbd_request_put(NBDRequestData *req) | ||
365 | g_free(req); | ||
366 | |||
367 | client->nb_requests--; | ||
368 | - nbd_update_can_read(client); | ||
369 | + nbd_client_receive_next_request(client); | ||
235 | + | 370 | + |
236 | +void apply_to_node(const char *name, bool is_machine, bool is_abstract); | 371 | nbd_client_put(client); |
237 | +void apply_to_qlist(QList *list, bool is_machine); | 372 | } |
238 | +QGuestAllocator *get_machine_allocator(QOSGraphObject *obj); | 373 | |
239 | +void *allocate_objects(QTestState *qts, char **path, QGuestAllocator **p_alloc); | 374 | @@ -XXX,XX +XXX,XX @@ static void blk_aio_attached(AioContext *ctx, void *opaque) |
240 | + | 375 | exp->ctx = ctx; |
241 | +#endif | 376 | |
242 | diff --git a/tests/qtest/qos-test.c b/tests/qtest/qos-test.c | 377 | QTAILQ_FOREACH(client, &exp->clients, next) { |
243 | index XXXXXXX..XXXXXXX 100644 | 378 | - nbd_set_handlers(client); |
244 | --- a/tests/qtest/qos-test.c | 379 | + qio_channel_attach_aio_context(client->ioc, ctx); |
245 | +++ b/tests/qtest/qos-test.c | 380 | + if (client->recv_coroutine) { |
246 | @@ -XXX,XX +XXX,XX @@ | 381 | + aio_co_schedule(ctx, client->recv_coroutine); |
247 | #include "libqos/malloc.h" | 382 | + } |
248 | #include "libqos/qgraph.h" | 383 | + if (client->send_coroutine) { |
249 | #include "libqos/qgraph_internal.h" | 384 | + aio_co_schedule(ctx, client->send_coroutine); |
250 | +#include "libqos/qos_external.h" | 385 | + } |
251 | 386 | } | |
252 | static char *old_path; | 387 | } |
253 | 388 | ||
254 | -static void apply_to_node(const char *name, bool is_machine, bool is_abstract) | 389 | @@ -XXX,XX +XXX,XX @@ static void blk_aio_detach(void *opaque) |
255 | -{ | 390 | TRACE("Export %s: Detaching clients from AIO context %p\n", exp->name, exp->ctx); |
256 | - char *machine_name = NULL; | 391 | |
257 | - if (is_machine) { | 392 | QTAILQ_FOREACH(client, &exp->clients, next) { |
258 | - const char *arch = qtest_get_arch(); | 393 | - nbd_unset_handlers(client); |
259 | - machine_name = g_strconcat(arch, "/", name, NULL); | 394 | + qio_channel_detach_aio_context(client->ioc); |
260 | - name = machine_name; | 395 | } |
261 | - } | 396 | |
262 | - qos_graph_node_set_availability(name, true); | 397 | exp->ctx = NULL; |
263 | - if (is_abstract) { | 398 | @@ -XXX,XX +XXX,XX @@ static ssize_t nbd_co_send_reply(NBDRequestData *req, NBDReply *reply, |
264 | - qos_delete_cmd_line(name); | 399 | g_assert(qemu_in_coroutine()); |
265 | - } | 400 | qemu_co_mutex_lock(&client->send_lock); |
266 | - g_free(machine_name); | 401 | client->send_coroutine = qemu_coroutine_self(); |
267 | -} | 402 | - nbd_set_handlers(client); |
268 | 403 | ||
269 | -/** | 404 | if (!len) { |
270 | - * apply_to_qlist(): using QMP queries QEMU for a list of | 405 | rc = nbd_send_reply(client->ioc, reply); |
271 | - * machines and devices available, and sets the respective node | 406 | @@ -XXX,XX +XXX,XX @@ static ssize_t nbd_co_send_reply(NBDRequestData *req, NBDReply *reply, |
272 | - * as true. If a node is found, also all its produced and contained | 407 | } |
273 | - * child are marked available. | 408 | |
274 | - * | 409 | client->send_coroutine = NULL; |
275 | - * See qos_graph_node_set_availability() for more info | 410 | - nbd_set_handlers(client); |
276 | - */ | 411 | qemu_co_mutex_unlock(&client->send_lock); |
277 | -static void apply_to_qlist(QList *list, bool is_machine) | 412 | return rc; |
278 | -{ | 413 | } |
279 | - const QListEntry *p; | 414 | @@ -XXX,XX +XXX,XX @@ static ssize_t nbd_co_receive_request(NBDRequestData *req, |
280 | - const char *name; | 415 | ssize_t rc; |
281 | - bool abstract; | 416 | |
282 | - QDict *minfo; | 417 | g_assert(qemu_in_coroutine()); |
283 | - QObject *qobj; | 418 | - client->recv_coroutine = qemu_coroutine_self(); |
284 | - QString *qstr; | 419 | - nbd_update_can_read(client); |
285 | - QBool *qbool; | 420 | - |
286 | - | 421 | + assert(client->recv_coroutine == qemu_coroutine_self()); |
287 | - for (p = qlist_first(list); p; p = qlist_next(p)) { | 422 | rc = nbd_receive_request(client->ioc, request); |
288 | - minfo = qobject_to(QDict, qlist_entry_obj(p)); | 423 | if (rc < 0) { |
289 | - qobj = qdict_get(minfo, "name"); | 424 | if (rc != -EAGAIN) { |
290 | - qstr = qobject_to(QString, qobj); | 425 | @@ -XXX,XX +XXX,XX @@ static ssize_t nbd_co_receive_request(NBDRequestData *req, |
291 | - name = qstring_get_str(qstr); | 426 | |
292 | - | 427 | out: |
293 | - qobj = qdict_get(minfo, "abstract"); | 428 | client->recv_coroutine = NULL; |
294 | - if (qobj) { | 429 | - nbd_update_can_read(client); |
295 | - qbool = qobject_to(QBool, qobj); | 430 | + nbd_client_receive_next_request(client); |
296 | - abstract = qbool_get_bool(qbool); | 431 | |
297 | - } else { | 432 | return rc; |
298 | - abstract = false; | 433 | } |
299 | - } | 434 | |
300 | - | 435 | -static void nbd_trip(void *opaque) |
301 | - apply_to_node(name, is_machine, abstract); | 436 | +/* Owns a reference to the NBDClient passed as opaque. */ |
302 | - qobj = qdict_get(minfo, "alias"); | 437 | +static coroutine_fn void nbd_trip(void *opaque) |
303 | - if (qobj) { | 438 | { |
304 | - qstr = qobject_to(QString, qobj); | 439 | NBDClient *client = opaque; |
305 | - name = qstring_get_str(qstr); | 440 | NBDExport *exp = client->exp; |
306 | - apply_to_node(name, is_machine, abstract); | 441 | NBDRequestData *req; |
307 | - } | 442 | - NBDRequest request; |
443 | + NBDRequest request = { 0 }; /* GCC thinks it can be used uninitialized */ | ||
444 | NBDReply reply; | ||
445 | ssize_t ret; | ||
446 | int flags; | ||
447 | |||
448 | TRACE("Reading request."); | ||
449 | if (client->closing) { | ||
450 | + nbd_client_put(client); | ||
451 | return; | ||
452 | } | ||
453 | |||
454 | @@ -XXX,XX +XXX,XX @@ static void nbd_trip(void *opaque) | ||
455 | |||
456 | done: | ||
457 | nbd_request_put(req); | ||
458 | + nbd_client_put(client); | ||
459 | return; | ||
460 | |||
461 | out: | ||
462 | nbd_request_put(req); | ||
463 | client_close(client); | ||
464 | + nbd_client_put(client); | ||
465 | } | ||
466 | |||
467 | -static void nbd_read(void *opaque) | ||
468 | +static void nbd_client_receive_next_request(NBDClient *client) | ||
469 | { | ||
470 | - NBDClient *client = opaque; | ||
471 | - | ||
472 | - if (client->recv_coroutine) { | ||
473 | - qemu_coroutine_enter(client->recv_coroutine); | ||
474 | - } else { | ||
475 | - qemu_coroutine_enter(qemu_coroutine_create(nbd_trip, client)); | ||
308 | - } | 476 | - } |
309 | -} | 477 | -} |
310 | 478 | - | |
311 | /** | 479 | -static void nbd_restart_write(void *opaque) |
312 | * qos_set_machines_devices_available(): sets availability of qgraph | ||
313 | @@ -XXX,XX +XXX,XX @@ static void qos_set_machines_devices_available(void) | ||
314 | qobject_unref(response); | ||
315 | } | ||
316 | |||
317 | -static QGuestAllocator *get_machine_allocator(QOSGraphObject *obj) | ||
318 | -{ | 480 | -{ |
319 | - return obj->get_driver(obj, "memory"); | 481 | - NBDClient *client = opaque; |
482 | - | ||
483 | - qemu_coroutine_enter(client->send_coroutine); | ||
320 | -} | 484 | -} |
321 | 485 | - | |
322 | static void restart_qemu_or_continue(char *path) | 486 | -static void nbd_set_handlers(NBDClient *client) |
323 | { | ||
324 | @@ -XXX,XX +XXX,XX @@ void qos_invalidate_command_line(void) | ||
325 | old_path = NULL; | ||
326 | } | ||
327 | |||
328 | -/** | ||
329 | - * allocate_objects(): given an array of nodes @arg, | ||
330 | - * walks the path invoking all constructors and | ||
331 | - * passing the corresponding parameter in order to | ||
332 | - * continue the objects allocation. | ||
333 | - * Once the test is reached, return the object it consumes. | ||
334 | - * | ||
335 | - * Since the machine and QEDGE_CONSUMED_BY nodes allocate | ||
336 | - * memory in the constructor, g_test_queue_destroy is used so | ||
337 | - * that after execution they can be safely free'd. (The test's | ||
338 | - * ->before callback is also welcome to use g_test_queue_destroy). | ||
339 | - * | ||
340 | - * Note: as specified in walk_path() too, @arg is an array of | ||
341 | - * char *, where arg[0] is a pointer to the command line | ||
342 | - * string that will be used to properly start QEMU when executing | ||
343 | - * the test, and the remaining elements represent the actual objects | ||
344 | - * that will be allocated. | ||
345 | - */ | ||
346 | -static void *allocate_objects(QTestState *qts, char **path, QGuestAllocator **p_alloc) | ||
347 | -{ | 487 | -{ |
348 | - int current = 0; | 488 | - if (client->exp && client->exp->ctx) { |
349 | - QGuestAllocator *alloc; | 489 | - aio_set_fd_handler(client->exp->ctx, client->sioc->fd, true, |
350 | - QOSGraphObject *parent = NULL; | 490 | - client->can_read ? nbd_read : NULL, |
351 | - QOSGraphEdge *edge; | 491 | - client->send_coroutine ? nbd_restart_write : NULL, |
352 | - QOSGraphNode *node; | 492 | - NULL, client); |
353 | - void *edge_arg; | ||
354 | - void *obj; | ||
355 | - | ||
356 | - node = qos_graph_get_node(path[current]); | ||
357 | - g_assert(node->type == QNODE_MACHINE); | ||
358 | - | ||
359 | - obj = qos_machine_new(node, qts); | ||
360 | - qos_object_queue_destroy(obj); | ||
361 | - | ||
362 | - alloc = get_machine_allocator(obj); | ||
363 | - if (p_alloc) { | ||
364 | - *p_alloc = alloc; | ||
365 | - } | ||
366 | - | ||
367 | - for (;;) { | ||
368 | - if (node->type != QNODE_INTERFACE) { | ||
369 | - qos_object_start_hw(obj); | ||
370 | - parent = obj; | ||
371 | - } | ||
372 | - | ||
373 | - /* follow edge and get object for next node constructor */ | ||
374 | - current++; | ||
375 | - edge = qos_graph_get_edge(path[current - 1], path[current]); | ||
376 | - node = qos_graph_get_node(path[current]); | ||
377 | - | ||
378 | - if (node->type == QNODE_TEST) { | ||
379 | - g_assert(qos_graph_edge_get_type(edge) == QEDGE_CONSUMED_BY); | ||
380 | - return obj; | ||
381 | - } | ||
382 | - | ||
383 | - switch (qos_graph_edge_get_type(edge)) { | ||
384 | - case QEDGE_PRODUCES: | ||
385 | - obj = parent->get_driver(parent, path[current]); | ||
386 | - break; | ||
387 | - | ||
388 | - case QEDGE_CONSUMED_BY: | ||
389 | - edge_arg = qos_graph_edge_get_arg(edge); | ||
390 | - obj = qos_driver_new(node, obj, alloc, edge_arg); | ||
391 | - qos_object_queue_destroy(obj); | ||
392 | - break; | ||
393 | - | ||
394 | - case QEDGE_CONTAINS: | ||
395 | - obj = parent->get_device(parent, path[current]); | ||
396 | - break; | ||
397 | - } | ||
398 | - } | 493 | - } |
399 | -} | 494 | -} |
400 | 495 | - | |
401 | /* The argument to run_one_test, which is the test function that is registered | 496 | -static void nbd_unset_handlers(NBDClient *client) |
402 | * with GTest, is a vector of strings. The first item is the initial command | 497 | -{ |
498 | - if (client->exp && client->exp->ctx) { | ||
499 | - aio_set_fd_handler(client->exp->ctx, client->sioc->fd, true, NULL, | ||
500 | - NULL, NULL, NULL); | ||
501 | - } | ||
502 | -} | ||
503 | - | ||
504 | -static void nbd_update_can_read(NBDClient *client) | ||
505 | -{ | ||
506 | - bool can_read = client->recv_coroutine || | ||
507 | - client->nb_requests < MAX_NBD_REQUESTS; | ||
508 | - | ||
509 | - if (can_read != client->can_read) { | ||
510 | - client->can_read = can_read; | ||
511 | - nbd_set_handlers(client); | ||
512 | - | ||
513 | - /* There is no need to invoke aio_notify(), since aio_set_fd_handler() | ||
514 | - * in nbd_set_handlers() will have taken care of that */ | ||
515 | + if (!client->recv_coroutine && client->nb_requests < MAX_NBD_REQUESTS) { | ||
516 | + nbd_client_get(client); | ||
517 | + client->recv_coroutine = qemu_coroutine_create(nbd_trip, client); | ||
518 | + aio_co_schedule(client->exp->ctx, client->recv_coroutine); | ||
519 | } | ||
520 | } | ||
521 | |||
522 | @@ -XXX,XX +XXX,XX @@ static coroutine_fn void nbd_co_client_start(void *opaque) | ||
523 | goto out; | ||
524 | } | ||
525 | qemu_co_mutex_init(&client->send_lock); | ||
526 | - nbd_set_handlers(client); | ||
527 | |||
528 | if (exp) { | ||
529 | QTAILQ_INSERT_TAIL(&exp->clients, client, next); | ||
530 | } | ||
531 | + | ||
532 | + nbd_client_receive_next_request(client); | ||
533 | + | ||
534 | out: | ||
535 | g_free(data); | ||
536 | } | ||
537 | @@ -XXX,XX +XXX,XX @@ void nbd_client_new(NBDExport *exp, | ||
538 | object_ref(OBJECT(client->sioc)); | ||
539 | client->ioc = QIO_CHANNEL(sioc); | ||
540 | object_ref(OBJECT(client->ioc)); | ||
541 | - client->can_read = true; | ||
542 | client->close = close_fn; | ||
543 | |||
544 | data->client = client; | ||
403 | -- | 545 | -- |
404 | 2.24.1 | 546 | 2.9.3 |
405 | 547 | ||
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 3 | As a small step towards the introduction of multiqueue, we want |
4 | coroutines to remain on the same AioContext that started them, | ||
5 | unless they are moved explicitly with e.g. aio_co_schedule. This patch | ||
6 | avoids that coroutines switch AioContext when they use a CoMutex. | ||
7 | For now it does not make much of a difference, because the CoMutex | ||
8 | is not thread-safe and the AioContext itself is used to protect the | ||
9 | CoMutex from concurrent access. However, this is going to change. | ||
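
For illustration only (this sketch is not part of the patch, and the real
aio_co_wake() also handles locking and recursive wake-ups), the dispatch
rule the series relies on is roughly:

    /* Simplified sketch of the aio_co_wake() dispatch rule. */
    void sketch_co_wake(Coroutine *co)
    {
        /* the AioContext the coroutine last ran in */
        AioContext *ctx = qemu_coroutine_get_aio_context(co);

        if (ctx == qemu_get_current_aio_context()) {
            qemu_coroutine_enter(co);  /* same context: enter directly */
        } else {
            aio_co_schedule(ctx, co);  /* cross-context: via a bottom half */
        }
    }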
10 | |||
4 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 11 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
5 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 12 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
6 | Message-id: 20200220041118.23264-23-alxndr@bu.edu | 13 | Reviewed-by: Fam Zheng <famz@redhat.com> |
14 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> | ||
15 | Message-id: 20170213135235.12274-9-pbonzini@redhat.com | ||
7 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 16 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
8 | --- | 17 | --- |
9 | docs/devel/fuzzing.txt | 116 +++++++++++++++++++++++++++++++++++++++++ | 18 | util/qemu-coroutine-lock.c | 5 ++--- |
10 | 1 file changed, 116 insertions(+) | 19 | util/trace-events | 1 - |
11 | create mode 100644 docs/devel/fuzzing.txt | 20 | 2 files changed, 2 insertions(+), 4 deletions(-) |
12 | 21 | ||
13 | diff --git a/docs/devel/fuzzing.txt b/docs/devel/fuzzing.txt | 22 | diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c |
14 | new file mode 100644 | 23 | index XXXXXXX..XXXXXXX 100644 |
15 | index XXXXXXX..XXXXXXX | 24 | --- a/util/qemu-coroutine-lock.c |
16 | --- /dev/null | 25 | +++ b/util/qemu-coroutine-lock.c |
17 | +++ b/docs/devel/fuzzing.txt | ||
18 | @@ -XXX,XX +XXX,XX @@ | 26 | @@ -XXX,XX +XXX,XX @@ |
19 | += Fuzzing = | 27 | #include "qemu/coroutine.h" |
20 | + | 28 | #include "qemu/coroutine_int.h" |
21 | +== Introduction == | 29 | #include "qemu/queue.h" |
22 | + | 30 | +#include "block/aio.h" |
23 | +This document describes the virtual-device fuzzing infrastructure in QEMU and | 31 | #include "trace.h" |
24 | +how to use it to implement additional fuzzers. | 32 | |
25 | + | 33 | void qemu_co_queue_init(CoQueue *queue) |
26 | +== Basics == | 34 | @@ -XXX,XX +XXX,XX @@ void qemu_co_queue_run_restart(Coroutine *co) |
27 | + | 35 | |
28 | +Fuzzing operates by passing inputs to an entry point/target function. The | 36 | static bool qemu_co_queue_do_restart(CoQueue *queue, bool single) |
29 | +fuzzer tracks the code coverage triggered by the input. Based on these | 37 | { |
30 | +findings, the fuzzer mutates the input and repeats the fuzzing. | 38 | - Coroutine *self = qemu_coroutine_self(); |
31 | + | 39 | Coroutine *next; |
32 | +To fuzz QEMU, we rely on libfuzzer. Unlike other fuzzers such as AFL, libfuzzer | 40 | |
33 | +is an _in-process_ fuzzer. For the developer, this means that it is their | 41 | if (QSIMPLEQ_EMPTY(&queue->entries)) { |
34 | +responsibility to ensure that state is reset between fuzzing-runs. | 42 | @@ -XXX,XX +XXX,XX @@ static bool qemu_co_queue_do_restart(CoQueue *queue, bool single) |
35 | + | 43 | |
36 | +== Building the fuzzers == | 44 | while ((next = QSIMPLEQ_FIRST(&queue->entries)) != NULL) { |
37 | + | 45 | QSIMPLEQ_REMOVE_HEAD(&queue->entries, co_queue_next); |
38 | +NOTE: If possible, build a 32-bit binary. When forking, the 32-bit fuzzer is | 46 | - QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, next, co_queue_next); |
39 | +much faster, since the page-map has a smaller size. This is due to the fact that | 47 | - trace_qemu_co_queue_next(next); |
40 | +AddressSanitizer mmaps ~20TB of memory, as part of its detection. This results | 48 | + aio_co_wake(next); |
41 | +in a large page-map, and a much slower fork(). | 49 | if (single) { |
42 | + | 50 | break; |
43 | +To build the fuzzers, install a recent version of clang, then | 51 | }
44 | +configure with (substitute the clang binaries with the version you installed): | 52 | diff --git a/util/trace-events b/util/trace-events
45 | + | 53 | index XXXXXXX..XXXXXXX 100644 |
46 | + CC=clang-8 CXX=clang++-8 /path/to/configure --enable-fuzzing | 54 | --- a/util/trace-events |
47 | + | 55 | +++ b/util/trace-events |
48 | +Fuzz targets are built similarly to system/softmmu: | 56 | @@ -XXX,XX +XXX,XX @@ qemu_coroutine_terminate(void *co) "self %p" |
49 | + | 57 | |
50 | + make i386-softmmu/fuzz | 58 | # util/qemu-coroutine-lock.c |
51 | + | 59 | qemu_co_queue_run_restart(void *co) "co %p" |
52 | +This builds ./i386-softmmu/qemu-fuzz-i386 | 60 | -qemu_co_queue_next(void *nxt) "next %p" |
53 | + | 61 | qemu_co_mutex_lock_entry(void *mutex, void *self) "mutex %p self %p" |
54 | +The first option to this command is: --fuzz-target=FUZZ_NAME | 62 | qemu_co_mutex_lock_return(void *mutex, void *self) "mutex %p self %p"
55 | +To list all of the available fuzzers, run qemu-fuzz-i386 with no arguments. | 63 | qemu_co_mutex_unlock_entry(void *mutex, void *self) "mutex %p self %p"
56 | + | ||
57 | +eg: | ||
58 | + ./i386-softmmu/qemu-fuzz-i386 --fuzz-target=virtio-net-fork-fuzz | ||
59 | + | ||
60 | +Internally, libfuzzer parses all arguments that do not begin with "--". | ||
61 | +Information about these is available by passing -help=1 | ||
62 | + | ||
63 | +Now the only thing left to do is wait for the fuzzer to trigger potential | ||
64 | +crashes. | ||
65 | + | ||
66 | +== Adding a new fuzzer == | ||
67 | +Coverage over virtual devices can be improved by adding additional fuzzers. | ||
68 | +Fuzzers are kept in tests/qtest/fuzz/ and should be added to | ||
69 | +tests/qtest/fuzz/Makefile.include | ||
70 | + | ||
71 | +Fuzzers can rely on both qtest and libqos to communicate with virtual devices. | ||
72 | + | ||
73 | +1. Create a new source file. For example ``tests/qtest/fuzz/foo-device-fuzz.c``. | ||
74 | + | ||
75 | +2. Write the fuzzing code using the libqtest/libqos API. See existing fuzzers | ||
76 | +for reference. | ||
77 | + | ||
78 | +3. Register the fuzzer in ``tests/qtest/fuzz/Makefile.include`` by appending
79 | +the corresponding object to fuzz-obj-y.
80 | + | ||
81 | +Fuzzers can be more-or-less thought of as special qtest programs which can | ||
82 | +modify the qtest commands and/or qtest command arguments based on inputs | ||
83 | +provided by libfuzzer. Libfuzzer passes a byte array and length. Commonly the | ||
84 | +fuzzer loops over the byte-array interpreting it as a list of qtest commands, | ||
85 | +addresses, or values. | ||
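+
+As a purely illustrative sketch (foo_device_fuzz is a hypothetical target
+body; see the real targets in tests/qtest/fuzz/ for the exact registration
+API), such a loop can be as small as:
+
+    static void foo_device_fuzz(QTestState *s,
+                                const unsigned char *Data, size_t Size)
+    {
+        size_t i;
+
+        /* Interpret the input as a stream of 3-byte (port, value)
+         * records and replay them as I/O port writes. */
+        for (i = 0; i + 3 <= Size; i += 3) {
+            uint16_t addr = Data[i] | (Data[i + 1] << 8);
+            qtest_outb(s, addr, Data[i + 2]);
+        }
+    }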
86 | + | ||
87 | += Implementation Details = | ||
88 | + | ||
89 | +== The Fuzzer's Lifecycle == | ||
90 | + | ||
91 | +The fuzzer has two entrypoints that libfuzzer calls. libfuzzer provides its
92 | +own main(), which performs some setup, and calls the entrypoints:
93 | + | ||
94 | +LLVMFuzzerInitialize: called prior to fuzzing. Used to initialize all of the
95 | +necessary state.
96 | + | ||
97 | +LLVMFuzzerTestOneInput: called for each fuzzing run. Processes the input and | ||
98 | +resets the state at the end of each run. | ||
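+
+For reference, the standard libfuzzer prototypes (as documented by
+libFuzzer itself) are:
+
+    int LLVMFuzzerInitialize(int *argc, char ***argv);
+    int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size);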
99 | + | ||
100 | +In more detail: | ||
101 | + | ||
102 | +LLVMFuzzerInitialize parses the arguments to the fuzzer (must start with two | ||
103 | +dashes, so they are ignored by libfuzzer main()). Currently, the arguments | ||
104 | +select the fuzz target. Then, the qtest client is initialized. If the target | ||
105 | +requires qos, qgraph is set up and the QOM/LIBQOS modules are initialized. | ||
106 | +Then the QGraph is walked and the QEMU cmd_line is determined and saved. | ||
107 | + | ||
108 | +After this, vl.c:qemu_main is called to set up the guest. There are
109 | +target-specific hooks that can be called before and after qemu_main, for
110 | +additional setup (e.g. PCI setup, or VM snapshotting).
111 | + | ||
112 | +LLVMFuzzerTestOneInput: Uses qtest/qos functions to act based on the fuzz
113 | +input. It is also responsible for manually calling the main loop
114 | +(main_loop_wait) so that bottom halves are executed, and for any cleanup
115 | +required before the next input.
116 | + | ||
117 | +Since the same process is reused for many fuzzing runs, QEMU state needs to | ||
118 | +be reset at the end of each run. There are currently two implemented | ||
119 | +options for resetting state: | ||
120 | +1. Reboot the guest between runs. | ||
121 | + Pros: Straightforward and fast for simple fuzz targets. | ||
122 | + Cons: Depending on the device, does not reset all device state. If the | ||
123 | + device requires some initialization prior to being ready for fuzzing | ||
124 | + (common for QOS-based targets), this initialization needs to be done after | ||
125 | + each reboot. | ||
126 | + Example target: i440fx-qtest-reboot-fuzz | ||
127 | +2. Run each test case in a separate forked process and copy the coverage | ||
128 | + information back to the parent. This is fairly similar to AFL's "deferred" | ||
129 | + fork-server mode [3] | ||
130 | + Pros: Relatively fast. Devices only need to be initialized once. No need | ||
131 | + to do slow reboots or vmloads. | ||
132 | + Cons: Not officially supported by libfuzzer. Does not work well for devices | ||
133 | + that rely on dedicated threads. | ||
134 | + Example target: virtio-net-fork-fuzz | ||
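+
+The fork-based flow of option 2 roughly follows this shape (an illustrative
+sketch only; run_one_input is hypothetical, and the real implementation
+also copies libfuzzer's coverage counters back, e.g. via shared memory):
+
+    if (fork() == 0) {
+        /* child: run the input, then discard all mutated QEMU state */
+        run_one_input(s, Data, Size);
+        _exit(0);
+    } else {
+        wait(NULL);     /* parent: stays pristine for the next run */
+    }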
135 | -- | 64 | -- |
136 | 2.24.1 | 65 | 2.9.3 |
137 | 66 | ||
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 3 | Keep the coroutine on the same AioContext. Without this change, |
4 | there would be a race between yielding the coroutine and reentering it. | ||
5 | While the race cannot happen now, because the code only runs from a single | ||
6 | AioContext, this will change with multiqueue support in the block layer. | ||
7 | |||
8 | While doing the change, replace custom bottom half with aio_co_schedule. | ||
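
Distilled to its essence (for illustration; the hunk below is the
authoritative version), the new pattern is:

    /* suspend, and let a bottom half on our own AioContext resume us */
    aio_co_schedule(qemu_get_current_aio_context(), qemu_coroutine_self());
    qemu_coroutine_yield();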
9 | |||
10 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | ||
11 | Reviewed-by: Fam Zheng <famz@redhat.com> | ||
4 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 12 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
5 | Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> | 13 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> |
6 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 14 | Message-id: 20170213135235.12274-10-pbonzini@redhat.com |
7 | Message-id: 20200220041118.23264-19-alxndr@bu.edu | ||
8 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 15 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
9 | --- | 16 | --- |
10 | configure | 39 +++++++++++++++++++++++++++++++++++++++ | 17 | block/blkdebug.c | 9 +-------- |
11 | 1 file changed, 39 insertions(+) | 18 | 1 file changed, 1 insertion(+), 8 deletions(-) |
12 | 19 | ||
13 | diff --git a/configure b/configure | 20 | diff --git a/block/blkdebug.c b/block/blkdebug.c |
14 | index XXXXXXX..XXXXXXX 100755 | 21 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/configure | 22 | --- a/block/blkdebug.c |
16 | +++ b/configure | 23 | +++ b/block/blkdebug.c |
17 | @@ -XXX,XX +XXX,XX @@ debug_mutex="no" | 24 | @@ -XXX,XX +XXX,XX @@ out: |
18 | libpmem="" | 25 | return ret; |
19 | default_devices="yes" | ||
20 | plugins="no" | ||
21 | +fuzzing="no" | ||
22 | |||
23 | supported_cpu="no" | ||
24 | supported_os="no" | ||
25 | @@ -XXX,XX +XXX,XX @@ int main(void) { return 0; } | ||
26 | EOF | ||
27 | } | 26 | } |
28 | 27 | ||
29 | +write_c_fuzzer_skeleton() { | 28 | -static void error_callback_bh(void *opaque) |
30 | + cat > $TMPC <<EOF | 29 | -{ |
31 | +#include <stdint.h> | 30 | - Coroutine *co = opaque; |
32 | +#include <sys/types.h> | 31 | - qemu_coroutine_enter(co); |
33 | +int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size); | 32 | -} |
34 | +int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) { return 0; } | 33 | - |
35 | +EOF | 34 | static int inject_error(BlockDriverState *bs, BlkdebugRule *rule) |
36 | +} | 35 | { |
37 | + | 36 | BDRVBlkdebugState *s = bs->opaque; |
38 | if check_define __linux__ ; then | 37 | @@ -XXX,XX +XXX,XX @@ static int inject_error(BlockDriverState *bs, BlkdebugRule *rule) |
39 | targetos="Linux" | 38 | } |
40 | elif check_define _WIN32 ; then | 39 | |
41 | @@ -XXX,XX +XXX,XX @@ for opt do | 40 | if (!immediately) { |
42 | ;; | 41 | - aio_bh_schedule_oneshot(bdrv_get_aio_context(bs), error_callback_bh, |
43 | --disable-containers) use_containers="no" | 42 | - qemu_coroutine_self()); |
44 | ;; | 43 | + aio_co_schedule(qemu_get_current_aio_context(), qemu_coroutine_self()); |
45 | + --enable-fuzzing) fuzzing=yes | 44 | qemu_coroutine_yield(); |
46 | + ;; | 45 | } |
47 | + --disable-fuzzing) fuzzing=no | 46 | |
48 | + ;; | ||
49 | *) | ||
50 | echo "ERROR: unknown option $opt" | ||
51 | echo "Try '$0 --help' for more information" | ||
52 | @@ -XXX,XX +XXX,XX @@ EOF | ||
53 | fi | ||
54 | fi | ||
55 | |||
56 | +########################################## | ||
57 | +# checks for fuzzer | ||
58 | +if test "$fuzzing" = "yes" ; then | ||
59 | + write_c_fuzzer_skeleton | ||
60 | + if compile_prog "$CPU_CFLAGS -Werror -fsanitize=address,fuzzer" ""; then | ||
61 | + have_fuzzer=yes | ||
62 | + fi | ||
63 | +fi | ||
64 | + | ||
65 | ########################################## | ||
66 | # check for libpmem | ||
67 | |||
68 | @@ -XXX,XX +XXX,XX @@ echo "libpmem support $libpmem" | ||
69 | echo "libudev $libudev" | ||
70 | echo "default devices $default_devices" | ||
71 | echo "plugin support $plugins" | ||
72 | +echo "fuzzing support $fuzzing" | ||
73 | |||
74 | if test "$supported_cpu" = "no"; then | ||
75 | echo | ||
76 | @@ -XXX,XX +XXX,XX @@ fi | ||
77 | if test "$sheepdog" = "yes" ; then | ||
78 | echo "CONFIG_SHEEPDOG=y" >> $config_host_mak | ||
79 | fi | ||
80 | +if test "$fuzzing" = "yes" ; then | ||
81 | + if test "$have_fuzzer" = "yes"; then | ||
82 | + FUZZ_LDFLAGS=" -fsanitize=address,fuzzer" | ||
83 | + FUZZ_CFLAGS=" -fsanitize=address,fuzzer" | ||
84 | + CFLAGS=" -fsanitize=address,fuzzer-no-link" | ||
85 | + else | ||
86 | + error_exit "Your compiler doesn't support -fsanitize=address,fuzzer" | ||
87 | + exit 1 | ||
88 | + fi | ||
89 | +fi | ||
90 | |||
91 | if test "$plugins" = "yes" ; then | ||
92 | echo "CONFIG_PLUGIN=y" >> $config_host_mak | ||
93 | @@ -XXX,XX +XXX,XX @@ if test "$libudev" != "no"; then | ||
94 | echo "CONFIG_LIBUDEV=y" >> $config_host_mak | ||
95 | echo "LIBUDEV_LIBS=$libudev_libs" >> $config_host_mak | ||
96 | fi | ||
97 | +if test "$fuzzing" != "no"; then | ||
98 | + echo "CONFIG_FUZZ=y" >> $config_host_mak | ||
99 | + echo "FUZZ_CFLAGS=$FUZZ_CFLAGS" >> $config_host_mak | ||
100 | + echo "FUZZ_LDFLAGS=$FUZZ_LDFLAGS" >> $config_host_mak | ||
101 | +fi | ||
102 | |||
103 | if test "$edk2_blobs" = "yes" ; then | ||
104 | echo "DECOMPRESS_EDK2_BLOBS=y" >> $config_host_mak | ||
105 | -- | 47 | -- |
106 | 2.24.1 | 48 | 2.9.3 |
107 | 49 | ||
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | This makes it simple to swap the transport functions for qtest commands | 3 | qed_aio_start_io and qed_aio_next_io will not have to acquire/release |
4 | to and from the qtest client. For example, now it is possible to | 4 | the AioContext, while qed_aio_next_io_cb will. Split the functionality |
5 | directly pass qtest commands to a server handler that exists within the | 5 | and gain a little type-safety in the process. |
6 | same process, without the standard way of writing to a file descriptor. | ||
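
As a rough illustration (both helper names below are invented; the setter
functions are the static helpers this patch adds to libqtest.c), a client
living in the same process could then route commands like so:

    /* Bypass the socket: hand qtest commands straight to a server. */
    static void send_inproc(QTestState *s, const char *buf)
    {
        my_inproc_server_recv(buf);          /* hypothetical handler */
    }

    /* ... during client setup ... */
    qtest_client_set_tx_handler(s, send_inproc);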
7 | 6 | ||
8 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
9 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 7 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
10 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 8 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
11 | Message-id: 20200220041118.23264-7-alxndr@bu.edu | 9 | Reviewed-by: Fam Zheng <famz@redhat.com> |
10 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> | ||
11 | Message-id: 20170213135235.12274-11-pbonzini@redhat.com | ||
12 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 12 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
13 | --- | 13 | --- |
14 | tests/qtest/libqtest.c | 48 ++++++++++++++++++++++++++++++++++-------- | 14 | block/qed.c | 39 +++++++++++++++++++++++++-------------- |
15 | 1 file changed, 39 insertions(+), 9 deletions(-) | 15 | 1 file changed, 25 insertions(+), 14 deletions(-) |
16 | 16 | ||
17 | diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c | 17 | diff --git a/block/qed.c b/block/qed.c |
18 | index XXXXXXX..XXXXXXX 100644 | 18 | index XXXXXXX..XXXXXXX 100644 |
19 | --- a/tests/qtest/libqtest.c | 19 | --- a/block/qed.c |
20 | +++ b/tests/qtest/libqtest.c | 20 | +++ b/block/qed.c |
21 | @@ -XXX,XX +XXX,XX @@ | 21 | @@ -XXX,XX +XXX,XX @@ static CachedL2Table *qed_new_l2_table(BDRVQEDState *s) |
22 | #define SOCKET_TIMEOUT 50 | 22 | return l2_table; |
23 | #define SOCKET_MAX_FDS 16 | 23 | } |
24 | 24 | ||
25 | -static void qed_aio_next_io(void *opaque, int ret); | ||
26 | +static void qed_aio_next_io(QEDAIOCB *acb, int ret); | ||
25 | + | 27 | + |
26 | +typedef void (*QTestSendFn)(QTestState *s, const char *buf); | 28 | +static void qed_aio_start_io(QEDAIOCB *acb) |
27 | +typedef GString* (*QTestRecvFn)(QTestState *); | 29 | +{ |
30 | + qed_aio_next_io(acb, 0); | ||
31 | +} | ||
28 | + | 32 | + |
29 | +typedef struct QTestClientTransportOps { | 33 | +static void qed_aio_next_io_cb(void *opaque, int ret) |
30 | + QTestSendFn send; /* for sending qtest commands */ | 34 | +{ |
31 | + QTestRecvFn recv_line; /* for receiving qtest command responses */ | 35 | + QEDAIOCB *acb = opaque; |
32 | +} QTestTransportOps; | ||
33 | + | 36 | + |
34 | struct QTestState | 37 | + qed_aio_next_io(acb, ret); |
38 | +} | ||
39 | |||
40 | static void qed_plug_allocating_write_reqs(BDRVQEDState *s) | ||
35 | { | 41 | { |
36 | int fd; | 42 | @@ -XXX,XX +XXX,XX @@ static void qed_unplug_allocating_write_reqs(BDRVQEDState *s) |
37 | @@ -XXX,XX +XXX,XX @@ struct QTestState | 43 | |
38 | bool big_endian; | 44 | acb = QSIMPLEQ_FIRST(&s->allocating_write_reqs); |
39 | bool irq_level[MAX_IRQ]; | 45 | if (acb) { |
40 | GString *rx; | 46 | - qed_aio_next_io(acb, 0); |
41 | + QTestTransportOps ops; | 47 | + qed_aio_start_io(acb); |
42 | }; | ||
43 | |||
44 | static GHookList abrt_hooks; | ||
45 | @@ -XXX,XX +XXX,XX @@ static struct sigaction sigact_old; | ||
46 | |||
47 | static int qtest_query_target_endianness(QTestState *s); | ||
48 | |||
49 | +static void qtest_client_socket_send(QTestState*, const char *buf); | ||
50 | +static void socket_send(int fd, const char *buf, size_t size); | ||
51 | + | ||
52 | +static GString *qtest_client_socket_recv_line(QTestState *); | ||
53 | + | ||
54 | +static void qtest_client_set_tx_handler(QTestState *s, QTestSendFn send); | ||
55 | +static void qtest_client_set_rx_handler(QTestState *s, QTestRecvFn recv); | ||
56 | + | ||
57 | static int init_socket(const char *socket_path) | ||
58 | { | ||
59 | struct sockaddr_un addr; | ||
60 | @@ -XXX,XX +XXX,XX @@ QTestState *qtest_init_without_qmp_handshake(const char *extra_args) | ||
61 | sock = init_socket(socket_path); | ||
62 | qmpsock = init_socket(qmp_socket_path); | ||
63 | |||
64 | + qtest_client_set_rx_handler(s, qtest_client_socket_recv_line); | ||
65 | + qtest_client_set_tx_handler(s, qtest_client_socket_send); | ||
66 | + | ||
67 | qtest_add_abrt_handler(kill_qemu_hook_func, s); | ||
68 | |||
69 | command = g_strdup_printf("exec %s " | ||
70 | @@ -XXX,XX +XXX,XX @@ static void socket_send(int fd, const char *buf, size_t size) | ||
71 | } | 48 | } |
72 | } | 49 | } |
73 | 50 | ||
74 | -static void socket_sendf(int fd, const char *fmt, va_list ap) | 51 | @@ -XXX,XX +XXX,XX @@ static void qed_aio_complete(QEDAIOCB *acb, int ret) |
75 | +static void qtest_client_socket_send(QTestState *s, const char *buf) | 52 | QSIMPLEQ_REMOVE_HEAD(&s->allocating_write_reqs, next); |
53 | acb = QSIMPLEQ_FIRST(&s->allocating_write_reqs); | ||
54 | if (acb) { | ||
55 | - qed_aio_next_io(acb, 0); | ||
56 | + qed_aio_start_io(acb); | ||
57 | } else if (s->header.features & QED_F_NEED_CHECK) { | ||
58 | qed_start_need_check_timer(s); | ||
59 | } | ||
60 | @@ -XXX,XX +XXX,XX @@ static void qed_commit_l2_update(void *opaque, int ret) | ||
61 | acb->request.l2_table = qed_find_l2_cache_entry(&s->l2_cache, l2_offset); | ||
62 | assert(acb->request.l2_table != NULL); | ||
63 | |||
64 | - qed_aio_next_io(opaque, ret); | ||
65 | + qed_aio_next_io(acb, ret); | ||
66 | } | ||
67 | |||
68 | /** | ||
69 | @@ -XXX,XX +XXX,XX @@ static void qed_aio_write_l2_update(QEDAIOCB *acb, int ret, uint64_t offset) | ||
70 | if (need_alloc) { | ||
71 | /* Write out the whole new L2 table */ | ||
72 | qed_write_l2_table(s, &acb->request, 0, s->table_nelems, true, | ||
73 | - qed_aio_write_l1_update, acb); | ||
74 | + qed_aio_write_l1_update, acb); | ||
75 | } else { | ||
76 | /* Write out only the updated part of the L2 table */ | ||
77 | qed_write_l2_table(s, &acb->request, index, acb->cur_nclusters, false, | ||
78 | - qed_aio_next_io, acb); | ||
79 | + qed_aio_next_io_cb, acb); | ||
80 | } | ||
81 | return; | ||
82 | |||
83 | @@ -XXX,XX +XXX,XX @@ static void qed_aio_write_main(void *opaque, int ret) | ||
84 | } | ||
85 | |||
86 | if (acb->find_cluster_ret == QED_CLUSTER_FOUND) { | ||
87 | - next_fn = qed_aio_next_io; | ||
88 | + next_fn = qed_aio_next_io_cb; | ||
89 | } else { | ||
90 | if (s->bs->backing) { | ||
91 | next_fn = qed_aio_write_flush_before_l2_update; | ||
92 | @@ -XXX,XX +XXX,XX @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len) | ||
93 | if (acb->flags & QED_AIOCB_ZERO) { | ||
94 | /* Skip ahead if the clusters are already zero */ | ||
95 | if (acb->find_cluster_ret == QED_CLUSTER_ZERO) { | ||
96 | - qed_aio_next_io(acb, 0); | ||
97 | + qed_aio_start_io(acb); | ||
98 | return; | ||
99 | } | ||
100 | |||
101 | @@ -XXX,XX +XXX,XX @@ static void qed_aio_read_data(void *opaque, int ret, | ||
102 | /* Handle zero cluster and backing file reads */ | ||
103 | if (ret == QED_CLUSTER_ZERO) { | ||
104 | qemu_iovec_memset(&acb->cur_qiov, 0, 0, acb->cur_qiov.size); | ||
105 | - qed_aio_next_io(acb, 0); | ||
106 | + qed_aio_start_io(acb); | ||
107 | return; | ||
108 | } else if (ret != QED_CLUSTER_FOUND) { | ||
109 | qed_read_backing_file(s, acb->cur_pos, &acb->cur_qiov, | ||
110 | - &acb->backing_qiov, qed_aio_next_io, acb); | ||
111 | + &acb->backing_qiov, qed_aio_next_io_cb, acb); | ||
112 | return; | ||
113 | } | ||
114 | |||
115 | BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO); | ||
116 | bdrv_aio_readv(bs->file, offset / BDRV_SECTOR_SIZE, | ||
117 | &acb->cur_qiov, acb->cur_qiov.size / BDRV_SECTOR_SIZE, | ||
118 | - qed_aio_next_io, acb); | ||
119 | + qed_aio_next_io_cb, acb); | ||
120 | return; | ||
121 | |||
122 | err: | ||
123 | @@ -XXX,XX +XXX,XX @@ err: | ||
124 | /** | ||
125 | * Begin next I/O or complete the request | ||
126 | */ | ||
127 | -static void qed_aio_next_io(void *opaque, int ret) | ||
128 | +static void qed_aio_next_io(QEDAIOCB *acb, int ret) | ||
76 | { | 129 | { |
77 | - gchar *str = g_strdup_vprintf(fmt, ap); | 130 | - QEDAIOCB *acb = opaque; |
78 | - size_t size = strlen(str); | 131 | BDRVQEDState *s = acb_to_s(acb); |
79 | - | 132 | QEDFindClusterFunc *io_fn = (acb->flags & QED_AIOCB_WRITE) ? |
80 | - socket_send(fd, str, size); | 133 | qed_aio_write_data : qed_aio_read_data; |
81 | - g_free(str); | 134 | @@ -XXX,XX +XXX,XX @@ static BlockAIOCB *qed_aio_setup(BlockDriverState *bs, |
82 | + socket_send(s->fd, buf, strlen(buf)); | 135 | qemu_iovec_init(&acb->cur_qiov, qiov->niov); |
136 | |||
137 | /* Start request */ | ||
138 | - qed_aio_next_io(acb, 0); | ||
139 | + qed_aio_start_io(acb); | ||
140 | return &acb->common; | ||
83 | } | 141 | } |
84 | 142 | ||
85 | static void GCC_FMT_ATTR(2, 3) qtest_sendf(QTestState *s, const char *fmt, ...) | ||
86 | @@ -XXX,XX +XXX,XX @@ static void GCC_FMT_ATTR(2, 3) qtest_sendf(QTestState *s, const char *fmt, ...) | ||
87 | va_list ap; | ||
88 | |||
89 | va_start(ap, fmt); | ||
90 | - socket_sendf(s->fd, fmt, ap); | ||
91 | + gchar *str = g_strdup_vprintf(fmt, ap); | ||
92 | va_end(ap); | ||
93 | + | ||
94 | + s->ops.send(s, str); | ||
95 | + g_free(str); | ||
96 | } | ||
97 | |||
98 | /* Sends a message and file descriptors to the socket. | ||
99 | @@ -XXX,XX +XXX,XX @@ static void socket_send_fds(int socket_fd, int *fds, size_t fds_num, | ||
100 | g_assert_cmpint(ret, >, 0); | ||
101 | } | ||
102 | |||
103 | -static GString *qtest_recv_line(QTestState *s) | ||
104 | +static GString *qtest_client_socket_recv_line(QTestState *s) | ||
105 | { | ||
106 | GString *line; | ||
107 | size_t offset; | ||
108 | @@ -XXX,XX +XXX,XX @@ static gchar **qtest_rsp(QTestState *s, int expected_args) | ||
109 | int i; | ||
110 | |||
111 | redo: | ||
112 | - line = qtest_recv_line(s); | ||
113 | + line = s->ops.recv_line(s); | ||
114 | words = g_strsplit(line->str, " ", 0); | ||
115 | g_string_free(line, TRUE); | ||
116 | |||
117 | @@ -XXX,XX +XXX,XX @@ void qmp_assert_error_class(QDict *rsp, const char *class) | ||
118 | |||
119 | qobject_unref(rsp); | ||
120 | } | ||
121 | + | ||
122 | +static void qtest_client_set_tx_handler(QTestState *s, | ||
123 | + QTestSendFn send) | ||
124 | +{ | ||
125 | + s->ops.send = send; | ||
126 | +} | ||
127 | +static void qtest_client_set_rx_handler(QTestState *s, QTestRecvFn recv) | ||
128 | +{ | ||
129 | + s->ops.recv_line = recv; | ||
130 | +} | ||
131 | -- | 143 | -- |
132 | 2.24.1 | 144 | 2.9.3 |
133 | 145 | ||
1 | epoll_handler is a stack variable and must not be accessed after it goes | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | out of scope: | ||
3 | 2 | ||
4 | if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) { | 3 | The AioContext data structures are now protected by list_lock and/or |
5 | AioHandler epoll_handler; | 4 | they are walked with FOREACH_RCU primitives. There is no need anymore |
6 | ... | 5 | to acquire the AioContext for the entire duration of aio_dispatch. |
7 | add_pollfd(&epoll_handler); | 6 | Instead, just acquire it before and after invoking the callbacks. |
8 | ret = aio_epoll(ctx, pollfds, npfd, timeout); | 7 | The next step is then to push it further down. |
9 | } ... | ||
10 | 8 | ||
11 | ... | 9 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
12 | 10 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | |
13 | /* if we have any readable fds, dispatch event */ | 11 | Reviewed-by: Fam Zheng <famz@redhat.com> |
14 | if (ret > 0) { | 12 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> |
15 | for (i = 0; i < npfd; i++) { | 13 | Message-id: 20170213135235.12274-12-pbonzini@redhat.com |
16 | nodes[i]->pfd.revents = pollfds[i].revents; | ||
17 | } | ||
18 | } | ||
19 | |||
20 | nodes[0] is &epoll_handler, which has already gone out of scope. | ||
21 | |||
22 | There is no need to use pollfds[] for epoll. We don't need an | ||
23 | AioHandler for the epoll fd. | ||
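
Distilled to its essence (illustrative only), this is a classic dangling
pointer:

    AioHandler *nodes[1];
    {
        AioHandler epoll_handler;
        nodes[0] = &epoll_handler;   /* pointer escapes the block... */
    }
    nodes[0]->pfd.revents = 0;       /* ...and is used after the object's
                                        lifetime has ended */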
24 | |||
25 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
26 | Reviewed-by: Sergio Lopez <slp@redhat.com> | ||
27 | Message-id: 20200214171712.541358-2-stefanha@redhat.com | ||
28 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 14 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
29 | --- | 15 | --- |
30 | util/aio-posix.c | 20 ++++++++------------ | 16 | util/aio-posix.c | 25 +++++++++++-------------- |
31 | 1 file changed, 8 insertions(+), 12 deletions(-) | 17 | util/aio-win32.c | 15 +++++++-------- |
18 | util/async.c | 2 ++ | ||
19 | 3 files changed, 20 insertions(+), 22 deletions(-) | ||
32 | 20 | ||
33 | diff --git a/util/aio-posix.c b/util/aio-posix.c | 21 | diff --git a/util/aio-posix.c b/util/aio-posix.c |
34 | index XXXXXXX..XXXXXXX 100644 | 22 | index XXXXXXX..XXXXXXX 100644 |
35 | --- a/util/aio-posix.c | 23 | --- a/util/aio-posix.c |
36 | +++ b/util/aio-posix.c | 24 | +++ b/util/aio-posix.c |
37 | @@ -XXX,XX +XXX,XX @@ static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new) | 25 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx) |
26 | (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) && | ||
27 | aio_node_check(ctx, node->is_external) && | ||
28 | node->io_read) { | ||
29 | + aio_context_acquire(ctx); | ||
30 | node->io_read(node->opaque); | ||
31 | + aio_context_release(ctx); | ||
32 | |||
33 | /* aio_notify() does not count as progress */ | ||
34 | if (node->opaque != &ctx->notifier) { | ||
35 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx) | ||
36 | (revents & (G_IO_OUT | G_IO_ERR)) && | ||
37 | aio_node_check(ctx, node->is_external) && | ||
38 | node->io_write) { | ||
39 | + aio_context_acquire(ctx); | ||
40 | node->io_write(node->opaque); | ||
41 | + aio_context_release(ctx); | ||
42 | progress = true; | ||
43 | } | ||
44 | |||
45 | @@ -XXX,XX +XXX,XX @@ bool aio_dispatch(AioContext *ctx, bool dispatch_fds) | ||
38 | } | 46 | } |
47 | |||
48 | /* Run our timers */ | ||
49 | + aio_context_acquire(ctx); | ||
50 | progress |= timerlistgroup_run_timers(&ctx->tlg); | ||
51 | + aio_context_release(ctx); | ||
52 | |||
53 | return progress; | ||
39 | } | 54 | } |
40 | 55 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | |
41 | -static int aio_epoll(AioContext *ctx, GPollFD *pfds, | 56 | int64_t timeout; |
42 | - unsigned npfd, int64_t timeout) | 57 | int64_t start = 0; |
43 | +static int aio_epoll(AioContext *ctx, int64_t timeout) | 58 | |
44 | { | 59 | - aio_context_acquire(ctx); |
45 | + GPollFD pfd = { | 60 | - progress = false; |
46 | + .fd = ctx->epollfd, | 61 | - |
47 | + .events = G_IO_IN | G_IO_OUT | G_IO_HUP | G_IO_ERR, | 62 | /* aio_notify can avoid the expensive event_notifier_set if |
48 | + }; | 63 | * everything (file descriptors, bottom halves, timers) will |
49 | AioHandler *node; | 64 | * be re-evaluated before the next blocking poll(). This is |
50 | int i, ret = 0; | 65 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) |
51 | struct epoll_event events[128]; | 66 | start = qemu_clock_get_ns(QEMU_CLOCK_REALTIME); |
52 | |||
53 | - assert(npfd == 1); | ||
54 | - assert(pfds[0].fd == ctx->epollfd); | ||
55 | if (timeout > 0) { | ||
56 | - ret = qemu_poll_ns(pfds, npfd, timeout); | ||
57 | + ret = qemu_poll_ns(&pfd, 1, timeout); | ||
58 | } | 67 | } |
59 | if (timeout <= 0 || ret > 0) { | 68 | |
60 | ret = epoll_wait(ctx->epollfd, events, | 69 | - if (try_poll_mode(ctx, blocking)) { |
70 | - progress = true; | ||
71 | - } else { | ||
72 | + aio_context_acquire(ctx); | ||
73 | + progress = try_poll_mode(ctx, blocking); | ||
74 | + aio_context_release(ctx); | ||
75 | + | ||
76 | + if (!progress) { | ||
77 | assert(npfd == 0); | ||
78 | |||
79 | /* fill pollfds */ | ||
61 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | 80 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) |
81 | timeout = blocking ? aio_compute_timeout(ctx) : 0; | ||
62 | 82 | ||
63 | /* wait until next event */ | 83 | /* wait until next event */ |
84 | - if (timeout) { | ||
85 | - aio_context_release(ctx); | ||
86 | - } | ||
64 | if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) { | 87 | if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) { |
65 | - AioHandler epoll_handler; | 88 | AioHandler epoll_handler; |
66 | - | 89 | |
67 | - epoll_handler.pfd.fd = ctx->epollfd; | 90 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) |
68 | - epoll_handler.pfd.events = G_IO_IN | G_IO_OUT | G_IO_HUP | G_IO_ERR; | ||
69 | - npfd = 0; | ||
70 | - add_pollfd(&epoll_handler); | ||
71 | - ret = aio_epoll(ctx, pollfds, npfd, timeout); | ||
72 | + npfd = 0; /* pollfds[] is not being used */ | ||
73 | + ret = aio_epoll(ctx, timeout); | ||
74 | } else { | 91 | } else { |
75 | ret = qemu_poll_ns(pollfds, npfd, timeout); | 92 | ret = qemu_poll_ns(pollfds, npfd, timeout); |
76 | } | 93 | } |
94 | - if (timeout) { | ||
95 | - aio_context_acquire(ctx); | ||
96 | - } | ||
97 | } | ||
98 | |||
99 | if (blocking) { | ||
100 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | ||
101 | progress = true; | ||
102 | } | ||
103 | |||
104 | - aio_context_release(ctx); | ||
105 | - | ||
106 | return progress; | ||
107 | } | ||
108 | |||
109 | diff --git a/util/aio-win32.c b/util/aio-win32.c | ||
110 | index XXXXXXX..XXXXXXX 100644 | ||
111 | --- a/util/aio-win32.c | ||
112 | +++ b/util/aio-win32.c | ||
113 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) | ||
114 | (revents || event_notifier_get_handle(node->e) == event) && | ||
115 | node->io_notify) { | ||
116 | node->pfd.revents = 0; | ||
117 | + aio_context_acquire(ctx); | ||
118 | node->io_notify(node->e); | ||
119 | + aio_context_release(ctx); | ||
120 | |||
121 | /* aio_notify() does not count as progress */ | ||
122 | if (node->e != &ctx->notifier) { | ||
123 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) | ||
124 | (node->io_read || node->io_write)) { | ||
125 | node->pfd.revents = 0; | ||
126 | if ((revents & G_IO_IN) && node->io_read) { | ||
127 | + aio_context_acquire(ctx); | ||
128 | node->io_read(node->opaque); | ||
129 | + aio_context_release(ctx); | ||
130 | progress = true; | ||
131 | } | ||
132 | if ((revents & G_IO_OUT) && node->io_write) { | ||
133 | + aio_context_acquire(ctx); | ||
134 | node->io_write(node->opaque); | ||
135 | + aio_context_release(ctx); | ||
136 | progress = true; | ||
137 | } | ||
138 | |||
139 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | ||
140 | int count; | ||
141 | int timeout; | ||
142 | |||
143 | - aio_context_acquire(ctx); | ||
144 | progress = false; | ||
145 | |||
146 | /* aio_notify can avoid the expensive event_notifier_set if | ||
147 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | ||
148 | |||
149 | timeout = blocking && !have_select_revents | ||
150 | ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0; | ||
151 | - if (timeout) { | ||
152 | - aio_context_release(ctx); | ||
153 | - } | ||
154 | ret = WaitForMultipleObjects(count, events, FALSE, timeout); | ||
155 | if (blocking) { | ||
156 | assert(first); | ||
157 | atomic_sub(&ctx->notify_me, 2); | ||
158 | } | ||
159 | - if (timeout) { | ||
160 | - aio_context_acquire(ctx); | ||
161 | - } | ||
162 | |||
163 | if (first) { | ||
164 | aio_notify_accept(ctx); | ||
165 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | ||
166 | progress |= aio_dispatch_handlers(ctx, event); | ||
167 | } while (count > 0); | ||
168 | |||
169 | + aio_context_acquire(ctx); | ||
170 | progress |= timerlistgroup_run_timers(&ctx->tlg); | ||
171 | - | ||
172 | aio_context_release(ctx); | ||
173 | return progress; | ||
174 | } | ||
175 | diff --git a/util/async.c b/util/async.c | ||
176 | index XXXXXXX..XXXXXXX 100644 | ||
177 | --- a/util/async.c | ||
178 | +++ b/util/async.c | ||
179 | @@ -XXX,XX +XXX,XX @@ int aio_bh_poll(AioContext *ctx) | ||
180 | ret = 1; | ||
181 | } | ||
182 | bh->idle = 0; | ||
183 | + aio_context_acquire(ctx); | ||
184 | aio_bh_call(bh); | ||
185 | + aio_context_release(ctx); | ||
186 | } | ||
187 | if (bh->deleted) { | ||
188 | deleted = true; | ||
77 | -- | 189 | -- |
78 | 2.24.1 | 190 | 2.9.3 |
79 | 191 | ||
1 | The first rcu_read_lock/unlock() is expensive. Nested calls are cheap. | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
2 | 2 | ||
3 | This optimization increases IOPS from 73k to 162k with a Linux guest | 3 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
4 | that has 2 virtio-blk,num-queues=1 and 99 virtio-blk,num-queues=32 | 4 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
5 | devices. | 5 | Reviewed-by: Fam Zheng <famz@redhat.com> |
6 | 6 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> | |
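A rough model of why nesting is cheap (an illustrative sketch, not
QEMU's actual rcu_read_lock() implementation): only the outermost
lock/unlock must publish the reader's state to the grace-period
machinery with costly synchronization, while nested calls only touch a
per-thread depth counter.

    /* Simplified per-thread RCU reader nesting (illustration only). */
    static __thread unsigned rcu_depth;

    static void my_rcu_read_lock(void)
    {
        if (rcu_depth++ == 0) {
            /* Outermost lock: the expensive part, e.g. a fence so
             * writers waiting for a grace period see this reader. */
            __atomic_thread_fence(__ATOMIC_SEQ_CST);
        }
        /* Nested locks cost only the counter increment above. */
    }

    static void my_rcu_read_unlock(void)
    {
        if (--rcu_depth == 0) {
            /* Outermost unlock: lets the grace period advance. */
            __atomic_thread_fence(__ATOMIC_SEQ_CST);
        }
    }

This is why wrapping the whole polling loop in RCU_READ_LOCK_GUARD()
below pays the synchronization cost once instead of once per handler.
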
7 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 7 | Message-id: 20170213135235.12274-13-pbonzini@redhat.com |
8 | Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> | ||
9 | Message-id: 20200218182708.914552-1-stefanha@redhat.com | ||
10 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 8 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
11 | --- | 9 | --- |
12 | util/aio-posix.c | 11 +++++++++++ | 10 | block/qed.h | 3 +++ |
13 | 1 file changed, 11 insertions(+) | 11 | block/curl.c | 2 ++ |
14 | 12 | block/io.c | 5 +++++ | |
13 | block/iscsi.c | 8 ++++++-- | ||
14 | block/null.c | 4 ++++ | ||
15 | block/qed.c | 12 ++++++++++++ | ||
16 | block/throttle-groups.c | 2 ++ | ||
17 | util/aio-posix.c | 2 -- | ||
18 | util/aio-win32.c | 2 -- | ||
19 | util/qemu-coroutine-sleep.c | 2 +- | ||
20 | 10 files changed, 35 insertions(+), 7 deletions(-) | ||
21 | |||
22 | diff --git a/block/qed.h b/block/qed.h | ||
23 | index XXXXXXX..XXXXXXX 100644 | ||
24 | --- a/block/qed.h | ||
25 | +++ b/block/qed.h | ||
26 | @@ -XXX,XX +XXX,XX @@ enum { | ||
27 | */ | ||
28 | typedef void QEDFindClusterFunc(void *opaque, int ret, uint64_t offset, size_t len); | ||
29 | |||
30 | +void qed_acquire(BDRVQEDState *s); | ||
31 | +void qed_release(BDRVQEDState *s); | ||
32 | + | ||
33 | /** | ||
34 | * Generic callback for chaining async callbacks | ||
35 | */ | ||
36 | diff --git a/block/curl.c b/block/curl.c | ||
37 | index XXXXXXX..XXXXXXX 100644 | ||
38 | --- a/block/curl.c | ||
39 | +++ b/block/curl.c | ||
40 | @@ -XXX,XX +XXX,XX @@ static void curl_multi_timeout_do(void *arg) | ||
41 | return; | ||
42 | } | ||
43 | |||
44 | + aio_context_acquire(s->aio_context); | ||
45 | curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running); | ||
46 | |||
47 | curl_multi_check_completion(s); | ||
48 | + aio_context_release(s->aio_context); | ||
49 | #else | ||
50 | abort(); | ||
51 | #endif | ||
52 | diff --git a/block/io.c b/block/io.c | ||
53 | index XXXXXXX..XXXXXXX 100644 | ||
54 | --- a/block/io.c | ||
55 | +++ b/block/io.c | ||
56 | @@ -XXX,XX +XXX,XX @@ void bdrv_aio_cancel(BlockAIOCB *acb) | ||
57 | if (acb->aiocb_info->get_aio_context) { | ||
58 | aio_poll(acb->aiocb_info->get_aio_context(acb), true); | ||
59 | } else if (acb->bs) { | ||
60 | + /* qemu_aio_ref and qemu_aio_unref are not thread-safe, so | ||
61 | + * assert that we're not using an I/O thread. Thread-safe | ||
62 | + * code should use bdrv_aio_cancel_async exclusively. | ||
63 | + */ | ||
64 | + assert(bdrv_get_aio_context(acb->bs) == qemu_get_aio_context()); | ||
65 | aio_poll(bdrv_get_aio_context(acb->bs), true); | ||
66 | } else { | ||
67 | abort(); | ||
68 | diff --git a/block/iscsi.c b/block/iscsi.c | ||
69 | index XXXXXXX..XXXXXXX 100644 | ||
70 | --- a/block/iscsi.c | ||
71 | +++ b/block/iscsi.c | ||
72 | @@ -XXX,XX +XXX,XX @@ static void iscsi_retry_timer_expired(void *opaque) | ||
73 | struct IscsiTask *iTask = opaque; | ||
74 | iTask->complete = 1; | ||
75 | if (iTask->co) { | ||
76 | - qemu_coroutine_enter(iTask->co); | ||
77 | + aio_co_wake(iTask->co); | ||
78 | } | ||
79 | } | ||
80 | |||
81 | @@ -XXX,XX +XXX,XX @@ static void iscsi_nop_timed_event(void *opaque) | ||
82 | { | ||
83 | IscsiLun *iscsilun = opaque; | ||
84 | |||
85 | + aio_context_acquire(iscsilun->aio_context); | ||
86 | if (iscsi_get_nops_in_flight(iscsilun->iscsi) >= MAX_NOP_FAILURES) { | ||
87 | error_report("iSCSI: NOP timeout. Reconnecting..."); | ||
88 | iscsilun->request_timed_out = true; | ||
89 | } else if (iscsi_nop_out_async(iscsilun->iscsi, NULL, NULL, 0, NULL) != 0) { | ||
90 | error_report("iSCSI: failed to sent NOP-Out. Disabling NOP messages."); | ||
91 | - return; | ||
92 | + goto out; | ||
93 | } | ||
94 | |||
95 | timer_mod(iscsilun->nop_timer, qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + NOP_INTERVAL); | ||
96 | iscsi_set_events(iscsilun); | ||
97 | + | ||
98 | +out: | ||
99 | + aio_context_release(iscsilun->aio_context); | ||
100 | } | ||
101 | |||
102 | static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp) | ||
103 | diff --git a/block/null.c b/block/null.c | ||
104 | index XXXXXXX..XXXXXXX 100644 | ||
105 | --- a/block/null.c | ||
106 | +++ b/block/null.c | ||
107 | @@ -XXX,XX +XXX,XX @@ static void null_bh_cb(void *opaque) | ||
108 | static void null_timer_cb(void *opaque) | ||
109 | { | ||
110 | NullAIOCB *acb = opaque; | ||
111 | + AioContext *ctx = bdrv_get_aio_context(acb->common.bs); | ||
112 | + | ||
113 | + aio_context_acquire(ctx); | ||
114 | acb->common.cb(acb->common.opaque, 0); | ||
115 | + aio_context_release(ctx); | ||
116 | timer_deinit(&acb->timer); | ||
117 | qemu_aio_unref(acb); | ||
118 | } | ||
119 | diff --git a/block/qed.c b/block/qed.c | ||
120 | index XXXXXXX..XXXXXXX 100644 | ||
121 | --- a/block/qed.c | ||
122 | +++ b/block/qed.c | ||
123 | @@ -XXX,XX +XXX,XX @@ static void qed_need_check_timer_cb(void *opaque) | ||
124 | |||
125 | trace_qed_need_check_timer_cb(s); | ||
126 | |||
127 | + qed_acquire(s); | ||
128 | qed_plug_allocating_write_reqs(s); | ||
129 | |||
130 | /* Ensure writes are on disk before clearing flag */ | ||
131 | bdrv_aio_flush(s->bs->file->bs, qed_clear_need_check, s); | ||
132 | + qed_release(s); | ||
133 | +} | ||
134 | + | ||
135 | +void qed_acquire(BDRVQEDState *s) | ||
136 | +{ | ||
137 | + aio_context_acquire(bdrv_get_aio_context(s->bs)); | ||
138 | +} | ||
139 | + | ||
140 | +void qed_release(BDRVQEDState *s) | ||
141 | +{ | ||
142 | + aio_context_release(bdrv_get_aio_context(s->bs)); | ||
143 | } | ||
144 | |||
145 | static void qed_start_need_check_timer(BDRVQEDState *s) | ||
146 | diff --git a/block/throttle-groups.c b/block/throttle-groups.c | ||
147 | index XXXXXXX..XXXXXXX 100644 | ||
148 | --- a/block/throttle-groups.c | ||
149 | +++ b/block/throttle-groups.c | ||
150 | @@ -XXX,XX +XXX,XX @@ static void timer_cb(BlockBackend *blk, bool is_write) | ||
151 | qemu_mutex_unlock(&tg->lock); | ||
152 | |||
153 | /* Run the request that was waiting for this timer */ | ||
154 | + aio_context_acquire(blk_get_aio_context(blk)); | ||
155 | empty_queue = !qemu_co_enter_next(&blkp->throttled_reqs[is_write]); | ||
156 | + aio_context_release(blk_get_aio_context(blk)); | ||
157 | |||
158 | /* If the request queue was empty then we have to take care of | ||
159 | * scheduling the next one */ | ||
15 | diff --git a/util/aio-posix.c b/util/aio-posix.c | 160 | diff --git a/util/aio-posix.c b/util/aio-posix.c |
16 | index XXXXXXX..XXXXXXX 100644 | 161 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/util/aio-posix.c | 162 | --- a/util/aio-posix.c |
18 | +++ b/util/aio-posix.c | 163 | +++ b/util/aio-posix.c |
19 | @@ -XXX,XX +XXX,XX @@ | 164 | @@ -XXX,XX +XXX,XX @@ bool aio_dispatch(AioContext *ctx, bool dispatch_fds) |
20 | 165 | } | |
21 | #include "qemu/osdep.h" | 166 | |
22 | #include "block/block.h" | 167 | /* Run our timers */ |
23 | +#include "qemu/rcu.h" | 168 | - aio_context_acquire(ctx); |
24 | #include "qemu/rcu_queue.h" | 169 | progress |= timerlistgroup_run_timers(&ctx->tlg); |
25 | #include "qemu/sockets.h" | 170 | - aio_context_release(ctx); |
26 | #include "qemu/cutils.h" | 171 | |
27 | @@ -XXX,XX +XXX,XX @@ static bool run_poll_handlers_once(AioContext *ctx, int64_t *timeout) | 172 | return progress; |
28 | bool progress = false; | 173 | } |
29 | AioHandler *node; | 174 | diff --git a/util/aio-win32.c b/util/aio-win32.c |
30 | 175 | index XXXXXXX..XXXXXXX 100644 | |
31 | + /* | 176 | --- a/util/aio-win32.c |
32 | + * Optimization: ->io_poll() handlers often contain RCU read critical | 177 | +++ b/util/aio-win32.c |
33 | + * sections and we therefore see many rcu_read_lock() -> rcu_read_unlock() | 178 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) |
34 | + * -> rcu_read_lock() -> ... sequences with expensive memory | 179 | progress |= aio_dispatch_handlers(ctx, event); |
35 | + * synchronization primitives. Make the entire polling loop an RCU | 180 | } while (count > 0); |
36 | + * critical section because nested rcu_read_lock()/rcu_read_unlock() calls | 181 | |
37 | + * are cheap. | 182 | - aio_context_acquire(ctx); |
38 | + */ | 183 | progress |= timerlistgroup_run_timers(&ctx->tlg); |
39 | + RCU_READ_LOCK_GUARD(); | 184 | - aio_context_release(ctx); |
40 | + | 185 | return progress; |
41 | QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) { | 186 | } |
42 | if (!node->deleted && node->io_poll && | 187 | |
43 | aio_node_check(ctx, node->is_external) && | 188 | diff --git a/util/qemu-coroutine-sleep.c b/util/qemu-coroutine-sleep.c |
189 | index XXXXXXX..XXXXXXX 100644 | ||
190 | --- a/util/qemu-coroutine-sleep.c | ||
191 | +++ b/util/qemu-coroutine-sleep.c | ||
192 | @@ -XXX,XX +XXX,XX @@ static void co_sleep_cb(void *opaque) | ||
193 | { | ||
194 | CoSleepCB *sleep_cb = opaque; | ||
195 | |||
196 | - qemu_coroutine_enter(sleep_cb->co); | ||
197 | + aio_co_wake(sleep_cb->co); | ||
198 | } | ||
199 | |||
200 | void coroutine_fn co_aio_sleep_ns(AioContext *ctx, QEMUClockType type, | ||
44 | -- | 201 | -- |
45 | 2.24.1 | 202 | 2.9.3 |
46 | 203 | ||
1 | It is not necessary to scan all AioHandlers for deletion. Keep a list | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
2 | of deleted handlers instead of scanning the full list of all handlers. | ||
3 | 2 | ||
4 | The AioHandler->deleted field can be dropped. Let's check if the | 3 | This covers both file descriptor callbacks and polling callbacks, |
5 | handler has been inserted into the deleted list instead. Add a new | 4 | since they execute related code. |
6 | QLIST_IS_INSERTED() API for this check. | ||
7 | 5 | ||
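The membership test works because a linked-list entry's back-pointer is
only non-NULL while the entry is on a list. A hedged sketch of the idea
(hand-rolled, not QEMU's qemu/queue.h macros); it assumes entries start
zeroed and that removal clears the back-pointer:

    #include <stddef.h>

    struct entry {
        struct entry *next;
        struct entry **prev;  /* address of the pointer pointing at us */
    };

    /* Same test as the new QLIST_IS_INSERTED() macro below. */
    static int is_inserted(struct entry *e)
    {
        return e->prev != NULL;
    }

    static void insert_head(struct entry **head, struct entry *e)
    {
        e->next = *head;
        if (e->next) {
            e->next->prev = &e->next;
        }
        *head = e;
        e->prev = head;       /* is_inserted() becomes true */
    }

    static void remove_entry(struct entry *e)
    {
        if (e->next) {
            e->next->prev = e->prev;
        }
        *e->prev = e->next;
        e->next = NULL;
        e->prev = NULL;       /* is_inserted() is false again */
    }
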
8 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 6 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
9 | Reviewed-by: Sergio Lopez <slp@redhat.com> | 7 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
10 | Message-id: 20200214171712.541358-5-stefanha@redhat.com | 8 | Reviewed-by: Fam Zheng <famz@redhat.com> |
9 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> | ||
10 | Message-id: 20170213135235.12274-14-pbonzini@redhat.com | ||
11 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 11 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
12 | --- | 12 | --- |
13 | include/block/aio.h | 6 ++++- | 13 | block/curl.c | 16 +++++++++++++--- |
14 | include/qemu/queue.h | 3 +++ | 14 | block/iscsi.c | 4 ++++ |
15 | util/aio-posix.c | 53 +++++++++++++++++++++++++++++--------------- | 15 | block/linux-aio.c | 4 ++++ |
16 | 3 files changed, 43 insertions(+), 19 deletions(-) | 16 | block/nfs.c | 6 ++++++ |
17 | block/sheepdog.c | 29 +++++++++++++++-------------- | ||
18 | block/ssh.c | 29 +++++++++-------------------- | ||
19 | block/win32-aio.c | 10 ++++++---- | ||
20 | hw/block/virtio-blk.c | 5 ++++- | ||
21 | hw/scsi/virtio-scsi.c | 7 +++++++ | ||
22 | util/aio-posix.c | 7 ------- | ||
23 | util/aio-win32.c | 6 ------ | ||
24 | 11 files changed, 68 insertions(+), 55 deletions(-) | ||
17 | 25 | ||
18 | diff --git a/include/block/aio.h b/include/block/aio.h | 26 | diff --git a/block/curl.c b/block/curl.c |
19 | index XXXXXXX..XXXXXXX 100644 | 27 | index XXXXXXX..XXXXXXX 100644 |
20 | --- a/include/block/aio.h | 28 | --- a/block/curl.c |
21 | +++ b/include/block/aio.h | 29 | +++ b/block/curl.c |
22 | @@ -XXX,XX +XXX,XX @@ void qemu_aio_unref(void *p); | 30 | @@ -XXX,XX +XXX,XX @@ static void curl_multi_check_completion(BDRVCURLState *s) |
23 | void qemu_aio_ref(void *p); | 31 | } |
24 | 32 | } | |
25 | typedef struct AioHandler AioHandler; | 33 | |
26 | +typedef QLIST_HEAD(, AioHandler) AioHandlerList; | 34 | -static void curl_multi_do(void *arg) |
27 | typedef void QEMUBHFunc(void *opaque); | 35 | +static void curl_multi_do_locked(CURLState *s) |
28 | typedef bool AioPollFn(void *opaque); | 36 | { |
29 | typedef void IOHandler(void *opaque); | 37 | - CURLState *s = (CURLState *)arg; |
30 | @@ -XXX,XX +XXX,XX @@ struct AioContext { | 38 | CURLSocket *socket, *next_socket; |
31 | QemuRecMutex lock; | 39 | int running; |
32 | 40 | int r; | |
33 | /* The list of registered AIO handlers. Protected by ctx->list_lock. */ | 41 | @@ -XXX,XX +XXX,XX @@ static void curl_multi_do(void *arg) |
34 | - QLIST_HEAD(, AioHandler) aio_handlers; | 42 | } |
35 | + AioHandlerList aio_handlers; | 43 | } |
36 | + | 44 | |
37 | + /* The list of AIO handlers to be deleted. Protected by ctx->list_lock. */ | 45 | +static void curl_multi_do(void *arg) |
38 | + AioHandlerList deleted_aio_handlers; | 46 | +{ |
39 | 47 | + CURLState *s = (CURLState *)arg; | |
40 | /* Used to avoid unnecessary event_notifier_set calls in aio_notify; | 48 | + |
41 | * accessed with atomic primitives. If this field is 0, everything | 49 | + aio_context_acquire(s->s->aio_context); |
42 | diff --git a/include/qemu/queue.h b/include/qemu/queue.h | 50 | + curl_multi_do_locked(s); |
43 | index XXXXXXX..XXXXXXX 100644 | 51 | + aio_context_release(s->s->aio_context); |
44 | --- a/include/qemu/queue.h | 52 | +} |
45 | +++ b/include/qemu/queue.h | 53 | + |
46 | @@ -XXX,XX +XXX,XX @@ struct { \ | 54 | static void curl_multi_read(void *arg) |
47 | } \ | 55 | { |
48 | } while (/*CONSTCOND*/0) | 56 | CURLState *s = (CURLState *)arg; |
49 | 57 | ||
50 | +/* Is elm in a list? */ | 58 | - curl_multi_do(arg); |
51 | +#define QLIST_IS_INSERTED(elm, field) ((elm)->field.le_prev != NULL) | 59 | + aio_context_acquire(s->s->aio_context); |
52 | + | 60 | + curl_multi_do_locked(s); |
53 | #define QLIST_FOREACH(var, head, field) \ | 61 | curl_multi_check_completion(s->s); |
54 | for ((var) = ((head)->lh_first); \ | 62 | + aio_context_release(s->s->aio_context); |
55 | (var); \ | 63 | } |
64 | |||
65 | static void curl_multi_timeout_do(void *arg) | ||
66 | diff --git a/block/iscsi.c b/block/iscsi.c | ||
67 | index XXXXXXX..XXXXXXX 100644 | ||
68 | --- a/block/iscsi.c | ||
69 | +++ b/block/iscsi.c | ||
70 | @@ -XXX,XX +XXX,XX @@ iscsi_process_read(void *arg) | ||
71 | IscsiLun *iscsilun = arg; | ||
72 | struct iscsi_context *iscsi = iscsilun->iscsi; | ||
73 | |||
74 | + aio_context_acquire(iscsilun->aio_context); | ||
75 | iscsi_service(iscsi, POLLIN); | ||
76 | iscsi_set_events(iscsilun); | ||
77 | + aio_context_release(iscsilun->aio_context); | ||
78 | } | ||
79 | |||
80 | static void | ||
81 | @@ -XXX,XX +XXX,XX @@ iscsi_process_write(void *arg) | ||
82 | IscsiLun *iscsilun = arg; | ||
83 | struct iscsi_context *iscsi = iscsilun->iscsi; | ||
84 | |||
85 | + aio_context_acquire(iscsilun->aio_context); | ||
86 | iscsi_service(iscsi, POLLOUT); | ||
87 | iscsi_set_events(iscsilun); | ||
88 | + aio_context_release(iscsilun->aio_context); | ||
89 | } | ||
90 | |||
91 | static int64_t sector_lun2qemu(int64_t sector, IscsiLun *iscsilun) | ||
92 | diff --git a/block/linux-aio.c b/block/linux-aio.c | ||
93 | index XXXXXXX..XXXXXXX 100644 | ||
94 | --- a/block/linux-aio.c | ||
95 | +++ b/block/linux-aio.c | ||
96 | @@ -XXX,XX +XXX,XX @@ static void qemu_laio_completion_cb(EventNotifier *e) | ||
97 | LinuxAioState *s = container_of(e, LinuxAioState, e); | ||
98 | |||
99 | if (event_notifier_test_and_clear(&s->e)) { | ||
100 | + aio_context_acquire(s->aio_context); | ||
101 | qemu_laio_process_completions_and_submit(s); | ||
102 | + aio_context_release(s->aio_context); | ||
103 | } | ||
104 | } | ||
105 | |||
106 | @@ -XXX,XX +XXX,XX @@ static bool qemu_laio_poll_cb(void *opaque) | ||
107 | return false; | ||
108 | } | ||
109 | |||
110 | + aio_context_acquire(s->aio_context); | ||
111 | qemu_laio_process_completions_and_submit(s); | ||
112 | + aio_context_release(s->aio_context); | ||
113 | return true; | ||
114 | } | ||
115 | |||
116 | diff --git a/block/nfs.c b/block/nfs.c | ||
117 | index XXXXXXX..XXXXXXX 100644 | ||
118 | --- a/block/nfs.c | ||
119 | +++ b/block/nfs.c | ||
120 | @@ -XXX,XX +XXX,XX @@ static void nfs_set_events(NFSClient *client) | ||
121 | static void nfs_process_read(void *arg) | ||
122 | { | ||
123 | NFSClient *client = arg; | ||
124 | + | ||
125 | + aio_context_acquire(client->aio_context); | ||
126 | nfs_service(client->context, POLLIN); | ||
127 | nfs_set_events(client); | ||
128 | + aio_context_release(client->aio_context); | ||
129 | } | ||
130 | |||
131 | static void nfs_process_write(void *arg) | ||
132 | { | ||
133 | NFSClient *client = arg; | ||
134 | + | ||
135 | + aio_context_acquire(client->aio_context); | ||
136 | nfs_service(client->context, POLLOUT); | ||
137 | nfs_set_events(client); | ||
138 | + aio_context_release(client->aio_context); | ||
139 | } | ||
140 | |||
141 | static void nfs_co_init_task(BlockDriverState *bs, NFSRPC *task) | ||
142 | diff --git a/block/sheepdog.c b/block/sheepdog.c | ||
143 | index XXXXXXX..XXXXXXX 100644 | ||
144 | --- a/block/sheepdog.c | ||
145 | +++ b/block/sheepdog.c | ||
146 | @@ -XXX,XX +XXX,XX @@ static coroutine_fn int send_co_req(int sockfd, SheepdogReq *hdr, void *data, | ||
147 | return ret; | ||
148 | } | ||
149 | |||
150 | -static void restart_co_req(void *opaque) | ||
151 | -{ | ||
152 | - Coroutine *co = opaque; | ||
153 | - | ||
154 | - qemu_coroutine_enter(co); | ||
155 | -} | ||
156 | - | ||
157 | typedef struct SheepdogReqCo { | ||
158 | int sockfd; | ||
159 | BlockDriverState *bs; | ||
160 | @@ -XXX,XX +XXX,XX @@ typedef struct SheepdogReqCo { | ||
161 | unsigned int *rlen; | ||
162 | int ret; | ||
163 | bool finished; | ||
164 | + Coroutine *co; | ||
165 | } SheepdogReqCo; | ||
166 | |||
167 | +static void restart_co_req(void *opaque) | ||
168 | +{ | ||
169 | + SheepdogReqCo *srco = opaque; | ||
170 | + | ||
171 | + aio_co_wake(srco->co); | ||
172 | +} | ||
173 | + | ||
174 | static coroutine_fn void do_co_req(void *opaque) | ||
175 | { | ||
176 | int ret; | ||
177 | - Coroutine *co; | ||
178 | SheepdogReqCo *srco = opaque; | ||
179 | int sockfd = srco->sockfd; | ||
180 | SheepdogReq *hdr = srco->hdr; | ||
181 | @@ -XXX,XX +XXX,XX @@ static coroutine_fn void do_co_req(void *opaque) | ||
182 | unsigned int *wlen = srco->wlen; | ||
183 | unsigned int *rlen = srco->rlen; | ||
184 | |||
185 | - co = qemu_coroutine_self(); | ||
186 | + srco->co = qemu_coroutine_self(); | ||
187 | aio_set_fd_handler(srco->aio_context, sockfd, false, | ||
188 | - NULL, restart_co_req, NULL, co); | ||
189 | + NULL, restart_co_req, NULL, srco); | ||
190 | |||
191 | ret = send_co_req(sockfd, hdr, data, wlen); | ||
192 | if (ret < 0) { | ||
193 | @@ -XXX,XX +XXX,XX @@ static coroutine_fn void do_co_req(void *opaque) | ||
194 | } | ||
195 | |||
196 | aio_set_fd_handler(srco->aio_context, sockfd, false, | ||
197 | - restart_co_req, NULL, NULL, co); | ||
198 | + restart_co_req, NULL, NULL, srco); | ||
199 | |||
200 | ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr)); | ||
201 | if (ret != sizeof(*hdr)) { | ||
202 | @@ -XXX,XX +XXX,XX @@ out: | ||
203 | aio_set_fd_handler(srco->aio_context, sockfd, false, | ||
204 | NULL, NULL, NULL, NULL); | ||
205 | |||
206 | + srco->co = NULL; | ||
207 | srco->ret = ret; | ||
208 | srco->finished = true; | ||
209 | if (srco->bs) { | ||
210 | @@ -XXX,XX +XXX,XX @@ static void coroutine_fn aio_read_response(void *opaque) | ||
211 | * We've finished all requests which belong to the AIOCB, so | ||
212 | * we can switch back to sd_co_readv/writev now. | ||
213 | */ | ||
214 | - qemu_coroutine_enter(acb->coroutine); | ||
215 | + aio_co_wake(acb->coroutine); | ||
216 | } | ||
217 | |||
218 | return; | ||
219 | @@ -XXX,XX +XXX,XX @@ static void co_read_response(void *opaque) | ||
220 | s->co_recv = qemu_coroutine_create(aio_read_response, opaque); | ||
221 | } | ||
222 | |||
223 | - qemu_coroutine_enter(s->co_recv); | ||
224 | + aio_co_wake(s->co_recv); | ||
225 | } | ||
226 | |||
227 | static void co_write_request(void *opaque) | ||
228 | { | ||
229 | BDRVSheepdogState *s = opaque; | ||
230 | |||
231 | - qemu_coroutine_enter(s->co_send); | ||
232 | + aio_co_wake(s->co_send); | ||
233 | } | ||
234 | |||
235 | /* | ||
236 | diff --git a/block/ssh.c b/block/ssh.c | ||
237 | index XXXXXXX..XXXXXXX 100644 | ||
238 | --- a/block/ssh.c | ||
239 | +++ b/block/ssh.c | ||
240 | @@ -XXX,XX +XXX,XX @@ static void restart_coroutine(void *opaque) | ||
241 | |||
242 | DPRINTF("co=%p", co); | ||
243 | |||
244 | - qemu_coroutine_enter(co); | ||
245 | + aio_co_wake(co); | ||
246 | } | ||
247 | |||
248 | -static coroutine_fn void set_fd_handler(BDRVSSHState *s, BlockDriverState *bs) | ||
249 | +/* A non-blocking call returned EAGAIN, so yield, ensuring the | ||
250 | + * handlers are set up so that we'll be rescheduled when there is an | ||
251 | + * interesting event on the socket. | ||
252 | + */ | ||
253 | +static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs) | ||
254 | { | ||
255 | int r; | ||
256 | IOHandler *rd_handler = NULL, *wr_handler = NULL; | ||
257 | @@ -XXX,XX +XXX,XX @@ static coroutine_fn void set_fd_handler(BDRVSSHState *s, BlockDriverState *bs) | ||
258 | |||
259 | aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock, | ||
260 | false, rd_handler, wr_handler, NULL, co); | ||
261 | -} | ||
262 | - | ||
263 | -static coroutine_fn void clear_fd_handler(BDRVSSHState *s, | ||
264 | - BlockDriverState *bs) | ||
265 | -{ | ||
266 | - DPRINTF("s->sock=%d", s->sock); | ||
267 | - aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock, | ||
268 | - false, NULL, NULL, NULL, NULL); | ||
269 | -} | ||
270 | - | ||
271 | -/* A non-blocking call returned EAGAIN, so yield, ensuring the | ||
272 | - * handlers are set up so that we'll be rescheduled when there is an | ||
273 | - * interesting event on the socket. | ||
274 | - */ | ||
275 | -static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs) | ||
276 | -{ | ||
277 | - set_fd_handler(s, bs); | ||
278 | qemu_coroutine_yield(); | ||
279 | - clear_fd_handler(s, bs); | ||
280 | + DPRINTF("s->sock=%d - back", s->sock); | ||
281 | + aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock, false, | ||
282 | + NULL, NULL, NULL, NULL); | ||
283 | } | ||
284 | |||
285 | /* SFTP has a function `libssh2_sftp_seek64' which seeks to a position | ||
286 | diff --git a/block/win32-aio.c b/block/win32-aio.c | ||
287 | index XXXXXXX..XXXXXXX 100644 | ||
288 | --- a/block/win32-aio.c | ||
289 | +++ b/block/win32-aio.c | ||
290 | @@ -XXX,XX +XXX,XX @@ struct QEMUWin32AIOState { | ||
291 | HANDLE hIOCP; | ||
292 | EventNotifier e; | ||
293 | int count; | ||
294 | - bool is_aio_context_attached; | ||
295 | + AioContext *aio_ctx; | ||
296 | }; | ||
297 | |||
298 | typedef struct QEMUWin32AIOCB { | ||
299 | @@ -XXX,XX +XXX,XX @@ static void win32_aio_process_completion(QEMUWin32AIOState *s, | ||
300 | } | ||
301 | |||
302 | |||
303 | + aio_context_acquire(s->aio_ctx); | ||
304 | waiocb->common.cb(waiocb->common.opaque, ret); | ||
305 | + aio_context_release(s->aio_ctx); | ||
306 | qemu_aio_unref(waiocb); | ||
307 | } | ||
308 | |||
309 | @@ -XXX,XX +XXX,XX @@ void win32_aio_detach_aio_context(QEMUWin32AIOState *aio, | ||
310 | AioContext *old_context) | ||
311 | { | ||
312 | aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL); | ||
313 | - aio->is_aio_context_attached = false; | ||
314 | + aio->aio_ctx = NULL; | ||
315 | } | ||
316 | |||
317 | void win32_aio_attach_aio_context(QEMUWin32AIOState *aio, | ||
318 | AioContext *new_context) | ||
319 | { | ||
320 | - aio->is_aio_context_attached = true; | ||
321 | + aio->aio_ctx = new_context; | ||
322 | aio_set_event_notifier(new_context, &aio->e, false, | ||
323 | win32_aio_completion_cb, NULL); | ||
324 | } | ||
325 | @@ -XXX,XX +XXX,XX @@ out_free_state: | ||
326 | |||
327 | void win32_aio_cleanup(QEMUWin32AIOState *aio) | ||
328 | { | ||
329 | - assert(!aio->is_aio_context_attached); | ||
330 | + assert(!aio->aio_ctx); | ||
331 | CloseHandle(aio->hIOCP); | ||
332 | event_notifier_cleanup(&aio->e); | ||
333 | g_free(aio); | ||
334 | diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c | ||
335 | index XXXXXXX..XXXXXXX 100644 | ||
336 | --- a/hw/block/virtio-blk.c | ||
337 | +++ b/hw/block/virtio-blk.c | ||
338 | @@ -XXX,XX +XXX,XX @@ static void virtio_blk_ioctl_complete(void *opaque, int status) | ||
339 | { | ||
340 | VirtIOBlockIoctlReq *ioctl_req = opaque; | ||
341 | VirtIOBlockReq *req = ioctl_req->req; | ||
342 | - VirtIODevice *vdev = VIRTIO_DEVICE(req->dev); | ||
343 | + VirtIOBlock *s = req->dev; | ||
344 | + VirtIODevice *vdev = VIRTIO_DEVICE(s); | ||
345 | struct virtio_scsi_inhdr *scsi; | ||
346 | struct sg_io_hdr *hdr; | ||
347 | |||
348 | @@ -XXX,XX +XXX,XX @@ bool virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq) | ||
349 | MultiReqBuffer mrb = {}; | ||
350 | bool progress = false; | ||
351 | |||
352 | + aio_context_acquire(blk_get_aio_context(s->blk)); | ||
353 | blk_io_plug(s->blk); | ||
354 | |||
355 | do { | ||
356 | @@ -XXX,XX +XXX,XX @@ bool virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq) | ||
357 | } | ||
358 | |||
359 | blk_io_unplug(s->blk); | ||
360 | + aio_context_release(blk_get_aio_context(s->blk)); | ||
361 | return progress; | ||
362 | } | ||
363 | |||
364 | diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c | ||
365 | index XXXXXXX..XXXXXXX 100644 | ||
366 | --- a/hw/scsi/virtio-scsi.c | ||
367 | +++ b/hw/scsi/virtio-scsi.c | ||
368 | @@ -XXX,XX +XXX,XX @@ bool virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq) | ||
369 | VirtIOSCSIReq *req; | ||
370 | bool progress = false; | ||
371 | |||
372 | + virtio_scsi_acquire(s); | ||
373 | while ((req = virtio_scsi_pop_req(s, vq))) { | ||
374 | progress = true; | ||
375 | virtio_scsi_handle_ctrl_req(s, req); | ||
376 | } | ||
377 | + virtio_scsi_release(s); | ||
378 | return progress; | ||
379 | } | ||
380 | |||
381 | @@ -XXX,XX +XXX,XX @@ bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq) | ||
382 | |||
383 | QTAILQ_HEAD(, VirtIOSCSIReq) reqs = QTAILQ_HEAD_INITIALIZER(reqs); | ||
384 | |||
385 | + virtio_scsi_acquire(s); | ||
386 | do { | ||
387 | virtio_queue_set_notification(vq, 0); | ||
388 | |||
389 | @@ -XXX,XX +XXX,XX @@ bool virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq) | ||
390 | QTAILQ_FOREACH_SAFE(req, &reqs, next, next) { | ||
391 | virtio_scsi_handle_cmd_req_submit(s, req); | ||
392 | } | ||
393 | + virtio_scsi_release(s); | ||
394 | return progress; | ||
395 | } | ||
396 | |||
397 | @@ -XXX,XX +XXX,XX @@ out: | ||
398 | |||
399 | bool virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq) | ||
400 | { | ||
401 | + virtio_scsi_acquire(s); | ||
402 | if (s->events_dropped) { | ||
403 | virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); | ||
404 | + virtio_scsi_release(s); | ||
405 | return true; | ||
406 | } | ||
407 | + virtio_scsi_release(s); | ||
408 | return false; | ||
409 | } | ||
410 | |||
56 | diff --git a/util/aio-posix.c b/util/aio-posix.c | 411 | diff --git a/util/aio-posix.c b/util/aio-posix.c |
57 | index XXXXXXX..XXXXXXX 100644 | 412 | index XXXXXXX..XXXXXXX 100644 |
58 | --- a/util/aio-posix.c | 413 | --- a/util/aio-posix.c |
59 | +++ b/util/aio-posix.c | 414 | +++ b/util/aio-posix.c |
60 | @@ -XXX,XX +XXX,XX @@ struct AioHandler | ||
61 | AioPollFn *io_poll; | ||
62 | IOHandler *io_poll_begin; | ||
63 | IOHandler *io_poll_end; | ||
64 | - int deleted; | ||
65 | void *opaque; | ||
66 | bool is_external; | ||
67 | QLIST_ENTRY(AioHandler) node; | ||
68 | + QLIST_ENTRY(AioHandler) node_deleted; | ||
69 | }; | ||
70 | |||
71 | #ifdef CONFIG_EPOLL_CREATE1 | ||
72 | @@ -XXX,XX +XXX,XX @@ static bool aio_epoll_try_enable(AioContext *ctx) | ||
73 | |||
74 | QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) { | ||
75 | int r; | ||
76 | - if (node->deleted || !node->pfd.events) { | ||
77 | + if (QLIST_IS_INSERTED(node, node_deleted) || !node->pfd.events) { | ||
78 | continue; | ||
79 | } | ||
80 | event.events = epoll_events_from_pfd(node->pfd.events); | ||
81 | @@ -XXX,XX +XXX,XX @@ static AioHandler *find_aio_handler(AioContext *ctx, int fd) | ||
82 | AioHandler *node; | ||
83 | |||
84 | QLIST_FOREACH(node, &ctx->aio_handlers, node) { | ||
85 | - if (node->pfd.fd == fd) | ||
86 | - if (!node->deleted) | ||
87 | + if (node->pfd.fd == fd) { | ||
88 | + if (!QLIST_IS_INSERTED(node, node_deleted)) { | ||
89 | return node; | ||
90 | + } | ||
91 | + } | ||
92 | } | ||
93 | |||
94 | return NULL; | ||
95 | @@ -XXX,XX +XXX,XX @@ static bool aio_remove_fd_handler(AioContext *ctx, AioHandler *node) | ||
96 | |||
97 | /* If a read is in progress, just mark the node as deleted */ | ||
98 | if (qemu_lockcnt_count(&ctx->list_lock)) { | ||
99 | - node->deleted = 1; | ||
100 | + QLIST_INSERT_HEAD_RCU(&ctx->deleted_aio_handlers, node, node_deleted); | ||
101 | node->pfd.revents = 0; | ||
102 | return false; | ||
103 | } | ||
104 | @@ -XXX,XX +XXX,XX @@ static void poll_set_started(AioContext *ctx, bool started) | ||
105 | QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) { | ||
106 | IOHandler *fn; | ||
107 | |||
108 | - if (node->deleted) { | ||
109 | + if (QLIST_IS_INSERTED(node, node_deleted)) { | ||
110 | continue; | ||
111 | } | ||
112 | |||
113 | @@ -XXX,XX +XXX,XX @@ bool aio_pending(AioContext *ctx) | ||
114 | return result; | ||
115 | } | ||
116 | |||
117 | +static void aio_free_deleted_handlers(AioContext *ctx) | ||
118 | +{ | ||
119 | + AioHandler *node; | ||
120 | + | ||
121 | + if (QLIST_EMPTY_RCU(&ctx->deleted_aio_handlers)) { | ||
122 | + return; | ||
123 | + } | ||
124 | + if (!qemu_lockcnt_dec_if_lock(&ctx->list_lock)) { | ||
125 | + return; /* we are nested, let the parent do the freeing */ | ||
126 | + } | ||
127 | + | ||
128 | + while ((node = QLIST_FIRST_RCU(&ctx->deleted_aio_handlers))) { | ||
129 | + QLIST_REMOVE(node, node); | ||
130 | + QLIST_REMOVE(node, node_deleted); | ||
131 | + g_free(node); | ||
132 | + } | ||
133 | + | ||
134 | + qemu_lockcnt_inc_and_unlock(&ctx->list_lock); | ||
135 | +} | ||
136 | + | ||
137 | static bool aio_dispatch_handlers(AioContext *ctx) | ||
138 | { | ||
139 | AioHandler *node, *tmp; | ||
140 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx) | 415 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx) |
141 | revents = node->pfd.revents & node->pfd.events; | ||
142 | node->pfd.revents = 0; | ||
143 | |||
144 | - if (!node->deleted && | ||
145 | + if (!QLIST_IS_INSERTED(node, node_deleted) && | ||
146 | (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) && | 416 | (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) && |
147 | aio_node_check(ctx, node->is_external) && | 417 | aio_node_check(ctx, node->is_external) && |
148 | node->io_read) { | 418 | node->io_read) { |
419 | - aio_context_acquire(ctx); | ||
420 | node->io_read(node->opaque); | ||
421 | - aio_context_release(ctx); | ||
422 | |||
423 | /* aio_notify() does not count as progress */ | ||
424 | if (node->opaque != &ctx->notifier) { | ||
149 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx) | 425 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx) |
150 | progress = true; | ||
151 | } | ||
152 | } | ||
153 | - if (!node->deleted && | ||
154 | + if (!QLIST_IS_INSERTED(node, node_deleted) && | ||
155 | (revents & (G_IO_OUT | G_IO_ERR)) && | 426 | (revents & (G_IO_OUT | G_IO_ERR)) && |
156 | aio_node_check(ctx, node->is_external) && | 427 | aio_node_check(ctx, node->is_external) && |
157 | node->io_write) { | 428 | node->io_write) { |
429 | - aio_context_acquire(ctx); | ||
158 | node->io_write(node->opaque); | 430 | node->io_write(node->opaque); |
431 | - aio_context_release(ctx); | ||
159 | progress = true; | 432 | progress = true; |
160 | } | 433 | } |
434 | |||
435 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | ||
436 | start = qemu_clock_get_ns(QEMU_CLOCK_REALTIME); | ||
437 | } | ||
438 | |||
439 | - aio_context_acquire(ctx); | ||
440 | progress = try_poll_mode(ctx, blocking); | ||
441 | - aio_context_release(ctx); | ||
161 | - | 442 | - |
162 | - if (node->deleted) { | 443 | if (!progress) { |
163 | - if (qemu_lockcnt_dec_if_lock(&ctx->list_lock)) { | 444 | assert(npfd == 0); |
164 | - QLIST_REMOVE(node, node); | 445 | |
165 | - g_free(node); | 446 | diff --git a/util/aio-win32.c b/util/aio-win32.c |
166 | - qemu_lockcnt_inc_and_unlock(&ctx->list_lock); | 447 | index XXXXXXX..XXXXXXX 100644 |
167 | - } | 448 | --- a/util/aio-win32.c |
168 | - } | 449 | +++ b/util/aio-win32.c |
169 | } | 450 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) |
170 | 451 | (revents || event_notifier_get_handle(node->e) == event) && | |
171 | return progress; | 452 | node->io_notify) { |
172 | @@ -XXX,XX +XXX,XX @@ void aio_dispatch(AioContext *ctx) | 453 | node->pfd.revents = 0; |
173 | qemu_lockcnt_inc(&ctx->list_lock); | 454 | - aio_context_acquire(ctx); |
174 | aio_bh_poll(ctx); | 455 | node->io_notify(node->e); |
175 | aio_dispatch_handlers(ctx); | 456 | - aio_context_release(ctx); |
176 | + aio_free_deleted_handlers(ctx); | 457 | |
177 | qemu_lockcnt_dec(&ctx->list_lock); | 458 | /* aio_notify() does not count as progress */ |
178 | 459 | if (node->e != &ctx->notifier) { | |
179 | timerlistgroup_run_timers(&ctx->tlg); | 460 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) |
180 | @@ -XXX,XX +XXX,XX @@ static bool run_poll_handlers_once(AioContext *ctx, int64_t *timeout) | 461 | (node->io_read || node->io_write)) { |
181 | RCU_READ_LOCK_GUARD(); | 462 | node->pfd.revents = 0; |
182 | 463 | if ((revents & G_IO_IN) && node->io_read) { | |
183 | QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) { | 464 | - aio_context_acquire(ctx); |
184 | - if (!node->deleted && node->io_poll && | 465 | node->io_read(node->opaque); |
185 | + if (!QLIST_IS_INSERTED(node, node_deleted) && node->io_poll && | 466 | - aio_context_release(ctx); |
186 | aio_node_check(ctx, node->is_external) && | 467 | progress = true; |
187 | node->io_poll(node->opaque)) { | 468 | } |
188 | /* | 469 | if ((revents & G_IO_OUT) && node->io_write) { |
189 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | 470 | - aio_context_acquire(ctx); |
190 | 471 | node->io_write(node->opaque); | |
191 | if (!aio_epoll_enabled(ctx)) { | 472 | - aio_context_release(ctx); |
192 | QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) { | 473 | progress = true; |
193 | - if (!node->deleted && node->pfd.events | 474 | } |
194 | + if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events | 475 | |
195 | && aio_node_check(ctx, node->is_external)) { | ||
196 | add_pollfd(node); | ||
197 | } | ||
198 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | ||
199 | progress |= aio_dispatch_handlers(ctx); | ||
200 | } | ||
201 | |||
202 | + aio_free_deleted_handlers(ctx); | ||
203 | + | ||
204 | qemu_lockcnt_dec(&ctx->list_lock); | ||
205 | |||
206 | progress |= timerlistgroup_run_timers(&ctx->tlg); | ||
207 | -- | 476 | -- |
208 | 2.24.1 | 477 | 2.9.3 |
209 | 478 | ||
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
2 | 2 | ||
3 | The virtio-scsi fuzz target sets up and fuzzes the available virtio-scsi | 3 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
4 | queues. After an element is placed on a queue, the fuzzer can select | 4 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
5 | whether to perform a kick or to continue adding elements. | 5 | Reviewed-by: Fam Zheng <famz@redhat.com> |
6 | 6 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> | |
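In other words, the raw fuzz input is parsed as a packed stream of
fixed-size action headers, each followed by a variable-length payload.
A minimal sketch of that decode loop (field names follow the vq_action
struct in the patch below; the virtqueue handling itself is elided):

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint8_t queue;   /* target virtqueue, reduced modulo the count */
        uint8_t length;  /* payload bytes following this header */
        uint8_t write;   /* descriptor direction flag */
        uint8_t next;    /* chain to the next descriptor? */
        uint8_t kick;    /* notify the device after this element? */
    } vq_action;

    static void consume(const uint8_t *data, size_t size,
                        unsigned num_queues)
    {
        vq_action a;

        while (size >= sizeof(a)) {
            memcpy(&a, data, sizeof(a));
            data += sizeof(a);
            size -= sizeof(a);

            /* Clamp fuzzer-controlled fields to valid ranges. */
            a.queue %= num_queues;
            a.length = a.length > size ? size : a.length;

            /* ... place data[0..a.length) on queue a.queue,
             * kicking the device if a.kick is set ... */

            data += a.length;
            size -= a.length;
        }
    }
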
7 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 7 | Message-id: 20170213135235.12274-15-pbonzini@redhat.com |
8 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | ||
9 | Message-id: 20200220041118.23264-22-alxndr@bu.edu | ||
10 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 8 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
11 | --- | 9 | --- |
12 | tests/qtest/fuzz/Makefile.include | 1 + | 10 | block/archipelago.c | 3 +++ |
13 | tests/qtest/fuzz/virtio_scsi_fuzz.c | 213 ++++++++++++++++++++++++++++ | 11 | block/blkreplay.c | 2 +- |
14 | 2 files changed, 214 insertions(+) | 12 | block/block-backend.c | 6 ++++++ |
15 | create mode 100644 tests/qtest/fuzz/virtio_scsi_fuzz.c | 13 | block/curl.c | 26 ++++++++++++++++++-------- |
14 | block/gluster.c | 9 +-------- | ||
15 | block/io.c | 6 +++++- | ||
16 | block/iscsi.c | 6 +++++- | ||
17 | block/linux-aio.c | 15 +++++++++------ | ||
18 | block/nfs.c | 3 ++- | ||
19 | block/null.c | 4 ++++ | ||
20 | block/qed.c | 3 +++ | ||
21 | block/rbd.c | 4 ++++ | ||
22 | dma-helpers.c | 2 ++ | ||
23 | hw/block/virtio-blk.c | 2 ++ | ||
24 | hw/scsi/scsi-bus.c | 2 ++ | ||
25 | util/async.c | 4 ++-- | ||
26 | util/thread-pool.c | 2 ++ | ||
27 | 17 files changed, 71 insertions(+), 28 deletions(-) | ||
16 | 28 | ||
17 | diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include | 29 | diff --git a/block/archipelago.c b/block/archipelago.c |
18 | index XXXXXXX..XXXXXXX 100644 | 30 | index XXXXXXX..XXXXXXX 100644 |
19 | --- a/tests/qtest/fuzz/Makefile.include | 31 | --- a/block/archipelago.c |
20 | +++ b/tests/qtest/fuzz/Makefile.include | 32 | +++ b/block/archipelago.c |
21 | @@ -XXX,XX +XXX,XX @@ fuzz-obj-y += tests/qtest/fuzz/qos_fuzz.o | 33 | @@ -XXX,XX +XXX,XX @@ static void qemu_archipelago_complete_aio(void *opaque) |
22 | # Targets | 34 | { |
23 | fuzz-obj-y += tests/qtest/fuzz/i440fx_fuzz.o | 35 | AIORequestData *reqdata = (AIORequestData *) opaque; |
24 | fuzz-obj-y += tests/qtest/fuzz/virtio_net_fuzz.o | 36 | ArchipelagoAIOCB *aio_cb = (ArchipelagoAIOCB *) reqdata->aio_cb; |
25 | +fuzz-obj-y += tests/qtest/fuzz/virtio_scsi_fuzz.o | 37 | + AioContext *ctx = bdrv_get_aio_context(aio_cb->common.bs); |
26 | 38 | ||
27 | FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest | 39 | + aio_context_acquire(ctx); |
28 | 40 | aio_cb->common.cb(aio_cb->common.opaque, aio_cb->ret); | |
29 | diff --git a/tests/qtest/fuzz/virtio_scsi_fuzz.c b/tests/qtest/fuzz/virtio_scsi_fuzz.c | 41 | + aio_context_release(ctx); |
30 | new file mode 100644 | 42 | aio_cb->status = 0; |
31 | index XXXXXXX..XXXXXXX | 43 | |
32 | --- /dev/null | 44 | qemu_aio_unref(aio_cb); |
33 | +++ b/tests/qtest/fuzz/virtio_scsi_fuzz.c | 45 | diff --git a/block/blkreplay.c b/block/blkreplay.c |
34 | @@ -XXX,XX +XXX,XX @@ | 46 | index XXXXXXX..XXXXXXX 100755 |
35 | +/* | 47 | --- a/block/blkreplay.c |
36 | + * virtio-scsi Fuzzing Target | 48 | +++ b/block/blkreplay.c |
37 | + * | 49 | @@ -XXX,XX +XXX,XX @@ static int64_t blkreplay_getlength(BlockDriverState *bs) |
38 | + * Copyright Red Hat Inc., 2019 | 50 | static void blkreplay_bh_cb(void *opaque) |
39 | + * | 51 | { |
40 | + * Authors: | 52 | Request *req = opaque; |
41 | + * Alexander Bulekov <alxndr@bu.edu> | 53 | - qemu_coroutine_enter(req->co); |
42 | + * | 54 | + aio_co_wake(req->co); |
43 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. | 55 | qemu_bh_delete(req->bh); |
44 | + * See the COPYING file in the top-level directory. | 56 | g_free(req); |
45 | + */ | 57 | } |
46 | + | 58 | diff --git a/block/block-backend.c b/block/block-backend.c |
47 | +#include "qemu/osdep.h" | 59 | index XXXXXXX..XXXXXXX 100644 |
48 | + | 60 | --- a/block/block-backend.c |
49 | +#include "tests/qtest/libqtest.h" | 61 | +++ b/block/block-backend.c |
50 | +#include "libqos/virtio-scsi.h" | 62 | @@ -XXX,XX +XXX,XX @@ int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags) |
51 | +#include "libqos/virtio.h" | 63 | static void error_callback_bh(void *opaque) |
52 | +#include "libqos/virtio-pci.h" | 64 | { |
53 | +#include "standard-headers/linux/virtio_ids.h" | 65 | struct BlockBackendAIOCB *acb = opaque; |
54 | +#include "standard-headers/linux/virtio_pci.h" | 66 | + AioContext *ctx = bdrv_get_aio_context(acb->common.bs); |
55 | +#include "standard-headers/linux/virtio_scsi.h" | 67 | |
56 | +#include "fuzz.h" | 68 | bdrv_dec_in_flight(acb->common.bs); |
57 | +#include "fork_fuzz.h" | 69 | + aio_context_acquire(ctx); |
58 | +#include "qos_fuzz.h" | 70 | acb->common.cb(acb->common.opaque, acb->ret); |
59 | + | 71 | + aio_context_release(ctx); |
60 | +#define PCI_SLOT 0x02 | 72 | qemu_aio_unref(acb); |
61 | +#define PCI_FN 0x00 | 73 | } |
62 | +#define QVIRTIO_SCSI_TIMEOUT_US (1 * 1000 * 1000) | 74 | |
63 | + | 75 | @@ -XXX,XX +XXX,XX @@ static void blk_aio_complete(BlkAioEmAIOCB *acb) |
64 | +#define MAX_NUM_QUEUES 64 | 76 | static void blk_aio_complete_bh(void *opaque) |
65 | + | 77 | { |
66 | +/* Based on tests/virtio-scsi-test.c */ | 78 | BlkAioEmAIOCB *acb = opaque; |
67 | +typedef struct { | 79 | + AioContext *ctx = bdrv_get_aio_context(acb->common.bs); |
68 | + int num_queues; | 80 | |
69 | + QVirtQueue *vq[MAX_NUM_QUEUES + 2]; | 81 | assert(acb->has_returned); |
70 | +} QVirtioSCSIQueues; | 82 | + aio_context_acquire(ctx); |
71 | + | 83 | blk_aio_complete(acb); |
72 | +static QVirtioSCSIQueues *qvirtio_scsi_init(QVirtioDevice *dev, uint64_t mask) | 84 | + aio_context_release(ctx); |
73 | +{ | 85 | } |
74 | + QVirtioSCSIQueues *vs; | 86 | |
75 | + uint64_t feat; | 87 | static BlockAIOCB *blk_aio_prwv(BlockBackend *blk, int64_t offset, int bytes, |
76 | + int i; | 88 | diff --git a/block/curl.c b/block/curl.c |
77 | + | 89 | index XXXXXXX..XXXXXXX 100644 |
78 | + vs = g_new0(QVirtioSCSIQueues, 1); | 90 | --- a/block/curl.c |
79 | + | 91 | +++ b/block/curl.c |
80 | + feat = qvirtio_get_features(dev); | 92 | @@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p) |
81 | + if (mask) { | 93 | { |
82 | + feat &= ~QVIRTIO_F_BAD_FEATURE | mask; | 94 | CURLState *state; |
83 | + } else { | 95 | int running; |
84 | + feat &= ~(QVIRTIO_F_BAD_FEATURE | (1ull << VIRTIO_RING_F_EVENT_IDX)); | 96 | + int ret = -EINPROGRESS; |
97 | |||
98 | CURLAIOCB *acb = p; | ||
99 | - BDRVCURLState *s = acb->common.bs->opaque; | ||
100 | + BlockDriverState *bs = acb->common.bs; | ||
101 | + BDRVCURLState *s = bs->opaque; | ||
102 | + AioContext *ctx = bdrv_get_aio_context(bs); | ||
103 | |||
104 | size_t start = acb->sector_num * BDRV_SECTOR_SIZE; | ||
105 | size_t end; | ||
106 | |||
107 | + aio_context_acquire(ctx); | ||
108 | + | ||
109 | // In case we have the requested data already (e.g. read-ahead), | ||
110 | // we can just call the callback and be done. | ||
111 | switch (curl_find_buf(s, start, acb->nb_sectors * BDRV_SECTOR_SIZE, acb)) { | ||
112 | @@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p) | ||
113 | qemu_aio_unref(acb); | ||
114 | // fall through | ||
115 | case FIND_RET_WAIT: | ||
116 | - return; | ||
117 | + goto out; | ||
118 | default: | ||
119 | break; | ||
120 | } | ||
121 | @@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p) | ||
122 | // No cache found, so let's start a new request | ||
123 | state = curl_init_state(acb->common.bs, s); | ||
124 | if (!state) { | ||
125 | - acb->common.cb(acb->common.opaque, -EIO); | ||
126 | - qemu_aio_unref(acb); | ||
127 | - return; | ||
128 | + ret = -EIO; | ||
129 | + goto out; | ||
130 | } | ||
131 | |||
132 | acb->start = 0; | ||
133 | @@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p) | ||
134 | state->orig_buf = g_try_malloc(state->buf_len); | ||
135 | if (state->buf_len && state->orig_buf == NULL) { | ||
136 | curl_clean_state(state); | ||
137 | - acb->common.cb(acb->common.opaque, -ENOMEM); | ||
138 | - qemu_aio_unref(acb); | ||
139 | - return; | ||
140 | + ret = -ENOMEM; | ||
141 | + goto out; | ||
142 | } | ||
143 | state->acb[0] = acb; | ||
144 | |||
145 | @@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p) | ||
146 | |||
147 | /* Tell curl it needs to kick things off */ | ||
148 | curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running); | ||
149 | + | ||
150 | +out: | ||
151 | + if (ret != -EINPROGRESS) { | ||
152 | + acb->common.cb(acb->common.opaque, ret); | ||
153 | + qemu_aio_unref(acb); | ||
85 | + } | 154 | + } |
86 | + qvirtio_set_features(dev, feat); | 155 | + aio_context_release(ctx); |
87 | + | 156 | } |
88 | + vs->num_queues = qvirtio_config_readl(dev, 0); | 157 | |
89 | + | 158 | static BlockAIOCB *curl_aio_readv(BlockDriverState *bs, |
90 | + for (i = 0; i < vs->num_queues + 2; i++) { | 159 | diff --git a/block/gluster.c b/block/gluster.c |
91 | + vs->vq[i] = qvirtqueue_setup(dev, fuzz_qos_alloc, i); | 160 | index XXXXXXX..XXXXXXX 100644 |
92 | + } | 161 | --- a/block/gluster.c |
93 | + | 162 | +++ b/block/gluster.c |
94 | + qvirtio_set_driver_ok(dev); | 163 | @@ -XXX,XX +XXX,XX @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf, |
95 | + | 164 | return qemu_gluster_glfs_init(gconf, errp); |
96 | + return vs; | 165 | } |
97 | +} | 166 | |
98 | + | 167 | -static void qemu_gluster_complete_aio(void *opaque) |
99 | +static void virtio_scsi_fuzz(QTestState *s, QVirtioSCSIQueues* queues, | 168 | -{ |
100 | + const unsigned char *Data, size_t Size) | 169 | - GlusterAIOCB *acb = (GlusterAIOCB *)opaque; |
101 | +{ | 170 | - |
102 | + /* | 171 | - qemu_coroutine_enter(acb->coroutine); |
103 | + * Data is a sequence of random bytes. We split them up into "actions", | 172 | -} |
104 | + * followed by data: | 173 | - |
105 | + * [vqa][dddddddd][vqa][dddd][vqa][dddddddddddd] ... | 174 | /* |
106 | + * The length of the data is specified by the preceding vqa.length | 175 | * AIO callback routine called from GlusterFS thread. |
107 | + */ | 176 | */ |
108 | + typedef struct vq_action { | 177 | @@ -XXX,XX +XXX,XX @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg) |
109 | + uint8_t queue; | 178 | acb->ret = -EIO; /* Partial read/write - fail it */ |
110 | + uint8_t length; | 179 | } |
111 | + uint8_t write; | 180 | |
112 | + uint8_t next; | 181 | - aio_bh_schedule_oneshot(acb->aio_context, qemu_gluster_complete_aio, acb); |
113 | + uint8_t kick; | 182 | + aio_co_schedule(acb->aio_context, acb->coroutine); |
114 | + } vq_action; | 183 | } |
115 | + | 184 | |
116 | + /* Keep track of the free head for each queue we interact with */ | 185 | static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags) |
117 | + bool vq_touched[MAX_NUM_QUEUES + 2] = {0}; | 186 | diff --git a/block/io.c b/block/io.c |
118 | + uint32_t free_head[MAX_NUM_QUEUES + 2]; | 187 | index XXXXXXX..XXXXXXX 100644 |
119 | + | 188 | --- a/block/io.c |
120 | + QGuestAllocator *t_alloc = fuzz_qos_alloc; | 189 | +++ b/block/io.c |
121 | + | 190 | @@ -XXX,XX +XXX,XX @@ static void bdrv_co_drain_bh_cb(void *opaque) |
122 | + QVirtioSCSI *scsi = fuzz_qos_obj; | 191 | bdrv_dec_in_flight(bs); |
123 | + QVirtioDevice *dev = scsi->vdev; | 192 | bdrv_drained_begin(bs); |
124 | + QVirtQueue *q; | 193 | data->done = true; |
125 | + vq_action vqa; | 194 | - qemu_coroutine_enter(co); |
126 | + while (Size >= sizeof(vqa)) { | 195 | + aio_co_wake(co); |
127 | + /* Copy the action, so we can normalize length, queue and flags */ | 196 | } |
128 | + memcpy(&vqa, Data, sizeof(vqa)); | 197 | |
129 | + | 198 | static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs) |
130 | + Data += sizeof(vqa); | 199 | @@ -XXX,XX +XXX,XX @@ static void bdrv_co_complete(BlockAIOCBCoroutine *acb) |
131 | + Size -= sizeof(vqa); | 200 | static void bdrv_co_em_bh(void *opaque) |
132 | + | 201 | { |
133 | + vqa.queue = vqa.queue % queues->num_queues; | 202 | BlockAIOCBCoroutine *acb = opaque; |
134 | + /* Cap length at the number of remaining bytes in data */ | 203 | + BlockDriverState *bs = acb->common.bs; |
135 | + vqa.length = vqa.length >= Size ? Size : vqa.length; | 204 | + AioContext *ctx = bdrv_get_aio_context(bs); |
136 | + vqa.write = vqa.write & 1; | 205 | |
137 | + vqa.next = vqa.next & 1; | 206 | assert(!acb->need_bh); |
138 | + vqa.kick = vqa.kick & 1; | 207 | + aio_context_acquire(ctx); |
139 | + | 208 | bdrv_co_complete(acb); |
140 | + | 209 | + aio_context_release(ctx); |
141 | + q = queues->vq[vqa.queue]; | 210 | } |
142 | + | 211 | |
143 | + /* Copy the data into ram, and place it on the virtqueue */ | 212 | static void bdrv_co_maybe_schedule_bh(BlockAIOCBCoroutine *acb) |
144 | + uint64_t req_addr = guest_alloc(t_alloc, vqa.length); | 213 | diff --git a/block/iscsi.c b/block/iscsi.c |
145 | + qtest_memwrite(s, req_addr, Data, vqa.length); | 214 | index XXXXXXX..XXXXXXX 100644 |
146 | + if (vq_touched[vqa.queue] == 0) { | 215 | --- a/block/iscsi.c |
147 | + vq_touched[vqa.queue] = 1; | 216 | +++ b/block/iscsi.c |
148 | + free_head[vqa.queue] = qvirtqueue_add(s, q, req_addr, vqa.length, | 217 | @@ -XXX,XX +XXX,XX @@ static void |
149 | + vqa.write, vqa.next); | 218 | iscsi_bh_cb(void *p) |
150 | + } else { | 219 | { |
151 | + qvirtqueue_add(s, q, req_addr, vqa.length, vqa.write , vqa.next); | 220 | IscsiAIOCB *acb = p; |
152 | + } | 221 | + AioContext *ctx = bdrv_get_aio_context(acb->common.bs); |
153 | + | 222 | |
154 | + if (vqa.kick) { | 223 | qemu_bh_delete(acb->bh); |
155 | + qvirtqueue_kick(s, dev, q, free_head[vqa.queue]); | 224 | |
156 | + free_head[vqa.queue] = 0; | 225 | g_free(acb->buf); |
157 | + } | 226 | acb->buf = NULL; |
158 | + Data += vqa.length; | 227 | |
159 | + Size -= vqa.length; | 228 | + aio_context_acquire(ctx); |
160 | + } | 229 | acb->common.cb(acb->common.opaque, acb->status); |
161 | +    /* Finally, kick each queue we interacted with */ | 230 | + aio_context_release(ctx);
162 | + for (int i = 0; i < MAX_NUM_QUEUES + 2; i++) { | 231 | |
163 | + if (vq_touched[i]) { | 232 | if (acb->task != NULL) { |
164 | + qvirtqueue_kick(s, dev, queues->vq[i], free_head[i]); | 233 | scsi_free_scsi_task(acb->task); |
165 | + } | 234 | @@ -XXX,XX +XXX,XX @@ iscsi_schedule_bh(IscsiAIOCB *acb) |
166 | + } | 235 | static void iscsi_co_generic_bh_cb(void *opaque) |
167 | +} | 236 | { |
168 | + | 237 | struct IscsiTask *iTask = opaque; |
169 | +static void virtio_scsi_fork_fuzz(QTestState *s, | 238 | + |
170 | + const unsigned char *Data, size_t Size) | 239 | iTask->complete = 1; |
171 | +{ | 240 | - qemu_coroutine_enter(iTask->co); |
172 | + QVirtioSCSI *scsi = fuzz_qos_obj; | 241 | + aio_co_wake(iTask->co); |
173 | + static QVirtioSCSIQueues *queues; | 242 | } |
174 | + if (!queues) { | 243 | |
175 | + queues = qvirtio_scsi_init(scsi->vdev, 0); | 244 | static void iscsi_retry_timer_expired(void *opaque) |
176 | + } | 245 | diff --git a/block/linux-aio.c b/block/linux-aio.c |
177 | + if (fork() == 0) { | 246 | index XXXXXXX..XXXXXXX 100644 |
178 | + virtio_scsi_fuzz(s, queues, Data, Size); | 247 | --- a/block/linux-aio.c |
179 | + flush_events(s); | 248 | +++ b/block/linux-aio.c |
180 | + _Exit(0); | 249 | @@ -XXX,XX +XXX,XX @@ struct LinuxAioState { |
181 | + } else { | 250 | io_context_t ctx; |
182 | + wait(NULL); | 251 | EventNotifier e; |
183 | + } | 252 | |
184 | +} | 253 | - /* io queue for submit at batch */ |
185 | + | 254 | + /* io queue for submit at batch. Protected by AioContext lock. */ |
186 | +static void virtio_scsi_with_flag_fuzz(QTestState *s, | 255 | LaioQueue io_q; |
187 | + const unsigned char *Data, size_t Size) | 256 | |
188 | +{ | 257 | - /* I/O completion processing */ |
189 | + QVirtioSCSI *scsi = fuzz_qos_obj; | 258 | + /* I/O completion processing. Only runs in I/O thread. */ |
190 | + static QVirtioSCSIQueues *queues; | 259 | QEMUBH *completion_bh; |
191 | + | 260 | int event_idx; |
192 | + if (fork() == 0) { | 261 | int event_max; |
193 | + if (Size >= sizeof(uint64_t)) { | 262 | @@ -XXX,XX +XXX,XX @@ static inline ssize_t io_event_ret(struct io_event *ev) |
194 | + queues = qvirtio_scsi_init(scsi->vdev, *(uint64_t *)Data); | 263 | */ |
195 | + virtio_scsi_fuzz(s, queues, | 264 | static void qemu_laio_process_completion(struct qemu_laiocb *laiocb) |
196 | + Data + sizeof(uint64_t), Size - sizeof(uint64_t)); | 265 | { |
197 | + flush_events(s); | 266 | + LinuxAioState *s = laiocb->ctx; |
198 | + } | 267 | int ret; |
199 | + _Exit(0); | 268 | |
200 | + } else { | 269 | ret = laiocb->ret; |
201 | + wait(NULL); | 270 | @@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb) |
202 | + } | 271 | } |
203 | +} | 272 | |
204 | + | 273 | laiocb->ret = ret; |
205 | +static void virtio_scsi_pre_fuzz(QTestState *s) | 274 | + aio_context_acquire(s->aio_context); |
206 | +{ | 275 | if (laiocb->co) { |
207 | + qos_init_path(s); | 276 | /* If the coroutine is already entered it must be in ioq_submit() and |
208 | + counter_shm_init(); | 277 | * will notice laio->ret has been filled in when it eventually runs |
209 | +} | 278 | @@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb) |
210 | + | 279 | laiocb->common.cb(laiocb->common.opaque, ret); |
211 | +static void *virtio_scsi_test_setup(GString *cmd_line, void *arg) | 280 | qemu_aio_unref(laiocb); |
212 | +{ | 281 | } |
213 | + g_string_append(cmd_line, | 282 | + aio_context_release(s->aio_context); |
214 | + " -drive file=blkdebug::null-co://," | 283 | } |
215 | + "file.image.read-zeroes=on," | 284 | |
216 | + "if=none,id=dr1,format=raw,file.align=4k " | 285 | /** |
217 | + "-device scsi-hd,drive=dr1,lun=0,scsi-id=1"); | 286 | @@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completions(LinuxAioState *s) |
218 | + return arg; | 287 | static void qemu_laio_process_completions_and_submit(LinuxAioState *s) |
219 | +} | 288 | { |
220 | + | 289 | qemu_laio_process_completions(s); |
221 | + | 290 | + |
222 | +static void register_virtio_scsi_fuzz_targets(void) | 291 | + aio_context_acquire(s->aio_context); |
223 | +{ | 292 | if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) { |
224 | + fuzz_add_qos_target(&(FuzzTarget){ | 293 | ioq_submit(s); |
225 | + .name = "virtio-scsi-fuzz", | 294 | } |
226 | +        .description = "Fuzz the virtio-scsi virtual queues, forking " | 293 | ioq_submit(s);
227 | + "for each fuzz run", | 296 | } |
228 | + .pre_vm_init = &counter_shm_init, | 297 | |
229 | + .pre_fuzz = &virtio_scsi_pre_fuzz, | 298 | static void qemu_laio_completion_bh(void *opaque) |
230 | + .fuzz = virtio_scsi_fork_fuzz,}, | 299 | @@ -XXX,XX +XXX,XX @@ static void qemu_laio_completion_cb(EventNotifier *e) |
231 | + "virtio-scsi", | 300 | LinuxAioState *s = container_of(e, LinuxAioState, e); |
232 | + &(QOSGraphTestOptions){.before = virtio_scsi_test_setup} | 301 | |
233 | + ); | 302 | if (event_notifier_test_and_clear(&s->e)) { |
234 | + | 303 | - aio_context_acquire(s->aio_context); |
235 | + fuzz_add_qos_target(&(FuzzTarget){ | 304 | qemu_laio_process_completions_and_submit(s); |
236 | + .name = "virtio-scsi-flags-fuzz", | 305 | - aio_context_release(s->aio_context); |
237 | +        .description = "Fuzz the virtio-scsi virtual queues, forking " | 304 | qemu_laio_process_completions_and_submit(s);
238 | + "for each fuzz run (also fuzzes the virtio flags)", | 307 | } |
239 | + .pre_vm_init = &counter_shm_init, | 308 | |
240 | + .pre_fuzz = &virtio_scsi_pre_fuzz, | 309 | @@ -XXX,XX +XXX,XX @@ static bool qemu_laio_poll_cb(void *opaque) |
241 | + .fuzz = virtio_scsi_with_flag_fuzz,}, | 310 | return false; |
242 | + "virtio-scsi", | 311 | } |
243 | + &(QOSGraphTestOptions){.before = virtio_scsi_test_setup} | 312 | |
244 | + ); | 313 | - aio_context_acquire(s->aio_context); |
245 | +} | 314 | qemu_laio_process_completions_and_submit(s); |
246 | + | 315 | - aio_context_release(s->aio_context); |
247 | +fuzz_target_init(register_virtio_scsi_fuzz_targets); | 316 | return true; |
317 | } | ||
318 | |||
319 | @@ -XXX,XX +XXX,XX @@ void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context) | ||
320 | { | ||
321 | aio_set_event_notifier(old_context, &s->e, false, NULL, NULL); | ||
322 | qemu_bh_delete(s->completion_bh); | ||
323 | + s->aio_context = NULL; | ||
324 | } | ||
325 | |||
326 | void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context) | ||
327 | diff --git a/block/nfs.c b/block/nfs.c | ||
328 | index XXXXXXX..XXXXXXX 100644 | ||
329 | --- a/block/nfs.c | ||
330 | +++ b/block/nfs.c | ||
331 | @@ -XXX,XX +XXX,XX @@ static void nfs_co_init_task(BlockDriverState *bs, NFSRPC *task) | ||
332 | static void nfs_co_generic_bh_cb(void *opaque) | ||
333 | { | ||
334 | NFSRPC *task = opaque; | ||
335 | + | ||
336 | task->complete = 1; | ||
337 | - qemu_coroutine_enter(task->co); | ||
338 | + aio_co_wake(task->co); | ||
339 | } | ||
340 | |||
341 | static void | ||
342 | diff --git a/block/null.c b/block/null.c | ||
343 | index XXXXXXX..XXXXXXX 100644 | ||
344 | --- a/block/null.c | ||
345 | +++ b/block/null.c | ||
346 | @@ -XXX,XX +XXX,XX @@ static const AIOCBInfo null_aiocb_info = { | ||
347 | static void null_bh_cb(void *opaque) | ||
348 | { | ||
349 | NullAIOCB *acb = opaque; | ||
350 | + AioContext *ctx = bdrv_get_aio_context(acb->common.bs); | ||
351 | + | ||
352 | + aio_context_acquire(ctx); | ||
353 | acb->common.cb(acb->common.opaque, 0); | ||
354 | + aio_context_release(ctx); | ||
355 | qemu_aio_unref(acb); | ||
356 | } | ||
357 | |||
358 | diff --git a/block/qed.c b/block/qed.c | ||
359 | index XXXXXXX..XXXXXXX 100644 | ||
360 | --- a/block/qed.c | ||
361 | +++ b/block/qed.c | ||
362 | @@ -XXX,XX +XXX,XX @@ static void qed_update_l2_table(BDRVQEDState *s, QEDTable *table, int index, | ||
363 | static void qed_aio_complete_bh(void *opaque) | ||
364 | { | ||
365 | QEDAIOCB *acb = opaque; | ||
366 | + BDRVQEDState *s = acb_to_s(acb); | ||
367 | BlockCompletionFunc *cb = acb->common.cb; | ||
368 | void *user_opaque = acb->common.opaque; | ||
369 | int ret = acb->bh_ret; | ||
370 | @@ -XXX,XX +XXX,XX @@ static void qed_aio_complete_bh(void *opaque) | ||
371 | qemu_aio_unref(acb); | ||
372 | |||
373 | /* Invoke callback */ | ||
374 | + qed_acquire(s); | ||
375 | cb(user_opaque, ret); | ||
376 | + qed_release(s); | ||
377 | } | ||
378 | |||
379 | static void qed_aio_complete(QEDAIOCB *acb, int ret) | ||
380 | diff --git a/block/rbd.c b/block/rbd.c | ||
381 | index XXXXXXX..XXXXXXX 100644 | ||
382 | --- a/block/rbd.c | ||
383 | +++ b/block/rbd.c | ||
384 | @@ -XXX,XX +XXX,XX @@ shutdown: | ||
385 | static void qemu_rbd_complete_aio(RADOSCB *rcb) | ||
386 | { | ||
387 | RBDAIOCB *acb = rcb->acb; | ||
388 | + AioContext *ctx = bdrv_get_aio_context(acb->common.bs); | ||
389 | int64_t r; | ||
390 | |||
391 | r = rcb->ret; | ||
392 | @@ -XXX,XX +XXX,XX @@ static void qemu_rbd_complete_aio(RADOSCB *rcb) | ||
393 | qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size); | ||
394 | } | ||
395 | qemu_vfree(acb->bounce); | ||
396 | + | ||
397 | + aio_context_acquire(ctx); | ||
398 | acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret)); | ||
399 | + aio_context_release(ctx); | ||
400 | |||
401 | qemu_aio_unref(acb); | ||
402 | } | ||
403 | diff --git a/dma-helpers.c b/dma-helpers.c | ||
404 | index XXXXXXX..XXXXXXX 100644 | ||
405 | --- a/dma-helpers.c | ||
406 | +++ b/dma-helpers.c | ||
407 | @@ -XXX,XX +XXX,XX @@ static void dma_blk_cb(void *opaque, int ret) | ||
408 | QEMU_ALIGN_DOWN(dbs->iov.size, dbs->align)); | ||
409 | } | ||
410 | |||
411 | + aio_context_acquire(dbs->ctx); | ||
412 | dbs->acb = dbs->io_func(dbs->offset, &dbs->iov, | ||
413 | dma_blk_cb, dbs, dbs->io_func_opaque); | ||
414 | + aio_context_release(dbs->ctx); | ||
415 | assert(dbs->acb); | ||
416 | } | ||
417 | |||
418 | diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c | ||
419 | index XXXXXXX..XXXXXXX 100644 | ||
420 | --- a/hw/block/virtio-blk.c | ||
421 | +++ b/hw/block/virtio-blk.c | ||
422 | @@ -XXX,XX +XXX,XX @@ static void virtio_blk_dma_restart_bh(void *opaque) | ||
423 | |||
424 | s->rq = NULL; | ||
425 | |||
426 | + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); | ||
427 | while (req) { | ||
428 | VirtIOBlockReq *next = req->next; | ||
429 | if (virtio_blk_handle_request(req, &mrb)) { | ||
430 | @@ -XXX,XX +XXX,XX @@ static void virtio_blk_dma_restart_bh(void *opaque) | ||
431 | if (mrb.num_reqs) { | ||
432 | virtio_blk_submit_multireq(s->blk, &mrb); | ||
433 | } | ||
434 | + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); | ||
435 | } | ||
436 | |||
437 | static void virtio_blk_dma_restart_cb(void *opaque, int running, | ||
438 | diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c | ||
439 | index XXXXXXX..XXXXXXX 100644 | ||
440 | --- a/hw/scsi/scsi-bus.c | ||
441 | +++ b/hw/scsi/scsi-bus.c | ||
442 | @@ -XXX,XX +XXX,XX @@ static void scsi_dma_restart_bh(void *opaque) | ||
443 | qemu_bh_delete(s->bh); | ||
444 | s->bh = NULL; | ||
445 | |||
446 | + aio_context_acquire(blk_get_aio_context(s->conf.blk)); | ||
447 | QTAILQ_FOREACH_SAFE(req, &s->requests, next, next) { | ||
448 | scsi_req_ref(req); | ||
449 | if (req->retry) { | ||
450 | @@ -XXX,XX +XXX,XX @@ static void scsi_dma_restart_bh(void *opaque) | ||
451 | } | ||
452 | scsi_req_unref(req); | ||
453 | } | ||
454 | + aio_context_release(blk_get_aio_context(s->conf.blk)); | ||
455 | } | ||
456 | |||
457 | void scsi_req_retry(SCSIRequest *req) | ||
458 | diff --git a/util/async.c b/util/async.c | ||
459 | index XXXXXXX..XXXXXXX 100644 | ||
460 | --- a/util/async.c | ||
461 | +++ b/util/async.c | ||
462 | @@ -XXX,XX +XXX,XX @@ int aio_bh_poll(AioContext *ctx) | ||
463 | ret = 1; | ||
464 | } | ||
465 | bh->idle = 0; | ||
466 | - aio_context_acquire(ctx); | ||
467 | aio_bh_call(bh); | ||
468 | - aio_context_release(ctx); | ||
469 | } | ||
470 | if (bh->deleted) { | ||
471 | deleted = true; | ||
472 | @@ -XXX,XX +XXX,XX @@ static void co_schedule_bh_cb(void *opaque) | ||
473 | Coroutine *co = QSLIST_FIRST(&straight); | ||
474 | QSLIST_REMOVE_HEAD(&straight, co_scheduled_next); | ||
475 | trace_aio_co_schedule_bh_cb(ctx, co); | ||
476 | + aio_context_acquire(ctx); | ||
477 | qemu_coroutine_enter(co); | ||
478 | + aio_context_release(ctx); | ||
479 | } | ||
480 | } | ||
481 | |||
482 | diff --git a/util/thread-pool.c b/util/thread-pool.c | ||
483 | index XXXXXXX..XXXXXXX 100644 | ||
484 | --- a/util/thread-pool.c | ||
485 | +++ b/util/thread-pool.c | ||
486 | @@ -XXX,XX +XXX,XX @@ static void thread_pool_completion_bh(void *opaque) | ||
487 | ThreadPool *pool = opaque; | ||
488 | ThreadPoolElement *elem, *next; | ||
489 | |||
490 | + aio_context_acquire(pool->ctx); | ||
491 | restart: | ||
492 | QLIST_FOREACH_SAFE(elem, &pool->head, all, next) { | ||
493 | if (elem->state != THREAD_DONE) { | ||
494 | @@ -XXX,XX +XXX,XX @@ restart: | ||
495 | qemu_aio_unref(elem); | ||
496 | } | ||
497 | } | ||
498 | + aio_context_release(pool->ctx); | ||
499 | } | ||
500 | |||
501 | static void thread_pool_cancel(BlockAIOCB *acb) | ||
248 | -- | 502 | -- |
249 | 2.24.1 | 503 | 2.9.3 |
250 | 504 | ||
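
[Editor's note on the virtio-scsi fuzzer above: each fuzz input is consumed as a stream of fixed-size vq_action headers, each followed by vqa.length payload bytes that land on the chosen virtqueue. The sketch below shows how one such action could be serialized by hand; the emit_action() helper and its layout comments are hypothetical illustrations, not part of the patch. The registered targets themselves are selected at run time via the fuzzer binary's --fuzz-target= option, as described in the docs/devel/ fuzzing documentation added later in this series. The sketch compiles as a standalone translation unit:]

/* Hypothetical sketch, not part of the patch: serializing one action in
 * the input format consumed by virtio_scsi_fuzz().  The fuzzer normalizes
 * queue, length and flag values itself, so any byte stream is accepted;
 * this only illustrates the expected layout. */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t queue;   /* taken modulo num_queues by the fuzzer */
    uint8_t length;  /* number of payload bytes following this header */
    uint8_t write;   /* only bit 0 is used */
    uint8_t next;    /* only bit 0 is used */
    uint8_t kick;    /* only bit 0 is used */
} vq_action;

size_t emit_action(uint8_t *out, uint8_t queue, uint8_t kick,
                   const uint8_t *payload, uint8_t len)
{
    vq_action vqa = {
        .queue = queue, .length = len, .write = 1, .next = 0, .kick = kick,
    };
    /* Header first, then exactly vqa.length payload bytes. */
    memcpy(out, &vqa, sizeof(vqa));
    memcpy(out + sizeof(vqa), payload, len);
    return sizeof(vqa) + len;
}
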
1 | From: Denis Plotnikov <dplotnikov@virtuozzo.com> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | The goal is to reduce the amount of requests issued by a guest on | 3 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
4 | 1M reads/writes. This raises performance by up to 4% on that kind of | 4 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 | disk access pattern. | 5 | Reviewed-by: Fam Zheng <famz@redhat.com> |
6 | 6 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> | |
7 | The maximum chunk size used for guest disk access is | 7 | Message-id: 20170213135235.12274-16-pbonzini@redhat.com
8 | limited by the seg_max parameter, which represents the maximum number of | 8 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
9 | pieces in the scatter-gather list in one guest disk request. | 9 | ---
10 | |||
11 | Since seg_max depends on virtqueue_size, increasing the virtqueue | 11 | block/block-backend.c | 7 -------
12 | size increases seg_max, which, in turn, increases the maximum size | ||
13 | of data that can be read from or written to a guest disk. | 13 | block/io.c | 6 +-----
14 | |||
15 | More details are in the original problem statement: | 15 | block/linux-aio.c | 5 +----
16 | https://lists.gnu.org/archive/html/qemu-devel/2017-12/msg03721.html | ||
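
[Editor's sketch, not part of the patch: the arithmetic behind the paragraph above, under the assumption of the seg-max-adjust behaviour where seg_max is derived from the virtqueue size minus the two descriptors reserved for the request header and status, and assuming one 4 KiB segment per scatter-gather entry. With these assumptions a 1M guest access splits into fewer virtio requests at the larger queue size, which is where the modest gain comes from:]

#include <stdio.h>

int main(void)
{
    const unsigned seg_size = 4096;        /* assume 4 KiB per segment */
    const unsigned access = 1024 * 1024;   /* one 1M guest access */
    const unsigned queue_sizes[] = { 128, 256 };

    for (int i = 0; i < 2; i++) {
        /* Assumed seg-max-adjust relationship: two descriptors stay
         * reserved for the request header and the status byte. */
        unsigned seg_max = queue_sizes[i] - 2;
        unsigned max_req = seg_max * seg_size;
        unsigned nreqs = (access + max_req - 1) / max_req;
        printf("queue size %3u -> seg_max %3u -> %u request(s) per 1M access\n",
               queue_sizes[i], seg_max, nreqs);
    }
    return 0;
}
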
17 | |||
18 | Suggested-by: Denis V. Lunev <den@openvz.org> | ||
19 | Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com> | ||
20 | Message-id: 20200214074648.958-1-dplotnikov@virtuozzo.com | ||
21 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 8 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
22 | --- | 9 | --- |
23 | hw/block/virtio-blk.c | 2 +- | 10 | block/archipelago.c | 3 --- |
24 | hw/core/machine.c | 2 ++ | 11 | block/block-backend.c | 7 ------- |
25 | hw/scsi/virtio-scsi.c | 2 +- | 12 | block/curl.c | 2 +- |
26 | 3 files changed, 4 insertions(+), 2 deletions(-) | 13 | block/io.c | 6 +----- |
14 | block/iscsi.c | 3 --- | ||
15 | block/linux-aio.c | 5 +---- | ||
16 | block/mirror.c | 12 +++++++++--- | ||
17 | block/null.c | 8 -------- | ||
18 | block/qed-cluster.c | 2 ++ | ||
19 | block/qed-table.c | 12 ++++++++++-- | ||
20 | block/qed.c | 4 ++-- | ||
21 | block/rbd.c | 4 ---- | ||
22 | block/win32-aio.c | 3 --- | ||
23 | hw/block/virtio-blk.c | 12 +++++++++++- | ||
24 | hw/scsi/scsi-disk.c | 15 +++++++++++++++ | ||
25 | hw/scsi/scsi-generic.c | 20 +++++++++++++++++--- | ||
26 | util/thread-pool.c | 4 +++- | ||
27 | 17 files changed, 72 insertions(+), 50 deletions(-) | ||
27 | 28 | ||
29 | diff --git a/block/archipelago.c b/block/archipelago.c | ||
30 | index XXXXXXX..XXXXXXX 100644 | ||
31 | --- a/block/archipelago.c | ||
32 | +++ b/block/archipelago.c | ||
33 | @@ -XXX,XX +XXX,XX @@ static void qemu_archipelago_complete_aio(void *opaque) | ||
34 | { | ||
35 | AIORequestData *reqdata = (AIORequestData *) opaque; | ||
36 | ArchipelagoAIOCB *aio_cb = (ArchipelagoAIOCB *) reqdata->aio_cb; | ||
37 | - AioContext *ctx = bdrv_get_aio_context(aio_cb->common.bs); | ||
38 | |||
39 | - aio_context_acquire(ctx); | ||
40 | aio_cb->common.cb(aio_cb->common.opaque, aio_cb->ret); | ||
41 | - aio_context_release(ctx); | ||
42 | aio_cb->status = 0; | ||
43 | |||
44 | qemu_aio_unref(aio_cb); | ||
45 | diff --git a/block/block-backend.c b/block/block-backend.c | ||
46 | index XXXXXXX..XXXXXXX 100644 | ||
47 | --- a/block/block-backend.c | ||
48 | +++ b/block/block-backend.c | ||
49 | @@ -XXX,XX +XXX,XX @@ int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags) | ||
50 | static void error_callback_bh(void *opaque) | ||
51 | { | ||
52 | struct BlockBackendAIOCB *acb = opaque; | ||
53 | - AioContext *ctx = bdrv_get_aio_context(acb->common.bs); | ||
54 | |||
55 | bdrv_dec_in_flight(acb->common.bs); | ||
56 | - aio_context_acquire(ctx); | ||
57 | acb->common.cb(acb->common.opaque, acb->ret); | ||
58 | - aio_context_release(ctx); | ||
59 | qemu_aio_unref(acb); | ||
60 | } | ||
61 | |||
62 | @@ -XXX,XX +XXX,XX @@ static void blk_aio_complete(BlkAioEmAIOCB *acb) | ||
63 | static void blk_aio_complete_bh(void *opaque) | ||
64 | { | ||
65 | BlkAioEmAIOCB *acb = opaque; | ||
66 | - AioContext *ctx = bdrv_get_aio_context(acb->common.bs); | ||
67 | - | ||
68 | assert(acb->has_returned); | ||
69 | - aio_context_acquire(ctx); | ||
70 | blk_aio_complete(acb); | ||
71 | - aio_context_release(ctx); | ||
72 | } | ||
73 | |||
74 | static BlockAIOCB *blk_aio_prwv(BlockBackend *blk, int64_t offset, int bytes, | ||
75 | diff --git a/block/curl.c b/block/curl.c | ||
76 | index XXXXXXX..XXXXXXX 100644 | ||
77 | --- a/block/curl.c | ||
78 | +++ b/block/curl.c | ||
79 | @@ -XXX,XX +XXX,XX @@ static void curl_readv_bh_cb(void *p) | ||
80 | curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running); | ||
81 | |||
82 | out: | ||
83 | + aio_context_release(ctx); | ||
84 | if (ret != -EINPROGRESS) { | ||
85 | acb->common.cb(acb->common.opaque, ret); | ||
86 | qemu_aio_unref(acb); | ||
87 | } | ||
88 | - aio_context_release(ctx); | ||
89 | } | ||
90 | |||
91 | static BlockAIOCB *curl_aio_readv(BlockDriverState *bs, | ||
92 | diff --git a/block/io.c b/block/io.c | ||
93 | index XXXXXXX..XXXXXXX 100644 | ||
94 | --- a/block/io.c | ||
95 | +++ b/block/io.c | ||
96 | @@ -XXX,XX +XXX,XX @@ static void bdrv_co_io_em_complete(void *opaque, int ret) | ||
97 | CoroutineIOCompletion *co = opaque; | ||
98 | |||
99 | co->ret = ret; | ||
100 | - qemu_coroutine_enter(co->coroutine); | ||
101 | + aio_co_wake(co->coroutine); | ||
102 | } | ||
103 | |||
104 | static int coroutine_fn bdrv_driver_preadv(BlockDriverState *bs, | ||
105 | @@ -XXX,XX +XXX,XX @@ static void bdrv_co_complete(BlockAIOCBCoroutine *acb) | ||
106 | static void bdrv_co_em_bh(void *opaque) | ||
107 | { | ||
108 | BlockAIOCBCoroutine *acb = opaque; | ||
109 | - BlockDriverState *bs = acb->common.bs; | ||
110 | - AioContext *ctx = bdrv_get_aio_context(bs); | ||
111 | |||
112 | assert(!acb->need_bh); | ||
113 | - aio_context_acquire(ctx); | ||
114 | bdrv_co_complete(acb); | ||
115 | - aio_context_release(ctx); | ||
116 | } | ||
117 | |||
118 | static void bdrv_co_maybe_schedule_bh(BlockAIOCBCoroutine *acb) | ||
119 | diff --git a/block/iscsi.c b/block/iscsi.c | ||
120 | index XXXXXXX..XXXXXXX 100644 | ||
121 | --- a/block/iscsi.c | ||
122 | +++ b/block/iscsi.c | ||
123 | @@ -XXX,XX +XXX,XX @@ static void | ||
124 | iscsi_bh_cb(void *p) | ||
125 | { | ||
126 | IscsiAIOCB *acb = p; | ||
127 | - AioContext *ctx = bdrv_get_aio_context(acb->common.bs); | ||
128 | |||
129 | qemu_bh_delete(acb->bh); | ||
130 | |||
131 | g_free(acb->buf); | ||
132 | acb->buf = NULL; | ||
133 | |||
134 | - aio_context_acquire(ctx); | ||
135 | acb->common.cb(acb->common.opaque, acb->status); | ||
136 | - aio_context_release(ctx); | ||
137 | |||
138 | if (acb->task != NULL) { | ||
139 | scsi_free_scsi_task(acb->task); | ||
140 | diff --git a/block/linux-aio.c b/block/linux-aio.c | ||
141 | index XXXXXXX..XXXXXXX 100644 | ||
142 | --- a/block/linux-aio.c | ||
143 | +++ b/block/linux-aio.c | ||
144 | @@ -XXX,XX +XXX,XX @@ static inline ssize_t io_event_ret(struct io_event *ev) | ||
145 | */ | ||
146 | static void qemu_laio_process_completion(struct qemu_laiocb *laiocb) | ||
147 | { | ||
148 | - LinuxAioState *s = laiocb->ctx; | ||
149 | int ret; | ||
150 | |||
151 | ret = laiocb->ret; | ||
152 | @@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb) | ||
153 | } | ||
154 | |||
155 | laiocb->ret = ret; | ||
156 | - aio_context_acquire(s->aio_context); | ||
157 | if (laiocb->co) { | ||
158 | /* If the coroutine is already entered it must be in ioq_submit() and | ||
159 | * will notice laio->ret has been filled in when it eventually runs | ||
160 | @@ -XXX,XX +XXX,XX @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb) | ||
161 | * that! | ||
162 | */ | ||
163 | if (!qemu_coroutine_entered(laiocb->co)) { | ||
164 | - qemu_coroutine_enter(laiocb->co); | ||
165 | + aio_co_wake(laiocb->co); | ||
166 | } | ||
167 | } else { | ||
168 | laiocb->common.cb(laiocb->common.opaque, ret); | ||
169 | qemu_aio_unref(laiocb); | ||
170 | } | ||
171 | - aio_context_release(s->aio_context); | ||
172 | } | ||
173 | |||
174 | /** | ||
175 | diff --git a/block/mirror.c b/block/mirror.c | ||
176 | index XXXXXXX..XXXXXXX 100644 | ||
177 | --- a/block/mirror.c | ||
178 | +++ b/block/mirror.c | ||
179 | @@ -XXX,XX +XXX,XX @@ static void mirror_write_complete(void *opaque, int ret) | ||
180 | { | ||
181 | MirrorOp *op = opaque; | ||
182 | MirrorBlockJob *s = op->s; | ||
183 | + | ||
184 | + aio_context_acquire(blk_get_aio_context(s->common.blk)); | ||
185 | if (ret < 0) { | ||
186 | BlockErrorAction action; | ||
187 | |||
188 | @@ -XXX,XX +XXX,XX @@ static void mirror_write_complete(void *opaque, int ret) | ||
189 | } | ||
190 | } | ||
191 | mirror_iteration_done(op, ret); | ||
192 | + aio_context_release(blk_get_aio_context(s->common.blk)); | ||
193 | } | ||
194 | |||
195 | static void mirror_read_complete(void *opaque, int ret) | ||
196 | { | ||
197 | MirrorOp *op = opaque; | ||
198 | MirrorBlockJob *s = op->s; | ||
199 | + | ||
200 | + aio_context_acquire(blk_get_aio_context(s->common.blk)); | ||
201 | if (ret < 0) { | ||
202 | BlockErrorAction action; | ||
203 | |||
204 | @@ -XXX,XX +XXX,XX @@ static void mirror_read_complete(void *opaque, int ret) | ||
205 | } | ||
206 | |||
207 | mirror_iteration_done(op, ret); | ||
208 | - return; | ||
209 | + } else { | ||
210 | + blk_aio_pwritev(s->target, op->sector_num * BDRV_SECTOR_SIZE, &op->qiov, | ||
211 | + 0, mirror_write_complete, op); | ||
212 | } | ||
213 | - blk_aio_pwritev(s->target, op->sector_num * BDRV_SECTOR_SIZE, &op->qiov, | ||
214 | - 0, mirror_write_complete, op); | ||
215 | + aio_context_release(blk_get_aio_context(s->common.blk)); | ||
216 | } | ||
217 | |||
218 | static inline void mirror_clip_sectors(MirrorBlockJob *s, | ||
219 | diff --git a/block/null.c b/block/null.c | ||
220 | index XXXXXXX..XXXXXXX 100644 | ||
221 | --- a/block/null.c | ||
222 | +++ b/block/null.c | ||
223 | @@ -XXX,XX +XXX,XX @@ static const AIOCBInfo null_aiocb_info = { | ||
224 | static void null_bh_cb(void *opaque) | ||
225 | { | ||
226 | NullAIOCB *acb = opaque; | ||
227 | - AioContext *ctx = bdrv_get_aio_context(acb->common.bs); | ||
228 | - | ||
229 | - aio_context_acquire(ctx); | ||
230 | acb->common.cb(acb->common.opaque, 0); | ||
231 | - aio_context_release(ctx); | ||
232 | qemu_aio_unref(acb); | ||
233 | } | ||
234 | |||
235 | static void null_timer_cb(void *opaque) | ||
236 | { | ||
237 | NullAIOCB *acb = opaque; | ||
238 | - AioContext *ctx = bdrv_get_aio_context(acb->common.bs); | ||
239 | - | ||
240 | - aio_context_acquire(ctx); | ||
241 | acb->common.cb(acb->common.opaque, 0); | ||
242 | - aio_context_release(ctx); | ||
243 | timer_deinit(&acb->timer); | ||
244 | qemu_aio_unref(acb); | ||
245 | } | ||
246 | diff --git a/block/qed-cluster.c b/block/qed-cluster.c | ||
247 | index XXXXXXX..XXXXXXX 100644 | ||
248 | --- a/block/qed-cluster.c | ||
249 | +++ b/block/qed-cluster.c | ||
250 | @@ -XXX,XX +XXX,XX @@ static void qed_find_cluster_cb(void *opaque, int ret) | ||
251 | unsigned int index; | ||
252 | unsigned int n; | ||
253 | |||
254 | + qed_acquire(s); | ||
255 | if (ret) { | ||
256 | goto out; | ||
257 | } | ||
258 | @@ -XXX,XX +XXX,XX @@ static void qed_find_cluster_cb(void *opaque, int ret) | ||
259 | |||
260 | out: | ||
261 | find_cluster_cb->cb(find_cluster_cb->opaque, ret, offset, len); | ||
262 | + qed_release(s); | ||
263 | g_free(find_cluster_cb); | ||
264 | } | ||
265 | |||
266 | diff --git a/block/qed-table.c b/block/qed-table.c | ||
267 | index XXXXXXX..XXXXXXX 100644 | ||
268 | --- a/block/qed-table.c | ||
269 | +++ b/block/qed-table.c | ||
270 | @@ -XXX,XX +XXX,XX @@ static void qed_read_table_cb(void *opaque, int ret) | ||
271 | { | ||
272 | QEDReadTableCB *read_table_cb = opaque; | ||
273 | QEDTable *table = read_table_cb->table; | ||
274 | + BDRVQEDState *s = read_table_cb->s; | ||
275 | int noffsets = read_table_cb->qiov.size / sizeof(uint64_t); | ||
276 | int i; | ||
277 | |||
278 | @@ -XXX,XX +XXX,XX @@ static void qed_read_table_cb(void *opaque, int ret) | ||
279 | } | ||
280 | |||
281 | /* Byteswap offsets */ | ||
282 | + qed_acquire(s); | ||
283 | for (i = 0; i < noffsets; i++) { | ||
284 | table->offsets[i] = le64_to_cpu(table->offsets[i]); | ||
285 | } | ||
286 | + qed_release(s); | ||
287 | |||
288 | out: | ||
289 | /* Completion */ | ||
290 | - trace_qed_read_table_cb(read_table_cb->s, read_table_cb->table, ret); | ||
291 | + trace_qed_read_table_cb(s, read_table_cb->table, ret); | ||
292 | gencb_complete(&read_table_cb->gencb, ret); | ||
293 | } | ||
294 | |||
295 | @@ -XXX,XX +XXX,XX @@ typedef struct { | ||
296 | static void qed_write_table_cb(void *opaque, int ret) | ||
297 | { | ||
298 | QEDWriteTableCB *write_table_cb = opaque; | ||
299 | + BDRVQEDState *s = write_table_cb->s; | ||
300 | |||
301 | - trace_qed_write_table_cb(write_table_cb->s, | ||
302 | + trace_qed_write_table_cb(s, | ||
303 | write_table_cb->orig_table, | ||
304 | write_table_cb->flush, | ||
305 | ret); | ||
306 | @@ -XXX,XX +XXX,XX @@ static void qed_write_table_cb(void *opaque, int ret) | ||
307 | if (write_table_cb->flush) { | ||
308 | /* We still need to flush first */ | ||
309 | write_table_cb->flush = false; | ||
310 | + qed_acquire(s); | ||
311 | bdrv_aio_flush(write_table_cb->s->bs, qed_write_table_cb, | ||
312 | write_table_cb); | ||
313 | + qed_release(s); | ||
314 | return; | ||
315 | } | ||
316 | |||
317 | @@ -XXX,XX +XXX,XX @@ static void qed_read_l2_table_cb(void *opaque, int ret) | ||
318 | CachedL2Table *l2_table = request->l2_table; | ||
319 | uint64_t l2_offset = read_l2_table_cb->l2_offset; | ||
320 | |||
321 | + qed_acquire(s); | ||
322 | if (ret) { | ||
323 | /* can't trust loaded L2 table anymore */ | ||
324 | qed_unref_l2_cache_entry(l2_table); | ||
325 | @@ -XXX,XX +XXX,XX @@ static void qed_read_l2_table_cb(void *opaque, int ret) | ||
326 | request->l2_table = qed_find_l2_cache_entry(&s->l2_cache, l2_offset); | ||
327 | assert(request->l2_table != NULL); | ||
328 | } | ||
329 | + qed_release(s); | ||
330 | |||
331 | gencb_complete(&read_l2_table_cb->gencb, ret); | ||
332 | } | ||
333 | diff --git a/block/qed.c b/block/qed.c | ||
334 | index XXXXXXX..XXXXXXX 100644 | ||
335 | --- a/block/qed.c | ||
336 | +++ b/block/qed.c | ||
337 | @@ -XXX,XX +XXX,XX @@ static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t l | ||
338 | } | ||
339 | |||
340 | if (cb->co) { | ||
341 | - qemu_coroutine_enter(cb->co); | ||
342 | + aio_co_wake(cb->co); | ||
343 | } | ||
344 | } | ||
345 | |||
346 | @@ -XXX,XX +XXX,XX @@ static void coroutine_fn qed_co_pwrite_zeroes_cb(void *opaque, int ret) | ||
347 | cb->done = true; | ||
348 | cb->ret = ret; | ||
349 | if (cb->co) { | ||
350 | - qemu_coroutine_enter(cb->co); | ||
351 | + aio_co_wake(cb->co); | ||
352 | } | ||
353 | } | ||
354 | |||
355 | diff --git a/block/rbd.c b/block/rbd.c | ||
356 | index XXXXXXX..XXXXXXX 100644 | ||
357 | --- a/block/rbd.c | ||
358 | +++ b/block/rbd.c | ||
359 | @@ -XXX,XX +XXX,XX @@ shutdown: | ||
360 | static void qemu_rbd_complete_aio(RADOSCB *rcb) | ||
361 | { | ||
362 | RBDAIOCB *acb = rcb->acb; | ||
363 | - AioContext *ctx = bdrv_get_aio_context(acb->common.bs); | ||
364 | int64_t r; | ||
365 | |||
366 | r = rcb->ret; | ||
367 | @@ -XXX,XX +XXX,XX @@ static void qemu_rbd_complete_aio(RADOSCB *rcb) | ||
368 | qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size); | ||
369 | } | ||
370 | qemu_vfree(acb->bounce); | ||
371 | - | ||
372 | - aio_context_acquire(ctx); | ||
373 | acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret)); | ||
374 | - aio_context_release(ctx); | ||
375 | |||
376 | qemu_aio_unref(acb); | ||
377 | } | ||
378 | diff --git a/block/win32-aio.c b/block/win32-aio.c | ||
379 | index XXXXXXX..XXXXXXX 100644 | ||
380 | --- a/block/win32-aio.c | ||
381 | +++ b/block/win32-aio.c | ||
382 | @@ -XXX,XX +XXX,XX @@ static void win32_aio_process_completion(QEMUWin32AIOState *s, | ||
383 | qemu_vfree(waiocb->buf); | ||
384 | } | ||
385 | |||
386 | - | ||
387 | - aio_context_acquire(s->aio_ctx); | ||
388 | waiocb->common.cb(waiocb->common.opaque, ret); | ||
389 | - aio_context_release(s->aio_ctx); | ||
390 | qemu_aio_unref(waiocb); | ||
391 | } | ||
392 | |||
28 | diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c | 393 | diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c |
29 | index XXXXXXX..XXXXXXX 100644 | 394 | index XXXXXXX..XXXXXXX 100644 |
30 | --- a/hw/block/virtio-blk.c | 395 | --- a/hw/block/virtio-blk.c |
31 | +++ b/hw/block/virtio-blk.c | 396 | +++ b/hw/block/virtio-blk.c |
32 | @@ -XXX,XX +XXX,XX @@ static Property virtio_blk_properties[] = { | 397 | @@ -XXX,XX +XXX,XX @@ static int virtio_blk_handle_rw_error(VirtIOBlockReq *req, int error, |
33 | DEFINE_PROP_BIT("request-merging", VirtIOBlock, conf.request_merging, 0, | 398 | static void virtio_blk_rw_complete(void *opaque, int ret) |
34 | true), | 399 | { |
35 | DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1), | 400 | VirtIOBlockReq *next = opaque; |
36 | - DEFINE_PROP_UINT16("queue-size", VirtIOBlock, conf.queue_size, 128), | 401 | + VirtIOBlock *s = next->dev; |
37 | + DEFINE_PROP_UINT16("queue-size", VirtIOBlock, conf.queue_size, 256), | 402 | |
38 | DEFINE_PROP_BOOL("seg-max-adjust", VirtIOBlock, conf.seg_max_adjust, true), | 403 | + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); |
39 | DEFINE_PROP_LINK("iothread", VirtIOBlock, conf.iothread, TYPE_IOTHREAD, | 404 | while (next) { |
40 | IOThread *), | 405 | VirtIOBlockReq *req = next; |
41 | diff --git a/hw/core/machine.c b/hw/core/machine.c | 406 | next = req->mr_next; |
42 | index XXXXXXX..XXXXXXX 100644 | 407 | @@ -XXX,XX +XXX,XX @@ static void virtio_blk_rw_complete(void *opaque, int ret) |
43 | --- a/hw/core/machine.c | 408 | block_acct_done(blk_get_stats(req->dev->blk), &req->acct); |
44 | +++ b/hw/core/machine.c | 409 | virtio_blk_free_request(req); |
45 | @@ -XXX,XX +XXX,XX @@ | 410 | } |
46 | #include "hw/mem/nvdimm.h" | 411 | + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); |
47 | 412 | } | |
48 | GlobalProperty hw_compat_4_2[] = { | 413 | |
49 | + { "virtio-blk-device", "queue-size", "128"}, | 414 | static void virtio_blk_flush_complete(void *opaque, int ret) |
50 | + { "virtio-scsi-device", "virtqueue_size", "128"}, | 415 | { |
51 | { "virtio-blk-device", "x-enable-wce-if-config-wce", "off" }, | 416 | VirtIOBlockReq *req = opaque; |
52 | { "virtio-blk-device", "seg-max-adjust", "off"}, | 417 | + VirtIOBlock *s = req->dev; |
53 | { "virtio-scsi-device", "seg_max_adjust", "off"}, | 418 | |
54 | diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c | 419 | + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); |
55 | index XXXXXXX..XXXXXXX 100644 | 420 | if (ret) { |
56 | --- a/hw/scsi/virtio-scsi.c | 421 | if (virtio_blk_handle_rw_error(req, -ret, 0)) { |
57 | +++ b/hw/scsi/virtio-scsi.c | 422 | - return; |
58 | @@ -XXX,XX +XXX,XX @@ static void virtio_scsi_device_unrealize(DeviceState *dev, Error **errp) | 423 | + goto out; |
59 | static Property virtio_scsi_properties[] = { | 424 | } |
60 | DEFINE_PROP_UINT32("num_queues", VirtIOSCSI, parent_obj.conf.num_queues, 1), | 425 | } |
61 | DEFINE_PROP_UINT32("virtqueue_size", VirtIOSCSI, | 426 | |
62 | - parent_obj.conf.virtqueue_size, 128), | 427 | virtio_blk_req_complete(req, VIRTIO_BLK_S_OK); |
63 | + parent_obj.conf.virtqueue_size, 256), | 428 | block_acct_done(blk_get_stats(req->dev->blk), &req->acct); |
64 | DEFINE_PROP_BOOL("seg_max_adjust", VirtIOSCSI, | 429 | virtio_blk_free_request(req); |
65 | parent_obj.conf.seg_max_adjust, true), | 430 | + |
66 | DEFINE_PROP_UINT32("max_sectors", VirtIOSCSI, parent_obj.conf.max_sectors, | 431 | +out: |
432 | + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); | ||
433 | } | ||
434 | |||
435 | #ifdef __linux__ | ||
436 | @@ -XXX,XX +XXX,XX @@ static void virtio_blk_ioctl_complete(void *opaque, int status) | ||
437 | virtio_stl_p(vdev, &scsi->data_len, hdr->dxfer_len); | ||
438 | |||
439 | out: | ||
440 | + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); | ||
441 | virtio_blk_req_complete(req, status); | ||
442 | virtio_blk_free_request(req); | ||
443 | + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); | ||
444 | g_free(ioctl_req); | ||
445 | } | ||
446 | |||
447 | diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c | ||
448 | index XXXXXXX..XXXXXXX 100644 | ||
449 | --- a/hw/scsi/scsi-disk.c | ||
450 | +++ b/hw/scsi/scsi-disk.c | ||
451 | @@ -XXX,XX +XXX,XX @@ static void scsi_aio_complete(void *opaque, int ret) | ||
452 | |||
453 | assert(r->req.aiocb != NULL); | ||
454 | r->req.aiocb = NULL; | ||
455 | + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); | ||
456 | if (scsi_disk_req_check_error(r, ret, true)) { | ||
457 | goto done; | ||
458 | } | ||
459 | @@ -XXX,XX +XXX,XX @@ static void scsi_aio_complete(void *opaque, int ret) | ||
460 | scsi_req_complete(&r->req, GOOD); | ||
461 | |||
462 | done: | ||
463 | + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); | ||
464 | scsi_req_unref(&r->req); | ||
465 | } | ||
466 | |||
467 | @@ -XXX,XX +XXX,XX @@ static void scsi_dma_complete(void *opaque, int ret) | ||
468 | assert(r->req.aiocb != NULL); | ||
469 | r->req.aiocb = NULL; | ||
470 | |||
471 | + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); | ||
472 | if (ret < 0) { | ||
473 | block_acct_failed(blk_get_stats(s->qdev.conf.blk), &r->acct); | ||
474 | } else { | ||
475 | block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct); | ||
476 | } | ||
477 | scsi_dma_complete_noio(r, ret); | ||
478 | + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); | ||
479 | } | ||
480 | |||
481 | static void scsi_read_complete(void * opaque, int ret) | ||
482 | @@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret) | ||
483 | |||
484 | assert(r->req.aiocb != NULL); | ||
485 | r->req.aiocb = NULL; | ||
486 | + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); | ||
487 | if (scsi_disk_req_check_error(r, ret, true)) { | ||
488 | goto done; | ||
489 | } | ||
490 | @@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret) | ||
491 | |||
492 | done: | ||
493 | scsi_req_unref(&r->req); | ||
494 | + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); | ||
495 | } | ||
496 | |||
497 | /* Actually issue a read to the block device. */ | ||
498 | @@ -XXX,XX +XXX,XX @@ static void scsi_do_read_cb(void *opaque, int ret) | ||
499 | assert (r->req.aiocb != NULL); | ||
500 | r->req.aiocb = NULL; | ||
501 | |||
502 | + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); | ||
503 | if (ret < 0) { | ||
504 | block_acct_failed(blk_get_stats(s->qdev.conf.blk), &r->acct); | ||
505 | } else { | ||
506 | block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct); | ||
507 | } | ||
508 | scsi_do_read(opaque, ret); | ||
509 | + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); | ||
510 | } | ||
511 | |||
512 | /* Read more data from scsi device into buffer. */ | ||
513 | @@ -XXX,XX +XXX,XX @@ static void scsi_write_complete(void * opaque, int ret) | ||
514 | assert (r->req.aiocb != NULL); | ||
515 | r->req.aiocb = NULL; | ||
516 | |||
517 | + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); | ||
518 | if (ret < 0) { | ||
519 | block_acct_failed(blk_get_stats(s->qdev.conf.blk), &r->acct); | ||
520 | } else { | ||
521 | block_acct_done(blk_get_stats(s->qdev.conf.blk), &r->acct); | ||
522 | } | ||
523 | scsi_write_complete_noio(r, ret); | ||
524 | + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); | ||
525 | } | ||
526 | |||
527 | static void scsi_write_data(SCSIRequest *req) | ||
528 | @@ -XXX,XX +XXX,XX @@ static void scsi_unmap_complete(void *opaque, int ret) | ||
529 | { | ||
530 | UnmapCBData *data = opaque; | ||
531 | SCSIDiskReq *r = data->r; | ||
532 | + SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev); | ||
533 | |||
534 | assert(r->req.aiocb != NULL); | ||
535 | r->req.aiocb = NULL; | ||
536 | |||
537 | + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); | ||
538 | scsi_unmap_complete_noio(data, ret); | ||
539 | + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); | ||
540 | } | ||
541 | |||
542 | static void scsi_disk_emulate_unmap(SCSIDiskReq *r, uint8_t *inbuf) | ||
543 | @@ -XXX,XX +XXX,XX @@ static void scsi_write_same_complete(void *opaque, int ret) | ||
544 | |||
545 | assert(r->req.aiocb != NULL); | ||
546 | r->req.aiocb = NULL; | ||
547 | + aio_context_acquire(blk_get_aio_context(s->qdev.conf.blk)); | ||
548 | if (scsi_disk_req_check_error(r, ret, true)) { | ||
549 | goto done; | ||
550 | } | ||
551 | @@ -XXX,XX +XXX,XX @@ done: | ||
552 | scsi_req_unref(&r->req); | ||
553 | qemu_vfree(data->iov.iov_base); | ||
554 | g_free(data); | ||
555 | + aio_context_release(blk_get_aio_context(s->qdev.conf.blk)); | ||
556 | } | ||
557 | |||
558 | static void scsi_disk_emulate_write_same(SCSIDiskReq *r, uint8_t *inbuf) | ||
559 | diff --git a/hw/scsi/scsi-generic.c b/hw/scsi/scsi-generic.c | ||
560 | index XXXXXXX..XXXXXXX 100644 | ||
561 | --- a/hw/scsi/scsi-generic.c | ||
562 | +++ b/hw/scsi/scsi-generic.c | ||
563 | @@ -XXX,XX +XXX,XX @@ done: | ||
564 | static void scsi_command_complete(void *opaque, int ret) | ||
565 | { | ||
566 | SCSIGenericReq *r = (SCSIGenericReq *)opaque; | ||
567 | + SCSIDevice *s = r->req.dev; | ||
568 | |||
569 | assert(r->req.aiocb != NULL); | ||
570 | r->req.aiocb = NULL; | ||
571 | + | ||
572 | + aio_context_acquire(blk_get_aio_context(s->conf.blk)); | ||
573 | scsi_command_complete_noio(r, ret); | ||
574 | + aio_context_release(blk_get_aio_context(s->conf.blk)); | ||
575 | } | ||
576 | |||
577 | static int execute_command(BlockBackend *blk, | ||
578 | @@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret) | ||
579 | assert(r->req.aiocb != NULL); | ||
580 | r->req.aiocb = NULL; | ||
581 | |||
582 | + aio_context_acquire(blk_get_aio_context(s->conf.blk)); | ||
583 | + | ||
584 | if (ret || r->req.io_canceled) { | ||
585 | scsi_command_complete_noio(r, ret); | ||
586 | - return; | ||
587 | + goto done; | ||
588 | } | ||
589 | |||
590 | len = r->io_header.dxfer_len - r->io_header.resid; | ||
591 | @@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret) | ||
592 | r->len = -1; | ||
593 | if (len == 0) { | ||
594 | scsi_command_complete_noio(r, 0); | ||
595 | - return; | ||
596 | + goto done; | ||
597 | } | ||
598 | |||
599 | /* Snoop READ CAPACITY output to set the blocksize. */ | ||
600 | @@ -XXX,XX +XXX,XX @@ static void scsi_read_complete(void * opaque, int ret) | ||
601 | } | ||
602 | scsi_req_data(&r->req, len); | ||
603 | scsi_req_unref(&r->req); | ||
604 | + | ||
605 | +done: | ||
606 | + aio_context_release(blk_get_aio_context(s->conf.blk)); | ||
607 | } | ||
608 | |||
609 | /* Read more data from scsi device into buffer. */ | ||
610 | @@ -XXX,XX +XXX,XX @@ static void scsi_write_complete(void * opaque, int ret) | ||
611 | assert(r->req.aiocb != NULL); | ||
612 | r->req.aiocb = NULL; | ||
613 | |||
614 | + aio_context_acquire(blk_get_aio_context(s->conf.blk)); | ||
615 | + | ||
616 | if (ret || r->req.io_canceled) { | ||
617 | scsi_command_complete_noio(r, ret); | ||
618 | - return; | ||
619 | + goto done; | ||
620 | } | ||
621 | |||
622 | if (r->req.cmd.buf[0] == MODE_SELECT && r->req.cmd.buf[4] == 12 && | ||
623 | @@ -XXX,XX +XXX,XX @@ static void scsi_write_complete(void * opaque, int ret) | ||
624 | } | ||
625 | |||
626 | scsi_command_complete_noio(r, ret); | ||
627 | + | ||
628 | +done: | ||
629 | + aio_context_release(blk_get_aio_context(s->conf.blk)); | ||
630 | } | ||
631 | |||
632 | /* Write data to a scsi device. Returns nonzero on failure. | ||
633 | diff --git a/util/thread-pool.c b/util/thread-pool.c | ||
634 | index XXXXXXX..XXXXXXX 100644 | ||
635 | --- a/util/thread-pool.c | ||
636 | +++ b/util/thread-pool.c | ||
637 | @@ -XXX,XX +XXX,XX @@ restart: | ||
638 | */ | ||
639 | qemu_bh_schedule(pool->completion_bh); | ||
640 | |||
641 | + aio_context_release(pool->ctx); | ||
642 | elem->common.cb(elem->common.opaque, elem->ret); | ||
643 | + aio_context_acquire(pool->ctx); | ||
644 | qemu_aio_unref(elem); | ||
645 | goto restart; | ||
646 | } else { | ||
647 | @@ -XXX,XX +XXX,XX @@ static void thread_pool_co_cb(void *opaque, int ret) | ||
648 | ThreadPoolCo *co = opaque; | ||
649 | |||
650 | co->ret = ret; | ||
651 | - qemu_coroutine_enter(co->co); | ||
652 | + aio_co_wake(co->co); | ||
653 | } | ||
654 | |||
655 | int coroutine_fn thread_pool_submit_co(ThreadPool *pool, ThreadPoolFunc *func, | ||
67 | -- | 656 | -- |
68 | 2.24.1 | 657 | 2.9.3 |
69 | 658 | ||
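
[Editor's note on the recurring qemu_coroutine_enter() -> aio_co_wake() conversions in the patch above: both functions take a Coroutine pointer; the difference, as the series describes, is that aio_co_wake() resumes the coroutine in the AioContext it was running on, acquiring that context as needed, instead of entering it directly in the caller's context. A schematic sketch of the two call-site shapes, assuming the QEMU tree's "qemu/coroutine.h" and "block/aio.h" headers are available; the CompletionState type is a hypothetical stand-in, and this is not a standalone program:]

/* Schematic only: shows the call-site shape, not QEMU's implementation. */
#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "block/aio.h"

typedef struct {
    Coroutine *co;
    int ret;
} CompletionState;

static void completion_cb_old(void *opaque, int ret)
{
    CompletionState *s = opaque;
    s->ret = ret;
    qemu_coroutine_enter(s->co);   /* runs the coroutine right here,
                                    * in the caller's context */
}

static void completion_cb_new(void *opaque, int ret)
{
    CompletionState *s = opaque;
    s->ret = ret;
    aio_co_wake(s->co);            /* resumes the coroutine in its own
                                    * AioContext, safe across threads */
}
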
1 | The ctx->first_bh list contains all created BHs, including those that | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | are not scheduled. The list is iterated by the event loop and therefore | ||
3 | has O(n) time complexity with respect to the number of created BHs. | ||
4 | 2 | ||
5 | Rewrite BHs so that only scheduled or deleted BHs are enqueued. | 3 | This patch prepares for the removal of unnecessary lockcnt inc/dec pairs. |
6 | Only BHs that actually require action will be iterated. | 4 | Extract the dispatching loop for file descriptor handlers into a new |
5 | function aio_dispatch_handlers, and then inline aio_dispatch into | ||
6 | aio_poll. | ||
7 | 7 | ||
8 | One semantic change is required: qemu_bh_delete() enqueues the BH and | 8 | aio_dispatch can now become void. |
9 | therefore invokes aio_notify(). The | ||
10 | tests/test-aio.c:test_source_bh_delete_from_cb() test case assumed that | ||
11 | g_main_context_iteration(NULL, false) returns false after | ||
12 | qemu_bh_delete() but it now returns true for one iteration. Fix up the | ||
13 | test case. | ||
14 | 9 | ||
15 | This patch makes aio_compute_timeout() and aio_bh_poll() drop from a CPU | 10 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
16 | profile reported by perf-top(1). Previously they combined to 9% CPU | 11 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
17 | utilization when AioContext polling is commented out and the guest has 2 | 12 | Reviewed-by: Fam Zheng <famz@redhat.com> |
18 | virtio-blk,num-queues=1 and 99 virtio-blk,num-queues=32 devices. | 13 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> |
19 | 14 | Message-id: 20170213135235.12274-17-pbonzini@redhat.com | |
20 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
21 | Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> | ||
22 | Message-id: 20200221093951.1414693-1-stefanha@redhat.com | ||
23 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 15 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
24 | --- | 16 | --- |
25 | include/block/aio.h | 20 +++- | 17 | include/block/aio.h | 6 +----- |
26 | tests/test-aio.c | 3 +- | 18 | util/aio-posix.c | 44 ++++++++++++++------------------------------ |
27 | util/async.c | 237 ++++++++++++++++++++++++++------------------ | 19 | util/aio-win32.c | 13 ++++--------- |
28 | 3 files changed, 158 insertions(+), 102 deletions(-) | 20 | util/async.c | 2 +- |
21 | 4 files changed, 20 insertions(+), 45 deletions(-) | ||
29 | 22 | ||
30 | diff --git a/include/block/aio.h b/include/block/aio.h | 23 | diff --git a/include/block/aio.h b/include/block/aio.h |
31 | index XXXXXXX..XXXXXXX 100644 | 24 | index XXXXXXX..XXXXXXX 100644 |
32 | --- a/include/block/aio.h | 25 | --- a/include/block/aio.h |
33 | +++ b/include/block/aio.h | 26 | +++ b/include/block/aio.h |
34 | @@ -XXX,XX +XXX,XX @@ struct ThreadPool; | 27 | @@ -XXX,XX +XXX,XX @@ bool aio_pending(AioContext *ctx); |
35 | struct LinuxAioState; | 28 | /* Dispatch any pending callbacks from the GSource attached to the AioContext. |
36 | struct LuringState; | 29 | * |
37 | 30 | * This is used internally in the implementation of the GSource. | |
38 | +/* | 31 | - * |
39 | + * Each aio_bh_poll() call carves off a slice of the BH list, so that newly | 32 | - * @dispatch_fds: true to process fds, false to skip them |
40 | + * scheduled BHs are not processed until the next aio_bh_poll() call. All | 33 | - * (can be used as an optimization by callers that know there |
41 | + * active aio_bh_poll() calls chain their slices together in a list, so that | 34 | - * are no fds ready) |
42 | + * nested aio_bh_poll() calls process all scheduled bottom halves. | 35 | */ |
43 | + */ | 36 | -bool aio_dispatch(AioContext *ctx, bool dispatch_fds); |
44 | +typedef QSLIST_HEAD(, QEMUBH) BHList; | 37 | +void aio_dispatch(AioContext *ctx); |
45 | +typedef struct BHListSlice BHListSlice; | 38 | |
46 | +struct BHListSlice { | 39 | /* Progress in completing AIO work to occur. This can issue new pending |
47 | + BHList bh_list; | 40 | * aio as a result of executing I/O completion or bh callbacks. |
48 | + QSIMPLEQ_ENTRY(BHListSlice) next; | 41 | diff --git a/util/aio-posix.c b/util/aio-posix.c |
49 | +}; | 42 | index XXXXXXX..XXXXXXX 100644 |
43 | --- a/util/aio-posix.c | ||
44 | +++ b/util/aio-posix.c | ||
45 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx) | ||
46 | AioHandler *node, *tmp; | ||
47 | bool progress = false; | ||
48 | |||
49 | - /* | ||
50 | - * We have to walk very carefully in case aio_set_fd_handler is | ||
51 | - * called while we're walking. | ||
52 | - */ | ||
53 | - qemu_lockcnt_inc(&ctx->list_lock); | ||
54 | - | ||
55 | QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) { | ||
56 | int revents; | ||
57 | |||
58 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx) | ||
59 | } | ||
60 | } | ||
61 | |||
62 | - qemu_lockcnt_dec(&ctx->list_lock); | ||
63 | return progress; | ||
64 | } | ||
65 | |||
66 | -/* | ||
67 | - * Note that dispatch_fds == false has the side-effect of post-poning the | ||
68 | - * freeing of deleted handlers. | ||
69 | - */ | ||
70 | -bool aio_dispatch(AioContext *ctx, bool dispatch_fds) | ||
71 | +void aio_dispatch(AioContext *ctx) | ||
72 | { | ||
73 | - bool progress; | ||
74 | + aio_bh_poll(ctx); | ||
75 | |||
76 | - /* | ||
77 | - * If there are callbacks left that have been queued, we need to call them. | ||
78 | - * Do not call select in this case, because it is possible that the caller | ||
79 | - * does not need a complete flush (as is the case for aio_poll loops). | ||
80 | - */ | ||
81 | - progress = aio_bh_poll(ctx); | ||
82 | + qemu_lockcnt_inc(&ctx->list_lock); | ||
83 | + aio_dispatch_handlers(ctx); | ||
84 | + qemu_lockcnt_dec(&ctx->list_lock); | ||
85 | |||
86 | - if (dispatch_fds) { | ||
87 | - progress |= aio_dispatch_handlers(ctx); | ||
88 | - } | ||
89 | - | ||
90 | - /* Run our timers */ | ||
91 | - progress |= timerlistgroup_run_timers(&ctx->tlg); | ||
92 | - | ||
93 | - return progress; | ||
94 | + timerlistgroup_run_timers(&ctx->tlg); | ||
95 | } | ||
96 | |||
97 | /* These thread-local variables are used only in a small part of aio_poll | ||
98 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | ||
99 | npfd = 0; | ||
100 | qemu_lockcnt_dec(&ctx->list_lock); | ||
101 | |||
102 | - /* Run dispatch even if there were no readable fds to run timers */ | ||
103 | - if (aio_dispatch(ctx, ret > 0)) { | ||
104 | - progress = true; | ||
105 | + progress |= aio_bh_poll(ctx); | ||
50 | + | 106 | + |
51 | struct AioContext { | 107 | + if (ret > 0) { |
52 | GSource source; | 108 | + qemu_lockcnt_inc(&ctx->list_lock); |
53 | 109 | + progress |= aio_dispatch_handlers(ctx); | |
54 | @@ -XXX,XX +XXX,XX @@ struct AioContext { | 110 | + qemu_lockcnt_dec(&ctx->list_lock); |
55 | */ | 111 | } |
56 | QemuLockCnt list_lock; | 112 | |
57 | 113 | + progress |= timerlistgroup_run_timers(&ctx->tlg); | |
58 | - /* Anchor of the list of Bottom Halves belonging to the context */ | ||
59 | - struct QEMUBH *first_bh; | ||
60 | + /* Bottom Halves pending aio_bh_poll() processing */ | ||
61 | + BHList bh_list; | ||
62 | + | 114 | + |
63 | + /* Chained BH list slices for each nested aio_bh_poll() call */ | 115 | return progress; |
64 | + QSIMPLEQ_HEAD(, BHListSlice) bh_slice_list; | 116 | } |
65 | 117 | ||
66 | /* Used by aio_notify. | 118 | diff --git a/util/aio-win32.c b/util/aio-win32.c |
67 | * | ||
68 | diff --git a/tests/test-aio.c b/tests/test-aio.c | ||
69 | index XXXXXXX..XXXXXXX 100644 | 119 | index XXXXXXX..XXXXXXX 100644 |
70 | --- a/tests/test-aio.c | 120 | --- a/util/aio-win32.c |
71 | +++ b/tests/test-aio.c | 121 | +++ b/util/aio-win32.c |
72 | @@ -XXX,XX +XXX,XX @@ static void test_source_bh_delete_from_cb(void) | 122 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) |
73 | g_assert_cmpint(data1.n, ==, data1.max); | 123 | return progress; |
74 | g_assert(data1.bh == NULL); | ||
75 | |||
76 | - g_assert(!g_main_context_iteration(NULL, false)); | ||
77 | + assert(g_main_context_iteration(NULL, false)); | ||
78 | + assert(!g_main_context_iteration(NULL, false)); | ||
79 | } | 124 | } |
80 | 125 | ||
81 | static void test_source_bh_delete_from_cb_many(void) | 126 | -bool aio_dispatch(AioContext *ctx, bool dispatch_fds) |
127 | +void aio_dispatch(AioContext *ctx) | ||
128 | { | ||
129 | - bool progress; | ||
130 | - | ||
131 | - progress = aio_bh_poll(ctx); | ||
132 | - if (dispatch_fds) { | ||
133 | - progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE); | ||
134 | - } | ||
135 | - progress |= timerlistgroup_run_timers(&ctx->tlg); | ||
136 | - return progress; | ||
137 | + aio_bh_poll(ctx); | ||
138 | + aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE); | ||
139 | + timerlistgroup_run_timers(&ctx->tlg); | ||
140 | } | ||
141 | |||
142 | bool aio_poll(AioContext *ctx, bool blocking) | ||
82 | diff --git a/util/async.c b/util/async.c | 143 | diff --git a/util/async.c b/util/async.c |
83 | index XXXXXXX..XXXXXXX 100644 | 144 | index XXXXXXX..XXXXXXX 100644 |
84 | --- a/util/async.c | 145 | --- a/util/async.c |
85 | +++ b/util/async.c | 146 | +++ b/util/async.c |
86 | @@ -XXX,XX +XXX,XX @@ | 147 | @@ -XXX,XX +XXX,XX @@ aio_ctx_dispatch(GSource *source, |
87 | #include "block/thread-pool.h" | 148 | AioContext *ctx = (AioContext *) source; |
88 | #include "qemu/main-loop.h" | 149 | |
89 | #include "qemu/atomic.h" | 150 | assert(callback == NULL); |
90 | +#include "qemu/rcu_queue.h" | 151 | - aio_dispatch(ctx, true); |
91 | #include "block/raw-aio.h" | 152 | + aio_dispatch(ctx); |
92 | #include "qemu/coroutine_int.h" | 153 | return true; |
93 | #include "trace.h" | ||
94 | @@ -XXX,XX +XXX,XX @@ | ||
95 | /***********************************************************/ | ||
96 | /* bottom halves (can be seen as timers which expire ASAP) */ | ||
97 | |||
98 | +/* QEMUBH::flags values */ | ||
99 | +enum { | ||
100 | + /* Already enqueued and waiting for aio_bh_poll() */ | ||
101 | + BH_PENDING = (1 << 0), | ||
102 | + | ||
103 | + /* Invoke the callback */ | ||
104 | + BH_SCHEDULED = (1 << 1), | ||
105 | + | ||
106 | + /* Delete without invoking callback */ | ||
107 | + BH_DELETED = (1 << 2), | ||
108 | + | ||
109 | + /* Delete after invoking callback */ | ||
110 | + BH_ONESHOT = (1 << 3), | ||
111 | + | ||
112 | + /* Schedule periodically when the event loop is idle */ | ||
113 | + BH_IDLE = (1 << 4), | ||
114 | +}; | ||
115 | + | ||
116 | struct QEMUBH { | ||
117 | AioContext *ctx; | ||
118 | QEMUBHFunc *cb; | ||
119 | void *opaque; | ||
120 | - QEMUBH *next; | ||
121 | - bool scheduled; | ||
122 | - bool idle; | ||
123 | - bool deleted; | ||
124 | + QSLIST_ENTRY(QEMUBH) next; | ||
125 | + unsigned flags; | ||
126 | }; | ||
127 | |||
128 | +/* Called concurrently from any thread */ | ||
129 | +static void aio_bh_enqueue(QEMUBH *bh, unsigned new_flags) | ||
130 | +{ | ||
131 | + AioContext *ctx = bh->ctx; | ||
132 | + unsigned old_flags; | ||
133 | + | ||
134 | + /* | ||
135 | + * The memory barrier implicit in atomic_fetch_or makes sure that: | ||
136 | + * 1. idle & any writes needed by the callback are done before the | ||
137 | + * locations are read in the aio_bh_poll. | ||
138 | + * 2. ctx is loaded before the callback has a chance to execute and bh | ||
139 | + * could be freed. | ||
140 | + */ | ||
141 | + old_flags = atomic_fetch_or(&bh->flags, BH_PENDING | new_flags); | ||
142 | + if (!(old_flags & BH_PENDING)) { | ||
143 | + QSLIST_INSERT_HEAD_ATOMIC(&ctx->bh_list, bh, next); | ||
144 | + } | ||
145 | + | ||
146 | + aio_notify(ctx); | ||
147 | +} | ||
148 | + | ||
149 | +/* Only called from aio_bh_poll() and aio_ctx_finalize() */ | ||
150 | +static QEMUBH *aio_bh_dequeue(BHList *head, unsigned *flags) | ||
151 | +{ | ||
152 | + QEMUBH *bh = QSLIST_FIRST_RCU(head); | ||
153 | + | ||
154 | + if (!bh) { | ||
155 | + return NULL; | ||
156 | + } | ||
157 | + | ||
158 | + QSLIST_REMOVE_HEAD(head, next); | ||
159 | + | ||
160 | + /* | ||
161 | + * The atomic_and is paired with aio_bh_enqueue(). The implicit memory | ||
162 | + * barrier ensures that the callback sees all writes done by the scheduling | ||
163 | + * thread. It also ensures that the scheduling thread sees the cleared | ||
164 | + * flag before bh->cb has run, and thus will call aio_notify again if | ||
165 | + * necessary. | ||
166 | + */ | ||
167 | + *flags = atomic_fetch_and(&bh->flags, | ||
168 | + ~(BH_PENDING | BH_SCHEDULED | BH_IDLE)); | ||
169 | + return bh; | ||
170 | +} | ||
171 | + | ||
172 | void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque) | ||
173 | { | ||
174 | QEMUBH *bh; | ||
175 | @@ -XXX,XX +XXX,XX @@ void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque) | ||
176 | .cb = cb, | ||
177 | .opaque = opaque, | ||
178 | }; | ||
179 | - qemu_lockcnt_lock(&ctx->list_lock); | ||
180 | - bh->next = ctx->first_bh; | ||
181 | - bh->scheduled = 1; | ||
182 | - bh->deleted = 1; | ||
183 | - /* Make sure that the members are ready before putting bh into list */ | ||
184 | - smp_wmb(); | ||
185 | - ctx->first_bh = bh; | ||
186 | - qemu_lockcnt_unlock(&ctx->list_lock); | ||
187 | - aio_notify(ctx); | ||
188 | + aio_bh_enqueue(bh, BH_SCHEDULED | BH_ONESHOT); | ||
189 | } | 154 | } |
190 | 155 | ||
191 | QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque) | ||
192 | @@ -XXX,XX +XXX,XX @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque) | ||
193 | .cb = cb, | ||
194 | .opaque = opaque, | ||
195 | }; | ||
196 | - qemu_lockcnt_lock(&ctx->list_lock); | ||
197 | - bh->next = ctx->first_bh; | ||
198 | - /* Make sure that the members are ready before putting bh into list */ | ||
199 | - smp_wmb(); | ||
200 | - ctx->first_bh = bh; | ||
201 | - qemu_lockcnt_unlock(&ctx->list_lock); | ||
202 | return bh; | ||
203 | } | ||
204 | |||
205 | @@ -XXX,XX +XXX,XX @@ void aio_bh_call(QEMUBH *bh) | ||
206 | bh->cb(bh->opaque); | ||
207 | } | ||
208 | |||
209 | -/* Multiple occurrences of aio_bh_poll cannot be called concurrently. | ||
210 | - * The count in ctx->list_lock is incremented before the call, and is | ||
211 | - * not affected by the call. | ||
212 | - */ | ||
213 | +/* Multiple occurrences of aio_bh_poll cannot be called concurrently. */ | ||
214 | int aio_bh_poll(AioContext *ctx) | ||
215 | { | ||
216 | - QEMUBH *bh, **bhp, *next; | ||
217 | - int ret; | ||
218 | - bool deleted = false; | ||
219 | - | ||
220 | - ret = 0; | ||
221 | - for (bh = atomic_rcu_read(&ctx->first_bh); bh; bh = next) { | ||
222 | - next = atomic_rcu_read(&bh->next); | ||
223 | - /* The atomic_xchg is paired with the one in qemu_bh_schedule. The | ||
224 | - * implicit memory barrier ensures that the callback sees all writes | ||
225 | - * done by the scheduling thread. It also ensures that the scheduling | ||
226 | - * thread sees the zero before bh->cb has run, and thus will call | ||
227 | - * aio_notify again if necessary. | ||
228 | - */ | ||
229 | - if (atomic_xchg(&bh->scheduled, 0)) { | ||
230 | + BHListSlice slice; | ||
231 | + BHListSlice *s; | ||
232 | + int ret = 0; | ||
233 | + | ||
234 | + QSLIST_MOVE_ATOMIC(&slice.bh_list, &ctx->bh_list); | ||
235 | + QSIMPLEQ_INSERT_TAIL(&ctx->bh_slice_list, &slice, next); | ||
236 | + | ||
237 | + while ((s = QSIMPLEQ_FIRST(&ctx->bh_slice_list))) { | ||
238 | + QEMUBH *bh; | ||
239 | + unsigned flags; | ||
240 | + | ||
241 | + bh = aio_bh_dequeue(&s->bh_list, &flags); | ||
242 | + if (!bh) { | ||
243 | + QSIMPLEQ_REMOVE_HEAD(&ctx->bh_slice_list, next); | ||
244 | + continue; | ||
245 | + } | ||
246 | + | ||
247 | + if ((flags & (BH_SCHEDULED | BH_DELETED)) == BH_SCHEDULED) { | ||
248 | /* Idle BHs don't count as progress */ | ||
249 | - if (!bh->idle) { | ||
250 | + if (!(flags & BH_IDLE)) { | ||
251 | ret = 1; | ||
252 | } | ||
253 | - bh->idle = 0; | ||
254 | aio_bh_call(bh); | ||
255 | } | ||
256 | - if (bh->deleted) { | ||
257 | - deleted = true; | ||
258 | + if (flags & (BH_DELETED | BH_ONESHOT)) { | ||
259 | + g_free(bh); | ||
260 | } | ||
261 | } | ||
262 | |||
263 | - /* remove deleted bhs */ | ||
264 | - if (!deleted) { | ||
265 | - return ret; | ||
266 | - } | ||
267 | - | ||
268 | - if (qemu_lockcnt_dec_if_lock(&ctx->list_lock)) { | ||
269 | - bhp = &ctx->first_bh; | ||
270 | - while (*bhp) { | ||
271 | - bh = *bhp; | ||
272 | - if (bh->deleted && !bh->scheduled) { | ||
273 | - *bhp = bh->next; | ||
274 | - g_free(bh); | ||
275 | - } else { | ||
276 | - bhp = &bh->next; | ||
277 | - } | ||
278 | - } | ||
279 | - qemu_lockcnt_inc_and_unlock(&ctx->list_lock); | ||
280 | - } | ||
281 | return ret; | ||
282 | } | ||
283 | |||
284 | void qemu_bh_schedule_idle(QEMUBH *bh) | ||
285 | { | ||
286 | - bh->idle = 1; | ||
287 | - /* Make sure that idle & any writes needed by the callback are done | ||
288 | - * before the locations are read in the aio_bh_poll. | ||
289 | - */ | ||
290 | - atomic_mb_set(&bh->scheduled, 1); | ||
291 | + aio_bh_enqueue(bh, BH_SCHEDULED | BH_IDLE); | ||
292 | } | ||
293 | |||
294 | void qemu_bh_schedule(QEMUBH *bh) | ||
295 | { | ||
296 | - AioContext *ctx; | ||
297 | - | ||
298 | - ctx = bh->ctx; | ||
299 | - bh->idle = 0; | ||
300 | - /* The memory barrier implicit in atomic_xchg makes sure that: | ||
301 | - * 1. idle & any writes needed by the callback are done before the | ||
302 | - * locations are read in the aio_bh_poll. | ||
303 | - * 2. ctx is loaded before scheduled is set and the callback has a chance | ||
304 | - * to execute. | ||
305 | - */ | ||
306 | - if (atomic_xchg(&bh->scheduled, 1) == 0) { | ||
307 | - aio_notify(ctx); | ||
308 | - } | ||
309 | + aio_bh_enqueue(bh, BH_SCHEDULED); | ||
310 | } | ||
311 | |||
312 | - | ||
313 | /* This func is async. | ||
314 | */ | ||
315 | void qemu_bh_cancel(QEMUBH *bh) | ||
316 | { | ||
317 | - atomic_mb_set(&bh->scheduled, 0); | ||
318 | + atomic_and(&bh->flags, ~BH_SCHEDULED); | ||
319 | } | ||
320 | |||
321 | /* This func is async. The bottom half will do the delete action at the final | ||
322 | @@ -XXX,XX +XXX,XX @@ void qemu_bh_cancel(QEMUBH *bh) | ||
323 | */ | ||
324 | void qemu_bh_delete(QEMUBH *bh) | ||
325 | { | ||
326 | - bh->scheduled = 0; | ||
327 | - bh->deleted = 1; | ||
328 | + aio_bh_enqueue(bh, BH_DELETED); | ||
329 | } | ||
330 | |||
331 | -int64_t | ||
332 | -aio_compute_timeout(AioContext *ctx) | ||
333 | +static int64_t aio_compute_bh_timeout(BHList *head, int timeout) | ||
334 | { | ||
335 | - int64_t deadline; | ||
336 | - int timeout = -1; | ||
337 | QEMUBH *bh; | ||
338 | |||
339 | - for (bh = atomic_rcu_read(&ctx->first_bh); bh; | ||
340 | - bh = atomic_rcu_read(&bh->next)) { | ||
341 | - if (bh->scheduled) { | ||
342 | - if (bh->idle) { | ||
343 | + QSLIST_FOREACH_RCU(bh, head, next) { | ||
344 | + if ((bh->flags & (BH_SCHEDULED | BH_DELETED)) == BH_SCHEDULED) { | ||
345 | + if (bh->flags & BH_IDLE) { | ||
346 | /* idle bottom halves will be polled at least | ||
347 | * every 10ms */ | ||
348 | timeout = 10000000; | ||
349 | @@ -XXX,XX +XXX,XX @@ aio_compute_timeout(AioContext *ctx) | ||
350 | } | ||
351 | } | ||
352 | |||
353 | + return timeout; | ||
354 | +} | ||
355 | + | ||
356 | +int64_t | ||
357 | +aio_compute_timeout(AioContext *ctx) | ||
358 | +{ | ||
359 | + BHListSlice *s; | ||
360 | + int64_t deadline; | ||
361 | + int timeout = -1; | ||
362 | + | ||
363 | + timeout = aio_compute_bh_timeout(&ctx->bh_list, timeout); | ||
364 | + if (timeout == 0) { | ||
365 | + return 0; | ||
366 | + } | ||
367 | + | ||
368 | + QSIMPLEQ_FOREACH(s, &ctx->bh_slice_list, next) { | ||
369 | + timeout = aio_compute_bh_timeout(&s->bh_list, timeout); | ||
370 | + if (timeout == 0) { | ||
371 | + return 0; | ||
372 | + } | ||
373 | + } | ||
374 | + | ||
375 | deadline = timerlistgroup_deadline_ns(&ctx->tlg); | ||
376 | if (deadline == 0) { | ||
377 | return 0; | ||
378 | @@ -XXX,XX +XXX,XX @@ aio_ctx_check(GSource *source) | ||
379 | { | ||
380 | AioContext *ctx = (AioContext *) source; | ||
381 | QEMUBH *bh; | ||
382 | + BHListSlice *s; | ||
383 | |||
384 | atomic_and(&ctx->notify_me, ~1); | ||
385 | aio_notify_accept(ctx); | ||
386 | |||
387 | - for (bh = ctx->first_bh; bh; bh = bh->next) { | ||
388 | - if (bh->scheduled) { | ||
389 | + QSLIST_FOREACH_RCU(bh, &ctx->bh_list, next) { | ||
390 | + if ((bh->flags & (BH_SCHEDULED | BH_DELETED)) == BH_SCHEDULED) { | ||
391 | return true; | ||
392 | } | ||
393 | } | ||
394 | + | ||
395 | + QSIMPLEQ_FOREACH(s, &ctx->bh_slice_list, next) { | ||
396 | + QSLIST_FOREACH_RCU(bh, &s->bh_list, next) { | ||
397 | + if ((bh->flags & (BH_SCHEDULED | BH_DELETED)) == BH_SCHEDULED) { | ||
398 | + return true; | ||
399 | + } | ||
400 | + } | ||
401 | + } | ||
402 | return aio_pending(ctx) || (timerlistgroup_deadline_ns(&ctx->tlg) == 0); | ||
403 | } | ||
404 | |||
405 | @@ -XXX,XX +XXX,XX @@ static void | ||
406 | aio_ctx_finalize(GSource *source) | ||
407 | { | ||
408 | AioContext *ctx = (AioContext *) source; | ||
409 | + QEMUBH *bh; | ||
410 | + unsigned flags; | ||
411 | |||
412 | thread_pool_free(ctx->thread_pool); | ||
413 | |||
414 | @@ -XXX,XX +XXX,XX @@ aio_ctx_finalize(GSource *source) | ||
415 | assert(QSLIST_EMPTY(&ctx->scheduled_coroutines)); | ||
416 | qemu_bh_delete(ctx->co_schedule_bh); | ||
417 | |||
418 | - qemu_lockcnt_lock(&ctx->list_lock); | ||
419 | - assert(!qemu_lockcnt_count(&ctx->list_lock)); | ||
420 | - while (ctx->first_bh) { | ||
421 | - QEMUBH *next = ctx->first_bh->next; | ||
422 | + /* There must be no aio_bh_poll() calls going on */ | ||
423 | + assert(QSIMPLEQ_EMPTY(&ctx->bh_slice_list)); | ||
424 | |||
425 | + while ((bh = aio_bh_dequeue(&ctx->bh_list, &flags))) { | ||
426 | /* qemu_bh_delete() must have been called on BHs in this AioContext */ | ||
427 | - assert(ctx->first_bh->deleted); | ||
428 | + assert(flags & BH_DELETED); | ||
429 | |||
430 | - g_free(ctx->first_bh); | ||
431 | - ctx->first_bh = next; | ||
432 | + g_free(bh); | ||
433 | } | ||
434 | - qemu_lockcnt_unlock(&ctx->list_lock); | ||
435 | |||
436 | aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL); | ||
437 | event_notifier_cleanup(&ctx->notifier); | ||
438 | @@ -XXX,XX +XXX,XX @@ AioContext *aio_context_new(Error **errp) | ||
439 | AioContext *ctx; | ||
440 | |||
441 | ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext)); | ||
442 | + QSLIST_INIT(&ctx->bh_list); | ||
443 | + QSIMPLEQ_INIT(&ctx->bh_slice_list); | ||
444 | aio_context_setup(ctx); | ||
445 | |||
446 | ret = event_notifier_init(&ctx->notifier, false); | ||
447 | -- | 156 | -- |
448 | 2.24.1 | 157 | 2.9.3 |
449 | 158 | ||
159 | diff view generated by jsdifflib |
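The bottom-half rework above folds the old scheduled/idle/deleted fields into a single atomic flags word, so each side needs only one read-modify-write. A minimal sketch of the enqueue/dequeue pairing, using C11 atomics rather than QEMU's atomic_* wrappers (flag values and helper names here are illustrative, not QEMU API):

    #include <stdatomic.h>
    #include <stdbool.h>

    enum { BH_PENDING = 1, BH_SCHEDULED = 2, BH_IDLE = 4 };

    /* Producer: set the flags and learn whether the BH was already pending.
     * atomic_fetch_or is sequentially consistent, so writes made before
     * scheduling are visible to whoever later consumes the flags. */
    static bool schedule_once(_Atomic unsigned *flags, unsigned new_flags)
    {
        unsigned old = atomic_fetch_or(flags, BH_PENDING | new_flags);
        return !(old & BH_PENDING); /* true: caller inserts into the list */
    }

    /* Consumer: atomically clear the flags and act on the returned
     * snapshot; a concurrent re-schedule simply re-inserts the BH. */
    static unsigned consume(_Atomic unsigned *flags)
    {
        return atomic_fetch_and(flags,
                                ~(BH_PENDING | BH_SCHEDULED | BH_IDLE));
    }

Only the thread that flips BH_PENDING from 0 to 1 inserts the BH into the list, which is what keeps the list insertion race-free in the patch.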
1 | File descriptor monitoring is O(1) with epoll(7), but | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | aio_dispatch_handlers() still scans all AioHandlers instead of | ||
3 | dispatching just those that are ready. This makes aio_poll() O(n) with | ||
4 | respect to the total number of registered handlers. | ||
5 | 2 | ||
6 | Add a local ready_list to aio_poll() so that each nested aio_poll() | 3 | Pull the increment/decrement pair out of aio_bh_poll and into the |
7 | builds a list of handlers ready to be dispatched. Since file descriptor | 4 | callers. |
8 | polling is level-triggered, nested aio_poll() calls also see fds that | ||
9 | were ready in the parent but not yet dispatched. This guarantees that | ||
10 | nested aio_poll() invocations will dispatch all fds, even those that | ||
11 | became ready before the nested invocation. | ||
12 | 5 | ||
13 | Since only handlers ready to be dispatched are placed onto the | 6 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
14 | ready_list, the new aio_dispatch_ready_handlers() function provides O(1) | 7 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
15 | dispatch. | 8 | Reviewed-by: Fam Zheng <famz@redhat.com> |
16 | 9 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> | |
17 | Note that AioContext polling is still O(n) and currently cannot be fully | 10 | Message-id: 20170213135235.12274-18-pbonzini@redhat.com |
18 | disabled. This still needs to be fixed before aio_poll() is fully O(1). | ||
19 | |||
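To make the complexity claim concrete, here is a minimal sketch of the ready-list idea, with illustrative types rather than QEMU's QLIST/AioHandler API: the poll step pushes only ready handlers onto a local list, and dispatch pops from that list, so the dispatch cost scales with the number of ready fds instead of all registered handlers.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Handler {
        struct Handler *next_ready;
        void (*cb)(struct Handler *h);
        int revents;
    } Handler;

    /* Called once per fd reported ready by poll()/epoll_wait(): O(1). */
    static void add_ready(Handler **head, Handler *h, int revents)
    {
        h->revents = revents;
        h->next_ready = *head;
        *head = h;
    }

    /* Touches only handlers that were actually ready. */
    static bool dispatch_ready(Handler **head)
    {
        bool progress = false;
        Handler *h;

        while ((h = *head) != NULL) {
            *head = h->next_ready;
            h->cb(h);
            progress = true;
        }
        return progress;
    }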
20 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
21 | Reviewed-by: Sergio Lopez <slp@redhat.com> | ||
22 | Message-id: 20200214171712.541358-6-stefanha@redhat.com | ||
23 | [Fix compilation error on macOS where there is no epoll(7). The | ||
24 | aio_epoll() prototype was out of date and add_ready_handler() needed to | ||
25 | be moved outside the ifdef. | ||
26 | --Stefan] | ||
27 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 11 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
28 | --- | 12 | --- |
29 | util/aio-posix.c | 110 +++++++++++++++++++++++++++++++++-------------- | 13 | util/aio-posix.c | 8 +++----- |
30 | 1 file changed, 78 insertions(+), 32 deletions(-) | 14 | util/aio-win32.c | 8 ++++---- |
15 | util/async.c | 12 ++++++------ | ||
16 | 3 files changed, 13 insertions(+), 15 deletions(-) | ||
31 | 17 | ||
32 | diff --git a/util/aio-posix.c b/util/aio-posix.c | 18 | diff --git a/util/aio-posix.c b/util/aio-posix.c |
33 | index XXXXXXX..XXXXXXX 100644 | 19 | index XXXXXXX..XXXXXXX 100644 |
34 | --- a/util/aio-posix.c | 20 | --- a/util/aio-posix.c |
35 | +++ b/util/aio-posix.c | 21 | +++ b/util/aio-posix.c |
36 | @@ -XXX,XX +XXX,XX @@ struct AioHandler | 22 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx) |
37 | void *opaque; | 23 | |
38 | bool is_external; | 24 | void aio_dispatch(AioContext *ctx) |
39 | QLIST_ENTRY(AioHandler) node; | 25 | { |
40 | + QLIST_ENTRY(AioHandler) node_ready; /* only used during aio_poll() */ | 26 | + qemu_lockcnt_inc(&ctx->list_lock); |
41 | QLIST_ENTRY(AioHandler) node_deleted; | 27 | aio_bh_poll(ctx); |
42 | }; | 28 | - |
43 | 29 | - qemu_lockcnt_inc(&ctx->list_lock); | |
44 | +/* Add a handler to a ready list */ | 30 | aio_dispatch_handlers(ctx); |
45 | +static void add_ready_handler(AioHandlerList *ready_list, | 31 | qemu_lockcnt_dec(&ctx->list_lock); |
46 | + AioHandler *node, | 32 | |
47 | + int revents) | 33 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) |
48 | +{ | 34 | } |
49 | + QLIST_SAFE_REMOVE(node, node_ready); /* remove from nested parent's list */ | 35 | |
50 | + node->pfd.revents = revents; | 36 | npfd = 0; |
51 | + QLIST_INSERT_HEAD(ready_list, node, node_ready); | 37 | - qemu_lockcnt_dec(&ctx->list_lock); |
52 | +} | 38 | |
39 | progress |= aio_bh_poll(ctx); | ||
40 | |||
41 | if (ret > 0) { | ||
42 | - qemu_lockcnt_inc(&ctx->list_lock); | ||
43 | progress |= aio_dispatch_handlers(ctx); | ||
44 | - qemu_lockcnt_dec(&ctx->list_lock); | ||
45 | } | ||
46 | |||
47 | + qemu_lockcnt_dec(&ctx->list_lock); | ||
53 | + | 48 | + |
54 | #ifdef CONFIG_EPOLL_CREATE1 | 49 | progress |= timerlistgroup_run_timers(&ctx->tlg); |
55 | 50 | ||
56 | /* The fd number threshold to switch to epoll */ | 51 | return progress; |
57 | @@ -XXX,XX +XXX,XX @@ static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new) | 52 | diff --git a/util/aio-win32.c b/util/aio-win32.c |
58 | } | 53 | index XXXXXXX..XXXXXXX 100644 |
59 | } | 54 | --- a/util/aio-win32.c |
60 | 55 | +++ b/util/aio-win32.c | |
61 | -static int aio_epoll(AioContext *ctx, int64_t timeout) | 56 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) |
62 | +static int aio_epoll(AioContext *ctx, AioHandlerList *ready_list, | 57 | bool progress = false; |
63 | + int64_t timeout) | 58 | AioHandler *tmp; |
64 | { | 59 | |
65 | GPollFD pfd = { | 60 | - qemu_lockcnt_inc(&ctx->list_lock); |
66 | .fd = ctx->epollfd, | 61 | - |
67 | @@ -XXX,XX +XXX,XX @@ static int aio_epoll(AioContext *ctx, int64_t timeout) | 62 | /* |
68 | } | 63 | * We have to walk very carefully in case aio_set_fd_handler is |
69 | for (i = 0; i < ret; i++) { | 64 | * called while we're walking. |
70 | int ev = events[i].events; | 65 | @@ -XXX,XX +XXX,XX @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event) |
71 | + int revents = (ev & EPOLLIN ? G_IO_IN : 0) | | ||
72 | + (ev & EPOLLOUT ? G_IO_OUT : 0) | | ||
73 | + (ev & EPOLLHUP ? G_IO_HUP : 0) | | ||
74 | + (ev & EPOLLERR ? G_IO_ERR : 0); | ||
75 | + | ||
76 | node = events[i].data.ptr; | ||
77 | - node->pfd.revents = (ev & EPOLLIN ? G_IO_IN : 0) | | ||
78 | - (ev & EPOLLOUT ? G_IO_OUT : 0) | | ||
79 | - (ev & EPOLLHUP ? G_IO_HUP : 0) | | ||
80 | - (ev & EPOLLERR ? G_IO_ERR : 0); | ||
81 | + add_ready_handler(ready_list, node, revents); | ||
82 | } | 66 | } |
83 | } | 67 | } |
84 | out: | 68 | |
85 | @@ -XXX,XX +XXX,XX @@ static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new) | 69 | - qemu_lockcnt_dec(&ctx->list_lock); |
70 | return progress; | ||
71 | } | ||
72 | |||
73 | void aio_dispatch(AioContext *ctx) | ||
86 | { | 74 | { |
75 | + qemu_lockcnt_inc(&ctx->list_lock); | ||
76 | aio_bh_poll(ctx); | ||
77 | aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE); | ||
78 | + qemu_lockcnt_dec(&ctx->list_lock); | ||
79 | timerlistgroup_run_timers(&ctx->tlg); | ||
87 | } | 80 | } |
88 | 81 | ||
89 | -static int aio_epoll(AioContext *ctx, GPollFD *pfds, | 82 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) |
90 | - unsigned npfd, int64_t timeout) | ||
91 | +static int aio_epoll(AioContext *ctx, AioHandlerList *ready_list, | ||
92 | + int64_t timeout) | ||
93 | { | ||
94 | assert(false); | ||
95 | } | ||
96 | @@ -XXX,XX +XXX,XX @@ static void aio_free_deleted_handlers(AioContext *ctx) | ||
97 | qemu_lockcnt_inc_and_unlock(&ctx->list_lock); | ||
98 | } | ||
99 | |||
100 | -static bool aio_dispatch_handlers(AioContext *ctx) | ||
101 | +static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node) | ||
102 | { | ||
103 | - AioHandler *node, *tmp; | ||
104 | bool progress = false; | ||
105 | + int revents; | ||
106 | |||
107 | - QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) { | ||
108 | - int revents; | ||
109 | + revents = node->pfd.revents & node->pfd.events; | ||
110 | + node->pfd.revents = 0; | ||
111 | |||
112 | - revents = node->pfd.revents & node->pfd.events; | ||
113 | - node->pfd.revents = 0; | ||
114 | + if (!QLIST_IS_INSERTED(node, node_deleted) && | ||
115 | + (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) && | ||
116 | + aio_node_check(ctx, node->is_external) && | ||
117 | + node->io_read) { | ||
118 | + node->io_read(node->opaque); | ||
119 | |||
120 | - if (!QLIST_IS_INSERTED(node, node_deleted) && | ||
121 | - (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) && | ||
122 | - aio_node_check(ctx, node->is_external) && | ||
123 | - node->io_read) { | ||
124 | - node->io_read(node->opaque); | ||
125 | - | ||
126 | - /* aio_notify() does not count as progress */ | ||
127 | - if (node->opaque != &ctx->notifier) { | ||
128 | - progress = true; | ||
129 | - } | ||
130 | - } | ||
131 | - if (!QLIST_IS_INSERTED(node, node_deleted) && | ||
132 | - (revents & (G_IO_OUT | G_IO_ERR)) && | ||
133 | - aio_node_check(ctx, node->is_external) && | ||
134 | - node->io_write) { | ||
135 | - node->io_write(node->opaque); | ||
136 | + /* aio_notify() does not count as progress */ | ||
137 | + if (node->opaque != &ctx->notifier) { | ||
138 | progress = true; | ||
139 | } | 83 | } |
140 | } | 84 | } |
141 | + if (!QLIST_IS_INSERTED(node, node_deleted) && | 85 | |
142 | + (revents & (G_IO_OUT | G_IO_ERR)) && | 86 | - qemu_lockcnt_dec(&ctx->list_lock); |
143 | + aio_node_check(ctx, node->is_external) && | 87 | first = true; |
144 | + node->io_write) { | 88 | |
145 | + node->io_write(node->opaque); | 89 | /* ctx->notifier is always registered. */ |
146 | + progress = true; | 90 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) |
147 | + } | 91 | progress |= aio_dispatch_handlers(ctx, event); |
92 | } while (count > 0); | ||
93 | |||
94 | + qemu_lockcnt_dec(&ctx->list_lock); | ||
148 | + | 95 | + |
149 | + return progress; | 96 | progress |= timerlistgroup_run_timers(&ctx->tlg); |
150 | +} | ||
151 | + | ||
152 | +/* | ||
153 | + * If we have a list of ready handlers then this is more efficient than | ||
154 | + * scanning all handlers with aio_dispatch_handlers(). | ||
155 | + */ | ||
156 | +static bool aio_dispatch_ready_handlers(AioContext *ctx, | ||
157 | + AioHandlerList *ready_list) | ||
158 | +{ | ||
159 | + bool progress = false; | ||
160 | + AioHandler *node; | ||
161 | + | ||
162 | + while ((node = QLIST_FIRST(ready_list))) { | ||
163 | + QLIST_SAFE_REMOVE(node, node_ready); | ||
164 | + progress = aio_dispatch_handler(ctx, node) || progress; | ||
165 | + } | ||
166 | + | ||
167 | + return progress; | ||
168 | +} | ||
169 | + | ||
170 | +/* Slower than aio_dispatch_ready_handlers() but only used via glib */ | ||
171 | +static bool aio_dispatch_handlers(AioContext *ctx) | ||
172 | +{ | ||
173 | + AioHandler *node, *tmp; | ||
174 | + bool progress = false; | ||
175 | + | ||
176 | + QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) { | ||
177 | + progress = aio_dispatch_handler(ctx, node) || progress; | ||
178 | + } | ||
179 | |||
180 | return progress; | 97 | return progress; |
181 | } | 98 | } |
182 | @@ -XXX,XX +XXX,XX @@ static bool try_poll_mode(AioContext *ctx, int64_t *timeout) | 99 | diff --git a/util/async.c b/util/async.c |
183 | 100 | index XXXXXXX..XXXXXXX 100644 | |
184 | bool aio_poll(AioContext *ctx, bool blocking) | 101 | --- a/util/async.c |
102 | +++ b/util/async.c | ||
103 | @@ -XXX,XX +XXX,XX @@ void aio_bh_call(QEMUBH *bh) | ||
104 | bh->cb(bh->opaque); | ||
105 | } | ||
106 | |||
107 | -/* Multiple occurrences of aio_bh_poll cannot be called concurrently */ | ||
108 | +/* Multiple occurrences of aio_bh_poll cannot be called concurrently. | ||
109 | + * The count in ctx->list_lock is incremented before the call, and is | ||
110 | + * not affected by the call. | ||
111 | + */ | ||
112 | int aio_bh_poll(AioContext *ctx) | ||
185 | { | 113 | { |
186 | + AioHandlerList ready_list = QLIST_HEAD_INITIALIZER(ready_list); | 114 | QEMUBH *bh, **bhp, *next; |
187 | AioHandler *node; | 115 | int ret; |
188 | int i; | 116 | bool deleted = false; |
189 | int ret = 0; | 117 | |
190 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | 118 | - qemu_lockcnt_inc(&ctx->list_lock); |
191 | /* wait until next event */ | 119 | - |
192 | if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) { | 120 | ret = 0; |
193 | npfd = 0; /* pollfds[] is not being used */ | 121 | for (bh = atomic_rcu_read(&ctx->first_bh); bh; bh = next) { |
194 | - ret = aio_epoll(ctx, timeout); | 122 | next = atomic_rcu_read(&bh->next); |
195 | + ret = aio_epoll(ctx, &ready_list, timeout); | 123 | @@ -XXX,XX +XXX,XX @@ int aio_bh_poll(AioContext *ctx) |
196 | } else { | 124 | |
197 | ret = qemu_poll_ns(pollfds, npfd, timeout); | 125 | /* remove deleted bhs */ |
126 | if (!deleted) { | ||
127 | - qemu_lockcnt_dec(&ctx->list_lock); | ||
128 | return ret; | ||
129 | } | ||
130 | |||
131 | - if (qemu_lockcnt_dec_and_lock(&ctx->list_lock)) { | ||
132 | + if (qemu_lockcnt_dec_if_lock(&ctx->list_lock)) { | ||
133 | bhp = &ctx->first_bh; | ||
134 | while (*bhp) { | ||
135 | bh = *bhp; | ||
136 | @@ -XXX,XX +XXX,XX @@ int aio_bh_poll(AioContext *ctx) | ||
137 | bhp = &bh->next; | ||
138 | } | ||
198 | } | 139 | } |
199 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | 140 | - qemu_lockcnt_unlock(&ctx->list_lock); |
200 | /* if we have any readable fds, dispatch event */ | 141 | + qemu_lockcnt_inc_and_unlock(&ctx->list_lock); |
201 | if (ret > 0) { | ||
202 | for (i = 0; i < npfd; i++) { | ||
203 | - nodes[i]->pfd.revents = pollfds[i].revents; | ||
204 | + int revents = pollfds[i].revents; | ||
205 | + | ||
206 | + if (revents) { | ||
207 | + add_ready_handler(&ready_list, nodes[i], revents); | ||
208 | + } | ||
209 | } | ||
210 | } | 142 | } |
211 | 143 | return ret; | |
212 | @@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking) | 144 | } |
213 | progress |= aio_bh_poll(ctx); | ||
214 | |||
215 | if (ret > 0) { | ||
216 | - progress |= aio_dispatch_handlers(ctx); | ||
217 | + progress |= aio_dispatch_ready_handlers(ctx, &ready_list); | ||
218 | } | ||
219 | |||
220 | aio_free_deleted_handlers(ctx); | ||
221 | -- | 145 | -- |
222 | 2.24.1 | 146 | 2.9.3 |
223 | 147 | ||
148 | diff view generated by jsdifflib |
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | The virtio-net fuzz target feeds inputs to all three virtio-net | ||
4 | virtqueues, and uses forking to avoid leaking state between fuzz runs. | ||
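As a worked example of the input format defined below (hypothetical bytes, decoded per the vq_action struct in the patch): the 5-byte chunk 02 10 01 00 00 selects queue 2 % 3, i.e. the control virtqueue, clamps the 0x10-byte length to whatever input remains, marks the descriptor writable with no chained descriptor, and, because the final rx byte is 0, places the payload on a virtqueue rather than writing it into the socket backend.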
5 | |||
6 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
7 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 3 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> |
8 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 4 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
9 | Message-id: 20200220041118.23264-21-alxndr@bu.edu | 5 | Reviewed-by: Fam Zheng <famz@redhat.com> |
6 | Reviewed-by: Daniel P. Berrange <berrange@redhat.com> | ||
7 | Message-id: 20170213135235.12274-19-pbonzini@redhat.com | ||
10 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 8 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
11 | --- | 9 | --- |
12 | tests/qtest/fuzz/Makefile.include | 1 + | 10 | include/block/block_int.h | 64 +++++++++++++++++++++++++----------------- |
13 | tests/qtest/fuzz/virtio_net_fuzz.c | 198 +++++++++++++++++++++++++++++ | 11 | include/sysemu/block-backend.h | 14 ++++++--- |
14 | 2 files changed, 199 insertions(+) | 12 | 2 files changed, 49 insertions(+), 29 deletions(-) |
15 | create mode 100644 tests/qtest/fuzz/virtio_net_fuzz.c | ||
16 | 13 | ||
17 | diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include | 14 | diff --git a/include/block/block_int.h b/include/block/block_int.h |
18 | index XXXXXXX..XXXXXXX 100644 | 15 | index XXXXXXX..XXXXXXX 100644 |
19 | --- a/tests/qtest/fuzz/Makefile.include | 16 | --- a/include/block/block_int.h |
20 | +++ b/tests/qtest/fuzz/Makefile.include | 17 | +++ b/include/block/block_int.h |
21 | @@ -XXX,XX +XXX,XX @@ fuzz-obj-y += tests/qtest/fuzz/qos_fuzz.o | 18 | @@ -XXX,XX +XXX,XX @@ struct BdrvChild { |
22 | 19 | * copied as well. | |
23 | # Targets | 20 | */ |
24 | fuzz-obj-y += tests/qtest/fuzz/i440fx_fuzz.o | 21 | struct BlockDriverState { |
25 | +fuzz-obj-y += tests/qtest/fuzz/virtio_net_fuzz.o | 22 | - int64_t total_sectors; /* if we are reading a disk image, give its |
26 | 23 | - size in sectors */ | |
27 | FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest | 24 | + /* Protected by big QEMU lock or read-only after opening. No special |
28 | 25 | + * locking needed during I/O... | |
29 | diff --git a/tests/qtest/fuzz/virtio_net_fuzz.c b/tests/qtest/fuzz/virtio_net_fuzz.c | 26 | + */ |
30 | new file mode 100644 | 27 | int open_flags; /* flags used to open the file, re-used for re-open */ |
31 | index XXXXXXX..XXXXXXX | 28 | bool read_only; /* if true, the media is read only */ |
32 | --- /dev/null | 29 | bool encrypted; /* if true, the media is encrypted */ |
33 | +++ b/tests/qtest/fuzz/virtio_net_fuzz.c | 30 | @@ -XXX,XX +XXX,XX @@ struct BlockDriverState { |
34 | @@ -XXX,XX +XXX,XX @@ | 31 | bool sg; /* if true, the device is a /dev/sg* */ |
35 | +/* | 32 | bool probed; /* if true, format was probed rather than specified */ |
36 | + * virtio-net Fuzzing Target | 33 | |
37 | + * | 34 | - int copy_on_read; /* if nonzero, copy read backing sectors into image. |
38 | + * Copyright Red Hat Inc., 2019 | 35 | - note this is a reference count */ |
39 | + * | 36 | - |
40 | + * Authors: | 37 | - CoQueue flush_queue; /* Serializing flush queue */ |
41 | + * Alexander Bulekov <alxndr@bu.edu> | 38 | - bool active_flush_req; /* Flush request in flight? */ |
42 | + * | 39 | - unsigned int write_gen; /* Current data generation */ |
43 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. | 40 | - unsigned int flushed_gen; /* Flushed write generation */ |
44 | + * See the COPYING file in the top-level directory. | 41 | - |
45 | + */ | 42 | BlockDriver *drv; /* NULL means no media */ |
43 | void *opaque; | ||
44 | |||
45 | @@ -XXX,XX +XXX,XX @@ struct BlockDriverState { | ||
46 | BdrvChild *backing; | ||
47 | BdrvChild *file; | ||
48 | |||
49 | - /* Callback before write request is processed */ | ||
50 | - NotifierWithReturnList before_write_notifiers; | ||
51 | - | ||
52 | - /* number of in-flight requests; overall and serialising */ | ||
53 | - unsigned int in_flight; | ||
54 | - unsigned int serialising_in_flight; | ||
55 | - | ||
56 | - bool wakeup; | ||
57 | - | ||
58 | - /* Offset after the highest byte written to */ | ||
59 | - uint64_t wr_highest_offset; | ||
60 | - | ||
61 | /* I/O Limits */ | ||
62 | BlockLimits bl; | ||
63 | |||
64 | @@ -XXX,XX +XXX,XX @@ struct BlockDriverState { | ||
65 | QTAILQ_ENTRY(BlockDriverState) bs_list; | ||
66 | /* element of the list of monitor-owned BDS */ | ||
67 | QTAILQ_ENTRY(BlockDriverState) monitor_list; | ||
68 | - QLIST_HEAD(, BdrvDirtyBitmap) dirty_bitmaps; | ||
69 | int refcnt; | ||
70 | |||
71 | - QLIST_HEAD(, BdrvTrackedRequest) tracked_requests; | ||
72 | - | ||
73 | /* operation blockers */ | ||
74 | QLIST_HEAD(, BdrvOpBlocker) op_blockers[BLOCK_OP_TYPE_MAX]; | ||
75 | |||
76 | @@ -XXX,XX +XXX,XX @@ struct BlockDriverState { | ||
77 | /* The error object in use for blocking operations on backing_hd */ | ||
78 | Error *backing_blocker; | ||
79 | |||
80 | + /* Protected by AioContext lock */ | ||
46 | + | 81 | + |
47 | +#include "qemu/osdep.h" | 82 | + /* If true, copy read backing sectors into image. Can be >1 if more |
83 | + * than one client has requested copy-on-read. | ||
84 | + */ | ||
85 | + int copy_on_read; | ||
48 | + | 86 | + |
49 | +#include "standard-headers/linux/virtio_config.h" | 87 | + /* If we are reading a disk image, give its size in sectors. |
50 | +#include "tests/qtest/libqtest.h" | 88 | + * Generally read-only; it is written to by load_vmstate and save_vmstate, |
51 | +#include "tests/qtest/libqos/virtio-net.h" | 89 | + * but the block layer is quiescent during those. |
52 | +#include "fuzz.h" | 90 | + */ |
53 | +#include "fork_fuzz.h" | 91 | + int64_t total_sectors; |
54 | +#include "qos_fuzz.h" | ||
55 | + | 92 | + |
93 | + /* Callback before write request is processed */ | ||
94 | + NotifierWithReturnList before_write_notifiers; | ||
56 | + | 95 | + |
57 | +#define QVIRTIO_NET_TIMEOUT_US (30 * 1000 * 1000) | 96 | + /* number of in-flight requests; overall and serialising */ |
58 | +#define QVIRTIO_RX_VQ 0 | 97 | + unsigned int in_flight; |
59 | +#define QVIRTIO_TX_VQ 1 | 98 | + unsigned int serialising_in_flight; |
60 | +#define QVIRTIO_CTRL_VQ 2 | ||
61 | + | 99 | + |
62 | +static int sockfds[2]; | 100 | + bool wakeup; |
63 | +static bool sockfds_initialized; | ||
64 | + | 101 | + |
65 | +static void virtio_net_fuzz_multi(QTestState *s, | 102 | + /* Offset after the highest byte written to */ |
66 | + const unsigned char *Data, size_t Size, bool check_used) | 103 | + uint64_t wr_highest_offset; |
67 | +{ | ||
68 | + typedef struct vq_action { | ||
69 | + uint8_t queue; | ||
70 | + uint8_t length; | ||
71 | + uint8_t write; | ||
72 | + uint8_t next; | ||
73 | + uint8_t rx; | ||
74 | + } vq_action; | ||
75 | + | 104 | + |
76 | + uint32_t free_head = 0; | 105 | /* threshold limit for writes, in bytes. "High water mark". */ |
106 | uint64_t write_threshold_offset; | ||
107 | NotifierWithReturn write_threshold_notifier; | ||
108 | @@ -XXX,XX +XXX,XX @@ struct BlockDriverState { | ||
109 | /* counter for nested bdrv_io_plug */ | ||
110 | unsigned io_plugged; | ||
111 | |||
112 | + QLIST_HEAD(, BdrvTrackedRequest) tracked_requests; | ||
113 | + CoQueue flush_queue; /* Serializing flush queue */ | ||
114 | + bool active_flush_req; /* Flush request in flight? */ | ||
115 | + unsigned int write_gen; /* Current data generation */ | ||
116 | + unsigned int flushed_gen; /* Flushed write generation */ | ||
77 | + | 117 | + |
78 | + QGuestAllocator *t_alloc = fuzz_qos_alloc; | 118 | + QLIST_HEAD(, BdrvDirtyBitmap) dirty_bitmaps; |
79 | + | 119 | + |
80 | + QVirtioNet *net_if = fuzz_qos_obj; | 120 | + /* do we need to tell the quest if we have a volatile write cache? */ |
81 | + QVirtioDevice *dev = net_if->vdev; | 121 | + int enable_write_cache; |
82 | + QVirtQueue *q; | ||
83 | + vq_action vqa; | ||
84 | + while (Size >= sizeof(vqa)) { | ||
85 | + memcpy(&vqa, Data, sizeof(vqa)); | ||
86 | + Data += sizeof(vqa); | ||
87 | + Size -= sizeof(vqa); | ||
88 | + | 122 | + |
89 | + q = net_if->queues[vqa.queue % 3]; | 123 | int quiesce_counter; |
124 | }; | ||
125 | |||
126 | diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h | ||
127 | index XXXXXXX..XXXXXXX 100644 | ||
128 | --- a/include/sysemu/block-backend.h | ||
129 | +++ b/include/sysemu/block-backend.h | ||
130 | @@ -XXX,XX +XXX,XX @@ typedef struct BlockDevOps { | ||
131 | * fields that must be public. This is in particular for QLIST_ENTRY() and | ||
132 | * friends so that BlockBackends can be kept in lists outside block-backend.c */ | ||
133 | typedef struct BlockBackendPublic { | ||
134 | - /* I/O throttling. | ||
135 | - * throttle_state tells us if this BlockBackend has I/O limits configured. | ||
136 | - * io_limits_disabled tells us if they are currently being enforced */ | ||
137 | + /* I/O throttling has its own locking, but also some fields are | ||
138 | + * protected by the AioContext lock. | ||
139 | + */ | ||
90 | + | 140 | + |
91 | + vqa.length = vqa.length >= Size ? Size : vqa.length; | 141 | + /* Protected by AioContext lock. */ |
142 | CoQueue throttled_reqs[2]; | ||
92 | + | 143 | + |
93 | + /* | 144 | + /* Nonzero if the I/O limits are currently being ignored; generally |
94 | + * Only attempt to write incoming packets, when using the socket | 145 | + * it is zero. */ |
95 | + * backend. Otherwise, always place the input on a virtqueue. | 146 | unsigned int io_limits_disabled; |
96 | + */ | 147 | |
97 | + if (vqa.rx && sockfds_initialized) { | 148 | /* The following fields are protected by the ThrottleGroup lock. |
98 | + write(sockfds[0], Data, vqa.length); | 149 | - * See the ThrottleGroup documentation for details. */ |
99 | + } else { | 150 | + * See the ThrottleGroup documentation for details. |
100 | + vqa.rx = 0; | 151 | + * throttle_state tells us if I/O limits are configured. */ |
101 | + uint64_t req_addr = guest_alloc(t_alloc, vqa.length); | 152 | ThrottleState *throttle_state; |
102 | + /* | 153 | ThrottleTimers throttle_timers; |
103 | + * If checking the used ring, ensure that the fuzzer doesn't trigger | ||
104 | + * a trivial assertion failure on a zero-sized buffer | ||
105 | + */ | ||
106 | + qtest_memwrite(s, req_addr, Data, vqa.length); | ||
107 | + | ||
108 | + | ||
109 | + free_head = qvirtqueue_add(s, q, req_addr, vqa.length, | ||
110 | + vqa.write, vqa.next); | ||
112 | + qvirtqueue_kick(s, dev, q, free_head); | ||
113 | + } | ||
114 | + | ||
115 | + /* Run the main loop */ | ||
116 | + qtest_clock_step(s, 100); | ||
117 | + flush_events(s); | ||
118 | + | ||
119 | + /* Wait on used descriptors */ | ||
120 | + if (check_used && !vqa.rx) { | ||
121 | + gint64 start_time = g_get_monotonic_time(); | ||
122 | + /* | ||
123 | + * normally, we could just use qvirtio_wait_used_elem, but since we | ||
124 | + * must manually run the main-loop for all the bhs to run, we use | ||
125 | + * this hack with flush_events(), to run the main_loop | ||
126 | + */ | ||
127 | + while (!vqa.rx && q != net_if->queues[QVIRTIO_RX_VQ]) { | ||
128 | + uint32_t got_desc_idx; | ||
129 | + /* Input led to a virtio_error */ | ||
130 | + if (dev->bus->get_status(dev) & VIRTIO_CONFIG_S_NEEDS_RESET) { | ||
131 | + break; | ||
132 | + } | ||
133 | + if (dev->bus->get_queue_isr_status(dev, q) && | ||
134 | + qvirtqueue_get_buf(s, q, &got_desc_idx, NULL)) { | ||
135 | + g_assert_cmpint(got_desc_idx, ==, free_head); | ||
136 | + break; | ||
137 | + } | ||
138 | + g_assert(g_get_monotonic_time() - start_time | ||
139 | + <= QVIRTIO_NET_TIMEOUT_US); | ||
140 | + | ||
141 | + /* Run the main loop */ | ||
142 | + qtest_clock_step(s, 100); | ||
143 | + flush_events(s); | ||
144 | + } | ||
145 | + } | ||
146 | + Data += vqa.length; | ||
147 | + Size -= vqa.length; | ||
148 | + } | ||
149 | +} | ||
150 | + | ||
151 | +static void virtio_net_fork_fuzz(QTestState *s, | ||
152 | + const unsigned char *Data, size_t Size) | ||
153 | +{ | ||
154 | + if (fork() == 0) { | ||
155 | + virtio_net_fuzz_multi(s, Data, Size, false); | ||
156 | + flush_events(s); | ||
157 | + _Exit(0); | ||
158 | + } else { | ||
159 | + wait(NULL); | ||
160 | + } | ||
161 | +} | ||
162 | + | ||
163 | +static void virtio_net_fork_fuzz_check_used(QTestState *s, | ||
164 | + const unsigned char *Data, size_t Size) | ||
165 | +{ | ||
166 | + if (fork() == 0) { | ||
167 | + virtio_net_fuzz_multi(s, Data, Size, true); | ||
168 | + flush_events(s); | ||
169 | + _Exit(0); | ||
170 | + } else { | ||
171 | + wait(NULL); | ||
172 | + } | ||
173 | +} | ||
174 | + | ||
175 | +static void virtio_net_pre_fuzz(QTestState *s) | ||
176 | +{ | ||
177 | + qos_init_path(s); | ||
178 | + counter_shm_init(); | ||
179 | +} | ||
180 | + | ||
181 | +static void *virtio_net_test_setup_socket(GString *cmd_line, void *arg) | ||
182 | +{ | ||
183 | + int ret = socketpair(PF_UNIX, SOCK_STREAM, 0, sockfds); | ||
184 | + g_assert_cmpint(ret, !=, -1); | ||
185 | + fcntl(sockfds[0], F_SETFL, O_NONBLOCK); | ||
186 | + sockfds_initialized = true; | ||
187 | + g_string_append_printf(cmd_line, " -netdev socket,fd=%d,id=hs0 ", | ||
188 | + sockfds[1]); | ||
189 | + return arg; | ||
190 | +} | ||
191 | + | ||
192 | +static void *virtio_net_test_setup_user(GString *cmd_line, void *arg) | ||
193 | +{ | ||
194 | + g_string_append_printf(cmd_line, " -netdev user,id=hs0 "); | ||
195 | + return arg; | ||
196 | +} | ||
197 | + | ||
198 | +static void register_virtio_net_fuzz_targets(void) | ||
199 | +{ | ||
200 | + fuzz_add_qos_target(&(FuzzTarget){ | ||
201 | + .name = "virtio-net-socket", | ||
202 | + .description = "Fuzz the virtio-net virtual queues. Fuzz incoming " | ||
203 | + "traffic using the socket backend", | ||
204 | + .pre_fuzz = &virtio_net_pre_fuzz, | ||
205 | + .fuzz = virtio_net_fork_fuzz,}, | ||
206 | + "virtio-net", | ||
207 | + &(QOSGraphTestOptions){.before = virtio_net_test_setup_socket} | ||
208 | + ); | ||
209 | + | ||
210 | + fuzz_add_qos_target(&(FuzzTarget){ | ||
211 | + .name = "virtio-net-socket-check-used", | ||
212 | + .description = "Fuzz the virtio-net virtual queues. Wait for the " | ||
213 | + "descriptors to be used. Timeout may indicate improperly handled " | ||
214 | + "input", | ||
215 | + .pre_fuzz = &virtio_net_pre_fuzz, | ||
216 | + .fuzz = virtio_net_fork_fuzz_check_used,}, | ||
217 | + "virtio-net", | ||
218 | + &(QOSGraphTestOptions){.before = virtio_net_test_setup_socket} | ||
219 | + ); | ||
220 | + fuzz_add_qos_target(&(FuzzTarget){ | ||
221 | + .name = "virtio-net-slirp", | ||
222 | + .description = "Fuzz the virtio-net virtual queues with the slirp " | ||
223 | + " backend. Warning: May result in network traffic emitted from the " | ||
224 | + " process. Run in an isolated network environment.", | ||
225 | + .pre_fuzz = &virtio_net_pre_fuzz, | ||
226 | + .fuzz = virtio_net_fork_fuzz,}, | ||
227 | + "virtio-net", | ||
228 | + &(QOSGraphTestOptions){.before = virtio_net_test_setup_user} | ||
229 | + ); | ||
230 | +} | ||
231 | + | ||
232 | +fuzz_target_init(register_virtio_net_fuzz_targets); | ||
233 | -- | 155 | -- |
234 | 2.24.1 | 156 | 2.9.3 |
235 | 157 | ||
158 | diff view generated by jsdifflib |
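The fork-based harness used by the target above deserves a note: each input runs in a child process, so whatever device or guest state the input corrupts dies with the child, and the parent re-enters the next iteration pristine. A minimal sketch of the control flow (run_one_input stands in for the target-specific logic and is not a QEMU function):

    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void fuzz_one(const unsigned char *data, size_t size,
                         void (*run_one_input)(const unsigned char *, size_t))
    {
        if (fork() == 0) {
            run_one_input(data, size); /* mutate device state freely */
            _Exit(0);                  /* all state discarded with the child */
        } else {
            wait(NULL);                /* parent never sees the side effects */
        }
    }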
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 3 | This uses the lock-free mutex described in the paper '"Blocking without |
4 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 4 | Locking", or LFTHREADS: A lock-free thread library' by Gidenstam and |
5 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 5 | Papatriantafilou. The same technique is used in OSv, and in fact |
6 | Message-id: 20200220041118.23264-17-alxndr@bu.edu | 6 | the code is essentially a conversion to C of OSv's code. |
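A rough sketch of the responsibility hand-off idea behind the CoMutex change (C11 atomics, illustrative names; the real logic is in the qemu-coroutine-lock.c hunks below): the unlocker publishes a unique non-zero token, and a locker that observes the token tries to claim it with a compare-and-swap, so exactly one side takes responsibility for the wakeup.

    #include <stdatomic.h>
    #include <stdbool.h>

    static _Atomic unsigned handoff; /* 0 means no hand-off in progress */

    /* Unlock side: publish a fresh token drawn from a sequence counter. */
    static void publish_handoff(unsigned token)
    {
        atomic_store(&handoff, token);
    }

    /* Lock side: claim the wakeup. The CAS can succeed for only one
     * contender, because it swaps the token back to 0. */
    static bool try_take_handoff(void)
    {
        unsigned seen = atomic_load(&handoff);
        return seen != 0 &&
               atomic_compare_exchange_strong(&handoff, &seen, 0);
    }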
7 | |||
8 | [Added missing coroutine_fn in tests/test-aio-multithread.c. | ||
9 | --Stefan] | ||
10 | |||
11 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | ||
12 | Reviewed-by: Fam Zheng <famz@redhat.com> | ||
13 | Message-id: 20170213181244.16297-2-pbonzini@redhat.com | ||
7 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 14 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
8 | --- | 15 | --- |
9 | tests/qtest/fuzz/Makefile.include | 2 + | 16 | include/qemu/coroutine.h | 17 ++++- |
10 | tests/qtest/fuzz/qos_fuzz.c | 234 ++++++++++++++++++++++++++++++ | 17 | tests/test-aio-multithread.c | 86 ++++++++++++++++++++++++ |
11 | tests/qtest/fuzz/qos_fuzz.h | 33 +++++ | 18 | util/qemu-coroutine-lock.c | 155 ++++++++++++++++++++++++++++++++++++++++--- |
12 | 3 files changed, 269 insertions(+) | 19 | util/trace-events | 1 + |
13 | create mode 100644 tests/qtest/fuzz/qos_fuzz.c | 20 | 4 files changed, 246 insertions(+), 13 deletions(-) |
14 | create mode 100644 tests/qtest/fuzz/qos_fuzz.h | 21 | |
15 | 22 | diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h | |
16 | diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include | ||
17 | index XXXXXXX..XXXXXXX 100644 | 23 | index XXXXXXX..XXXXXXX 100644 |
18 | --- a/tests/qtest/fuzz/Makefile.include | 24 | --- a/include/qemu/coroutine.h |
19 | +++ b/tests/qtest/fuzz/Makefile.include | 25 | +++ b/include/qemu/coroutine.h |
26 | @@ -XXX,XX +XXX,XX @@ bool qemu_co_queue_empty(CoQueue *queue); | ||
27 | /** | ||
28 | * Provides a mutex that can be used to synchronise coroutines | ||
29 | */ | ||
30 | +struct CoWaitRecord; | ||
31 | typedef struct CoMutex { | ||
32 | - bool locked; | ||
33 | + /* Count of pending lockers; 0 for a free mutex, 1 for an | ||
34 | + * uncontended mutex. | ||
35 | + */ | ||
36 | + unsigned locked; | ||
37 | + | ||
38 | + /* A queue of waiters. Elements are added atomically in front of | ||
39 | + * from_push. to_pop is only populated, and popped from, by whoever | ||
40 | + * is in charge of the next wakeup. This can be an unlocker or, | ||
41 | + * through the handoff protocol, a locker that is about to go to sleep. | ||
42 | + */ | ||
43 | + QSLIST_HEAD(, CoWaitRecord) from_push, to_pop; | ||
44 | + | ||
45 | + unsigned handoff, sequence; | ||
46 | + | ||
47 | Coroutine *holder; | ||
48 | - CoQueue queue; | ||
49 | } CoMutex; | ||
50 | |||
51 | /** | ||
52 | diff --git a/tests/test-aio-multithread.c b/tests/test-aio-multithread.c | ||
53 | index XXXXXXX..XXXXXXX 100644 | ||
54 | --- a/tests/test-aio-multithread.c | ||
55 | +++ b/tests/test-aio-multithread.c | ||
56 | @@ -XXX,XX +XXX,XX @@ static void test_multi_co_schedule_10(void) | ||
57 | test_multi_co_schedule(10); | ||
58 | } | ||
59 | |||
60 | +/* CoMutex thread-safety. */ | ||
61 | + | ||
62 | +static uint32_t atomic_counter; | ||
63 | +static uint32_t running; | ||
64 | +static uint32_t counter; | ||
65 | +static CoMutex comutex; | ||
66 | + | ||
67 | +static void coroutine_fn test_multi_co_mutex_entry(void *opaque) | ||
68 | +{ | ||
69 | + while (!atomic_mb_read(&now_stopping)) { | ||
70 | + qemu_co_mutex_lock(&comutex); | ||
71 | + counter++; | ||
72 | + qemu_co_mutex_unlock(&comutex); | ||
73 | + | ||
74 | + /* Increase atomic_counter *after* releasing the mutex. Otherwise | ||
75 | + * there is a chance (it happens about 1 in 3 runs) that the iothread | ||
76 | + * exits before the coroutine is woken up, causing a spurious | ||
77 | + * assertion failure. | ||
78 | + */ | ||
79 | + atomic_inc(&atomic_counter); | ||
80 | + } | ||
81 | + atomic_dec(&running); | ||
82 | +} | ||
83 | + | ||
84 | +static void test_multi_co_mutex(int threads, int seconds) | ||
85 | +{ | ||
86 | + int i; | ||
87 | + | ||
88 | + qemu_co_mutex_init(&comutex); | ||
89 | + counter = 0; | ||
90 | + atomic_counter = 0; | ||
91 | + now_stopping = false; | ||
92 | + | ||
93 | + create_aio_contexts(); | ||
94 | + assert(threads <= NUM_CONTEXTS); | ||
95 | + running = threads; | ||
96 | + for (i = 0; i < threads; i++) { | ||
97 | + Coroutine *co1 = qemu_coroutine_create(test_multi_co_mutex_entry, NULL); | ||
98 | + aio_co_schedule(ctx[i], co1); | ||
99 | + } | ||
100 | + | ||
101 | + g_usleep(seconds * 1000000); | ||
102 | + | ||
103 | + atomic_mb_set(&now_stopping, true); | ||
104 | + while (running > 0) { | ||
105 | + g_usleep(100000); | ||
106 | + } | ||
107 | + | ||
108 | + join_aio_contexts(); | ||
109 | + g_test_message("%d iterations/second\n", counter / seconds); | ||
110 | + g_assert_cmpint(counter, ==, atomic_counter); | ||
111 | +} | ||
112 | + | ||
113 | +/* Testing with NUM_CONTEXTS threads focuses on the queue. The mutex however | ||
114 | + * is too contended (and the threads spend too much time in aio_poll) | ||
115 | + * to actually stress the handoff protocol. | ||
116 | + */ | ||
117 | +static void test_multi_co_mutex_1(void) | ||
118 | +{ | ||
119 | + test_multi_co_mutex(NUM_CONTEXTS, 1); | ||
120 | +} | ||
121 | + | ||
122 | +static void test_multi_co_mutex_10(void) | ||
123 | +{ | ||
124 | + test_multi_co_mutex(NUM_CONTEXTS, 10); | ||
125 | +} | ||
126 | + | ||
127 | +/* Testing with fewer threads stresses the handoff protocol too. Still, the | ||
128 | + * case where the locker _can_ pick up a handoff is very rare, happening | ||
129 | + * about 10 times in 1 million, so increase the runtime a bit compared to | ||
130 | + * other "quick" testcases that only run for 1 second. | ||
131 | + */ | ||
132 | +static void test_multi_co_mutex_2_3(void) | ||
133 | +{ | ||
134 | + test_multi_co_mutex(2, 3); | ||
135 | +} | ||
136 | + | ||
137 | +static void test_multi_co_mutex_2_30(void) | ||
138 | +{ | ||
139 | + test_multi_co_mutex(2, 30); | ||
140 | +} | ||
141 | + | ||
142 | /* End of tests. */ | ||
143 | |||
144 | int main(int argc, char **argv) | ||
145 | @@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv) | ||
146 | g_test_add_func("/aio/multi/lifecycle", test_lifecycle); | ||
147 | if (g_test_quick()) { | ||
148 | g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_1); | ||
149 | + g_test_add_func("/aio/multi/mutex/contended", test_multi_co_mutex_1); | ||
150 | + g_test_add_func("/aio/multi/mutex/handoff", test_multi_co_mutex_2_3); | ||
151 | } else { | ||
152 | g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_10); | ||
153 | + g_test_add_func("/aio/multi/mutex/contended", test_multi_co_mutex_10); | ||
154 | + g_test_add_func("/aio/multi/mutex/handoff", test_multi_co_mutex_2_30); | ||
155 | } | ||
156 | return g_test_run(); | ||
157 | } | ||
158 | diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c | ||
159 | index XXXXXXX..XXXXXXX 100644 | ||
160 | --- a/util/qemu-coroutine-lock.c | ||
161 | +++ b/util/qemu-coroutine-lock.c | ||
20 | @@ -XXX,XX +XXX,XX @@ | 162 | @@ -XXX,XX +XXX,XX @@ |
21 | QEMU_PROG_FUZZ=qemu-fuzz-$(TARGET_NAME)$(EXESUF) | 163 | * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, |
22 | 164 | * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN | |
23 | fuzz-obj-y += tests/qtest/libqtest.o | 165 | * THE SOFTWARE. |
24 | +fuzz-obj-y += $(libqos-obj-y) | ||
25 | fuzz-obj-y += tests/qtest/fuzz/fuzz.o # Fuzzer skeleton | ||
26 | fuzz-obj-y += tests/qtest/fuzz/fork_fuzz.o | ||
27 | +fuzz-obj-y += tests/qtest/fuzz/qos_fuzz.o | ||
28 | |||
29 | FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest | ||
30 | |||
31 | diff --git a/tests/qtest/fuzz/qos_fuzz.c b/tests/qtest/fuzz/qos_fuzz.c | ||
32 | new file mode 100644 | ||
33 | index XXXXXXX..XXXXXXX | ||
34 | --- /dev/null | ||
35 | +++ b/tests/qtest/fuzz/qos_fuzz.c | ||
36 | @@ -XXX,XX +XXX,XX @@ | ||
37 | +/* | ||
38 | + * QOS-assisted fuzzing helpers | ||
39 | + * | 166 | + * |
40 | + * Copyright (c) 2018 Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com> | 167 | + * The lock-free mutex implementation is based on OSv |
168 | + * (core/lfmutex.cc, include/lockfree/mutex.hh). | ||
169 | + * Copyright (C) 2013 Cloudius Systems, Ltd. | ||
170 | */ | ||
171 | |||
172 | #include "qemu/osdep.h" | ||
173 | @@ -XXX,XX +XXX,XX @@ bool qemu_co_queue_empty(CoQueue *queue) | ||
174 | return QSIMPLEQ_FIRST(&queue->entries) == NULL; | ||
175 | } | ||
176 | |||
177 | +/* The wait records are handled with a multiple-producer, single-consumer | ||
178 | + * lock-free queue. There cannot be two concurrent pop_waiter() calls | ||
179 | + * because pop_waiter() can only be called while mutex->handoff is zero. | ||
180 | + * This can happen in three cases: | ||
181 | + * - in qemu_co_mutex_unlock, before the hand-off protocol has started. | ||
182 | + * In this case, qemu_co_mutex_lock will see mutex->handoff == 0 and | ||
183 | + * not take part in the handoff. | ||
184 | + * - in qemu_co_mutex_lock, if it steals the hand-off responsibility from | ||
185 | + * qemu_co_mutex_unlock. In this case, qemu_co_mutex_unlock will fail | ||
186 | + * the cmpxchg (it will see either 0 or the next sequence value) and | ||
187 | + * exit. The next hand-off cannot begin until qemu_co_mutex_lock has | ||
188 | + * woken up someone. | ||
189 | + * - in qemu_co_mutex_unlock, if it takes the hand-off token itself. | ||
190 | + * In this case another iteration starts with mutex->handoff == 0; | ||
191 | + * a concurrent qemu_co_mutex_lock will fail the cmpxchg, and | ||
192 | + * qemu_co_mutex_unlock will go back to case (1). | ||
41 | + * | 193 | + * |
42 | + * This library is free software; you can redistribute it and/or | 194 | + * The following functions manage this queue. |
43 | + * modify it under the terms of the GNU Lesser General Public | ||
44 | + * License version 2 as published by the Free Software Foundation. | ||
45 | + * | ||
46 | + * This library is distributed in the hope that it will be useful, | ||
47 | + * but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
48 | + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU | ||
49 | + * Lesser General Public License for more details. | ||
50 | + * | ||
51 | + * You should have received a copy of the GNU Lesser General Public | ||
52 | + * License along with this library; if not, see <http://www.gnu.org/licenses/> | ||
53 | + */ | 195 | + */ |
54 | + | 196 | +typedef struct CoWaitRecord { |
55 | +#include "qemu/osdep.h" | 197 | + Coroutine *co; |
56 | +#include "qemu/units.h" | 198 | + QSLIST_ENTRY(CoWaitRecord) next; |
57 | +#include "qapi/error.h" | 199 | +} CoWaitRecord; |
58 | +#include "qemu-common.h" | 200 | + |
59 | +#include "exec/memory.h" | 201 | +static void push_waiter(CoMutex *mutex, CoWaitRecord *w) |
60 | +#include "exec/address-spaces.h" | 202 | +{ |
61 | +#include "sysemu/sysemu.h" | 203 | + w->co = qemu_coroutine_self(); |
62 | +#include "qemu/main-loop.h" | 204 | + QSLIST_INSERT_HEAD_ATOMIC(&mutex->from_push, w, next); |
63 | + | 205 | +} |
64 | +#include "tests/qtest/libqtest.h" | 206 | + |
65 | +#include "tests/qtest/libqos/malloc.h" | 207 | +static void move_waiters(CoMutex *mutex) |
66 | +#include "tests/qtest/libqos/qgraph.h" | 208 | +{ |
67 | +#include "tests/qtest/libqos/qgraph_internal.h" | 209 | + QSLIST_HEAD(, CoWaitRecord) reversed; |
68 | +#include "tests/qtest/libqos/qos_external.h" | 210 | + QSLIST_MOVE_ATOMIC(&reversed, &mutex->from_push); |
69 | + | 211 | + while (!QSLIST_EMPTY(&reversed)) { |
70 | +#include "fuzz.h" | 212 | + CoWaitRecord *w = QSLIST_FIRST(&reversed); |
71 | +#include "qos_fuzz.h" | 213 | + QSLIST_REMOVE_HEAD(&reversed, next); |
72 | + | 214 | + QSLIST_INSERT_HEAD(&mutex->to_pop, w, next); |
73 | +#include "qapi/qapi-commands-machine.h" | 215 | + } |
74 | +#include "qapi/qapi-commands-qom.h" | 216 | +} |
75 | +#include "qapi/qmp/qlist.h" | 217 | + |
76 | + | 218 | +static CoWaitRecord *pop_waiter(CoMutex *mutex) |
77 | + | 219 | +{ |
78 | +void *fuzz_qos_obj; | 220 | + CoWaitRecord *w; |
79 | +QGuestAllocator *fuzz_qos_alloc; | 221 | + |
80 | + | 222 | + if (QSLIST_EMPTY(&mutex->to_pop)) { |
81 | +static const char *fuzz_target_name; | 223 | + move_waiters(mutex); |
82 | +static char **fuzz_path_vec; | 224 | + if (QSLIST_EMPTY(&mutex->to_pop)) { |
83 | + | 225 | + return NULL; |
84 | +/* | 226 | + } |
85 | + * Replaced the qmp commands with direct qmp_marshal calls. | 227 | + } |
86 | + * Probably there is a better way to do this | 228 | + w = QSLIST_FIRST(&mutex->to_pop); |
87 | + */ | 229 | + QSLIST_REMOVE_HEAD(&mutex->to_pop, next); |
88 | +static void qos_set_machines_devices_available(void) | 230 | + return w; |
89 | +{ | 231 | +} |
90 | + QDict *req = qdict_new(); | 232 | + |
91 | + QObject *response; | 233 | +static bool has_waiters(CoMutex *mutex) |
92 | + QDict *args = qdict_new(); | 234 | +{ |
93 | + QList *lst; | 235 | + return QSLIST_EMPTY(&mutex->to_pop) || QSLIST_EMPTY(&mutex->from_push); |
94 | + Error *err = NULL; | 236 | +} |
95 | + | 237 | + |
96 | + qmp_marshal_query_machines(NULL, &response, &err); | 238 | void qemu_co_mutex_init(CoMutex *mutex) |
97 | + assert(!err); | 239 | { |
98 | + lst = qobject_to(QList, response); | 240 | memset(mutex, 0, sizeof(*mutex)); |
99 | + apply_to_qlist(lst, true); | 241 | - qemu_co_queue_init(&mutex->queue); |
100 | + | 242 | } |
101 | + qobject_unref(response); | 243 | |
102 | + | 244 | -void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex) |
103 | + | 245 | +static void coroutine_fn qemu_co_mutex_lock_slowpath(CoMutex *mutex) |
104 | + qdict_put_str(req, "execute", "qom-list-types"); | 246 | { |
105 | + qdict_put_str(args, "implements", "device"); | 247 | Coroutine *self = qemu_coroutine_self(); |
106 | + qdict_put_bool(args, "abstract", true); | 248 | + CoWaitRecord w; |
107 | + qdict_put_obj(req, "arguments", (QObject *) args); | 249 | + unsigned old_handoff; |
108 | + | 250 | |
109 | + qmp_marshal_qom_list_types(args, &response, &err); | 251 | trace_qemu_co_mutex_lock_entry(mutex, self); |
110 | + assert(!err); | 252 | + w.co = self; |
111 | + lst = qobject_to(QList, response); | 253 | + push_waiter(mutex, &w); |
112 | + apply_to_qlist(lst, false); | 254 | |
113 | + qobject_unref(response); | 255 | - while (mutex->locked) { |
114 | + qobject_unref(req); | 256 | - qemu_co_queue_wait(&mutex->queue); |
115 | +} | 257 | + /* This is the "Responsibility Hand-Off" protocol; a lock() picks from |
116 | + | 258 | + * a concurrent unlock() the responsibility of waking somebody up. |
117 | +static char **current_path; | 259 | + */ |
118 | + | 260 | + old_handoff = atomic_mb_read(&mutex->handoff); |
119 | +void *qos_allocate_objects(QTestState *qts, QGuestAllocator **p_alloc) | 261 | + if (old_handoff && |
120 | +{ | 262 | + has_waiters(mutex) && |
121 | + return allocate_objects(qts, current_path + 1, p_alloc); | 263 | + atomic_cmpxchg(&mutex->handoff, old_handoff, 0) == old_handoff) { |
122 | +} | 264 | + /* There can be no concurrent pops, because there can be only |
123 | + | 265 | + * one active handoff at a time. |
124 | +static const char *qos_build_main_args(void) | 266 | + */ |
125 | +{ | 267 | + CoWaitRecord *to_wake = pop_waiter(mutex); |
126 | + char **path = fuzz_path_vec; | 268 | + Coroutine *co = to_wake->co; |
127 | + QOSGraphNode *test_node; | 269 | + if (co == self) { |
128 | + GString *cmd_line = g_string_new(path[0]); | 270 | + /* We got the lock ourselves! */ |
129 | + void *test_arg; | 271 | + assert(to_wake == &w); |
130 | + | 272 | + return; |
131 | + if (!path) { | 273 | + } |
132 | + fprintf(stderr, "QOS Path not found\n"); | 274 | + |
133 | + abort(); | 275 | + aio_co_wake(co); |
134 | + } | 276 | } |
135 | + | 277 | |
136 | + /* Before test */ | 278 | - mutex->locked = true; |
137 | + current_path = path; | 279 | - mutex->holder = self; |
138 | + test_node = qos_graph_get_node(path[(g_strv_length(path) - 1)]); | 280 | - self->locks_held++; |
139 | + test_arg = test_node->u.test.arg; | 281 | - |
140 | + if (test_node->u.test.before) { | 282 | + qemu_coroutine_yield(); |
141 | + test_arg = test_node->u.test.before(cmd_line, test_arg); | 283 | trace_qemu_co_mutex_lock_return(mutex, self); |
142 | + } | 284 | } |
143 | + /* Prepend the arguments that we need */ | 285 | |
144 | + g_string_prepend(cmd_line, | 286 | +void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex) |
145 | + TARGET_NAME " -display none -machine accel=qtest -m 64 "); | 287 | +{ |
146 | + return cmd_line->str; | 288 | + Coroutine *self = qemu_coroutine_self(); |
147 | +} | 289 | + |
148 | + | 290 | + if (atomic_fetch_inc(&mutex->locked) == 0) { |
149 | +/* | 291 | + /* Uncontended. */ |
150 | + * This function is largely a copy of qos-test.c:walk_path. Since walk_path | 292 | + trace_qemu_co_mutex_lock_uncontended(mutex, self); |
151 | + * is itself a callback, its a little annoying to add another argument/layer of | 293 | + } else { |
152 | + * indirection | 294 | + qemu_co_mutex_lock_slowpath(mutex); |
153 | + */ | 295 | + } |
154 | +static void walk_path(QOSGraphNode *orig_path, int len) | 296 | + mutex->holder = self; |
155 | +{ | 297 | + self->locks_held++; |
156 | + QOSGraphNode *path; | 298 | +} |
157 | + QOSGraphEdge *edge; | 299 | + |
158 | + | 300 | void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex) |
159 | + /* etype set to QEDGE_CONSUMED_BY so that machine can add to the command line */ | 301 | { |
160 | + QOSEdgeType etype = QEDGE_CONSUMED_BY; | 302 | Coroutine *self = qemu_coroutine_self(); |
161 | + | 303 | |
162 | + /* twice QOS_PATH_MAX_ELEMENT_SIZE since each edge can have its arg */ | 304 | trace_qemu_co_mutex_unlock_entry(mutex, self); |
163 | + char **path_vec = g_new0(char *, (QOS_PATH_MAX_ELEMENT_SIZE * 2)); | 305 | |
164 | + int path_vec_size = 0; | 306 | - assert(mutex->locked == true); |
165 | + | 307 | + assert(mutex->locked); |
166 | + char *after_cmd, *before_cmd, *after_device; | 308 | assert(mutex->holder == self); |
167 | + GString *after_device_str = g_string_new(""); | 309 | assert(qemu_in_coroutine()); |
168 | + char *node_name = orig_path->name, *path_str; | 310 | |
169 | + | 311 | - mutex->locked = false; |
170 | + GString *cmd_line = g_string_new(""); | 312 | mutex->holder = NULL; |
171 | + GString *cmd_line2 = g_string_new(""); | 313 | self->locks_held--; |
172 | + | 314 | - qemu_co_queue_next(&mutex->queue); |
173 | + path = qos_graph_get_node(node_name); /* root */ | 315 | + if (atomic_fetch_dec(&mutex->locked) == 1) { |
174 | + node_name = qos_graph_edge_get_dest(path->path_edge); /* machine name */ | 316 | + /* No waiting qemu_co_mutex_lock(). Pfew, that was easy! */ |
175 | + | 317 | + return; |
176 | + path_vec[path_vec_size++] = node_name; | 318 | + } |
177 | + path_vec[path_vec_size++] = qos_get_machine_type(node_name); | ||
178 | + | 319 | + |
179 | + for (;;) { | 320 | + for (;;) { |
180 | + path = qos_graph_get_node(node_name); | 321 | + CoWaitRecord *to_wake = pop_waiter(mutex); |
181 | + if (!path->path_edge) { | 322 | + unsigned our_handoff; |
323 | + | ||
324 | + if (to_wake) { | ||
325 | + Coroutine *co = to_wake->co; | ||
326 | + aio_co_wake(co); | ||
182 | + break; | 327 | + break; |
183 | + } | 328 | + } |
184 | + | 329 | + |
185 | + node_name = qos_graph_edge_get_dest(path->path_edge); | 330 | + /* Some concurrent lock() is in progress (we know this because |
186 | + | 331 | + * mutex->locked was >1) but it hasn't yet put itself on the wait |
187 | + /* append node command line + previous edge command line */ | 332 | + * queue. Pick a sequence number for the handoff protocol (not 0). |
188 | + if (path->command_line && etype == QEDGE_CONSUMED_BY) { | ||
189 | + g_string_append(cmd_line, path->command_line); | ||
190 | + g_string_append(cmd_line, after_device_str->str); | ||
191 | + g_string_truncate(after_device_str, 0); | ||
192 | + } | ||
193 | + | ||
194 | + path_vec[path_vec_size++] = qos_graph_edge_get_name(path->path_edge); | ||
195 | + /* detect if edge has command line args */ | ||
196 | + after_cmd = qos_graph_edge_get_after_cmd_line(path->path_edge); | ||
197 | + after_device = qos_graph_edge_get_extra_device_opts(path->path_edge); | ||
198 | + before_cmd = qos_graph_edge_get_before_cmd_line(path->path_edge); | ||
199 | + edge = qos_graph_get_edge(path->name, node_name); | ||
200 | + etype = qos_graph_edge_get_type(edge); | ||
201 | + | ||
202 | + if (before_cmd) { | ||
203 | + g_string_append(cmd_line, before_cmd); | ||
204 | + } | ||
205 | + if (after_cmd) { | ||
206 | + g_string_append(cmd_line2, after_cmd); | ||
207 | + } | ||
208 | + if (after_device) { | ||
209 | + g_string_append(after_device_str, after_device); | ||
210 | + } | ||
211 | + } | ||
212 | + | ||
213 | + path_vec[path_vec_size++] = NULL; | ||
214 | + g_string_append(cmd_line, after_device_str->str); | ||
215 | + g_string_free(after_device_str, true); | ||
216 | + | ||
217 | + g_string_append(cmd_line, cmd_line2->str); | ||
218 | + g_string_free(cmd_line2, true); | ||
219 | + | ||
220 | + /* | ||
221 | + * here position 0 has <arch>/<machine>, position 1 has <machine>. | ||
222 | + * The path must not have the <arch>, qtest_add_data_func adds it. | ||
223 | + */ | ||
224 | + path_str = g_strjoinv("/", path_vec + 1); | ||
225 | + | ||
226 | + /* Check that this is the test we care about: */ | ||
227 | + char *test_name = strrchr(path_str, '/') + 1; | ||
228 | + if (strcmp(test_name, fuzz_target_name) == 0) { | ||
229 | + /* | ||
230 | + * put arch/machine in position 1 so run_one_test can do its work | ||
231 | + * and add the command line at position 0. | ||
232 | + */ | 333 | + */ |
233 | + path_vec[1] = path_vec[0]; | 334 | + if (++mutex->sequence == 0) { |
234 | + path_vec[0] = g_string_free(cmd_line, false); | 335 | + mutex->sequence = 1; |
235 | + | 336 | + } |
236 | + fuzz_path_vec = path_vec; | 337 | + |
237 | + } else { | 338 | + our_handoff = mutex->sequence; |
238 | + g_free(path_vec); | 339 | + atomic_mb_set(&mutex->handoff, our_handoff); |
239 | + } | 340 | + if (!has_waiters(mutex)) { |
240 | + | 341 | + /* The concurrent lock has not added itself yet, so it |
241 | + g_free(path_str); | 342 | + * will be able to pick our handoff. |
242 | +} | 343 | + */ |
243 | + | 344 | + break; |
244 | +static const char *qos_get_cmdline(FuzzTarget *t) | 345 | + } |
245 | +{ | 346 | + |
246 | + /* | 347 | + /* Try to do the handoff protocol ourselves; if somebody else has |
247 | + * Set a global variable that we use to identify the qos_path for our | 348 | + * already taken it, however, we're done and they're responsible. |
248 | + * fuzz_target | 349 | + */ |
249 | + */ | 350 | + if (atomic_cmpxchg(&mutex->handoff, our_handoff, 0) != our_handoff) { |
250 | + fuzz_target_name = t->name; | 351 | + break; |
251 | + qos_set_machines_devices_available(); | 352 | + } |
252 | + qos_graph_foreach_test_path(walk_path); | 353 | + } |
253 | + return qos_build_main_args(); | 354 | |
254 | +} | 355 | trace_qemu_co_mutex_unlock_return(mutex, self); |
255 | + | 356 | } |
256 | +void fuzz_add_qos_target( | 357 | diff --git a/util/trace-events b/util/trace-events |
257 | + FuzzTarget *fuzz_opts, | 358 | index XXXXXXX..XXXXXXX 100644 |
258 | + const char *interface, | 359 | --- a/util/trace-events |
259 | + QOSGraphTestOptions *opts | 360 | +++ b/util/trace-events |
260 | + ) | 361 | @@ -XXX,XX +XXX,XX @@ qemu_coroutine_terminate(void *co) "self %p" |
261 | +{ | 362 | |
262 | + qos_add_test(fuzz_opts->name, interface, NULL, opts); | 363 | # util/qemu-coroutine-lock.c |
263 | + fuzz_opts->get_init_cmdline = qos_get_cmdline; | 364 | qemu_co_queue_run_restart(void *co) "co %p" |
264 | + fuzz_add_target(fuzz_opts); | 365 | +qemu_co_mutex_lock_uncontended(void *mutex, void *self) "mutex %p self %p" |
265 | +} | 366 | qemu_co_mutex_lock_entry(void *mutex, void *self) "mutex %p self %p" |
266 | + | 367 | qemu_co_mutex_lock_return(void *mutex, void *self) "mutex %p self %p" |
267 | +void qos_init_path(QTestState *s) | 368 | qemu_co_mutex_unlock_entry(void *mutex, void *self) "mutex %p self %p" |
268 | +{ | ||
269 | + fuzz_qos_obj = qos_allocate_objects(s, &fuzz_qos_alloc); | ||
270 | +} | ||
271 | diff --git a/tests/qtest/fuzz/qos_fuzz.h b/tests/qtest/fuzz/qos_fuzz.h | ||
272 | new file mode 100644 | ||
273 | index XXXXXXX..XXXXXXX | ||
274 | --- /dev/null | ||
275 | +++ b/tests/qtest/fuzz/qos_fuzz.h | ||
276 | @@ -XXX,XX +XXX,XX @@ | ||
277 | +/* | ||
278 | + * QOS-assisted fuzzing helpers | ||
279 | + * | ||
280 | + * Copyright Red Hat Inc., 2019 | ||
281 | + * | ||
282 | + * Authors: | ||
283 | + * Alexander Bulekov <alxndr@bu.edu> | ||
284 | + * | ||
285 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. | ||
286 | + * See the COPYING file in the top-level directory. | ||
287 | + */ | ||
288 | + | ||
289 | +#ifndef _QOS_FUZZ_H_ | ||
290 | +#define _QOS_FUZZ_H_ | ||
291 | + | ||
292 | +#include "tests/qtest/fuzz/fuzz.h" | ||
293 | +#include "tests/qtest/libqos/qgraph.h" | ||
294 | + | ||
295 | +int qos_fuzz(const unsigned char *Data, size_t Size); | ||
296 | +void qos_setup(void); | ||
297 | + | ||
298 | +extern void *fuzz_qos_obj; | ||
299 | +extern QGuestAllocator *fuzz_qos_alloc; | ||
300 | + | ||
301 | +void fuzz_add_qos_target( | ||
302 | + FuzzTarget *fuzz_opts, | ||
303 | + const char *interface, | ||
304 | + QOSGraphTestOptions *opts | ||
305 | + ); | ||
306 | + | ||
307 | +void qos_init_path(QTestState *); | ||
308 | + | ||
309 | +#endif | ||
310 | -- | 369 | -- |
311 | 2.24.1 | 370 | 2.9.3 |
312 | 371 | ||
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | These three targets should simply fuzz reads/writes to a couple of ioports, | 3 | Running a very small critical section on pthread_mutex_t and CoMutex
4 | but they mostly serve as examples of different ways to write targets. | 4 | shows that pthread_mutex_t is much faster because it doesn't actually
5 | They demonstrate using qtest and qos for fuzzing, as well as | 5 | go to sleep. What happens is that the critical section is shorter
6 | rebooting or forking to reset state, or not resetting it at all. | 6 | than the latency of entering the kernel and thus FUTEX_WAIT always
7 | fails. With CoMutex there is no such latency but you still want to | ||
8 | avoid wait and wakeup. So introduce it artificially. | ||
7 | 9 | ||
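A usage sketch, not part of the patch (the binary path depends on the
configured target; the invocation follows the fuzzing docs added later in
this series): a registered target is selected by name at runtime, e.g.

    $ i386-softmmu/qemu-fuzz-i386 --fuzz-target=i440fx-qtest-reboot-fuzz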
8 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 10 | This only works with one waiter; because CoMutex is fair, it will
9 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 11 | always have more waits and wakeups than a pthread_mutex_t. |
10 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 12 | |
11 | Message-id: 20200220041118.23264-20-alxndr@bu.edu | 13 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
14 | Reviewed-by: Fam Zheng <famz@redhat.com> | ||
15 | Message-id: 20170213181244.16297-3-pbonzini@redhat.com | ||
12 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 16 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
13 | --- | 17 | --- |
14 | tests/qtest/fuzz/Makefile.include | 3 + | 18 | include/qemu/coroutine.h | 5 +++++ |
15 | tests/qtest/fuzz/i440fx_fuzz.c | 193 ++++++++++++++++++++++++++++++ | 19 | util/qemu-coroutine-lock.c | 51 ++++++++++++++++++++++++++++++++++++++++------ |
16 | 2 files changed, 196 insertions(+) | 20 | util/qemu-coroutine.c | 2 +- |
17 | create mode 100644 tests/qtest/fuzz/i440fx_fuzz.c | 21 | 3 files changed, 51 insertions(+), 7 deletions(-) |
18 | 22 | ||
19 | diff --git a/tests/qtest/fuzz/Makefile.include b/tests/qtest/fuzz/Makefile.include | 23 | diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h |
20 | index XXXXXXX..XXXXXXX 100644 | 24 | index XXXXXXX..XXXXXXX 100644 |
21 | --- a/tests/qtest/fuzz/Makefile.include | 25 | --- a/include/qemu/coroutine.h |
22 | +++ b/tests/qtest/fuzz/Makefile.include | 26 | +++ b/include/qemu/coroutine.h |
23 | @@ -XXX,XX +XXX,XX @@ fuzz-obj-y += tests/qtest/fuzz/fuzz.o # Fuzzer skeleton | 27 | @@ -XXX,XX +XXX,XX @@ typedef struct CoMutex { |
24 | fuzz-obj-y += tests/qtest/fuzz/fork_fuzz.o | 28 | */ |
25 | fuzz-obj-y += tests/qtest/fuzz/qos_fuzz.o | 29 | unsigned locked; |
26 | 30 | ||
27 | +# Targets | 31 | + /* Context that is holding the lock. Useful to avoid spinning |
28 | +fuzz-obj-y += tests/qtest/fuzz/i440fx_fuzz.o | 32 | + * when two coroutines on the same AioContext try to get the lock. :) |
33 | + */ | ||
34 | + AioContext *ctx; | ||
29 | + | 35 | + |
30 | FUZZ_CFLAGS += -I$(SRC_PATH)/tests -I$(SRC_PATH)/tests/qtest | 36 | /* A queue of waiters. Elements are added atomically in front of |
31 | 37 | * from_push. to_pop is only populated, and popped from, by whoever | |
32 | # Linker Script to force coverage-counters into known regions which we can mark | 38 | * is in charge of the next wakeup. This can be an unlocker or, |
33 | diff --git a/tests/qtest/fuzz/i440fx_fuzz.c b/tests/qtest/fuzz/i440fx_fuzz.c | 39 | diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c |
34 | new file mode 100644 | 40 | index XXXXXXX..XXXXXXX 100644 |
35 | index XXXXXXX..XXXXXXX | 41 | --- a/util/qemu-coroutine-lock.c |
36 | --- /dev/null | 42 | +++ b/util/qemu-coroutine-lock.c |
37 | +++ b/tests/qtest/fuzz/i440fx_fuzz.c | ||
38 | @@ -XXX,XX +XXX,XX @@ | 43 | @@ -XXX,XX +XXX,XX @@ |
39 | +/* | 44 | #include "qemu-common.h" |
40 | + * I440FX Fuzzing Target | 45 | #include "qemu/coroutine.h" |
41 | + * | 46 | #include "qemu/coroutine_int.h" |
42 | + * Copyright Red Hat Inc., 2019 | 47 | +#include "qemu/processor.h" |
43 | + * | 48 | #include "qemu/queue.h" |
44 | + * Authors: | 49 | #include "block/aio.h" |
45 | + * Alexander Bulekov <alxndr@bu.edu> | 50 | #include "trace.h" |
46 | + * | 51 | @@ -XXX,XX +XXX,XX @@ void qemu_co_mutex_init(CoMutex *mutex) |
47 | + * This work is licensed under the terms of the GNU GPL, version 2 or later. | 52 | memset(mutex, 0, sizeof(*mutex)); |
48 | + * See the COPYING file in the top-level directory. | 53 | } |
49 | + */ | 54 | |
50 | + | 55 | -static void coroutine_fn qemu_co_mutex_lock_slowpath(CoMutex *mutex) |
51 | +#include "qemu/osdep.h" | 56 | +static void coroutine_fn qemu_co_mutex_wake(CoMutex *mutex, Coroutine *co) |
52 | + | 57 | +{ |
53 | +#include "qemu/main-loop.h" | 58 | + /* Read co before co->ctx; pairs with smp_wmb() in |
54 | +#include "tests/qtest/libqtest.h" | 59 | + * qemu_coroutine_enter(). |
55 | +#include "tests/qtest/libqos/pci.h" | ||
56 | +#include "tests/qtest/libqos/pci-pc.h" | ||
57 | +#include "fuzz.h" | ||
58 | +#include "fuzz/qos_fuzz.h" | ||
59 | +#include "fuzz/fork_fuzz.h" | ||
60 | + | ||
61 | + | ||
62 | +#define I440FX_PCI_HOST_BRIDGE_CFG 0xcf8 | ||
63 | +#define I440FX_PCI_HOST_BRIDGE_DATA 0xcfc | ||
64 | + | ||
65 | +/* | ||
66 | + * the input to the fuzzing functions below is a buffer of random bytes. we | ||
67 | + * want to convert these bytes into a sequence of qtest or qos calls. to do | ||
68 | + * this we define some opcodes: | ||
69 | + */ | ||
70 | +enum action_id { | ||
71 | + WRITEB, | ||
72 | + WRITEW, | ||
73 | + WRITEL, | ||
74 | + READB, | ||
75 | + READW, | ||
76 | + READL, | ||
77 | + ACTION_MAX | ||
78 | +}; | ||
79 | + | ||
80 | +static void i440fx_fuzz_qtest(QTestState *s, | ||
81 | + const unsigned char *Data, size_t Size) { | ||
82 | + /* | ||
83 | + * loop over the Data, breaking it up into actions. each action has an | ||
84 | + * opcode, address offset and value | ||
85 | + */ | 60 | + */ |
86 | + typedef struct QTestFuzzAction { | 61 | + smp_read_barrier_depends(); |
87 | + uint8_t opcode; | 62 | + mutex->ctx = co->ctx; |
88 | + uint8_t addr; | 63 | + aio_co_wake(co); |
89 | + uint32_t value; | ||
90 | + } QTestFuzzAction; | ||
91 | + QTestFuzzAction a; | ||
92 | + | ||
93 | + while (Size >= sizeof(a)) { | ||
94 | + /* make a copy of the action so we can normalize the values in-place */ | ||
95 | + memcpy(&a, Data, sizeof(a)); | ||
96 | + /* select between two i440fx Port IO addresses */ | ||
97 | + uint16_t addr = a.addr % 2 ? I440FX_PCI_HOST_BRIDGE_CFG : | ||
98 | + I440FX_PCI_HOST_BRIDGE_DATA; | ||
99 | + switch (a.opcode % ACTION_MAX) { | ||
100 | + case WRITEB: | ||
101 | + qtest_outb(s, addr, (uint8_t)a.value); | ||
102 | + break; | ||
103 | + case WRITEW: | ||
104 | + qtest_outw(s, addr, (uint16_t)a.value); | ||
105 | + break; | ||
106 | + case WRITEL: | ||
107 | + qtest_outl(s, addr, (uint32_t)a.value); | ||
108 | + break; | ||
109 | + case READB: | ||
110 | + qtest_inb(s, addr); | ||
111 | + break; | ||
112 | + case READW: | ||
113 | + qtest_inw(s, addr); | ||
114 | + break; | ||
115 | + case READL: | ||
116 | + qtest_inl(s, addr); | ||
117 | + break; | ||
118 | + } | ||
119 | + /* Move to the next operation */ | ||
120 | + Size -= sizeof(a); | ||
121 | + Data += sizeof(a); | ||
122 | + } | ||
123 | + flush_events(s); | ||
124 | +} | 64 | +} |
125 | + | 65 | + |
126 | +static void i440fx_fuzz_qos(QTestState *s, | 66 | +static void coroutine_fn qemu_co_mutex_lock_slowpath(AioContext *ctx, |
127 | + const unsigned char *Data, size_t Size) { | 67 | + CoMutex *mutex) |
128 | + /* | 68 | { |
129 | + * Same as i440fx_fuzz_qtest, but using QOS. devfn is incorporated into the | 69 | Coroutine *self = qemu_coroutine_self(); |
130 | + * value written over Port IO | 70 | CoWaitRecord w; |
71 | @@ -XXX,XX +XXX,XX @@ static void coroutine_fn qemu_co_mutex_lock_slowpath(CoMutex *mutex) | ||
72 | if (co == self) { | ||
73 | /* We got the lock ourselves! */ | ||
74 | assert(to_wake == &w); | ||
75 | + mutex->ctx = ctx; | ||
76 | return; | ||
77 | } | ||
78 | |||
79 | - aio_co_wake(co); | ||
80 | + qemu_co_mutex_wake(mutex, co); | ||
81 | } | ||
82 | |||
83 | qemu_coroutine_yield(); | ||
84 | @@ -XXX,XX +XXX,XX @@ static void coroutine_fn qemu_co_mutex_lock_slowpath(CoMutex *mutex) | ||
85 | |||
86 | void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex) | ||
87 | { | ||
88 | + AioContext *ctx = qemu_get_current_aio_context(); | ||
89 | Coroutine *self = qemu_coroutine_self(); | ||
90 | + int waiters, i; | ||
91 | |||
92 | - if (atomic_fetch_inc(&mutex->locked) == 0) { | ||
93 | + /* Running a very small critical section on pthread_mutex_t and CoMutex | ||
94 | + * shows that pthread_mutex_t is much faster because it doesn't actually | ||
95 | + * go to sleep. What happens is that the critical section is shorter | ||
96 | + * than the latency of entering the kernel and thus FUTEX_WAIT always | ||
97 | + * fails. With CoMutex there is no such latency but you still want to | ||
98 | + * avoid wait and wakeup. So introduce it artificially. | ||
131 | + */ | 99 | + */ |
132 | + typedef struct QOSFuzzAction { | 100 | + i = 0; |
133 | + uint8_t opcode; | 101 | +retry_fast_path: |
134 | + uint8_t offset; | 102 | + waiters = atomic_cmpxchg(&mutex->locked, 0, 1); |
135 | + int devfn; | 103 | + if (waiters != 0) { |
136 | + uint32_t value; | 104 | + while (waiters == 1 && ++i < 1000) { |
137 | + } QOSFuzzAction; | 105 | + if (atomic_read(&mutex->ctx) == ctx) { |
138 | + | 106 | + break; |
139 | + static QPCIBus *bus; | 107 | + } |
140 | + if (!bus) { | 108 | + if (atomic_read(&mutex->locked) == 0) { |
141 | + bus = qpci_new_pc(s, fuzz_qos_alloc); | 109 | + goto retry_fast_path; |
110 | + } | ||
111 | + cpu_relax(); | ||
112 | + } | ||
113 | + waiters = atomic_fetch_inc(&mutex->locked); | ||
142 | + } | 114 | + } |
143 | + | 115 | + |
144 | + QOSFuzzAction a; | 116 | + if (waiters == 0) { |
145 | + while (Size >= sizeof(a)) { | 117 | /* Uncontended. */ |
146 | + memcpy(&a, Data, sizeof(a)); | 118 | trace_qemu_co_mutex_lock_uncontended(mutex, self); |
147 | + switch (a.opcode % ACTION_MAX) { | 119 | + mutex->ctx = ctx; |
148 | + case WRITEB: | 120 | } else { |
149 | + bus->config_writeb(bus, a.devfn, a.offset, (uint8_t)a.value); | 121 | - qemu_co_mutex_lock_slowpath(mutex); |
150 | + break; | 122 | + qemu_co_mutex_lock_slowpath(ctx, mutex); |
151 | + case WRITEW: | 123 | } |
152 | + bus->config_writew(bus, a.devfn, a.offset, (uint16_t)a.value); | 124 | mutex->holder = self; |
153 | + break; | 125 | self->locks_held++; |
154 | + case WRITEL: | 126 | @@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex) |
155 | + bus->config_writel(bus, a.devfn, a.offset, (uint32_t)a.value); | 127 | assert(mutex->holder == self); |
156 | + break; | 128 | assert(qemu_in_coroutine()); |
157 | + case READB: | 129 | |
158 | + bus->config_readb(bus, a.devfn, a.offset); | 130 | + mutex->ctx = NULL; |
159 | + break; | 131 | mutex->holder = NULL; |
160 | + case READW: | 132 | self->locks_held--; |
161 | + bus->config_readw(bus, a.devfn, a.offset); | 133 | if (atomic_fetch_dec(&mutex->locked) == 1) { |
162 | + break; | 134 | @@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex) |
163 | + case READL: | 135 | unsigned our_handoff; |
164 | + bus->config_readl(bus, a.devfn, a.offset); | 136 | |
165 | + break; | 137 | if (to_wake) { |
166 | + } | 138 | - Coroutine *co = to_wake->co; |
167 | + Size -= sizeof(a); | 139 | - aio_co_wake(co); |
168 | + Data += sizeof(a); | 140 | + qemu_co_mutex_wake(mutex, to_wake->co); |
169 | + } | 141 | break; |
170 | + flush_events(s); | 142 | } |
171 | +} | 143 | |
172 | + | 144 | diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c |
173 | +static void i440fx_fuzz_qos_fork(QTestState *s, | 145 | index XXXXXXX..XXXXXXX 100644 |
174 | + const unsigned char *Data, size_t Size) { | 146 | --- a/util/qemu-coroutine.c |
175 | + if (fork() == 0) { | 147 | +++ b/util/qemu-coroutine.c |
176 | + i440fx_fuzz_qos(s, Data, Size); | 148 | @@ -XXX,XX +XXX,XX @@ void qemu_coroutine_enter(Coroutine *co) |
177 | + _Exit(0); | 149 | co->ctx = qemu_get_current_aio_context(); |
178 | + } else { | 150 | |
179 | + wait(NULL); | 151 | /* Store co->ctx before anything that stores co. Matches |
180 | + } | 152 | - * barrier in aio_co_wake. |
181 | +} | 153 | + * barrier in aio_co_wake and qemu_co_mutex_wake. |
182 | + | 154 | */ |
183 | +static const char *i440fx_qtest_argv = TARGET_NAME " -machine accel=qtest " | 156 | smp_wmb();
184 | + "-m 0 -display none"; | 156 | |
185 | +static const char *i440fx_argv(FuzzTarget *t) | ||
186 | +{ | ||
187 | + return i440fx_qtest_argv; | ||
188 | +} | ||
189 | + | ||
190 | +static void fork_init(void) | ||
191 | +{ | ||
192 | + counter_shm_init(); | ||
193 | +} | ||
194 | + | ||
195 | +static void register_pci_fuzz_targets(void) | ||
196 | +{ | ||
197 | + /* Uses simple qtest commands and reboots to reset state */ | ||
198 | + fuzz_add_target(&(FuzzTarget){ | ||
199 | + .name = "i440fx-qtest-reboot-fuzz", | ||
200 | + .description = "Fuzz the i440fx using raw qtest commands and " | ||
201 | + "rebooting after each run", | ||
202 | + .get_init_cmdline = i440fx_argv, | ||
203 | + .fuzz = i440fx_fuzz_qtest}); | ||
204 | + | ||
205 | + /* Uses libqos and forks to prevent state leakage */ | ||
206 | + fuzz_add_qos_target(&(FuzzTarget){ | ||
207 | + .name = "i440fx-qos-fork-fuzz", | ||
208 | + .description = "Fuzz the i440fx using libqos, forking " | ||
209 | + "before each run to reset state", | ||
210 | + .pre_vm_init = &fork_init, | ||
211 | + .fuzz = i440fx_fuzz_qos_fork,}, | ||
212 | + "i440FX-pcihost", | ||
213 | + &(QOSGraphTestOptions){} | ||
214 | + ); | ||
215 | + | ||
216 | + /* | ||
217 | + * Uses libqos. Doesn't do anything to reset state. Note that if we were to | ||
218 | + * reboot after each run, we would also have to redo the qos-related | ||
219 | + * initialization (qos_init_path) | ||
220 | + */ | ||
221 | + fuzz_add_qos_target(&(FuzzTarget){ | ||
222 | + .name = "i440fx-qos-noreset-fuzz", | ||
223 | + .description = "Fuzz the i440fx using libqos, without " | ||
224 | + "resetting state between runs", | ||
225 | + .fuzz = i440fx_fuzz_qos,}, | ||
226 | + "i440FX-pcihost", | ||
227 | + &(QOSGraphTestOptions){} | ||
228 | + ); | ||
229 | +} | ||
230 | + | ||
231 | +fuzz_target_init(register_pci_fuzz_targets); | ||
232 | -- | 157 | -- |
233 | 2.24.1 | 158 | 2.9.3 |
234 | 159 | ||
1 | From: Paolo Bonzini <pbonzini@redhat.com> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | QSLIST is the only family of lists for which we do not have RCU-friendly accessors, | 3 | Add two implementations of the same benchmark as the previous patch, |
4 | add them. | 4 | but using pthreads. One uses a normal QemuMutex, the other is Linux |
5 | only and implements a fair mutex based on MCS locks and futexes. | ||
6 | This shows that the slower performance of the 5-thread case is due to | ||
7 | the fairness of CoMutex, rather than to coroutines. If fairness does | ||
8 | not matter, as is the case with two threads, CoMutex can actually be | ||
9 | faster than pthreads. | ||
5 | 10 | ||
6 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | 11 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
7 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 12 | Reviewed-by: Fam Zheng <famz@redhat.com> |
8 | Message-id: 20200220103828.24525-1-pbonzini@redhat.com | 13 | Message-id: 20170213181244.16297-4-pbonzini@redhat.com |
9 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 14 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
10 | --- | 15 | --- |
11 | include/qemu/queue.h | 15 +++++++++++-- | 16 | tests/test-aio-multithread.c | 164 +++++++++++++++++++++++++++++++++++++++++++ |
12 | include/qemu/rcu_queue.h | 47 ++++++++++++++++++++++++++++++++++++++++ | 17 | 1 file changed, 164 insertions(+) |
13 | tests/Makefile.include | 2 ++ | 18 | |
14 | tests/test-rcu-list.c | 16 ++++++++++++++ | 19 | diff --git a/tests/test-aio-multithread.c b/tests/test-aio-multithread.c |
15 | tests/test-rcu-slist.c | 2 ++ | ||
16 | 5 files changed, 80 insertions(+), 2 deletions(-) | ||
17 | create mode 100644 tests/test-rcu-slist.c | ||
18 | |||
19 | diff --git a/include/qemu/queue.h b/include/qemu/queue.h | ||
20 | index XXXXXXX..XXXXXXX 100644 | 20 | index XXXXXXX..XXXXXXX 100644 |
21 | --- a/include/qemu/queue.h | 21 | --- a/tests/test-aio-multithread.c |
22 | +++ b/include/qemu/queue.h | 22 | +++ b/tests/test-aio-multithread.c |
23 | @@ -XXX,XX +XXX,XX @@ struct { \ | 23 | @@ -XXX,XX +XXX,XX @@ static void test_multi_co_mutex_2_30(void) |
24 | (head)->slh_first = (head)->slh_first->field.sle_next; \ | 24 | test_multi_co_mutex(2, 30); |
25 | } while (/*CONSTCOND*/0) | 25 | } |
26 | 26 | ||
27 | -#define QSLIST_REMOVE_AFTER(slistelm, field) do { \ | 27 | +/* Same test with fair mutexes, for performance comparison. */ |
28 | +#define QSLIST_REMOVE_AFTER(slistelm, field) do { \ | 28 | + |
29 | (slistelm)->field.sle_next = \ | 29 | +#ifdef CONFIG_LINUX |
30 | - QSLIST_NEXT(QSLIST_NEXT((slistelm), field), field); \ | 30 | +#include "qemu/futex.h" |
31 | + QSLIST_NEXT(QSLIST_NEXT((slistelm), field), field); \ | 31 | + |
32 | +} while (/*CONSTCOND*/0) | 32 | +/* The nodes for the mutex reside in this structure (on which we try to avoid |
33 | + | 33 | + * false sharing). The head of the mutex is in the "mutex_head" variable. |
34 | +#define QSLIST_REMOVE(head, elm, type, field) do { \ | ||
35 | + if ((head)->slh_first == (elm)) { \ | ||
36 | + QSLIST_REMOVE_HEAD((head), field); \ | ||
37 | + } else { \ | ||
38 | + struct type *curelm = (head)->slh_first; \ | ||
39 | + while (curelm->field.sle_next != (elm)) \ | ||
40 | + curelm = curelm->field.sle_next; \ | ||
41 | + curelm->field.sle_next = curelm->field.sle_next->field.sle_next; \ | ||
42 | + } \ | ||
43 | } while (/*CONSTCOND*/0) | ||
44 | |||
45 | #define QSLIST_FOREACH(var, head, field) \ | ||
46 | diff --git a/include/qemu/rcu_queue.h b/include/qemu/rcu_queue.h | ||
47 | index XXXXXXX..XXXXXXX 100644 | ||
48 | --- a/include/qemu/rcu_queue.h | ||
49 | +++ b/include/qemu/rcu_queue.h | ||
50 | @@ -XXX,XX +XXX,XX @@ extern "C" { | ||
51 | (var) && ((next) = atomic_rcu_read(&(var)->field.tqe_next), 1); \ | ||
52 | (var) = (next)) | ||
53 | |||
54 | +/* | ||
55 | + * RCU singly-linked list | ||
56 | + */ | 34 | + */ |
57 | + | 35 | +static struct { |
58 | +/* Singly-linked list access methods */ | 36 | + int next, locked; |
59 | +#define QSLIST_EMPTY_RCU(head) (atomic_read(&(head)->slh_first) == NULL) | 37 | + int padding[14]; |
60 | +#define QSLIST_FIRST_RCU(head) atomic_rcu_read(&(head)->slh_first) | 38 | +} nodes[NUM_CONTEXTS] __attribute__((__aligned__(64))); |
61 | +#define QSLIST_NEXT_RCU(elm, field) atomic_rcu_read(&(elm)->field.sle_next) | 39 | + |
62 | + | 40 | +static int mutex_head = -1; |
63 | +/* Singly-linked list functions */ | 41 | + |
64 | +#define QSLIST_INSERT_HEAD_RCU(head, elm, field) do { \ | 42 | +static void mcs_mutex_lock(void) |
65 | + (elm)->field.sle_next = (head)->slh_first; \ | 43 | +{ |
66 | + atomic_rcu_set(&(head)->slh_first, (elm)); \ | 44 | + int prev; |
67 | +} while (/*CONSTCOND*/0) | 45 | + |
68 | + | 46 | + nodes[id].next = -1; |
69 | +#define QSLIST_INSERT_AFTER_RCU(head, listelm, elm, field) do { \ | 47 | + nodes[id].locked = 1; |
70 | + (elm)->field.sle_next = (listelm)->field.sle_next; \ | 48 | + prev = atomic_xchg(&mutex_head, id); |
71 | + atomic_rcu_set(&(listelm)->field.sle_next, (elm)); \ | 49 | + if (prev != -1) { |
72 | +} while (/*CONSTCOND*/0) | 50 | + atomic_set(&nodes[prev].next, id); |
73 | + | 51 | + qemu_futex_wait(&nodes[id].locked, 1); |
74 | +#define QSLIST_REMOVE_HEAD_RCU(head, field) do { \ | 52 | + } |
75 | + atomic_set(&(head)->slh_first, (head)->slh_first->field.sle_next); \ | 53 | +} |
76 | +} while (/*CONSTCOND*/0) | 54 | + |
77 | + | 55 | +static void mcs_mutex_unlock(void) |
78 | +#define QSLIST_REMOVE_RCU(head, elm, type, field) do { \ | 56 | +{ |
79 | + if ((head)->slh_first == (elm)) { \ | 57 | + int next; |
80 | + QSLIST_REMOVE_HEAD_RCU((head), field); \ | 58 | + if (nodes[id].next == -1) { |
81 | + } else { \ | 59 | + if (atomic_read(&mutex_head) == id && |
82 | + struct type *curr = (head)->slh_first; \ | 60 | + atomic_cmpxchg(&mutex_head, id, -1) == id) { |
83 | + while (curr->field.sle_next != (elm)) { \ | 61 | + /* Last item in the list, exit. */ |
84 | + curr = curr->field.sle_next; \ | 62 | + return; |
85 | + } \ | 63 | + } |
86 | + atomic_set(&curr->field.sle_next, \ | 64 | + while (atomic_read(&nodes[id].next) == -1) { |
87 | + curr->field.sle_next->field.sle_next); \ | 65 | + /* mcs_mutex_lock did the xchg, but has not updated |
88 | + } \ | 66 | + * nodes[prev].next yet. |
89 | +} while (/*CONSTCOND*/0) | 67 | + */ |
90 | + | 68 | + } |
91 | +#define QSLIST_FOREACH_RCU(var, head, field) \ | 69 | + } |
92 | + for ((var) = atomic_rcu_read(&(head)->slh_first); \ | 70 | + |
93 | + (var); \ | 71 | + /* Wake up the next in line. */ |
94 | + (var) = atomic_rcu_read(&(var)->field.sle_next)) | 72 | + next = nodes[id].next; |
95 | + | 73 | + nodes[next].locked = 0; |
96 | +#define QSLIST_FOREACH_SAFE_RCU(var, head, field, next) \ | 74 | + qemu_futex_wake(&nodes[next].locked, 1); |
97 | + for ((var) = atomic_rcu_read(&(head)->slh_first); \ | 75 | +} |
98 | + (var) && ((next) = atomic_rcu_read(&(var)->field.sle_next), 1); \ | 76 | + |
99 | + (var) = (next)) | 77 | +static void test_multi_fair_mutex_entry(void *opaque) |
100 | + | 78 | +{ |
101 | #ifdef __cplusplus | 79 | + while (!atomic_mb_read(&now_stopping)) { |
80 | + mcs_mutex_lock(); | ||
81 | + counter++; | ||
82 | + mcs_mutex_unlock(); | ||
83 | + atomic_inc(&atomic_counter); | ||
84 | + } | ||
85 | + atomic_dec(&running); | ||
86 | +} | ||
87 | + | ||
88 | +static void test_multi_fair_mutex(int threads, int seconds) | ||
89 | +{ | ||
90 | + int i; | ||
91 | + | ||
92 | + assert(mutex_head == -1); | ||
93 | + counter = 0; | ||
94 | + atomic_counter = 0; | ||
95 | + now_stopping = false; | ||
96 | + | ||
97 | + create_aio_contexts(); | ||
98 | + assert(threads <= NUM_CONTEXTS); | ||
99 | + running = threads; | ||
100 | + for (i = 0; i < threads; i++) { | ||
101 | + Coroutine *co1 = qemu_coroutine_create(test_multi_fair_mutex_entry, NULL); | ||
102 | + aio_co_schedule(ctx[i], co1); | ||
103 | + } | ||
104 | + | ||
105 | + g_usleep(seconds * 1000000); | ||
106 | + | ||
107 | + atomic_mb_set(&now_stopping, true); | ||
108 | + while (running > 0) { | ||
109 | + g_usleep(100000); | ||
110 | + } | ||
111 | + | ||
112 | + join_aio_contexts(); | ||
113 | + g_test_message("%d iterations/second\n", counter / seconds); | ||
114 | + g_assert_cmpint(counter, ==, atomic_counter); | ||
115 | +} | ||
116 | + | ||
117 | +static void test_multi_fair_mutex_1(void) | ||
118 | +{ | ||
119 | + test_multi_fair_mutex(NUM_CONTEXTS, 1); | ||
120 | +} | ||
121 | + | ||
122 | +static void test_multi_fair_mutex_10(void) | ||
123 | +{ | ||
124 | + test_multi_fair_mutex(NUM_CONTEXTS, 10); | ||
125 | +} | ||
126 | +#endif | ||
127 | + | ||
128 | +/* Same test with pthread mutexes, for performance comparison and | ||
129 | + * portability. */ | ||
130 | + | ||
131 | +static QemuMutex mutex; | ||
132 | + | ||
133 | +static void test_multi_mutex_entry(void *opaque) | ||
134 | +{ | ||
135 | + while (!atomic_mb_read(&now_stopping)) { | ||
136 | + qemu_mutex_lock(&mutex); | ||
137 | + counter++; | ||
138 | + qemu_mutex_unlock(&mutex); | ||
139 | + atomic_inc(&atomic_counter); | ||
140 | + } | ||
141 | + atomic_dec(&running); | ||
142 | +} | ||
143 | + | ||
144 | +static void test_multi_mutex(int threads, int seconds) | ||
145 | +{ | ||
146 | + int i; | ||
147 | + | ||
148 | + qemu_mutex_init(&mutex); | ||
149 | + counter = 0; | ||
150 | + atomic_counter = 0; | ||
151 | + now_stopping = false; | ||
152 | + | ||
153 | + create_aio_contexts(); | ||
154 | + assert(threads <= NUM_CONTEXTS); | ||
155 | + running = threads; | ||
156 | + for (i = 0; i < threads; i++) { | ||
157 | + Coroutine *co1 = qemu_coroutine_create(test_multi_mutex_entry, NULL); | ||
158 | + aio_co_schedule(ctx[i], co1); | ||
159 | + } | ||
160 | + | ||
161 | + g_usleep(seconds * 1000000); | ||
162 | + | ||
163 | + atomic_mb_set(&now_stopping, true); | ||
164 | + while (running > 0) { | ||
165 | + g_usleep(100000); | ||
166 | + } | ||
167 | + | ||
168 | + join_aio_contexts(); | ||
169 | + g_test_message("%d iterations/second\n", counter / seconds); | ||
170 | + g_assert_cmpint(counter, ==, atomic_counter); | ||
171 | +} | ||
172 | + | ||
173 | +static void test_multi_mutex_1(void) | ||
174 | +{ | ||
175 | + test_multi_mutex(NUM_CONTEXTS, 1); | ||
176 | +} | ||
177 | + | ||
178 | +static void test_multi_mutex_10(void) | ||
179 | +{ | ||
180 | + test_multi_mutex(NUM_CONTEXTS, 10); | ||
181 | +} | ||
182 | + | ||
183 | /* End of tests. */ | ||
184 | |||
185 | int main(int argc, char **argv) | ||
186 | @@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv) | ||
187 | g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_1); | ||
188 | g_test_add_func("/aio/multi/mutex/contended", test_multi_co_mutex_1); | ||
189 | g_test_add_func("/aio/multi/mutex/handoff", test_multi_co_mutex_2_3); | ||
190 | +#ifdef CONFIG_LINUX | ||
191 | + g_test_add_func("/aio/multi/mutex/mcs", test_multi_fair_mutex_1); | ||
192 | +#endif | ||
193 | + g_test_add_func("/aio/multi/mutex/pthread", test_multi_mutex_1); | ||
194 | } else { | ||
195 | g_test_add_func("/aio/multi/schedule", test_multi_co_schedule_10); | ||
196 | g_test_add_func("/aio/multi/mutex/contended", test_multi_co_mutex_10); | ||
197 | g_test_add_func("/aio/multi/mutex/handoff", test_multi_co_mutex_2_30); | ||
198 | +#ifdef CONFIG_LINUX | ||
199 | + g_test_add_func("/aio/multi/mutex/mcs", test_multi_fair_mutex_10); | ||
200 | +#endif | ||
201 | + g_test_add_func("/aio/multi/mutex/pthread", test_multi_mutex_10); | ||
202 | } | ||
203 | return g_test_run(); | ||
102 | } | 204 | } |
103 | #endif | ||
104 | diff --git a/tests/Makefile.include b/tests/Makefile.include | ||
105 | index XXXXXXX..XXXXXXX 100644 | ||
106 | --- a/tests/Makefile.include | ||
107 | +++ b/tests/Makefile.include | ||
108 | @@ -XXX,XX +XXX,XX @@ check-unit-y += tests/rcutorture$(EXESUF) | ||
109 | check-unit-y += tests/test-rcu-list$(EXESUF) | ||
110 | check-unit-y += tests/test-rcu-simpleq$(EXESUF) | ||
111 | check-unit-y += tests/test-rcu-tailq$(EXESUF) | ||
112 | +check-unit-y += tests/test-rcu-slist$(EXESUF) | ||
113 | check-unit-y += tests/test-qdist$(EXESUF) | ||
114 | check-unit-y += tests/test-qht$(EXESUF) | ||
115 | check-unit-y += tests/test-qht-par$(EXESUF) | ||
116 | @@ -XXX,XX +XXX,XX @@ tests/rcutorture$(EXESUF): tests/rcutorture.o $(test-util-obj-y) | ||
117 | tests/test-rcu-list$(EXESUF): tests/test-rcu-list.o $(test-util-obj-y) | ||
118 | tests/test-rcu-simpleq$(EXESUF): tests/test-rcu-simpleq.o $(test-util-obj-y) | ||
119 | tests/test-rcu-tailq$(EXESUF): tests/test-rcu-tailq.o $(test-util-obj-y) | ||
120 | +tests/test-rcu-slist$(EXESUF): tests/test-rcu-slist.o $(test-util-obj-y) | ||
121 | tests/test-qdist$(EXESUF): tests/test-qdist.o $(test-util-obj-y) | ||
122 | tests/test-qht$(EXESUF): tests/test-qht.o $(test-util-obj-y) | ||
123 | tests/test-qht-par$(EXESUF): tests/test-qht-par.o tests/qht-bench$(EXESUF) $(test-util-obj-y) | ||
124 | diff --git a/tests/test-rcu-list.c b/tests/test-rcu-list.c | ||
125 | index XXXXXXX..XXXXXXX 100644 | ||
126 | --- a/tests/test-rcu-list.c | ||
127 | +++ b/tests/test-rcu-list.c | ||
128 | @@ -XXX,XX +XXX,XX @@ struct list_element { | ||
129 | QSIMPLEQ_ENTRY(list_element) entry; | ||
130 | #elif TEST_LIST_TYPE == 3 | ||
131 | QTAILQ_ENTRY(list_element) entry; | ||
132 | +#elif TEST_LIST_TYPE == 4 | ||
133 | + QSLIST_ENTRY(list_element) entry; | ||
134 | #else | ||
135 | #error Invalid TEST_LIST_TYPE | ||
136 | #endif | ||
137 | @@ -XXX,XX +XXX,XX @@ static QTAILQ_HEAD(, list_element) Q_list_head; | ||
138 | #define TEST_LIST_INSERT_HEAD_RCU QTAILQ_INSERT_HEAD_RCU | ||
139 | #define TEST_LIST_FOREACH_RCU QTAILQ_FOREACH_RCU | ||
140 | #define TEST_LIST_FOREACH_SAFE_RCU QTAILQ_FOREACH_SAFE_RCU | ||
141 | + | ||
142 | +#elif TEST_LIST_TYPE == 4 | ||
143 | +static QSLIST_HEAD(, list_element) Q_list_head; | ||
144 | + | ||
145 | +#define TEST_NAME "qslist" | ||
146 | +#define TEST_LIST_REMOVE_RCU(el, f) \ | ||
147 | + QSLIST_REMOVE_RCU(&Q_list_head, el, list_element, f) | ||
148 | + | ||
149 | +#define TEST_LIST_INSERT_AFTER_RCU(list_el, el, f) \ | ||
150 | + QSLIST_INSERT_AFTER_RCU(&Q_list_head, list_el, el, f) | ||
151 | + | ||
152 | +#define TEST_LIST_INSERT_HEAD_RCU QSLIST_INSERT_HEAD_RCU | ||
153 | +#define TEST_LIST_FOREACH_RCU QSLIST_FOREACH_RCU | ||
154 | +#define TEST_LIST_FOREACH_SAFE_RCU QSLIST_FOREACH_SAFE_RCU | ||
155 | #else | ||
156 | #error Invalid TEST_LIST_TYPE | ||
157 | #endif | ||
158 | diff --git a/tests/test-rcu-slist.c b/tests/test-rcu-slist.c | ||
159 | new file mode 100644 | ||
160 | index XXXXXXX..XXXXXXX | ||
161 | --- /dev/null | ||
162 | +++ b/tests/test-rcu-slist.c | ||
163 | @@ -XXX,XX +XXX,XX @@ | ||
164 | +#define TEST_LIST_TYPE 4 | ||
165 | +#include "test-rcu-list.c" | ||
166 | -- | 205 | -- |
167 | 2.24.1 | 206 | 2.9.3 |
168 | 207 | ||
Deleted patch | |||
---|---|---|---|
1 | Don't pass the nanosecond timeout into epoll_wait(), which expects | ||
2 | milliseconds. | ||
3 | 1 | ||
4 | The epoll_wait() timeout value does not matter if qemu_poll_ns() | ||
5 | determined that the poll fd is ready, but passing a value in the wrong | ||
6 | units is still ugly. Pass a 0 timeout to epoll_wait() instead. | ||
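A sketch of the mismatch (variable names illustrative; SCALE_MS is QEMU's
nanoseconds-per-millisecond constant, 1000000):

    int64_t timeout = 1 * SCALE_MS;  /* 1 ms, in the ns units qemu_poll_ns() uses */

    qemu_poll_ns(&pfd, 1, timeout);                  /* ok: nanosecond API        */
    epoll_wait(epollfd, events, maxevents, timeout); /* wrong: read as 1000000 ms */
    epoll_wait(epollfd, events, maxevents, 0);       /* what the patch passes     */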
7 | |||
8 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
9 | Reviewed-by: Sergio Lopez <slp@redhat.com> | ||
10 | Message-id: 20200214171712.541358-3-stefanha@redhat.com | ||
11 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
12 | --- | ||
13 | util/aio-posix.c | 3 +++ | ||
14 | 1 file changed, 3 insertions(+) | ||
15 | |||
16 | diff --git a/util/aio-posix.c b/util/aio-posix.c | ||
17 | index XXXXXXX..XXXXXXX 100644 | ||
18 | --- a/util/aio-posix.c | ||
19 | +++ b/util/aio-posix.c | ||
20 | @@ -XXX,XX +XXX,XX @@ static int aio_epoll(AioContext *ctx, int64_t timeout) | ||
21 | |||
22 | if (timeout > 0) { | ||
23 | ret = qemu_poll_ns(&pfd, 1, timeout); | ||
24 | + if (ret > 0) { | ||
25 | + timeout = 0; | ||
26 | + } | ||
27 | } | ||
28 | if (timeout <= 0 || ret > 0) { | ||
29 | ret = epoll_wait(ctx->epollfd, events, | ||
30 | -- | ||
31 | 2.24.1 | ||
Deleted patch | |||
---|---|---|---|
1 | QLIST_REMOVE() assumes the element is in a list. It also leaves the | ||
2 | element's linked list pointers dangling. | ||
3 | 1 | ||
4 | Introduce a safe version of QLIST_REMOVE() and convert open-coded | ||
5 | instances of this pattern. | ||
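A minimal sketch of the new semantics (the Item type is hypothetical; the
macros come from "qemu/queue.h"): because QLIST_SAFE_REMOVE() NULLs the
element's link pointers, it may be called on an element that is in no
list, and calling it twice is harmless:

    typedef struct Item {
        QLIST_ENTRY(Item) next;
    } Item;
    static QLIST_HEAD(, Item) items = QLIST_HEAD_INITIALIZER(items);

    Item it = {};
    QLIST_SAFE_REMOVE(&it, next);          /* no-op: never inserted     */
    QLIST_INSERT_HEAD(&items, &it, next);
    QLIST_SAFE_REMOVE(&it, next);          /* removed; pointers cleared */
    QLIST_SAFE_REMOVE(&it, next);          /* now a no-op again         */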
6 | |||
7 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
8 | Reviewed-by: Sergio Lopez <slp@redhat.com> | ||
9 | Message-id: 20200214171712.541358-4-stefanha@redhat.com | ||
10 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
11 | --- | ||
12 | block.c | 5 +---- | ||
13 | chardev/spice.c | 4 +--- | ||
14 | include/qemu/queue.h | 14 ++++++++++++++ | ||
15 | 3 files changed, 16 insertions(+), 7 deletions(-) | ||
16 | |||
17 | diff --git a/block.c b/block.c | ||
18 | index XXXXXXX..XXXXXXX 100644 | ||
19 | --- a/block.c | ||
20 | +++ b/block.c | ||
21 | @@ -XXX,XX +XXX,XX @@ BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs, | ||
22 | |||
23 | static void bdrv_detach_child(BdrvChild *child) | ||
24 | { | ||
25 | - if (child->next.le_prev) { | ||
26 | - QLIST_REMOVE(child, next); | ||
27 | - child->next.le_prev = NULL; | ||
28 | - } | ||
29 | + QLIST_SAFE_REMOVE(child, next); | ||
30 | |||
31 | bdrv_replace_child(child, NULL); | ||
32 | |||
33 | diff --git a/chardev/spice.c b/chardev/spice.c | ||
34 | index XXXXXXX..XXXXXXX 100644 | ||
35 | --- a/chardev/spice.c | ||
36 | +++ b/chardev/spice.c | ||
37 | @@ -XXX,XX +XXX,XX @@ static void char_spice_finalize(Object *obj) | ||
38 | |||
39 | vmc_unregister_interface(s); | ||
40 | |||
41 | - if (s->next.le_prev) { | ||
42 | - QLIST_REMOVE(s, next); | ||
43 | - } | ||
44 | + QLIST_SAFE_REMOVE(s, next); | ||
45 | |||
46 | g_free((char *)s->sin.subtype); | ||
47 | g_free((char *)s->sin.portname); | ||
48 | diff --git a/include/qemu/queue.h b/include/qemu/queue.h | ||
49 | index XXXXXXX..XXXXXXX 100644 | ||
50 | --- a/include/qemu/queue.h | ||
51 | +++ b/include/qemu/queue.h | ||
52 | @@ -XXX,XX +XXX,XX @@ struct { \ | ||
53 | *(elm)->field.le_prev = (elm)->field.le_next; \ | ||
54 | } while (/*CONSTCOND*/0) | ||
55 | |||
56 | +/* | ||
57 | + * Like QLIST_REMOVE() but safe to call when elm is not in a list | ||
58 | + */ | ||
59 | +#define QLIST_SAFE_REMOVE(elm, field) do { \ | ||
60 | + if ((elm)->field.le_prev != NULL) { \ | ||
61 | + if ((elm)->field.le_next != NULL) \ | ||
62 | + (elm)->field.le_next->field.le_prev = \ | ||
63 | + (elm)->field.le_prev; \ | ||
64 | + *(elm)->field.le_prev = (elm)->field.le_next; \ | ||
65 | + (elm)->field.le_next = NULL; \ | ||
66 | + (elm)->field.le_prev = NULL; \ | ||
67 | + } \ | ||
68 | +} while (/*CONSTCOND*/0) | ||
69 | + | ||
70 | #define QLIST_FOREACH(var, head, field) \ | ||
71 | for ((var) = ((head)->lh_first); \ | ||
72 | (var); \ | ||
73 | -- | ||
74 | 2.24.1 | ||
Deleted patch | |||
---|---|---|---|
1 | From: Alexander Bulekov <alxndr@bu.edu> | ||
2 | 1 | ||
3 | Move vl.c to a separate directory, similar to linux-user/ | ||
4 | Update the chechpatch and get_maintainer scripts, since they relied on | ||
5 | /vl.c for top_of_tree checks. | ||
6 | |||
7 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
8 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | ||
9 | Message-id: 20200220041118.23264-2-alxndr@bu.edu | ||
10 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
11 | --- | ||
12 | MAINTAINERS | 2 +- | ||
13 | Makefile.objs | 2 -- | ||
14 | Makefile.target | 1 + | ||
15 | scripts/checkpatch.pl | 2 +- | ||
16 | scripts/get_maintainer.pl | 3 ++- | ||
17 | softmmu/Makefile.objs | 2 ++ | ||
18 | vl.c => softmmu/vl.c | 0 | ||
19 | 7 files changed, 7 insertions(+), 5 deletions(-) | ||
20 | create mode 100644 softmmu/Makefile.objs | ||
21 | rename vl.c => softmmu/vl.c (100%) | ||
22 | |||
23 | diff --git a/MAINTAINERS b/MAINTAINERS | ||
24 | index XXXXXXX..XXXXXXX 100644 | ||
25 | --- a/MAINTAINERS | ||
26 | +++ b/MAINTAINERS | ||
27 | @@ -XXX,XX +XXX,XX @@ F: include/qemu/main-loop.h | ||
28 | F: include/sysemu/runstate.h | ||
29 | F: util/main-loop.c | ||
30 | F: util/qemu-timer.c | ||
31 | -F: vl.c | ||
32 | +F: softmmu/vl.c | ||
33 | F: qapi/run-state.json | ||
34 | |||
35 | Human Monitor (HMP) | ||
36 | diff --git a/Makefile.objs b/Makefile.objs | ||
37 | index XXXXXXX..XXXXXXX 100644 | ||
38 | --- a/Makefile.objs | ||
39 | +++ b/Makefile.objs | ||
40 | @@ -XXX,XX +XXX,XX @@ common-obj-y += ui/ | ||
41 | common-obj-m += ui/ | ||
42 | |||
43 | common-obj-y += dma-helpers.o | ||
44 | -common-obj-y += vl.o | ||
45 | -vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS) | ||
46 | common-obj-$(CONFIG_TPM) += tpm.o | ||
47 | |||
48 | common-obj-y += backends/ | ||
49 | diff --git a/Makefile.target b/Makefile.target | ||
50 | index XXXXXXX..XXXXXXX 100644 | ||
51 | --- a/Makefile.target | ||
52 | +++ b/Makefile.target | ||
53 | @@ -XXX,XX +XXX,XX @@ obj-y += qapi/ | ||
54 | obj-y += memory.o | ||
55 | obj-y += memory_mapping.o | ||
56 | obj-y += migration/ram.o | ||
57 | +obj-y += softmmu/ | ||
58 | LIBS := $(libs_softmmu) $(LIBS) | ||
59 | |||
60 | # Hardware support | ||
61 | diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl | ||
62 | index XXXXXXX..XXXXXXX 100755 | ||
63 | --- a/scripts/checkpatch.pl | ||
64 | +++ b/scripts/checkpatch.pl | ||
65 | @@ -XXX,XX +XXX,XX @@ sub top_of_kernel_tree { | ||
66 | my @tree_check = ( | ||
67 | "COPYING", "MAINTAINERS", "Makefile", | ||
68 | "README.rst", "docs", "VERSION", | ||
69 | - "vl.c" | ||
70 | + "linux-user", "softmmu" | ||
71 | ); | ||
72 | |||
73 | foreach my $check (@tree_check) { | ||
74 | diff --git a/scripts/get_maintainer.pl b/scripts/get_maintainer.pl | ||
75 | index XXXXXXX..XXXXXXX 100755 | ||
76 | --- a/scripts/get_maintainer.pl | ||
77 | +++ b/scripts/get_maintainer.pl | ||
78 | @@ -XXX,XX +XXX,XX @@ sub top_of_tree { | ||
79 | && (-f "${lk_path}Makefile") | ||
80 | && (-d "${lk_path}docs") | ||
81 | && (-f "${lk_path}VERSION") | ||
82 | - && (-f "${lk_path}vl.c")) { | ||
83 | + && (-d "${lk_path}linux-user/") | ||
84 | + && (-d "${lk_path}softmmu/")) { | ||
85 | return 1; | ||
86 | } | ||
87 | return 0; | ||
88 | diff --git a/softmmu/Makefile.objs b/softmmu/Makefile.objs | ||
89 | new file mode 100644 | ||
90 | index XXXXXXX..XXXXXXX | ||
91 | --- /dev/null | ||
92 | +++ b/softmmu/Makefile.objs | ||
93 | @@ -XXX,XX +XXX,XX @@ | ||
94 | +obj-y += vl.o | ||
95 | +vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS) | ||
96 | diff --git a/vl.c b/softmmu/vl.c | ||
97 | similarity index 100% | ||
98 | rename from vl.c | ||
99 | rename to softmmu/vl.c | ||
100 | -- | ||
101 | 2.24.1 | ||
Deleted patch | |||
---|---|---|---|
1 | From: Alexander Bulekov <alxndr@bu.edu> | ||
2 | 1 | ||
3 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
4 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
5 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | ||
6 | Message-id: 20200220041118.23264-5-alxndr@bu.edu | ||
7 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
8 | --- | ||
9 | include/qemu/module.h | 4 +++- | ||
10 | 1 file changed, 3 insertions(+), 1 deletion(-) | ||
11 | |||
12 | diff --git a/include/qemu/module.h b/include/qemu/module.h | ||
13 | index XXXXXXX..XXXXXXX 100644 | ||
14 | --- a/include/qemu/module.h | ||
15 | +++ b/include/qemu/module.h | ||
16 | @@ -XXX,XX +XXX,XX @@ typedef enum { | ||
17 | MODULE_INIT_TRACE, | ||
18 | MODULE_INIT_XEN_BACKEND, | ||
19 | MODULE_INIT_LIBQOS, | ||
20 | + MODULE_INIT_FUZZ_TARGET, | ||
21 | MODULE_INIT_MAX | ||
22 | } module_init_type; | ||
23 | |||
24 | @@ -XXX,XX +XXX,XX @@ typedef enum { | ||
25 | #define xen_backend_init(function) module_init(function, \ | ||
26 | MODULE_INIT_XEN_BACKEND) | ||
27 | #define libqos_init(function) module_init(function, MODULE_INIT_LIBQOS) | ||
28 | - | ||
29 | +#define fuzz_target_init(function) module_init(function, \ | ||
30 | + MODULE_INIT_FUZZ_TARGET) | ||
31 | #define block_module_load_one(lib) module_load_one("block-", lib) | ||
32 | #define ui_module_load_one(lib) module_load_one("ui-", lib) | ||
33 | #define audio_module_load_one(lib) module_load_one("audio-", lib) | ||
34 | -- | ||
35 | 2.24.1 | ||
Deleted patch | |||
---|---|---|---|
1 | From: Alexander Bulekov <alxndr@bu.edu> | ||
2 | 1 | ||
3 | qtest_server_send is a function pointer specifying the handler used to | ||
4 | transmit data to the qtest client. In the standard configuration, this | ||
5 | calls the CharBackend handler, but now it is possible for other types of | ||
6 | handlers, e.g. direct function calls if the qtest client and server
7 | exist within the same process (inproc).
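For illustration (the handler and buffer names are hypothetical;
qtest_server_set_send_handler() is the API added below), an in-process
setup could capture server responses with a plain callback:

    static void inproc_qtest_send(void *opaque, const char *str)
    {
        /* hand the response straight to the in-process client */
        g_string_append((GString *)opaque, str);
    }

    /* Register before qtest_server_init(); the default CharBackend
     * handler is only installed when no handler has been set yet. */
    qtest_server_set_send_handler(inproc_qtest_send, reply_buffer);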
8 | |||
9 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
10 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
11 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | ||
12 | Acked-by: Thomas Huth <thuth@redhat.com> | ||
13 | Message-id: 20200220041118.23264-6-alxndr@bu.edu | ||
14 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
15 | --- | ||
16 | include/sysemu/qtest.h | 3 +++ | ||
17 | qtest.c | 18 ++++++++++++++++-- | ||
18 | 2 files changed, 19 insertions(+), 2 deletions(-) | ||
19 | |||
20 | diff --git a/include/sysemu/qtest.h b/include/sysemu/qtest.h | ||
21 | index XXXXXXX..XXXXXXX 100644 | ||
22 | --- a/include/sysemu/qtest.h | ||
23 | +++ b/include/sysemu/qtest.h | ||
24 | @@ -XXX,XX +XXX,XX @@ bool qtest_driver(void); | ||
25 | |||
26 | void qtest_server_init(const char *qtest_chrdev, const char *qtest_log, Error **errp); | ||
27 | |||
28 | +void qtest_server_set_send_handler(void (*send)(void *, const char *), | ||
29 | + void *opaque); | ||
30 | + | ||
31 | #endif | ||
32 | diff --git a/qtest.c b/qtest.c | ||
33 | index XXXXXXX..XXXXXXX 100644 | ||
34 | --- a/qtest.c | ||
35 | +++ b/qtest.c | ||
36 | @@ -XXX,XX +XXX,XX @@ static GString *inbuf; | ||
37 | static int irq_levels[MAX_IRQ]; | ||
38 | static qemu_timeval start_time; | ||
39 | static bool qtest_opened; | ||
40 | +static void (*qtest_server_send)(void*, const char*); | ||
41 | +static void *qtest_server_send_opaque; | ||
42 | |||
43 | #define FMT_timeval "%ld.%06ld" | ||
44 | |||
45 | @@ -XXX,XX +XXX,XX @@ static void GCC_FMT_ATTR(1, 2) qtest_log_send(const char *fmt, ...) | ||
46 | va_end(ap); | ||
47 | } | ||
48 | |||
49 | -static void do_qtest_send(CharBackend *chr, const char *str, size_t len) | ||
50 | +static void qtest_server_char_be_send(void *opaque, const char *str) | ||
51 | { | ||
52 | + size_t len = strlen(str); | ||
53 | + CharBackend *chr = (CharBackend *)opaque;
54 | qemu_chr_fe_write_all(chr, (uint8_t *)str, len); | ||
55 | if (qtest_log_fp && qtest_opened) { | ||
56 | fprintf(qtest_log_fp, "%s", str); | ||
57 | @@ -XXX,XX +XXX,XX @@ static void do_qtest_send(CharBackend *chr, const char *str, size_t len) | ||
58 | |||
59 | static void qtest_send(CharBackend *chr, const char *str) | ||
60 | { | ||
61 | - do_qtest_send(chr, str, strlen(str)); | ||
62 | + qtest_server_send(qtest_server_send_opaque, str); | ||
63 | } | ||
64 | |||
65 | static void GCC_FMT_ATTR(2, 3) qtest_sendf(CharBackend *chr, | ||
66 | @@ -XXX,XX +XXX,XX @@ void qtest_server_init(const char *qtest_chrdev, const char *qtest_log, Error ** | ||
67 | qemu_chr_fe_set_echo(&qtest_chr, true); | ||
68 | |||
69 | inbuf = g_string_new(""); | ||
70 | + | ||
71 | + if (!qtest_server_send) { | ||
72 | + qtest_server_set_send_handler(qtest_server_char_be_send, &qtest_chr); | ||
73 | + } | ||
74 | +} | ||
75 | + | ||
76 | +void qtest_server_set_send_handler(void (*send)(void*, const char*), void *opaque) | ||
77 | +{ | ||
78 | + qtest_server_send = send; | ||
79 | + qtest_server_send_opaque = opaque; | ||
80 | } | ||
81 | |||
82 | bool qtest_driver(void) | ||
83 | -- | ||
84 | 2.24.1 | ||
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 3 | This will avoid forward references in the next patch. It is also |
4 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 4 | more logical because CoQueue is not anymore the basic primitive. |
5 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 5 | |
6 | Message-id: 20200220041118.23264-18-alxndr@bu.edu | 6 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
7 | Reviewed-by: Fam Zheng <famz@redhat.com> | ||
8 | Message-id: 20170213181244.16297-5-pbonzini@redhat.com | ||
7 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 9 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
8 | --- | 10 | --- |
9 | Makefile | 15 ++++++++++++++- | 11 | include/qemu/coroutine.h | 89 ++++++++++++++++++++++++------------------------ |
10 | Makefile.target | 16 ++++++++++++++++ | 12 | 1 file changed, 44 insertions(+), 45 deletions(-) |
11 | 2 files changed, 30 insertions(+), 1 deletion(-) | ||
12 | 13 | ||
13 | diff --git a/Makefile b/Makefile | 14 | diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h |
14 | index XXXXXXX..XXXXXXX 100644 | 15 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/Makefile | 16 | --- a/include/qemu/coroutine.h |
16 | +++ b/Makefile | 17 | +++ b/include/qemu/coroutine.h |
17 | @@ -XXX,XX +XXX,XX @@ config-host.h-timestamp: config-host.mak | 18 | @@ -XXX,XX +XXX,XX @@ bool qemu_in_coroutine(void); |
18 | qemu-options.def: $(SRC_PATH)/qemu-options.hx $(SRC_PATH)/scripts/hxtool | 19 | */ |
19 | $(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@,"GEN","$@") | 20 | bool qemu_coroutine_entered(Coroutine *co); |
20 | 21 | ||
21 | -TARGET_DIRS_RULES := $(foreach t, all clean install, $(addsuffix /$(t), $(TARGET_DIRS))) | 22 | - |
22 | +TARGET_DIRS_RULES := $(foreach t, all fuzz clean install, $(addsuffix /$(t), $(TARGET_DIRS))) | 23 | -/** |
23 | 24 | - * CoQueues are a mechanism to queue coroutines in order to continue executing | |
24 | SOFTMMU_ALL_RULES=$(filter %-softmmu/all, $(TARGET_DIRS_RULES)) | 25 | - * them later. They provide the fundamental primitives on which coroutine locks |
25 | $(SOFTMMU_ALL_RULES): $(authz-obj-y) | 26 | - * are built. |
26 | @@ -XXX,XX +XXX,XX @@ ifdef DECOMPRESS_EDK2_BLOBS | 27 | - */ |
27 | $(SOFTMMU_ALL_RULES): $(edk2-decompressed) | 28 | -typedef struct CoQueue { |
28 | endif | 29 | - QSIMPLEQ_HEAD(, Coroutine) entries; |
29 | 30 | -} CoQueue; | |
30 | +SOFTMMU_FUZZ_RULES=$(filter %-softmmu/fuzz, $(TARGET_DIRS_RULES)) | 31 | - |
31 | +$(SOFTMMU_FUZZ_RULES): $(authz-obj-y) | 32 | -/** |
32 | +$(SOFTMMU_FUZZ_RULES): $(block-obj-y) | 33 | - * Initialise a CoQueue. This must be called before any other operation is used |
33 | +$(SOFTMMU_FUZZ_RULES): $(chardev-obj-y) | 34 | - * on the CoQueue. |
34 | +$(SOFTMMU_FUZZ_RULES): $(crypto-obj-y) | 35 | - */ |
35 | +$(SOFTMMU_FUZZ_RULES): $(io-obj-y) | 36 | -void qemu_co_queue_init(CoQueue *queue); |
36 | +$(SOFTMMU_FUZZ_RULES): config-all-devices.mak | 37 | - |
37 | +$(SOFTMMU_FUZZ_RULES): $(edk2-decompressed) | 38 | -/** |
39 | - * Adds the current coroutine to the CoQueue and transfers control to the | ||
40 | - * caller of the coroutine. | ||
41 | - */ | ||
42 | -void coroutine_fn qemu_co_queue_wait(CoQueue *queue); | ||
43 | - | ||
44 | -/** | ||
45 | - * Restarts the next coroutine in the CoQueue and removes it from the queue. | ||
46 | - * | ||
47 | - * Returns true if a coroutine was restarted, false if the queue is empty. | ||
48 | - */ | ||
49 | -bool coroutine_fn qemu_co_queue_next(CoQueue *queue); | ||
50 | - | ||
51 | -/** | ||
52 | - * Restarts all coroutines in the CoQueue and leaves the queue empty. | ||
53 | - */ | ||
54 | -void coroutine_fn qemu_co_queue_restart_all(CoQueue *queue); | ||
55 | - | ||
56 | -/** | ||
57 | - * Enter the next coroutine in the queue | ||
58 | - */ | ||
59 | -bool qemu_co_enter_next(CoQueue *queue); | ||
60 | - | ||
61 | -/** | ||
62 | - * Checks if the CoQueue is empty. | ||
63 | - */ | ||
64 | -bool qemu_co_queue_empty(CoQueue *queue); | ||
65 | - | ||
66 | - | ||
67 | /** | ||
68 | * Provides a mutex that can be used to synchronise coroutines | ||
69 | */ | ||
70 | @@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex); | ||
71 | */ | ||
72 | void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex); | ||
73 | |||
38 | + | 74 | + |
39 | .PHONY: $(TARGET_DIRS_RULES) | 75 | +/** |
40 | # The $(TARGET_DIRS_RULES) are of the form SUBDIR/GOAL, so that | 76 | + * CoQueues are a mechanism to queue coroutines in order to continue executing |
41 | # $(dir $@) yields the sub-directory, and $(notdir $@) yields the sub-goal | 77 | + * them later. |
42 | @@ -XXX,XX +XXX,XX @@ subdir-slirp: slirp/all | 78 | + */ |
43 | $(filter %/all, $(TARGET_DIRS_RULES)): libqemuutil.a $(common-obj-y) \ | 79 | +typedef struct CoQueue { |
44 | $(qom-obj-y) | 80 | + QSIMPLEQ_HEAD(, Coroutine) entries; |
45 | 81 | +} CoQueue; | |
46 | +$(filter %/fuzz, $(TARGET_DIRS_RULES)): libqemuutil.a $(common-obj-y) \ | ||
47 | + $(qom-obj-y) $(crypto-user-obj-$(CONFIG_USER_ONLY)) | ||
48 | + | 82 | + |
49 | ROM_DIRS = $(addprefix pc-bios/, $(ROMS)) | 83 | +/** |
50 | ROM_DIRS_RULES=$(foreach t, all clean, $(addsuffix /$(t), $(ROM_DIRS))) | 84 | + * Initialise a CoQueue. This must be called before any other operation is used |
51 | # Only keep -O and -g cflags | 85 | + * on the CoQueue. |
52 | @@ -XXX,XX +XXX,XX @@ $(ROM_DIRS_RULES): | 86 | + */ |
53 | 87 | +void qemu_co_queue_init(CoQueue *queue); | |
54 | .PHONY: recurse-all recurse-clean recurse-install | ||
55 | recurse-all: $(addsuffix /all, $(TARGET_DIRS) $(ROM_DIRS)) | ||
56 | +recurse-fuzz: $(addsuffix /fuzz, $(TARGET_DIRS) $(ROM_DIRS)) | ||
57 | recurse-clean: $(addsuffix /clean, $(TARGET_DIRS) $(ROM_DIRS)) | ||
58 | recurse-install: $(addsuffix /install, $(TARGET_DIRS)) | ||
59 | $(addsuffix /install, $(TARGET_DIRS)): all | ||
60 | diff --git a/Makefile.target b/Makefile.target | ||
61 | index XXXXXXX..XXXXXXX 100644 | ||
62 | --- a/Makefile.target | ||
63 | +++ b/Makefile.target | ||
64 | @@ -XXX,XX +XXX,XX @@ ifdef CONFIG_TRACE_SYSTEMTAP | ||
65 | rm -f *.stp | ||
66 | endif | ||
67 | |||
68 | +ifdef CONFIG_FUZZ | ||
69 | +include $(SRC_PATH)/tests/qtest/fuzz/Makefile.include | ||
70 | +include $(SRC_PATH)/tests/qtest/Makefile.include | ||
71 | + | 88 | + |
72 | +fuzz: fuzz-vars | 89 | +/** |
73 | +fuzz-vars: QEMU_CFLAGS := $(FUZZ_CFLAGS) $(QEMU_CFLAGS) | 90 | + * Adds the current coroutine to the CoQueue and transfers control to the |
74 | +fuzz-vars: QEMU_LDFLAGS := $(FUZZ_LDFLAGS) $(QEMU_LDFLAGS) | 91 | + * caller of the coroutine. |
75 | +fuzz-vars: $(QEMU_PROG_FUZZ) | 92 | + */ |
76 | +dummy := $(call unnest-vars,, fuzz-obj-y) | 93 | +void coroutine_fn qemu_co_queue_wait(CoQueue *queue); |
94 | + | ||
95 | +/** | ||
96 | + * Restarts the next coroutine in the CoQueue and removes it from the queue. | ||
97 | + * | ||
98 | + * Returns true if a coroutine was restarted, false if the queue is empty. | ||
99 | + */ | ||
100 | +bool coroutine_fn qemu_co_queue_next(CoQueue *queue); | ||
101 | + | ||
102 | +/** | ||
103 | + * Restarts all coroutines in the CoQueue and leaves the queue empty. | ||
104 | + */ | ||
105 | +void coroutine_fn qemu_co_queue_restart_all(CoQueue *queue); | ||
106 | + | ||
107 | +/** | ||
108 | + * Enter the next coroutine in the queue | ||
109 | + */ | ||
110 | +bool qemu_co_enter_next(CoQueue *queue); | ||
111 | + | ||
112 | +/** | ||
113 | + * Checks if the CoQueue is empty. | ||
114 | + */ | ||
115 | +bool qemu_co_queue_empty(CoQueue *queue); | ||
77 | + | 116 | + |
78 | + | 117 | + |
79 | +$(QEMU_PROG_FUZZ): config-devices.mak $(all-obj-y) $(COMMON_LDADDS) $(fuzz-obj-y) | 118 | typedef struct CoRwlock { |
80 | + $(call LINK, $(filter-out %.mak, $^)) | 119 | bool writer; |
81 | + | 120 | int reader; |
82 | +endif | ||
83 | + | ||
84 | install: all | ||
85 | ifneq ($(PROGS),) | ||
86 | $(call install-prog,$(PROGS),$(DESTDIR)$(bindir)) | ||
87 | -- | 121 | -- |
88 | 2.24.1 | 122 | 2.9.3 |
89 | 123 | ||
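
As a rough sketch of how the CoQueue API above is consumed — not code from this series; wait_for_data(), push_data() and have_data are invented names, and the file only builds inside the QEMU tree against "qemu/coroutine.h":

#include "qemu/osdep.h"
#include "qemu/coroutine.h"

static CoQueue data_ready;
static bool have_data;

static void example_init(void)
{
    /* Must run before any other operation on the queue. */
    qemu_co_queue_init(&data_ready);
}

static void coroutine_fn wait_for_data(void)
{
    /* Re-check the condition in a loop: being restarted only means a
     * producer ran, not that the condition still holds by the time this
     * coroutine is re-entered. */
    while (!have_data) {
        qemu_co_queue_wait(&data_ready);
    }
}

static void push_data(void)
{
    have_data = true;
    /* From non-coroutine context use qemu_co_enter_next(); it returns
     * false (and does nothing) when the queue is empty. */
    qemu_co_enter_next(&data_ready);
}

Note the one-argument qemu_co_queue_wait() matches the header as of this patch; the following patch in the series grows a CoMutex parameter, and the loop structure stays the same.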
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | When using qtest "in-process" communication, qtest_sendf directly calls | 3 | All that CoQueue needs in order to become thread-safe is help |
4 | a function in the server (qtest.c). Previously, bufwrite used | 4 | from an external mutex. Add this to the API. |
5 | socket_send, which bypasses the TransportOps that enable the call into | 5 | |
6 | qtest.c. This change replaces the socket_send calls with ops->send, | 6 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
7 | maintaining the benefits of the direct socket_send call, while adding | 7 | Reviewed-by: Fam Zheng <famz@redhat.com> |
8 | support for in-process qtest calls. | 8 | Message-id: 20170213181244.16297-6-pbonzini@redhat.com |
9 | |||
10 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
11 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
12 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | ||
13 | Message-id: 20200220041118.23264-8-alxndr@bu.edu | ||
14 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 9 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
15 | --- | 10 | --- |
16 | tests/qtest/libqtest.c | 71 ++++++++++++++++++++++++++++++++++++++++-- | 11 | include/qemu/coroutine.h | 8 +++++--- |
17 | tests/qtest/libqtest.h | 4 +++ | 12 | block/backup.c | 2 +- |
18 | 2 files changed, 73 insertions(+), 2 deletions(-) | 13 | block/io.c | 4 ++-- |
19 | 14 | block/nbd-client.c | 2 +- | |
20 | diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c | 15 | block/qcow2-cluster.c | 4 +--- |
21 | index XXXXXXX..XXXXXXX 100644 | 16 | block/sheepdog.c | 2 +- |
22 | --- a/tests/qtest/libqtest.c | 17 | block/throttle-groups.c | 2 +- |
23 | +++ b/tests/qtest/libqtest.c | 18 | hw/9pfs/9p.c | 2 +- |
24 | @@ -XXX,XX +XXX,XX @@ | 19 | util/qemu-coroutine-lock.c | 24 +++++++++++++++++++++--- |
25 | 20 | 9 files changed, 34 insertions(+), 16 deletions(-) | |
26 | 21 | ||
27 | typedef void (*QTestSendFn)(QTestState *s, const char *buf); | 22 | diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h |
28 | +typedef void (*ExternalSendFn)(void *s, const char *buf); | 23 | index XXXXXXX..XXXXXXX 100644 |
29 | typedef GString* (*QTestRecvFn)(QTestState *); | 24 | --- a/include/qemu/coroutine.h |
30 | 25 | +++ b/include/qemu/coroutine.h | |
31 | typedef struct QTestClientTransportOps { | 26 | @@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex); |
32 | QTestSendFn send; /* for sending qtest commands */ | 27 | |
28 | /** | ||
29 | * CoQueues are a mechanism to queue coroutines in order to continue executing | ||
30 | - * them later. | ||
31 | + * them later. They are similar to condition variables, but they need help | ||
32 | + * from an external mutex in order to maintain thread-safety. | ||
33 | */ | ||
34 | typedef struct CoQueue { | ||
35 | QSIMPLEQ_HEAD(, Coroutine) entries; | ||
36 | @@ -XXX,XX +XXX,XX @@ void qemu_co_queue_init(CoQueue *queue); | ||
37 | |||
38 | /** | ||
39 | * Adds the current coroutine to the CoQueue and transfers control to the | ||
40 | - * caller of the coroutine. | ||
41 | + * caller of the coroutine. The mutex is unlocked during the wait and | ||
42 | + * locked again afterwards. | ||
43 | */ | ||
44 | -void coroutine_fn qemu_co_queue_wait(CoQueue *queue); | ||
45 | +void coroutine_fn qemu_co_queue_wait(CoQueue *queue, CoMutex *mutex); | ||
46 | |||
47 | /** | ||
48 | * Restarts the next coroutine in the CoQueue and removes it from the queue. | ||
49 | diff --git a/block/backup.c b/block/backup.c | ||
50 | index XXXXXXX..XXXXXXX 100644 | ||
51 | --- a/block/backup.c | ||
52 | +++ b/block/backup.c | ||
53 | @@ -XXX,XX +XXX,XX @@ static void coroutine_fn wait_for_overlapping_requests(BackupBlockJob *job, | ||
54 | retry = false; | ||
55 | QLIST_FOREACH(req, &job->inflight_reqs, list) { | ||
56 | if (end > req->start && start < req->end) { | ||
57 | - qemu_co_queue_wait(&req->wait_queue); | ||
58 | + qemu_co_queue_wait(&req->wait_queue, NULL); | ||
59 | retry = true; | ||
60 | break; | ||
61 | } | ||
62 | diff --git a/block/io.c b/block/io.c | ||
63 | index XXXXXXX..XXXXXXX 100644 | ||
64 | --- a/block/io.c | ||
65 | +++ b/block/io.c | ||
66 | @@ -XXX,XX +XXX,XX @@ static bool coroutine_fn wait_serialising_requests(BdrvTrackedRequest *self) | ||
67 | * (instead of producing a deadlock in the former case). */ | ||
68 | if (!req->waiting_for) { | ||
69 | self->waiting_for = req; | ||
70 | - qemu_co_queue_wait(&req->wait_queue); | ||
71 | + qemu_co_queue_wait(&req->wait_queue, NULL); | ||
72 | self->waiting_for = NULL; | ||
73 | retry = true; | ||
74 | waited = true; | ||
75 | @@ -XXX,XX +XXX,XX @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs) | ||
76 | |||
77 | /* Wait until any previous flushes are completed */ | ||
78 | while (bs->active_flush_req) { | ||
79 | - qemu_co_queue_wait(&bs->flush_queue); | ||
80 | + qemu_co_queue_wait(&bs->flush_queue, NULL); | ||
81 | } | ||
82 | |||
83 | bs->active_flush_req = true; | ||
84 | diff --git a/block/nbd-client.c b/block/nbd-client.c | ||
85 | index XXXXXXX..XXXXXXX 100644 | ||
86 | --- a/block/nbd-client.c | ||
87 | +++ b/block/nbd-client.c | ||
88 | @@ -XXX,XX +XXX,XX @@ static void nbd_coroutine_start(NBDClientSession *s, | ||
89 | /* Poor man's semaphore. The free_sema is locked when no other request | ||
90 | * can be accepted, and unlocked after receiving one reply. */ | ||
91 | if (s->in_flight == MAX_NBD_REQUESTS) { | ||
92 | - qemu_co_queue_wait(&s->free_sema); | ||
93 | + qemu_co_queue_wait(&s->free_sema, NULL); | ||
94 | assert(s->in_flight < MAX_NBD_REQUESTS); | ||
95 | } | ||
96 | s->in_flight++; | ||
97 | diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c | ||
98 | index XXXXXXX..XXXXXXX 100644 | ||
99 | --- a/block/qcow2-cluster.c | ||
100 | +++ b/block/qcow2-cluster.c | ||
101 | @@ -XXX,XX +XXX,XX @@ static int handle_dependencies(BlockDriverState *bs, uint64_t guest_offset, | ||
102 | if (bytes == 0) { | ||
103 | /* Wait for the dependency to complete. We need to recheck | ||
104 | * the free/allocated clusters when we continue. */ | ||
105 | - qemu_co_mutex_unlock(&s->lock); | ||
106 | - qemu_co_queue_wait(&old_alloc->dependent_requests); | ||
107 | - qemu_co_mutex_lock(&s->lock); | ||
108 | + qemu_co_queue_wait(&old_alloc->dependent_requests, &s->lock); | ||
109 | return -EAGAIN; | ||
110 | } | ||
111 | } | ||
112 | diff --git a/block/sheepdog.c b/block/sheepdog.c | ||
113 | index XXXXXXX..XXXXXXX 100644 | ||
114 | --- a/block/sheepdog.c | ||
115 | +++ b/block/sheepdog.c | ||
116 | @@ -XXX,XX +XXX,XX @@ static void wait_for_overlapping_aiocb(BDRVSheepdogState *s, SheepdogAIOCB *acb) | ||
117 | retry: | ||
118 | QLIST_FOREACH(cb, &s->inflight_aiocb_head, aiocb_siblings) { | ||
119 | if (AIOCBOverlapping(acb, cb)) { | ||
120 | - qemu_co_queue_wait(&s->overlapping_queue); | ||
121 | + qemu_co_queue_wait(&s->overlapping_queue, NULL); | ||
122 | goto retry; | ||
123 | } | ||
124 | } | ||
125 | diff --git a/block/throttle-groups.c b/block/throttle-groups.c | ||
126 | index XXXXXXX..XXXXXXX 100644 | ||
127 | --- a/block/throttle-groups.c | ||
128 | +++ b/block/throttle-groups.c | ||
129 | @@ -XXX,XX +XXX,XX @@ void coroutine_fn throttle_group_co_io_limits_intercept(BlockBackend *blk, | ||
130 | if (must_wait || blkp->pending_reqs[is_write]) { | ||
131 | blkp->pending_reqs[is_write]++; | ||
132 | qemu_mutex_unlock(&tg->lock); | ||
133 | - qemu_co_queue_wait(&blkp->throttled_reqs[is_write]); | ||
134 | + qemu_co_queue_wait(&blkp->throttled_reqs[is_write], NULL); | ||
135 | qemu_mutex_lock(&tg->lock); | ||
136 | blkp->pending_reqs[is_write]--; | ||
137 | } | ||
138 | diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c | ||
139 | index XXXXXXX..XXXXXXX 100644 | ||
140 | --- a/hw/9pfs/9p.c | ||
141 | +++ b/hw/9pfs/9p.c | ||
142 | @@ -XXX,XX +XXX,XX @@ static void coroutine_fn v9fs_flush(void *opaque) | ||
143 | /* | ||
144 | * Wait for pdu to complete. | ||
145 | */ | ||
146 | - qemu_co_queue_wait(&cancel_pdu->complete); | ||
147 | + qemu_co_queue_wait(&cancel_pdu->complete, NULL); | ||
148 | cancel_pdu->cancelled = 0; | ||
149 | pdu_free(cancel_pdu); | ||
150 | } | ||
151 | diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c | ||
152 | index XXXXXXX..XXXXXXX 100644 | ||
153 | --- a/util/qemu-coroutine-lock.c | ||
154 | +++ b/util/qemu-coroutine-lock.c | ||
155 | @@ -XXX,XX +XXX,XX @@ void qemu_co_queue_init(CoQueue *queue) | ||
156 | QSIMPLEQ_INIT(&queue->entries); | ||
157 | } | ||
158 | |||
159 | -void coroutine_fn qemu_co_queue_wait(CoQueue *queue) | ||
160 | +void coroutine_fn qemu_co_queue_wait(CoQueue *queue, CoMutex *mutex) | ||
161 | { | ||
162 | Coroutine *self = qemu_coroutine_self(); | ||
163 | QSIMPLEQ_INSERT_TAIL(&queue->entries, self, co_queue_next); | ||
33 | + | 164 | + |
34 | + /* | 165 | + if (mutex) { |
35 | + * Use external_send to send qtest command strings through functions that | 166 | + qemu_co_mutex_unlock(mutex);
36 | + * do not accept a QTestState as the first parameter. | ||
37 | + */ | ||
38 | + ExternalSendFn external_send; | ||
39 | + | ||
40 | QTestRecvFn recv_line; /* for receiving qtest command responses */ | ||
41 | } QTestTransportOps; | ||
42 | |||
43 | @@ -XXX,XX +XXX,XX @@ void qtest_bufwrite(QTestState *s, uint64_t addr, const void *data, size_t size) | ||
44 | |||
45 | bdata = g_base64_encode(data, size); | ||
46 | qtest_sendf(s, "b64write 0x%" PRIx64 " 0x%zx ", addr, size); | ||
47 | - socket_send(s->fd, bdata, strlen(bdata)); | ||
48 | - socket_send(s->fd, "\n", 1); | ||
49 | + s->ops.send(s, bdata); | ||
50 | + s->ops.send(s, "\n"); | ||
51 | qtest_rsp(s, 0); | ||
52 | g_free(bdata); | ||
53 | } | ||
54 | @@ -XXX,XX +XXX,XX @@ static void qtest_client_set_rx_handler(QTestState *s, QTestRecvFn recv) | ||
55 | { | ||
56 | s->ops.recv_line = recv; | ||
57 | } | ||
58 | +/* A type-safe wrapper for s->send() */ | ||
59 | +static void send_wrapper(QTestState *s, const char *buf) | ||
60 | +{ | ||
61 | + s->ops.external_send(s, buf); | ||
62 | +} | ||
63 | + | ||
64 | +static GString *qtest_client_inproc_recv_line(QTestState *s) | ||
65 | +{ | ||
66 | + GString *line; | ||
67 | + size_t offset; | ||
68 | + char *eol; | ||
69 | + | ||
70 | + eol = strchr(s->rx->str, '\n'); | ||
71 | + offset = eol - s->rx->str; | ||
72 | + line = g_string_new_len(s->rx->str, offset); | ||
73 | + g_string_erase(s->rx, 0, offset + 1); | ||
74 | + return line; | ||
75 | +} | ||
76 | + | ||
77 | +QTestState *qtest_inproc_init(QTestState **s, bool log, const char* arch, | ||
78 | + void (*send)(void*, const char*)) | ||
79 | +{ | ||
80 | + QTestState *qts; | ||
81 | + qts = g_new0(QTestState, 1); | ||
82 | + *s = qts; /* Expose qts early on, since the endianness query relies on it */ | ||
83 | + qts->wstatus = 0; | ||
84 | + for (int i = 0; i < MAX_IRQ; i++) { | ||
85 | + qts->irq_level[i] = false; | ||
86 | + } | 167 | + } |
87 | + | 168 | + |
88 | + qtest_client_set_rx_handler(qts, qtest_client_inproc_recv_line); | 169 | + /* There is no race condition here. Other threads will call |
170 | + * aio_co_schedule on our AioContext, which can reenter this | ||
171 | + * coroutine but only after this yield and after the main loop | ||
172 | + * has gone through the next iteration. | ||
173 | + */ | ||
174 | qemu_coroutine_yield(); | ||
175 | assert(qemu_in_coroutine()); | ||
89 | + | 176 | + |
90 | + /* send() may not have a matching prototype, so use a type-safe wrapper */ | 177 | + /* TODO: OSv implements wait morphing here, where the wakeup
91 | + qts->ops.external_send = send; | 178 | + * primitive automatically places the woken coroutine on the |
92 | + qtest_client_set_tx_handler(qts, send_wrapper); | 179 | + * mutex's queue. This avoids the thundering herd effect. |
93 | + | ||
94 | + qts->big_endian = qtest_query_target_endianness(qts); | ||
95 | + | ||
96 | + /* | ||
97 | + * Set a dummy path for QTEST_QEMU_BINARY. Doesn't need to exist, but this | ||
98 | + * way, qtest_get_arch works for inproc qtest. | ||
99 | + */ | 180 | + */ |
100 | + gchar *bin_path = g_strconcat("/qemu-system-", arch, NULL); | 181 | + if (mutex) { |
101 | + setenv("QTEST_QEMU_BINARY", bin_path, 0); | 182 | + qemu_co_mutex_lock(mutex); |
102 | + g_free(bin_path); | ||
103 | + | ||
104 | + return qts; | ||
105 | +} | ||
106 | + | ||
107 | +void qtest_client_inproc_recv(void *opaque, const char *str) | ||
108 | +{ | ||
109 | + QTestState *qts = *(QTestState **)opaque; | ||
110 | + | ||
111 | + if (!qts->rx) { | ||
112 | + qts->rx = g_string_new(NULL); | ||
113 | + } | 183 | + } |
114 | + g_string_append(qts->rx, str); | 184 | } |
115 | + return; | 185 | |
116 | +} | 186 | /** |
117 | diff --git a/tests/qtest/libqtest.h b/tests/qtest/libqtest.h | 187 | @@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_rdlock(CoRwlock *lock) |
118 | index XXXXXXX..XXXXXXX 100644 | 188 | Coroutine *self = qemu_coroutine_self(); |
119 | --- a/tests/qtest/libqtest.h | 189 | |
120 | +++ b/tests/qtest/libqtest.h | 190 | while (lock->writer) { |
121 | @@ -XXX,XX +XXX,XX @@ bool qtest_probe_child(QTestState *s); | 191 | - qemu_co_queue_wait(&lock->queue); |
122 | */ | 192 | + qemu_co_queue_wait(&lock->queue, NULL); |
123 | void qtest_set_expected_status(QTestState *s, int status); | 193 | } |
124 | 194 | lock->reader++; | |
125 | +QTestState *qtest_inproc_init(QTestState **s, bool log, const char* arch, | 195 | self->locks_held++; |
126 | + void (*send)(void*, const char*)); | 196 | @@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_wrlock(CoRwlock *lock) |
127 | + | 197 | Coroutine *self = qemu_coroutine_self(); |
128 | +void qtest_client_inproc_recv(void *opaque, const char *str); | 198 | |
129 | #endif | 199 | while (lock->writer || lock->reader) { |
200 | - qemu_co_queue_wait(&lock->queue); | ||
201 | + qemu_co_queue_wait(&lock->queue, NULL); | ||
202 | } | ||
203 | lock->writer = true; | ||
204 | self->locks_held++; | ||
130 | -- | 205 | -- |
131 | 2.24.1 | 206 | 2.9.3 |
132 | 207 | ||
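
The new second argument makes the unlock/wait/relock dance safe from the caller's point of view, as the qcow2 hunk above (the one non-NULL caller in this patch) shows. A sketch of the resulting pattern — guarded_wait(), state_lock, state_queue and cond are invented names:

#include "qemu/osdep.h"
#include "qemu/coroutine.h"

static CoMutex state_lock;
static CoQueue state_queue;
static bool cond;

static void coroutine_fn guarded_wait(void)
{
    qemu_co_mutex_lock(&state_lock);
    while (!cond) {
        /*
         * Replaces the open-coded sequence
         *     qemu_co_mutex_unlock(&state_lock);
         *     qemu_co_queue_wait(&state_queue);
         *     qemu_co_mutex_lock(&state_lock);
         * Because the coroutine is queued before the mutex is dropped
         * (see the QSIMPLEQ_INSERT_TAIL ordering above), a wakeup
         * arriving from another thread in between cannot be lost.
         */
        qemu_co_queue_wait(&state_queue, &state_lock);
    }
    /* cond holds here, with state_lock held. */
    qemu_co_mutex_unlock(&state_lock);
}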
Deleted patch | |||
---|---|---|---|
1 | From: Alexander Bulekov <alxndr@bu.edu> | ||
2 | 1 | ||
3 | The handler allows a qtest client to send commands to the server by | ||
4 | directly calling a function, rather than using a file/CharBackend. | ||
5 | |||
6 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
7 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
8 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | ||
9 | Message-id: 20200220041118.23264-9-alxndr@bu.edu | ||
10 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
11 | --- | ||
12 | include/sysemu/qtest.h | 1 + | ||
13 | qtest.c | 13 +++++++++++++ | ||
14 | 2 files changed, 14 insertions(+) | ||
15 | |||
16 | diff --git a/include/sysemu/qtest.h b/include/sysemu/qtest.h | ||
17 | index XXXXXXX..XXXXXXX 100644 | ||
18 | --- a/include/sysemu/qtest.h | ||
19 | +++ b/include/sysemu/qtest.h | ||
20 | @@ -XXX,XX +XXX,XX @@ void qtest_server_init(const char *qtest_chrdev, const char *qtest_log, Error ** | ||
21 | |||
22 | void qtest_server_set_send_handler(void (*send)(void *, const char *), | ||
23 | void *opaque); | ||
24 | +void qtest_server_inproc_recv(void *opaque, const char *buf); | ||
25 | |||
26 | #endif | ||
27 | diff --git a/qtest.c b/qtest.c | ||
28 | index XXXXXXX..XXXXXXX 100644 | ||
29 | --- a/qtest.c | ||
30 | +++ b/qtest.c | ||
31 | @@ -XXX,XX +XXX,XX @@ bool qtest_driver(void) | ||
32 | { | ||
33 | return qtest_chr.chr != NULL; | ||
34 | } | ||
35 | + | ||
36 | +void qtest_server_inproc_recv(void *dummy, const char *buf) | ||
37 | +{ | ||
38 | + static GString *gstr; | ||
39 | + if (!gstr) { | ||
40 | + gstr = g_string_new(NULL); | ||
41 | + } | ||
42 | + g_string_append(gstr, buf); | ||
43 | + if (gstr->str[gstr->len - 1] == '\n') { | ||
44 | + qtest_process_inbuf(NULL, gstr); | ||
45 | + g_string_truncate(gstr, 0); | ||
46 | + } | ||
47 | +} | ||
48 | -- | ||
49 | 2.24.1 | ||
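
The server-side handler above is a small line accumulator: each chunk is appended to a persistent GString, and the buffer is dispatched once a chunk ends in a newline. The same pattern as a standalone GLib sketch — process_commands() stands in for qtest_process_inbuf(), and the empty-buffer guard is an added precaution not present in the original:

#include <glib.h>
#include <stdio.h>

static void process_commands(GString *inbuf)
{
    /* Stand-in for qtest_process_inbuf(): just show what arrived. */
    printf("dispatch: %s", inbuf->str);
}

static void inproc_recv(const char *buf)
{
    static GString *gstr;

    if (!gstr) {
        gstr = g_string_new(NULL);
    }
    g_string_append(gstr, buf);
    /* Dispatch only once the sender has completed a line. */
    if (gstr->len > 0 && gstr->str[gstr->len - 1] == '\n') {
        process_commands(gstr);
        g_string_truncate(gstr, 0);
    }
}

int main(void)
{
    inproc_recv("readb 0x10");  /* partial command: stays buffered */
    inproc_recv("0\n");         /* trailing newline completes it */
    return 0;
}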
1 | From: Alexander Bulekov <alxndr@bu.edu> | 1 | From: Paolo Bonzini <pbonzini@redhat.com> |
---|---|---|---|
2 | 2 | ||
3 | The names i2c_send and i2c_recv collide with functions defined in | 3 | This adds a CoMutex around the existing CoQueue. Because the write-side |
4 | hw/i2c/core.c. This causes an error when linking against libqos and | 4 | can just take the CoMutex, the old "writer" field is no longer necessary.
5 | softmmu simultaneously (for example when using qtest inproc). Rename the | 5 | Instead of removing it altogether, count the number of pending writers |
6 | libqos functions to avoid this. | 6 | during a read-side critical section and forbid further readers from |
7 | entering. | ||
7 | 8 | ||
8 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | 9 | Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
9 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | 10 | Reviewed-by: Fam Zheng <famz@redhat.com> |
10 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | 11 | Message-id: 20170213181244.16297-7-pbonzini@redhat.com |
11 | Acked-by: Thomas Huth <thuth@redhat.com> | ||
12 | Message-id: 20200220041118.23264-10-alxndr@bu.edu | ||
13 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | 12 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> |
14 | --- | 13 | --- |
15 | tests/qtest/libqos/i2c.c | 10 +++++----- | 14 | include/qemu/coroutine.h | 3 ++- |
16 | tests/qtest/libqos/i2c.h | 4 ++-- | 15 | util/qemu-coroutine-lock.c | 35 ++++++++++++++++++++++++----------- |
17 | tests/qtest/pca9552-test.c | 10 +++++----- | 16 | 2 files changed, 26 insertions(+), 12 deletions(-) |
18 | 3 files changed, 12 insertions(+), 12 deletions(-) | ||
19 | 17 | ||
20 | diff --git a/tests/qtest/libqos/i2c.c b/tests/qtest/libqos/i2c.c | 18 | diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h |
21 | index XXXXXXX..XXXXXXX 100644 | 19 | index XXXXXXX..XXXXXXX 100644 |
22 | --- a/tests/qtest/libqos/i2c.c | 20 | --- a/include/qemu/coroutine.h |
23 | +++ b/tests/qtest/libqos/i2c.c | 21 | +++ b/include/qemu/coroutine.h |
24 | @@ -XXX,XX +XXX,XX @@ | 22 | @@ -XXX,XX +XXX,XX @@ bool qemu_co_queue_empty(CoQueue *queue); |
25 | #include "libqos/i2c.h" | 23 | |
26 | #include "libqtest.h" | 24 | |
27 | 25 | typedef struct CoRwlock { | |
28 | -void i2c_send(QI2CDevice *i2cdev, const uint8_t *buf, uint16_t len) | 26 | - bool writer; |
29 | +void qi2c_send(QI2CDevice *i2cdev, const uint8_t *buf, uint16_t len) | 27 | + int pending_writer; |
28 | int reader; | ||
29 | + CoMutex mutex; | ||
30 | CoQueue queue; | ||
31 | } CoRwlock; | ||
32 | |||
33 | diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c | ||
34 | index XXXXXXX..XXXXXXX 100644 | ||
35 | --- a/util/qemu-coroutine-lock.c | ||
36 | +++ b/util/qemu-coroutine-lock.c | ||
37 | @@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_init(CoRwlock *lock) | ||
30 | { | 38 | { |
31 | i2cdev->bus->send(i2cdev->bus, i2cdev->addr, buf, len); | 39 | memset(lock, 0, sizeof(*lock)); |
40 | qemu_co_queue_init(&lock->queue); | ||
41 | + qemu_co_mutex_init(&lock->mutex); | ||
32 | } | 42 | } |
33 | 43 | ||
34 | -void i2c_recv(QI2CDevice *i2cdev, uint8_t *buf, uint16_t len) | 44 | void qemu_co_rwlock_rdlock(CoRwlock *lock) |
35 | +void qi2c_recv(QI2CDevice *i2cdev, uint8_t *buf, uint16_t len) | ||
36 | { | 45 | { |
37 | i2cdev->bus->recv(i2cdev->bus, i2cdev->addr, buf, len); | 46 | Coroutine *self = qemu_coroutine_self(); |
47 | |||
48 | - while (lock->writer) { | ||
49 | - qemu_co_queue_wait(&lock->queue, NULL); | ||
50 | + qemu_co_mutex_lock(&lock->mutex); | ||
51 | + /* For fairness, wait if a writer is in line. */ | ||
52 | + while (lock->pending_writer) { | ||
53 | + qemu_co_queue_wait(&lock->queue, &lock->mutex); | ||
54 | } | ||
55 | lock->reader++; | ||
56 | + qemu_co_mutex_unlock(&lock->mutex); | ||
57 | + | ||
58 | + /* The rest of the read-side critical section is run without the mutex. */ | ||
59 | self->locks_held++; | ||
38 | } | 60 | } |
39 | @@ -XXX,XX +XXX,XX @@ void i2c_recv(QI2CDevice *i2cdev, uint8_t *buf, uint16_t len) | 61 | |
40 | void i2c_read_block(QI2CDevice *i2cdev, uint8_t reg, | 62 | @@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_unlock(CoRwlock *lock) |
41 | uint8_t *buf, uint16_t len) | 63 | Coroutine *self = qemu_coroutine_self(); |
64 | |||
65 | assert(qemu_in_coroutine()); | ||
66 | - if (lock->writer) { | ||
67 | - lock->writer = false; | ||
68 | + if (!lock->reader) { | ||
69 | + /* The critical section started in qemu_co_rwlock_wrlock. */ | ||
70 | qemu_co_queue_restart_all(&lock->queue); | ||
71 | } else { | ||
72 | + self->locks_held--; | ||
73 | + | ||
74 | + qemu_co_mutex_lock(&lock->mutex); | ||
75 | lock->reader--; | ||
76 | assert(lock->reader >= 0); | ||
77 | /* Wakeup only one waiting writer */ | ||
78 | @@ -XXX,XX +XXX,XX @@ void qemu_co_rwlock_unlock(CoRwlock *lock) | ||
79 | qemu_co_queue_next(&lock->queue); | ||
80 | } | ||
81 | } | ||
82 | - self->locks_held--; | ||
83 | + qemu_co_mutex_unlock(&lock->mutex); | ||
84 | } | ||
85 | |||
86 | void qemu_co_rwlock_wrlock(CoRwlock *lock) | ||
42 | { | 87 | { |
43 | - i2c_send(i2cdev, ®, 1); | 88 | - Coroutine *self = qemu_coroutine_self(); |
44 | - i2c_recv(i2cdev, buf, len); | 89 | - |
45 | + qi2c_send(i2cdev, ®, 1); | 90 | - while (lock->writer || lock->reader) { |
46 | + qi2c_recv(i2cdev, buf, len); | 91 | - qemu_co_queue_wait(&lock->queue, NULL); |
92 | + qemu_co_mutex_lock(&lock->mutex); | ||
93 | + lock->pending_writer++; | ||
94 | + while (lock->reader) { | ||
95 | + qemu_co_queue_wait(&lock->queue, &lock->mutex); | ||
96 | } | ||
97 | - lock->writer = true; | ||
98 | - self->locks_held++; | ||
99 | + lock->pending_writer--; | ||
100 | + | ||
101 | + /* The rest of the write-side critical section is run with | ||
102 | + * the mutex taken, so that lock->reader remains zero. | ||
103 | + * There is no need to update self->locks_held. | ||
104 | + */ | ||
47 | } | 105 | } |
48 | |||
49 | void i2c_write_block(QI2CDevice *i2cdev, uint8_t reg, | ||
50 | @@ -XXX,XX +XXX,XX @@ void i2c_write_block(QI2CDevice *i2cdev, uint8_t reg, | ||
51 | uint8_t *cmd = g_malloc(len + 1); | ||
52 | cmd[0] = reg; | ||
53 | memcpy(&cmd[1], buf, len); | ||
54 | - i2c_send(i2cdev, cmd, len + 1); | ||
55 | + qi2c_send(i2cdev, cmd, len + 1); | ||
56 | g_free(cmd); | ||
57 | } | ||
58 | |||
59 | diff --git a/tests/qtest/libqos/i2c.h b/tests/qtest/libqos/i2c.h | ||
60 | index XXXXXXX..XXXXXXX 100644 | ||
61 | --- a/tests/qtest/libqos/i2c.h | ||
62 | +++ b/tests/qtest/libqos/i2c.h | ||
63 | @@ -XXX,XX +XXX,XX @@ struct QI2CDevice { | ||
64 | void *i2c_device_create(void *i2c_bus, QGuestAllocator *alloc, void *addr); | ||
65 | void add_qi2c_address(QOSGraphEdgeOptions *opts, QI2CAddress *addr); | ||
66 | |||
67 | -void i2c_send(QI2CDevice *dev, const uint8_t *buf, uint16_t len); | ||
68 | -void i2c_recv(QI2CDevice *dev, uint8_t *buf, uint16_t len); | ||
69 | +void qi2c_send(QI2CDevice *dev, const uint8_t *buf, uint16_t len); | ||
70 | +void qi2c_recv(QI2CDevice *dev, uint8_t *buf, uint16_t len); | ||
71 | |||
72 | void i2c_read_block(QI2CDevice *dev, uint8_t reg, | ||
73 | uint8_t *buf, uint16_t len); | ||
74 | diff --git a/tests/qtest/pca9552-test.c b/tests/qtest/pca9552-test.c | ||
75 | index XXXXXXX..XXXXXXX 100644 | ||
76 | --- a/tests/qtest/pca9552-test.c | ||
77 | +++ b/tests/qtest/pca9552-test.c | ||
78 | @@ -XXX,XX +XXX,XX @@ static void receive_autoinc(void *obj, void *data, QGuestAllocator *alloc) | ||
79 | |||
80 | pca9552_init(i2cdev); | ||
81 | |||
82 | - i2c_send(i2cdev, ®, 1); | ||
83 | + qi2c_send(i2cdev, ®, 1); | ||
84 | |||
85 | /* PCA9552_LS0 */ | ||
86 | - i2c_recv(i2cdev, &resp, 1); | ||
87 | + qi2c_recv(i2cdev, &resp, 1); | ||
88 | g_assert_cmphex(resp, ==, 0x54); | ||
89 | |||
90 | /* PCA9552_LS1 */ | ||
91 | - i2c_recv(i2cdev, &resp, 1); | ||
92 | + qi2c_recv(i2cdev, &resp, 1); | ||
93 | g_assert_cmphex(resp, ==, 0x55); | ||
94 | |||
95 | /* PCA9552_LS2 */ | ||
96 | - i2c_recv(i2cdev, &resp, 1); | ||
97 | + qi2c_recv(i2cdev, &resp, 1); | ||
98 | g_assert_cmphex(resp, ==, 0x55); | ||
99 | |||
100 | /* PCA9552_LS3 */ | ||
101 | - i2c_recv(i2cdev, &resp, 1); | ||
102 | + qi2c_recv(i2cdev, &resp, 1); | ||
103 | g_assert_cmphex(resp, ==, 0x54); | ||
104 | } | ||
105 | |||
106 | -- | 106 | -- |
107 | 2.24.1 | 107 | 2.9.3 |
108 | 108 | ||
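
To see what the pending_writer count buys, consider the usage shape below — a sketch with invented reader()/writer() coroutines around the CoRwlock API as patched above:

#include "qemu/osdep.h"
#include "qemu/coroutine.h"

static CoRwlock shared_lock;    /* qemu_co_rwlock_init(&shared_lock) once */

static void coroutine_fn reader(void *opaque)
{
    /* Blocks while a writer is already in line (pending_writer != 0),
     * so a steady stream of new readers can no longer starve writers. */
    qemu_co_rwlock_rdlock(&shared_lock);
    /* ... read-side critical section; the internal CoMutex is not held ... */
    qemu_co_rwlock_unlock(&shared_lock);
}

static void coroutine_fn writer(void *opaque)
{
    /* Takes the internal CoMutex, then waits for lock->reader to drain;
     * the write-side critical section runs with that mutex held, which
     * is what keeps new readers and writers out until unlock. */
    qemu_co_rwlock_wrlock(&shared_lock);
    /* ... write-side critical section ... */
    qemu_co_rwlock_unlock(&shared_lock);
}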
Deleted patch | |||
---|---|---|---|
1 | From: Alexander Bulekov <alxndr@bu.edu> | ||
2 | 1 | ||
3 | Most qos-related objects were specified in the qos-test-obj-y variable. | ||
4 | qos-test-obj-y also included qos-test.o, which defines a main(). | ||
5 | This made it difficult to repurpose qos-test-obj-y to link anything | ||
6 | besides tests/qos-test against libqos. This change separates objects that | ||
7 | are libqos-specific and ones that are qos-test specific into different | ||
8 | variables. | ||
9 | |||
10 | Signed-off-by: Alexander Bulekov <alxndr@bu.edu> | ||
11 | Reviewed-by: Darren Kenny <darren.kenny@oracle.com> | ||
12 | Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
13 | Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> | ||
14 | Message-id: 20200220041118.23264-11-alxndr@bu.edu | ||
15 | Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> | ||
16 | --- | ||
17 | tests/qtest/Makefile.include | 71 ++++++++++++++++++------------------ | ||
18 | 1 file changed, 36 insertions(+), 35 deletions(-) | ||
19 | |||
20 | diff --git a/tests/qtest/Makefile.include b/tests/qtest/Makefile.include | ||
21 | index XXXXXXX..XXXXXXX 100644 | ||
22 | --- a/tests/qtest/Makefile.include | ||
23 | +++ b/tests/qtest/Makefile.include | ||
24 | @@ -XXX,XX +XXX,XX @@ check-qtest-s390x-y += migration-test | ||
25 | # libqos / qgraph : | ||
26 | libqgraph-obj-y = tests/qtest/libqos/qgraph.o | ||
27 | |||
28 | -libqos-obj-y = $(libqgraph-obj-y) tests/qtest/libqos/pci.o tests/qtest/libqos/fw_cfg.o | ||
29 | -libqos-obj-y += tests/qtest/libqos/malloc.o | ||
30 | -libqos-obj-y += tests/qtest/libqos/libqos.o | ||
31 | -libqos-spapr-obj-y = $(libqos-obj-y) tests/qtest/libqos/malloc-spapr.o | ||
32 | +libqos-core-obj-y = $(libqgraph-obj-y) tests/qtest/libqos/pci.o tests/qtest/libqos/fw_cfg.o | ||
33 | +libqos-core-obj-y += tests/qtest/libqos/malloc.o | ||
34 | +libqos-core-obj-y += tests/qtest/libqos/libqos.o | ||
35 | +libqos-spapr-obj-y = $(libqos-core-obj-y) tests/qtest/libqos/malloc-spapr.o | ||
36 | libqos-spapr-obj-y += tests/qtest/libqos/libqos-spapr.o | ||
37 | libqos-spapr-obj-y += tests/qtest/libqos/rtas.o | ||
38 | libqos-spapr-obj-y += tests/qtest/libqos/pci-spapr.o | ||
39 | -libqos-pc-obj-y = $(libqos-obj-y) tests/qtest/libqos/pci-pc.o | ||
40 | +libqos-pc-obj-y = $(libqos-core-obj-y) tests/qtest/libqos/pci-pc.o | ||
41 | libqos-pc-obj-y += tests/qtest/libqos/malloc-pc.o tests/qtest/libqos/libqos-pc.o | ||
42 | libqos-pc-obj-y += tests/qtest/libqos/ahci.o | ||
43 | libqos-usb-obj-y = $(libqos-spapr-obj-y) $(libqos-pc-obj-y) tests/qtest/libqos/usb.o | ||
44 | |||
45 | # qos devices: | ||
46 | -qos-test-obj-y = tests/qtest/qos-test.o $(libqgraph-obj-y) | ||
47 | -qos-test-obj-y += $(libqos-pc-obj-y) $(libqos-spapr-obj-y) | ||
48 | -qos-test-obj-y += tests/qtest/libqos/e1000e.o | ||
49 | -qos-test-obj-y += tests/qtest/libqos/i2c.o | ||
50 | -qos-test-obj-y += tests/qtest/libqos/i2c-imx.o | ||
51 | -qos-test-obj-y += tests/qtest/libqos/i2c-omap.o | ||
52 | -qos-test-obj-y += tests/qtest/libqos/sdhci.o | ||
53 | -qos-test-obj-y += tests/qtest/libqos/tpci200.o | ||
54 | -qos-test-obj-y += tests/qtest/libqos/virtio.o | ||
55 | -qos-test-obj-$(CONFIG_VIRTFS) += tests/qtest/libqos/virtio-9p.o | ||
56 | -qos-test-obj-y += tests/qtest/libqos/virtio-balloon.o | ||
57 | -qos-test-obj-y += tests/qtest/libqos/virtio-blk.o | ||
58 | -qos-test-obj-y += tests/qtest/libqos/virtio-mmio.o | ||
59 | -qos-test-obj-y += tests/qtest/libqos/virtio-net.o | ||
60 | -qos-test-obj-y += tests/qtest/libqos/virtio-pci.o | ||
61 | -qos-test-obj-y += tests/qtest/libqos/virtio-pci-modern.o | ||
62 | -qos-test-obj-y += tests/qtest/libqos/virtio-rng.o | ||
63 | -qos-test-obj-y += tests/qtest/libqos/virtio-scsi.o | ||
64 | -qos-test-obj-y += tests/qtest/libqos/virtio-serial.o | ||
65 | +libqos-obj-y = $(libqgraph-obj-y) | ||
66 | +libqos-obj-y += $(libqos-pc-obj-y) $(libqos-spapr-obj-y) | ||
67 | +libqos-obj-y += tests/qtest/libqos/e1000e.o | ||
68 | +libqos-obj-y += tests/qtest/libqos/i2c.o | ||
69 | +libqos-obj-y += tests/qtest/libqos/i2c-imx.o | ||
70 | +libqos-obj-y += tests/qtest/libqos/i2c-omap.o | ||
71 | +libqos-obj-y += tests/qtest/libqos/sdhci.o | ||
72 | +libqos-obj-y += tests/qtest/libqos/tpci200.o | ||
73 | +libqos-obj-y += tests/qtest/libqos/virtio.o | ||
74 | +libqos-obj-$(CONFIG_VIRTFS) += tests/qtest/libqos/virtio-9p.o | ||
75 | +libqos-obj-y += tests/qtest/libqos/virtio-balloon.o | ||
76 | +libqos-obj-y += tests/qtest/libqos/virtio-blk.o | ||
77 | +libqos-obj-y += tests/qtest/libqos/virtio-mmio.o | ||
78 | +libqos-obj-y += tests/qtest/libqos/virtio-net.o | ||
79 | +libqos-obj-y += tests/qtest/libqos/virtio-pci.o | ||
80 | +libqos-obj-y += tests/qtest/libqos/virtio-pci-modern.o | ||
81 | +libqos-obj-y += tests/qtest/libqos/virtio-rng.o | ||
82 | +libqos-obj-y += tests/qtest/libqos/virtio-scsi.o | ||
83 | +libqos-obj-y += tests/qtest/libqos/virtio-serial.o | ||
84 | |||
85 | # qos machines: | ||
86 | -qos-test-obj-y += tests/qtest/libqos/aarch64-xlnx-zcu102-machine.o | ||
87 | -qos-test-obj-y += tests/qtest/libqos/arm-imx25-pdk-machine.o | ||
88 | -qos-test-obj-y += tests/qtest/libqos/arm-n800-machine.o | ||
89 | -qos-test-obj-y += tests/qtest/libqos/arm-raspi2-machine.o | ||
90 | -qos-test-obj-y += tests/qtest/libqos/arm-sabrelite-machine.o | ||
91 | -qos-test-obj-y += tests/qtest/libqos/arm-smdkc210-machine.o | ||
92 | -qos-test-obj-y += tests/qtest/libqos/arm-virt-machine.o | ||
93 | -qos-test-obj-y += tests/qtest/libqos/arm-xilinx-zynq-a9-machine.o | ||
94 | -qos-test-obj-y += tests/qtest/libqos/ppc64_pseries-machine.o | ||
95 | -qos-test-obj-y += tests/qtest/libqos/x86_64_pc-machine.o | ||
96 | +libqos-obj-y += tests/qtest/libqos/aarch64-xlnx-zcu102-machine.o | ||
97 | +libqos-obj-y += tests/qtest/libqos/arm-imx25-pdk-machine.o | ||
98 | +libqos-obj-y += tests/qtest/libqos/arm-n800-machine.o | ||
99 | +libqos-obj-y += tests/qtest/libqos/arm-raspi2-machine.o | ||
100 | +libqos-obj-y += tests/qtest/libqos/arm-sabrelite-machine.o | ||
101 | +libqos-obj-y += tests/qtest/libqos/arm-smdkc210-machine.o | ||
102 | +libqos-obj-y += tests/qtest/libqos/arm-virt-machine.o | ||
103 | +libqos-obj-y += tests/qtest/libqos/arm-xilinx-zynq-a9-machine.o | ||
104 | +libqos-obj-y += tests/qtest/libqos/ppc64_pseries-machine.o | ||
105 | +libqos-obj-y += tests/qtest/libqos/x86_64_pc-machine.o | ||
106 | |||
107 | # qos tests: | ||
108 | +qos-test-obj-y += tests/qtest/qos-test.o | ||
109 | qos-test-obj-y += tests/qtest/ac97-test.o | ||
110 | qos-test-obj-y += tests/qtest/ds1338-test.o | ||
111 | qos-test-obj-y += tests/qtest/e1000-test.o | ||
112 | @@ -XXX,XX +XXX,XX @@ check-unit-y += tests/test-qgraph$(EXESUF) | ||
113 | tests/test-qgraph$(EXESUF): tests/test-qgraph.o $(libqgraph-obj-y) | ||
114 | |||
115 | check-qtest-generic-y += qos-test | ||
116 | -tests/qtest/qos-test$(EXESUF): $(qos-test-obj-y) | ||
117 | +tests/qtest/qos-test$(EXESUF): $(qos-test-obj-y) $(libqos-obj-y) | ||
118 | |||
119 | # QTest dependencies: | ||
120 | tests/qtest/qmp-test$(EXESUF): tests/qtest/qmp-test.o | ||
121 | -- | ||
122 | 2.24.1 | ||
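
The constraint behind this split is plain link semantics: an object list that drags in a main() cannot be reused to link a second program that has its own. A minimal illustration outside QEMU (file names invented):

/* prog_a.c -- plays the role of tests/qtest/qos-test.o */
int main(void) { return 0; }

/* prog_b.c -- plays the role of another binary wanting the same objects */
int main(void) { return 1; }

/*
 * $ cc -c prog_a.c && cc -c prog_b.c
 * $ cc prog_a.o prog_b.o
 * ld: ... multiple definition of `main'
 *
 * Keeping main()-free objects in libqos-obj-y and the test driver in
 * qos-test-obj-y avoids exactly this collision.
 */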