The following changes since commit f58d9620aa4a514b1227074ff56eefd1334a6225:

  Merge remote-tracking branch 'remotes/rth/tags/pull-dt-20180326' into staging (2018-03-27 10:27:34 +0100)

are available in the Git repository at:

  git://github.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to f5a53faad4bfbf1b86012a13055d2a1a774a42b6:

  MAINTAINERS: add include/block/aio-wait.h (2018-03-27 13:05:48 +0100)

----------------------------------------------------------------

----------------------------------------------------------------

Stefan Hajnoczi (4):
  queue: add QSIMPLEQ_PREPEND()
  coroutine: avoid co_queue_wakeup recursion
  coroutine: add test-aio coroutine queue chaining test case
  MAINTAINERS: add include/block/aio-wait.h

 MAINTAINERS                  |   1 +
 include/qemu/coroutine_int.h |   1 -
 include/qemu/queue.h         |   8 ++++
 block/io.c                   |   3 +-
 tests/test-aio.c             |  65 ++++++++++++++++++++-----
 util/qemu-coroutine-lock.c   |  34 -------------
 util/qemu-coroutine.c        | 110 +++++++++++++++++++++++--------------------
 7 files changed, 121 insertions(+), 101 deletions(-)

--
2.14.3

The following changes since commit 887cba855bb6ff4775256f7968409281350b568c:

  configure: Fix cross-building for RISCV host (v5) (2023-07-11 17:56:09 +0100)

are available in the Git repository at:

  https://gitlab.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 75dcb4d790bbe5327169fd72b185960ca58e2fa6:

  virtio-blk: fix host notifier issues during dataplane start/stop (2023-07-12 15:20:32 -0400)

----------------------------------------------------------------
Pull request

----------------------------------------------------------------

Stefan Hajnoczi (1):
  virtio-blk: fix host notifier issues during dataplane start/stop

 hw/block/dataplane/virtio-blk.c | 67 +++++++++++++++++++--------------
 1 file changed, 38 insertions(+), 29 deletions(-)

--
2.40.1

Deleted patch

QSIMPLEQ_CONCAT(a, b) joins a = a + b. The new QSIMPLEQ_PREPEND(a, b)
API joins a = b + a.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20180322152834.12656-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/queue.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/qemu/queue.h b/include/qemu/queue.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/queue.h
+++ b/include/qemu/queue.h
@@ -XXX,XX +XXX,XX @@ struct {                                                \
     }                                                                   \
 } while (/*CONSTCOND*/0)
 
+#define QSIMPLEQ_PREPEND(head1, head2) do {                             \
+    if (!QSIMPLEQ_EMPTY((head2))) {                                     \
+        *(head2)->sqh_last = (head1)->sqh_first;                        \
+        (head1)->sqh_first = (head2)->sqh_first;                        \
+        QSIMPLEQ_INIT((head2));                                         \
+    }                                                                   \
+} while (/*CONSTCOND*/0)
+
 #define QSIMPLEQ_LAST(head, type, field)                                \
     (QSIMPLEQ_EMPTY((head)) ?                                           \
      NULL :                                                             \
--
2.14.3

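As an illustration of the new API (not part of the patch itself), here is a
minimal usage sketch. It assumes a QEMU build tree so that qemu/osdep.h and
qemu/queue.h are on the include path; the Item type is made up for the
example:

/* Illustrative only, not part of the patch. */
#include "qemu/osdep.h"
#include "qemu/queue.h"

typedef struct Item {
    int value;
    QSIMPLEQ_ENTRY(Item) next;
} Item;

int main(void)
{
    QSIMPLEQ_HEAD(, Item) a = QSIMPLEQ_HEAD_INITIALIZER(a);
    QSIMPLEQ_HEAD(, Item) b = QSIMPLEQ_HEAD_INITIALIZER(b);
    Item i1 = { .value = 1 }, i2 = { .value = 2 };

    QSIMPLEQ_INSERT_TAIL(&a, &i1, next);    /* a = [1], b = []     */
    QSIMPLEQ_INSERT_TAIL(&b, &i2, next);    /* a = [1], b = [2]    */

    QSIMPLEQ_PREPEND(&a, &b);               /* a = b + a = [2, 1], b = [] */

    assert(QSIMPLEQ_FIRST(&a)->value == 2);
    assert(QSIMPLEQ_EMPTY(&b));
    return 0;
}

The deleted coroutine patch below relies on exactly this splice to push
freshly woken coroutines onto the front of its pending queue.
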
Deleted patch

qemu_aio_coroutine_enter() is (indirectly) called recursively when
processing co_queue_wakeup. This can lead to stack exhaustion.

This patch rewrites co_queue_wakeup in an iterative fashion (instead of
recursive) with bounded memory usage to prevent stack exhaustion.

qemu_co_queue_run_restart() is inlined into qemu_aio_coroutine_enter()
and the qemu_coroutine_enter() call is turned into a loop to avoid
recursion.

There is one change that is worth mentioning: Previously, when
coroutine A queued coroutine B, qemu_co_queue_run_restart() entered
coroutine B from coroutine A. If A was terminating then it would still
stay alive until B yielded. After this patch B is entered by A's parent
so that A can be deleted immediately if it is terminating.

It is safe to make this change since B could never interact with A if it
was terminating anyway.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20180322152834.12656-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/coroutine_int.h |   1 -
 block/io.c                   |   3 +-
 util/qemu-coroutine-lock.c   |  34 -------------
 util/qemu-coroutine.c        | 110 +++++++++++++++++++++++--------------------
 4 files changed, 60 insertions(+), 88 deletions(-)

diff --git a/include/qemu/coroutine_int.h b/include/qemu/coroutine_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/coroutine_int.h
+++ b/include/qemu/coroutine_int.h
@@ -XXX,XX +XXX,XX @@ Coroutine *qemu_coroutine_new(void);
 void qemu_coroutine_delete(Coroutine *co);
 CoroutineAction qemu_coroutine_switch(Coroutine *from, Coroutine *to,
                                       CoroutineAction action);
-void coroutine_fn qemu_co_queue_run_restart(Coroutine *co);
 
 #endif
diff --git a/block/io.c b/block/io.c
index XXXXXXX..XXXXXXX 100644
--- a/block/io.c
+++ b/block/io.c
@@ -XXX,XX +XXX,XX @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     BdrvCoDrainData data;
 
     /* Calling bdrv_drain() from a BH ensures the current coroutine yields and
-     * other coroutines run if they were queued from
-     * qemu_co_queue_run_restart(). */
+     * other coroutines run if they were queued by aio_co_enter(). */
 
     assert(qemu_in_coroutine());
     data = (BdrvCoDrainData) {
diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-coroutine-lock.c
+++ b/util/qemu-coroutine-lock.c
@@ -XXX,XX +XXX,XX @@ void coroutine_fn qemu_co_queue_wait_impl(CoQueue *queue, QemuLockable *lock)
     }
 }
 
-/**
- * qemu_co_queue_run_restart:
- *
- * Enter each coroutine that was previously marked for restart by
- * qemu_co_queue_next() or qemu_co_queue_restart_all(). This function is
- * invoked by the core coroutine code when the current coroutine yields or
- * terminates.
- */
-void qemu_co_queue_run_restart(Coroutine *co)
-{
-    Coroutine *next;
-    QSIMPLEQ_HEAD(, Coroutine) tmp_queue_wakeup =
-        QSIMPLEQ_HEAD_INITIALIZER(tmp_queue_wakeup);
-
-    trace_qemu_co_queue_run_restart(co);
-
-    /* Because "co" has yielded, any coroutine that we wakeup can resume it.
-     * If this happens and "co" terminates, co->co_queue_wakeup becomes
-     * invalid memory. Therefore, use a temporary queue and do not touch
-     * the "co" coroutine as soon as you enter another one.
-     *
-     * In its turn resumed "co" can populate "co_queue_wakeup" queue with
-     * new coroutines to be woken up. The caller, who has resumed "co",
-     * will be responsible for traversing the same queue, which may cause
-     * a different wakeup order but not any missing wakeups.
-     */
-    QSIMPLEQ_CONCAT(&tmp_queue_wakeup, &co->co_queue_wakeup);
-
-    while ((next = QSIMPLEQ_FIRST(&tmp_queue_wakeup))) {
-        QSIMPLEQ_REMOVE_HEAD(&tmp_queue_wakeup, co_queue_next);
-        qemu_coroutine_enter(next);
-    }
-}
-
 static bool qemu_co_queue_do_restart(CoQueue *queue, bool single)
 {
     Coroutine *next;
diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-coroutine.c
+++ b/util/qemu-coroutine.c
@@ -XXX,XX +XXX,XX @@ static void coroutine_delete(Coroutine *co)
 
 void qemu_aio_coroutine_enter(AioContext *ctx, Coroutine *co)
 {
-    Coroutine *self = qemu_coroutine_self();
-    CoroutineAction ret;
-
-    /* Cannot rely on the read barrier for co in aio_co_wake(), as there are
-     * callers outside of aio_co_wake() */
-    const char *scheduled = atomic_mb_read(&co->scheduled);
-
-    trace_qemu_aio_coroutine_enter(ctx, self, co, co->entry_arg);
-
-    /* if the Coroutine has already been scheduled, entering it again will
-     * cause us to enter it twice, potentially even after the coroutine has
-     * been deleted */
-    if (scheduled) {
-        fprintf(stderr,
-                "%s: Co-routine was already scheduled in '%s'\n",
-                __func__, scheduled);
-        abort();
-    }
-
-    if (co->caller) {
-        fprintf(stderr, "Co-routine re-entered recursively\n");
-        abort();
-    }
-
-    co->caller = self;
-    co->ctx = ctx;
-
-    /* Store co->ctx before anything that stores co. Matches
-     * barrier in aio_co_wake and qemu_co_mutex_wake.
-     */
-    smp_wmb();
-
-    ret = qemu_coroutine_switch(self, co, COROUTINE_ENTER);
-
-    qemu_co_queue_run_restart(co);
-
-    /* Beware, if ret == COROUTINE_YIELD and qemu_co_queue_run_restart()
-     * has started any other coroutine, "co" might have been reentered
-     * and even freed by now! So be careful and do not touch it.
-     */
-
-    switch (ret) {
-    case COROUTINE_YIELD:
-        return;
-    case COROUTINE_TERMINATE:
-        assert(!co->locks_held);
-        trace_qemu_coroutine_terminate(co);
-        coroutine_delete(co);
-        return;
-    default:
-        abort();
-    }
+    QSIMPLEQ_HEAD(, Coroutine) pending = QSIMPLEQ_HEAD_INITIALIZER(pending);
+    Coroutine *from = qemu_coroutine_self();
+
+    QSIMPLEQ_INSERT_TAIL(&pending, co, co_queue_next);
+
+    /* Run co and any queued coroutines */
+    while (!QSIMPLEQ_EMPTY(&pending)) {
+        Coroutine *to = QSIMPLEQ_FIRST(&pending);
+        CoroutineAction ret;
+
+        /* Cannot rely on the read barrier for to in aio_co_wake(), as there are
+         * callers outside of aio_co_wake() */
+        const char *scheduled = atomic_mb_read(&to->scheduled);
+
+        QSIMPLEQ_REMOVE_HEAD(&pending, co_queue_next);
+
+        trace_qemu_aio_coroutine_enter(ctx, from, to, to->entry_arg);
+
+        /* if the Coroutine has already been scheduled, entering it again will
+         * cause us to enter it twice, potentially even after the coroutine has
+         * been deleted */
+        if (scheduled) {
+            fprintf(stderr,
+                    "%s: Co-routine was already scheduled in '%s'\n",
+                    __func__, scheduled);
+            abort();
+        }
+
+        if (to->caller) {
+            fprintf(stderr, "Co-routine re-entered recursively\n");
+            abort();
+        }
+
+        to->caller = from;
+        to->ctx = ctx;
+
+        /* Store to->ctx before anything that stores to. Matches
+         * barrier in aio_co_wake and qemu_co_mutex_wake.
+         */
+        smp_wmb();
+
+        ret = qemu_coroutine_switch(from, to, COROUTINE_ENTER);
+
+        /* Queued coroutines are run depth-first; previously pending coroutines
+         * run after those queued more recently.
+         */
+        QSIMPLEQ_PREPEND(&pending, &to->co_queue_wakeup);
+
+        switch (ret) {
+        case COROUTINE_YIELD:
+            break;
+        case COROUTINE_TERMINATE:
+            assert(!to->locks_held);
+            trace_qemu_coroutine_terminate(to);
+            coroutine_delete(to);
+            break;
+        default:
+            abort();
+        }
+    }
 }
 
--
2.14.3

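To see why the iterative rewrite keeps stack usage bounded, consider the
following self-contained sketch of the same pattern. It is plain C with
invented names (struct task, run_all), not QEMU code: work woken up during a
run step is pushed onto the front of the pending list, mirroring
QSIMPLEQ_PREPEND() above, and a single loop drains the list:

#include <stdio.h>

/* Toy task standing in for a Coroutine; "next" links the pending list. */
struct task {
    struct task *next;
    struct task *wakes;   /* task this one wakes up when run, if any */
    int id;
};

/*
 * Drain the pending list iteratively. When a task wakes another task, the
 * newcomer is pushed onto the front of the same list (depth-first, like
 * QSIMPLEQ_PREPEND() in the patch) instead of being entered recursively,
 * so stack depth stays constant no matter how long the wakeup chain grows.
 */
static void run_all(struct task *pending)
{
    while (pending) {
        struct task *t = pending;
        pending = t->next;

        printf("entering task %d\n", t->id); /* stand-in for entering a coroutine */

        if (t->wakes) {                      /* queued wakeup goes to the front */
            t->wakes->next = pending;
            pending = t->wakes;
        }
    }
}

int main(void)
{
    struct task c = { .id = 3 };
    struct task b = { .wakes = &c, .id = 2 };
    struct task a = { .wakes = &b, .id = 1 };

    run_all(&a);   /* prints tasks 1, 2, 3 with O(1) stack usage */
    return 0;
}
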
The main loop thread can consume 100% CPU when using --device
virtio-blk-pci,iothread=<iothread>. ppoll() constantly returns but
reading virtqueue host notifiers fails with EAGAIN. The file descriptors
are stale and remain registered with the AioContext because of bugs in
the virtio-blk dataplane start/stop code.

The problem is that the dataplane start/stop code involves drain
operations, which call virtio_blk_drained_begin() and
virtio_blk_drained_end() at points where the host notifier is not
operational:
- In virtio_blk_data_plane_start(), blk_set_aio_context() drains after
  vblk->dataplane_started has been set to true but the host notifier has
  not been attached yet.
- In virtio_blk_data_plane_stop(), blk_drain() and blk_set_aio_context()
  drain after the host notifier has already been detached but with
  vblk->dataplane_started still set to true.

I would like to simplify ->ioeventfd_start/stop() to avoid interactions
with drain entirely, but couldn't find a way to do that. Instead, this
patch accepts the fragile nature of the code and reorders it so that
vblk->dataplane_started is false during drain operations. This way the
virtio_blk_drained_begin() and virtio_blk_drained_end() calls don't
touch the host notifier. The result is that
virtio_blk_data_plane_start() and virtio_blk_data_plane_stop() have
complete control over the host notifier and stale file descriptors are
no longer left in the AioContext.

This patch fixes the 100% CPU consumption in the main loop thread and
correctly moves host notifier processing to the IOThread.

Fixes: 1665d9326fd2 ("virtio-blk: implement BlockDevOps->drained_begin()")
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Lukas Doktor <ldoktor@redhat.com>
Message-id: 20230704151527.193586-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 67 +++++++++++++++++++--------------
 1 file changed, 38 insertions(+), 29 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
 
     memory_region_transaction_commit();
 
-    /*
-     * These fields are visible to the IOThread so we rely on implicit barriers
-     * in aio_context_acquire() on the write side and aio_notify_accept() on
-     * the read side.
-     */
-    s->starting = false;
-    vblk->dataplane_started = true;
     trace_virtio_blk_data_plane_start(s);
 
     old_context = blk_get_aio_context(s->conf->conf.blk);
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
         event_notifier_set(virtio_queue_get_host_notifier(vq));
     }
 
+    /*
+     * These fields must be visible to the IOThread when it processes the
+     * virtqueue, otherwise it will think dataplane has not started yet.
+     *
+     * Make sure ->dataplane_started is false when blk_set_aio_context() is
+     * called above so that draining does not cause the host notifier to be
+     * detached/attached prematurely.
+     */
+    s->starting = false;
+    vblk->dataplane_started = true;
+    smp_wmb(); /* paired with aio_notify_accept() on the read side */
+
     /* Get this show started by hooking up our callbacks */
     if (!blk_in_drain(s->conf->conf.blk)) {
         aio_context_acquire(s->ctx);
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
   fail_guest_notifiers:
     vblk->dataplane_disabled = true;
     s->starting = false;
-    vblk->dataplane_started = true;
     return -ENOSYS;
 }
 
@@ -XXX,XX +XXX,XX @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
         aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
     }
 
+    /*
+     * Batch all the host notifiers in a single transaction to avoid
+     * quadratic time complexity in address_space_update_ioeventfds().
+     */
+    memory_region_transaction_begin();
+
+    for (i = 0; i < nvqs; i++) {
+        virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
+    }
+
+    /*
+     * The transaction expects the ioeventfds to be open when it
+     * commits. Do it now, before the cleanup loop.
+     */
+    memory_region_transaction_commit();
+
+    for (i = 0; i < nvqs; i++) {
+        virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
+    }
+
+    /*
+     * Set ->dataplane_started to false before draining so that host notifiers
+     * are not detached/attached anymore.
+     */
+    vblk->dataplane_started = false;
+
     aio_context_acquire(s->ctx);
 
     /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
@@ -XXX,XX +XXX,XX @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
 
     aio_context_release(s->ctx);
 
-    /*
-     * Batch all the host notifiers in a single transaction to avoid
-     * quadratic time complexity in address_space_update_ioeventfds().
-     */
-    memory_region_transaction_begin();
-
-    for (i = 0; i < nvqs; i++) {
-        virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
-    }
-
-    /*
-     * The transaction expects the ioeventfds to be open when it
-     * commits. Do it now, before the cleanup loop.
-     */
-    memory_region_transaction_commit();
-
-    for (i = 0; i < nvqs; i++) {
-        virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
-    }
-
     qemu_bh_cancel(s->bh);
     notify_guest_bh(s); /* final chance to notify guest */
 
     /* Clean up guest notifier (irq) */
     k->set_guest_notifiers(qbus->parent, nvqs, false);
 
-    vblk->dataplane_started = false;
     s->stopping = false;
 }
 
--
2.40.1

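The invariant this patch establishes (publish vblk->dataplane_started only
once the host notifier is fully set up, and clear it again before the
notifier is torn down) can be sketched with portable C11 atomics standing in
for QEMU's smp_wmb()/aio_notify_accept() pairing. The names below, struct
dataplane and notifier_ready, are invented for illustration, and the sketch
omits the drain-based quiescence that the real code also relies on:

#include <stdatomic.h>
#include <stdbool.h>

struct dataplane {
    int notifier_fd;       /* stand-in for the virtqueue host notifier */
    atomic_bool started;   /* stand-in for vblk->dataplane_started */
};

static void start(struct dataplane *s, int fd)
{
    s->notifier_fd = fd;                          /* set the state up first... */
    atomic_store_explicit(&s->started, true,
                          memory_order_release);  /* ...then publish it */
}

static void stop(struct dataplane *s)
{
    atomic_store_explicit(&s->started, false,
                          memory_order_release);  /* unpublish first... */
    s->notifier_fd = -1;                          /* ...then tear the state down */
}

/* Consumer side (the IOThread): the acquire load pairs with the release
 * stores above, so a reader that observes started == true also observes a
 * fully initialized notifier_fd. */
static bool notifier_ready(struct dataplane *s)
{
    return atomic_load_explicit(&s->started, memory_order_acquire) &&
           s->notifier_fd >= 0;
}
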
Deleted patch

Check that two coroutines can queue each other repeatedly without
hitting stack exhaustion.

Switch to qemu_init_main_loop() in main() because coroutines use
qemu_get_aio_context() - they don't know about test-aio's ctx variable.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20180322152834.12656-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tests/test-aio.c | 65 ++++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 52 insertions(+), 13 deletions(-)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/timer.h"
 #include "qemu/sockets.h"
 #include "qemu/error-report.h"
+#include "qemu/coroutine.h"
+#include "qemu/main-loop.h"
 
 static AioContext *ctx;
 
@@ -XXX,XX +XXX,XX @@ static void test_source_timer_schedule(void)
     timer_del(&data.timer);
 }
 
+/*
+ * Check that aio_co_enter() can chain many times
+ *
+ * Two coroutines should be able to invoke each other via aio_co_enter() many
+ * times without hitting a limit like stack exhaustion. In other words, the
+ * calls should be chained instead of nested.
+ */
+
+typedef struct {
+    Coroutine *other;
+    unsigned i;
+    unsigned max;
+} ChainData;
+
+static void coroutine_fn chain(void *opaque)
+{
+    ChainData *data = opaque;
+
+    for (data->i = 0; data->i < data->max; data->i++) {
+        /* Queue up the other coroutine... */
+        aio_co_enter(ctx, data->other);
+
+        /* ...and give control to it */
+        qemu_coroutine_yield();
+    }
+}
+
+static void test_queue_chaining(void)
+{
+    /* This number of iterations hit stack exhaustion in the past: */
+    ChainData data_a = { .max = 25000 };
+    ChainData data_b = { .max = 25000 };
+
+    data_b.other = qemu_coroutine_create(chain, &data_a);
+    data_a.other = qemu_coroutine_create(chain, &data_b);
+
+    qemu_coroutine_enter(data_b.other);
+
+    g_assert_cmpint(data_a.i, ==, data_a.max);
+    g_assert_cmpint(data_b.i, ==, data_b.max - 1);
+
+    /* Allow the second coroutine to terminate */
+    qemu_coroutine_enter(data_a.other);
+
+    g_assert_cmpint(data_b.i, ==, data_b.max);
+}
 
 /* End of tests. */
 
 int main(int argc, char **argv)
 {
-    Error *local_error = NULL;
-    GSource *src;
-
-    init_clocks(NULL);
-
-    ctx = aio_context_new(&local_error);
-    if (!ctx) {
-        error_reportf_err(local_error, "Failed to create AIO Context: ");
-        exit(1);
-    }
-    src = aio_get_g_source(ctx);
-    g_source_attach(src, NULL);
-    g_source_unref(src);
+    qemu_init_main_loop(&error_fatal);
+    ctx = qemu_get_aio_context();
 
     while (g_main_context_iteration(NULL, false));
 
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
     g_test_add_func("/aio/external-client", test_aio_external_client);
     g_test_add_func("/aio/timer/schedule", test_timer_schedule);
 
+    g_test_add_func("/aio/coroutine/queue-chaining", test_queue_chaining);
+
     g_test_add_func("/aio-gsource/flush", test_source_flush);
     g_test_add_func("/aio-gsource/bh/schedule", test_source_bh_schedule);
     g_test_add_func("/aio-gsource/bh/schedule10", test_source_bh_schedule10);
--
2.14.3

Deleted patch

The include/block/aio-wait.h header file was added by commit
7719f3c968c59e1bcda7e177679dc765b59e578f ("block: extract
AIO_WAIT_WHILE() from BlockDriverState") without updating MAINTAINERS.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180312132204.23683-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: util/aio-*.c
 F: block/io.c
 F: migration/block*
 F: include/block/aio.h
+F: include/block/aio-wait.h
 F: scripts/qemugdb/aio.py
 T: git git://github.com/stefanha/qemu.git block
 
--
2.14.3