The following changes since commit 887cba855bb6ff4775256f7968409281350b568c:

  configure: Fix cross-building for RISCV host (v5) (2023-07-11 17:56:09 +0100)

are available in the Git repository at:

  https://gitlab.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 75dcb4d790bbe5327169fd72b185960ca58e2fa6:

  virtio-blk: fix host notifier issues during dataplane start/stop (2023-07-12 15:20:32 -0400)

----------------------------------------------------------------
Pull request

----------------------------------------------------------------

Stefan Hajnoczi (1):
  virtio-blk: fix host notifier issues during dataplane start/stop

 hw/block/dataplane/virtio-blk.c | 67 +++++++++++++++++++--------------
 1 file changed, 38 insertions(+), 29 deletions(-)

--
2.40.1
diff view generated by jsdifflib
The main loop thread can consume 100% CPU when using --device
virtio-blk-pci,iothread=<iothread>. ppoll() constantly returns but
reading virtqueue host notifiers fails with EAGAIN. The file descriptors
are stale and remain registered with the AioContext because of bugs in
the virtio-blk dataplane start/stop code.

The problem is that the dataplane start/stop code involves drain
operations, which call virtio_blk_drained_begin() and
virtio_blk_drained_end() at points where the host notifier is not
operational:
- In virtio_blk_data_plane_start(), blk_set_aio_context() drains after
  vblk->dataplane_started has been set to true but the host notifier has
  not been attached yet.
- In virtio_blk_data_plane_stop(), blk_drain() and blk_set_aio_context()
  drain after the host notifier has already been detached but with
  vblk->dataplane_started still set to true.

I would like to simplify ->ioeventfd_start/stop() to avoid interactions
with drain entirely, but couldn't find a way to do that. Instead, this
patch accepts the fragile nature of the code and reorders it so that
vblk->dataplane_started is false during drain operations. This way the
virtio_blk_drained_begin() and virtio_blk_drained_end() calls don't
touch the host notifier. The result is that
virtio_blk_data_plane_start() and virtio_blk_data_plane_stop() have
complete control over the host notifier and stale file descriptors are
no longer left in the AioContext.

This patch fixes the 100% CPU consumption in the main loop thread and
correctly moves host notifier processing to the IOThread.

Fixes: 1665d9326fd2 ("virtio-blk: implement BlockDevOps->drained_begin()")
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Lukas Doktor <ldoktor@redhat.com>
Message-id: 20230704151527.193586-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 67 +++++++++++++++++++--------------
 1 file changed, 38 insertions(+), 29 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)

     memory_region_transaction_commit();

-    /*
-     * These fields are visible to the IOThread so we rely on implicit barriers
-     * in aio_context_acquire() on the write side and aio_notify_accept() on
-     * the read side.
-     */
-    s->starting = false;
-    vblk->dataplane_started = true;
     trace_virtio_blk_data_plane_start(s);

     old_context = blk_get_aio_context(s->conf->conf.blk);
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
         event_notifier_set(virtio_queue_get_host_notifier(vq));
     }

+    /*
+     * These fields must be visible to the IOThread when it processes the
+     * virtqueue, otherwise it will think dataplane has not started yet.
+     *
+     * Make sure ->dataplane_started is false when blk_set_aio_context() is
+     * called above so that draining does not cause the host notifier to be
+     * detached/attached prematurely.
+     */
+    s->starting = false;
+    vblk->dataplane_started = true;
+    smp_wmb(); /* paired with aio_notify_accept() on the read side */
+
     /* Get this show started by hooking up our callbacks */
     if (!blk_in_drain(s->conf->conf.blk)) {
         aio_context_acquire(s->ctx);
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
  fail_guest_notifiers:
     vblk->dataplane_disabled = true;
     s->starting = false;
-    vblk->dataplane_started = true;
     return -ENOSYS;
 }

@@ -XXX,XX +XXX,XX @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
         aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
     }

+    /*
+     * Batch all the host notifiers in a single transaction to avoid
+     * quadratic time complexity in address_space_update_ioeventfds().
+     */
+    memory_region_transaction_begin();
+
+    for (i = 0; i < nvqs; i++) {
+        virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
+    }
+
+    /*
+     * The transaction expects the ioeventfds to be open when it
+     * commits. Do it now, before the cleanup loop.
+     */
+    memory_region_transaction_commit();
+
+    for (i = 0; i < nvqs; i++) {
+        virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
+    }
+
+    /*
+     * Set ->dataplane_started to false before draining so that host notifiers
+     * are not detached/attached anymore.
+     */
+    vblk->dataplane_started = false;
+
     aio_context_acquire(s->ctx);

     /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
@@ -XXX,XX +XXX,XX @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)

     aio_context_release(s->ctx);

-    /*
-     * Batch all the host notifiers in a single transaction to avoid
-     * quadratic time complexity in address_space_update_ioeventfds().
-     */
-    memory_region_transaction_begin();
-
-    for (i = 0; i < nvqs; i++) {
-        virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
-    }
-
-    /*
-     * The transaction expects the ioeventfds to be open when it
-     * commits. Do it now, before the cleanup loop.
-     */
-    memory_region_transaction_commit();
-
-    for (i = 0; i < nvqs; i++) {
-        virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
-    }
-
     qemu_bh_cancel(s->bh);
     notify_guest_bh(s); /* final chance to notify guest */

     /* Clean up guest notifier (irq) */
     k->set_guest_notifiers(qbus->parent, nvqs, false);

-    vblk->dataplane_started = false;
     s->stopping = false;
 }
--
2.40.1
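
To make the ordering argument concrete, the invariant this patch establishes can be modeled with a small, self-contained C program (illustration only, not QEMU code; data_plane_start()/data_plane_stop() and the two flags are simplified stand-ins for the real functions and fields):

    /*
     * Toy model of the rule: the drain callbacks only manage the host
     * notifier while dataplane_started is true, and start/stop keep the
     * flag false at every point where a drain can run.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static bool dataplane_started;
    static bool notifier_attached;

    /* stand-ins for virtio_blk_drained_begin()/virtio_blk_drained_end() */
    static void drained_begin(void)
    {
        if (dataplane_started) {
            notifier_attached = false;
            puts("drain: detached host notifier");
        }
    }

    static void drained_end(void)
    {
        if (dataplane_started) {
            notifier_attached = true;
            puts("drain: re-attached host notifier");
        }
    }

    static void data_plane_start(void)
    {
        /* blk_set_aio_context() drains here; the flag is still false, so
         * the drain leaves the (not yet attached) notifier alone */
        drained_begin();
        drained_end();

        dataplane_started = true;  /* only now may drains manage the notifier */
        notifier_attached = true;  /* hook up the host notifier */
    }

    static void data_plane_stop(void)
    {
        notifier_attached = false; /* detach and clean up the host notifier */
        dataplane_started = false; /* later drains must not re-attach it */

        /* blk_drain()/blk_set_aio_context() drain here; the stale fd stays
         * out of the AioContext because the flag is already false */
        drained_begin();
        drained_end();
    }

    int main(void)
    {
        data_plane_start();
        data_plane_stop();
        printf("attached=%d started=%d\n", notifier_attached, dataplane_started);
        return 0;
    }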
diff view generated by jsdifflib
Deleted patch
From: Peter Xu <peterx@redhat.com>

IOThread is a general framework that contains an IO loop environment and a
real thread behind it. It is also useful internally inside QEMU. Provide
helpers for creating iothreads that are meant for internal use.

Put all internally used iothreads into the internal object container.

Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20170928025958.1420-3-peterx@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/iothread.h |  8 ++++++++
 iothread.c                | 16 ++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/iothread.h
+++ b/include/sysemu/iothread.h
@@ -XXX,XX +XXX,XX @@ AioContext *iothread_get_aio_context(IOThread *iothread);
 void iothread_stop_all(void);
 GMainContext *iothread_get_g_main_context(IOThread *iothread);

+/*
+ * Helpers used to allocate iothreads for internal use. These
+ * iothreads will not be seen by monitor clients when query using
+ * "query-iothreads".
+ */
+IOThread *iothread_create(const char *id, Error **errp);
+void iothread_destroy(IOThread *iothread);
+
 #endif /* IOTHREAD_H */
diff --git a/iothread.c b/iothread.c
index XXXXXXX..XXXXXXX 100644
--- a/iothread.c
+++ b/iothread.c
@@ -XXX,XX +XXX,XX @@ GMainContext *iothread_get_g_main_context(IOThread *iothread)

     return iothread->worker_context;
 }
+
+IOThread *iothread_create(const char *id, Error **errp)
+{
+    Object *obj;
+
+    obj = object_new_with_props(TYPE_IOTHREAD,
+                                object_get_internal_root(),
+                                id, errp, NULL);
+
+    return IOTHREAD(obj);
+}
+
+void iothread_destroy(IOThread *iothread)
+{
+    object_unparent(OBJECT(iothread));
+}
--
2.13.6
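
For context, a hypothetical internal user of these helpers might look like the following sketch (not part of the patch; the iothread name, my_work_bh() and the use of aio_bh_schedule_oneshot() are assumptions made for illustration):

    /* Hypothetical in-tree caller, assuming the usual QEMU headers. */
    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "block/aio.h"
    #include "sysemu/iothread.h"

    static void my_work_bh(void *opaque)
    {
        /* runs in the internal iothread's AioContext */
    }

    void internal_iothread_example(void)
    {
        /* The iothread lives in the internal object container, so it is
         * not listed by the "query-iothreads" monitor command. */
        IOThread *iothread = iothread_create("my-internal-iothread", &error_abort);
        AioContext *ctx = iothread_get_aio_context(iothread);

        aio_bh_schedule_oneshot(ctx, my_work_bh, NULL);

        /* ... later, once the work is known to be done ... */
        iothread_destroy(iothread);
    }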
diff view generated by jsdifflib
Deleted patch
From: Peter Xu <peterx@redhat.com>

Export iothread_stop() so that internal iothread users can explicitly stop
an iothread without destroying it.

While at it, fix iothread_stop() so that it can safely be called multiple
times. Before this patch we might call iothread_stop() more than once on a
single iothread, which is incorrect since qemu_thread_join() must not run
twice. From the manual of pthread_join():

    Joining with a thread that has previously been joined results in
    undefined behavior.

Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20170928025958.1420-4-peterx@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/iothread.h |  1 +
 iothread.c                | 24 ++++++++++++++++--------
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/iothread.h
+++ b/include/sysemu/iothread.h
@@ -XXX,XX +XXX,XX @@ GMainContext *iothread_get_g_main_context(IOThread *iothread);
  * "query-iothreads".
  */
 IOThread *iothread_create(const char *id, Error **errp);
+void iothread_stop(IOThread *iothread);
 void iothread_destroy(IOThread *iothread);

 #endif /* IOTHREAD_H */
diff --git a/iothread.c b/iothread.c
index XXXXXXX..XXXXXXX 100644
--- a/iothread.c
+++ b/iothread.c
@@ -XXX,XX +XXX,XX @@ static void *iothread_run(void *opaque)
     return NULL;
 }

-static int iothread_stop(Object *object, void *opaque)
+void iothread_stop(IOThread *iothread)
 {
-    IOThread *iothread;
-
-    iothread = (IOThread *)object_dynamic_cast(object, TYPE_IOTHREAD);
-    if (!iothread || !iothread->ctx || iothread->stopping) {
-        return 0;
+    if (!iothread->ctx || iothread->stopping) {
+        return;
     }
     iothread->stopping = true;
     aio_notify(iothread->ctx);
@@ -XXX,XX +XXX,XX @@ static int iothread_stop(Object *object, void *opaque)
         g_main_loop_quit(iothread->main_loop);
     }
     qemu_thread_join(&iothread->thread);
+}
+
+static int iothread_stop_iter(Object *object, void *opaque)
+{
+    IOThread *iothread;
+
+    iothread = (IOThread *)object_dynamic_cast(object, TYPE_IOTHREAD);
+    if (!iothread) {
+        return 0;
+    }
+    iothread_stop(iothread);
     return 0;
 }

@@ -XXX,XX +XXX,XX @@ static void iothread_instance_finalize(Object *obj)
 {
     IOThread *iothread = IOTHREAD(obj);

-    iothread_stop(obj, NULL);
+    iothread_stop(iothread);
     qemu_cond_destroy(&iothread->init_done_cond);
     qemu_mutex_destroy(&iothread->init_done_lock);
     if (!iothread->ctx) {
@@ -XXX,XX +XXX,XX @@ void iothread_stop_all(void)
         aio_context_release(ctx);
     }

-    object_child_foreach(container, iothread_stop, NULL);
+    object_child_foreach(container, iothread_stop_iter, NULL);
 }

 static gpointer iothread_g_main_context_init(gpointer opaque)
--
2.13.6
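
The double-join hazard is independent of QEMU; this self-contained pthread sketch (illustration only, not QEMU code) shows the same kind of 'stopping' guard turning a second stop call into a harmless no-op:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        pthread_t thread;
        bool stopping;
    } Worker;

    static void *worker_run(void *opaque)
    {
        (void)opaque;           /* a real worker would run its loop here */
        return NULL;
    }

    static void worker_stop(Worker *w)
    {
        if (w->stopping) {
            return;             /* already joined: joining again would be UB */
        }
        w->stopping = true;
        pthread_join(w->thread, NULL);
    }

    int main(void)
    {
        Worker w = { .stopping = false };

        pthread_create(&w.thread, NULL, worker_run, &w);
        worker_stop(&w);        /* explicit stop */
        worker_stop(&w);        /* e.g. from a finalize path: safe no-op */
        puts("joined once, second stop was a no-op");
        return 0;
    }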
diff view generated by jsdifflib
Deleted patch
From: Peter Xu <peterx@redhat.com>

When a gcontext is used with an iothread, the context is currently
destroyed during iothread_stop(). That is not ideal, since sometimes we
would like to keep the resources until the iothread is destroyed, but we
may want to stop the thread before that point.

Delay the destruction of the gcontext to iothread finalize. Then we can do:

    iothread_stop(thread);
    some_cleanup_on_resources();
    iothread_destroy(thread);

We may need this patch if we want to run chardev IOs in iothreads and
clean them up correctly. For more specific information, please see
2b316774f6 ("qemu-char: do not operate on sources from finalize
callbacks").

Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20170928025958.1420-5-peterx@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 iothread.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/iothread.c b/iothread.c
index XXXXXXX..XXXXXXX 100644
--- a/iothread.c
+++ b/iothread.c
@@ -XXX,XX +XXX,XX @@ static void *iothread_run(void *opaque)
         g_main_loop_unref(loop);

         g_main_context_pop_thread_default(iothread->worker_context);
-        g_main_context_unref(iothread->worker_context);
-        iothread->worker_context = NULL;
     }
 }

@@ -XXX,XX +XXX,XX @@ static void iothread_instance_finalize(Object *obj)
     IOThread *iothread = IOTHREAD(obj);

     iothread_stop(iothread);
+    if (iothread->worker_context) {
+        g_main_context_unref(iothread->worker_context);
+        iothread->worker_context = NULL;
+    }
     qemu_cond_destroy(&iothread->init_done_cond);
     qemu_mutex_destroy(&iothread->init_done_lock);
     if (!iothread->ctx) {
--
2.13.6
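
The stop/cleanup/destroy split can be sketched with plain GLib (illustration only, not QEMU code; MiniIOThread and its helpers are invented for the example): the worker loop exits without releasing its GMainContext, sources attached to the context can still be cleaned up afterwards, and the context is only unreferenced in the final destroy step.

    #include <glib.h>

    typedef struct {
        GMainContext *ctx;
        GMainLoop *loop;
        GThread *thread;
    } MiniIOThread;

    static gboolean tick(gpointer data)
    {
        return G_SOURCE_CONTINUE;      /* periodic no-op work */
    }

    static gboolean quit_cb(gpointer loop)
    {
        g_main_loop_quit(loop);
        return G_SOURCE_REMOVE;
    }

    static gpointer mini_run(gpointer opaque)
    {
        MiniIOThread *t = opaque;
        g_main_loop_run(t->loop);      /* note: ctx is *not* unref'd on exit */
        return NULL;
    }

    /* "stop": quit the loop from inside its own context and join the
     * thread, but keep the GMainContext alive for later cleanup. */
    static void mini_stop(MiniIOThread *t)
    {
        g_main_context_invoke(t->ctx, quit_cb, t->loop);
        g_thread_join(t->thread);
    }

    /* "destroy"/finalize: only now release the context. */
    static void mini_destroy(MiniIOThread *t)
    {
        g_main_loop_unref(t->loop);
        g_main_context_unref(t->ctx);
    }

    int main(void)
    {
        MiniIOThread t;
        GSource *src = g_timeout_source_new(100);

        t.ctx = g_main_context_new();
        t.loop = g_main_loop_new(t.ctx, FALSE);
        g_source_set_callback(src, tick, NULL, NULL);
        g_source_attach(src, t.ctx);
        t.thread = g_thread_new("mini-iothread", mini_run, &t);

        mini_stop(&t);                 /* thread gone, ctx still valid */
        g_source_destroy(src);         /* cleanup that still needs the ctx */
        g_source_unref(src);
        mini_destroy(&t);
        return 0;
    }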
diff view generated by jsdifflib
Deleted patch
After iothreads are enabled internally inside QEMU with a GMainContext, we
may encounter this warning when destroying the iothread:

    (qemu-system-x86_64:19925): GLib-CRITICAL **: g_source_remove_poll:
    assertion '!SOURCE_DESTROYED (source)' failed

The problem is that g_source_remove_poll() does not allow removing a
source from the poll array once the source has been detached from its
owner context. (peterx: which IMHO does not make much sense)

Fix it on the QEMU side by avoiding the call to g_source_remove_poll()
when we know the object is being destroyed. We won't leak anything,
since the array will soon be cleaned up anyway, even with that fd still
in it.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20170928025958.1420-6-peterx@redhat.com
[peterx: write the commit message]
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/aio-posix.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index XXXXXXX..XXXXXXX 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -XXX,XX +XXX,XX @@ void aio_set_fd_handler(AioContext *ctx,
             return;
         }

-        g_source_remove_poll(&ctx->source, &node->pfd);
+        /* If the GSource is in the process of being destroyed then
+         * g_source_remove_poll() causes an assertion failure.  Skip
+         * removal in that case, because glib cleans up its state during
+         * destruction anyway.
+         */
+        if (!g_source_is_destroyed(&ctx->source)) {
+            g_source_remove_poll(&ctx->source, &node->pfd);
+        }

         /* If the lock is held, just mark the node as deleted */
         if (qemu_lockcnt_count(&ctx->list_lock)) {
--
2.13.6
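
The guard can be demonstrated with a standalone GLib program (illustration only, not QEMU code; the no-op GSourceFuncs are invented for the example):

    #include <glib.h>
    #include <unistd.h>

    static gboolean noop_prepare(GSource *source, gint *timeout)
    {
        *timeout = -1;
        return FALSE;
    }

    static gboolean noop_check(GSource *source)
    {
        return FALSE;
    }

    static gboolean noop_dispatch(GSource *source, GSourceFunc cb, gpointer data)
    {
        return G_SOURCE_CONTINUE;
    }

    static GSourceFuncs noop_funcs = {
        .prepare = noop_prepare,
        .check = noop_check,
        .dispatch = noop_dispatch,
    };

    int main(void)
    {
        int fds[2];
        GPollFD pfd;
        GSource *source = g_source_new(&noop_funcs, sizeof(GSource));

        if (pipe(fds) != 0) {
            return 1;
        }
        pfd.fd = fds[0];
        pfd.events = G_IO_IN;
        g_source_add_poll(source, &pfd);
        g_source_attach(source, g_main_context_default());

        g_source_destroy(source);   /* detaches the source from its context */

        /* Calling g_source_remove_poll() here would trip the
         * '!SOURCE_DESTROYED (source)' assertion, so skip it -- the same
         * check aio_set_fd_handler() performs after this patch. */
        if (!g_source_is_destroyed(source)) {
            g_source_remove_poll(source, &pfd);
        }

        g_source_unref(source);
        close(fds[0]);
        close(fds[1]);
        return 0;
    }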
diff view generated by jsdifflib