1
The following changes since commit 508ba0f7e2092d3ca56e3f75e894d52d8b94818e:
1
The following changes since commit 9cf289af47bcfae5c75de37d8e5d6fd23705322c:
2
2
3
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20171109' into staging (2017-11-13 11:41:47 +0000)
3
Merge tag 'qga-pull-request' of gitlab.com:marcandre.lureau/qemu into staging (2022-05-04 03:42:49 -0700)
4
4
5
are available in the git repository at:
5
are available in the Git repository at:
6
6
7
git://github.com/stefanha/qemu.git tags/block-pull-request
7
https://gitlab.com/stefanha/qemu.git tags/block-pull-request
8
8
9
for you to fetch changes up to 0761562687e0d8135310a94b1d3e08376387c027:
9
for you to fetch changes up to bef2e050d6a7feb865854c65570c496ac5a8cf53:
10
10
11
qemu-iotests: Test I/O limits with removable media (2017-11-13 15:46:26 +0000)
11
util/event-loop-base: Introduce options to set the thread pool size (2022-05-04 17:02:19 +0100)
12
12
13
----------------------------------------------------------------
13
----------------------------------------------------------------
14
Pull request
14
Pull request
15
15
16
The following disk I/O throttling fixes solve recent bugs.
16
Add new thread-pool-min/thread-pool-max parameters to control the thread pool
17
used for async I/O.
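
For example, both iothread and main-loop objects accept the new properties
through their common event-loop-base parent (the values below are only
illustrative):

  -object iothread,id=iothread0,thread-pool-min=8,thread-pool-max=64
  -object main-loop,id=main-loop,thread-pool-min=8,thread-pool-max=64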
17
18
18
----------------------------------------------------------------
19
----------------------------------------------------------------
19
20
20
Alberto Garcia (3):
21
Nicolas Saenz Julienne (3):
21
block: Check for inserted BlockDriverState in blk_io_limits_disable()
22
Introduce event-loop-base abstract class
22
block: Leave valid throttle timers when removing a BDS from a backend
23
util/main-loop: Introduce the main loop into QOM
23
qemu-iotests: Test I/O limits with removable media
24
util/event-loop-base: Introduce options to set the thread pool size
24
25
25
Stefan Hajnoczi (1):
26
qapi/qom.json | 43 ++++++++--
26
throttle-groups: drain before detaching ThrottleState
27
meson.build | 26 +++---
27
28
include/block/aio.h | 10 +++
28
Zhengui (1):
29
include/block/thread-pool.h | 3 +
29
block: all I/O should be completed before removing throttle timers.
30
include/qemu/main-loop.h | 10 +++
30
31
include/sysemu/event-loop-base.h | 41 +++++++++
31
block/block-backend.c | 36 ++++++++++++++++++---------
32
include/sysemu/iothread.h | 6 +-
32
block/throttle-groups.c | 6 +++++
33
event-loop-base.c | 140 +++++++++++++++++++++++++++++++
33
tests/qemu-iotests/093 | 62 ++++++++++++++++++++++++++++++++++++++++++++++
34
iothread.c | 68 +++++----------
34
tests/qemu-iotests/093.out | 4 +--
35
util/aio-posix.c | 1 +
35
4 files changed, 94 insertions(+), 14 deletions(-)
36
util/async.c | 20 +++++
37
util/main-loop.c | 65 ++++++++++++++
38
util/thread-pool.c | 55 +++++++++++-
39
13 files changed, 419 insertions(+), 69 deletions(-)
40
create mode 100644 include/sysemu/event-loop-base.h
41
create mode 100644 event-loop-base.c
36
42
37
--
43
--
38
2.13.6
44
2.35.1
39
40
1
From: Alberto Garcia <berto@igalia.com>
1
From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
2
2
3
If a BlockBackend has I/O limits set then its ThrottleGroupMember
3
Introduce the 'event-loop-base' abstract class; it'll hold the
4
structure uses the AioContext from its attached BlockDriverState.
4
properties common to all event loops and provide the necessary hooks for
5
Those two contexts must be kept in sync manually. This is not
5
their creation and maintenance. Then have iothread inherit from it.
6
ideal and will be fixed in the future by removing the throttling
6
7
configuration from the BlockBackend and storing it in an implicit
7
EventLoopBaseClass is defined as user creatable and provides a hook for
8
filter node instead, but for now we have to live with this.
8
its children to attach themselves to the user creatable class 'complete'
9
9
function. It also provides an update_params() callback to propagate
10
When you remove the BlockDriverState from the backend then the
10
property changes onto its children.
11
throttle timers are destroyed. If a new BlockDriverState is later
11
12
inserted then they are created again using the new AioContext.
12
The new 'event-loop-base' class will live in the root directory. It is
13
13
built on its own using the 'link_whole' option (there are no direct
14
There are a couple of problems with this:
14
function dependencies between the class and its children, it all happens
15
15
through 'constructor' magic). It also imposes new compilation
16
a) The code manipulates the timers directly, leaving the
16
dependencies:
17
ThrottleGroupMember.aio_context field in an inconsistent state.
17
18
18
qom <- event-loop-base <- blockdev (iothread.c)
19
b) If you remove the I/O limits (e.g. by destroying the backend)
19
20
when the timers are gone then throttle_group_unregister_tgm()
20
And in subsequent patches:
21
will attempt to destroy them again, crashing QEMU.
21
22
22
qom <- event-loop-base <- qemuutil (util/main-loop.c)
23
While b) could be fixed easily by allowing the timers to be freed
23
24
twice, this would result in a situation in which we can no longer
24
All this forced some amount of reordering in meson.build:
25
guarantee that a valid ThrottleState has a valid AioContext and
25
26
timers.
26
- Moved qom build definition before qemuutil. Doing it the other way
27
27
around (i.e. moving qemuutil after qom) isn't possible as a lot of
28
This patch ensures that the timers and AioContext are always valid
28
core libraries that live in between the two depend on it.
29
when I/O limits are set, regardless of whether the BlockBackend has a
29
30
BlockDriverState inserted or not.
30
- Process the 'hw' subdir earlier, as it introduces files into the
31
31
'qom' source set.
32
[Fixed "There'a" typo as suggested by Max Reitz <mreitz@redhat.com>
32
33
--Stefan]
33
No functional changes intended.
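
The user-visible interface stays the same; for instance, an existing
invocation such as the following (id and value are illustrative) keeps
working, with aio-max-batch now handled by the event-loop-base parent:

  -object iothread,id=iothread0,aio-max-batch=16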
34
34
35
Reported-by: sochin jiang <sochin.jiang@huawei.com>
35
Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
36
Signed-off-by: Alberto Garcia <berto@igalia.com>
36
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
37
Reviewed-by: Max Reitz <mreitz@redhat.com>
37
Acked-by: Markus Armbruster <armbru@redhat.com>
38
Message-id: e089c66e7c20289b046d782cea4373b765c5bc1d.1510339534.git.berto@igalia.com
38
Message-id: 20220425075723.20019-2-nsaenzju@redhat.com
39
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
39
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
40
---
40
---
41
block/block-backend.c | 16 ++++++++--------
41
qapi/qom.json | 22 +++++--
42
1 file changed, 8 insertions(+), 8 deletions(-)
42
meson.build | 23 ++++---
43
43
include/sysemu/event-loop-base.h | 36 +++++++++++
44
diff --git a/block/block-backend.c b/block/block-backend.c
44
include/sysemu/iothread.h | 6 +-
45
event-loop-base.c | 104 +++++++++++++++++++++++++++++++
46
iothread.c | 65 ++++++-------------
47
6 files changed, 192 insertions(+), 64 deletions(-)
48
create mode 100644 include/sysemu/event-loop-base.h
49
create mode 100644 event-loop-base.c
50
51
diff --git a/qapi/qom.json b/qapi/qom.json
45
index XXXXXXX..XXXXXXX 100644
52
index XXXXXXX..XXXXXXX 100644
46
--- a/block/block-backend.c
53
--- a/qapi/qom.json
47
+++ b/block/block-backend.c
54
+++ b/qapi/qom.json
48
@@ -XXX,XX +XXX,XX @@ BlockBackend *blk_by_public(BlockBackendPublic *public)
55
@@ -XXX,XX +XXX,XX @@
49
*/
56
'*repeat': 'bool',
50
void blk_remove_bs(BlockBackend *blk)
57
'*grab-toggle': 'GrabToggleKeys' } }
58
59
+##
60
+# @EventLoopBaseProperties:
61
+#
62
+# Common properties for event loops
63
+#
64
+# @aio-max-batch: maximum number of requests in a batch for the AIO engine,
65
+# 0 means that the engine will use its default.
66
+# (default: 0)
67
+#
68
+# Since: 7.1
69
+##
70
+{ 'struct': 'EventLoopBaseProperties',
71
+ 'data': { '*aio-max-batch': 'int' } }
72
+
73
##
74
# @IothreadProperties:
75
#
76
@@ -XXX,XX +XXX,XX @@
77
# algorithm detects it is spending too long polling without
78
# encountering events. 0 selects a default behaviour (default: 0)
79
#
80
-# @aio-max-batch: maximum number of requests in a batch for the AIO engine,
81
-# 0 means that the engine will use its default
82
-# (default:0, since 6.1)
83
+# The @aio-max-batch option is available since 6.1.
84
#
85
# Since: 2.0
86
##
87
{ 'struct': 'IothreadProperties',
88
+ 'base': 'EventLoopBaseProperties',
89
'data': { '*poll-max-ns': 'int',
90
'*poll-grow': 'int',
91
- '*poll-shrink': 'int',
92
- '*aio-max-batch': 'int' } }
93
+ '*poll-shrink': 'int' } }
94
95
##
96
# @MemoryBackendProperties:
97
diff --git a/meson.build b/meson.build
98
index XXXXXXX..XXXXXXX 100644
99
--- a/meson.build
100
+++ b/meson.build
101
@@ -XXX,XX +XXX,XX @@ subdir('qom')
102
subdir('authz')
103
subdir('crypto')
104
subdir('ui')
105
+subdir('hw')
106
107
108
if enable_modules
109
@@ -XXX,XX +XXX,XX @@ if enable_modules
110
modulecommon = declare_dependency(link_whole: libmodulecommon, compile_args: '-DBUILD_DSO')
111
endif
112
113
+qom_ss = qom_ss.apply(config_host, strict: false)
114
+libqom = static_library('qom', qom_ss.sources() + genh,
115
+ dependencies: [qom_ss.dependencies()],
116
+ name_suffix: 'fa')
117
+qom = declare_dependency(link_whole: libqom)
118
+
119
+event_loop_base = files('event-loop-base.c')
120
+event_loop_base = static_library('event-loop-base', sources: event_loop_base + genh,
121
+ build_by_default: true)
122
+event_loop_base = declare_dependency(link_whole: event_loop_base,
123
+ dependencies: [qom])
124
+
125
stub_ss = stub_ss.apply(config_all, strict: false)
126
127
util_ss.add_all(trace_ss)
128
@@ -XXX,XX +XXX,XX @@ subdir('monitor')
129
subdir('net')
130
subdir('replay')
131
subdir('semihosting')
132
-subdir('hw')
133
subdir('tcg')
134
subdir('fpu')
135
subdir('accel')
136
@@ -XXX,XX +XXX,XX @@ qemu_syms = custom_target('qemu.syms', output: 'qemu.syms',
137
capture: true,
138
command: [undefsym, nm, '@INPUT@'])
139
140
-qom_ss = qom_ss.apply(config_host, strict: false)
141
-libqom = static_library('qom', qom_ss.sources() + genh,
142
- dependencies: [qom_ss.dependencies()],
143
- name_suffix: 'fa')
144
-
145
-qom = declare_dependency(link_whole: libqom)
146
-
147
authz_ss = authz_ss.apply(config_host, strict: false)
148
libauthz = static_library('authz', authz_ss.sources() + genh,
149
dependencies: [authz_ss.dependencies()],
150
@@ -XXX,XX +XXX,XX @@ libblockdev = static_library('blockdev', blockdev_ss.sources() + genh,
151
build_by_default: false)
152
153
blockdev = declare_dependency(link_whole: [libblockdev],
154
- dependencies: [block])
155
+ dependencies: [block, event_loop_base])
156
157
qmp_ss = qmp_ss.apply(config_host, strict: false)
158
libqmp = static_library('qmp', qmp_ss.sources() + genh,
159
diff --git a/include/sysemu/event-loop-base.h b/include/sysemu/event-loop-base.h
160
new file mode 100644
161
index XXXXXXX..XXXXXXX
162
--- /dev/null
163
+++ b/include/sysemu/event-loop-base.h
164
@@ -XXX,XX +XXX,XX @@
165
+/*
166
+ * QEMU event-loop backend
167
+ *
168
+ * Copyright (C) 2022 Red Hat Inc
169
+ *
170
+ * Authors:
171
+ * Nicolas Saenz Julienne <nsaenzju@redhat.com>
172
+ *
173
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
174
+ * See the COPYING file in the top-level directory.
175
+ */
176
+#ifndef QEMU_EVENT_LOOP_BASE_H
177
+#define QEMU_EVENT_LOOP_BASE_H
178
+
179
+#include "qom/object.h"
180
+#include "block/aio.h"
181
+#include "qemu/typedefs.h"
182
+
183
+#define TYPE_EVENT_LOOP_BASE "event-loop-base"
184
+OBJECT_DECLARE_TYPE(EventLoopBase, EventLoopBaseClass,
185
+ EVENT_LOOP_BASE)
186
+
187
+struct EventLoopBaseClass {
188
+ ObjectClass parent_class;
189
+
190
+ void (*init)(EventLoopBase *base, Error **errp);
191
+ void (*update_params)(EventLoopBase *base, Error **errp);
192
+};
193
+
194
+struct EventLoopBase {
195
+ Object parent;
196
+
197
+ /* AioContext AIO engine parameters */
198
+ int64_t aio_max_batch;
199
+};
200
+#endif
201
diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
202
index XXXXXXX..XXXXXXX 100644
203
--- a/include/sysemu/iothread.h
204
+++ b/include/sysemu/iothread.h
205
@@ -XXX,XX +XXX,XX @@
206
#include "block/aio.h"
207
#include "qemu/thread.h"
208
#include "qom/object.h"
209
+#include "sysemu/event-loop-base.h"
210
211
#define TYPE_IOTHREAD "iothread"
212
213
struct IOThread {
214
- Object parent_obj;
215
+ EventLoopBase parent_obj;
216
217
QemuThread thread;
218
AioContext *ctx;
219
@@ -XXX,XX +XXX,XX @@ struct IOThread {
220
int64_t poll_max_ns;
221
int64_t poll_grow;
222
int64_t poll_shrink;
223
-
224
- /* AioContext AIO engine parameters */
225
- int64_t aio_max_batch;
226
};
227
typedef struct IOThread IOThread;
228
229
diff --git a/event-loop-base.c b/event-loop-base.c
230
new file mode 100644
231
index XXXXXXX..XXXXXXX
232
--- /dev/null
233
+++ b/event-loop-base.c
234
@@ -XXX,XX +XXX,XX @@
235
+/*
236
+ * QEMU event-loop base
237
+ *
238
+ * Copyright (C) 2022 Red Hat Inc
239
+ *
240
+ * Authors:
241
+ * Stefan Hajnoczi <stefanha@redhat.com>
242
+ * Nicolas Saenz Julienne <nsaenzju@redhat.com>
243
+ *
244
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
245
+ * See the COPYING file in the top-level directory.
246
+ */
247
+
248
+#include "qemu/osdep.h"
249
+#include "qom/object_interfaces.h"
250
+#include "qapi/error.h"
251
+#include "sysemu/event-loop-base.h"
252
+
253
+typedef struct {
254
+ const char *name;
255
+ ptrdiff_t offset; /* field's byte offset in EventLoopBase struct */
256
+} EventLoopBaseParamInfo;
257
+
258
+static EventLoopBaseParamInfo aio_max_batch_info = {
259
+ "aio-max-batch", offsetof(EventLoopBase, aio_max_batch),
260
+};
261
+
262
+static void event_loop_base_get_param(Object *obj, Visitor *v,
263
+ const char *name, void *opaque, Error **errp)
264
+{
265
+ EventLoopBase *event_loop_base = EVENT_LOOP_BASE(obj);
266
+ EventLoopBaseParamInfo *info = opaque;
267
+ int64_t *field = (void *)event_loop_base + info->offset;
268
+
269
+ visit_type_int64(v, name, field, errp);
270
+}
271
+
272
+static void event_loop_base_set_param(Object *obj, Visitor *v,
273
+ const char *name, void *opaque, Error **errp)
274
+{
275
+ EventLoopBaseClass *bc = EVENT_LOOP_BASE_GET_CLASS(obj);
276
+ EventLoopBase *base = EVENT_LOOP_BASE(obj);
277
+ EventLoopBaseParamInfo *info = opaque;
278
+ int64_t *field = (void *)base + info->offset;
279
+ int64_t value;
280
+
281
+ if (!visit_type_int64(v, name, &value, errp)) {
282
+ return;
283
+ }
284
+
285
+ if (value < 0) {
286
+ error_setg(errp, "%s value must be in range [0, %" PRId64 "]",
287
+ info->name, INT64_MAX);
288
+ return;
289
+ }
290
+
291
+ *field = value;
292
+
293
+ if (bc->update_params) {
294
+ bc->update_params(base, errp);
295
+ }
296
+
297
+ return;
298
+}
299
+
300
+static void event_loop_base_complete(UserCreatable *uc, Error **errp)
301
+{
302
+ EventLoopBaseClass *bc = EVENT_LOOP_BASE_GET_CLASS(uc);
303
+ EventLoopBase *base = EVENT_LOOP_BASE(uc);
304
+
305
+ if (bc->init) {
306
+ bc->init(base, errp);
307
+ }
308
+}
309
+
310
+static void event_loop_base_class_init(ObjectClass *klass, void *class_data)
311
+{
312
+ UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
313
+ ucc->complete = event_loop_base_complete;
314
+
315
+ object_class_property_add(klass, "aio-max-batch", "int",
316
+ event_loop_base_get_param,
317
+ event_loop_base_set_param,
318
+ NULL, &aio_max_batch_info);
319
+}
320
+
321
+static const TypeInfo event_loop_base_info = {
322
+ .name = TYPE_EVENT_LOOP_BASE,
323
+ .parent = TYPE_OBJECT,
324
+ .instance_size = sizeof(EventLoopBase),
325
+ .class_size = sizeof(EventLoopBaseClass),
326
+ .class_init = event_loop_base_class_init,
327
+ .abstract = true,
328
+ .interfaces = (InterfaceInfo[]) {
329
+ { TYPE_USER_CREATABLE },
330
+ { }
331
+ }
332
+};
333
+
334
+static void register_types(void)
335
+{
336
+ type_register_static(&event_loop_base_info);
337
+}
338
+type_init(register_types);
339
diff --git a/iothread.c b/iothread.c
340
index XXXXXXX..XXXXXXX 100644
341
--- a/iothread.c
342
+++ b/iothread.c
343
@@ -XXX,XX +XXX,XX @@
344
#include "qemu/module.h"
345
#include "block/aio.h"
346
#include "block/block.h"
347
+#include "sysemu/event-loop-base.h"
348
#include "sysemu/iothread.h"
349
#include "qapi/error.h"
350
#include "qapi/qapi-commands-misc.h"
351
@@ -XXX,XX +XXX,XX @@ static void iothread_init_gcontext(IOThread *iothread)
352
iothread->main_loop = g_main_loop_new(iothread->worker_context, TRUE);
353
}
354
355
-static void iothread_set_aio_context_params(IOThread *iothread, Error **errp)
356
+static void iothread_set_aio_context_params(EventLoopBase *base, Error **errp)
51
{
357
{
52
+ ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
358
+ IOThread *iothread = IOTHREAD(base);
53
BlockDriverState *bs;
359
ERRP_GUARD();
54
- ThrottleTimers *tt;
360
55
361
+ if (!iothread->ctx) {
56
notifier_list_notify(&blk->remove_bs_notifiers, blk);
362
+ return;
57
- if (blk->public.throttle_group_member.throttle_state) {
363
+ }
58
- tt = &blk->public.throttle_group_member.throttle_timers;
364
+
59
+ if (tgm->throttle_state) {
365
aio_context_set_poll_params(iothread->ctx,
60
bs = blk_bs(blk);
366
iothread->poll_max_ns,
61
bdrv_drained_begin(bs);
367
iothread->poll_grow,
62
- throttle_timers_detach_aio_context(tt);
368
@@ -XXX,XX +XXX,XX @@ static void iothread_set_aio_context_params(IOThread *iothread, Error **errp)
63
+ throttle_group_detach_aio_context(tgm);
64
+ throttle_group_attach_aio_context(tgm, qemu_get_aio_context());
65
bdrv_drained_end(bs);
66
}
369
}
67
370
68
@@ -XXX,XX +XXX,XX @@ void blk_remove_bs(BlockBackend *blk)
371
aio_context_set_aio_params(iothread->ctx,
69
*/
372
- iothread->aio_max_batch,
70
int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
373
+ iothread->parent_obj.aio_max_batch,
374
errp);
375
}
376
377
-static void iothread_complete(UserCreatable *obj, Error **errp)
378
+
379
+static void iothread_init(EventLoopBase *base, Error **errp)
71
{
380
{
72
+ ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
381
Error *local_error = NULL;
73
blk->root = bdrv_root_attach_child(bs, "root", &child_root,
382
- IOThread *iothread = IOTHREAD(obj);
74
blk->perm, blk->shared_perm, blk, errp);
383
+ IOThread *iothread = IOTHREAD(base);
75
if (blk->root == NULL) {
384
char *thread_name;
76
@@ -XXX,XX +XXX,XX @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
385
77
bdrv_ref(bs);
386
iothread->stopping = false;
78
387
@@ -XXX,XX +XXX,XX @@ static void iothread_complete(UserCreatable *obj, Error **errp)
79
notifier_list_notify(&blk->insert_bs_notifiers, blk);
388
*/
80
- if (blk->public.throttle_group_member.throttle_state) {
389
iothread_init_gcontext(iothread);
81
- throttle_timers_attach_aio_context(
390
82
- &blk->public.throttle_group_member.throttle_timers,
391
- iothread_set_aio_context_params(iothread, &local_error);
83
- bdrv_get_aio_context(bs));
392
+ iothread_set_aio_context_params(base, &local_error);
84
+ if (tgm->throttle_state) {
393
if (local_error) {
85
+ throttle_group_detach_aio_context(tgm);
394
error_propagate(errp, local_error);
86
+ throttle_group_attach_aio_context(tgm, bdrv_get_aio_context(bs));
395
aio_context_unref(iothread->ctx);
396
@@ -XXX,XX +XXX,XX @@ static void iothread_complete(UserCreatable *obj, Error **errp)
397
* to inherit.
398
*/
399
thread_name = g_strdup_printf("IO %s",
400
- object_get_canonical_path_component(OBJECT(obj)));
401
+ object_get_canonical_path_component(OBJECT(base)));
402
qemu_thread_create(&iothread->thread, thread_name, iothread_run,
403
iothread, QEMU_THREAD_JOINABLE);
404
g_free(thread_name);
405
@@ -XXX,XX +XXX,XX @@ static IOThreadParamInfo poll_grow_info = {
406
static IOThreadParamInfo poll_shrink_info = {
407
"poll-shrink", offsetof(IOThread, poll_shrink),
408
};
409
-static IOThreadParamInfo aio_max_batch_info = {
410
- "aio-max-batch", offsetof(IOThread, aio_max_batch),
411
-};
412
413
static void iothread_get_param(Object *obj, Visitor *v,
414
const char *name, IOThreadParamInfo *info, Error **errp)
415
@@ -XXX,XX +XXX,XX @@ static void iothread_set_poll_param(Object *obj, Visitor *v,
87
}
416
}
88
417
}
418
419
-static void iothread_get_aio_param(Object *obj, Visitor *v,
420
- const char *name, void *opaque, Error **errp)
421
-{
422
- IOThreadParamInfo *info = opaque;
423
-
424
- iothread_get_param(obj, v, name, info, errp);
425
-}
426
-
427
-static void iothread_set_aio_param(Object *obj, Visitor *v,
428
- const char *name, void *opaque, Error **errp)
429
-{
430
- IOThread *iothread = IOTHREAD(obj);
431
- IOThreadParamInfo *info = opaque;
432
-
433
- if (!iothread_set_param(obj, v, name, info, errp)) {
434
- return;
435
- }
436
-
437
- if (iothread->ctx) {
438
- aio_context_set_aio_params(iothread->ctx,
439
- iothread->aio_max_batch,
440
- errp);
441
- }
442
-}
443
-
444
static void iothread_class_init(ObjectClass *klass, void *class_data)
445
{
446
- UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
447
- ucc->complete = iothread_complete;
448
+ EventLoopBaseClass *bc = EVENT_LOOP_BASE_CLASS(klass);
449
+
450
+ bc->init = iothread_init;
451
+ bc->update_params = iothread_set_aio_context_params;
452
453
object_class_property_add(klass, "poll-max-ns", "int",
454
iothread_get_poll_param,
455
@@ -XXX,XX +XXX,XX @@ static void iothread_class_init(ObjectClass *klass, void *class_data)
456
iothread_get_poll_param,
457
iothread_set_poll_param,
458
NULL, &poll_shrink_info);
459
- object_class_property_add(klass, "aio-max-batch", "int",
460
- iothread_get_aio_param,
461
- iothread_set_aio_param,
462
- NULL, &aio_max_batch_info);
463
}
464
465
static const TypeInfo iothread_info = {
466
.name = TYPE_IOTHREAD,
467
- .parent = TYPE_OBJECT,
468
+ .parent = TYPE_EVENT_LOOP_BASE,
469
.class_init = iothread_class_init,
470
.instance_size = sizeof(IOThread),
471
.instance_init = iothread_instance_init,
472
.instance_finalize = iothread_instance_finalize,
473
- .interfaces = (InterfaceInfo[]) {
474
- {TYPE_USER_CREATABLE},
475
- {}
476
- },
477
};
478
479
static void iothread_register_types(void)
480
@@ -XXX,XX +XXX,XX @@ static int query_one_iothread(Object *object, void *opaque)
481
info->poll_max_ns = iothread->poll_max_ns;
482
info->poll_grow = iothread->poll_grow;
483
info->poll_shrink = iothread->poll_shrink;
484
- info->aio_max_batch = iothread->aio_max_batch;
485
+ info->aio_max_batch = iothread->parent_obj.aio_max_batch;
486
487
QAPI_LIST_APPEND(*tail, info);
89
return 0;
488
return 0;
90
--
489
--
91
2.13.6
490
2.35.1
92
93
1
From: Alberto Garcia <berto@igalia.com>
1
From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
2
2
3
This test hotplugs a CD drive to a VM and checks that I/O limits can
3
'event-loop-base' provides basic property handling for all 'AioContext'
4
be set only when the drive has media inserted and that they are kept
4
based event loops. So let's define a new 'MainLoopClass' that inherits
5
when the media is replaced.
5
from it. This will permit tweaking the main loop's properties through
6
6
qapi as well as through the command line using the '-object' keyword[1].
7
This also tests the removal of a device with valid I/O limits set but
7
Only one instance of 'MainLoopClass' might be created at any time.
8
no media inserted. This involves deleting and disabling the limits
8
9
of a BlockBackend without BlockDriverState, a scenario that has been
9
'EventLoopBaseClass' learns a new callback, 'can_be_deleted()' so as to
10
crashing until the fixes from the last couple of patches.
10
mark 'MainLoop' as non-deletable.
11
11
12
[Python PEP8 fixup: "Don't use spaces around the = sign when used to
12
[1] For example:
13
indicate a keyword argument or a default parameter value"
13
-object main-loop,id=main-loop,aio-max-batch=<value>
14
--Stefan]
14
15
15
Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
16
Signed-off-by: Alberto Garcia <berto@igalia.com>
16
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
17
Reviewed-by: Max Reitz <mreitz@redhat.com>
17
Acked-by: Markus Armbruster <armbru@redhat.com>
18
Message-id: 071eb397118ed207c5a7f01d58766e415ee18d6a.1510339534.git.berto@igalia.com
18
Message-id: 20220425075723.20019-3-nsaenzju@redhat.com
19
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
19
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
20
---
20
---
21
tests/qemu-iotests/093 | 62 ++++++++++++++++++++++++++++++++++++++++++++++
21
qapi/qom.json | 13 ++++++++
22
tests/qemu-iotests/093.out | 4 +--
22
meson.build | 3 +-
23
2 files changed, 64 insertions(+), 2 deletions(-)
23
include/qemu/main-loop.h | 10 ++++++
24
24
include/sysemu/event-loop-base.h | 1 +
25
diff --git a/tests/qemu-iotests/093 b/tests/qemu-iotests/093
25
event-loop-base.c | 13 ++++++++
26
index XXXXXXX..XXXXXXX 100755
26
util/main-loop.c | 56 ++++++++++++++++++++++++++++++++
27
--- a/tests/qemu-iotests/093
27
6 files changed, 95 insertions(+), 1 deletion(-)
28
+++ b/tests/qemu-iotests/093
28
29
@@ -XXX,XX +XXX,XX @@ class ThrottleTestGroupNames(iotests.QMPTestCase):
29
diff --git a/qapi/qom.json b/qapi/qom.json
30
groupname = "group%d" % i
30
index XXXXXXX..XXXXXXX 100644
31
self.verify_name(devname, groupname)
31
--- a/qapi/qom.json
32
32
+++ b/qapi/qom.json
33
+class ThrottleTestRemovableMedia(iotests.QMPTestCase):
33
@@ -XXX,XX +XXX,XX @@
34
+ def setUp(self):
34
'*poll-grow': 'int',
35
+ self.vm = iotests.VM()
35
'*poll-shrink': 'int' } }
36
+ if iotests.qemu_default_machine == 's390-ccw-virtio':
36
37
+ self.vm.add_device("virtio-scsi-ccw,id=virtio-scsi")
37
+##
38
+ else:
38
+# @MainLoopProperties:
39
+ self.vm.add_device("virtio-scsi-pci,id=virtio-scsi")
39
+#
40
+ self.vm.launch()
40
+# Properties for the main-loop object.
41
+
41
+#
42
+ def tearDown(self):
42
+# Since: 7.1
43
+ self.vm.shutdown()
43
+##
44
+
44
+{ 'struct': 'MainLoopProperties',
45
+ def test_removable_media(self):
45
+ 'base': 'EventLoopBaseProperties',
46
+ # Add a couple of dummy nodes named cd0 and cd1
46
+ 'data': {} }
47
+ result = self.vm.qmp("blockdev-add", driver="null-aio",
47
+
48
+ node_name="cd0")
48
##
49
+ self.assert_qmp(result, 'return', {})
49
# @MemoryBackendProperties:
50
+ result = self.vm.qmp("blockdev-add", driver="null-aio",
50
#
51
+ node_name="cd1")
51
@@ -XXX,XX +XXX,XX @@
52
+ self.assert_qmp(result, 'return', {})
52
{ 'name': 'input-linux',
53
+
53
'if': 'CONFIG_LINUX' },
54
+ # Attach a CD drive with cd0 inserted
54
'iothread',
55
+ result = self.vm.qmp("device_add", driver="scsi-cd",
55
+ 'main-loop',
56
+ id="dev0", drive="cd0")
56
{ 'name': 'memory-backend-epc',
57
+ self.assert_qmp(result, 'return', {})
57
'if': 'CONFIG_LINUX' },
58
+
58
'memory-backend-file',
59
+ # Set I/O limits
59
@@ -XXX,XX +XXX,XX @@
60
+ args = { "id": "dev0", "iops": 100, "iops_rd": 0, "iops_wr": 0,
60
'input-linux': { 'type': 'InputLinuxProperties',
61
+ "bps": 50, "bps_rd": 0, "bps_wr": 0 }
61
'if': 'CONFIG_LINUX' },
62
+ result = self.vm.qmp("block_set_io_throttle", conv_keys=False, **args)
62
'iothread': 'IothreadProperties',
63
+ self.assert_qmp(result, 'return', {})
63
+ 'main-loop': 'MainLoopProperties',
64
+
64
'memory-backend-epc': { 'type': 'MemoryBackendEpcProperties',
65
+ # Check that the I/O limits have been set
65
'if': 'CONFIG_LINUX' },
66
+ result = self.vm.qmp("query-block")
66
'memory-backend-file': 'MemoryBackendFileProperties',
67
+ self.assert_qmp(result, 'return[0]/inserted/iops', 100)
67
diff --git a/meson.build b/meson.build
68
+ self.assert_qmp(result, 'return[0]/inserted/bps', 50)
68
index XXXXXXX..XXXXXXX 100644
69
+
69
--- a/meson.build
70
+ # Now eject cd0 and insert cd1
70
+++ b/meson.build
71
+ result = self.vm.qmp("blockdev-open-tray", id='dev0')
71
@@ -XXX,XX +XXX,XX @@ libqemuutil = static_library('qemuutil',
72
+ self.assert_qmp(result, 'return', {})
72
sources: util_ss.sources() + stub_ss.sources() + genh,
73
+ result = self.vm.qmp("x-blockdev-remove-medium", id='dev0')
73
dependencies: [util_ss.dependencies(), libm, threads, glib, socket, malloc, pixman])
74
+ self.assert_qmp(result, 'return', {})
74
qemuutil = declare_dependency(link_with: libqemuutil,
75
+ result = self.vm.qmp("x-blockdev-insert-medium", id='dev0', node_name='cd1')
75
- sources: genh + version_res)
76
+ self.assert_qmp(result, 'return', {})
76
+ sources: genh + version_res,
77
+
77
+ dependencies: [event_loop_base])
78
+ # Check that the I/O limits are still the same
78
79
+ result = self.vm.qmp("query-block")
79
if have_system or have_user
80
+ self.assert_qmp(result, 'return[0]/inserted/iops', 100)
80
decodetree = generator(find_program('scripts/decodetree.py'),
81
+ self.assert_qmp(result, 'return[0]/inserted/bps', 50)
81
diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
82
+
82
index XXXXXXX..XXXXXXX 100644
83
+ # Eject cd1
83
--- a/include/qemu/main-loop.h
84
+ result = self.vm.qmp("x-blockdev-remove-medium", id='dev0')
84
+++ b/include/qemu/main-loop.h
85
+ self.assert_qmp(result, 'return', {})
85
@@ -XXX,XX +XXX,XX @@
86
+
86
#define QEMU_MAIN_LOOP_H
87
+ # Check that we can't set limits if the device has no medium
87
88
+ result = self.vm.qmp("block_set_io_throttle", conv_keys=False, **args)
88
#include "block/aio.h"
89
+ self.assert_qmp(result, 'error/class', 'GenericError')
89
+#include "qom/object.h"
90
+
90
+#include "sysemu/event-loop-base.h"
91
+ # Remove the CD drive
91
92
+ result = self.vm.qmp("device_del", id='dev0')
92
#define SIG_IPI SIGUSR1
93
+ self.assert_qmp(result, 'return', {})
93
94
+
94
+#define TYPE_MAIN_LOOP "main-loop"
95
95
+OBJECT_DECLARE_TYPE(MainLoop, MainLoopClass, MAIN_LOOP)
96
if __name__ == '__main__':
96
+
97
iotests.main(supported_fmts=["raw"])
97
+struct MainLoop {
98
diff --git a/tests/qemu-iotests/093.out b/tests/qemu-iotests/093.out
98
+ EventLoopBase parent_obj;
99
index XXXXXXX..XXXXXXX 100644
99
+};
100
--- a/tests/qemu-iotests/093.out
100
+typedef struct MainLoop MainLoop;
101
+++ b/tests/qemu-iotests/093.out
101
+
102
@@ -XXX,XX +XXX,XX @@
102
/**
103
-.......
103
* qemu_init_main_loop: Set up the process so that it can run the main loop.
104
+........
104
*
105
----------------------------------------------------------------------
105
diff --git a/include/sysemu/event-loop-base.h b/include/sysemu/event-loop-base.h
106
-Ran 7 tests
106
index XXXXXXX..XXXXXXX 100644
107
+Ran 8 tests
107
--- a/include/sysemu/event-loop-base.h
108
108
+++ b/include/sysemu/event-loop-base.h
109
OK
109
@@ -XXX,XX +XXX,XX @@ struct EventLoopBaseClass {
110
111
void (*init)(EventLoopBase *base, Error **errp);
112
void (*update_params)(EventLoopBase *base, Error **errp);
113
+ bool (*can_be_deleted)(EventLoopBase *base);
114
};
115
116
struct EventLoopBase {
117
diff --git a/event-loop-base.c b/event-loop-base.c
118
index XXXXXXX..XXXXXXX 100644
119
--- a/event-loop-base.c
120
+++ b/event-loop-base.c
121
@@ -XXX,XX +XXX,XX @@ static void event_loop_base_complete(UserCreatable *uc, Error **errp)
122
}
123
}
124
125
+static bool event_loop_base_can_be_deleted(UserCreatable *uc)
126
+{
127
+ EventLoopBaseClass *bc = EVENT_LOOP_BASE_GET_CLASS(uc);
128
+ EventLoopBase *backend = EVENT_LOOP_BASE(uc);
129
+
130
+ if (bc->can_be_deleted) {
131
+ return bc->can_be_deleted(backend);
132
+ }
133
+
134
+ return true;
135
+}
136
+
137
static void event_loop_base_class_init(ObjectClass *klass, void *class_data)
138
{
139
UserCreatableClass *ucc = USER_CREATABLE_CLASS(klass);
140
ucc->complete = event_loop_base_complete;
141
+ ucc->can_be_deleted = event_loop_base_can_be_deleted;
142
143
object_class_property_add(klass, "aio-max-batch", "int",
144
event_loop_base_get_param,
145
diff --git a/util/main-loop.c b/util/main-loop.c
146
index XXXXXXX..XXXXXXX 100644
147
--- a/util/main-loop.c
148
+++ b/util/main-loop.c
149
@@ -XXX,XX +XXX,XX @@
150
#include "qemu/error-report.h"
151
#include "qemu/queue.h"
152
#include "qemu/compiler.h"
153
+#include "qom/object.h"
154
155
#ifndef _WIN32
156
#include <sys/wait.h>
157
@@ -XXX,XX +XXX,XX @@ int qemu_init_main_loop(Error **errp)
158
return 0;
159
}
160
161
+static void main_loop_update_params(EventLoopBase *base, Error **errp)
162
+{
163
+ if (!qemu_aio_context) {
164
+ error_setg(errp, "qemu aio context not ready");
165
+ return;
166
+ }
167
+
168
+ aio_context_set_aio_params(qemu_aio_context, base->aio_max_batch, errp);
169
+}
170
+
171
+MainLoop *mloop;
172
+
173
+static void main_loop_init(EventLoopBase *base, Error **errp)
174
+{
175
+ MainLoop *m = MAIN_LOOP(base);
176
+
177
+ if (mloop) {
178
+ error_setg(errp, "only one main-loop instance allowed");
179
+ return;
180
+ }
181
+
182
+ main_loop_update_params(base, errp);
183
+
184
+ mloop = m;
185
+ return;
186
+}
187
+
188
+static bool main_loop_can_be_deleted(EventLoopBase *base)
189
+{
190
+ return false;
191
+}
192
+
193
+static void main_loop_class_init(ObjectClass *oc, void *class_data)
194
+{
195
+ EventLoopBaseClass *bc = EVENT_LOOP_BASE_CLASS(oc);
196
+
197
+ bc->init = main_loop_init;
198
+ bc->update_params = main_loop_update_params;
199
+ bc->can_be_deleted = main_loop_can_be_deleted;
200
+}
201
+
202
+static const TypeInfo main_loop_info = {
203
+ .name = TYPE_MAIN_LOOP,
204
+ .parent = TYPE_EVENT_LOOP_BASE,
205
+ .class_init = main_loop_class_init,
206
+ .instance_size = sizeof(MainLoop),
207
+};
208
+
209
+static void main_loop_register_types(void)
210
+{
211
+ type_register_static(&main_loop_info);
212
+}
213
+
214
+type_init(main_loop_register_types)
215
+
216
static int max_priority;
217
218
#ifndef _WIN32
110
--
219
--
111
2.13.6
220
2.35.1
112
113
1
From: Zhengui <lizhengui@huawei.com>
1
From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
2
2
3
In blk_remove_bs, all I/O should be completed before removing throttle
3
The thread pool regulates itself: when idle, it kills threads until
4
timers. If there is in-flight I/O, removing the throttle timers here will
4
empty, when in demand, it creates new threads until full. This behaviour
5
cause the in-flight I/O to never return.
5
doesn't play well with latency-sensitive workloads where the price of
6
This patch adds bdrv_drained_begin() before throttle_timers_detach_aio_context()
6
creating a new thread is too high. For example, when paired with qemu's
7
so that all I/O completes before the throttle timers are removed.
7
'-mlock', or using safety features like SafeStack, creating a new thread
8
has been measured to take multiple milliseconds.
8
9
9
[Moved declaration of bs as suggested by Alberto Garcia
10
In order to mitigate this, let's introduce a new 'EventLoopBase'
10
<berto@igalia.com>.
11
property to set the thread pool size. The threads will be created during
11
--Stefan]
12
the pool's initialization or upon updating the property's value, remain
13
available during its lifetime regardless of demand, and destroyed upon
14
freeing it. A properly characterized workload will then be able to
15
configure the pool to avoid any latency spikes.
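
For example, a latency-sensitive setup could pin the pool size so that
worker threads are created up front and kept around for the iothread's
lifetime (the values here are purely illustrative):

  -object iothread,id=iothread0,thread-pool-min=16,thread-pool-max=16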
12
16
13
Signed-off-by: Zhengui <lizhengui@huawei.com>
17
Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
14
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
18
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
15
Reviewed-by: Alberto Garcia <berto@igalia.com>
19
Acked-by: Markus Armbruster <armbru@redhat.com>
16
Message-id: 1508564040-120700-1-git-send-email-lizhengui@huawei.com
20
Message-id: 20220425075723.20019-4-nsaenzju@redhat.com
17
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
21
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
18
---
22
---
19
block/block-backend.c | 4 ++++
23
qapi/qom.json | 10 +++++-
20
1 file changed, 4 insertions(+)
24
include/block/aio.h | 10 ++++++
25
include/block/thread-pool.h | 3 ++
26
include/sysemu/event-loop-base.h | 4 +++
27
event-loop-base.c | 23 +++++++++++++
28
iothread.c | 3 ++
29
util/aio-posix.c | 1 +
30
util/async.c | 20 ++++++++++++
31
util/main-loop.c | 9 ++++++
32
util/thread-pool.c | 55 +++++++++++++++++++++++++++++---
33
10 files changed, 133 insertions(+), 5 deletions(-)
21
34
22
diff --git a/block/block-backend.c b/block/block-backend.c
35
diff --git a/qapi/qom.json b/qapi/qom.json
23
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
24
--- a/block/block-backend.c
37
--- a/qapi/qom.json
25
+++ b/block/block-backend.c
38
+++ b/qapi/qom.json
26
@@ -XXX,XX +XXX,XX @@ BlockBackend *blk_by_public(BlockBackendPublic *public)
39
@@ -XXX,XX +XXX,XX @@
27
*/
40
# 0 means that the engine will use its default.
28
void blk_remove_bs(BlockBackend *blk)
41
# (default: 0)
42
#
43
+# @thread-pool-min: minimum number of threads reserved in the thread pool
44
+# (default: 0)
45
+#
46
+# @thread-pool-max: maximum number of threads the thread pool can contain
47
+# (default: 64)
48
+#
49
# Since: 7.1
50
##
51
{ 'struct': 'EventLoopBaseProperties',
52
- 'data': { '*aio-max-batch': 'int' } }
53
+ 'data': { '*aio-max-batch': 'int',
54
+ '*thread-pool-min': 'int',
55
+ '*thread-pool-max': 'int' } }
56
57
##
58
# @IothreadProperties:
59
diff --git a/include/block/aio.h b/include/block/aio.h
60
index XXXXXXX..XXXXXXX 100644
61
--- a/include/block/aio.h
62
+++ b/include/block/aio.h
63
@@ -XXX,XX +XXX,XX @@ struct AioContext {
64
QSLIST_HEAD(, Coroutine) scheduled_coroutines;
65
QEMUBH *co_schedule_bh;
66
67
+ int thread_pool_min;
68
+ int thread_pool_max;
69
/* Thread pool for performing work and receiving completion callbacks.
70
* Has its own locking.
71
*/
72
@@ -XXX,XX +XXX,XX @@ void aio_context_set_poll_params(AioContext *ctx, int64_t max_ns,
73
void aio_context_set_aio_params(AioContext *ctx, int64_t max_batch,
74
Error **errp);
75
76
+/**
77
+ * aio_context_set_thread_pool_params:
78
+ * @ctx: the aio context
79
+ * @min: min number of threads to have readily available in the thread pool
80
+ * @min: max number of threads the thread pool can contain
81
+ */
82
+void aio_context_set_thread_pool_params(AioContext *ctx, int64_t min,
83
+ int64_t max, Error **errp);
84
#endif
85
diff --git a/include/block/thread-pool.h b/include/block/thread-pool.h
86
index XXXXXXX..XXXXXXX 100644
87
--- a/include/block/thread-pool.h
88
+++ b/include/block/thread-pool.h
89
@@ -XXX,XX +XXX,XX @@
90
91
#include "block/block.h"
92
93
+#define THREAD_POOL_MAX_THREADS_DEFAULT 64
94
+
95
typedef int ThreadPoolFunc(void *opaque);
96
97
typedef struct ThreadPool ThreadPool;
98
@@ -XXX,XX +XXX,XX @@ BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
99
int coroutine_fn thread_pool_submit_co(ThreadPool *pool,
100
ThreadPoolFunc *func, void *arg);
101
void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg);
102
+void thread_pool_update_params(ThreadPool *pool, struct AioContext *ctx);
103
104
#endif
105
diff --git a/include/sysemu/event-loop-base.h b/include/sysemu/event-loop-base.h
106
index XXXXXXX..XXXXXXX 100644
107
--- a/include/sysemu/event-loop-base.h
108
+++ b/include/sysemu/event-loop-base.h
109
@@ -XXX,XX +XXX,XX @@ struct EventLoopBase {
110
111
/* AioContext AIO engine parameters */
112
int64_t aio_max_batch;
113
+
114
+ /* AioContext thread pool parameters */
115
+ int64_t thread_pool_min;
116
+ int64_t thread_pool_max;
117
};
118
#endif
119
diff --git a/event-loop-base.c b/event-loop-base.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/event-loop-base.c
122
+++ b/event-loop-base.c
123
@@ -XXX,XX +XXX,XX @@
124
#include "qemu/osdep.h"
125
#include "qom/object_interfaces.h"
126
#include "qapi/error.h"
127
+#include "block/thread-pool.h"
128
#include "sysemu/event-loop-base.h"
129
130
typedef struct {
131
@@ -XXX,XX +XXX,XX @@ typedef struct {
132
ptrdiff_t offset; /* field's byte offset in EventLoopBase struct */
133
} EventLoopBaseParamInfo;
134
135
+static void event_loop_base_instance_init(Object *obj)
136
+{
137
+ EventLoopBase *base = EVENT_LOOP_BASE(obj);
138
+
139
+ base->thread_pool_max = THREAD_POOL_MAX_THREADS_DEFAULT;
140
+}
141
+
142
static EventLoopBaseParamInfo aio_max_batch_info = {
143
"aio-max-batch", offsetof(EventLoopBase, aio_max_batch),
144
};
145
+static EventLoopBaseParamInfo thread_pool_min_info = {
146
+ "thread-pool-min", offsetof(EventLoopBase, thread_pool_min),
147
+};
148
+static EventLoopBaseParamInfo thread_pool_max_info = {
149
+ "thread-pool-max", offsetof(EventLoopBase, thread_pool_max),
150
+};
151
152
static void event_loop_base_get_param(Object *obj, Visitor *v,
153
const char *name, void *opaque, Error **errp)
154
@@ -XXX,XX +XXX,XX @@ static void event_loop_base_class_init(ObjectClass *klass, void *class_data)
155
event_loop_base_get_param,
156
event_loop_base_set_param,
157
NULL, &aio_max_batch_info);
158
+ object_class_property_add(klass, "thread-pool-min", "int",
159
+ event_loop_base_get_param,
160
+ event_loop_base_set_param,
161
+ NULL, &thread_pool_min_info);
162
+ object_class_property_add(klass, "thread-pool-max", "int",
163
+ event_loop_base_get_param,
164
+ event_loop_base_set_param,
165
+ NULL, &thread_pool_max_info);
166
}
167
168
static const TypeInfo event_loop_base_info = {
169
.name = TYPE_EVENT_LOOP_BASE,
170
.parent = TYPE_OBJECT,
171
.instance_size = sizeof(EventLoopBase),
172
+ .instance_init = event_loop_base_instance_init,
173
.class_size = sizeof(EventLoopBaseClass),
174
.class_init = event_loop_base_class_init,
175
.abstract = true,
176
diff --git a/iothread.c b/iothread.c
177
index XXXXXXX..XXXXXXX 100644
178
--- a/iothread.c
179
+++ b/iothread.c
180
@@ -XXX,XX +XXX,XX @@ static void iothread_set_aio_context_params(EventLoopBase *base, Error **errp)
181
aio_context_set_aio_params(iothread->ctx,
182
iothread->parent_obj.aio_max_batch,
183
errp);
184
+
185
+ aio_context_set_thread_pool_params(iothread->ctx, base->thread_pool_min,
186
+ base->thread_pool_max, errp);
187
}
188
189
190
diff --git a/util/aio-posix.c b/util/aio-posix.c
191
index XXXXXXX..XXXXXXX 100644
192
--- a/util/aio-posix.c
193
+++ b/util/aio-posix.c
194
@@ -XXX,XX +XXX,XX @@
195
196
#include "qemu/osdep.h"
197
#include "block/block.h"
198
+#include "block/thread-pool.h"
199
#include "qemu/main-loop.h"
200
#include "qemu/rcu.h"
201
#include "qemu/rcu_queue.h"
202
diff --git a/util/async.c b/util/async.c
203
index XXXXXXX..XXXXXXX 100644
204
--- a/util/async.c
205
+++ b/util/async.c
206
@@ -XXX,XX +XXX,XX @@ AioContext *aio_context_new(Error **errp)
207
208
ctx->aio_max_batch = 0;
209
210
+ ctx->thread_pool_min = 0;
211
+ ctx->thread_pool_max = THREAD_POOL_MAX_THREADS_DEFAULT;
212
+
213
return ctx;
214
fail:
215
g_source_destroy(&ctx->source);
216
@@ -XXX,XX +XXX,XX @@ void qemu_set_current_aio_context(AioContext *ctx)
217
assert(!get_my_aiocontext());
218
set_my_aiocontext(ctx);
219
}
220
+
221
+void aio_context_set_thread_pool_params(AioContext *ctx, int64_t min,
222
+ int64_t max, Error **errp)
223
+{
224
+
225
+ if (min > max || !max || min > INT_MAX || max > INT_MAX) {
226
+ error_setg(errp, "bad thread-pool-min/thread-pool-max values");
227
+ return;
228
+ }
229
+
230
+ ctx->thread_pool_min = min;
231
+ ctx->thread_pool_max = max;
232
+
233
+ if (ctx->thread_pool) {
234
+ thread_pool_update_params(ctx->thread_pool, ctx);
235
+ }
236
+}
237
diff --git a/util/main-loop.c b/util/main-loop.c
238
index XXXXXXX..XXXXXXX 100644
239
--- a/util/main-loop.c
240
+++ b/util/main-loop.c
241
@@ -XXX,XX +XXX,XX @@
242
#include "sysemu/replay.h"
243
#include "qemu/main-loop.h"
244
#include "block/aio.h"
245
+#include "block/thread-pool.h"
246
#include "qemu/error-report.h"
247
#include "qemu/queue.h"
248
#include "qemu/compiler.h"
249
@@ -XXX,XX +XXX,XX @@ int qemu_init_main_loop(Error **errp)
250
251
static void main_loop_update_params(EventLoopBase *base, Error **errp)
29
{
252
{
30
+ BlockDriverState *bs;
253
+ ERRP_GUARD();
31
ThrottleTimers *tt;
254
+
32
255
if (!qemu_aio_context) {
33
notifier_list_notify(&blk->remove_bs_notifiers, blk);
256
error_setg(errp, "qemu aio context not ready");
34
if (blk->public.throttle_group_member.throttle_state) {
257
return;
35
tt = &blk->public.throttle_group_member.throttle_timers;
36
+ bs = blk_bs(blk);
37
+ bdrv_drained_begin(bs);
38
throttle_timers_detach_aio_context(tt);
39
+ bdrv_drained_end(bs);
40
}
258
}
41
259
42
blk_update_root_state(blk);
260
aio_context_set_aio_params(qemu_aio_context, base->aio_max_batch, errp);
261
+ if (*errp) {
262
+ return;
263
+ }
264
+
265
+ aio_context_set_thread_pool_params(qemu_aio_context, base->thread_pool_min,
266
+ base->thread_pool_max, errp);
267
}
268
269
MainLoop *mloop;
270
diff --git a/util/thread-pool.c b/util/thread-pool.c
271
index XXXXXXX..XXXXXXX 100644
272
--- a/util/thread-pool.c
273
+++ b/util/thread-pool.c
274
@@ -XXX,XX +XXX,XX @@ struct ThreadPool {
275
QemuMutex lock;
276
QemuCond worker_stopped;
277
QemuSemaphore sem;
278
- int max_threads;
279
QEMUBH *new_thread_bh;
280
281
/* The following variables are only accessed from one AioContext. */
282
@@ -XXX,XX +XXX,XX @@ struct ThreadPool {
283
int new_threads; /* backlog of threads we need to create */
284
int pending_threads; /* threads created but not running yet */
285
bool stopping;
286
+ int min_threads;
287
+ int max_threads;
288
};
289
290
+static inline bool back_to_sleep(ThreadPool *pool, int ret)
291
+{
292
+ /*
293
+ * The semaphore timed out, we should exit the loop except when:
294
+ * - There is work to do, we raced with the signal.
295
+ * - The max threads threshold just changed, we raced with the signal.
296
+ * - The thread pool forces a minimum number of readily available threads.
297
+ */
298
+ if (ret == -1 && (!QTAILQ_EMPTY(&pool->request_list) ||
299
+ pool->cur_threads > pool->max_threads ||
300
+ pool->cur_threads <= pool->min_threads)) {
301
+ return true;
302
+ }
303
+
304
+ return false;
305
+}
306
+
307
static void *worker_thread(void *opaque)
308
{
309
ThreadPool *pool = opaque;
310
@@ -XXX,XX +XXX,XX @@ static void *worker_thread(void *opaque)
311
ret = qemu_sem_timedwait(&pool->sem, 10000);
312
qemu_mutex_lock(&pool->lock);
313
pool->idle_threads--;
314
- } while (ret == -1 && !QTAILQ_EMPTY(&pool->request_list));
315
- if (ret == -1 || pool->stopping) {
316
+ } while (back_to_sleep(pool, ret));
317
+ if (ret == -1 || pool->stopping ||
318
+ pool->cur_threads > pool->max_threads) {
319
break;
320
}
321
322
@@ -XXX,XX +XXX,XX @@ void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg)
323
thread_pool_submit_aio(pool, func, arg, NULL, NULL);
324
}
325
326
+void thread_pool_update_params(ThreadPool *pool, AioContext *ctx)
327
+{
328
+ qemu_mutex_lock(&pool->lock);
329
+
330
+ pool->min_threads = ctx->thread_pool_min;
331
+ pool->max_threads = ctx->thread_pool_max;
332
+
333
+ /*
334
+ * We either have to:
335
+ * - Increase the number of available threads until over the min_threads
336
+ * threshold.
337
+ * - Decrease the number of available threads until under the max_threads
338
+ * threshold.
339
+ * - Do nothing. The current number of threads falls in between the min and
340
+ * max thresholds. We'll let the pool manage itself.
341
+ */
342
+ for (int i = pool->cur_threads; i < pool->min_threads; i++) {
343
+ spawn_thread(pool);
344
+ }
345
+
346
+ for (int i = pool->cur_threads; i > pool->max_threads; i--) {
347
+ qemu_sem_post(&pool->sem);
348
+ }
349
+
350
+ qemu_mutex_unlock(&pool->lock);
351
+}
352
+
353
static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
354
{
355
if (!ctx) {
356
@@ -XXX,XX +XXX,XX @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
357
qemu_mutex_init(&pool->lock);
358
qemu_cond_init(&pool->worker_stopped);
359
qemu_sem_init(&pool->sem, 0);
360
- pool->max_threads = 64;
361
pool->new_thread_bh = aio_bh_new(ctx, spawn_thread_bh_fn, pool);
362
363
QLIST_INIT(&pool->head);
364
QTAILQ_INIT(&pool->request_list);
365
+
366
+ thread_pool_update_params(pool, ctx);
367
}
368
369
ThreadPool *thread_pool_new(AioContext *ctx)
43
--
370
--
44
2.13.6
371
2.35.1
45
46
Deleted patch
1
I/O requests hang after stop/cont commands at least since QEMU 2.10.0
2
with -drive iops=100:
3
1
4
(guest)$ dd if=/dev/zero of=/dev/vdb oflag=direct count=1000
5
(qemu) stop
6
(qemu) cont
7
...I/O is stuck...
8
9
This happens because blk_set_aio_context() detaches the ThrottleState
10
while requests may still be in flight:
11
12
if (tgm->throttle_state) {
13
throttle_group_detach_aio_context(tgm);
14
throttle_group_attach_aio_context(tgm, new_context);
15
}
16
17
This patch encloses the detach/attach calls in a drained region so no
18
I/O request is left hanging. Also add assertions so we don't make the
19
same mistake again in the future.
20
21
Reported-by: Yongxue Hong <yhong@redhat.com>
22
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
23
Reviewed-by: Alberto Garcia <berto@igalia.com>
24
Message-id: 20171110151934.16883-1-stefanha@redhat.com
25
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
26
---
27
block/block-backend.c | 2 ++
28
block/throttle-groups.c | 6 ++++++
29
2 files changed, 8 insertions(+)
30
31
diff --git a/block/block-backend.c b/block/block-backend.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/block/block-backend.c
34
+++ b/block/block-backend.c
35
@@ -XXX,XX +XXX,XX @@ void blk_set_aio_context(BlockBackend *blk, AioContext *new_context)
36
37
if (bs) {
38
if (tgm->throttle_state) {
39
+ bdrv_drained_begin(bs);
40
throttle_group_detach_aio_context(tgm);
41
throttle_group_attach_aio_context(tgm, new_context);
42
+ bdrv_drained_end(bs);
43
}
44
bdrv_set_aio_context(bs, new_context);
45
}
46
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/block/throttle-groups.c
49
+++ b/block/throttle-groups.c
50
@@ -XXX,XX +XXX,XX @@ void throttle_group_attach_aio_context(ThrottleGroupMember *tgm,
51
void throttle_group_detach_aio_context(ThrottleGroupMember *tgm)
52
{
53
ThrottleTimers *tt = &tgm->throttle_timers;
54
+
55
+ /* Requests must have been drained */
56
+ assert(tgm->pending_reqs[0] == 0 && tgm->pending_reqs[1] == 0);
57
+ assert(qemu_co_queue_empty(&tgm->throttled_reqs[0]));
58
+ assert(qemu_co_queue_empty(&tgm->throttled_reqs[1]));
59
+
60
throttle_timers_detach_aio_context(tt);
61
tgm->aio_context = NULL;
62
}
63
--
64
2.13.6
65
66
Deleted patch
1
From: Alberto Garcia <berto@igalia.com>
2
1
3
When you set I/O limits using block_set_io_throttle or the command
4
line throttling.* options, they are kept in the BlockBackend regardless
5
of whether a BlockDriverState is attached to the backend or not.
6
7
Therefore, when removing the limits using blk_io_limits_disable(), we
8
need to check if there's a BDS before attempting to drain it, else it
9
will crash QEMU. This can be reproduced very easily using HMP:
10
11
(qemu) drive_add 0 if=none,throttling.iops-total=5000
12
(qemu) drive_del none0
13
14
Reported-by: sochin jiang <sochin.jiang@huawei.com>
15
Signed-off-by: Alberto Garcia <berto@igalia.com>
16
Reviewed-by: Max Reitz <mreitz@redhat.com>
17
Message-id: 0d3a67ce8d948bb33e08672564714dcfb76a3d8c.1510339534.git.berto@igalia.com
18
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
19
---
20
block/block-backend.c | 14 ++++++++++----
21
1 file changed, 10 insertions(+), 4 deletions(-)
22
23
diff --git a/block/block-backend.c b/block/block-backend.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/block/block-backend.c
26
+++ b/block/block-backend.c
27
@@ -XXX,XX +XXX,XX @@ void blk_set_io_limits(BlockBackend *blk, ThrottleConfig *cfg)
28
29
void blk_io_limits_disable(BlockBackend *blk)
30
{
31
- assert(blk->public.throttle_group_member.throttle_state);
32
- bdrv_drained_begin(blk_bs(blk));
33
- throttle_group_unregister_tgm(&blk->public.throttle_group_member);
34
- bdrv_drained_end(blk_bs(blk));
35
+ BlockDriverState *bs = blk_bs(blk);
36
+ ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
37
+ assert(tgm->throttle_state);
38
+ if (bs) {
39
+ bdrv_drained_begin(bs);
40
+ }
41
+ throttle_group_unregister_tgm(tgm);
42
+ if (bs) {
43
+ bdrv_drained_end(bs);
44
+ }
45
}
46
47
/* should be called before blk_set_io_limits if a limit is set */
48
--
49
2.13.6
50
51