The following changes since commit ddc27d2ad9361a81c2b3800d14143bf420dae172:

  Merge tag 'pull-request-2024-03-18' of https://gitlab.com/thuth/qemu into staging (2024-03-19 10:25:25 +0000)

are available in the Git repository at:

  https://gitlab.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 86a637e48104ae74d8be53bed6441ce32be33433:

  coroutine: cap per-thread local pool size (2024-03-19 10:49:31 -0400)

----------------------------------------------------------------
Pull request

This fix solves the "failed to set up stack guard page" error that has been
reported on Linux hosts where the QEMU coroutine pool exceeds the
vm.max_map_count limit.

----------------------------------------------------------------

Stefan Hajnoczi (1):
  coroutine: cap per-thread local pool size

 util/qemu-coroutine.c | 282 +++++++++++++++++++++++++++++++++---------
 1 file changed, 223 insertions(+), 59 deletions(-)

--
2.44.0
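As background on the vm.max_map_count limit mentioned above, here is a minimal standalone C sketch (not part of this series; conservative_pool_cap() is an invented name) of how a conservative coroutine-pool cap could be derived from the host limit, assuming each pooled coroutine costs roughly two virtual memory areas (its stack plus guard page) and half of the budget is left for other mappings:

    /* Illustrative only; reads the Linux limit and applies the same rationale. */
    #include <stdio.h>
    #include <limits.h>

    static unsigned int conservative_pool_cap(void)
    {
        FILE *f = fopen("/proc/sys/vm/max_map_count", "r");
        unsigned int max_map_count;

        if (!f || fscanf(f, "%u", &max_map_count) != 1) {
            if (f) {
                fclose(f);
            }
            return UINT_MAX; /* limit unknown: effectively uncapped */
        }
        fclose(f);

        /*
         * Leave half of the VMA budget for non-coroutine users (libraries,
         * vhost-user, etc.) and halve it again because each coroutine stack
         * needs about two VMAs.
         */
        return max_map_count / 4;
    }

    int main(void)
    {
        printf("conservative coroutine pool cap: %u\n", conservative_pool_cap());
        return 0;
    }

The patch in this series performs an equivalent calculation in get_global_pool_hard_max_size() and falls back to UINT_MAX on non-Linux hosts.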
The coroutine pool implementation can hit the Linux vm.max_map_count
limit, causing QEMU to abort with "failed to allocate memory for stack"
or "failed to set up stack guard page" during coroutine creation.

This happens because per-thread pools can grow to tens of thousands of
coroutines. Each coroutine causes 2 virtual memory areas to be created.
Eventually vm.max_map_count is reached and memory-related syscalls fail.
The per-thread pool sizes are non-uniform and depend on past coroutine
usage in each thread, so it's possible for one thread to have a large
pool while another thread's pool is empty.

Switch to a new coroutine pool implementation with a global pool that
grows to a maximum number of coroutines and per-thread local pools that
are capped at a hardcoded small number of coroutines.

This approach does not leave large numbers of coroutines pooled in a
thread that may not use them again. In order to perform well, it
amortizes the cost of global pool accesses by working in batches of
coroutines instead of individual coroutines.

The global pool is a list. Threads donate batches of coroutines to it
when they have too many and take batches from it when they have too few:

.-----------------------------------.
| Batch 1 | Batch 2 | Batch 3 | ... | global_pool
`-----------------------------------'

Each thread has up to 2 batches of coroutines:

.-------------------.
| Batch 1 | Batch 2 | per-thread local_pool (maximum 2 batches)
`-------------------'

The goal of this change is to reduce the excessive number of pooled
coroutines that cause QEMU to abort when vm.max_map_count is reached
without losing the performance of an adequately sized coroutine pool.

Here are virtio-blk disk I/O benchmark results:

      RW BLKSIZE IODEPTH    OLD    NEW CHANGE
randread      4k       1 113725 117451  +3.3%
randread      4k       8 192968 198510  +2.9%
randread      4k      16 207138 209429  +1.1%
randread      4k      32 212399 215145  +1.3%
randread      4k      64 218319 221277  +1.4%
randread    128k       1  17587  17535  -0.3%
randread    128k       8  17614  17616  +0.0%
randread    128k      16  17608  17609  +0.0%
randread    128k      32  17552  17553  +0.0%
randread    128k      64  17484  17484  +0.0%

See files/{fio.sh,test.xml.j2} for the benchmark configuration:
https://gitlab.com/stefanha/virt-playbooks/-/tree/coroutine-pool-fix-sizing

Buglink: https://issues.redhat.com/browse/RHEL-28947
Reported-by: Sanjay Rao <srao@redhat.com>
Reported-by: Boaz Ben Shabat <bbenshab@redhat.com>
Reported-by: Joe Mario <jmario@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240318183429.1039340-1-stefanha@redhat.com>
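To make the batching scheme above concrete, here is a simplified standalone sketch in plain C (illustrative only: struct batch, pool_get() and pool_put() are invented names; the actual implementation below stores Coroutine objects in QSLISTs and uses QEMU's lock guards):

    /* Two-level pool: per-thread local batches backed by a global batch list. */
    #include <pthread.h>
    #include <stdlib.h>

    #define BATCH_MAX 128                       /* objects per batch */

    struct batch {
        struct batch *next;
        unsigned int size;
        void *objs[BATCH_MAX];
    };

    static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct batch *global_pool;           /* list of donated batches */
    static unsigned int global_pool_size;       /* objects in global_pool */
    static unsigned int global_pool_max = 1024; /* arbitrary cap for the sketch */

    static __thread struct batch *local_pool;   /* at most two batches */

    /* Take one object, pulling a whole batch from the global pool when empty. */
    static void *pool_get(void)
    {
        struct batch *b = local_pool;

        if (!b) {
            pthread_mutex_lock(&global_lock);
            b = global_pool;
            if (b) {
                global_pool = b->next;
                global_pool_size -= b->size;
            }
            pthread_mutex_unlock(&global_lock);
            if (!b) {
                return NULL;                    /* caller allocates from scratch */
            }
            b->next = NULL;
            local_pool = b;
        }

        void *obj = b->objs[--b->size];
        if (b->size == 0) {                     /* drop a batch once it is empty */
            local_pool = b->next;
            free(b);
        }
        return obj;
    }

    /* Return one object; donate a full batch once the local pool holds two. */
    static void pool_put(void *obj)
    {
        struct batch *b = local_pool;

        if (!b || b->size == BATCH_MAX) {
            if (b && b->next) {                 /* local pool full: donate the head */
                local_pool = b->next;
                pthread_mutex_lock(&global_lock);
                if (global_pool_size < global_pool_max) {
                    b->next = global_pool;
                    global_pool = b;
                    global_pool_size += b->size;
                    b = NULL;
                }
                pthread_mutex_unlock(&global_lock);
                free(b);  /* global pool full: a real pool would destroy the objects */
            }
            b = calloc(1, sizeof(*b));
            b->next = local_pool;
            local_pool = b;
        }
        b->objs[b->size++] = obj;
    }

The real implementation below additionally registers a thread-exit notifier so a terminating thread's local batches are cleaned up, and sizes the global pool against a hard maximum derived from vm.max_map_count.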
---
 util/qemu-coroutine.c | 282 +++++++++++++++++++++++++++++++++---------
 1 file changed, 223 insertions(+), 59 deletions(-)

diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
index XXXXXXX..XXXXXXX 100644
--- a/util/qemu-coroutine.c
+++ b/util/qemu-coroutine.c
@@ -XXX,XX +XXX,XX @@
29
/* Relinquish ownership of the AioContext. */
71
#include "qemu/atomic.h"
30
void aio_context_release(AioContext *ctx);
72
#include "qemu/coroutine_int.h"
31
73
#include "qemu/coroutine-tls.h"
32
+/**
74
+#include "qemu/cutils.h"
33
+ * aio_bh_schedule_oneshot_full: Allocate a new bottom half structure that will
75
#include "block/aio.h"
34
+ * run only once and as soon as possible.
76
77
-/**
78
- * The minimal batch size is always 64, coroutines from the release_pool are
79
- * reused as soon as there are 64 coroutines in it. The maximum pool size starts
80
- * with 64 and is increased on demand so that coroutines are not deleted even if
81
- * they are not immediately reused.
82
- */
83
enum {
84
- POOL_MIN_BATCH_SIZE = 64,
85
- POOL_INITIAL_MAX_SIZE = 64,
86
+ COROUTINE_POOL_BATCH_MAX_SIZE = 128,
87
};
88
89
-/** Free list to speed up creation */
90
-static QSLIST_HEAD(, Coroutine) release_pool = QSLIST_HEAD_INITIALIZER(pool);
91
-static unsigned int pool_max_size = POOL_INITIAL_MAX_SIZE;
92
-static unsigned int release_pool_size;
93
+/*
94
+ * Coroutine creation and deletion is expensive so a pool of unused coroutines
95
+ * is kept as a cache. When the pool has coroutines available, they are
96
+ * recycled instead of creating new ones from scratch. Coroutines are added to
97
+ * the pool upon termination.
35
+ *
98
+ *
36
+ * @name: A human-readable identifier for debugging purposes.
99
+ * The pool is global but each thread maintains a small local pool to avoid
100
+ * global pool contention. Threads fetch and return batches of coroutines from
101
+ * the global pool to maintain their local pool. The local pool holds up to two
102
+ * batches whereas the maximum size of the global pool is controlled by the
103
+ * qemu_coroutine_inc_pool_size() API.
104
+ *
105
+ * .-----------------------------------.
106
+ * | Batch 1 | Batch 2 | Batch 3 | ... | global_pool
107
+ * `-----------------------------------'
108
+ *
109
+ * .-------------------.
110
+ * | Batch 1 | Batch 2 | per-thread local_pool (maximum 2 batches)
111
+ * `-------------------'
37
+ */
112
+ */
38
+void aio_bh_schedule_oneshot_full(AioContext *ctx, QEMUBHFunc *cb, void *opaque,
113
+typedef struct CoroutinePoolBatch {
39
+ const char *name);
114
+ /* Batches are kept in a list */
40
+
115
+ QSLIST_ENTRY(CoroutinePoolBatch) next;
41
/**
116
42
* aio_bh_schedule_oneshot: Allocate a new bottom half structure that will run
117
-typedef QSLIST_HEAD(, Coroutine) CoroutineQSList;
43
* only once and as soon as possible.
118
-QEMU_DEFINE_STATIC_CO_TLS(CoroutineQSList, alloc_pool);
44
+ *
119
-QEMU_DEFINE_STATIC_CO_TLS(unsigned int, alloc_pool_size);
45
+ * A convenience wrapper for aio_bh_schedule_oneshot_full() that uses cb as the
120
-QEMU_DEFINE_STATIC_CO_TLS(Notifier, coroutine_pool_cleanup_notifier);
46
+ * name string.
121
+ /* This batch holds up to @COROUTINE_POOL_BATCH_MAX_SIZE coroutines */
47
*/
122
+ QSLIST_HEAD(, Coroutine) list;
48
-void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque);
123
+ unsigned int size;
49
+#define aio_bh_schedule_oneshot(ctx, cb, opaque) \
124
+} CoroutinePoolBatch;
50
+ aio_bh_schedule_oneshot_full((ctx), (cb), (opaque), (stringify(cb)))
125
51
126
-static void coroutine_pool_cleanup(Notifier *n, void *value)
52
/**
127
+typedef QSLIST_HEAD(, CoroutinePoolBatch) CoroutinePool;
53
- * aio_bh_new: Allocate a new bottom half structure.
128
+
54
+ * aio_bh_new_full: Allocate a new bottom half structure.
129
+/* Host operating system limit on number of pooled coroutines */
55
*
130
+static unsigned int global_pool_hard_max_size;
56
* Bottom halves are lightweight callbacks whose invocation is guaranteed
131
+
57
* to be wait-free, thread-safe and signal-safe. The #QEMUBH structure
132
+static QemuMutex global_pool_lock; /* protects the following variables */
58
* is opaque and must be allocated prior to its use.
133
+static CoroutinePool global_pool = QSLIST_HEAD_INITIALIZER(global_pool);
59
+ *
134
+static unsigned int global_pool_size;
60
+ * @name: A human-readable identifier for debugging purposes.
135
+static unsigned int global_pool_max_size = COROUTINE_POOL_BATCH_MAX_SIZE;
61
*/
136
+
62
-QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque);
137
+QEMU_DEFINE_STATIC_CO_TLS(CoroutinePool, local_pool);
63
+QEMUBH *aio_bh_new_full(AioContext *ctx, QEMUBHFunc *cb, void *opaque,
138
+QEMU_DEFINE_STATIC_CO_TLS(Notifier, local_pool_cleanup_notifier);
64
+ const char *name);
139
+
65
+
140
+static CoroutinePoolBatch *coroutine_pool_batch_new(void)
66
+/**
141
+{
67
+ * aio_bh_new: Allocate a new bottom half structure
142
+ CoroutinePoolBatch *batch = g_new(CoroutinePoolBatch, 1);
68
+ *
143
+
69
+ * A convenience wrapper for aio_bh_new_full() that uses the cb as the name
144
+ QSLIST_INIT(&batch->list);
70
+ * string.
145
+ batch->size = 0;
71
+ */
146
+ return batch;
72
+#define aio_bh_new(ctx, cb, opaque) \
147
+}
73
+ aio_bh_new_full((ctx), (cb), (opaque), (stringify(cb)))
148
+
74
149
+static void coroutine_pool_batch_delete(CoroutinePoolBatch *batch)
75
/**
150
{
76
* aio_notify: Force processing of pending events.
151
Coroutine *co;
77
diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
152
Coroutine *tmp;
78
index XXXXXXX..XXXXXXX 100644
153
- CoroutineQSList *alloc_pool = get_ptr_alloc_pool();
79
--- a/include/qemu/main-loop.h
154
80
+++ b/include/qemu/main-loop.h
155
- QSLIST_FOREACH_SAFE(co, alloc_pool, pool_next, tmp) {
81
@@ -XXX,XX +XXX,XX @@ void qemu_cond_timedwait_iothread(QemuCond *cond, int ms);
156
- QSLIST_REMOVE_HEAD(alloc_pool, pool_next);
82
157
+ QSLIST_FOREACH_SAFE(co, &batch->list, pool_next, tmp) {
83
void qemu_fd_register(int fd);
158
+ QSLIST_REMOVE_HEAD(&batch->list, pool_next);
84
159
qemu_coroutine_delete(co);
85
-QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque);
160
}
86
+#define qemu_bh_new(cb, opaque) \
161
+ g_free(batch);
87
+ qemu_bh_new_full((cb), (opaque), (stringify(cb)))
162
+}
88
+QEMUBH *qemu_bh_new_full(QEMUBHFunc *cb, void *opaque, const char *name);
163
+
89
void qemu_bh_schedule_idle(QEMUBH *bh);
164
+static void local_pool_cleanup(Notifier *n, void *value)
90
165
+{
91
enum {
166
+ CoroutinePool *local_pool = get_ptr_local_pool();
92
diff --git a/tests/unit/ptimer-test-stubs.c b/tests/unit/ptimer-test-stubs.c
167
+ CoroutinePoolBatch *batch;
93
index XXXXXXX..XXXXXXX 100644
168
+ CoroutinePoolBatch *tmp;
94
--- a/tests/unit/ptimer-test-stubs.c
169
+
95
+++ b/tests/unit/ptimer-test-stubs.c
170
+ QSLIST_FOREACH_SAFE(batch, local_pool, next, tmp) {
96
@@ -XXX,XX +XXX,XX @@ int64_t qemu_clock_deadline_ns_all(QEMUClockType type, int attr_mask)
171
+ QSLIST_REMOVE_HEAD(local_pool, next);
97
return deadline;
172
+ coroutine_pool_batch_delete(batch);
173
+ }
174
+}
175
+
176
+/* Ensure the atexit notifier is registered */
177
+static void local_pool_cleanup_init_once(void)
178
+{
179
+ Notifier *notifier = get_ptr_local_pool_cleanup_notifier();
180
+ if (!notifier->notify) {
181
+ notifier->notify = local_pool_cleanup;
182
+ qemu_thread_atexit_add(notifier);
183
+ }
184
+}
185
+
186
+/* Helper to get the next unused coroutine from the local pool */
187
+static Coroutine *coroutine_pool_get_local(void)
188
+{
189
+ CoroutinePool *local_pool = get_ptr_local_pool();
190
+ CoroutinePoolBatch *batch = QSLIST_FIRST(local_pool);
191
+ Coroutine *co;
192
+
193
+ if (unlikely(!batch)) {
194
+ return NULL;
195
+ }
196
+
197
+ co = QSLIST_FIRST(&batch->list);
198
+ QSLIST_REMOVE_HEAD(&batch->list, pool_next);
199
+ batch->size--;
200
+
201
+ if (batch->size == 0) {
202
+ QSLIST_REMOVE_HEAD(local_pool, next);
203
+ coroutine_pool_batch_delete(batch);
204
+ }
205
+ return co;
206
+}
207
+
208
+/* Get the next batch from the global pool */
209
+static void coroutine_pool_refill_local(void)
210
+{
211
+ CoroutinePool *local_pool = get_ptr_local_pool();
212
+ CoroutinePoolBatch *batch;
213
+
214
+ WITH_QEMU_LOCK_GUARD(&global_pool_lock) {
215
+ batch = QSLIST_FIRST(&global_pool);
216
+
217
+ if (batch) {
218
+ QSLIST_REMOVE_HEAD(&global_pool, next);
219
+ global_pool_size -= batch->size;
220
+ }
221
+ }
222
+
223
+ if (batch) {
224
+ QSLIST_INSERT_HEAD(local_pool, batch, next);
225
+ local_pool_cleanup_init_once();
226
+ }
227
+}
228
+
229
+/* Add a batch of coroutines to the global pool */
230
+static void coroutine_pool_put_global(CoroutinePoolBatch *batch)
231
+{
232
+ WITH_QEMU_LOCK_GUARD(&global_pool_lock) {
233
+ unsigned int max = MIN(global_pool_max_size,
234
+ global_pool_hard_max_size);
235
+
236
+ if (global_pool_size < max) {
237
+ QSLIST_INSERT_HEAD(&global_pool, batch, next);
238
+
239
+ /* Overshooting the max pool size is allowed */
240
+ global_pool_size += batch->size;
241
+ return;
242
+ }
243
+ }
244
+
245
+ /* The global pool was full, so throw away this batch */
246
+ coroutine_pool_batch_delete(batch);
247
+}
248
+
249
+/* Get the next unused coroutine from the pool or return NULL */
250
+static Coroutine *coroutine_pool_get(void)
251
+{
252
+ Coroutine *co;
253
+
254
+ co = coroutine_pool_get_local();
255
+ if (!co) {
256
+ coroutine_pool_refill_local();
257
+ co = coroutine_pool_get_local();
258
+ }
259
+ return co;
260
+}
261
+
262
+static void coroutine_pool_put(Coroutine *co)
263
+{
264
+ CoroutinePool *local_pool = get_ptr_local_pool();
265
+ CoroutinePoolBatch *batch = QSLIST_FIRST(local_pool);
266
+
267
+ if (unlikely(!batch)) {
268
+ batch = coroutine_pool_batch_new();
269
+ QSLIST_INSERT_HEAD(local_pool, batch, next);
270
+ local_pool_cleanup_init_once();
271
+ }
272
+
273
+ if (unlikely(batch->size >= COROUTINE_POOL_BATCH_MAX_SIZE)) {
274
+ CoroutinePoolBatch *next = QSLIST_NEXT(batch, next);
275
+
276
+ /* Is the local pool full? */
277
+ if (next) {
278
+ QSLIST_REMOVE_HEAD(local_pool, next);
279
+ coroutine_pool_put_global(batch);
280
+ }
281
+
282
+ batch = coroutine_pool_batch_new();
283
+ QSLIST_INSERT_HEAD(local_pool, batch, next);
284
+ }
285
+
286
+ QSLIST_INSERT_HEAD(&batch->list, co, pool_next);
287
+ batch->size++;
98
}
288
}
99
289
100
-QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
290
Coroutine *qemu_coroutine_create(CoroutineEntry *entry, void *opaque)
101
+QEMUBH *qemu_bh_new_full(QEMUBHFunc *cb, void *opaque, const char *name)
291
@@ -XXX,XX +XXX,XX @@ Coroutine *qemu_coroutine_create(CoroutineEntry *entry, void *opaque)
292
Coroutine *co = NULL;
293
294
if (IS_ENABLED(CONFIG_COROUTINE_POOL)) {
295
- CoroutineQSList *alloc_pool = get_ptr_alloc_pool();
296
-
297
- co = QSLIST_FIRST(alloc_pool);
298
- if (!co) {
299
- if (release_pool_size > POOL_MIN_BATCH_SIZE) {
300
- /* Slow path; a good place to register the destructor, too. */
301
- Notifier *notifier = get_ptr_coroutine_pool_cleanup_notifier();
302
- if (!notifier->notify) {
303
- notifier->notify = coroutine_pool_cleanup;
304
- qemu_thread_atexit_add(notifier);
305
- }
306
-
307
- /* This is not exact; there could be a little skew between
308
- * release_pool_size and the actual size of release_pool. But
309
- * it is just a heuristic, it does not need to be perfect.
310
- */
311
- set_alloc_pool_size(qatomic_xchg(&release_pool_size, 0));
312
- QSLIST_MOVE_ATOMIC(alloc_pool, &release_pool);
313
- co = QSLIST_FIRST(alloc_pool);
314
- }
315
- }
316
- if (co) {
317
- QSLIST_REMOVE_HEAD(alloc_pool, pool_next);
318
- set_alloc_pool_size(get_alloc_pool_size() - 1);
319
- }
320
+ co = coroutine_pool_get();
321
}
322
323
if (!co) {
324
@@ -XXX,XX +XXX,XX @@ static void coroutine_delete(Coroutine *co)
325
co->caller = NULL;
326
327
if (IS_ENABLED(CONFIG_COROUTINE_POOL)) {
328
- if (release_pool_size < qatomic_read(&pool_max_size) * 2) {
329
- QSLIST_INSERT_HEAD_ATOMIC(&release_pool, co, pool_next);
330
- qatomic_inc(&release_pool_size);
331
- return;
332
- }
333
- if (get_alloc_pool_size() < qatomic_read(&pool_max_size)) {
334
- QSLIST_INSERT_HEAD(get_ptr_alloc_pool(), co, pool_next);
335
- set_alloc_pool_size(get_alloc_pool_size() + 1);
336
- return;
337
- }
338
+ coroutine_pool_put(co);
339
+ } else {
340
+ qemu_coroutine_delete(co);
341
}
342
-
343
- qemu_coroutine_delete(co);
344
}
345
346
void qemu_aio_coroutine_enter(AioContext *ctx, Coroutine *co)
347
@@ -XXX,XX +XXX,XX @@ AioContext *qemu_coroutine_get_aio_context(Coroutine *co)
348
349
void qemu_coroutine_inc_pool_size(unsigned int additional_pool_size)
102
{
350
{
103
QEMUBH *bh = g_new(QEMUBH, 1);
351
- qatomic_add(&pool_max_size, additional_pool_size);
104
352
+ QEMU_LOCK_GUARD(&global_pool_lock);
105
diff --git a/util/async.c b/util/async.c
353
+ global_pool_max_size += additional_pool_size;
106
index XXXXXXX..XXXXXXX 100644
107
--- a/util/async.c
108
+++ b/util/async.c
109
@@ -XXX,XX +XXX,XX @@ enum {
110
111
struct QEMUBH {
112
AioContext *ctx;
113
+ const char *name;
114
QEMUBHFunc *cb;
115
void *opaque;
116
QSLIST_ENTRY(QEMUBH) next;
117
@@ -XXX,XX +XXX,XX @@ static QEMUBH *aio_bh_dequeue(BHList *head, unsigned *flags)
118
return bh;
119
}
354
}
120
355
121
-void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
356
void qemu_coroutine_dec_pool_size(unsigned int removing_pool_size)
122
+void aio_bh_schedule_oneshot_full(AioContext *ctx, QEMUBHFunc *cb,
123
+ void *opaque, const char *name)
124
{
357
{
125
QEMUBH *bh;
358
- qatomic_sub(&pool_max_size, removing_pool_size);
126
bh = g_new(QEMUBH, 1);
359
+ QEMU_LOCK_GUARD(&global_pool_lock);
127
@@ -XXX,XX +XXX,XX @@ void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
360
+ global_pool_max_size -= removing_pool_size;
128
.ctx = ctx,
361
+}
129
.cb = cb,
362
+
130
.opaque = opaque,
363
+static unsigned int get_global_pool_hard_max_size(void)
131
+ .name = name,
364
+{
132
};
365
+#ifdef __linux__
133
aio_bh_enqueue(bh, BH_SCHEDULED | BH_ONESHOT);
366
+ g_autofree char *contents = NULL;
367
+ int max_map_count;
368
+
369
+ /*
370
+ * Linux processes can have up to max_map_count virtual memory areas
371
+ * (VMAs). mmap(2), mprotect(2), etc fail with ENOMEM beyond this limit. We
372
+ * must limit the coroutine pool to a safe size to avoid running out of
373
+ * VMAs.
374
+ */
375
+ if (g_file_get_contents("/proc/sys/vm/max_map_count", &contents, NULL,
376
+ NULL) &&
377
+ qemu_strtoi(contents, NULL, 10, &max_map_count) == 0) {
378
+ /*
379
+ * This is a conservative upper bound that avoids exceeding
380
+ * max_map_count. Leave half for non-coroutine users like library
381
+ * dependencies, vhost-user, etc. Each coroutine takes up 2 VMAs so
382
+ * halve the amount again.
383
+ */
384
+ return max_map_count / 4;
385
+ }
386
+#endif
387
+
388
+ return UINT_MAX;
389
+}
390
+
391
+static void __attribute__((constructor)) qemu_coroutine_init(void)
392
+{
393
+ qemu_mutex_init(&global_pool_lock);
394
+ global_pool_hard_max_size = get_global_pool_hard_max_size();
134
}
395
}
135
136
-QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
137
+QEMUBH *aio_bh_new_full(AioContext *ctx, QEMUBHFunc *cb, void *opaque,
138
+ const char *name)
139
{
140
QEMUBH *bh;
141
bh = g_new(QEMUBH, 1);
142
@@ -XXX,XX +XXX,XX @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
143
.ctx = ctx,
144
.cb = cb,
145
.opaque = opaque,
146
+ .name = name,
147
};
148
return bh;
149
}
150
diff --git a/util/main-loop.c b/util/main-loop.c
151
index XXXXXXX..XXXXXXX 100644
152
--- a/util/main-loop.c
153
+++ b/util/main-loop.c
154
@@ -XXX,XX +XXX,XX @@ void main_loop_wait(int nonblocking)
155
156
/* Functions to operate on the main QEMU AioContext. */
157
158
-QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
159
+QEMUBH *qemu_bh_new_full(QEMUBHFunc *cb, void *opaque, const char *name)
160
{
161
- return aio_bh_new(qemu_aio_context, cb, opaque);
162
+ return aio_bh_new_full(qemu_aio_context, cb, opaque, name);
163
}
164
165
/*
--
2.44.0
Deleted patch
1
BHs must be deleted before the AioContext is finalized. If not, it's a
2
bug and probably indicates that some part of the program still expects
3
the BH to run in the future. That can lead to memory leaks, inconsistent
4
state, or just hangs.
5
1
6
Unfortunately the assert(flags & BH_DELETED) call in aio_ctx_finalize()
7
is difficult to debug because the assertion failure contains no
8
information about the BH!
9
10
Use the QEMUBH name field added in the previous patch to show a useful
11
error when a leaked BH is detected.
12
13
Suggested-by: Eric Ernst <eric.g.ernst@gmail.com>
14
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
15
Message-Id: <20210414200247.917496-3-stefanha@redhat.com>
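For context, a minimal sketch of the BH lifecycle that this check enforces, using the existing qemu_bh_new()/qemu_bh_schedule()/qemu_bh_delete() API (the MyDevice type and callback are hypothetical):

    #include "qemu/osdep.h"
    #include "qemu/main-loop.h"

    typedef struct MyDevice {
        QEMUBH *bh;
    } MyDevice;

    static void my_device_bh_cb(void *opaque)
    {
        /* deferred work, runs in the main loop */
    }

    static void my_device_realize(MyDevice *dev)
    {
        dev->bh = qemu_bh_new(my_device_bh_cb, dev);
        qemu_bh_schedule(dev->bh);
    }

    static void my_device_unrealize(MyDevice *dev)
    {
        /* Must happen before the owning AioContext is finalized; otherwise
         * aio_ctx_finalize() now aborts and prints the BH's name. */
        qemu_bh_delete(dev->bh);
        dev->bh = NULL;
    }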
16
---
17
util/async.c | 16 ++++++++++++++--
18
1 file changed, 14 insertions(+), 2 deletions(-)
19
20
diff --git a/util/async.c b/util/async.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/util/async.c
23
+++ b/util/async.c
24
@@ -XXX,XX +XXX,XX @@ aio_ctx_finalize(GSource *source)
25
assert(QSIMPLEQ_EMPTY(&ctx->bh_slice_list));
26
27
while ((bh = aio_bh_dequeue(&ctx->bh_list, &flags))) {
28
- /* qemu_bh_delete() must have been called on BHs in this AioContext */
29
- assert(flags & BH_DELETED);
30
+ /*
31
+ * qemu_bh_delete() must have been called on BHs in this AioContext. In
32
+ * many cases memory leaks, hangs, or inconsistent state occur when a
33
+ * BH is leaked because something still expects it to run.
34
+ *
35
+ * If you hit this, fix the lifecycle of the BH so that
36
+ * qemu_bh_delete() and any associated cleanup is called before the
37
+ * AioContext is finalized.
38
+ */
39
+ if (unlikely(!(flags & BH_DELETED))) {
40
+ fprintf(stderr, "%s: BH '%s' leaked, aborting...\n",
41
+ __func__, bh->name);
42
+ abort();
43
+ }
44
45
g_free(bh);
46
}
47
--
48
2.31.1
Deleted patch
1
From: Akihiko Odaki <akihiko.odaki@gmail.com>
2
1
3
This commit introduces "punch hole" operation and optimizes transfer
4
block size for macOS.
5
6
Thanks to Konstantin Nazarov for detailed analysis of a flaw in an
7
old version of this change:
8
https://gist.github.com/akihikodaki/87df4149e7ca87f18dc56807ec5a1bc5#gistcomment-3654667
9
10
Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
11
Message-id: 20210705130458.97642-1-akihiko.odaki@gmail.com
12
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
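For reference, a standalone sketch of the macOS hole-punching call this patch adopts (punch_hole() is an invented wrapper around the F_PUNCHHOLE fcntl shown in the diff below):

    /* macOS only: F_PUNCHHOLE and fpunchhole_t come from <fcntl.h>. */
    #include <sys/types.h>
    #include <fcntl.h>
    #include <errno.h>

    static int punch_hole(int fd, off_t offset, off_t length)
    {
        fpunchhole_t arg = {
            .fp_flags = 0,
            .reserved = 0,
            .fp_offset = offset,
            .fp_length = length,
        };

        if (fcntl(fd, F_PUNCHHOLE, &arg) == -1) {
            /* ENODEV: the filesystem cannot punch holes, report "not supported" */
            return errno == ENODEV ? -ENOTSUP : -errno;
        }
        return 0;
    }

In the patch itself the same call sits behind #elif defined(__APPLE__) && (__MACH__) in handle_aiocb_discard().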
13
---
14
block/file-posix.c | 27 +++++++++++++++++++++++++--
15
1 file changed, 25 insertions(+), 2 deletions(-)
16
17
diff --git a/block/file-posix.c b/block/file-posix.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/block/file-posix.c
20
+++ b/block/file-posix.c
21
@@ -XXX,XX +XXX,XX @@
22
#if defined(HAVE_HOST_BLOCK_DEVICE)
23
#include <paths.h>
24
#include <sys/param.h>
25
+#include <sys/mount.h>
26
#include <IOKit/IOKitLib.h>
27
#include <IOKit/IOBSD.h>
28
#include <IOKit/storage/IOMediaBSDClient.h>
29
@@ -XXX,XX +XXX,XX @@ static void raw_refresh_limits(BlockDriverState *bs, Error **errp)
30
return;
31
}
32
33
+#if defined(__APPLE__) && (__MACH__)
34
+ struct statfs buf;
35
+
36
+ if (!fstatfs(s->fd, &buf)) {
37
+ bs->bl.opt_transfer = buf.f_iosize;
38
+ bs->bl.pdiscard_alignment = buf.f_bsize;
39
+ }
40
+#endif
41
+
42
if (bs->sg || S_ISBLK(st.st_mode)) {
43
int ret = hdev_get_max_hw_transfer(s->fd, &st);
44
45
@@ -XXX,XX +XXX,XX @@ out:
46
}
47
}
48
49
+#if defined(CONFIG_FALLOCATE) || defined(BLKZEROOUT) || defined(BLKDISCARD)
50
static int translate_err(int err)
51
{
52
if (err == -ENODEV || err == -ENOSYS || err == -EOPNOTSUPP ||
53
@@ -XXX,XX +XXX,XX @@ static int translate_err(int err)
54
}
55
return err;
56
}
57
+#endif
58
59
#ifdef CONFIG_FALLOCATE
60
static int do_fallocate(int fd, int mode, off_t offset, off_t len)
61
@@ -XXX,XX +XXX,XX @@ static int handle_aiocb_discard(void *opaque)
62
}
63
} while (errno == EINTR);
64
65
- ret = -errno;
66
+ ret = translate_err(-errno);
67
#endif
68
} else {
69
#ifdef CONFIG_FALLOCATE_PUNCH_HOLE
70
ret = do_fallocate(s->fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
71
aiocb->aio_offset, aiocb->aio_nbytes);
72
+ ret = translate_err(-errno);
73
+#elif defined(__APPLE__) && (__MACH__)
74
+ fpunchhole_t fpunchhole;
75
+ fpunchhole.fp_flags = 0;
76
+ fpunchhole.reserved = 0;
77
+ fpunchhole.fp_offset = aiocb->aio_offset;
78
+ fpunchhole.fp_length = aiocb->aio_nbytes;
79
+ if (fcntl(s->fd, F_PUNCHHOLE, &fpunchhole) == -1) {
80
+ ret = errno == ENODEV ? -ENOTSUP : -errno;
81
+ } else {
82
+ ret = 0;
83
+ }
84
#endif
85
}
86
87
- ret = translate_err(ret);
88
if (ret == -ENOTSUP) {
89
s->has_discard = false;
90
}
91
--
92
2.31.1
Deleted patch
1
From: Akihiko Odaki <akihiko.odaki@gmail.com>
2
1
3
backend_defaults property allow users to control if default block
4
properties should be decided with backend information.
5
6
If it is off, any backend information will be discarded, which is
7
suitable if you plan to perform live migration to a different disk backend.
8
9
If it is on, a block device may utilize backend information more
10
aggressively.
11
12
By default, it is auto, which uses backend information for block
13
sizes and ignores the others, which is consistent with the older
14
versions.
15
16
Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
17
Message-id: 20210705130458.97642-2-akihiko.odaki@gmail.com
18
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
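As a usage illustration (the disk image path and the virtio-blk device are placeholders, not taken from this patch):

    qemu-system-x86_64 \
        -drive if=none,id=drive0,file=disk.qcow2 \
        -device virtio-blk-pci,drive=drive0,backend_defaults=off

With backend_defaults=off the device ignores backend-probed values entirely, which matches the live-migration use case described above; with backend_defaults=on it may also adopt the backend's optimal transfer size and discard granularity; the default auto keeps the older behaviour of using only the probed block sizes.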
19
---
20
include/hw/block/block.h | 3 +++
21
hw/block/block.c | 42 ++++++++++++++++++++++++++++++++++----
22
tests/qemu-iotests/172.out | 38 ++++++++++++++++++++++++++++++++++
23
3 files changed, 79 insertions(+), 4 deletions(-)
24
25
diff --git a/include/hw/block/block.h b/include/hw/block/block.h
26
index XXXXXXX..XXXXXXX 100644
27
--- a/include/hw/block/block.h
28
+++ b/include/hw/block/block.h
29
@@ -XXX,XX +XXX,XX @@
30
31
typedef struct BlockConf {
32
BlockBackend *blk;
33
+ OnOffAuto backend_defaults;
34
uint32_t physical_block_size;
35
uint32_t logical_block_size;
36
uint32_t min_io_size;
37
@@ -XXX,XX +XXX,XX @@ static inline unsigned int get_physical_block_exp(BlockConf *conf)
38
}
39
40
#define DEFINE_BLOCK_PROPERTIES_BASE(_state, _conf) \
41
+ DEFINE_PROP_ON_OFF_AUTO("backend_defaults", _state, \
42
+ _conf.backend_defaults, ON_OFF_AUTO_AUTO), \
43
DEFINE_PROP_BLOCKSIZE("logical_block_size", _state, \
44
_conf.logical_block_size), \
45
DEFINE_PROP_BLOCKSIZE("physical_block_size", _state, \
46
diff --git a/hw/block/block.c b/hw/block/block.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/block/block.c
49
+++ b/hw/block/block.c
50
@@ -XXX,XX +XXX,XX @@ bool blkconf_blocksizes(BlockConf *conf, Error **errp)
51
{
52
BlockBackend *blk = conf->blk;
53
BlockSizes blocksizes;
54
- int backend_ret;
55
+ BlockDriverState *bs;
56
+ bool use_blocksizes;
57
+ bool use_bs;
58
+
59
+ switch (conf->backend_defaults) {
60
+ case ON_OFF_AUTO_AUTO:
61
+ use_blocksizes = !blk_probe_blocksizes(blk, &blocksizes);
62
+ use_bs = false;
63
+ break;
64
+
65
+ case ON_OFF_AUTO_ON:
66
+ use_blocksizes = !blk_probe_blocksizes(blk, &blocksizes);
67
+ bs = blk_bs(blk);
68
+ use_bs = bs;
69
+ break;
70
+
71
+ case ON_OFF_AUTO_OFF:
72
+ use_blocksizes = false;
73
+ use_bs = false;
74
+ break;
75
+
76
+ default:
77
+ abort();
78
+ }
79
80
- backend_ret = blk_probe_blocksizes(blk, &blocksizes);
81
/* fill in detected values if they are not defined via qemu command line */
82
if (!conf->physical_block_size) {
83
- if (!backend_ret) {
84
+ if (use_blocksizes) {
85
conf->physical_block_size = blocksizes.phys;
86
} else {
87
conf->physical_block_size = BDRV_SECTOR_SIZE;
88
}
89
}
90
if (!conf->logical_block_size) {
91
- if (!backend_ret) {
92
+ if (use_blocksizes) {
93
conf->logical_block_size = blocksizes.log;
94
} else {
95
conf->logical_block_size = BDRV_SECTOR_SIZE;
96
}
97
}
98
+ if (use_bs) {
99
+ if (!conf->opt_io_size) {
100
+ conf->opt_io_size = bs->bl.opt_transfer;
101
+ }
102
+ if (conf->discard_granularity == -1) {
103
+ if (bs->bl.pdiscard_alignment) {
104
+ conf->discard_granularity = bs->bl.pdiscard_alignment;
105
+ } else if (bs->bl.request_alignment != 1) {
106
+ conf->discard_granularity = bs->bl.request_alignment;
107
+ }
108
+ }
109
+ }
110
111
if (conf->logical_block_size > conf->physical_block_size) {
112
error_setg(errp,
113
diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
114
index XXXXXXX..XXXXXXX 100644
115
--- a/tests/qemu-iotests/172.out
116
+++ b/tests/qemu-iotests/172.out
117
@@ -XXX,XX +XXX,XX @@ Testing:
118
dev: floppy, id ""
119
unit = 0 (0x0)
120
drive = "floppy0"
121
+ backend_defaults = "auto"
122
logical_block_size = 512 (512 B)
123
physical_block_size = 512 (512 B)
124
min_io_size = 0 (0 B)
125
@@ -XXX,XX +XXX,XX @@ Testing: -fda TEST_DIR/t.qcow2
126
dev: floppy, id ""
127
unit = 0 (0x0)
128
drive = "floppy0"
129
+ backend_defaults = "auto"
130
logical_block_size = 512 (512 B)
131
physical_block_size = 512 (512 B)
132
min_io_size = 0 (0 B)
133
@@ -XXX,XX +XXX,XX @@ Testing: -fdb TEST_DIR/t.qcow2
134
dev: floppy, id ""
135
unit = 1 (0x1)
136
drive = "floppy1"
137
+ backend_defaults = "auto"
138
logical_block_size = 512 (512 B)
139
physical_block_size = 512 (512 B)
140
min_io_size = 0 (0 B)
141
@@ -XXX,XX +XXX,XX @@ Testing: -fdb TEST_DIR/t.qcow2
142
dev: floppy, id ""
143
unit = 0 (0x0)
144
drive = "floppy0"
145
+ backend_defaults = "auto"
146
logical_block_size = 512 (512 B)
147
physical_block_size = 512 (512 B)
148
min_io_size = 0 (0 B)
149
@@ -XXX,XX +XXX,XX @@ Testing: -fda TEST_DIR/t.qcow2 -fdb TEST_DIR/t.qcow2.2
150
dev: floppy, id ""
151
unit = 1 (0x1)
152
drive = "floppy1"
153
+ backend_defaults = "auto"
154
logical_block_size = 512 (512 B)
155
physical_block_size = 512 (512 B)
156
min_io_size = 0 (0 B)
157
@@ -XXX,XX +XXX,XX @@ Testing: -fda TEST_DIR/t.qcow2 -fdb TEST_DIR/t.qcow2.2
158
dev: floppy, id ""
159
unit = 0 (0x0)
160
drive = "floppy0"
161
+ backend_defaults = "auto"
162
logical_block_size = 512 (512 B)
163
physical_block_size = 512 (512 B)
164
min_io_size = 0 (0 B)
165
@@ -XXX,XX +XXX,XX @@ Testing: -fdb
166
dev: floppy, id ""
167
unit = 1 (0x1)
168
drive = "floppy1"
169
+ backend_defaults = "auto"
170
logical_block_size = 512 (512 B)
171
physical_block_size = 512 (512 B)
172
min_io_size = 0 (0 B)
173
@@ -XXX,XX +XXX,XX @@ Testing: -fdb
174
dev: floppy, id ""
175
unit = 0 (0x0)
176
drive = "floppy0"
177
+ backend_defaults = "auto"
178
logical_block_size = 512 (512 B)
179
physical_block_size = 512 (512 B)
180
min_io_size = 0 (0 B)
181
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2
182
dev: floppy, id ""
183
unit = 0 (0x0)
184
drive = "floppy0"
185
+ backend_defaults = "auto"
186
logical_block_size = 512 (512 B)
187
physical_block_size = 512 (512 B)
188
min_io_size = 0 (0 B)
189
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2,index=1
190
dev: floppy, id ""
191
unit = 1 (0x1)
192
drive = "floppy1"
193
+ backend_defaults = "auto"
194
logical_block_size = 512 (512 B)
195
physical_block_size = 512 (512 B)
196
min_io_size = 0 (0 B)
197
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2,index=1
198
dev: floppy, id ""
199
unit = 0 (0x0)
200
drive = "floppy0"
201
+ backend_defaults = "auto"
202
logical_block_size = 512 (512 B)
203
physical_block_size = 512 (512 B)
204
min_io_size = 0 (0 B)
205
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=floppy,file=TEST_DIR/t
206
dev: floppy, id ""
207
unit = 1 (0x1)
208
drive = "floppy1"
209
+ backend_defaults = "auto"
210
logical_block_size = 512 (512 B)
211
physical_block_size = 512 (512 B)
212
min_io_size = 0 (0 B)
213
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=floppy,file=TEST_DIR/t
214
dev: floppy, id ""
215
unit = 0 (0x0)
216
drive = "floppy0"
217
+ backend_defaults = "auto"
218
logical_block_size = 512 (512 B)
219
physical_block_size = 512 (512 B)
220
min_io_size = 0 (0 B)
221
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0
222
dev: floppy, id ""
223
unit = 0 (0x0)
224
drive = "none0"
225
+ backend_defaults = "auto"
226
logical_block_size = 512 (512 B)
227
physical_block_size = 512 (512 B)
228
min_io_size = 0 (0 B)
229
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,unit=1
230
dev: floppy, id ""
231
unit = 1 (0x1)
232
drive = "none0"
233
+ backend_defaults = "auto"
234
logical_block_size = 512 (512 B)
235
physical_block_size = 512 (512 B)
236
min_io_size = 0 (0 B)
237
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
238
dev: floppy, id ""
239
unit = 1 (0x1)
240
drive = "none1"
241
+ backend_defaults = "auto"
242
logical_block_size = 512 (512 B)
243
physical_block_size = 512 (512 B)
244
min_io_size = 0 (0 B)
245
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
246
dev: floppy, id ""
247
unit = 0 (0x0)
248
drive = "none0"
249
+ backend_defaults = "auto"
250
logical_block_size = 512 (512 B)
251
physical_block_size = 512 (512 B)
252
min_io_size = 0 (0 B)
253
@@ -XXX,XX +XXX,XX @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
254
dev: floppy, id ""
255
unit = 1 (0x1)
256
drive = "none0"
257
+ backend_defaults = "auto"
258
logical_block_size = 512 (512 B)
259
physical_block_size = 512 (512 B)
260
min_io_size = 0 (0 B)
261
@@ -XXX,XX +XXX,XX @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
262
dev: floppy, id ""
263
unit = 0 (0x0)
264
drive = "floppy0"
265
+ backend_defaults = "auto"
266
logical_block_size = 512 (512 B)
267
physical_block_size = 512 (512 B)
268
min_io_size = 0 (0 B)
269
@@ -XXX,XX +XXX,XX @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
270
dev: floppy, id ""
271
unit = 1 (0x1)
272
drive = "none0"
273
+ backend_defaults = "auto"
274
logical_block_size = 512 (512 B)
275
physical_block_size = 512 (512 B)
276
min_io_size = 0 (0 B)
277
@@ -XXX,XX +XXX,XX @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
278
dev: floppy, id ""
279
unit = 0 (0x0)
280
drive = "floppy0"
281
+ backend_defaults = "auto"
282
logical_block_size = 512 (512 B)
283
physical_block_size = 512 (512 B)
284
min_io_size = 0 (0 B)
285
@@ -XXX,XX +XXX,XX @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
286
dev: floppy, id ""
287
unit = 0 (0x0)
288
drive = "none0"
289
+ backend_defaults = "auto"
290
logical_block_size = 512 (512 B)
291
physical_block_size = 512 (512 B)
292
min_io_size = 0 (0 B)
293
@@ -XXX,XX +XXX,XX @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
294
dev: floppy, id ""
295
unit = 1 (0x1)
296
drive = "floppy1"
297
+ backend_defaults = "auto"
298
logical_block_size = 512 (512 B)
299
physical_block_size = 512 (512 B)
300
min_io_size = 0 (0 B)
301
@@ -XXX,XX +XXX,XX @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
302
dev: floppy, id ""
303
unit = 0 (0x0)
304
drive = "none0"
305
+ backend_defaults = "auto"
306
logical_block_size = 512 (512 B)
307
physical_block_size = 512 (512 B)
308
min_io_size = 0 (0 B)
309
@@ -XXX,XX +XXX,XX @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
310
dev: floppy, id ""
311
unit = 1 (0x1)
312
drive = "floppy1"
313
+ backend_defaults = "auto"
314
logical_block_size = 512 (512 B)
315
physical_block_size = 512 (512 B)
316
min_io_size = 0 (0 B)
317
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
318
dev: floppy, id ""
319
unit = 1 (0x1)
320
drive = "none0"
321
+ backend_defaults = "auto"
322
logical_block_size = 512 (512 B)
323
physical_block_size = 512 (512 B)
324
min_io_size = 0 (0 B)
325
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
326
dev: floppy, id ""
327
unit = 0 (0x0)
328
drive = "floppy0"
329
+ backend_defaults = "auto"
330
logical_block_size = 512 (512 B)
331
physical_block_size = 512 (512 B)
332
min_io_size = 0 (0 B)
333
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
334
dev: floppy, id ""
335
unit = 1 (0x1)
336
drive = "none0"
337
+ backend_defaults = "auto"
338
logical_block_size = 512 (512 B)
339
physical_block_size = 512 (512 B)
340
min_io_size = 0 (0 B)
341
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
342
dev: floppy, id ""
343
unit = 0 (0x0)
344
drive = "floppy0"
345
+ backend_defaults = "auto"
346
logical_block_size = 512 (512 B)
347
physical_block_size = 512 (512 B)
348
min_io_size = 0 (0 B)
349
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -global floppy.drive=none0 -device
350
dev: floppy, id ""
351
unit = 0 (0x0)
352
drive = "none0"
353
+ backend_defaults = "auto"
354
logical_block_size = 512 (512 B)
355
physical_block_size = 512 (512 B)
356
min_io_size = 0 (0 B)
357
@@ -XXX,XX +XXX,XX @@ Testing: -device floppy
358
dev: floppy, id ""
359
unit = 0 (0x0)
360
drive = ""
361
+ backend_defaults = "auto"
362
logical_block_size = 512 (512 B)
363
physical_block_size = 512 (512 B)
364
min_io_size = 0 (0 B)
365
@@ -XXX,XX +XXX,XX @@ Testing: -device floppy,drive-type=120
366
dev: floppy, id ""
367
unit = 0 (0x0)
368
drive = ""
369
+ backend_defaults = "auto"
370
logical_block_size = 512 (512 B)
371
physical_block_size = 512 (512 B)
372
min_io_size = 0 (0 B)
373
@@ -XXX,XX +XXX,XX @@ Testing: -device floppy,drive-type=144
374
dev: floppy, id ""
375
unit = 0 (0x0)
376
drive = ""
377
+ backend_defaults = "auto"
378
logical_block_size = 512 (512 B)
379
physical_block_size = 512 (512 B)
380
min_io_size = 0 (0 B)
381
@@ -XXX,XX +XXX,XX @@ Testing: -device floppy,drive-type=288
382
dev: floppy, id ""
383
unit = 0 (0x0)
384
drive = ""
385
+ backend_defaults = "auto"
386
logical_block_size = 512 (512 B)
387
physical_block_size = 512 (512 B)
388
min_io_size = 0 (0 B)
389
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
390
dev: floppy, id ""
391
unit = 0 (0x0)
392
drive = "none0"
393
+ backend_defaults = "auto"
394
logical_block_size = 512 (512 B)
395
physical_block_size = 512 (512 B)
396
min_io_size = 0 (0 B)
397
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
398
dev: floppy, id ""
399
unit = 0 (0x0)
400
drive = "none0"
401
+ backend_defaults = "auto"
402
logical_block_size = 512 (512 B)
403
physical_block_size = 512 (512 B)
404
min_io_size = 0 (0 B)
405
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical
406
dev: floppy, id ""
407
unit = 0 (0x0)
408
drive = "none0"
409
+ backend_defaults = "auto"
410
logical_block_size = 512 (512 B)
411
physical_block_size = 512 (512 B)
412
min_io_size = 0 (0 B)
413
@@ -XXX,XX +XXX,XX @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
414
dev: floppy, id ""
415
unit = 0 (0x0)
416
drive = "none0"
417
+ backend_defaults = "auto"
418
logical_block_size = 512 (512 B)
419
physical_block_size = 512 (512 B)
420
min_io_size = 0 (0 B)
421
--
422
2.31.1
Deleted patch
1
From: Akihiko Odaki <akihiko.odaki@gmail.com>
2
1
3
Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
4
Message-id: 20210705130458.97642-3-akihiko.odaki@gmail.com
5
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
6
---
7
block/io.c | 2 ++
8
1 file changed, 2 insertions(+)
9
10
diff --git a/block/io.c b/block/io.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/block/io.c
13
+++ b/block/io.c
14
@@ -XXX,XX +XXX,XX @@ void bdrv_parent_drained_begin_single(BdrvChild *c, bool poll)
15
16
static void bdrv_merge_limits(BlockLimits *dst, const BlockLimits *src)
17
{
18
+ dst->pdiscard_alignment = MAX(dst->pdiscard_alignment,
19
+ src->pdiscard_alignment);
20
dst->opt_transfer = MAX(dst->opt_transfer, src->opt_transfer);
21
dst->max_transfer = MIN_NON_ZERO(dst->max_transfer, src->max_transfer);
22
dst->max_hw_transfer = MIN_NON_ZERO(dst->max_hw_transfer,
23
--
24
2.31.1