Old version of the cover letter (v1 of the pull request, 2023-01-23):

The following changes since commit 00b1faea41d283e931256aa78aa975a369ec3ae6:

  Merge tag 'pull-target-arm-20230123' of https://git.linaro.org/people/pmaydell/qemu-arm into staging (2023-01-23 13:40:28 +0000)

are available in the Git repository at:

  https://gitlab.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 4f01a9bb0461e8c11ee0c94d90a504cb7d580a85:

  block/blkio: Fix inclusion of required headers (2023-01-23 15:02:07 -0500)

----------------------------------------------------------------
Pull request

----------------------------------------------------------------

Chao Gao (1):
  util/aio: Defer disabling poll mode as long as possible

Peter Krempa (1):
  block/blkio: Fix inclusion of required headers

Stefan Hajnoczi (1):
  virtio-blk: simplify virtio_blk_dma_restart_cb()

 include/hw/virtio/virtio-blk.h  |  2 --
 block/blkio.c                   |  2 ++
 hw/block/dataplane/virtio-blk.c | 17 +++++-------
 hw/block/virtio-blk.c           | 46 ++++++++++++++-------------------
 util/aio-posix.c                | 21 ++++++++++-----
 5 files changed, 43 insertions(+), 45 deletions(-)

--
2.39.0

New version of the cover letter (v2 of the pull request, 2023-07-12):

The following changes since commit 887cba855bb6ff4775256f7968409281350b568c:

  configure: Fix cross-building for RISCV host (v5) (2023-07-11 17:56:09 +0100)

are available in the Git repository at:

  https://gitlab.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 75dcb4d790bbe5327169fd72b185960ca58e2fa6:

  virtio-blk: fix host notifier issues during dataplane start/stop (2023-07-12 15:20:32 -0400)

----------------------------------------------------------------
Pull request

----------------------------------------------------------------

Stefan Hajnoczi (1):
  virtio-blk: fix host notifier issues during dataplane start/stop

 hw/block/dataplane/virtio-blk.c | 67 +++++++++++++++++++--------------
 1 file changed, 38 insertions(+), 29 deletions(-)

--
2.40.1
Deleted patch (present in the old version of the series, dropped from the new one):
From: Chao Gao <chao.gao@intel.com>

When we measure FIO read performance (cache=writethrough, bs=4k,
iodepth=64) in VMs, ~80K/s notifications (e.g., EPT_MISCONFIG) are observed
from guest to QEMU.

It turns out those frequent notifications are caused by interference from
worker threads. Worker threads queue bottom halves after completing IO
requests. Pending bottom halves cause either aio_compute_timeout() to zero
the timeout that is passed to try_poll_mode(), or run_poll_handlers() to
return no progress after noticing pending aio_notify() events. Both cause
try_poll_mode() to call poll_set_started(false) to disable poll mode.
However, in both cases, as the timeout is already zeroed, the event loop
(i.e., aio_poll()) just processes bottom halves and then starts the next
event loop iteration. So disabling poll mode has no value and only leads
to unnecessary notifications from the guest.

To minimize unnecessary notifications from the guest, defer disabling poll
mode until the event loop is about to be blocked.

With this patch applied, FIO seq-read performance (bs=4k, iodepth=64,
cache=writethrough) in VMs increases from 330K to 413K IOPS.

Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Chao Gao <chao.gao@intel.com>
Message-id: 20220710120849.63086-1-chao.gao@intel.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/aio-posix.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index XXXXXXX..XXXXXXX 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -XXX,XX +XXX,XX @@ static bool try_poll_mode(AioContext *ctx, AioHandlerList *ready_list,
 
     max_ns = qemu_soonest_timeout(*timeout, ctx->poll_ns);
     if (max_ns && !ctx->fdmon_ops->need_wait(ctx)) {
+        /*
+         * Enable poll mode. It pairs with the poll_set_started() in
+         * aio_poll() which disables poll mode.
+         */
         poll_set_started(ctx, ready_list, true);
 
         if (run_poll_handlers(ctx, ready_list, max_ns, timeout)) {
             return true;
         }
     }
-
-    if (poll_set_started(ctx, ready_list, false)) {
-        *timeout = 0;
-        return true;
-    }
-
     return false;
 }
 
@@ -XXX,XX +XXX,XX @@ bool aio_poll(AioContext *ctx, bool blocking)
      * system call---a single round of run_poll_handlers_once suffices.
      */
     if (timeout || ctx->fdmon_ops->need_wait(ctx)) {
+        /*
+         * Disable poll mode. Poll mode should be disabled before the call
+         * to ctx->fdmon_ops->wait() so that the guest's notification can
+         * wake up IO threads when some work becomes pending. It is
+         * essential to avoid hangs or unnecessary latency.
+         */
+        if (poll_set_started(ctx, &ready_list, false)) {
+            timeout = 0;
+            progress = true;
+        }
+
         ctx->fdmon_ops->wait(ctx, &ready_list, timeout);
     }
 
--
2.39.0
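
To make the control flow of the patch above easier to follow, here is a
minimal toy model of the idea. It is not QEMU code and assumes nothing
about the real AioContext API; every identifier below is hypothetical.
The point is that poll mode is left enabled across zero-timeout
iterations and only disabled at the moment the loop is truly about to
block:

#include <stdbool.h>
#include <stdio.h>

static bool poll_started;   /* models the AioContext poll state */

/* Busy-poll for new work; true means progress was made. */
static bool try_poll(void)
{
    poll_started = true;    /* enter poll mode and deliberately stay in it */
    /* ... spin for a bounded time looking for new work ... */
    return false;           /* nothing found this iteration */
}

/* One event loop iteration, loosely modeled on aio_poll(). */
static bool loop_iteration(bool blocking)
{
    bool progress = false;
    int timeout_ms = blocking ? 10 : 0;

    progress |= try_poll();

    if (timeout_ms) {
        /*
         * Only now, when the loop is really about to block, leave poll
         * mode so the guest's notifications can wake us up. With a zero
         * timeout the loop runs again immediately anyway, so leaving
         * poll mode there would only force needless guest notifications.
         */
        if (poll_started) {
            poll_started = false;
            progress = true;
            timeout_ms = 0;   /* state changed; skip sleeping this round */
        }
        /* ... ppoll()/epoll_wait() with timeout_ms would go here ... */
    }
    return progress;
}

int main(void)
{
    bool progress = loop_iteration(true);
    printf("progress=%d poll_started=%d\n", progress, poll_started);
    return 0;
}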
Old patch in this position, "virtio-blk: simplify virtio_blk_dma_restart_cb()" (v1 of the pull request):

virtio_blk_dma_restart_cb() is tricky because the BH must deal with
virtio_blk_data_plane_start()/virtio_blk_data_plane_stop() being called.

There are two issues with the code:

1. virtio_blk_realize() should use qdev_add_vm_change_state_handler()
   instead of qemu_add_vm_change_state_handler(). This ensures that the
   ordering with virtio_init()'s vm change state handler that calls
   virtio_blk_data_plane_start()/virtio_blk_data_plane_stop() is
   well-defined. Then blk's AioContext is guaranteed to be up-to-date in
   virtio_blk_dma_restart_cb() and it's no longer necessary to have a
   special case for virtio_blk_data_plane_start().

2. Only blk_drain() waits for virtio_blk_dma_restart_cb()'s
   blk_inc_in_flight() to be decremented. The bdrv_drain() family of
   functions does not wait for BlockBackend's in_flight counter to reach
   zero. virtio_blk_data_plane_stop() relies on blk_set_aio_context()'s
   implicit drain, but that's a bdrv_drain() and not a blk_drain().
   Note that virtio_blk_reset() already correctly relies on blk_drain().
   If virtio_blk_data_plane_stop() switches to blk_drain() then we can
   properly wait for pending virtio_blk_dma_restart_bh() calls.

Once these issues are taken care of the code becomes simpler. This
change is in preparation for multiple IOThreads in virtio-blk where we
need to clean up the multi-threading behavior.

I ran the reproducer from commit 49b44549ace7 ("virtio-blk: On restart,
process queued requests in the proper context") to check that there is
no regression.

Cc: Sergio Lopez <slp@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-id: 20221102182337.252202-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/virtio/virtio-blk.h  |  2 --
 hw/block/dataplane/virtio-blk.c | 17 +++++-------
 hw/block/virtio-blk.c           | 46 ++++++++++++++-------------------
 3 files changed, 26 insertions(+), 39 deletions(-)

diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -XXX,XX +XXX,XX @@ struct VirtIOBlock {
     VirtIODevice parent_obj;
     BlockBackend *blk;
     void *rq;
-    QEMUBH *bh;
     VirtIOBlkConf conf;
     unsigned short sector_mask;
     bool original_wce;
@@ -XXX,XX +XXX,XX @@ typedef struct MultiReqBuffer {
 } MultiReqBuffer;
 
 void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
-void virtio_blk_process_queued_requests(VirtIOBlock *s, bool is_bh);
 
 #endif
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
         goto fail_aio_context;
     }
 
-    /* Process queued requests before the ones in vring */
-    virtio_blk_process_queued_requests(vblk, false);
-
     /* Kick right away to begin processing requests already in vring */
     for (i = 0; i < nvqs; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
   fail_host_notifiers:
     k->set_guest_notifiers(qbus->parent, nvqs, false);
   fail_guest_notifiers:
-    /*
-     * If we failed to set up the guest notifiers queued requests will be
-     * processed on the main context.
-     */
-    virtio_blk_process_queued_requests(vblk, false);
     vblk->dataplane_disabled = true;
     s->starting = false;
     vblk->dataplane_started = true;
     return -ENOSYS;
 }
 
@@ -XXX,XX +XXX,XX @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     aio_context_acquire(s->ctx);
     aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
 
-    /* Drain and try to switch bs back to the QEMU main loop. If other users
-     * keep the BlockBackend in the iothread, that's ok */
+    /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
+    blk_drain(s->conf->conf.blk);
+
+    /*
+     * Try to switch bs back to the QEMU main loop. If other users keep the
+     * BlockBackend in the iothread, that's ok
+     */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context(), NULL);
 
     aio_context_release(s->ctx);
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
     virtio_blk_handle_vq(s, vq);
 }
 
-void virtio_blk_process_queued_requests(VirtIOBlock *s, bool is_bh)
+static void virtio_blk_dma_restart_bh(void *opaque)
 {
+    VirtIOBlock *s = opaque;
+
     VirtIOBlockReq *req = s->rq;
     MultiReqBuffer mrb = {};
 
@@ -XXX,XX +XXX,XX @@ void virtio_blk_process_queued_requests(VirtIOBlock *s, bool is_bh)
     if (mrb.num_reqs) {
         virtio_blk_submit_multireq(s, &mrb);
     }
-    if (is_bh) {
-        blk_dec_in_flight(s->conf.conf.blk);
-    }
+
+    /* Paired with inc in virtio_blk_dma_restart_cb() */
+    blk_dec_in_flight(s->conf.conf.blk);
+
     aio_context_release(blk_get_aio_context(s->conf.conf.blk));
 }
 
-static void virtio_blk_dma_restart_bh(void *opaque)
-{
-    VirtIOBlock *s = opaque;
-
-    qemu_bh_delete(s->bh);
-    s->bh = NULL;
-
-    virtio_blk_process_queued_requests(s, true);
-}
-
 static void virtio_blk_dma_restart_cb(void *opaque, bool running,
                                       RunState state)
 {
     VirtIOBlock *s = opaque;
-    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
-    VirtioBusState *bus = VIRTIO_BUS(qbus);
 
     if (!running) {
         return;
     }
 
-    /*
-     * If ioeventfd is enabled, don't schedule the BH here as queued
-     * requests will be processed while starting the data plane.
-     */
-    if (!s->bh && !virtio_bus_ioeventfd_enabled(bus)) {
-        s->bh = aio_bh_new(blk_get_aio_context(s->conf.conf.blk),
-                           virtio_blk_dma_restart_bh, s);
-        blk_inc_in_flight(s->conf.conf.blk);
-        qemu_bh_schedule(s->bh);
-    }
+    /* Paired with dec in virtio_blk_dma_restart_bh() */
+    blk_inc_in_flight(s->conf.conf.blk);
+
+    aio_bh_schedule_oneshot(blk_get_aio_context(s->conf.conf.blk),
+                            virtio_blk_dma_restart_bh, s);
 }
 
 static void virtio_blk_reset(VirtIODevice *vdev)
@@ -XXX,XX +XXX,XX @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    s->change = qemu_add_vm_change_state_handler(virtio_blk_dma_restart_cb, s);
+    /*
+     * This must be after virtio_init() so virtio_blk_dma_restart_cb() gets
+     * called after ->start_ioeventfd() has already set blk's AioContext.
+     */
+    s->change =
+        qdev_add_vm_change_state_handler(dev, virtio_blk_dma_restart_cb, s);
+
     blk_ram_registrar_init(&s->blk_ram_registrar, s->blk);
     blk_set_dev_ops(s->blk, &virtio_block_ops, s);
 
--
2.39.0
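
The in-flight counter pairing described in issue 2 of the old patch
above can be sketched in isolation. The following self-contained toy
model -- not QEMU code; all names are hypothetical -- shows why a
blk_drain()-style wait only completes once the scheduled one-shot
bottom half has run and dropped its reference:

#include <assert.h>
#include <stdio.h>

typedef void (*bh_cb)(void *opaque);

static struct { bh_cb cb; void *opaque; } pending_bh;
static int in_flight;   /* models BlockBackend's in_flight counter */

static void schedule_oneshot(bh_cb cb, void *opaque)
{
    pending_bh.cb = cb;
    pending_bh.opaque = opaque;
}

/* One event loop iteration: run a pending bottom half, if any. */
static void run_pending(void)
{
    if (pending_bh.cb) {
        bh_cb cb = pending_bh.cb;
        pending_bh.cb = NULL;
        cb(pending_bh.opaque);
    }
}

static void restart_bh(void *opaque)
{
    (void)opaque;
    /* ... resubmit the queued requests here ... */
    in_flight--;               /* paired with the inc in restart_cb() */
}

static void restart_cb(void)
{
    in_flight++;               /* paired with the dec in restart_bh() */
    schedule_oneshot(restart_bh, NULL);
}

/* blk_drain() analogue: run the event loop until nothing is in flight. */
static void drain(void)
{
    while (in_flight > 0) {
        run_pending();
    }
}

int main(void)
{
    restart_cb();
    drain();
    assert(in_flight == 0);
    printf("drained: in_flight=%d\n", in_flight);
    return 0;
}

A bdrv_drain()-style wait in this model would spin on a different
counter and never observe in_flight, which is exactly why the old patch
switched virtio_blk_data_plane_stop() to blk_drain().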
New patch in this position, "virtio-blk: fix host notifier issues during dataplane start/stop" (v2 of the pull request):

The main loop thread can consume 100% CPU when using --device
virtio-blk-pci,iothread=<iothread>. ppoll() constantly returns but
reading virtqueue host notifiers fails with EAGAIN. The file descriptors
are stale and remain registered with the AioContext because of bugs in
the virtio-blk dataplane start/stop code.

The problem is that the dataplane start/stop code involves drain
operations, which call virtio_blk_drained_begin() and
virtio_blk_drained_end() at points where the host notifier is not
operational:
- In virtio_blk_data_plane_start(), blk_set_aio_context() drains after
  vblk->dataplane_started has been set to true but the host notifier has
  not been attached yet.
- In virtio_blk_data_plane_stop(), blk_drain() and blk_set_aio_context()
  drain after the host notifier has already been detached but with
  vblk->dataplane_started still set to true.

I would like to simplify ->ioeventfd_start/stop() to avoid interactions
with drain entirely, but couldn't find a way to do that. Instead, this
patch accepts the fragile nature of the code and reorders it so that
vblk->dataplane_started is false during drain operations. This way the
virtio_blk_drained_begin() and virtio_blk_drained_end() calls don't
touch the host notifier. The result is that
virtio_blk_data_plane_start() and virtio_blk_data_plane_stop() have
complete control over the host notifier and stale file descriptors are
no longer left in the AioContext.

This patch fixes the 100% CPU consumption in the main loop thread and
correctly moves host notifier processing to the IOThread.

Fixes: 1665d9326fd2 ("virtio-blk: implement BlockDevOps->drained_begin()")
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Lukas Doktor <ldoktor@redhat.com>
Message-id: 20230704151527.193586-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 67 +++++++++++++++++++--------------
 1 file changed, 38 insertions(+), 29 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
 
     memory_region_transaction_commit();
 
-    /*
-     * These fields are visible to the IOThread so we rely on implicit barriers
-     * in aio_context_acquire() on the write side and aio_notify_accept() on
-     * the read side.
-     */
-    s->starting = false;
-    vblk->dataplane_started = true;
     trace_virtio_blk_data_plane_start(s);
 
     old_context = blk_get_aio_context(s->conf->conf.blk);
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
         event_notifier_set(virtio_queue_get_host_notifier(vq));
     }
 
+    /*
+     * These fields must be visible to the IOThread when it processes the
+     * virtqueue, otherwise it will think dataplane has not started yet.
+     *
+     * Make sure ->dataplane_started is false when blk_set_aio_context() is
+     * called above so that draining does not cause the host notifier to be
+     * detached/attached prematurely.
+     */
+    s->starting = false;
+    vblk->dataplane_started = true;
+    smp_wmb(); /* paired with aio_notify_accept() on the read side */
+
     /* Get this show started by hooking up our callbacks */
     if (!blk_in_drain(s->conf->conf.blk)) {
         aio_context_acquire(s->ctx);
@@ -XXX,XX +XXX,XX @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
   fail_host_notifiers:
     k->set_guest_notifiers(qbus->parent, nvqs, false);
   fail_guest_notifiers:
     vblk->dataplane_disabled = true;
     s->starting = false;
-    vblk->dataplane_started = true;
     return -ENOSYS;
 }
 
@@ -XXX,XX +XXX,XX @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
         aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
     }
 
+    /*
+     * Batch all the host notifiers in a single transaction to avoid
+     * quadratic time complexity in address_space_update_ioeventfds().
+     */
+    memory_region_transaction_begin();
+
+    for (i = 0; i < nvqs; i++) {
+        virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
+    }
+
+    /*
+     * The transaction expects the ioeventfds to be open when it
+     * commits. Do it now, before the cleanup loop.
+     */
+    memory_region_transaction_commit();
+
+    for (i = 0; i < nvqs; i++) {
+        virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
+    }
+
+    /*
+     * Set ->dataplane_started to false before draining so that host notifiers
+     * are not detached/attached anymore.
+     */
+    vblk->dataplane_started = false;
+
     aio_context_acquire(s->ctx);
 
     /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
@@ -XXX,XX +XXX,XX @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
 
     aio_context_release(s->ctx);
 
-    /*
-     * Batch all the host notifiers in a single transaction to avoid
-     * quadratic time complexity in address_space_update_ioeventfds().
-     */
-    memory_region_transaction_begin();
-
-    for (i = 0; i < nvqs; i++) {
-        virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
-    }
-
-    /*
-     * The transaction expects the ioeventfds to be open when it
-     * commits. Do it now, before the cleanup loop.
-     */
-    memory_region_transaction_commit();
-
-    for (i = 0; i < nvqs; i++) {
-        virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
-    }
-
     qemu_bh_cancel(s->bh);
     notify_guest_bh(s); /* final chance to notify guest */
 
     /* Clean up guest notifier (irq) */
     k->set_guest_notifiers(qbus->parent, nvqs, false);
 
-    vblk->dataplane_started = false;
     s->stopping = false;
 }
 
--
2.40.1
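
The essence of the reordering in the new patch is an invariant: the
drained_begin()/drained_end() hooks consult the started flag, so
clearing the flag before any operation that drains keeps the hooks away
from the host notifier. Below is a minimal toy model of that invariant
-- not QEMU code; all names are hypothetical:

#include <stdbool.h>
#include <stdio.h>

static bool dataplane_started;
static bool host_notifier_attached;

static void drained_begin(void)
{
    if (!dataplane_started) {
        return;                     /* flag cleared: leave notifier alone */
    }
    host_notifier_attached = false; /* detach while quiesced */
}

static void drained_end(void)
{
    if (!dataplane_started) {
        return;
    }
    host_notifier_attached = true;  /* reattach after the drain */
}

/* Models an operation with an implicit drain, e.g. blk_set_aio_context(). */
static void drain(void)
{
    drained_begin();
    /* ... wait for in-flight requests to complete ... */
    drained_end();
}

static void dataplane_stop(void)
{
    host_notifier_attached = false; /* stop code detaches explicitly */
    dataplane_started = false;      /* clear the flag BEFORE draining */
    drain();                        /* hooks are no-ops: no stale fds left */
}

int main(void)
{
    dataplane_started = true;
    host_notifier_attached = true;
    dataplane_stop();
    printf("attached=%d started=%d\n",
           host_notifier_attached, dataplane_started);
    return 0;
}

With the old ordering (flag still true during drain), drained_end()
would reattach a notifier that the stop path had already torn down --
the stale file descriptor scenario the commit message describes.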
Deleted patch (present in the old version of the series, dropped from the new one):
From: Peter Krempa <pkrempa@redhat.com>

After the recent header file inclusion rework, the build fails when the
blkio module is enabled:

../block/blkio.c: In function ‘blkio_detach_aio_context’:
../block/blkio.c:321:24: error: implicit declaration of function ‘bdrv_get_aio_context’; did you mean ‘qemu_get_aio_context’? [-Werror=implicit-function-declaration]
  321 |     aio_set_fd_handler(bdrv_get_aio_context(bs),
      |                        ^~~~~~~~~~~~~~~~~~~~
      |                        qemu_get_aio_context
../block/blkio.c:321:24: error: nested extern declaration of ‘bdrv_get_aio_context’ [-Werror=nested-externs]
../block/blkio.c:321:24: error: passing argument 1 of ‘aio_set_fd_handler’ makes pointer from integer without a cast [-Werror=int-conversion]
  321 |     aio_set_fd_handler(bdrv_get_aio_context(bs),
      |                        ^~~~~~~~~~~~~~~~~~~~~~~~
      |                        |
      |                        int
In file included from /home/pipo/git/qemu.git/include/qemu/job.h:33,
                 from /home/pipo/git/qemu.git/include/block/blockjob.h:30,
                 from /home/pipo/git/qemu.git/include/block/block_int-global-state.h:28,
                 from /home/pipo/git/qemu.git/include/block/block_int.h:27,
                 from ../block/blkio.c:13:
/home/pipo/git/qemu.git/include/block/aio.h:476:37: note: expected ‘AioContext *’ but argument is of type ‘int’
  476 | void aio_set_fd_handler(AioContext *ctx,
      |                         ~~~~~~~~~~~~^~~
../block/blkio.c: In function ‘blkio_file_open’:
../block/blkio.c:821:34: error: passing argument 2 of ‘blkio_attach_aio_context’ makes pointer from integer without a cast [-Werror=int-conversion]
  821 |     blkio_attach_aio_context(bs, bdrv_get_aio_context(bs));
      |                                  ^~~~~~~~~~~~~~~~~~~~~~~~
      |                                  |
      |                                  int

Fix it by including 'block/block-io.h', which contains the required
declarations.

Fixes: e2c1c34f139f49ef909bb4322607fb8b39002312
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 2bc956011404a1ab03342aefde0087b5b4762562.1674477350.git.pkrempa@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/blkio.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/blkio.c b/block/blkio.c
index XXXXXXX..XXXXXXX 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/module.h"
 #include "exec/memory.h" /* for ram_block_discard_disable() */
 
+#include "block/block-io.h"
+
 /*
  * Keep the QEMU BlockDriver names identical to the libblkio driver names.
  * Using macros instead of typing out the string literals avoids typos.
--
2.39.0
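
For readers unfamiliar with this error class, here is a hypothetical,
self-contained illustration (not from the QEMU tree; every name is made
up). A call to a function with no declaration in scope is an "implicit
declaration": C assumes it returns int, and passing that int where a
pointer is expected then trips -Werror=implicit-function-declaration and
-Werror=int-conversion, exactly the pair of diagnostics in the log
above. The fix is always the same: include the header that declares the
function, as the patch does with block/block-io.h.

#include <stdio.h>

typedef struct Ctx { int id; } Ctx;

/*
 * In the real bug this declaration lived in a header (block/block-io.h)
 * that block/blkio.c forgot to include. Commenting out this definition
 * and calling get_ctx() anyway reproduces the errors quoted above.
 */
static Ctx *get_ctx(void)
{
    static Ctx ctx = { .id = 42 };
    return &ctx;
}

static void use_ctx(Ctx *ctx)
{
    printf("ctx id=%d\n", ctx->id);
}

int main(void)
{
    use_ctx(get_ctx());   /* compiles cleanly once get_ctx() is declared */
    return 0;
}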