Old cover letter:

The following changes since commit 64175afc695c0672876fbbfc31b299c86d562cb4:

  arm_gicv3: Fix ICC_BPR1 reset value when EL3 not implemented (2017-06-07 17:21:44 +0100)

are available in the git repository at:

  git://github.com/codyprime/qemu-kvm-jtc.git tags/block-pull-request

for you to fetch changes up to 56faeb9bb6872b3f926b3b3e0452a70beea10af2:

  block/gluster.c: Handle qdict_array_entries() failure (2017-06-09 08:41:29 -0400)

----------------------------------------------------------------
Gluster patch
----------------------------------------------------------------

Peter Maydell (1):
  block/gluster.c: Handle qdict_array_entries() failure

 block/gluster.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--
2.9.3

New cover letter:

The following changes since commit 33f18cf7dca7741d3647d514040904ce83edd73d:

  Merge remote-tracking branch 'remotes/kraxel/tags/audio-20190821-pull-request' into staging (2019-08-21 15:18:50 +0100)

are available in the Git repository at:

  https://github.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to 5d4c1ed3d46d7e2010b389fe5f3376f605182ab0:

  vhost-user-scsi: prevent using uninitialized vqs (2019-08-22 16:52:23 +0100)

----------------------------------------------------------------
Pull request
----------------------------------------------------------------

Raphael Norwitz (1):
  vhost-user-scsi: prevent using uninitialized vqs

Stefan Hajnoczi (1):
  util/async: hold AioContext ref to prevent use-after-free

 hw/scsi/vhost-user-scsi.c | 2 +-
 util/async.c              | 8 ++++++++
 2 files changed, 9 insertions(+), 1 deletion(-)

--
2.21.0

New patch: util/async: hold AioContext ref to prevent use-after-free

The tests/test-bdrv-drain /bdrv-drain/iothread/drain test case does the
following:

1. The preadv coroutine calls aio_bh_schedule_oneshot() and then yields.
2. The one-shot BH executes in another AioContext. All it does is call
   aio_co_wakeup(preadv_co).
3. The preadv coroutine is re-entered and returns.

There is a race condition in aio_co_wake() where the preadv coroutine
returns and the test case destroys the preadv IOThread. aio_co_wake()
can still be running in the other AioContext and it performs an access
to the freed IOThread AioContext.

Here is the race in aio_co_schedule():

  QSLIST_INSERT_HEAD_ATOMIC(&ctx->scheduled_coroutines,
                            co, co_scheduled_next);
  <-- race: co may execute before we invoke qemu_bh_schedule()!
  qemu_bh_schedule(ctx->co_schedule_bh);

So if co causes ctx to be freed then we're in trouble. Fix this problem
by holding a reference to ctx.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20190723190623.21537-1-stefanha@redhat.com
Message-Id: <20190723190623.21537-1-stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/async.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/util/async.c b/util/async.c
index XXXXXXX..XXXXXXX 100644
--- a/util/async.c
+++ b/util/async.c
@@ -XXX,XX +XXX,XX @@ void aio_co_schedule(AioContext *ctx, Coroutine *co)
         abort();
     }

+    /* The coroutine might run and release the last ctx reference before we
+     * invoke qemu_bh_schedule().  Take a reference to keep ctx alive until
+     * we're done.
+     */
+    aio_context_ref(ctx);
+
     QSLIST_INSERT_HEAD_ATOMIC(&ctx->scheduled_coroutines,
                               co, co_scheduled_next);
     qemu_bh_schedule(ctx->co_schedule_bh);
+
+    aio_context_unref(ctx);
 }

 void aio_co_wake(struct Coroutine *co)
--
2.21.0

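The fix above follows a common reference-counting idiom: take a reference before publishing work that another thread may pick up, and drop it only after the last access, so the object cannot be freed in the window between the publish and the notification. Below is a minimal, self-contained sketch of that idiom in plain C11. It is not QEMU code; Ctx, publish_work() and notify() are invented stand-ins for AioContext, QSLIST_INSERT_HEAD_ATOMIC() and qemu_bh_schedule().

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        atomic_int refcount;
        int pending;               /* stand-in for the scheduled-coroutine list */
    } Ctx;

    static void ctx_ref(Ctx *ctx)
    {
        atomic_fetch_add(&ctx->refcount, 1);
    }

    static void ctx_unref(Ctx *ctx)
    {
        if (atomic_fetch_sub(&ctx->refcount, 1) == 1) {
            free(ctx);             /* last reference dropped */
        }
    }

    /* Stand-ins for the publish and notify steps.  In the real code the
     * consumer may run on another thread as soon as the work is published
     * and may drop its own reference to the context. */
    static void publish_work(Ctx *ctx) { ctx->pending = 1; }
    static void notify(Ctx *ctx)       { printf("notify, pending=%d\n", ctx->pending); }

    static void schedule(Ctx *ctx)
    {
        ctx_ref(ctx);              /* keep ctx alive across publish + notify */
        publish_work(ctx);         /* from here on, others may release ctx   */
        notify(ctx);               /* still safe: we hold our own reference  */
        ctx_unref(ctx);
    }

    int main(void)
    {
        Ctx *ctx = calloc(1, sizeof(*ctx));
        atomic_init(&ctx->refcount, 1);    /* caller's reference */
        schedule(ctx);
        ctx_unref(ctx);
        return 0;
    }

Without the ctx_ref()/ctx_unref() pair in schedule(), a consumer that runs immediately after publish_work() could drop the last reference and free ctx before notify() touches it, which is exactly the use-after-free the patch closes.
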
Old patch: block/gluster.c: Handle qdict_array_entries() failure

From: Peter Maydell <peter.maydell@linaro.org>

In qemu_gluster_parse_json(), the call to qdict_array_entries()
could return a negative error code, which we were ignoring
because we assigned the result to an unsigned variable.
Fix this by using the 'int' type instead, which matches the
return type of qdict_array_entries() and also the type
we use for the loop enumeration variable 'i'.

(Spotted by Coverity, CID 1360960.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 1496682098-1540-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Jeff Cody <jcody@redhat.com>
---
 block/gluster.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index XXXXXXX..XXXXXXX 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -XXX,XX +XXX,XX @@ static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
     Error *local_err = NULL;
     char *str = NULL;
     const char *ptr;
-    size_t num_servers;
-    int i, type;
+    int i, type, num_servers;

     /* create opts info from runtime_json_opts list */
     opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
--
2.9.3

New patch: vhost-user-scsi: prevent using uninitialized vqs

From: Raphael Norwitz <raphael.norwitz@nutanix.com>

Of the 3 virtqueues, seabios only sets cmd, leaving ctrl
and event without a physical address. This can cause
vhost_verify_ring_part_mapping to return ENOMEM, causing
the following logs:

qemu-system-x86_64: Unable to map available ring for ring 0
qemu-system-x86_64: Verify ring failure on region 0

The qemu commit e6cc11d64fc998c11a4dfcde8fda3fc33a74d844
has already resolved the issue for vhost scsi devices but
the fix was never applied to vhost-user scsi devices.

Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1560299717-177734-1-git-send-email-raphael.norwitz@nutanix.com
Message-Id: <1560299717-177734-1-git-send-email-raphael.norwitz@nutanix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/vhost-user-scsi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/scsi/vhost-user-scsi.c
+++ b/hw/scsi/vhost-user-scsi.c
@@ -XXX,XX +XXX,XX @@ static void vhost_user_scsi_realize(DeviceState *dev, Error **errp)
     }

     vsc->dev.nvqs = 2 + vs->conf.num_queues;
-    vsc->dev.vqs = g_new(struct vhost_virtqueue, vsc->dev.nvqs);
+    vsc->dev.vqs = g_new0(struct vhost_virtqueue, vsc->dev.nvqs);
     vsc->dev.vq_index = 0;
     vsc->dev.backend_features = 0;
     vqs = vsc->dev.vqs;
--
2.21.0

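For background on the one-character change in the new patch above: GLib's g_new() returns uninitialized memory, while g_new0() zero-fills it. The short sketch below (plain C with GLib, not QEMU code; FakeVq is an invented stand-in for struct vhost_virtqueue) shows why zeroed entries matter when the guest only sets up some of the queues and later code distinguishes "unused" from "configured" by inspecting the fields.

    #include <glib.h>
    #include <stdio.h>

    typedef struct {
        guint64 desc_phys;    /* 0 is treated as "queue not set up by the guest" */
    } FakeVq;

    int main(void)
    {
        const guint nvqs = 3;

        /* With g_new() these entries would contain garbage and the check
         * below could misinterpret an unused queue as a configured one. */
        FakeVq *vqs = g_new0(FakeVq, nvqs);

        vqs[1].desc_phys = 0x1000;    /* the guest only programs one queue */

        for (guint i = 0; i < nvqs; i++) {
            if (vqs[i].desc_phys == 0) {
                printf("vq %u: not configured, skipping\n", i);
            } else {
                printf("vq %u: ring at 0x%llx\n", i,
                       (unsigned long long)vqs[i].desc_phys);
            }
        }

        g_free(vqs);
        return 0;
    }
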
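The superseded gluster patch shown above illustrates a different recurring bug class: a function that returns a negative errno on failure has its result stored in an unsigned variable, so a "less than zero" (or "less than one") check can never fire. Here is a standalone sketch of that failure mode; count_entries() is a hypothetical stand-in for qdict_array_entries(), not a real API.

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical stand-in: returns an entry count, or a negative errno. */
    static int count_entries(int fail)
    {
        return fail ? -22 /* -EINVAL */ : 3;
    }

    int main(void)
    {
        size_t num_unsigned = count_entries(1);   /* -22 wraps to a huge value */
        int    num_signed   = count_entries(1);

        if (num_unsigned < 1) {                   /* never true after wrapping */
            printf("unsigned: error detected\n");
        } else {
            printf("unsigned: error silently ignored (%zu entries?)\n",
                   num_unsigned);
        }

        if (num_signed < 0) {                     /* works as intended */
            printf("signed: error %d detected\n", num_signed);
        }
        return 0;
    }
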