Removing a drive with drive_del while it is being used to run an I/O
intensive workload can cause QEMU to crash.
An AIO flush can yield at some point:

    blk_aio_flush_entry()
     blk_co_flush(blk)
      bdrv_co_flush(blk->root->bs)
       ...
       qemu_coroutine_yield()
and lets the HMP command run, which frees blk->root and hands control
back to the AIO flush:
    hmp_drive_del()
     blk_remove_bs()
      bdrv_root_unref_child(blk->root)
       child_bs = blk->root->bs
       bdrv_detach_child(blk->root)
        bdrv_replace_child(blk->root, NULL)
         blk->root->bs = NULL
        g_free(blk->root) <============== blk->root becomes stale
       bdrv_unref(child_bs)
        bdrv_delete(child_bs)
         bdrv_close()
          bdrv_drained_begin()
           bdrv_do_drained_begin()
            bdrv_drain_recurse()
             aio_poll()
              ...
              qemu_coroutine_switch()
and the AIO flush completion ends up dereferencing blk->root:
    blk_aio_complete()
     scsi_aio_complete()
      blk_get_aio_context(blk)
       bs = blk_bs(blk)
       i.e., bs = blk->root ? blk->root->bs : NULL
                  ^^^^^^^^^
                  stale
The solution to this use-after-free situation is to clear blk->root in
blk_remove_bs() before calling bdrv_root_unref_child(), and let
blk_get_aio_context() fall back to the main loop context since the BDS
has been removed.
Signed-off-by: Greg Kurz <groug@kaod.org>
---
The use-after-free condition is easy to reproduce with a stress-ng
run in the guest:
-device virtio-scsi-pci,id=scsi1 \
-drive file=/home/greg/images/scratch.qcow2,format=qcow2,if=none,id=drive1 \
-device scsi-hd,bus=scsi1.0,drive=drive1,id=scsi-hd1
# stress-ng --hdd 0 --aggressive
and doing drive_del from the QEMU monitor while stress-ng is still running:
(qemu) drive_del drive1
The crash is not as easy to hit, though, as it depends on the bs field
of the stale blk->root having a non-NULL value that eventually breaks
something when it gets dereferenced. The following patch simulates that
and allows the fix to be validated:
--- a/block.c
+++ b/block.c
@@ -2127,6 +2127,8 @@ BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
 
 static void bdrv_detach_child(BdrvChild *child)
 {
+    BlockDriverState *bs = child->bs;
+
     if (child->next.le_prev) {
         QLIST_REMOVE(child, next);
         child->next.le_prev = NULL;
@@ -2135,7 +2137,15 @@ static void bdrv_detach_child(BdrvChild *child)
     bdrv_replace_child(child, NULL);
 
     g_free(child->name);
-    g_free(child);
+    /* Poison the BdrvChild instead of freeing it, in order to break blk_bs()
+     * if the blk still has a pointer to this BdrvChild in blk->root.
+     */
+    if (atomic_read(&bs->in_flight)) {
+        child->bs = (BlockDriverState *) -1;
+        fprintf(stderr, "\nPoisoned BdrvChild %p\n", child);
+    } else {
+        g_free(child);
+    }
 }
 
 void bdrv_root_unref_child(BdrvChild *child)
---
block/block-backend.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/block/block-backend.c b/block/block-backend.c
index 681b240b1268..ed9434e236b9 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -756,6 +756,7 @@ void blk_remove_bs(BlockBackend *blk)
 {
     ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
     BlockDriverState *bs;
+    BdrvChild *root;
 
     notifier_list_notify(&blk->remove_bs_notifiers, blk);
     if (tgm->throttle_state) {
@@ -768,8 +769,9 @@ void blk_remove_bs(BlockBackend *blk)
 
     blk_update_root_state(blk);
 
-    bdrv_root_unref_child(blk->root);
+    root = blk->root;
     blk->root = NULL;
+    bdrv_root_unref_child(root);
 }
 
 /*
On Wed, May 16, 2018 at 01:21:54PM +0200, Greg Kurz wrote:
> Removing a drive with drive_del while it is being used to run an I/O
> intensive workload can cause QEMU to crash.
[...]
> Signed-off-by: Greg Kurz <groug@kaod.org>
> ---

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>
Heh, of course I meant qemu-stable@nongnu.org ;)

On Wed, 16 May 2018 13:21:54 +0200
Greg Kurz <groug@kaod.org> wrote:

> Removing a drive with drive_del while it is being used to run an I/O
> intensive workload can cause QEMU to crash.
[...]
On Wed, May 16, 2018 at 01:21:54PM +0200, Greg Kurz wrote:
> Removing a drive with drive_del while it is being used to run an I/O
> intensive workload can cause QEMU to crash.
>
> An AIO flush can yield at some point:
>
>     blk_aio_flush_entry()
>      blk_co_flush(blk)
>       bdrv_co_flush(blk->root->bs)
>        ...
>        qemu_coroutine_yield()

I'm surprised you didn't hit another crash later on with this patch
applied.  What happens to this completion after you've set blk->root =
NULL?

[...]

> The solution to this use-after-free situation is to clear
> blk->root before calling bdrv_unref() in bdrv_detach_child(),
> and let blk_get_aio_context() fall back to the main loop context
> since the BDS has been removed.

QEMU should drain I/O requests before making block driver graph changes.
I think the drained region in blk_remove_bs() needs to begin earlier so
that requests are completed before we begin to change things.

Stefan
On Fri, 18 May 2018 16:32:46 +0100
Stefan Hajnoczi <stefanha@redhat.com> wrote:

> On Wed, May 16, 2018 at 01:21:54PM +0200, Greg Kurz wrote:
> > Removing a drive with drive_del while it is being used to run an I/O
> > intensive workload can cause QEMU to crash.
> >
> > An AIO flush can yield at some point:
> >
> >     blk_aio_flush_entry()
> >      blk_co_flush(blk)
> >       bdrv_co_flush(blk->root->bs)
> >        ...
> >        qemu_coroutine_yield()
>
> I'm surprised you didn't hit another crash later on with this patch
> applied.  What happens to this completion after you've set blk->root =
> NULL?
>

bdrv_co_flush() takes a BDS argument, so I don't see how it would be
affected by blk->root being set to NULL. Then blk_co_flush() returns to
blk_aio_flush_entry(), and the next user of blk->root is....

[...]

> > and the AIO flush completion ends up dereferencing blk->root:
> >
> >     blk_aio_complete()
> >      scsi_aio_complete()
> >       blk_get_aio_context(blk)
> >        bs = blk_bs(blk)
> >        i.e., bs = blk->root ? blk->root->bs : NULL

... here, and the completion ends in the main loop context.

> QEMU should drain I/O requests before making block driver graph changes.
> I think the drained region in blk_remove_bs() needs to begin earlier so
> that requests are completed before we begin to change things.
>

This looks better indeed. The drained section currently begins in
bdrv_close(), which happens much later than bdrv_detach_child(), which
actually changes things.

Maybe change bdrv_root_unref_child() to ensure we don't call
bdrv_close() with pending I/O requests?

void bdrv_root_unref_child(BdrvChild *child)
{
    BlockDriverState *child_bs;

    child_bs = child->bs;
+   bdrv_drained_begin(child_bs);
    bdrv_detach_child(child);
+   bdrv_drained_end(child_bs);
    bdrv_unref(child_bs);
}

> Stefan
On 23/05/2018 16:46, Greg Kurz wrote:
> Maybe change bdrv_root_unref_child() to ensure we don't call
> bdrv_close() with pending I/O requests?
>
> void bdrv_root_unref_child(BdrvChild *child)
> {
>     BlockDriverState *child_bs;
>
>     child_bs = child->bs;
> +   bdrv_drained_begin(child_bs);
>     bdrv_detach_child(child);
> +   bdrv_drained_end(child_bs);
>     bdrv_unref(child_bs);
> }

Maybe bdrv_detach_child should do it.

Paolo
On Thu, May 24, 2018 at 08:04:59AM +0200, Paolo Bonzini wrote:
> On 23/05/2018 16:46, Greg Kurz wrote:
> > Maybe change bdrv_root_unref_child() to ensure we don't call
> > bdrv_close() with pending I/O requests?
[...]
>
> Maybe bdrv_detach_child should do it.

Sounds good.

Stefan
On Thu, 24 May 2018 09:05:53 +0100
Stefan Hajnoczi <stefanha@redhat.com> wrote:

> On Thu, May 24, 2018 at 08:04:59AM +0200, Paolo Bonzini wrote:
> > On 23/05/2018 16:46, Greg Kurz wrote:
[...]
> > Maybe bdrv_detach_child should do it.
>
> Sounds good.
>
> Stefan

I guess it makes sense for bdrv_detach_child() to *break* blk->root
without leaving I/O requests behind. I'll just do that then.

Cheers,

--
Greg