[PATCH] block: Fix BB.root changing across bdrv_next()
Posted by Hanna Reitz 2 years, 2 months ago
bdrv_next() has no guarantee that its caller has stopped all block graph
operations; for example, bdrv_flush_all() does not.

The latter can actually provoke such operations, because its
bdrv_flush() call, which runs a coroutine (bdrv_co_flush()), may run
this coroutine in a different AioContext than the main one, and then
when this coroutine is done and invokes aio_wait_kick(), the monitor may
get a chance to run and start executing some graph-modifying QMP
command.

One example of this is when the VM encounters an I/O error on a block
device and stops, triggering a bdrv_flush_all(), and a blockdev-mirror
is started simultaneously on a block node in an I/O thread.  When
bdrv_flush_all() comes to that node[1] and flushes it, the
aio_wait_kick() at the end of bdrv_co_flush_entry() may cause the
monitor to process the mirror request, and mirror_start_job() will then
replace the node by a mirror filter node, before bdrv_flush_all()
resumes and can invoke bdrv_next() again to continue iterating.
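
For illustration, bdrv_flush_all()'s iteration looks roughly like this
(a condensed sketch, not the exact code; the point is only where the
window opens):

    BdrvNextIterator it;
    BlockDriverState *bs;

    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
        AioContext *ctx = bdrv_get_aio_context(bs);

        aio_context_acquire(ctx);
        /* bdrv_flush() polls; its aio_wait_kick() may let the monitor
         * run a graph-modifying QMP command before we get back here */
        bdrv_flush(bs);
        aio_context_release(ctx);
        /* by the time bdrv_next() runs again, blk_bs(it->blk) may
         * already name a different node than the one it bdrv_ref()-ed */
    }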

[1] Say there is a BlockBackend on top of the node in question, and so
bdrv_next() finds that BB and returns the node as the BB's blk_bs().
bdrv_next() will bdrv_ref() the node such that it remains valid through
bdrv_flush_all()'s iteration, and drop the reference when it is called
the next time.

The problem is that bdrv_next() does not record which BDS it retains a
strong reference to when the BDS is a BB's child, so on the subsequent
call, it will just invoke blk_bs() again and bdrv_unref() the returned
node -- but as the example above shows, this node may not be the one
that was bdrv_ref()-ed before.  This can lead to a use-after-free (for
the mirror filter node in our example), because this negligent
bdrv_unref() would steal a strong reference from someone else.
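
Condensed from the hunks below, the pre-patch bookkeeping is:

    BlockDriverState *bs, *old_bs;

    if (it->phase == BDRV_NEXT_BACKEND_ROOTS) {
        BlockBackend *old_blk = it->blk;

        /* Re-evaluated only now: if the BB's root has changed since the
         * previous call, this is not the node we bdrv_ref()-ed then */
        old_bs = old_blk ? blk_bs(old_blk) : NULL;

        /* ... advance it->blk and compute the new bs ... */

        if (bs) {
            bdrv_ref(bs);
            bdrv_unref(old_bs);  /* may steal someone else's reference */
            return bs;
        }
    }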

We can solve this problem by always storing the returned (and strongly
referenced) BDS in BdrvNextIterator.bs.  When we want to drop the strong
reference to a previously returned BDS, we always drop
BdrvNextIterator.bs instead of trying to reconstruct by other means
which BDS we returned last time.
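
For reference, the iterator state (reconstructed here from how it is
used; field order and details may differ from the real definition) is
roughly:

    typedef struct BdrvNextIterator {
        enum {
            BDRV_NEXT_BACKEND_ROOTS,   /* walking BlockBackend roots */
            BDRV_NEXT_MONITOR_OWNED,   /* walking monitor-owned BDSs */
        } phase;
        BlockBackend *blk;        /* current BB in the first phase */
        BlockDriverState *bs;     /* with this patch: always the BDS we
                                   * returned (and referenced) last */
    } BdrvNextIterator;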

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2058457
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
Sadly, I didn't find a nice way to test this, primarily because I
believe reproducing it requires a bdrv_flush_all() that comes from the
VM (without a QMP command).  The only way I know this can happen is
when the VM encounters an I/O error and responds by stopping the
guest.
It's also anything but easily reproducible, and I can't think of a way
to improve on that, because it really seems to be based on chance
whether the aio_wait_kick() wakes up the monitor and has it process an
incoming QMP command.
---
 block/block-backend.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 4ff6b4d785..c822f257dc 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -575,7 +575,7 @@ BlockBackend *blk_next(BlockBackend *blk)
  * the monitor or attached to a BlockBackend */
 BlockDriverState *bdrv_next(BdrvNextIterator *it)
 {
-    BlockDriverState *bs, *old_bs;
+    BlockDriverState *bs, *old_bs = it->bs;
 
     /* Must be called from the main loop */
     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
@@ -586,8 +586,6 @@ BlockDriverState *bdrv_next(BdrvNextIterator *it)
     if (it->phase == BDRV_NEXT_BACKEND_ROOTS) {
         BlockBackend *old_blk = it->blk;
 
-        old_bs = old_blk ? blk_bs(old_blk) : NULL;
-
         do {
             it->blk = blk_all_next(it->blk);
             bs = it->blk ? blk_bs(it->blk) : NULL;
@@ -601,11 +599,12 @@ BlockDriverState *bdrv_next(BdrvNextIterator *it)
         if (bs) {
             bdrv_ref(bs);
             bdrv_unref(old_bs);
+            it->bs = bs;
             return bs;
         }
         it->phase = BDRV_NEXT_MONITOR_OWNED;
-    } else {
-        old_bs = it->bs;
+        /* Start from the first monitor-owned BDS */
+        it->bs = NULL;
     }
 
     /* Then return the monitor-owned BDSes without a BB attached. Ignore all
-- 
2.34.1
Re: [PATCH] block: Fix BB.root changing across bdrv_next()
Posted by Hanna Reitz 2 years, 1 month ago
On 01.03.22 18:39, Hanna Reitz wrote:
> bdrv_next() has no guarantee that its caller has stopped all block graph
> operations; for example, bdrv_flush_all() does not.
>
> The latter can actually provoke such operations, because its
> bdrv_flush() call, which runs a coroutine (bdrv_co_flush()), may run
> this coroutine in a different AioContext than the main one, and then
> when this coroutine is done and invokes aio_wait_kick(), the monitor may
> get a chance to run and start executing some graph-modifying QMP
> command.
>
> One example of this is when the VM encounters an I/O error on a block
> device and stops, triggering a bdrv_flush_all(), and a blockdev-mirror
> is started simultaneously on a block node in an I/O thread.  When
> bdrv_flush_all() comes to that node[1] and flushes it, the
> aio_wait_kick() at the end of bdrv_co_flush_entry() may cause the
> monitor to process the mirror request, and mirror_start_job() will then
> replace the node by a mirror filter node, before bdrv_flush_all()
> resumes and can invoke bdrv_next() again to continue iterating.
>
> [1] Say there is a BlockBackend on top of the node in question, and so
> bdrv_next() finds that BB and returns the node as the BB's blk_bs().
> bdrv_next() will bdrv_ref() the node such that it remains valid through
> bdrv_flush_all()'s iteration, and drop the reference when it is called
> the next time.
>
> The problem is that bdrv_next() does not record which BDS it retains a
> strong reference to when the BDS is a BB's child, so on the subsequent
> call, it will just invoke blk_bs() again and bdrv_unref() the returned
> node -- but as the example above shows, this node may not be the one
> that was bdrv_ref()-ed before.  This can lead to a use-after-free (for
> the mirror filter node in our example), because this negligent
> bdrv_unref() would steal a strong reference from someone else.
>
> We can solve this problem by always storing the returned (and strongly
> referenced) BDS in BdrvNextIterator.bs.  When we want to drop the strong
> reference to a previously returned BDS, we always drop
> BdrvNextIterator.bs instead of trying to reconstruct by other means
> which BDS we returned last time.

So a week ago, Kevin and I talked about this on IRC, and he was rather 
apprehensive of this approach, because (1) it fixes what is probably a 
high-level problem in one specific low-level place, and (2) it’s not 
even quite clear whether this specific problem is really fixed.

(For (2): even if bdrv_next() itself can cope with graph changes, when 
such a change occurs during bdrv_flush_all(), it isn’t entirely clear 
whether we’ve truly iterated over all nodes and flushed them all.)

I’ve put a more detailed description of what I think is happening step 
by step here: https://bugzilla.redhat.com/show_bug.cgi?id=2058457#c7

So, the question came up whether we shouldn’t put bdrv_flush_all() into 
a drained section (there is a bdrv_drain_all() before, it’s just not a 
section), and ensure that no QMP commands can be executed in drained 
sections.  I fiddled around a bit, just wanting to send an extremely 
rough RFC to see whether the direction I’d be going in made any sense at 
all, but I’m not really making progress:

Basically, I wanted to introduce an rwlock for QMP request processing: 
the dispatcher takes it as a writer, and drained sections take it as a 
reader.  That doesn’t work so well, though, because when a QMP command 
(i.e. while the rwlock is held for a writer) itself uses drain (and so 
tries to take the lock as a reader), there’s a deadlock.  I don’t 
really have a good idea how to resolve this, because if running QMP 
commands during drains is forbidden, then, well, a QMP command cannot 
use drain.  We’d need some way to identify that the drain originates 
from the currently running QMP command, and allow that one, but I 
really don’t know how.
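
To make the deadlock concrete, here is a minimal self-contained sketch 
(plain pthreads, invented names, not actual QEMU code) of the scheme I 
was trying:

    #include <pthread.h>

    /* Drained sections take the lock shared, the QMP dispatcher takes
     * it exclusive, so no command runs while any drain is active. */
    static pthread_rwlock_t qmp_drain_lock = PTHREAD_RWLOCK_INITIALIZER;

    static void drained_section_begin(void)
    {
        pthread_rwlock_rdlock(&qmp_drain_lock);
        /* ... the actual drain ... */
    }

    static void drained_section_end(void)
    {
        pthread_rwlock_unlock(&qmp_drain_lock);
    }

    static void qmp_dispatch_one(void (*handler)(void))
    {
        pthread_rwlock_wrlock(&qmp_drain_lock);
        handler();
        pthread_rwlock_unlock(&qmp_drain_lock);
    }

    static void some_qmp_command(void)
    {
        /* The dispatcher already holds the lock as a writer, so this
         * rdlock() is never granted: the command deadlocks on its own
         * drain. */
        drained_section_begin();
        /* ... modify the block graph ... */
        drained_section_end();
    }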


While looking into the STOP behavior further, something else came up.  
Summarizing part of my comment linked above, part of the problem is that 
vm_stop() runs, and concurrently the guest device requests another STOP; 
therefore, the next main loop iteration will try to STOP again.  That 
seems to be why the monitor makes some progress during one main loop 
iteration, and an unfortunate sequence of polling in the next iteration 
can then lead to the monitor processing a QMP command.

So other things to consider could be whether we should ensure that the 
monitor is not in danger of processing QMP requests when 
main_loop_should_exit() runs, e.g. by polling until it’s back at the top 
of its main loop (in monitor_qmp_dispatcher_co()).

Or whether we could make qemu_system_vmstop_request() prevent double 
stop requests, by forbidding STOP requests while a previous STOP request 
has not yet been completely processed (i.e. accepting another one only 
once the VM has actually been stopped).
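
As a rough sketch of what I mean (hypothetical flag, not what 
runstate.c actually looks like):

    /* Ignore further stop requests until the first one has actually
     * stopped the VM. */
    static bool vmstop_pending;

    int qemu_system_vmstop_request(RunState state)
    {
        if (qatomic_xchg(&vmstop_pending, true)) {
            /* A stop request is already in flight; drop this one. */
            return 0;
        }
        /* ... record the requested state and kick the main loop, as
         * the function does today ... */
        return 0;
    }

    /* ... and once do_vm_stop() has actually stopped the VM, it would
     * reset the flag: qatomic_set(&vmstop_pending, false); */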

The simplest way to fix at least this very issue I’m seeing would be 
to just pull do_vm_stop()’s bdrv_drain_all()+bdrv_flush_all() calls 
into its conditional path, i.e. to only run them if the VM wasn’t 
already stopped.  (I don’t think we need to flush here if the VM was 
already stopped before.  Might be wrong, though.)  I doubt that’s a 
great solution, but it’d be simple at least.
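
Roughly (reconstructing do_vm_stop() from memory, so the exact shape 
may be off), that would be:

    static int do_vm_stop(RunState state, bool send_stop)
    {
        int ret = 0;

        if (runstate_is_running()) {
            runstate_set(state);
            cpu_disable_ticks();
            pause_all_vcpus();
            vm_state_notify(0, state);
            if (send_stop) {
                qapi_event_send_stop();
            }
            /* Drain and flush only when we actually transition from
             * running to stopped, instead of unconditionally: */
            bdrv_drain_all();
            ret = bdrv_flush_all();
        }

        return ret;
    }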

Hanna