From: Fam Zheng <famz@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Fam Zheng, qemu-block@nongnu.org, jcody@redhat.com,
    Max Reitz, Stefan Hajnoczi, pbonzini@redhat.com
Date: Tue, 18 Apr 2017 18:39:48 +0800
Message-Id: <20170418103948.13965-1-famz@redhat.com>
Subject: [Qemu-devel] [PATCH for-2.9-rc5 v3] block: Drain BH in bdrv_drained_begin

During block job completion, nothing prevents
block_job_defer_to_main_loop_bh from being called in a nested
aio_poll(), which is a problem, for example in this code path:

    qmp_block_commit
      commit_active_start
        bdrv_reopen
          bdrv_reopen_multiple
            bdrv_reopen_prepare
              bdrv_flush
                aio_poll
                  aio_bh_poll
                    aio_bh_call
                      block_job_defer_to_main_loop_bh
                        stream_complete
                          bdrv_reopen
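
To make the mechanism concrete, below is a minimal standalone model in
plain C (not QEMU code; the toy_* and job_complete_bh names are made up
for illustration). It shows how a completion that has been deferred as
a bottom half still runs as soon as any nested poll drains pending
bottom halves:

/*
 * Standalone toy model (plain C, no QEMU headers). Names like toy_bh_*
 * are hypothetical; this only illustrates the shape of the problem: a
 * completion deferred as a "bottom half" still runs when a nested poll
 * drains pending bottom halves.
 */
#include <stdbool.h>
#include <stdio.h>

typedef void ToyBHFunc(void *opaque);

typedef struct ToyBH {
    ToyBHFunc *cb;
    void *opaque;
    struct ToyBH *next;
} ToyBH;

static ToyBH *pending_bhs;

static void toy_bh_schedule(ToyBH *bh)
{
    bh->next = pending_bhs;
    pending_bhs = bh;
}

/* Runs every pending bottom half; returns whether it made progress. */
static bool toy_poll(void)
{
    bool progress = false;

    while (pending_bhs) {
        ToyBH *bh = pending_bhs;
        pending_bhs = bh->next;
        bh->cb(bh->opaque);
        progress = true;
    }
    return progress;
}

/* Stands in for the job's deferred completion callback. */
static void job_complete_bh(void *opaque)
{
    bool *job_finished = opaque;
    *job_finished = true;
    printf("job completion ran inside the nested poll!\n");
}

/* Stands in for a caller that waits by polling, like a flush/reopen. */
static void nested_flush(void)
{
    toy_poll();
}

int main(void)
{
    bool job_finished = false;
    ToyBH bh = { .cb = job_complete_bh, .opaque = &job_finished };

    /* The job defers its last step to the main loop as a bottom half. */
    toy_bh_schedule(&bh);

    /* The caller believes the job is quiesced, then polls... */
    nested_flush();

    /* ...and the deferred completion has already run underneath it. */
    printf("job_finished = %d\n", job_finished);
    return 0;
}

In the trace above, the nested poll is the aio_poll() issued under
bdrv_flush(), and the deferred callback is
block_job_defer_to_main_loop_bh.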
block_job_defer_to_main_loop_bh is the last step of the stream job,
which should have been "paused" by the bdrv_drained_begin/end in
bdrv_reopen_multiple, but is not, because it takes the form of a main
loop BH.

Just as block jobs themselves must be paused between drained_begin and
drained_end, the BHs they schedule must be excluded as well. To achieve
this, this patch forces draining the BH in BDRV_POLL_WHILE.

Also, because the BH in question can call bdrv_unref and replace
children, protect @bs carefully to avoid use-after-free.

As a side effect this fixes a hang in block_job_detach_aio_context
during system_reset when a block job is ready:

  #0  0x0000555555aa79f3 in bdrv_drain_recurse
  #1  0x0000555555aa825d in bdrv_drained_begin
  #2  0x0000555555aa8449 in bdrv_drain
  #3  0x0000555555a9c356 in blk_drain
  #4  0x0000555555aa3cfd in mirror_drain
  #5  0x0000555555a66e11 in block_job_detach_aio_context
  #6  0x0000555555a62f4d in bdrv_detach_aio_context
  #7  0x0000555555a63116 in bdrv_set_aio_context
  #8  0x0000555555a9d326 in blk_set_aio_context
  #9  0x00005555557e38da in virtio_blk_data_plane_stop
  #10 0x00005555559f9d5f in virtio_bus_stop_ioeventfd
  #11 0x00005555559fa49b in virtio_bus_stop_ioeventfd
  #12 0x00005555559f6a18 in virtio_pci_stop_ioeventfd
  #13 0x00005555559f6a18 in virtio_pci_reset
  #14 0x00005555559139a9 in qdev_reset_one
  #15 0x0000555555916738 in qbus_walk_children
  #16 0x0000555555913318 in qdev_walk_children
  #17 0x0000555555916738 in qbus_walk_children
  #18 0x00005555559168ca in qemu_devices_reset
  #19 0x000055555581fcbb in pc_machine_reset
  #20 0x00005555558a4d96 in qemu_system_reset
  #21 0x000055555577157a in main_loop_should_exit
  #22 0x000055555577157a in main_loop
  #23 0x000055555577157a in main

The rationale is that the loop in block_job_detach_aio_context cannot
make any progress in pausing/completing the job, because bs->in_flight
is 0, so bdrv_drain doesn't process the block_job_defer_to_main_loop
BH. With this patch, it does.

Reported-by: Jeff Cody
Signed-off-by: Fam Zheng
Reviewed-by: Kevin Wolf
Tested-by: Jeff Cody
---
v3: Report all aio_poll() progress as waited_. [Kevin]
v2: Do the BH poll in BDRV_POLL_WHILE to cover bdrv_drain_all_begin
    as well. [Kevin]
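
For clarity, here is a standalone sketch of the loop shape the patch
introduces, not the real BDRV_POLL_WHILE macro; fake_poll(), in_flight
and pending_bh are stand-ins for aio_poll(), bs->in_flight and a
deferred completion. The point is that polling continues while either
the condition holds or the previous poll made progress, so a pending BH
is drained even after in_flight reaches 0, and any progress is reported
as waited_:

/*
 * Minimal runnable sketch (plain C, not the QEMU macro) of the loop
 * shape used in the "same AioContext" branch of BDRV_POLL_WHILE after
 * this patch. fake_poll(), in_flight and pending_bh are stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

static int in_flight = 2;      /* stand-in for bs->in_flight */
static int pending_bh = 1;     /* a deferred completion waiting to run */

/* Stand-in for aio_poll(ctx, blocking): returns true on progress. */
static bool fake_poll(bool blocking)
{
    (void)blocking;
    if (in_flight > 0) {
        in_flight--;
        return true;
    }
    if (pending_bh > 0) {
        pending_bh--;           /* the bottom half finally runs */
        return true;
    }
    return false;
}

static bool poll_while_drained(void)
{
    bool waited = false;
    bool busy = true;

    /* The old form, while (in_flight > 0) { fake_poll(true); }, would
     * have returned with pending_bh still set. */
    while (in_flight > 0 || busy) {
        /* Block only while the condition holds; otherwise drain BHs. */
        busy = fake_poll(in_flight > 0);
        waited |= (in_flight > 0) || busy;
    }
    return waited;
}

int main(void)
{
    bool waited = poll_while_drained();
    printf("waited=%d in_flight=%d pending_bh=%d\n",
           waited, in_flight, pending_bh);
    return 0;
}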
---
 block/io.c            | 10 +++++++---
 include/block/block.h | 22 ++++++++++++++--------
 2 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/block/io.c b/block/io.c
index 8706bfa..a472157 100644
--- a/block/io.c
+++ b/block/io.c
@@ -158,7 +158,7 @@ bool bdrv_requests_pending(BlockDriverState *bs)
 
 static bool bdrv_drain_recurse(BlockDriverState *bs)
 {
-    BdrvChild *child;
+    BdrvChild *child, *tmp;
     bool waited;
 
     waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
@@ -167,8 +167,12 @@ static bool bdrv_drain_recurse(BlockDriverState *bs)
         bs->drv->bdrv_drain(bs);
     }
 
-    QLIST_FOREACH(child, &bs->children, next) {
-        waited |= bdrv_drain_recurse(child->bs);
+    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
+        BlockDriverState *bs = child->bs;
+        assert(bs->refcnt > 0);
+        bdrv_ref(bs);
+        waited |= bdrv_drain_recurse(bs);
+        bdrv_unref(bs);
     }
 
     return waited;
diff --git a/include/block/block.h b/include/block/block.h
index 97d4330..5ddc0cf 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -381,12 +381,13 @@ void bdrv_drain_all(void);
 
 #define BDRV_POLL_WHILE(bs, cond) ({                       \
     bool waited_ = false;                                  \
+    bool busy_ = true;                                     \
     BlockDriverState *bs_ = (bs);                          \
     AioContext *ctx_ = bdrv_get_aio_context(bs_);          \
     if (aio_context_in_iothread(ctx_)) {                   \
-        while ((cond)) {                                   \
-            aio_poll(ctx_, true);                          \
-            waited_ = true;                                \
+        while ((cond) || busy_) {                          \
+            busy_ = aio_poll(ctx_, (cond));                \
+            waited_ |= !!(cond) | busy_;                   \
         }                                                  \
     } else {                                               \
         assert(qemu_get_current_aio_context() ==           \
@@ -398,11 +399,16 @@ void bdrv_drain_all(void);
          */                                                \
         assert(!bs_->wakeup);                              \
         bs_->wakeup = true;                                \
-        while ((cond)) {                                   \
-            aio_context_release(ctx_);                     \
-            aio_poll(qemu_get_aio_context(), true);        \
-            aio_context_acquire(ctx_);                     \
-            waited_ = true;                                \
+        while (busy_) {                                    \
+            if ((cond)) {                                  \
+                waited_ = busy_ = true;                    \
+                aio_context_release(ctx_);                 \
+                aio_poll(qemu_get_aio_context(), true);    \
+                aio_context_acquire(ctx_);                 \
+            } else {                                       \
+                busy_ = aio_poll(ctx_, false);             \
+                waited_ |= busy_;                          \
+            }                                              \
         }                                                  \
         bs_->wakeup = false;                               \
     }                                                      \
-- 
2.9.3
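
For reference, the block/io.c hunk above follows a general pattern:
when walking a child list whose entries can be unreferenced or replaced
by callbacks that run during the visit, pre-fetch the next pointer (the
QLIST_FOREACH_SAFE part) and hold a reference across the recursive
call. The standalone sketch below uses hypothetical Node, node_ref and
node_unref names rather than QEMU's BlockDriverState and bdrv_ref:

/*
 * Standalone sketch (plain C, not QEMU code) of "ref the child while
 * visiting it". drain_one() stands in for work that may run callbacks
 * which drop references or detach children.
 */
#include <assert.h>
#include <stdlib.h>

typedef struct Node {
    int refcnt;
    struct Node *children;   /* head of child list */
    struct Node *next;       /* sibling link */
} Node;

static void node_ref(Node *n)
{
    n->refcnt++;
}

static void node_unref(Node *n)
{
    assert(n->refcnt > 0);
    if (--n->refcnt == 0) {
        free(n);
    }
}

/* Per-node drain work; callbacks may run here in the real code. */
static void drain_one(Node *n)
{
    (void)n;
}

static void drain_recurse(Node *n)
{
    Node *child = n->children;

    drain_one(n);

    while (child) {
        Node *next = child->next;  /* read before the visit (the _SAFE part) */

        assert(child->refcnt > 0);
        node_ref(child);           /* keep the child alive... */
        drain_recurse(child);      /* ...even if the visit drops references */
        node_unref(child);

        child = next;
    }
}

int main(void)
{
    Node *parent = calloc(1, sizeof(*parent));
    Node *child = calloc(1, sizeof(*child));

    parent->refcnt = child->refcnt = 1;
    parent->children = child;

    drain_recurse(parent);

    node_unref(child);
    node_unref(parent);
    return 0;
}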