From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, peter.maydell@linaro.org, qemu-devel@nongnu.org
Date: Mon, 25 Feb 2019 16:19:48 +0100
Message-Id: <20190225152053.15976-7-kwolf@redhat.com>
In-Reply-To: <20190225152053.15976-1-kwolf@redhat.com>
References: <20190225152053.15976-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 06/71] block: don't set the same context

From: Denis Plotnikov

Add a fast path when setting the aio context: if the node already uses
the requested context, return early and skip the context setting
routine entirely. This also prevents an assertion failure caused by a
cyclic walk of the child BDSes, which arises from the registered aio
walking notifiers:

Call stack:
 0  __GI_raise
 1  __GI_abort
 2  __assert_fail_base
 3  __GI___assert_fail
 4  bdrv_detach_aio_context (bs=0x55f54d65c000)        <<<
 5  bdrv_detach_aio_context (bs=0x55f54fc8a800)
 6  bdrv_set_aio_context (bs=0x55f54fc8a800, ...)
 7  block_job_attached_aio_context
 8  bdrv_attach_aio_context (bs=0x55f54d65c000, ...)   <<<
 9  bdrv_set_aio_context (bs=0x55f54d65c000)
10  blk_set_aio_context
11  virtio_blk_data_plane_stop
12  virtio_bus_stop_ioeventfd
13  virtio_vmstate_change
14  vm_state_notify (running=0, state=RUN_STATE_SHUTDOWN)
15  do_vm_stop (state=RUN_STATE_SHUTDOWN, send_stop=true)
16  vm_stop (state=RUN_STATE_SHUTDOWN)
17  main_loop_should_exit
18  main_loop
19  main

This can happen when a "new" context is attached to the VM disk BDS.
On attaching the new context, the corresponding handler is called for
each of the aio notifiers registered on the VM disk BDS. Among those
handlers is block_job_attached_aio_context, which sets the new context
for the block job BDS. In doing so, the old context is detached from
all children of the block job BDS, and one of those children is the VM
disk BDS itself, serving as the backing store for the block job BDS,
even though the VM disk BDS is what initiated the process in the first
place. Since the VM disk BDS is protected from double processing in
recursive calls by the walking_aio_notifiers flag, the assertion fires.

Signed-off-by: Denis Plotnikov
Signed-off-by: Kevin Wolf
---
 block.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/block.c b/block.c
index 4ad0e90d7e..0c12632661 100644
--- a/block.c
+++ b/block.c
@@ -5265,6 +5265,10 @@ void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
 {
     AioContext *ctx = bdrv_get_aio_context(bs);
 
+    if (ctx == new_context) {
+        return;
+    }
+
     aio_disable_external(ctx);
     bdrv_parent_drained_begin(bs, NULL, false);
     bdrv_drain(bs); /* ensure there are no in-flight requests */
-- 
2.20.1
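
For readers who want to see the recursion in isolation, here is a
minimal standalone model of the cycle described above. It is not QEMU
code: Node, node_set_context() and on_context_attached() are
hypothetical names; only the shape of the early-return guard mirrors
the patch.

/*
 * Simplified model of the cyclic notifier walk.  The disk node and the
 * block job node each reach the other through a "notifier", just as
 * block_job_attached_aio_context walks back to the VM disk BDS.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct Node {
    int ctx;                 /* current "AioContext", modelled as an int */
    bool walking_notifiers;  /* models bs->walking_aio_notifiers */
    struct Node *other;      /* peer reached through a notifier */
} Node;

static void node_set_context(Node *n, int new_ctx);

/* Hypothetical notifier: attaching a context to one node propagates it
 * to its peer, which in the real bug walks back to the original node. */
static void on_context_attached(Node *n, int new_ctx)
{
    if (n->other) {
        node_set_context(n->other, new_ctx);
    }
}

static void node_set_context(Node *n, int new_ctx)
{
    /* The fast path added by this patch: without it, the cyclic
     * notifier walk re-enters this node and the assertion fires. */
    if (n->ctx == new_ctx) {
        return;
    }

    assert(!n->walking_notifiers);  /* models the QEMU assertion */
    n->walking_notifiers = true;

    n->ctx = new_ctx;
    on_context_attached(n, new_ctx);

    n->walking_notifiers = false;
}

int main(void)
{
    Node disk = { .ctx = 0 }, job = { .ctx = 0 };
    disk.other = &job;
    job.other = &disk;           /* the cycle: the job's child is the disk */

    node_set_context(&disk, 1);  /* recursion stops at the fast path */
    printf("disk ctx=%d job ctx=%d\n", disk.ctx, job.ctx);
    return 0;
}

Compiled and run, this prints "disk ctx=1 job ctx=1"; deleting the
fast-path check makes the assert abort on re-entry, reproducing the
failure mode above in miniature.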