From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Daniel P. Berrangé, Juan Quintela, Julia Suvorova, Kevin Wolf,
 xen-devel@lists.xenproject.org, eesposit@redhat.com, Richard Henderson,
 Fam Zheng, "Michael S. Tsirkin", Coiby Xu, David Woodhouse,
 Marcel Apfelbaum, Peter Lieven, Paul Durrant, Stefan Hajnoczi,
 "Richard W.M. Jones", qemu-block@nongnu.org, Stefano Garzarella,
 Anthony Perard, Stefan Weil, Xie Yongji, Paolo Bonzini, Aarushi Mehta,
 Philippe Mathieu-Daudé, Eduardo Habkost, Stefano Stabellini,
 Hanna Reitz, Ronnie Sahlberg
Subject: [PATCH v4 01/20] block-backend: split blk_do_set_aio_context()
Date: Tue, 25 Apr 2023 13:26:57 -0400
Message-Id: <20230425172716.1033562-2-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

blk_set_aio_context() is not fully transactional because
blk_do_set_aio_context() updates blk->ctx outside the transaction. Most
of the time this goes unnoticed, but a BlockDevOps.drained_end()
callback that invokes blk_get_aio_context() fails
assert(ctx == blk->ctx). This happens because blk->ctx is only assigned
after BlockDevOps.drained_end() is called and we're in an intermediate
state where BlockDriverState nodes already have the new context while
the BlockBackend still has the old context.

Making blk_set_aio_context() fully transactional solves this assertion
failure because the BlockBackend's context is updated as part of the
transaction (before BlockDevOps.drained_end() is called).

Split blk_do_set_aio_context() in order to solve this assertion failure.
This helper function actually serves two different purposes:
1. It drives blk_set_aio_context().
2. It responds to BdrvChildClass->change_aio_ctx().

Get rid of the helper function. Do #1 inside blk_set_aio_context() and
do #2 inside blk_root_set_aio_ctx_commit(). This simplifies the code.

The only drawback of the fully transactional approach is that
blk_set_aio_context() must contend with blk_root_set_aio_ctx_commit()
being invoked as part of the AioContext change propagation. This can be
solved by temporarily setting blk->allow_aio_context_change to true.

Future patches call blk_get_aio_context() from
BlockDevOps->drained_end(), so this patch will become necessary.
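The temporary allow_aio_context_change save/restore described above can be sketched in isolation. This is a hypothetical miniature, not QEMU code: the Mini* names are invented, and a plain int stands in for an AioContext pointer; it only illustrates the flag dance around the propagated change:

```c
#include <stdbool.h>

/* Hypothetical miniature of a BlockBackend, for illustration only. */
typedef struct {
    int ctx;                        /* stands in for AioContext * */
    bool allow_aio_context_change;
} MiniBlk;

/* Stand-in for the propagated change (bdrv_try_change_aio_context):
 * it refuses unless the BlockBackend currently permits a change. */
static int mini_try_change(MiniBlk *blk, int new_ctx)
{
    if (!blk->allow_aio_context_change) {
        return -1;                  /* rejected, like -EPERM */
    }
    blk->ctx = new_ctx;             /* updated as part of the "transaction" */
    return 0;
}

/* Miniature of blk_set_aio_context(): temporarily allow the context
 * change so the propagation does not reject this BlockBackend itself,
 * then restore the previous setting. */
int mini_blk_set_aio_context(MiniBlk *blk, int new_ctx)
{
    bool old_allow = blk->allow_aio_context_change;
    int ret;

    blk->allow_aio_context_change = true;
    ret = mini_try_change(blk, new_ctx);
    blk->allow_aio_context_change = old_allow;
    return ret;
}
```

The key property is that the flag is restored to its previous value whether or not the change succeeds, so callers that normally forbid context changes keep that protection afterwards.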
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/block-backend.c | 71 +++++++++++++++++--------------------------
 1 file changed, 28 insertions(+), 43 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 5566ea059d..ffd1d66f7d 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -2199,52 +2199,31 @@ static AioContext *blk_aiocb_get_aio_context(BlockAIOCB *acb)
     return blk_get_aio_context(blk_acb->blk);
 }
 
-static int blk_do_set_aio_context(BlockBackend *blk, AioContext *new_context,
-                                  bool update_root_node, Error **errp)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
-    int ret;
-
-    if (bs) {
-        bdrv_ref(bs);
-
-        if (update_root_node) {
-            /*
-             * update_root_node MUST be false for blk_root_set_aio_ctx_commit(),
-             * as we are already in the commit function of a transaction.
-             */
-            ret = bdrv_try_change_aio_context(bs, new_context, blk->root, errp);
-            if (ret < 0) {
-                bdrv_unref(bs);
-                return ret;
-            }
-        }
-        /*
-         * Make blk->ctx consistent with the root node before we invoke any
-         * other operations like drain that might inquire blk->ctx
-         */
-        blk->ctx = new_context;
-        if (tgm->throttle_state) {
-            bdrv_drained_begin(bs);
-            throttle_group_detach_aio_context(tgm);
-            throttle_group_attach_aio_context(tgm, new_context);
-            bdrv_drained_end(bs);
-        }
-
-        bdrv_unref(bs);
-    } else {
-        blk->ctx = new_context;
-    }
-
-    return 0;
-}
-
 int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
                         Error **errp)
 {
+    bool old_allow_change;
+    BlockDriverState *bs = blk_bs(blk);
+    int ret;
+
     GLOBAL_STATE_CODE();
-    return blk_do_set_aio_context(blk, new_context, true, errp);
+
+    if (!bs) {
+        blk->ctx = new_context;
+        return 0;
+    }
+
+    bdrv_ref(bs);
+
+    old_allow_change = blk->allow_aio_context_change;
+    blk->allow_aio_context_change = true;
+
+    ret = bdrv_try_change_aio_context(bs, new_context, NULL, errp);
+
+    blk->allow_aio_context_change = old_allow_change;
+
+    bdrv_unref(bs);
+    return ret;
 }
 
 typedef struct BdrvStateBlkRootContext {
@@ -2256,8 +2235,14 @@ static void blk_root_set_aio_ctx_commit(void *opaque)
 {
     BdrvStateBlkRootContext *s = opaque;
    BlockBackend *blk = s->blk;
+    AioContext *new_context = s->new_ctx;
+    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
 
-    blk_do_set_aio_context(blk, s->new_ctx, false, &error_abort);
+    blk->ctx = new_context;
+    if (tgm->throttle_state) {
+        throttle_group_detach_aio_context(tgm);
+        throttle_group_attach_aio_context(tgm, new_context);
+    }
 }
 
 static TransactionActionDrv set_blk_root_context = {
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 02/20] hw/qdev: introduce qdev_is_realized() helper
Date: Tue, 25 Apr 2023 13:26:58 -0400
Message-Id: <20230425172716.1033562-3-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

Add a helper function to check whether the device is realized without
requiring the Big QEMU Lock. The next patch adds a second caller. The
goal is to avoid spreading DeviceState field accesses throughout the
code.

Suggested-by: Philippe Mathieu-Daudé
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/qdev-core.h | 17 ++++++++++++++---
 hw/scsi/scsi-bus.c     |  3 +--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
index bd50ad5ee1..4d734cf35e 100644
--- a/include/hw/qdev-core.h
+++ b/include/hw/qdev-core.h
@@ -1,6 +1,7 @@
 #ifndef QDEV_CORE_H
 #define QDEV_CORE_H
 
+#include "qemu/atomic.h"
 #include "qemu/queue.h"
 #include "qemu/bitmap.h"
 #include "qemu/rcu.h"
@@ -164,9 +165,6 @@ struct NamedClockList {
 
 /**
  * DeviceState:
- * @realized: Indicates whether the device has been fully constructed.
- *            When accessed outside big qemu lock, must be accessed with
- *            qatomic_load_acquire()
  * @reset: ResettableState for the device; handled by Resettable interface.
  *
  * This structure should not be accessed directly. We declare it here
@@ -332,6 +330,19 @@ DeviceState *qdev_new(const char *name);
  */
 DeviceState *qdev_try_new(const char *name);
 
+/**
+ * qdev_is_realized:
+ * @dev: The device to check.
+ *
+ * May be called outside big qemu lock.
+ *
+ * Returns: %true% if the device has been fully constructed, %false% otherwise.
+ */
+static inline bool qdev_is_realized(DeviceState *dev)
+{
+    return qatomic_load_acquire(&dev->realized);
+}
+
 /**
  * qdev_realize: Realize @dev.
  * @dev: device to realize
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index c97176110c..07275fb631 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -60,8 +60,7 @@ static SCSIDevice *do_scsi_device_find(SCSIBus *bus,
      * the user access the device.
      */
 
-    if (retval && !include_unrealized &&
-        !qatomic_load_acquire(&retval->qdev.realized)) {
+    if (retval && !include_unrealized && !qdev_is_realized(&retval->qdev)) {
         retval = NULL;
     }
 
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 03/20] virtio-scsi: avoid race between unplug and transport event
Date: Tue, 25 Apr 2023 13:26:59 -0400
Message-Id: <20230425172716.1033562-4-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

Only report a transport reset event to the guest after the SCSIDevice
has been unrealized by qdev_simple_device_unplug_cb().

qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
to false so that scsi_device_find/get() no longer see it.

scsi_target_emulate_report_luns() also needs to be updated to filter out
SCSIDevices that are unrealized.

These changes ensure that the guest driver does not see the SCSIDevice
that's being unplugged if it responds very quickly to the transport
reset event.

Reviewed-by: Paolo Bonzini
Reviewed-by: Michael S. Tsirkin
Reviewed-by: Daniil Tatianin
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/scsi-bus.c    |  3 ++-
 hw/scsi/virtio-scsi.c | 18 +++++++++---------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 07275fb631..64d7311757 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -486,7 +486,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
             DeviceState *qdev = kid->child;
             SCSIDevice *dev = SCSI_DEVICE(qdev);
 
-            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
+            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
+                qdev_is_realized(&dev->qdev)) {
                 store_lun(tmp, dev->lun);
                 g_byte_array_append(buf, tmp, 8);
                 len += 8;
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..000961446c 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1063,15 +1063,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     SCSIDevice *sd = SCSI_DEVICE(dev);
     AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
-        virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_REMOVED);
-        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
-        virtio_scsi_release(s);
-    }
-
     aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
     aio_enable_external(ctx);
@@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
         virtio_scsi_release(s);
     }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        virtio_scsi_acquire(s);
+        virtio_scsi_push_event(s, sd,
+                               VIRTIO_SCSI_T_TRANSPORT_RESET,
+                               VIRTIO_SCSI_EVT_RESET_REMOVED);
+        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
+        virtio_scsi_release(s);
+    }
 }
 
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org, xen-devel@lists.xenproject.org
Subject: [PATCH v4 04/20] virtio-scsi: stop using aio_disable_external() during unplug
Date: Tue, 25 Apr 2023 13:27:00 -0400
Message-Id: <20230425172716.1033562-5-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>

This patch is part of an effort to remove the aio_disable_external() API
because it does not fit in a multi-queue block layer world where many
AioContexts may be submitting requests to the same disk.

The SCSI emulation code is already in good shape to stop using
aio_disable_external(). It was only used by commit 9c5aad84da1c
("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
disk") to ensure that virtio_scsi_hotunplug() works while the guest
driver is submitting I/O.

Ensure virtio_scsi_hotunplug() is safe as follows:

1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
   device_set_realized() calls qatomic_set(&dev->realized, false) so
   that future scsi_device_get() calls return NULL because they exclude
   SCSIDevices with realized=false.

   That means virtio-scsi will reject new I/O requests to this
   SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
   virtio_scsi_hotunplug() is still executing. We are protected against
   new requests!

2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
   that in-flight requests are cancelled synchronously. This ensures
   that no in-flight requests remain once qdev_simple_device_unplug_cb()
   returns.

Thanks to these two conditions we don't need aio_disable_external()
anymore.

Cc: Zhengui Li
Reviewed-by: Paolo Bonzini
Reviewed-by: Daniil Tatianin
Signed-off-by: Stefan Hajnoczi
---
 hw/scsi/scsi-disk.c   | 1 +
 hw/scsi/virtio-scsi.c | 3 ---
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 97c9b1c8cd..e01bd84541 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2522,6 +2522,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
 
 static void scsi_unrealize(SCSIDevice *dev)
 {
+    scsi_device_purge_requests(dev, SENSE_CODE(RESET));
     del_boot_device_lchs(&dev->qdev, NULL);
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 000961446c..a02f9233ec 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1061,11 +1061,8 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
-    aio_enable_external(ctx);
 
     if (s->ctx) {
         virtio_scsi_acquire(s);
-- 
2.39.2
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org, xen-devel@lists.xenproject.org
Subject: [PATCH v4 05/20] util/vhost-user-server: rename refcount to in_flight counter
Date: Tue, 25 Apr 2023 13:27:01 -0400
Message-Id: <20230425172716.1033562-6-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>

The VuServer object has a refcount field and ref/unref APIs. The name is
confusing because it's actually an in-flight request counter instead of
a refcount. Normally a refcount destroys the object upon reaching zero.
The VuServer counter is used to wake up the vhost-user coroutine when
there are no more requests.

Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.

Reviewed-by: Paolo Bonzini
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Stefan Hajnoczi
---
 include/qemu/vhost-user-server.h     |  6 +++---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 util/vhost-user-server.c             | 14 +++++++-------
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 25c72433ca..bc0ac9ddb6 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -41,7 +41,7 @@ typedef struct {
     const VuDevIface *vu_iface;
 
     /* Protected by ctx lock */
-    unsigned int refcount;
+    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_ref(VuServer *server);
-void vhost_user_server_unref(VuServer *server);
+void vhost_user_server_inc_in_flight(VuServer *server);
+void vhost_user_server_dec_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index e56b92f2e2..841acb36e3 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -50,7 +50,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
     free(req);
 }
 
-/* Called with server refcount increased, must decrease before returning */
+/*
+ * Called with server in_flight counter increased, must decrease before
+ * returning.
+ */
 static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
     VuBlkReq *req = opaque;
@@ -68,12 +71,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
                                  in_num, out_num);
     if (in_len < 0) {
         free(req);
-        vhost_user_server_unref(server);
+        vhost_user_server_dec_in_flight(server);
         return;
     }
 
     vu_blk_req_complete(req, in_len);
-    vhost_user_server_unref(server);
+    vhost_user_server_dec_in_flight(server);
 }
 
 static void vu_blk_process_vq(VuDev *vu_dev, int idx)
@@ -95,7 +98,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
         Coroutine *co =
             qemu_coroutine_create(vu_blk_virtio_process_req, req);
 
-        vhost_user_server_ref(server);
+        vhost_user_server_inc_in_flight(server);
         qemu_coroutine_enter(co);
     }
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5b6216069c..1622f8cfb3 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
     error_report("vu_panic: %s", buf);
 }
 
-void vhost_user_server_ref(VuServer *server)
+void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->refcount++;
+    server->in_flight++;
 }
 
-void vhost_user_server_unref(VuServer *server)
+void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->refcount--;
-    if (server->wait_idle && !server->refcount) {
+    server->in_flight--;
+    if (server->wait_idle && !server->in_flight) {
         aio_co_wake(server->co_trip);
     }
 }
@@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->refcount) {
+    if (server->in_flight) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->refcount == 0);
+    assert(server->in_flight == 0);
 
     vu_deinit(vu_dev);
 
-- 
2.39.2
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org, xen-devel@lists.xenproject.org
Subject: [PATCH v4 06/20] block/export: wait for vhost-user-blk requests when draining
Date: Tue, 25 Apr 2023 13:27:02 -0400
Message-Id: <20230425172716.1033562-7-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv/pwritev()/etc returns it wakes the
bdrv_drained_begin() thread but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.

(This example is theoretical, I came across this while reading the code
and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter needs to be atomic.

Signed-off-by: Stefan Hajnoczi
---
 include/qemu/vhost-user-server.h     |  4 +++-
 block/export/vhost-user-blk-server.c | 16 ++++++++++++++++
 util/vhost-user-server.c             | 14 ++++++++++----
 3 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index bc0ac9ddb6..b1c1cda886 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -40,8 +40,9 @@ typedef struct {
     int max_queues;
     const VuDevIface *vu_iface;
 
+    unsigned int in_flight; /* atomic */
+
     /* Protected by ctx lock */
-    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
 
 void vhost_user_server_inc_in_flight(VuServer *server);
 void vhost_user_server_dec_in_flight(VuServer *server);
+bool vhost_user_server_has_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 841acb36e3..092b86aae4 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -272,7 +272,20 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
+/*
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ *
+ * Called with vexp->export.ctx acquired.
+ */
+static bool vu_blk_drained_poll(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    return vhost_user_server_has_in_flight(&vexp->vu_server);
+}
+
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_poll  = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
 
@@ -314,6 +327,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
                              logical_block_size, num_queues);
 
+    blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);
 
@@ -323,6 +337,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                                  num_queues, &vu_blk_iface, errp)) {
         blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                         blk_aio_detach, vexp);
+        blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
     }
@@ -336,6 +351,7 @@ static void vu_blk_exp_delete(BlockExport *exp)
 
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vexp);
+    blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
 
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 1622f8cfb3..2e6b640050 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }
 
 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }
 
+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn vu_message_read(VuDev *vu_dev, int conn_fd,
                                          VhostUserMsg *vmsg)
 {
-- 
2.39.2
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org, xen-devel@lists.xenproject.org
Subject: [PATCH v4 07/20] block/export: stop using is_external in vhost-user-blk server
Date: Tue, 25 Apr 2023 13:27:03 -0400
Message-Id: <20230425172716.1033562-8-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContexts may be processing I/O, not
just one.

Switch to BlockDevOps->drained_begin/end() callbacks.

Signed-off-by: Stefan Hajnoczi
---
 block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
 util/vhost-user-server.c             | 10 +++----
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 092b86aae4..d20f69cd74 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -208,22 +208,6 @@ static const VuDevIface vu_blk_iface = {
     .process_msg           = vu_blk_process_msg,
 };
 
-static void blk_aio_attached(AioContext *ctx, void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
-}
-
-static void blk_aio_detach(void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
-    vexp->export.ctx = NULL;
-}
-
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
                          struct virtio_blk_config *config,
@@ -272,6 +256,25 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -285,6 +288,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end   = vu_blk_drained_end,
     .drained_poll  = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
@@ -328,15 +333,11 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              logical_block_size, num_queues);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
-    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                 vexp);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
 
     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
                                  num_queues, &vu_blk_iface, errp)) {
-        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
-                                        blk_aio_detach, vexp);
         blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
     }
@@ -349,8 +350,6 @@ static void vu_blk_exp_delete(BlockExport *exp)
 {
     VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                    vexp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 2e6b640050..332aea9306 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
     VuFdWatch *vu_fd_watch;
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+        aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                            NULL, NULL, NULL, NULL, vu_fd_watch);
     }
 
-- 
2.39.2
From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Daniel P. Berrangé, Juan Quintela, Julia Suvorova, Kevin Wolf,
 xen-devel@lists.xenproject.org, eesposit@redhat.com, Richard Henderson,
 Fam Zheng, "Michael S. Tsirkin", Coiby Xu, David Woodhouse,
 Marcel Apfelbaum, Peter Lieven, Paul Durrant, Stefan Hajnoczi,
 "Richard W.M. Jones", qemu-block@nongnu.org, Stefano Garzarella,
 Anthony Perard, Stefan Weil, Xie Yongji, Paolo Bonzini, Aarushi Mehta,
 Philippe Mathieu-Daudé, Eduardo Habkost, Stefano Stabellini,
 Hanna Reitz, Ronnie Sahlberg
Subject: [PATCH v4 08/20] hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Date: Tue, 25 Apr 2023 13:27:04 -0400
Message-Id: <20230425172716.1033562-9-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

There is no need to suspend activity between aio_disable_external() and
aio_enable_external(), which is mainly used for the block layer's drain
operation.

This is part of ongoing work to remove the aio_disable_external() API.
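The is_external gating that this series is removing can be illustrated with a small standalone model. This is hypothetical sketch code, not QEMU's actual implementation: handlers registered with is_external=true are skipped while an aio_disable_external()/aio_enable_external() section is active, while is_external=false handlers keep running.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the is_external mechanism (hypothetical, not QEMU code):
 * "external" handlers are suppressed while the disable counter is
 * non-zero, which is how drain used to block device I/O.
 */
typedef struct {
    int disable_cnt; /* incremented by aio_disable_external() */
} AioContextModel;

typedef struct {
    bool is_external;
    int dispatched; /* how many times the handler actually ran */
} FdHandlerModel;

void aio_disable_external_model(AioContextModel *ctx)
{
    ctx->disable_cnt++;
}

void aio_enable_external_model(AioContextModel *ctx)
{
    assert(ctx->disable_cnt > 0);
    ctx->disable_cnt--;
}

/* Returns true if the handler ran for this fd event */
bool dispatch_fd_event(AioContextModel *ctx, FdHandlerModel *h)
{
    if (h->is_external && ctx->disable_cnt > 0) {
        return false; /* suppressed while externally disabled ("drained") */
    }
    h->dispatched++;
    return true;
}
```

Passing is_external=false, as this patch does for the xenstore event channel fd, makes the handler behave like the `is_external = false` case above: it is never affected by the disable/enable window.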
Reviewed-by: David Woodhouse
Reviewed-by: Paul Durrant
Signed-off-by: Stefan Hajnoczi
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..6e81bc8791 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
                        xen_xenstore_event, NULL, NULL, NULL, s);

    s->impl = xs_impl_create(xen_domid);
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, qemu-block@nongnu.org
Subject: [PATCH v4 09/20] block: add blk_in_drain() API
Date: Tue, 25 Apr 2023 13:27:05 -0400
Message-Id: <20230425172716.1033562-10-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

The BlockBackend quiesce_counter is greater than zero during drained
sections. Add an API to check whether the BlockBackend is in a drained
section. The next patch will use this API.

Signed-off-by: Stefan Hajnoczi
---
 include/sysemu/block-backend-global-state.h | 1 +
 block/block-backend.c                       | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/sysemu/block-backend-global-state.h b/include/sysemu/block-backend-global-state.h
index 2b6d27db7c..ac7cbd6b5e 100644
--- a/include/sysemu/block-backend-global-state.h
+++ b/include/sysemu/block-backend-global-state.h
@@ -78,6 +78,7 @@ void blk_activate(BlockBackend *blk, Error **errp);
 int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags);
 void blk_aio_cancel(BlockAIOCB *acb);
 int blk_commit_all(void);
+bool blk_in_drain(BlockBackend *blk);
 void blk_drain(BlockBackend *blk);
 void blk_drain_all(void);
 void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
diff --git a/block/block-backend.c b/block/block-backend.c
index ffd1d66f7d..42721a3592 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1266,6 +1266,13 @@ blk_check_byte_request(BlockBackend *blk, int64_t offset, int64_t bytes)
     return 0;
 }

+/* Are we currently in a drained section? */
+bool blk_in_drain(BlockBackend *blk)
+{
+    GLOBAL_STATE_CODE(); /* change to IO_OR_GS_CODE(), if necessary */
+    return qatomic_read(&blk->quiesce_counter);
+}
+
 /* To be called between exactly one pair of blk_inc/dec_in_flight() */
 static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, qemu-block@nongnu.org
Subject: [PATCH v4 10/20] block: drain from main loop thread in bdrv_co_yield_to_drain()
Date: Tue, 25 Apr 2023 13:27:06 -0400
Message-Id: <20230425172716.1033562-11-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

For simplicity, always run BlockDevOps .drained_begin/end/poll()
callbacks in the main loop thread. This makes it easier to implement the
callbacks and avoids extra locks.

Move the function pointer declarations from the I/O Code section to the
Global State section in block-backend-common.h.

Signed-off-by: Stefan Hajnoczi
---
 include/sysemu/block-backend-common.h | 25 +++++++++++++------------
 block/io.c                            |  3 ++-
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/include/sysemu/block-backend-common.h b/include/sysemu/block-backend-common.h
index 2391679c56..780cea7305 100644
--- a/include/sysemu/block-backend-common.h
+++ b/include/sysemu/block-backend-common.h
@@ -59,6 +59,19 @@ typedef struct BlockDevOps {
      */
     bool (*is_medium_locked)(void *opaque);

+    /*
+     * Runs when the backend receives a drain request.
+     */
+    void (*drained_begin)(void *opaque);
+    /*
+     * Runs when the backend's last drain request ends.
+     */
+    void (*drained_end)(void *opaque);
+    /*
+     * Is the device still busy?
+     */
+    bool (*drained_poll)(void *opaque);
+
     /*
      * I/O API functions. These functions are thread-safe.
      *
@@ -76,18 +89,6 @@ typedef struct BlockDevOps {
      * Runs when the size changed (e.g. monitor command block_resize)
      */
     void (*resize_cb)(void *opaque);
-    /*
-     * Runs when the backend receives a drain request.
-     */
-    void (*drained_begin)(void *opaque);
-    /*
-     * Runs when the backend's last drain request ends.
-     */
-    void (*drained_end)(void *opaque);
-    /*
-     * Is the device still busy?
-     */
-    bool (*drained_poll)(void *opaque);
 } BlockDevOps;

 /*
diff --git a/block/io.c b/block/io.c
index 2e267a85ab..4f9fe2f808 100644
--- a/block/io.c
+++ b/block/io.c
@@ -335,7 +335,8 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     if (ctx != co_ctx) {
         aio_context_release(ctx);
     }
-    replay_bh_schedule_oneshot_event(ctx, bdrv_co_drain_bh_cb, &data);
+    replay_bh_schedule_oneshot_event(qemu_get_aio_context(),
+                                     bdrv_co_drain_bh_cb, &data);

     qemu_coroutine_yield();
     /* If we are resumed from some other event (such as an aio completion or a
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, qemu-block@nongnu.org
Subject: [PATCH v4 11/20] xen-block: implement BlockDevOps->drained_begin()
Date: Tue, 25 Apr 2023 13:27:07 -0400
Message-Id: <20230425172716.1033562-12-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

Detach event channels during drained sections to stop I/O submission
from the ring. xen-block is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Extend xen_device_set_event_channel_context() to allow ctx=NULL. The
event channel still exists but the event loop does not monitor the file
descriptor. Event channel processing can resume by calling
xen_device_set_event_channel_context() with a non-NULL ctx.

Factor out xen_device_set_event_channel_context() calls in
hw/block/dataplane/xen-block.c into attach/detach helper functions.
Incidentally, these don't require the AioContext lock because
aio_set_fd_handler() is thread-safe.

It's safer to register BlockDevOps after the dataplane instance has been
created. The BlockDevOps .drained_begin/end() callbacks depend on the
dataplane instance, so move the blk_set_dev_ops() call after
xen_block_dataplane_create().
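The drain-driven attach/detach pattern described above can be sketched as a small standalone model. This is hypothetical code, not the actual QEMU implementation: the outermost drained_begin() detaches the device's event channel from the event loop so no new ring requests are picked up, and the matching outermost drained_end() reattaches it.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch (hypothetical, not QEMU code) of drain callbacks driving
 * event-channel attach/detach. Drain sections nest, so the callbacks
 * fire only at the outermost begin/end.
 */
typedef struct {
    bool evtchn_attached; /* is the evtchn fd monitored by the event loop? */
    int quiesce_counter;  /* nesting level of drained sections */
} XenBlockModel;

void drained_begin_cb(XenBlockModel *d)
{
    d->evtchn_attached = false; /* stands in for xen_block_dataplane_detach() */
}

void drained_end_cb(XenBlockModel *d)
{
    d->evtchn_attached = true; /* stands in for xen_block_dataplane_attach() */
}

/* blk_drain()-style entry: callback fires only on the outermost level */
void drain_begin(XenBlockModel *d)
{
    if (d->quiesce_counter++ == 0) {
        drained_begin_cb(d);
    }
}

void drain_end(XenBlockModel *d)
{
    assert(d->quiesce_counter > 0);
    if (--d->quiesce_counter == 0) {
        drained_end_cb(d);
    }
}
```

While detached, the guest can still signal the event channel; the events are simply not serviced until the fd is monitored again, which mirrors the ctx=NULL behaviour of xen_device_set_event_channel_context() in this patch.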
Signed-off-by: Stefan Hajnoczi
---
 hw/block/dataplane/xen-block.h |  2 ++
 hw/block/dataplane/xen-block.c | 42 +++++++++++++++++++++++++---------
 hw/block/xen-block.c           | 24 ++++++++++++++++---
 hw/xen/xen-bus.c               |  7 ++++--
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/hw/block/dataplane/xen-block.h b/hw/block/dataplane/xen-block.h
index 76dcd51c3d..7b8e9df09f 100644
--- a/hw/block/dataplane/xen-block.h
+++ b/hw/block/dataplane/xen-block.h
@@ -26,5 +26,7 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                unsigned int protocol,
                                Error **errp);
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane);

 #endif /* HW_BLOCK_DATAPLANE_XEN_BLOCK_H */
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..02e0fd6115 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -663,6 +663,30 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
     g_free(dataplane);
 }

+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         NULL, &error_abort);
+}
+
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         dataplane->ctx, &error_abort);
+}
+
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
@@ -673,13 +697,11 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)

     xendev = dataplane->xendev;

-    aio_context_acquire(dataplane->ctx);
-    if (dataplane->event_channel) {
-        /* Only reason for failure is a NULL channel */
-        xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                             qemu_get_aio_context(),
-                                             &error_abort);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_detach(dataplane);
     }
+
+    aio_context_acquire(dataplane->ctx);
     /* Xen doesn't have multiple users for nodes, so this can't fail */
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
@@ -818,11 +840,9 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL);
     aio_context_release(old_context);

-    /* Only reason for failure is a NULL channel */
-    aio_context_acquire(dataplane->ctx);
-    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                         dataplane->ctx, &error_abort);
-    aio_context_release(dataplane->ctx);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_attach(dataplane);
+    }

     return;

diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index f5a744589d..f099914831 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -189,8 +189,26 @@ static void xen_block_resize_cb(void *opaque)
     xen_device_backend_printf(xendev, "state", "%u", state);
 }

+/* Suspend request handling */
+static void xen_block_drained_begin(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_detach(blockdev->dataplane);
+}
+
+/* Resume request handling */
+static void xen_block_drained_end(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_attach(blockdev->dataplane);
+}
+
 static const BlockDevOps xen_block_dev_ops = {
-    .resize_cb = xen_block_resize_cb,
+    .resize_cb     = xen_block_resize_cb,
+    .drained_begin = xen_block_drained_begin,
+    .drained_end   = xen_block_drained_end,
 };

 static void xen_block_realize(XenDevice *xendev, Error **errp)
@@ -242,8 +260,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }

-    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
-
     if (conf->discard_granularity == -1) {
         conf->discard_granularity = conf->physical_block_size;
     }
@@ -277,6 +293,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     blockdev->dataplane =
         xen_block_dataplane_create(xendev, blk, conf->logical_block_size,
                                    blockdev->props.iothread);
+
+    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
 }

 static void xen_block_frontend_changed(XenDevice *xendev,
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c59850b1de..b8f408c9ed 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -846,8 +846,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
                        NULL, NULL, NULL, NULL, NULL);

     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
-                       xen_device_event, NULL, xen_device_poll, NULL, channel);
+    if (ctx) {
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
+                           true, xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
+    }
 }

 XenEventChannel *xen_device_bind_event_channel(XenDevice *xendev,
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Daniel P. Berrangé, Juan Quintela, Julia Suvorova, Kevin Wolf,
 xen-devel@lists.xenproject.org, eesposit@redhat.com, Richard Henderson,
 Fam Zheng, "Michael S. Tsirkin", Coiby Xu, David Woodhouse,
 Marcel Apfelbaum, Peter Lieven, Paul Durrant, "Richard W.M. Jones",
 qemu-block@nongnu.org, Stefano Garzarella, Anthony Perard, Stefan Weil,
 Xie Yongji, Paolo Bonzini, Aarushi Mehta, Philippe Mathieu-Daudé,
 Eduardo Habkost, Stefano Stabellini, Hanna Reitz, Ronnie Sahlberg
Subject: [PATCH v4 12/20] hw/xen: do not set is_external=true on evtchn fds
Date: Tue, 25 Apr 2023 13:27:08 -0400
Message-Id: <20230425172716.1033562-13-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

is_external=true suspends fd handlers between aio_disable_external() and
aio_enable_external(). The block layer's drain operation uses this
mechanism to prevent new I/O from sneaking in between
bdrv_drained_begin() and bdrv_drained_end().

The previous commit converted the xen-block device to use BlockDevOps
.drained_begin/end() callbacks. It no longer relies on is_external=true,
so it is safe to pass is_external=false.

This is part of ongoing work to remove the aio_disable_external() API.
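[Editor's note: for readers unfamiliar with the drain pattern these patches rely on, here is a standalone sketch of the .drained_begin()/.drained_poll()/.drained_end() contract. This is illustrative code only, not QEMU's implementation — the Device type and function names are invented for the example; in QEMU the handler toggle is done with aio_set_fd_handler() and the callbacks live in BlockDevOps.]

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool handler_enabled;  /* stands in for the registered fd handler */
    int in_flight;         /* requests started but not yet completed */
} Device;

/* .drained_begin(): stop accepting new work */
static void drained_begin(Device *dev)
{
    dev->handler_enabled = false;
}

/* .drained_poll(): the drain section must wait while this returns true */
static bool drained_poll(Device *dev)
{
    return dev->in_flight > 0;
}

/* .drained_end(): resume accepting new work */
static void drained_end(Device *dev)
{
    dev->handler_enabled = true;
}

/* Event delivery: ignored while the handler is suspended */
static bool submit_request(Device *dev)
{
    if (!dev->handler_enabled) {
        return false;  /* no new I/O sneaks into the drained section */
    }
    dev->in_flight++;
    return true;
}

static void complete_request(Device *dev)
{
    dev->in_flight--;
}
```

The point of the pattern is that drain no longer needs a global is_external flag: each device suspends its own event source and reports its own in-flight count.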
Signed-off-by: Stefan Hajnoczi
---
 hw/xen/xen-bus.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index b8f408c9ed..bf256d4da2 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           true, xen_device_event, NULL, xen_device_poll, NULL,
-                           channel);
+                           false, xen_device_event, NULL, xen_device_poll,
+                           NULL, channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v4 13/20] block/export: rewrite vduse-blk drain code
Date: Tue, 25 Apr 2023 13:27:09 -0400
Message-Id: <20230425172716.1033562-14-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

vduse_blk_detach_ctx() waits for in-flight requests using
AIO_WAIT_WHILE(). This is not allowed according to a comment in
bdrv_set_aio_context_commit():

  /*
   * Take the old AioContex when detaching it from bs.
   * At this point, new_context lock is already acquired, and we are now
   * also taking old_context. This is safe as long as bdrv_detach_aio_context
   * does not call AIO_POLL_WHILE().
   */

Use this opportunity to rewrite the drain code in vduse-blk:

- Use the BlockExport refcount so that vduse_blk_exp_delete() is only
  called when there are no more requests in flight.

- Implement .drained_poll() so in-flight request coroutines are stopped
  by the time .bdrv_detach_aio_context() is called.

- Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
  .bdrv_detach_aio_context() constraint violation. It's no longer
  needed due to the previous changes.

- Always handle the VDUSE file descriptor, even in drained sections.
  The VDUSE file descriptor doesn't submit I/O, so it's safe to handle
  it in drained sections. This ensures that the VDUSE kernel code gets
  a fast response.

- Suspend virtqueue fd handlers in .drained_begin() and resume them in
  .drained_end().
This eliminates the need for the aio_set_fd_handler(is_external=true)
flag, which is being removed from QEMU.

This is a long list but splitting it into individual commits would
probably lead to git bisect failures - the changes are all related.

Signed-off-by: Stefan Hajnoczi
---
 block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 39 deletions(-)

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index f7ae44e3ce..35dc8fcf45 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
     VduseDev *dev;
     uint16_t num_queues;
     char *recon_file;
-    unsigned int inflight;
+    unsigned int inflight; /* atomic */
+    bool vqs_started;
 } VduseBlkExport;
 
 typedef struct VduseBlkReq {
@@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
 
 static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
-    vblk_exp->inflight++;
+    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
+        /* Prevent export from being deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_ref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
+    }
 }
 
 static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
 {
-    if (--vblk_exp->inflight == 0) {
+    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
+        /* Wake AIO_WAIT_WHILE() */
         aio_wait_kick();
+
+        /* Now the export can be deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_unref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
 
+    if (!vblk_exp->vqs_started) {
+        return; /* vduse_blk_drained_end() will start vqs later */
+    }
+
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -133,9 +149,14 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
+    int fd = vduse_queue_get_fd(vq);
 
-    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, NULL, NULL, NULL, NULL, NULL);
+    if (fd < 0) {
+        return;
+    }
+
+    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+                       NULL, NULL, NULL, NULL, NULL);
 }
 
 static const VduseOps vduse_blk_ops = {
@@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
 
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
-    int i;
-
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, on_vduse_dev_kick, NULL, NULL, NULL,
+                       false, on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
-                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
-    }
+    /* Virtqueues are handled by vduse_blk_drained_end() */
 }
 
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
-    int i;
-
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd,
-                           true, NULL, NULL, NULL, NULL, NULL);
-    }
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, NULL, NULL, NULL, NULL, NULL);
+                       false, NULL, NULL, NULL, NULL, NULL);
 
-    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
+    /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
 
 
@@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
                             (char *)&config.capacity);
 }
 
+static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
+{
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_disable_queue(vblk_exp->dev, vq);
+    }
+
+    vblk_exp->vqs_started = false;
+}
+
+static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
+{
+    vblk_exp->vqs_started = true;
+
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_enable_queue(vblk_exp->dev, vq);
+    }
+}
+
+static void vduse_blk_drained_begin(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_stop_virtqueues(vblk_exp);
+}
+
+static void vduse_blk_drained_end(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_start_virtqueues(vblk_exp);
+}
+
+static bool vduse_blk_drained_poll(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    return qatomic_read(&vblk_exp->inflight) > 0;
+}
+
 static const BlockDevOps vduse_block_ops = {
-    .resize_cb = vduse_blk_resize,
+    .resize_cb     = vduse_blk_resize,
+    .drained_begin = vduse_blk_drained_begin,
+    .drained_end   = vduse_blk_drained_end,
+    .drained_poll  = vduse_blk_drained_poll,
 };
 
 static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
@@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
     vblk_exp->handler.logical_block_size = logical_block_size;
     vblk_exp->handler.writable = opts->writable;
+    vblk_exp->vqs_started = true;
 
     config.capacity =
             cpu_to_le64(blk_getlength(exp->blk) >> VIRTIO_BLK_SECTOR_BITS);
@@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vblk_exp);
-    blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
 
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->blk, true);
+
     return 0;
 err:
     vduse_dev_destroy(vblk_exp->dev);
@@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
     int ret;
 
+    assert(qatomic_read(&vblk_exp->inflight) == 0);
+
+    vduse_blk_detach_ctx(vblk_exp);
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vblk_exp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
@@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     g_free(vblk_exp->handler.serial);
 }
 
+/* Called with exp->ctx acquired */
 static void vduse_blk_exp_request_shutdown(BlockExport *exp)
 {
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
 
-    aio_context_acquire(vblk_exp->export.ctx);
-    vduse_blk_detach_ctx(vblk_exp);
-    aio_context_acquire(vblk_exp->export.ctx);
+    vduse_blk_stop_virtqueues(vblk_exp);
 }
 
 const BlockExportDriver blk_exp_vduse_blk = {
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v4 14/20] block/export: don't require AioContext lock around blk_exp_ref/unref()
Date: Tue, 25 Apr 2023 13:27:10 -0400
Message-Id: <20230425172716.1033562-15-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

The FUSE export calls blk_exp_ref/unref() without the AioContext lock.
Instead of fixing the FUSE export, adjust blk_exp_ref/unref() so they
work without the AioContext lock. This way it's less error-prone.

Suggested-by: Paolo Bonzini
Signed-off-by: Stefan Hajnoczi
---
 include/block/export.h   |  2 ++
 block/export/export.c    | 13 ++++++-------
 block/export/vduse-blk.c |  4 ----
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/block/export.h b/include/block/export.h
index 7feb02e10d..f2fe0f8078 100644
--- a/include/block/export.h
+++ b/include/block/export.h
@@ -57,6 +57,8 @@ struct BlockExport {
      * Reference count for this block export. This includes strong references
      * both from the owner (qemu-nbd or the monitor) and clients connected to
      * the export.
+     *
+     * Use atomics to access this field.
      */
     int refcount;
 
diff --git a/block/export/export.c b/block/export/export.c
index 28a91c9c42..ddaf8036e5 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -201,11 +201,10 @@ fail:
     return NULL;
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_ref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    exp->refcount++;
+    assert(qatomic_read(&exp->refcount) > 0);
+    qatomic_inc(&exp->refcount);
 }
 
 /* Runs in the main thread */
@@ -227,11 +226,10 @@ static void blk_exp_delete_bh(void *opaque)
     aio_context_release(aio_context);
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_unref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    if (--exp->refcount == 0) {
+    assert(qatomic_read(&exp->refcount) > 0);
+    if (qatomic_fetch_dec(&exp->refcount) == 1) {
         /* Touch the block_exports list only in the main thread */
         aio_bh_schedule_oneshot(qemu_get_aio_context(), blk_exp_delete_bh,
                                 exp);
@@ -339,7 +337,8 @@ void qmp_block_export_del(const char *id,
     if (!has_mode) {
         mode = BLOCK_EXPORT_REMOVE_MODE_SAFE;
     }
-    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE && exp->refcount > 1) {
+    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE &&
+        qatomic_read(&exp->refcount) > 1) {
         error_setg(errp, "export '%s' still in use", exp->id);
         error_append_hint(errp, "Use mode='hard' to force client "
                           "disconnect\n");
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index 35dc8fcf45..611430afda 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -44,9 +44,7 @@ static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
     if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
         /* Prevent export from being deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_ref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -57,9 +55,7 @@ static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
         aio_wait_kick();
 
         /* Now the export can be deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_unref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v4 15/20] block/fuse: do not set is_external=true on FUSE fd
Date: Tue, 25 Apr 2023 13:27:11 -0400
Message-Id: <20230425172716.1033562-16-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

This is part of ongoing work to remove the aio_disable_external() API.

Use BlockDevOps .drained_begin/end/poll() instead of
aio_set_fd_handler(is_external=true).

As a side-effect the FUSE export now follows AioContext changes like
the other export types.
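[Editor's note: the FUSE and vduse-blk patches in this series share one idiom — an atomic in-flight counter where the first request takes a reference on the export and the last completion drops it, so deletion cannot race with request processing. Here is a standalone sketch of that idiom using C11 atomics in place of QEMU's qatomic_*() helpers. The Export type and function names are invented for the example; this is not QEMU code.]

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_uint refcount;   /* starts at 1: the owner's reference */
    atomic_uint in_flight;  /* requests started but not yet completed */
} Export;

static void export_ref(Export *exp)
{
    atomic_fetch_add(&exp->refcount, 1);
}

/* Returns true if this call dropped the last reference */
static bool export_unref(Export *exp)
{
    return atomic_fetch_sub(&exp->refcount, 1) == 1;
}

static void inflight_inc(Export *exp)
{
    /* Only the 0 -> 1 transition takes a reference */
    if (atomic_fetch_add(&exp->in_flight, 1) == 0) {
        export_ref(exp); /* prevent export deletion while busy */
    }
}

/* Returns true if the export should now be deleted */
static bool inflight_dec(Export *exp)
{
    /* Only the 1 -> 0 transition drops the reference; in QEMU this is
     * also where aio_wait_kick() wakes AIO_WAIT_WHILE() pollers */
    if (atomic_fetch_sub(&exp->in_flight, 1) == 1) {
        return export_unref(exp);
    }
    return false;
}
```

Because only the 0→1 and 1→0 transitions touch the refcount, concurrent requests pay one atomic increment each, and the export stays alive for exactly as long as any request is outstanding.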
Signed-off-by: Stefan Hajnoczi
---
 block/export/fuse.c | 58 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 2 deletions(-)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 06fa41079e..65a7f4d723 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -50,6 +50,7 @@ typedef struct FuseExport {

     struct fuse_session *fuse_session;
     struct fuse_buf fuse_buf;
+    unsigned int in_flight; /* atomic */
     bool mounted, fd_handler_set_up;

     char *mountpoint;
@@ -78,6 +79,42 @@ static void read_from_fuse_export(void *opaque);
 static bool is_regular_file(const char *path, Error **errp);


+static void fuse_export_drained_begin(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       NULL, NULL, NULL, NULL, NULL);
+    exp->fd_handler_set_up = false;
+}
+
+static void fuse_export_drained_end(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    exp->common.ctx = blk_get_aio_context(exp->common.blk);
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       read_from_fuse_export, NULL, NULL, NULL, exp);
+    exp->fd_handler_set_up = true;
+}
+
+static bool fuse_export_drained_poll(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    return qatomic_read(&exp->in_flight) > 0;
+}
+
+static const BlockDevOps fuse_export_blk_dev_ops = {
+    .drained_begin = fuse_export_drained_begin,
+    .drained_end   = fuse_export_drained_end,
+    .drained_poll  = fuse_export_drained_poll,
+};
+
 static int fuse_export_create(BlockExport *blk_exp,
                               BlockExportOptions *blk_exp_args,
                               Error **errp)
@@ -101,6 +138,15 @@ static int fuse_export_create(BlockExport *blk_exp,
         }
     }

+    blk_set_dev_ops(exp->common.blk, &fuse_export_blk_dev_ops, exp);
+
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * the FUSE fd handler. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->common.blk, true);
+
     init_exports_table();

     /*
@@ -224,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);

     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), true,
+                       fuse_session_fd(exp->fuse_session), false,
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;

@@ -246,6 +292,8 @@ static void read_from_fuse_export(void *opaque)

     blk_exp_ref(&exp->common);

+    qatomic_inc(&exp->in_flight);
+
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
     } while (ret == -EINTR);
@@ -256,6 +304,10 @@
         fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);

 out:
+    if (qatomic_fetch_dec(&exp->in_flight) == 1) {
+        aio_wait_kick(); /* wake AIO_WAIT_WHILE() */
+    }
+
     blk_exp_unref(&exp->common);
 }

@@ -268,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)

     if (exp->fd_handler_set_up) {
         aio_set_fd_handler(exp->common.ctx,
-                           fuse_session_fd(exp->fuse_session), true,
+                           fuse_session_fd(exp->fuse_session), false,
                            NULL, NULL, NULL, NULL, NULL);
         exp->fd_handler_set_up = false;
     }
@@ -287,6 +339,8 @@ static void fuse_export_delete(BlockExport *blk_exp)
 {
     FuseExport *exp = container_of(blk_exp, FuseExport, common);

+    blk_set_dev_ops(exp->common.blk, NULL, NULL);
+
     if (exp->fuse_session) {
         if (exp->mounted) {
             fuse_session_unmount(exp->fuse_session);
--
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v4 16/20] virtio: make it possible to detach host notifier from any thread
Date: Tue, 25 Apr 2023 13:27:12 -0400
Message-Id: <20230425172716.1033562-17-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

virtio_queue_aio_detach_host_notifier() does two things:
1. It removes the fd handler from the event loop.
2. It processes the virtqueue one last time.

The first step can be performed by any thread and without taking the
AioContext lock.

The second step may need the AioContext lock (depending on the device
implementation) and runs in the thread where request processing takes
place. virtio-blk and virtio-scsi therefore call
virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in
the AioContext.

Scheduling a BH is undesirable for .drained_begin() functions. The next
patch will introduce a .drained_begin() function that needs to call
virtio_queue_aio_detach_host_notifier().
Move the virtqueue processing out to the callers of
virtio_queue_aio_detach_host_notifier() so that the function can be
called from any thread. This is in preparation for the next patch.

Signed-off-by: Stefan Hajnoczi
---
 hw/block/dataplane/virtio-blk.c | 2 ++
 hw/scsi/virtio-scsi-dataplane.c | 9 +++++++++
 2 files changed, 11 insertions(+)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index b28d81737e..bd7cc6e76b 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -286,8 +286,10 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)

     for (i = 0; i < s->conf->num_queues; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
+        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);

         virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }

diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 20bb91766e..81643445ed 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -71,12 +71,21 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
 {
     VirtIOSCSI *s = opaque;
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+    EventNotifier *host_notifier;
     int i;

     virtio_queue_aio_detach_host_notifier(vs->ctrl_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->ctrl_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     virtio_queue_aio_detach_host_notifier(vs->event_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->event_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_detach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        host_notifier = virtio_queue_get_host_notifier(vs->cmd_vqs[i]);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }

--
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v4 17/20] virtio-blk: implement BlockDevOps->drained_begin()
Date: Tue, 25 Apr 2023 13:27:13 -0400
Message-Id: <20230425172716.1033562-18-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

Detach ioeventfds during drained sections to stop I/O submission from
the guest. virtio-blk is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Take extra care to avoid attaching/detaching ioeventfds if the data
plane is started/stopped during a drained section. This should be rare,
but maybe the mirror block job can trigger it.
Signed-off-by: Stefan Hajnoczi
---
 hw/block/dataplane/virtio-blk.c | 17 +++++++++------
 hw/block/virtio-blk.c           | 38 ++++++++++++++++++++++++++++++++-
 2 files changed, 48 insertions(+), 7 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index bd7cc6e76b..d77fc6028c 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -245,13 +245,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     }

     /* Get this show started by hooking up our callbacks */
-    aio_context_acquire(s->ctx);
-    for (i = 0; i < nvqs; i++) {
-        VirtQueue *vq = virtio_get_queue(s->vdev, i);
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_context_acquire(s->ctx);
+        for (i = 0; i < nvqs; i++) {
+            VirtQueue *vq = virtio_get_queue(s->vdev, i);

-        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;

 fail_aio_context:
@@ -317,7 +319,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     trace_virtio_blk_data_plane_stop(s);

     aio_context_acquire(s->ctx);
-    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+    }

     /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
     blk_drain(s->conf->conf.blk);
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index cefca93b31..d8dedc575c 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1109,8 +1109,44 @@ static void virtio_blk_resize(void *opaque)
     aio_bh_schedule_oneshot(qemu_get_aio_context(), virtio_resize_cb, vdev);
 }

+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_blk_drained_begin(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_blk_drained_end(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, ctx);
+    }
+}
+
 static const BlockDevOps virtio_block_ops = {
-    .resize_cb = virtio_blk_resize,
+    .resize_cb     = virtio_blk_resize,
+    .drained_begin = virtio_blk_drained_begin,
+    .drained_end   = virtio_blk_drained_end,
 };

 static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
--
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v4 18/20] virtio-scsi: implement BlockDevOps->drained_begin()
Date: Tue, 25 Apr 2023 13:27:14 -0400
Message-Id: <20230425172716.1033562-19-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

The virtio-scsi Host Bus Adapter provides access to devices on a SCSI
bus. Those SCSI devices typically have a BlockBackend. When the
BlockBackend enters a drained section, the SCSI device must temporarily
stop submitting new I/O requests.

Implement this behavior by temporarily stopping virtio-scsi virtqueue
processing when one of the SCSI devices enters a drained section. The
new scsi_device_drained_begin() API allows scsi-disk to message the
virtio-scsi HBA.

scsi_device_drained_begin() uses a drain counter so that multiple SCSI
devices can have overlapping drained sections. The HBA only sees one
pair of .drained_begin/end() calls.

After this commit, virtio-scsi no longer depends on hw/virtio's
ioeventfd aio_set_event_notifier(is_external=true). This commit is a
step towards removing the aio_disable_external() API.
Signed-off-by: Stefan Hajnoczi --- include/hw/scsi/scsi.h | 14 ++++++++++++ hw/scsi/scsi-bus.c | 40 +++++++++++++++++++++++++++++++++ hw/scsi/scsi-disk.c | 27 +++++++++++++++++----- hw/scsi/virtio-scsi-dataplane.c | 22 ++++++++++-------- hw/scsi/virtio-scsi.c | 38 +++++++++++++++++++++++++++++++ hw/scsi/trace-events | 2 ++ 6 files changed, 129 insertions(+), 14 deletions(-) diff --git a/include/hw/scsi/scsi.h b/include/hw/scsi/scsi.h index 6f23a7a73e..e2bb1a2fbf 100644 --- a/include/hw/scsi/scsi.h +++ b/include/hw/scsi/scsi.h @@ -133,6 +133,16 @@ struct SCSIBusInfo { void (*save_request)(QEMUFile *f, SCSIRequest *req); void *(*load_request)(QEMUFile *f, SCSIRequest *req); void (*free_request)(SCSIBus *bus, void *priv); + + /* + * Temporarily stop submitting new requests between drained_begin() and + * drained_end(). Called from the main loop thread with the BQL held. + * + * Implement these callbacks if request processing is triggered by a f= ile + * descriptor like an EventNotifier. Otherwise set them to NULL. 
+ */ + void (*drained_begin)(SCSIBus *bus); + void (*drained_end)(SCSIBus *bus); }; =20 #define TYPE_SCSI_BUS "SCSI" @@ -144,6 +154,8 @@ struct SCSIBus { =20 SCSISense unit_attention; const SCSIBusInfo *info; + + int drain_count; /* protected by BQL */ }; =20 /** @@ -213,6 +225,8 @@ void scsi_req_cancel_complete(SCSIRequest *req); void scsi_req_cancel(SCSIRequest *req); void scsi_req_cancel_async(SCSIRequest *req, Notifier *notifier); void scsi_req_retry(SCSIRequest *req); +void scsi_device_drained_begin(SCSIDevice *sdev); +void scsi_device_drained_end(SCSIDevice *sdev); void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense); void scsi_device_set_ua(SCSIDevice *sdev, SCSISense sense); void scsi_device_report_change(SCSIDevice *dev, SCSISense sense); diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c index 64d7311757..b571fdf895 100644 --- a/hw/scsi/scsi-bus.c +++ b/hw/scsi/scsi-bus.c @@ -1668,6 +1668,46 @@ void scsi_device_purge_requests(SCSIDevice *sdev, SC= SISense sense) scsi_device_set_ua(sdev, sense); } =20 +void scsi_device_drained_begin(SCSIDevice *sdev) +{ + SCSIBus *bus =3D DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus); + if (!bus) { + return; + } + + assert(qemu_get_current_aio_context() =3D=3D qemu_get_aio_context()); + assert(bus->drain_count < INT_MAX); + + /* + * Multiple BlockBackends can be on a SCSIBus and each may begin/end + * draining at any time. Keep a counter so HBAs only see begin/end onc= e. 
+ */
+    if (bus->drain_count++ == 0) {
+        trace_scsi_bus_drained_begin(bus, sdev);
+        if (bus->info->drained_begin) {
+            bus->info->drained_begin(bus);
+        }
+    }
+}
+
+void scsi_device_drained_end(SCSIDevice *sdev)
+{
+    SCSIBus *bus = DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus);
+    if (!bus) {
+        return;
+    }
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bus->drain_count > 0);
+
+    if (bus->drain_count-- == 1) {
+        trace_scsi_bus_drained_end(bus, sdev);
+        if (bus->info->drained_end) {
+            bus->info->drained_end(bus);
+        }
+    }
+}
+
 static char *scsibus_get_dev_path(DeviceState *dev)
 {
     SCSIDevice *d = SCSI_DEVICE(dev);
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index e01bd84541..2249087d6a 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2360,6 +2360,20 @@ static void scsi_disk_reset(DeviceState *dev)
     s->qdev.scsi_version = s->qdev.default_scsi_version;
 }
 
+static void scsi_disk_drained_begin(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_begin(&s->qdev);
+}
+
+static void scsi_disk_drained_end(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_end(&s->qdev);
+}
+
 static void scsi_disk_resize_cb(void *opaque)
 {
     SCSIDiskState *s = opaque;
@@ -2414,16 +2428,19 @@ static bool scsi_cd_is_medium_locked(void *opaque)
 }
 
 static const BlockDevOps scsi_disk_removable_block_ops = {
-    .change_media_cb = scsi_cd_change_media_cb,
+    .change_media_cb  = scsi_cd_change_media_cb,
+    .drained_begin    = scsi_disk_drained_begin,
+    .drained_end      = scsi_disk_drained_end,
     .eject_request_cb = scsi_cd_eject_request_cb,
-    .is_tray_open = scsi_cd_is_tray_open,
     .is_medium_locked = scsi_cd_is_medium_locked,
-
-    .resize_cb = scsi_disk_resize_cb,
+    .is_tray_open     = scsi_cd_is_tray_open,
+    .resize_cb        = scsi_disk_resize_cb,
 };
 
 static const BlockDevOps scsi_disk_block_ops = {
-    .resize_cb = scsi_disk_resize_cb,
+    .drained_begin = scsi_disk_drained_begin,
+    .drained_end   = scsi_disk_drained_end,
+    .resize_cb     = scsi_disk_resize_cb,
 };
 
 static void scsi_disk_unit_attention_reported(SCSIDevice *dev)
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 81643445ed..1060038e13 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -153,14 +153,16 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
     s->dataplane_starting = false;
     s->dataplane_started = true;
 
-    aio_context_acquire(s->ctx);
-    virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
-    virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
+    if (s->bus.drain_count == 0) {
+        aio_context_acquire(s->ctx);
+        virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
+        virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
 
-    for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        for (i = 0; i < vs->conf.num_queues; i++) {
+            virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
 fail_host_notifiers:
@@ -206,9 +208,11 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
     }
     s->dataplane_stopping = true;
 
-    aio_context_acquire(s->ctx);
-    aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
-    aio_context_release(s->ctx);
+    if (s->bus.drain_count == 0) {
+        aio_context_acquire(s->ctx);
+        aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
+        aio_context_release(s->ctx);
+    }
 
     blk_drain_all(); /* ensure there are no in-flight requests */
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index a02f9233ec..eba1e84dac 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1081,6 +1081,42 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     }
 }
 
+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_scsi_drained_begin(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_scsi_drained_end(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+    }
+}
+
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .tcq = true,
     .max_channel = VIRTIO_SCSI_MAX_CHANNEL,
@@ -1095,6 +1131,8 @@ static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .get_sg_list = virtio_scsi_get_sg_list,
     .save_request = virtio_scsi_save_request,
     .load_request = virtio_scsi_load_request,
+    .drained_begin = virtio_scsi_drained_begin,
+    .drained_end = virtio_scsi_drained_end,
 };
 
 void virtio_scsi_common_realize(DeviceState *dev,
diff --git a/hw/scsi/trace-events b/hw/scsi/trace-events
index ab238293f0..bdd4e2c7c7 100644
--- a/hw/scsi/trace-events
+++ b/hw/scsi/trace-events
@@ -6,6 +6,8 @@ scsi_req_cancel(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_data(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_data_canceled(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_dequeue(int target, int lun, int tag) "target %d lun %d tag %d"
+scsi_bus_drained_begin(void *bus, void *sdev) "bus %p sdev %p"
+scsi_bus_drained_end(void
*bus, void *sdev) "bus %p sdev %p"
 scsi_req_continue(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_continue_canceled(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_parsed(int target, int lun, int tag, int cmd, int mode, int xfer) "target %d lun %d tag %d command %d dir %d length %d"
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Daniel P. Berrangé, Juan Quintela, Julia Suvorova, Kevin Wolf, xen-devel@lists.xenproject.org, eesposit@redhat.com, Richard Henderson, Fam Zheng, "Michael S. Tsirkin", Coiby Xu, David Woodhouse, Marcel Apfelbaum, Peter Lieven, Paul Durrant, Stefan Hajnoczi, "Richard W.M. Jones", qemu-block@nongnu.org, Stefano Garzarella, Anthony Perard, Stefan Weil, Xie Yongji, Paolo Bonzini, Aarushi Mehta, Philippe Mathieu-Daudé, Eduardo Habkost, Stefano Stabellini, Hanna Reitz, Ronnie Sahlberg
Subject: [PATCH v4 19/20] virtio: do not set is_external=true on host notifiers
Date: Tue, 25 Apr 2023 13:27:15 -0400
Message-Id: <20230425172716.1033562-20-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

Host notifiers can now use is_external=false since virtio-blk and
virtio-scsi no longer rely on is_external=true for drained sections.

Signed-off-by: Stefan Hajnoczi
---
 hw/virtio/virtio.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 272d930721..9cdad7e550 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read, NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-
    aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
-- 
2.39.2

From nobody Sat May 18 06:04:07 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH v4 20/20] aio: remove aio_disable_external() API
Date: Tue, 25 Apr 2023 13:27:16 -0400
Message-Id: <20230425172716.1033562-21-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

All callers now pass is_external=false to aio_set_fd_handler() and
aio_set_event_notifier().
The aio_disable_external() API that temporarily disables fd handlers that
were registered is_external=true is therefore dead code.

Remove aio_disable_external(), aio_enable_external(), and the is_external
arguments to aio_set_fd_handler() and aio_set_event_notifier().

The entire test-fdmon-epoll test is removed because its sole purpose was
testing aio_disable_external().

Parts of this patch were generated using the following coccinelle
(https://coccinelle.lip6.fr/) semantic patch:

    @@
    expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
    @@
    - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
    + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)

    @@
    expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
    @@
    - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
    + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)

Reviewed-by: Juan Quintela
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Stefan Hajnoczi
---
 include/block/aio.h           | 57 ---------------------------
 util/aio-posix.h              |  1 -
 block.c                       |  7 ----
 block/blkio.c                 | 15 +++----
 block/curl.c                  | 10 ++---
 block/export/fuse.c           |  8 ++--
 block/export/vduse-blk.c      | 10 ++---
 block/io.c                    |  2 -
 block/io_uring.c              |  4 +-
 block/iscsi.c                 |  3 +-
 block/linux-aio.c             |  4 +-
 block/nfs.c                   |  5 +--
 block/nvme.c                  |  8 ++--
 block/ssh.c                   |  4 +-
 block/win32-aio.c             |  6 +--
 hw/i386/kvm/xen_xenstore.c    |  2 +-
 hw/virtio/virtio.c            |  6 +--
 hw/xen/xen-bus.c              |  8 ++--
 io/channel-command.c          |  6 +--
 io/channel-file.c             |  3 +-
 io/channel-socket.c           |  3 +-
 migration/rdma.c              | 16 ++++----
 tests/unit/test-aio.c         | 27 +------------
 tests/unit/test-bdrv-drain.c  |  1 -
 tests/unit/test-fdmon-epoll.c | 73 -----------------------------------
 util/aio-posix.c              | 20 +++-------
 util/aio-win32.c              |  8 +---
 util/async.c                  |  3 +-
 util/fdmon-epoll.c            | 18 +++------
 util/fdmon-io_uring.c         |  8 +---
 util/fdmon-poll.c             |  3 +-
 util/main-loop.c              |  7 ++--
 util/qemu-coroutine-io.c      |  7 ++--
 util/vhost-user-server.c      | 11 +++---
 tests/unit/meson.build        |  3 --
 35 files changed, 82 insertions(+), 295 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

diff --git a/include/block/aio.h b/include/block/aio.h
index 543717f294..bb38f0753f 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -231,8 +231,6 @@ struct AioContext {
      */
     QEMUTimerListGroup tlg;
 
-    int external_disable_cnt;
-
     /* Number of AioHandlers without .io_poll() */
     int poll_disable_cnt;
 
@@ -475,7 +473,6 @@ bool aio_poll(AioContext *ctx, bool blocking);
  */
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -491,7 +488,6 @@ void aio_set_fd_handler(AioContext *ctx,
  */
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready);
@@ -620,59 +616,6 @@ static inline void aio_timer_init(AioContext *ctx,
  */
 int64_t aio_compute_timeout(AioContext *ctx);
 
-/**
- * aio_disable_external:
- * @ctx: the aio context
- *
- * Disable the further processing of external clients.
- */
-static inline void aio_disable_external(AioContext *ctx)
-{
-    qatomic_inc(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_enable_external:
- * @ctx: the aio context
- *
- * Enable the processing of external clients.
- */
-static inline void aio_enable_external(AioContext *ctx)
-{
-    int old;
-
-    old = qatomic_fetch_dec(&ctx->external_disable_cnt);
-    assert(old > 0);
-    if (old == 1) {
-        /* Kick event loop so it re-arms file descriptors */
-        aio_notify(ctx);
-    }
-}
-
-/**
- * aio_external_disabled:
- * @ctx: the aio context
- *
- * Return true if the external clients are disabled.
- */
-static inline bool aio_external_disabled(AioContext *ctx)
-{
-    return qatomic_read(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_node_check:
- * @ctx: the aio context
- * @is_external: Whether or not the checked node is an external event source.
- *
- * Check if the node's is_external flag is okay to be polled by the ctx at this
- * moment. True means green light.
- */
-static inline bool aio_node_check(AioContext *ctx, bool is_external)
-{
-    return !is_external || !qatomic_read(&ctx->external_disable_cnt);
-}
-
 /**
  * aio_co_schedule:
  * @ctx: the aio context
diff --git a/util/aio-posix.h b/util/aio-posix.h
index 80b927c7f4..4264c518be 100644
--- a/util/aio-posix.h
+++ b/util/aio-posix.h
@@ -38,7 +38,6 @@ struct AioHandler {
 #endif
     int64_t poll_idle_timeout; /* when to stop userspace polling */
     bool poll_ready; /* has polling detected an event? */
-    bool is_external;
 };
 
 /* Add a handler to a ready list */
diff --git a/block.c b/block.c
index d79a52ca74..608c99a219 100644
--- a/block.c
+++ b/block.c
@@ -7268,9 +7268,6 @@ static void bdrv_detach_aio_context(BlockDriverState *bs)
         bs->drv->bdrv_detach_aio_context(bs);
     }
 
-    if (bs->quiesce_counter) {
-        aio_enable_external(bs->aio_context);
-    }
     bs->aio_context = NULL;
 }
 
@@ -7280,10 +7277,6 @@ static void bdrv_attach_aio_context(BlockDriverState *bs,
     BdrvAioNotifier *ban, *ban_tmp;
     GLOBAL_STATE_CODE();
 
-    if (bs->quiesce_counter) {
-        aio_disable_external(new_context);
-    }
-
     bs->aio_context = new_context;
 
     if (bs->drv && bs->drv->bdrv_attach_aio_context) {
diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..72117fa005 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -306,23 +306,18 @@ static void blkio_attach_aio_context(BlockDriverState *bs,
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(new_context,
-                       s->completion_fd,
-                       false,
-                       blkio_completion_fd_read,
-                       NULL,
+    aio_set_fd_handler(new_context, s->completion_fd,
+                       blkio_completion_fd_read, NULL,
                        blkio_completion_fd_poll,
-                       blkio_completion_fd_poll_ready,
-                       bs);
+                       blkio_completion_fd_poll_ready, bs);
 }
 
 static void blkio_detach_aio_context(BlockDriverState *bs)
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(bdrv_get_aio_context(bs),
-                       s->completion_fd,
-                       false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(bdrv_get_aio_context(bs), s->completion_fd, NULL, NULL,
+                       NULL, NULL, NULL);
 }
 
 /* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
diff --git a/block/curl.c b/block/curl.c
index 8bb39a134e..0fc42d03d7 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -132,7 +132,7 @@ static gboolean curl_drop_socket(void *key, void *value, void *opaque)
     CURLSocket *socket = value;
     BDRVCURLState *s = socket->s;
 
-    aio_set_fd_handler(s->aio_context, socket->fd, false,
+    aio_set_fd_handler(s->aio_context, socket->fd,
                        NULL, NULL, NULL, NULL, NULL);
     return true;
 }
@@ -180,20 +180,20 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     trace_curl_sock_cb(action, (int)fd);
     switch (action) {
     case CURL_POLL_IN:
-        aio_set_fd_handler(s->aio_context, fd, false,
+        aio_set_fd_handler(s->aio_context, fd,
                            curl_multi_do, NULL, NULL, NULL, socket);
         break;
     case CURL_POLL_OUT:
-        aio_set_fd_handler(s->aio_context, fd, false,
+        aio_set_fd_handler(s->aio_context, fd,
                            NULL, curl_multi_do, NULL, NULL, socket);
         break;
     case CURL_POLL_INOUT:
-        aio_set_fd_handler(s->aio_context, fd, false,
+        aio_set_fd_handler(s->aio_context, fd,
                            curl_multi_do, curl_multi_do, NULL, NULL, socket);
         break;
     case CURL_POLL_REMOVE:
-        aio_set_fd_handler(s->aio_context, fd, false,
+        aio_set_fd_handler(s->aio_context, fd,
                            NULL, NULL, NULL, NULL, NULL);
         break;
     }
diff --git a/block/export/fuse.c b/block/export/fuse.c
index 65a7f4d723..5c75c9407e 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -84,7 +84,7 @@ static void fuse_export_drained_begin(void *opaque)
     FuseExport *exp = opaque;
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        NULL, NULL, NULL, NULL, NULL);
     exp->fd_handler_set_up = false;
 }
@@ -97,7 +97,7 @@ static void fuse_export_drained_end(void *opaque)
     exp->common.ctx = blk_get_aio_context(exp->common.blk);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 }
@@ -270,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -320,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
     if (exp->fd_handler_set_up) {
         aio_set_fd_handler(exp->common.ctx,
-                           fuse_session_fd(exp->fuse_session), false,
+                           fuse_session_fd(exp->fuse_session),
                            NULL, NULL, NULL, NULL, NULL);
         exp->fd_handler_set_up = false;
     }
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index 611430afda..048bdcbfb6 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -137,7 +137,7 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
     }
 
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -151,7 +151,7 @@ static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
         return;
     }
 
-    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+    aio_set_fd_handler(vblk_exp->export.ctx, fd,
                        NULL, NULL, NULL, NULL, NULL);
 }
 
@@ -170,7 +170,7 @@ static void on_vduse_dev_kick(void *opaque)
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, on_vduse_dev_kick, NULL, NULL, NULL,
+                       on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
     /* Virtqueues are handled by vduse_blk_drained_end() */
@@ -179,7 +179,7 @@ static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
 
     /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
@@ -364,7 +364,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev),
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
diff --git a/block/io.c b/block/io.c
index 4f9fe2f808..9affddb3a0 100644
--- a/block/io.c
+++ b/block/io.c
@@ -361,7 +361,6 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
 
     /* Stop things in parent-to-child order */
     if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
-        aio_disable_external(bdrv_get_aio_context(bs));
         bdrv_parent_drained_begin(bs, parent);
         if (bs->drv && bs->drv->bdrv_drain_begin) {
             bs->drv->bdrv_drain_begin(bs);
@@ -414,7 +413,6 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
             bs->drv->bdrv_drain_end(bs);
         }
         bdrv_parent_drained_end(bs, parent);
-        aio_enable_external(bdrv_get_aio_context(bs));
     }
 }
 
diff --git a/block/io_uring.c b/block/io_uring.c
index 973e15d876..3a07215ded 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -399,7 +399,7 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs, LuringState *s, int fd,
 
 void
luring_detach_aio_context(LuringState *s, AioContext *old_context)
 {
-    aio_set_fd_handler(old_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(old_context, s->ring.ring_fd,
                        NULL, NULL, NULL, NULL, s);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
@@ -409,7 +409,7 @@ void luring_attach_aio_context(LuringState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_luring_completion_bh, s);
-    aio_set_fd_handler(s->aio_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(s->aio_context, s->ring.ring_fd,
                        qemu_luring_completion_cb, NULL,
                        qemu_luring_poll_cb, qemu_luring_poll_ready, s);
 }
diff --git a/block/iscsi.c b/block/iscsi.c
index 9fc0bed90b..34f97ab646 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -363,7 +363,6 @@ iscsi_set_events(IscsiLun *iscsilun)
 
     if (ev != iscsilun->events) {
         aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsi),
-                           false,
                            (ev & POLLIN) ? iscsi_process_read : NULL,
                            (ev & POLLOUT) ? iscsi_process_write : NULL,
                            NULL, NULL,
@@ -1540,7 +1539,7 @@ static void iscsi_detach_aio_context(BlockDriverState *bs)
     IscsiLun *iscsilun = bs->opaque;
 
     aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsilun->iscsi),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     iscsilun->events = 0;
 
     if (iscsilun->nop_timer) {
diff --git a/block/linux-aio.c b/block/linux-aio.c
index d2cfb7f523..d4a9e21a11 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -438,7 +438,7 @@ int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
 
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &s->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &s->e, NULL, NULL, NULL);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
 }
@@ -447,7 +447,7 @@ void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
-    aio_set_event_notifier(new_context, &s->e, false,
+    aio_set_event_notifier(new_context, &s->e,
                            qemu_laio_completion_cb,
                            qemu_laio_poll_cb,
                            qemu_laio_poll_ready);
diff --git a/block/nfs.c b/block/nfs.c
index 006045d71a..8f89ece69f 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -195,7 +195,6 @@ static void nfs_set_events(NFSClient *client)
     int ev = nfs_which_events(client->context);
     if (ev != client->events) {
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false,
                            (ev & POLLIN) ? nfs_process_read : NULL,
                            (ev & POLLOUT) ? nfs_process_write : NULL,
                            NULL, NULL, client);
@@ -373,7 +372,7 @@ static void nfs_detach_aio_context(BlockDriverState *bs)
     NFSClient *client = bs->opaque;
 
     aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     client->events = 0;
 }
 
@@ -391,7 +390,7 @@ static void nfs_client_close(NFSClient *client)
     if (client->context) {
         qemu_mutex_lock(&client->mutex);
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false, NULL, NULL, NULL, NULL, NULL);
+                           NULL, NULL, NULL, NULL, NULL);
         qemu_mutex_unlock(&client->mutex);
         if (client->fh) {
             nfs_close(client->context, client->fh);
diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..17937d398d 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -862,7 +862,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     }
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     if (!nvme_identify(bs, namespace, errp)) {
@@ -948,7 +948,7 @@ static void nvme_close(BlockDriverState *bs)
     g_free(s->queues);
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
     event_notifier_cleanup(&s->irq_notifier[MSIX_SHARED_IRQ_IDX]);
     qemu_vfio_pci_unmap_bar(s->vfio, 0, s->bar0_wo_map, 0,
                            sizeof(NvmeBar) + NVME_DOORBELL_SIZE);
@@ -1546,7 +1546,7 @@ static void nvme_detach_aio_context(BlockDriverState *bs)
 
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
 }
 
 static void nvme_attach_aio_context(BlockDriverState *bs,
@@ -1556,7 +1556,7 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
 
     s->aio_context = new_context;
     aio_set_event_notifier(new_context, &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     for (unsigned i = 0; i < s->queue_count; i++) {
diff --git a/block/ssh.c b/block/ssh.c
index b3b3352075..2748253d4a 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -1019,7 +1019,7 @@ static void restart_coroutine(void *opaque)
     AioContext *ctx = bdrv_get_aio_context(bs);
 
     trace_ssh_restart_coroutine(restart->co);
-    aio_set_fd_handler(ctx, s->sock, false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(ctx, s->sock, NULL, NULL, NULL, NULL, NULL);
 
     aio_co_wake(restart->co);
 }
@@ -1049,7 +1049,7 @@ static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
     trace_ssh_co_yield(s->sock, rd_handler, wr_handler);
 
     aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock,
-                       false, rd_handler, wr_handler, NULL, NULL, &restart);
+                       rd_handler, wr_handler, NULL, NULL, &restart);
     qemu_coroutine_yield();
     trace_ssh_co_yield_back(s->sock);
 }
diff --git a/block/win32-aio.c b/block/win32-aio.c
index ee87d6048f..6327861e1d 100644
--- a/block/win32-aio.c
+++ b/block/win32-aio.c
@@ -174,7 +174,7 @@ int win32_aio_attach(QEMUWin32AIOState *aio, HANDLE hfile)
 void win32_aio_detach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &aio->e, NULL, NULL, NULL);
     aio->aio_ctx = NULL;
 }
 
@@ -182,8 +182,8 @@ void win32_aio_attach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *new_context)
 {
     aio->aio_ctx = new_context;
-    aio_set_event_notifier(new_context, &aio->e, false,
-                           win32_aio_completion_cb, NULL, NULL);
+    aio_set_event_notifier(new_context, &aio->e, win32_aio_completion_cb,
+                           NULL, NULL);
 }
 
 QEMUWin32AIOState *win32_aio_init(void)
diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 6e81bc8791..0b189c6ab8 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh),
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 9cdad7e550..d48e240c37 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read, NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index bf256d4da2..1e08cf027a 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           false, xen_device_event, NULL, xen_device_poll,
-                           NULL, channel);
+                           xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
diff --git a/io/channel-command.c b/io/channel-command.c
index e7edd091af..7ed726c802 100644
--- a/io/channel-command.c
+++ b/io/channel-command.c
@@ -337,10 +337,8 @@ static void qio_channel_command_set_aio_fd_handler(QIOChannel *ioc,
     void *opaque)
 {
     QIOChannelCommand *cioc = QIO_CHANNEL_COMMAND(ioc);
-    aio_set_fd_handler(ctx, cioc->readfd, false,
-                       io_read, NULL, NULL, NULL, opaque);
-    aio_set_fd_handler(ctx, cioc->writefd, false,
-                       NULL, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->readfd, io_read, NULL, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->writefd, NULL, io_write, NULL, NULL, opaque);
 }
 
 
diff --git a/io/channel-file.c b/io/channel-file.c
index d76663e6ae..8b5821f452 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -198,8 +198,7 @@ static void qio_channel_file_set_aio_fd_handler(QIOChannel *ioc,
     void *opaque)
 {
     QIOChannelFile *fioc =
QIO_CHANNEL_FILE(ioc); - aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write, - NULL, NULL, opaque); + aio_set_fd_handler(ctx, fioc->fd, io_read, io_write, NULL, NULL, opaqu= e); } =20 static GSource *qio_channel_file_create_watch(QIOChannel *ioc, diff --git a/io/channel-socket.c b/io/channel-socket.c index b0ea7d48b3..d99945ebec 100644 --- a/io/channel-socket.c +++ b/io/channel-socket.c @@ -899,8 +899,7 @@ static void qio_channel_socket_set_aio_fd_handler(QIOCh= annel *ioc, void *opaque) { QIOChannelSocket *sioc =3D QIO_CHANNEL_SOCKET(ioc); - aio_set_fd_handler(ctx, sioc->fd, false, - io_read, io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, sioc->fd, io_read, io_write, NULL, NULL, opaqu= e); } =20 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc, diff --git a/migration/rdma.c b/migration/rdma.c index 0af5e944f0..4149662fc6 100644 --- a/migration/rdma.c +++ b/migration/rdma.c @@ -3105,15 +3105,15 @@ static void qio_channel_rdma_set_aio_fd_handler(QIO= Channel *ioc, { QIOChannelRDMA *rioc =3D QIO_CHANNEL_RDMA(ioc); if (io_read) { - aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, - false, io_read, io_write, NULL, NULL, opaque); - aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, - false, io_read, io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_re= ad, + io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, io_re= ad, + io_write, NULL, NULL, opaque); } else { - aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, - false, io_read, io_write, NULL, NULL, opaque); - aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, - false, io_read, io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_r= ead, + io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_r= ead, + io_write, NULL, NULL, opaque); } } =20 diff --git 
a/tests/unit/test-aio.c b/tests/unit/test-aio.c index 321d7ab01a..519440eed3 100644 --- a/tests/unit/test-aio.c +++ b/tests/unit/test-aio.c @@ -130,7 +130,7 @@ static void *test_acquire_thread(void *opaque) static void set_event_notifier(AioContext *ctx, EventNotifier *notifier, EventNotifierHandler *handler) { - aio_set_event_notifier(ctx, notifier, false, handler, NULL, NULL); + aio_set_event_notifier(ctx, notifier, handler, NULL, NULL); } =20 static void dummy_notifier_read(EventNotifier *n) @@ -383,30 +383,6 @@ static void test_flush_event_notifier(void) event_notifier_cleanup(&data.e); } =20 -static void test_aio_external_client(void) -{ - int i, j; - - for (i =3D 1; i < 3; i++) { - EventNotifierTestData data =3D { .n =3D 0, .active =3D 10, .auto_s= et =3D true }; - event_notifier_init(&data.e, false); - aio_set_event_notifier(ctx, &data.e, true, event_ready_cb, NULL, N= ULL); - event_notifier_set(&data.e); - for (j =3D 0; j < i; j++) { - aio_disable_external(ctx); - } - for (j =3D 0; j < i; j++) { - assert(!aio_poll(ctx, false)); - assert(event_notifier_test_and_clear(&data.e)); - event_notifier_set(&data.e); - aio_enable_external(ctx); - } - assert(aio_poll(ctx, false)); - set_event_notifier(ctx, &data.e, NULL); - event_notifier_cleanup(&data.e); - } -} - static void test_wait_event_notifier_noflush(void) { EventNotifierTestData data =3D { .n =3D 0 }; @@ -935,7 +911,6 @@ int main(int argc, char **argv) g_test_add_func("/aio/event/wait", test_wait_event_notifi= er); g_test_add_func("/aio/event/wait/no-flush-cb", test_wait_event_notifi= er_noflush); g_test_add_func("/aio/event/flush", test_flush_event_notif= ier); - g_test_add_func("/aio/external-client", test_aio_external_clie= nt); g_test_add_func("/aio/timer/schedule", test_timer_schedule); =20 g_test_add_func("/aio/coroutine/queue-chaining", test_queue_chaining); diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c index d9d3807062..5c89169e46 100644 --- a/tests/unit/test-bdrv-drain.c 
+++ b/tests/unit/test-bdrv-drain.c @@ -435,7 +435,6 @@ static void test_graph_change_drain_all(void) =20 g_assert_cmpint(bs_b->quiesce_counter, =3D=3D, 0); g_assert_cmpint(b_s->drain_count, =3D=3D, 0); - g_assert_cmpint(qemu_get_aio_context()->external_disable_cnt, =3D=3D, = 0); =20 bdrv_unref(bs_b); blk_unref(blk_b); diff --git a/tests/unit/test-fdmon-epoll.c b/tests/unit/test-fdmon-epoll.c deleted file mode 100644 index ef5a856d09..0000000000 --- a/tests/unit/test-fdmon-epoll.c +++ /dev/null @@ -1,73 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* - * fdmon-epoll tests - * - * Copyright (c) 2020 Red Hat, Inc. - */ - -#include "qemu/osdep.h" -#include "block/aio.h" -#include "qapi/error.h" -#include "qemu/main-loop.h" - -static AioContext *ctx; - -static void dummy_fd_handler(EventNotifier *notifier) -{ - event_notifier_test_and_clear(notifier); -} - -static void add_event_notifiers(EventNotifier *notifiers, size_t n) -{ - for (size_t i =3D 0; i < n; i++) { - event_notifier_init(¬ifiers[i], false); - aio_set_event_notifier(ctx, ¬ifiers[i], false, - dummy_fd_handler, NULL, NULL); - } -} - -static void remove_event_notifiers(EventNotifier *notifiers, size_t n) -{ - for (size_t i =3D 0; i < n; i++) { - aio_set_event_notifier(ctx, ¬ifiers[i], false, NULL, NULL, NULL= ); - event_notifier_cleanup(¬ifiers[i]); - } -} - -/* Check that fd handlers work when external clients are disabled */ -static void test_external_disabled(void) -{ - EventNotifier notifiers[100]; - - /* fdmon-epoll is only enabled when many fd handlers are registered */ - add_event_notifiers(notifiers, G_N_ELEMENTS(notifiers)); - - event_notifier_set(¬ifiers[0]); - assert(aio_poll(ctx, true)); - - aio_disable_external(ctx); - event_notifier_set(¬ifiers[0]); - assert(aio_poll(ctx, true)); - aio_enable_external(ctx); - - remove_event_notifiers(notifiers, G_N_ELEMENTS(notifiers)); -} - -int main(int argc, char **argv) -{ - /* - * This code relies on the fact that fdmon-io_uring disables itself 
wh= en - * the glib main loop is in use. The main loop uses fdmon-poll and upg= rades - * to fdmon-epoll when the number of fds exceeds a threshold. - */ - qemu_init_main_loop(&error_fatal); - ctx =3D qemu_get_aio_context(); - - while (g_main_context_iteration(NULL, false)) { - /* Do nothing */ - } - - g_test_init(&argc, &argv, NULL); - g_test_add_func("/fdmon-epoll/external-disabled", test_external_disabl= ed); - return g_test_run(); -} diff --git a/util/aio-posix.c b/util/aio-posix.c index a8be940f76..934b1bbb85 100644 --- a/util/aio-posix.c +++ b/util/aio-posix.c @@ -99,7 +99,6 @@ static bool aio_remove_fd_handler(AioContext *ctx, AioHan= dler *node) =20 void aio_set_fd_handler(AioContext *ctx, int fd, - bool is_external, IOHandler *io_read, IOHandler *io_write, AioPollFn *io_poll, @@ -144,7 +143,6 @@ void aio_set_fd_handler(AioContext *ctx, new_node->io_poll =3D io_poll; new_node->io_poll_ready =3D io_poll_ready; new_node->opaque =3D opaque; - new_node->is_external =3D is_external; =20 if (is_new) { new_node->pfd.fd =3D fd; @@ -196,12 +194,11 @@ static void aio_set_fd_poll(AioContext *ctx, int fd, =20 void aio_set_event_notifier(AioContext *ctx, EventNotifier *notifier, - bool is_external, EventNotifierHandler *io_read, AioPollFn *io_poll, EventNotifierHandler *io_poll_ready) { - aio_set_fd_handler(ctx, event_notifier_get_fd(notifier), is_external, + aio_set_fd_handler(ctx, event_notifier_get_fd(notifier), (IOHandler *)io_read, NULL, io_poll, (IOHandler *)io_poll_ready, notifier); } @@ -285,13 +282,11 @@ bool aio_pending(AioContext *ctx) =20 /* TODO should this check poll ready? 
*/ revents =3D node->pfd.revents & node->pfd.events; - if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read && - aio_node_check(ctx, node->is_external)) { + if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) { result =3D true; break; } - if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write && - aio_node_check(ctx, node->is_external)) { + if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) { result =3D true; break; } @@ -350,9 +345,7 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHa= ndler *node) QLIST_INSERT_HEAD(&ctx->poll_aio_handlers, node, node_poll); } if (!QLIST_IS_INSERTED(node, node_deleted) && - poll_ready && revents =3D=3D 0 && - aio_node_check(ctx, node->is_external) && - node->io_poll_ready) { + poll_ready && revents =3D=3D 0 && node->io_poll_ready) { node->io_poll_ready(node->opaque); =20 /* @@ -364,7 +357,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHa= ndler *node) =20 if (!QLIST_IS_INSERTED(node, node_deleted) && (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) && - aio_node_check(ctx, node->is_external) && node->io_read) { node->io_read(node->opaque); =20 @@ -375,7 +367,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHa= ndler *node) } if (!QLIST_IS_INSERTED(node, node_deleted) && (revents & (G_IO_OUT | G_IO_ERR)) && - aio_node_check(ctx, node->is_external) && node->io_write) { node->io_write(node->opaque); progress =3D true; @@ -436,8 +427,7 @@ static bool run_poll_handlers_once(AioContext *ctx, AioHandler *tmp; =20 QLIST_FOREACH_SAFE(node, &ctx->poll_aio_handlers, node_poll, tmp) { - if (aio_node_check(ctx, node->is_external) && - node->io_poll(node->opaque)) { + if (node->io_poll(node->opaque)) { aio_add_poll_ready_handler(ready_list, node); =20 node->poll_idle_timeout =3D now + POLL_IDLE_INTERVAL_NS; diff --git a/util/aio-win32.c b/util/aio-win32.c index 6bded009a4..948ef47a4d 100644 --- a/util/aio-win32.c +++ b/util/aio-win32.c @@ -32,7 +32,6 @@ struct AioHandler { GPollFD pfd; int deleted; 
void *opaque; - bool is_external; QLIST_ENTRY(AioHandler) node; }; =20 @@ -64,7 +63,6 @@ static void aio_remove_fd_handler(AioContext *ctx, AioHan= dler *node) =20 void aio_set_fd_handler(AioContext *ctx, int fd, - bool is_external, IOHandler *io_read, IOHandler *io_write, AioPollFn *io_poll, @@ -111,7 +109,6 @@ void aio_set_fd_handler(AioContext *ctx, node->opaque =3D opaque; node->io_read =3D io_read; node->io_write =3D io_write; - node->is_external =3D is_external; =20 if (io_read) { bitmask |=3D FD_READ | FD_ACCEPT | FD_CLOSE; @@ -135,7 +132,6 @@ void aio_set_fd_handler(AioContext *ctx, =20 void aio_set_event_notifier(AioContext *ctx, EventNotifier *e, - bool is_external, EventNotifierHandler *io_notify, AioPollFn *io_poll, EventNotifierHandler *io_poll_ready) @@ -161,7 +157,6 @@ void aio_set_event_notifier(AioContext *ctx, node->e =3D e; node->pfd.fd =3D (uintptr_t)event_notifier_get_handle(e); node->pfd.events =3D G_IO_IN; - node->is_external =3D is_external; QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node); =20 g_source_add_poll(&ctx->source, &node->pfd); @@ -368,8 +363,7 @@ bool aio_poll(AioContext *ctx, bool blocking) /* fill fd sets */ count =3D 0; QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) { - if (!node->deleted && node->io_notify - && aio_node_check(ctx, node->is_external)) { + if (!node->deleted && node->io_notify) { assert(count < MAXIMUM_WAIT_OBJECTS); events[count++] =3D event_notifier_get_handle(node->e); } diff --git a/util/async.c b/util/async.c index 21016a1ac7..be0726038e 100644 --- a/util/async.c +++ b/util/async.c @@ -377,7 +377,7 @@ aio_ctx_finalize(GSource *source) g_free(bh); } =20 - aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL, NULL); + aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL, NULL); event_notifier_cleanup(&ctx->notifier); qemu_rec_mutex_destroy(&ctx->lock); qemu_lockcnt_destroy(&ctx->list_lock); @@ -561,7 +561,6 @@ AioContext *aio_context_new(Error **errp) 
QSLIST_INIT(&ctx->scheduled_coroutines); =20 aio_set_event_notifier(ctx, &ctx->notifier, - false, aio_context_notifier_cb, aio_context_notifier_poll, aio_context_notifier_poll_ready); diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c index 1683aa1105..6b6a1a91f8 100644 --- a/util/fdmon-epoll.c +++ b/util/fdmon-epoll.c @@ -64,11 +64,6 @@ static int fdmon_epoll_wait(AioContext *ctx, AioHandlerL= ist *ready_list, int i, ret =3D 0; struct epoll_event events[128]; =20 - /* Fall back while external clients are disabled */ - if (qatomic_read(&ctx->external_disable_cnt)) { - return fdmon_poll_ops.wait(ctx, ready_list, timeout); - } - if (timeout > 0) { ret =3D qemu_poll_ns(&pfd, 1, timeout); if (ret > 0) { @@ -133,13 +128,12 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigne= d npfd) return false; } =20 - /* Do not upgrade while external clients are disabled */ - if (qatomic_read(&ctx->external_disable_cnt)) { - return false; - } - - if (npfd < EPOLL_ENABLE_THRESHOLD) { - return false; + if (npfd >=3D EPOLL_ENABLE_THRESHOLD) { + if (fdmon_epoll_try_enable(ctx)) { + return true; + } else { + fdmon_epoll_disable(ctx); + } } =20 /* The list must not change while we add fds to epoll */ diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c index ab43052dd7..17ec18b7bd 100644 --- a/util/fdmon-io_uring.c +++ b/util/fdmon-io_uring.c @@ -276,11 +276,6 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHan= dlerList *ready_list, unsigned wait_nr =3D 1; /* block until at least one cqe is ready */ int ret; =20 - /* Fall back while external clients are disabled */ - if (qatomic_read(&ctx->external_disable_cnt)) { - return fdmon_poll_ops.wait(ctx, ready_list, timeout); - } - if (timeout =3D=3D 0) { wait_nr =3D 0; /* non-blocking */ } else if (timeout > 0) { @@ -315,8 +310,7 @@ static bool fdmon_io_uring_need_wait(AioContext *ctx) return true; } =20 - /* Are we falling back to fdmon-poll? 
*/ - return qatomic_read(&ctx->external_disable_cnt); + return false; } =20 static const FDMonOps fdmon_io_uring_ops =3D { diff --git a/util/fdmon-poll.c b/util/fdmon-poll.c index 5fe3b47865..17df917cf9 100644 --- a/util/fdmon-poll.c +++ b/util/fdmon-poll.c @@ -65,8 +65,7 @@ static int fdmon_poll_wait(AioContext *ctx, AioHandlerLis= t *ready_list, assert(npfd =3D=3D 0); =20 QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) { - if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events - && aio_node_check(ctx, node->is_external)) { + if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events) { add_pollfd(node); } } diff --git a/util/main-loop.c b/util/main-loop.c index e180c85145..3e43a9cd38 100644 --- a/util/main-loop.c +++ b/util/main-loop.c @@ -642,14 +642,13 @@ void qemu_set_fd_handler(int fd, void *opaque) { iohandler_init(); - aio_set_fd_handler(iohandler_ctx, fd, false, - fd_read, fd_write, NULL, NULL, opaque); + aio_set_fd_handler(iohandler_ctx, fd, fd_read, fd_write, NULL, NULL, + opaque); } =20 void event_notifier_set_handler(EventNotifier *e, EventNotifierHandler *handler) { iohandler_init(); - aio_set_event_notifier(iohandler_ctx, e, false, - handler, NULL, NULL); + aio_set_event_notifier(iohandler_ctx, e, handler, NULL, NULL); } diff --git a/util/qemu-coroutine-io.c b/util/qemu-coroutine-io.c index d791932d63..364f4d5abf 100644 --- a/util/qemu-coroutine-io.c +++ b/util/qemu-coroutine-io.c @@ -74,8 +74,7 @@ typedef struct { static void fd_coroutine_enter(void *opaque) { FDYieldUntilData *data =3D opaque; - aio_set_fd_handler(data->ctx, data->fd, false, - NULL, NULL, NULL, NULL, NULL); + aio_set_fd_handler(data->ctx, data->fd, NULL, NULL, NULL, NULL, NULL); qemu_coroutine_enter(data->co); } =20 @@ -87,7 +86,7 @@ void coroutine_fn yield_until_fd_readable(int fd) data.ctx =3D qemu_get_current_aio_context(); data.co =3D qemu_coroutine_self(); data.fd =3D fd; - aio_set_fd_handler( - data.ctx, fd, false, fd_coroutine_enter, NULL, NULL, NULL, &data); + 
aio_set_fd_handler(data.ctx, fd, fd_coroutine_enter, NULL, NULL, NULL, + &data); qemu_coroutine_yield(); } diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c index 332aea9306..9ba19121a2 100644 --- a/util/vhost-user-server.c +++ b/util/vhost-user-server.c @@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt, vu_fd_watch->fd =3D fd; vu_fd_watch->cb =3D cb; qemu_socket_set_nonblock(fd); - aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler, + aio_set_fd_handler(server->ioc->ctx, fd, kick_handler, NULL, NULL, NULL, vu_fd_watch); vu_fd_watch->vu_dev =3D vu_dev; vu_fd_watch->pvt =3D pvt; @@ -299,8 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd) if (!vu_fd_watch) { return; } - aio_set_fd_handler(server->ioc->ctx, fd, false, - NULL, NULL, NULL, NULL, NULL); + aio_set_fd_handler(server->ioc->ctx, fd, NULL, NULL, NULL, NULL, NULL); =20 QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next); g_free(vu_fd_watch); @@ -362,7 +361,7 @@ void vhost_user_server_stop(VuServer *server) VuFdWatch *vu_fd_watch; =20 QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) { - aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false, + aio_set_fd_handler(server->ctx, vu_fd_watch->fd, NULL, NULL, NULL, NULL, vu_fd_watch); } =20 @@ -403,7 +402,7 @@ void vhost_user_server_attach_aio_context(VuServer *ser= ver, AioContext *ctx) qio_channel_attach_aio_context(server->ioc, ctx); =20 QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) { - aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL, + aio_set_fd_handler(ctx, vu_fd_watch->fd, kick_handler, NULL, NULL, NULL, vu_fd_watch); } =20 @@ -417,7 +416,7 @@ void vhost_user_server_detach_aio_context(VuServer *ser= ver) VuFdWatch *vu_fd_watch; =20 QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) { - aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false, + aio_set_fd_handler(server->ctx, vu_fd_watch->fd, NULL, NULL, NULL, NULL, vu_fd_watch); } =20 diff --git 
a/tests/unit/meson.build b/tests/unit/meson.build index 3bc78d8660..b33298a444 100644 --- a/tests/unit/meson.build +++ b/tests/unit/meson.build @@ -122,9 +122,6 @@ if have_block if nettle.found() or gcrypt.found() tests +=3D {'test-crypto-pbkdf': [io]} endif - if config_host_data.get('CONFIG_EPOLL_CREATE1') - tests +=3D {'test-fdmon-epoll': [testblock]} - endif endif =20 if have_system --=20 2.39.2