From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Julia Suvorova, Stefan Hajnoczi, Kevin Wolf,
    Peter Lieven, Coiby Xu, xen-devel@lists.xenproject.org,
    Richard Henderson, Stefano Garzarella, Eduardo Habkost,
    Philippe Mathieu-Daudé, Paul Durrant, "Richard W.M. Jones",
    "Dr. David Alan Gilbert", Marcel Apfelbaum, Aarushi Mehta,
    Stefano Stabellini, Fam Zheng, David Woodhouse, Stefan Weil,
    Juan Quintela, Xie Yongji, Hanna Reitz, Ronnie Sahlberg,
    eesposit@redhat.com, "Michael S. Tsirkin", Daniel P. Berrangé,
    Anthony Perard
Subject: [PATCH 01/13] virtio-scsi: avoid race between unplug and transport event
Date: Mon, 3 Apr 2023 14:29:52 -0400
Message-Id: <20230403183004.347205-2-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

Only report a transport reset event to the guest after the SCSIDevice
has been unrealized by qdev_simple_device_unplug_cb().

qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized
field to false so that scsi_device_find/get() no longer see it.

scsi_target_emulate_report_luns() also needs to be updated to filter
out SCSIDevices that are unrealized.

These changes ensure that the guest driver does not see the SCSIDevice
that's being unplugged if it responds very quickly to the transport
reset event.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Michael S. Tsirkin
Reviewed-by: Paolo Bonzini
---
 hw/scsi/scsi-bus.c    |  3 ++-
 hw/scsi/virtio-scsi.c | 18 +++++++++---------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index c97176110c..f9bd064833 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -487,7 +487,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
             DeviceState *qdev = kid->child;
             SCSIDevice *dev = SCSI_DEVICE(qdev);
 
-            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
+            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
+                qatomic_load_acquire(&dev->qdev.realized)) {
                 store_lun(tmp, dev->lun);
                 g_byte_array_append(buf, tmp, 8);
                 len += 8;
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..000961446c 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1063,15 +1063,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     SCSIDevice *sd = SCSI_DEVICE(dev);
     AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
-        virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_REMOVED);
-        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
-        virtio_scsi_release(s);
-    }
-
     aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
     aio_enable_external(ctx);
@@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
         virtio_scsi_release(s);
     }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        virtio_scsi_acquire(s);
+        virtio_scsi_push_event(s, sd,
+                               VIRTIO_SCSI_T_TRANSPORT_RESET,
+                               VIRTIO_SCSI_EVT_RESET_REMOVED);
+        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
+        virtio_scsi_release(s);
+    }
 }
 
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
-- 
2.39.2
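
The fix is purely an ordering change: clear the realized flag first,
raise the transport reset event second. Below is a standalone sketch of
that pattern, with C11 atomics standing in for QEMU's qatomic helpers;
the Device type and device_find()/unplug() names are hypothetical:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-in for SCSIDevice; only the realized flag matters. */
typedef struct {
    atomic_bool realized;
    int lun;
} Device;

/* Lookup skips unrealized devices, like scsi_device_find/get() after the
 * patch: a guest that reacts to the reset event cannot find the device. */
static Device *device_find(Device *dev)
{
    return atomic_load_explicit(&dev->realized, memory_order_acquire)
               ? dev : NULL;
}

static void unplug(Device *dev)
{
    /* Step 1: unrealize. From this point lookups return NULL. */
    atomic_store_explicit(&dev->realized, false, memory_order_release);

    /* Step 2: only now tell the guest (stands in for
     * virtio_scsi_push_event(..., VIRTIO_SCSI_EVT_RESET_REMOVED)). */
    printf("transport reset event for LUN %d\n", dev->lun);
}

int main(void)
{
    Device d = { .lun = 1 };
    atomic_store(&d.realized, true);
    unplug(&d);
    printf("lookup after unplug: %p\n", (void *)device_find(&d));
    return 0;
}

If the event were raised first, a fast guest could still look the
device up in the window before unrealize, which is exactly the race
this patch closes.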

From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH 02/13] virtio-scsi: stop using aio_disable_external() during unplug
Date: Mon, 3 Apr 2023 14:29:53 -0400
Message-Id: <20230403183004.347205-3-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

This patch is part of an effort to remove the aio_disable_external()
API because it does not fit in a multi-queue block layer world where
many AioContexts may be submitting requests to the same disk.

The SCSI emulation code is already in good shape to stop using
aio_disable_external(). It was only used by commit 9c5aad84da1c
("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
disk") to ensure that virtio_scsi_hotunplug() works while the guest
driver is submitting I/O.

Ensure virtio_scsi_hotunplug() is safe as follows:

1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
   device_set_realized() calls qatomic_set(&dev->realized, false) so
   that future scsi_device_get() calls return NULL because they exclude
   SCSIDevices with realized=false.

   That means virtio-scsi will reject new I/O requests to this
   SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
   virtio_scsi_hotunplug() is still executing. We are protected against
   new requests!

2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
   that in-flight requests are cancelled synchronously. This ensures
   that no in-flight requests remain once qdev_simple_device_unplug_cb()
   returns.

Thanks to these two conditions we don't need aio_disable_external()
anymore.
Cc: Zhengui Li
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Paolo Bonzini
---
 hw/scsi/scsi-disk.c   | 1 +
 hw/scsi/virtio-scsi.c | 3 ---
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 97c9b1c8cd..e01bd84541 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2522,6 +2522,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
 
 static void scsi_unrealize(SCSIDevice *dev)
 {
+    scsi_device_purge_requests(dev, SENSE_CODE(RESET));
     del_boot_device_lchs(&dev->qdev, NULL);
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 000961446c..a02f9233ec 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1061,11 +1061,8 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
-    aio_enable_external(ctx);
 
     if (s->ctx) {
         virtio_scsi_acquire(s);
-- 
2.39.2
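
Both conditions can be sketched outside QEMU: an atomic realized flag
makes new submissions fail cleanly (condition 1), and unrealize
synchronously cancels whatever is still queued (condition 2). All names
below are hypothetical stand-ins:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_REQS 8

typedef struct {
    atomic_bool realized;
    int num_reqs;             /* in-flight requests, owner thread only */
    int reqs[MAX_REQS];
} Device;

/* Condition 1: new submissions check realized and fail cleanly, like
 * virtio-scsi returning VIRTIO_SCSI_S_BAD_TARGET. */
static bool submit(Device *dev, int req)
{
    if (!atomic_load_explicit(&dev->realized, memory_order_acquire)) {
        return false;
    }
    dev->reqs[dev->num_reqs++] = req;
    return true;
}

/* Condition 2: purge in-flight requests synchronously, like
 * scsi_device_purge_requests() called from scsi_unrealize(). */
static void unrealize(Device *dev)
{
    atomic_store_explicit(&dev->realized, false, memory_order_release);
    while (dev->num_reqs > 0) {
        printf("cancelling request %d\n", dev->reqs[--dev->num_reqs]);
    }
}

int main(void)
{
    Device d = { .num_reqs = 0 };
    atomic_store(&d.realized, true);
    submit(&d, 1);
    submit(&d, 2);
    unrealize(&d);
    printf("submit after unrealize: %d\n", submit(&d, 3)); /* prints 0 */
    return 0;
}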

From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH 03/13] block/export: only acquire AioContext once for vhost_user_server_stop()
Date: Mon, 3 Apr 2023 14:29:54 -0400
Message-Id: <20230403183004.347205-4-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

vhost_user_server_stop() uses AIO_WAIT_WHILE(). AIO_WAIT_WHILE()
requires that AioContext is only acquired once. Since
blk_exp_request_shutdown() already acquires the AioContext it shouldn't
be acquired again in vhost_user_server_stop().
Signed-off-by: Stefan Hajnoczi
---
 util/vhost-user-server.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 40f36ea214..5b6216069c 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -346,10 +346,9 @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
         aio_context_release(server->ctx);
 }
 
+/* server->ctx acquired by caller */
 void vhost_user_server_stop(VuServer *server)
 {
-    aio_context_acquire(server->ctx);
-
     qemu_bh_delete(server->restart_listener_bh);
     server->restart_listener_bh = NULL;
 
@@ -366,8 +365,6 @@ void vhost_user_server_stop(VuServer *server)
         AIO_WAIT_WHILE(server->ctx, server->co_trip);
     }
 
-    aio_context_release(server->ctx);
-
     if (server->listener) {
         qio_net_listener_disconnect(server->listener);
         object_unref(OBJECT(server->listener));
-- 
2.39.2
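
The added comment documents a "caller holds the lock" convention. Here
is a minimal pthread sketch of that convention, with a plain mutex
standing in for AioContext acquire/release and all names hypothetical:
the callee runs with the lock held and never locks it again, so the
caller's single acquisition is the only one outstanding when the wait
loop needs to drop it.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static int stopped;

/* Called with ctx_lock held by the caller; the function itself must not
 * lock again, mirroring the "server->ctx acquired by caller" comment. */
static void server_stop_locked(void)
{
    /* ... tear down watches, wait for the coroutine, etc. ... */
    stopped = 1;
}

static void request_shutdown(void)
{
    pthread_mutex_lock(&ctx_lock);   /* the single acquisition */
    server_stop_locked();
    pthread_mutex_unlock(&ctx_lock);
}

int main(void)
{
    request_shutdown();
    printf("stopped=%d\n", stopped);
    return 0;
}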

From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH 04/13] util/vhost-user-server: rename refcount to in_flight counter
Date: Mon, 3 Apr 2023 14:29:55 -0400
Message-Id: <20230403183004.347205-5-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

The VuServer object has a refcount field and ref/unref APIs. The name
is confusing because it's actually an in-flight request counter instead
of a refcount.

Normally a refcount destroys the object upon reaching zero. The
VuServer counter is used to wake up the vhost-user coroutine when there
are no more requests.

Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Paolo Bonzini
---
 include/qemu/vhost-user-server.h     |  6 +++---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 util/vhost-user-server.c             | 14 +++++++-------
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 25c72433ca..bc0ac9ddb6 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -41,7 +41,7 @@ typedef struct {
     const VuDevIface *vu_iface;
 
     /* Protected by ctx lock */
-    unsigned int refcount;
+    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_ref(VuServer *server);
-void vhost_user_server_unref(VuServer *server);
+void vhost_user_server_inc_in_flight(VuServer *server);
+void vhost_user_server_dec_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 3409d9e02e..e93f2ed6b4 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -49,7 +49,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
     free(req);
 }
 
-/* Called with server refcount increased, must decrease before returning */
+/*
+ * Called with server in_flight counter increased, must decrease before
+ * returning.
+ */
 static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
     VuBlkReq *req = opaque;
@@ -67,12 +70,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
                                     in_num, out_num);
     if (in_len < 0) {
         free(req);
-        vhost_user_server_unref(server);
+        vhost_user_server_dec_in_flight(server);
         return;
     }
 
     vu_blk_req_complete(req, in_len);
-    vhost_user_server_unref(server);
+    vhost_user_server_dec_in_flight(server);
 }
 
 static void vu_blk_process_vq(VuDev *vu_dev, int idx)
@@ -94,7 +97,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
         Coroutine *co = qemu_coroutine_create(vu_blk_virtio_process_req,
                                               req);
 
-        vhost_user_server_ref(server);
+        vhost_user_server_inc_in_flight(server);
         qemu_coroutine_enter(co);
     }
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5b6216069c..1622f8cfb3 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
     error_report("vu_panic: %s", buf);
 }
 
-void vhost_user_server_ref(VuServer *server)
+void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->refcount++;
+    server->in_flight++;
 }
 
-void vhost_user_server_unref(VuServer *server)
+void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->refcount--;
-    if (server->wait_idle && !server->refcount) {
+    server->in_flight--;
+    if (server->wait_idle && !server->in_flight) {
         aio_co_wake(server->co_trip);
     }
 }
@@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->refcount) {
+    if (server->in_flight) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->refcount == 0);
+    assert(server->in_flight == 0);
 
     vu_deinit(vu_dev);
 
-- 
2.39.2
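
The distinction the commit message draws, that reaching zero wakes a
waiter instead of freeing the object, looks roughly like this outside
QEMU. A condition variable stands in for aio_co_wake() and all names
are hypothetical:

#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t idle;
    unsigned int in_flight;   /* number of requests being processed */
    int wait_idle;            /* someone is waiting for in_flight == 0 */
} Server;

static void inc_in_flight(Server *s)
{
    pthread_mutex_lock(&s->lock);
    s->in_flight++;
    pthread_mutex_unlock(&s->lock);
}

static void dec_in_flight(Server *s)
{
    pthread_mutex_lock(&s->lock);
    if (--s->in_flight == 0 && s->wait_idle) {
        pthread_cond_signal(&s->idle);   /* like aio_co_wake(co_trip) */
    }
    pthread_mutex_unlock(&s->lock);
}

/* Unlike a refcount, reaching zero wakes a waiter instead of freeing. */
static void wait_until_idle(Server *s)
{
    pthread_mutex_lock(&s->lock);
    s->wait_idle = 1;
    while (s->in_flight > 0) {
        pthread_cond_wait(&s->idle, &s->lock);
    }
    s->wait_idle = 0;
    pthread_mutex_unlock(&s->lock);
}

int main(void)
{
    Server s = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };
    inc_in_flight(&s);
    dec_in_flight(&s);
    wait_until_idle(&s);    /* returns immediately, counter is zero */
    printf("idle\n");
    return 0;
}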
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH 05/13] block/export: wait for vhost-user-blk requests when draining
Date: Mon, 3 Apr 2023 14:29:56 -0400
Message-Id: <20230403183004.347205-6-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv/pwritev()/etc returns it wakes the
bdrv_drained_begin() thread but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.

(This example is theoretical, I came across this while reading the
code and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight
counter. VuServer just needs an API that returns true when there are
requests in flight. The in-flight counter needs to be atomic.
Signed-off-by: Stefan Hajnoczi
---
 include/qemu/vhost-user-server.h     |  4 +++-
 block/export/vhost-user-blk-server.c | 19 +++++++++++++++++++
 util/vhost-user-server.c             | 14 ++++++++++----
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index bc0ac9ddb6..b1c1cda886 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -40,8 +40,9 @@ typedef struct {
     int max_queues;
     const VuDevIface *vu_iface;
 
+    unsigned int in_flight; /* atomic */
+
     /* Protected by ctx lock */
-    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
 
 void vhost_user_server_inc_in_flight(VuServer *server);
 void vhost_user_server_dec_in_flight(VuServer *server);
+bool vhost_user_server_has_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index e93f2ed6b4..dbf5207162 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -254,6 +254,22 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }
 
+/*
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ *
+ * Called with vexp->export.ctx acquired.
+ */
+static bool vu_blk_drained_poll(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    return vhost_user_server_has_in_flight(&vexp->vu_server);
+}
+
+static const BlockDevOps vu_blk_dev_ops = {
+    .drained_poll  = vu_blk_drained_poll,
+};
+
 static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              Error **errp)
 {
@@ -292,6 +308,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
                              logical_block_size, num_queues);
 
+    blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);
 
@@ -299,6 +316,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                                  num_queues, &vu_blk_iface, errp)) {
         blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                         blk_aio_detach, vexp);
+        blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
     }
@@ -312,6 +330,7 @@ static void vu_blk_exp_delete(BlockExport *exp)
 
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vexp);
+    blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
 
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 1622f8cfb3..2e6b640050 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }
 
 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }
 
+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn vu_message_read(VuDev *vu_dev, int conn_fd,
                                          VhostUserMsg *vmsg)
 {
-- 
2.39.2
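
With the counter atomic, draining reduces to polling it. Below is a toy
model of the .drained_poll() contract (return true while requests are
outstanding); each loop iteration simulates the event loop completing
one request, and all names are hypothetical:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_uint in_flight;

/* Mirrors vu_blk_drained_poll(): true means "keep waiting". */
static bool drained_poll(void)
{
    return atomic_load_explicit(&in_flight, memory_order_acquire) > 0;
}

/* Mirrors the shape of bdrv_drained_begin(): new requests are held off
 * elsewhere; here we only wait for in-flight ones to finish. */
static void drained_begin(void)
{
    while (drained_poll()) {
        /* the real event loop runs pending handlers here */
        atomic_fetch_sub(&in_flight, 1);  /* simulate a completion */
    }
}

int main(void)
{
    atomic_store(&in_flight, 3);
    drained_begin();
    printf("drained, in_flight=%u\n", atomic_load(&in_flight));
    return 0;
}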

From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH 06/13] block/export: stop using is_external in vhost-user-blk server
Date: Mon, 3 Apr 2023 14:29:57 -0400
Message-Id: <20230403183004.347205-7-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContexts may be processing I/O,
not just one.

Switch to BlockDevOps->drained_begin/end() callbacks.
Signed-off-by: Stefan Hajnoczi
---
 block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
 util/vhost-user-server.c             | 10 +++----
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index dbf5207162..6e1bc196fb 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -207,22 +207,6 @@ static const VuDevIface vu_blk_iface = {
     .process_msg           = vu_blk_process_msg,
 };
 
-static void blk_aio_attached(AioContext *ctx, void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
-}
-
-static void blk_aio_detach(void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
-    vexp->export.ctx = NULL;
-}
-
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
                          struct virtio_blk_config *config,
@@ -254,6 +238,25 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -267,6 +270,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end   = vu_blk_drained_end,
     .drained_poll  = vu_blk_drained_poll,
 };
 
@@ -309,13 +314,9 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              logical_block_size, num_queues);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
-    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                 vexp);
 
     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
                                  num_queues, &vu_blk_iface, errp)) {
-        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
-                                        blk_aio_detach, vexp);
         blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
@@ -328,8 +329,6 @@ static void vu_blk_exp_delete(BlockExport *exp)
 {
     VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                    vexp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 2e6b640050..332aea9306 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                       NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
-- 
2.39.2
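
The resulting lifecycle is symmetric: drained_begin detaches the
server's event sources, drained_end re-attaches them to whatever
AioContext the BlockBackend now uses. Here is a schematic sketch with
hypothetical types:

#include <stdbool.h>
#include <stdio.h>

typedef struct { int id; } AioCtx;

typedef struct {
    AioCtx *ctx;        /* context the server's fd handlers live in */
    bool attached;
} Server;

/* drained_begin: stop monitoring fds so no new requests are processed */
static void drained_begin(Server *s)
{
    s->attached = false;
    printf("detached from ctx %d\n", s->ctx->id);
}

/* drained_end: refresh the context (it may have changed while drained)
 * and resume monitoring; this replaces the old is_external toggling */
static void drained_end(Server *s, AioCtx *current)
{
    s->ctx = current;
    s->attached = true;
    printf("attached to ctx %d\n", s->ctx->id);
}

int main(void)
{
    AioCtx main_ctx = { 0 }, iothread_ctx = { 1 };
    Server s = { &main_ctx, true };

    drained_begin(&s);               /* e.g. around blk_set_aio_context() */
    drained_end(&s, &iothread_ctx);  /* server follows the BlockBackend */
    return 0;
}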

From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH 07/13] virtio: do not set is_external=true on host notifiers
Date: Mon, 3 Apr 2023 14:29:58 -0400
Message-Id: <20230403183004.347205-8-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

Host notifiers trigger virtqueue processing. There are critical
sections when new I/O requests must not be submitted because they would
cause interference. In the past this was solved using
aio_set_event_notifier()'s is_external=true argument, which disables fd
monitoring between aio_disable/enable_external() calls. This API is not
multi-queue block layer friendly because it requires knowledge of the
specific AioContext.
In a multi-queue block layer world any thread can submit I/O and we don't
know which AioContexts are currently involved.

virtio-blk and virtio-scsi are the only users that depend on
is_external=true. Both rely on the block layer, where we can take
advantage of the existing request queuing behavior that happens during
drained sections. The block layer's drained sections are the only user of
aio_disable_external().

After this patch the virtqueues will be processed during drained sections,
but submitted I/O requests will be queued in the BlockBackend. Queued
requests are resumed when the drained section ends. Therefore, the
BlockBackend is still quiesced during drained sections but we no longer
rely on is_external=true to achieve this.

Note that virtqueues have a finite size, so queuing requests does not lead
to unbounded memory usage.

Signed-off-by: Stefan Hajnoczi
---
 hw/virtio/virtio.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 98c4819fcc..dcd7aabb4e 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run.
     */
    virtio_queue_host_notifier_read(&vq->host_notifier);
-- 
2.39.2
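To make the mechanism being replaced concrete, here is a minimal sketch of
the old-style registration and suspension. This is an illustration only,
not code from the series; it assumes the pre-series signatures that the
final patch removes, and the function and handler names are hypothetical:

  /* Old pattern: a handler registered with is_external=true stops being
   * polled between aio_disable_external() and aio_enable_external(). */
  static void kick_handler(EventNotifier *n)
  {
      /* process the virtqueue associated with this notifier */
  }

  static void critical_section_example(AioContext *ctx, EventNotifier *n)
  {
      /* third argument is is_external */
      aio_set_event_notifier(ctx, n, true, kick_handler, NULL, NULL);

      aio_disable_external(ctx);  /* kick_handler is no longer polled */
      /* ... section where new I/O must not be submitted ... */
      aio_enable_external(ctx);   /* kick_handler fires again */
  }

After this patch the same quiescence comes from the block layer instead:
the virtqueues keep being processed, and the requests they submit wait in
the BlockBackend until the drained section ends.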
From nobody Sun May 12 22:47:25 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH 08/13] hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Date: Mon, 3 Apr 2023 14:29:59 -0400
Message-Id: <20230403183004.347205-9-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

There is no need to suspend activity between aio_disable_external() and
aio_enable_external(), which is mainly used for the block layer's drain
operation.

This is part of ongoing work to remove the aio_disable_external() API.
Signed-off-by: Stefan Hajnoczi
Reviewed-by: David Woodhouse
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..6e81bc8791 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
-- 
2.39.2
From nobody Sun May 12 22:47:25 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH 09/13] hw/xen: do not set is_external=true on evtchn fds
Date: Mon, 3 Apr 2023 14:30:00 -0400
Message-Id: <20230403183004.347205-10-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

is_external=true suspends fd handlers between aio_disable_external() and
aio_enable_external(). The block layer's drain operation uses this
mechanism to prevent new I/O from sneaking in between bdrv_drained_begin()
and bdrv_drained_end().

The xen-block device actually works fine with is_external=false because
BlockBackend requests are already queued between bdrv_drained_begin() and
bdrv_drained_end(). Since the Xen ring size is finite, request queuing will
stop once the ring is full and memory usage is bounded. After
bdrv_drained_end() the BlockBackend requests will resume and xen-block's
processing will continue.

This is part of ongoing work to remove the aio_disable_external() API.
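The bounded-queuing argument above can be pictured as follows. This is a
sketch under stated assumptions, not code from the patch;
reconfigure_guest_visible_state() is a hypothetical stand-in for whatever
the drained section protects:

  static void drained_section_example(BlockDriverState *bs)
  {
      bdrv_drained_begin(bs);   /* new BlockBackend requests are queued */

      /* xen-block may still service ring kicks here, but each resulting
       * BlockBackend submission is queued rather than issued. The ring is
       * finite, so at most one ring's worth of requests accumulates. */
      reconfigure_guest_visible_state();

      bdrv_drained_end(bs);     /* queued requests are resumed */
  }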
Signed-off-by: Stefan Hajnoczi
---
 hw/xen/xen-bus.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c59850b1de..c4fd26abe1 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,11 +842,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        xen_device_event, NULL, xen_device_poll, NULL, channel);
 }
 
@@ -920,7 +920,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
-- 
2.39.2
From nobody Sun May 12 22:47:25 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH 10/13] block/export: rewrite vduse-blk drain code
Date: Mon, 3 Apr 2023 14:30:01 -0400
Message-Id: <20230403183004.347205-11-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

vduse_blk_detach_ctx() waits for in-flight requests using
AIO_WAIT_WHILE(). This is not allowed according to a comment in
bdrv_set_aio_context_commit():

  /*
   * Take the old AioContex when detaching it from bs.
   * At this point, new_context lock is already acquired, and we are now
   * also taking old_context. This is safe as long as bdrv_detach_aio_context
   * does not call AIO_POLL_WHILE().
   */

Use this opportunity to rewrite the drain code in vduse-blk:

- Use the BlockExport refcount so that vduse_blk_exp_delete() is only
  called when there are no more requests in flight.

- Implement .drained_poll() so in-flight request coroutines are stopped
  by the time .bdrv_detach_aio_context() is called.

- Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
  .bdrv_detach_aio_context() constraint violation. It's no longer
  needed due to the previous changes.

- Always handle the VDUSE file descriptor, even in drained sections.
  The VDUSE file descriptor doesn't submit I/O, so it's safe to handle
  it in drained sections. This ensures that the VDUSE kernel code gets
  a fast response.

- Suspend virtqueue fd handlers in .drained_begin() and resume them in
  .drained_end(). This eliminates the need for the
  aio_set_fd_handler(is_external=true) flag, which is being removed
  from QEMU.

This is a long list but splitting it into individual commits would
probably lead to git bisect failures - the changes are all related.

Signed-off-by: Stefan Hajnoczi
---
 block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 39 deletions(-)

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index f7ae44e3ce..35dc8fcf45 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
     VduseDev *dev;
     uint16_t num_queues;
     char *recon_file;
-    unsigned int inflight;
+    unsigned int inflight; /* atomic */
+    bool vqs_started;
 } VduseBlkExport;
 
 typedef struct VduseBlkReq {
@@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
 
 static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
-    vblk_exp->inflight++;
+    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
+        /* Prevent export from being deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_ref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
+    }
 }
 
 static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
 {
-    if (--vblk_exp->inflight == 0) {
+    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
+        /* Wake AIO_WAIT_WHILE() */
         aio_wait_kick();
+
+        /* Now the export can be deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_unref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
 
+    if (!vblk_exp->vqs_started) {
+        return; /* vduse_blk_drained_end() will start vqs later */
+    }
+
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -133,9 +149,14 @@ static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
+    int fd = vduse_queue_get_fd(vq);
 
-    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, NULL, NULL, NULL, NULL, NULL);
+    if (fd < 0) {
+        return;
+    }
+
+    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+                       NULL, NULL, NULL, NULL, NULL);
 }
 
 static const VduseOps vduse_blk_ops = {
@@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
 
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
-    int i;
-
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, on_vduse_dev_kick, NULL, NULL, NULL,
+                       false, on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
-                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
-    }
+    /* Virtqueues are handled by vduse_blk_drained_end() */
 }
 
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
-    int i;
-
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd,
-                           true, NULL, NULL, NULL, NULL, NULL);
-    }
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, NULL, NULL, NULL, NULL, NULL);
+                       false, NULL, NULL, NULL, NULL, NULL);
 
-    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
+    /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
 
 
@@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
                             (char *)&config.capacity);
 }
 
+static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
+{
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_disable_queue(vblk_exp->dev, vq);
+    }
+
+    vblk_exp->vqs_started = false;
+}
+
+static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
+{
+    vblk_exp->vqs_started = true;
+
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_enable_queue(vblk_exp->dev, vq);
+    }
+}
+
+static void vduse_blk_drained_begin(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_stop_virtqueues(vblk_exp);
+}
+
+static void vduse_blk_drained_end(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_start_virtqueues(vblk_exp);
+}
+
+static bool vduse_blk_drained_poll(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    return qatomic_read(&vblk_exp->inflight) > 0;
+}
+
 static const BlockDevOps vduse_block_ops = {
-    .resize_cb = vduse_blk_resize,
+    .resize_cb     = vduse_blk_resize,
+    .drained_begin = vduse_blk_drained_begin,
+    .drained_end   = vduse_blk_drained_end,
+    .drained_poll  = vduse_blk_drained_poll,
 };
 
 static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
@@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
     vblk_exp->handler.logical_block_size = logical_block_size;
     vblk_exp->handler.writable = opts->writable;
+    vblk_exp->vqs_started = true;
 
     config.capacity = cpu_to_le64(blk_getlength(exp->blk) >>
                                   VIRTIO_BLK_SECTOR_BITS);
@@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vblk_exp);
-
     blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
 
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->blk, true);
+
     return 0;
 err:
     vduse_dev_destroy(vblk_exp->dev);
@@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
     int ret;
 
+    assert(qatomic_read(&vblk_exp->inflight) == 0);
+
+    vduse_blk_detach_ctx(vblk_exp);
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vblk_exp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
@@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     g_free(vblk_exp->handler.serial);
 }
 
+/* Called with exp->ctx acquired */
 static void vduse_blk_exp_request_shutdown(BlockExport *exp)
 {
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
 
-    aio_context_acquire(vblk_exp->export.ctx);
-    vduse_blk_detach_ctx(vblk_exp);
-    aio_context_acquire(vblk_exp->export.ctx);
+    vduse_blk_stop_virtqueues(vblk_exp);
 }
 
 const BlockExportDriver blk_exp_vduse_blk = {
-- 
2.39.2
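Stepping back from the diff, the drain pattern this patch adopts has a
small general shape. The sketch below is an editorial summary, not code
from the series (Example and its callbacks are hypothetical names); it
uses the real BlockDevOps callbacks and qatomic helpers seen above:

  typedef struct Example {
      unsigned int in_flight; /* atomic */
  } Example;

  static void example_drained_begin(void *opaque)
  {
      /* stop accepting new requests, e.g. unregister kick fd handlers */
  }

  static void example_drained_end(void *opaque)
  {
      /* resume accepting requests */
  }

  static bool example_drained_poll(void *opaque)
  {
      Example *e = opaque;
      return qatomic_read(&e->in_flight) > 0; /* true: keep waiting */
  }

  static const BlockDevOps example_dev_ops = {
      .drained_begin = example_drained_begin,
      .drained_end   = example_drained_end,
      .drained_poll  = example_drained_poll,
  };

  /* registered with: blk_set_dev_ops(blk, &example_dev_ops, e); */

bdrv_drained_begin() invokes .drained_begin() and then waits until
.drained_poll() returns false, so in-flight requests finish before the
drained section proceeds.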
From nobody Sun May 12 22:47:25 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH 11/13] block/fuse: take AioContext lock around blk_exp_ref/unref()
Date: Mon, 3 Apr 2023 14:30:02 -0400
Message-Id: <20230403183004.347205-12-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

These functions must be called with the AioContext acquired:

  /* Callers must hold exp->ctx lock */
  void blk_exp_ref(BlockExport *exp)
  ...
  /* Callers must hold exp->ctx lock */
  void blk_exp_unref(BlockExport *exp)

Signed-off-by: Stefan Hajnoczi
---
 block/export/fuse.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 06fa41079e..18394f9e07 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -244,7 +244,9 @@ static void read_from_fuse_export(void *opaque)
     FuseExport *exp = opaque;
     int ret;
 
+    aio_context_acquire(exp->common.ctx);
     blk_exp_ref(&exp->common);
+    aio_context_release(exp->common.ctx);
 
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
@@ -256,7 +258,9 @@ static void read_from_fuse_export(void *opaque)
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
 
 out:
+    aio_context_acquire(exp->common.ctx);
     blk_exp_unref(&exp->common);
+    aio_context_release(exp->common.ctx);
 }
 
 static void fuse_export_shutdown(BlockExport *blk_exp)
-- 
2.39.2
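As a usage note for the rule above, a caller that does not already hold
the export's AioContext lock brackets the refcount operations, exactly as
read_from_fuse_export() now does. This is a sketch, not code from the
patch, and example_ref_unref is a hypothetical name:

  static void example_ref_unref(BlockExport *exp)
  {
      aio_context_acquire(exp->ctx);
      blk_exp_ref(exp);        /* safe: exp->ctx lock held */
      aio_context_release(exp->ctx);

      /* ... do work that must not outlive the export ... */

      aio_context_acquire(exp->ctx);
      blk_exp_unref(exp);
      aio_context_release(exp->ctx);
  }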
From nobody Sun May 12 22:47:25 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Subject: [PATCH 12/13] block/fuse: do not set is_external=true on FUSE fd
Date: Mon, 3 Apr 2023 14:30:03 -0400
Message-Id: <20230403183004.347205-13-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>

This is part of ongoing work to remove the aio_disable_external() API.

Use BlockDevOps .drained_begin/end/poll() instead of
aio_set_fd_handler(is_external=true).

As a side effect the FUSE export now follows AioContext changes like the
other export types.
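The in-flight accounting that backs .drained_poll() follows a common
shape. A minimal sketch (illustration only; request_begin/request_end are
hypothetical helper names, while the qatomic calls and aio_wait_kick() are
the real primitives used in the diff below):

  static unsigned int in_flight; /* accessed only with qatomic_*() */

  static void request_begin(void)
  {
      qatomic_inc(&in_flight);
  }

  static void request_end(void)
  {
      if (qatomic_fetch_dec(&in_flight) == 1) {
          aio_wait_kick(); /* wake AIO_WAIT_WHILE() in the drain loop */
      }
  }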
Signed-off-by: Stefan Hajnoczi
---
 block/export/fuse.c | 58 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 2 deletions(-)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 18394f9e07..83bccf046b 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -50,6 +50,7 @@ typedef struct FuseExport {
 
     struct fuse_session *fuse_session;
     struct fuse_buf fuse_buf;
+    unsigned int in_flight; /* atomic */
     bool mounted, fd_handler_set_up;
 
     char *mountpoint;
@@ -78,6 +79,42 @@ static void read_from_fuse_export(void *opaque);
 static bool is_regular_file(const char *path, Error **errp);
 
 
+static void fuse_export_drained_begin(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       NULL, NULL, NULL, NULL, NULL);
+    exp->fd_handler_set_up = false;
+}
+
+static void fuse_export_drained_end(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    exp->common.ctx = blk_get_aio_context(exp->common.blk);
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       read_from_fuse_export, NULL, NULL, NULL, exp);
+    exp->fd_handler_set_up = true;
+}
+
+static bool fuse_export_drained_poll(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    return qatomic_read(&exp->in_flight) > 0;
+}
+
+static const BlockDevOps fuse_export_blk_dev_ops = {
+    .drained_begin = fuse_export_drained_begin,
+    .drained_end   = fuse_export_drained_end,
+    .drained_poll  = fuse_export_drained_poll,
+};
+
 static int fuse_export_create(BlockExport *blk_exp,
                               BlockExportOptions *blk_exp_args,
                               Error **errp)
@@ -101,6 +138,15 @@ static int fuse_export_create(BlockExport *blk_exp,
         }
     }
 
+    blk_set_dev_ops(exp->common.blk, &fuse_export_blk_dev_ops, exp);
+
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * the FUSE fd handler. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->common.blk, true);
+
     init_exports_table();
 
     /*
@@ -224,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), true,
+                       fuse_session_fd(exp->fuse_session), false,
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -248,6 +294,8 @@ static void read_from_fuse_export(void *opaque)
     blk_exp_ref(&exp->common);
     aio_context_release(exp->common.ctx);
 
+    qatomic_inc(&exp->in_flight);
+
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
     } while (ret == -EINTR);
@@ -258,6 +306,10 @@ static void read_from_fuse_export(void *opaque)
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
 
 out:
+    if (qatomic_fetch_dec(&exp->in_flight) == 1) {
+        aio_wait_kick(); /* wake AIO_WAIT_WHILE() */
+    }
+
     aio_context_acquire(exp->common.ctx);
     blk_exp_unref(&exp->common);
     aio_context_release(exp->common.ctx);
@@ -272,7 +324,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
     if (exp->fd_handler_set_up) {
         aio_set_fd_handler(exp->common.ctx,
-                           fuse_session_fd(exp->fuse_session), true,
+                           fuse_session_fd(exp->fuse_session), false,
                            NULL, NULL, NULL, NULL, NULL);
         exp->fd_handler_set_up = false;
     }
@@ -291,6 +343,8 @@ static void fuse_export_delete(BlockExport *blk_exp)
 {
     FuseExport *exp = container_of(blk_exp, FuseExport, common);
 
+    blk_set_dev_ops(exp->common.blk, NULL, NULL);
+
     if (exp->fuse_session) {
         if (exp->mounted) {
             fuse_session_unmount(exp->fuse_session);
-- 
2.39.2
From nobody Sun May 12 22:47:25 2024
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Tsirkin" , =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= , Anthony Perard Subject: [PATCH 13/13] aio: remove aio_disable_external() API Date: Mon, 3 Apr 2023 14:30:04 -0400 Message-Id: <20230403183004.347205-14-stefanha@redhat.com> In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com> References: <20230403183004.347205-1-stefanha@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1680546675740100025 Content-Type: text/plain; charset="utf-8" All callers now pass is_external=3Dfalse to aio_set_fd_handler() and aio_set_event_notifier(). The aio_disable_external() API that temporarily disables fd handlers that were registered is_external=3Dtrue is therefore dead code. Remove aio_disable_external(), aio_enable_external(), and the is_external arguments to aio_set_fd_handler() and aio_set_event_notifier(). The entire test-fdmon-epoll test is removed because its sole purpose was testing aio_disable_external(). Parts of this patch were generated using the following coccinelle (https://coccinelle.lip6.fr/) semantic patch: @@ expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_read= y, opaque; @@ - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io= _poll_ready, opaque) + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, = opaque) @@ expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready; @@ - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io= _poll_ready) + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready) Signed-off-by: Stefan Hajnoczi Reviewed-by: Juan Quintela --- include/block/aio.h | 55 -------------------------- util/aio-posix.h | 1 - block.c | 7 ---- block/blkio.c | 15 +++---- block/curl.c | 10 ++--- block/export/fuse.c | 8 ++-- block/export/vduse-blk.c | 10 ++--- block/io.c | 2 - block/io_uring.c | 4 +- block/iscsi.c | 3 +- block/linux-aio.c | 4 +- block/nfs.c | 5 +-- block/nvme.c | 8 ++-- block/ssh.c | 4 +- block/win32-aio.c | 6 +-- hw/i386/kvm/xen_xenstore.c | 2 +- hw/virtio/virtio.c | 6 +-- hw/xen/xen-bus.c | 6 +-- io/channel-command.c | 6 +-- io/channel-file.c | 3 +- io/channel-socket.c | 3 +- migration/rdma.c | 16 ++++---- tests/unit/test-aio.c | 27 +------------ tests/unit/test-fdmon-epoll.c | 73 ----------------------------------- util/aio-posix.c | 20 +++------- util/aio-win32.c | 8 +--- util/async.c | 3 +- util/fdmon-epoll.c | 10 ----- util/fdmon-io_uring.c | 8 +--- util/fdmon-poll.c | 3 +- util/main-loop.c | 7 ++-- util/qemu-coroutine-io.c | 7 ++-- util/vhost-user-server.c | 11 +++--- tests/unit/meson.build | 3 -- 34 files changed, 75 insertions(+), 289 deletions(-) delete mode 100644 tests/unit/test-fdmon-epoll.c diff --git a/include/block/aio.h b/include/block/aio.h index e267d918fd..d4ce01ea08 100644 --- a/include/block/aio.h +++ b/include/block/aio.h @@ -467,7 +467,6 @@ bool aio_poll(AioContext *ctx, bool blocking); */ void aio_set_fd_handler(AioContext *ctx, int fd, - bool is_external, IOHandler *io_read, IOHandler *io_write, AioPollFn *io_poll, @@ -483,7 +482,6 @@ void aio_set_fd_handler(AioContext *ctx, */ void aio_set_event_notifier(AioContext *ctx, EventNotifier *notifier, - bool is_external, EventNotifierHandler *io_read, AioPollFn *io_poll, EventNotifierHandler *io_poll_ready); @@ -612,59 +610,6 @@ static inline void aio_timer_init(AioContext *ctx, */ int64_t aio_compute_timeout(AioContext *ctx); =20 -/** - * 
- * aio_disable_external:
- * @ctx: the aio context
- *
- * Disable the further processing of external clients.
- */
-static inline void aio_disable_external(AioContext *ctx)
-{
-    qatomic_inc(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_enable_external:
- * @ctx: the aio context
- *
- * Enable the processing of external clients.
- */
-static inline void aio_enable_external(AioContext *ctx)
-{
-    int old;
-
-    old = qatomic_fetch_dec(&ctx->external_disable_cnt);
-    assert(old > 0);
-    if (old == 1) {
-        /* Kick event loop so it re-arms file descriptors */
-        aio_notify(ctx);
-    }
-}
-
-/**
- * aio_external_disabled:
- * @ctx: the aio context
- *
- * Return true if the external clients are disabled.
- */
-static inline bool aio_external_disabled(AioContext *ctx)
-{
-    return qatomic_read(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_node_check:
- * @ctx: the aio context
- * @is_external: Whether or not the checked node is an external event source.
- *
- * Check if the node's is_external flag is okay to be polled by the ctx at this
- * moment. True means green light.
- */
-static inline bool aio_node_check(AioContext *ctx, bool is_external)
-{
-    return !is_external || !qatomic_read(&ctx->external_disable_cnt);
-}
-
 /**
  * aio_co_schedule:
  * @ctx: the aio context
diff --git a/util/aio-posix.h b/util/aio-posix.h
index 80b927c7f4..4264c518be 100644
--- a/util/aio-posix.h
+++ b/util/aio-posix.h
@@ -38,7 +38,6 @@ struct AioHandler {
 #endif
     int64_t poll_idle_timeout; /* when to stop userspace polling */
     bool poll_ready; /* has polling detected an event? */
-    bool is_external;
 };
 
 /* Add a handler to a ready list */
diff --git a/block.c b/block.c
index a79297f99b..e9625ffeee 100644
--- a/block.c
+++ b/block.c
@@ -7254,9 +7254,6 @@ static void bdrv_detach_aio_context(BlockDriverState *bs)
         bs->drv->bdrv_detach_aio_context(bs);
     }
 
-    if (bs->quiesce_counter) {
-        aio_enable_external(bs->aio_context);
-    }
     bs->aio_context = NULL;
 }
 
@@ -7266,10 +7263,6 @@ static void bdrv_attach_aio_context(BlockDriverState *bs,
     BdrvAioNotifier *ban, *ban_tmp;
     GLOBAL_STATE_CODE();
 
-    if (bs->quiesce_counter) {
-        aio_disable_external(new_context);
-    }
-
     bs->aio_context = new_context;
 
     if (bs->drv && bs->drv->bdrv_attach_aio_context) {
diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..72117fa005 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -306,23 +306,18 @@ static void blkio_attach_aio_context(BlockDriverState *bs,
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(new_context,
-                       s->completion_fd,
-                       false,
-                       blkio_completion_fd_read,
-                       NULL,
+    aio_set_fd_handler(new_context, s->completion_fd,
+                       blkio_completion_fd_read, NULL,
                        blkio_completion_fd_poll,
-                       blkio_completion_fd_poll_ready,
-                       bs);
+                       blkio_completion_fd_poll_ready, bs);
 }
 
 static void blkio_detach_aio_context(BlockDriverState *bs)
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(bdrv_get_aio_context(bs),
-                       s->completion_fd,
-                       false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(bdrv_get_aio_context(bs), s->completion_fd, NULL, NULL,
+                       NULL, NULL, NULL);
 }
 
 /* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
diff --git a/block/curl.c b/block/curl.c
index 8bb39a134e..0fc42d03d7 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -132,7 +132,7 @@ static gboolean curl_drop_socket(void *key, void *value, void *opaque)
     CURLSocket *socket = value;
     BDRVCURLState *s = socket->s;
 
-    aio_set_fd_handler(s->aio_context, socket->fd, false,
aio_set_fd_handler(s->aio_context, socket->fd, NULL, NULL, NULL, NULL, NULL); return true; } @@ -180,20 +180,20 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd,= int action, trace_curl_sock_cb(action, (int)fd); switch (action) { case CURL_POLL_IN: - aio_set_fd_handler(s->aio_context, fd, false, + aio_set_fd_handler(s->aio_context, fd, curl_multi_do, NULL, NULL, NULL, socket); break; case CURL_POLL_OUT: - aio_set_fd_handler(s->aio_context, fd, false, + aio_set_fd_handler(s->aio_context, fd, NULL, curl_multi_do, NULL, NULL, socket); break; case CURL_POLL_INOUT: - aio_set_fd_handler(s->aio_context, fd, false, + aio_set_fd_handler(s->aio_context, fd, curl_multi_do, curl_multi_do, NULL, NULL, socket); break; case CURL_POLL_REMOVE: - aio_set_fd_handler(s->aio_context, fd, false, + aio_set_fd_handler(s->aio_context, fd, NULL, NULL, NULL, NULL, NULL); break; } diff --git a/block/export/fuse.c b/block/export/fuse.c index 83bccf046b..9d4c6c4dee 100644 --- a/block/export/fuse.c +++ b/block/export/fuse.c @@ -84,7 +84,7 @@ static void fuse_export_drained_begin(void *opaque) FuseExport *exp =3D opaque; =20 aio_set_fd_handler(exp->common.ctx, - fuse_session_fd(exp->fuse_session), false, + fuse_session_fd(exp->fuse_session), NULL, NULL, NULL, NULL, NULL); exp->fd_handler_set_up =3D false; } @@ -97,7 +97,7 @@ static void fuse_export_drained_end(void *opaque) exp->common.ctx =3D blk_get_aio_context(exp->common.blk); =20 aio_set_fd_handler(exp->common.ctx, - fuse_session_fd(exp->fuse_session), false, + fuse_session_fd(exp->fuse_session), read_from_fuse_export, NULL, NULL, NULL, exp); exp->fd_handler_set_up =3D true; } @@ -270,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const cha= r *mountpoint, g_hash_table_insert(exports, g_strdup(mountpoint), NULL); =20 aio_set_fd_handler(exp->common.ctx, - fuse_session_fd(exp->fuse_session), false, + fuse_session_fd(exp->fuse_session), read_from_fuse_export, NULL, NULL, NULL, exp); exp->fd_handler_set_up =3D true; =20 @@ -324,7 +324,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp) =20 if (exp->fd_handler_set_up) { aio_set_fd_handler(exp->common.ctx, - fuse_session_fd(exp->fuse_session), false, + fuse_session_fd(exp->fuse_session), NULL, NULL, NULL, NULL, NULL); exp->fd_handler_set_up =3D false; } diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c index 35dc8fcf45..94e2491e4a 100644 --- a/block/export/vduse-blk.c +++ b/block/export/vduse-blk.c @@ -141,7 +141,7 @@ static void vduse_blk_enable_queue(VduseDev *dev, Vduse= Virtq *vq) } =20 aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq), - false, on_vduse_vq_kick, NULL, NULL, NULL, vq); + on_vduse_vq_kick, NULL, NULL, NULL, vq); /* Make sure we don't miss any kick afer reconnecting */ eventfd_write(vduse_queue_get_fd(vq), 1); } @@ -155,7 +155,7 @@ static void vduse_blk_disable_queue(VduseDev *dev, Vdus= eVirtq *vq) return; } =20 - aio_set_fd_handler(vblk_exp->export.ctx, fd, false, + aio_set_fd_handler(vblk_exp->export.ctx, fd, NULL, NULL, NULL, NULL, NULL); } =20 @@ -174,7 +174,7 @@ static void on_vduse_dev_kick(void *opaque) static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx) { aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->de= v), - false, on_vduse_dev_kick, NULL, NULL, NULL, + on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev); =20 /* Virtqueues are handled by vduse_blk_drained_end() */ @@ -183,7 +183,7 @@ static void vduse_blk_attach_ctx(VduseBlkExport *vblk_e= xp, AioContext *ctx) static void 
vduse_blk_detach_ctx(VduseBlkExport *vblk_exp) { aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->de= v), - false, NULL, NULL, NULL, NULL, NULL); + NULL, NULL, NULL, NULL, NULL); =20 /* Virtqueues are handled by vduse_blk_drained_begin() */ } @@ -368,7 +368,7 @@ static int vduse_blk_exp_create(BlockExport *exp, Block= ExportOptions *opts, vduse_dev_setup_queue(vblk_exp->dev, i, queue_size); } =20 - aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false, + aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev); =20 blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detac= h, diff --git a/block/io.c b/block/io.c index db438c7657..5b0b126422 100644 --- a/block/io.c +++ b/block/io.c @@ -356,7 +356,6 @@ static void bdrv_do_drained_begin(BlockDriverState *bs,= BdrvChild *parent, =20 /* Stop things in parent-to-child order */ if (qatomic_fetch_inc(&bs->quiesce_counter) =3D=3D 0) { - aio_disable_external(bdrv_get_aio_context(bs)); bdrv_parent_drained_begin(bs, parent); if (bs->drv && bs->drv->bdrv_drain_begin) { bs->drv->bdrv_drain_begin(bs); @@ -409,7 +408,6 @@ static void bdrv_do_drained_end(BlockDriverState *bs, B= drvChild *parent) bs->drv->bdrv_drain_end(bs); } bdrv_parent_drained_end(bs, parent); - aio_enable_external(bdrv_get_aio_context(bs)); } } =20 diff --git a/block/io_uring.c b/block/io_uring.c index 989f9a99ed..b64a3e6285 100644 --- a/block/io_uring.c +++ b/block/io_uring.c @@ -406,7 +406,7 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs,= int fd, uint64_t offset, =20 void luring_detach_aio_context(LuringState *s, AioContext *old_context) { - aio_set_fd_handler(old_context, s->ring.ring_fd, false, + aio_set_fd_handler(old_context, s->ring.ring_fd, NULL, NULL, NULL, NULL, s); qemu_bh_delete(s->completion_bh); s->aio_context =3D NULL; @@ -416,7 +416,7 @@ void luring_attach_aio_context(LuringState *s, AioConte= xt *new_context) { s->aio_context =3D new_context; s->completion_bh =3D aio_bh_new(new_context, qemu_luring_completion_bh= , s); - aio_set_fd_handler(s->aio_context, s->ring.ring_fd, false, + aio_set_fd_handler(s->aio_context, s->ring.ring_fd, qemu_luring_completion_cb, NULL, qemu_luring_poll_cb, qemu_luring_poll_ready, s); } diff --git a/block/iscsi.c b/block/iscsi.c index 9fc0bed90b..34f97ab646 100644 --- a/block/iscsi.c +++ b/block/iscsi.c @@ -363,7 +363,6 @@ iscsi_set_events(IscsiLun *iscsilun) =20 if (ev !=3D iscsilun->events) { aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsi), - false, (ev & POLLIN) ? iscsi_process_read : NULL, (ev & POLLOUT) ? 
iscsi_process_write : NULL, NULL, NULL, @@ -1540,7 +1539,7 @@ static void iscsi_detach_aio_context(BlockDriverState= *bs) IscsiLun *iscsilun =3D bs->opaque; =20 aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsilun->iscsi= ), - false, NULL, NULL, NULL, NULL, NULL); + NULL, NULL, NULL, NULL, NULL); iscsilun->events =3D 0; =20 if (iscsilun->nop_timer) { diff --git a/block/linux-aio.c b/block/linux-aio.c index fc50cdd1bf..129908531a 100644 --- a/block/linux-aio.c +++ b/block/linux-aio.c @@ -443,7 +443,7 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset= , QEMUIOVector *qiov, =20 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context) { - aio_set_event_notifier(old_context, &s->e, false, NULL, NULL, NULL); + aio_set_event_notifier(old_context, &s->e, NULL, NULL, NULL); qemu_bh_delete(s->completion_bh); s->aio_context =3D NULL; } @@ -452,7 +452,7 @@ void laio_attach_aio_context(LinuxAioState *s, AioConte= xt *new_context) { s->aio_context =3D new_context; s->completion_bh =3D aio_bh_new(new_context, qemu_laio_completion_bh, = s); - aio_set_event_notifier(new_context, &s->e, false, + aio_set_event_notifier(new_context, &s->e, qemu_laio_completion_cb, qemu_laio_poll_cb, qemu_laio_poll_ready); diff --git a/block/nfs.c b/block/nfs.c index 351dc6ec8d..2f603bba42 100644 --- a/block/nfs.c +++ b/block/nfs.c @@ -195,7 +195,6 @@ static void nfs_set_events(NFSClient *client) int ev =3D nfs_which_events(client->context); if (ev !=3D client->events) { aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context= ), - false, (ev & POLLIN) ? nfs_process_read : NULL, (ev & POLLOUT) ? nfs_process_write : NULL, NULL, NULL, client); @@ -373,7 +372,7 @@ static void nfs_detach_aio_context(BlockDriverState *bs) NFSClient *client =3D bs->opaque; =20 aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context), - false, NULL, NULL, NULL, NULL, NULL); + NULL, NULL, NULL, NULL, NULL); client->events =3D 0; } =20 @@ -391,7 +390,7 @@ static void nfs_client_close(NFSClient *client) if (client->context) { qemu_mutex_lock(&client->mutex); aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context= ), - false, NULL, NULL, NULL, NULL, NULL); + NULL, NULL, NULL, NULL, NULL); qemu_mutex_unlock(&client->mutex); if (client->fh) { nfs_close(client->context, client->fh); diff --git a/block/nvme.c b/block/nvme.c index 5b744c2bda..17937d398d 100644 --- a/block/nvme.c +++ b/block/nvme.c @@ -862,7 +862,7 @@ static int nvme_init(BlockDriverState *bs, const char *= device, int namespace, } aio_set_event_notifier(bdrv_get_aio_context(bs), &s->irq_notifier[MSIX_SHARED_IRQ_IDX], - false, nvme_handle_event, nvme_poll_cb, + nvme_handle_event, nvme_poll_cb, nvme_poll_ready); =20 if (!nvme_identify(bs, namespace, errp)) { @@ -948,7 +948,7 @@ static void nvme_close(BlockDriverState *bs) g_free(s->queues); aio_set_event_notifier(bdrv_get_aio_context(bs), &s->irq_notifier[MSIX_SHARED_IRQ_IDX], - false, NULL, NULL, NULL); + NULL, NULL, NULL); event_notifier_cleanup(&s->irq_notifier[MSIX_SHARED_IRQ_IDX]); qemu_vfio_pci_unmap_bar(s->vfio, 0, s->bar0_wo_map, 0, sizeof(NvmeBar) + NVME_DOORBELL_SIZE); @@ -1546,7 +1546,7 @@ static void nvme_detach_aio_context(BlockDriverState = *bs) =20 aio_set_event_notifier(bdrv_get_aio_context(bs), &s->irq_notifier[MSIX_SHARED_IRQ_IDX], - false, NULL, NULL, NULL); + NULL, NULL, NULL); } =20 static void nvme_attach_aio_context(BlockDriverState *bs, @@ -1556,7 +1556,7 @@ static void nvme_attach_aio_context(BlockDriverState = *bs, =20 s->aio_context =3D new_context; 
aio_set_event_notifier(new_context, &s->irq_notifier[MSIX_SHARED_IRQ_I= DX], - false, nvme_handle_event, nvme_poll_cb, + nvme_handle_event, nvme_poll_cb, nvme_poll_ready); =20 for (unsigned i =3D 0; i < s->queue_count; i++) { diff --git a/block/ssh.c b/block/ssh.c index b3b3352075..2748253d4a 100644 --- a/block/ssh.c +++ b/block/ssh.c @@ -1019,7 +1019,7 @@ static void restart_coroutine(void *opaque) AioContext *ctx =3D bdrv_get_aio_context(bs); =20 trace_ssh_restart_coroutine(restart->co); - aio_set_fd_handler(ctx, s->sock, false, NULL, NULL, NULL, NULL, NULL); + aio_set_fd_handler(ctx, s->sock, NULL, NULL, NULL, NULL, NULL); =20 aio_co_wake(restart->co); } @@ -1049,7 +1049,7 @@ static coroutine_fn void co_yield(BDRVSSHState *s, Bl= ockDriverState *bs) trace_ssh_co_yield(s->sock, rd_handler, wr_handler); =20 aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock, - false, rd_handler, wr_handler, NULL, NULL, &restart= ); + rd_handler, wr_handler, NULL, NULL, &restart); qemu_coroutine_yield(); trace_ssh_co_yield_back(s->sock); } diff --git a/block/win32-aio.c b/block/win32-aio.c index ee87d6048f..6327861e1d 100644 --- a/block/win32-aio.c +++ b/block/win32-aio.c @@ -174,7 +174,7 @@ int win32_aio_attach(QEMUWin32AIOState *aio, HANDLE hfi= le) void win32_aio_detach_aio_context(QEMUWin32AIOState *aio, AioContext *old_context) { - aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL, NULL); + aio_set_event_notifier(old_context, &aio->e, NULL, NULL, NULL); aio->aio_ctx =3D NULL; } =20 @@ -182,8 +182,8 @@ void win32_aio_attach_aio_context(QEMUWin32AIOState *ai= o, AioContext *new_context) { aio->aio_ctx =3D new_context; - aio_set_event_notifier(new_context, &aio->e, false, - win32_aio_completion_cb, NULL, NULL); + aio_set_event_notifier(new_context, &aio->e, win32_aio_completion_cb, + NULL, NULL); } =20 QEMUWin32AIOState *win32_aio_init(void) diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c index 6e81bc8791..0b189c6ab8 100644 --- a/hw/i386/kvm/xen_xenstore.c +++ b/hw/i386/kvm/xen_xenstore.c @@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Erro= r **errp) error_setg(errp, "Xenstore evtchn port init failed"); return; } - aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), fa= lse, + aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), xen_xenstore_event, NULL, NULL, NULL, s); =20 s->impl =3D xs_impl_create(xen_domid); diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c index dcd7aabb4e..6125e4d556 100644 --- a/hw/virtio/virtio.c +++ b/hw/virtio/virtio.c @@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(E= ventNotifier *n) =20 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx) { - aio_set_event_notifier(ctx, &vq->host_notifier, false, + aio_set_event_notifier(ctx, &vq->host_notifier, virtio_queue_host_notifier_read, virtio_queue_host_notifier_aio_poll, virtio_queue_host_notifier_aio_poll_ready); @@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueu= e *vq, AioContext *ctx) */ void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioConte= xt *ctx) { - aio_set_event_notifier(ctx, &vq->host_notifier, false, + aio_set_event_notifier(ctx, &vq->host_notifier, virtio_queue_host_notifier_read, NULL, NULL); } =20 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx) { - aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NUL= L); + aio_set_event_notifier(ctx, &vq->host_notifier, NULL, NULL, NULL); /* Test and 
clear notifier before after disabling event, * in case poll callback didn't have time to run. */ virtio_queue_host_notifier_read(&vq->host_notifier); diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c index c4fd26abe1..7b5c338ea1 100644 --- a/hw/xen/xen-bus.c +++ b/hw/xen/xen-bus.c @@ -842,11 +842,11 @@ void xen_device_set_event_channel_context(XenDevice *= xendev, } =20 if (channel->ctx) - aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),= false, + aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), NULL, NULL, NULL, NULL, NULL); =20 channel->ctx =3D ctx; - aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), fal= se, + aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), xen_device_event, NULL, xen_device_poll, NULL, chan= nel); } =20 @@ -920,7 +920,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev, =20 QLIST_REMOVE(channel, list); =20 - aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), fal= se, + aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), NULL, NULL, NULL, NULL, NULL); =20 if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) { diff --git a/io/channel-command.c b/io/channel-command.c index e7edd091af..7ed726c802 100644 --- a/io/channel-command.c +++ b/io/channel-command.c @@ -337,10 +337,8 @@ static void qio_channel_command_set_aio_fd_handler(QIO= Channel *ioc, void *opaque) { QIOChannelCommand *cioc =3D QIO_CHANNEL_COMMAND(ioc); - aio_set_fd_handler(ctx, cioc->readfd, false, - io_read, NULL, NULL, NULL, opaque); - aio_set_fd_handler(ctx, cioc->writefd, false, - NULL, io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, cioc->readfd, io_read, NULL, NULL, NULL, opaqu= e); + aio_set_fd_handler(ctx, cioc->writefd, NULL, io_write, NULL, NULL, opa= que); } =20 =20 diff --git a/io/channel-file.c b/io/channel-file.c index d76663e6ae..8b5821f452 100644 --- a/io/channel-file.c +++ b/io/channel-file.c @@ -198,8 +198,7 @@ static void qio_channel_file_set_aio_fd_handler(QIOChan= nel *ioc, void *opaque) { QIOChannelFile *fioc =3D QIO_CHANNEL_FILE(ioc); - aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write, - NULL, NULL, opaque); + aio_set_fd_handler(ctx, fioc->fd, io_read, io_write, NULL, NULL, opaqu= e); } =20 static GSource *qio_channel_file_create_watch(QIOChannel *ioc, diff --git a/io/channel-socket.c b/io/channel-socket.c index b0ea7d48b3..d99945ebec 100644 --- a/io/channel-socket.c +++ b/io/channel-socket.c @@ -899,8 +899,7 @@ static void qio_channel_socket_set_aio_fd_handler(QIOCh= annel *ioc, void *opaque) { QIOChannelSocket *sioc =3D QIO_CHANNEL_SOCKET(ioc); - aio_set_fd_handler(ctx, sioc->fd, false, - io_read, io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, sioc->fd, io_read, io_write, NULL, NULL, opaqu= e); } =20 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc, diff --git a/migration/rdma.c b/migration/rdma.c index df646be35e..aee41ca43e 100644 --- a/migration/rdma.c +++ b/migration/rdma.c @@ -3104,15 +3104,15 @@ static void qio_channel_rdma_set_aio_fd_handler(QIO= Channel *ioc, { QIOChannelRDMA *rioc =3D QIO_CHANNEL_RDMA(ioc); if (io_read) { - aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, - false, io_read, io_write, NULL, NULL, opaque); - aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, - false, io_read, io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_re= ad, + io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, 
io_re= ad, + io_write, NULL, NULL, opaque); } else { - aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, - false, io_read, io_write, NULL, NULL, opaque); - aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, - false, io_read, io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_r= ead, + io_write, NULL, NULL, opaque); + aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_r= ead, + io_write, NULL, NULL, opaque); } } =20 diff --git a/tests/unit/test-aio.c b/tests/unit/test-aio.c index 321d7ab01a..519440eed3 100644 --- a/tests/unit/test-aio.c +++ b/tests/unit/test-aio.c @@ -130,7 +130,7 @@ static void *test_acquire_thread(void *opaque) static void set_event_notifier(AioContext *ctx, EventNotifier *notifier, EventNotifierHandler *handler) { - aio_set_event_notifier(ctx, notifier, false, handler, NULL, NULL); + aio_set_event_notifier(ctx, notifier, handler, NULL, NULL); } =20 static void dummy_notifier_read(EventNotifier *n) @@ -383,30 +383,6 @@ static void test_flush_event_notifier(void) event_notifier_cleanup(&data.e); } =20 -static void test_aio_external_client(void) -{ - int i, j; - - for (i =3D 1; i < 3; i++) { - EventNotifierTestData data =3D { .n =3D 0, .active =3D 10, .auto_s= et =3D true }; - event_notifier_init(&data.e, false); - aio_set_event_notifier(ctx, &data.e, true, event_ready_cb, NULL, N= ULL); - event_notifier_set(&data.e); - for (j =3D 0; j < i; j++) { - aio_disable_external(ctx); - } - for (j =3D 0; j < i; j++) { - assert(!aio_poll(ctx, false)); - assert(event_notifier_test_and_clear(&data.e)); - event_notifier_set(&data.e); - aio_enable_external(ctx); - } - assert(aio_poll(ctx, false)); - set_event_notifier(ctx, &data.e, NULL); - event_notifier_cleanup(&data.e); - } -} - static void test_wait_event_notifier_noflush(void) { EventNotifierTestData data =3D { .n =3D 0 }; @@ -935,7 +911,6 @@ int main(int argc, char **argv) g_test_add_func("/aio/event/wait", test_wait_event_notifi= er); g_test_add_func("/aio/event/wait/no-flush-cb", test_wait_event_notifi= er_noflush); g_test_add_func("/aio/event/flush", test_flush_event_notif= ier); - g_test_add_func("/aio/external-client", test_aio_external_clie= nt); g_test_add_func("/aio/timer/schedule", test_timer_schedule); =20 g_test_add_func("/aio/coroutine/queue-chaining", test_queue_chaining); diff --git a/tests/unit/test-fdmon-epoll.c b/tests/unit/test-fdmon-epoll.c deleted file mode 100644 index ef5a856d09..0000000000 --- a/tests/unit/test-fdmon-epoll.c +++ /dev/null @@ -1,73 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* - * fdmon-epoll tests - * - * Copyright (c) 2020 Red Hat, Inc. 
- */ - -#include "qemu/osdep.h" -#include "block/aio.h" -#include "qapi/error.h" -#include "qemu/main-loop.h" - -static AioContext *ctx; - -static void dummy_fd_handler(EventNotifier *notifier) -{ - event_notifier_test_and_clear(notifier); -} - -static void add_event_notifiers(EventNotifier *notifiers, size_t n) -{ - for (size_t i =3D 0; i < n; i++) { - event_notifier_init(¬ifiers[i], false); - aio_set_event_notifier(ctx, ¬ifiers[i], false, - dummy_fd_handler, NULL, NULL); - } -} - -static void remove_event_notifiers(EventNotifier *notifiers, size_t n) -{ - for (size_t i =3D 0; i < n; i++) { - aio_set_event_notifier(ctx, ¬ifiers[i], false, NULL, NULL, NULL= ); - event_notifier_cleanup(¬ifiers[i]); - } -} - -/* Check that fd handlers work when external clients are disabled */ -static void test_external_disabled(void) -{ - EventNotifier notifiers[100]; - - /* fdmon-epoll is only enabled when many fd handlers are registered */ - add_event_notifiers(notifiers, G_N_ELEMENTS(notifiers)); - - event_notifier_set(¬ifiers[0]); - assert(aio_poll(ctx, true)); - - aio_disable_external(ctx); - event_notifier_set(¬ifiers[0]); - assert(aio_poll(ctx, true)); - aio_enable_external(ctx); - - remove_event_notifiers(notifiers, G_N_ELEMENTS(notifiers)); -} - -int main(int argc, char **argv) -{ - /* - * This code relies on the fact that fdmon-io_uring disables itself wh= en - * the glib main loop is in use. The main loop uses fdmon-poll and upg= rades - * to fdmon-epoll when the number of fds exceeds a threshold. - */ - qemu_init_main_loop(&error_fatal); - ctx =3D qemu_get_aio_context(); - - while (g_main_context_iteration(NULL, false)) { - /* Do nothing */ - } - - g_test_init(&argc, &argv, NULL); - g_test_add_func("/fdmon-epoll/external-disabled", test_external_disabl= ed); - return g_test_run(); -} diff --git a/util/aio-posix.c b/util/aio-posix.c index a8be940f76..934b1bbb85 100644 --- a/util/aio-posix.c +++ b/util/aio-posix.c @@ -99,7 +99,6 @@ static bool aio_remove_fd_handler(AioContext *ctx, AioHan= dler *node) =20 void aio_set_fd_handler(AioContext *ctx, int fd, - bool is_external, IOHandler *io_read, IOHandler *io_write, AioPollFn *io_poll, @@ -144,7 +143,6 @@ void aio_set_fd_handler(AioContext *ctx, new_node->io_poll =3D io_poll; new_node->io_poll_ready =3D io_poll_ready; new_node->opaque =3D opaque; - new_node->is_external =3D is_external; =20 if (is_new) { new_node->pfd.fd =3D fd; @@ -196,12 +194,11 @@ static void aio_set_fd_poll(AioContext *ctx, int fd, =20 void aio_set_event_notifier(AioContext *ctx, EventNotifier *notifier, - bool is_external, EventNotifierHandler *io_read, AioPollFn *io_poll, EventNotifierHandler *io_poll_ready) { - aio_set_fd_handler(ctx, event_notifier_get_fd(notifier), is_external, + aio_set_fd_handler(ctx, event_notifier_get_fd(notifier), (IOHandler *)io_read, NULL, io_poll, (IOHandler *)io_poll_ready, notifier); } @@ -285,13 +282,11 @@ bool aio_pending(AioContext *ctx) =20 /* TODO should this check poll ready? 
*/ revents =3D node->pfd.revents & node->pfd.events; - if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read && - aio_node_check(ctx, node->is_external)) { + if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) { result =3D true; break; } - if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write && - aio_node_check(ctx, node->is_external)) { + if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) { result =3D true; break; } @@ -350,9 +345,7 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHa= ndler *node) QLIST_INSERT_HEAD(&ctx->poll_aio_handlers, node, node_poll); } if (!QLIST_IS_INSERTED(node, node_deleted) && - poll_ready && revents =3D=3D 0 && - aio_node_check(ctx, node->is_external) && - node->io_poll_ready) { + poll_ready && revents =3D=3D 0 && node->io_poll_ready) { node->io_poll_ready(node->opaque); =20 /* @@ -364,7 +357,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHa= ndler *node) =20 if (!QLIST_IS_INSERTED(node, node_deleted) && (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) && - aio_node_check(ctx, node->is_external) && node->io_read) { node->io_read(node->opaque); =20 @@ -375,7 +367,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHa= ndler *node) } if (!QLIST_IS_INSERTED(node, node_deleted) && (revents & (G_IO_OUT | G_IO_ERR)) && - aio_node_check(ctx, node->is_external) && node->io_write) { node->io_write(node->opaque); progress =3D true; @@ -436,8 +427,7 @@ static bool run_poll_handlers_once(AioContext *ctx, AioHandler *tmp; =20 QLIST_FOREACH_SAFE(node, &ctx->poll_aio_handlers, node_poll, tmp) { - if (aio_node_check(ctx, node->is_external) && - node->io_poll(node->opaque)) { + if (node->io_poll(node->opaque)) { aio_add_poll_ready_handler(ready_list, node); =20 node->poll_idle_timeout =3D now + POLL_IDLE_INTERVAL_NS; diff --git a/util/aio-win32.c b/util/aio-win32.c index 6bded009a4..948ef47a4d 100644 --- a/util/aio-win32.c +++ b/util/aio-win32.c @@ -32,7 +32,6 @@ struct AioHandler { GPollFD pfd; int deleted; void *opaque; - bool is_external; QLIST_ENTRY(AioHandler) node; }; =20 @@ -64,7 +63,6 @@ static void aio_remove_fd_handler(AioContext *ctx, AioHan= dler *node) =20 void aio_set_fd_handler(AioContext *ctx, int fd, - bool is_external, IOHandler *io_read, IOHandler *io_write, AioPollFn *io_poll, @@ -111,7 +109,6 @@ void aio_set_fd_handler(AioContext *ctx, node->opaque =3D opaque; node->io_read =3D io_read; node->io_write =3D io_write; - node->is_external =3D is_external; =20 if (io_read) { bitmask |=3D FD_READ | FD_ACCEPT | FD_CLOSE; @@ -135,7 +132,6 @@ void aio_set_fd_handler(AioContext *ctx, =20 void aio_set_event_notifier(AioContext *ctx, EventNotifier *e, - bool is_external, EventNotifierHandler *io_notify, AioPollFn *io_poll, EventNotifierHandler *io_poll_ready) @@ -161,7 +157,6 @@ void aio_set_event_notifier(AioContext *ctx, node->e =3D e; node->pfd.fd =3D (uintptr_t)event_notifier_get_handle(e); node->pfd.events =3D G_IO_IN; - node->is_external =3D is_external; QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node); =20 g_source_add_poll(&ctx->source, &node->pfd); @@ -368,8 +363,7 @@ bool aio_poll(AioContext *ctx, bool blocking) /* fill fd sets */ count =3D 0; QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) { - if (!node->deleted && node->io_notify - && aio_node_check(ctx, node->is_external)) { + if (!node->deleted && node->io_notify) { assert(count < MAXIMUM_WAIT_OBJECTS); events[count++] =3D event_notifier_get_handle(node->e); } diff --git a/util/async.c b/util/async.c index 21016a1ac7..be0726038e 100644 --- 
a/util/async.c +++ b/util/async.c @@ -377,7 +377,7 @@ aio_ctx_finalize(GSource *source) g_free(bh); } =20 - aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL, NULL); + aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL, NULL); event_notifier_cleanup(&ctx->notifier); qemu_rec_mutex_destroy(&ctx->lock); qemu_lockcnt_destroy(&ctx->list_lock); @@ -561,7 +561,6 @@ AioContext *aio_context_new(Error **errp) QSLIST_INIT(&ctx->scheduled_coroutines); =20 aio_set_event_notifier(ctx, &ctx->notifier, - false, aio_context_notifier_cb, aio_context_notifier_poll, aio_context_notifier_poll_ready); diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c index e11a8a022e..ef3eacacd2 100644 --- a/util/fdmon-epoll.c +++ b/util/fdmon-epoll.c @@ -64,11 +64,6 @@ static int fdmon_epoll_wait(AioContext *ctx, AioHandlerL= ist *ready_list, int i, ret =3D 0; struct epoll_event events[128]; =20 - /* Fall back while external clients are disabled */ - if (qatomic_read(&ctx->external_disable_cnt)) { - return fdmon_poll_ops.wait(ctx, ready_list, timeout); - } - if (timeout > 0) { ret =3D qemu_poll_ns(&pfd, 1, timeout); if (ret > 0) { @@ -131,11 +126,6 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned= npfd) return false; } =20 - /* Do not upgrade while external clients are disabled */ - if (qatomic_read(&ctx->external_disable_cnt)) { - return false; - } - if (npfd >=3D EPOLL_ENABLE_THRESHOLD) { if (fdmon_epoll_try_enable(ctx)) { return true; diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c index ab43052dd7..17ec18b7bd 100644 --- a/util/fdmon-io_uring.c +++ b/util/fdmon-io_uring.c @@ -276,11 +276,6 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHan= dlerList *ready_list, unsigned wait_nr =3D 1; /* block until at least one cqe is ready */ int ret; =20 - /* Fall back while external clients are disabled */ - if (qatomic_read(&ctx->external_disable_cnt)) { - return fdmon_poll_ops.wait(ctx, ready_list, timeout); - } - if (timeout =3D=3D 0) { wait_nr =3D 0; /* non-blocking */ } else if (timeout > 0) { @@ -315,8 +310,7 @@ static bool fdmon_io_uring_need_wait(AioContext *ctx) return true; } =20 - /* Are we falling back to fdmon-poll? 
*/ - return qatomic_read(&ctx->external_disable_cnt); + return false; } =20 static const FDMonOps fdmon_io_uring_ops =3D { diff --git a/util/fdmon-poll.c b/util/fdmon-poll.c index 5fe3b47865..17df917cf9 100644 --- a/util/fdmon-poll.c +++ b/util/fdmon-poll.c @@ -65,8 +65,7 @@ static int fdmon_poll_wait(AioContext *ctx, AioHandlerLis= t *ready_list, assert(npfd =3D=3D 0); =20 QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) { - if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events - && aio_node_check(ctx, node->is_external)) { + if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events) { add_pollfd(node); } } diff --git a/util/main-loop.c b/util/main-loop.c index e180c85145..3e43a9cd38 100644 --- a/util/main-loop.c +++ b/util/main-loop.c @@ -642,14 +642,13 @@ void qemu_set_fd_handler(int fd, void *opaque) { iohandler_init(); - aio_set_fd_handler(iohandler_ctx, fd, false, - fd_read, fd_write, NULL, NULL, opaque); + aio_set_fd_handler(iohandler_ctx, fd, fd_read, fd_write, NULL, NULL, + opaque); } =20 void event_notifier_set_handler(EventNotifier *e, EventNotifierHandler *handler) { iohandler_init(); - aio_set_event_notifier(iohandler_ctx, e, false, - handler, NULL, NULL); + aio_set_event_notifier(iohandler_ctx, e, handler, NULL, NULL); } diff --git a/util/qemu-coroutine-io.c b/util/qemu-coroutine-io.c index d791932d63..364f4d5abf 100644 --- a/util/qemu-coroutine-io.c +++ b/util/qemu-coroutine-io.c @@ -74,8 +74,7 @@ typedef struct { static void fd_coroutine_enter(void *opaque) { FDYieldUntilData *data =3D opaque; - aio_set_fd_handler(data->ctx, data->fd, false, - NULL, NULL, NULL, NULL, NULL); + aio_set_fd_handler(data->ctx, data->fd, NULL, NULL, NULL, NULL, NULL); qemu_coroutine_enter(data->co); } =20 @@ -87,7 +86,7 @@ void coroutine_fn yield_until_fd_readable(int fd) data.ctx =3D qemu_get_current_aio_context(); data.co =3D qemu_coroutine_self(); data.fd =3D fd; - aio_set_fd_handler( - data.ctx, fd, false, fd_coroutine_enter, NULL, NULL, NULL, &data); + aio_set_fd_handler(data.ctx, fd, fd_coroutine_enter, NULL, NULL, NULL, + &data); qemu_coroutine_yield(); } diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c index 332aea9306..9ba19121a2 100644 --- a/util/vhost-user-server.c +++ b/util/vhost-user-server.c @@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt, vu_fd_watch->fd =3D fd; vu_fd_watch->cb =3D cb; qemu_socket_set_nonblock(fd); - aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler, + aio_set_fd_handler(server->ioc->ctx, fd, kick_handler, NULL, NULL, NULL, vu_fd_watch); vu_fd_watch->vu_dev =3D vu_dev; vu_fd_watch->pvt =3D pvt; @@ -299,8 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd) if (!vu_fd_watch) { return; } - aio_set_fd_handler(server->ioc->ctx, fd, false, - NULL, NULL, NULL, NULL, NULL); + aio_set_fd_handler(server->ioc->ctx, fd, NULL, NULL, NULL, NULL, NULL); =20 QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next); g_free(vu_fd_watch); @@ -362,7 +361,7 @@ void vhost_user_server_stop(VuServer *server) VuFdWatch *vu_fd_watch; =20 QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) { - aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false, + aio_set_fd_handler(server->ctx, vu_fd_watch->fd, NULL, NULL, NULL, NULL, vu_fd_watch); } =20 @@ -403,7 +402,7 @@ void vhost_user_server_attach_aio_context(VuServer *ser= ver, AioContext *ctx) qio_channel_attach_aio_context(server->ioc, ctx); =20 QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) { - aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL, + 
aio_set_fd_handler(ctx, vu_fd_watch->fd, kick_handler, NULL, NULL, NULL, vu_fd_watch); } =20 @@ -417,7 +416,7 @@ void vhost_user_server_detach_aio_context(VuServer *ser= ver) VuFdWatch *vu_fd_watch; =20 QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) { - aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false, + aio_set_fd_handler(server->ctx, vu_fd_watch->fd, NULL, NULL, NULL, NULL, vu_fd_watch); } =20 diff --git a/tests/unit/meson.build b/tests/unit/meson.build index fa63cfe6ff..2980170397 100644 --- a/tests/unit/meson.build +++ b/tests/unit/meson.build @@ -121,9 +121,6 @@ if have_block if nettle.found() or gcrypt.found() tests +=3D {'test-crypto-pbkdf': [io]} endif - if config_host_data.get('CONFIG_EPOLL_CREATE1') - tests +=3D {'test-fdmon-epoll': [testblock]} - endif endif =20 if have_system --=20 2.39.2
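For readers following along, here is a minimal sketch of the call-site
conversion the semantic patch above performs. The handler name (my_read_cb)
and opaque pointer (my_opaque) are hypothetical placeholders; only
aio_set_fd_handler() and its parameter order come from this patch:

    /* Before: callers passed an is_external flag (by now always false). */
    aio_set_fd_handler(ctx, fd, false,
                       my_read_cb, NULL, /* io_read, io_write */
                       NULL, NULL,       /* io_poll, io_poll_ready */
                       my_opaque);

    /* After: the flag is gone; all other arguments are unchanged. */
    aio_set_fd_handler(ctx, fd,
                       my_read_cb, NULL, /* io_read, io_write */
                       NULL, NULL,       /* io_poll, io_poll_ready */
                       my_opaque);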