From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Julia Suvorova, Stefan Hajnoczi, Kevin Wolf, Peter Lieven,
    Coiby Xu, xen-devel@lists.xenproject.org, Richard Henderson,
    Stefano Garzarella, Eduardo Habkost, Philippe Mathieu-Daudé, Paul Durrant,
    "Richard W.M. Jones", "Dr. David Alan Gilbert", Marcel Apfelbaum,
    Aarushi Mehta, Stefano Stabellini, Fam Zheng, David Woodhouse, Stefan Weil,
    Juan Quintela, Xie Yongji, Hanna Reitz, Ronnie Sahlberg,
    eesposit@redhat.com, "Michael S. Tsirkin", Daniel P. Berrangé,
    Anthony Perard
Subject: [PATCH 06/13] block/export: stop using is_external in vhost-user-blk server
Date: Mon, 3 Apr 2023 14:29:57 -0400
Message-Id: <20230403183004.347205-7-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContexts may be processing I/O, not
just one.

Switch to BlockDevOps->drained_begin/end() callbacks.
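The net effect, condensed into one place for readers who want the shape of
the change at a glance (a sketch assembled from the hunks below, not code
added on top of this patch; vu_blk_drained_poll() is already defined in
block/export/vhost-user-blk-server.c):

/* Sketch of the new BlockDevOps wiring; mirrors the hunks below. */

/* Invoked by the block layer when a drained section begins */
static void vu_blk_drained_begin(void *opaque)
{
    VuBlkExport *vexp = opaque;

    /* Stop watching the vhost-user kick fds while the export is drained */
    vhost_user_server_detach_aio_context(&vexp->vu_server);
}

/* Invoked by the block layer when the drained section ends */
static void vu_blk_drained_end(void *opaque)
{
    VuBlkExport *vexp = opaque;

    /* Refresh the AioContext in case it changed during the drain */
    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
}

static const BlockDevOps vu_blk_dev_ops = {
    .drained_begin = vu_blk_drained_begin,
    .drained_end   = vu_blk_drained_end,
    .drained_poll  = vu_blk_drained_poll,
};

/* Registered once at export creation, replacing the AioContext notifier: */
blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);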
Signed-off-by: Stefan Hajnoczi
---
 block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
 util/vhost-user-server.c             | 10 +++----
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index dbf5207162..6e1bc196fb 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -207,22 +207,6 @@ static const VuDevIface vu_blk_iface = {
     .process_msg           = vu_blk_process_msg,
 };
 
-static void blk_aio_attached(AioContext *ctx, void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
-}
-
-static void blk_aio_detach(void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
-    vexp->export.ctx = NULL;
-}
-
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
                          struct virtio_blk_config *config,
@@ -254,6 +238,25 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -267,6 +270,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end   = vu_blk_drained_end,
     .drained_poll  = vu_blk_drained_poll,
 };
 
@@ -309,13 +314,9 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              logical_block_size, num_queues);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
-    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                 vexp);
 
     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
                                  num_queues, &vu_blk_iface, errp)) {
-        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
-                                        blk_aio_detach, vexp);
         blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
@@ -328,8 +329,6 @@ static void vu_blk_exp_delete(BlockExport *exp)
 {
     VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                    vexp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 2e6b640050..332aea9306 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
-- 
2.39.2