From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Julia Suvorova, Stefan Hajnoczi, Kevin Wolf,
    Peter Lieven, Coiby Xu, xen-devel@lists.xenproject.org,
    Richard Henderson, Stefano Garzarella, Eduardo Habkost,
    Philippe Mathieu-Daudé, Paul Durrant, "Richard W.M. Jones",
    "Dr. David Alan Gilbert", Marcel Apfelbaum, Aarushi Mehta,
    Stefano Stabellini, Fam Zheng, David Woodhouse, Stefan Weil,
    Juan Quintela, Xie Yongji, Hanna Reitz, Ronnie Sahlberg,
    eesposit@redhat.com, "Michael S. Tsirkin", Daniel P. Berrangé,
    Anthony Perard
Subject: [PATCH 07/13] virtio: do not set is_external=true on host notifiers
Date: Mon, 3 Apr 2023 14:29:58 -0400
Message-Id: <20230403183004.347205-8-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>

Host notifiers trigger virtqueue processing. There are critical sections
when new I/O requests must not be submitted because they would cause
interference. In the past this was solved using aio_set_event_notifier()'s
is_external=true argument, which disables fd monitoring between
aio_disable/enable_external() calls.

This API is not multi-queue block layer friendly because it requires
knowledge of the specific AioContext. In a multi-queue block layer world
any thread can submit I/O and we don't know which AioContexts are
currently involved.

virtio-blk and virtio-scsi are the only users that depend on
is_external=true. Both rely on the block layer, where we can take
advantage of the existing request queuing behavior that happens during
drained sections. The block layer's drained sections are the only user of
aio_disable_external().

After this patch the virtqueues will be processed during drained sections,
but submitted I/O requests will be queued in the BlockBackend. Queued
requests are resumed when the drained section ends. Therefore, the
BlockBackend is still quiesced during drained sections but we no longer
rely on is_external=true to achieve this.

Note that virtqueues have a finite size, so queuing requests does not lead
to unbounded memory usage.
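To make the queuing behavior concrete, below is a minimal standalone model
in plain C. It is NOT QEMU code: drained_begin()/drained_end(), submit(),
notifier_read() and VQ_SIZE are invented stand-ins, and a Linux eventfd
stands in for the host notifier. The point it demonstrates is the one
above: the notifier fd keeps being serviced during a drained section, but
requests are held in a bounded queue and resumed when the drain ends.

/*
 * drain_model.c - standalone sketch, NOT QEMU code.
 *
 * The host notifier fd stays monitored while "drained"; requests that
 * arrive during the drain are queued and resumed afterwards. The queue
 * is bounded because a virtqueue holds a finite number of requests.
 *
 * Build on Linux: cc -o drain_model drain_model.c
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

enum { VQ_SIZE = 4 };            /* finite virtqueue -> bounded queue */

static bool drained;             /* stand-in for a drained section */
static int queued[VQ_SIZE];
static int num_queued;
static int next_req;

static void submit(int req)
{
    printf("request %d submitted\n", req);
}

/* Runs every time the notifier fires, even while drained. */
static void notifier_read(int fd)
{
    uint64_t kicks;

    if (read(fd, &kicks, sizeof(kicks)) != sizeof(kicks)) {
        return;                  /* eventfd reads are all-or-nothing */
    }
    while (kicks-- > 0) {
        int req = next_req++;
        if (drained && num_queued < VQ_SIZE) {
            queued[num_queued++] = req;  /* hold it until the drain ends */
            printf("request %d queued (drained)\n", req);
        } else {
            submit(req);
        }
    }
}

static void drained_begin(void) { drained = true; }

static void drained_end(void)
{
    drained = false;
    for (int i = 0; i < num_queued; i++) {
        submit(queued[i]);       /* resume requests queued during drain */
    }
    num_queued = 0;
}

int main(void)
{
    int fd = eventfd(0, 0);
    uint64_t one = 1;

    drained_begin();
    if (write(fd, &one, sizeof(one)) != sizeof(one)) { /* guest "kick" */
        return 1;
    }
    notifier_read(fd);   /* fd still monitored: the request is queued */
    drained_end();       /* the queued request is submitted here */
    close(fd);
    return 0;
}

The design point: quiescing moves from the event loop (stop watching the
fd) into the request path (queue at the BlockBackend), so no submitter
needs to know which AioContext a queue is bound to.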
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/virtio/virtio.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 98c4819fcc..dcd7aabb4e 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
-- 
2.39.2
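For contrast, here is an equally hypothetical standalone sketch (again NOT
QEMU code; the Handler struct, dispatch() and the counter are invented for
illustration) of the semantics the series drops: handlers registered with
is_external=true are simply skipped while a disable counter is non-zero,
so the fd is not serviced at all, and the caller must know which
AioContext to disable.

/* external_model.c - standalone sketch, NOT QEMU code.
 * Build: cc -o external_model external_model.c */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;
    bool is_external;
    void (*cb)(const char *name);
} Handler;

static int external_disable_cnt;   /* bumped by "aio_disable_external()" */

static void dispatch(Handler *handlers, int n)
{
    for (int i = 0; i < n; i++) {
        if (handlers[i].is_external && external_disable_cnt > 0) {
            continue;   /* fd monitoring suppressed during the section */
        }
        handlers[i].cb(handlers[i].name);
    }
}

static void fire(const char *name) { printf("%s handled\n", name); }

int main(void)
{
    Handler handlers[] = {
        { "virtqueue kick", true,  fire },  /* old is_external=true case */
        { "internal event", false, fire },
    };

    external_disable_cnt++;        /* begin critical section */
    dispatch(handlers, 2);         /* only "internal event" runs */
    external_disable_cnt--;        /* end critical section */
    dispatch(handlers, 2);         /* both run */
    return 0;
}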