From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org, Stefano Stabellini, Aarushi Mehta,
    Anthony Perard, Thomas Huth, Julia Suvorova, Paolo Bonzini,
    Fam Zheng, Hanna Reitz, Philippe Mathieu-Daudé, Stefano Garzarella,
    "Michael S. Tsirkin", Daniel P. Berrangé, Markus Armbruster,
    Cornelia Huck, Marc-André Lureau, xen-devel@lists.xenproject.org,
    Paul Durrant, Kevin Wolf, Richard Henderson, Eric Blake,
    Stefan Hajnoczi, Raphael Norwitz, kvm@vger.kernel.org
Subject: [PULL 2/8] block/nvme: convert to blk_io_plug_call() API
Date: Thu, 1 Jun 2023 11:25:46 -0400
Message-Id: <20230601152552.1603119-3-stefanha@redhat.com>
In-Reply-To: <20230601152552.1603119-1-stefanha@redhat.com>
References: <20230601152552.1603119-1-stefanha@redhat.com>

Stop using the .bdrv_co_io_plug() API because it is not multi-queue block
layer friendly. Use the new blk_io_plug_call() API to batch I/O submission
instead.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Stefano Garzarella
Acked-by: Kevin Wolf
Message-id: 20230530180959.1108766-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi
---
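Note for reviewers who have not looked at the new API yet: the sketch below
is a self-contained toy model of the deferred-call batching this conversion
relies on, not QEMU's implementation. plug_begin()/plug_call()/plug_end()
are stand-ins for blk_io_plug()/blk_io_plug_call()/blk_io_unplug(), and the
real API additionally keeps its state per thread. The point it illustrates
is that repeated calls with the same fn/opaque pair are coalesced, so
several nvme_submit_command() calls on one queue end in a single doorbell
kick when the outermost unplug runs.

/*
 * Toy model of deferred-call batching -- NOT QEMU's implementation.
 * plug_begin()/plug_call()/plug_end() stand in for blk_io_plug(),
 * blk_io_plug_call() and blk_io_unplug().
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

typedef void (*DeferredFn)(void *opaque);

typedef struct {
    DeferredFn fn;
    void *opaque;
} DeferredCall;

static unsigned plug_depth;     /* nesting level of plug_begin() */
static DeferredCall calls[16];  /* callbacks queued for this batch */
static size_t num_calls;

static void plug_begin(void)
{
    plug_depth++;
}

/*
 * Defer fn(opaque) until the outermost plug_end().  Duplicate fn/opaque
 * pairs are coalesced so fn() runs only once per batch.  Outside a
 * plugged region the callback runs immediately.
 */
static void plug_call(DeferredFn fn, void *opaque)
{
    if (plug_depth == 0) {
        fn(opaque);
        return;
    }
    for (size_t i = 0; i < num_calls; i++) {
        if (calls[i].fn == fn && calls[i].opaque == opaque) {
            return; /* already queued for this batch */
        }
    }
    assert(num_calls < sizeof(calls) / sizeof(calls[0]));
    calls[num_calls++] = (DeferredCall){ fn, opaque };
}

static void plug_end(void)
{
    assert(plug_depth > 0);
    if (--plug_depth > 0) {
        return; /* still inside a nested plug section */
    }
    for (size_t i = 0; i < num_calls; i++) {
        calls[i].fn(calls[i].opaque);
    }
    num_calls = 0;
}

/* Stand-in for nvme_unplug_fn(): ring the doorbell once per batch. */
static void ring_doorbell(void *opaque)
{
    printf("doorbell kick for queue %s\n", (const char *)opaque);
}

int main(void)
{
    char q0[] = "q0";

    plug_begin();
    plug_call(ring_doorbell, q0);   /* three submissions on one queue... */
    plug_call(ring_doorbell, q0);
    plug_call(ring_doorbell, q0);
    plug_end();                     /* ...one doorbell kick */
    return 0;
}

Running it prints a single doorbell line for the three queued submissions,
which is the effect the conversion has on nvme_kick().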
 block/nvme.c       | 44 ++++++++++++--------------------------------
 block/trace-events |  1 -
 2 files changed, 12 insertions(+), 33 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index 17937d398d..7ca85bc44a 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -25,6 +25,7 @@
 #include "qemu/vfio-helpers.h"
 #include "block/block-io.h"
 #include "block/block_int.h"
+#include "sysemu/block-backend.h"
 #include "sysemu/replay.h"
 #include "trace.h"
 
@@ -119,7 +120,6 @@ struct BDRVNVMeState {
     int blkshift;
 
     uint64_t max_transfer;
-    bool plugged;
 
     bool supports_write_zeroes;
     bool supports_discard;
@@ -282,7 +282,7 @@ static void nvme_kick(NVMeQueuePair *q)
 {
     BDRVNVMeState *s = q->s;
 
-    if (s->plugged || !q->need_kick) {
+    if (!q->need_kick) {
         return;
     }
     trace_nvme_kick(s, q->index);
@@ -387,10 +387,6 @@ static bool nvme_process_completion(NVMeQueuePair *q)
     NvmeCqe *c;
 
     trace_nvme_process_completion(s, q->index, q->inflight);
-    if (s->plugged) {
-        trace_nvme_process_completion_queue_plugged(s, q->index);
-        return false;
-    }
 
     /*
      * Support re-entrancy when a request cb() function invokes aio_poll().
@@ -480,6 +476,15 @@ static void nvme_trace_command(const NvmeCmd *cmd)
     }
 }
 
+static void nvme_unplug_fn(void *opaque)
+{
+    NVMeQueuePair *q = opaque;
+
+    QEMU_LOCK_GUARD(&q->lock);
+    nvme_kick(q);
+    nvme_process_completion(q);
+}
+
 static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
                                 NvmeCmd *cmd, BlockCompletionFunc cb,
                                 void *opaque)
@@ -496,8 +501,7 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
            q->sq.tail * NVME_SQ_ENTRY_BYTES, cmd, sizeof(*cmd));
     q->sq.tail = (q->sq.tail + 1) % NVME_QUEUE_SIZE;
     q->need_kick++;
-    nvme_kick(q);
-    nvme_process_completion(q);
+    blk_io_plug_call(nvme_unplug_fn, q);
     qemu_mutex_unlock(&q->lock);
 }
 
@@ -1567,27 +1571,6 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
     }
 }
 
-static void coroutine_fn nvme_co_io_plug(BlockDriverState *bs)
-{
-    BDRVNVMeState *s = bs->opaque;
-    assert(!s->plugged);
-    s->plugged = true;
-}
-
-static void coroutine_fn nvme_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVNVMeState *s = bs->opaque;
-    assert(s->plugged);
-    s->plugged = false;
-    for (unsigned i = INDEX_IO(0); i < s->queue_count; i++) {
-        NVMeQueuePair *q = s->queues[i];
-        qemu_mutex_lock(&q->lock);
-        nvme_kick(q);
-        nvme_process_completion(q);
-        qemu_mutex_unlock(&q->lock);
-    }
-}
-
 static bool nvme_register_buf(BlockDriverState *bs, void *host, size_t size,
                               Error **errp)
 {
@@ -1664,9 +1647,6 @@ static BlockDriver bdrv_nvme = {
     .bdrv_detach_aio_context  = nvme_detach_aio_context,
     .bdrv_attach_aio_context  = nvme_attach_aio_context,
 
-    .bdrv_co_io_plug          = nvme_co_io_plug,
-    .bdrv_co_io_unplug        = nvme_co_io_unplug,
-
     .bdrv_register_buf        = nvme_register_buf,
     .bdrv_unregister_buf      = nvme_unregister_buf,
 };
diff --git a/block/trace-events b/block/trace-events
index 32665158d6..048ad27519 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -141,7 +141,6 @@ nvme_kick(void *s, unsigned q_index) "s %p q #%u"
 nvme_dma_flush_queue_wait(void *s) "s %p"
 nvme_error(int cmd_specific, int sq_head, int sqid, int cid, int status) "cmd_specific %d sq_head %d sqid %d cid %d status 0x%x"
 nvme_process_completion(void *s, unsigned q_index, int inflight) "s %p q #%u inflight %d"
-nvme_process_completion_queue_plugged(void *s, unsigned q_index) "s %p q #%u"
 nvme_complete_command(void *s, unsigned q_index, int cid) "s %p q #%u cid %d"
 nvme_submit_command(void *s, unsigned q_index, int cid) "s %p q #%u cid %d"
 nvme_submit_command_raw(int c0, int c1, int c2, int c3, int c4, int c5, int c6, int c7) "%02x %02x %02x %02x %02x %02x %02x %02x"
-- 
2.40.1