From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, David Hildenbrand, Philippe Mathieu-Daudé, Peter Xu,
	Kevin Wolf, Fam Zheng, "Michael S. Tsirkin", Stefan Hajnoczi,
	Hanna Reitz, pkrempa@redhat.com, John Snow
Subject: [PATCH v2 11/13] virtio-scsi: add iothread-vq-mapping parameter
Date: Tue, 11 Mar 2025 18:11:43 +0800
Message-ID: <20250311101145.1037388-12-stefanha@redhat.com>
In-Reply-To: <20250311101145.1037388-1-stefanha@redhat.com>
References: <20250311101145.1037388-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Allow virtio-scsi virtqueues to be assigned to different IOThreads. This
makes it possible to take advantage of host multi-queue block layer
scalability by assigning virtqueues that have affinity with vCPUs to
different IOThreads that have affinity with host CPUs.
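For example, a device's request virtqueues could be spread across two
IOThreads along the following lines. This is a hypothetical command line:
the iothread and device ids are placeholders, and the JSON list syntax is
modeled on the existing virtio-blk iothread-vq-mapping parameter, whose
IOThreadVirtQueueMappingList type this patch reuses for virtio-scsi:

  qemu-system-x86_64 \
      -object iothread,id=iothread0 \
      -object iothread,id=iothread1 \
      -device '{"driver": "virtio-scsi-pci", "id": "scsi0",
                "iothread-vq-mapping": [
                    {"iothread": "iothread0"},
                    {"iothread": "iothread1"}
                ]}'

When no explicit "vqs" index lists are given, virtqueues are expected to
be assigned round-robin across the listed IOThreads, as with virtio-blk.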
The same feature was introduced for virtio-blk in the past:
https://developers.redhat.com/articles/2024/09/05/scaling-virtio-blk-disk-io-iothread-virtqueue-mapping

Here are fio randread 4k iodepth=64 results from a 4 vCPU guest with an
Intel P4800X SSD:

iothreads  IOPS
------------------------------
1          189576
2          312698
4          346744

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/virtio/virtio-scsi.h |  5 +-
 hw/scsi/virtio-scsi-dataplane.c | 90 ++++++++++++++++++++++++---------
 hw/scsi/virtio-scsi.c           | 63 ++++++++++++++---------
 3 files changed, 107 insertions(+), 51 deletions(-)

diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 7b7e3ced7a..086201efa2 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -22,6 +22,7 @@
 #include "hw/virtio/virtio.h"
 #include "hw/scsi/scsi.h"
 #include "chardev/char-fe.h"
+#include "qapi/qapi-types-virtio.h"
 #include "system/iothread.h"
 
 #define TYPE_VIRTIO_SCSI_COMMON "virtio-scsi-common"
@@ -60,6 +61,7 @@ struct VirtIOSCSIConf {
     CharBackend chardev;
     uint32_t boot_tpgt;
     IOThread *iothread;
+    IOThreadVirtQueueMappingList *iothread_vq_mapping_list;
 };
 
 struct VirtIOSCSI;
@@ -97,7 +99,7 @@ struct VirtIOSCSI {
     QTAILQ_HEAD(, VirtIOSCSIReq) tmf_bh_list;
 
     /* Fields for dataplane below */
-    AioContext *ctx; /* one iothread per virtio-scsi-pci for now */
+    AioContext **vq_aio_context; /* per-virtqueue AioContext pointer */
 
     bool dataplane_started;
     bool dataplane_starting;
@@ -115,6 +117,7 @@ void virtio_scsi_common_realize(DeviceState *dev,
 void virtio_scsi_common_unrealize(DeviceState *dev);
 
 void virtio_scsi_dataplane_setup(VirtIOSCSI *s, Error **errp);
+void virtio_scsi_dataplane_cleanup(VirtIOSCSI *s);
 int virtio_scsi_dataplane_start(VirtIODevice *s);
 void virtio_scsi_dataplane_stop(VirtIODevice *s);
 
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index f49ab98ecc..6bb368c8a5 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -18,6 +18,7 @@
 #include "system/block-backend.h"
 #include "hw/scsi/scsi.h"
 #include "scsi/constants.h"
+#include "hw/virtio/iothread-vq-mapping.h"
 #include "hw/virtio/virtio-bus.h"
 
 /* Context: BQL held */
@@ -27,8 +28,16 @@ void virtio_scsi_dataplane_setup(VirtIOSCSI *s, Error **errp)
     VirtIODevice *vdev = VIRTIO_DEVICE(s);
     BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+    uint16_t num_vqs = vs->conf.num_queues + VIRTIO_SCSI_VQ_NUM_FIXED;
 
-    if (vs->conf.iothread) {
+    if (vs->conf.iothread && vs->conf.iothread_vq_mapping_list) {
+        error_setg(errp,
+                   "iothread and iothread-vq-mapping properties cannot be set "
+                   "at the same time");
+        return;
+    }
+
+    if (vs->conf.iothread || vs->conf.iothread_vq_mapping_list) {
         if (!k->set_guest_notifiers || !k->ioeventfd_assign) {
             error_setg(errp,
                        "device is incompatible with iothread "
@@ -39,15 +48,50 @@ void virtio_scsi_dataplane_setup(VirtIOSCSI *s, Error **errp)
             error_setg(errp, "ioeventfd is required for iothread");
             return;
         }
-        s->ctx = iothread_get_aio_context(vs->conf.iothread);
-    } else {
-        if (!virtio_device_ioeventfd_enabled(vdev)) {
+    }
+
+    s->vq_aio_context = g_new(AioContext *, num_vqs);
+
+    if (vs->conf.iothread_vq_mapping_list) {
+        if (!iothread_vq_mapping_apply(vs->conf.iothread_vq_mapping_list,
+                                       s->vq_aio_context, num_vqs, errp)) {
+            g_free(s->vq_aio_context);
+            s->vq_aio_context = NULL;
             return;
         }
-        s->ctx = qemu_get_aio_context();
+    } else if (vs->conf.iothread) {
+        AioContext *ctx = iothread_get_aio_context(vs->conf.iothread);
+        for (uint16_t i = 0; i < num_vqs; i++) {
+            s->vq_aio_context[i] = ctx;
+        }
+
+        /* Released in virtio_scsi_dataplane_cleanup() */
+        object_ref(OBJECT(vs->conf.iothread));
+    } else {
+        AioContext *ctx = qemu_get_aio_context();
+        for (unsigned i = 0; i < num_vqs; i++) {
+            s->vq_aio_context[i] = ctx;
+        }
     }
 }
 
+/* Context: BQL held */
+void virtio_scsi_dataplane_cleanup(VirtIOSCSI *s)
+{
+    VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+
+    if (vs->conf.iothread_vq_mapping_list) {
+        iothread_vq_mapping_cleanup(vs->conf.iothread_vq_mapping_list);
+    }
+
+    if (vs->conf.iothread) {
+        object_unref(OBJECT(vs->conf.iothread));
+    }
+
+    g_free(s->vq_aio_context);
+    s->vq_aio_context = NULL;
+}
+
 static int virtio_scsi_set_host_notifier(VirtIOSCSI *s, VirtQueue *vq, int n)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
@@ -66,31 +110,20 @@ static int virtio_scsi_set_host_notifier(VirtIOSCSI *s, VirtQueue *vq, int n)
 }
 
 /* Context: BH in IOThread */
-static void virtio_scsi_dataplane_stop_bh(void *opaque)
+static void virtio_scsi_dataplane_stop_vq_bh(void *opaque)
 {
-    VirtIOSCSI *s = opaque;
-    VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+    AioContext *ctx = qemu_get_current_aio_context();
+    VirtQueue *vq = opaque;
     EventNotifier *host_notifier;
-    int i;
 
-    virtio_queue_aio_detach_host_notifier(vs->ctrl_vq, s->ctx);
-    host_notifier = virtio_queue_get_host_notifier(vs->ctrl_vq);
+    virtio_queue_aio_detach_host_notifier(vq, ctx);
+    host_notifier = virtio_queue_get_host_notifier(vq);
 
     /*
      * Test and clear notifier after disabling event, in case poll callback
      * didn't have time to run.
      */
     virtio_queue_host_notifier_read(host_notifier);
-
-    virtio_queue_aio_detach_host_notifier(vs->event_vq, s->ctx);
-    host_notifier = virtio_queue_get_host_notifier(vs->event_vq);
-    virtio_queue_host_notifier_read(host_notifier);
-
-    for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_detach_host_notifier(vs->cmd_vqs[i], s->ctx);
-        host_notifier = virtio_queue_get_host_notifier(vs->cmd_vqs[i]);
-        virtio_queue_host_notifier_read(host_notifier);
-    }
 }
 
 /* Context: BQL held */
@@ -154,11 +187,14 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
     smp_wmb(); /* paired with aio_notify_accept() */
 
     if (s->bus.drain_count == 0) {
-        virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
-        virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
+        virtio_queue_aio_attach_host_notifier(vs->ctrl_vq,
+                                              s->vq_aio_context[0]);
+        virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq,
+                                                      s->vq_aio_context[1]);
 
         for (i = 0; i < vs->conf.num_queues; i++) {
-            virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+            AioContext *ctx = s->vq_aio_context[VIRTIO_SCSI_VQ_NUM_FIXED + i];
+            virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], ctx);
         }
     }
     return 0;
@@ -207,7 +243,11 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
     s->dataplane_stopping = true;
 
     if (s->bus.drain_count == 0) {
-        aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
+        for (i = 0; i < vs->conf.num_queues + VIRTIO_SCSI_VQ_NUM_FIXED; i++) {
+            VirtQueue *vq = virtio_get_queue(&vs->parent_obj, i);
+            AioContext *ctx = s->vq_aio_context[i];
+            aio_wait_bh_oneshot(ctx, virtio_scsi_dataplane_stop_vq_bh, vq);
+        }
     }
 
     blk_drain_all(); /* ensure there are no in-flight requests */
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 2045d27289..dabf8ace23 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -27,6 +27,7 @@
 #include "hw/qdev-properties.h"
 #include "hw/scsi/scsi.h"
 #include "scsi/constants.h"
+#include "hw/virtio/iothread-vq-mapping.h"
 #include "hw/virtio/virtio-bus.h"
 #include "hw/virtio/virtio-access.h"
 #include "trace.h"
@@ -318,13 +319,6 @@ static void virtio_scsi_cancel_notify(Notifier *notifier, void *data)
     g_free(n);
 }
 
-static inline void virtio_scsi_ctx_check(VirtIOSCSI *s, SCSIDevice *d)
-{
-    if (s->dataplane_started && d && blk_is_available(d->conf.blk)) {
-        assert(blk_get_aio_context(d->conf.blk) == s->ctx);
-    }
-}
-
 static void virtio_scsi_do_one_tmf_bh(VirtIOSCSIReq *req)
 {
     VirtIOSCSI *s = req->dev;
@@ -517,9 +511,11 @@ static void virtio_scsi_flush_defer_tmf_to_aio_context(VirtIOSCSI *s)
 
     assert(!s->dataplane_started);
 
-    if (s->ctx) {
+    for (uint32_t i = 0; i < s->parent_obj.conf.num_queues; i++) {
+        AioContext *ctx = s->vq_aio_context[VIRTIO_SCSI_VQ_NUM_FIXED + i];
+
         /* Our BH only runs after previously scheduled BHs */
-        aio_wait_bh_oneshot(s->ctx, dummy_bh, NULL);
+        aio_wait_bh_oneshot(ctx, dummy_bh, NULL);
     }
 }
 
@@ -575,7 +571,6 @@ static int virtio_scsi_do_tmf(VirtIOSCSI *s, VirtIOSCSIReq *req)
     AioContext *ctx;
     int ret = 0;
 
-    virtio_scsi_ctx_check(s, d);
     /* Here VIRTIO_SCSI_S_OK means "FUNCTION COMPLETE". */
     req->resp.tmf.response = VIRTIO_SCSI_S_OK;
 
@@ -639,6 +634,8 @@ static int virtio_scsi_do_tmf(VirtIOSCSI *s, VirtIOSCSIReq *req)
 
     case VIRTIO_SCSI_T_TMF_ABORT_TASK_SET:
     case VIRTIO_SCSI_T_TMF_CLEAR_TASK_SET: {
+        g_autoptr(GHashTable) aio_contexts = g_hash_table_new(NULL, NULL);
+
         if (!d) {
             goto fail;
         }
@@ -648,8 +645,15 @@ static int virtio_scsi_do_tmf(VirtIOSCSI *s, VirtIOSCSIReq *req)
 
         qatomic_inc(&req->remaining);
 
-        ctx = s->ctx ?: qemu_get_aio_context();
-        virtio_scsi_defer_tmf_to_aio_context(req, ctx);
+        for (uint32_t i = 0; i < s->parent_obj.conf.num_queues; i++) {
+            ctx = s->vq_aio_context[VIRTIO_SCSI_VQ_NUM_FIXED + i];
+
+            if (!g_hash_table_add(aio_contexts, ctx)) {
+                continue; /* skip previously added AioContext */
+            }
+
+            virtio_scsi_defer_tmf_to_aio_context(req, ctx);
+        }
 
         virtio_scsi_tmf_dec_remaining(req);
         ret = -EINPROGRESS;
@@ -770,9 +774,12 @@ static void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
  */
 static bool virtio_scsi_defer_to_dataplane(VirtIOSCSI *s)
 {
-    if (!s->ctx || s->dataplane_started) {
+    if (s->dataplane_started) {
         return false;
     }
+    if (s->vq_aio_context[0] == qemu_get_aio_context()) {
+        return false; /* not using IOThreads */
+    }
 
     virtio_device_start_ioeventfd(&s->parent_obj.parent_obj);
     return !s->dataplane_fenced;
@@ -946,7 +953,6 @@ static int virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
         virtio_scsi_complete_cmd_req(req);
         return -ENOENT;
     }
-    virtio_scsi_ctx_check(s, d);
     req->sreq = scsi_req_new(d, req->req.cmd.tag,
                              virtio_scsi_get_lun(req->req.cmd.lun),
                              req->req.cmd.cdb, vs->cdb_size, req);
@@ -1218,14 +1224,16 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
+    AioContext *ctx = s->vq_aio_context[0];
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    int ret;
 
-    if (s->ctx && !s->dataplane_fenced) {
-        ret = blk_set_aio_context(sd->conf.blk, s->ctx, errp);
-        if (ret < 0) {
-            return;
-        }
+    if (ctx != qemu_get_aio_context() && !s->dataplane_fenced) {
+        /*
+         * Try to make the BlockBackend's AioContext match ours. Ignore failure
+         * because I/O will still work although block jobs and other users
+         * might be slower when multiple AioContexts use a BlockBackend.
+         */
+        blk_set_aio_context(sd->conf.blk, ctx, errp);
     }
 
     if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
@@ -1260,7 +1268,7 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
 
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
 
-    if (s->ctx) {
+    if (s->vq_aio_context[0] != qemu_get_aio_context()) {
         /* If other users keep the BlockBackend in the iothread, that's ok */
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
     }
@@ -1294,7 +1302,7 @@ static void virtio_scsi_drained_begin(SCSIBus *bus)
 
     for (uint32_t i = 0; i < total_queues; i++) {
         VirtQueue *vq = virtio_get_queue(vdev, i);
-        virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+        virtio_queue_aio_detach_host_notifier(vq, s->vq_aio_context[i]);
     }
 }
 
@@ -1320,10 +1328,12 @@ static void virtio_scsi_drained_end(SCSIBus *bus)
 
     for (uint32_t i = 0; i < total_queues; i++) {
         VirtQueue *vq = virtio_get_queue(vdev, i);
+        AioContext *ctx = s->vq_aio_context[i];
+
         if (vq == vs->event_vq) {
-            virtio_queue_aio_attach_host_notifier_no_poll(vq, s->ctx);
+            virtio_queue_aio_attach_host_notifier_no_poll(vq, ctx);
         } else {
-            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+            virtio_queue_aio_attach_host_notifier(vq, ctx);
         }
     }
 }
@@ -1430,12 +1440,13 @@ void virtio_scsi_common_unrealize(DeviceState *dev)
     virtio_cleanup(vdev);
 }
 
+/* main loop */
 static void virtio_scsi_device_unrealize(DeviceState *dev)
 {
     VirtIOSCSI *s = VIRTIO_SCSI(dev);
 
     virtio_scsi_reset_tmf_bh(s);
-
+    virtio_scsi_dataplane_cleanup(s);
     qbus_set_hotplug_handler(BUS(&s->bus), NULL);
     virtio_scsi_common_unrealize(dev);
     qemu_mutex_destroy(&s->tmf_bh_lock);
@@ -1460,6 +1471,8 @@ static const Property virtio_scsi_properties[] = {
                       VIRTIO_SCSI_F_CHANGE, true),
     DEFINE_PROP_LINK("iothread", VirtIOSCSI, parent_obj.conf.iothread,
                      TYPE_IOTHREAD, IOThread *),
+    DEFINE_PROP_IOTHREAD_VQ_MAPPING_LIST("iothread-vq-mapping", VirtIOSCSI,
+                                         parent_obj.conf.iothread_vq_mapping_list),
 };
 
 static const VMStateDescription vmstate_virtio_scsi = {
-- 
2.48.1