From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Liuxiangdong, Markus Armbruster, Harpreet Singh Anand, Eric Blake,
    Laurent Vivier, Parav Pandit, Cornelia Huck, Paolo Bonzini,
    Gautam Dawar, Eli Cohen, "Gonglei (Arei)", Zhu Lingshan,
    "Michael S. Tsirkin", Cindy Lu, Jason Wang
Subject: [RFC PATCH v9 20/23] vdpa: Buffer CVQ support on shadow virtqueue
Date: Wed, 6 Jul 2022 20:40:05 +0200
Message-Id: <20220706184008.1649478-21-eperezma@redhat.com>
In-Reply-To: <20220706184008.1649478-1-eperezma@redhat.com>
References: <20220706184008.1649478-1-eperezma@redhat.com>

Introduce control virtqueue support for the vDPA shadow virtqueue. This
is needed for advanced networking features like multiqueue.

The virtio-net control VQ copies the descriptors to QEMU's VA, so we
avoid TOCTOU races with the guest's or the device's memory every time
there is a device model change. When address space isolation is
implemented, this will also allow CVQ to have access to control
messages only.

To demonstrate command handling, VIRTIO_NET_F_CTRL_MACADDR is
implemented. If the virtio-net driver changes the MAC address, the
virtio-net device model is updated with the new one. Other CVQ commands
could be added here straightforwardly, but they have not been tested.
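For illustration only (not part of the patch): a minimal, self-contained
sketch of the two buffers the shadow CVQ path builds for a
VIRTIO_NET_CTRL_MAC_ADDR_SET command. The device is only ever shown the
QEMU-owned "out" copy and writes its one-byte ack into a second
QEMU-owned "in" buffer, so guest memory is not re-read after validation.
Struct layout and constants follow the virtio-net spec / standard
headers; the file name and example MAC are made up for the sketch.

/* cvq_mac_set_layout.c - layout sketch, build with: cc -o demo cvq_mac_set_layout.c */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Mirrors struct virtio_net_ctrl_hdr from the virtio-net spec */
struct virtio_net_ctrl_hdr {
    uint8_t class;   /* command class, e.g. VIRTIO_NET_CTRL_MAC */
    uint8_t cmd;     /* command within the class */
};

typedef uint8_t virtio_net_ctrl_ack;

#define VIRTIO_NET_CTRL_MAC          1   /* MAC filtering class */
#define VIRTIO_NET_CTRL_MAC_ADDR_SET 1   /* set the primary MAC address */
#define VIRTIO_NET_ERR               1   /* value preset before the device answers */

int main(void)
{
    /* Example MAC only; any unicast address works the same way */
    const uint8_t mac[6] = { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 };
    struct virtio_net_ctrl_hdr ctrl = {
        .class = VIRTIO_NET_CTRL_MAC,
        .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
    };
    /* Device-readable copy: ctrl header immediately followed by the payload */
    uint8_t out[sizeof(ctrl) + sizeof(mac)];
    /* Device-writable buffer: one ack byte, preset to "error" */
    virtio_net_ctrl_ack ack = VIRTIO_NET_ERR;

    memcpy(out, &ctrl, sizeof(ctrl));
    memcpy(out + sizeof(ctrl), mac, sizeof(mac));

    printf("out buffer: %zu bytes, in (ack) buffer: %zu byte, ack preset to %u\n",
           sizeof(out), sizeof(ack), (unsigned)ack);
    return 0;
}

The patch below linearizes whatever descriptor chain the guest provided
into this same shape before mapping it for the device.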
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h |   3 +
 hw/virtio/vhost-vdpa.c         |   5 +-
 net/vhost-vdpa.c               | 373 +++++++++++++++++++++++++++++++++
 3 files changed, 379 insertions(+), 2 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 7214eb47dc..1111d85643 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -15,6 +15,7 @@
 #include <gmodule.h>
 
 #include "hw/virtio/vhost-iova-tree.h"
+#include "hw/virtio/vhost-shadow-virtqueue.h"
 #include "hw/virtio/virtio.h"
 #include "standard-headers/linux/vhost_types.h"
 
@@ -35,6 +36,8 @@ typedef struct vhost_vdpa {
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
     GPtrArray *shadow_vqs;
+    const VhostShadowVirtqueueOps *shadow_vq_ops;
+    void *shadow_vq_ops_opaque;
     struct vhost_dev *dev;
     VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
 } VhostVDPA;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 613c3483b0..94bda07b4d 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -417,9 +417,10 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
 
     shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
     for (unsigned n = 0; n < hdev->nvqs; ++n) {
-        g_autoptr(VhostShadowVirtqueue) svq = vhost_svq_new(v->iova_tree, NULL,
-                                                            NULL);
+        g_autoptr(VhostShadowVirtqueue) svq = NULL;
 
+        svq = vhost_svq_new(v->iova_tree, v->shadow_vq_ops,
+                            v->shadow_vq_ops_opaque);
         if (unlikely(!svq)) {
             error_setg(errp, "Cannot create svq %u", n);
             return -1;
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index b0158f625e..e415cc8de5 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -11,11 +11,15 @@
 
 #include "qemu/osdep.h"
 #include "clients.h"
+#include "hw/virtio/virtio-net.h"
 #include "net/vhost_net.h"
 #include "net/vhost-vdpa.h"
 #include "hw/virtio/vhost-vdpa.h"
+#include "qemu/buffer.h"
 #include "qemu/config-file.h"
 #include "qemu/error-report.h"
+#include "qemu/log.h"
+#include "qemu/memalign.h"
 #include "qemu/option.h"
 #include "qapi/error.h"
 #include <linux/vhost.h>
@@ -25,6 +29,26 @@
 #include "monitor/monitor.h"
 #include "hw/virtio/vhost.h"
 
+typedef struct CVQElement {
+    /* Device's in and out buffer */
+    void *in_buf, *out_buf;
+
+    /* Optional guest element from where this cvqelement was created */
+    VirtQueueElement *guest_elem;
+
+    /* Control header sent by the guest. */
+    struct virtio_net_ctrl_hdr ctrl;
+
+    /* vhost-vdpa device, for cleanup reasons */
+    struct vhost_vdpa *vdpa;
+
+    /* Length of out data */
+    size_t out_len;
+
+    /* Copy of the out data sent by the guest excluding ctrl. */
+    uint8_t out_data[];
+} CVQElement;
+
 /* Todo:need to add the multiqueue support here */
 typedef struct VhostVDPAState {
     NetClientState nc;
@@ -187,6 +211,351 @@ static NetClientInfo net_vhost_vdpa_info = {
     .check_peer_type = vhost_vdpa_check_peer_type,
 };
 
+/**
+ * Unmap a descriptor chain of a SVQ element, optionally copying its in buffers
+ *
+ * @svq: Shadow VirtQueue
+ * @iova: SVQ IO Virtual address of descriptor
+ * @iov: Optional iovec to store device writable buffer
+ * @iov_cnt: iov length
+ * @buf_len: Length written by the device
+ *
+ * TODO: Use me! and adapt to net/vhost-vdpa format
+ * Print error message in case of error
+ */
+static void vhost_vdpa_cvq_unmap_buf(CVQElement *elem, void *addr)
+{
+    struct vhost_vdpa *v = elem->vdpa;
+    VhostIOVATree *tree = v->iova_tree;
+    DMAMap needle = {
+        /*
+         * No need to specify size or to look for more translations since
+         * this contiguous chunk was allocated by us.
+         */
+        .translated_addr = (hwaddr)(uintptr_t)addr,
+    };
+    const DMAMap *map = vhost_iova_tree_find_iova(tree, &needle);
+    int r;
+
+    if (unlikely(!map)) {
+        error_report("Cannot locate expected map");
+        goto err;
+    }
+
+    r = vhost_vdpa_dma_unmap(v, map->iova, map->size + 1);
+    if (unlikely(r != 0)) {
+        error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
+    }
+
+    vhost_iova_tree_remove(tree, map);
+
+err:
+    qemu_vfree(addr);
+}
+
+static void vhost_vdpa_cvq_delete_elem(CVQElement *elem)
+{
+    if (elem->out_buf) {
+        vhost_vdpa_cvq_unmap_buf(elem, g_steal_pointer(&elem->out_buf));
+    }
+
+    if (elem->in_buf) {
+        vhost_vdpa_cvq_unmap_buf(elem, g_steal_pointer(&elem->in_buf));
+    }
+
+    /* Guest element must have been returned to the guest or free otherway */
+    assert(!elem->guest_elem);
+
+    g_free(elem);
+}
+G_DEFINE_AUTOPTR_CLEANUP_FUNC(CVQElement, vhost_vdpa_cvq_delete_elem);
+
+static int vhost_vdpa_net_cvq_svq_inject(VhostShadowVirtqueue *svq,
+                                         CVQElement *cvq_elem,
+                                         size_t out_len)
+{
+    const struct iovec iov[] = {
+        {
+            .iov_base = cvq_elem->out_buf,
+            .iov_len = out_len,
+        },{
+            .iov_base = cvq_elem->in_buf,
+            .iov_len = sizeof(virtio_net_ctrl_ack),
+        }
+    };
+
+    return vhost_svq_inject(svq, iov, 1, 1, cvq_elem);
+}
+
+static void *vhost_vdpa_cvq_alloc_buf(struct vhost_vdpa *v,
+                                      const uint8_t *out_data, size_t data_len,
+                                      bool write)
+{
+    DMAMap map = {};
+    size_t buf_len = ROUND_UP(data_len, qemu_real_host_page_size());
+    void *buf = qemu_memalign(qemu_real_host_page_size(), buf_len);
+    int r;
+
+    if (!write) {
+        memcpy(buf, out_data, data_len);
+        memset(buf + data_len, 0, buf_len - data_len);
+    } else {
+        memset(buf, 0, data_len);
+    }
+
+    map.translated_addr = (hwaddr)(uintptr_t)buf;
+    map.size = buf_len - 1;
+    map.perm = write ? IOMMU_RW : IOMMU_RO,
+    r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
+    if (unlikely(r != IOVA_OK)) {
+        error_report("Cannot map injected element");
+        goto err;
+    }
+
+    r = vhost_vdpa_dma_map(v, map.iova, buf_len, buf, !write);
+    /* TODO: Handle error */
+    assert(r == 0);
+
+    return buf;
+
+err:
+    qemu_vfree(buf);
+    return NULL;
+}
+
+/**
+ * Allocate an element suitable to be injected
+ *
+ * @iov: The iovec
+ * @out_num: Number of out elements, placed first in iov
+ * @in_num: Number of in elements, placed after out ones
+ * @elem: Optional guest element from where this one was created
+ *
+ * TODO: Do we need a sg for out_num? I think not
+ */
+static CVQElement *vhost_vdpa_cvq_alloc_elem(VhostVDPAState *s,
+                                             struct virtio_net_ctrl_hdr ctrl,
+                                             const struct iovec *out_sg,
+                                             size_t out_num, size_t out_size,
+                                             VirtQueueElement *elem)
+{
+    g_autoptr(CVQElement) cvq_elem = g_malloc(sizeof(CVQElement) + out_size);
+    uint8_t *out_cursor = cvq_elem->out_data;
+    struct vhost_vdpa *v = &s->vhost_vdpa;
+
+    /* Start with a clean base */
+    memset(cvq_elem, 0, sizeof(*cvq_elem));
+    cvq_elem->vdpa = &s->vhost_vdpa;
+
+    /*
+     * Linearize element. If guest had a descriptor chain, we expose the device
+     * a single buffer.
+     */
+    cvq_elem->out_len = out_size;
+    memcpy(out_cursor, &ctrl, sizeof(ctrl));
+    out_size -= sizeof(ctrl);
+    out_cursor += sizeof(ctrl);
+    iov_to_buf(out_sg, out_num, 0, out_cursor, out_size);
+
+    cvq_elem->out_buf = vhost_vdpa_cvq_alloc_buf(v, cvq_elem->out_data,
+                                                 out_size, false);
+    assert(cvq_elem->out_buf);
+    cvq_elem->in_buf = vhost_vdpa_cvq_alloc_buf(v, NULL,
+                                                sizeof(virtio_net_ctrl_ack),
+                                                true);
+    assert(cvq_elem->in_buf);
+
+    cvq_elem->guest_elem = elem;
+    cvq_elem->ctrl = ctrl;
+    return g_steal_pointer(&cvq_elem);
+}
+
+/**
+ * iov_size with an upper limit. It's assumed UINT64_MAX is an invalid
+ * iov_size.
+ */
+static uint64_t vhost_vdpa_net_iov_len(const struct iovec *iov,
+                                       unsigned int iov_cnt, size_t max)
+{
+    uint64_t len = 0;
+
+    for (unsigned int i = 0; len < max && i < iov_cnt; i++) {
+        bool overflow = uadd64_overflow(iov[i].iov_len, len, &len);
+        if (unlikely(overflow)) {
+            return UINT64_MAX;
+        }
+    }
+
+    return len;
+}
+
+static CVQElement *vhost_vdpa_net_cvq_copy_elem(VhostVDPAState *s,
+                                                VirtQueueElement *elem)
+{
+    struct virtio_net_ctrl_hdr ctrl;
+    g_autofree struct iovec *iov = NULL;
+    struct iovec *iov2;
+    unsigned int out_num = elem->out_num;
+    size_t n, out_size = 0;
+
+    /* TODO: in buffer MUST have only a single entry with a char? size */
+    if (unlikely(vhost_vdpa_net_iov_len(elem->in_sg, elem->in_num,
+                                        sizeof(virtio_net_ctrl_ack))
+                 < sizeof(virtio_net_ctrl_ack))) {
+        return NULL;
+    }
+
+    n = iov_to_buf(elem->out_sg, out_num, 0, &ctrl, sizeof(ctrl));
+    if (unlikely(n != sizeof(ctrl))) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid out size\n", __func__);
+        return NULL;
+    }
+
+    iov = iov2 = g_memdup2(elem->out_sg, sizeof(struct iovec) * elem->out_num);
+    iov_discard_front(&iov2, &out_num, sizeof(ctrl));
+    switch (ctrl.class) {
+    case VIRTIO_NET_CTRL_MAC:
+        switch (ctrl.cmd) {
+        case VIRTIO_NET_CTRL_MAC_ADDR_SET:
+            if (likely(vhost_vdpa_net_iov_len(iov2, out_num, 6))) {
+                out_size += 6;
+                break;
+            }
+
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid mac size\n", __func__);
+            return NULL;
+        default:
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid mac cmd %u\n",
+                          __func__, ctrl.cmd);
+            return NULL;
+        };
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid control class %u\n",
+                      __func__, ctrl.class);
+        return NULL;
+    };
+
+    return vhost_vdpa_cvq_alloc_elem(s, ctrl, iov2, out_num,
+                                     sizeof(ctrl) + out_size, elem);
+}
+
+/**
+ * Validate and copy control virtqueue commands.
+ *
+ * Following QEMU guidelines, we offer a copy of the buffers to the device to
+ * prevent TOCTOU bugs. This functions check that the buffers length are
+ * expected too.
+ */
+static bool vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
+                                             VirtQueueElement *guest_elem,
+                                             void *opaque)
+{
+    VhostVDPAState *s = opaque;
+    g_autoptr(CVQElement) cvq_elem = NULL;
+    g_autofree VirtQueueElement *elem = guest_elem;
+    size_t out_size, in_len;
+    virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
+    int r;
+
+    cvq_elem = vhost_vdpa_net_cvq_copy_elem(s, elem);
+    if (unlikely(!cvq_elem)) {
+        goto err;
+    }
+
+    /* out size validated at vhost_vdpa_net_cvq_copy_elem */
+    out_size = iov_size(elem->out_sg, elem->out_num);
+    r = vhost_vdpa_net_cvq_svq_inject(svq, cvq_elem, out_size);
+    if (unlikely(r != 0)) {
+        goto err;
+    }
+
+    cvq_elem->guest_elem = g_steal_pointer(&elem);
+    /* Now CVQ elem belongs to SVQ */
+    g_steal_pointer(&cvq_elem);
+    return true;
+
+err:
+    in_len = iov_from_buf(elem->in_sg, elem->in_num, 0, &status,
+                          sizeof(status));
+    vhost_svq_push_elem(svq, elem, in_len);
+    return true;
+}
+
+static VirtQueueElement *vhost_vdpa_net_handle_ctrl_detach(void *elem_opaque)
+{
+    g_autoptr(CVQElement) cvq_elem = elem_opaque;
+    return g_steal_pointer(&cvq_elem->guest_elem);
+}
+
+static void vhost_vdpa_net_handle_ctrl_used(VhostShadowVirtqueue *svq,
+                                            void *vq_elem_opaque,
+                                            uint32_t dev_written)
+{
+    g_autoptr(CVQElement) cvq_elem = vq_elem_opaque;
+    virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
+    const struct iovec out = {
+        .iov_base = cvq_elem->out_data,
+        .iov_len = cvq_elem->out_len,
+    };
+    const DMAMap status_map_needle = {
+        .translated_addr = (hwaddr)(uintptr_t)cvq_elem->in_buf,
+        .size = sizeof(status),
+    };
+    const DMAMap *in_map;
+    const struct iovec in = {
+        .iov_base = &status,
+        .iov_len = sizeof(status),
+    };
+    g_autofree VirtQueueElement *guest_elem = NULL;
+
+    if (unlikely(dev_written < sizeof(status))) {
+        error_report("Insufficient written data (%llu)",
+                     (long long unsigned)dev_written);
+        goto out;
+    }
+
+    in_map = vhost_iova_tree_find_iova(svq->iova_tree, &status_map_needle);
+    if (unlikely(!in_map)) {
+        error_report("Cannot locate out mapping");
+        goto out;
+    }
+
+    switch (cvq_elem->ctrl.class) {
+    case VIRTIO_NET_CTRL_MAC_ADDR_SET:
+        break;
+    default:
+        error_report("Unexpected ctrl class %u", cvq_elem->ctrl.class);
+        goto out;
+    };
+
+    memcpy(&status, cvq_elem->in_buf, sizeof(status));
+    if (status != VIRTIO_NET_OK) {
+        goto out;
+    }
+
+    status = VIRTIO_NET_ERR;
+    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
+    if (status != VIRTIO_NET_OK) {
+        error_report("Bad CVQ processing in model");
+        goto out;
+    }
+
+out:
+    guest_elem = g_steal_pointer(&cvq_elem->guest_elem);
+    if (guest_elem) {
+        iov_from_buf(guest_elem->in_sg, guest_elem->in_num, 0, &status,
+                     sizeof(status));
+        vhost_svq_push_elem(svq, guest_elem, sizeof(status));
+    }
+}
+
+static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
+    .avail_handler = vhost_vdpa_net_handle_ctrl_avail,
+    .used_handler = vhost_vdpa_net_handle_ctrl_used,
+    .detach_handler = vhost_vdpa_net_handle_ctrl_detach,
+};
+
 static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                            const char *device,
                                            const char *name,
@@ -211,6 +580,10 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
 
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
+    if (!is_datapath) {
+        s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
+        s->vhost_vdpa.shadow_vq_ops_opaque = s;
+    }
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
         qemu_del_net_client(nc);
-- 
2.31.1
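
Usage note (not part of the patch): assuming a Linux guest running the
virtio-net driver with the control MAC feature negotiated, changing the
MAC address from inside the guest exercises exactly this CVQ path; the
interface name and address below are placeholders:

    ip link set dev eth0 address 52:54:00:12:34:56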