From: "Longpeng(Mike)" <longpeng2@huawei.com>
To: qemu-devel@nongnu.org
Subject: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
Date: Wed, 8 Dec 2021 13:20:10 +0800
Message-ID: <20211208052010.1719-1-longpeng2@huawei.com>

From: Longpeng <longpeng2@huawei.com>

Hi guys,

This patch introduces the vhost-vdpa-net device, which is inspired by
vhost-user-blk and the proposal of the vhost-vdpa-blk device [1].

I've tested this patch on Huawei's offload card:

./x86_64-softmmu/qemu-system-x86_64 \
    -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0
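The device also exposes "queue-pairs" and "queue-size" properties (defined
later in this patch; queue-pairs is auto-detected from the vDPA device by
default and queue-size defaults to 256). A hypothetical, untested invocation
that overrides both might look like:

./x86_64-softmmu/qemu-system-x86_64 \
    -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0,queue-pairs=4,queue-size=512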
For virtio hardware offloading, the most important requirement for us is
to support live migration between offload cards from different vendors.
The combination of netdev and virtio-net seems too heavy for that, so we
prefer a lightweight way.

Maybe we could support both in the future? Such as:

* Lightweight
    Net: vhost-vdpa-net
    Storage: vhost-vdpa-blk

* Heavy but more powerful
    Net: netdev + virtio-net + vhost-vdpa (see the sketch below)
    Storage: bdrv + virtio-blk + vhost-vdpa
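For comparison, the heavyweight path is wired up through the existing
vhost-vdpa netdev backend. A rough sketch, assuming the current -netdev
syntax (exact option spelling may vary across QEMU versions):

./x86_64-softmmu/qemu-system-x86_64 \
    -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
    -device virtio-net-pci,netdev=vdpa0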
[1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg797569.html

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
 hw/net/meson.build                 |   1 +
 hw/net/vhost-vdpa-net.c            | 338 +++++++++++++++++++++++++++++++++++++
 hw/virtio/Kconfig                  |   5 +
 hw/virtio/meson.build              |   1 +
 hw/virtio/vhost-vdpa-net-pci.c     | 118 ++++++++++++++
 include/hw/virtio/vhost-vdpa-net.h |  31 ++++
 include/net/vhost-vdpa.h           |   2 +
 net/vhost-vdpa.c                   |   2 +-
 8 files changed, 497 insertions(+), 1 deletion(-)
 create mode 100644 hw/net/vhost-vdpa-net.c
 create mode 100644 hw/virtio/vhost-vdpa-net-pci.c
 create mode 100644 include/hw/virtio/vhost-vdpa-net.h

diff --git a/hw/net/meson.build b/hw/net/meson.build
index bdf71f1..139ebc4 100644
--- a/hw/net/meson.build
+++ b/hw/net/meson.build
@@ -44,6 +44,7 @@ specific_ss.add(when: 'CONFIG_XILINX_ETHLITE', if_true: files('xilinx_ethlite.c'
 
 softmmu_ss.add(when: 'CONFIG_VIRTIO_NET', if_true: files('net_rx_pkt.c'))
 specific_ss.add(when: 'CONFIG_VIRTIO_NET', if_true: files('virtio-net.c'))
+specific_ss.add(when: 'CONFIG_VHOST_VDPA_NET', if_true: files('vhost-vdpa-net.c'))
 
 softmmu_ss.add(when: ['CONFIG_VIRTIO_NET', 'CONFIG_VHOST_NET'], if_true: files('vhost_net.c'), if_false: files('vhost_net-stub.c'))
 softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('vhost_net-stub.c'))
diff --git a/hw/net/vhost-vdpa-net.c b/hw/net/vhost-vdpa-net.c
new file mode 100644
index 0000000..48b99f9
--- /dev/null
+++ b/hw/net/vhost-vdpa-net.c
@@ -0,0 +1,338 @@
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "qemu/cutils.h"
+#include "hw/qdev-core.h"
+#include "hw/qdev-properties.h"
+#include "hw/qdev-properties-system.h"
+#include "hw/virtio/vhost.h"
+#include "hw/virtio/vhost-vdpa-net.h"
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/virtio-bus.h"
+#include "hw/virtio/virtio-access.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/runstate.h"
+#include "net/vhost-vdpa.h"
+
+static void vhost_vdpa_net_get_config(VirtIODevice *vdev, uint8_t *config)
+{
+    VHostVdpaNet *s = VHOST_VDPA_NET(vdev);
+
+    memcpy(config, &s->netcfg, sizeof(struct virtio_net_config));
+}
+
+static void vhost_vdpa_net_set_config(VirtIODevice *vdev, const uint8_t *config)
+{
+    VHostVdpaNet *s = VHOST_VDPA_NET(vdev);
+    struct virtio_net_config *netcfg = (struct virtio_net_config *)config;
+    int ret;
+
+    ret = vhost_dev_set_config(&s->dev, (uint8_t *)netcfg, 0, sizeof(*netcfg),
+                               VHOST_SET_CONFIG_TYPE_MASTER);
+    if (ret) {
+        error_report("set device config space failed");
+        return;
+    }
+}
+
+static uint64_t vhost_vdpa_net_get_features(VirtIODevice *vdev,
+                                            uint64_t features,
+                                            Error **errp)
+{
+    VHostVdpaNet *s = VHOST_VDPA_NET(vdev);
+
+    virtio_add_feature(&features, VIRTIO_NET_F_CSUM);
+    virtio_add_feature(&features, VIRTIO_NET_F_GUEST_CSUM);
+    virtio_add_feature(&features, VIRTIO_NET_F_MAC);
+    virtio_add_feature(&features, VIRTIO_NET_F_GSO);
+    virtio_add_feature(&features, VIRTIO_NET_F_GUEST_TSO4);
+    virtio_add_feature(&features, VIRTIO_NET_F_GUEST_TSO6);
+    virtio_add_feature(&features, VIRTIO_NET_F_GUEST_ECN);
+    virtio_add_feature(&features, VIRTIO_NET_F_GUEST_UFO);
+    virtio_add_feature(&features, VIRTIO_NET_F_GUEST_ANNOUNCE);
+    virtio_add_feature(&features, VIRTIO_NET_F_HOST_TSO4);
+    virtio_add_feature(&features, VIRTIO_NET_F_HOST_TSO6);
+    virtio_add_feature(&features, VIRTIO_NET_F_HOST_ECN);
+    virtio_add_feature(&features, VIRTIO_NET_F_HOST_UFO);
+    virtio_add_feature(&features, VIRTIO_NET_F_MRG_RXBUF);
+    virtio_add_feature(&features, VIRTIO_NET_F_STATUS);
+    virtio_add_feature(&features, VIRTIO_NET_F_CTRL_VQ);
+    virtio_add_feature(&features, VIRTIO_NET_F_CTRL_RX);
+    virtio_add_feature(&features, VIRTIO_NET_F_CTRL_VLAN);
+    virtio_add_feature(&features, VIRTIO_NET_F_CTRL_RX_EXTRA);
+    virtio_add_feature(&features, VIRTIO_NET_F_CTRL_MAC_ADDR);
+    virtio_add_feature(&features, VIRTIO_NET_F_MQ);
+
+    return vhost_get_features(&s->dev, vdpa_feature_bits, features);
+}
+
+static int vhost_vdpa_net_start(VirtIODevice *vdev, Error **errp)
+{
+    VHostVdpaNet *s = VHOST_VDPA_NET(vdev);
+    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+    int i, ret;
+
+    if (!k->set_guest_notifiers) {
+        error_setg(errp, "binding does not support guest notifiers");
+        return -ENOSYS;
+    }
+
+    ret = vhost_dev_enable_notifiers(&s->dev, vdev);
+    if (ret < 0) {
+        error_setg_errno(errp, -ret, "Error enabling host notifiers");
+        return ret;
+    }
+
+    ret = k->set_guest_notifiers(qbus->parent, s->dev.nvqs, true);
+    if (ret < 0) {
+        error_setg_errno(errp, -ret, "Error binding guest notifier");
+        goto err_host_notifiers;
+    }
+
+    s->dev.acked_features = vdev->guest_features;
+
+    ret = vhost_dev_start(&s->dev, vdev);
+    if (ret < 0) {
+        error_setg_errno(errp, -ret, "Error starting vhost");
+        goto err_guest_notifiers;
+    }
+    s->started = true;
+
+    /* guest_notifier_mask/pending not used yet, so just unmask
+     * everything here. virtio-pci will do the right thing by
+     * enabling/disabling irqfd.
+     */
+    for (i = 0; i < s->dev.nvqs; i++) {
+        vhost_virtqueue_mask(&s->dev, vdev, i, false);
+    }
+
+    return ret;
+
+err_guest_notifiers:
+    k->set_guest_notifiers(qbus->parent, s->dev.nvqs, false);
+err_host_notifiers:
+    vhost_dev_disable_notifiers(&s->dev, vdev);
+    return ret;
+}
+
+static void vhost_vdpa_net_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VHostVdpaNet *s = VHOST_VDPA_NET(vdev);
+    Error *local_err = NULL;
+    int i, ret;
+
+    if (!vdev->start_on_kick) {
+        return;
+    }
+
+    if (s->dev.started) {
+        return;
+    }
+
+    /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
+     * vhost here instead of waiting for .set_status().
+     */
+    ret = vhost_vdpa_net_start(vdev, &local_err);
+    if (ret < 0) {
+        error_reportf_err(local_err, "vhost-vdpa-net: start failed: ");
+        return;
+    }
+
+    /* Kick right away to begin processing requests already in vring */
+    for (i = 0; i < s->dev.nvqs; i++) {
+        VirtQueue *kick_vq = virtio_get_queue(vdev, i);
+
+        if (!virtio_queue_get_desc_addr(vdev, i)) {
+            continue;
+        }
+        event_notifier_set(virtio_queue_get_host_notifier(kick_vq));
+    }
+}
+
+static void vhost_vdpa_net_stop(VirtIODevice *vdev)
+{
+    VHostVdpaNet *s = VHOST_VDPA_NET(vdev);
+    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+    int ret;
+
+    if (!s->started) {
+        return;
+    }
+    s->started = false;
+
+    if (!k->set_guest_notifiers) {
+        return;
+    }
+
+    vhost_dev_stop(&s->dev, vdev);
+
+    ret = k->set_guest_notifiers(qbus->parent, s->dev.nvqs, false);
+    if (ret < 0) {
+        error_report("vhost guest notifier cleanup failed: %d", ret);
+        return;
+    }
+
+    vhost_dev_disable_notifiers(&s->dev, vdev);
+}
+
+static void vhost_vdpa_net_set_status(VirtIODevice *vdev, uint8_t status)
+{
+    VHostVdpaNet *s = VHOST_VDPA_NET(vdev);
+    bool should_start = virtio_device_started(vdev, status);
+    Error *local_err = NULL;
+    int ret;
+
+    if (!vdev->vm_running) {
+        should_start = false;
+    }
+
+    if (s->started == should_start) {
+        return;
+    }
+
+    if (should_start) {
+        ret = vhost_vdpa_net_start(vdev, &local_err);
+        if (ret < 0) {
+            error_reportf_err(local_err, "vhost-vdpa-net: start failed: ");
+        }
+    } else {
+        vhost_vdpa_net_stop(vdev);
+    }
+}
+
+static void vhost_vdpa_net_unrealize(VHostVdpaNet *s)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    int i;
+
+    for (i = 0; i < s->queue_pairs * 2; i++) {
+        virtio_delete_queue(s->virtqs[i]);
+    }
+    /* ctrl vq */
+    virtio_delete_queue(s->virtqs[i]);
+
+    g_free(s->virtqs);
+    virtio_cleanup(vdev);
+}
+
+static void vhost_vdpa_net_device_realize(DeviceState *dev, Error **errp)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+    VHostVdpaNet *s = VHOST_VDPA_NET(vdev);
+    int i, ret;
+
+    s->vdpa.device_fd = qemu_open_old(s->vdpa_dev, O_RDWR);
+    if (s->vdpa.device_fd == -1) {
+        error_setg(errp, "vhost-vdpa-net: open %s failed: %s",
+                   s->vdpa_dev, strerror(errno));
+        return;
+    }
+
+    virtio_init(vdev, "virtio-net", VIRTIO_ID_NET,
+                sizeof(struct virtio_net_config));
+
+    s->dev.nvqs = s->queue_pairs * 2 + 1;
+    s->dev.vqs = g_new0(struct vhost_virtqueue, s->dev.nvqs);
+    s->dev.vq_index = 0;
+    s->dev.vq_index_end = s->dev.nvqs;
+    s->dev.backend_features = 0;
+    s->started = false;
+
+    s->virtqs = g_new0(VirtQueue *, s->dev.nvqs);
+    for (i = 0; i < s->dev.nvqs; i++) {
+        s->virtqs[i] = virtio_add_queue(vdev, s->queue_size,
+                                        vhost_vdpa_net_handle_output);
+    }
+
+    ret = vhost_dev_init(&s->dev, &s->vdpa, VHOST_BACKEND_TYPE_VDPA, 0, NULL);
+    if (ret < 0) {
error_setg(errp, "vhost-vdpa-net: vhost initialization failed: %s", + strerror(-ret)); + goto init_err; + } + + ret =3D vhost_dev_get_config(&s->dev, (uint8_t *)&s->netcfg, + sizeof(struct virtio_net_config), NULL); + if (ret < 0) { + error_setg(errp, "vhost-vdpa-net: get network config failed"); + goto config_err; + } + + return; +config_err: + vhost_dev_cleanup(&s->dev); +init_err: + vhost_vdpa_net_unrealize(s); + close(s->vdpa.device_fd); +} + +static void vhost_vdpa_net_device_unrealize(DeviceState *dev) +{ + VirtIODevice *vdev =3D VIRTIO_DEVICE(dev); + VHostVdpaNet *s =3D VHOST_VDPA_NET(vdev); + + virtio_set_status(vdev, 0); + vhost_dev_cleanup(&s->dev); + vhost_vdpa_net_unrealize(s); + close(s->vdpa.device_fd); +} + +static const VMStateDescription vmstate_vhost_vdpa_net =3D { + .name =3D "vhost-vdpa-net", + .minimum_version_id =3D 1, + .version_id =3D 1, + .fields =3D (VMStateField[]) { + VMSTATE_VIRTIO_DEVICE, + VMSTATE_END_OF_LIST() + }, +}; + +static void vhost_vdpa_net_instance_init(Object *obj) +{ + VHostVdpaNet *s =3D VHOST_VDPA_NET(obj); + + device_add_bootindex_property(obj, &s->bootindex, "bootindex", + "/ethernet-phy@0,0", DEVICE(obj)); +} + +static Property vhost_vdpa_net_properties[] =3D { + DEFINE_PROP_STRING("vdpa-dev", VHostVdpaNet, vdpa_dev), + DEFINE_PROP_UINT16("queue-pairs", VHostVdpaNet, queue_pairs, + VHOST_VDPA_NET_AUTO_QUEUE_PAIRS), + DEFINE_PROP_UINT32("queue-size", VHostVdpaNet, queue_size, + VHOST_VDPA_NET_QUEUE_DEFAULT_SIZE), + DEFINE_PROP_END_OF_LIST(), +}; + +static void vhost_vdpa_net_class_init(ObjectClass *klass, void *data) +{ + DeviceClass *dc =3D DEVICE_CLASS(klass); + VirtioDeviceClass *vdc =3D VIRTIO_DEVICE_CLASS(klass); + + device_class_set_props(dc, vhost_vdpa_net_properties); + dc->vmsd =3D &vmstate_vhost_vdpa_net; + set_bit(DEVICE_CATEGORY_NETWORK, dc->categories); + vdc->realize =3D vhost_vdpa_net_device_realize; + vdc->unrealize =3D vhost_vdpa_net_device_unrealize; + vdc->get_config =3D vhost_vdpa_net_get_config; + vdc->set_config =3D vhost_vdpa_net_set_config; + vdc->get_features =3D vhost_vdpa_net_get_features; + vdc->set_status =3D vhost_vdpa_net_set_status; +} + +static const TypeInfo vhost_vdpa_net_info =3D { + .name =3D TYPE_VHOST_VDPA_NET, + .parent =3D TYPE_VIRTIO_DEVICE, + .instance_size =3D sizeof(VHostVdpaNet), + .instance_init =3D vhost_vdpa_net_instance_init, + .class_init =3D vhost_vdpa_net_class_init, +}; + +static void virtio_register_types(void) +{ + type_register_static(&vhost_vdpa_net_info); +} + +type_init(virtio_register_types) diff --git a/hw/virtio/Kconfig b/hw/virtio/Kconfig index c144d42..50dba2e 100644 --- a/hw/virtio/Kconfig +++ b/hw/virtio/Kconfig @@ -68,3 +68,8 @@ config VHOST_USER_RNG bool default y depends on VIRTIO && VHOST_USER + +config VHOST_VDPA_NET + bool + default y if VIRTIO_PCI + depends on VIRTIO && VHOST_VDPA && LINUX diff --git a/hw/virtio/meson.build b/hw/virtio/meson.build index 521f7d6..3089222 100644 --- a/hw/virtio/meson.build +++ b/hw/virtio/meson.build @@ -34,6 +34,7 @@ virtio_pci_ss =3D ss.source_set() virtio_pci_ss.add(when: 'CONFIG_VHOST_VSOCK', if_true: files('vhost-vsock-= pci.c')) virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_VSOCK', if_true: files('vhost-u= ser-vsock-pci.c')) virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_BLK', if_true: files('vhost-use= r-blk-pci.c')) +virtio_pci_ss.add(when: 'CONFIG_VHOST_VDPA_NET', if_true: files('vhost-vdp= a-net-pci.c')) virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_INPUT', if_true: files('vhost-u= ser-input-pci.c')) virtio_pci_ss.add(when: 
 virtio_pci_ss.add(when: 'CONFIG_VHOST_USER_SCSI', if_true: files('vhost-user-scsi-pci.c'))
 virtio_pci_ss.add(when: 'CONFIG_VHOST_SCSI', if_true: files('vhost-scsi-pci.c'))
diff --git a/hw/virtio/vhost-vdpa-net-pci.c b/hw/virtio/vhost-vdpa-net-pci.c
new file mode 100644
index 0000000..84199a8
--- /dev/null
+++ b/hw/virtio/vhost-vdpa-net-pci.c
@@ -0,0 +1,118 @@
+#include "qemu/osdep.h"
+#include "standard-headers/linux/virtio_pci.h"
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/vhost-vdpa-net.h"
+#include "hw/pci/pci.h"
+#include "hw/qdev-properties.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "qemu/module.h"
+#include "virtio-pci.h"
+#include "qom/object.h"
+#include "net/vhost-vdpa.h"
+
+typedef struct VHostVdpaNetPCI VHostVdpaNetPCI;
+
+#define TYPE_VHOST_VDPA_NET_PCI "vhost-vdpa-net-pci-base"
+DECLARE_INSTANCE_CHECKER(VHostVdpaNetPCI, VHOST_VDPA_NET_PCI,
+                         TYPE_VHOST_VDPA_NET_PCI)
+
+struct VHostVdpaNetPCI {
+    VirtIOPCIProxy parent_obj;
+    VHostVdpaNet vdev;
+};
+
+static Property vhost_vdpa_net_pci_properties[] = {
+    DEFINE_PROP_UINT32("vectors", VirtIOPCIProxy, nvectors,
+                       DEV_NVECTORS_UNSPECIFIED),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static int vhost_vdpa_net_get_queue_pairs(VHostVdpaNetPCI *dev, Error **errp)
+{
+    int device_fd, queue_pairs;
+    int has_cvq;
+
+    device_fd = qemu_open_old(dev->vdev.vdpa_dev, O_RDWR);
+    if (device_fd == -1) {
+        error_setg(errp, "vhost-vdpa-net: open %s failed: %s",
+                   dev->vdev.vdpa_dev, strerror(errno));
+        return -1;
+    }
+
+    queue_pairs = vhost_vdpa_get_max_queue_pairs(device_fd, &has_cvq, errp);
+    if (queue_pairs < 0) {
+        /* errp has already been set by vhost_vdpa_get_max_queue_pairs() */
+        goto out;
+    }
+
+    if (!has_cvq) {
+        error_setg(errp, "vhost-vdpa-net: device does not support ctrl vq");
+    }
+
+out:
+    close(device_fd);
+    return queue_pairs;
+}
+
+static void vhost_vdpa_net_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
+{
+    ERRP_GUARD();
+    VHostVdpaNetPCI *dev = VHOST_VDPA_NET_PCI(vpci_dev);
+    DeviceState *vdev = DEVICE(&dev->vdev);
+
+    if (dev->vdev.queue_pairs == VHOST_VDPA_NET_AUTO_QUEUE_PAIRS) {
+        dev->vdev.queue_pairs = vhost_vdpa_net_get_queue_pairs(dev, errp);
+        if (*errp) {
+            return;
+        }
+    }
+
+    if (vpci_dev->nvectors == DEV_NVECTORS_UNSPECIFIED) {
+        vpci_dev->nvectors = dev->vdev.queue_pairs * 2 + 1;
+    }
+
+    qdev_realize(vdev, BUS(&vpci_dev->bus), errp);
+}
+
+static void vhost_vdpa_net_pci_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
+    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
+
+    set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
+    device_class_set_props(dc, vhost_vdpa_net_pci_properties);
+    k->realize = vhost_vdpa_net_pci_realize;
+    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
+    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_NET;
+    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
+    pcidev_k->class_id = PCI_CLASS_NETWORK_ETHERNET;
+}
+
+static void vhost_vdpa_net_pci_instance_init(Object *obj)
+{
+    VHostVdpaNetPCI *dev = VHOST_VDPA_NET_PCI(obj);
+
+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+                                TYPE_VHOST_VDPA_NET);
+    object_property_add_alias(obj, "bootindex", OBJECT(&dev->vdev),
+                              "bootindex");
+}
+
+static const VirtioPCIDeviceTypeInfo vhost_vdpa_net_pci_info = {
+    .base_name = TYPE_VHOST_VDPA_NET_PCI,
+    .generic_name = "vhost-vdpa-net-pci",
+    .transitional_name = "vhost-vdpa-net-pci-transitional",
+    .non_transitional_name = "vhost-vdpa-net-pci-non-transitional",
"vhost-vdpa-net-pci-non-transitional", + .instance_size =3D sizeof(VHostVdpaNetPCI), + .instance_init =3D vhost_vdpa_net_pci_instance_init, + .class_init =3D vhost_vdpa_net_pci_class_init, +}; + +static void vhost_vdpa_net_pci_register(void) +{ + virtio_pci_types_register(&vhost_vdpa_net_pci_info); +} + +type_init(vhost_vdpa_net_pci_register) diff --git a/include/hw/virtio/vhost-vdpa-net.h b/include/hw/virtio/vhost-v= dpa-net.h new file mode 100644 index 0000000..63bf3a6 --- /dev/null +++ b/include/hw/virtio/vhost-vdpa-net.h @@ -0,0 +1,31 @@ +#ifndef VHOST_VDPA_NET_H +#define VHOST_VDPA_NET_H + +#include "standard-headers/linux/virtio_blk.h" +#include "hw/block/block.h" +#include "chardev/char-fe.h" +#include "hw/virtio/vhost.h" +#include "hw/virtio/vhost-vdpa.h" +#include "hw/virtio/virtio-net.h" +#include "qom/object.h" + +#define TYPE_VHOST_VDPA_NET "vhost-vdpa-net" +OBJECT_DECLARE_SIMPLE_TYPE(VHostVdpaNet, VHOST_VDPA_NET) + +struct VHostVdpaNet { + VirtIODevice parent_obj; + int32_t bootindex; + struct virtio_net_config netcfg; + uint16_t queue_pairs; + uint32_t queue_size; + struct vhost_dev dev; + VirtQueue **virtqs; + struct vhost_vdpa vdpa; + char *vdpa_dev; + bool started; +}; + +#define VHOST_VDPA_NET_AUTO_QUEUE_PAIRS UINT16_MAX +#define VHOST_VDPA_NET_QUEUE_DEFAULT_SIZE 256 + +#endif diff --git a/include/net/vhost-vdpa.h b/include/net/vhost-vdpa.h index b81f9a6..f029972 100644 --- a/include/net/vhost-vdpa.h +++ b/include/net/vhost-vdpa.h @@ -18,4 +18,6 @@ struct vhost_net *vhost_vdpa_get_vhost_net(NetClientState= *nc); =20 extern const int vdpa_feature_bits[]; =20 +int vhost_vdpa_get_max_queue_pairs(int fd, int *has_cvq, Error **errp); + #endif /* VHOST_VDPA_H */ diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c index 25dd6dd..8ee6ba5 100644 --- a/net/vhost-vdpa.c +++ b/net/vhost-vdpa.c @@ -219,7 +219,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientSta= te *peer, return nc; } =20 -static int vhost_vdpa_get_max_queue_pairs(int fd, int *has_cvq, Error **er= rp) +int vhost_vdpa_get_max_queue_pairs(int fd, int *has_cvq, Error **errp) { unsigned long config_size =3D offsetof(struct vhost_vdpa_config, buf); g_autofree struct vhost_vdpa_config *config =3D NULL; --=20 1.8.3.1