From nobody Thu May  2 12:17:15 2024
From: Marcel Apfelbaum
To: qemu-devel@nongnu.org
Date: Sun, 14 Jan 2018 11:01:43 +0200
Message-Id:
<20180114090147.39255-2-marcel@redhat.com>
In-Reply-To: <20180114090147.39255-1-marcel@redhat.com>
References: <20180114090147.39255-1-marcel@redhat.com>
Subject: [Qemu-devel] [PATCH V7 1/5] pci/shpc: Move function to generic header file
Cc: ehabkost@redhat.com, mst@redhat.com, cohuck@redhat.com, f4bug@amsat.org, yuval.shaia@oracle.com, borntraeger@de.ibm.com, pbonzini@redhat.com, marcel@redhat.com, imammedo@redhat.com
Content-Type: text/plain; charset="utf-8"

From: Yuval Shaia

This function should be declared in a generic header file so that other code can reuse it.
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Yuval Shaia
Signed-off-by: Marcel Apfelbaum
---
 hw/pci/shpc.c             | 13 ++-----------
 include/qemu/host-utils.h | 10 ++++++++++
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/hw/pci/shpc.c b/hw/pci/shpc.c
index 69fc14b218..a8462d48bb 100644
--- a/hw/pci/shpc.c
+++ b/hw/pci/shpc.c
@@ -1,6 +1,7 @@
 #include "qemu/osdep.h"
 #include "qapi/error.h"
 #include "qemu-common.h"
+#include "qemu/host-utils.h"
 #include "qemu/range.h"
 #include "qemu/error-report.h"
 #include "hw/pci/shpc.h"
@@ -122,16 +123,6 @@
 #define SHPC_PCI_TO_IDX(pci_slot) ((pci_slot) - 1)
 #define SHPC_IDX_TO_PHYSICAL(slot) ((slot) + 1)

-static int roundup_pow_of_two(int x)
-{
-    x |= (x >> 1);
-    x |= (x >> 2);
-    x |= (x >> 4);
-    x |= (x >> 8);
-    x |= (x >> 16);
-    return x + 1;
-}
-
 static uint16_t shpc_get_status(SHPCDevice *shpc, int slot, uint16_t msk)
 {
     uint8_t *status = shpc->config + SHPC_SLOT_STATUS(slot);
@@ -656,7 +647,7 @@ int shpc_init(PCIDevice *d, PCIBus *sec_bus, MemoryRegion *bar,

 int shpc_bar_size(PCIDevice *d)
 {
-    return roundup_pow_of_two(SHPC_SLOT_REG(SHPC_MAX_SLOTS));
+    return pow2roundup32(SHPC_SLOT_REG(SHPC_MAX_SLOTS));
 }

 void shpc_cleanup(PCIDevice *d, MemoryRegion *bar)
diff --git a/include/qemu/host-utils.h b/include/qemu/host-utils.h
index 5ac621cf1f..38da849be9 100644
--- a/include/qemu/host-utils.h
+++ b/include/qemu/host-utils.h
@@ -400,6 +400,16 @@ static inline uint64_t pow2ceil(uint64_t value)
     return 0x8000000000000000ull >> (n - 1);
 }

+static inline uint32_t pow2roundup32(uint32_t x)
+{
+    x |= (x >> 1);
+    x |= (x >> 2);
+    x |= (x >> 4);
+    x |= (x >> 8);
+    x |= (x >> 16);
+    return x + 1;
+}
+
 /**
  * urshift - 128-bit Unsigned Right Shift.
  * @plow: in/out - lower 64-bit integer.
-- 
2.13.5

From nobody Thu May  2 12:17:15 2024
From: Marcel Apfelbaum
To: qemu-devel@nongnu.org
Date: Sun, 14 Jan 2018 11:01:44 +0200
Message-Id:
<20180114090147.39255-3-marcel@redhat.com>
In-Reply-To: <20180114090147.39255-1-marcel@redhat.com>
References: <20180114090147.39255-1-marcel@redhat.com>
Subject: [Qemu-devel] [PATCH V7 2/5] mem: add share parameter to memory-backend-ram
Cc: ehabkost@redhat.com, mst@redhat.com, cohuck@redhat.com, f4bug@amsat.org, yuval.shaia@oracle.com, borntraeger@de.ibm.com, pbonzini@redhat.com, marcel@redhat.com, imammedo@redhat.com
Content-Type: text/plain; charset="utf-8"

Currently only the file-backed memory backend can be created with a
"share" flag, which allows sharing guest RAM with other processes on
the host.

Add the "share" flag to the RAM memory backend as well, in order to
allow remapping parts of the guest RAM to different host virtual
addresses. This is needed by the RDMA devices to remap non-contiguous
QEMU virtual addresses into a contiguous virtual address range.

Move the "share" flag to the HostMemoryBackend base class, modify
phys_mem_alloc to take the new parameter, and add a new interface,
memory_region_init_ram_shared_nomigrate.

There are no functional changes if the new flag is not used.
Signed-off-by: Marcel Apfelbaum
---
 backends/hostmem-file.c  | 25 +------------------------
 backends/hostmem-ram.c   |  4 ++--
 backends/hostmem.c       | 21 +++++++++++++++++++++
 exec.c                   | 26 +++++++++++++++-----------
 include/exec/memory.h    | 23 +++++++++++++++++++++++
 include/exec/ram_addr.h  |  3 ++-
 include/qemu/osdep.h     |  2 +-
 include/sysemu/hostmem.h |  2 +-
 include/sysemu/kvm.h     |  2 +-
 memory.c                 | 16 +++++++++++++---
 target/s390x/kvm.c       |  4 ++--
 util/oslib-posix.c       |  4 ++--
 util/oslib-win32.c       |  2 +-
 13 files changed, 85 insertions(+), 49 deletions(-)

diff --git a/backends/hostmem-file.c b/backends/hostmem-file.c
index e44c319915..bc95022a68 100644
--- a/backends/hostmem-file.c
+++ b/backends/hostmem-file.c
@@ -31,7 +31,6 @@ typedef struct HostMemoryBackendFile HostMemoryBackendFile;
 struct HostMemoryBackendFile {
     HostMemoryBackend parent_obj;

-    bool share;
     bool discard_data;
     char *mem_path;
 };
@@ -58,7 +57,7 @@ file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
     path = object_get_canonical_path(OBJECT(backend));
     memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), path,
-                                     backend->size, fb->share,
+                                     backend->size, backend->share,
                                      fb->mem_path, errp);
     g_free(path);
 }
@@ -85,25 +84,6 @@ static void set_mem_path(Object *o, const char *str, Error **errp)
     fb->mem_path = g_strdup(str);
 }

-static bool file_memory_backend_get_share(Object *o, Error **errp)
-{
-    HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
-
-    return fb->share;
-}
-
-static void file_memory_backend_set_share(Object *o, bool value, Error **errp)
-{
-    HostMemoryBackend *backend = MEMORY_BACKEND(o);
-    HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
-
-    if (host_memory_backend_mr_inited(backend)) {
-        error_setg(errp, "cannot change property value");
-        return;
-    }
-    fb->share = value;
-}
-
 static bool file_memory_backend_get_discard_data(Object *o, Error **errp)
 {
     return MEMORY_BACKEND_FILE(o)->discard_data;
@@ -136,9 +116,6 @@ file_backend_class_init(ObjectClass *oc, void *data)
     bc->alloc = file_backend_memory_alloc;
     oc->unparent = file_backend_unparent;

-    object_class_property_add_bool(oc, "share",
-        file_memory_backend_get_share, file_memory_backend_set_share,
-        &error_abort);
     object_class_property_add_bool(oc, "discard-data",
         file_memory_backend_get_discard_data, file_memory_backend_set_discard_data,
         &error_abort);
diff --git a/backends/hostmem-ram.c b/backends/hostmem-ram.c
index 38977be73e..7ddd08d370 100644
--- a/backends/hostmem-ram.c
+++ b/backends/hostmem-ram.c
@@ -28,8 +28,8 @@ ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
     }

     path = object_get_canonical_path_component(OBJECT(backend));
-    memory_region_init_ram_nomigrate(&backend->mr, OBJECT(backend), path,
-                                     backend->size, errp);
+    memory_region_init_ram_shared_nomigrate(&backend->mr, OBJECT(backend), path,
+                                            backend->size, backend->share, errp);
     g_free(path);
 }

diff --git a/backends/hostmem.c b/backends/hostmem.c
index ee2c2d5bfd..1daf13bd2e 100644
--- a/backends/hostmem.c
+++ b/backends/hostmem.c
@@ -369,6 +369,24 @@ static void set_id(Object *o, const char *str, Error **errp)
     backend->id = g_strdup(str);
 }

+static bool host_memory_backend_get_share(Object *o, Error **errp)
+{
+    HostMemoryBackend *backend = MEMORY_BACKEND(o);
+
+    return backend->share;
+}
+
+static void host_memory_backend_set_share(Object *o, bool value, Error **errp)
+{
+    HostMemoryBackend *backend = MEMORY_BACKEND(o);
+
+    if (host_memory_backend_mr_inited(backend)) {
+        error_setg(errp, "cannot change property value");
+        return;
+    }
+    backend->share = value;
+}
+
 static void
 host_memory_backend_class_init(ObjectClass *oc, void *data)
 {
@@ -399,6 +417,9 @@ host_memory_backend_class_init(ObjectClass *oc, void *data)
         host_memory_backend_get_policy, host_memory_backend_set_policy,
         &error_abort);
     object_class_property_add_str(oc, "id", get_id, set_id, &error_abort);
+    object_class_property_add_bool(oc, "share",
+        host_memory_backend_get_share, host_memory_backend_set_share,
+        &error_abort);
 }

 static void host_memory_backend_finalize(Object *o)
diff --git a/exec.c b/exec.c
index 4722e521d4..247f8bd0c0 100644
--- a/exec.c
+++ b/exec.c
@@ -1278,7 +1278,7 @@ static int subpage_register (subpage_t *mmio, uint32_t start, uint32_t end,
                            uint16_t section);
 static subpage_t *subpage_init(FlatView *fv, hwaddr base);

-static void *(*phys_mem_alloc)(size_t size, uint64_t *align) =
+static void *(*phys_mem_alloc)(size_t size, uint64_t *align, bool shared) =
     qemu_anon_ram_alloc;

 /*
@@ -1286,7 +1286,7 @@ static void *(*phys_mem_alloc)(size_t size, uint64_t *align) =
  * Accelerators with unusual needs may need this.  Hopefully, we can
  * get rid of it eventually.
  */
-void phys_mem_set_alloc(void *(*alloc)(size_t, uint64_t *align))
+void phys_mem_set_alloc(void *(*alloc)(size_t, uint64_t *align, bool shared))
 {
     phys_mem_alloc = alloc;
 }
@@ -1889,7 +1889,7 @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
     }
 }

-static void ram_block_add(RAMBlock *new_block, Error **errp)
+static void ram_block_add(RAMBlock *new_block, Error **errp, bool shared)
 {
     RAMBlock *block;
     RAMBlock *last_block = NULL;
@@ -1912,7 +1912,7 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
         }
     } else {
         new_block->host = phys_mem_alloc(new_block->max_length,
-                                         &new_block->mr->align);
+                                         &new_block->mr->align, shared);
         if (!new_block->host) {
             error_setg_errno(errp, errno,
                              "cannot set up guest memory '%s'",
@@ -2017,7 +2017,7 @@ RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, MemoryRegion *mr,
         return NULL;
     }

-    ram_block_add(new_block, &local_err);
+    ram_block_add(new_block, &local_err, share);
     if (local_err) {
         g_free(new_block);
         error_propagate(errp, local_err);
@@ -2059,7 +2059,7 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
                                   void (*resized)(const char*,
                                                   uint64_t length,
                                                   void *host),
-                                  void *host, bool resizeable,
+                                  void *host, bool resizeable, bool share,
                                   MemoryRegion *mr, Error **errp)
 {
     RAMBlock *new_block;
@@ -2082,7 +2082,7 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
     if (resizeable) {
         new_block->flags |= RAM_RESIZEABLE;
     }
-    ram_block_add(new_block, &local_err);
+    ram_block_add(new_block, &local_err, share);
     if (local_err) {
         g_free(new_block);
         error_propagate(errp, local_err);
@@ -2094,12 +2094,15 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
 RAMBlock *qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
                                   MemoryRegion *mr, Error **errp)
 {
-    return qemu_ram_alloc_internal(size, size, NULL, host, false, mr, errp);
+    return qemu_ram_alloc_internal(size, size, NULL, host, false,
+                                   false, mr, errp);
 }

-RAMBlock *qemu_ram_alloc(ram_addr_t size, MemoryRegion *mr, Error **errp)
+RAMBlock *qemu_ram_alloc(ram_addr_t size, bool share,
+                         MemoryRegion *mr, Error **errp)
 {
-    return qemu_ram_alloc_internal(size, size, NULL, NULL, false, mr, errp);
+    return qemu_ram_alloc_internal(size, size, NULL, NULL, false,
+                                   share, mr, errp);
 }

 RAMBlock *qemu_ram_alloc_resizeable(ram_addr_t size, ram_addr_t maxsz,
@@ -2108,7 +2111,8 @@ RAMBlock *qemu_ram_alloc_resizeable(ram_addr_t size, ram_addr_t maxsz,
                                                      void *host),
                                     MemoryRegion *mr, Error **errp)
 {
-    return qemu_ram_alloc_internal(size, maxsz, resized, NULL, true, mr, errp);
+    return qemu_ram_alloc_internal(size, maxsz, resized, NULL, true,
+                                   false, mr, errp);
 }

 static void reclaim_ramblock(RAMBlock *block)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index a4cabdf44c..dd28eaba68 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -428,6 +428,29 @@ void memory_region_init_ram_nomigrate(MemoryRegion *mr,
                                       Error **errp);

 /**
+ * memory_region_init_ram_shared_nomigrate:  Initialize RAM memory region.
+ *                                           Accesses into the region will
+ *                                           modify memory directly.
+ *
+ * @mr: the #MemoryRegion to be initialized.
+ * @owner: the object that tracks the region's reference count
+ * @name: Region name, becomes part of RAMBlock name used in migration stream
+ *        must be unique within any device
+ * @size: size of the region.
+ * @share: allow remapping RAM to different addresses
+ * @errp: pointer to Error*, to store an error if it happens.
+ *
+ * Note that this function is similar to memory_region_init_ram_nomigrate.
+ * The only difference is part of the RAM region can be remapped.
+ */
+void memory_region_init_ram_shared_nomigrate(MemoryRegion *mr,
+                                             struct Object *owner,
+                                             const char *name,
+                                             uint64_t size,
+                                             bool share,
+                                             Error **errp);
+
+/**
  * memory_region_init_resizeable_ram:  Initialize memory region with resizeable
  *                                     RAM.  Accesses into the region will
  *                                     modify memory directly.  Only an initial
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 6cbc02aa0f..7d980572c0 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -80,7 +80,8 @@ RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, MemoryRegion *mr,
                                  Error **errp);
 RAMBlock *qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
                                   MemoryRegion *mr, Error **errp);
-RAMBlock *qemu_ram_alloc(ram_addr_t size, MemoryRegion *mr, Error **errp);
+RAMBlock *qemu_ram_alloc(ram_addr_t size, bool share, MemoryRegion *mr,
+                         Error **errp);
 RAMBlock *qemu_ram_alloc_resizeable(ram_addr_t size, ram_addr_t max_size,
                                     void (*resized)(const char*,
                                                     uint64_t length,
diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index adb3758275..41658060a7 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osdep.h
@@ -255,7 +255,7 @@ extern int daemon(int, int);
 int qemu_daemon(int nochdir, int noclose);
 void *qemu_try_memalign(size_t alignment, size_t size);
 void *qemu_memalign(size_t alignment, size_t size);
-void *qemu_anon_ram_alloc(size_t size, uint64_t *align);
+void *qemu_anon_ram_alloc(size_t size, uint64_t *align, bool shared);
 void qemu_vfree(void *ptr);
 void qemu_anon_ram_free(void *ptr, size_t size);

diff --git a/include/sysemu/hostmem.h b/include/sysemu/hostmem.h
index ed6a437f4d..4d8f859f03 100644
--- a/include/sysemu/hostmem.h
+++ b/include/sysemu/hostmem.h
@@ -55,7 +55,7 @@ struct HostMemoryBackend {
     char *id;
     uint64_t size;
     bool merge, dump;
-    bool prealloc, force_prealloc, is_mapped;
+    bool prealloc, force_prealloc, is_mapped, share;
     DECLARE_BITMAP(host_nodes, MAX_NODES + 1);
     HostMemPolicy policy;

diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index bbf12a1723..85002ac49a 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -248,7 +248,7 @@ int kvm_on_sigbus(int code, void *addr);

 /* interface with exec.c */

-void phys_mem_set_alloc(void *(*alloc)(size_t, uint64_t *align));
+void phys_mem_set_alloc(void *(*alloc)(size_t, uint64_t *align, bool shared));

 /* internal API */

diff --git a/memory.c b/memory.c
index 4b41fb837b..cb4fe1a55a 100644
--- a/memory.c
+++ b/memory.c
@@ -1538,11 +1538,21 @@ void memory_region_init_ram_nomigrate(MemoryRegion *mr,
                                       uint64_t size,
                                       Error **errp)
 {
+    memory_region_init_ram_shared_nomigrate(mr, owner, name, size, false, errp);
+}
+
+void memory_region_init_ram_shared_nomigrate(MemoryRegion *mr,
+                                             Object *owner,
+                                             const char *name,
+                                             uint64_t size,
+                                             bool share,
+                                             Error **errp)
+{
     memory_region_init(mr, owner, name, size);
     mr->ram = true;
     mr->terminates = true;
     mr->destructor = memory_region_destructor_ram;
-    mr->ram_block = qemu_ram_alloc(size, mr, errp);
+    mr->ram_block = qemu_ram_alloc(size, share, mr, errp);
     mr->dirty_log_mask = tcg_enabled() ? (1 << DIRTY_MEMORY_CODE) : 0;
 }

@@ -1651,7 +1661,7 @@ void memory_region_init_rom_nomigrate(MemoryRegion *mr,
     mr->readonly = true;
     mr->terminates = true;
     mr->destructor = memory_region_destructor_ram;
-    mr->ram_block = qemu_ram_alloc(size, mr, errp);
+    mr->ram_block = qemu_ram_alloc(size, false, mr, errp);
     mr->dirty_log_mask = tcg_enabled() ? (1 << DIRTY_MEMORY_CODE) : 0;
 }

@@ -1670,7 +1680,7 @@ void memory_region_init_rom_device_nomigrate(MemoryRegion *mr,
     mr->terminates = true;
     mr->rom_device = true;
     mr->destructor = memory_region_destructor_ram;
-    mr->ram_block = qemu_ram_alloc(size, mr, errp);
+    mr->ram_block = qemu_ram_alloc(size, false, mr, errp);
 }

 void memory_region_init_iommu(void *_iommu_mr,
diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
index 9b8b59f2a2..6c0fc2f89c 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm.c
@@ -144,7 +144,7 @@ static int cap_gs;

 static int active_cmma;

-static void *legacy_s390_alloc(size_t size, uint64_t *align);
+static void *legacy_s390_alloc(size_t size, uint64_t *align, bool shared);

 static int kvm_s390_query_mem_limit(uint64_t *memory_limit)
 {
@@ -743,7 +743,7 @@ int kvm_s390_mem_op(S390CPU *cpu, vaddr addr, uint8_t ar, void *hostbuf,
  * to grow. We also have to use MAP parameters that avoid
  * read-only mapping of guest pages.
  */
-static void *legacy_s390_alloc(size_t size, uint64_t *align)
+static void *legacy_s390_alloc(size_t size, uint64_t *align, bool shared)
 {
     void *mem;

diff --git a/util/oslib-posix.c b/util/oslib-posix.c
index 77369c92ce..0cf3548778 100644
--- a/util/oslib-posix.c
+++ b/util/oslib-posix.c
@@ -127,10 +127,10 @@ void *qemu_memalign(size_t alignment, size_t size)
 }

 /* alloc shared memory pages */
-void *qemu_anon_ram_alloc(size_t size, uint64_t *alignment)
+void *qemu_anon_ram_alloc(size_t size, uint64_t *alignment, bool shared)
 {
     size_t align = QEMU_VMALLOC_ALIGN;
-    void *ptr = qemu_ram_mmap(-1, size, align, false);
+    void *ptr = qemu_ram_mmap(-1, size, align, shared);

     if (ptr == MAP_FAILED) {
         return NULL;
diff --git a/util/oslib-win32.c b/util/oslib-win32.c
index 69a6286d50..bb5ad28bd3 100644
--- a/util/oslib-win32.c
+++ b/util/oslib-win32.c
@@ -67,7 +67,7 @@ void *qemu_memalign(size_t alignment, size_t size)
     return qemu_oom_check(qemu_try_memalign(alignment, size));
 }
-void *qemu_anon_ram_alloc(size_t size, uint64_t *align)
+void *qemu_anon_ram_alloc(size_t size, uint64_t *align, bool shared)
 {
     void *ptr;

-- 
2.13.5

From nobody Thu May  2 12:17:15 2024
From: Marcel Apfelbaum
To: qemu-devel@nongnu.org
Date: Sun, 14 Jan 2018 11:01:45 +0200
Message-Id: <20180114090147.39255-4-marcel@redhat.com>
In-Reply-To: <20180114090147.39255-1-marcel@redhat.com>
References: <20180114090147.39255-1-marcel@redhat.com>
Subject: [Qemu-devel] [PATCH V7 3/5] docs: add pvrdma device documentation.
Cc: ehabkost@redhat.com, mst@redhat.com, cohuck@redhat.com, f4bug@amsat.org, yuval.shaia@oracle.com, borntraeger@de.ibm.com, pbonzini@redhat.com, marcel@redhat.com, imammedo@redhat.com
Content-Type: text/plain; charset="utf-8"

Signed-off-by: Marcel Apfelbaum
Signed-off-by: Yuval Shaia
Reviewed-by: Shamir Rabinovitch
---
 docs/pvrdma.txt | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 254 insertions(+)
 create mode 100644 docs/pvrdma.txt

diff --git a/docs/pvrdma.txt b/docs/pvrdma.txt
new file mode 100644
index 0000000000..efaeca0758
--- /dev/null
+++ b/docs/pvrdma.txt
@@ -0,0 +1,254 @@
+Paravirtualized RDMA Device (PVRDMA)
+====================================
+
+
+1. Description
+==============
+PVRDMA is the QEMU implementation of VMware's paravirtualized RDMA device.
+It works with its Linux kernel driver as-is; no special guest
+modifications are needed.
+
+While it is compatible with the VMware device, it can also communicate
+with bare-metal RDMA-enabled machines, and it does not require an RDMA
+HCA in the host: it can also work with Soft-RoCE (rxe).
+
+It does not require the whole guest RAM to be pinned, allowing memory
+over-commit, and, even if not implemented yet, migration support will be
+possible with some hardware assistance.
+
+A project presentation accompanies this document:
+- http://events.linuxfoundation.org/sites/events/files/slides/lpc-2017-pvrdma-marcel-apfelbaum-yuval-shaia.pdf
+
+
+
+2. Setup
+========
+
+
+2.1 Guest setup
+===============
+Fedora 27+ kernels work out of the box; older distributions
+require updating the kernel to 4.14 to get the pvrdma driver.
+
+However, the libpvrdma library needed by user-level software is still
+not available as part of the distributions, so the rdma-core library
+needs to be compiled and optionally installed.
+
+Please follow the instructions at:
+  https://github.com/linux-rdma/rdma-core.git
+
+
+2.2 Host Setup
+==============
+The pvrdma backend is an ibdevice interface that can be exposed
+either by a Soft-RoCE (rxe) device on machines with no RDMA device,
+or by an HCA SR-IOV function (VF/PF).
+Note that ibdevice interfaces can't be shared between pvrdma devices;
+each one requires a separate instance (rxe or SR-IOV VF).
+
+
+2.2.1 Soft-RoCE backend (rxe)
+=============================
+A stable version of rxe is required; Fedora 27+ or a Linux
+kernel 4.14+ is preferred.
+
+The rdma_rxe module is part of the Linux kernel but not loaded by default.
+Install the user-level library (librxe) following the instructions at:
+https://github.com/SoftRoCE/rxe-dev/wiki/rxe-dev:-Home
+
+Associate an Ethernet interface with rxe by running:
+    rxe_cfg add eth0
+An rxe0 ibdevice interface will be created and can be used as the pvrdma
+backend.
+
+
+2.2.2 RDMA device Virtual Function backend
+==========================================
+Nothing special is required; the pvrdma device can work not only with
+Ethernet links, but also with InfiniBand links.
+All that is needed is an ibdevice with an active port; for Mellanox cards
+it will be something like mlx5_6, which can be used as the backend.
+
+
+2.2.3 QEMU setup
+================
+Configure QEMU with the --enable-rdma flag; this requires the
+RDMA libraries to be installed.
+
+
+
+3. Usage
+========
+Currently the device works only with a RAM memory backend,
+and it must be marked as "shared":
+   -m 1G \
+   -object memory-backend-ram,id=mb1,size=1G,share \
+   -numa node,memdev=mb1 \
+
+The pvrdma device is composed of two functions:
+ - Function 0 is a vmxnet Ethernet device which is redundant in the guest
+   but is required to pass the ibdevice GID using its MAC.
+   Examples:
+     For an rxe backend using the eth0 interface it will use its MAC:
+       -device vmxnet3,addr=<slot>.0,multifunction=on,mac=<eth0 MAC>
+     For an SR-IOV VF, we take the Ethernet interface exposed by it:
+       -device vmxnet3,multifunction=on,mac=<VF MAC>
+ - Function 1 is the actual device:
+       -device pvrdma,addr=<slot>.1,backend-dev=<ibdevice>,backend-gid-idx=<gid>,backend-port=<port>
+   where the ibdevice can be rxe or an RDMA VF (e.g. mlx5_4)
+ Note: Pay special attention that the GID at backend-gid-idx matches vmxnet's MAC.
+ The rules of conversion are part of the RoCE spec, but since manual
+ conversion is not required, spotting problems is not hard:
+   Example:
+     GID: fe80:0000:0000:0000:7efe:90ff:fecb:743a
+     MAC: 7c:fe:90:cb:74:3a
+   Note the difference between the first byte of the MAC and the GID.
+
+
+
+4. Implementation details
+=========================
+
+
+4.1 Overview
+============
+The device acts as a proxy between the guest driver and the host
+ibdevice interface.
+On the configuration path:
+ - For every hardware resource request (PD/QP/CQ/...) the pvrdma device
+   requests a resource from the backend interface, maintaining a 1-1
+   mapping between the guest and the host.
+On the data path:
+ - Every post_send/receive received from the guest is converted into
+   a post_send/receive for the backend. The buffer data is not touched
+   or copied, resulting in near bare-metal performance for large enough
+   buffers.
+ - Completions from the backend interface result in completions for
+   the pvrdma device.
+
+
+4.2 PCI BARs
+============
+PCI BARs:
+ BAR 0 - MSI-X
+        MSI-X vectors:
+                (0) Command - used when execution of a command is completed.
+                (1) Async - not in use.
+                (2) Completion - used when a completion event is placed in
+                    the device's CQ ring.
+ BAR 1 - Registers + -------------------------------------------------------- + | VERSION | DSR | CTL | REQ | ERR | ICR | IMR | MAC | + -------------------------------------------------------- + DSR - Address of driver/device shared memory used + for the command channel, used for passing: + - General info such as driver version + - Address of 'command' and 'response' + - Address of async ring + - Address of device's CQ ring + - Device capabilities + CTL - Device control operations (activate, reset etc) + IMG - Set interrupt mask + REQ - Command execution register + ERR - Operation status + + BAR 2 - UAR + --------------------------------------------------------- + | QP_NUM | SEND/RECV Flag || CQ_NUM | ARM/POLL Flag | + --------------------------------------------------------- + - Offset 0 used for QP operations (send and recv) + - Offset 4 used for CQ operations (arm and poll) + + +4.3 Major flows +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + +4.3.1 Create CQ +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + - Guest driver + - Allocates pages for CQ ring + - Creates page directory (pdir) to hold CQ ring's pages + - Initializes CQ ring + - Initializes 'Create CQ' command object (cqe, pdir etc) + - Copies the command to 'command' address + - Writes 0 into REQ register + - Device + - Reads the request object from the 'command' address + - Allocates CQ object and initialize CQ ring based on pdir + - Creates the backend CQ + - Writes operation status to ERR register + - Posts command-interrupt to guest + - Guest driver + - Reads the HW response code from ERR register + +4.3.2 Create QP +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + - Guest driver + - Allocates pages for send and receive rings + - Creates page directory(pdir) to hold the ring's pages + - Initializes 'Create QP' command object (max_send_wr, + send_cq_handle, recv_cq_handle, pdir etc) + - Copies the object to 'command' address + - Write 0 into REQ register + - Device + - Reads the request object from 
'command' address
+    - Allocates the QP object and initializes:
+        - Send and recv rings based on the pdir
+        - Send and recv ring state
+    - Creates the backend QP
+    - Writes the operation status to the ERR register
+    - Posts a command-interrupt to the guest
+ - Guest driver
+    - Reads the HW response code from the ERR register
+
+4.3.3 Post receive
+==================
+ - Guest driver
+    - Initializes a wqe and places it on the recv ring
+    - Writes qpn|qp_recv_bit (31) to the QP offset in the UAR
+ - Device
+    - Extracts the qpn from the UAR
+    - Walks through the ring, and for each wqe:
+        - Prepares the backend CQE context to be used when
+          receiving a completion from the backend (wr_id, op_code, emu_cq_num)
+        - Prepares a backend sge for each sge
+        - Calls the backend's post_recv
+
+4.3.4 Process backend events
+============================
+ - Done by a dedicated thread used to process backend events;
+   at initialization it is attached to the device and creates
+   the communication channel.
+ - Thread main loop:
+    - Polls for completions
+    - Extracts the QEMU cq_num, wr_id and op_code from the context
+    - Writes the CQE to the CQ ring
+    - Writes the CQ number to the device CQ
+    - Sends a completion-interrupt to the guest
+    - Deallocates the context
+    - Acks the event to the backend
+
+
+
+5. Limitations
+==============
+- The device is limited by the features the guest Linux driver implements
+  of the VMware device API.
+- The memory registration mechanism requires an mremap for every page in the
+  buffer in order to map it to a contiguous virtual address range. Since this
+  is not on the data path it should not matter much.
+- The device requires the target page size to be the same as the host page
+  size.
+- QEMU cannot map guest RAM from a file descriptor if a pvrdma device is
+  attached, so it can't work with huge pages.
This limitation will be addressed in the future;
+  however, QEMU allocates guest RAM with MADV_HUGEPAGE, so if enough huge
+  pages are available, QEMU will use them.
+- As previously stated, migration is not supported yet; however, with some
+  hardware support it can be done.
+
+
+
+6. Performance
+==============
+By design the pvrdma device exits on each post-send/receive, so for small
+buffers the performance is affected; however, for medium buffers it becomes
+close to bare metal, and from 1MB buffers and up it reaches bare metal
+performance.
+(Tested with 2 VMs, the pvrdma devices connected to 2 VFs of the same device.)
+
+All the above assumes no memory registration is done on the data path.
--
2.13.5

From: Marcel Apfelbaum <marcel@redhat.com>
Date: Sun, 14 Jan 2018 11:01:46 +0200
Message-Id: <20180114090147.39255-5-marcel@redhat.com>
In-Reply-To: <20180114090147.39255-1-marcel@redhat.com>
Subject: [Qemu-devel] [PATCH V7 4/5] pvrdma: initial implementation

From: Yuval Shaia <yuval.shaia@oracle.com>

PVRDMA is the QEMU implementation of VMware's paravirtualized RDMA device.
It works with its Linux Kernel driver AS IS, no need for any special guest modifications. While it complies with the VMware device, it can also communicate with bare metal RDMA-enabled machines and does not require an RDMA HCA in the host, it can work with Soft-RoCE (rxe). It does not require the whole guest RAM to be pinned allowing memory over-commit and, even if not implemented yet, migration support will be possible with some HW assistance. Signed-off-by: Yuval Shaia Signed-off-by: Marcel Apfelbaum --- Makefile.objs | 2 + configure | 9 +- hw/Makefile.objs | 1 + hw/rdma/Makefile.objs | 6 + hw/rdma/rdma_backend.c | 815 ++++++++++++++++++++++++++++++++++++++= ++++ hw/rdma/rdma_backend.h | 92 +++++ hw/rdma/rdma_backend_defs.h | 62 ++++ hw/rdma/rdma_rm.c | 619 ++++++++++++++++++++++++++++++++ hw/rdma/rdma_rm.h | 69 ++++ hw/rdma/rdma_rm_defs.h | 106 ++++++ hw/rdma/rdma_utils.c | 52 +++ hw/rdma/rdma_utils.h | 43 +++ hw/rdma/trace-events | 5 + hw/rdma/vmw/pvrdma.h | 122 +++++++ hw/rdma/vmw/pvrdma_cmd.c | 679 +++++++++++++++++++++++++++++++++++ hw/rdma/vmw/pvrdma_dev_api.h | 602 +++++++++++++++++++++++++++++++ hw/rdma/vmw/pvrdma_dev_ring.c | 139 +++++++ hw/rdma/vmw/pvrdma_dev_ring.h | 42 +++ hw/rdma/vmw/pvrdma_ib_verbs.h | 433 ++++++++++++++++++++++ hw/rdma/vmw/pvrdma_main.c | 644 +++++++++++++++++++++++++++++++++ hw/rdma/vmw/pvrdma_qp_ops.c | 212 +++++++++++ hw/rdma/vmw/pvrdma_qp_ops.h | 27 ++ hw/rdma/vmw/pvrdma_ring.h | 134 +++++++ hw/rdma/vmw/trace-events | 5 + hw/rdma/vmw/vmw_pvrdma-abi.h | 311 ++++++++++++++++ include/hw/pci/pci_ids.h | 3 + 26 files changed, 5230 insertions(+), 4 deletions(-) create mode 100644 hw/rdma/Makefile.objs create mode 100644 hw/rdma/rdma_backend.c create mode 100644 hw/rdma/rdma_backend.h create mode 100644 hw/rdma/rdma_backend_defs.h create mode 100644 hw/rdma/rdma_rm.c create mode 100644 hw/rdma/rdma_rm.h create mode 100644 hw/rdma/rdma_rm_defs.h create mode 100644 hw/rdma/rdma_utils.c create mode 100644 hw/rdma/rdma_utils.h create mode 
100644 hw/rdma/trace-events create mode 100644 hw/rdma/vmw/pvrdma.h create mode 100644 hw/rdma/vmw/pvrdma_cmd.c create mode 100644 hw/rdma/vmw/pvrdma_dev_api.h create mode 100644 hw/rdma/vmw/pvrdma_dev_ring.c create mode 100644 hw/rdma/vmw/pvrdma_dev_ring.h create mode 100644 hw/rdma/vmw/pvrdma_ib_verbs.h create mode 100644 hw/rdma/vmw/pvrdma_main.c create mode 100644 hw/rdma/vmw/pvrdma_qp_ops.c create mode 100644 hw/rdma/vmw/pvrdma_qp_ops.h create mode 100644 hw/rdma/vmw/pvrdma_ring.h create mode 100644 hw/rdma/vmw/trace-events create mode 100644 hw/rdma/vmw/vmw_pvrdma-abi.h diff --git a/Makefile.objs b/Makefile.objs index c8b1bba593..85dde3c4a1 100644 --- a/Makefile.objs +++ b/Makefile.objs @@ -129,6 +129,8 @@ trace-events-subdirs +=3D hw/block/dataplane trace-events-subdirs +=3D hw/char trace-events-subdirs +=3D hw/intc trace-events-subdirs +=3D hw/net +trace-events-subdirs +=3D hw/rdma +trace-events-subdirs +=3D hw/rdma/vmw trace-events-subdirs +=3D hw/virtio trace-events-subdirs +=3D hw/audio trace-events-subdirs +=3D hw/misc diff --git a/configure b/configure index 89bd662a6a..652aa69539 100755 --- a/configure +++ b/configure @@ -1548,7 +1548,7 @@ disabled with --disable-FEATURE, default is enabled i= f available: kvm KVM acceleration support hax HAX acceleration support hvf Hypervisor.framework acceleration support - rdma RDMA-based migration support + rdma Enable RDMA-based migration support and PVRDMA vde support for vde network netmap support for netmap network linux-aio Linux AIO support @@ -2875,15 +2875,16 @@ if test "$rdma" !=3D "no" ; then #include int main(void) { return 0; } EOF - rdma_libs=3D"-lrdmacm -libverbs" + rdma_libs=3D"-lrdmacm -libverbs -libumad" if compile_prog "" "$rdma_libs" ; then rdma=3D"yes" + libs_softmmu=3D"$libs_softmmu $rdma_libs" else if test "$rdma" =3D "yes" ; then error_exit \ - " OpenFabrics librdmacm/libibverbs not present." \ + " OpenFabrics librdmacm/libibverbs/libibumad not present." 
\ " Your options:" \ - " (1) Fast: Install infiniband packages from your distro." \ + " (1) Fast: Install infiniband packages (devel) from your dis= tro." \ " (2) Cleanest: Install libraries from www.openfabrics.org" \ " (3) Also: Install softiwarp if you don't have RDMA hardware" fi diff --git a/hw/Makefile.objs b/hw/Makefile.objs index cf4cb2010b..6a0ffe0afd 100644 --- a/hw/Makefile.objs +++ b/hw/Makefile.objs @@ -18,6 +18,7 @@ devices-dirs-$(CONFIG_IPMI) +=3D ipmi/ devices-dirs-$(CONFIG_SOFTMMU) +=3D isa/ devices-dirs-$(CONFIG_SOFTMMU) +=3D misc/ devices-dirs-$(CONFIG_SOFTMMU) +=3D net/ +devices-dirs-$(CONFIG_SOFTMMU) +=3D rdma/ devices-dirs-$(CONFIG_SOFTMMU) +=3D nvram/ devices-dirs-$(CONFIG_SOFTMMU) +=3D pci/ devices-dirs-$(CONFIG_PCI) +=3D pci-bridge/ pci-host/ diff --git a/hw/rdma/Makefile.objs b/hw/rdma/Makefile.objs new file mode 100644 index 0000000000..6c6272cfcc --- /dev/null +++ b/hw/rdma/Makefile.objs @@ -0,0 +1,6 @@ +ifeq ($(CONFIG_RDMA),y) +obj-$(CONFIG_PCI) +=3D rdma_utils.o rdma_backend.o rdma_rm.o +obj-$(CONFIG_PCI) +=3D vmw/pvrdma_dev_ring.o vmw/pvrdma_cmd.o \ + vmw/pvrdma_qp_ops.o vmw/pvrdma_main.o +endif + diff --git a/hw/rdma/rdma_backend.c b/hw/rdma/rdma_backend.c new file mode 100644 index 0000000000..b26a1a01a1 --- /dev/null +++ b/hw/rdma/rdma_backend.c @@ -0,0 +1,815 @@ +/* + * QEMU paravirtual RDMA - Generic RDMA backend + * + * Copyright (C) 2018 Oracle + * Copyright (C) 2018 Red Hat Inc + * + * Authors: + * Yuval Shaia + * Marcel Apfelbaum + * + * This work is licensed under the terms of the GNU GPL, version 2. + * See the COPYING file in the top-level directory. 
+ * + */ + +#include +#include +#include + +#include + +#include "trace.h" +#include "rdma_utils.h" +#include "rdma_rm.h" +#include "rdma_backend.h" + +/* Vendor Errors */ +#define VENDOR_ERR_FAIL_BACKEND 0x201 +#define VENDOR_ERR_TOO_MANY_SGES 0x202 +#define VENDOR_ERR_NOMEM 0x203 +#define VENDOR_ERR_QP0 0x204 +#define VENDOR_ERR_NO_SGE 0x205 +#define VENDOR_ERR_MAD_SEND 0x206 +#define VENDOR_ERR_INVLKEY 0x207 +#define VENDOR_ERR_MR_SMALL 0x208 + +typedef struct BackendCtx { + void *up_ctx; + uint64_t req_id; + bool is_tx_req; +} BackendCtx; + +static void (*comp_handler)(int status, unsigned int vendor_err, void *ctx= ); + +static void dummy_comp_handler(int status, unsigned int vendor_err, void *= ctx) +{ + pr_err("No completion handler is registered\n"); +} + +static void poll_cq(RdmaDeviceResources *rdma_dev_res, struct ibv_cq *ibcq, + bool one_poll) +{ + int i, ne; + BackendCtx *bctx; + struct ibv_wc wc[2]; + + pr_dbg("Entering poll_cq loop on cq %p\n", ibcq); + do { + ne =3D ibv_poll_cq(ibcq, 2, wc); + if (ne =3D=3D 0 && one_poll) { + pr_dbg("CQ is empty\n"); + return; + } + } while (ne < 0); + + pr_dbg("Got %d completion(s) from cq %p\n", ne, ibcq); + + for (i =3D 0; i < ne; i++) { + pr_dbg("wr_id=3D0x%lx\n", wc[i].wr_id); + pr_dbg("status=3D%d\n", wc[i].status); + + bctx =3D rdma_rm_get_cqe_ctx(rdma_dev_res, wc[i].wr_id); + if (unlikely(!bctx)) { + pr_dbg("Error: Failed to find ctx for req %ld\n", wc[i].wr_id); + continue; + } + pr_dbg("Processing %s CQE\n", bctx->is_tx_req ? 
"send" : "recv"); + + comp_handler(wc[i].status, wc[i].vendor_err, bctx->up_ctx); + + rdma_rm_dealloc_cqe_ctx(rdma_dev_res, wc[i].wr_id); + free(bctx); + } +} + +static void *comp_handler_thread(void *arg) +{ + RdmaBackendDev *backend_dev =3D (RdmaBackendDev *)arg; + int rc; + struct ibv_cq *ev_cq; + void *ev_ctx; + + pr_dbg("Starting\n"); + + while (backend_dev->comp_thread.run) { + pr_dbg("Waiting for completion on channel %p\n", backend_dev->chan= nel); + rc =3D ibv_get_cq_event(backend_dev->channel, &ev_cq, &ev_ctx); + pr_dbg("ibv_get_cq_event=3D%d\n", rc); + if (unlikely(rc)) { + pr_dbg("---> ibv_get_cq_event (%d)\n", rc); + continue; + } + + if (unlikely(ibv_req_notify_cq(ev_cq, 0))) { + pr_dbg("---> ibv_req_notify_cq\n"); + } + + poll_cq(backend_dev->rdma_dev_res, ev_cq, false); + + ibv_ack_cq_events(ev_cq, 1); + } + + pr_dbg("Going down\n"); + /* TODO: Post cqe for all remaining buffs that were posted */ + + return NULL; +} + +void rdma_backend_register_comp_handler(void (*handler)(int status, + unsigned int vendor_err, void *ctx= )) +{ + comp_handler =3D handler; +} + +void rdma_backend_unregister_comp_handler(void) +{ + rdma_backend_register_comp_handler(dummy_comp_handler); +} + +int rdma_backend_query_port(RdmaBackendDev *backend_dev, + struct ibv_port_attr *port_attr) +{ + int rc; + + memset(port_attr, 0, sizeof(*port_attr)); + + rc =3D ibv_query_port(backend_dev->context, backend_dev->port_num, por= t_attr); + if (rc) { + pr_dbg("Error %d from ibv_query_port\n", rc); + return -EIO; + } + + return 0; +} + +void rdma_backend_poll_cq(RdmaDeviceResources *rdma_dev_res, RdmaBackendCQ= *cq) +{ + poll_cq(rdma_dev_res, cq->ibcq, true); +} + +static GHashTable *ah_hash; + +static struct ibv_ah *create_ah(RdmaBackendDev *backend_dev, struct ibv_pd= *pd, + uint8_t sgid_idx, union ibv_gid *dgid) +{ + GBytes *ah_key =3D g_bytes_new(dgid, sizeof(*dgid)); + struct ibv_ah *ah =3D g_hash_table_lookup(ah_hash, ah_key); + + if (ah) { + 
trace_create_ah_cache_hit(be64_to_cpu(dgid->global.subnet_prefix),
+                                  be64_to_cpu(dgid->global.interface_id));
+        g_bytes_unref(ah_key);
+    } else {
+        struct ibv_ah_attr ah_attr = {
+            .is_global     = 1,
+            .port_num      = backend_dev->port_num,
+            .grh.hop_limit = 1,
+        };
+
+        ah_attr.grh.dgid = *dgid;
+        ah_attr.grh.sgid_index = sgid_idx;
+
+        ah = ibv_create_ah(pd, &ah_attr);
+        if (ah) {
+            g_hash_table_insert(ah_hash, ah_key, ah);
+        } else {
+            pr_dbg("ibv_create_ah failed for gid <%lx %lx>\n",
+                   be64_to_cpu(dgid->global.subnet_prefix),
+                   be64_to_cpu(dgid->global.interface_id));
+        }
+
+        trace_create_ah_cache_miss(be64_to_cpu(dgid->global.subnet_prefix),
+                                   be64_to_cpu(dgid->global.interface_id));
+    }
+
+    return ah;
+}
+
+static void destroy_ah_hash_key(gpointer data)
+{
+    g_bytes_unref(data);
+}
+
+static void destroy_ah_hash_data(gpointer data)
+{
+    struct ibv_ah *ah = data;
+
+    ibv_destroy_ah(ah);
+}
+
+static void ah_cache_init(void)
+{
+    ah_hash = g_hash_table_new_full(g_bytes_hash, g_bytes_equal,
+                                    destroy_ah_hash_key, destroy_ah_hash_data);
+}
+
+static int build_host_sge_array(RdmaDeviceResources *rdma_dev_res,
+                                struct ibv_sge *dsge, struct ibv_sge *ssge,
+                                uint8_t num_sge)
+{
+    RdmaRmMR *mr;
+    int ssge_idx;
+    int ret = 0;
+
+    pr_dbg("num_sge=%d\n", num_sge);
+
+    for (ssge_idx = 0; ssge_idx < num_sge; ssge_idx++) {
+        mr = rdma_rm_get_mr(rdma_dev_res, ssge[ssge_idx].lkey);
+        if (unlikely(!mr)) {
+            ret = VENDOR_ERR_INVLKEY | ssge[ssge_idx].lkey;
+            pr_dbg("Invalid lkey 0x%x\n", ssge[ssge_idx].lkey);
+            goto out;
+        }
+
+        dsge->addr = mr->user_mr.host_virt + ssge[ssge_idx].addr -
+                     mr->user_mr.guest_start;
+        dsge->length = ssge[ssge_idx].length;
+        dsge->lkey = rdma_backend_mr_lkey(&mr->backend_mr);
+
+        pr_dbg("ssge->addr=0x%lx\n", (uint64_t)ssge[ssge_idx].addr);
+        pr_dbg("dsge->addr=0x%lx\n", dsge->addr);
+        pr_dbg("dsge->length=%d\n", dsge->length);
+        pr_dbg("dsge->lkey=0x%x\n", dsge->lkey);
+
+        dsge++;
+    }
+
+out:
+    return ret;
+}
+
+void rdma_backend_post_send(RdmaBackendDev *backend_dev, + RdmaDeviceResources *rdma_dev_res, + RdmaBackendQP *qp, uint8_t qp_type, + struct ibv_sge *sge, uint32_t num_sge, + union ibv_gid *dgid, uint32_t dqpn, + uint32_t dqkey, void *ctx) +{ + BackendCtx *bctx; + struct ibv_sge new_sge[MAX_SGE]; + uint32_t bctx_id; + int rc; + struct ibv_send_wr wr =3D {0}, *bad_wr; + + if (!qp->ibqp && qp_type =3D=3D 0) { + pr_dbg("QP0 is not supported\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_QP0, ctx); + return; + } + + pr_dbg("num_sge=3D%d\n", num_sge); + + if (!qp->ibqp && qp_type =3D=3D 1) { + pr_dbg("QP1\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_MAD_SEND, ctx); + } + + if (!num_sge) { + pr_dbg("num_sge=3D0\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_NO_SGE, ctx); + return; + } + + bctx =3D malloc(sizeof(*bctx)); + if (unlikely(!bctx)) { + pr_dbg("Failed to allocate request ctx\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_NOMEM, ctx); + return; + } + memset(bctx, 0, sizeof(*bctx)); + + bctx->up_ctx =3D ctx; + bctx->is_tx_req =3D 1; + + rc =3D rdma_rm_alloc_cqe_ctx(rdma_dev_res, &bctx_id, bctx); + if (unlikely(rc)) { + pr_dbg("Failed to allocate cqe_ctx\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_NOMEM, ctx); + goto out_free_bctx; + } + + rc =3D build_host_sge_array(rdma_dev_res, new_sge, sge, num_sge); + if (rc) { + pr_dbg("Error: Failed to build host SGE array\n"); + comp_handler(IBV_WC_GENERAL_ERR, rc, ctx); + goto out_dealloc_cqe_ctx; + } + + if (qp_type =3D=3D IBV_QPT_UD) { + wr.wr.ud.ah =3D create_ah(backend_dev, qp->ibpd, + backend_dev->backend_gid_idx, dgid); + wr.wr.ud.remote_qpn =3D dqpn; + wr.wr.ud.remote_qkey =3D dqkey; + } + + wr.num_sge =3D num_sge; + wr.opcode =3D IBV_WR_SEND; + wr.send_flags =3D IBV_SEND_SIGNALED; + wr.sg_list =3D &new_sge[0]; + wr.wr_id =3D bctx_id; + + rc =3D ibv_post_send(qp->ibqp, &wr, &bad_wr); + pr_dbg("ibv_post_send=3D%d\n", rc); + if (rc) { + pr_dbg("Fail (%d, %d) to post send WQE to qpn %d\n", 
rc, errno, + qp->ibqp->qp_num); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_FAIL_BACKEND, ctx); + goto out_dealloc_cqe_ctx; + } + + return; + +out_dealloc_cqe_ctx: + rdma_rm_dealloc_cqe_ctx(rdma_dev_res, bctx_id); + +out_free_bctx: + free(bctx); +} + +void rdma_backend_post_recv(RdmaBackendDev *backend_dev, + RdmaDeviceResources *rdma_dev_res, + RdmaBackendQP *qp, uint8_t qp_type, + struct ibv_sge *sge, uint32_t num_sge, void *c= tx) +{ + BackendCtx *bctx; + struct ibv_sge new_sge[MAX_SGE]; + uint32_t bctx_id; + int rc; + struct ibv_recv_wr wr =3D {0}, *bad_wr; + + if (!qp->ibqp && qp_type =3D=3D 0) { + pr_dbg("QP0 is not supported\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_QP0, ctx); + return; + } + + pr_dbg("num_sge=3D%d\n", num_sge); + + if (!qp->ibqp && qp_type =3D=3D 1) { + pr_dbg("QP1\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_MAD_SEND, ctx); + return; + } + + if (!num_sge) { + pr_dbg("num_sge=3D0\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_NO_SGE, ctx); + return; + } + + bctx =3D malloc(sizeof(*bctx)); + if (unlikely(!bctx)) { + pr_dbg("Failed to allocate request ctx\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_NOMEM, ctx); + return; + } + memset(bctx, 0, sizeof(*bctx)); + + bctx->up_ctx =3D ctx; + bctx->is_tx_req =3D 0; + + rc =3D rdma_rm_alloc_cqe_ctx(rdma_dev_res, &bctx_id, bctx); + if (unlikely(rc)) { + pr_dbg("Failed to allocate cqe_ctx\n"); + comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_NOMEM, ctx); + goto out_free_bctx; + } + + rc =3D build_host_sge_array(rdma_dev_res, new_sge, sge, num_sge); + if (rc) { + pr_dbg("Error: Failed to build host SGE array\n"); + comp_handler(IBV_WC_GENERAL_ERR, rc, ctx); + goto out_dealloc_cqe_ctx; + } + + wr.num_sge =3D num_sge; + wr.sg_list =3D &new_sge[0]; + wr.wr_id =3D bctx_id; + rc =3D ibv_post_recv(qp->ibqp, &wr, &bad_wr); + pr_dbg("ibv_post_recv=3D%d\n", rc); + if (rc) { + pr_dbg("Fail (%d, %d) to post recv WQE to qpn %d\n", rc, errno, + qp->ibqp->qp_num); + 
comp_handler(IBV_WC_GENERAL_ERR, VENDOR_ERR_FAIL_BACKEND, ctx); + goto out_dealloc_cqe_ctx; + } + + return; + +out_dealloc_cqe_ctx: + rdma_rm_dealloc_cqe_ctx(rdma_dev_res, bctx_id); + +out_free_bctx: + free(bctx); +} + +int rdma_backend_create_pd(RdmaBackendDev *backend_dev, RdmaBackendPD *pd) +{ + pd->ibpd =3D ibv_alloc_pd(backend_dev->context); + + return pd->ibpd ? 0 : -EIO; +} + +void rdma_backend_destroy_pd(RdmaBackendPD *pd) +{ + if (pd->ibpd) { + ibv_dealloc_pd(pd->ibpd); + } +} + +int rdma_backend_create_mr(RdmaBackendMR *mr, RdmaBackendPD *pd, uint64_t = addr, + size_t length, int access) +{ + pr_dbg("addr=3D0x%lx\n", addr); + pr_dbg("len=3D%ld\n", length); + mr->ibpd =3D pd->ibpd; + mr->ibmr =3D ibv_reg_mr(mr->ibpd, (void *)addr, length, access); + + if (mr->ibmr) { + pr_dbg("lkey=3D0x%x\n", mr->ibmr->lkey); + pr_dbg("rkey=3D0x%x\n", mr->ibmr->rkey); + } + + return mr->ibmr ? 0 : -EIO; +} + +void rdma_backend_destroy_mr(RdmaBackendMR *mr) +{ + if (mr->ibmr) { + ibv_dereg_mr(mr->ibmr); + } +} + +int rdma_backend_create_cq(RdmaBackendDev *backend_dev, RdmaBackendCQ *cq, + int cqe) +{ + pr_dbg("cqe=3D%d\n", cqe); + + pr_dbg("dev->channel=3D%p\n", backend_dev->channel); + cq->ibcq =3D ibv_create_cq(backend_dev->context, cqe + 1, NULL, + backend_dev->channel, 0); + + if (cq->ibcq) { + if (ibv_req_notify_cq(cq->ibcq, 0)) { + pr_dbg("---> ibv_req_notify_cq\n"); + } + } + + cq->backend_dev =3D backend_dev; + + return cq->ibcq ? 
0 : -EIO;
+}
+
+void rdma_backend_destroy_cq(RdmaBackendCQ *cq)
+{
+    if (cq->ibcq) {
+        ibv_req_notify_cq(cq->ibcq, 0);
+
+        /* Cleanup the queue before destruction */
+        poll_cq(cq->backend_dev->rdma_dev_res, cq->ibcq, false);
+
+        ibv_destroy_cq(cq->ibcq);
+    }
+}
+
+int rdma_backend_create_qp(RdmaBackendQP *qp, uint8_t qp_type,
+                           RdmaBackendPD *pd, RdmaBackendCQ *scq,
+                           RdmaBackendCQ *rcq, uint32_t max_send_wr,
+                           uint32_t max_recv_wr, uint32_t max_send_sge,
+                           uint32_t max_recv_sge)
+{
+    struct ibv_qp_init_attr attr = {0};
+
+    qp->ibqp = 0;
+    pr_dbg("qp_type=%d\n", qp_type);
+
+    if (qp_type == 0) {
+        pr_dbg("QP0 is not supported\n");
+        return -EPERM;
+    }
+
+    if (qp_type == 1) {
+        pr_dbg("QP1\n");
+        return 0;
+    }
+
+    attr.qp_type = qp_type;
+    attr.send_cq = scq->ibcq;
+    attr.recv_cq = rcq->ibcq;
+    attr.cap.max_send_wr = max_send_wr;
+    attr.cap.max_recv_wr = max_recv_wr;
+    attr.cap.max_send_sge = max_send_sge;
+    attr.cap.max_recv_sge = max_recv_sge;
+
+    pr_dbg("max_send_wr=%d\n", max_send_wr);
+    pr_dbg("max_recv_wr=%d\n", max_recv_wr);
+    pr_dbg("max_send_sge=%d\n", max_send_sge);
+    pr_dbg("max_recv_sge=%d\n", max_recv_sge);
+
+    qp->ibpd = pd->ibpd;
+    qp->ibqp = ibv_create_qp(qp->ibpd, &attr);
+
+    if (unlikely(!qp->ibqp)) {
+        pr_dbg("Error from ibv_create_qp\n");
+        return -EIO;
+    }
+
+    /* TODO: Query QP to get max_inline_data and save it to be used in send */
+
+    pr_dbg("qpn=0x%x\n", qp->ibqp->qp_num);
+
+    return 0;
+}
+
+int rdma_backend_qp_state_init(RdmaBackendDev *backend_dev, RdmaBackendQP *qp,
+                               uint8_t qp_type, uint32_t qkey)
+{
+    struct ibv_qp_attr attr = {0};
+    int rc, attr_mask;
+
+    pr_dbg("qpn=0x%x\n", qp->ibqp->qp_num);
+    pr_dbg("sport_num=%d\n", backend_dev->port_num);
+
+    attr_mask = IBV_QP_STATE | IBV_QP_PKEY_INDEX | IBV_QP_PORT;
+    attr.qp_state = IBV_QPS_INIT;
+    attr.pkey_index = 0;
+    attr.port_num = backend_dev->port_num;
+    if (qp_type == IBV_QPT_RC) {
+        attr_mask |= IBV_QP_ACCESS_FLAGS;
+    }
+ if (qp_type =3D=3D IBV_QPT_UD) { + attr.qkey =3D qkey; + if (qkey) { + attr_mask |=3D IBV_QP_QKEY; + } + } + + rc =3D ibv_modify_qp(qp->ibqp, &attr, attr_mask); + if (rc) { + pr_dbg("Error %d from ibv_modify_qp\n", rc); + return -EIO; + } + + return 0; +} + +int rdma_backend_qp_state_rtr(RdmaBackendDev *backend_dev, RdmaBackendQP *= qp, + uint8_t qp_type, union ibv_gid *dgid, + uint32_t dqpn, uint32_t rq_psn, uint32_t qke= y) +{ + struct ibv_qp_attr attr =3D {0}; + union ibv_gid ibv_gid =3D { + .global.interface_id =3D dgid->global.interface_id, + .global.subnet_prefix =3D dgid->global.subnet_prefix + }; + int rc, attr_mask =3D 0; + + attr.qp_state =3D IBV_QPS_RTR; + attr_mask =3D IBV_QP_STATE; + + if (qp_type =3D=3D IBV_QPT_RC) { + pr_dbg("dgid=3D0x%lx,%lx\n", be64_to_cpu(ibv_gid.global.subnet_pre= fix), + be64_to_cpu(ibv_gid.global.interface_id)); + pr_dbg("dqpn=3D0x%x\n", dqpn); + pr_dbg("sgid_idx=3D%d\n", backend_dev->backend_gid_idx); + pr_dbg("sport_num=3D%d\n", backend_dev->port_num); + pr_dbg("rq_psn=3D0x%x\n", rq_psn); + + attr.path_mtu =3D IBV_MTU_1024; + attr.dest_qp_num =3D dqpn; + attr.max_dest_rd_atomic =3D 1; + attr.min_rnr_timer =3D 12; + attr.ah_attr.port_num =3D backend_dev->port_num; + attr.ah_attr.is_global =3D 1; + attr.ah_attr.grh.hop_limit =3D 1; + attr.ah_attr.grh.dgid =3D ibv_gid; + attr.ah_attr.grh.sgid_index =3D backend_dev->backend_gid_idx; + attr.rq_psn =3D rq_psn; + + attr_mask |=3D IBV_QP_AV | IBV_QP_PATH_MTU | IBV_QP_DEST_QPN | + IBV_QP_RQ_PSN | IBV_QP_MAX_DEST_RD_ATOMIC | + IBV_QP_MIN_RNR_TIMER; + } + + if (qp_type =3D=3D IBV_QPT_UD) { + pr_dbg("qkey=3D0x%x\n", qkey); + attr.qkey =3D qkey; + if (qkey) { + attr_mask |=3D IBV_QP_QKEY; + } + } + + rc =3D ibv_modify_qp(qp->ibqp, &attr, attr_mask); + if (rc) { + pr_dbg("Error %d from ibv_modify_qp\n", rc); + return -EIO; + } + + return 0; +} + +int rdma_backend_qp_state_rts(RdmaBackendQP *qp, uint8_t qp_type, + uint32_t sq_psn, uint32_t qkey) +{ + struct ibv_qp_attr attr =3D {0}; + int 
rc, attr_mask =3D 0; + + pr_dbg("qpn=3D0x%x\n", qp->ibqp->qp_num); + pr_dbg("sq_psn=3D0x%x\n", sq_psn); + + attr_mask |=3D IBV_QP_SQ_PSN; + attr.sq_psn =3D sq_psn; + + attr.qp_state =3D IBV_QPS_RTS; + attr_mask =3D IBV_QP_STATE | IBV_QP_SQ_PSN; + + if (qp_type =3D=3D IBV_QPT_RC) { + attr.timeout =3D 14; + attr.retry_cnt =3D 7; + attr.rnr_retry =3D 7; + attr.max_rd_atomic =3D 1; + + attr_mask |=3D IBV_QP_TIMEOUT | IBV_QP_RETRY_CNT | IBV_QP_RNR_RETR= Y | + IBV_QP_MAX_QP_RD_ATOMIC; + } + + if (qp_type =3D=3D IBV_QPT_UD) { + attr.qkey =3D qkey; + if (qkey) { + attr_mask |=3D IBV_QP_QKEY; + } + } + + rc =3D ibv_modify_qp(qp->ibqp, &attr, attr_mask); + if (rc) { + pr_dbg("Error %d from ibv_modify_qp\n", rc); + return -EIO; + } + + return 0; +} + +void rdma_backend_destroy_qp(RdmaBackendQP *qp) +{ + if (qp->ibqp) { + ibv_destroy_qp(qp->ibqp); + } +} + +#define CHK_ATTR(req, dev, member, fmt) ({ \ + pr_dbg("%s=3D"fmt","fmt"\n", #member, dev.member, req->member); \ + if (req->member > dev.member) { \ + warn_report("Setting of %s to 0x%lx higher than host device capabi= lity 0x%lx", \ + #member, (uint64_t)req->member, (uint64_t)dev.member);= \ + req->member =3D dev.member; \ + } \ + pr_dbg("%s=3D"fmt"\n", #member, req->member); }) + +static int init_device_caps(RdmaBackendDev *backend_dev, + struct ibv_device_attr *dev_attr) +{ + memset(&backend_dev->dev_attr, 0, sizeof(backend_dev->dev_attr)); + + if (ibv_query_device(backend_dev->context, &backend_dev->dev_attr)) { + return -EIO; + } + + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_mr_size, "%ld"); + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_qp, "%d"); + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_sge, "%d"); + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_qp_wr, "%d"); + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_cq, "%d"); + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_cqe, "%d"); + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_mr, "%d"); + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_pd, "%d"); + 
CHK_ATTR(dev_attr, backend_dev->dev_attr, max_qp_rd_atom, "%d"); + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_qp_init_rd_atom, "%d"); + CHK_ATTR(dev_attr, backend_dev->dev_attr, max_ah, "%d"); + + return 0; +} + +int rdma_backend_init(RdmaBackendDev *backend_dev, + RdmaDeviceResources *rdma_dev_res, + const char *backend_device_name, uint8_t port_num, + uint8_t backend_gid_idx, struct ibv_device_attr *dev= _attr, + Error **errp) +{ + int i; + int ret =3D 0; + int num_ibv_devices; + char thread_name[80] =3D {0}; + struct ibv_device **dev_list; + struct ibv_port_attr port_attr; + + backend_dev->backend_gid_idx =3D backend_gid_idx; + backend_dev->port_num =3D port_num; + backend_dev->rdma_dev_res =3D rdma_dev_res; + + rdma_backend_register_comp_handler(dummy_comp_handler); + + dev_list =3D ibv_get_device_list(&num_ibv_devices); + if (!dev_list) { + error_setg(errp, "Failed to get IB devices list"); + ret =3D -EIO; + goto out; + } + if (num_ibv_devices =3D=3D 0) { + error_setg(errp, "No IB devices were found"); + ret =3D -ENXIO; + goto out; + } + + if (backend_device_name) { + for (i =3D 0; dev_list[i]; ++i) { + if (!strcmp(ibv_get_device_name(dev_list[i]), + backend_device_name)) { + break; + } + } + + backend_dev->ib_dev =3D dev_list[i]; + if (!backend_dev->ib_dev) { + error_setg(errp, "Failed to find IB device %s", + backend_device_name); + ret =3D -EIO; + goto out; + } + } else { + backend_dev->ib_dev =3D *dev_list; + } + ibv_free_device_list(dev_list); + + pr_dbg("Using backend device %s, port %d, gid_idx %d\n", + ibv_get_device_name(backend_dev->ib_dev), + backend_dev->port_num, backend_dev->backend_gid_idx); + + backend_dev->context =3D ibv_open_device(backend_dev->ib_dev); + if (!backend_dev->context) { + error_setg(errp, "Failed to open IB device"); + ret =3D -EIO; + goto out; + } + + backend_dev->channel =3D ibv_create_comp_channel(backend_dev->context); + if (!backend_dev->channel) { + error_setg(errp, "Failed to create IB communication channel"); + ret 
=3D -EIO; + goto out_close_device; + } + pr_dbg("dev->backend_dev.channel=3D%p\n", backend_dev->channel); + + ret =3D ibv_query_gid(backend_dev->context, backend_dev->port_num, + backend_dev->backend_gid_idx, &backend_dev->gid); + if (ret) { + error_setg(errp, "Failed to query gid %d", + backend_dev->backend_gid_idx); + ret =3D -EIO; + goto out_destroy_comm_channel; + } + pr_dbg("subnet_prefix=3D0x%lx\n", + be64_to_cpu(backend_dev->gid.global.subnet_prefix)); + pr_dbg("interface_id=3D0x%lx\n", + be64_to_cpu(backend_dev->gid.global.interface_id)); + + ret =3D ibv_query_port(backend_dev->context, backend_dev->port_num, + &port_attr); + if (ret) { + error_setg(errp, "Error %d from ibv_query_port", ret); + ret =3D -EIO; + goto out_destroy_comm_channel; + } + + ret =3D init_device_caps(backend_dev, dev_attr); + if (ret) { + error_setg(errp, "Failed to initialize device capabilities"); + ret =3D -EIO; + goto out_destroy_comm_channel; + } + + sprintf(thread_name, "pvrdma_comp_%s", + ibv_get_device_name(backend_dev->ib_dev)); + backend_dev->comp_thread.run =3D true; + qemu_thread_create(&backend_dev->comp_thread.thread, thread_name, + comp_handler_thread, backend_dev, QEMU_THREAD_DETAC= HED); + + ah_cache_init(); + + goto out; + +out_destroy_comm_channel: + ibv_destroy_comp_channel(backend_dev->channel); + +out_close_device: + ibv_close_device(backend_dev->context); + +out: + return ret; +} + +void rdma_backend_fini(RdmaBackendDev *backend_dev) +{ + g_hash_table_destroy(ah_hash); + ibv_destroy_comp_channel(backend_dev->channel); + ibv_close_device(backend_dev->context); +} diff --git a/hw/rdma/rdma_backend.h b/hw/rdma/rdma_backend.h new file mode 100644 index 0000000000..0a594a42ca --- /dev/null +++ b/hw/rdma/rdma_backend.h @@ -0,0 +1,92 @@ +/* + * RDMA device: Definitions of Backend Device functions + * + * Copyright (C) 2018 Oracle + * Copyright (C) 2018 Red Hat Inc + * + * Authors: + * Yuval Shaia + * Marcel Apfelbaum + * + * This work is licensed under the terms of the 
+ * GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef RDMA_BACKEND_H
+#define RDMA_BACKEND_H
+
+#include
+#include "rdma_rm_defs.h"
+#include "rdma_backend_defs.h"
+
+static inline union ibv_gid *rdma_backend_gid(RdmaBackendDev *dev)
+{
+    return &dev->gid;
+}
+
+static inline uint32_t rdma_backend_qpn(RdmaBackendQP *qp)
+{
+    return qp->ibqp ? qp->ibqp->qp_num : 0;
+}
+
+static inline uint32_t rdma_backend_mr_lkey(RdmaBackendMR *mr)
+{
+    return mr->ibmr ? mr->ibmr->lkey : 0;
+}
+
+static inline uint32_t rdma_backend_mr_rkey(RdmaBackendMR *mr)
+{
+    return mr->ibmr ? mr->ibmr->rkey : 0;
+}
+
+int rdma_backend_init(RdmaBackendDev *backend_dev,
+                      RdmaDeviceResources *rdma_dev_res,
+                      const char *backend_device_name, uint8_t port_num,
+                      uint8_t backend_gid_idx, struct ibv_device_attr *dev_attr,
+                      Error **errp);
+void rdma_backend_fini(RdmaBackendDev *backend_dev);
+void rdma_backend_register_comp_handler(void (*handler)(int status,
+                                        unsigned int vendor_err, void *ctx));
+void rdma_backend_unregister_comp_handler(void);
+
+int rdma_backend_query_port(RdmaBackendDev *backend_dev,
+                            struct ibv_port_attr *port_attr);
+int rdma_backend_create_pd(RdmaBackendDev *backend_dev, RdmaBackendPD *pd);
+void rdma_backend_destroy_pd(RdmaBackendPD *pd);
+
+int rdma_backend_create_mr(RdmaBackendMR *mr, RdmaBackendPD *pd, uint64_t addr,
+                           size_t length, int access);
+void rdma_backend_destroy_mr(RdmaBackendMR *mr);
+
+int rdma_backend_create_cq(RdmaBackendDev *backend_dev, RdmaBackendCQ *cq,
+                           int cqe);
+void rdma_backend_destroy_cq(RdmaBackendCQ *cq);
+void rdma_backend_poll_cq(RdmaDeviceResources *rdma_dev_res, RdmaBackendCQ *cq);
+
+int rdma_backend_create_qp(RdmaBackendQP *qp, uint8_t qp_type,
+                           RdmaBackendPD *pd, RdmaBackendCQ *scq,
+                           RdmaBackendCQ *rcq, uint32_t max_send_wr,
+                           uint32_t max_recv_wr, uint32_t max_send_sge,
+                           uint32_t max_recv_sge);
+int rdma_backend_qp_state_init(RdmaBackendDev *backend_dev, RdmaBackendQP *qp,
+                               uint8_t
+                               qp_type, uint32_t qkey);
+int rdma_backend_qp_state_rtr(RdmaBackendDev *backend_dev, RdmaBackendQP *qp,
+                              uint8_t qp_type, union ibv_gid *dgid,
+                              uint32_t dqpn, uint32_t rq_psn, uint32_t qkey);
+int rdma_backend_qp_state_rts(RdmaBackendQP *qp, uint8_t qp_type,
+                              uint32_t sq_psn, uint32_t qkey);
+void rdma_backend_destroy_qp(RdmaBackendQP *qp);
+
+void rdma_backend_post_send(RdmaBackendDev *backend_dev,
+                            RdmaDeviceResources *rdma_dev_res,
+                            RdmaBackendQP *qp, uint8_t qp_type,
+                            struct ibv_sge *sge, uint32_t num_sge,
+                            union ibv_gid *dgid, uint32_t dqpn, uint32_t dqkey,
+                            void *ctx);
+void rdma_backend_post_recv(RdmaBackendDev *backend_dev,
+                            RdmaDeviceResources *rdma_dev_res,
+                            RdmaBackendQP *qp, uint8_t qp_type,
+                            struct ibv_sge *sge, uint32_t num_sge, void *ctx);
+
+#endif
diff --git a/hw/rdma/rdma_backend_defs.h b/hw/rdma/rdma_backend_defs.h
new file mode 100644
index 0000000000..516e6e0b19
--- /dev/null
+++ b/hw/rdma/rdma_backend_defs.h
@@ -0,0 +1,62 @@
+/*
+ * RDMA device: Definitions of Backend Device structures
+ *
+ * Copyright (C) 2018 Oracle
+ * Copyright (C) 2018 Red Hat Inc
+ *
+ * Authors:
+ *     Yuval Shaia
+ *     Marcel Apfelbaum
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef RDMA_BACKEND_DEFS_H
+#define RDMA_BACKEND_DEFS_H
+
+#include
+#include
+
+typedef struct RdmaDeviceResources RdmaDeviceResources;
+
+typedef struct RdmaBackendThread {
+    QemuThread thread;
+    QemuMutex mutex;
+    bool run;
+} RdmaBackendThread;
+
+typedef struct RdmaBackendDev {
+    PCIDevice *dev;
+    RdmaBackendThread comp_thread;
+    struct ibv_device *ib_dev;
+    uint8_t port_num;
+    struct ibv_context *context;
+    struct ibv_comp_channel *channel;
+    union ibv_gid gid;
+    struct ibv_device_attr dev_attr;
+    uint8_t backend_gid_idx;
+    RdmaDeviceResources *rdma_dev_res;
+} RdmaBackendDev;
+
+typedef struct RdmaBackendPD {
+    struct ibv_pd *ibpd;
+} RdmaBackendPD;
+
+typedef struct RdmaBackendMR {
+    struct ibv_pd *ibpd;
+    struct ibv_mr *ibmr;
+} RdmaBackendMR;
+
+typedef struct RdmaBackendCQ {
+    RdmaBackendDev *backend_dev;
+    struct ibv_cq *ibcq;
+} RdmaBackendCQ;
+
+typedef struct RdmaBackendQP {
+    struct ibv_pd *ibpd;
+    struct ibv_qp *ibqp;
+} RdmaBackendQP;
+
+#endif
diff --git a/hw/rdma/rdma_rm.c b/hw/rdma/rdma_rm.c
new file mode 100644
index 0000000000..888718698c
--- /dev/null
+++ b/hw/rdma/rdma_rm.c
@@ -0,0 +1,619 @@
+/*
+ * QEMU paravirtual RDMA - Resource Manager Implementation
+ *
+ * Copyright (C) 2018 Oracle
+ * Copyright (C) 2018 Red Hat Inc
+ *
+ * Authors:
+ *     Yuval Shaia
+ *     Marcel Apfelbaum
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include
+#include
+#include
+
+#include "rdma_utils.h"
+#include "rdma_backend.h"
+#include "rdma_rm.h"
+
+#define MAX_RM_TBL_NAME 16
+
+/* Page directory and page tables */
+#define PG_DIR_SZ (TARGET_PAGE_SIZE / sizeof(__u64))
+#define PG_TBL_SZ (TARGET_PAGE_SIZE / sizeof(__u64))
+
+static inline int res_tbl_init(const char *name, RdmaRmResTbl *tbl,
+                               uint32_t tbl_sz, uint32_t res_sz)
+{
+    tbl->tbl = malloc(tbl_sz * res_sz);
+    if (!tbl->tbl) {
+        return -ENOMEM;
+    }
+
+    strncpy(tbl->name, name, MAX_RM_TBL_NAME);
+    tbl->name[MAX_RM_TBL_NAME - 1] = 0;
+
+    tbl->bitmap = bitmap_new(tbl_sz);
+    tbl->tbl_sz = tbl_sz;
+    tbl->res_sz = res_sz;
+    qemu_mutex_init(&tbl->lock);
+
+    return 0;
+}
+
+static inline void res_tbl_free(RdmaRmResTbl *tbl)
+{
+    qemu_mutex_destroy(&tbl->lock);
+    free(tbl->tbl);
+    bitmap_zero_extend(tbl->bitmap, tbl->tbl_sz, 0);
+}
+
+static inline void *res_tbl_get(RdmaRmResTbl *tbl, uint32_t handle)
+{
+    pr_dbg("%s, handle=%d\n", tbl->name, handle);
+
+    if ((handle < tbl->tbl_sz) && (test_bit(handle, tbl->bitmap))) {
+        return tbl->tbl + handle * tbl->res_sz;
+    } else {
+        pr_dbg("Invalid handle %d\n", handle);
+        return NULL;
+    }
+}
+
+static inline void *res_tbl_alloc(RdmaRmResTbl *tbl, uint32_t *handle)
+{
+    qemu_mutex_lock(&tbl->lock);
+
+    /* find_first_zero_bit returns tbl_sz when the bitmap is full */
+    *handle = find_first_zero_bit(tbl->bitmap, tbl->tbl_sz);
+    if (*handle >= tbl->tbl_sz) {
+        pr_dbg("Failed to alloc, bitmap is full\n");
+        qemu_mutex_unlock(&tbl->lock);
+        return NULL;
+    }
+
+    set_bit(*handle, tbl->bitmap);
+
+    qemu_mutex_unlock(&tbl->lock);
+
+    memset(tbl->tbl + *handle * tbl->res_sz, 0, tbl->res_sz);
+
+    pr_dbg("%s, handle=%d\n", tbl->name, *handle);
+
+    return tbl->tbl + *handle * tbl->res_sz;
+}
+
+static inline void res_tbl_dealloc(RdmaRmResTbl *tbl, uint32_t handle)
+{
+    pr_dbg("%s, handle=%d\n", tbl->name, handle);
+
+    qemu_mutex_lock(&tbl->lock);
+
+    if (handle < tbl->tbl_sz) {
+        clear_bit(handle, tbl->bitmap);
+    }
+
+    qemu_mutex_unlock(&tbl->lock);
+}
+
+int
+rdma_rm_alloc_pd(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
+                 uint32_t *pd_handle, uint32_t ctx_handle)
+{
+    RdmaRmPD *pd;
+    int ret = -ENOMEM;
+
+    pd = res_tbl_alloc(&dev_res->pd_tbl, pd_handle);
+    if (!pd) {
+        goto out;
+    }
+
+    ret = rdma_backend_create_pd(backend_dev, &pd->backend_pd);
+    if (ret) {
+        ret = -EIO;
+        goto out_tbl_dealloc;
+    }
+
+    pd->ctx_handle = ctx_handle;
+
+    return 0;
+
+out_tbl_dealloc:
+    res_tbl_dealloc(&dev_res->pd_tbl, *pd_handle);
+
+out:
+    return ret;
+}
+
+RdmaRmPD *rdma_rm_get_pd(RdmaDeviceResources *dev_res, uint32_t pd_handle)
+{
+    return res_tbl_get(&dev_res->pd_tbl, pd_handle);
+}
+
+void rdma_rm_dealloc_pd(RdmaDeviceResources *dev_res, uint32_t pd_handle)
+{
+    RdmaRmPD *pd = rdma_rm_get_pd(dev_res, pd_handle);
+
+    if (pd) {
+        rdma_backend_destroy_pd(&pd->backend_pd);
+        res_tbl_dealloc(&dev_res->pd_tbl, pd_handle);
+    }
+}
+
+int rdma_rm_alloc_mr(RdmaDeviceResources *dev_res, uint32_t pd_handle,
+                     uint64_t guest_start, size_t guest_length, void *host_virt,
+                     int access_flags, uint32_t *mr_handle, uint32_t *lkey,
+                     uint32_t *rkey)
+{
+    RdmaRmMR *mr;
+    int ret = 0;
+    RdmaRmPD *pd;
+    uint64_t addr;
+    size_t length;
+
+    pd = rdma_rm_get_pd(dev_res, pd_handle);
+    if (!pd) {
+        pr_dbg("Invalid PD\n");
+        ret = -EINVAL;
+        goto out;
+    }
+
+    mr = res_tbl_alloc(&dev_res->mr_tbl, mr_handle);
+    if (!mr) {
+        pr_dbg("Failed to allocate obj in table\n");
+        ret = -ENOMEM;
+        goto out;
+    }
+
+    if (!host_virt) {
+        /* TODO: This is my guess but not so sure that this needs to be
+         * done */
+        length = mr->length = TARGET_PAGE_SIZE;
+        addr = mr->addr = (uint64_t)malloc(length);
+    } else {
+        mr->addr = 0;
+
+        mr->user_mr.host_virt = (uint64_t) host_virt;
+        pr_dbg("host_virt=0x%lx\n", mr->user_mr.host_virt);
+        mr->user_mr.length = guest_length;
+        pr_dbg("length=0x%lx\n", guest_length);
+        mr->user_mr.guest_start = guest_start;
+        pr_dbg("guest_start=0x%lx\n", mr->user_mr.guest_start);
+
+        length
+            = mr->user_mr.length;
+        addr = mr->user_mr.host_virt;
+    }
+
+    ret = rdma_backend_create_mr(&mr->backend_mr, &pd->backend_pd, addr, length,
+                                 access_flags);
+    if (ret) {
+        pr_dbg("Fail in rdma_backend_create_mr, err=%d\n", ret);
+        ret = -EIO;
+        goto out_dealloc_mr;
+    }
+
+    if (!host_virt) {
+        *lkey = mr->lkey = rdma_backend_mr_lkey(&mr->backend_mr);
+        *rkey = mr->rkey = rdma_backend_mr_rkey(&mr->backend_mr);
+    } else {
+        /* We keep mr_handle in lkey so send and recv can get the MR ptr */
+        *lkey = *mr_handle;
+        *rkey = -1;
+    }
+
+    mr->pd_handle = pd_handle;
+
+    return 0;
+
+out_dealloc_mr:
+    res_tbl_dealloc(&dev_res->mr_tbl, *mr_handle);
+
+out:
+    return ret;
+}
+
+RdmaRmMR *rdma_rm_get_mr(RdmaDeviceResources *dev_res, uint32_t mr_handle)
+{
+    return res_tbl_get(&dev_res->mr_tbl, mr_handle);
+}
+
+void rdma_rm_dealloc_mr(RdmaDeviceResources *dev_res, uint32_t mr_handle)
+{
+    RdmaRmMR *mr = rdma_rm_get_mr(dev_res, mr_handle);
+
+    if (mr) {
+        rdma_backend_destroy_mr(&mr->backend_mr);
+        munmap((void *)mr->user_mr.host_virt, mr->user_mr.length);
+        free((void *)mr->addr);
+        res_tbl_dealloc(&dev_res->mr_tbl, mr_handle);
+    }
+}
+
+int rdma_rm_alloc_uc(RdmaDeviceResources *dev_res, uint32_t pfn,
+                     uint32_t *uc_handle)
+{
+    RdmaRmUC *uc;
+
+    /* TODO: Need to make sure pfn is between bar start address and
+     * bsd+RDMA_BAR2_UAR_SIZE
+    if (pfn > RDMA_BAR2_UAR_SIZE) {
+        pr_err("pfn out of range (%d > %d)\n", pfn, RDMA_BAR2_UAR_SIZE);
+        return -ENOMEM;
+    }
+    */
+
+    uc = res_tbl_alloc(&dev_res->uc_tbl, uc_handle);
+    if (!uc) {
+        return -ENOMEM;
+    }
+
+    return 0;
+}
+
+RdmaRmUC *rdma_rm_get_uc(RdmaDeviceResources *dev_res, uint32_t uc_handle)
+{
+    return res_tbl_get(&dev_res->uc_tbl, uc_handle);
+}
+
+void rdma_rm_dealloc_uc(RdmaDeviceResources *dev_res, uint32_t uc_handle)
+{
+    RdmaRmUC *uc = rdma_rm_get_uc(dev_res, uc_handle);
+
+    if (uc) {
+        res_tbl_dealloc(&dev_res->uc_tbl, uc_handle);
+    }
+}
+
+RdmaRmCQ *rdma_rm_get_cq(RdmaDeviceResources *dev_res,
+                         uint32_t cq_handle)
+{
+    return res_tbl_get(&dev_res->cq_tbl, cq_handle);
+}
+
+int rdma_rm_alloc_cq(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
+                     uint32_t cqe, uint32_t *cq_handle, void *opaque)
+{
+    int rc = -ENOMEM;
+    RdmaRmCQ *cq;
+
+    cq = res_tbl_alloc(&dev_res->cq_tbl, cq_handle);
+    if (!cq) {
+        return -ENOMEM;
+    }
+
+    cq->opaque = opaque;
+    cq->notify = false;
+
+    rc = rdma_backend_create_cq(backend_dev, &cq->backend_cq, cqe);
+    if (rc) {
+        rc = -EIO;
+        goto out_dealloc_cq;
+    }
+
+    goto out;
+
+out_dealloc_cq:
+    rdma_rm_dealloc_cq(dev_res, *cq_handle);
+
+out:
+    return rc;
+}
+
+void rdma_rm_req_notify_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle,
+                           bool notify)
+{
+    RdmaRmCQ *cq;
+
+    pr_dbg("cq_handle=%d, notify=0x%x\n", cq_handle, notify);
+
+    cq = rdma_rm_get_cq(dev_res, cq_handle);
+    if (!cq) {
+        return;
+    }
+
+    cq->notify = notify;
+    pr_dbg("notify=%d\n", cq->notify);
+}
+
+void rdma_rm_dealloc_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle)
+{
+    RdmaRmCQ *cq;
+
+    cq = rdma_rm_get_cq(dev_res, cq_handle);
+    if (!cq) {
+        return;
+    }
+
+    rdma_backend_destroy_cq(&cq->backend_cq);
+
+    res_tbl_dealloc(&dev_res->cq_tbl, cq_handle);
+}
+
+RdmaRmQP *rdma_rm_get_qp(RdmaDeviceResources *dev_res, uint32_t qpn)
+{
+    GBytes *key = g_bytes_new(&qpn, sizeof(qpn));
+
+    RdmaRmQP *qp = g_hash_table_lookup(dev_res->qp_hash, key);
+
+    g_bytes_unref(key);
+
+    return qp;
+}
+
+int rdma_rm_alloc_qp(RdmaDeviceResources *dev_res, uint32_t pd_handle,
+                     uint8_t qp_type, uint32_t max_send_wr,
+                     uint32_t max_send_sge, uint32_t send_cq_handle,
+                     uint32_t max_recv_wr, uint32_t max_recv_sge,
+                     uint32_t recv_cq_handle, void *opaque, uint32_t *qpn)
+{
+    int rc = 0;
+    RdmaRmQP *qp;
+    RdmaRmCQ *scq, *rcq;
+    RdmaRmPD *pd;
+    uint32_t rm_qpn;
+
+    pr_dbg("qp_type=%d\n", qp_type);
+
+    pd = rdma_rm_get_pd(dev_res, pd_handle);
+    if (!pd) {
+        pr_err("Invalid pd handle (%d)\n", pd_handle);
+        return -EINVAL;
+    }
+
+    scq =
+        rdma_rm_get_cq(dev_res, send_cq_handle);
+    rcq = rdma_rm_get_cq(dev_res, recv_cq_handle);
+
+    if (!scq || !rcq) {
+        pr_err("Invalid send_cqn or recv_cqn (%d, %d)\n",
+               send_cq_handle, recv_cq_handle);
+        return -EINVAL;
+    }
+
+    qp = res_tbl_alloc(&dev_res->qp_tbl, &rm_qpn);
+    if (!qp) {
+        return -ENOMEM;
+    }
+    pr_dbg("rm_qpn=%d\n", rm_qpn);
+
+    qp->qpn = rm_qpn;
+    qp->qp_state = IBV_QPS_ERR;
+    qp->qp_type = qp_type;
+    qp->send_cq_handle = send_cq_handle;
+    qp->recv_cq_handle = recv_cq_handle;
+    qp->opaque = opaque;
+
+    rc = rdma_backend_create_qp(&qp->backend_qp, qp_type, &pd->backend_pd,
+                                &scq->backend_cq, &rcq->backend_cq, max_send_wr,
+                                max_recv_wr, max_send_sge, max_recv_sge);
+    if (rc) {
+        rc = -EIO;
+        goto out_dealloc_qp;
+    }
+
+    *qpn = rdma_backend_qpn(&qp->backend_qp);
+    pr_dbg("rm_qpn=%d, backend_qpn=0x%x\n", rm_qpn, *qpn);
+    g_hash_table_insert(dev_res->qp_hash, g_bytes_new(qpn, sizeof(*qpn)), qp);
+
+    return 0;
+
+out_dealloc_qp:
+    res_tbl_dealloc(&dev_res->qp_tbl, qp->qpn);
+
+    return rc;
+}
+
+int rdma_rm_modify_qp(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
+                      uint32_t qp_handle, uint32_t attr_mask,
+                      union ibv_gid *dgid, uint32_t dqpn,
+                      enum ibv_qp_state qp_state, uint32_t qkey,
+                      uint32_t rq_psn, uint32_t sq_psn)
+{
+    RdmaRmQP *qp;
+    int ret;
+
+    pr_dbg("qpn=%d\n", qp_handle);
+
+    qp = rdma_rm_get_qp(dev_res, qp_handle);
+    if (!qp) {
+        return -EINVAL;
+    }
+
+    pr_dbg("qp_type=%d\n", qp->qp_type);
+    pr_dbg("attr_mask=0x%x\n", attr_mask);
+
+    if (qp->qp_type == 0) {
+        pr_dbg("QP0 is not supported\n");
+        return -EPERM;
+    }
+
+    if (qp->qp_type == 1) {
+        pr_dbg("QP1\n");
+        return 0;
+    }
+
+    if (attr_mask & IBV_QP_STATE) {
+        qp->qp_state = qp_state;
+        pr_dbg("qp_state=%d\n", qp->qp_state);
+
+        if (qp->qp_state == IBV_QPS_INIT) {
+            ret = rdma_backend_qp_state_init(backend_dev, &qp->backend_qp,
+                                             qp->qp_type, qkey);
+            if (ret) {
+                return -EIO;
+            }
+        }
+
+        if (qp->qp_state ==
+            IBV_QPS_RTR) {
+            if (!(attr_mask & IBV_QP_DEST_QPN) && qp->qp_type == IBV_QPT_RC) {
+                pr_dbg("IBV_QPS_RTR but not IBV_QP_DEST_QPN\n");
+                return -EIO;
+            }
+            if (!(attr_mask & IBV_QP_AV) && qp->qp_type == IBV_QPT_RC) {
+                pr_dbg("IBV_QPS_RTR but not IBV_QP_AV\n");
+                return -EIO;
+            }
+
+            ret = rdma_backend_qp_state_rtr(backend_dev, &qp->backend_qp,
+                                            qp->qp_type, dgid, dqpn, rq_psn,
+                                            qkey);
+            if (ret) {
+                return -EIO;
+            }
+        }
+
+        if (qp->qp_state == IBV_QPS_RTS) {
+            ret = rdma_backend_qp_state_rts(&qp->backend_qp, qp->qp_type,
+                                            sq_psn, qkey);
+            if (ret) {
+                return -EIO;
+            }
+        }
+    }
+
+    return 0;
+}
+
+void rdma_rm_dealloc_qp(RdmaDeviceResources *dev_res, uint32_t qp_handle)
+{
+    RdmaRmQP *qp;
+    GBytes *key;
+
+    key = g_bytes_new(&qp_handle, sizeof(qp_handle));
+    qp = g_hash_table_lookup(dev_res->qp_hash, key);
+    if (!qp) {
+        g_bytes_unref(key);
+        return;
+    }
+    g_hash_table_remove(dev_res->qp_hash, key);
+    g_bytes_unref(key);
+
+    rdma_backend_destroy_qp(&qp->backend_qp);
+
+    res_tbl_dealloc(&dev_res->qp_tbl, qp->qpn);
+}
+
+void *rdma_rm_get_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t cqe_ctx_id)
+{
+    void **cqe_ctx;
+
+    cqe_ctx = res_tbl_get(&dev_res->cqe_ctx_tbl, cqe_ctx_id);
+    if (!cqe_ctx) {
+        return NULL;
+    }
+
+    pr_dbg("ctx=%p\n", *cqe_ctx);
+
+    return *cqe_ctx;
+}
+
+int rdma_rm_alloc_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t *cqe_ctx_id,
+                          void *ctx)
+{
+    void **cqe_ctx;
+
+    cqe_ctx = res_tbl_alloc(&dev_res->cqe_ctx_tbl, cqe_ctx_id);
+    if (!cqe_ctx) {
+        return -ENOMEM;
+    }
+
+    pr_dbg("ctx=%p\n", ctx);
+    *cqe_ctx = ctx;
+
+    return 0;
+}
+
+void rdma_rm_dealloc_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t cqe_ctx_id)
+{
+    res_tbl_dealloc(&dev_res->cqe_ctx_tbl, cqe_ctx_id);
+}
+
+static void destroy_qp_hash_key(gpointer data)
+{
+    g_bytes_unref(data);
+}
+
+int rdma_rm_init(RdmaDeviceResources *dev_res, struct ibv_device_attr *dev_attr,
+                 Error **errp)
+{
+    int ret = 0;
+
+    ret = res_tbl_init("PD", &dev_res->pd_tbl,
+                       dev_attr->max_pd,
+                       sizeof(RdmaRmPD));
+    if (ret) {
+        goto out;
+    }
+
+    ret = res_tbl_init("CQ", &dev_res->cq_tbl, dev_attr->max_cq,
+                       sizeof(RdmaRmCQ));
+    if (ret) {
+        goto cln_pds;
+    }
+
+    ret = res_tbl_init("MR", &dev_res->mr_tbl, dev_attr->max_mr,
+                       sizeof(RdmaRmMR));
+    if (ret) {
+        goto cln_cqs;
+    }
+
+    ret = res_tbl_init("QP", &dev_res->qp_tbl, dev_attr->max_qp,
+                       sizeof(RdmaRmQP));
+    if (ret) {
+        goto cln_mrs;
+    }
+
+    ret = res_tbl_init("CQE_CTX", &dev_res->cqe_ctx_tbl, dev_attr->max_qp *
+                       dev_attr->max_qp_wr, sizeof(void *));
+    if (ret) {
+        goto cln_qps;
+    }
+
+    ret = res_tbl_init("UC", &dev_res->uc_tbl, MAX_UCS, sizeof(RdmaRmUC));
+    if (ret) {
+        goto cln_cqe_ctxs;
+    }
+
+    dev_res->qp_hash = g_hash_table_new_full(g_bytes_hash, g_bytes_equal,
+                                             destroy_qp_hash_key, NULL);
+    if (!dev_res->qp_hash) {
+        ret = -ENOMEM;
+        goto cln_ucs;
+    }
+
+    goto out;
+
+cln_ucs:
+    res_tbl_free(&dev_res->uc_tbl);
+
+cln_cqe_ctxs:
+    res_tbl_free(&dev_res->cqe_ctx_tbl);
+
+cln_qps:
+    res_tbl_free(&dev_res->qp_tbl);
+
+cln_mrs:
+    res_tbl_free(&dev_res->mr_tbl);
+
+cln_cqs:
+    res_tbl_free(&dev_res->cq_tbl);
+
+cln_pds:
+    res_tbl_free(&dev_res->pd_tbl);
+
+out:
+    if (ret) {
+        error_setg(errp, "Failed to initialize RM");
+    }
+
+    return ret;
+}
+
+void rdma_rm_fini(RdmaDeviceResources *dev_res)
+{
+    res_tbl_free(&dev_res->uc_tbl);
+    res_tbl_free(&dev_res->cqe_ctx_tbl);
+    res_tbl_free(&dev_res->qp_tbl);
+    res_tbl_free(&dev_res->cq_tbl);
+    res_tbl_free(&dev_res->mr_tbl);
+    res_tbl_free(&dev_res->pd_tbl);
+    g_hash_table_destroy(dev_res->qp_hash);
+}
diff --git a/hw/rdma/rdma_rm.h b/hw/rdma/rdma_rm.h
new file mode 100644
index 0000000000..82a1026629
--- /dev/null
+++ b/hw/rdma/rdma_rm.h
@@ -0,0 +1,69 @@
+/*
+ * RDMA device: Definitions of Resource Manager functions
+ *
+ * Copyright (C) 2018 Oracle
+ * Copyright (C) 2018 Red Hat Inc
+ *
+ * Authors:
+ *     Yuval Shaia
+ *     Marcel Apfelbaum
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef RDMA_RM_H
+#define RDMA_RM_H
+
+#include
+#include "rdma_backend_defs.h"
+#include "rdma_rm_defs.h"
+
+int rdma_rm_init(RdmaDeviceResources *dev_res, struct ibv_device_attr *dev_attr,
+                 Error **errp);
+void rdma_rm_fini(RdmaDeviceResources *dev_res);
+
+int rdma_rm_alloc_pd(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
+                     uint32_t *pd_handle, uint32_t ctx_handle);
+RdmaRmPD *rdma_rm_get_pd(RdmaDeviceResources *dev_res, uint32_t pd_handle);
+void rdma_rm_dealloc_pd(RdmaDeviceResources *dev_res, uint32_t pd_handle);
+
+int rdma_rm_alloc_mr(RdmaDeviceResources *dev_res, uint32_t pd_handle,
+                     uint64_t guest_start, size_t guest_length, void *host_virt,
+                     int access_flags, uint32_t *mr_handle, uint32_t *lkey,
+                     uint32_t *rkey);
+RdmaRmMR *rdma_rm_get_mr(RdmaDeviceResources *dev_res, uint32_t mr_handle);
+void rdma_rm_dealloc_mr(RdmaDeviceResources *dev_res, uint32_t mr_handle);
+
+int rdma_rm_alloc_uc(RdmaDeviceResources *dev_res, uint32_t pfn,
+                     uint32_t *uc_handle);
+RdmaRmUC *rdma_rm_get_uc(RdmaDeviceResources *dev_res, uint32_t uc_handle);
+void rdma_rm_dealloc_uc(RdmaDeviceResources *dev_res, uint32_t uc_handle);
+
+int rdma_rm_alloc_cq(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
+                     uint32_t cqe, uint32_t *cq_handle, void *opaque);
+RdmaRmCQ *rdma_rm_get_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle);
+void rdma_rm_req_notify_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle,
+                           bool notify);
+void rdma_rm_dealloc_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle);
+
+int rdma_rm_alloc_qp(RdmaDeviceResources *dev_res, uint32_t pd_handle,
+                     uint8_t qp_type, uint32_t max_send_wr,
+                     uint32_t max_send_sge, uint32_t send_cq_handle,
+                     uint32_t max_recv_wr, uint32_t max_recv_sge,
+                     uint32_t recv_cq_handle, void *opaque, uint32_t *qpn);
+RdmaRmQP *rdma_rm_get_qp(RdmaDeviceResources *dev_res, uint32_t qpn);
+int rdma_rm_modify_qp(RdmaDeviceResources *dev_res,
+                      RdmaBackendDev *backend_dev,
+                      uint32_t qp_handle, uint32_t attr_mask,
+                      union ibv_gid *dgid, uint32_t dqpn,
+                      enum ibv_qp_state qp_state, uint32_t qkey,
+                      uint32_t rq_psn, uint32_t sq_psn);
+void rdma_rm_dealloc_qp(RdmaDeviceResources *dev_res, uint32_t qp_handle);
+
+int rdma_rm_alloc_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t *cqe_ctx_id,
+                          void *ctx);
+void *rdma_rm_get_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t cqe_ctx_id);
+void rdma_rm_dealloc_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t cqe_ctx_id);
+
+#endif
diff --git a/hw/rdma/rdma_rm_defs.h b/hw/rdma/rdma_rm_defs.h
new file mode 100644
index 0000000000..e327b7177a
--- /dev/null
+++ b/hw/rdma/rdma_rm_defs.h
@@ -0,0 +1,106 @@
+/*
+ * RDMA device: Definitions of Resource Manager structures
+ *
+ * Copyright (C) 2018 Oracle
+ * Copyright (C) 2018 Red Hat Inc
+ *
+ * Authors:
+ *     Yuval Shaia
+ *     Marcel Apfelbaum
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef RDMA_RM_DEFS_H
+#define RDMA_RM_DEFS_H
+
+#include "rdma_backend_defs.h"
+
+#define MAX_PORTS 1
+#define MAX_PORT_GIDS 1
+#define MAX_PORT_PKEYS 1
+#define MAX_PKEYS 1
+#define MAX_GIDS 2048
+#define MAX_UCS 512
+#define MAX_MR_SIZE (1UL << 27)
+#define MAX_QP 1024
+#define MAX_SGE 4
+#define MAX_CQ 2048
+#define MAX_MR 1024
+#define MAX_PD 1024
+#define MAX_QP_RD_ATOM 16
+#define MAX_QP_INIT_RD_ATOM 16
+#define MAX_AH 64
+
+#define MAX_RMRESTBL_NAME_SZ 16
+typedef struct RdmaRmResTbl {
+    char name[MAX_RMRESTBL_NAME_SZ];
+    unsigned long *bitmap;
+    size_t tbl_sz;
+    size_t res_sz;
+    void *tbl;
+    QemuMutex lock;
+} RdmaRmResTbl;
+
+typedef struct RdmaRmPD {
+    uint32_t ctx_handle;
+    RdmaBackendPD backend_pd;
+} RdmaRmPD;
+
+typedef struct RdmaRmCQ {
+    void *opaque;
+    bool notify;
+    RdmaBackendCQ backend_cq;
+} RdmaRmCQ;
+
+typedef struct RdmaRmUserMR {
+    uint64_t host_virt;
+    uint64_t guest_start;
+    size_t length;
+} RdmaRmUserMR;
+
+/* MR (DMA region) */
+typedef struct RdmaRmMR {
+    uint32_t pd_handle;
+    uint32_t lkey;
+    uint32_t rkey;
+    RdmaBackendMR backend_mr;
+    RdmaRmUserMR user_mr;
+    uint64_t addr;
+    size_t length;
+} RdmaRmMR;
+
+typedef struct RdmaRmUC {
+    uint64_t uc_handle;
+} RdmaRmUC;
+
+typedef struct RdmaRmQP {
+    uint32_t qp_type;
+    enum ibv_qp_state qp_state;
+    uint32_t qpn;
+    void *opaque;
+    uint32_t send_cq_handle;
+    uint32_t recv_cq_handle;
+    RdmaBackendQP backend_qp;
+} RdmaRmQP;
+
+typedef struct RdmaRmPort {
+    enum ibv_port_state state;
+    union ibv_gid gid_tbl[MAX_PORT_GIDS];
+    int *pkey_tbl; /* TODO: Not yet supported */
+} RdmaRmPort;
+
+typedef struct RdmaDeviceResources {
+    RdmaRmPort ports[MAX_PORTS];
+    RdmaRmResTbl pd_tbl;
+    RdmaRmResTbl mr_tbl;
+    RdmaRmResTbl uc_tbl;
+    RdmaRmResTbl qp_tbl;
+    RdmaRmResTbl cq_tbl;
+    RdmaRmResTbl cqe_ctx_tbl;
+    GHashTable *qp_hash; /* Keeps mapping between real and emulated */
+} RdmaDeviceResources;
+
+#endif
diff --git a/hw/rdma/rdma_utils.c b/hw/rdma/rdma_utils.c
new file mode 100644
index 0000000000..fb3e2a76c0
--- /dev/null
+++ b/hw/rdma/rdma_utils.c
@@ -0,0 +1,52 @@
+
+/*
+ * QEMU paravirtual RDMA - Generic RDMA backend
+ *
+ * Copyright (C) 2018 Oracle
+ * Copyright (C) 2018 Red Hat Inc
+ *
+ * Authors:
+ *     Yuval Shaia
+ *     Marcel Apfelbaum
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "rdma_utils.h"
+
+void *rdma_pci_dma_map(PCIDevice *dev, dma_addr_t addr, dma_addr_t plen)
+{
+    void *p;
+    hwaddr len = plen;
+
+    if (!addr) {
+        pr_dbg("addr is NULL\n");
+        return NULL;
+    }
+
+    p = pci_dma_map(dev, addr, &len, DMA_DIRECTION_TO_DEVICE);
+    if (!p) {
+        pr_dbg("Fail in pci_dma_map, addr=0x%llx, len=%ld\n",
+               (long long unsigned int)addr, len);
+        return NULL;
+    }
+
+    if (len != plen) {
+        rdma_pci_dma_unmap(dev, p, len);
+        return NULL;
+    }
+
+    pr_dbg("0x%llx -> %p (len=%ld)\n", (long long unsigned int)addr, p, len);
+
+    return p;
+}
+
+void rdma_pci_dma_unmap(PCIDevice *dev, void *buffer, dma_addr_t len)
+{
+    pr_dbg("%p\n", buffer);
+    if (buffer) {
+        pci_dma_unmap(dev, buffer, len, DMA_DIRECTION_TO_DEVICE, 0);
+    }
+}
diff --git a/hw/rdma/rdma_utils.h b/hw/rdma/rdma_utils.h
new file mode 100644
index 0000000000..b54e96b0ca
--- /dev/null
+++ b/hw/rdma/rdma_utils.h
@@ -0,0 +1,43 @@
+/*
+ * RDMA device: Debug utilities
+ *
+ * Copyright (C) 2018 Oracle
+ * Copyright (C) 2018 Red Hat Inc
+ *
+ *
+ * Authors:
+ *     Yuval Shaia
+ *     Marcel Apfelbaum
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef RDMA_UTILS_H
+#define RDMA_UTILS_H
+
+#include
+#include
+#include
+
+#define pr_info(fmt, ...) \
+    fprintf(stdout, "%s: %-20s (%3d): " fmt, "pvrdma", __func__, __LINE__,\
+            ## __VA_ARGS__)
+
+#define pr_err(fmt, ...) \
+    fprintf(stderr, "%s: Error at %-20s (%3d): " fmt, "pvrdma", __func__, \
+            __LINE__, ## __VA_ARGS__)
+
+#ifdef PVRDMA_DEBUG
+#define pr_dbg(fmt, ...) \
+    fprintf(stdout, "%s: %-20s (%3d): " fmt, "pvrdma", __func__, __LINE__,\
+            ## __VA_ARGS__)
+#else
+#define pr_dbg(fmt, ...)
+#endif
+
+void *rdma_pci_dma_map(PCIDevice *dev, dma_addr_t addr, dma_addr_t plen);
+void rdma_pci_dma_unmap(PCIDevice *dev, void *buffer, dma_addr_t len);
+
+#endif
diff --git a/hw/rdma/trace-events b/hw/rdma/trace-events
new file mode 100644
index 0000000000..421986adbe
--- /dev/null
+++ b/hw/rdma/trace-events
@@ -0,0 +1,5 @@
+# See docs/tracing.txt for syntax documentation.
+
+#hw/rdma/rdma_backend.c
+create_ah_cache_hit(uint64_t subnet, uint64_t net_id) "subnet = 0x%lx net_id = 0x%lx"
+create_ah_cache_miss(uint64_t subnet, uint64_t net_id) "subnet = 0x%lx net_id = 0x%lx"
diff --git a/hw/rdma/vmw/pvrdma.h b/hw/rdma/vmw/pvrdma.h
new file mode 100644
index 0000000000..2a8a4beb78
--- /dev/null
+++ b/hw/rdma/vmw/pvrdma.h
@@ -0,0 +1,122 @@
+/*
+ * QEMU VMWARE paravirtual RDMA device definitions
+ *
+ * Copyright (C) 2018 Oracle
+ * Copyright (C) 2018 Red Hat Inc
+ *
+ * Authors:
+ *     Yuval Shaia
+ *     Marcel Apfelbaum
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef PVRDMA_PVRDMA_H
+#define PVRDMA_PVRDMA_H
+
+#include
+#include
+
+#include "../rdma_backend_defs.h"
+#include "../rdma_rm_defs.h"
+
+#include "pvrdma_dev_api.h"
+#include "pvrdma_ring.h"
+#include "pvrdma_dev_ring.h"
+
+/* BARs */
+#define RDMA_MSIX_BAR_IDX 0
+#define RDMA_REG_BAR_IDX 1
+#define RDMA_UAR_BAR_IDX 2
+#define RDMA_BAR0_MSIX_SIZE (16 * 1024)
+#define RDMA_BAR1_REGS_SIZE 256
+#define RDMA_BAR2_UAR_SIZE (0x1000 * MAX_UCS) /* each uc gets page */
+
+/* MSIX */
+#define RDMA_MAX_INTRS 3
+#define RDMA_MSIX_TABLE 0x0000
+#define RDMA_MSIX_PBA 0x2000
+
+/* Interrupts Vectors */
+#define INTR_VEC_CMD_RING 0
+#define INTR_VEC_CMD_ASYNC_EVENTS 1
+#define INTR_VEC_CMD_COMPLETION_Q 2
+
+/* HW attributes */
+#define PVRDMA_HW_NAME "pvrdma"
+#define PVRDMA_HW_VERSION 17
+#define PVRDMA_FW_VERSION 14
+
+typedef struct DSRInfo {
+    dma_addr_t dma;
+    struct pvrdma_device_shared_region *dsr;
+
+    union pvrdma_cmd_req *req;
+    union pvrdma_cmd_resp *rsp;
+
+    struct pvrdma_ring *async_ring_state;
+    PvrdmaRing async;
+
+    struct pvrdma_ring *cq_ring_state;
+    PvrdmaRing cq;
+} DSRInfo;
+
+typedef struct PVRDMADev {
+    PCIDevice parent_obj;
+    MemoryRegion msix;
+    MemoryRegion regs;
+    __u32 regs_data[RDMA_BAR1_REGS_SIZE];
+    MemoryRegion uar;
+    __u32 uar_data[RDMA_BAR2_UAR_SIZE];
+    DSRInfo dsr_info;
+    int interrupt_mask;
+    struct ibv_device_attr dev_attr;
+    u64 node_guid;
+    char *backend_device_name;
+    u8 backend_gid_idx;
+    u8 backend_port_num;
+    RdmaBackendDev backend_dev;
+    RdmaDeviceResources rdma_dev_res;
+} PVRDMADev;
+#define PVRDMA_DEV(dev) OBJECT_CHECK(PVRDMADev, (dev), PVRDMA_HW_NAME)
+
+static inline int get_reg_val(PVRDMADev *dev, hwaddr addr, __u32 *val)
+{
+    int idx = addr >> 2;
+
+    /* regs_data has RDMA_BAR1_REGS_SIZE entries, so idx == size is OOB */
+    if (idx >= RDMA_BAR1_REGS_SIZE) {
+        return -EINVAL;
+    }
+
+    *val = dev->regs_data[idx];
+
+    return 0;
+}
+
+static inline int set_reg_val(PVRDMADev *dev, hwaddr addr, __u32 val)
+{
+    int idx = addr >> 2;
+
+    if (idx >= RDMA_BAR1_REGS_SIZE) {
+        return -EINVAL;
+    }
+
+    dev->regs_data[idx] = val;
+
+    return 0;
+}
+
+static inline void post_interrupt(PVRDMADev *dev, unsigned vector)
+{
+    PCIDevice *pci_dev = PCI_DEVICE(dev);
+
+    if (likely(!dev->interrupt_mask)) {
+        msix_notify(pci_dev, vector);
+    }
+}
+
+int execute_command(PVRDMADev *dev);
+
+#endif
diff --git a/hw/rdma/vmw/pvrdma_cmd.c b/hw/rdma/vmw/pvrdma_cmd.c
new file mode 100644
index 0000000000..0cf92a8010
--- /dev/null
+++ b/hw/rdma/vmw/pvrdma_cmd.c
@@ -0,0 +1,679 @@
+#include
+#include
+#include
+#include "hw/hw.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/pci_ids.h"
+
+#include "../rdma_backend.h"
+#include "../rdma_rm.h"
+#include "../rdma_utils.h"
+
+#include "pvrdma.h"
+#include "vmw_pvrdma-abi.h"
+
+static void *pvrdma_map_to_pdir(PCIDevice *pdev, uint64_t pdir_dma,
+                                uint32_t nchunks, size_t length)
+{
+    uint64_t *dir = NULL, *tbl = NULL;
+    int tbl_idx, dir_idx, addr_idx;
+    void *host_virt = NULL, *curr_page;
+
+    if (!nchunks) {
+        pr_dbg("nchunks=0\n");
+        goto out;
+    }
+
+    dir = rdma_pci_dma_map(pdev, pdir_dma, TARGET_PAGE_SIZE);
+    if (!dir) {
+        error_report("PVRDMA: Failed to map to page directory");
+        goto out;
+    }
+
+    tbl = rdma_pci_dma_map(pdev, dir[0], TARGET_PAGE_SIZE);
+    if (!tbl) {
+        error_report("PVRDMA: Failed to map to page table 0");
+        goto out_unmap_dir;
+    }
+
+    curr_page = rdma_pci_dma_map(pdev, (dma_addr_t)tbl[0], TARGET_PAGE_SIZE);
+    if (!curr_page) {
+        error_report("PVRDMA: Failed to map the first page");
+        goto out_unmap_tbl;
+    }
+
+    host_virt = mremap(curr_page, 0, length, MREMAP_MAYMOVE);
+    if (host_virt == MAP_FAILED) {
+        host_virt = NULL;
+        error_report("PVRDMA: Failed to remap memory for host_virt");
+        goto out_unmap_tbl;
+    }
+
+    rdma_pci_dma_unmap(pdev, curr_page, TARGET_PAGE_SIZE);
+
+    pr_dbg("host_virt=%p\n", host_virt);
+
+    dir_idx = 0;
+    tbl_idx = 1;
+    addr_idx = 1;
+    while (addr_idx < nchunks) {
+        if ((tbl_idx == (TARGET_PAGE_SIZE / sizeof(uint64_t)))) {
+            tbl_idx = 0;
+            dir_idx++;
+            pr_dbg("Mapping to table %d\n", dir_idx);
+            rdma_pci_dma_unmap(pdev, tbl, TARGET_PAGE_SIZE);
+            tbl = rdma_pci_dma_map(pdev, dir[dir_idx], TARGET_PAGE_SIZE);
+            if (!tbl) {
+                error_report("PVRDMA: Failed to map to page table %d", dir_idx);
+                goto out_unmap_host_virt;
+            }
+        }
+
+        pr_dbg("guest_dma[%d]=0x%lx\n", addr_idx, tbl[tbl_idx]);
+
+        curr_page = rdma_pci_dma_map(pdev, (dma_addr_t)tbl[tbl_idx],
+                                     TARGET_PAGE_SIZE);
+        if (!curr_page) {
+            error_report("PVRDMA: Failed to map to page %d, dir %d", tbl_idx,
+                         dir_idx);
+            goto out_unmap_host_virt;
+        }
+
+        mremap(curr_page, 0, TARGET_PAGE_SIZE, MREMAP_MAYMOVE | MREMAP_FIXED,
+               host_virt + TARGET_PAGE_SIZE * addr_idx);
+
+        rdma_pci_dma_unmap(pdev, curr_page, TARGET_PAGE_SIZE);
+
+        addr_idx++;
+
+        tbl_idx++;
+    }
+
+    goto out_unmap_tbl;
+
+out_unmap_host_virt:
+    munmap(host_virt, length);
+    host_virt = NULL;
+
+out_unmap_tbl:
+    rdma_pci_dma_unmap(pdev, tbl, TARGET_PAGE_SIZE);
+
+out_unmap_dir:
+    rdma_pci_dma_unmap(pdev, dir, TARGET_PAGE_SIZE);
+
+out:
+    return host_virt;
+}
+
+static int query_port(PVRDMADev *dev, union pvrdma_cmd_req *req,
+                      union pvrdma_cmd_resp *rsp)
+{
+    struct pvrdma_cmd_query_port *cmd = &req->query_port;
+    struct pvrdma_cmd_query_port_resp *resp = &rsp->query_port_resp;
+    struct pvrdma_port_attr attrs = {0};
+
+    pr_dbg("port=%d\n", cmd->port_num);
+
+    if (rdma_backend_query_port(&dev->backend_dev,
+                                (struct ibv_port_attr *)&attrs)) {
+        return -ENOMEM;
+    }
+
+    memset(resp, 0, sizeof(*resp));
+    resp->hdr.response = cmd->hdr.response;
+    resp->hdr.ack = PVRDMA_CMD_QUERY_PORT_RESP;
+    resp->hdr.err = 0;
+
+    resp->attrs.state = attrs.state;
+    resp->attrs.max_mtu = attrs.max_mtu;
+    resp->attrs.active_mtu = attrs.active_mtu;
+    resp->attrs.phys_state = attrs.phys_state;
+    resp->attrs.gid_tbl_len = MIN(MAX_PORT_GIDS, attrs.gid_tbl_len);
+    resp->attrs.port_cap_flags = 0;
+    resp->attrs.max_msg_sz = 1024;
+    resp->attrs.bad_pkey_cntr = 0;
+    resp->attrs.qkey_viol_cntr =
0; + resp->attrs.pkey_tbl_len =3D MIN(MAX_PORT_PKEYS, attrs.pkey_tbl_len); + resp->attrs.lid =3D 0; + resp->attrs.sm_lid =3D 0; + resp->attrs.lmc =3D 0; + resp->attrs.max_vl_num =3D 0; + resp->attrs.sm_sl =3D 0; + resp->attrs.subnet_timeout =3D 0; + resp->attrs.init_type_reply =3D 0; + resp->attrs.active_width =3D 1; + resp->attrs.active_speed =3D 1; + + return 0; +} + +static int query_pkey(PVRDMADev *dev, union pvrdma_cmd_req *req, + union pvrdma_cmd_resp *rsp) +{ + struct pvrdma_cmd_query_pkey *cmd =3D &req->query_pkey; + struct pvrdma_cmd_query_pkey_resp *resp =3D &rsp->query_pkey_resp; + + pr_dbg("port=3D%d\n", cmd->port_num); + pr_dbg("index=3D%d\n", cmd->index); + + memset(resp, 0, sizeof(*resp)); + resp->hdr.response =3D cmd->hdr.response; + resp->hdr.ack =3D PVRDMA_CMD_QUERY_PKEY_RESP; + resp->hdr.err =3D 0; + + resp->pkey =3D 0x7FFF; + pr_dbg("pkey=3D0x%x\n", resp->pkey); + + return 0; +} + +static int create_pd(PVRDMADev *dev, union pvrdma_cmd_req *req, + union pvrdma_cmd_resp *rsp) +{ + struct pvrdma_cmd_create_pd *cmd =3D &req->create_pd; + struct pvrdma_cmd_create_pd_resp *resp =3D &rsp->create_pd_resp; + + pr_dbg("context=3D0x%x\n", cmd->ctx_handle ? 
cmd->ctx_handle : 0); + + memset(resp, 0, sizeof(*resp)); + resp->hdr.response =3D cmd->hdr.response; + resp->hdr.ack =3D PVRDMA_CMD_CREATE_PD_RESP; + resp->hdr.err =3D rdma_rm_alloc_pd(&dev->rdma_dev_res, &dev->backend_d= ev, + &resp->pd_handle, cmd->ctx_handle); + + pr_dbg("ret=3D%d\n", resp->hdr.err); + return resp->hdr.err; +} + +static int destroy_pd(PVRDMADev *dev, union pvrdma_cmd_req *req, + union pvrdma_cmd_resp *rsp) +{ + struct pvrdma_cmd_destroy_pd *cmd =3D &req->destroy_pd; + + pr_dbg("pd_handle=3D%d\n", cmd->pd_handle); + + rdma_rm_dealloc_pd(&dev->rdma_dev_res, cmd->pd_handle); + + return 0; +} + +static int create_mr(PVRDMADev *dev, union pvrdma_cmd_req *req, + union pvrdma_cmd_resp *rsp) +{ + struct pvrdma_cmd_create_mr *cmd =3D &req->create_mr; + struct pvrdma_cmd_create_mr_resp *resp =3D &rsp->create_mr_resp; + PCIDevice *pci_dev =3D PCI_DEVICE(dev); + void *host_virt =3D 0; + + memset(resp, 0, sizeof(*resp)); + resp->hdr.response =3D cmd->hdr.response; + resp->hdr.ack =3D PVRDMA_CMD_CREATE_MR_RESP; + + pr_dbg("pd_handle=3D%d\n", cmd->pd_handle); + pr_dbg("access_flags=3D0x%x\n", cmd->access_flags); + pr_dbg("flags=3D0x%x\n", cmd->flags); + + if (!(cmd->flags & PVRDMA_MR_FLAG_DMA)) { + host_virt =3D pvrdma_map_to_pdir(pci_dev, cmd->pdir_dma, cmd->nchu= nks, + cmd->length); + if (!host_virt) { + pr_dbg("Failed to map to pdir\n"); + resp->hdr.err =3D -EINVAL; + goto out; + } + } + + resp->hdr.err =3D rdma_rm_alloc_mr(&dev->rdma_dev_res, cmd->pd_handle, + cmd->start, cmd->length, host_virt, + cmd->access_flags, &resp->mr_handle, + &resp->lkey, &resp->rkey); + if (!resp->hdr.err) { + munmap(host_virt, cmd->length); + } + +out: + pr_dbg("ret=3D%d\n", resp->hdr.err); + return resp->hdr.err; +} + +static int destroy_mr(PVRDMADev *dev, union pvrdma_cmd_req *req, + union pvrdma_cmd_resp *rsp) +{ + struct pvrdma_cmd_destroy_mr *cmd =3D &req->destroy_mr; + + pr_dbg("mr_handle=3D%d\n", cmd->mr_handle); + + rdma_rm_dealloc_mr(&dev->rdma_dev_res, 
cmd->mr_handle); + + return 0; +} + +static int create_cq_ring(PCIDevice *pci_dev , PvrdmaRing **ring, u64 pdir= _dma, + u32 nchunks, u32 cqe) +{ + u64 *dir, *tbl =3D 0; + PvrdmaRing *r; + int rc =3D -EINVAL; + char ring_name[MAX_RING_NAME_SZ]; + + pr_dbg("pdir_dma=3D0x%llx\n", (long long unsigned int)pdir_dma); + dir =3D rdma_pci_dma_map(pci_dev, pdir_dma, TARGET_PAGE_SIZE); + if (!dir) { + pr_dbg("Failed to map to CQ page directory\n"); + goto out; + } + + tbl =3D rdma_pci_dma_map(pci_dev, dir[0], TARGET_PAGE_SIZE); + if (!tbl) { + pr_dbg("Failed to map to CQ page table\n"); + goto out; + } + + r =3D malloc(sizeof(*r)); + if (!r) { + pr_dbg("Fail allocate memory for CQ ring\n"); + rc =3D -ENOMEM; + goto out; + } + *ring =3D r; + + r->ring_state =3D (struct pvrdma_ring *) + rdma_pci_dma_map(pci_dev, tbl[0], TARGET_PAGE_SIZE); + + if (!r->ring_state) { + pr_dbg("Failed to map to CQ ring state\n"); + goto out_free_ring; + } + + sprintf(ring_name, "cq_ring_%lx", pdir_dma); + rc =3D pvrdma_ring_init(r, ring_name, pci_dev, &r->ring_state[1], + cqe, sizeof(struct pvrdma_cqe), + /* first page is ring state */ + (dma_addr_t *)&tbl[1], nchunks - 1); + if (rc) { + goto out_unmap_ring_state; + } + + goto out; + +out_unmap_ring_state: + /* ring_state was in slot 1, not 0 so need to jump back */ + rdma_pci_dma_unmap(pci_dev, --r->ring_state, TARGET_PAGE_SIZE); + +out_free_ring: + free(r); + r =3D NULL; + +out: + rdma_pci_dma_unmap(pci_dev, tbl, TARGET_PAGE_SIZE); + rdma_pci_dma_unmap(pci_dev, dir, TARGET_PAGE_SIZE); + + return rc; +} + +static int create_cq(PVRDMADev *dev, union pvrdma_cmd_req *req, + union pvrdma_cmd_resp *rsp) +{ + struct pvrdma_cmd_create_cq *cmd =3D &req->create_cq; + struct pvrdma_cmd_create_cq_resp *resp =3D &rsp->create_cq_resp; + PvrdmaRing *ring =3D NULL; + + memset(resp, 0, sizeof(*resp)); + resp->hdr.response =3D cmd->hdr.response; + resp->hdr.ack =3D PVRDMA_CMD_CREATE_CQ_RESP; + + resp->cqe =3D cmd->cqe; + + resp->hdr.err =3D 
create_cq_ring(PCI_DEVICE(dev), &ring, cmd->pdir_dma, + cmd->nchunks, cmd->cqe); + if (resp->hdr.err) { + goto out; + } + + pr_dbg("ring=3D%p\n", ring); + + resp->hdr.err =3D rdma_rm_alloc_cq(&dev->rdma_dev_res, &dev->backend_d= ev, + cmd->cqe, &resp->cq_handle, ring); + resp->cqe =3D cmd->cqe; + +out: + pr_dbg("ret=3D%d\n", resp->hdr.err); + return resp->hdr.err; +} + +static int destroy_cq(PVRDMADev *dev, union pvrdma_cmd_req *req, + union pvrdma_cmd_resp *rsp) +{ + struct pvrdma_cmd_destroy_cq *cmd =3D &req->destroy_cq; + RdmaRmCQ *cq; + PvrdmaRing *ring; + + pr_dbg("cq_handle=3D%d\n", cmd->cq_handle); + + cq =3D rdma_rm_get_cq(&dev->rdma_dev_res, cmd->cq_handle); + if (!cq) { + pr_dbg("Invalid CQ handle\n"); + return -EINVAL; + } + + ring =3D (PvrdmaRing *)cq->opaque; + pvrdma_ring_free(ring); + /* ring_state was in slot 1, not 0 so need to jump back */ + rdma_pci_dma_unmap(PCI_DEVICE(dev), --ring->ring_state, TARGET_PAGE_SI= ZE); + free(ring); + + rdma_rm_dealloc_cq(&dev->rdma_dev_res, cmd->cq_handle); + + return 0; +} + +static int create_qp_rings(PCIDevice *pci_dev, u64 pdir_dma, PvrdmaRing **= rings, + u32 scqe, u32 smax_sge, u32 spages, u32 rcqe, + u32 rmax_sge, u32 rpages) +{ + u64 *dir, *tbl =3D 0; + PvrdmaRing *sr, *rr; + int rc =3D -EINVAL;; + char ring_name[MAX_RING_NAME_SZ]; + uint32_t wqe_sz; + + pr_dbg("pdir_dma=3D0x%llx\n", (long long unsigned int)pdir_dma); + dir =3D rdma_pci_dma_map(pci_dev, pdir_dma, TARGET_PAGE_SIZE); + if (!dir) { + pr_dbg("Failed to map to CQ page directory\n"); + goto out; + } + + tbl =3D rdma_pci_dma_map(pci_dev, dir[0], TARGET_PAGE_SIZE); + if (!tbl) { + pr_dbg("Failed to map to CQ page table\n"); + goto out; + } + + sr =3D malloc(2 * sizeof(*rr)); + if (!sr) { + pr_dbg("Fail allocate memory for QP send and recv rings\n"); + rc =3D -ENOMEM; + goto out; + } + rr =3D &sr[1]; + pr_dbg("sring=3D%p\n", sr); + pr_dbg("rring=3D%p\n", rr); + + *rings =3D sr; + + pr_dbg("scqe=3D%d\n", scqe); + pr_dbg("smax_sge=3D%d\n", smax_sge); 
+    pr_dbg("spages=%d\n", spages);
+    pr_dbg("rcqe=%d\n", rcqe);
+    pr_dbg("rmax_sge=%d\n", rmax_sge);
+    pr_dbg("rpages=%d\n", rpages);
+
+    /* Create send ring */
+    sr->ring_state = (struct pvrdma_ring *)
+        rdma_pci_dma_map(pci_dev, tbl[0], TARGET_PAGE_SIZE);
+    if (!sr->ring_state) {
+        pr_dbg("Failed to map to CQ ring state\n");
+        goto out_free_sr_mem;
+    }
+
+    wqe_sz = pow2roundup32(sizeof(struct pvrdma_sq_wqe_hdr) +
+                           sizeof(struct pvrdma_sge) * smax_sge - 1);
+
+    sprintf(ring_name, "qp_sring_%lx", pdir_dma);
+    rc = pvrdma_ring_init(sr, ring_name, pci_dev, sr->ring_state,
+                          scqe, wqe_sz, (dma_addr_t *)&tbl[1], spages);
+    if (rc) {
+        goto out_unmap_ring_state;
+    }
+
+    /* Create recv ring */
+    rr->ring_state = &sr->ring_state[1];
+    wqe_sz = pow2roundup32(sizeof(struct pvrdma_rq_wqe_hdr) +
+                           sizeof(struct pvrdma_sge) * rmax_sge - 1);
+    sprintf(ring_name, "qp_rring_%lx", pdir_dma);
+    rc = pvrdma_ring_init(rr, ring_name, pci_dev, rr->ring_state,
+                          rcqe, wqe_sz, (dma_addr_t *)&tbl[1 + spages], rpages);
+    if (rc) {
+        goto out_free_sr;
+    }
+
+    goto out;
+
+out_free_sr:
+    pvrdma_ring_free(sr);
+
+out_unmap_ring_state:
+    rdma_pci_dma_unmap(pci_dev, sr->ring_state, TARGET_PAGE_SIZE);
+
+out_free_sr_mem:
+    free(sr);
+    sr = NULL;
+
+out:
+    rdma_pci_dma_unmap(pci_dev, tbl, TARGET_PAGE_SIZE);
+    rdma_pci_dma_unmap(pci_dev, dir, TARGET_PAGE_SIZE);
+
+    return rc;
+}
+
+static int create_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
+                     union pvrdma_cmd_resp *rsp)
+{
+    struct pvrdma_cmd_create_qp *cmd = &req->create_qp;
+    struct pvrdma_cmd_create_qp_resp *resp = &rsp->create_qp_resp;
+    PvrdmaRing *rings = NULL;
+
+    memset(resp, 0, sizeof(*resp));
+    resp->hdr.response = cmd->hdr.response;
+    resp->hdr.ack = PVRDMA_CMD_CREATE_QP_RESP;
+
+    pr_dbg("total_chunks=%d\n", cmd->total_chunks);
+    pr_dbg("send_chunks=%d\n", cmd->send_chunks);
+
+    resp->hdr.err = create_qp_rings(PCI_DEVICE(dev), cmd->pdir_dma, &rings,
+                                    cmd->max_send_wr, cmd->max_send_sge,
+                                    cmd->send_chunks, cmd->max_recv_wr,
+                                    cmd->max_recv_sge, cmd->total_chunks -
+                                    cmd->send_chunks - 1);
+    if (resp->hdr.err) {
+        goto out;
+    }
+
+    pr_dbg("rings=%p\n", rings);
+
+    resp->hdr.err = rdma_rm_alloc_qp(&dev->rdma_dev_res, cmd->pd_handle,
+                                     cmd->qp_type, cmd->max_send_wr,
+                                     cmd->max_send_sge, cmd->send_cq_handle,
+                                     cmd->max_recv_wr, cmd->max_recv_sge,
+                                     cmd->recv_cq_handle, rings, &resp->qpn);
+
+    resp->max_send_wr = cmd->max_send_wr;
+    resp->max_recv_wr = cmd->max_recv_wr;
+    resp->max_send_sge = cmd->max_send_sge;
+    resp->max_recv_sge = cmd->max_recv_sge;
+    resp->max_inline_data = cmd->max_inline_data;
+
+out:
+    pr_dbg("ret=%d\n", resp->hdr.err);
+    return resp->hdr.err;
+}
+
+static int modify_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
+                     union pvrdma_cmd_resp *rsp)
+{
+    struct pvrdma_cmd_modify_qp *cmd = &req->modify_qp;
+
+    pr_dbg("qp_handle=%d\n", cmd->qp_handle);
+
+    memset(rsp, 0, sizeof(*rsp));
+    rsp->hdr.response = cmd->hdr.response;
+    rsp->hdr.ack = PVRDMA_CMD_MODIFY_QP_RESP;
+
+    rsp->hdr.err = rdma_rm_modify_qp(&dev->rdma_dev_res, &dev->backend_dev,
+                                     cmd->qp_handle, cmd->attr_mask,
+                                     (union ibv_gid *)&cmd->attrs.ah_attr.grh.dgid,
+                                     cmd->attrs.dest_qp_num, cmd->attrs.qp_state,
+                                     cmd->attrs.qkey, cmd->attrs.rq_psn,
+                                     cmd->attrs.sq_psn);
+
+    pr_dbg("ret=%d\n", rsp->hdr.err);
+    return rsp->hdr.err;
+}
+
+static int destroy_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
+                      union pvrdma_cmd_resp *rsp)
+{
+    struct pvrdma_cmd_destroy_qp *cmd = &req->destroy_qp;
+    RdmaRmQP *qp;
+    PvrdmaRing *ring;
+
+    qp = rdma_rm_get_qp(&dev->rdma_dev_res, cmd->qp_handle);
+    if (!qp) {
+        pr_dbg("Invalid QP handle\n");
+        return -EINVAL;
+    }
+
+    rdma_rm_dealloc_qp(&dev->rdma_dev_res, cmd->qp_handle);
+
+    ring = (PvrdmaRing *)qp->opaque;
+    pr_dbg("sring=%p\n", &ring[0]);
+    pvrdma_ring_free(&ring[0]);
+    pr_dbg("rring=%p\n", &ring[1]);
+    pvrdma_ring_free(&ring[1]);
+
+    rdma_pci_dma_unmap(PCI_DEVICE(dev), ring->ring_state, TARGET_PAGE_SIZE);
+    free(ring);
+
+    return 0;
+}
+
+static int create_bind(PVRDMADev *dev, union pvrdma_cmd_req *req,
+                       union pvrdma_cmd_resp *rsp)
+{
+    struct pvrdma_cmd_create_bind *cmd = &req->create_bind;
+#ifdef PVRDMA_DEBUG
+    __be64 *subnet = (__be64 *)&cmd->new_gid[0];
+    __be64 *if_id = (__be64 *)&cmd->new_gid[8];
+#endif
+
+    pr_dbg("index=%d\n", cmd->index);
+
+    if (cmd->index >= MAX_PORT_GIDS) {
+        return -EINVAL;
+    }
+
+    pr_dbg("gid[%d]=0x%llx,0x%llx\n", cmd->index,
+           (long long unsigned int)be64_to_cpu(*subnet),
+           (long long unsigned int)be64_to_cpu(*if_id));
+
+    /* Driver forces to one port only */
+    memcpy(dev->rdma_dev_res.ports[0].gid_tbl[cmd->index].raw, &cmd->new_gid,
+           sizeof(cmd->new_gid));
+
+    /*
+     * TODO: Since the driver stores node_guid at the load_dsr phase this
+     * assignment is not relevant; we still need to figure out a way to
+     * retrieve the MAC of our netdev.
+     */
+    dev->node_guid = dev->rdma_dev_res.ports[0].gid_tbl[0].global.interface_id;
+    pr_dbg("dev->node_guid=0x%llx\n",
+           (long long unsigned int)be64_to_cpu(dev->node_guid));
+
+    return 0;
+}
+
+static int destroy_bind(PVRDMADev *dev, union pvrdma_cmd_req *req,
+                        union pvrdma_cmd_resp *rsp)
+{
+    struct pvrdma_cmd_destroy_bind *cmd = &req->destroy_bind;
+
+    pr_dbg("clear index %d\n", cmd->index);
+
+    memset(dev->rdma_dev_res.ports[0].gid_tbl[cmd->index].raw, 0,
+           sizeof(dev->rdma_dev_res.ports[0].gid_tbl[cmd->index].raw));
+
+    return 0;
+}
+
+static int create_uc(PVRDMADev *dev, union pvrdma_cmd_req *req,
+                     union pvrdma_cmd_resp *rsp)
+{
+    struct pvrdma_cmd_create_uc *cmd = &req->create_uc;
+    struct pvrdma_cmd_create_uc_resp *resp = &rsp->create_uc_resp;
+
+    pr_dbg("pfn=%d\n", cmd->pfn);
+
+    memset(resp, 0, sizeof(*resp));
+    resp->hdr.response = cmd->hdr.response;
+    resp->hdr.ack = PVRDMA_CMD_CREATE_UC_RESP;
+    resp->hdr.err = rdma_rm_alloc_uc(&dev->rdma_dev_res, cmd->pfn,
+                                     &resp->ctx_handle);
+
+    pr_dbg("ret=%d\n", resp->hdr.err);
+
+    return 0;
+}
+
+static int destroy_uc(PVRDMADev *dev, union pvrdma_cmd_req *req,
+                      union pvrdma_cmd_resp *rsp)
+{
+    struct pvrdma_cmd_destroy_uc *cmd = &req->destroy_uc;
+
+    pr_dbg("ctx_handle=%d\n", cmd->ctx_handle);
+
+    rdma_rm_dealloc_uc(&dev->rdma_dev_res, cmd->ctx_handle);
+
+    return 0;
+}
+
+struct cmd_handler {
+    __u32 cmd;
+    int (*exec)(PVRDMADev *dev, union pvrdma_cmd_req *req,
+                union pvrdma_cmd_resp *rsp);
+};
+
+static struct cmd_handler cmd_handlers[] = {
+    {PVRDMA_CMD_QUERY_PORT, query_port},
+    {PVRDMA_CMD_QUERY_PKEY, query_pkey},
+    {PVRDMA_CMD_CREATE_PD, create_pd},
+    {PVRDMA_CMD_DESTROY_PD, destroy_pd},
+    {PVRDMA_CMD_CREATE_MR, create_mr},
+    {PVRDMA_CMD_DESTROY_MR, destroy_mr},
+    {PVRDMA_CMD_CREATE_CQ, create_cq},
+    {PVRDMA_CMD_RESIZE_CQ, NULL},
+    {PVRDMA_CMD_DESTROY_CQ, destroy_cq},
+    {PVRDMA_CMD_CREATE_QP, create_qp},
+    {PVRDMA_CMD_MODIFY_QP, modify_qp},
+    {PVRDMA_CMD_QUERY_QP, NULL},
+    {PVRDMA_CMD_DESTROY_QP, destroy_qp},
+    {PVRDMA_CMD_CREATE_UC, create_uc},
+    {PVRDMA_CMD_DESTROY_UC, destroy_uc},
+    {PVRDMA_CMD_CREATE_BIND, create_bind},
+    {PVRDMA_CMD_DESTROY_BIND, destroy_bind},
+};
+
+int execute_command(PVRDMADev *dev)
+{
+    int err = 0xFFFF;
+    DSRInfo *dsr_info;
+
+    dsr_info = &dev->dsr_info;
+
+    pr_dbg("cmd=%d\n", dsr_info->req->hdr.cmd);
+    if (dsr_info->req->hdr.cmd >= sizeof(cmd_handlers) /
+                                  sizeof(struct cmd_handler)) {
+        pr_dbg("Unsupported command\n");
+        goto out;
+    }
+
+    if (!cmd_handlers[dsr_info->req->hdr.cmd].exec) {
+        pr_dbg("Unsupported command (not implemented yet)\n");
+        goto out;
+    }
+
+    err = cmd_handlers[dsr_info->req->hdr.cmd].exec(dev, dsr_info->req,
+                                                    dsr_info->rsp);
+out:
+    set_reg_val(dev, PVRDMA_REG_ERR, err);
+    post_interrupt(dev, INTR_VEC_CMD_RING);
+
+    return (err == 0) ? 0 : -EINVAL;
+}
diff --git a/hw/rdma/vmw/pvrdma_dev_api.h b/hw/rdma/vmw/pvrdma_dev_api.h
new file mode 100644
index 0000000000..bf1986a976
--- /dev/null
+++ b/hw/rdma/vmw/pvrdma_dev_api.h
@@ -0,0 +1,602 @@
+/*
+ * QEMU VMWARE paravirtual RDMA device definitions
+ *
+ * Copyright (C) 2018 Oracle
+ * Copyright (C) 2018 Red Hat Inc
+ *
+ * Authors:
+ *     Yuval Shaia
+ *     Marcel Apfelbaum
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef PVRDMA_DEV_API_H
+#define PVRDMA_DEV_API_H
+
+/*
+ * Following is an interface definition for PVRDMA device as provided by
+ * VMWARE.
+ * See original copyright from Linux kernel v4.14.5 header file
+ * drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h
+ */
+
+/*
+ * Copyright (c) 2012-2016 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of EITHER the GNU General Public License
+ * version 2 as published by the Free Software Foundation or the BSD
+ * 2-Clause License. This program is distributed in the hope that it
+ * will be useful, but WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED
+ * WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
+ * See the GNU General Public License version 2 for more details at
+ * http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program available in the file COPYING in the main
+ * directory of this source tree.
+ *
+ * The BSD 2-Clause License
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ *   copyright notice, this list of conditions and the following
+ *   disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ *   copyright notice, this list of conditions and the following
+ *   disclaimer in the documentation and/or other materials
+ *   provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include
+
+#include "pvrdma_ib_verbs.h"
+
+#define PVRDMA_VERSION 17
+#define PVRDMA_BOARD_ID 1
+#define PVRDMA_REV_ID 1
+
+/*
+ * Masks and accessors for page directory, which is a two-level lookup:
+ * page directory -> page table -> page. Only one directory for now, but we
+ * could expand that easily. 9 bits for tables, 9 bits for pages, gives one
+ * gigabyte for memory regions and so forth.
+ */
+
+#define PVRDMA_PDIR_SHIFT 18
+#define PVRDMA_PTABLE_SHIFT 9
+#define PVRDMA_PAGE_DIR_DIR(x) (((x) >> PVRDMA_PDIR_SHIFT) & 0x1)
+#define PVRDMA_PAGE_DIR_TABLE(x) (((x) >> PVRDMA_PTABLE_SHIFT) & 0x1ff)
+#define PVRDMA_PAGE_DIR_PAGE(x) ((x) & 0x1ff)
+#define PVRDMA_PAGE_DIR_MAX_PAGES (1 * 512 * 512)
+#define PVRDMA_MAX_FAST_REG_PAGES 128
+
+/*
+ * Max MSI-X vectors.
+ */
+
+#define PVRDMA_MAX_INTERRUPTS 3
+
+/* Register offsets within PCI resource on BAR1. */
+#define PVRDMA_REG_VERSION 0x00 /* R: Version of device. */
+#define PVRDMA_REG_DSRLOW 0x04 /* W: Device shared region low PA. */
+#define PVRDMA_REG_DSRHIGH 0x08 /* W: Device shared region high PA. */
+#define PVRDMA_REG_CTL 0x0c /* W: PVRDMA_DEVICE_CTL */
+#define PVRDMA_REG_REQUEST 0x10 /* W: Indicate device request. */
+#define PVRDMA_REG_ERR 0x14 /* R: Device error. */
+#define PVRDMA_REG_ICR 0x18 /* R: Interrupt cause. */
+#define PVRDMA_REG_IMR 0x1c /* R/W: Interrupt mask. */
+#define PVRDMA_REG_MACL 0x20 /* R/W: MAC address low. */
+#define PVRDMA_REG_MACH 0x24 /* R/W: MAC address high. */
+
+/* Object flags. */
+#define PVRDMA_CQ_FLAG_ARMED_SOL BIT(0) /* Armed for solicited-only. */
+#define PVRDMA_CQ_FLAG_ARMED BIT(1) /* Armed. */
+#define PVRDMA_MR_FLAG_DMA BIT(0) /* DMA region. */
+#define PVRDMA_MR_FLAG_FRMR BIT(1) /* Fast reg memory region. */
+
+/*
+ * Atomic operation capability (masked versions are extended atomic
+ * operations).
+ */
+
+#define PVRDMA_ATOMIC_OP_COMP_SWAP BIT(0) /* Compare and swap. */
+#define PVRDMA_ATOMIC_OP_FETCH_ADD BIT(1) /* Fetch and add. */
+#define PVRDMA_ATOMIC_OP_MASK_COMP_SWAP BIT(2) /* Masked compare and swap. */
+#define PVRDMA_ATOMIC_OP_MASK_FETCH_ADD BIT(3) /* Masked fetch and add. */
+
+/*
+ * Base Memory Management Extension flags to support Fast Reg Memory Regions
+ * and Fast Reg Work Requests. Each flag represents a verb operation and we
+ * must support all of them to qualify for the BMME device cap.
+ */
+
+#define PVRDMA_BMME_FLAG_LOCAL_INV BIT(0) /* Local Invalidate. */
+#define PVRDMA_BMME_FLAG_REMOTE_INV BIT(1) /* Remote Invalidate. */
+#define PVRDMA_BMME_FLAG_FAST_REG_WR BIT(2) /* Fast Reg Work Request. */
+
+/*
+ * GID types. The interpretation of the gid_types bit field in the device
+ * capabilities will depend on the device mode. For now, the device only
+ * supports RoCE as mode, so only the different GID types for RoCE are
+ * defined.
+ */
+
+#define PVRDMA_GID_TYPE_FLAG_ROCE_V1 BIT(0)
+#define PVRDMA_GID_TYPE_FLAG_ROCE_V2 BIT(1)
+
+enum pvrdma_pci_resource {
+    PVRDMA_PCI_RESOURCE_MSIX, /* BAR0: MSI-X, MMIO. */
+    PVRDMA_PCI_RESOURCE_REG,  /* BAR1: Registers, MMIO. */
+    PVRDMA_PCI_RESOURCE_UAR,  /* BAR2: UAR pages, MMIO, 64-bit. */
+    PVRDMA_PCI_RESOURCE_LAST, /* Last. */
+};
+
+enum pvrdma_device_ctl {
+    PVRDMA_DEVICE_CTL_ACTIVATE,  /* Activate device. */
+    PVRDMA_DEVICE_CTL_UNQUIESCE, /* Unquiesce device. */
+    PVRDMA_DEVICE_CTL_RESET,     /* Reset device. */
+};
+
+enum pvrdma_intr_vector {
+    PVRDMA_INTR_VECTOR_RESPONSE, /* Command response. */
+    PVRDMA_INTR_VECTOR_ASYNC,    /* Async events. */
+    PVRDMA_INTR_VECTOR_CQ,       /* CQ notification. */
+    /* Additional CQ notification vectors. */
+};
+
+enum pvrdma_intr_cause {
+    PVRDMA_INTR_CAUSE_RESPONSE = (1 << PVRDMA_INTR_VECTOR_RESPONSE),
+    PVRDMA_INTR_CAUSE_ASYNC = (1 << PVRDMA_INTR_VECTOR_ASYNC),
+    PVRDMA_INTR_CAUSE_CQ = (1 << PVRDMA_INTR_VECTOR_CQ),
+};
+
+enum pvrdma_gos_bits {
+    PVRDMA_GOS_BITS_UNK, /* Unknown. */
+    PVRDMA_GOS_BITS_32,  /* 32-bit. */
+    PVRDMA_GOS_BITS_64,  /* 64-bit. */
+};
+
+enum pvrdma_gos_type {
+    PVRDMA_GOS_TYPE_UNK,   /* Unknown. */
+    PVRDMA_GOS_TYPE_LINUX, /* Linux. */
+};
+
+enum pvrdma_device_mode {
+    PVRDMA_DEVICE_MODE_ROCE,  /* RoCE. */
+    PVRDMA_DEVICE_MODE_IWARP, /* iWarp. */
+    PVRDMA_DEVICE_MODE_IB,    /* InfiniBand. */
+};
+
+struct pvrdma_gos_info {
+    u32 gos_bits:2;  /* W: PVRDMA_GOS_BITS_ */
+    u32 gos_type:4;  /* W: PVRDMA_GOS_TYPE_ */
+    u32 gos_ver:16;  /* W: Guest OS version. */
+    u32 gos_misc:10; /* W: Other. */
+    u32 pad;         /* Pad to 8-byte alignment. */
+};
+
+struct pvrdma_device_caps {
+    u64 fw_ver; /* R: Query device. */
+    __be64 node_guid;
+    __be64 sys_image_guid;
+    u64 max_mr_size;
+    u64 page_size_cap;
+    u64 atomic_arg_sizes; /* EX verbs. */
+    u32 ex_comp_mask; /* EX verbs. */
+    u32 device_cap_flags2; /* EX verbs. */
+    u32 max_fa_bit_boundary; /* EX verbs. */
+    u32 log_max_atomic_inline_arg; /* EX verbs. */
+    u32 vendor_id;
+    u32 vendor_part_id;
+    u32 hw_ver;
+    u32 max_qp;
+    u32 max_qp_wr;
+    u32 device_cap_flags;
+    u32 max_sge;
+    u32 max_sge_rd;
+    u32 max_cq;
+    u32 max_cqe;
+    u32 max_mr;
+    u32 max_pd;
+    u32 max_qp_rd_atom;
+    u32 max_ee_rd_atom;
+    u32 max_res_rd_atom;
+    u32 max_qp_init_rd_atom;
+    u32 max_ee_init_rd_atom;
+    u32 max_ee;
+    u32 max_rdd;
+    u32 max_mw;
+    u32 max_raw_ipv6_qp;
+    u32 max_raw_ethy_qp;
+    u32 max_mcast_grp;
+    u32 max_mcast_qp_attach;
+    u32 max_total_mcast_qp_attach;
+    u32 max_ah;
+    u32 max_fmr;
+    u32 max_map_per_fmr;
+    u32 max_srq;
+    u32 max_srq_wr;
+    u32 max_srq_sge;
+    u32 max_uar;
+    u32 gid_tbl_len;
+    u16 max_pkeys;
+    u8 local_ca_ack_delay;
+    u8 phys_port_cnt;
+    u8 mode;       /* PVRDMA_DEVICE_MODE_ */
+    u8 atomic_ops; /* PVRDMA_ATOMIC_OP_* bits */
+    u8 bmme_flags; /* FRWR Mem Mgmt Extensions */
+    u8 gid_types;  /* PVRDMA_GID_TYPE_FLAG_ */
+    u8 reserved[4];
+};
+
+struct pvrdma_ring_page_info {
+    u32 num_pages; /* Num pages incl. header. */
+    u32 reserved;  /* Reserved. */
+    u64 pdir_dma;  /* Page directory PA. */
+};
+
+#pragma pack(push, 1)
+
+struct pvrdma_device_shared_region {
+    u32 driver_version;              /* W: Driver version. */
+    u32 pad;                         /* Pad to 8-byte align. */
+    struct pvrdma_gos_info gos_info; /* W: Guest OS information. */
+    u64 cmd_slot_dma;                /* W: Command slot address. */
+    u64 resp_slot_dma;               /* W: Response slot address. */
+    struct pvrdma_ring_page_info async_ring_pages;
+                                     /* W: Async ring page info. */
+    struct pvrdma_ring_page_info cq_ring_pages;
+                                     /* W: CQ ring page info. */
+    u32 uar_pfn;                     /* W: UAR pageframe. */
+    u32 pad2;                        /* Pad to 8-byte align. */
+    struct pvrdma_device_caps caps;  /* R: Device capabilities. */
+};
+
+#pragma pack(pop)
+
+/* Event types. Currently a 1:1 mapping with enum ib_event. */
+enum pvrdma_eqe_type {
+    PVRDMA_EVENT_CQ_ERR,
+    PVRDMA_EVENT_QP_FATAL,
+    PVRDMA_EVENT_QP_REQ_ERR,
+    PVRDMA_EVENT_QP_ACCESS_ERR,
+    PVRDMA_EVENT_COMM_EST,
+    PVRDMA_EVENT_SQ_DRAINED,
+    PVRDMA_EVENT_PATH_MIG,
+    PVRDMA_EVENT_PATH_MIG_ERR,
+    PVRDMA_EVENT_DEVICE_FATAL,
+    PVRDMA_EVENT_PORT_ACTIVE,
+    PVRDMA_EVENT_PORT_ERR,
+    PVRDMA_EVENT_LID_CHANGE,
+    PVRDMA_EVENT_PKEY_CHANGE,
+    PVRDMA_EVENT_SM_CHANGE,
+    PVRDMA_EVENT_SRQ_ERR,
+    PVRDMA_EVENT_SRQ_LIMIT_REACHED,
+    PVRDMA_EVENT_QP_LAST_WQE_REACHED,
+    PVRDMA_EVENT_CLIENT_REREGISTER,
+    PVRDMA_EVENT_GID_CHANGE,
+};
+
+/* Event queue element. */
+struct pvrdma_eqe {
+    u32 type; /* Event type. */
+    u32 info; /* Handle, other. */
+};
+
+/* CQ notification queue element. */
+struct pvrdma_cqne {
+    u32 info; /* Handle */
+};
+
+enum {
+    PVRDMA_CMD_FIRST,
+    PVRDMA_CMD_QUERY_PORT = PVRDMA_CMD_FIRST,
+    PVRDMA_CMD_QUERY_PKEY,
+    PVRDMA_CMD_CREATE_PD,
+    PVRDMA_CMD_DESTROY_PD,
+    PVRDMA_CMD_CREATE_MR,
+    PVRDMA_CMD_DESTROY_MR,
+    PVRDMA_CMD_CREATE_CQ,
+    PVRDMA_CMD_RESIZE_CQ,
+    PVRDMA_CMD_DESTROY_CQ,
+    PVRDMA_CMD_CREATE_QP,
+    PVRDMA_CMD_MODIFY_QP,
+    PVRDMA_CMD_QUERY_QP,
+    PVRDMA_CMD_DESTROY_QP,
+    PVRDMA_CMD_CREATE_UC,
+    PVRDMA_CMD_DESTROY_UC,
+    PVRDMA_CMD_CREATE_BIND,
+    PVRDMA_CMD_DESTROY_BIND,
+    PVRDMA_CMD_MAX,
+};
+
+enum {
+    PVRDMA_CMD_FIRST_RESP = (1 << 31),
+    PVRDMA_CMD_QUERY_PORT_RESP = PVRDMA_CMD_FIRST_RESP,
+    PVRDMA_CMD_QUERY_PKEY_RESP,
+    PVRDMA_CMD_CREATE_PD_RESP,
+    PVRDMA_CMD_DESTROY_PD_RESP_NOOP,
+    PVRDMA_CMD_CREATE_MR_RESP,
+    PVRDMA_CMD_DESTROY_MR_RESP_NOOP,
+    PVRDMA_CMD_CREATE_CQ_RESP,
+    PVRDMA_CMD_RESIZE_CQ_RESP,
+    PVRDMA_CMD_DESTROY_CQ_RESP_NOOP,
+    PVRDMA_CMD_CREATE_QP_RESP,
+    PVRDMA_CMD_MODIFY_QP_RESP,
+    PVRDMA_CMD_QUERY_QP_RESP,
+    PVRDMA_CMD_DESTROY_QP_RESP,
+    PVRDMA_CMD_CREATE_UC_RESP,
+    PVRDMA_CMD_DESTROY_UC_RESP_NOOP,
+    PVRDMA_CMD_CREATE_BIND_RESP_NOOP,
+    PVRDMA_CMD_DESTROY_BIND_RESP_NOOP,
+    PVRDMA_CMD_MAX_RESP,
+};
+
+struct pvrdma_cmd_hdr {
+    u64 response; /* Key for response lookup. */
+    u32 cmd;      /* PVRDMA_CMD_ */
+    u32 reserved; /* Reserved. */
+};
+
+struct pvrdma_cmd_resp_hdr {
+    u64 response;   /* From cmd hdr. */
+    u32 ack;        /* PVRDMA_CMD_XXX_RESP */
+    u8 err;         /* Error. */
+    u8 reserved[3]; /* Reserved. */
+};
+
+struct pvrdma_cmd_query_port {
+    struct pvrdma_cmd_hdr hdr;
+    u8 port_num;
+    u8 reserved[7];
+};
+
+struct pvrdma_cmd_query_port_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    struct pvrdma_port_attr attrs;
+};
+
+struct pvrdma_cmd_query_pkey {
+    struct pvrdma_cmd_hdr hdr;
+    u8 port_num;
+    u8 index;
+    u8 reserved[6];
+};
+
+struct pvrdma_cmd_query_pkey_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    u16 pkey;
+    u8 reserved[6];
+};
+
+struct pvrdma_cmd_create_uc {
+    struct pvrdma_cmd_hdr hdr;
+    u32 pfn; /* UAR page frame number */
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_create_uc_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    u32 ctx_handle;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_destroy_uc {
+    struct pvrdma_cmd_hdr hdr;
+    u32 ctx_handle;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_create_pd {
+    struct pvrdma_cmd_hdr hdr;
+    u32 ctx_handle;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_create_pd_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    u32 pd_handle;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_destroy_pd {
+    struct pvrdma_cmd_hdr hdr;
+    u32 pd_handle;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_create_mr {
+    struct pvrdma_cmd_hdr hdr;
+    u64 start;
+    u64 length;
+    u64 pdir_dma;
+    u32 pd_handle;
+    u32 access_flags;
+    u32 flags;
+    u32 nchunks;
+};
+
+struct pvrdma_cmd_create_mr_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    u32 mr_handle;
+    u32 lkey;
+    u32 rkey;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_destroy_mr {
+    struct pvrdma_cmd_hdr hdr;
+    u32 mr_handle;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_create_cq {
+    struct pvrdma_cmd_hdr hdr;
+    u64 pdir_dma;
+    u32 ctx_handle;
+    u32 cqe;
+    u32 nchunks;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_create_cq_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    u32 cq_handle;
+    u32 cqe;
+};
+
+struct pvrdma_cmd_resize_cq {
+    struct pvrdma_cmd_hdr hdr;
+    u32 cq_handle;
+    u32 cqe;
+};
+
+struct pvrdma_cmd_resize_cq_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    u32 cqe;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_destroy_cq {
+    struct pvrdma_cmd_hdr hdr;
+    u32 cq_handle;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_create_qp {
+    struct pvrdma_cmd_hdr hdr;
+    u64 pdir_dma;
+    u32 pd_handle;
+    u32 send_cq_handle;
+    u32 recv_cq_handle;
+    u32 srq_handle;
+    u32 max_send_wr;
+    u32 max_recv_wr;
+    u32 max_send_sge;
+    u32 max_recv_sge;
+    u32 max_inline_data;
+    u32 lkey;
+    u32 access_flags;
+    u16 total_chunks;
+    u16 send_chunks;
+    u16 max_atomic_arg;
+    u8 sq_sig_all;
+    u8 qp_type;
+    u8 is_srq;
+    u8 reserved[3];
+};
+
+struct pvrdma_cmd_create_qp_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    u32 qpn;
+    u32 max_send_wr;
+    u32 max_recv_wr;
+    u32 max_send_sge;
+    u32 max_recv_sge;
+    u32 max_inline_data;
+};
+
+struct pvrdma_cmd_modify_qp {
+    struct pvrdma_cmd_hdr hdr;
+    u32 qp_handle;
+    u32 attr_mask;
+    struct pvrdma_qp_attr attrs;
+};
+
+struct pvrdma_cmd_query_qp {
+    struct pvrdma_cmd_hdr hdr;
+    u32 qp_handle;
+    u32 attr_mask;
+};
+
+struct pvrdma_cmd_query_qp_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    struct pvrdma_qp_attr attrs;
+};
+
+struct pvrdma_cmd_destroy_qp {
+    struct pvrdma_cmd_hdr hdr;
+    u32 qp_handle;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_destroy_qp_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    u32 events_reported;
+    u8 reserved[4];
+};
+
+struct pvrdma_cmd_create_bind {
+    struct pvrdma_cmd_hdr hdr;
+    u32 mtu;
+    u32 vlan;
+    u32 index;
+    u8 new_gid[16];
+    u8 gid_type;
+    u8 reserved[3];
+};
+
+struct pvrdma_cmd_destroy_bind {
+    struct pvrdma_cmd_hdr hdr;
+    u32 index;
+    u8 dest_gid[16];
+    u8 reserved[4];
+};
+
+union pvrdma_cmd_req {
+    struct pvrdma_cmd_hdr hdr;
+    struct pvrdma_cmd_query_port query_port;
+    struct pvrdma_cmd_query_pkey query_pkey;
+    struct pvrdma_cmd_create_uc create_uc;
+    struct pvrdma_cmd_destroy_uc destroy_uc;
+    struct pvrdma_cmd_create_pd create_pd;
+    struct pvrdma_cmd_destroy_pd destroy_pd;
+    struct pvrdma_cmd_create_mr create_mr;
+    struct pvrdma_cmd_destroy_mr destroy_mr;
+    struct pvrdma_cmd_create_cq create_cq;
+    struct pvrdma_cmd_resize_cq resize_cq;
+    struct pvrdma_cmd_destroy_cq destroy_cq;
+    struct pvrdma_cmd_create_qp create_qp;
+    struct pvrdma_cmd_modify_qp modify_qp;
+    struct pvrdma_cmd_query_qp query_qp;
+    struct pvrdma_cmd_destroy_qp destroy_qp;
+    struct pvrdma_cmd_create_bind create_bind;
+    struct pvrdma_cmd_destroy_bind destroy_bind;
+};
+
+union pvrdma_cmd_resp {
+    struct pvrdma_cmd_resp_hdr hdr;
+    struct pvrdma_cmd_query_port_resp query_port_resp;
+    struct pvrdma_cmd_query_pkey_resp query_pkey_resp;
+    struct pvrdma_cmd_create_uc_resp create_uc_resp;
+    struct pvrdma_cmd_create_pd_resp create_pd_resp;
+    struct pvrdma_cmd_create_mr_resp create_mr_resp;
+    struct pvrdma_cmd_create_cq_resp create_cq_resp;
+    struct pvrdma_cmd_resize_cq_resp resize_cq_resp;
+    struct pvrdma_cmd_create_qp_resp create_qp_resp;
+    struct pvrdma_cmd_query_qp_resp query_qp_resp;
+    struct pvrdma_cmd_destroy_qp_resp destroy_qp_resp;
+};
+
+#endif /* PVRDMA_DEV_API_H */
diff --git a/hw/rdma/vmw/pvrdma_dev_ring.c b/hw/rdma/vmw/pvrdma_dev_ring.c
new file mode 100644
index 0000000000..e348db028d
--- /dev/null
+++ b/hw/rdma/vmw/pvrdma_dev_ring.c
@@ -0,0 +1,139 @@
+#include
+#include
+#include
+
+#include "../rdma_utils.h"
+#include "pvrdma_ring.h"
+#include "pvrdma_dev_ring.h"
+
+int pvrdma_ring_init(PvrdmaRing *ring, const char *name, PCIDevice *dev,
+                     struct pvrdma_ring *ring_state, uint32_t max_elems,
+                     size_t elem_sz, dma_addr_t *tbl, dma_addr_t npages)
+{
+    int i;
+    int rc = 0;
+
+    strncpy(ring->name, name, MAX_RING_NAME_SZ);
+    ring->name[MAX_RING_NAME_SZ - 1] = 0;
+    pr_dbg("Initializing %s ring\n", ring->name);
+    ring->dev = dev;
+    ring->ring_state = ring_state;
+    ring->max_elems = max_elems;
+    ring->elem_sz = elem_sz;
+    pr_dbg("ring->elem_sz=%ld\n", ring->elem_sz);
+    pr_dbg("npages=%ld\n", npages);
+    /* TODO: Give a moment to think if we want to redo driver settings
+    atomic_set(&ring->ring_state->prod_tail, 0);
+    atomic_set(&ring->ring_state->cons_head, 0);
+    */
+    ring->npages = npages;
+    ring->pages = malloc(npages * sizeof(void *));
+    for (i = 0; i < npages; i++) {
+        if (!tbl[i]) {
+            pr_err("npages=%ld but tbl[%d] is NULL\n", (long)npages, i);
+            continue;
+        }
+
+        ring->pages[i] = rdma_pci_dma_map(dev, tbl[i], TARGET_PAGE_SIZE);
+        if (!ring->pages[i]) {
+            rc = -ENOMEM;
+            pr_dbg("Failed to map to page %d\n", i);
+            goto out_free;
+        }
+        memset(ring->pages[i], 0, TARGET_PAGE_SIZE);
+    }
+
+    goto out;
+
+out_free:
+    while (i--) {
+        rdma_pci_dma_unmap(dev, ring->pages[i], TARGET_PAGE_SIZE);
+    }
+    free(ring->pages);
+
+out:
+    return rc;
+}
+
+void *pvrdma_ring_next_elem_read(PvrdmaRing *ring)
+{
+    unsigned int idx = 0, offset;
+
+    /*
+    pr_dbg("%s: t=%d, h=%d\n", ring->name, ring->ring_state->prod_tail,
+           ring->ring_state->cons_head);
+    */
+
+    if (!pvrdma_idx_ring_has_data(ring->ring_state, ring->max_elems, &idx)) {
+        pr_dbg("No more data in ring\n");
+        return NULL;
+    }
+
+    offset = idx * ring->elem_sz;
+    /*
+    pr_dbg("idx=%d\n", idx);
+    pr_dbg("offset=%d\n", offset);
+    */
+    return ring->pages[offset / TARGET_PAGE_SIZE] + (offset % TARGET_PAGE_SIZE);
+}
+
+void pvrdma_ring_read_inc(PvrdmaRing *ring)
+{
+    pvrdma_idx_ring_inc(&ring->ring_state->cons_head, ring->max_elems);
+    /*
+    pr_dbg("%s: t=%d, h=%d, m=%ld\n", ring->name,
+           ring->ring_state->prod_tail, ring->ring_state->cons_head,
+           ring->max_elems);
+    */
+}
+
+void *pvrdma_ring_next_elem_write(PvrdmaRing *ring)
+{
+    unsigned int idx, offset, tail;
+
+    /*
+    pr_dbg("%s: t=%d, h=%d\n", ring->name, ring->ring_state->prod_tail,
+           ring->ring_state->cons_head);
+    */
+
+    if (!pvrdma_idx_ring_has_space(ring->ring_state, ring->max_elems, &tail)) {
+        pr_dbg("CQ is full\n");
+        return NULL;
+    }
+
+    idx = pvrdma_idx(&ring->ring_state->prod_tail, ring->max_elems);
+    /* TODO: tail == idx */
+
offset =3D idx * ring->elem_sz; + return ring->pages[offset / TARGET_PAGE_SIZE] + (offset % TARGET_PAGE_= SIZE); +} + +void pvrdma_ring_write_inc(PvrdmaRing *ring) +{ + pvrdma_idx_ring_inc(&ring->ring_state->prod_tail, ring->max_elems); + /* + pr_dbg("%s: t=3D%d, h=3D%d, m=3D%ld\n", ring->name, + ring->ring_state->prod_tail, ring->ring_state->cons_head, + ring->max_elems); + */ +} + +void pvrdma_ring_free(PvrdmaRing *ring) +{ + if (!ring) { + return; + } + + if (!ring->pages) { + return; + } + + pr_dbg("ring->npages=3D%d\n", ring->npages); + while (ring->npages--) { + rdma_pci_dma_unmap(ring->dev, ring->pages[ring->npages], + TARGET_PAGE_SIZE); + } + + free(ring->pages); + ring->pages =3D NULL; +} diff --git a/hw/rdma/vmw/pvrdma_dev_ring.h b/hw/rdma/vmw/pvrdma_dev_ring.h new file mode 100644 index 0000000000..26188b3543 --- /dev/null +++ b/hw/rdma/vmw/pvrdma_dev_ring.h @@ -0,0 +1,42 @@ +/* + * QEMU VMWARE paravirtual RDMA ring utilities + * + * Copyright (C) 2018 Oracle + * Copyright (C) 2018 Red Hat Inc + * + * Authors: + * Yuval Shaia + * Marcel Apfelbaum + * + * This work is licensed under the terms of the GNU GPL, version 2. + * See the COPYING file in the top-level directory. 
+ * + */ + +#ifndef PVRDMA_DEV_RING_H +#define PVRDMA_DEV_RING_H + +#include + +#define MAX_RING_NAME_SZ 32 + +typedef struct PvrdmaRing { + char name[MAX_RING_NAME_SZ]; + PCIDevice *dev; + uint32_t max_elems; + size_t elem_sz; + struct pvrdma_ring *ring_state; /* used only for unmap */ + int npages; + void **pages; +} PvrdmaRing; + +int pvrdma_ring_init(PvrdmaRing *ring, const char *name, PCIDevice *dev, + struct pvrdma_ring *ring_state, uint32_t max_elems, + size_t elem_sz, dma_addr_t *tbl, dma_addr_t npages); +void *pvrdma_ring_next_elem_read(PvrdmaRing *ring); +void pvrdma_ring_read_inc(PvrdmaRing *ring); +void *pvrdma_ring_next_elem_write(PvrdmaRing *ring); +void pvrdma_ring_write_inc(PvrdmaRing *ring); +void pvrdma_ring_free(PvrdmaRing *ring); + +#endif diff --git a/hw/rdma/vmw/pvrdma_ib_verbs.h b/hw/rdma/vmw/pvrdma_ib_verbs.h new file mode 100644 index 0000000000..cf1430024b --- /dev/null +++ b/hw/rdma/vmw/pvrdma_ib_verbs.h @@ -0,0 +1,433 @@ +/* + * QEMU VMWARE paravirtual RDMA device definitions + * + * Copyright (C) 2018 Oracle + * Copyright (C) 2018 Red Hat Inc + * + * Authors: + * Yuval Shaia + * Marcel Apfelbaum + * + * This work is licensed under the terms of the GNU GPL, version 2. + * See the COPYING file in the top-level directory. + * + */ + +#ifndef PVRDMA_IB_VERBS_H +#define PVRDMA_IB_VERBS_H + +/* + * The VMWARE headers imported from the Linux kernel do not fully comply + * with QEMU coding standards with respect to the types and defines used. + * Since we did not want to modify the VMWARE code, the following defines + * are introduced so that these headers compile under QEMU. + */ + +#define u8 uint8_t +#define u16 uint16_t +#define u32 uint32_t +#define u64 uint64_t + +/* + * The following is the interface definition for the PVRDMA device as + * provided by VMWARE. + * See the original copyright in the Linux kernel v4.14.5 header file + * drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h + */ + +/* + * [PLEASE NOTE: VMWARE, INC.
ELECTS TO USE AND DISTRIBUTE THIS COMPONENT + * UNDER THE TERMS OF THE OpenIB.org BSD license. THE ORIGINAL LICENSE TE= RMS + * ARE REPRODUCED BELOW ONLY AS A REFERENCE.] + * + * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved. + * Copyright (c) 2004 Infinicon Corporation. All rights reserved. + * Copyright (c) 2004 Intel Corporation. All rights reserved. + * Copyright (c) 2004 Topspin Corporation. All rights reserved. + * Copyright (c) 2004 Voltaire Corporation. All rights reserved. + * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. + * Copyright (c) 2005, 2006, 2007 Cisco Systems. All rights reserved. + * Copyright (c) 2015-2016 VMware, Inc. All rights reserved. + * + * This software is available to you under a choice of one of two + * licenses. You may choose to be licensed under the terms of the GNU + * General Public License (GPL) Version 2, available from the file + * COPYING in the main directory of this source tree, or the + * OpenIB.org BSD license below: + * + * Redistribution and use in source and binary forms, with or + * without modification, are permitted provided that the following + * conditions are met: + * + * - Redistributions of source code must retain the above + * copyright notice, this list of conditions and the following + * disclaimer. + * + * - Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and/or other materials + * provided with the distribution. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + +#include + +union pvrdma_gid { + u8 raw[16]; + struct { + __be64 subnet_prefix; + __be64 interface_id; + } global; +}; + +enum pvrdma_link_layer { + PVRDMA_LINK_LAYER_UNSPECIFIED, + PVRDMA_LINK_LAYER_INFINIBAND, + PVRDMA_LINK_LAYER_ETHERNET, +}; + +enum pvrdma_mtu { + PVRDMA_MTU_256 =3D 1, + PVRDMA_MTU_512 =3D 2, + PVRDMA_MTU_1024 =3D 3, + PVRDMA_MTU_2048 =3D 4, + PVRDMA_MTU_4096 =3D 5, +}; + +static inline int pvrdma_mtu_enum_to_int(enum pvrdma_mtu mtu) +{ + switch (mtu) { + case PVRDMA_MTU_256: return 256; + case PVRDMA_MTU_512: return 512; + case PVRDMA_MTU_1024: return 1024; + case PVRDMA_MTU_2048: return 2048; + case PVRDMA_MTU_4096: return 4096; + default: return -1; + } +} + +static inline enum pvrdma_mtu pvrdma_mtu_int_to_enum(int mtu) +{ + switch (mtu) { + case 256: return PVRDMA_MTU_256; + case 512: return PVRDMA_MTU_512; + case 1024: return PVRDMA_MTU_1024; + case 2048: return PVRDMA_MTU_2048; + case 4096: + default: return PVRDMA_MTU_4096; + } +} + +enum pvrdma_port_state { + PVRDMA_PORT_NOP =3D 0, + PVRDMA_PORT_DOWN =3D 1, + PVRDMA_PORT_INIT =3D 2, + PVRDMA_PORT_ARMED =3D 3, + PVRDMA_PORT_ACTIVE =3D 4, + PVRDMA_PORT_ACTIVE_DEFER =3D 5, +}; + +enum pvrdma_port_cap_flags { + PVRDMA_PORT_SM =3D 1 << 1, + PVRDMA_PORT_NOTICE_SUP =3D 1 << 2, + PVRDMA_PORT_TRAP_SUP =3D 1 << 3, + PVRDMA_PORT_OPT_IPD_SUP =3D 1 << 4, + PVRDMA_PORT_AUTO_MIGR_SUP =3D 1 << 5, + PVRDMA_PORT_SL_MAP_SUP =3D 1 << 6, + PVRDMA_PORT_MKEY_NVRAM =3D 1 << 7, + PVRDMA_PORT_PKEY_NVRAM =3D 1 << 8, + PVRDMA_PORT_LED_INFO_SUP =3D 1 << 9, + PVRDMA_PORT_SM_DISABLED =3D 1 << 10, + PVRDMA_PORT_SYS_IMAGE_GUID_SUP =3D 1 << 11, + PVRDMA_PORT_PKEY_SW_EXT_PORT_TRAP_SUP =3D 1 << 12, + PVRDMA_PORT_EXTENDED_SPEEDS_SUP =3D 1 << 14, 
+ PVRDMA_PORT_CM_SUP =3D 1 << 16, + PVRDMA_PORT_SNMP_TUNNEL_SUP =3D 1 << 17, + PVRDMA_PORT_REINIT_SUP =3D 1 << 18, + PVRDMA_PORT_DEVICE_MGMT_SUP =3D 1 << 19, + PVRDMA_PORT_VENDOR_CLASS_SUP =3D 1 << 20, + PVRDMA_PORT_DR_NOTICE_SUP =3D 1 << 21, + PVRDMA_PORT_CAP_MASK_NOTICE_SUP =3D 1 << 22, + PVRDMA_PORT_BOOT_MGMT_SUP =3D 1 << 23, + PVRDMA_PORT_LINK_LATENCY_SUP =3D 1 << 24, + PVRDMA_PORT_CLIENT_REG_SUP =3D 1 << 25, + PVRDMA_PORT_IP_BASED_GIDS =3D 1 << 26, + PVRDMA_PORT_CAP_FLAGS_MAX =3D PVRDMA_PORT_IP_BASED_GIDS, +}; + +enum pvrdma_port_width { + PVRDMA_WIDTH_1X =3D 1, + PVRDMA_WIDTH_4X =3D 2, + PVRDMA_WIDTH_8X =3D 4, + PVRDMA_WIDTH_12X =3D 8, +}; + +static inline int pvrdma_width_enum_to_int(enum pvrdma_port_width width) +{ + switch (width) { + case PVRDMA_WIDTH_1X: return 1; + case PVRDMA_WIDTH_4X: return 4; + case PVRDMA_WIDTH_8X: return 8; + case PVRDMA_WIDTH_12X: return 12; + default: return -1; + } +} + +enum pvrdma_port_speed { + PVRDMA_SPEED_SDR =3D 1, + PVRDMA_SPEED_DDR =3D 2, + PVRDMA_SPEED_QDR =3D 4, + PVRDMA_SPEED_FDR10 =3D 8, + PVRDMA_SPEED_FDR =3D 16, + PVRDMA_SPEED_EDR =3D 32, +}; + +struct pvrdma_port_attr { + enum pvrdma_port_state state; + enum pvrdma_mtu max_mtu; + enum pvrdma_mtu active_mtu; + u32 gid_tbl_len; + u32 port_cap_flags; + u32 max_msg_sz; + u32 bad_pkey_cntr; + u32 qkey_viol_cntr; + u16 pkey_tbl_len; + u16 lid; + u16 sm_lid; + u8 lmc; + u8 max_vl_num; + u8 sm_sl; + u8 subnet_timeout; + u8 init_type_reply; + u8 active_width; + u8 active_speed; + u8 phys_state; + u8 reserved[2]; +}; + +struct pvrdma_global_route { + union pvrdma_gid dgid; + u32 flow_label; + u8 sgid_index; + u8 hop_limit; + u8 traffic_class; + u8 reserved; +}; + +struct pvrdma_grh { + __be32 version_tclass_flow; + __be16 paylen; + u8 next_hdr; + u8 hop_limit; + union pvrdma_gid sgid; + union pvrdma_gid dgid; +}; + +enum pvrdma_ah_flags { + PVRDMA_AH_GRH =3D 1, +}; + +enum pvrdma_rate { + PVRDMA_RATE_PORT_CURRENT =3D 0, + PVRDMA_RATE_2_5_GBPS =3D 2, + PVRDMA_RATE_5_GBPS 
=3D 5, + PVRDMA_RATE_10_GBPS =3D 3, + PVRDMA_RATE_20_GBPS =3D 6, + PVRDMA_RATE_30_GBPS =3D 4, + PVRDMA_RATE_40_GBPS =3D 7, + PVRDMA_RATE_60_GBPS =3D 8, + PVRDMA_RATE_80_GBPS =3D 9, + PVRDMA_RATE_120_GBPS =3D 10, + PVRDMA_RATE_14_GBPS =3D 11, + PVRDMA_RATE_56_GBPS =3D 12, + PVRDMA_RATE_112_GBPS =3D 13, + PVRDMA_RATE_168_GBPS =3D 14, + PVRDMA_RATE_25_GBPS =3D 15, + PVRDMA_RATE_100_GBPS =3D 16, + PVRDMA_RATE_200_GBPS =3D 17, + PVRDMA_RATE_300_GBPS =3D 18, +}; + +struct pvrdma_ah_attr { + struct pvrdma_global_route grh; + u16 dlid; + u16 vlan_id; + u8 sl; + u8 src_path_bits; + u8 static_rate; + u8 ah_flags; + u8 port_num; + u8 dmac[6]; + u8 reserved; +}; + +enum pvrdma_cq_notify_flags { + PVRDMA_CQ_SOLICITED =3D 1 << 0, + PVRDMA_CQ_NEXT_COMP =3D 1 << 1, + PVRDMA_CQ_SOLICITED_MASK =3D PVRDMA_CQ_SOLICITED | + PVRDMA_CQ_NEXT_COMP, + PVRDMA_CQ_REPORT_MISSED_EVENTS =3D 1 << 2, +}; + +struct pvrdma_qp_cap { + u32 max_send_wr; + u32 max_recv_wr; + u32 max_send_sge; + u32 max_recv_sge; + u32 max_inline_data; + u32 reserved; +}; + +enum pvrdma_sig_type { + PVRDMA_SIGNAL_ALL_WR, + PVRDMA_SIGNAL_REQ_WR, +}; + +enum pvrdma_qp_type { + PVRDMA_QPT_SMI, + PVRDMA_QPT_GSI, + PVRDMA_QPT_RC, + PVRDMA_QPT_UC, + PVRDMA_QPT_UD, + PVRDMA_QPT_RAW_IPV6, + PVRDMA_QPT_RAW_ETHERTYPE, + PVRDMA_QPT_RAW_PACKET =3D 8, + PVRDMA_QPT_XRC_INI =3D 9, + PVRDMA_QPT_XRC_TGT, + PVRDMA_QPT_MAX, +}; + +enum pvrdma_qp_create_flags { + PVRDMA_QP_CREATE_IPOPVRDMA_UD_LSO =3D 1 << 0, + PVRDMA_QP_CREATE_BLOCK_MULTICAST_LOOPBACK =3D 1 << 1, +}; + +enum pvrdma_qp_attr_mask { + PVRDMA_QP_STATE =3D 1 << 0, + PVRDMA_QP_CUR_STATE =3D 1 << 1, + PVRDMA_QP_EN_SQD_ASYNC_NOTIFY =3D 1 << 2, + PVRDMA_QP_ACCESS_FLAGS =3D 1 << 3, + PVRDMA_QP_PKEY_INDEX =3D 1 << 4, + PVRDMA_QP_PORT =3D 1 << 5, + PVRDMA_QP_QKEY =3D 1 << 6, + PVRDMA_QP_AV =3D 1 << 7, + PVRDMA_QP_PATH_MTU =3D 1 << 8, + PVRDMA_QP_TIMEOUT =3D 1 << 9, + PVRDMA_QP_RETRY_CNT =3D 1 << 10, + PVRDMA_QP_RNR_RETRY =3D 1 << 11, + PVRDMA_QP_RQ_PSN =3D 1 << 12, + 
PVRDMA_QP_MAX_QP_RD_ATOMIC =3D 1 << 13, + PVRDMA_QP_ALT_PATH =3D 1 << 14, + PVRDMA_QP_MIN_RNR_TIMER =3D 1 << 15, + PVRDMA_QP_SQ_PSN =3D 1 << 16, + PVRDMA_QP_MAX_DEST_RD_ATOMIC =3D 1 << 17, + PVRDMA_QP_PATH_MIG_STATE =3D 1 << 18, + PVRDMA_QP_CAP =3D 1 << 19, + PVRDMA_QP_DEST_QPN =3D 1 << 20, + PVRDMA_QP_ATTR_MASK_MAX =3D PVRDMA_QP_DEST_QPN, +}; + +enum pvrdma_qp_state { + PVRDMA_QPS_RESET, + PVRDMA_QPS_INIT, + PVRDMA_QPS_RTR, + PVRDMA_QPS_RTS, + PVRDMA_QPS_SQD, + PVRDMA_QPS_SQE, + PVRDMA_QPS_ERR, +}; + +enum pvrdma_mig_state { + PVRDMA_MIG_MIGRATED, + PVRDMA_MIG_REARM, + PVRDMA_MIG_ARMED, +}; + +enum pvrdma_mw_type { + PVRDMA_MW_TYPE_1 =3D 1, + PVRDMA_MW_TYPE_2 =3D 2, +}; + +struct pvrdma_qp_attr { + enum pvrdma_qp_state qp_state; + enum pvrdma_qp_state cur_qp_state; + enum pvrdma_mtu path_mtu; + enum pvrdma_mig_state path_mig_state; + u32 qkey; + u32 rq_psn; + u32 sq_psn; + u32 dest_qp_num; + u32 qp_access_flags; + u16 pkey_index; + u16 alt_pkey_index; + u8 en_sqd_async_notify; + u8 sq_draining; + u8 max_rd_atomic; + u8 max_dest_rd_atomic; + u8 min_rnr_timer; + u8 port_num; + u8 timeout; + u8 retry_cnt; + u8 rnr_retry; + u8 alt_port_num; + u8 alt_timeout; + u8 reserved[5]; + struct pvrdma_qp_cap cap; + struct pvrdma_ah_attr ah_attr; + struct pvrdma_ah_attr alt_ah_attr; +}; + +enum pvrdma_send_flags { + PVRDMA_SEND_FENCE =3D 1 << 0, + PVRDMA_SEND_SIGNALED =3D 1 << 1, + PVRDMA_SEND_SOLICITED =3D 1 << 2, + PVRDMA_SEND_INLINE =3D 1 << 3, + PVRDMA_SEND_IP_CSUM =3D 1 << 4, + PVRDMA_SEND_FLAGS_MAX =3D PVRDMA_SEND_IP_CSUM, +}; + +enum pvrdma_access_flags { + PVRDMA_ACCESS_LOCAL_WRITE =3D 1 << 0, + PVRDMA_ACCESS_REMOTE_WRITE =3D 1 << 1, + PVRDMA_ACCESS_REMOTE_READ =3D 1 << 2, + PVRDMA_ACCESS_REMOTE_ATOMIC =3D 1 << 3, + PVRDMA_ACCESS_MW_BIND =3D 1 << 4, + PVRDMA_ZERO_BASED =3D 1 << 5, + PVRDMA_ACCESS_ON_DEMAND =3D 1 << 6, + PVRDMA_ACCESS_FLAGS_MAX =3D PVRDMA_ACCESS_ON_DEMAND, +}; + +enum ib_wc_status { + IB_WC_SUCCESS, + IB_WC_LOC_LEN_ERR, + IB_WC_LOC_QP_OP_ERR, + 
IB_WC_LOC_EEC_OP_ERR, + IB_WC_LOC_PROT_ERR, + IB_WC_WR_FLUSH_ERR, + IB_WC_MW_BIND_ERR, + IB_WC_BAD_RESP_ERR, + IB_WC_LOC_ACCESS_ERR, + IB_WC_REM_INV_REQ_ERR, + IB_WC_REM_ACCESS_ERR, + IB_WC_REM_OP_ERR, + IB_WC_RETRY_EXC_ERR, + IB_WC_RNR_RETRY_EXC_ERR, + IB_WC_LOC_RDD_VIOL_ERR, + IB_WC_REM_INV_RD_REQ_ERR, + IB_WC_REM_ABORT_ERR, + IB_WC_INV_EECN_ERR, + IB_WC_INV_EEC_STATE_ERR, + IB_WC_FATAL_ERR, + IB_WC_RESP_TIMEOUT_ERR, + IB_WC_GENERAL_ERR +}; + +#endif /* PVRDMA_IB_VERBS_H */ diff --git a/hw/rdma/vmw/pvrdma_main.c b/hw/rdma/vmw/pvrdma_main.c new file mode 100644 index 0000000000..813c8f2ceb --- /dev/null +++ b/hw/rdma/vmw/pvrdma_main.c @@ -0,0 +1,644 @@ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "trace.h" + +#include "../rdma_rm.h" +#include "../rdma_backend.h" +#include "../rdma_utils.h" + +#include "pvrdma.h" +#include "vmw_pvrdma-abi.h" +#include "pvrdma_dev_api.h" +#include "pvrdma_qp_ops.h" + +static Property pvrdma_dev_properties[] =3D { + DEFINE_PROP_STRING("backend-dev", PVRDMADev, backend_device_name), + DEFINE_PROP_UINT8("backend-port", PVRDMADev, backend_port_num, 1), + DEFINE_PROP_UINT8("backend-gid-idx", PVRDMADev, backend_gid_idx, 0), + DEFINE_PROP_UINT64("dev-caps-max-mr-size", PVRDMADev, dev_attr.max_mr_= size, + MAX_MR_SIZE), + DEFINE_PROP_INT32("dev-caps-max-qp", PVRDMADev, dev_attr.max_qp, MAX_Q= P), + DEFINE_PROP_INT32("dev-caps-max-sge", PVRDMADev, dev_attr.max_sge, MAX= _SGE), + DEFINE_PROP_INT32("dev-caps-max-cq", PVRDMADev, dev_attr.max_cq, MAX_C= Q), + DEFINE_PROP_INT32("dev-caps-max-mr", PVRDMADev, dev_attr.max_mr, MAX_M= R), + DEFINE_PROP_INT32("dev-caps-max-pd", PVRDMADev, dev_attr.max_pd, MAX_P= D), + DEFINE_PROP_INT32("dev-caps-qp-rd-atom", PVRDMADev, dev_attr.max_qp_rd= _atom, + MAX_QP_RD_ATOM), + DEFINE_PROP_INT32("dev-caps-max-qp-init-rd-atom", PVRDMADev, + dev_attr.max_qp_init_rd_atom, MAX_QP_INIT_RD_ATOM), + DEFINE_PROP_INT32("dev-caps-max-ah", PVRDMADev, 
dev_attr.max_ah, MAX_A= H), + DEFINE_PROP_END_OF_LIST(), +}; + +static void free_dev_ring(PCIDevice *pci_dev, PvrdmaRing *ring, + void *ring_state) +{ + pvrdma_ring_free(ring); + rdma_pci_dma_unmap(pci_dev, ring_state, TARGET_PAGE_SIZE); +} + +static int init_dev_ring(PvrdmaRing *ring, struct pvrdma_ring **ring_state, + const char *name, PCIDevice *pci_dev, + dma_addr_t dir_addr, u32 num_pages) +{ + __u64 *dir, *tbl; + int rc =3D 0; + + pr_dbg("Initializing device ring %s\n", name); + pr_dbg("pdir_dma=3D0x%llx\n", (long long unsigned int)dir_addr); + pr_dbg("num_pages=3D%d\n", num_pages); + dir =3D rdma_pci_dma_map(pci_dev, dir_addr, TARGET_PAGE_SIZE); + if (!dir) { + pr_err("Failed to map to page directory\n"); + rc =3D -ENOMEM; + goto out; + } + tbl =3D rdma_pci_dma_map(pci_dev, dir[0], TARGET_PAGE_SIZE); + if (!tbl) { + pr_err("Failed to map to page table\n"); + rc =3D -ENOMEM; + goto out_free_dir; + } + + *ring_state =3D rdma_pci_dma_map(pci_dev, tbl[0], TARGET_PAGE_SIZE); + if (!*ring_state) { + pr_err("Failed to map to ring state\n"); + rc =3D -ENOMEM; + goto out_free_tbl; + } + /* RX ring is the second */ + (struct pvrdma_ring *)(*ring_state)++; + rc =3D pvrdma_ring_init(ring, name, pci_dev, + (struct pvrdma_ring *)*ring_state, + (num_pages - 1) * TARGET_PAGE_SIZE / + sizeof(struct pvrdma_cqne), + sizeof(struct pvrdma_cqne), + (dma_addr_t *)&tbl[1], (dma_addr_t)num_pages - 1= ); + if (rc) { + pr_err("Failed to initialize ring\n"); + rc =3D -ENOMEM; + goto out_free_ring_state; + } + + goto out_free_tbl; + +out_free_ring_state: + rdma_pci_dma_unmap(pci_dev, *ring_state, TARGET_PAGE_SIZE); + +out_free_tbl: + rdma_pci_dma_unmap(pci_dev, tbl, TARGET_PAGE_SIZE); + +out_free_dir: + rdma_pci_dma_unmap(pci_dev, dir, TARGET_PAGE_SIZE); + +out: + return rc; +} + +static void free_dsr(PVRDMADev *dev) +{ + PCIDevice *pci_dev =3D PCI_DEVICE(dev); + + if (!dev->dsr_info.dsr) { + return; + } + + free_dev_ring(pci_dev, &dev->dsr_info.async, + dev->dsr_info.async_ring_state); 
+ + free_dev_ring(pci_dev, &dev->dsr_info.cq, dev->dsr_info.cq_ring_state); + + rdma_pci_dma_unmap(pci_dev, dev->dsr_info.req, + sizeof(union pvrdma_cmd_req)); + + rdma_pci_dma_unmap(pci_dev, dev->dsr_info.rsp, + sizeof(union pvrdma_cmd_resp)); + + rdma_pci_dma_unmap(pci_dev, dev->dsr_info.dsr, + sizeof(struct pvrdma_device_shared_region)); + + dev->dsr_info.dsr =3D NULL; +} + +static int load_dsr(PVRDMADev *dev) +{ + int rc =3D 0; + PCIDevice *pci_dev =3D PCI_DEVICE(dev); + DSRInfo *dsr_info; + struct pvrdma_device_shared_region *dsr; + + free_dsr(dev); + + /* Map to DSR */ + pr_dbg("dsr_dma=3D0x%llx\n", (long long unsigned int)dev->dsr_info.dma= ); + dev->dsr_info.dsr =3D rdma_pci_dma_map(pci_dev, dev->dsr_info.dma, + sizeof(struct pvrdma_device_shared_region)); + if (!dev->dsr_info.dsr) { + pr_err("Failed to map to DSR\n"); + rc =3D -ENOMEM; + goto out; + } + + /* Shortcuts */ + dsr_info =3D &dev->dsr_info; + dsr =3D dsr_info->dsr; + + /* Map to command slot */ + pr_dbg("cmd_dma=3D0x%llx\n", (long long unsigned int)dsr->cmd_slot_dma= ); + dsr_info->req =3D rdma_pci_dma_map(pci_dev, dsr->cmd_slot_dma, + sizeof(union pvrdma_cmd_req)); + if (!dsr_info->req) { + pr_err("Failed to map to command slot address\n"); + rc =3D -ENOMEM; + goto out_free_dsr; + } + + /* Map to response slot */ + pr_dbg("rsp_dma=3D0x%llx\n", (long long unsigned int)dsr->resp_slot_dm= a); + dsr_info->rsp =3D rdma_pci_dma_map(pci_dev, dsr->resp_slot_dma, + sizeof(union pvrdma_cmd_resp)); + if (!dsr_info->rsp) { + pr_err("Failed to map to response slot address\n"); + rc =3D -ENOMEM; + goto out_free_req; + } + + /* Map to CQ notification ring */ + rc =3D init_dev_ring(&dsr_info->cq, &dsr_info->cq_ring_state, "dev_cq", + pci_dev, dsr->cq_ring_pages.pdir_dma, + dsr->cq_ring_pages.num_pages); + if (rc) { + pr_err("Failed to map to initialize CQ ring\n"); + rc =3D -ENOMEM; + goto out_free_rsp; + } + + /* Map to event notification ring */ + rc =3D init_dev_ring(&dsr_info->async, 
&dsr_info->async_ring_state, + "dev_async", pci_dev, dsr->async_ring_pages.pdir_dm= a, + dsr->async_ring_pages.num_pages); + if (rc) { + pr_err("Failed to initialize event ring\n"); + rc =3D -ENOMEM; + goto out_free_rsp; + } + + goto out; + +out_free_rsp: + rdma_pci_dma_unmap(pci_dev, dsr_info->rsp, sizeof(union pvrdma_cmd_res= p)); + +out_free_req: + rdma_pci_dma_unmap(pci_dev, dsr_info->req, sizeof(union pvrdma_cmd_req= )); + +out_free_dsr: + rdma_pci_dma_unmap(pci_dev, dsr_info->dsr, + sizeof(struct pvrdma_device_shared_region)); + dsr_info->dsr =3D NULL; + +out: + return rc; +} + +static void init_dsr_dev_caps(PVRDMADev *dev) +{ + struct pvrdma_device_shared_region *dsr; + + if (dev->dsr_info.dsr =3D=3D NULL) { + pr_err("Cannot initialize DSR: DSR is not mapped\n"); + return; + } + + dsr =3D dev->dsr_info.dsr; + + dsr->caps.fw_ver =3D PVRDMA_FW_VERSION; + pr_dbg("fw_ver=3D0x%lx\n", dsr->caps.fw_ver); + + dsr->caps.mode =3D PVRDMA_DEVICE_MODE_ROCE; + pr_dbg("mode=3D%d\n", dsr->caps.mode); + + dsr->caps.gid_types |=3D PVRDMA_GID_TYPE_FLAG_ROCE_V1; + pr_dbg("gid_types=3D0x%x\n", dsr->caps.gid_types); + + dsr->caps.max_uar =3D RDMA_BAR2_UAR_SIZE; + pr_dbg("max_uar=3D%d\n", dsr->caps.max_uar); + + dsr->caps.max_mr_size =3D dev->dev_attr.max_mr_size; + dsr->caps.max_qp =3D dev->dev_attr.max_qp; + dsr->caps.max_qp_wr =3D dev->dev_attr.max_qp_wr; + dsr->caps.max_sge =3D dev->dev_attr.max_sge; + dsr->caps.max_cq =3D dev->dev_attr.max_cq; + dsr->caps.max_cqe =3D dev->dev_attr.max_cqe; + dsr->caps.max_mr =3D dev->dev_attr.max_mr; + dsr->caps.max_pd =3D dev->dev_attr.max_pd; + dsr->caps.max_ah =3D dev->dev_attr.max_ah; + + dsr->caps.gid_tbl_len =3D MAX_GIDS; + pr_dbg("gid_tbl_len=3D%d\n", dsr->caps.gid_tbl_len); + + dsr->caps.sys_image_guid =3D 0; + pr_dbg("sys_image_guid=3D%llx\n", dsr->caps.sys_image_guid); + + dsr->caps.node_guid =3D cpu_to_be64(dev->node_guid); + pr_dbg("node_guid=3D%llx\n", + (long long unsigned int)be64_to_cpu(dsr->caps.node_guid)); + + dsr->caps.phys_port_cnt
=3D MAX_PORTS; + pr_dbg("phys_port_cnt=3D%d\n", dsr->caps.phys_port_cnt); + + dsr->caps.max_pkeys =3D MAX_PKEYS; + pr_dbg("max_pkeys=3D%d\n", dsr->caps.max_pkeys); + + pr_dbg("Initialized\n"); +} + +static void free_ports(PVRDMADev *dev) +{ + int i; + + for (i =3D 0; i < MAX_PORTS; i++) { + free(dev->rdma_dev_res.ports[i].gid_tbl); + } +} + +static int init_ports(PVRDMADev *dev, Error **errp) +{ + int i, ret =3D 0; + + memset(dev->rdma_dev_res.ports, 0, sizeof(dev->rdma_dev_res.ports)); + + for (i =3D 0; i < MAX_PORTS; i++) { + dev->rdma_dev_res.ports[i].state =3D PVRDMA_PORT_DOWN; + + dev->rdma_dev_res.ports[i].pkey_tbl =3D + malloc(sizeof(*dev->rdma_dev_res.ports[i].pkey_tbl) * + MAX_PORT_PKEYS); + if (dev->rdma_dev_res.ports[i].pkey_tbl =3D=3D NULL) { + ret =3D -ENOMEM; + goto err_free_ports; + } + + memset(dev->rdma_dev_res.ports[i].pkey_tbl, 0, + sizeof(*dev->rdma_dev_res.ports[i].pkey_tbl) * MAX_PORT_PKEYS); + } + + return 0; + +err_free_ports: + free_ports(dev); + + error_setg(errp, "Failed to initialize device's ports"); + + return ret; +} + +static void activate_device(PVRDMADev *dev) +{ + set_reg_val(dev, PVRDMA_REG_ERR, 0); + pr_dbg("Device activated\n"); +} + +static int unquiesce_device(PVRDMADev *dev) +{ + pr_dbg("Device unquiesced\n"); + return 0; +} + +static int reset_device(PVRDMADev *dev) +{ + pr_dbg("Device reset complete\n"); + return 0; +} + +static uint64_t regs_read(void *opaque, hwaddr addr, unsigned size) +{ + PVRDMADev *dev =3D opaque; + __u32 val; + + /* pr_dbg("addr=3D0x%lx, size=3D%d\n", addr, size); */ + + if (get_reg_val(dev, addr, &val)) { + pr_dbg("Error trying to read REG value from address 0x%x\n", + (__u32)addr); + return -EINVAL; + } + + trace_pvrdma_regs_read(addr, val); + + return val; +} + +static void regs_write(void *opaque, hwaddr addr, uint64_t val, unsigned s= ize) +{ + PVRDMADev *dev =3D opaque; + + /* pr_dbg("addr=3D0x%lx, val=3D0x%x, size=3D%d\n", addr, (uint32_t)val= , size); */ + + if (set_reg_val(dev, addr, val)) { + pr_err("Error trying to set REG
value, addr=3D0x%lx, val=3D0x%lx\n= ", + (uint64_t)addr, val); + return; + } + + trace_pvrdma_regs_write(addr, val); + + switch (addr) { + case PVRDMA_REG_DSRLOW: + dev->dsr_info.dma =3D val; + break; + case PVRDMA_REG_DSRHIGH: + dev->dsr_info.dma |=3D val << 32; + load_dsr(dev); + init_dsr_dev_caps(dev); + break; + case PVRDMA_REG_CTL: + switch (val) { + case PVRDMA_DEVICE_CTL_ACTIVATE: + activate_device(dev); + break; + case PVRDMA_DEVICE_CTL_UNQUIESCE: + unquiesce_device(dev); + break; + case PVRDMA_DEVICE_CTL_RESET: + reset_device(dev); + break; + } + break; + case PVRDMA_REG_IMR: + pr_dbg("Interrupt mask=3D0x%lx\n", val); + dev->interrupt_mask =3D val; + break; + case PVRDMA_REG_REQUEST: + if (val =3D=3D 0) { + execute_command(dev); + } + break; + default: + break; + } +} + +static const MemoryRegionOps regs_ops =3D { + .read =3D regs_read, + .write =3D regs_write, + .endianness =3D DEVICE_LITTLE_ENDIAN, + .impl =3D { + .min_access_size =3D sizeof(uint32_t), + .max_access_size =3D sizeof(uint32_t), + }, +}; + +static void uar_write(void *opaque, hwaddr addr, uint64_t val, unsigned si= ze) +{ + PVRDMADev *dev =3D opaque; + + /* pr_dbg("addr=3D0x%lx, val=3D0x%x, size=3D%d\n", addr, (uint32_t)val= , size); */ + + switch (addr & 0xFFF) { /* Mask with 0xFFF as each UC gets page */ + case PVRDMA_UAR_QP_OFFSET: + pr_dbg("UAR QP command, addr=3D0x%x, val=3D0x%lx\n", (__u32)addr, = val); + if (val & PVRDMA_UAR_QP_SEND) { + pvrdma_qp_send(dev, val & PVRDMA_UAR_HANDLE_MASK); + } + if (val & PVRDMA_UAR_QP_RECV) { + pvrdma_qp_recv(dev, val & PVRDMA_UAR_HANDLE_MASK); + } + break; + case PVRDMA_UAR_CQ_OFFSET: + /* pr_dbg("UAR CQ cmd, addr=3D0x%x, val=3D0x%lx\n", (__u32)addr, v= al); */ + if (val & PVRDMA_UAR_CQ_ARM) { + rdma_rm_req_notify_cq(&dev->rdma_dev_res, + val & PVRDMA_UAR_HANDLE_MASK, + !!(val & PVRDMA_UAR_CQ_ARM_SOL)); + } + if (val & PVRDMA_UAR_CQ_ARM_SOL) { + pr_dbg("UAR_CQ_ARM_SOL (%ld)\n", val & PVRDMA_UAR_HANDLE_MASK); + } + if (val & PVRDMA_UAR_CQ_POLL) { + 
pr_dbg("UAR_CQ_POLL (%ld)\n", val & PVRDMA_UAR_HANDLE_MASK); + pvrdma_cq_poll(&dev->rdma_dev_res, val & PVRDMA_UAR_HANDLE_MAS= K); + } + break; + default: + pr_err("Unsupported command, addr=3D0x%lx, val=3D0x%lx\n", + (uint64_t)addr, val); + break; + } +} + +static const MemoryRegionOps uar_ops =3D { + .write =3D uar_write, + .endianness =3D DEVICE_LITTLE_ENDIAN, + .impl =3D { + .min_access_size =3D sizeof(uint32_t), + .max_access_size =3D sizeof(uint32_t), + }, +}; + +static void init_pci_config(PCIDevice *pdev) +{ + pdev->config[PCI_INTERRUPT_PIN] =3D 1; +} + +static void init_bars(PCIDevice *pdev) +{ + PVRDMADev *dev =3D PVRDMA_DEV(pdev); + + /* BAR 0 - MSI-X */ + memory_region_init(&dev->msix, OBJECT(dev), "pvrdma-msix", + RDMA_BAR0_MSIX_SIZE); + pci_register_bar(pdev, RDMA_MSIX_BAR_IDX, PCI_BASE_ADDRESS_SPACE_MEMOR= Y, + &dev->msix); + + /* BAR 1 - Registers */ + memset(&dev->regs_data, 0, sizeof(dev->regs_data)); + memory_region_init_io(&dev->regs, OBJECT(dev), ®s_ops, dev, + "pvrdma-regs", RDMA_BAR1_REGS_SIZE); + pci_register_bar(pdev, RDMA_REG_BAR_IDX, PCI_BASE_ADDRESS_SPACE_MEMORY, + &dev->regs); + + /* BAR 2 - UAR */ + memset(&dev->uar_data, 0, sizeof(dev->uar_data)); + memory_region_init_io(&dev->uar, OBJECT(dev), &uar_ops, dev, "rdma-uar= ", + RDMA_BAR2_UAR_SIZE); + pci_register_bar(pdev, RDMA_UAR_BAR_IDX, PCI_BASE_ADDRESS_SPACE_MEMORY, + &dev->uar); +} + +static void init_regs(PCIDevice *pdev) +{ + PVRDMADev *dev =3D PVRDMA_DEV(pdev); + + set_reg_val(dev, PVRDMA_REG_VERSION, PVRDMA_HW_VERSION); + set_reg_val(dev, PVRDMA_REG_ERR, 0xFFFF); +} + +static void uninit_msix(PCIDevice *pdev, int used_vectors) +{ + PVRDMADev *dev =3D PVRDMA_DEV(pdev); + int i; + + for (i =3D 0; i < used_vectors; i++) { + msix_vector_unuse(pdev, i); + } + + msix_uninit(pdev, &dev->msix, &dev->msix); +} + +static int init_msix(PCIDevice *pdev, Error **errp) +{ + PVRDMADev *dev =3D PVRDMA_DEV(pdev); + int i; + int rc; + + rc =3D msix_init(pdev, RDMA_MAX_INTRS, &dev->msix, 
RDMA_MSIX_BAR_IDX, + RDMA_MSIX_TABLE, &dev->msix, RDMA_MSIX_BAR_IDX, + RDMA_MSIX_PBA, 0, NULL); + + if (rc < 0) { + error_setg(errp, "Failed to initialize MSI-X"); + return rc; + } + + for (i =3D 0; i < RDMA_MAX_INTRS; i++) { + rc =3D msix_vector_use(PCI_DEVICE(dev), i); + if (rc < 0) { + error_setg(errp, "Failed to mark MSI-X vector %d", i); + uninit_msix(pdev, i); + return rc; + } + } + + return 0; +} + +static void init_dev_caps(PVRDMADev *dev) +{ + size_t pg_tbl_bytes =3D TARGET_PAGE_SIZE * (TARGET_PAGE_SIZE / sizeof(= u64)); + size_t wr_sz =3D MAX(sizeof(struct pvrdma_sq_wqe_hdr), + sizeof(struct pvrdma_rq_wqe_hdr)); + + dev->dev_attr.max_qp_wr =3D pg_tbl_bytes / + (wr_sz + sizeof(struct pvrdma_sge) * MAX_SGE= ) - + TARGET_PAGE_SIZE; /* First page is ring stat= e */ + pr_dbg("max_qp_wr=3D%d\n", dev->dev_attr.max_qp_wr); + + dev->dev_attr.max_cqe =3D pg_tbl_bytes / sizeof(struct pvrdma_cqe) - + TARGET_PAGE_SIZE; /* First page is ring state = */ + pr_dbg("max_cqe=3D%d\n", dev->dev_attr.max_cqe); +} + +static void pvrdma_realize(PCIDevice *pdev, Error **errp) +{ + int rc; + PVRDMADev *dev =3D PVRDMA_DEV(pdev); + + pr_dbg("Initializing device %s %x.%x\n", pdev->name, + PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn)); + + dev->dsr_info.dsr =3D NULL; + + init_pci_config(pdev); + + init_bars(pdev); + + init_regs(pdev); + + init_dev_caps(dev); + + rc =3D init_msix(pdev, errp); + if (rc) { + goto out; + } + + rc =3D rdma_backend_init(&dev->backend_dev, &dev->rdma_dev_res, + dev->backend_device_name, dev->backend_port_num, + dev->backend_gid_idx, &dev->dev_attr, errp); + if (rc) { + goto out; + } + + rc =3D rdma_rm_init(&dev->rdma_dev_res, &dev->dev_attr, errp); + if (rc) { + goto out; + } + + rc =3D init_ports(dev, errp); + if (rc) { + goto out; + } + + rc =3D pvrdma_qp_ops_init(); + if (rc) { + goto out; + } + +out: + if (rc) { + error_append_hint(errp, "Device failed to load\n"); + } +} + +static void pvrdma_exit(PCIDevice *pdev) +{ + PVRDMADev *dev =3D PVRDMA_DEV(pdev); + 
pr_dbg("Closing device %s %x.%x\n", pdev->name, PCI_SLOT(pdev->devfn), + PCI_FUNC(pdev->devfn)); + + pvrdma_qp_ops_fini(); + + free_ports(dev); + + rdma_rm_fini(&dev->rdma_dev_res); + + rdma_backend_fini(&dev->backend_dev); + + free_dsr(dev); + + if (msix_enabled(pdev)) { + uninit_msix(pdev, RDMA_MAX_INTRS); + } +} + +static void pvrdma_class_init(ObjectClass *klass, void *data) +{ + DeviceClass *dc =3D DEVICE_CLASS(klass); + PCIDeviceClass *k =3D PCI_DEVICE_CLASS(klass); + + k->realize =3D pvrdma_realize; + k->exit =3D pvrdma_exit; + k->vendor_id =3D PCI_VENDOR_ID_VMWARE; + k->device_id =3D PCI_DEVICE_ID_VMWARE_PVRDMA; + k->revision =3D 0x00; + k->class_id =3D PCI_CLASS_NETWORK_OTHER; + + dc->desc =3D "RDMA Device"; + dc->props =3D pvrdma_dev_properties; + set_bit(DEVICE_CATEGORY_NETWORK, dc->categories); +} + +static const TypeInfo pvrdma_info =3D { + .name =3D PVRDMA_HW_NAME, + .parent =3D TYPE_PCI_DEVICE, + .instance_size =3D sizeof(PVRDMADev), + .class_init =3D pvrdma_class_init, + .interfaces =3D (InterfaceInfo[]) { + { INTERFACE_CONVENTIONAL_PCI_DEVICE }, + { } + } +}; + +static void register_types(void) +{ + type_register_static(&pvrdma_info); +} + +type_init(register_types) diff --git a/hw/rdma/vmw/pvrdma_qp_ops.c b/hw/rdma/vmw/pvrdma_qp_ops.c new file mode 100644 index 0000000000..5b95833569 --- /dev/null +++ b/hw/rdma/vmw/pvrdma_qp_ops.c @@ -0,0 +1,212 @@ +#include + +#include "../rdma_utils.h" +#include "../rdma_rm.h" +#include "../rdma_backend.h" + +#include "pvrdma.h" +#include "vmw_pvrdma-abi.h" +#include "pvrdma_qp_ops.h" + +typedef struct CompHandlerCtx { + PVRDMADev *dev; + u32 cq_handle; + struct pvrdma_cqe cqe; +} CompHandlerCtx; + +/* Send Queue WQE */ +typedef struct PvrdmaSqWqe { + struct pvrdma_sq_wqe_hdr hdr; + struct pvrdma_sge sge[0]; +} PvrdmaSqWqe; + +/* Recv Queue WQE */ +typedef struct PvrdmaRqWqe { + struct pvrdma_rq_wqe_hdr hdr; + struct pvrdma_sge sge[0]; +} PvrdmaRqWqe; + +/* + * 1. Put CQE on send CQ ring + * 2. 
Put CQ number on dsr completion ring + * 3. Interrupt host + */ +static int pvrdma_post_cqe(PVRDMADev *dev, u32 cq_handle, + struct pvrdma_cqe *cqe) +{ + struct pvrdma_cqe *cqe1; + struct pvrdma_cqne *cqne; + PvrdmaRing *ring; + RdmaRmCQ *cq =3D rdma_rm_get_cq(&dev->rdma_dev_res, cq_handle); + + if (unlikely(!cq)) { + pr_dbg("Invalid cqn %d\n", cq_handle); + return -EINVAL; + } + + ring =3D (PvrdmaRing *)cq->opaque; + pr_dbg("ring=3D%p\n", ring); + + /* Step #1: Put CQE on CQ ring */ + pr_dbg("Writing CQE\n"); + cqe1 =3D pvrdma_ring_next_elem_write(ring); + if (unlikely(!cqe1)) { + return -EINVAL; + } + + cqe1->wr_id =3D cqe->wr_id; + cqe1->qp =3D cqe->qp; + cqe1->opcode =3D cqe->opcode; + cqe1->status =3D cqe->status; + cqe1->vendor_err =3D cqe->vendor_err; + + pvrdma_ring_write_inc(ring); + + /* Step #2: Put CQ number on dsr completion ring */ + pr_dbg("Writing CQNE\n"); + cqne =3D pvrdma_ring_next_elem_write(&dev->dsr_info.cq); + if (unlikely(!cqne)) { + return -EINVAL; + } + + cqne->info =3D cq_handle; + pvrdma_ring_write_inc(&dev->dsr_info.cq); + + pr_dbg("cq->notify=3D%d\n", cq->notify); + if (cq->notify) { + cq->notify =3D false; + post_interrupt(dev, INTR_VEC_CMD_COMPLETION_Q); + } + + return 0; +} + +static void pvrdma_qp_ops_comp_handler(int status, unsigned int vendor_err, + void *ctx) +{ + CompHandlerCtx *comp_ctx =3D (CompHandlerCtx *)ctx; + + pr_dbg("cq_handle=3D%d\n", comp_ctx->cq_handle); + pr_dbg("wr_id=3D%lld\n", comp_ctx->cqe.wr_id); + pr_dbg("status=3D%d\n", status); + pr_dbg("vendor_err=3D0x%x\n", vendor_err); + comp_ctx->cqe.status =3D status; + comp_ctx->cqe.vendor_err =3D vendor_err; + pvrdma_post_cqe(comp_ctx->dev, comp_ctx->cq_handle, &comp_ctx->cqe); + free(ctx); +} + +void pvrdma_qp_ops_fini(void) +{ + rdma_backend_unregister_comp_handler(); +} + +int pvrdma_qp_ops_init(void) +{ + rdma_backend_register_comp_handler(pvrdma_qp_ops_comp_handler); + + return 0; +} + +int pvrdma_qp_send(PVRDMADev *dev, __u32 qp_handle) +{ + RdmaRmQP *qp; + 
PvrdmaSqWqe *wqe; + PvrdmaRing *ring; + + qp =3D rdma_rm_get_qp(&dev->rdma_dev_res, qp_handle); + if (unlikely(!qp)) { + return -EINVAL; + } + + ring =3D (PvrdmaRing *)qp->opaque; + pr_dbg("sring=3D%p\n", ring); + + if (qp->qp_state < IBV_QPS_RTS) { + pr_dbg("Invalid QP state for send (%d < %d) for qp %d\n", qp->qp_s= tate, + IBV_QPS_RTS, qp_handle); + return -EINVAL; + } + + wqe =3D (struct PvrdmaSqWqe *)pvrdma_ring_next_elem_read(ring); + while (wqe) { + CompHandlerCtx *comp_ctx; + + pr_dbg("wr_id=3D%lld\n", wqe->hdr.wr_id); + + /* Prepare CQE */ + comp_ctx =3D malloc(sizeof(CompHandlerCtx)); + comp_ctx->dev =3D dev; + comp_ctx->cq_handle =3D qp->send_cq_handle; + comp_ctx->cqe.wr_id =3D wqe->hdr.wr_id; + comp_ctx->cqe.qp =3D qp_handle; + comp_ctx->cqe.opcode =3D wqe->hdr.opcode; + + rdma_backend_post_send(&dev->backend_dev, &dev->rdma_dev_res, + &qp->backend_qp, qp->qp_type, + (struct ibv_sge *)&wqe->sge[0], wqe->hdr.nu= m_sge, + (union ibv_gid *)wqe->hdr.wr.ud.av.dgid, + wqe->hdr.wr.ud.remote_qpn, + wqe->hdr.wr.ud.remote_qkey, comp_ctx); + + pvrdma_ring_read_inc(ring); + + wqe =3D pvrdma_ring_next_elem_read(ring); + } + + return 0; +} + +int pvrdma_qp_recv(PVRDMADev *dev, __u32 qp_handle) +{ + RdmaRmQP *qp; + PvrdmaRqWqe *wqe; + PvrdmaRing *ring; + + pr_dbg("qp_handle=3D%d\n", qp_handle); + + qp =3D rdma_rm_get_qp(&dev->rdma_dev_res, qp_handle); + if (unlikely(!qp)) { + return -EINVAL; + } + + ring =3D &((PvrdmaRing *)qp->opaque)[1]; + pr_dbg("rring=3D%p\n", ring); + + wqe =3D (struct PvrdmaRqWqe *)pvrdma_ring_next_elem_read(ring); + while (wqe) { + CompHandlerCtx *comp_ctx; + + pr_dbg("wr_id=3D%lld\n", wqe->hdr.wr_id); + + /* Prepare CQE */ + comp_ctx =3D malloc(sizeof(CompHandlerCtx)); + comp_ctx->dev =3D dev; + comp_ctx->cq_handle =3D qp->recv_cq_handle; + comp_ctx->cqe.qp =3D qp_handle; + comp_ctx->cqe.wr_id =3D wqe->hdr.wr_id; + + rdma_backend_post_recv(&dev->backend_dev, &dev->rdma_dev_res, + &qp->backend_qp, qp->qp_type, + (struct ibv_sge *)&wqe->sge[0], 
wqe->hdr.nu= m_sge, + comp_ctx); + + pvrdma_ring_read_inc(ring); + + wqe =3D pvrdma_ring_next_elem_read(ring); + } + + return 0; +} + +void pvrdma_cq_poll(RdmaDeviceResources *dev_res, __u32 cq_handle) +{ + RdmaRmCQ *cq; + + cq =3D rdma_rm_get_cq(dev_res, cq_handle); + if (!cq) { + pr_dbg("Invalid CQ# %d\n", cq_handle); + return; + } + + rdma_backend_poll_cq(dev_res, &cq->backend_cq); +} diff --git a/hw/rdma/vmw/pvrdma_qp_ops.h b/hw/rdma/vmw/pvrdma_qp_ops.h new file mode 100644 index 0000000000..61ec036835 --- /dev/null +++ b/hw/rdma/vmw/pvrdma_qp_ops.h @@ -0,0 +1,27 @@ +/* + * QEMU VMWARE paravirtual RDMA QP Operations + * + * Copyright (C) 2018 Oracle + * Copyright (C) 2018 Red Hat Inc + * + * Authors: + * Yuval Shaia + * Marcel Apfelbaum + * + * This work is licensed under the terms of the GNU GPL, version 2. + * See the COPYING file in the top-level directory. + * + */ + +#ifndef PVRDMA_QP_H +#define PVRDMA_QP_H + +#include "pvrdma.h" + +int pvrdma_qp_ops_init(void); +void pvrdma_qp_ops_fini(void); +int pvrdma_qp_send(PVRDMADev *dev, __u32 qp_handle); +int pvrdma_qp_recv(PVRDMADev *dev, __u32 qp_handle); +void pvrdma_cq_poll(RdmaDeviceResources *dev_res, __u32 cq_handle); + +#endif diff --git a/hw/rdma/vmw/pvrdma_ring.h b/hw/rdma/vmw/pvrdma_ring.h new file mode 100644 index 0000000000..c616cc586c --- /dev/null +++ b/hw/rdma/vmw/pvrdma_ring.h @@ -0,0 +1,134 @@ +/* + * Copyright (c) 2012-2016 VMware, Inc. All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of EITHER the GNU General Public License + * version 2 as published by the Free Software Foundation or the BSD + * 2-Clause License. This program is distributed in the hope that it + * will be useful, but WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED + * WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. + * See the GNU General Public License version 2 for more details at + * http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html.
+ * + * You should have received a copy of the GNU General Public License + * along with this program available in the file COPYING in the main + * directory of this source tree. + * + * The BSD 2-Clause License + * + * Redistribution and use in source and binary forms, with or + * without modification, are permitted provided that the following + * conditions are met: + * + * - Redistributions of source code must retain the above + * copyright notice, this list of conditions and the following + * disclaimer. + * + * - Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and/or other materials + * provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS + * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE + * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED + * OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef PVRDMA_RING_H +#define PVRDMA_RING_H + +#include +#include + +#define PVRDMA_INVALID_IDX -1 /* Invalid index. */ + +struct pvrdma_ring { + int prod_tail; /* Producer tail. */ + int cons_head; /* Consumer head. */ +}; + +struct pvrdma_ring_state { + struct pvrdma_ring tx; /* Tx ring. */ + struct pvrdma_ring rx; /* Rx ring. 
*/ +}; + +static inline int pvrdma_idx_valid(__u32 idx, __u32 max_elems) +{ + /* Generates fewer instructions than a less-than. */ + return (idx & ~((max_elems << 1) - 1)) =3D=3D 0; +} + +static inline __s32 pvrdma_idx(int *var, __u32 max_elems) +{ + const unsigned int idx =3D atomic_read(var); + + if (pvrdma_idx_valid(idx, max_elems)) { + return idx & (max_elems - 1); + } + return PVRDMA_INVALID_IDX; +} + +static inline void pvrdma_idx_ring_inc(int *var, __u32 max_elems) +{ + __u32 idx =3D atomic_read(var) + 1; /* Increment. */ + + idx &=3D (max_elems << 1) - 1; /* Modulo size, flip gen. */ + atomic_set(var, idx); +} + +static inline __s32 pvrdma_idx_ring_has_space(const struct pvrdma_ring *r, + __u32 max_elems, __u32 *out_= tail) +{ + const __u32 tail =3D atomic_read(&r->prod_tail); + const __u32 head =3D atomic_read(&r->cons_head); + + if (pvrdma_idx_valid(tail, max_elems) && + pvrdma_idx_valid(head, max_elems)) { + *out_tail =3D tail & (max_elems - 1); + return tail !=3D (head ^ max_elems); + } + return PVRDMA_INVALID_IDX; +} + +static inline __s32 pvrdma_idx_ring_has_data(const struct pvrdma_ring *r, + __u32 max_elems, __u32 *out_h= ead) +{ + const __u32 tail =3D atomic_read(&r->prod_tail); + const __u32 head =3D atomic_read(&r->cons_head); + + if (pvrdma_idx_valid(tail, max_elems) && + pvrdma_idx_valid(head, max_elems)) { + *out_head =3D head & (max_elems - 1); + return tail !=3D head; + } + return PVRDMA_INVALID_IDX; +} + +static inline bool pvrdma_idx_ring_is_valid_idx(const struct pvrdma_ring *= r, + __u32 max_elems, __u32 *id= x) +{ + const __u32 tail =3D atomic_read(&r->prod_tail); + const __u32 head =3D atomic_read(&r->cons_head); + + if (pvrdma_idx_valid(tail, max_elems) && + pvrdma_idx_valid(head, max_elems) && + pvrdma_idx_valid(*idx, max_elems)) { + if (tail > head && (*idx < tail && *idx >=3D head)) { + return true; + } else if (head > tail && (*idx >=3D head || *idx < tail)) { + return true; + } + } + return false; +} + +#endif /* PVRDMA_RING_H */ 
diff --git a/hw/rdma/vmw/trace-events b/hw/rdma/vmw/trace-events new file mode 100644 index 0000000000..dd4dde311e --- /dev/null +++ b/hw/rdma/vmw/trace-events @@ -0,0 +1,5 @@ +# See docs/tracing.txt for syntax documentation. + +# hw/rdma/vmw/pvrdma_main.c +pvrdma_regs_read(uint64_t addr, uint64_t val) "regs[0x%lx] =3D 0x%lx" +pvrdma_regs_write(uint64_t addr, uint64_t val) "regs[0x%lx] =3D 0x%lx" diff --git a/hw/rdma/vmw/vmw_pvrdma-abi.h b/hw/rdma/vmw/vmw_pvrdma-abi.h new file mode 100644 index 0000000000..8cfb9d7745 --- /dev/null +++ b/hw/rdma/vmw/vmw_pvrdma-abi.h @@ -0,0 +1,311 @@ +/* + * QEMU VMWARE paravirtual RDMA device definitions + * + * Copyright (C) 2018 Oracle + * Copyright (C) 2018 Red Hat Inc + * + * Authors: + * Yuval Shaia + * Marcel Apfelbaum + * + * This work is licensed under the terms of the GNU GPL, version 2. + * See the COPYING file in the top-level directory. + * + */ + +#ifndef VMW_PVRDMA_ABI_H +#define VMW_PVRDMA_ABI_H + +/* + * Following is an interface definition for PVRDMA device as provided by + * VMWARE. + * See original copyright from Linux kernel v4.14.5 header file + * include/uapi/rdma/vmw_pvrdma-abi.h + */ + +/* + * Copyright (c) 2012-2016 VMware, Inc. All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of EITHER the GNU General Public License + * version 2 as published by the Free Software Foundation or the BSD + * 2-Clause License. This program is distributed in the hope that it + * will be useful, but WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED + * WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. + * See the GNU General Public License version 2 for more details at + * http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html. + * + * You should have received a copy of the GNU General Public License + * along with this program available in the file COPYING in the main + * directory of this source tree. 
+ * + * The BSD 2-Clause License + * + * Redistribution and use in source and binary forms, with or + * without modification, are permitted provided that the following + * conditions are met: + * + * - Redistributions of source code must retain the above + * copyright notice, this list of conditions and the following + * disclaimer. + * + * - Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and/or other materials + * provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS + * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE + * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED + * OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include + +#define PVRDMA_UVERBS_ABI_VERSION 3 /* ABI Version. */ +#define PVRDMA_UAR_HANDLE_MASK 0x00FFFFFF /* Bottom 24 bits. */ +#define PVRDMA_UAR_QP_OFFSET 0 /* QP doorbell. */ +#define PVRDMA_UAR_QP_SEND BIT(30) /* Send bit. */ +#define PVRDMA_UAR_QP_RECV BIT(31) /* Recv bit. */ +#define PVRDMA_UAR_CQ_OFFSET 4 /* CQ doorbell. */ +#define PVRDMA_UAR_CQ_ARM_SOL BIT(29) /* Arm solicited bit. = */ +#define PVRDMA_UAR_CQ_ARM BIT(30) /* Arm bit. */ +#define PVRDMA_UAR_CQ_POLL BIT(31) /* Poll bit. 
*/ + +enum pvrdma_wr_opcode { + PVRDMA_WR_RDMA_WRITE, + PVRDMA_WR_RDMA_WRITE_WITH_IMM, + PVRDMA_WR_SEND, + PVRDMA_WR_SEND_WITH_IMM, + PVRDMA_WR_RDMA_READ, + PVRDMA_WR_ATOMIC_CMP_AND_SWP, + PVRDMA_WR_ATOMIC_FETCH_AND_ADD, + PVRDMA_WR_LSO, + PVRDMA_WR_SEND_WITH_INV, + PVRDMA_WR_RDMA_READ_WITH_INV, + PVRDMA_WR_LOCAL_INV, + PVRDMA_WR_FAST_REG_MR, + PVRDMA_WR_MASKED_ATOMIC_CMP_AND_SWP, + PVRDMA_WR_MASKED_ATOMIC_FETCH_AND_ADD, + PVRDMA_WR_BIND_MW, + PVRDMA_WR_REG_SIG_MR, +}; + +enum pvrdma_wc_status { + PVRDMA_WC_SUCCESS, + PVRDMA_WC_LOC_LEN_ERR, + PVRDMA_WC_LOC_QP_OP_ERR, + PVRDMA_WC_LOC_EEC_OP_ERR, + PVRDMA_WC_LOC_PROT_ERR, + PVRDMA_WC_WR_FLUSH_ERR, + PVRDMA_WC_MW_BIND_ERR, + PVRDMA_WC_BAD_RESP_ERR, + PVRDMA_WC_LOC_ACCESS_ERR, + PVRDMA_WC_REM_INV_REQ_ERR, + PVRDMA_WC_REM_ACCESS_ERR, + PVRDMA_WC_REM_OP_ERR, + PVRDMA_WC_RETRY_EXC_ERR, + PVRDMA_WC_RNR_RETRY_EXC_ERR, + PVRDMA_WC_LOC_RDD_VIOL_ERR, + PVRDMA_WC_REM_INV_RD_REQ_ERR, + PVRDMA_WC_REM_ABORT_ERR, + PVRDMA_WC_INV_EECN_ERR, + PVRDMA_WC_INV_EEC_STATE_ERR, + PVRDMA_WC_FATAL_ERR, + PVRDMA_WC_RESP_TIMEOUT_ERR, + PVRDMA_WC_GENERAL_ERR, +}; + +enum pvrdma_wc_opcode { + PVRDMA_WC_SEND, + PVRDMA_WC_RDMA_WRITE, + PVRDMA_WC_RDMA_READ, + PVRDMA_WC_COMP_SWAP, + PVRDMA_WC_FETCH_ADD, + PVRDMA_WC_BIND_MW, + PVRDMA_WC_LSO, + PVRDMA_WC_LOCAL_INV, + PVRDMA_WC_FAST_REG_MR, + PVRDMA_WC_MASKED_COMP_SWAP, + PVRDMA_WC_MASKED_FETCH_ADD, + PVRDMA_WC_RECV =3D 1 << 7, + PVRDMA_WC_RECV_RDMA_WITH_IMM, +}; + +enum pvrdma_wc_flags { + PVRDMA_WC_GRH =3D 1 << 0, + PVRDMA_WC_WITH_IMM =3D 1 << 1, + PVRDMA_WC_WITH_INVALIDATE =3D 1 << 2, + PVRDMA_WC_IP_CSUM_OK =3D 1 << 3, + PVRDMA_WC_WITH_SMAC =3D 1 << 4, + PVRDMA_WC_WITH_VLAN =3D 1 << 5, + PVRDMA_WC_FLAGS_MAX =3D PVRDMA_WC_WITH_VLAN, +}; + +struct pvrdma_alloc_ucontext_resp { + __u32 qp_tab_size; + __u32 reserved; +}; + +struct pvrdma_alloc_pd_resp { + __u32 pdn; + __u32 reserved; +}; + +struct pvrdma_create_cq { + __u64 buf_addr; + __u32 buf_size; + __u32 reserved; +}; + +struct pvrdma_create_cq_resp 
{ + __u32 cqn; + __u32 reserved; +}; + +struct pvrdma_resize_cq { + __u64 buf_addr; + __u32 buf_size; + __u32 reserved; +}; + +struct pvrdma_create_srq { + __u64 buf_addr; +}; + +struct pvrdma_create_srq_resp { + __u32 srqn; + __u32 reserved; +}; + +struct pvrdma_create_qp { + __u64 rbuf_addr; + __u64 sbuf_addr; + __u32 rbuf_size; + __u32 sbuf_size; + __u64 qp_addr; +}; + +/* PVRDMA masked atomic compare and swap */ +struct pvrdma_ex_cmp_swap { + __u64 swap_val; + __u64 compare_val; + __u64 swap_mask; + __u64 compare_mask; +}; + +/* PVRDMA masked atomic fetch and add */ +struct pvrdma_ex_fetch_add { + __u64 add_val; + __u64 field_boundary; +}; + +/* PVRDMA address vector. */ +struct pvrdma_av { + __u32 port_pd; + __u32 sl_tclass_flowlabel; + __u8 dgid[16]; + __u8 src_path_bits; + __u8 gid_index; + __u8 stat_rate; + __u8 hop_limit; + __u8 dmac[6]; + __u8 reserved[6]; +}; + +/* PVRDMA scatter/gather entry */ +struct pvrdma_sge { + __u64 addr; + __u32 length; + __u32 lkey; +}; + +/* PVRDMA receive queue work request */ +struct pvrdma_rq_wqe_hdr { + __u64 wr_id; /* wr id */ + __u32 num_sge; /* size of s/g array */ + __u32 total_len; /* reserved */ +}; +/* Use pvrdma_sge (ib_sge) for receive queue s/g array elements. 
*/ + +/* PVRDMA send queue work request */ +struct pvrdma_sq_wqe_hdr { + __u64 wr_id; /* wr id */ + __u32 num_sge; /* size of s/g array */ + __u32 total_len; /* reserved */ + __u32 opcode; /* operation type */ + __u32 send_flags; /* wr flags */ + union { + __be32 imm_data; + __u32 invalidate_rkey; + } ex; + __u32 reserved; + union { + struct { + __u64 remote_addr; + __u32 rkey; + __u8 reserved[4]; + } rdma; + struct { + __u64 remote_addr; + __u64 compare_add; + __u64 swap; + __u32 rkey; + __u32 reserved; + } atomic; + struct { + __u64 remote_addr; + __u32 log_arg_sz; + __u32 rkey; + union { + struct pvrdma_ex_cmp_swap cmp_swap; + struct pvrdma_ex_fetch_add fetch_add; + } wr_data; + } masked_atomics; + struct { + __u64 iova_start; + __u64 pl_pdir_dma; + __u32 page_shift; + __u32 page_list_len; + __u32 length; + __u32 access_flags; + __u32 rkey; + } fast_reg; + struct { + __u32 remote_qpn; + __u32 remote_qkey; + struct pvrdma_av av; + } ud; + } wr; +}; +/* Use pvrdma_sge (ib_sge) for send queue s/g array elements. */ + +/* Completion queue element. */ +struct pvrdma_cqe { + __u64 wr_id; + __u64 qp; + __u32 opcode; + __u32 status; + __u32 byte_len; + __be32 imm_data; + __u32 src_qp; + __u32 wc_flags; + __u32 vendor_err; + __u16 pkey_index; + __u16 slid; + __u8 sl; + __u8 dlid_path_bits; + __u8 port_num; + __u8 smac[6]; + __u8 reserved2[7]; /* Pad to next power of 2 (64). 
*/ +}; + +#endif /* VMW_PVRDMA_ABI_H */ diff --git a/include/hw/pci/pci_ids.h b/include/hw/pci/pci_ids.h index 35df1874a9..1dbf53627c 100644 --- a/include/hw/pci/pci_ids.h +++ b/include/hw/pci/pci_ids.h @@ -266,4 +266,7 @@ #define PCI_VENDOR_ID_TEWS 0x1498 #define PCI_DEVICE_ID_TEWS_TPCI200 0x30C8 =20 +#define PCI_VENDOR_ID_VMWARE 0x15ad +#define PCI_DEVICE_ID_VMWARE_PVRDMA 0x0820 + #endif --=20 2.13.5 From nobody Thu May 2 12:17:15 2024 From: Marcel Apfelbaum To: qemu-devel@nongnu.org Date: Sun, 14 Jan 2018 11:01:47 +0200 Message-Id: <20180114090147.39255-6-marcel@redhat.com> In-Reply-To: <20180114090147.39255-1-marcel@redhat.com> References: <20180114090147.39255-1-marcel@redhat.com> Subject: [Qemu-devel] [PATCH V7 5/5] MAINTAINERS: add entry for hw/rdma MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Signed-off-by: Marcel Apfelbaum Signed-off-by: Yuval Shaia --- MAINTAINERS | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 4770f105d4..fc4f54eebb 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1984,6 +1984,14 @@ F: block/replication.c F: tests/test-replication.c F: docs/block-replication.txt =20 +PVRDMA +M: Yuval Shaia +M: Marcel Apfelbaum +S: Maintained +F: hw/rdma/* +F: hw/rdma/vmw/* +F: docs/pvrdma.txt + Build and test automation ------------------------- --=20 2.13.5