This series adds basic support for message-based DMA in qemu's vfio-user
server. This is useful for cases where the client does not provide file
descriptors for accessing system memory via memory mappings. My motivating use
case is to hook up device models as PCIe endpoints to a hardware design. This
works by bridging the PCIe transaction layer to vfio-user, and the endpoint
does not access memory directly, but sends memory request TLPs to the hardware
design in order to perform DMA.

Note that more work is needed to make message-based DMA work well: qemu
currently breaks down DMA accesses into chunks of at most 8 bytes, each of
which is handled in a separate vfio-user DMA request message. This is quite
terrible for large DMA accesses, such as when nvme reads and writes page-sized
blocks (a sketch illustrating this chunking follows at the end of this cover
letter). Thus, I would like to improve qemu to be able to perform larger
accesses, at least for indirect memory regions. I have something working
locally, but since this will likely result in more involved surgery and
discussion, I am leaving this to be addressed in a separate patch.

Changes from v1:

* Address Stefan's review comments. In particular, enforce an allocation limit
  and don't drop the map client callbacks given that map requests can fail
  when hitting size limits.
* libvfio-user version bump now included in the series.
* Tested as well on big-endian s390x. This uncovered another byte order issue
  in vfio-user server code that I've included a fix for.

Changes from v2:

* Add a preparatory patch to make bounce buffering an AddressSpace-specific
  concept.
* The total buffer size limit parameter is now per AddressSpace and can be
  configured for PCIDevice via a property.
* Store a magic value in the first bytes of the bounce buffer struct as a
  best-effort measure to detect invalid pointers in address_space_unmap.

Changes from v3:

* libvfio-user now supports twin-socket mode which uses separate sockets for
  client->server and server->client commands, respectively. This addresses the
  concurrent command bug triggered by server->client DMA access commands. See
  https://github.com/nutanix/libvfio-user/issues/279 for details.
* Add missing teardown code in do_address_space_destroy.
* Fix bounce buffer size bookkeeping race condition.
* Generate unmap notification callbacks unconditionally.
* Some cosmetic fixes.

Changes from v4:

* Fix accidentally dropped memory_region_unref, control flow restored to match
  previous code to simplify review.
* Some cosmetic fixes.

Mattias Nissler (5):
  softmmu: Per-AddressSpace bounce buffering
  softmmu: Support concurrent bounce buffers
  Update subprojects/libvfio-user
  vfio-user: Message-based DMA support
  vfio-user: Fix config space access byte order

 hw/pci/pci.c                  |   8 ++
 hw/remote/trace-events        |   2 +
 hw/remote/vfio-user-obj.c     |  88 ++++++++++++++++++---
 include/exec/cpu-common.h     |   2 -
 include/exec/memory.h         |  41 +++++++++-
 include/hw/pci/pci_device.h   |   3 +
 softmmu/dma-helpers.c         |   4 +-
 softmmu/memory.c              |   8 ++
 softmmu/physmem.c             | 141 ++++++++++++++++++----------------
 subprojects/libvfio-user.wrap |   2 +-
 10 files changed, 218 insertions(+), 81 deletions(-)

--
2.34.1
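To make the chunking problem concrete: conceptually, the memory core splits
one large access into max_access_size pieces, each of which becomes a separate
vfio-user DMA request. The following is a minimal sketch, not code from this
series; issue_vfio_user_dma_read() is a hypothetical stand-in for the
vfio-user message round trip:

    /* Sketch: one vfio-user message per chunk of at most 8 bytes. */
    static void dma_read_chunked(uint64_t addr, uint8_t *buf, size_t len)
    {
        const size_t max_access_size = 8; /* MemoryRegionOps .impl limit */
        while (len > 0) {
            size_t l = MIN(len, max_access_size);
            issue_vfio_user_dma_read(addr, buf, l); /* hypothetical helper */
            addr += l;
            buf += l;
            len -= l;
        }
    }

For a 4096-byte nvme block transfer this amounts to 512 separate
request/response pairs on the socket, which is why larger accesses are worth
pursuing.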
[PATCH v5 1/5] softmmu: Per-AddressSpace bounce buffering

Instead of using a single global bounce buffer, give each AddressSpace its own
bounce buffer. The MapClient callback mechanism moves to AddressSpace
accordingly.

This is in preparation for generalizing bounce buffer handling further to
allow multiple bounce buffers, with a total allocation limit configured per
AddressSpace.

Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
---
 include/exec/cpu-common.h |   2 -
 include/exec/memory.h     |  45 ++++++++++++++++-
 softmmu/dma-helpers.c     |   4 +-
 softmmu/memory.c          |   7 +++
 softmmu/physmem.c         | 101 ++++++++++++++++----------------
 5 files changed, 93 insertions(+), 66 deletions(-)

diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -XXX,XX +XXX,XX @@ void *cpu_physical_memory_map(hwaddr addr,
                               bool is_write);
 void cpu_physical_memory_unmap(void *buffer, hwaddr len,
                                bool is_write, hwaddr access_len);
-void cpu_register_map_client(QEMUBH *bh);
-void cpu_unregister_map_client(QEMUBH *bh);
 
 bool cpu_physical_memory_is_io(hwaddr phys_addr);
 
diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ struct MemoryListener {
     QTAILQ_ENTRY(MemoryListener) link_as;
 };
 
+typedef struct AddressSpaceMapClient {
+    QEMUBH *bh;
+    QLIST_ENTRY(AddressSpaceMapClient) link;
+} AddressSpaceMapClient;
+
+typedef struct {
+    MemoryRegion *mr;
+    void *buffer;
+    hwaddr addr;
+    hwaddr len;
+    bool in_use;
+} BounceBuffer;
+
 /**
  * struct AddressSpace: describes a mapping of addresses to #MemoryRegion objects
  */
@@ -XXX,XX +XXX,XX @@ struct AddressSpace {
     struct MemoryRegionIoeventfd *ioeventfds;
     QTAILQ_HEAD(, MemoryListener) listeners;
     QTAILQ_ENTRY(AddressSpace) address_spaces_link;
+
+    /* Bounce buffer to use for this address space. */
+    BounceBuffer bounce;
+    /* List of callbacks to invoke when buffers free up */
+    QemuMutex map_client_list_lock;
+    QLIST_HEAD(, AddressSpaceMapClient) map_client_list;
 };
 
 typedef struct AddressSpaceDispatch AddressSpaceDispatch;
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, hwaddr len,
  * May return %NULL and set *@plen to zero(0), if resources needed to perform
  * the mapping are exhausted.
  * Use only for reads OR writes - not for read-modify-write operations.
- * Use cpu_register_map_client() to know when retrying the map operation is
- * likely to succeed.
+ * Use address_space_register_map_client() to know when retrying the map
+ * operation is likely to succeed.
  *
  * @as: #AddressSpace to be accessed
  * @addr: address within that address space
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as, hwaddr addr,
 void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
                          bool is_write, hwaddr access_len);
 
+/*
+ * address_space_register_map_client: Register a callback to invoke when
+ * resources for address_space_map() are available again.
+ *
+ * address_space_map may fail when there are not enough resources available,
+ * such as when bounce buffer memory would exceed the limit. The callback can
+ * be used to retry the address_space_map operation. Note that the callback
+ * gets automatically removed after firing.
+ *
+ * @as: #AddressSpace to be accessed
+ * @bh: callback to invoke when address_space_map() retry is appropriate
+ */
+void address_space_register_map_client(AddressSpace *as, QEMUBH *bh);
+
+/*
+ * address_space_unregister_map_client: Unregister a callback that has
+ * previously been registered and not fired yet.
+ *
+ * @as: #AddressSpace to be accessed
+ * @bh: callback to unregister
+ */
+void address_space_unregister_map_client(AddressSpace *as, QEMUBH *bh);
 
 /* Internal functions, part of the implementation of address_space_read. */
 MemTxResult address_space_read_full(AddressSpace *as, hwaddr addr,
diff --git a/softmmu/dma-helpers.c b/softmmu/dma-helpers.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/dma-helpers.c
+++ b/softmmu/dma-helpers.c
@@ -XXX,XX +XXX,XX @@ static void dma_blk_cb(void *opaque, int ret)
     if (dbs->iov.size == 0) {
         trace_dma_map_wait(dbs);
         dbs->bh = aio_bh_new(ctx, reschedule_dma, dbs);
-        cpu_register_map_client(dbs->bh);
+        address_space_register_map_client(dbs->sg->as, dbs->bh);
         goto out;
     }
 
@@ -XXX,XX +XXX,XX @@ static void dma_aio_cancel(BlockAIOCB *acb)
     }
 
     if (dbs->bh) {
-        cpu_unregister_map_client(dbs->bh);
+        address_space_unregister_map_client(dbs->sg->as, dbs->bh);
         qemu_bh_delete(dbs->bh);
         dbs->bh = NULL;
     }
diff --git a/softmmu/memory.c b/softmmu/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -XXX,XX +XXX,XX @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
     as->ioeventfds = NULL;
     QTAILQ_INIT(&as->listeners);
     QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
+    as->bounce.in_use = false;
+    qemu_mutex_init(&as->map_client_list_lock);
+    QLIST_INIT(&as->map_client_list);
     as->name = g_strdup(name ? name : "anonymous");
     address_space_update_topology(as);
     address_space_update_ioeventfds(as);
@@ -XXX,XX +XXX,XX @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
 
 static void do_address_space_destroy(AddressSpace *as)
 {
+    assert(!qatomic_read(&as->bounce.in_use));
+    assert(QLIST_EMPTY(&as->map_client_list));
+    qemu_mutex_destroy(&as->map_client_list_lock);
+
     assert(QTAILQ_EMPTY(&as->listeners));
 
     flatview_unref(as->current_map);
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -XXX,XX +XXX,XX @@ void cpu_flush_icache_range(hwaddr start, hwaddr len)
                                      NULL, len, FLUSH_CACHE);
 }
 
-typedef struct {
-    MemoryRegion *mr;
-    void *buffer;
-    hwaddr addr;
-    hwaddr len;
-    bool in_use;
-} BounceBuffer;
-
-static BounceBuffer bounce;
-
-typedef struct MapClient {
-    QEMUBH *bh;
-    QLIST_ENTRY(MapClient) link;
-} MapClient;
-
-QemuMutex map_client_list_lock;
-static QLIST_HEAD(, MapClient) map_client_list
-    = QLIST_HEAD_INITIALIZER(map_client_list);
-
-static void cpu_unregister_map_client_do(MapClient *client)
+static void
+address_space_unregister_map_client_do(AddressSpaceMapClient *client)
 {
     QLIST_REMOVE(client, link);
     g_free(client);
 }
 
-static void cpu_notify_map_clients_locked(void)
+static void address_space_notify_map_clients_locked(AddressSpace *as)
 {
-    MapClient *client;
+    AddressSpaceMapClient *client;
 
-    while (!QLIST_EMPTY(&map_client_list)) {
-        client = QLIST_FIRST(&map_client_list);
+    while (!QLIST_EMPTY(&as->map_client_list)) {
+        client = QLIST_FIRST(&as->map_client_list);
         qemu_bh_schedule(client->bh);
-        cpu_unregister_map_client_do(client);
+        address_space_unregister_map_client_do(client);
     }
 }
 
-void cpu_register_map_client(QEMUBH *bh)
+void address_space_register_map_client(AddressSpace *as, QEMUBH *bh)
 {
-    MapClient *client = g_malloc(sizeof(*client));
+    AddressSpaceMapClient *client = g_malloc(sizeof(*client));
 
-    qemu_mutex_lock(&map_client_list_lock);
+    qemu_mutex_lock(&as->map_client_list_lock);
     client->bh = bh;
-    QLIST_INSERT_HEAD(&map_client_list, client, link);
+    QLIST_INSERT_HEAD(&as->map_client_list, client, link);
     /* Write map_client_list before reading in_use. */
     smp_mb();
-    if (!qatomic_read(&bounce.in_use)) {
-        cpu_notify_map_clients_locked();
+    if (!qatomic_read(&as->bounce.in_use)) {
+        address_space_notify_map_clients_locked(as);
     }
-    qemu_mutex_unlock(&map_client_list_lock);
+    qemu_mutex_unlock(&as->map_client_list_lock);
 }
 
 void cpu_exec_init_all(void)
@@ -XXX,XX +XXX,XX @@ void cpu_exec_init_all(void)
     finalize_target_page_bits();
     io_mem_init();
     memory_map_init();
-    qemu_mutex_init(&map_client_list_lock);
 }
 
-void cpu_unregister_map_client(QEMUBH *bh)
+void address_space_unregister_map_client(AddressSpace *as, QEMUBH *bh)
 {
-    MapClient *client;
+    AddressSpaceMapClient *client;
 
-    qemu_mutex_lock(&map_client_list_lock);
-    QLIST_FOREACH(client, &map_client_list, link) {
+    qemu_mutex_lock(&as->map_client_list_lock);
+    QLIST_FOREACH(client, &as->map_client_list, link) {
         if (client->bh == bh) {
-            cpu_unregister_map_client_do(client);
+            address_space_unregister_map_client_do(client);
             break;
         }
     }
-    qemu_mutex_unlock(&map_client_list_lock);
+    qemu_mutex_unlock(&as->map_client_list_lock);
 }
 
-static void cpu_notify_map_clients(void)
+static void address_space_notify_map_clients(AddressSpace *as)
 {
-    qemu_mutex_lock(&map_client_list_lock);
-    cpu_notify_map_clients_locked();
-    qemu_mutex_unlock(&map_client_list_lock);
+    qemu_mutex_lock(&as->map_client_list_lock);
+    address_space_notify_map_clients_locked(as);
+    qemu_mutex_unlock(&as->map_client_list_lock);
 }
 
 static bool flatview_access_valid(FlatView *fv, hwaddr addr, hwaddr len,
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
  * May map a subset of the requested range, given by and returned in *plen.
  * May return NULL if resources needed to perform the mapping are exhausted.
  * Use only for reads OR writes - not for read-modify-write operations.
- * Use cpu_register_map_client() to know when retrying the map operation is
- * likely to succeed.
+ * Use address_space_register_map_client() to know when retrying the map
+ * operation is likely to succeed.
  */
 void *address_space_map(AddressSpace *as,
                         hwaddr addr,
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
     mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
 
     if (!memory_access_is_direct(mr, is_write)) {
-        if (qatomic_xchg(&bounce.in_use, true)) {
+        if (qatomic_xchg(&as->bounce.in_use, true)) {
             *plen = 0;
             return NULL;
         }
         /* Avoid unbounded allocations */
         l = MIN(l, TARGET_PAGE_SIZE);
-        bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
-        bounce.addr = addr;
-        bounce.len = l;
+        as->bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
+        as->bounce.addr = addr;
+        as->bounce.len = l;
 
         memory_region_ref(mr);
-        bounce.mr = mr;
+        as->bounce.mr = mr;
         if (!is_write) {
             flatview_read(fv, addr, MEMTXATTRS_UNSPECIFIED,
-                          bounce.buffer, l);
+                          as->bounce.buffer, l);
         }
 
         *plen = l;
-        return bounce.buffer;
+        return as->bounce.buffer;
     }
 
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
 void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
                          bool is_write, hwaddr access_len)
 {
-    if (buffer != bounce.buffer) {
+    if (buffer != as->bounce.buffer) {
         MemoryRegion *mr;
         ram_addr_t addr1;
 
@@ -XXX,XX +XXX,XX @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
         return;
     }
     if (is_write) {
-        address_space_write(as, bounce.addr, MEMTXATTRS_UNSPECIFIED,
-                            bounce.buffer, access_len);
+        address_space_write(as, as->bounce.addr, MEMTXATTRS_UNSPECIFIED,
+                            as->bounce.buffer, access_len);
     }
-    qemu_vfree(bounce.buffer);
-    bounce.buffer = NULL;
-    memory_region_unref(bounce.mr);
+    qemu_vfree(as->bounce.buffer);
+    as->bounce.buffer = NULL;
+    memory_region_unref(as->bounce.mr);
     /* Clear in_use before reading map_client_list. */
-    qatomic_set_mb(&bounce.in_use, false);
-    cpu_notify_map_clients();
+    qatomic_set_mb(&as->bounce.in_use, false);
+    address_space_notify_map_clients(as);
 }
 
 void *cpu_physical_memory_map(hwaddr addr,
--
2.34.1
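To show how the relocated API is meant to be used, here is a minimal sketch of
the map/retry pattern, mirroring dma_blk_cb() above; the surrounding driver
state and the retry bottom half are assumed to exist:

    static void try_map(AddressSpace *as, hwaddr addr, hwaddr len,
                        QEMUBH *retry_bh)
    {
        hwaddr plen = len;
        void *buf = address_space_map(as, addr, &plen, false,
                                      MEMTXATTRS_UNSPECIFIED);
        if (!buf) {
            /* Resources exhausted. retry_bh is scheduled (and automatically
             * unregistered) once address_space_unmap() frees some up. */
            address_space_register_map_client(as, retry_bh);
            return;
        }
        /* ... access buf[0 .. plen) ... */
        address_space_unmap(as, buf, plen, false, plen);
    }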
[PATCH v5 2/5] softmmu: Support concurrent bounce buffers

When DMA memory can't be directly accessed, as is the case when running the
device model in a separate process without shareable DMA file descriptors,
bounce buffering is used.

It is not uncommon for device models to request mapping of several DMA regions
at the same time. Examples include:

* net devices, e.g. when transmitting a packet that is split across several TX
  descriptors (observed with igb)
* USB host controllers, when handling a packet with multiple data TRBs
  (observed with xhci)

Previously, qemu only provided a single bounce buffer per AddressSpace and
would fail DMA map requests while the buffer was already in use. In turn, this
would cause DMA failures that ultimately manifest as hardware errors from the
guest perspective.

This change allocates DMA bounce buffers dynamically instead of supporting
only a single buffer. Thus, multiple DMA mappings work correctly even when RAM
can't be mmap()-ed.

The total bounce buffer allocation size is limited individually for each
AddressSpace. The default limit is 4096 bytes, matching the previous maximum
buffer size. A new x-max-bounce-buffer-size parameter is provided to configure
the limit for PCI devices.

Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
---
 hw/pci/pci.c                |  8 ++++
 include/exec/memory.h       | 14 +++----
 include/hw/pci/pci_device.h |  3 ++
 softmmu/memory.c            |  5 ++-
 softmmu/physmem.c           | 80 +++++++++++++++++++++++++------------
 5 files changed, 74 insertions(+), 36 deletions(-)

diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -XXX,XX +XXX,XX @@ static Property pci_props[] = {
                     QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
     DEFINE_PROP_BIT("x-pcie-ari-nextfn-1", PCIDevice, cap_present,
                     QEMU_PCIE_ARI_NEXTFN_1_BITNR, false),
+    DEFINE_PROP_SIZE("x-max-bounce-buffer-size", PCIDevice,
+                     max_bounce_buffer_size, DEFAULT_MAX_BOUNCE_BUFFER_SIZE),
     DEFINE_PROP_END_OF_LIST()
 };
 
@@ -XXX,XX +XXX,XX @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev,
                        "bus master container", UINT64_MAX);
     address_space_init(&pci_dev->bus_master_as,
                        &pci_dev->bus_master_container_region, pci_dev->name);
+    pci_dev->bus_master_as.max_bounce_buffer_size =
+        pci_dev->max_bounce_buffer_size;
 
     if (phase_check(PHASE_MACHINE_READY)) {
         pci_init_bus_master(pci_dev);
@@ -XXX,XX +XXX,XX @@ static void pci_device_class_init(ObjectClass *klass, void *data)
     k->unrealize = pci_qdev_unrealize;
     k->bus_type = TYPE_PCI_BUS;
     device_class_set_props(k, pci_props);
+    object_class_property_set_description(
+        klass, "x-max-bounce-buffer-size",
+        "Maximum buffer size allocated for bounce buffers used for mapped "
+        "access to indirect DMA memory");
 }
 
 static void pci_device_class_base_init(ObjectClass *klass, void *data)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ typedef struct AddressSpaceMapClient {
     QLIST_ENTRY(AddressSpaceMapClient) link;
 } AddressSpaceMapClient;
 
-typedef struct {
-    MemoryRegion *mr;
-    void *buffer;
-    hwaddr addr;
-    hwaddr len;
-    bool in_use;
-} BounceBuffer;
+#define DEFAULT_MAX_BOUNCE_BUFFER_SIZE (4096)
 
 /**
  * struct AddressSpace: describes a mapping of addresses to #MemoryRegion objects
@@ -XXX,XX +XXX,XX @@ struct AddressSpace {
     QTAILQ_HEAD(, MemoryListener) listeners;
     QTAILQ_ENTRY(AddressSpace) address_spaces_link;
 
-    /* Bounce buffer to use for this address space. */
-    BounceBuffer bounce;
+    /* Maximum DMA bounce buffer size used for indirect memory map requests */
+    uint64_t max_bounce_buffer_size;
+    /* Total size of bounce buffers currently allocated, atomically accessed */
+    uint64_t bounce_buffer_size;
     /* List of callbacks to invoke when buffers free up */
     QemuMutex map_client_list_lock;
     QLIST_HEAD(, AddressSpaceMapClient) map_client_list;
diff --git a/include/hw/pci/pci_device.h b/include/hw/pci/pci_device.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/pci/pci_device.h
+++ b/include/hw/pci/pci_device.h
@@ -XXX,XX +XXX,XX @@ struct PCIDevice {
     /* ID of standby device in net_failover pair */
     char *failover_pair_id;
     uint32_t acpi_index;
+
+    /* Maximum DMA bounce buffer size used for indirect memory map requests */
+    uint64_t max_bounce_buffer_size;
 };
 
 static inline int pci_intx(PCIDevice *pci_dev)
diff --git a/softmmu/memory.c b/softmmu/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -XXX,XX +XXX,XX @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
     as->ioeventfds = NULL;
     QTAILQ_INIT(&as->listeners);
     QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
-    as->bounce.in_use = false;
+    as->max_bounce_buffer_size = DEFAULT_MAX_BOUNCE_BUFFER_SIZE;
+    as->bounce_buffer_size = 0;
     qemu_mutex_init(&as->map_client_list_lock);
     QLIST_INIT(&as->map_client_list);
     as->name = g_strdup(name ? name : "anonymous");
@@ -XXX,XX +XXX,XX @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
 
 static void do_address_space_destroy(AddressSpace *as)
 {
-    assert(!qatomic_read(&as->bounce.in_use));
+    assert(qatomic_read(&as->bounce_buffer_size) == 0);
     assert(QLIST_EMPTY(&as->map_client_list));
     qemu_mutex_destroy(&as->map_client_list_lock);
 
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -XXX,XX +XXX,XX @@ void cpu_flush_icache_range(hwaddr start, hwaddr len)
                                      NULL, len, FLUSH_CACHE);
 }
 
+/*
+ * A magic value stored in the first 8 bytes of the bounce buffer struct. Used
+ * to detect illegal pointers passed to address_space_unmap.
+ */
+#define BOUNCE_BUFFER_MAGIC 0xb4017ceb4ffe12ed
+
+typedef struct {
+    uint64_t magic;
+    MemoryRegion *mr;
+    hwaddr addr;
+    size_t len;
+    uint8_t buffer[];
+} BounceBuffer;
+
 static void
 address_space_unregister_map_client_do(AddressSpaceMapClient *client)
 {
@@ -XXX,XX +XXX,XX @@ void address_space_register_map_client(AddressSpace *as, QEMUBH *bh)
     qemu_mutex_lock(&as->map_client_list_lock);
     client->bh = bh;
     QLIST_INSERT_HEAD(&as->map_client_list, client, link);
-    /* Write map_client_list before reading in_use. */
+    /* Write map_client_list before reading bounce_buffer_size. */
     smp_mb();
-    if (!qatomic_read(&as->bounce.in_use)) {
+    if (qatomic_read(&as->bounce_buffer_size) < as->max_bounce_buffer_size) {
         address_space_notify_map_clients_locked(as);
     }
     qemu_mutex_unlock(&as->map_client_list_lock);
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
     mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
 
     if (!memory_access_is_direct(mr, is_write)) {
-        if (qatomic_xchg(&as->bounce.in_use, true)) {
+        size_t size = qatomic_add_fetch(&as->bounce_buffer_size, l);
+        if (size > as->max_bounce_buffer_size) {
+            /*
+             * Note that the overshot might be larger than l if threads are
+             * racing and bump bounce_buffer_size at the same time.
+             */
+            size_t excess = MIN(size - as->max_bounce_buffer_size, l);
+            l -= excess;
+            qatomic_sub(&as->bounce_buffer_size, excess);
+        }
+
+        if (l == 0) {
             *plen = 0;
             return NULL;
         }
-        /* Avoid unbounded allocations */
-        l = MIN(l, TARGET_PAGE_SIZE);
-        as->bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
-        as->bounce.addr = addr;
-        as->bounce.len = l;
 
+        BounceBuffer *bounce = g_malloc0(l + sizeof(BounceBuffer));
+        bounce->magic = BOUNCE_BUFFER_MAGIC;
         memory_region_ref(mr);
-        as->bounce.mr = mr;
+        bounce->mr = mr;
+        bounce->addr = addr;
+        bounce->len = l;
+
         if (!is_write) {
             flatview_read(fv, addr, MEMTXATTRS_UNSPECIFIED,
-                          as->bounce.buffer, l);
+                          bounce->buffer, l);
         }
 
         *plen = l;
-        return as->bounce.buffer;
+        return bounce->buffer;
     }
 
-    memory_region_ref(mr);
     *plen = flatview_extend_translation(fv, addr, len, mr, xlat,
                                         l, is_write, attrs);
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
 void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
                          bool is_write, hwaddr access_len)
 {
-    if (buffer != as->bounce.buffer) {
-        MemoryRegion *mr;
-        ram_addr_t addr1;
+    MemoryRegion *mr;
+    ram_addr_t addr1;
 
-        mr = memory_region_from_host(buffer, &addr1);
-        assert(mr != NULL);
+    mr = memory_region_from_host(buffer, &addr1);
+    if (mr != NULL) {
         if (is_write) {
             invalidate_and_set_dirty(mr, addr1, access_len);
         }
@@ -XXX,XX +XXX,XX @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
         memory_region_unref(mr);
         return;
     }
+
+    BounceBuffer *bounce = container_of(buffer, BounceBuffer, buffer);
+    assert(bounce->magic == BOUNCE_BUFFER_MAGIC);
+
     if (is_write) {
-        address_space_write(as, as->bounce.addr, MEMTXATTRS_UNSPECIFIED,
-                            as->bounce.buffer, access_len);
-    }
-    qemu_vfree(as->bounce.buffer);
-    as->bounce.buffer = NULL;
-    memory_region_unref(as->bounce.mr);
-    /* Clear in_use before reading map_client_list. */
-    qatomic_set_mb(&as->bounce.in_use, false);
+        address_space_write(as, bounce->addr, MEMTXATTRS_UNSPECIFIED,
+                            bounce->buffer, access_len);
+    }
+
+    qatomic_sub(&as->bounce_buffer_size, bounce->len);
+    bounce->magic = ~BOUNCE_BUFFER_MAGIC;
+    memory_region_unref(bounce->mr);
+    g_free(bounce);
+    /* Write bounce_buffer_size before reading map_client_list. */
+    smp_mb();
     address_space_notify_map_clients(as);
 }
--
2.34.1
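With per-allocation accounting, several concurrent mappings of indirect memory
can now coexist as long as their total stays under the limit. A hedged
illustration, assuming both addresses translate to indirect (non-mmap()-able)
regions in as and the default 4096-byte limit:

    hwaddr l1 = 2048, l2 = 2048;
    void *a = address_space_map(as, 0x1000, &l1, false, MEMTXATTRS_UNSPECIFIED);
    void *b = address_space_map(as, 0x3000, &l2, false, MEMTXATTRS_UNSPECIFIED);
    /* Previously the second map would have failed (single bounce buffer);
     * now both succeed since 2048 + 2048 <= 4096. */
    address_space_unmap(as, b, l2, false, l2);
    address_space_unmap(as, a, l1, false, l1);

Since the property is declared with DEFINE_PROP_SIZE on PCIDevice, raising the
limit should look like -device nvme,x-max-bounce-buffer-size=65536 on the
command line (device model and value are illustrative).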
[PATCH v5 3/5] Update subprojects/libvfio-user

Brings in assorted bug fixes. The following are of particular interest with
respect to message-based DMA support:

* bb308a2 "Fix address calculation for message-based DMA"
  Corrects a bug in DMA address calculation.

* 1569a37 "Pass server->client command over a separate socket pair"
  Adds support for separate sockets for either command direction, addressing a
  bug where libvfio-user gets confused if both client and server send commands
  concurrently.

Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
---
 subprojects/libvfio-user.wrap | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/subprojects/libvfio-user.wrap b/subprojects/libvfio-user.wrap
index XXXXXXX..XXXXXXX 100644
--- a/subprojects/libvfio-user.wrap
+++ b/subprojects/libvfio-user.wrap
@@ -XXX,XX +XXX,XX @@
 [wrap-git]
 url = https://gitlab.com/qemu-project/libvfio-user.git
-revision = 0b28d205572c80b568a1003db2c8f37ca333e4d7
+revision = 1569a37a54ecb63bd4008708c76339ccf7d06115
 depth = 1
--
2.34.1
[PATCH v5 4/5] vfio-user: Message-based DMA support

Wire up support for DMA for the case where the vfio-user client does not
provide mmap()-able file descriptors, but DMA requests must be performed via
the VFIO-user protocol. This installs an indirect memory region, which already
works for pci_dma_{read,write}, and pci_dma_map works thanks to the existing
DMA bounce buffering support.

Note that while simple scenarios work with this patch, there's a known race
condition in libvfio-user that will mess up the communication channel. See
https://github.com/nutanix/libvfio-user/issues/279 for details as well as a
proposed fix.

Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
---
 hw/remote/trace-events    |  2 +
 hw/remote/vfio-user-obj.c | 84 +++++++++++++++++++++++++++++----
 2 files changed, 79 insertions(+), 7 deletions(-)

diff --git a/hw/remote/trace-events b/hw/remote/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/trace-events
+++ b/hw/remote/trace-events
@@ -XXX,XX +XXX,XX @@ vfu_cfg_read(uint32_t offset, uint32_t val) "vfu: cfg: 0x%x -> 0x%x"
 vfu_cfg_write(uint32_t offset, uint32_t val) "vfu: cfg: 0x%x <- 0x%x"
 vfu_dma_register(uint64_t gpa, size_t len) "vfu: registering GPA 0x%"PRIx64", %zu bytes"
 vfu_dma_unregister(uint64_t gpa) "vfu: unregistering GPA 0x%"PRIx64""
+vfu_dma_read(uint64_t gpa, size_t len) "vfu: DMA read 0x%"PRIx64", %zu bytes"
+vfu_dma_write(uint64_t gpa, size_t len) "vfu: DMA write 0x%"PRIx64", %zu bytes"
 vfu_bar_register(int i, uint64_t addr, uint64_t size) "vfu: BAR %d: addr 0x%"PRIx64" size 0x%"PRIx64""
 vfu_bar_rw_enter(const char *op, uint64_t addr) "vfu: %s request for BAR address 0x%"PRIx64""
 vfu_bar_rw_exit(const char *op, uint64_t addr) "vfu: Finished %s of BAR address 0x%"PRIx64""
diff --git a/hw/remote/vfio-user-obj.c b/hw/remote/vfio-user-obj.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/vfio-user-obj.c
+++ b/hw/remote/vfio-user-obj.c
@@ -XXX,XX +XXX,XX @@ static ssize_t vfu_object_cfg_access(vfu_ctx_t *vfu_ctx, char * const buf,
     return count;
 }
 
+static MemTxResult vfu_dma_read(void *opaque, hwaddr addr, uint64_t *val,
+                                unsigned size, MemTxAttrs attrs)
+{
+    MemoryRegion *region = opaque;
+    vfu_ctx_t *vfu_ctx = VFU_OBJECT(region->owner)->vfu_ctx;
+    uint8_t buf[sizeof(uint64_t)];
+
+    trace_vfu_dma_read(region->addr + addr, size);
+
+    g_autofree dma_sg_t *sg = g_malloc0(dma_sg_size());
+    vfu_dma_addr_t vfu_addr = (vfu_dma_addr_t)(region->addr + addr);
+    if (vfu_addr_to_sgl(vfu_ctx, vfu_addr, size, sg, 1, PROT_READ) < 0 ||
+        vfu_sgl_read(vfu_ctx, sg, 1, buf) != 0) {
+        return MEMTX_ERROR;
+    }
+
+    *val = ldn_he_p(buf, size);
+
+    return MEMTX_OK;
+}
+
+static MemTxResult vfu_dma_write(void *opaque, hwaddr addr, uint64_t val,
+                                 unsigned size, MemTxAttrs attrs)
+{
+    MemoryRegion *region = opaque;
+    vfu_ctx_t *vfu_ctx = VFU_OBJECT(region->owner)->vfu_ctx;
+    uint8_t buf[sizeof(uint64_t)];
+
+    trace_vfu_dma_write(region->addr + addr, size);
+
+    stn_he_p(buf, size, val);
+
+    g_autofree dma_sg_t *sg = g_malloc0(dma_sg_size());
+    vfu_dma_addr_t vfu_addr = (vfu_dma_addr_t)(region->addr + addr);
+    if (vfu_addr_to_sgl(vfu_ctx, vfu_addr, size, sg, 1, PROT_WRITE) < 0 ||
+        vfu_sgl_write(vfu_ctx, sg, 1, buf) != 0) {
+        return MEMTX_ERROR;
+    }
+
+    return MEMTX_OK;
+}
+
+static const MemoryRegionOps vfu_dma_ops = {
+    .read_with_attrs = vfu_dma_read,
+    .write_with_attrs = vfu_dma_write,
+    .endianness = DEVICE_HOST_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = true,
+    },
+    .impl = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+    },
+};
+
 static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
 {
     VfuObject *o = vfu_get_private(vfu_ctx);
@@ -XXX,XX +XXX,XX @@ static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
     g_autofree char *name = NULL;
     struct iovec *iov = &info->iova;
 
-    if (!info->vaddr) {
-        return;
-    }
-
     name = g_strdup_printf("mem-%s-%"PRIx64"", o->device,
-                           (uint64_t)info->vaddr);
+                           (uint64_t)iov->iov_base);
 
     subregion = g_new0(MemoryRegion, 1);
 
-    memory_region_init_ram_ptr(subregion, NULL, name,
-                               iov->iov_len, info->vaddr);
+    if (info->vaddr) {
+        memory_region_init_ram_ptr(subregion, OBJECT(o), name,
+                                   iov->iov_len, info->vaddr);
+    } else {
+        /*
+         * Note that I/O regions' MemoryRegionOps handle accesses of at most 8
+         * bytes at a time, and larger accesses are broken down. However,
+         * many/most DMA accesses are larger than 8 bytes and VFIO-user can
+         * handle large DMA accesses just fine, thus this size restriction
+         * unnecessarily hurts performance, in particular given that each
+         * access causes a round trip on the VFIO-user socket.
+         *
+         * TODO: Investigate how to plumb larger accesses through memory
+         * regions, possibly by amending MemoryRegionOps or by creating a new
+         * memory region type.
+         */
+        memory_region_init_io(subregion, OBJECT(o), &vfu_dma_ops, subregion,
+                              name, iov->iov_len);
+    }
 
     dma_as = pci_device_iommu_address_space(o->pci_dev);
--
2.34.1
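For orientation, this is how the indirect path gets exercised from a device
model's point of view; a hedged sketch, with the device pointer and DMA
address being illustrative:

    uint8_t buf[16];
    MemTxResult r = pci_dma_read(pdev, dma_addr, buf, sizeof(buf));
    /* With no mmap()-able client memory, the memory core routes this through
     * vfu_dma_ops and splits it into two 8-byte vfu_dma_read() calls, i.e.
     * two VFIO-user DMA read messages on the socket. */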
[PATCH v5 5/5] vfio-user: Fix config space access byte order

PCI config space is little-endian, so on a big-endian host we need to perform
byte swaps for values as they are passed to and received from the generic PCI
config space access machinery.

Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
---
 hw/remote/vfio-user-obj.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/remote/vfio-user-obj.c b/hw/remote/vfio-user-obj.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/vfio-user-obj.c
+++ b/hw/remote/vfio-user-obj.c
@@ -XXX,XX +XXX,XX @@ static ssize_t vfu_object_cfg_access(vfu_ctx_t *vfu_ctx, char * const buf,
     while (bytes > 0) {
         len = (bytes > pci_access_width) ? pci_access_width : bytes;
         if (is_write) {
-            memcpy(&val, ptr, len);
+            val = ldn_le_p(ptr, len);
             pci_host_config_write_common(o->pci_dev, offset,
                                          pci_config_size(o->pci_dev),
                                          val, len);
@@ -XXX,XX +XXX,XX @@ static ssize_t vfu_object_cfg_access(vfu_ctx_t *vfu_ctx, char * const buf,
         } else {
             val = pci_host_config_read_common(o->pci_dev, offset,
                                               pci_config_size(o->pci_dev), len);
-            memcpy(ptr, &val, len);
+            stn_le_p(ptr, len, val);
             trace_vfu_cfg_read(offset, val);
         }
         offset += len;
--
2.34.1
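The difference only shows on big-endian hosts. A worked example, assuming a
16-bit register value 0x1234 stored in the little-endian config space buffer
as the bytes 34 12: memcpy() into a zero-initialized host uint32_t on s390x
yields 0x34120000, whereas the explicit size-n little-endian accessors give
the intended result on any host:

    uint8_t le_bytes[2] = { 0x34, 0x12 };  /* little-endian 0x1234 */
    uint64_t val = ldn_le_p(le_bytes, 2);  /* 0x1234 on LE and BE hosts */
    stn_le_p(le_bytes, 2, 0x5678);         /* stores bytes 78 56 */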
This series adds basic support for message-based DMA in qemu's vfio-user
server. This is useful for cases where the client does not provide file
descriptors for accessing system memory via memory mappings. My motivating use
case is to hook up device models as PCIe endpoints to a hardware design. This
works by bridging the PCIe transaction layer to vfio-user, and the endpoint
does not access memory directly, but sends memory request TLPs to the hardware
design in order to perform DMA.

Note that more work is needed to make message-based DMA work well: qemu
currently breaks down DMA accesses into chunks of at most 8 bytes, each of
which is handled in a separate vfio-user DMA request message. This is quite
terrible for large DMA accesses, such as when nvme reads and writes page-sized
blocks. Thus, I would like to improve qemu to be able to perform larger
accesses, at least for indirect memory regions. I have something working
locally, but since this will likely result in more involved surgery and
discussion, I am leaving this to be addressed in a separate patch.

Changes from v1:

* Address Stefan's review comments. In particular, enforce an allocation limit
  and don't drop the map client callbacks given that map requests can fail
  when hitting size limits.
* libvfio-user version bump now included in the series.
* Tested as well on big-endian s390x. This uncovered another byte order issue
  in vfio-user server code that I've included a fix for.

Changes from v2:

* Add a preparatory patch to make bounce buffering an AddressSpace-specific
  concept.
* The total buffer size limit parameter is now per AddressSpace and can be
  configured for PCIDevice via a property.
* Store a magic value in the first bytes of the bounce buffer struct as a
  best-effort measure to detect invalid pointers in address_space_unmap.

Changes from v3:

* libvfio-user now supports twin-socket mode which uses separate sockets for
  client->server and server->client commands, respectively. This addresses the
  concurrent command bug triggered by server->client DMA access commands. See
  https://github.com/nutanix/libvfio-user/issues/279 for details.
* Add missing teardown code in do_address_space_destroy.
* Fix bounce buffer size bookkeeping race condition.
* Generate unmap notification callbacks unconditionally.
* Some cosmetic fixes.

Changes from v4:

* Fix accidentally dropped memory_region_unref, control flow restored to match
  previous code to simplify review.
* Some cosmetic fixes.

Changes from v5:

* Unregister indirect memory region in libvfio-user dma_unregister callback.

Changes from v6:

* Rebase, resolve straightforward merge conflict in system/dma-helpers.c

Changes from v7:

* Rebase (applied cleanly)
* Restore various Reviewed-by and Tested-by tags that I failed to carry
  forward (I double-checked that the patches haven't changed since the
  reviewed version)

Changes from v8:

* Rebase (clean)
* Change bounce buffer size accounting to use uint32_t so it works also on
  hosts that don't support uint64_t atomics, such as mipsel. As a consequence
  overflows are a real concern now, so switch to a cmpxchg loop for allocating
  bounce buffer space (a sketch of such a loop follows at the end of this
  cover letter).

Changes from v9:

* Incorporate patch split and QEMU_MUTEX_GUARD change by philmd@linaro.org
* Use size_t instead of uint32_t for bounce buffer size accounting. The qdev
  property remains uint32_t though, so it has a consistent size regardless of
  host.

Changes from v10:

* Update the commit to uprev the libvfio-user subproject to the latest
  libvfio-user revision.
* Break out the "softmmu: Support concurrent bounce buffers" patch so this
  series only touches vfio-user code and can be picked up as is by Jag.

Changes from v11:

* Rebase (clean)
* Add a patch to fix DMA memory region reference leaks, which prevented
  vfio-user-server auto-shutdown from working.

Mattias Nissler (3):
  Update subprojects/libvfio-user
  vfio-user: Message-based DMA support
  vfio-user: Fix memory region reference accounting

 hw/remote/trace-events        |   2 +
 hw/remote/vfio-user-obj.c     | 108 +++++++++++++++++++++++++++++-----
 subprojects/libvfio-user.wrap |   2 +-
 3 files changed, 96 insertions(+), 16 deletions(-)

--
2.34.1
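A sketch of the cmpxchg-style reservation loop referred to in the v9 changelog
above; variable names are illustrative, and qatomic_cmpxchg() returns the
value observed before the operation:

    /* Reserve up to len bytes against limit without overshooting. */
    size_t used = qatomic_read(&as->bounce_buffer_size);
    size_t reserve;
    for (;;) {
        if (used >= limit) {
            reserve = 0;    /* nothing available, caller must retry later */
            break;
        }
        reserve = MIN(len, limit - used);
        size_t seen = qatomic_cmpxchg(&as->bounce_buffer_size,
                                      used, used + reserve);
        if (seen == used) {
            break;          /* reservation committed */
        }
        used = seen;        /* lost a race, retry with the fresh value */
    }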
[PATCH v12 1/3] Update subprojects/libvfio-user

Brings in assorted bug fixes. The following are of particular interest with
respect to message-based DMA support:

* bb308a2 "Fix address calculation for message-based DMA"
  Corrects a bug in DMA address calculation.

* 1569a37 "Pass server->client command over a separate socket pair"
  Adds support for separate sockets for either command direction, addressing a
  bug where libvfio-user gets confused if both client and server send commands
  concurrently.

Reviewed-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
---
 subprojects/libvfio-user.wrap | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/subprojects/libvfio-user.wrap b/subprojects/libvfio-user.wrap
index XXXXXXX..XXXXXXX 100644
--- a/subprojects/libvfio-user.wrap
+++ b/subprojects/libvfio-user.wrap
@@ -XXX,XX +XXX,XX @@
 [wrap-git]
 url = https://gitlab.com/qemu-project/libvfio-user.git
-revision = 0b28d205572c80b568a1003db2c8f37ca333e4d7
+revision = b1a156d86f55a8fa3f78ece5bee7748ec75e7b82
 depth = 1
--
2.34.1
[PATCH v12 2/3] vfio-user: Message-based DMA support

Wire up support for DMA for the case where the vfio-user client does not
provide mmap()-able file descriptors, but DMA requests must be performed via
the VFIO-user protocol. This installs an indirect memory region, which already
works for pci_dma_{read,write}, and pci_dma_map works thanks to the existing
DMA bounce buffering support.

Note that while simple scenarios work with this patch, there's a known race
condition in libvfio-user that will mess up the communication channel. See
https://github.com/nutanix/libvfio-user/issues/279 for details as well as a
proposed fix.

Reviewed-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
---
 hw/remote/trace-events    |   2 +
 hw/remote/vfio-user-obj.c | 100 ++++++++++++++++++++++++++++++++------
 2 files changed, 87 insertions(+), 15 deletions(-)

diff --git a/hw/remote/trace-events b/hw/remote/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/trace-events
+++ b/hw/remote/trace-events
@@ -XXX,XX +XXX,XX @@ vfu_cfg_read(uint32_t offset, uint32_t val) "vfu: cfg: 0x%x -> 0x%x"
 vfu_cfg_write(uint32_t offset, uint32_t val) "vfu: cfg: 0x%x <- 0x%x"
 vfu_dma_register(uint64_t gpa, size_t len) "vfu: registering GPA 0x%"PRIx64", %zu bytes"
 vfu_dma_unregister(uint64_t gpa) "vfu: unregistering GPA 0x%"PRIx64""
+vfu_dma_read(uint64_t gpa, size_t len) "vfu: DMA read 0x%"PRIx64", %zu bytes"
+vfu_dma_write(uint64_t gpa, size_t len) "vfu: DMA write 0x%"PRIx64", %zu bytes"
 vfu_bar_register(int i, uint64_t addr, uint64_t size) "vfu: BAR %d: addr 0x%"PRIx64" size 0x%"PRIx64""
 vfu_bar_rw_enter(const char *op, uint64_t addr) "vfu: %s request for BAR address 0x%"PRIx64""
 vfu_bar_rw_exit(const char *op, uint64_t addr) "vfu: Finished %s of BAR address 0x%"PRIx64""
diff --git a/hw/remote/vfio-user-obj.c b/hw/remote/vfio-user-obj.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/vfio-user-obj.c
+++ b/hw/remote/vfio-user-obj.c
@@ -XXX,XX +XXX,XX @@ static ssize_t vfu_object_cfg_access(vfu_ctx_t *vfu_ctx, char * const buf,
     return count;
 }
 
+static MemTxResult vfu_dma_read(void *opaque, hwaddr addr, uint64_t *val,
+                                unsigned size, MemTxAttrs attrs)
+{
+    MemoryRegion *region = opaque;
+    vfu_ctx_t *vfu_ctx = VFU_OBJECT(region->owner)->vfu_ctx;
+    uint8_t buf[sizeof(uint64_t)];
+
+    trace_vfu_dma_read(region->addr + addr, size);
+
+    g_autofree dma_sg_t *sg = g_malloc0(dma_sg_size());
+    vfu_dma_addr_t vfu_addr = (vfu_dma_addr_t)(region->addr + addr);
+    if (vfu_addr_to_sgl(vfu_ctx, vfu_addr, size, sg, 1, PROT_READ) < 0 ||
+        vfu_sgl_read(vfu_ctx, sg, 1, buf) != 0) {
+        return MEMTX_ERROR;
+    }
+
+    *val = ldn_he_p(buf, size);
+
+    return MEMTX_OK;
+}
+
+static MemTxResult vfu_dma_write(void *opaque, hwaddr addr, uint64_t val,
+                                 unsigned size, MemTxAttrs attrs)
+{
+    MemoryRegion *region = opaque;
+    vfu_ctx_t *vfu_ctx = VFU_OBJECT(region->owner)->vfu_ctx;
+    uint8_t buf[sizeof(uint64_t)];
+
+    trace_vfu_dma_write(region->addr + addr, size);
+
+    stn_he_p(buf, size, val);
+
+    g_autofree dma_sg_t *sg = g_malloc0(dma_sg_size());
+    vfu_dma_addr_t vfu_addr = (vfu_dma_addr_t)(region->addr + addr);
+    if (vfu_addr_to_sgl(vfu_ctx, vfu_addr, size, sg, 1, PROT_WRITE) < 0 ||
+        vfu_sgl_write(vfu_ctx, sg, 1, buf) != 0) {
+        return MEMTX_ERROR;
+    }
+
+    return MEMTX_OK;
+}
+
+static const MemoryRegionOps vfu_dma_ops = {
+    .read_with_attrs = vfu_dma_read,
+    .write_with_attrs = vfu_dma_write,
+    .endianness = DEVICE_HOST_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = true,
+    },
+    .impl = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+    },
+};
+
 static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
 {
     VfuObject *o = vfu_get_private(vfu_ctx);
@@ -XXX,XX +XXX,XX @@ static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
     g_autofree char *name = NULL;
     struct iovec *iov = &info->iova;
 
-    if (!info->vaddr) {
-        return;
-    }
-
     name = g_strdup_printf("mem-%s-%"PRIx64"", o->device,
-                           (uint64_t)info->vaddr);
+                           (uint64_t)iov->iov_base);
 
     subregion = g_new0(MemoryRegion, 1);
 
-    memory_region_init_ram_ptr(subregion, NULL, name,
-                               iov->iov_len, info->vaddr);
+    if (info->vaddr) {
+        memory_region_init_ram_ptr(subregion, OBJECT(o), name,
+                                   iov->iov_len, info->vaddr);
+    } else {
+        /*
+         * Note that I/O regions' MemoryRegionOps handle accesses of at most 8
+         * bytes at a time, and larger accesses are broken down. However,
+         * many/most DMA accesses are larger than 8 bytes and VFIO-user can
+         * handle large DMA accesses just fine, thus this size restriction
+         * unnecessarily hurts performance, in particular given that each
+         * access causes a round trip on the VFIO-user socket.
+         *
+         * TODO: Investigate how to plumb larger accesses through memory
+         * regions, possibly by amending MemoryRegionOps or by creating a new
+         * memory region type.
+         */
+        memory_region_init_io(subregion, OBJECT(o), &vfu_dma_ops, subregion,
+                              name, iov->iov_len);
+    }
 
     dma_as = pci_device_iommu_address_space(o->pci_dev);
 
@@ -XXX,XX +XXX,XX @@ static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
 static void dma_unregister(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
 {
     VfuObject *o = vfu_get_private(vfu_ctx);
+    MemoryRegionSection mr_section;
     AddressSpace *dma_as = NULL;
-    MemoryRegion *mr = NULL;
-    ram_addr_t offset;
 
-    mr = memory_region_from_host(info->vaddr, &offset);
-    if (!mr) {
+    dma_as = pci_device_iommu_address_space(o->pci_dev);
+
+    mr_section =
+        memory_region_find(dma_as->root, (hwaddr)info->iova.iov_base, 1);
+    if (!mr_section.mr) {
         return;
     }
 
-    dma_as = pci_device_iommu_address_space(o->pci_dev);
-
-    memory_region_del_subregion(dma_as->root, mr);
+    memory_region_del_subregion(dma_as->root, mr_section.mr);
 
-    object_unparent((OBJECT(mr)));
+    object_unparent((OBJECT(mr_section.mr)));
 
     trace_vfu_dma_unregister((uint64_t)info->iova.iov_base);
 }
--
2.34.1
[PATCH v12 3/3] vfio-user: Fix memory region reference accounting

The memory regions created for DMA regions were leaking the original
reference that the object is initialized with. This happened because we insert
the memory region as a subregion, but don't keep the reference obtained when
creating the object. Thus, drop the reference after inserting the DMA memory
region into the address space.

This fixes auto-shutdown behavior: Due to the leaked references, the memory
regions would never be released, and would indirectly keep the VFU object as
their owner alive. Thus, vfu_object_finalize didn't get invoked, and qemu
wouldn't terminate. With this fix, this is now working as originally intended.

Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
---
 hw/remote/vfio-user-obj.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/hw/remote/vfio-user-obj.c b/hw/remote/vfio-user-obj.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/remote/vfio-user-obj.c
+++ b/hw/remote/vfio-user-obj.c
@@ -XXX,XX +XXX,XX @@ static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
     memory_region_add_subregion(dma_as->root, (hwaddr)iov->iov_base,
                                 subregion);
 
+    /*
+     * Insertion into the address space grabbed a reference to keep the memory
+     * region alive. However, the memory region object was created with an
+     * original reference count of 1, so we must unref since we don't keep
+     * that reference.
+     */
+    memory_region_unref(subregion);
+
     trace_vfu_dma_register((uint64_t)iov->iov_base, iov->iov_len);
 }
--
2.34.1
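The pairing this patch establishes, shown as a conceptual sketch using the
calls from dma_register()/dma_unregister(); the comments describe intent
rather than exact counts, since references on owned regions are routed to the
owner object:

    memory_region_init_io(subregion, OBJECT(o), &vfu_dma_ops, subregion,
                          name, iov->iov_len);   /* creation reference */
    memory_region_add_subregion(dma_as->root, base,
                                subregion);      /* address space holds its own */
    memory_region_unref(subregion);              /* drop the creation reference */

    /* Teardown releases the remaining reference, letting the region (and,
     * transitively, its VfuObject owner) go away: */
    memory_region_del_subregion(dma_as->root, subregion);
    object_unparent(OBJECT(subregion));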