[PATCH v2 0/4] Support message-based DMA in vfio-user server
Posted by Mattias Nissler 1 year, 3 months ago
This series adds basic support for message-based DMA in qemu's vfio-user
server. This is useful for cases where the client does not provide file
descriptors for accessing system memory via memory mappings. My motivating use
case is to hook up device models as PCIe endpoints to a hardware design. This
works by bridging the PCIe transaction layer to vfio-user, and the endpoint
does not access memory directly, but sends memory request TLPs to the hardware
design in order to perform DMA.
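
Concretely, the server-side read path then looks roughly like the sketch
below, where an indirect memory region's read callback translates the DMA
address into a scatter-gather entry and fetches the data from the client via
a vfio-user DMA read message. This is a simplified illustration, not the
patch itself: the MsgDMARegion container and the msg_dma_read name are mine,
and error handling is reduced to the bare minimum.

/* Simplified sketch: forward a DMA read to the vfio-user client as a
 * message instead of reading locally mapped memory. Container struct
 * and function name are illustrative, not the actual patch. */
#include "qemu/osdep.h"
#include "qemu/bswap.h"
#include "exec/memory.h"
#include "libvfio-user.h"

typedef struct {
    MemoryRegion mr;     /* region covering the client's DMA window */
    vfu_ctx_t *vfu_ctx;  /* server context to send messages on */
} MsgDMARegion;

static MemTxResult msg_dma_read(void *opaque, hwaddr addr, uint64_t *val,
                                unsigned size, MemTxAttrs attrs)
{
    MsgDMARegion *r = opaque;
    g_autofree dma_sg_t *sg = g_malloc0(dma_sg_size());
    uint8_t buf[sizeof(uint64_t)];

    /* Translate the guest DMA address into a scatter-gather entry... */
    if (vfu_addr_to_sgl(r->vfu_ctx, (vfu_dma_addr_t)(r->mr.addr + addr),
                        size, sg, 1, PROT_READ) < 0) {
        return MEMTX_ERROR;
    }
    /* ...and fetch the bytes from the client via a DMA read message. */
    if (vfu_sgl_read(r->vfu_ctx, sg, 1, buf) != 0) {
        return MEMTX_ERROR;
    }
    *val = ldn_he_p(buf, size);
    return MEMTX_OK;
}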

Note that there is some more work required on top of this series to get
message-based DMA to really work well:

* libvfio-user has a long-standing issue where socket communication gets messed
  up when messages are sent from both ends at the same time. See
  https://github.com/nutanix/libvfio-user/issues/279 for more details. I've
  been engaging there and a fix is in review.

* qemu currently breaks down DMA accesses into chunks of at most 8 bytes, each
  of which is handled in a separate vfio-user DMA request message. This is
  quite terrible for large DMA accesses, such as when nvme reads and writes
  page-sized blocks. Thus, I would like to improve qemu to be able to perform
  larger accesses, at least for indirect memory regions; the sketch after this
  list shows where the 8-byte limit comes from. I have something working
  locally, but since this will likely result in more involved surgery and
  discussion, I am leaving this to be addressed in a separate patch.
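
For background on the 8-byte limit (my summary of the softmmu dispatch code,
not something this series changes): indirect memory regions hand data to
their MemoryRegionOps callbacks in a single uint64_t, and the memory core
caps each callback invocation at the region's declared max_access_size, so a
larger access is looped over in pieces. Continuing the hypothetical names
from the sketch above:

/* Sketch of an indirect region's ops; the endianness choice is an
 * assumption. The uint64_t callback payload caps max_access_size at 8,
 * so the memory core chops a 4 KiB DMA access into 512 eight-byte
 * calls, each becoming one vfio-user message on the wire. */
static MemTxResult msg_dma_write(void *opaque, hwaddr addr, uint64_t val,
                                 unsigned size, MemTxAttrs attrs);

static const MemoryRegionOps msg_dma_ops = {
    .read_with_attrs = msg_dma_read,    /* from the sketch above */
    .write_with_attrs = msg_dma_write,  /* mirror image of the read */
    .endianness = DEVICE_NATIVE_ENDIAN,
    .valid = {
        .min_access_size = 1,
        .max_access_size = 8,
    },
};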

Changes from v1:

* Address Stefan's review comments. In particular, enforce an allocation limit
  and don't drop the map client callbacks, given that map requests can fail
  when hitting size limits; the first sketch after this list outlines the idea.

* libvfio-user version bump now included in the series.

* Also tested on big-endian s390x. This uncovered another byte order issue in
  the vfio-user server code, for which a fix is included; see the second
  sketch after this list.
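
To illustrate the first point, the accounting idea is roughly the following.
This is a sketch with hypothetical names and an arbitrary default budget; the
actual patch allocates a bounce buffer per mapping and makes the limit
configurable on the command line:

/* Sketch of the accounting in patch 1; names and the default budget
 * are illustrative. Instead of one static bounce buffer, each mapping
 * of non-direct memory gets its own allocation, bounded in total. */
#include "qemu/osdep.h"
#include "qemu/atomic.h"
#include "exec/hwaddr.h"

typedef struct {
    hwaddr addr;       /* guest address the buffer shadows */
    size_t len;        /* bytes charged against the budget */
    uint8_t buffer[];  /* data staged for the DMA access */
} BounceBuffer;

static size_t bounce_buffers_in_flight;       /* bytes currently charged */
static size_t max_bounce_buffer_size = 4096;  /* configurable limit */

static BounceBuffer *bounce_buffer_try_alloc(hwaddr addr, size_t len)
{
    size_t prev = qatomic_fetch_add(&bounce_buffers_in_flight, len);

    if (prev + len > max_bounce_buffer_size) {
        /* Over budget: roll back and fail the map request. The caller
         * keeps its map client callback registered and retries once a
         * concurrent buffer is freed and uncharged. */
        qatomic_sub(&bounce_buffers_in_flight, len);
        return NULL;
    }

    BounceBuffer *b = g_malloc(sizeof(*b) + len);
    b->addr = addr;
    b->len = len;
    return b;
}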
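
And the byte order issue from the last point, in a nutshell: vfio-user
carries PCI config space data little-endian on the wire, so the server has to
store values explicitly little-endian rather than memcpy()ing a host-endian
integer, which happens to produce the right bytes on little-endian hosts
only. A sketch of the shape of the fix, assuming the PCIDevice is reachable
through the context's private pointer; names are illustrative:

/* Sketch of the patch 4 fix. pci_host_config_read_common() returns a
 * host-endian value; on a big-endian host, copying it byte-for-byte
 * into the reply puts the wrong byte order on the wire. */
#include "qemu/osdep.h"
#include "qemu/bswap.h"
#include "hw/pci/pci.h"
#include "hw/pci/pci_host.h"
#include "libvfio-user.h"

static ssize_t cfg_read(vfu_ctx_t *vfu_ctx, char *buf, size_t count,
                        loff_t offset)
{
    /* Assumes the device was stashed as the context's private data. */
    PCIDevice *pdev = vfu_get_private(vfu_ctx);
    uint32_t val;

    val = pci_host_config_read_common(pdev, offset, pci_config_size(pdev),
                                      count);
    stn_le_p(buf, count, val);  /* was: memcpy(buf, &val, count) */
    return count;
}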

Mattias Nissler (4):
  softmmu: Support concurrent bounce buffers
  Update subprojects/libvfio-user
  vfio-user: Message-based DMA support
  vfio-user: Fix config space access byte order

 hw/remote/trace-events        |  2 +
 hw/remote/vfio-user-obj.c     | 88 +++++++++++++++++++++++++++++++----
 include/sysemu/sysemu.h       |  2 +
 qemu-options.hx               | 27 +++++++++++
 softmmu/globals.c             |  1 +
 softmmu/physmem.c             | 84 ++++++++++++++++++---------------
 softmmu/vl.c                  |  6 +++
 subprojects/libvfio-user.wrap |  2 +-
 8 files changed, 165 insertions(+), 47 deletions(-)

-- 
2.34.1