The HVA ranges backing different guest memory regions may overlap when
their IOVA->HVA translations are mapped into the IOVA->HVA tree. This
means that a single HVA can fall within more than one guest memory
region mapping and therefore translate to more than one IOVA. This can
cause conflicts when the wrong mapping is referenced during a reverse
(HVA->IOVA) translation.
For example, consider the following mapped guest memory regions:
HVA                               GPA                          IOVA
--------------------------------  ---------------------------  ----------------------------
[0x7f7903e00000, 0x7f7983e00000)  [0x0, 0x80000000)            [0x1000, 0x80000000)
[0x7f7983e00000, 0x7f9903e00000)  [0x100000000, 0x2080000000)  [0x80001000, 0x2000001000)
[0x7f7903ea0000, 0x7f7903ec0000)  [0xfeda0000, 0xfedc0000)     [0x2000001000, 0x2000021000)
The last HVA range [0x7f7903ea0000, 0x7f7903ec0000) is contained within
the first HVA range [0x7f7903e00000, 0x7f7983e00000). Despite this, the
GPA ranges for the first and third mappings don't overlap, so the guest
sees them as different physical memory regions.
So, for example, say we're given the HVA 0x7f7903eb0000 and need to
unmap the mapping associated with this address. This HVA fits in both
the first and the third mapped HVA ranges.
When we search the IOVA->HVA tree, the lookup returns the first mapping
whose HVA range contains the given HVA. Since an IOVATree is backed by
a GLib GTree (a balanced binary search tree walked in key order), that
first match here is the first mapping, with an HVA range of
[0x7f7903e00000, 0x7f7983e00000).
However, the correct mapping to remove in this case is the third one:
translating the HVA to a GPA gives 0xfedb0000, which falls only within
the third mapping's GPA range.
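
To make the ambiguity concrete, below is a minimal sketch (not code from
this series) of such a reverse lookup, using the DMAMap and
iova_tree_find_iova() interface from include/qemu/iova-tree.h. The
helper and tree variable names are made up for illustration; the
addresses are the ones from the table above.

    #include "qemu/osdep.h"
    #include "qemu/iova-tree.h"

    /* Hypothetical helper: find the IOVA mapping that backs a given HVA. */
    static const DMAMap *find_mapping_by_hva(IOVATree *iova_hva_tree,
                                             hwaddr hva)
    {
        const DMAMap needle = {
            .translated_addr = hva, /* e.g. 0x7f7903eb0000 */
            .size = 0,              /* DMAMap sizes are inclusive; 0 = 1 byte */
        };

        /*
         * iova_tree_find_iova() walks mappings in IOVA order and returns
         * the first one whose HVA range contains the needle. With the
         * table above, that is the first mapping, IOVA
         * [0x1000, 0x80000000), even though the caller meant the third.
         */
        return iova_tree_find_iova(iova_hva_tree, &needle);
    }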
To avoid this issue, we can create an IOVA->GPA tree for guest memory
mappings and use the GPA to find the correct IOVA translation, since
GPA ranges won't overlap and will always translate to the correct IOVA.
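
As a rough sketch of that idea (the helper below and its name are
illustrative only; the helpers actually added by this series may be
named differently), the same iova_tree_find_iova() interface can be
used on a second tree whose DMAMap entries carry a GPA, rather than an
HVA, in translated_addr:

    #include "qemu/osdep.h"
    #include "qemu/iova-tree.h"

    /* Illustrative only: look up a guest-backed mapping by GPA. */
    static const DMAMap *find_mapping_by_gpa(IOVATree *iova_gpa_tree,
                                             hwaddr gpa)
    {
        const DMAMap needle = {
            .translated_addr = gpa, /* e.g. 0xfedb0000 from the example */
            .size = 0,
        };

        /*
         * GPA ranges of guest memory regions never overlap, so at most
         * one mapping can match: for 0xfedb0000 that is the third
         * mapping, IOVA [0x2000001000, 0x2000021000).
         */
        return iova_tree_find_iova(iova_gpa_tree, &needle);
    }

Addresses that are not backed by guest memory (e.g. SVQ's own buffers)
would still be translated through the existing IOVA->HVA tree, which is
why that tree is kept in full (see the RFC v2 changelog below).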
--------
This series takes a different approach from [1] and is based on [2],
where this issue was originally discovered.
RFC v2:
-------
* Don't decouple the IOVA allocator.
* Build an IOVA->GPA tree (instead of a GPA->IOVA tree).
* Remove IOVA-only tree and keep the full IOVA->HVA tree.
* Only search through RAMBlocks when we know the HVA is backed by
guest memory.
* Slight rewording of function names.
RFC v1:
-------
* Alternative approach to [1].
* First attempt at addressing the issue found in [2].
[1] https://lore.kernel.org/qemu-devel/20240410100345.389462-1-eperezma@redhat.com
[2] https://lore.kernel.org/qemu-devel/20240201180924.487579-1-eperezma@redhat.com
Jonah Palmer (2):
vhost-vdpa: Implement IOVA->GPA tree
vhost-svq: Translate guest-backed memory with IOVA->GPA tree
hw/virtio/vhost-iova-tree.c | 78 ++++++++++++++++++++++++++++--
hw/virtio/vhost-iova-tree.h | 5 ++
hw/virtio/vhost-shadow-virtqueue.c | 61 ++++++++++++++++++-----
hw/virtio/vhost-vdpa.c | 20 +++++---
4 files changed, 141 insertions(+), 23 deletions(-)
--
2.43.5