Signed-off-by: Dev Audsin <dev.devaqemu@gmail.com>
---
hw/vfio/common.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 6ff1daa763..3af70238bd 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -541,7 +541,8 @@ static int vfio_host_win_del(VFIOContainer *container, hwaddr min_iova,

static bool vfio_listener_skipped_section(MemoryRegionSection *section)
{
-    return (!memory_region_is_ram(section->mr) &&
+    return (!strcmp(memory_region_name(section->mr), "virtio-fs-cache")) ||
+           (!memory_region_is_ram(section->mr) &&
            !memory_region_is_iommu(section->mr)) ||
           /*
            * Sizing an enabled 64-bit BAR can cause spurious mappings to
--
2.25.1
virtio-fs with DAX is currently not compatible with NIC pass-through: the
VM fails to boot when the DAX cache is enabled and an SR-IOV VF is being
attached. This patch solves the problem; hence, the DAX cache and an
SR-IOV VF can be attached together.
When an SR-IOV VF is attached to a QEMU process, VFIO tries to pin the
entire DAX window, but the window is empty when the guest boots, so the
pinning fails. One way to make VFIO and DAX work together is to make
VFIO skip the DAX cache.
Currently, the DAX cache needs to be set to 0 for an SR-IOV VF to be
attached to Kata Containers. Enabling SR-IOV VF and DAX to work together
will potentially improve performance for workloads that are I/O- and
network-intensive.
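Applied, the skip check in vfio_listener_skipped_section() reads as below.
The trailing 64-bit BAR condition is not touched by this patch; it is
reproduced here from the surrounding QEMU code of this era for context:

static bool vfio_listener_skipped_section(MemoryRegionSection *section)
{
    /* New in this patch: skip the virtio-fs DAX cache window by name so
     * VFIO never tries to pin/map it for device DMA. */
    return (!strcmp(memory_region_name(section->mr), "virtio-fs-cache")) ||
           (!memory_region_is_ram(section->mr) &&
            !memory_region_is_iommu(section->mr)) ||
           /*
            * Sizing an enabled 64-bit BAR can cause spurious mappings to
            * addresses in the upper part of the 64-bit address space.
            */
           section->offset_within_address_space & (1ULL << 63);
}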
On Mon, Apr 26, 2021 at 9:24 PM Dev Audsin <dev.devaqemu@gmail.com> wrote:
> Signed-off-by: Dev Audsin <dev.devaqemu@gmail.com>
> ---
> hw/vfio/common.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 6ff1daa763..3af70238bd 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -541,7 +541,8 @@ static int vfio_host_win_del(VFIOContainer *container, hwaddr min_iova,
>
> static bool vfio_listener_skipped_section(MemoryRegionSection *section)
> {
> -    return (!memory_region_is_ram(section->mr) &&
> +    return (!strcmp(memory_region_name(section->mr), "virtio-fs-cache")) ||
> +           (!memory_region_is_ram(section->mr) &&
>             !memory_region_is_iommu(section->mr)) ||
>            /*
>             * Sizing an enabled 64-bit BAR can cause spurious mappings to
> --
> 2.25.1
>
On Mon, 26 Apr 2021 21:27:52 +0100
Dev Audsin <dev.devaqemu@gmail.com> wrote:
> virtio-fs with DAX is currently not compatible with NIC pass-through: the
> VM fails to boot when the DAX cache is enabled and an SR-IOV VF is being
> attached. This patch solves the problem; hence, the DAX cache and an
> SR-IOV VF can be attached together.
>
> When an SR-IOV VF is attached to a QEMU process, VFIO tries to pin the
> entire DAX window, but the window is empty when the guest boots, so the
> pinning fails. One way to make VFIO and DAX work together is to make
> VFIO skip the DAX cache.
>
> Currently, the DAX cache needs to be set to 0 for an SR-IOV VF to be
> attached to Kata Containers. Enabling SR-IOV VF and DAX to work together
> will potentially improve performance for workloads that are I/O- and
> network-intensive.
Please work on your patch email tooling; this is not how to provide a
commit log.
Also, this is not a qemu-trivial candidate imo. A qemu-trivial patch
should be obviously correct, not just simple in mechanics. It's not
obvious to me that simply skipping a region by name to avoid an
incompatibility is correct.
> On Mon, Apr 26, 2021 at 9:24 PM Dev Audsin <dev.devaqemu@gmail.com> wrote:
>
> > Signed-off-by: Dev Audsin <dev.devaqemu@gmail.com>
> > ---
> > hw/vfio/common.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> > index 6ff1daa763..3af70238bd 100644
> > --- a/hw/vfio/common.c
> > +++ b/hw/vfio/common.c
> > @@ -541,7 +541,8 @@ static int vfio_host_win_del(VFIOContainer *container, hwaddr min_iova,
> >
> > static bool vfio_listener_skipped_section(MemoryRegionSection *section)
> > {
> > -    return (!memory_region_is_ram(section->mr) &&
> > +    return (!strcmp(memory_region_name(section->mr), "virtio-fs-cache")) ||
> > +           (!memory_region_is_ram(section->mr) &&
> >             !memory_region_is_iommu(section->mr)) ||
> >            /*
> >             * Sizing an enabled 64-bit BAR can cause spurious mappings to
> > --
> > 2.25.1
> >
Dave Gilbert already commented that a hard-coded name comparison is not
a good solution here. There needs to be more analysis of the issue
beyond simply making the VM boot with this combination. If there's a
valid reason this particular region cannot be a device DMA target, then
advertise that reason and make vfio skip all regions with that
property. We clearly already skip non-ram and non-iommu sections, so
why is this region considered both "ram" and yet not a DMA target? The
fact that it's not populated at guest boot offers no assurance that it
couldn't later be populated and become a DMA target for the assigned
device. Thanks,
Alex
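A minimal sketch of the property-based approach Alex suggests might look
like the following. The dma_unsafe flag and its accessors are hypothetical,
not existing QEMU API, and cache_mr is an assumed name for the virtio-fs
DAX window region:

/* Hypothetical MemoryRegion property (not in QEMU): the region's owner
 * declares that the region must never be a device DMA target. */
void memory_region_set_dma_unsafe(MemoryRegion *mr, bool dma_unsafe);
bool memory_region_is_dma_unsafe(MemoryRegion *mr);

/* virtio-fs would mark its DAX cache window when creating it: */
memory_region_set_dma_unsafe(cache_mr, true);

/* VFIO then keys off the property instead of the region name: */
static bool vfio_listener_skipped_section(MemoryRegionSection *section)
{
    return memory_region_is_dma_unsafe(section->mr) ||
           (!memory_region_is_ram(section->mr) &&
            !memory_region_is_iommu(section->mr)) ||
           section->offset_within_address_space & (1ULL << 63);
}

Even then, Alex's open question stands: a region that can later be
populated and become a DMA target for the assigned device cannot safely
carry such a property.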