[PATCH v9 00/19] intel_iommu: Enable first stage translation for passthrough device

Zhenzhong Duan posted 19 patches 1 month, 3 weeks ago
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/20251215065046.86991-1-zhenzhong.duan@intel.com
Maintainers: Yi Liu <yi.l.liu@intel.com>, Eric Auger <eric.auger@redhat.com>, Zhenzhong Duan <zhenzhong.duan@intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>, Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Jason Wang <jasowang@redhat.com>, "Clément Mathieu--Drif" <clement.mathieu--drif@eviden.com>, Alex Williamson <alex@shazbot.org>, "Cédric Le Goater" <clg@redhat.com>, Fabiano Rosas <farosas@suse.de>, Laurent Vivier <lvivier@redhat.com>
There is a newer version of this series
[PATCH v9 00/19] intel_iommu: Enable first stage translation for passthrough device
Posted by Zhenzhong Duan 1 month, 3 weeks ago
Hi,

Based on Cédric's suggestions [1], the v8 nesting series has been split into a
"base nesting series" plus an "ERRATA_772415_SPR17 quirk series"; this is the
base nesting series.

For a passthrough device with intel_iommu.x-flts=on, we don't shadow the guest
page table; instead, the first stage page table is passed to the host side to
construct a nested HWPT. There was an earlier effort to enable this feature, see
[2] for details.

The key design is to utilize the dual-stage IOMMU translation (also known as
IOMMU nested translation) capability of the host IOMMU. As the diagram below shows,
the guest I/O page table pointer in GPA (guest physical address) is passed to the
host and used to perform the first stage address translation. Along with it, any
modification to present mappings in the guest I/O page table must be followed
by an IOTLB invalidation.

        .-------------.  .---------------------------.
        |   vIOMMU    |  | Guest I/O page table      |
        |             |  '---------------------------'
        .----------------/
        | PASID Entry |--- PASID cache flush --+
        '-------------'                        |
        |             |                        V
        |             |           I/O page table pointer in GPA
        '-------------'
    Guest
    ------| Shadow |---------------------------|--------
          v        v                           v
    Host
        .-------------.  .-----------------------------.
        |   pIOMMU    |  | First stage for GIOVA->GPA  |
        |             |  '-----------------------------'
        .----------------/  |
        | PASID Entry |     V (Nested xlate)
        '----------------\.--------------------------------------------.
        |             |   | Second stage for GPA->HPA, unmanaged domain|
        |             |   '--------------------------------------------'
        '-------------'
<Intel VT-d Nested translation>
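
For reference, below is a minimal standalone sketch (not the QEMU code added by
this series) of what those two steps boil down to at the Linux iommufd uAPI
level: allocating a nested HWPT that carries the guest first stage page table
pointer, and mirroring a guest IOTLB invalidation into the host. It assumes the
structures and flags from <linux/iommufd.h>; all ids, addresses and widths are
placeholders.

/*
 * Standalone sketch, not the series' QEMU code: wire a guest first stage
 * page table into a nested HWPT and flush its IOTLB through the iommufd
 * uAPI.  The parent HWPT is assumed to have been allocated with
 * IOMMU_HWPT_ALLOC_NEST_PARENT (what the VFIO default HWPT becomes in
 * this series).  All ids/addresses below are placeholders.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int alloc_nested_hwpt(int iommufd, uint32_t dev_id,
                             uint32_t parent_hwpt_id,
                             uint64_t guest_pgtbl_gpa, uint32_t *out_hwpt_id)
{
    /* Intel VT-d stage-1 data: the guest I/O page table pointer in GPA */
    struct iommu_hwpt_vtd_s1 vtd = {
        .pgtbl_addr = guest_pgtbl_gpa,
        .addr_width = 48,                   /* e.g. 4-level first stage */
    };
    struct iommu_hwpt_alloc alloc = {
        .size = sizeof(alloc),
        .dev_id = dev_id,
        .pt_id = parent_hwpt_id,            /* nesting parent (GPA->HPA) */
        .data_type = IOMMU_HWPT_DATA_VTD_S1,
        .data_len = sizeof(vtd),
        .data_uptr = (uintptr_t)&vtd,
    };

    if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &alloc)) {
        return -1;
    }
    *out_hwpt_id = alloc.out_hwpt_id;       /* attach the device to this */
    return 0;
}

static int invalidate_stage1_iotlb(int iommufd, uint32_t nested_hwpt_id,
                                   uint64_t addr, uint64_t npages)
{
    /* Mirror a guest IOTLB invalidation into the host's nested HWPT */
    struct iommu_hwpt_vtd_s1_invalidate inv = {
        .addr = addr,
        .npages = npages,
    };
    struct iommu_hwpt_invalidate cmd = {
        .size = sizeof(cmd),
        .hwpt_id = nested_hwpt_id,
        .data_uptr = (uintptr_t)&inv,
        .data_type = IOMMU_HWPT_INVALIDATE_DATA_VTD_S1,
        .entry_len = sizeof(inv),
        .entry_num = 1,
    };

    return ioctl(iommufd, IOMMU_HWPT_INVALIDATE, &cmd);
}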

This series reuses the VFIO device's default HWPT as the nesting parent instead
of creating a new one. This avoids duplicating the code of a new memory listener;
all existing features of the VFIO listener can be shared, e.g., RAM discard,
dirty tracking, etc. Two limitations are: 1) a VFIO device under a PCI bridge
together with an emulated device is not supported, because the emulated device
wants the IOMMU AS while the VFIO device sticks to the system AS; 2) kexec or
reboot from "intel_iommu=on,sm_on" to "intel_iommu=on,sm_off" is not supported
on platforms with ERRATA_772415_SPR17, because the VFIO device's default HWPT is
created with the NEST_PARENT flag and the kernel inhibits RO mappings when
switching to shadow mode.

This series is also prerequisite work for vSVA, i.e. sharing guest application
address space with passthrough devices.

There are several interactions between VFIO and the vIOMMU:
* The vIOMMU registers the PCIIOMMUOps [set|unset]_iommu_device with the
  PCI subsystem. VFIO calls them to register/unregister a HostIOMMUDevice
  instance with the vIOMMU at VFIO device realize time.
* The vIOMMU registers the PCIIOMMUOps get_viommu_flags with the PCI
  subsystem. VFIO calls it to get the flags exposed by the vIOMMU.
* The vIOMMU calls the HostIOMMUDeviceIOMMUFD interface [at|de]tach_hwpt
  to bind/unbind the device to/from IOMMUFD backed domains, nested or not.

See below diagram:

        VFIO Device                                 Intel IOMMU
    .-----------------.                         .-------------------.
    |                 |                         |                   |
    |       .---------|PCIIOMMUOps              |.-------------.    |
    |       | IOMMUFD |(set/unset_iommu_device) || Host IOMMU  |    |
    |       | Device  |------------------------>|| Device list |    |
    |       .---------|(get_viommu_flags)       |.-------------.    |
    |                 |                         |       |           |
    |                 |                         |       V           |
    |       .---------|  HostIOMMUDeviceIOMMUFD |  .-------------.  |
    |       | IOMMUFD |            (attach_hwpt)|  | Host IOMMU  |  |
    |       | link    |<------------------------|  |   Device    |  |
    |       .---------|            (detach_hwpt)|  .-------------.  |
    |                 |                         |       |           |
    |                 |                         |       ...         |
    .-----------------.                         .-------------------.
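
For completeness, here is a rough standalone sketch (again, not the QEMU
implementation) of what the attach_hwpt/detach_hwpt arrows above ultimately
translate to at the VFIO cdev uAPI level, assuming the structures from
<linux/vfio.h>; device_fd and hwpt_id are placeholders:

/*
 * Standalone sketch, not the QEMU code: the attach_hwpt / detach_hwpt
 * calls above ultimately move the VFIO cdev-backed device between HWPTs
 * (the nesting parent or the nested first stage HWPT).
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int attach_device_to_hwpt(int device_fd, uint32_t hwpt_id)
{
    struct vfio_device_attach_iommufd_pt attach = {
        .argsz = sizeof(attach),
        .pt_id = hwpt_id,   /* e.g. nested HWPT id from IOMMU_HWPT_ALLOC */
    };

    return ioctl(device_fd, VFIO_DEVICE_ATTACH_IOMMUFD_PT, &attach);
}

static int detach_device_from_hwpt(int device_fd)
{
    struct vfio_device_detach_iommufd_pt detach = {
        .argsz = sizeof(detach),
    };

    return ioctl(device_fd, VFIO_DEVICE_DETACH_IOMMUFD_PT, &detach);
}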

Below is an example of enabling first stage translation for a passthrough device:

    -M q35,...
    -device intel-iommu,x-scalable-mode=on,x-flts=on...
    -object iommufd,id=iommufd0 -device vfio-pci,iommufd=iommufd0,...

Test done:
- VFIO devices hotplug/unplug
- different VFIO devices linked to different iommufds
- vhost net device ping test
- migration with QAT passthrough

PATCH01-08: Some preparing work
PATCH09-10: Compatibility check between vIOMMU and Host IOMMU
PATCH11-16: Implement first stage translation for passthrough device
PATCH17:    Add migration support and optimization
PATCH18:    Enable first stage translation for passthrough device
PATCH19:    Add doc

QEMU code can be found at [3]; it's based on vfio-next.

Fault event injection to the guest isn't supported in this series; we presume the
guest kernel always constructs a correct first stage page table for the
passthrough device. For emulated devices, the emulation code already provides
first stage fault injection.

TODO:
- Fault event injection to guest when HW first stage page table faults

[1] https://lore.kernel.org/qemu-devel/bbc8412b-25c3-4c95-9fde-a1c9c29b54ce@redhat.com/
[2] https://patchwork.kernel.org/project/kvm/cover/20210302203827.437645-1-yi.l.liu@intel.com/
[3] https://github.com/yiliu1765/qemu/tree/zhenzhong/iommufd_nesting.v9

Thanks
Zhenzhong

Changelog:
v9:
- split v8 to base nesting series + ERRATA_772415_SPR17 series (Cédric)
- s/fs_hwpt/fs_hwpt_id, s/vtd_bind_guest_pasid/vtd_propagate_guest_pasid (Eric)
- polish error msg when CONFIG_VTD_ACCEL isn't defined (Eric)
- refactor hwpt_id assignment in vtd_device_attach_iommufd() (Eric)

v8:
- add hw/i386/intel_iommu_accel.[hc] to hold accel code (Eric)
- return bool for all vtd accel related functions (Cédric, Eric)
- introduce a new PCIIOMMUOps::get_host_iommu_quirks() (Eric, Nicolin)
- minor polish of comments and code (Cédric, Eric)
- drop some R-b as they have changes needing review again

v7:
- s/host_iommu_extract_vendor_caps/host_iommu_extract_quirks (Nicolin)
- s/RID_PASID/PASID_0 (Eric)
- drop rid2pasid check in vtd_do_iommu_translate (Eric)
- refine DID check in vtd_pasid_cache_sync_locked (Liuyi)
- refine commit log (Nicolin, Eric, Liuyi)
- Fix doc build (Cédric)
- add migration support

v6:
- delete RPS capability related supporting code (Eric, Yi)
- use terminology 'first/second stage' to replace 'first/second level' (Eric, Yi)
- use get_viommu_flags() instead of get_viommu_caps() (Nicolin)
- drop non-RID_PASID related code and simplify pasid invalidation handling (Eric, Yi)
- drop the patch that handle pasid replay when context invalidation (Eric)
- move vendor specific cap check from VFIO core to backend/iommufd.c (Nicolin)

v5:
- refine commit log of patch2 (Cédric, Nicolin)
- introduce helper vfio_pci_from_vfio_device() (Cédric)
- introduce helper vfio_device_viommu_get_nested() (Cédric)
- pass 'bool bypass_ro' argument to vfio_listener_valid_section() instead of 'VFIOContainerBase *' (Cédric)
- fix a potential build error reported by Jim Shu

v4:
- s/VIOMMU_CAP_STAGE1/VIOMMU_CAP_HW_NESTED (Eric, Nicolin, Donald, Shameer)
- clarify get_viommu_cap() return pure emulated caps and explain reason in commit log (Eric)
- retrieve the ce only if vtd_as->pasid in vtd_as_to_iommu_pasid_locked (Eric)
- refine doc comment and commit log in patch10-11 (Eric)

v3:
- define enum type for VIOMMU_CAP_* (Eric)
- drop inline flag in the patch which uses the helper (Eric)
- use extract64 in new introduced MACRO (Eric)
- polish comments and fix typo error (Eric)
- split workaround patch for ERRATA_772415_SPR17 to two patches (Eric)
- optimize bind/unbind error path processing

v2:
- introduce get_viommu_cap() to get STAGE1 flag to create nesting parent HWPT (Liuyi)
- reuse VFIO's default HWPT as parent HWPT of nested translation (Nicolin, Liuyi)
- abandon support of VFIO device under pcie-to-pci bridge to simplify design (Liuyi)
- bypass RO mapping in VFIO's default HWPT if ERRATA_772415_SPR17 (Liuyi)
- drop vtd_dev_to_context_entry optimization (Liuyi)

v1:
- simplify vendor specific checking in vtd_check_hiod (Cédric, Nicolin)
- rebase to master


Yi Liu (3):
  intel_iommu_accel: Propagate PASID-based iotlb invalidation to host
  intel_iommu: Replay all pasid bindings when either SRTP or TE bit is
    changed
  intel_iommu: Replay pasid bindings after context cache invalidation

Zhenzhong Duan (16):
  intel_iommu: Rename vtd_ce_get_rid2pasid_entry to
    vtd_ce_get_pasid_entry
  intel_iommu: Delete RPS capability related supporting code
  intel_iommu: Update terminology to match VTD spec
  hw/pci: Export pci_device_get_iommu_bus_devfn() and return bool
  hw/pci: Introduce pci_device_get_viommu_flags()
  intel_iommu: Implement get_viommu_flags() callback
  intel_iommu: Introduce a new structure VTDHostIOMMUDevice
  vfio/iommufd: Force creating nesting parent HWPT
  intel_iommu_accel: Check for compatibility with IOMMUFD backed device
    when x-flts=on
  intel_iommu_accel: Fail passthrough device under PCI bridge if
    x-flts=on
  intel_iommu_accel: Stick to system MR for IOMMUFD backed host device
    when x-flts=on
  intel_iommu: Add some macros and inline functions
  intel_iommu_accel: Bind/unbind guest page table to host
  intel_iommu: Add migration support with x-flts=on
  intel_iommu: Enable host device when x-flts=on in scalable mode
  docs/devel: Add IOMMUFD nesting documentation

 MAINTAINERS                    |   2 +
 docs/devel/vfio-iommufd.rst    |  17 ++
 hw/i386/intel_iommu_accel.h    |  51 ++++
 hw/i386/intel_iommu_internal.h | 155 +++++++---
 include/hw/i386/intel_iommu.h  |   6 +-
 include/hw/iommu.h             |  25 ++
 include/hw/pci/pci.h           |  24 ++
 include/hw/vfio/vfio-device.h  |   2 +
 hw/i386/intel_iommu.c          | 528 +++++++++++++++++++--------------
 hw/i386/intel_iommu_accel.c    | 251 ++++++++++++++++
 hw/pci/pci.c                   |  23 +-
 hw/vfio/device.c               |  12 +
 hw/vfio/iommufd.c              |   9 +
 tests/qtest/intel-iommu-test.c |   4 +-
 hw/i386/Kconfig                |   5 +
 hw/i386/meson.build            |   1 +
 hw/i386/trace-events           |   4 +
 17 files changed, 833 insertions(+), 286 deletions(-)
 create mode 100644 hw/i386/intel_iommu_accel.h
 create mode 100644 include/hw/iommu.h
 create mode 100644 hw/i386/intel_iommu_accel.c

-- 
2.47.1


Re: [PATCH v9 00/19] intel_iommu: Enable first stage translation for passthrough device
Posted by Cédric Le Goater 1 month, 3 weeks ago
Hello Zhenzhong

On 12/15/25 07:50, Zhenzhong Duan wrote:
> Hi,
> 
> Based on Cédric's suggestions[1], The nesting series v8 is split to
> "base nesting series" + "ERRATA_772415_SPR17 quirk series", this is the
> base nesting series.
> 
> For passthrough device with intel_iommu.x-flts=on, we don't do shadowing of
> guest page table but pass first stage page table to host side to construct a
> nested HWPT. There was some effort to enable this feature in old days, see
> [2] for details.
> 
> The key design is to utilize the dual-stage IOMMU translation (also known as
> IOMMU nested translation) capability in host IOMMU. As the below diagram shows,
> guest I/O page table pointer in GPA (guest physical address) is passed to host
> and be used to perform the first stage address translation. Along with it,
> modifications to present mappings in the guest I/O page table should be followed
> with an IOTLB invalidation.
> 
>          .-------------.  .---------------------------.
>          |   vIOMMU    |  | Guest I/O page table      |
>          |             |  '---------------------------'
>          .----------------/
>          | PASID Entry |--- PASID cache flush --+
>          '-------------'                        |
>          |             |                        V
>          |             |           I/O page table pointer in GPA
>          '-------------'
>      Guest
>      ------| Shadow |---------------------------|--------
>            v        v                           v
>      Host
>          .-------------.  .-----------------------------.
>          |   pIOMMU    |  | First stage for GIOVA->GPA  |
>          |             |  '-----------------------------'
>          .----------------/  |
>          | PASID Entry |     V (Nested xlate)
>          '----------------\.--------------------------------------------.
>          |             |   | Second stage for GPA->HPA, unmanaged domain|
>          |             |   '--------------------------------------------'
>          '-------------'
> <Intel VT-d Nested translation>
> 
> This series reuse VFIO device's default HWPT as nesting parent instead of
> creating new one. This way avoids duplicate code of a new memory listener,
> all existing feature from VFIO listener can be shared, e.g., ram discard,
> dirty tracking, etc. Two limitations are: 1) not supporting VFIO device
> under a PCI bridge with emulated device, because emulated device wants
> IOMMU AS and VFIO device stick to system AS; 2) not supporting kexec or
> reboot from "intel_iommu=on,sm_on" to "intel_iommu=on,sm_off" on platform
> with ERRATA_772415_SPR17, because VFIO device's default HWPT is created
> with NEST_PARENT flag, kernel inhibit RO mappings when switch to shadow
> mode.
> 
> This series is also a prerequisite work for vSVA, i.e. Sharing guest
> application address space with passthrough devices.
> 
> There are some interactions between VFIO and vIOMMU
> * vIOMMU registers PCIIOMMUOps [set|unset]_iommu_device to PCI
>    subsystem. VFIO calls them to register/unregister HostIOMMUDevice
>    instance to vIOMMU at vfio device realize stage.
> * vIOMMU registers PCIIOMMUOps get_viommu_flags to PCI subsystem.
>    VFIO calls it to get vIOMMU exposed flags.
> * vIOMMU calls HostIOMMUDeviceIOMMUFD interface [at|de]tach_hwpt
>    to bind/unbind device to IOMMUFD backed domains, either nested
>    domain or not.
> 
> See below diagram:
> 
>          VFIO Device                                 Intel IOMMU
>      .-----------------.                         .-------------------.
>      |                 |                         |                   |
>      |       .---------|PCIIOMMUOps              |.-------------.    |
>      |       | IOMMUFD |(set/unset_iommu_device) || Host IOMMU  |    |
>      |       | Device  |------------------------>|| Device list |    |
>      |       .---------|(get_viommu_flags)       |.-------------.    |
>      |                 |                         |       |           |
>      |                 |                         |       V           |
>      |       .---------|  HostIOMMUDeviceIOMMUFD |  .-------------.  |
>      |       | IOMMUFD |            (attach_hwpt)|  | Host IOMMU  |  |
>      |       | link    |<------------------------|  |   Device    |  |
>      |       .---------|            (detach_hwpt)|  .-------------.  |
>      |                 |                         |       |           |
>      |                 |                         |       ...         |
>      .-----------------.                         .-------------------.
> 
> Below is an example to enable first stage translation for passthrough device:
> 
>      -M q35,...
>      -device intel-iommu,x-scalable-mode=on,x-flts=on...
>      -object iommufd,id=iommufd0 -device vfio-pci,iommufd=iommufd0,...

What about libvirt support ? There are patches to enable IOMMUFD
support with device assignment but I don't see anything related
to first stage translation. Is there a plan ?

This raises a question. Should flts support be automatically enabled
based on the availability of an IOMMUFD backend ?
  
> 
> Test done:
> - VFIO devices hotplug/unplug
> - different VFIO devices linked to different iommufds
> - vhost net device ping test
> - migration with QAT passthrough

Did you do any experiments with active mlx5 VFs ?

Thanks,

C.


> PATCH01-08: Some preparing work
> PATCH09-10: Compatibility check between vIOMMU and Host IOMMU
> PATCH11-16: Implement first stage translation for passthrough device
> PATCH17:    Add migration support and optimization
> PATCH18:    Enable first stage translation for passthrough device
> PATCH19:    Add doc
> 
> Qemu code can be found at [3], it's based on vfio-next.
> 
> Fault event injection to guest isn't supported in this series, we presume guest
> kernel always construct correct first stage page table for passthrough device.
> For emulated devices, the emulation code already provided first stage fault
> injection.
> 
> TODO:
> - Fault event injection to guest when HW first stage page table faults
> 
> [1] https://lore.kernel.org/qemu-devel/bbc8412b-25c3-4c95-9fde-a1c9c29b54ce@redhat.com/
> [2] https://patchwork.kernel.org/project/kvm/cover/20210302203827.437645-1-yi.l.liu@intel.com/
> [3] https://github.com/yiliu1765/qemu/tree/zhenzhong/iommufd_nesting.v9
> 
> Thanks
> Zhenzhong
> 
> Changelog:
> v9:
> - split v8 to base nesting series + ERRATA_772415_SPR17 series (Cédric)
> - s/fs_hwpt/fs_hwpt_id, s/vtd_bind_guest_pasid/vtd_propagate_guest_pasid (Eric)
> - polish error msg when CONFIG_VTD_ACCEL isn't defined (Eric)
> - refactor hwpt_id assignment in vtd_device_attach_iommufd() (Eric)
> 
> v8:
> - add hw/i386/intel_iommu_accel.[hc] to hold accel code (Eric)
> - return bool for all vtd accel related functions (Cédric, Eric)
> - introduce a new PCIIOMMUOps::get_host_iommu_quirks() (Eric, Nicolin)
> - minor polishment to comment and code (Cédric, Eric)
> - drop some R-b as they have changes needing review again
> 
> v7:
> - s/host_iommu_extract_vendor_caps/host_iommu_extract_quirks (Nicolin)
> - s/RID_PASID/PASID_0 (Eric)
> - drop rid2pasid check in vtd_do_iommu_translate (Eric)
> - refine DID check in vtd_pasid_cache_sync_locked (Liuyi)
> - refine commit log (Nicolin, Eric, Liuyi)
> - Fix doc build (Cédric)
> - add migration support
> 
> v6:
> - delete RPS capability related supporting code (Eric, Yi)
> - use terminology 'first/second stage' to replace 'first/second level" (Eric, Yi)
> - use get_viommu_flags() instead of get_viommu_caps() (Nicolin)
> - drop non-RID_PASID related code and simplify pasid invalidation handling (Eric, Yi)
> - drop the patch that handle pasid replay when context invalidation (Eric)
> - move vendor specific cap check from VFIO core to backend/iommufd.c (Nicolin)
> 
> v5:
> - refine commit log of patch2 (Cédric, Nicolin)
> - introduce helper vfio_pci_from_vfio_device() (Cédric)
> - introduce helper vfio_device_viommu_get_nested() (Cédric)
> - pass 'bool bypass_ro' argument to vfio_listener_valid_section() instead of 'VFIOContainerBase *' (Cédric)
> - fix a potential build error reported by Jim Shu
> 
> v4:
> - s/VIOMMU_CAP_STAGE1/VIOMMU_CAP_HW_NESTED (Eric, Nicolin, Donald, Shameer)
> - clarify get_viommu_cap() return pure emulated caps and explain reason in commit log (Eric)
> - retrieve the ce only if vtd_as->pasid in vtd_as_to_iommu_pasid_locked (Eric)
> - refine doc comment and commit log in patch10-11 (Eric)
> 
> v3:
> - define enum type for VIOMMU_CAP_* (Eric)
> - drop inline flag in the patch which uses the helper (Eric)
> - use extract64 in new introduced MACRO (Eric)
> - polish comments and fix typo error (Eric)
> - split workaround patch for ERRATA_772415_SPR17 to two patches (Eric)
> - optimize bind/unbind error path processing
> 
> v2:
> - introduce get_viommu_cap() to get STAGE1 flag to create nesting parent HWPT (Liuyi)
> - reuse VFIO's default HWPT as parent HWPT of nested translation (Nicolin, Liuyi)
> - abandon support of VFIO device under pcie-to-pci bridge to simplify design (Liuyi)
> - bypass RO mapping in VFIO's default HWPT if ERRATA_772415_SPR17 (Liuyi)
> - drop vtd_dev_to_context_entry optimization (Liuyi)
> 
> v1:
> - simplify vendor specific checking in vtd_check_hiod (Cédric, Nicolin)
> - rebase to master
> 
> 
> Yi Liu (3):
>    intel_iommu_accel: Propagate PASID-based iotlb invalidation to host
>    intel_iommu: Replay all pasid bindings when either SRTP or TE bit is
>      changed
>    intel_iommu: Replay pasid bindings after context cache invalidation
> 
> Zhenzhong Duan (16):
>    intel_iommu: Rename vtd_ce_get_rid2pasid_entry to
>      vtd_ce_get_pasid_entry
>    intel_iommu: Delete RPS capability related supporting code
>    intel_iommu: Update terminology to match VTD spec
>    hw/pci: Export pci_device_get_iommu_bus_devfn() and return bool
>    hw/pci: Introduce pci_device_get_viommu_flags()
>    intel_iommu: Implement get_viommu_flags() callback
>    intel_iommu: Introduce a new structure VTDHostIOMMUDevice
>    vfio/iommufd: Force creating nesting parent HWPT
>    intel_iommu_accel: Check for compatibility with IOMMUFD backed device
>      when x-flts=on
>    intel_iommu_accel: Fail passthrough device under PCI bridge if
>      x-flts=on
>    intel_iommu_accel: Stick to system MR for IOMMUFD backed host device
>      when x-flts=on
>    intel_iommu: Add some macros and inline functions
>    intel_iommu_accel: Bind/unbind guest page table to host
>    intel_iommu: Add migration support with x-flts=on
>    intel_iommu: Enable host device when x-flts=on in scalable mode
>    docs/devel: Add IOMMUFD nesting documentation
> 
>   MAINTAINERS                    |   2 +
>   docs/devel/vfio-iommufd.rst    |  17 ++
>   hw/i386/intel_iommu_accel.h    |  51 ++++
>   hw/i386/intel_iommu_internal.h | 155 +++++++---
>   include/hw/i386/intel_iommu.h  |   6 +-
>   include/hw/iommu.h             |  25 ++
>   include/hw/pci/pci.h           |  24 ++
>   include/hw/vfio/vfio-device.h  |   2 +
>   hw/i386/intel_iommu.c          | 528 +++++++++++++++++++--------------
>   hw/i386/intel_iommu_accel.c    | 251 ++++++++++++++++
>   hw/pci/pci.c                   |  23 +-
>   hw/vfio/device.c               |  12 +
>   hw/vfio/iommufd.c              |   9 +
>   tests/qtest/intel-iommu-test.c |   4 +-
>   hw/i386/Kconfig                |   5 +
>   hw/i386/meson.build            |   1 +
>   hw/i386/trace-events           |   4 +
>   17 files changed, 833 insertions(+), 286 deletions(-)
>   create mode 100644 hw/i386/intel_iommu_accel.h
>   create mode 100644 include/hw/iommu.h
>   create mode 100644 hw/i386/intel_iommu_accel.c
> 


RE: [PATCH v9 00/19] intel_iommu: Enable first stage translation for passthrough device
Posted by Duan, Zhenzhong 1 month, 3 weeks ago
Hi Cédric,

>-----Original Message-----
>From: Cédric Le Goater <clg@redhat.com>
>Subject: Re: [PATCH v9 00/19] intel_iommu: Enable first stage translation for
>passthrough device
>
>Hello Zhenzhong
>
>On 12/15/25 07:50, Zhenzhong Duan wrote:
>> Hi,
>>
>> Based on Cédric's suggestions[1], The nesting series v8 is split to
>> "base nesting series" + "ERRATA_772415_SPR17 quirk series", this is the
>> base nesting series.
>>
>> For passthrough device with intel_iommu.x-flts=on, we don't do shadowing
>of
>> guest page table but pass first stage page table to host side to construct a
>> nested HWPT. There was some effort to enable this feature in old days, see
>> [2] for details.
>>
>> The key design is to utilize the dual-stage IOMMU translation (also known as
>> IOMMU nested translation) capability in host IOMMU. As the below
>diagram shows,
>> guest I/O page table pointer in GPA (guest physical address) is passed to
>host
>> and be used to perform the first stage address translation. Along with it,
>> modifications to present mappings in the guest I/O page table should be
>followed
>> with an IOTLB invalidation.
>>
>>          .-------------.  .---------------------------.
>>          |   vIOMMU    |  | Guest I/O page table      |
>>          |             |  '---------------------------'
>>          .----------------/
>>          | PASID Entry |--- PASID cache flush --+
>>          '-------------'                        |
>>          |             |                        V
>>          |             |           I/O page table pointer in GPA
>>          '-------------'
>>      Guest
>>      ------| Shadow |---------------------------|--------
>>            v        v                           v
>>      Host
>>          .-------------.  .-----------------------------.
>>          |   pIOMMU    |  | First stage for GIOVA->GPA  |
>>          |             |  '-----------------------------'
>>          .----------------/  |
>>          | PASID Entry |     V (Nested xlate)
>>          '----------------\.--------------------------------------------.
>>          |             |   | Second stage for GPA->HPA, unmanaged domain|
>>          |             |   '--------------------------------------------'
>>          '-------------'
>> <Intel VT-d Nested translation>
>>
>> This series reuse VFIO device's default HWPT as nesting parent instead of
>> creating new one. This way avoids duplicate code of a new memory
>listener,
>> all existing feature from VFIO listener can be shared, e.g., ram discard,
>> dirty tracking, etc. Two limitations are: 1) not supporting VFIO device
>> under a PCI bridge with emulated device, because emulated device wants
>> IOMMU AS and VFIO device stick to system AS; 2) not supporting kexec or
>> reboot from "intel_iommu=on,sm_on" to "intel_iommu=on,sm_off" on
>platform
>> with ERRATA_772415_SPR17, because VFIO device's default HWPT is
>created
>> with NEST_PARENT flag, kernel inhibit RO mappings when switch to shadow
>> mode.
>>
>> This series is also a prerequisite work for vSVA, i.e. Sharing guest
>> application address space with passthrough devices.
>>
>> There are some interactions between VFIO and vIOMMU
>> * vIOMMU registers PCIIOMMUOps [set|unset]_iommu_device to PCI
>>    subsystem. VFIO calls them to register/unregister HostIOMMUDevice
>>    instance to vIOMMU at vfio device realize stage.
>> * vIOMMU registers PCIIOMMUOps get_viommu_flags to PCI subsystem.
>>    VFIO calls it to get vIOMMU exposed flags.
>> * vIOMMU calls HostIOMMUDeviceIOMMUFD interface [at|de]tach_hwpt
>>    to bind/unbind device to IOMMUFD backed domains, either nested
>>    domain or not.
>>
>> See below diagram:
>>
>>          VFIO Device                                 Intel IOMMU
>>      .-----------------.                         .-------------------.
>>      |                 |                         |                   |
>>      |       .---------|PCIIOMMUOps              |.-------------.    |
>>      |       | IOMMUFD |(set/unset_iommu_device) || Host IOMMU  |    |
>>      |       | Device  |------------------------>|| Device list |    |
>>      |       .---------|(get_viommu_flags)       |.-------------.    |
>>      |                 |                         |       |           |
>>      |                 |                         |       V           |
>>      |       .---------|  HostIOMMUDeviceIOMMUFD |  .-------------.  |
>>      |       | IOMMUFD |            (attach_hwpt)|  | Host IOMMU  |  |
>>      |       | link    |<------------------------|  |   Device    |  |
>>      |       .---------|            (detach_hwpt)|  .-------------.  |
>>      |                 |                         |       |           |
>>      |                 |                         |       ...         |
>>      .-----------------.                         .-------------------.
>>
>> Below is an example to enable first stage translation for passthrough device:
>>
>>      -M q35,...
>>      -device intel-iommu,x-scalable-mode=on,x-flts=on...
>>      -object iommufd,id=iommufd0 -device vfio-pci,iommufd=iommufd0,...
>
>What about libvirt support ? There are patches to enable IOMMUFD
>support with device assignment but I don't see anything related
>to first stage translation. Is there a plan ?

I think IOMMUFD support in libvirt is non-trivial; good to know there is progress.
But I didn't find a match in the libvirt mailing list, https://lists.libvirt.org/archives/search?q=iommufd
Do you have a link?

I think first stage support is trivial, it only needs to support a new property <...x-flts=on/off>.
I can ask my manager for some time to work on it after this series is merged.
Anyone interested is also welcome to take it.

>
>This raises a question. Should ftls support be automatically enabled
>based on the availability of an IOMMUFD backend ?

Yes, if the user doesn't force it off, e.g., <...iommufd='off'>, and an IOMMUFD backend is available, we can enable it automatically.

>
>>
>> Test done:
>> - VFIO devices hotplug/unplug
>> - different VFIO devices linked to different iommufds
>> - vhost net device ping test
>> - migration with QAT passthrough
>
>Did you do any experiments with active mlx5 VFs ?

No, only a few device drivers support VFIO migration, and we only have QAT.
Let me know if you see issues on other devices.

Thanks
Zhenzhong
Re: [PATCH v9 00/19] intel_iommu: Enable first stage translation for passthrough device
Posted by Cédric Le Goater 1 month, 3 weeks ago
Hello Zhenzhong,

On 12/16/25 04:24, Duan, Zhenzhong wrote:
> Hi Cédric,
> 
>> -----Original Message-----
>> From: Cédric Le Goater <clg@redhat.com>
>> Subject: Re: [PATCH v9 00/19] intel_iommu: Enable first stage translation for
>> passthrough device
>>
>> Hello Zhenzhong
>>
>> On 12/15/25 07:50, Zhenzhong Duan wrote:
>>> Hi,
>>>
>>> Based on Cédric's suggestions[1], The nesting series v8 is split to
>>> "base nesting series" + "ERRATA_772415_SPR17 quirk series", this is the
>>> base nesting series.
>>>
>>> For passthrough device with intel_iommu.x-flts=on, we don't do shadowing
>> of
>>> guest page table but pass first stage page table to host side to construct a
>>> nested HWPT. There was some effort to enable this feature in old days, see
>>> [2] for details.
>>>
>>> The key design is to utilize the dual-stage IOMMU translation (also known as
>>> IOMMU nested translation) capability in host IOMMU. As the below
>> diagram shows,
>>> guest I/O page table pointer in GPA (guest physical address) is passed to
>> host
>>> and be used to perform the first stage address translation. Along with it,
>>> modifications to present mappings in the guest I/O page table should be
>> followed
>>> with an IOTLB invalidation.
>>>
>>>           .-------------.  .---------------------------.
>>>           |   vIOMMU    |  | Guest I/O page table      |
>>>           |             |  '---------------------------'
>>>           .----------------/
>>>           | PASID Entry |--- PASID cache flush --+
>>>           '-------------'                        |
>>>           |             |                        V
>>>           |             |           I/O page table pointer in GPA
>>>           '-------------'
>>>       Guest
>>>       ------| Shadow |---------------------------|--------
>>>             v        v                           v
>>>       Host
>>>           .-------------.  .-----------------------------.
>>>           |   pIOMMU    |  | First stage for GIOVA->GPA  |
>>>           |             |  '-----------------------------'
>>>           .----------------/  |
>>>           | PASID Entry |     V (Nested xlate)
>>>           '----------------\.--------------------------------------------.
>>>           |             |   | Second stage for GPA->HPA, unmanaged domain|
>>>           |             |   '--------------------------------------------'
>>>           '-------------'
>>> <Intel VT-d Nested translation>
>>>
>>> This series reuse VFIO device's default HWPT as nesting parent instead of
>>> creating new one. This way avoids duplicate code of a new memory
>> listener,
>>> all existing feature from VFIO listener can be shared, e.g., ram discard,
>>> dirty tracking, etc. Two limitations are: 1) not supporting VFIO device
>>> under a PCI bridge with emulated device, because emulated device wants
>>> IOMMU AS and VFIO device stick to system AS; 2) not supporting kexec or
>>> reboot from "intel_iommu=on,sm_on" to "intel_iommu=on,sm_off" on
>> platform
>>> with ERRATA_772415_SPR17, because VFIO device's default HWPT is
>> created
>>> with NEST_PARENT flag, kernel inhibit RO mappings when switch to shadow
>>> mode.
>>>
>>> This series is also a prerequisite work for vSVA, i.e. Sharing guest
>>> application address space with passthrough devices.
>>>
>>> There are some interactions between VFIO and vIOMMU
>>> * vIOMMU registers PCIIOMMUOps [set|unset]_iommu_device to PCI
>>>     subsystem. VFIO calls them to register/unregister HostIOMMUDevice
>>>     instance to vIOMMU at vfio device realize stage.
>>> * vIOMMU registers PCIIOMMUOps get_viommu_flags to PCI subsystem.
>>>     VFIO calls it to get vIOMMU exposed flags.
>>> * vIOMMU calls HostIOMMUDeviceIOMMUFD interface [at|de]tach_hwpt
>>>     to bind/unbind device to IOMMUFD backed domains, either nested
>>>     domain or not.
>>>
>>> See below diagram:
>>>
>>>           VFIO Device                                 Intel IOMMU
>>>       .-----------------.                         .-------------------.
>>>       |                 |                         |                   |
>>>       |       .---------|PCIIOMMUOps              |.-------------.    |
>>>       |       | IOMMUFD |(set/unset_iommu_device) || Host IOMMU  |    |
>>>       |       | Device  |------------------------>|| Device list |    |
>>>       |       .---------|(get_viommu_flags)       |.-------------.    |
>>>       |                 |                         |       |           |
>>>       |                 |                         |       V           |
>>>       |       .---------|  HostIOMMUDeviceIOMMUFD |  .-------------.  |
>>>       |       | IOMMUFD |            (attach_hwpt)|  | Host IOMMU  |  |
>>>       |       | link    |<------------------------|  |   Device    |  |
>>>       |       .---------|            (detach_hwpt)|  .-------------.  |
>>>       |                 |                         |       |           |
>>>       |                 |                         |       ...         |
>>>       .-----------------.                         .-------------------.
>>>
>>> Below is an example to enable first stage translation for passthrough device:
>>>
>>>       -M q35,...
>>>       -device intel-iommu,x-scalable-mode=on,x-flts=on...
>>>       -object iommufd,id=iommufd0 -device vfio-pci,iommufd=iommufd0,...
>>
>> What about libvirt support ? There are patches to enable IOMMUFD
>> support with device assignment but I don't see anything related
>> to first stage translation. Is there a plan ?
> 
> I think IOMMUFD support in libvirt is non-trivial, good to know there is progress.
> But I didn't find a match in libvirt mailing list, https://lists.libvirt.org/archives/search?q=iommufd
> Do you have a link?

Here:

   https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/KFYUQGMXWV64QPI245H66GKRNAYL7LGB/

There might be an update. We should ask Nathan.

> I think first stage support is trivial, only to support a new property <...x-flts=on/off>.
> I can apply a few time resource from my manager to work on it after this series is merged.
> It's also welcome if anyone is interested to take it.

ok. So, currently, we have no way to benefit from translation
acceleration on the host unless we directly set the 'x-flts'
property on the QEMU command line.

>> This raises a question. Should flts support be automatically enabled
>> based on the availability of an IOMMUFD backend ?
> 
> Yes, if user doesn't force it off, like <...iommufd='off'> and IOMMUFD backend available, we can enable it automatically.

The plan is to keep VFIO IOMMU Type1 as the default host IOMMU
backend to maintain a consistent behavior. If an IOMMUFD backend
is required, it should be set explicitly. One day we might revisit
this choice and change the default. Not yet.


>>> Test done:
>>> - VFIO devices hotplug/unplug
>>> - different VFIO devices linked to different iommufds
>>> - vhost net device ping test
>>> - migration with QAT passthrough
>>
>> Did you do any experiments with active mlx5 VFs ?
> 
> No, there are only a few device drivers supporting VFIO migration and we only have QAT.
> Let me know if you see issue on other devices.
Since we lack libvirt integration (of flts), the tests need
to be run manually, which is more complex for QE. IOW, it will
take more time, but we should definitely evaluate other devices.


Thanks,

C.


RE: [PATCH v9 00/19] intel_iommu: Enable first stage translation for passthrough device
Posted by Duan, Zhenzhong 1 month, 3 weeks ago

>-----Original Message-----
>From: Cédric Le Goater <clg@redhat.com>
>Subject: Re: [PATCH v9 00/19] intel_iommu: Enable first stage translation for
>passthrough device

...

>>>> Below is an example to enable first stage translation for passthrough device:
>>>>
>>>>       -M q35,...
>>>>       -device intel-iommu,x-scalable-mode=on,x-flts=on...
>>>>       -object iommufd,id=iommufd0 -device vfio-pci,iommufd=iommufd0,...
>>>
>>> What about libvirt support ? There are patches to enable IOMMUFD
>>> support with device assignment but I don't see anything related
>>> to first stage translation. Is there a plan ?
>>
>> I think IOMMUFD support in libvirt is non-trivial, good to know there is
>progress.
>> But I didn't find a match in libvirt mailing list,
>https://lists.libvirt.org/archives/search?q=iommufd
>> Do you have a link?
>
>Here  :
>
>
>https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/KFYUQGMX
>WV64QPI245H66GKRNAYL7LGB/

Thanks

>
>There might be an update. We should ask Nathan.
>
>> I think first stage support is trivial, only to support a new property
><...x-flts=on/off>.
>> I can apply a few time resource from my manager to work on it after this
>series is merged.
>> It's also welcome if anyone is interested to take it.
>
>ok. So, currently, we have no way to benefit from translation
>acceleration on the host unless we directly set the 'x-flts'
>property on the QEMU command line.

Yes, thanks for the reminder.
I'll try to add 'x-flts' support to libvirt soon to fill the gap.
I will take one week of vacation starting this Friday, so I may try it after the vacation.

>
>>> This raises a question. Should flts support be automatically enabled
>>> based on the availability of an IOMMUFD backend ?
>>
>> Yes, if user doesn't force it off, like <...iommufd='off'> and IOMMUFD
>backend available, we can enable it automatically.
>
>The plan is to keep VFIO IOMMU Type1 as the default host IOMMU
>backend to maintain a consistent behavior. If an IOMMUFD backend
>is required, it should be set explicitly. One day we might revisit
>this choice and change the default. Not yet.

OK, maybe we need to maintain consistent behavior for intel_iommu too:
if first stage is required, it should be set explicitly; if not set, default to second stage (shadow paging).

>
>
>>>> Test done:
>>>> - VFIO devices hotplug/unplug
>>>> - different VFIO devices linked to different iommufds
>>>> - vhost net device ping test
>>>> - migration with QAT passthrough
>>>
>>> Did you do any experiments with active mlx5 VFs ?
>>
>> No, there are only a few device drivers supporting VFIO migration and we
>only have QAT.
>> Let me know if you see issue on other devices.
>Since we lack libvirt integration (of flts), the tests need
>to be run manually which is more complex for QE. IOW, it will
>take more time but we should definitely evaluate other devices.

Oh, if you mean testing the nesting feature, we did play with the different devices we had:
ixgbevf, ICE VF, DSA and QAT. For VFIO migration with nesting, we only tested QAT.

Thanks
Zhenzhong