[RFC PATCH v2 0/4] Add migration support for VFIO PCI devices in SMMUv3 nested stage mode

Posted by Kunkun Jiang 3 years ago
Hi all,

Since SMMUv3 nested translation stages[1] have been introduced by Eric, we need
to pay attention to the migration of VFIO PCI devices in SMMUv3 nested stage
mode. At present, this is not yet supported in QEMU. There are two problems in
the existing framework.

First, the current way of getting dirty pages does not apply to nested stage
mode. Because of "Caching Mode", VT-d can map RAM through a single host stage
(giova->hpa), and "vfio_listener_log_sync" gets dirty pages by passing the
"giova" of each mapped RAM section to the kernel. In nested stage mode, stage 2
(gpa->hpa) and stage 1 (giova->gpa) are set up separately, so dirty pages
cannot be obtained this way.
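
As a rough sketch (not the exact code of patches 1-2), the idea is to sync
dirty pages from the prereg (stage 2) listener, per RAM section and by gpa.
vfio_get_dirty_bitmap() is the existing helper in hw/vfio/common.c; the wrapper
below is only illustrative:

#include "qemu/osdep.h"
#include "exec/ram_addr.h"
#include "hw/vfio/vfio-common.h"

static void vfio_prereg_listener_log_sync(MemoryListener *listener,
                                          MemoryRegionSection *section)
{
    VFIOContainer *container =
        container_of(listener, VFIOContainer, prereg_listener);

    /* Only RAM sections carry the stage 2 (gpa->hpa) mappings. */
    if (!memory_region_is_ram(section->mr) ||
        !container->dirty_pages_supported) {
        return;
    }

    /* Ask the kernel for the dirty bitmap by gpa and mark RAM dirty. */
    vfio_get_dirty_bitmap(container,
                          section->offset_within_address_space,
                          int128_get64(section->size),
                          memory_region_get_ram_addr(section->mr) +
                          section->offset_within_region);
}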

Second, we also need to pass the stage 1 configurations to the destination
host after migration. In Eric's series, the stage 1 configuration is passed to
the host on each STE update, for devices that set the PASID PciOps. The
configuration is applied at the physical level, but that physical-level data
is not sent to the destination host. So we have to pass the stage 1
configurations to the destination host again after migration.
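
For the second problem, the assumed shape (a sketch, not the series' exact
code) is a post_load hook on vmstate_smmuv3 that replays the cached stage 1
configs to the host once migration completes; smmuv3_replay_stage1_configs()
below is a placeholder name and the VMState fields are elided:

#include "qemu/osdep.h"
#include "migration/vmstate.h"
#include "hw/arm/smmuv3.h"

/* Hypothetical helper: walk the config cache and re-issue the host
 * set-pasid-table calls, as a normal STE/CD update would. */
static int smmuv3_replay_stage1_configs(SMMUv3State *s);

static int smmuv3_post_load(void *opaque, int version_id)
{
    SMMUv3State *s = opaque;

    /* Re-send the cached stage 1 configs to the physical SMMU. */
    return smmuv3_replay_stage1_configs(s);
}

static const VMStateDescription vmstate_smmuv3 = {
    .name = "smmuv3",
    .version_id = 1,
    .minimum_version_id = 1,
    .priority = MIG_PRI_IOMMU,
    .post_load = smmuv3_post_load,
    .fields = (VMStateField[]) {
        /* existing SMMUv3 register state elided in this sketch */
        VMSTATE_END_OF_LIST()
    }
};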

Best Regards,
Kunkun Jiang

[1] [RFC,v8,00/28] vSMMUv3/pSMMUv3 2 stage VFIO integration
http://patchwork.ozlabs.org/project/qemu-devel/cover/20210225105233.650545-1-eric.auger@redhat.com/

This patch set includes the following patches:
Patches 1-2:
- Refactor vfio_listener_log_sync and add a new function to get dirty pages in
nested stage mode.

Patch 3:
- Add a global_log_start/stop interface to vfio_memory_prereg_listener (a
sketch follows this list).

Patch 4:
- Add a post_load function to vmstate_smmuv3 to pass the stage 1 configurations
to the destination host after migration.
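
The sketch referenced from the Patch 3 item above, assuming the existing
vfio_set_dirty_page_tracking() helper in hw/vfio/common.c (next to the
log_sync sketch earlier); the hook names follow the patch title and the exact
wiring is only illustrative:

#include "qemu/osdep.h"
#include "hw/vfio/vfio-common.h"

/* Toggle host dirty page tracking from the stage 2 (RAM) side. */
static void vfio_prereg_listener_log_global_start(MemoryListener *listener)
{
    VFIOContainer *container =
        container_of(listener, VFIOContainer, prereg_listener);

    vfio_set_dirty_page_tracking(container, true);
}

static void vfio_prereg_listener_log_global_stop(MemoryListener *listener)
{
    VFIOContainer *container =
        container_of(listener, VFIOContainer, prereg_listener);

    vfio_set_dirty_page_tracking(container, false);
}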

History:

v1 -> v2:
- Add global_log_start/stop interface in vfio_memory_prereg_listener
- Add support for re-passing stage 1 configs with multiple CDs to the host

Kunkun Jiang (4):
  vfio: Introduce helpers to mark dirty pages of a RAM section
  vfio: Add vfio_prereg_listener_log_sync in nested stage
  vfio: Add vfio_prereg_listener_global_log_start/stop in nested stage
  hw/arm/smmuv3: Post-load stage 1 configurations to the host

 hw/arm/smmuv3.c     | 62 +++++++++++++++++++++++++++++++++++++++
 hw/arm/trace-events |  1 +
 hw/vfio/common.c    | 71 ++++++++++++++++++++++++++++++++++++++++-----
 3 files changed, 126 insertions(+), 8 deletions(-)

-- 
2.23.0


Re: [RFC PATCH v2 0/4] Add migration support for VFIO PCI devices in SMMUv3 nested stage mode
Posted by Kunkun Jiang 2 years, 11 months ago
Hi all,

This series has been updated to v3.[1]
Any comments and reviews are welcome.

Thanks,
Kunkun Jiang

[1] [RFC PATCH v3 0/4] Add migration support for VFIO PCI devices in SMMUv3 nested mode
https://lore.kernel.org/qemu-devel/20210511020816.2905-1-jiangkunkun@huawei.com/
