[PATCH Kernel v24 0/8] Add UAPIs to support migration for VFIO devices
Posted by Kirti Wankhede 3 years, 11 months ago
Hi,

This patch set adds:
* IOCTL VFIO_IOMMU_DIRTY_PAGES to get the dirty pages bitmap with
  respect to the IOMMU container rather than per device. All pages pinned
  by the vendor driver through the vfio_pin_pages() external API have to
  be marked as dirty during migration. When an IOMMU-capable device is
  present in the container and all pages are pinned and mapped, then all
  pages are marked dirty.
  When there are CPU writes, CPU dirty page tracking can identify dirtied
  pages, but any page pinned by the vendor driver can also be written by
  the device. As of now there is no device which has hardware support for
  dirty page tracking, so all pinned pages should be considered dirty.
  This ioctl is also used to start/stop dirty page tracking for pinned and
  unpinned pages while migration is active (a usage sketch follows this
  list).

* Updated IOCTL VFIO_IOMMU_UNMAP_DMA to get the dirty pages bitmap before
  unmapping an IO virtual address range.
  With a vIOMMU, during the pre-copy phase of migration, while CPUs are
  still running, an IO virtual address unmap can happen while the device
  still holds references to guest pfns. Those pages should be reported as
  dirty before the unmap, so that the VFIO userspace application can copy
  the content of those pages from source to destination.

* Patch 8 detects whether the driver of an IOMMU-backed device is smart
  enough to report the pages to be marked dirty, by pinning pages through
  the vfio_pin_pages() API.

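As a rough illustration of how userspace drives the two ioctls above, here
is a minimal sketch against the UAPI definitions merged for v5.8
(the struct names match the series; the flag names are assumed from the
merged include/uapi/linux/vfio.h). The container fd and the
iova/size/pgsize values are assumed to come from an already configured
type1 container, and error handling is trimmed:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* One bit per page, padded to u64 granularity as the kernel expects. */
static __u64 bitmap_bytes(__u64 size, __u64 pgsize)
{
	return (((size / pgsize) + 63) / 64) * 8;
}

/* Start container-wide dirty page logging; STOP works the same way. */
static int dirty_tracking_start(int container_fd)
{
	struct vfio_iommu_type1_dirty_bitmap db = {
		.argsz = sizeof(db),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
	};

	return ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &db);
}

/* Read the dirty bitmap of one IOVA range into 'bits' (which must hold
 * bitmap_bytes(size, pgsize) bytes); the range descriptor lives in the
 * flexible data[] member of the ioctl argument. */
static int dirty_bitmap_get(int container_fd, __u64 iova, __u64 size,
			    __u64 pgsize, __u64 *bits)
{
	struct vfio_iommu_type1_dirty_bitmap *db;
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	int ret;

	db = calloc(1, sizeof(*db) + sizeof(*range));
	db->argsz = sizeof(*db) + sizeof(*range);
	db->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;

	range = (struct vfio_iommu_type1_dirty_bitmap_get *)db->data;
	range->iova = iova;
	range->size = size;
	range->bitmap.pgsize = pgsize;
	range->bitmap.size = bitmap_bytes(size, pgsize);
	range->bitmap.data = bits;

	ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, db);
	free(db);
	return ret;
}

/* Unmap an IOVA range and collect its dirty bitmap in the same call,
 * as needed for vIOMMU unmaps during pre-copy. */
static int unmap_get_dirty(int container_fd, __u64 iova, __u64 size,
			   __u64 pgsize, __u64 *bits)
{
	struct vfio_iommu_type1_dma_unmap *unmap;
	struct vfio_bitmap *vbmp;
	int ret;

	unmap = calloc(1, sizeof(*unmap) + sizeof(*vbmp));
	unmap->argsz = sizeof(*unmap) + sizeof(*vbmp);
	unmap->flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
	unmap->iova = iova;
	unmap->size = size;

	vbmp = (struct vfio_bitmap *)unmap->data;
	vbmp->pgsize = pgsize;
	vbmp->size = bitmap_bytes(size, pgsize);
	vbmp->data = bits;

	ret = ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, unmap);
	free(unmap);
	return ret;
}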

Yet TODO:
Since there is no device which has hardware support for system memory
dirty bitmap tracking, right now there is no other API from the vendor
driver to the VFIO IOMMU module to report dirty pages. In the future,
when such hardware support is implemented, an API will be required so
that the vendor driver can report dirty pages to the VFIO module during
migration phases.

v23 -> v24
- Fixed nit picks by Cornelia
- Fixed warning reported by test robot.

v22 -> v23
- Fixed issue reported by Yan
https://lore.kernel.org/kvm/97977ede-3c5b-c5a5-7858-7eecd7dd531c@nvidia.com/
- Fixed nit picks suggested by Cornelia

v21 -> v22
- Fixed issue raised by Alex :
https://lore.kernel.org/kvm/20200515163307.72951dd2@w520.home/

v20 -> v21
- Added a check in the GET_BITMAP ioctl for vfio_dma boundaries.
- Updated unmap ioctl function - as suggested by Alex.
- Updated comments in DIRTY_TRACKING ioctl definition - as suggested by
  Cornelia.

v19 -> v20
- Fixed ioctl to get dirty bitmap to get bitmap of multiple vfio_dmas
- Fixed unmap ioctl to get dirty bitmap of multiple vfio_dmas.
- Removed flag definition from migration capability.

v18 -> v19
- Updated migration capability with supported page sizes bitmap for dirty
  page tracking and maximum bitmap size supported by the kernel module.
- Added patch to calculate and cache pgsize_bitmap when iommu->domain_list
  is updated.
- Removed extra buffers added in previous version for bitmap manipulation
  and optimised the code.

v17 -> v18
- Add migration capability to the capability chain for VFIO_IOMMU_GET_INFO
  ioctl
- Updated UNMAP_DMA ioctl to return bitmap of multiple vfio_dma

v16 -> v17
- Fixed errors reported by kbuild test robot <lkp@intel.com> on i386

v15 -> v16
- Minor edits and nit picks (Auger Eric)
- On copying the bitmap to user space, re-populated the bitmap only for
  pinned pages, excluding unmapped pages and CPU-dirtied pages.
- Patches are based on tag next-20200318 and patches 1-3 from Yan's series
  https://lkml.org/lkml/2020/3/12/1255

v14 -> v15
- Minor edits and nit picks.
- In the verification of user-allocated bitmap memory, added a check of
  the maximum size.
- Patches are based on tag next-20200318 and patches 1-3 from Yan's series
  https://lkml.org/lkml/2020/3/12/1255

v13 -> v14
- Added struct vfio_bitmap to the kABI; updated structures
  vfio_iommu_type1_dirty_bitmap_get and vfio_iommu_type1_dma_unmap.
- All small changes suggested by Alex.
- Patches are based on tag next-20200318 and patches 1-3 from Yan's series
  https://lkml.org/lkml/2020/3/12/1255

v12 -> v13
- Changed bitmap allocation in vfio_iommu_type1 to per vfio_dma
- Changed VFIO_IOMMU_DIRTY_PAGES ioctl behaviour to be per vfio_dma range.
- Changed vfio_iommu_type1_dirty_bitmap structure to have separate data
  field.

v11 -> v12
- Changed bitmap allocation in vfio_iommu_type1.
- Remove atomicity of ref_count.
- Updated comments for migration device state structure about error
  reporting.
- Nit picks from v11 reviews

v10 -> v11
- Fixed the pin pages API to free the vpfn if it is marked as an unpinned
  tracking page.
- Added proposal to detect if IOMMU capable device calls external pin pages
  API to mark pages dirty.
- Nit picks from v10 reviews

v9 -> v10:
- Updated existing VFIO_IOMMU_UNMAP_DMA ioctl to get the dirty pages
  bitmap during unmap while migration is active.
- Added flag in VFIO_IOMMU_GET_INFO to indicate that the driver supports
  dirty page tracking.
- If iommu_mapped, mark all pages dirty.
- Added unpinned pages tracking while migration is active.
- Updated comments for migration device state structure with bit
  combination table and state transition details.

v8 -> v9:
- Split patch set in 2 sets, Kernel and QEMU.
- Dirty pages bitmap is queried from the IOMMU container rather than from
  the vendor driver per device. Added 2 ioctls to achieve this.

v7 -> v8:
- Updated comments for KABI
- Added BAR address validation check during PCI device's config space load
  as suggested by Dr. David Alan Gilbert.
- Changed vfio_migration_set_state() to set or clear device state flags.
- Some nit fixes.

v6 -> v7:
- Fix build failures.

v5 -> v6:
- Fix build failure.

v4 -> v5:
- Added descriptive comment about the sequence of access of members of
  structure vfio_device_migration_info to be followed, based on Alex's
  suggestion.
- Updated get dirty pages sequence.
- As per Cornelia Huck's suggestion, added get_object, save_config and
  load_config callbacks to VFIODeviceOps.
- Fixed multiple nit picks.
- Tested live migration with multiple vfio devices assigned to a VM.

v3 -> v4:
- Added one more bit for _RESUMING flag to be set explicitly.
- data_offset field is read-only for the user space application.
- data_size is read on every iteration before reading data from the
  migration region, removing the assumption that data extends to the end
  of the migration region.
- If the vendor driver supports mappable sparse regions, map those regions
  during the setup state of save/load, and similarly unmap them from the
  cleanup routines.
- Handled race condition that caused data corruption in the migration
  region during save of device state, by adding a mutex and serializing
  the save_buffer and get_dirty_pages routines.
- Skip calling the get_dirty_pages routine for the device's mapped MMIO
  region.
- Added trace events.
- Split into multiple functional patches.

v2 -> v3:
- Removed enum of VFIO device states. Defined VFIO device state with 2
  bits.
- Re-structured vfio_device_migration_info to keep it minimal and defined
  action on read and write access on its members.

v1 -> v2:
- Defined MIGRATION region type and sub-type which should be used with
  the region type capability.
- Re-structured vfio_device_migration_info. This structure will be placed
  at the 0th offset of the migration region.
- Replaced ioctl with read/write for the trapped part of the migration
  region.
- Added both types of access support, trapped or mmapped, for the data
  section of the region.
- Moved PCI device functions to the pci file.
- Added iteration to get the dirty page bitmap until bitmaps for all
  requested pages are copied.

Thanks,
Kirti



Kirti Wankhede (8):
  vfio: UAPI for migration interface for device state
  vfio iommu: Remove atomicity of ref_count of pinned pages
  vfio iommu: Cache pgsize_bitmap in struct vfio_iommu
  vfio iommu: Add ioctl definition for dirty pages tracking
  vfio iommu: Implementation of ioctl for dirty pages tracking
  vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  vfio iommu: Add migration capability to report supported features
  vfio: Selective dirty page tracking if IOMMU backed device pins pages

 drivers/vfio/vfio.c             |  13 +-
 drivers/vfio/vfio_iommu_type1.c | 572 ++++++++++++++++++++++++++++++++++++----
 include/linux/vfio.h            |   4 +-
 include/uapi/linux/vfio.h       | 319 ++++++++++++++++++++++
 4 files changed, 849 insertions(+), 59 deletions(-)

-- 
2.7.0


Re: [PATCH Kernel v24 0/8] Add UAPIs to support migration for VFIO devices
Posted by Alex Williamson 3 years, 11 months ago
On Fri, 29 May 2020 02:00:46 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Hi,
> 
> This patch set adds:
> * IOCTL VFIO_IOMMU_DIRTY_PAGES to get the dirty pages bitmap with
>   respect to the IOMMU container rather than per device. All pages pinned
>   by the vendor driver through the vfio_pin_pages() external API have to
>   be marked as dirty during migration. When an IOMMU-capable device is
>   present in the container and all pages are pinned and mapped, then all
>   pages are marked dirty.
> [...]
> 
> v23 -> v24
> - Fixed nit picks by Cornelia
> - Fixed warning reported by test robot.

Applied to my next branch for v5.8.  Thanks for your persistence and
for everyone's participation!  Thanks,

Alex


Re: [PATCH Kernel v24 0/8] Add UAPIs to support migration for VFIO devices
Posted by Stefan Hajnoczi 3 years, 7 months ago
On Fri, May 29, 2020 at 02:00:46AM +0530, Kirti Wankhede wrote:
> * IOCTL VFIO_IOMMU_DIRTY_PAGES to get the dirty pages bitmap with
>   respect to the IOMMU container rather than per device. All pages pinned
>   by the vendor driver through the vfio_pin_pages() external API have to
>   be marked as dirty during migration. When an IOMMU-capable device is
>   present in the container and all pages are pinned and mapped, then all
>   pages are marked dirty.

From what I can tell, only the IOMMU participates in dirty page tracking.
This places the responsibility for dirty page tracking on IOMMUs, yet my
understanding is that dirty page tracking support is currently not
available in IOMMUs.

Can a PCI device implement its own DMA dirty log and let an mdev driver
implement the dirty page tracking using this mechanism? That way we
don't need to treat all pinned pages as dirty all the time.

Stefan
Re: [PATCH Kernel v24 0/8] Add UAPIs to support migration for VFIO devices
Posted by Alex Williamson 3 years, 7 months ago
On Tue, 29 Sep 2020 09:27:02 +0100
Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Fri, May 29, 2020 at 02:00:46AM +0530, Kirti Wankhede wrote:
> > * IOCTL VFIO_IOMMU_DIRTY_PAGES to get the dirty pages bitmap with
> >   respect to the IOMMU container rather than per device. All pages pinned
> >   by the vendor driver through the vfio_pin_pages() external API have to
> >   be marked as dirty during migration. When an IOMMU-capable device is
> >   present in the container and all pages are pinned and mapped, then all
> >   pages are marked dirty.
> 
> From what I can tell, only the IOMMU participates in dirty page tracking.
> This places the responsibility for dirty page tracking on IOMMUs, yet my
> understanding is that dirty page tracking support is currently not
> available in IOMMUs.
> 
> Can a PCI device implement its own DMA dirty log and let an mdev driver
> implement the dirty page tracking using this mechanism? That way we
> don't need to treat all pinned pages as dirty all the time.

Look at the last patch in this series; there we define a mechanism
whereby the act of a vendor driver pinning pages both marks those pages
dirty and indicates a mode in the vfio type1 container where the scope
of dirty pages is limited to those pages pinned by the driver.  The
vfio_dma_rw() interface does the same.  We could clearly implement a
more lightweight interface for this as well, one without pinning or
memory access, but there are no proposed users for such an interface
currently.  Thanks,

Alex
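
For concreteness, the pinning interface referenced above is the
kernel-internal vendor-driver API. A schematic mdev-driver fragment
against the v5.8-era signatures might look as follows; the my_mdev_*
helpers are illustrative, not taken from any real driver:

#include <linux/iommu.h>
#include <linux/mdev.h>
#include <linux/vfio.h>

/* Pin guest pages for device DMA.  Beyond pinning, each page pinned
 * through vfio_pin_pages() is recorded as dirty by the type1 container,
 * which is what allows the container to narrow its dirty scope. */
static int my_mdev_pin_guest_pages(struct mdev_device *mdev,
				   unsigned long *user_pfns, int npage,
				   unsigned long *host_pfns)
{
	/* Returns the number of pages pinned on success or -errno. */
	int ret = vfio_pin_pages(mdev_dev(mdev), user_pfns, npage,
				 IOMMU_READ | IOMMU_WRITE, host_pfns);

	if (ret != npage)
		return ret < 0 ? ret : -EFAULT;
	return 0;
}

static void my_mdev_unpin_guest_pages(struct mdev_device *mdev,
				      unsigned long *user_pfns, int npage)
{
	vfio_unpin_pages(mdev_dev(mdev), user_pfns, npage);
}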


Re: [PATCH Kernel v24 0/8] Add UAPIs to support migration for VFIO devices
Posted by Xiang Zheng 3 years, 9 months ago
Hi Kirti,

Sorry to disturb you since this patch set has been merged, and I cannot
receive the qemu-side emails about this patch set.

We are going to support migration for VFIO devices which support dirty
page tracking.

We also plan to leverage the SMMU HTTU feature to do dirty page tracking
for the devices which don't support it themselves.

For the above two cases, which side decides whether the IOMMU driver or
the vendor driver does the dirty bitmap tracking, QEMU or VFIO?

In brief, if both the IOMMU and the VFIO device support dirty page
tracking, we can check the capability and prefer to track dirty pages in
the device vendor driver, which is more efficient.

The question is: which side should do the check and selection? In my
opinion, QEMU/userspace seems more suitable.



-- 
Thanks,
Xiang


Re: [PATCH Kernel v24 0/8] Add UAPIs to support migration for VFIO devices
Posted by Alex Williamson 3 years, 9 months ago
On Tue, 21 Jul 2020 10:43:21 +0800
Xiang Zheng <zhengxiang9@huawei.com> wrote:

> Hi Kirti,
> 
> Sorry to disturb you since this patch set has been merged, and I cannot
> receive the qemu-side emails about this patch set.
> 
> We are going to support migration for VFIO devices which support dirty
> page tracking.
> 
> We also plan to leverage the SMMU HTTU feature to do dirty page tracking
> for the devices which don't support it themselves.
> 
> For the above two cases, which side decides whether the IOMMU driver or
> the vendor driver does the dirty bitmap tracking, QEMU or VFIO?
> 
> In brief, if both the IOMMU and the VFIO device support dirty page
> tracking, we can check the capability and prefer to track dirty pages in
> the device vendor driver, which is more efficient.
> 
> The question is: which side should do the check and selection? In my
> opinion, QEMU/userspace seems more suitable.

Dirty page tracking is consolidated at the vfio container level.
Userspace has no basis for determining or interface for selecting a
dirty bitmap provider, so I would disagree that QEMU should play any
role here.  The container dirty bitmap tries to provide the finest
granularity available based on the support of all the devices/groups
managed by the container.  If there are groups attached to the
container that have not participated in page pinning, then we consider
all DMA mappings within the container as persistently dirty.  Once all
of the participants subscribe to page pinning, the dirty scope is
reduced to the pinned pages.  IOMMU support for dirty page logging would
introduce finer granularity yet, which we would probably prefer over
page pinning, but interfaces for this have not been devised.

Ideally userspace should be unaware of any of this, the benefit would
be seen transparently by having a more sparsely filled dirty bitmap,
which more accurately reflects how memory is actually being dirtied.
Thanks,

Alex
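
Conceptually, the scope decision described above reduces to something
like the following pseudocode (the field and helper names only
approximate the merged vfio_iommu_type1 code):

/* Re-evaluate the container's dirty scope whenever a group starts or
 * stops participating in page pinning.  Names are illustrative. */
static void update_dirty_scope(struct vfio_iommu *iommu)
{
	struct vfio_domain *domain;
	struct vfio_group *group;

	/* Narrow the scope to pinned pages only once every group in the
	 * container has subscribed to page pinning; otherwise report all
	 * DMA mappings as dirty. */
	list_for_each_entry(domain, &iommu->domain_list, next) {
		list_for_each_entry(group, &domain->group_list, next) {
			if (!group->pinned_page_dirty_scope) {
				iommu->pinned_page_dirty_scope = false;
				return;
			}
		}
	}
	iommu->pinned_page_dirty_scope = true;
}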


Re: [PATCH Kernel v24 0/8] Add UAPIs to support migration for VFIO devices
Posted by Xiang Zheng 3 years, 9 months ago
Hi Alex,

Thank you for your suggestion.

On 2020/7/22 6:43, Alex Williamson wrote:
> On Tue, 21 Jul 2020 10:43:21 +0800
> Xiang Zheng <zhengxiang9@huawei.com> wrote:
> 
>> Hi Kirti,
>>
>> Sorry to disturb you since this patch set has been merged, and I cannot
>> receive the qemu-side emails about this patch set.
>>
>> We are going to support migration for VFIO devices which support dirty
>> page tracking.
>>
>> We also plan to leverage the SMMU HTTU feature to do dirty page tracking
>> for the devices which don't support it themselves.
>>
>> For the above two cases, which side decides whether the IOMMU driver or
>> the vendor driver does the dirty bitmap tracking, QEMU or VFIO?
>>
>> In brief, if both the IOMMU and the VFIO device support dirty page
>> tracking, we can check the capability and prefer to track dirty pages in
>> the device vendor driver, which is more efficient.
>>
>> The question is: which side should do the check and selection? In my
>> opinion, QEMU/userspace seems more suitable.
> 
> Dirty page tracking is consolidated at the vfio container level.
> Userspace has no basis for determining or interface for selecting a
> dirty bitmap provider, so I would disagree that QEMU should play any
> role here.  The container dirty bitmap tries to provide the finest
> granularity available based on the support of all the devices/groups
> managed by the container.  If there are groups attached to the
> container that have not participated in page pinning, then we consider
> all DMA mappings within the container as persistently dirty.  Once all
> of the participants subscribe to page pinning, the dirty scope is
> reduced to the pinned pages.  IOMMU support for dirty page logging would
> introduce finer granularity yet, which we would probably prefer over
> page pinning, but interfaces for this have not been devised.

Kevin and his colleagues may add these APIs in the future.
We also plan to support these interfaces in the SMMU driver, and
afterwards we can have a further discussion.

> 
> Ideally userspace should be unaware of any of this, the benefit would
> be seen transparently by having a more sparsely filled dirty bitmap,
> which more accurately reflects how memory is actually being dirtied.

Yes, indeed.

-- 
Thanks,
Xiang


RE: [PATCH Kernel v24 0/8] Add UAPIs to support migration for VFIO devices
Posted by Tian, Kevin 3 years, 9 months ago
> From: Xiang Zheng
> Sent: Wednesday, July 22, 2020 10:56 AM
> 
> Hi Alex,
> 
> Thank you for your suggestion.
> 
> On 2020/7/22 6:43, Alex Williamson wrote:
> > On Tue, 21 Jul 2020 10:43:21 +0800
> > Xiang Zheng <zhengxiang9@huawei.com> wrote:
> >
> >> Hi Kirti,
> >>
> >> Sorry to disturb you since this patch set has been merged, and I cannot
> >> receive the qemu-side emails about this patch set.
> >>
> >> We are going to support migration for VFIO devices which support dirty
> >> page tracking.
> >>
> >> We also plan to leverage the SMMU HTTU feature to do dirty page tracking
> >> for the devices which don't support it themselves.
> >>
> >> For the above two cases, which side decides whether the IOMMU driver or
> >> the vendor driver does the dirty bitmap tracking, QEMU or VFIO?
> >>
> >> In brief, if both the IOMMU and the VFIO device support dirty page
> >> tracking, we can check the capability and prefer to track dirty pages in
> >> the device vendor driver, which is more efficient.
> >>
> >> The question is: which side should do the check and selection? In my
> >> opinion, QEMU/userspace seems more suitable.
> >
> > Dirty page tracking is consolidated at the vfio container level.
> > Userspace has no basis for determining or interface for selecting a
> > dirty bitmap provider, so I would disagree that QEMU should play any
> > role here.  The container dirty bitmap tries to provide the finest
> > granularity available based on the support of all the devices/groups
> > managed by the container.  If there are groups attached to the
> > container that have not participated in page pinning, then we consider
> > all DMA mappings within the container as persistently dirty.  Once all
> > of the participants subscribe to page pinning, the dirty scope is
> > reduced to the pinned pages.  IOMMU support for dirty page logging would
> > introduce finer granularity yet, which we would probably prefer over
> > page pinning, but interfaces for this have not been devised.
> 
> Kevin and his colleagues may add these APIs in the future.
> We also plan to support these interfaces in the SMMU driver, and
> afterwards we can have a further discussion.

Yes, we are working on that part. Generally speaking, I agree with Alex
that the kernel should decide the dirty bitmap provider, and that IOMMU
dirty logging is likely preferred over page pinning. Page pinning either
provides only coarse-grained dirty info (e.g. hundreds of MB of pinned
pages observed in the vGPU case) or, when reaching finer granularity, may
affect fast-path performance (e.g. the driver may have to intercept
fast-path operations to track dirtied pages closely). The IOMMU provides
an architectural and lighter-weight approach for tracking dirty pages.

Thanks
Kevin
