[Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device

Yulei Zhang posted 4 patches 6 years, 1 month ago
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/1520229653-10658-1-git-send-email-yulei.zhang@intel.com
Test checkpatch failed
Test docker-build@min-glib passed
Test docker-mingw@fedora passed
Test docker-quick@centos6 passed
Test ppcbe passed
Test ppcle passed
Test s390x passed
[Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device
Posted by Yulei Zhang 6 years, 1 month ago
Summary

This RFC series resumes the discussion about how to introduce live
migration capability to vfio mdev devices.

By adding a new vfio subtype region, VFIO_REGION_SUBTYPE_DEVICE_STATE,
the mdev device is marked migratable if the new region exists during
initialization.
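
As an illustration only, a minimal sketch of how the probe could look
in hw/vfio/pci.c context (vfio_get_dev_region_info() is existing qemu
API in hw/vfio/common.c; the region type argument and the "migratable"
flag are assumptions for illustration, not the actual patch code):

/*
 * Sketch: probe for the device-state subregion at realize time and
 * mark the device migratable if the region is present.
 */
static void vfio_probe_device_state(VFIOPCIDevice *vdev)
{
    struct vfio_region_info *info = NULL;

    if (!vfio_get_dev_region_info(&vdev->vbasedev,
                                  VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
                                  VFIO_REGION_SUBTYPE_DEVICE_STATE,
                                  &info)) {
        vdev->migratable = true;    /* hypothetical flag */
        g_free(info);
    }
}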

The new region is used to save and restore the mdev device state
during migration. Accesses to this region are trapped and forwarded to
the mdev device driver. The first byte of the region also controls the
running state of the mdev device, so during migration, after the mdev
device is stopped, qemu can retrieve the device state from this region
and transfer it to the target VM side, where it is used to restore the
mdev device.
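
A hedged sketch of the run-state control, again in hw/vfio/pci.c
context (the 0/1 encoding and the device_state region field are
assumptions for illustration; the mdev driver defines the real
semantics):

/*
 * Sketch: stop or restart the mdev device by writing the subregion's
 * first byte through the vfio device fd.
 */
static int vfio_mdev_set_running(VFIOPCIDevice *vdev, bool running)
{
    uint8_t val = running ? 1 : 0;      /* assumed encoding */

    if (pwrite(vdev->vbasedev.fd, &val, sizeof(val),
               vdev->device_state.fd_offset) != sizeof(val)) {
        return -errno;
    }
    return 0;
}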

In addition, we add a new ioctl, VFIO_IOMMU_GET_DIRTY_BITMAP, to
synchronize the mdev device's dirty pages during migration. Currently
it only covers the stop-and-copy (static copy) phase; in the future we
would like to add a new interface for pre-copy.
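
For illustration, a sketch of how the new ioctl might be used in
hw/vfio/common.c context (the struct layout and field names are
assumptions; the series' linux-headers/linux/vfio.h change is
authoritative, while cpu_physical_memory_set_dirty_lebitmap() is
existing qemu API):

/* assumed layout of the new ioctl argument */
struct vfio_iommu_get_dirty_bitmap {
    __u32 argsz;
    __u32 flags;
    __u64 start_addr;       /* IOVA of the first page to query */
    __u64 page_nr;          /* number of pages in the range */
    __u8  dirty_bitmap[];   /* one bit per page, filled by the kernel */
};

/* Sketch: query the container and feed qemu's dirty page tracking. */
static void vfio_sync_dirty_bitmap(VFIOContainer *container,
                                   uint64_t iova, uint64_t page_nr)
{
    size_t bitmap_sz = DIV_ROUND_UP(page_nr, 8);
    struct vfio_iommu_get_dirty_bitmap *db =
        g_malloc0(sizeof(*db) + bitmap_sz);

    db->argsz = sizeof(*db) + bitmap_sz;
    db->start_addr = iova;
    db->page_nr = page_nr;
    if (!ioctl(container->fd, VFIO_IOMMU_GET_DIRTY_BITMAP, db)) {
        cpu_physical_memory_set_dirty_lebitmap(
            (unsigned long *)db->dirty_bitmap, iova, page_nr);
    }
    g_free(db);
}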

Below is the vfio_mdev device migration sequence.

Source VM side:
                start migration
                        |
                        V
                get the cpu state change callback, write to the
                subregion's first byte to stop the mdev device
                        |
                        V
                query the dirty page bitmap from the iommu container
                and add it to the qemu dirty list for synchronization
                        |
                        V
                save the device status, read from the vfio device
                subregion, into the QEMUFile

Target VM side:
                restore the mdev device after getting the
                saved status context from the QEMUFile
                        |
                        V
                get the cpu state change callback, write to the
                subregion's first byte to restart the mdev device
                and put it back into running state
                        |
                        V
                finish migration
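
The "cpu state change callback" above maps onto qemu's VM state change
notifier. A hedged sketch of the wiring
(qemu_add_vm_change_state_handler() is existing qemu API with this
handler signature in qemu 2.10; vfio_mdev_set_running() is the
hypothetical helper sketched earlier):

/*
 * Sketch: stop the mdev device when the VM stops on the source side,
 * restart it when the VM resumes on the target side.
 */
static void vfio_vm_change_state_handler(void *opaque, int running,
                                         RunState state)
{
    VFIOPCIDevice *vdev = opaque;

    vfio_mdev_set_running(vdev, running);
}

/* registered once at device realize time */
qemu_add_vm_change_state_handler(vfio_vm_change_state_handler, vdev);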

V3->V2:
1. rebase the patch to Qemu stable 2.10 branch.
2. use a common name for the subregion instead of one specific to
   Intel IGD.

V1->V2:
Per Alex's suggestion:
1. use device subtype region instead of VFIO PCI fixed region.
2. remove unnecessary ioctl, use the first byte of subregion to 
   control the running state of mdev device.  
3. for dirty page synchronization, implement the interface with
   VFIOContainer instead of vfio pci device.

Yulei Zhang (4):
  vfio: introduce a new VFIO subregion for mdev device migration support
  vfio: Add vm status change callback to stop/restart the mdev device
  vfio: Add struct vfio_vmstate_info to introduce put/get callback
    function for vfio device status save/restore (see the sketch below)
  vfio: introduce new VFIO ioctl VFIO_IOMMU_GET_DIRTY_BITMAP
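
For patch 3, a hedged sketch of the shape such a VMStateInfo could take
(the get/put signatures follow qemu 2.10's VMStateInfo; streaming the
whole subregion through a flat buffer is an assumption for
illustration):

/* Sketch: restore side, read saved state back into the subregion. */
static int vfio_device_get(QEMUFile *f, void *pv, size_t size,
                           VMStateField *field)
{
    qemu_get_buffer(f, pv, size);
    return 0;
}

/* Sketch: save side, stream the state read from the subregion. */
static int vfio_device_put(QEMUFile *f, void *pv, size_t size,
                           VMStateField *field, QJSON *vmdesc)
{
    qemu_put_buffer(f, pv, size);
    return 0;
}

static const VMStateInfo vfio_vmstate_info = {
    .name = "vfio-device-state",
    .get  = vfio_device_get,
    .put  = vfio_device_put,
};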

 hw/vfio/common.c              |  34 +++++++++
 hw/vfio/pci.c                 | 171 +++++++++++++++++++++++++++++++++++++++++-
 hw/vfio/pci.h                 |   1 +
 include/hw/vfio/vfio-common.h |   1 +
 linux-headers/linux/vfio.h    |  29 ++++++-
 5 files changed, 232 insertions(+), 4 deletions(-)

-- 
2.7.4

Re: [Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device
Posted by Kirti Wankhede 6 years, 1 month ago
Hi Yulei Zhang,

This series is the same as the previous version; that is, there is no
pre-copy phase, it only takes care of the stop-and-copy phase. As we
discussed at KVM Forum 2017 in October, there should be provision for
a pre-copy phase.

Thanks,
Kirti

On 3/5/2018 11:30 AM, Yulei Zhang wrote:
> [snip]

Re: [Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device
Posted by Zhang, Yulei 6 years, 1 month ago
Hi Kirti,

Yes, that is the plan, and we will address it in the coming versions.
In this version we just rebased the code and are looking for more input.

Thanks, 
Yulei

> -----Original Message-----
> From: Kirti Wankhede [mailto:kwankhede@nvidia.com]
> Sent: Monday, March 5, 2018 9:03 PM
> To: Zhang, Yulei <yulei.zhang@intel.com>; qemu-devel@nongnu.org
> Cc: Tian, Kevin <kevin.tian@intel.com>; zhenyuw@linux.intel.com;
> alex.williamson@redhat.com
> Subject: Re: [PATCH V3 0/4] vfio: Introduce Live migration capability to
> vfio_mdev device
> 
> Hi Yulei Zhang,
> 
> This series is the same as the previous version; that is, there is no
> pre-copy phase, it only takes care of the stop-and-copy phase. As we
> discussed at KVM Forum 2017 in October, there should be provision for
> a pre-copy phase.
> 
> Thanks,
> Kirti
> 
> On 3/5/2018 11:30 AM, Yulei Zhang wrote:
> > [snip]
Re: [Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device
Posted by Tian, Kevin 6 years, 1 month ago
> From: Zhang, Yulei
> Sent: Tuesday, March 6, 2018 9:35 PM
> 
> Hi Kirti,
> 
> Yes, that is the plan, and we will address it in the coming versions.
> In this version we just rebased the code and are looking for more input.

That's not how a new version is expected to be put together. For the
review comments you received on previous versions, you need to echo
them in the new version, where 'echo' means either fixing them and
noting that in the change list, or providing a TODO list for the
unhandled comments so reviewers know what to look at further. Also, a
rebase alone usually doesn't warrant a new version...

BTW, when describing the change list in the version history, please
use v2->v3 instead of vice versa.

Thanks
Kevin

Re: [Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device
Posted by no-reply@patchew.org 6 years, 1 month ago
Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 1520229653-10658-1-git-send-email-yulei.zhang@intel.com
Subject: [Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device

=== TEST SCRIPT BEGIN ===
#!/bin/bash

BASE=base
n=1
total=$(git log --oneline $BASE.. | wc -l)
failed=0

git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram

commits="$(git log --format=%H --reverse $BASE..)"
for c in $commits; do
    echo "Checking PATCH $n/$total: $(git log -n 1 --format=%s $c)..."
    if ! git show $c --format=email | ./scripts/checkpatch.pl --mailback -; then
        failed=1
        echo
    fi
    n=$((n+1))
done

exit $failed
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag]               patchew/1520229653-10658-1-git-send-email-yulei.zhang@intel.com -> patchew/1520229653-10658-1-git-send-email-yulei.zhang@intel.com
Switched to a new branch 'test'
c4092ae00e vfio: introduce new VFIO ioctl VFIO_IOMMU_GET_DIRTY_BITMAP
a6090fffc5 vfio: Add struct vfio_vmstate_info to introduce put/get callback function for vfio device status save/restore
74f1f045a9 vfio: Add vm status change callback to stop/restart the mdev device
b32825279c vfio: introduce a new VFIO subregion for mdev device migration support

=== OUTPUT BEGIN ===
Checking PATCH 1/4: vfio: introduce a new VFIO subregion for mdev device migration support...
ERROR: initializer for struct VMStateDescription should normally be const
#49: FILE: hw/vfio/pci.c:3172:
+static VMStateDescription vfio_pci_vmstate = {

total: 1 errors, 0 warnings, 54 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 2/4: vfio: Add vm status change callback to stop/restart the mdev device...
Checking PATCH 3/4: vfio: Add struct vfio_vmstate_info to introduce put/get callback function for vfio device status save/restore...
Checking PATCH 4/4: vfio: introduce new VFIO ioctl VFIO_IOMMU_GET_DIRTY_BITMAP...
=== OUTPUT END ===

Test command exited with code: 1


---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@freelists.org
Re: [Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device
Posted by Alex Williamson 6 years, 1 month ago
[cc +Juan]

On Mon,  5 Mar 2018 14:00:49 +0800
Yulei Zhang <yulei.zhang@intel.com> wrote:

> [snip]
> 
> In addition, we add a new ioctl, VFIO_IOMMU_GET_DIRTY_BITMAP, to
> synchronize the mdev device's dirty pages during migration. Currently
> it only covers the stop-and-copy (static copy) phase; in the future we
> would like to add a new interface for pre-copy.

Juan had concerns about another dirty bitmap implementation.  I'm not
sure what alternatives we have, but let's loop him in for guidance on
the best migration strategies.  The migration state for a device could
be many gigabytes.

> [snip]
> V3->V2:
> 1. rebase the patch to Qemu stable 2.10 branch.
> 2. use a common name for the subregion instead of one specific to
>    Intel IGD.

But it's still tied to Intel's vendor ID??

Thanks,
Alex




Re: [Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device
Posted by Zhang, Yulei 6 years, 1 month ago

> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Tuesday, March 13, 2018 6:22 AM
> To: Zhang, Yulei <yulei.zhang@intel.com>
> Cc: qemu-devel@nongnu.org; Tian, Kevin <kevin.tian@intel.com>;
> zhenyuw@linux.intel.com; kwankhede@nvidia.com; Juan Quintela
> <quintela@redhat.com>
> Subject: Re: [PATCH V3 0/4] vfio: Introduce Live migration capability to
> vfio_mdev device
> 
> [cc +Juan]
> 
> On Mon,  5 Mar 2018 14:00:49 +0800
> Yulei Zhang <yulei.zhang@intel.com> wrote:
> 
> [snip]
>
> > V3->V2:
> > 1. rebase the patch to Qemu stable 2.10 branch.
> > 2. use a common name for the subregion instead of one specific to
> >    Intel IGD.
> 
> But it's still tied to Intel's vendor ID??
> 
No, this is not necessary; I will remove the Intel vendor ID.
