[PATCH v3 0/7] UFFD write-tracking migration/snapshots

Posted by Andrey Gruzdev, 4 days, 17 hours ago
Changes in v3:
* coding style fixes to pass the checkpatch test
* qapi/migration.json: change the 'track-writes-ram' capability's
  supported version to 6.0
* fixes to commit message format


This patch series is a rethinking of the ideas Denis Plotnikov implemented
in his series '[PATCH v0 0/4] migration: add background snapshot'.

Currently the only way to make an (external) live VM snapshot is to use the
existing dirty page logging migration mechanism. The main problem is that it
tends to produce a lot of page duplicates while the running VM keeps updating
already-saved pages. As a result, the vmstate image size is commonly several
times bigger than the non-zero part of the virtual machine's RSS. The time
required to converge RAM migration and the size of the snapshot image depend
heavily on the guest memory write rate, sometimes resulting in unacceptably
long snapshot creation times and huge image sizes.

This series proposes a way to solve the aforementioned problems. It uses a
different RAM migration mechanism based on the UFFD write-protection
management introduced in the v5.7 kernel. The migration strategy is to
'freeze' guest RAM content using write protection and to iteratively release
the protection for memory ranges that have already been saved to the
migration stream. At the same time, pending UFFD write fault events are read
in and the faulting pages are saved out-of-order with higher priority.
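
For readers unfamiliar with the kernel interface, below is a minimal,
illustrative sketch of the UFFD write-protection API this approach builds on
(requires a v5.7+ kernel). The helper names uffd_wp_start()/uffd_wp_release()
are made up for this example; the actual QEMU helpers are introduced in
patch 2 of the series.

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <linux/userfaultfd.h>

  /* Illustrative only: register a RAM range for write-protect tracking
   * and arm the protection. */
  static int uffd_wp_start(void *area, uint64_t len)
  {
      int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
      struct uffdio_api api = { .api = UFFD_API,
                                .features = UFFD_FEATURE_PAGEFAULT_FLAG_WP };
      struct uffdio_register reg = {
          .range = { .start = (uintptr_t) area, .len = len },
          .mode  = UFFDIO_REGISTER_MODE_WP,
      };
      struct uffdio_writeprotect wp = {
          .range = { .start = (uintptr_t) area, .len = len },
          .mode  = UFFDIO_WRITEPROTECT_MODE_WP,   /* enable protection */
      };

      if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) < 0 ||
          ioctl(uffd, UFFDIO_REGISTER, &reg) < 0 ||
          ioctl(uffd, UFFDIO_WRITEPROTECT, &wp) < 0) {
          return -1;
      }
      /* read()/poll() this fd for UFFD_EVENT_PAGEFAULT events with
       * UFFD_PAGEFAULT_FLAG_WP set; msg.arg.pagefault.address is the
       * faulting page to save with higher priority. */
      return uffd;
  }

  /* Once a range has been written to the migration stream, drop the
   * protection so the stalled guest vCPU can continue writing to it. */
  static int uffd_wp_release(int uffd, uint64_t start, uint64_t len)
  {
      struct uffdio_writeprotect wp = {
          .range = { .start = start, .len = len },
          .mode  = 0,   /* clear write protection and wake waiters */
      };
      return ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);
  }

In the series itself, the fault processing and protection release are wired
into ram_save_iterate() and the write-tracking migration thread (patches 3
and 4).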

How to use:
1. Enable the write-tracking migration capability
   virsh qemu-monitor-command <domain> --hmp migrate_set_capability track-writes-ram on

2. Start the external migration to a file
   virsh qemu-monitor-command <domain> --hmp migrate exec:'cat > ./vm_state'

3. Wait for the migration to finish and check that it has reached the
   'completed' state (see the example below).
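
For step 3, the status can be queried through the monitor, for example with
'info migrate' (output abbreviated here; the exact format depends on the
QEMU version):

   virsh qemu-monitor-command <domain> --hmp info migrate
   ...
   Migration status: completed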

Andrey Gruzdev (7):
  introduce 'track-writes-ram' migration capability
  introduce UFFD-WP low-level interface helpers
  support UFFD write fault processing in ram_save_iterate()
  implementation of write-tracking migration thread
  implementation of vm_start() BH
  the rest of write tracking migration code
  introduce simple linear scan rate limiting mechanism

 include/exec/memory.h |   7 +
 migration/migration.c | 338 +++++++++++++++++++++++++++++++-
 migration/migration.h |   4 +
 migration/ram.c       | 439 +++++++++++++++++++++++++++++++++++++++++-
 migration/ram.h       |   4 +
 migration/savevm.c    |   1 -
 migration/savevm.h    |   2 +
 qapi/migration.json   |   7 +-
 8 files changed, 790 insertions(+), 12 deletions(-)

-- 
2.25.1