[PATCH v1 5/7] Implementation of vm_start() BH.

Andrey Gruzdev via posted 7 patches 5 years, 2 months ago
Maintainers: Paolo Bonzini <pbonzini@redhat.com>, Markus Armbruster <armbru@redhat.com>, Juan Quintela <quintela@redhat.com>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Eric Blake <eblake@redhat.com>
To avoid saving updated versions of memory pages, we need
to start tracking RAM writes before we resume operation of
vCPUs. This sequence is especially critical for virtio device
backends whose VQs are mapped to main memory and accessed
directly, not through MMIO callbacks.

One problem is that the vm_start() routine invokes state
change notifier callbacks directly. Virtio drivers sync and
flush their VQs in these notifier routines. Since we poll
UFFD and process faults on the same thread, the thread would
deadlock in vm_start() if we called it from the migration
thread.

The solution is to call ram_write_tracking_start() directly
from the migration thread and then schedule a BH for vm_start().
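The migration-thread side of this scheme could be sketched as below. This is an illustrative fragment, not part of the patch: the helper name wt_migration_completion() is hypothetical, while qemu_bh_new(), qemu_bh_schedule(), and the wt_vm_start_bh field follow the BH callback shown in the diff.

```c
/*
 * Hypothetical completion path on the migration thread.
 * Assumes MigrationState has the wt_vm_start_bh field used by
 * wt_migration_vm_start_bh() in the diff below.
 */
static void wt_migration_completion(MigrationState *s)
{
    /* Start UFFD write tracking first, on the migration thread,
     * so no dirtying write is missed once vCPUs resume. */
    ram_write_tracking_start();

    /* Defer vm_start() to the main loop via a bottom half; running
     * it here would re-enter virtio notifier callbacks on the same
     * thread that polls UFFD and deadlock. */
    s->wt_vm_start_bh = qemu_bh_new(wt_migration_vm_start_bh, s);
    qemu_bh_schedule(s->wt_vm_start_bh);
}
```

The BH callback then deletes the BH, calls vm_start() in main-loop context, and records the downtime, as the hunk below implements.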

Signed-off-by: Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>
---
 migration/migration.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/migration/migration.c b/migration/migration.c
index 158e5441ec..dba388f8bd 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3716,7 +3716,13 @@ static void *migration_thread(void *opaque)
 
 static void wt_migration_vm_start_bh(void *opaque)
 {
-    /* TODO: implement */
+    MigrationState *s = opaque;
+
+    qemu_bh_delete(s->wt_vm_start_bh);
+    s->wt_vm_start_bh = NULL;
+
+    vm_start();
+    s->downtime = qemu_clock_get_ms(QEMU_CLOCK_REALTIME) - s->downtime_start;
 }
 
 /*
-- 
2.25.1