Hello David,
You wrote in the previous version:
>We've got postcopy migration working now, with a few hacks we're still
>cleaning up, both on vhost-user-bridge and dpdk; so I'll get this
>updated and reposted.
I'd like to know more about the DPDK work; do you know whether somebody is assigned to that task?
On 08/24/2017 10:26 PM, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
>
> Hi,
> This is a RFC/WIP series that enables postcopy migration
> with shared memory to a vhost-user process.
> It's based off current-head + Alexey's bitmap series
>
> It's tested with vhost-user-bridge and a DPDK modified by Maxime
> (which will be posted separately) - both very lightly.
>
> It's still got a few very rough edges, but it successfully migrates
> with both normal and huge pages (2M).
>
> The major difference over v1 is that there's a set of code
> that merges vhost regions together on the qemu side so that
> we get a single hugepage region on the PC spanning the 640k
> hole (the hole hopefully isn't accessed by the client,
> but the client used to align around it anyway).
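
The merging step boils down to a contiguity test. A rough sketch (hypothetical types and names, not the actual QEMU code): two regions can only be collapsed into one when they abut in both guest physical address space and the userspace mapping, so that a single hugepage mapping can span the 640k hole.

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical region descriptor, not QEMU's vhost_memory_region. */
  struct reg {
      uint64_t gpa;    /* guest physical address */
      uint64_t uaddr;  /* userspace address of the mapping */
      uint64_t size;
  };

  /* Fold b into a if the two regions are contiguous in both address
   * spaces, keeping the gpa->uaddr offset consistent across the merge. */
  static bool try_merge(struct reg *a, const struct reg *b)
  {
      if (a->gpa + a->size == b->gpa &&
          a->uaddr + a->size == b->uaddr) {
          a->size += b->size;
          return true;
      }
      return false;
  }

The real series presumably also has to cope with alignment and with regions being removed and re-added, which is what the two merge-related patches in the list below deal with.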
>
> It's also got a lot of cleanups from the comments from v1
> but there's still a few things that need work.
> In particular, there's still the hack around qemu waiting
> for the set_mem_table reply to come back; I also worry about what
> would happen if a set_mem_table were triggered during a migration;
> I suspect it would break badly.
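
One direction for that race (perhaps what the 'Lock around set_mem_table' patch below does; all names here are hypothetical, a sketch rather than the series' code) is to serialise table swaps against fault handling with a single mutex:

  #include <pthread.h>
  #include <stddef.h>

  typedef struct MemTable {
      size_t nregions;
      /* region descriptors would live here */
  } MemTable;

  static pthread_mutex_t mem_table_lock = PTHREAD_MUTEX_INITIALIZER;
  static MemTable *current_table;

  /* Swap in a new table while fault handlers are excluded, so no
   * handler can resolve an address against a half-updated table. */
  static void set_mem_table(MemTable *new_table)
  {
      pthread_mutex_lock(&mem_table_lock);
      current_table = new_table;
      pthread_mutex_unlock(&mem_table_lock);
  }

  /* Fault handlers take the same lock before resolving addresses. */
  static MemTable *get_mem_table(void)
  {
      pthread_mutex_lock(&mem_table_lock);
      MemTable *t = current_table;
      pthread_mutex_unlock(&mem_table_lock);
      return t;
  }

This doesn't by itself answer what happens when a set_mem_table arrives mid-migration, but it at least keeps the swap atomic with respect to page-fault resolution.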
>
> One potential problem that didn't actually cause trouble was madvises
> for hugepages; because we register userfault directly after mmap'ing
> the region in the client, we have no pages mapped and hence the
> madvises/fallocates are fortunately not compulsory.
> Still, I'd like a way to do it; it would feel safer.
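
The ordering described above, in client terms, looks roughly like this (a minimal sketch against the raw userfaultfd API, not the series' libvhost-user code; kernel support for userfaults on shared/hugetlbfs mappings is assumed):

  #include <fcntl.h>
  #include <linux/userfaultfd.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Map a shared memory fd and register the whole range with
   * userfaultfd before anything touches it; returns the ufd. */
  static int map_and_register(int mem_fd, size_t size)
  {
      /* Map the region; no pages are faulted in yet. */
      void *addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, mem_fd, 0);
      if (addr == MAP_FAILED) {
          perror("mmap");
          return -1;
      }

      int ufd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
      if (ufd < 0) {
          perror("userfaultfd");
          return -1;
      }

      struct uffdio_api api = { .api = UFFD_API };
      if (ioctl(ufd, UFFDIO_API, &api) < 0) {
          perror("UFFDIO_API");
          return -1;
      }

      /* Registering immediately after mmap means every first touch
       * of the region faults, so nothing needs to be dropped with
       * madvise/fallocate beforehand. */
      struct uffdio_register reg = {
          .range = { .start = (uintptr_t)addr, .len = size },
          .mode  = UFFDIO_REGISTER_MODE_MISSING,
      };
      if (ioctl(ufd, UFFDIO_REGISTER, &reg) < 0) {
          perror("UFFDIO_REGISTER");
          return -1;
      }
      return ufd;
  }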
>
> A copy of this code, based off the current 2.10.0-rc4
> together with Alexey's bitmap code is available here:
> https://github.com/dagrh/qemu/tree/vhost-wipv2
>
> Dave
>
> Dr. David Alan Gilbert (32):
> vhu: vu_queue_started
> vhub: Only process received packets on started queues
> migrate: Update ram_block_discard_range for shared
> qemu_ram_block_host_offset
> migration/ram: ramblock_recv_bitmap_test_byte_offset
> postcopy: use UFFDIO_ZEROPAGE only when available
> postcopy: Add notifier chain
> postcopy: Add vhost-user flag for postcopy and check it
> vhost-user: Add 'VHOST_USER_POSTCOPY_ADVISE' message
> vhub: Support sending fds back to qemu
> vhub: Open userfaultfd
> postcopy: Allow registering of fd handler
> vhost+postcopy: Register shared ufd with postcopy
> vhost+postcopy: Transmit 'listen' to client
> vhost+postcopy: Register new regions with the ufd
> vhost+postcopy: Send address back to qemu
> vhost+postcopy: Stash RAMBlock and offset
> vhost+postcopy: Send requests to source for shared pages
> vhost+postcopy: Resolve client address
> postcopy: wake shared
> postcopy: postcopy_notify_shared_wake
> vhost+postcopy: Add vhost waker
> vhost+postcopy: Call wakeups
> vub+postcopy: madvises
> vhost+postcopy: Lock around set_mem_table
> vhost: Add VHOST_USER_POSTCOPY_END message
> vhost+postcopy: Wire up POSTCOPY_END notify
> postcopy: Allow shared memory
> vhost-user: Claim support for postcopy
> vhost: Merge neighbouring hugepage regions where appropriate
> vhost: Don't break merged regions on small remove/non-adds
> postcopy shared docs
>
> contrib/libvhost-user/libvhost-user.c | 226 ++++++++++++++++++++-
> contrib/libvhost-user/libvhost-user.h | 22 ++-
> docs/devel/migration.txt | 39 ++++
> docs/interop/vhost-user.txt | 39 ++++
> exec.c | 60 ++++--
> hw/virtio/trace-events | 27 +++
> hw/virtio/vhost-user.c | 326 +++++++++++++++++++++++++++++-
> hw/virtio/vhost.c | 121 +++++++++++-
> include/exec/cpu-common.h | 4 +
> migration/migration.c | 3 +
> migration/migration.h | 4 +
> migration/postcopy-ram.c | 359 +++++++++++++++++++++++++++-------
> migration/postcopy-ram.h | 69 +++++++
> migration/ram.c | 5 +
> migration/ram.h | 1 +
> migration/savevm.c | 13 ++
> migration/trace-events | 6 +
> tests/vhost-user-bridge.c | 1 +
> trace-events | 3 +
> vl.c | 2 +
> 20 files changed, 1241 insertions(+), 89 deletions(-)
>
--
Best regards,
Alexey Perevalov