From: Prasad Pandit <pjp@fedoraproject.org>
Hello,
* This series (v8) splits the earlier patch-2, which enabled the multifd and
  postcopy options together, into two separate patches. The first modifies
  the channel discovery in the migration_ioc_process_incoming() function,
  and the second enables multifd and postcopy migration together.
  It also adds the 'save_postcopy_prepare' savevm_state handler, so that
  different sections can take an action just before the Postcopy
  phase starts. Thank you Peter for these patches.
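  As an aside, for readers unfamiliar with savevm handlers: the idea boils
  down to an optional per-section callback that the source walks once, just
  before switching to Postcopy. A standalone sketch follows; all identifiers
  in it are invented for illustration, the real hook is added to
  SaveVMHandlers by the patch itself.

  /* Illustrative sketch only -- invented names, not the patch's code. */
  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  typedef struct SectionHandler {
      const char *name;
      bool (*postcopy_prepare)(void *opaque);  /* optional hook */
      void *opaque;
  } SectionHandler;

  static bool ram_postcopy_prepare(void *opaque)
  {
      /* e.g. flush pending multifd pages so none arrive mid-Postcopy */
      (void)opaque;
      printf("ram: flush pending pages before Postcopy\n");
      return true;
  }

  static SectionHandler handlers[] = {
      { .name = "ram", .postcopy_prepare = ram_postcopy_prepare },
      { .name = "vapic" },                     /* no hook: skipped */
  };

  /* Walk every registered section and run its prepare hook, if any,
   * right before the source switches to the Postcopy phase. */
  static bool savevm_postcopy_prepare_all(void)
  {
      for (size_t i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++) {
          if (handlers[i].postcopy_prepare &&
              !handlers[i].postcopy_prepare(handlers[i].opaque)) {
              return false;                    /* abort the switch on failure */
          }
      }
      return true;
  }

  int main(void)
  {
      return savevm_postcopy_prepare_all() ? 0 : 1;
  }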
===
67/67 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test OK 152.66s 81 subtests passed
===
v7: https://lore.kernel.org/qemu-devel/20250228121749.553184-1-ppandit@redhat.com/T/#t
* This series (v7) adds the 'MULTIFD_RECV_SYNC' migration command. It is used
to notify the destination migration thread to synchronise with the Multifd
threads. This allows the Multifd ('mig/dst/recv_x') threads on the destination
to receive all of their data before they are shut down.
This series also updates the channel discovery function and the qtests as
suggested in the previous review comments.
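To illustrate the synchronisation (a standalone sketch with invented names;
QEMU uses its own thread and semaphore primitives, and the real command
handling differs): on seeing the sync command, the main thread waits until
every receive thread reports that it has applied all of its pending data,
and only then shuts the channels down.

/* Illustrative sketch only -- build with -lpthread. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N_RECV_THREADS 2

static sem_t recv_done[N_RECV_THREADS];    /* posted by each recv thread */

static void *multifd_recv_thread(void *arg)
{
    int id = *(int *)arg;
    /* ... receive and apply all pages queued for this channel ... */
    printf("recv thread %d: all pending data applied\n", id);
    sem_post(&recv_done[id]);              /* tell main thread we're drained */
    return NULL;
}

/* Called by the main thread for the sync command: wait until every recv
 * thread has consumed its data, then it is safe to shut them down and
 * enter Postcopy. */
static void multifd_recv_sync_main(void)
{
    for (int i = 0; i < N_RECV_THREADS; i++) {
        sem_wait(&recv_done[i]);
    }
    printf("main thread: multifd recv channels are drained\n");
}

int main(void)
{
    pthread_t th[N_RECV_THREADS];
    int ids[N_RECV_THREADS];

    for (int i = 0; i < N_RECV_THREADS; i++) {
        sem_init(&recv_done[i], 0, 0);
        ids[i] = i;
        pthread_create(&th[i], NULL, multifd_recv_thread, &ids[i]);
    }
    multifd_recv_sync_main();
    for (int i = 0; i < N_RECV_THREADS; i++) {
        pthread_join(th[i], NULL);
    }
    return 0;
}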
===
67/67 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test OK 147.84s 81 subtests passed
===
v6: https://lore.kernel.org/qemu-devel/20250215123119.814345-1-ppandit@redhat.com/T/#t
* This series (v6) shuts down the Multifd threads before starting Postcopy
  migration. This avoids an issue where multifd pages arrive late
  at the destination during the Postcopy phase and corrupt the vCPU
  state. It also reorders the qtest patches and makes some refactoring
  changes suggested in the previous review.
===
67/67 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test OK 161.35s 73 subtests passed
===
v5: https://lore.kernel.org/qemu-devel/20250205122712.229151-1-ppandit@redhat.com/T/#t
* This series (v5) consolidates the setting of migration capabilities into one
  'set_migration_capabilities()' function, thus simplifying the test sources.
  It passes all migration tests.
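  The consolidation is essentially one helper that applies a caller-supplied
  list of capabilities to a VM. A standalone sketch of the idea (the struct
  and helper names here are invented; the real tests go through the qtest
  QMP helpers):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  typedef struct {
      const char *name;    /* e.g. "multifd", "postcopy-ram" */
      bool value;
  } MigCapability;

  /* Stand-in for a helper that would issue 'migrate-set-capabilities'
   * over QMP for each entry. */
  static void set_migration_capabilities(const char *vm,
                                         const MigCapability *caps, size_t n)
  {
      for (size_t i = 0; i < n; i++) {
          printf("%s: set capability '%s' = %s\n",
                 vm, caps[i].name, caps[i].value ? "true" : "false");
      }
  }

  int main(void)
  {
      const MigCapability caps[] = {
          { "multifd", true },
          { "postcopy-ram", true },
      };
      /* Applying the same list to both ends keeps them in sync. */
      set_migration_capabilities("source", caps, 2);
      set_migration_capabilities("destination", caps, 2);
      return 0;
  }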
===
66/66 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test OK 143.66s 71 subtests passed
===
v4: https://lore.kernel.org/qemu-devel/20250127120823.144949-1-ppandit@redhat.com/T/#t
* This series (v4) adds more 'multifd+postcopy' qtests, which test
  Precopy migration with the 'postcopy-ram' attribute set, and run
  Postcopy migrations with 'multifd' channels enabled.
===
$ ../qtest/migration-test --tap -k -r '/x86_64/migration/multifd+postcopy' | grep -i 'slow test'
# slow test /x86_64/migration/multifd+postcopy/plain executed in 1.29 secs
# slow test /x86_64/migration/multifd+postcopy/recovery/tls/psk executed in 2.48 secs
# slow test /x86_64/migration/multifd+postcopy/preempt/plain executed in 1.49 secs
# slow test /x86_64/migration/multifd+postcopy/preempt/recovery/tls/psk executed in 2.52 secs
# slow test /x86_64/migration/multifd+postcopy/tcp/tls/psk/match executed in 3.62 secs
# slow test /x86_64/migration/multifd+postcopy/tcp/plain/zstd executed in 1.34 secs
# slow test /x86_64/migration/multifd+postcopy/tcp/plain/cancel executed in 2.24 secs
...
66/66 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test OK 148.41s 71 subtests passed
===
v3: https://lore.kernel.org/qemu-devel/20250121131032.1611245-1-ppandit@redhat.com/T/#t
* This series (v3) passes all existing 'tests/qtest/migration/*' tests
and adds a new one to enable multifd channels with postcopy migration.
v2: https://lore.kernel.org/qemu-devel/20241129122256.96778-1-ppandit@redhat.com/T/#u
* This series (v2) further refactors the 'ram_save_target_page'
function to make it independent of the multifd & postcopy change.
v1: https://lore.kernel.org/qemu-devel/20241126115748.118683-1-ppandit@redhat.com/T/#u
* This series removes the 4-byte magic value introduced in the
  previous series for the Postcopy channel.
v0: https://lore.kernel.org/qemu-devel/20241029150908.1136894-1-ppandit@redhat.com/T/#u
* Currently Multifd and Postcopy migration cannot be used together;
  QEMU shows the "Postcopy is not yet compatible with multifd" message.
  When migrating guests with large (hundreds of GB) RAM, Multifd threads
  help to accelerate migration, but the inability to use them with the
  Postcopy mode delays guest start-up on the destination side.
* This patch series enables both Multifd and Postcopy
  migration together. Precopy and Multifd threads work during
  the initial guest (RAM) transfer. When migration moves to the
  Postcopy phase, Multifd threads are restrained and the Postcopy
  threads start to request pages from the source side.
* This series introduces a 4-byte magic value to be sent on the
  Postcopy channel. It helps to differentiate the channels and properly
  set up incoming connections on the destination side.
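  For context, the problem the magic value solved: when several connections
  arrive at the destination, it must decide which one is the main channel, a
  multifd channel, or the Postcopy channel. A standalone sketch of the
  peek-at-the-first-bytes idea (the constants below are invented for this
  example; QEMU's multifd packets carry their own magic):

  #include <arpa/inet.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <sys/types.h>

  #define SKETCH_MULTIFD_MAGIC   0x4d464431u  /* invented: "MFD1" */
  #define SKETCH_POSTCOPY_MAGIC  0x50435031u  /* invented: "PCP1" */

  enum channel_kind { CH_MAIN, CH_MULTIFD, CH_POSTCOPY };

  static enum channel_kind classify_channel(int fd)
  {
      uint32_t magic;
      /* MSG_PEEK leaves the bytes in the socket buffer, so whichever
       * handler takes over the channel still sees the full stream. */
      ssize_t n = recv(fd, &magic, sizeof(magic), MSG_PEEK | MSG_WAITALL);
      if (n != (ssize_t)sizeof(magic)) {
          return CH_MAIN;          /* short read: treat as default channel */
      }
      switch (ntohl(magic)) {
      case SKETCH_MULTIFD_MAGIC:   return CH_MULTIFD;
      case SKETCH_POSTCOPY_MAGIC:  return CH_POSTCOPY;
      default:                     return CH_MAIN;
      }
  }

  int main(void)
  {
      int sv[2];
      if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) {
          return 1;
      }
      uint32_t magic = htonl(SKETCH_POSTCOPY_MAGIC);
      send(sv[0], &magic, sizeof(magic), 0);
      /* The peek classifies the channel without consuming the bytes. */
      printf("channel kind: %d\n", classify_channel(sv[1]));
      return 0;
  }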
Thank you.
---
Peter Xu (2):
migration: Add save_postcopy_prepare() savevm handler
migration/ram: Implement save_postcopy_prepare()
Prasad Pandit (5):
migration/multifd: move macros to multifd header
migration: Refactor channel discovery mechanism
migration: enable multifd and postcopy together
tests/qtest/migration: consolidate set capabilities
tests/qtest/migration: add postcopy tests with multifd
include/migration/register.h | 15 +++
migration/migration.c | 128 ++++++++++++----------
migration/multifd-nocomp.c | 3 +-
migration/multifd.c | 12 +-
migration/multifd.h | 5 +
migration/options.c | 5 -
migration/ram.c | 44 +++++++-
migration/savevm.c | 33 ++++++
migration/savevm.h | 1 +
tests/qtest/migration/compression-tests.c | 38 ++++++-
tests/qtest/migration/cpr-tests.c | 6 +-
tests/qtest/migration/file-tests.c | 58 +++++-----
tests/qtest/migration/framework.c | 76 +++++++++----
tests/qtest/migration/framework.h | 9 +-
tests/qtest/migration/misc-tests.c | 4 +-
tests/qtest/migration/postcopy-tests.c | 35 +++++-
tests/qtest/migration/precopy-tests.c | 48 +++++---
tests/qtest/migration/tls-tests.c | 70 +++++++++++-
18 files changed, 436 insertions(+), 154 deletions(-)
--
2.48.1
Hello Fabiano,

On Tue, 18 Mar 2025 at 18:10, Prasad Pandit <ppandit@redhat.com> wrote:
> * This series (v8) splits earlier patch-2 which enabled multifd and
> postcopy options together into two separate patches. One modifies
> the channel discovery in migration_ioc_process_incoming() function,
> and second one enables the multifd and postcopy migration together.
>
> It also adds the 'save_postcopy_prepare' savevm_state handler to
> enable different sections to take an action just before the Postcopy
> phase starts. Thank you Peter for these patches.
> ===
> 67/67 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test OK 152.66s 81 subtests passed
> ===
>

Ping..! Did you have a chance to review this (v8) series? (just checking)

Thank you.
---
- Prasad
Prasad Pandit <ppandit@redhat.com> writes:

> Hello Fabiano,
>
> On Tue, 18 Mar 2025 at 18:10, Prasad Pandit <ppandit@redhat.com> wrote:
>> * This series (v8) splits earlier patch-2 which enabled multifd and
>> postcopy options together into two separate patches. One modifies
>> the channel discovery in migration_ioc_process_incoming() function,
>> and second one enables the multifd and postcopy migration together.
>>
>> It also adds the 'save_postcopy_prepare' savevm_state handler to
>> enable different sections to take an action just before the Postcopy
>> phase starts. Thank you Peter for these patches.
>> ===
>> 67/67 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test OK 152.66s 81 subtests passed
>> ===
>>
>
> Ping..! Did you have chance to review this (v8) series. (just checking)
>

I'll get to it soon. I need to send a PR for the recent SNP breakage and
also check Li Zhijian's RDMA series first.
On Thu, 27 Mar 2025 at 20:05, Fabiano Rosas <farosas@suse.de> wrote:
> I'll get to it soon. I need to send a PR for the recent SNP breakage and
> also check Li Zhijian's RDMA series first.

* I see, okay. Thank you for the update, I appreciate it.

Thank you.
---
- Prasad
Prasad Pandit <ppandit@redhat.com> writes:

> From: Prasad Pandit <pjp@fedoraproject.org>
>
> Hello,
>
> * This series (v8) splits earlier patch-2 which enabled multifd and
> postcopy options together into two separate patches. One modifies
> the channel discovery in migration_ioc_process_incoming() function,
> and second one enables the multifd and postcopy migration together.
>
> It also adds the 'save_postcopy_prepare' savevm_state handler to
> enable different sections to take an action just before the Postcopy
> phase starts. Thank you Peter for these patches.
> ===
> 67/67 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test OK 152.66s 81 subtests passed

The postcopy/multifd/plain test is still hanging from time to time. I
see a vmstate load function trying to access guest memory and the
postcopy-listen thread already finished, waiting for that
qemu_loadvm_state() (frame #18) to return and set the
main_thread_load_event.

Thread 1 (Thread 0x7fbc4849df80 (LWP 7487) "qemu-system-x86"):
#0 __memcpy_evex_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:274
#1 0x0000560b135103aa in flatview_read_continue_step (attrs=..., buf=0x560b168a5930 "U\252\022\006\016\a1\300\271", len=9216, mr_addr=831488, l=0x7fbc465ff980, mr=0x560b166c5070) at ../system/physmem.c:3056
#2 0x0000560b1351042e in flatview_read_continue (fv=0x560b16c606a0, addr=831488, attrs=..., ptr=0x560b168a5930, len=9216, mr_addr=831488, l=9216, mr=0x560b166c5070) at ../system/physmem.c:3073
#3 0x0000560b13510533 in flatview_read (fv=0x560b16c606a0, addr=831488, attrs=..., buf=0x560b168a5930, len=9216) at ../system/physmem.c:3103
#4 0x0000560b135105be in address_space_read_full (as=0x560b14970fc0 <address_space_memory>, addr=831488, attrs=..., buf=0x560b168a5930, len=9216) at ../system/physmem.c:3116
#5 0x0000560b135106e7 in address_space_rw (as=0x560b14970fc0 <address_space_memory>, addr=831488, attrs=..., buf=0x560b168a5930, len=9216, is_write=false) at ../system/physmem.c:3144
#6 0x0000560b13510848 in cpu_physical_memory_rw (addr=831488, buf=0x560b168a5930, len=9216, is_write=false) at ../system/physmem.c:3170
#7 0x0000560b1338f5a5 in cpu_physical_memory_read (addr=831488, buf=0x560b168a5930, len=9216) at qemu/include/exec/cpu-common.h:148
#8 0x0000560b1339063c in patch_hypercalls (s=0x560b168840c0) at ../hw/i386/vapic.c:547
#9 0x0000560b1339096d in vapic_prepare (s=0x560b168840c0) at ../hw/i386/vapic.c:629
#10 0x0000560b13390e8b in vapic_post_load (opaque=0x560b168840c0, version_id=1) at ../hw/i386/vapic.c:789
#11 0x0000560b135b4924 in vmstate_load_state (f=0x560b16c53400, vmsd=0x560b147c6cc0 <vmstate_vapic>, opaque=0x560b168840c0, version_id=1) at ../migration/vmstate.c:234
#12 0x0000560b132a15b8 in vmstate_load (f=0x560b16c53400, se=0x560b16893390) at ../migration/savevm.c:972
#13 0x0000560b132a4f28 in qemu_loadvm_section_start_full (f=0x560b16c53400, type=4 '\004') at ../migration/savevm.c:2746
#14 0x0000560b132a5ae8 in qemu_loadvm_state_main (f=0x560b16c53400, mis=0x560b16877f20) at ../migration/savevm.c:3058
#15 0x0000560b132a45d0 in loadvm_handle_cmd_packaged (mis=0x560b16877f20) at ../migration/savevm.c:2451
#16 0x0000560b132a4b36 in loadvm_process_command (f=0x560b168c3b60) at ../migration/savevm.c:2614
#17 0x0000560b132a5b96 in qemu_loadvm_state_main (f=0x560b168c3b60, mis=0x560b16877f20) at ../migration/savevm.c:3073
#18 0x0000560b132a5db7 in qemu_loadvm_state (f=0x560b168c3b60) at ../migration/savevm.c:3150
#19 0x0000560b13286271 in process_incoming_migration_co (opaque=0x0) at ../migration/migration.c:892
#20 0x0000560b137cb6d4 in coroutine_trampoline (i0=377836416, i1=22027) at ../util/coroutine-ucontext.c:175
#21 0x00007fbc4786a79e in ??? () at ../sysdeps/unix/sysv/linux/x86_64/__start_context.S:103

Thread 10 (Thread 0x7fffce7fc700 (LWP 11778) "mig/dst/listen"):
#0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1 0x000055555614e33f in qemu_futex_wait (f=0x5555576f6fc0, val=4294967295) at qemu/include/qemu/futex.h:29
#2 0x000055555614e505 in qemu_event_wait (ev=0x5555576f6fc0) at ../util/qemu-thread-posix.c:464
#3 0x0000555555c44eb1 in postcopy_ram_listen_thread (opaque=0x5555576f6f20) at ../migration/savevm.c:2135
#4 0x000055555614e6b8 in qemu_thread_start (args=0x5555582c8480) at ../util/qemu-thread-posix.c:541
#5 0x00007ffff72626ea in start_thread (arg=0x7fffce7fc700) at pthread_create.c:477
#6 0x00007ffff532158f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 9 (Thread 0x7fffceffd700 (LWP 11777) "mig/dst/fault"):
#0 0x00007ffff5314a89 in __GI___poll (fds=0x7fffc0000b60, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x0000555555c3be3f in postcopy_ram_fault_thread (opaque=0x5555576f6f20) at ../migration/postcopy-ram.c:999
#2 0x000055555614e6b8 in qemu_thread_start (args=0x555557735be0) at ../util/qemu-thread-posix.c:541
#3 0x00007ffff72626ea in start_thread (arg=0x7fffceffd700) at pthread_create.c:477
#4 0x00007ffff532158f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Breaking with gdb and stepping through the memcpy code generates a
request for a page that's seemingly already in the receivedmap:

(gdb) x/i $pc
=> 0x7ffff5399d14 <__memcpy_evex_unaligned_erms+86>: rep movsb %ds:(%rsi),%es:(%rdi)
(gdb) p/x $rsi
$1 = 0x7fffd68cc000
(gdb) si
postcopy_ram_fault_thread_request Request for HVA=0x7fffd68cc000 rb=pc.ram offset=0xcc000 pid=11754
// these are my printfs:
postcopy_request_page:
migrate_send_rp_req_pages:
migrate_send_rp_req_pages: mutex
migrate_send_rp_req_pages: received

// gdb hangs here, it looks like the page wasn't populated?

I've had my share of postcopy for the day. Hopefully you'll be able to
figure out what the issue is.

- reproducer (2nd iter already hangs for me):

$ for i in $(seq 1 9999); do echo "$i ============="; \
  QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/migration-test \
  --full -r /x86_64/migration/postcopy/multifd/plain || break ; done

- reproducer with traces and gdb:

$ for i in $(seq 1 9999); do echo "$i ============="; \
  QTEST_TRACE="multifd_* -trace source_* -trace postcopy_* -trace savevm_* \
  -trace loadvm_*" QTEST_QEMU_BINARY_DST='gdb --ex "handle SIGUSR1 \
  noprint" --ex "run" --args ./qemu-system-x86_64' \
  QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/migration-test \
  --full -r /x86_64/migration/postcopy/multifd/plain || break ; done
On Tue, 1 Apr 2025 at 02:24, Fabiano Rosas <farosas@suse.de> wrote:
> The postcopy/multifd/plain test is still hanging from time to time. I
> see a vmstate load function trying to access guest memory and the
> postcopy-listen thread already finished, waiting for that
> qemu_loadvm_state() (frame #18) to return and set the
> main_thread_load_event.
>
> [...]
>
> I've had my share of postcopy for the day. Hopefully you'll be able to
> figure out what the issue is.
>
> - reproducer (2nd iter already hangs for me):
>
> $ for i in $(seq 1 9999); do echo "$i ============="; \
>   QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/migration-test \
>   --full -r /x86_64/migration/postcopy/multifd/plain || break ; done
>
> - reproducer with traces and gdb:
>
> $ for i in $(seq 1 9999); do echo "$i ============="; \
>   QTEST_TRACE="multifd_* -trace source_* -trace postcopy_* -trace savevm_* \
>   -trace loadvm_*" QTEST_QEMU_BINARY_DST='gdb --ex "handle SIGUSR1 \
>   noprint" --ex "run" --args ./qemu-system-x86_64' \
>   QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/migration-test \
>   --full -r /x86_64/migration/postcopy/multifd/plain || break ; done

* Thank you for the reproducer and traces. I'll try to check more and
see if I'm able to reproduce it on my side.

Thank you.
---
- Prasad
Prasad Pandit <ppandit@redhat.com> writes:

> On Tue, 1 Apr 2025 at 02:24, Fabiano Rosas <farosas@suse.de> wrote:
>> The postcopy/multifd/plain test is still hanging from time to time. I
>> see a vmstate load function trying to access guest memory and the
>> postcopy-listen thread already finished, waiting for that
>> qemu_loadvm_state() (frame #18) to return and set the
>> main_thread_load_event.
>>
>> [...]
>
> * Thank you for the reproducer and traces. I'll try to check more and
> see if I'm able to reproduce it on my side.
>

Thanks. I cannot merge this series until that issue is resolved. If it
reproduces on my machine there's a high chance that it will break CI at
some point and then it'll be a nightmare to debug. This has happened
many times before with multifd.

> Thank you.
> ---
> - Prasad
Hello Fabiano,
On Thu, 3 Apr 2025 at 18:41, Fabiano Rosas <farosas@suse.de> wrote:
> Prasad Pandit <ppandit@redhat.com> writes:
> > * Thank you for the reproducer and traces. I'll try to check more and
> > see if I'm able to reproduce it on my side.
>
> Thanks. I cannot merge this series until that issue is resolved. If it
> reproduces on my machine there's a high chance that it will break CI at
> some point and then it'll be a nightmare to debug. This has happened
> many times before with multifd.
===
qemu/build)$ for i in $(seq 1 9999); do echo "$i ====";
QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/migration-test
--full -r '/x86_64/migration/postcopy/multifd/plain' || break; done |
tee /tmp/migration-test.out | awk -e '/====/ { printf ("%s ", $_) };
/slow test/ { printf("%s\n", $_); }'
Host-1]
...
9980 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.51 secs
9981 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.47 secs
9982 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.42 secs
9983 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.56 secs
9984 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.44 secs
9985 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.43 secs
9986 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.45 secs
9987 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.53 secs
9988 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.46 secs
9989 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.49 secs
9990 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.48 secs
9991 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.47 secs
9992 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.45 secs
9993 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.47 secs
9994 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.41 secs
9995 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.42 secs
9996 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.58 secs
9997 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.45 secs
9998 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.51 secs
9999 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.51 secs
--------
Iter: 9999, low: 1.35, high: 1.73, avg: 1.47 secs
Host-2]
...
9980 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.45 secs
9981 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.69 secs
9982 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.41 secs
9983 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.54 secs
9984 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.45 secs
9985 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.44 secs
9986 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.48 secs
9987 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.48 secs
9988 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.44 secs
9989 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.51 secs
9990 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.37 secs
9991 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.48 secs
9992 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.51 secs
9993 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.47 secs
9994 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.47 secs
9995 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.45 secs
9996 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.53 secs
9997 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.48 secs
9998 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.47 secs
9999 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.48 secs
--------
Iter: 9999, low: 1.34, high: 1.82, avg: 1.48 secs
Host-3]
...
9980 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.50 secs
9981 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.55 secs
9982 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.54 secs
9983 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.49 secs
9984 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.49 secs
9985 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.52 secs
9986 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.48 secs
9987 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.52 secs
9988 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.54 secs
9989 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.51 secs
9990 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.51 secs
9991 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.50 secs
9992 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.53 secs
9993 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.50 secs
9994 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.53 secs
9995 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.49 secs
9996 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.48 secs
9997 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.54 secs
9998 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.44 secs
9999 ==== # slow test /x86_64/migration/postcopy/multifd/plain
executed in 1.54 secs
--------
Iter: 9999, low: 1.31, high: 2.49, avg: 1.48
===
* I tried to reproduce the hang issue with and without -traces across
3 different machines but am unable to reproduce it on my side.
* Going through the source and the back trace you provided: you said
gdb hangs in the postcopy_ram_fault_thread() function, at the poll()
call, waiting for a missing page.
 - But by this time, the postcopy_ram_listen thread is already preparing
to clean up and exit
 - That means postcopy migration is finished/ending
 - ie. postcopy migration is ending without (or before) migrating
all the RAM pages from the source side?
In postcopy mode:
* Is there a way to log the pages (#numbers) that are sent from the
source side?
* And log the pages (#numbers) that are received on the receiving side?
* That way we might be able to check/confirm which pages were not
received or not processed properly.
* Can we connect the faulting/missing (HVA=0x7fffd68cc000)
address/page in postcopy_ram_fault_thread() with the memcpy that the
main thread seems to be doing in vapic_post_load()? ie. the main
thread and postcopy_ram_fault_thread() above could be doing unrelated
things.
* Other than this, I've revised the patch-set as suggested. How do we
proceed further?
Thank you.
---
- Prasad
Prasad Pandit <ppandit@redhat.com> writes:
> Hello Fabiano,
>
> On Thu, 3 Apr 2025 at 18:41, Fabiano Rosas <farosas@suse.de> wrote:
>> Prasad Pandit <ppandit@redhat.com> writes:
>> > * Thank you for the reproducer and traces. I'll try to check more and
>> > see if I'm able to reproduce it on my side.
>>
>> Thanks. I cannot merge this series until that issue is resolved. If it
>> reproduces on my machine there's a high chance that it will break CI at
>> some point and then it'll be a nightmare to debug. This has happened
>> many times before with multifd.
>
> ===
> qemu/build)$ for i in $(seq 1 9999); do echo "$i ====";
> QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/migration-test
> --full -r '/x86_64/migration/postcopy/multifd/plain' || break; done |
> tee /tmp/migration-test.out | awk -e '/====/ { printf ("%s ", $_) };
> /slow test/ { printf("%s\n", $_); }'
>
> Host-1]
> ...
> 9980 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.51 secs
> 9981 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.47 secs
> 9982 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.42 secs
> 9983 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.56 secs
> 9984 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.44 secs
> 9985 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.43 secs
> 9986 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.45 secs
> 9987 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.53 secs
> 9988 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.46 secs
> 9989 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.49 secs
> 9990 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.48 secs
> 9991 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.47 secs
> 9992 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.45 secs
> 9993 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.47 secs
> 9994 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.41 secs
> 9995 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.42 secs
> 9996 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.58 secs
> 9997 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.45 secs
> 9998 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.51 secs
> 9999 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.51 secs
> --------
> Iter: 9999, low: 1.35, high: 1.73, avg: 1.47 secs
>
>
> Host-2]
> ...
> 9980 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.45 secs
> 9981 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.69 secs
> 9982 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.41 secs
> 9983 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.54 secs
> 9984 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.45 secs
> 9985 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.44 secs
> 9986 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.48 secs
> 9987 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.48 secs
> 9988 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.44 secs
> 9989 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.51 secs
> 9990 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.37 secs
> 9991 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.48 secs
> 9992 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.51 secs
> 9993 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.47 secs
> 9994 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.47 secs
> 9995 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.45 secs
> 9996 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.53 secs
> 9997 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.48 secs
> 9998 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.47 secs
> 9999 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.48 secs
> --------
> Iter: 9999, low: 1.34, high: 1.82, avg: 1.48 secs
>
>
> Host-3]
> ...
> 9980 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.50 secs
> 9981 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.55 secs
> 9982 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.54 secs
> 9983 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.49 secs
> 9984 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.49 secs
> 9985 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.52 secs
> 9986 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.48 secs
> 9987 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.52 secs
> 9988 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.54 secs
> 9989 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.51 secs
> 9990 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.51 secs
> 9991 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.50 secs
> 9992 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.53 secs
> 9993 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.50 secs
> 9994 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.53 secs
> 9995 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.49 secs
> 9996 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.48 secs
> 9997 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.54 secs
> 9998 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.44 secs
> 9999 ==== # slow test /x86_64/migration/postcopy/multifd/plain
> executed in 1.54 secs
> --------
> Iter: 9999, low: 1.31, high: 2.49, avg: 1.48
> ===
>
> * I tried to reproduce the hang issue with and without -traces across
> 3 different machines but am unable to reproduce it on my side.
>
> * Going through the source and the back trace you provided, you said
> gdb hangs in the postcopy_ram_fault_thread() function at poll()
> function to wait for a missing page.
> - But by this time, postcopy_ram_listen thread is already preparing
> to cleanup and exit
> - That means postcopy migration is finished/ending
> - ie. postcopy migration is ending without (or before) migrating
> all the RAM pages from the source side?
>
That's what it looks like. It could be some error condition that is not
being propagated properly. The thread hits an error and exits without
informing the rest of migration.
> In postcopy mode:
> * Is there a way to log the pages (#numers) that are sent from the
> source side?
> * And log the pages (#numbers) that are received on the receive side?
>
> * That way we might be able to check/confirm the pages which were not
> received or not processed properly.
Some combination of the postcopy traces should give you that. Sorry,
Peter Xu really is the expert on postcopy, I just tag along.
>
> * Can we connect the faulting/missing (HVA=0x7fffd68cc000)
> address/page in postcopy_ram_fault_thread() with the memcpy that the
> main thread seems to be loading via vapic_post_load()? ie. the main
> thread and pocyopy_ram_fault_thread() above could be doing unrelated
> things.
>
The snippet I posted shows that it's the same page:
(gdb) x/i $pc
=> 0x7ffff5399d14 <__memcpy_evex_unaligned_erms+86>: rep movsb %ds:(%rsi),%es:(%rdi)
(gdb) p/x $rsi
$1 = 0x7fffd68cc000
> * Other than this, I've revised the patch-set as suggested. How do we
> proceed further?
Send your next version and I'll set some time aside to debug this.
heads-up: I'll be off from 2025/04/18 until 2025/05/05. Peter should be
already back in the meantime.
>
> Thank you.
> ---
> - Prasad
Hi,
On Fri, 11 Apr 2025 at 01:48, Fabiano Rosas <farosas@suse.de> wrote:
> That's what it looks like. It could be some error condition that is not
> being propagated properly. The thread hits an error and exits without
> informing the rest of migration.
* The gdb(1) hang in postcopy_ram_fault_thread() is not
conclusive. I tried to set the following break-points:
 gdb) break postcopy-ram.c:998 - poll_result = poll(pfd, pfd_len,
-1 /* Wait forever */);
 gdb) break postcopy-ram.c:1057 - rb = qemu_ram_block_from_host(...);
gdb(1) hangs for both of them, so there might be another reason for it.
Live migration also stalls with it.
> Some combination of the postcopy traces should give you that. Sorry,
> Peter Xu really is the expert on postcopy, I just tag along.
* I see. Maybe it could be logged with the --migration-debug=<level> option.
> The snippet I posted shows that it's the same page:
>
> (gdb) x/i $pc
> => 0x7ffff5399d14 <__memcpy_evex_unaligned_erms+86>: rep movsb %ds:(%rsi),%es:(%rdi)
> (gdb) p/x $rsi
> $1 = 0x7fffd68cc000
>
===
>> Thread 1 (Thread 0x7fbc4849df80 (LWP 7487) "qemu-system-x86"):
...
>> Thread 10 (Thread 0x7fffce7fc700 (LWP 11778) "mig/dst/listen"):
...
>> Thread 9 (Thread 0x7fffceffd700 (LWP 11777) "mig/dst/fault"):
#0 0x00007ffff5314a89 in __GI___poll (fds=0x7fffc0000b60, nfds=2,
timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
...
postcopy_ram_fault_thread_request Request for HVA=0x7fffd68cc000
rb=pc.ram offset=0xcc000 pid=11754
===
* Looking at the above data, it seems the missing-page fault occurred
in thread=11754; it may not be the memcpy(3) in
thread-1 (pid/tid=7487) that triggered the fault.
* Secondly, if the 'mig/dst/fault' thread is waiting at the poll(2) call,
then a fault notification has not yet arrived on the mis->userfault_fd or
mis->userfault_event_fd descriptors. So the "Request for
HVA=0x7fffd..." logged via postcopy_ram_fault_thread_request() could be an
already-served request.
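For illustration, the kind of check being discussed (a standalone sketch with
invented names; QEMU's receivedmap and fault handling are more involved): a
faulting page that was already placed only needs the waiter woken, not
another request to the source.

/* Illustrative sketch only -- not QEMU's receivedmap API. */
#include <inttypes.h>
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE     4096ULL
#define RAM_PAGES     1024ULL
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static unsigned long received_bitmap[RAM_PAGES / BITS_PER_LONG];

static bool page_received(uint64_t offset)
{
    uint64_t page = offset / PAGE_SIZE;
    return received_bitmap[page / BITS_PER_LONG] &
           (1UL << (page % BITS_PER_LONG));
}

static void mark_received(uint64_t offset)
{
    uint64_t page = offset / PAGE_SIZE;
    received_bitmap[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
}

/* Fault-thread side: a page that is already received only needs the
 * faulting thread woken up; asking the source again would be redundant
 * (and a request that is never answered would hang the loader). */
static void handle_fault(uint64_t offset)
{
    if (page_received(offset)) {
        printf("offset 0x%" PRIx64 ": already placed, just wake waiter\n", offset);
    } else {
        printf("offset 0x%" PRIx64 ": request page from source\n", offset);
    }
}

int main(void)
{
    mark_received(0xcc000);     /* pretend multifd already placed it */
    handle_fault(0xcc000);      /* already placed */
    handle_fault(0xdd000);      /* still missing  */
    return 0;
}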
> Send your next version and I'll set some time aside to debug this.
>
> heads-up: I'll be off from 2025/04/18 until 2025/05/05. Peter should be
> already back in the meantime.
* Okay, I'll send the next version.
Thank you.
---
- Prasad