This causes the migration thread to hang if we failover during
checkpoint. A shutdown fd won't cause network traffic anyway.
Signed-off-by: Lukas Straub <lukasstraub2@web.de>
---
migration/qemu-file.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 1c3a358a14..0748b5810f 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -660,7 +660,7 @@ int64_t qemu_ftell(QEMUFile *f)
 int qemu_file_rate_limit(QEMUFile *f)
 {
     if (f->shutdown) {
-        return 1;
+        return 0;
     }
     if (qemu_file_get_error(f)) {
         return 1;
--
2.20.1
> This causes the migration thread to hang if we failover during checkpoint. A
> shutdown fd won't cause network traffic anyway.
>
I'm not quite sure whether this modification has side effects on the normal migration process; it is called from several places.
Maybe Juan and Dave can help ;)
> Signed-off-by: Lukas Straub <lukasstraub2@web.de>
> ---
> migration/qemu-file.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> index 1c3a358a14..0748b5810f 100644
> --- a/migration/qemu-file.c
> +++ b/migration/qemu-file.c
> @@ -660,7 +660,7 @@ int64_t qemu_ftell(QEMUFile *f)
>  int qemu_file_rate_limit(QEMUFile *f)
>  {
>      if (f->shutdown) {
> -        return 1;
> +        return 0;
>      }
>      if (qemu_file_get_error(f)) {
>          return 1;
> --
> 2.20.1
* Zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> > This causes the migration thread to hang if we failover during checkpoint. A
> > shutdown fd won't cause network traffic anyway.
> >
>
> I'm not quite sure whether this modification has side effects on the normal migration process; it is called from several places.
> 
> Maybe Juan and Dave can help ;)
>
> > Signed-off-by: Lukas Straub <lukasstraub2@web.de>
> > ---
> > migration/qemu-file.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> > index 1c3a358a14..0748b5810f 100644
> > --- a/migration/qemu-file.c
> > +++ b/migration/qemu-file.c
> > @@ -660,7 +660,7 @@ int64_t qemu_ftell(QEMUFile *f)
> >  int qemu_file_rate_limit(QEMUFile *f)
> >  {
> >      if (f->shutdown) {
> > -        return 1;
> > +        return 0;
> >      }
This looks wrong to me; I'd be curious to understand how it's hanging
for you.
'1' means 'stop what you're doing', 0 means carry on; carrying on with a
shutdown fd sounds wrong.
If we look at ram.c we have:
while ((ret = qemu_file_rate_limit(f)) == 0 ||
       !QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
    int pages;
    ....
so if it returns '1', as it does at the moment it should cause it to
exit the ram_save_iterate loop - which is what we want if it's failing.
Thus I think you need to find the actual place it's stuck in this case -
I suspect it's repeatedly calling ram_save_iterate and then exiting it,
but if that's happening perhaps we're missing a qemu_file_get_error
check somewhere.
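i.e. the callers are expected to follow a pattern roughly like this (a
sketch of the intent, not actual savevm code; send_more_data() is a
hypothetical stand-in for the real sender step):

    while (!qemu_file_rate_limit(f)) {
        send_more_data(f);            /* hypothetical sender step */
    }
    if (qemu_file_get_error(f)) {
        return -EIO;                  /* give up rather than retry */
    }

so a missing qemu_file_get_error() check in one of the outer loops would
explain busy-looping without ever giving up.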
Dave
> >      if (qemu_file_get_error(f)) {
> >          return 1;
> > --
> > 2.20.1
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Mon, 18 May 2020 12:55:34 +0100
"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> > > This causes the migration thread to hang if we failover during checkpoint. A
> > > shutdown fd won't cause network traffic anyway.
> > >
> >
> > I'm not quite sure whether this modification has side effects on the normal migration process; it is called from several places.
> >
> > Maybe Juan and Dave can help ;)
> >
> > > Signed-off-by: Lukas Straub <lukasstraub2@web.de>
> > > ---
> > > migration/qemu-file.c | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> > > index 1c3a358a14..0748b5810f 100644
> > > --- a/migration/qemu-file.c
> > > +++ b/migration/qemu-file.c
> > > @@ -660,7 +660,7 @@ int64_t qemu_ftell(QEMUFile *f)
> > >  int qemu_file_rate_limit(QEMUFile *f)
> > >  {
> > >      if (f->shutdown) {
> > > -        return 1;
> > > +        return 0;
> > >      }
>
> This looks wrong to me; I'd be curious to understand how it's hanging
> for you.
> '1' means 'stop what you're doing', 0 means carry on; carrying on with a
> shutdown fd sounds wrong.
>
> If we look at ram.c we have:
>
> while ((ret = qemu_file_rate_limit(f)) == 0 ||
>        !QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
>     int pages;
>     ....
>
> so if it returns '1', as it does at the moment it should cause it to
> exit the ram_save_iterate loop - which is what we want if it's failing.
> Thus I think you need to find the actual place it's stuck in this case -
> I suspect it's repeatedly calling ram_save_iterate and then exiting it,
> but if that's happening perhaps we're missing a qemu_file_get_error
> check somewhere.
Hi,
the problem is in ram_save_host_page and migration_rate_limit; here is a backtrace:
#0 0x00007f7b502921a8 in futex_abstimed_wait_cancelable (private=0, abstime=0x7f7ada7fb3f0, clockid=0, expected=0, futex_word=0x55bc358b9908) at ../sysdeps/unix/sysv/linux/futex-internal.h:208
#1 do_futex_wait (sem=sem@entry=0x55bc358b9908, abstime=abstime@entry=0x7f7ada7fb3f0, clockid=0) at sem_waitcommon.c:112
#2 0x00007f7b502922d3 in __new_sem_wait_slow (sem=0x55bc358b9908, abstime=0x7f7ada7fb3f0, clockid=0) at sem_waitcommon.c:184
#3 0x000055bc3382b6c1 in qemu_sem_timedwait (sem=0x55bc358b9908, ms=100) at util/qemu-thread-posix.c:306
#4 0x000055bc3363950b in migration_rate_limit () at migration/migration.c:3365
#5 0x000055bc332b70d3 in ram_save_host_page (rs=0x7f7acc001a70, pss=0x7f7ada7fb4b0, last_stage=true) at /home/lukas/qemu/migration/ram.c:1696
#6 0x000055bc332b71fa in ram_find_and_save_block (rs=0x7f7acc001a70, last_stage=true) at /home/lukas/qemu/migration/ram.c:1750
#7 0x000055bc332b8bbd in ram_save_complete (f=0x55bc36661330, opaque=0x55bc33fbc678 <ram_state>) at /home/lukas/qemu/migration/ram.c:2606
#8 0x000055bc3364112c in qemu_savevm_state_complete_precopy_iterable (f=0x55bc36661330, in_postcopy=false) at migration/savevm.c:1344
#9 0x000055bc33641556 in qemu_savevm_state_complete_precopy (f=0x55bc36661330, iterable_only=true, inactivate_disks=false) at migration/savevm.c:1442
#10 0x000055bc33641982 in qemu_savevm_live_state (f=0x55bc36661330) at migration/savevm.c:1569
#11 0x000055bc33645407 in colo_do_checkpoint_transaction (s=0x55bc358b9840, bioc=0x7f7acc059990, fb=0x7f7acc4627b0) at migration/colo.c:464
#12 0x000055bc336457ca in colo_process_checkpoint (s=0x55bc358b9840) at migration/colo.c:589
#13 0x000055bc336459e4 in migrate_start_colo_process (s=0x55bc358b9840) at migration/colo.c:666
#14 0x000055bc336393d7 in migration_iteration_finish (s=0x55bc358b9840) at migration/migration.c:3312
#15 0x000055bc33639753 in migration_thread (opaque=0x55bc358b9840) at migration/migration.c:3477
#16 0x000055bc3382bbb5 in qemu_thread_start (args=0x55bc357c27c0) at util/qemu-thread-posix.c:519
#17 0x00007f7b50288f27 in start_thread (arg=<optimized out>) at pthread_create.c:479
#18 0x00007f7b501ba31f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
It hangs in ram_save_host_page for at least 10 minutes.
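Simplified, the path it is stuck on looks like this (paraphrased from the
functions in the backtrace, not the exact source):

    /* migration/ram.c, ram_save_host_page(), paraphrased */
    do {
        tmppages = ram_save_target_page(rs, pss, last_stage);
        ...
        /* allow rate limiting in the middle of huge pages */
        migration_rate_limit();
    } while (/* still inside the same host page */);

    /* migration/migration.c, migration_rate_limit(), paraphrased */
    if (qemu_file_rate_limit(s->to_dst_file)) {
        /* always true once f->shutdown is set, so every call
         * sleeps up to 100ms without making progress */
        qemu_sem_timedwait(&s->rate_limit_sem, 100);
    }

So once the fd is shut down, qemu_file_rate_limit() returns 1 on every
call and we spend all the time sleeping in qemu_sem_timedwait() instead
of noticing that the stream is dead.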
Regards,
Lukas Straub
> Dave
>
> > >      if (qemu_file_get_error(f)) {
> > >          return 1;
> > > --
> > > 2.20.1
> >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
* Lukas Straub (lukasstraub2@web.de) wrote:
> On Mon, 18 May 2020 12:55:34 +0100
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
>
> > * Zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> > > > This causes the migration thread to hang if we failover during checkpoint. A
> > > > shutdown fd won't cause network traffic anyway.
> > > >
> > >
> > > I'm not quite sure whether this modification has side effects on the normal migration process; it is called from several places.
> > >
> > > Maybe Juan and Dave can help ;)
> > >
> > > > Signed-off-by: Lukas Straub <lukasstraub2@web.de>
> > > > ---
> > > > migration/qemu-file.c | 2 +-
> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> > > > index 1c3a358a14..0748b5810f 100644
> > > > --- a/migration/qemu-file.c
> > > > +++ b/migration/qemu-file.c
> > > > @@ -660,7 +660,7 @@ int64_t qemu_ftell(QEMUFile *f)
> > > >  int qemu_file_rate_limit(QEMUFile *f)
> > > >  {
> > > >      if (f->shutdown) {
> > > > -        return 1;
> > > > +        return 0;
> > > >      }
> >
> > This looks wrong to me; I'd be curious to understand how it's hanging
> > for you.
> > '1' means 'stop what you're doing', 0 means carry on; carrying on with a
> > shutdown fd sounds wrong.
> >
> > If we look at ram.c we have:
> >
> > while ((ret = qemu_file_rate_limit(f)) == 0 ||
> >        !QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
> >     int pages;
> >     ....
> >
> > so if it returns '1', as it does at the moment it should cause it to
> > exit the ram_save_iterate loop - which is what we want if it's failing.
> > Thus I think you need to find the actual place it's stuck in this case -
> > I suspect it's repeatedly calling ram_save_iterate and then exiting it,
> > but if that's happening perhaps we're missing a qemu_file_get_error
> > check somewhere.
>
> Hi,
> the problem is in ram_save_host_page and migration_rate_limit; here is a backtrace:
Ah...
> #0 0x00007f7b502921a8 in futex_abstimed_wait_cancelable (private=0, abstime=0x7f7ada7fb3f0, clockid=0, expected=0, futex_word=0x55bc358b9908) at ../sysdeps/unix/sysv/linux/futex-internal.h:208
> #1 do_futex_wait (sem=sem@entry=0x55bc358b9908, abstime=abstime@entry=0x7f7ada7fb3f0, clockid=0) at sem_waitcommon.c:112
> #2 0x00007f7b502922d3 in __new_sem_wait_slow (sem=0x55bc358b9908, abstime=0x7f7ada7fb3f0, clockid=0) at sem_waitcommon.c:184
> #3 0x000055bc3382b6c1 in qemu_sem_timedwait (sem=0x55bc358b9908, ms=100) at util/qemu-thread-posix.c:306
> #4 0x000055bc3363950b in migration_rate_limit () at migration/migration.c:3365
OK, so how about:
diff --git a/migration/migration.c b/migration/migration.c
index b6b662e016..4e885385a8 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3356,6 +3356,10 @@ bool migration_rate_limit(void)
     bool urgent = false;
     migration_update_counters(s, now);
     if (qemu_file_rate_limit(s->to_dst_file)) {
+
+        if (qemu_file_get_error(mis->from_src_file)) {
+            return false;
+        }
         /*
          * Wait for a delay to do rate limiting OR
          * something urgent to post the semaphore.
Does that work?
I wonder if we also need to kick the rate_limit_sem when we yank the
socket.
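Something like this, maybe (completely untested, and qemu-file.c calling
back into migration code might be the wrong layering), in
qemu_file_shutdown():

     f->shutdown = true;
+    /* wake up anyone sleeping on rate_limit_sem in migration_rate_limit() */
+    migration_make_urgent_request();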
Dave
> #5 0x000055bc332b70d3 in ram_save_host_page (rs=0x7f7acc001a70, pss=0x7f7ada7fb4b0, last_stage=true) at /home/lukas/qemu/migration/ram.c:1696
> #6 0x000055bc332b71fa in ram_find_and_save_block (rs=0x7f7acc001a70, last_stage=true) at /home/lukas/qemu/migration/ram.c:1750
> #7 0x000055bc332b8bbd in ram_save_complete (f=0x55bc36661330, opaque=0x55bc33fbc678 <ram_state>) at /home/lukas/qemu/migration/ram.c:2606
> #8 0x000055bc3364112c in qemu_savevm_state_complete_precopy_iterable (f=0x55bc36661330, in_postcopy=false) at migration/savevm.c:1344
> #9 0x000055bc33641556 in qemu_savevm_state_complete_precopy (f=0x55bc36661330, iterable_only=true, inactivate_disks=false) at migration/savevm.c:1442
> #10 0x000055bc33641982 in qemu_savevm_live_state (f=0x55bc36661330) at migration/savevm.c:1569
> #11 0x000055bc33645407 in colo_do_checkpoint_transaction (s=0x55bc358b9840, bioc=0x7f7acc059990, fb=0x7f7acc4627b0) at migration/colo.c:464
> #12 0x000055bc336457ca in colo_process_checkpoint (s=0x55bc358b9840) at migration/colo.c:589
> #13 0x000055bc336459e4 in migrate_start_colo_process (s=0x55bc358b9840) at migration/colo.c:666
> #14 0x000055bc336393d7 in migration_iteration_finish (s=0x55bc358b9840) at migration/migration.c:3312
> #15 0x000055bc33639753 in migration_thread (opaque=0x55bc358b9840) at migration/migration.c:3477
> #16 0x000055bc3382bbb5 in qemu_thread_start (args=0x55bc357c27c0) at util/qemu-thread-posix.c:519
> #17 0x00007f7b50288f27 in start_thread (arg=<optimized out>) at pthread_create.c:479
> #18 0x00007f7b501ba31f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>
> It hangs in ram_save_host_page for at least 10 minutes.
>
> Regards,
> Lukas Straub
>
> > Dave
> >
> > > >      if (qemu_file_get_error(f)) {
> > > >          return 1;
> > > > --
> > > > 2.20.1
> > >
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Tue, 19 May 2020 15:50:20 +0100
"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Lukas Straub (lukasstraub2@web.de) wrote:
> > On Mon, 18 May 2020 12:55:34 +0100
> > "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> >
> > > * Zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> > > > > This causes the migration thread to hang if we failover during checkpoint. A
> > > > > shutdown fd won't cause network traffic anyway.
> > > > >
> > > >
> > > > I'm not quite sure whether this modification has side effects on the normal migration process; it is called from several places.
> > > >
> > > > Maybe Juan and Dave can help ;)
> > > >
> > > > > Signed-off-by: Lukas Straub <lukasstraub2@web.de>
> > > > > ---
> > > > > migration/qemu-file.c | 2 +-
> > > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> > > > > index 1c3a358a14..0748b5810f 100644
> > > > > --- a/migration/qemu-file.c
> > > > > +++ b/migration/qemu-file.c
> > > > > @@ -660,7 +660,7 @@ int64_t qemu_ftell(QEMUFile *f)
> > > > >  int qemu_file_rate_limit(QEMUFile *f)
> > > > >  {
> > > > >      if (f->shutdown) {
> > > > > -        return 1;
> > > > > +        return 0;
> > > > >      }
> > >
> > > This looks wrong to me; I'd be curious to understand how it's hanging
> > > for you.
> > > '1' means 'stop what you're doing', 0 means carry on; carrying on with a
> > > shutdown fd sounds wrong.
> > >
> > > If we look at ram.c we have:
> > >
> > > while ((ret = qemu_file_rate_limit(f)) == 0 ||
> > >        !QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
> > >     int pages;
> > >     ....
> > >
> > > so if it returns '1', as it does at the moment it should cause it to
> > > exit the ram_save_iterate loop - which is what we want if it's failing.
> > > Thus I think you need to find the actual place it's stuck in this case -
> > > I suspect it's repeatedly calling ram_save_iterate and then exiting it,
> > > but if that's happening perhaps we're missing a qemu_file_get_error
> > > check somewhere.
> >
> > Hi,
> > the problem is in ram_save_host_page and migration_rate_limit; here is a backtrace:
>
> Ah...
>
> > #0 0x00007f7b502921a8 in futex_abstimed_wait_cancelable (private=0, abstime=0x7f7ada7fb3f0, clockid=0, expected=0, futex_word=0x55bc358b9908) at ../sysdeps/unix/sysv/linux/futex-internal.h:208
> > #1 do_futex_wait (sem=sem@entry=0x55bc358b9908, abstime=abstime@entry=0x7f7ada7fb3f0, clockid=0) at sem_waitcommon.c:112
> > #2 0x00007f7b502922d3 in __new_sem_wait_slow (sem=0x55bc358b9908, abstime=0x7f7ada7fb3f0, clockid=0) at sem_waitcommon.c:184
> > #3 0x000055bc3382b6c1 in qemu_sem_timedwait (sem=0x55bc358b9908, ms=100) at util/qemu-thread-posix.c:306
> > #4 0x000055bc3363950b in migration_rate_limit () at migration/migration.c:3365
>
> OK, so how about:
>
> diff --git a/migration/migration.c b/migration/migration.c
> index b6b662e016..4e885385a8 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -3356,6 +3356,10 @@ bool migration_rate_limit(void)
>      bool urgent = false;
>      migration_update_counters(s, now);
>      if (qemu_file_rate_limit(s->to_dst_file)) {
> +
> +        if (qemu_file_get_error(mis->from_src_file)) {
> +            return false;
> +        }
>          /*
>           * Wait for a delay to do rate limiting OR
>           * something urgent to post the semaphore.
>
> Does that work?
Yes, this works well using s->to_dst_file instead of mis->from_src_file.
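For reference, the hunk I'm testing is the same as yours with just the
file argument swapped:

@@ -3356,6 +3356,10 @@ bool migration_rate_limit(void)
     bool urgent = false;
     migration_update_counters(s, now);
     if (qemu_file_rate_limit(s->to_dst_file)) {
+
+        if (qemu_file_get_error(s->to_dst_file)) {
+            return false;
+        }
         /*
          * Wait for a delay to do rate limiting OR
          * something urgent to post the semaphore.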
Regards,
Lukas Straub
> I wonder if we also need to kick the rate_limit_sem when we yank the
> socket.
>
> Dave
>
> > #5 0x000055bc332b70d3 in ram_save_host_page (rs=0x7f7acc001a70, pss=0x7f7ada7fb4b0, last_stage=true) at /home/lukas/qemu/migration/ram.c:1696
> > #6 0x000055bc332b71fa in ram_find_and_save_block (rs=0x7f7acc001a70, last_stage=true) at /home/lukas/qemu/migration/ram.c:1750
> > #7 0x000055bc332b8bbd in ram_save_complete (f=0x55bc36661330, opaque=0x55bc33fbc678 <ram_state>) at /home/lukas/qemu/migration/ram.c:2606
> > #8 0x000055bc3364112c in qemu_savevm_state_complete_precopy_iterable (f=0x55bc36661330, in_postcopy=false) at migration/savevm.c:1344
> > #9 0x000055bc33641556 in qemu_savevm_state_complete_precopy (f=0x55bc36661330, iterable_only=true, inactivate_disks=false) at migration/savevm.c:1442
> > #10 0x000055bc33641982 in qemu_savevm_live_state (f=0x55bc36661330) at migration/savevm.c:1569
> > #11 0x000055bc33645407 in colo_do_checkpoint_transaction (s=0x55bc358b9840, bioc=0x7f7acc059990, fb=0x7f7acc4627b0) at migration/colo.c:464
> > #12 0x000055bc336457ca in colo_process_checkpoint (s=0x55bc358b9840) at migration/colo.c:589
> > #13 0x000055bc336459e4 in migrate_start_colo_process (s=0x55bc358b9840) at migration/colo.c:666
> > #14 0x000055bc336393d7 in migration_iteration_finish (s=0x55bc358b9840) at migration/migration.c:3312
> > #15 0x000055bc33639753 in migration_thread (opaque=0x55bc358b9840) at migration/migration.c:3477
> > #16 0x000055bc3382bbb5 in qemu_thread_start (args=0x55bc357c27c0) at util/qemu-thread-posix.c:519
> > #17 0x00007f7b50288f27 in start_thread (arg=<optimized out>) at pthread_create.c:479
> > #18 0x00007f7b501ba31f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
> >
> > It hangs in ram_save_host_page for at least 10 minutes.
> >
> > Regards,
> > Lukas Straub
> >
> > > Dave
> > >
> > > > >      if (qemu_file_get_error(f)) {
> > > > >          return 1;
> > > > > --
> > > > > 2.20.1
> > > >
> > > --
> > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > >
> >
>
>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>