We were checking against -EIO, assuming that it would cover all IO
failures. But actually it does not. One example is that in
qemu_loadvm_section_start_full() there are many places that will
return -EINVAL even if the error is caused by IO failures on the
network.

Let's loosen the recovery check logic here to cover all the error
cases by removing the explicit check against -EIO. After all, we
won't lose anything here if some other failure happens.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/savevm.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/migration/savevm.c b/migration/savevm.c
index 851d74e8b6..efcc795071 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2276,18 +2276,14 @@ out:
         qemu_file_set_error(f, ret);

         /*
-         * Detect whether it is:
-         *
-         * 1. postcopy running (after receiving all device data, which
-         *    must be in POSTCOPY_INCOMING_RUNNING state. Note that
-         *    POSTCOPY_INCOMING_LISTENING is still not enough, it's
-         *    still receiving device states).
-         * 2. network failure (-EIO)
-         *
-         * If so, we try to wait for a recovery.
+         * If we are during an active postcopy, then we pause instead
+         * of bail out to at least keep the VM's dirty data. Note
+         * that POSTCOPY_INCOMING_LISTENING stage is still not enough,
+         * during which we're still receiving device states and we
+         * still haven't yet started the VM on destination.
          */
         if (postcopy_state_get() == POSTCOPY_INCOMING_RUNNING &&
-            ret == -EIO && postcopy_pause_incoming(mis)) {
+            postcopy_pause_incoming(mis)) {
             /* Reset f to point to the newly created channel */
             f = mis->from_src_file;
             goto retry;
--
2.17.1
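
To make the failure mode described in the commit message concrete, here is a
minimal sketch of the kind of parsing helper involved. The helper name
load_section_name() and its exact logic are invented for illustration; only
QEMUFile, qemu_get_byte(), and qemu_get_buffer() are real QEMU interfaces.
When the source channel dies mid-stream, the reads come back empty, the
validation fails, and the caller sees -EINVAL even though the root cause was
an IO failure on the network:

/* Sketch only: load_section_name() is a made-up stand-in, not QEMU code. */
static int load_section_name(QEMUFile *f, char *buf, size_t buflen)
{
    size_t len = qemu_get_byte(f);       /* length prefix from the stream */

    if (len == 0 || len >= buflen) {
        /* A dead socket makes the read return 0, so we land here and
         * report a "malformed header"... */
        return -EINVAL;                  /* ...hiding the real IO error */
    }
    if (qemu_get_buffer(f, (uint8_t *)buf, len) != len) {
        return -EINVAL;                  /* short read: again -EINVAL */
    }
    buf[len] = '\0';
    return 0;
}
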
* Peter Xu (peterx@redhat.com) wrote:
> We were checking against -EIO, assuming that it would cover all IO
> failures. But actually it does not. One example is that in
> qemu_loadvm_section_start_full() there are many places that will
> return -EINVAL even if the error is caused by IO failures on the
> network.
I suspect we should fix those, but OK. I think the only cases
in there that could hit that are get_counted_string and
check_section_footer; they could both be fixed to return an errno
like the other cases.
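
For what it's worth, a minimal sketch of that suggestion, reusing the
hypothetical load_section_name() helper from the sketch after the patch;
qemu_file_get_error() is the real accessor that returns the negative errno
already recorded on the stream (or 0 if none):

/* Sketch of the suggestion: prefer the stream's own errno over -EINVAL. */
static int load_section_name_errno(QEMUFile *f, char *buf, size_t buflen)
{
    int ret = load_section_name(f, buf, buflen);

    if (ret == -EINVAL) {
        int ferr = qemu_file_get_error(f);

        if (ferr) {
            ret = ferr;          /* e.g. -EIO from the broken socket */
        }
    }
    return ret;
}
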
> Let's loosen the recovery check logic here to cover all the error
> cases by removing the explicit check against -EIO. After all, we
> won't lose anything here if some other failure happens.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> migration/savevm.c | 16 ++++++----------
> 1 file changed, 6 insertions(+), 10 deletions(-)
>
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 851d74e8b6..efcc795071 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -2276,18 +2276,14 @@ out:
>          qemu_file_set_error(f, ret);
>
>          /*
> -         * Detect whether it is:
> -         *
> -         * 1. postcopy running (after receiving all device data, which
> -         *    must be in POSTCOPY_INCOMING_RUNNING state. Note that
> -         *    POSTCOPY_INCOMING_LISTENING is still not enough, it's
> -         *    still receiving device states).
> -         * 2. network failure (-EIO)
> -         *
> -         * If so, we try to wait for a recovery.
> +         * If we are during an active postcopy, then we pause instead
> +         * of bail out to at least keep the VM's dirty data. Note
> +         * that POSTCOPY_INCOMING_LISTENING stage is still not enough,
> +         * during which we're still receiving device states and we
> +         * still haven't yet started the VM on destination.
>           */
>          if (postcopy_state_get() == POSTCOPY_INCOMING_RUNNING &&
> -            ret == -EIO && postcopy_pause_incoming(mis)) {
> +            postcopy_pause_incoming(mis)) {
Hmm, OK. It does make me a bit nervous that we might trigger this
too often, but I can see there are other cases where we're missing
things and should be triggering it. We might find we need to go the
other way and blacklist some errors.
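
Purely as a sketch of that last idea (nothing like this is in the patch, and
the helper name and the errno choice are invented), a blacklist could be a
small classifier consulted next to the POSTCOPY_INCOMING_RUNNING check in the
hunk above:

/* Hypothetical: only pause for errors a fresh channel could plausibly cure. */
static bool postcopy_pause_on_error(int ret)
{
    switch (ret) {
    case -ENOMEM:
        /* Local allocation failure; reconnecting would not help. */
        return false;
    default:
        /* -EIO, -EINVAL from a dead stream, etc.: worth pausing for. */
        return true;
    }
}
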
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>              /* Reset f to point to the newly created channel */
>              f = mis->from_src_file;
>              goto retry;
> --
> 2.17.1
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Thu, Jul 05, 2018 at 10:15:47AM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > We were checking against -EIO, assuming that it would cover all IO
> > failures. But actually it does not. One example is that in
> > qemu_loadvm_section_start_full() there are many places that will
> > return -EINVAL even if the error is caused by IO failures on the
> > network.
>
> I suspect we should fix those, but OK. I think the only cases
> in there that could hit that are get_counted_string and
> check_section_footer; they could both be fixed to return an errno
> like the other cases.
>
> > Let's loosen the recovery check logic here to cover all the error
> > cases by removing the explicit check against -EIO. After all, we
> > won't lose anything here if some other failure happens.
> >
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> > migration/savevm.c | 16 ++++++----------
> > 1 file changed, 6 insertions(+), 10 deletions(-)
> >
> > diff --git a/migration/savevm.c b/migration/savevm.c
> > index 851d74e8b6..efcc795071 100644
> > --- a/migration/savevm.c
> > +++ b/migration/savevm.c
> > @@ -2276,18 +2276,14 @@ out:
> >          qemu_file_set_error(f, ret);
> >
> >          /*
> > -         * Detect whether it is:
> > -         *
> > -         * 1. postcopy running (after receiving all device data, which
> > -         *    must be in POSTCOPY_INCOMING_RUNNING state. Note that
> > -         *    POSTCOPY_INCOMING_LISTENING is still not enough, it's
> > -         *    still receiving device states).
> > -         * 2. network failure (-EIO)
> > -         *
> > -         * If so, we try to wait for a recovery.
> > +         * If we are during an active postcopy, then we pause instead
> > +         * of bail out to at least keep the VM's dirty data. Note
> > +         * that POSTCOPY_INCOMING_LISTENING stage is still not enough,
> > +         * during which we're still receiving device states and we
> > +         * still haven't yet started the VM on destination.
> >           */
> >          if (postcopy_state_get() == POSTCOPY_INCOMING_RUNNING &&
> > -            ret == -EIO && postcopy_pause_incoming(mis)) {
> > +            postcopy_pause_incoming(mis)) {
>
> Hmm, OK. It does make me a bit nervous that we might trigger this
> too often, but I can see there are other cases where we're missing
> things and should be triggering it. We might find we need to go the
> other way and blacklist some errors.
AFAIU the only difference will be that we pause instead of quitting
QEMU. IMHO a whitelist/blacklist will become more meaningful when
there is a third choice for us (besides "quit" and "go into pause
state").
>
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Thanks!
--
Peter Xu
Peter Xu <peterx@redhat.com> wrote:
> We were checking against -EIO, assuming that it would cover all IO
> failures. But actually it does not. One example is that in
> qemu_loadvm_section_start_full() there are many places that will
> return -EINVAL even if the error is caused by IO failures on the
> network.
>
> Let's loosen the recovery check logic here to cover all the error
> cases by removing the explicit check against -EIO. After all, we
> won't lose anything here if some other failure happens.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>