Every running QEMU process we are willing to reconnect to (i.e., at least
3.1.0) supports migration events and we can assume the capability is
already enabled since the last time the libvirt daemon connected to its
monitor.

Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to
start QEMU 3.1.0 or newer, migration events would not be enabled. And if
the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU
process is still running, they would not be able to migrate the domain
because of disabled migration events. I think we do not really need to
worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU
3.1.0 was released only 3.5 years ago. Thus the chance that someone would
be running such a configuration should be fairly small, and combining it
with an upgrade from 1.2.17 directly to 8.4.0 (or newer) with running
domains should bring it pretty much to zero. The issue would disappear if
the ancient libvirt is first upgraded to something newer than 1.2.17 but
older than 8.4.0 and then to the current libvirt.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
Notes:
I was hoping we could do this without any downside, but it appeared this
was not possible. In case someone still thinks this would be an issue, I
can take an alternative solution and check whether migration events are
enabled when reconnecting to QEMU monitor.
src/qemu/qemu_migration_params.c | 26 ++++++++++++++------------
src/qemu/qemu_migration_params.h | 3 ++-
src/qemu/qemu_process.c | 14 +++++++++-----
3 files changed, 25 insertions(+), 18 deletions(-)
diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c
index 26754f03f8..95fd773645 100644
--- a/src/qemu/qemu_migration_params.c
+++ b/src/qemu/qemu_migration_params.c
@@ -1385,10 +1385,10 @@ qemuMigrationParamsParse(xmlXPathContextPtr ctxt,
int
qemuMigrationCapsCheck(virQEMUDriver *driver,
virDomainObj *vm,
- int asyncJob)
+ int asyncJob,
+ bool reconnect)
{
qemuDomainObjPrivate *priv = vm->privateData;
- g_autoptr(virBitmap) migEvent = NULL;
g_autoptr(virJSONValue) json = NULL;
g_auto(GStrv) caps = NULL;
char **capStr;
@@ -1419,22 +1419,24 @@ qemuMigrationCapsCheck(virQEMUDriver *driver,
}
}
- migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
+ if (!reconnect) {
+ g_autoptr(virBitmap) migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
- ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
+ ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
- if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
- return -1;
+ if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
+ return -1;
- if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
- return -1;
+ if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
+ return -1;
- rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
+ rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitor(vm);
- if (rc < 0)
- return -1;
+ if (rc < 0)
+ return -1;
+ }
/* Migration events capability must always be enabled, clearing it from
* migration capabilities bitmap makes sure it won't be touched anywhere
diff --git a/src/qemu/qemu_migration_params.h b/src/qemu/qemu_migration_params.h
index 4a8815e776..a0909b9f3d 100644
--- a/src/qemu/qemu_migration_params.h
+++ b/src/qemu/qemu_migration_params.h
@@ -162,7 +162,8 @@ qemuMigrationParamsParse(xmlXPathContextPtr ctxt,
int
qemuMigrationCapsCheck(virQEMUDriver *driver,
virDomainObj *vm,
- int asyncJob);
+ int asyncJob,
+ bool reconnect);
bool
qemuMigrationCapsGet(virDomainObj *vm,
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 2b16298942..fd4db43a42 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1914,8 +1914,12 @@ qemuProcessInitMonitor(virQEMUDriver *driver,
static int
-qemuConnectMonitor(virQEMUDriver *driver, virDomainObj *vm, int asyncJob,
- bool retry, qemuDomainLogContext *logCtxt)
+qemuConnectMonitor(virQEMUDriver *driver,
+ virDomainObj *vm,
+ int asyncJob,
+ bool retry,
+ qemuDomainLogContext *logCtxt,
+ bool reconnect)
{
qemuDomainObjPrivate *priv = vm->privateData;
qemuMonitor *mon = NULL;
@@ -1968,7 +1972,7 @@ qemuConnectMonitor(virQEMUDriver *driver, virDomainObj *vm, int asyncJob,
if (qemuProcessInitMonitor(driver, vm, asyncJob) < 0)
return -1;
- if (qemuMigrationCapsCheck(driver, vm, asyncJob) < 0)
+ if (qemuMigrationCapsCheck(driver, vm, asyncJob, reconnect) < 0)
return -1;
return 0;
@@ -2353,7 +2357,7 @@ qemuProcessWaitForMonitor(virQEMUDriver *driver,
VIR_DEBUG("Connect monitor to vm=%p name='%s' retry=%d",
vm, vm->def->name, retry);
- if (qemuConnectMonitor(driver, vm, asyncJob, retry, logCtxt) < 0)
+ if (qemuConnectMonitor(driver, vm, asyncJob, retry, logCtxt, false) < 0)
goto cleanup;
/* Try to get the pty path mappings again via the monitor. This is much more
@@ -8773,7 +8777,7 @@ qemuProcessReconnect(void *opaque)
tryMonReconn = true;
/* XXX check PID liveliness & EXE path */
- if (qemuConnectMonitor(driver, obj, VIR_ASYNC_JOB_NONE, retry, NULL) < 0)
+ if (qemuConnectMonitor(driver, obj, VIR_ASYNC_JOB_NONE, retry, NULL, true) < 0)
goto error;
priv->machineName = qemuDomainGetMachineName(obj);
--
2.35.1
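
For reference, on a fresh start (the !reconnect branch above) the
qemuMonitorSetMigrationCapabilities() call boils down to a single QMP
command on the monitor. A rough sketch of the exchange; the capability
name "events" corresponds to QEMU_MIGRATION_CAP_EVENTS, and the exact
JSON layout here is illustrative:

  {"execute": "migrate-set-capabilities",
   "arguments": {"capabilities": [{"capability": "events", "state": true}]}}
  {"return": {}}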
On Thu, May 12, 2022 at 10:55:47AM +0200, Jiri Denemark wrote:
> Every running QEMU process we are willing to reconnect (i.e., at least
> 3.1.0) supports migration events and we can assume the capability is
> already enabled since last time libvirt daemon connected to its monitor.
>
> Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to
> start QEMU 3.1.0 or newer, migration events would not be enabled. And if
> the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU
> process is still running, they would not be able to migrate the domain
> because of disabled migration events. I think we do not really need to
> worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU
> 3.1.0 was released only 3.5 years ago. Thus a chance someone would be
> running such configuration should be fairly small and a combination with
> upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it
> pretty much to zero. The issue would disappear if the ancient libvirt is
> first upgraded to something older than 8.4.0 and then to the current
> libvirt.

libvirt 1.2.17 was released in Jul 2015
QEMU 3.1.0 was released in Dec 2018

New QEMU has periodically introduced CLI changes that made it
incompatible with older libvirt. Given that 3+1/2 year gap, I feel like
there is a pretty decent chance that it was impossible to ever start
QEMU 3.1.0 using a libvirt 1.2.17. If so, we don't have anything to
worry about from an upgrade POV.

With regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
On Thu, May 12, 2022 at 10:55:47 +0200, Jiri Denemark wrote:
> Every running QEMU process we are willing to reconnect (i.e., at least
> 3.1.0) supports migration events and we can assume the capability is
> already enabled since last time libvirt daemon connected to its monitor.
>
> Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to
> start QEMU 3.1.0 or newer, migration events would not be enabled. And if
> the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU
> process is still running, they would not be able to migrate the domain
But doesn't the function below actually enable the events? Or is there
something else that needs to be done?
Such that we simply could enable the events and be done with it?
> because of disabled migration events. I think we do not really need to
> worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU
> 3.1.0 was released only 3.5 years ago. Thus a chance someone would be
> running such configuration should be fairly small and a combination with
> upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it
> pretty much to zero. The issue would disappear if the ancient libvirt is
> first upgraded to something older than 8.4.0 and then to the current
> libvirt.
>
> Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
> ---
>
> Notes:
> I was hoping we could do this without any downside, but it appeared this
> was not possible. In case someone still thinks this would be an issue, I
> can take an alternative solution and check whether migration events are
> enabled when reconnecting to QEMU monitor.
>
> src/qemu/qemu_migration_params.c | 26 ++++++++++++++------------
> src/qemu/qemu_migration_params.h | 3 ++-
> src/qemu/qemu_process.c | 14 +++++++++-----
> 3 files changed, 25 insertions(+), 18 deletions(-)
>
> diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c
> index 26754f03f8..95fd773645 100644
> --- a/src/qemu/qemu_migration_params.c
> +++ b/src/qemu/qemu_migration_params.c
> @@ -1385,10 +1385,10 @@ qemuMigrationParamsParse(xmlXPathContextPtr ctxt,
> int
> qemuMigrationCapsCheck(virQEMUDriver *driver,
> virDomainObj *vm,
> - int asyncJob)
> + int asyncJob,
> + bool reconnect)
> {
> qemuDomainObjPrivate *priv = vm->privateData;
> - g_autoptr(virBitmap) migEvent = NULL;
> g_autoptr(virJSONValue) json = NULL;
> g_auto(GStrv) caps = NULL;
> char **capStr;
> @@ -1419,22 +1419,24 @@ qemuMigrationCapsCheck(virQEMUDriver *driver,
> }
> }
>
> - migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
> + if (!reconnect) {
> + g_autoptr(virBitmap) migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
>
> - ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
> + ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
Here you assert the flag ...
>
> - if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
> - return -1;
> + if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
> + return -1;
>
> - if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
> - return -1;
> + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
> + return -1;
>
> - rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
> + rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
... and ask the monitor to enable it. To me that looks like we're good
even with older qemus, right?
>
> - qemuDomainObjExitMonitor(vm);
> + qemuDomainObjExitMonitor(vm);
>
> - if (rc < 0)
> - return -1;
> + if (rc < 0)
> + return -1;
> + }
On Thu, May 12, 2022 at 12:40:54 +0200, Peter Krempa wrote:
> On Thu, May 12, 2022 at 10:55:47 +0200, Jiri Denemark wrote:
> > Every running QEMU process we are willing to reconnect (i.e., at least
> > 3.1.0) supports migration events and we can assume the capability is
> > already enabled since last time libvirt daemon connected to its monitor.
> >
> > Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to
> > start QEMU 3.1.0 or newer, migration events would not be enabled. And if
> > the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU
> > process is still running, they would not be able to migrate the domain
>
> But doesn't the function below actually enable the events? Or is there
> something else that needs to be done?
>
> Such that we simply could enable the events and be done with it?
We can't blindly enable the events all the time as it is a migration
capability and as such can only be set when there's no active migration.
> > because of disabled migration events. I think we do not really need to
> > worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU
> > 3.1.0 was released only 3.5 years ago. Thus a chance someone would be
> > running such configuration should be fairly small and a combination with
> > upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it
> > pretty much to zero. The issue would disappear if the ancient libvirt is
> > first upgraded to something older than 8.4.0 and then to the current
> > libvirt.
> >
> > Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
> > ---
> >
> > Notes:
> > I was hoping we could do this without any downside, but it appeared this
> > was not possible. In case someone still thinks this would be an issue, I
> > can take an alternative solution and check whether migration events are
> > enabled when reconnecting to QEMU monitor.
> >
> > src/qemu/qemu_migration_params.c | 26 ++++++++++++++------------
> > src/qemu/qemu_migration_params.h | 3 ++-
> > src/qemu/qemu_process.c | 14 +++++++++-----
> > 3 files changed, 25 insertions(+), 18 deletions(-)
> >
> > diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c
> > index 26754f03f8..95fd773645 100644
> > --- a/src/qemu/qemu_migration_params.c
> > +++ b/src/qemu/qemu_migration_params.c
> > @@ -1385,10 +1385,10 @@ qemuMigrationParamsParse(xmlXPathContextPtr ctxt,
> > int
> > qemuMigrationCapsCheck(virQEMUDriver *driver,
> > virDomainObj *vm,
> > - int asyncJob)
> > + int asyncJob,
> > + bool reconnect)
> > {
> > qemuDomainObjPrivate *priv = vm->privateData;
> > - g_autoptr(virBitmap) migEvent = NULL;
> > g_autoptr(virJSONValue) json = NULL;
> > g_auto(GStrv) caps = NULL;
> > char **capStr;
> > @@ -1419,22 +1419,24 @@ qemuMigrationCapsCheck(virQEMUDriver *driver,
> > }
> > }
> >
> > - migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
> > + if (!reconnect) {
> > + g_autoptr(virBitmap) migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
> >
> > - ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
> > + ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
>
> Here you assert the flag ...
>
> >
> > - if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
> > - return -1;
> > + if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
> > + return -1;
> >
> > - if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
> > - return -1;
> > + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
> > + return -1;
> >
> > - rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
> > + rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
>
>
> ... and ask the monitor to enable it. To me that looks like we're good
> even with older qemus, right?
This all happens only when !reconnect, that is when we're starting a
fresh QEMU. When reconnecting to an already running QEMU, we just expect
the capability was enabled when it was first started. Which is true for
any libvirt > 1.2.17 starting QEMU 3.1.0 and newer (and we won't even
reconnect to an older one).
Jirka
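
To illustrate the constraint above (a migration capability can only be
set while no migration is active): if we re-sent the command
unconditionally on reconnect and a migration happened to be running,
QEMU would refuse it. A sketch of the expected exchange; the exact
error text may differ between QEMU versions:

  {"execute": "migrate-set-capabilities",
   "arguments": {"capabilities": [{"capability": "events", "state": true}]}}
  {"error": {"class": "GenericError",
             "desc": "There's a migration process in progress"}}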
On Thu, May 12, 2022 at 12:40:54 +0200, Peter Krempa wrote:
> On Thu, May 12, 2022 at 10:55:47 +0200, Jiri Denemark wrote:
[....]
> Such that we simply could enable the events and be done with it?
[....]
> > Notes:
> > I was hoping we could do this without any downside, but it appeared this
> > was not possible. In case someone still thinks this would be an issue, I
> > can take an alternative solution and check whether migration events are
> > enabled when reconnecting to QEMU monitor.

Aaah, never mind. You want to avoid setting it all the time. Well I
think I would be okay with that, does our code handle the absence of
events properly?
On Thu, May 12, 2022 at 12:41:57 +0200, Peter Krempa wrote:
> On Thu, May 12, 2022 at 12:40:54 +0200, Peter Krempa wrote:
[....]
> Aaah, never mind. You want to avoid setting it all the time. Well I
> think I would be okay with that, does our code handle the absence of
> events properly?

I guess the migration would just hang waiting for an event that never
comes. But I guess it should be fairly easy to check whether events are
enabled before starting a migration and report an error instead of
hanging.

Jirka
On Thu, May 12, 2022 at 13:30:57 +0200, Jiri Denemark wrote:
> On Thu, May 12, 2022 at 12:41:57 +0200, Peter Krempa wrote:
[....]
> I guess the migration would just hang waiting for an event that never
> comes. But I guess it should be fairly easy to check whether events are
> enabled before starting a migration and report an error instead of
> hanging.

Leaving the VM in a hanging migration would be bad, but if we can just
refuse it I'd be okay with that.
On Thu, May 12, 2022 at 13:32:38 +0200, Peter Krempa wrote:
> On Thu, May 12, 2022 at 13:30:57 +0200, Jiri Denemark wrote:
> > On Thu, May 12, 2022 at 12:41:57 +0200, Peter Krempa wrote:
[....]
> > I guess the migration would just hang waiting for an event that never
> > comes. But I guess it should be fairly easy to check whether events are
> > enabled before starting a migration and report an error instead of
> > hanging.
>
> Leaving the VM in a hanging migration would be bad, but if we can just
> refuse it I'd be okay with that.

Actually, doing so would not make any sense. We could just as easily
enable the capability on reconnect if it's not enabled. Which is the
alternative solution I described above in the Notes section :-)

Jirka
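
The alternative solution from the Notes would amount to one extra round
trip on reconnect: query the current capability state and enable
"events" only when it is off. A sketch of that exchange; the return
list is trimmed to the relevant entry, real output lists every
capability:

  {"execute": "query-migrate-capabilities"}
  {"return": [{"state": false, "capability": "events"}]}

  {"execute": "migrate-set-capabilities",
   "arguments": {"capabilities": [{"capability": "events", "state": true}]}}
  {"return": {}}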
On Thu, May 12, 2022 at 14:07:10 +0200, Jiri Denemark wrote:
> On Thu, May 12, 2022 at 13:32:38 +0200, Peter Krempa wrote:
> > On Thu, May 12, 2022 at 13:30:57 +0200, Jiri Denemark wrote:
[....]
> > > I guess the migration would just hang waiting for an event that never
> > > comes. But I guess it should be fairly easy to check whether events are
> > > enabled before starting a migration and report an error instead of
> > > hanging.
> >
> > Leaving the VM in a hanging migration would be bad, but if we can just
> > refuse it I'd be okay with that.
>
> Actually, doing so would not make any sense. We could just as easily
> enable the capability on reconnect if it's not enabled. Which is the
> alternative solution I described above in the Notes section :-)

I think I'm also starting to lean towards the fact that "This can't
happen" and simply ignoring it ;)

I'm not sure if I gave my

Reviewed-by: Peter Krempa <pkrempa@redhat.com>

somewhere else in this thread.