Enable asynchronous teardown by default on S390 hosts and add tests for
asynchronous teardown autogeneration support.
On S390 hosts, Secure Execution guests can take a long time to shut
down because the memory cleanup is slow. Since there is no practical
way to determine whether an S390 guest is running in Secure Execution
mode, and since asynchronous teardown does not impact normal
(non-Secure-Execution) guests or guests without large memory
configurations, we enable asynchronous teardown by default on S390.
A user can override the default in the guest domain XML.
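For example, the autogenerated default could be turned off again with an
explicit feature element (illustrative snippet, based on the
<async-teardown/> syntax that appears in the test data below):

  <features>
    <async-teardown enabled='no'/>
  </features>

With the default in effect, the generated QEMU command line gains
"-run-with async-teardown=on", as reflected in the updated .args files.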
Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
---
src/qemu/qemu_domain.c | 19 +++++++++++
.../qemuhotplug-base-ccw-live+ccw-virtio.xml | 1 +
...with-2-ccw-virtio+ccw-virtio-1-reverse.xml | 1 +
...otplug-base-ccw-live-with-2-ccw-virtio.xml | 1 +
...-with-ccw-virtio+ccw-virtio-2-explicit.xml | 1 +
...-ccw-live-with-ccw-virtio+ccw-virtio-2.xml | 1 +
...uhotplug-base-ccw-live-with-ccw-virtio.xml | 1 +
.../qemuhotplug-base-ccw-live.xml | 1 +
.../balloon-ccw-deflate.s390x-latest.args | 1 +
.../console-sclp.s390x-latest.args | 1 +
.../console-virtio-ccw.s390x-latest.args | 1 +
.../cpu-s390-features.s390x-latest.args | 1 +
.../cpu-s390-zEC12.s390x-latest.args | 1 +
...default-video-type-s390x.s390x-latest.args | 1 +
.../disk-error-policy-s390x.s390x-latest.args | 1 +
.../disk-virtio-ccw-many.s390x-latest.args | 1 +
.../disk-virtio-ccw.s390x-latest.args | 1 +
.../disk-virtio-s390-zpci.s390x-latest.args | 1 +
.../fs9p-ccw.s390x-latest.args | 1 +
...tdev-scsi-vhost-scsi-ccw.s390x-latest.args | 1 +
...tdev-subsys-mdev-vfio-ap.s390x-latest.args | 1 +
...ubsys-mdev-vfio-ccw-boot.s390x-latest.args | 1 +
...dev-subsys-mdev-vfio-ccw.s390x-latest.args | 1 +
...o-zpci-autogenerate-fids.s390x-latest.args | 1 +
...o-zpci-autogenerate-uids.s390x-latest.args | 1 +
...v-vfio-zpci-autogenerate.s390x-latest.args | 1 +
...dev-vfio-zpci-boundaries.s390x-latest.args | 1 +
...vfio-zpci-ccw-memballoon.s390x-latest.args | 1 +
...io-zpci-multidomain-many.s390x-latest.args | 1 +
.../hostdev-vfio-zpci.s390x-latest.args | 1 +
.../input-virtio-ccw.s390x-latest.args | 1 +
...othreads-virtio-scsi-ccw.s390x-latest.args | 1 +
.../launch-security-s390-pv.s390x-latest.args | 1 +
...chine-aeskeywrap-off-cap.s390x-latest.args | 1 +
...hine-aeskeywrap-off-caps.s390x-latest.args | 1 +
...achine-aeskeywrap-on-cap.s390x-latest.args | 1 +
...chine-aeskeywrap-on-caps.s390x-latest.args | 1 +
...chine-deakeywrap-off-cap.s390x-latest.args | 1 +
...hine-deakeywrap-off-caps.s390x-latest.args | 1 +
...achine-deakeywrap-on-cap.s390x-latest.args | 1 +
...chine-deakeywrap-on-caps.s390x-latest.args | 1 +
...achine-keywrap-none-caps.s390x-latest.args | 1 +
.../machine-keywrap-none.s390x-latest.args | 1 +
...machine-loadparm-hostdev.s390x-latest.args | 1 +
...multiple-disks-nets-s390.s390x-latest.args | 1 +
...achine-loadparm-net-s390.s390x-latest.args | 1 +
.../machine-loadparm-s390.s390x-latest.args | 1 +
.../net-virtio-ccw.s390x-latest.args | 1 +
...low-bogus-usb-controller.s390x-latest.args | 1 +
...390-allow-bogus-usb-none.s390x-latest.args | 1 +
...t-cpu-kvm-ccw-virtio-2.7.s390x-latest.args | 1 +
...t-cpu-kvm-ccw-virtio-4.2.s390x-latest.args | 1 +
...t-cpu-tcg-ccw-virtio-2.7.s390x-latest.args | 1 +
...t-cpu-tcg-ccw-virtio-4.2.s390x-latest.args | 1 +
...no-async-teardown-autogen.s390x-6.0.0.args | 32 ++++++++++++++++++
...o-async-teardown-autogen.s390x-latest.args | 33 +++++++++++++++++++
.../s390-no-async-teardown-autogen.xml | 18 ++++++++++
.../s390-panic-missing.s390x-latest.args | 1 +
.../s390-panic-no-address.s390x-latest.args | 1 +
.../s390-serial-2.s390x-latest.args | 1 +
.../s390-serial-console.s390x-latest.args | 1 +
.../s390-serial.s390x-latest.args | 1 +
.../s390x-ccw-graphics.s390x-latest.args | 1 +
.../s390x-ccw-headless.s390x-latest.args | 1 +
.../vhost-vsock-ccw-auto.s390x-latest.args | 1 +
.../vhost-vsock-ccw-iommu.s390x-latest.args | 1 +
.../vhost-vsock-ccw-iommu.xml | 3 ++
.../vhost-vsock-ccw.s390x-latest.args | 1 +
.../video-virtio-gpu-ccw.s390x-latest.args | 1 +
.../virtio-rng-ccw.s390x-latest.args | 1 +
.../watchdog-diag288.s390x-latest.args | 1 +
tests/qemuxml2argvtest.c | 2 ++
.../default-video-type-s390x.s390x-latest.xml | 3 ++
.../disk-virtio-s390-zpci.s390x-latest.xml | 3 ++
...stdev-scsi-vhost-scsi-ccw.s390x-latest.xml | 3 ++
...stdev-subsys-mdev-vfio-ap.s390x-latest.xml | 3 ++
...subsys-mdev-vfio-ccw-boot.s390x-latest.xml | 3 ++
...tdev-subsys-mdev-vfio-ccw.s390x-latest.xml | 3 ++
...io-zpci-autogenerate-fids.s390x-latest.xml | 3 ++
...io-zpci-autogenerate-uids.s390x-latest.xml | 3 ++
...ev-vfio-zpci-autogenerate.s390x-latest.xml | 3 ++
...tdev-vfio-zpci-boundaries.s390x-latest.xml | 3 ++
...-vfio-zpci-ccw-memballoon.s390x-latest.xml | 3 ++
...fio-zpci-multidomain-many.s390x-latest.xml | 3 ++
.../hostdev-vfio-zpci.s390x-latest.xml | 3 ++
.../input-virtio-ccw.s390x-latest.xml | 3 ++
...iothreads-disk-virtio-ccw.s390x-latest.xml | 3 ++
...iothreads-virtio-scsi-ccw.s390x-latest.xml | 3 ++
.../machine-loadparm-hostdev.s390x-latest.xml | 3 ++
...-multiple-disks-nets-s390.s390x-latest.xml | 3 ++
...lt-cpu-kvm-ccw-virtio-2.7.s390x-latest.xml | 3 ++
...lt-cpu-kvm-ccw-virtio-4.2.s390x-latest.xml | 3 ++
...lt-cpu-tcg-ccw-virtio-2.7.s390x-latest.xml | 3 ++
...lt-cpu-tcg-ccw-virtio-4.2.s390x-latest.xml | 3 ++
.../s390-defaultconsole.s390x-latest.xml | 3 ++
...-no-async-teardown-autogen.s390x-6.0.0.xml | 25 ++++++++++++++
...no-async-teardown-autogen.s390x-latest.xml | 28 ++++++++++++++++
.../s390-panic-missing.s390x-latest.xml | 3 ++
.../s390-panic-no-address.s390x-latest.xml | 3 ++
.../s390-panic.s390x-latest.xml | 3 ++
.../s390-serial-2.s390x-latest.xml | 3 ++
.../s390-serial-console.s390x-latest.xml | 3 ++
.../s390-serial.s390x-latest.xml | 3 ++
.../s390x-ccw-graphics.s390x-latest.xml | 3 ++
.../s390x-ccw-headless.s390x-latest.xml | 3 ++
.../vhost-vsock-ccw-auto.s390x-latest.xml | 3 ++
.../vhost-vsock-ccw.s390x-latest.xml | 3 ++
...video-virtio-gpu-ccw-auto.s390x-latest.xml | 3 ++
.../video-virtio-gpu-ccw.s390x-latest.xml | 3 ++
tests/qemuxml2xmltest.c | 2 ++
110 files changed, 333 insertions(+)
create mode 100644 tests/qemuxml2argvdata/s390-no-async-teardown-autogen.s390x-6.0.0.args
create mode 100644 tests/qemuxml2argvdata/s390-no-async-teardown-autogen.s390x-latest.args
create mode 100644 tests/qemuxml2argvdata/s390-no-async-teardown-autogen.xml
create mode 100644 tests/qemuxml2xmloutdata/s390-no-async-teardown-autogen.s390x-6.0.0.xml
create mode 100644 tests/qemuxml2xmloutdata/s390-no-async-teardown-autogen.s390x-latest.xml
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 94587638c3..884f1599b4 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -4402,6 +4402,18 @@ qemuDomainDefEnableDefaultFeatures(virDomainDef *def,
* capabilities, we still want to enable this */
def->features[VIR_DOMAIN_FEATURE_GIC] = VIR_TRISTATE_SWITCH_ON;
}
+
+ /* Enable asynchronous teardown by default on S390 hosts: Secure
+ * Execution guests can take a long time to shut down, since the
+ * memory cleanup can be slow. There is no practical way to determine
+ * whether an S390 guest is running in Secure Execution mode, and
+ * asynchronous teardown does not impact normal (non-Secure-Execution)
+ * guests or guests without large memory configurations. */
+ if (ARCH_IS_S390(def->os.arch) &&
+ virQEMUCapsGet(qemuCaps, QEMU_CAPS_RUN_WITH_ASYNC_TEARDOWN) &&
+ def->features[VIR_DOMAIN_FEATURE_ASYNC_TEARDOWN] == VIR_TRISTATE_BOOL_ABSENT)
+ def->features[VIR_DOMAIN_FEATURE_ASYNC_TEARDOWN] = VIR_TRISTATE_BOOL_YES;
+
}
@@ -6694,6 +6706,13 @@ qemuDomainDefFormatBufInternal(virQEMUDriver *driver,
}
}
}
+
+ /* Remove the asynchronous teardown enablement on S390 for backwards
+ * compatibility, as it is autogenerated there anyway when supported.
+ */
+ if (ARCH_IS_S390(def->os.arch) &&
+ def->features[VIR_DOMAIN_FEATURE_ASYNC_TEARDOWN] != VIR_TRISTATE_BOOL_YES)
+ def->features[VIR_DOMAIN_FEATURE_ASYNC_TEARDOWN] = VIR_TRISTATE_BOOL_ABSENT;
}
format:
diff --git a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live+ccw-virtio.xml b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live+ccw-virtio.xml
index 6e879ded86..368e3059c8 100644
--- a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live+ccw-virtio.xml
+++ b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live+ccw-virtio.xml
@@ -11,6 +11,7 @@
<features>
<apic/>
<pae/>
+ <async-teardown enabled='yes'/>
</features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
diff --git a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-2-ccw-virtio+ccw-virtio-1-reverse.xml b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-2-ccw-virtio+ccw-virtio-1-reverse.xml
index 9b16951e46..4d7132b012 100644
--- a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-2-ccw-virtio+ccw-virtio-1-reverse.xml
+++ b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-2-ccw-virtio+ccw-virtio-1-reverse.xml
@@ -11,6 +11,7 @@
<features>
<apic/>
<pae/>
+ <async-teardown enabled='yes'/>
</features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
diff --git a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-2-ccw-virtio.xml b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-2-ccw-virtio.xml
index b5292a7ed2..8cb615e28a 100644
--- a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-2-ccw-virtio.xml
+++ b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-2-ccw-virtio.xml
@@ -11,6 +11,7 @@
<features>
<apic/>
<pae/>
+ <async-teardown enabled='yes'/>
</features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
diff --git a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio+ccw-virtio-2-explicit.xml b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio+ccw-virtio-2-explicit.xml
index f37868101c..751bb86eba 100644
--- a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio+ccw-virtio-2-explicit.xml
+++ b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio+ccw-virtio-2-explicit.xml
@@ -11,6 +11,7 @@
<features>
<apic/>
<pae/>
+ <async-teardown enabled='yes'/>
</features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
diff --git a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio+ccw-virtio-2.xml b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio+ccw-virtio-2.xml
index f37868101c..751bb86eba 100644
--- a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio+ccw-virtio-2.xml
+++ b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio+ccw-virtio-2.xml
@@ -11,6 +11,7 @@
<features>
<apic/>
<pae/>
+ <async-teardown enabled='yes'/>
</features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
diff --git a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio.xml b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio.xml
index 42f89a07a2..6119894ce3 100644
--- a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio.xml
+++ b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live-with-ccw-virtio.xml
@@ -11,6 +11,7 @@
<features>
<apic/>
<pae/>
+ <async-teardown enabled='yes'/>
</features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
diff --git a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live.xml b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live.xml
index f0570b5cf4..ffc85115a7 100644
--- a/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live.xml
+++ b/tests/qemuhotplugtestdomains/qemuhotplug-base-ccw-live.xml
@@ -11,6 +11,7 @@
<features>
<apic/>
<pae/>
+ <async-teardown enabled='yes'/>
</features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
diff --git a/tests/qemuxml2argvdata/balloon-ccw-deflate.s390x-latest.args b/tests/qemuxml2argvdata/balloon-ccw-deflate.s390x-latest.args
index 8a993c1d64..1535348df7 100644
--- a/tests/qemuxml2argvdata/balloon-ccw-deflate.s390x-latest.args
+++ b/tests/qemuxml2argvdata/balloon-ccw-deflate.s390x-latest.args
@@ -29,4 +29,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","deflate-on-oom":true,"devno":"fe.0.000a"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/console-sclp.s390x-latest.args b/tests/qemuxml2argvdata/console-sclp.s390x-latest.args
index 3e0456b4a2..e6d1bd340c 100644
--- a/tests/qemuxml2argvdata/console-sclp.s390x-latest.args
+++ b/tests/qemuxml2argvdata/console-sclp.s390x-latest.args
@@ -34,4 +34,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/console-virtio-ccw.s390x-latest.args b/tests/qemuxml2argvdata/console-virtio-ccw.s390x-latest.args
index 7077028dbd..34c7ff8395 100644
--- a/tests/qemuxml2argvdata/console-virtio-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/console-virtio-ccw.s390x-latest.args
@@ -35,4 +35,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.000a"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/cpu-s390-features.s390x-latest.args b/tests/qemuxml2argvdata/cpu-s390-features.s390x-latest.args
index 6a95997ff5..75b31121b3 100644
--- a/tests/qemuxml2argvdata/cpu-s390-features.s390x-latest.args
+++ b/tests/qemuxml2argvdata/cpu-s390-features.s390x-latest.args
@@ -28,4 +28,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-guest1/.config \
-boot strict=on \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/cpu-s390-zEC12.s390x-latest.args b/tests/qemuxml2argvdata/cpu-s390-zEC12.s390x-latest.args
index c47ad9e17c..77272d1347 100644
--- a/tests/qemuxml2argvdata/cpu-s390-zEC12.s390x-latest.args
+++ b/tests/qemuxml2argvdata/cpu-s390-zEC12.s390x-latest.args
@@ -28,4 +28,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-guest1/.config \
-boot strict=on \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/default-video-type-s390x.s390x-latest.args b/tests/qemuxml2argvdata/default-video-type-s390x.s390x-latest.args
index e6438482a3..93fd512188 100644
--- a/tests/qemuxml2argvdata/default-video-type-s390x.s390x-latest.args
+++ b/tests/qemuxml2argvdata/default-video-type-s390x.s390x-latest.args
@@ -29,4 +29,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-default-video-type-s/.config \
-vnc 127.0.0.1:0,audiodev=audio1 \
-device '{"driver":"virtio-gpu-ccw","id":"video0","max_outputs":1,"devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/disk-error-policy-s390x.s390x-latest.args b/tests/qemuxml2argvdata/disk-error-policy-s390x.s390x-latest.args
index c023ff8903..c9f8332842 100644
--- a/tests/qemuxml2argvdata/disk-error-policy-s390x.s390x-latest.args
+++ b/tests/qemuxml2argvdata/disk-error-policy-s390x.s390x-latest.args
@@ -37,4 +37,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-guest/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0002","drive":"libvirt-1-format","id":"virtio-disk2","write-cache":"on","werror":"report","rerror":"ignore"}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/disk-virtio-ccw-many.s390x-latest.args b/tests/qemuxml2argvdata/disk-virtio-ccw-many.s390x-latest.args
index 47f485bab0..ca350475db 100644
--- a/tests/qemuxml2argvdata/disk-virtio-ccw-many.s390x-latest.args
+++ b/tests/qemuxml2argvdata/disk-virtio-ccw-many.s390x-latest.args
@@ -41,4 +41,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.000a"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/disk-virtio-ccw.s390x-latest.args b/tests/qemuxml2argvdata/disk-virtio-ccw.s390x-latest.args
index 5456a25c8f..9d9f4f64a4 100644
--- a/tests/qemuxml2argvdata/disk-virtio-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/disk-virtio-ccw.s390x-latest.args
@@ -35,4 +35,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.000a"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/disk-virtio-s390-zpci.s390x-latest.args b/tests/qemuxml2argvdata/disk-virtio-s390-zpci.s390x-latest.args
index 3a8bf53390..7bde0babdc 100644
--- a/tests/qemuxml2argvdata/disk-virtio-s390-zpci.s390x-latest.args
+++ b/tests/qemuxml2argvdata/disk-virtio-s390-zpci.s390x-latest.args
@@ -33,4 +33,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/fs9p-ccw.s390x-latest.args b/tests/qemuxml2argvdata/fs9p-ccw.s390x-latest.args
index 2fb3203b9c..e7e328b9c2 100644
--- a/tests/qemuxml2argvdata/fs9p-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/fs9p-ccw.s390x-latest.args
@@ -38,4 +38,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0004"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-scsi-vhost-scsi-ccw.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-scsi-vhost-scsi-ccw.s390x-latest.args
index 1668c6634d..03a986cc10 100644
--- a/tests/qemuxml2argvdata/hostdev-scsi-vhost-scsi-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-scsi-vhost-scsi-ccw.s390x-latest.args
@@ -34,4 +34,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest2/.config \
-device '{"driver":"vhost-scsi-ccw","wwpn":"naa.5123456789abcde0","vhostfd":"3","id":"hostdev0","devno":"fe.0.0002"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0003"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ap.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ap.s390x-latest.args
index 880265bb03..03137e3977 100644
--- a/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ap.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ap.s390x-latest.args
@@ -30,4 +30,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-ap","id":"hostdev0","sysfsdev":"/sys/bus/mdev/devices/90c6c135-ad44-41d0-b1b7-bae47de48627"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ccw-boot.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ccw-boot.s390x-latest.args
index aeb07a9bcb..93c185e821 100644
--- a/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ccw-boot.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ccw-boot.s390x-latest.args
@@ -30,4 +30,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-ccw","id":"hostdev0","sysfsdev":"/sys/bus/mdev/devices/90c6c135-ad44-41d0-b1b7-bae47de48627","bootindex":1,"devno":"fe.0.0000"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ccw.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ccw.s390x-latest.args
index 01b182d44e..9a7547fb0a 100644
--- a/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-subsys-mdev-vfio-ccw.s390x-latest.args
@@ -30,4 +30,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-ccw","id":"hostdev0","sysfsdev":"/sys/bus/mdev/devices/90c6c135-ad44-41d0-b1b7-bae47de48627","devno":"fe.0.0000"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-fids.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-fids.s390x-latest.args
index d0355d04b8..4a1a8090bc 100644
--- a/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-fids.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-fids.s390x-latest.args
@@ -33,4 +33,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-pci","host":"0001:00:00.0","id":"hostdev1","bus":"pci.0","addr":"0x2"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-uids.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-uids.s390x-latest.args
index 6754899fe4..cb036ea564 100644
--- a/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-uids.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-uids.s390x-latest.args
@@ -33,4 +33,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-pci","host":"0000:00:01.0","id":"hostdev1","bus":"pci.0","addr":"0x2"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate.s390x-latest.args
index 2c142c1e5a..aa734c7b41 100644
--- a/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-pci","host":"0000:00:00.0","id":"hostdev0","bus":"pci.0","addr":"0x1"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-vfio-zpci-boundaries.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-vfio-zpci-boundaries.s390x-latest.args
index 3bf6534c36..dec6dde157 100644
--- a/tests/qemuxml2argvdata/hostdev-vfio-zpci-boundaries.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-vfio-zpci-boundaries.s390x-latest.args
@@ -35,4 +35,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-pci","host":"0000:00:00.0","id":"hostdev1","bus":"pci.0","addr":"0x2"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-vfio-zpci-ccw-memballoon.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-vfio-zpci-ccw-memballoon.s390x-latest.args
index 58e8ae95f5..01f867bfd8 100644
--- a/tests/qemuxml2argvdata/hostdev-vfio-zpci-ccw-memballoon.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-vfio-zpci-ccw-memballoon.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-KVMGuest1/.config \
-device '{"driver":"vfio-pci","host":"0000:00:00.0","id":"hostdev0","bus":"pci.0","addr":"0x1"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-vfio-zpci-multidomain-many.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-vfio-zpci-multidomain-many.s390x-latest.args
index 43b861b65c..10fa5754cc 100644
--- a/tests/qemuxml2argvdata/hostdev-vfio-zpci-multidomain-many.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-vfio-zpci-multidomain-many.s390x-latest.args
@@ -45,4 +45,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-pci","host":"0008:00:00.0","id":"hostdev7","bus":"pci.0","addr":"0x6"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/hostdev-vfio-zpci.s390x-latest.args b/tests/qemuxml2argvdata/hostdev-vfio-zpci.s390x-latest.args
index 852fe0206a..016a4aef8c 100644
--- a/tests/qemuxml2argvdata/hostdev-vfio-zpci.s390x-latest.args
+++ b/tests/qemuxml2argvdata/hostdev-vfio-zpci.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-pci","host":"0000:00:00.0","id":"hostdev0","bus":"pci.0","addr":"0x8"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/input-virtio-ccw.s390x-latest.args b/tests/qemuxml2argvdata/input-virtio-ccw.s390x-latest.args
index 7cf73299f6..3aeef58a61 100644
--- a/tests/qemuxml2argvdata/input-virtio-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/input-virtio-ccw.s390x-latest.args
@@ -35,4 +35,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/iothreads-virtio-scsi-ccw.s390x-latest.args b/tests/qemuxml2argvdata/iothreads-virtio-scsi-ccw.s390x-latest.args
index ed7971d632..1081953e80 100644
--- a/tests/qemuxml2argvdata/iothreads-virtio-scsi-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/iothreads-virtio-scsi-ccw.s390x-latest.args
@@ -38,4 +38,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.000a"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/launch-security-s390-pv.s390x-latest.args b/tests/qemuxml2argvdata/launch-security-s390-pv.s390x-latest.args
index 5c8cf9eeec..7cfa2d0b3e 100644
--- a/tests/qemuxml2argvdata/launch-security-s390-pv.s390x-latest.args
+++ b/tests/qemuxml2argvdata/launch-security-s390-pv.s390x-latest.args
@@ -33,4 +33,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-object '{"qom-type":"s390-pv-guest","id":"lsec0"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-aeskeywrap-off-cap.s390x-latest.args b/tests/qemuxml2argvdata/machine-aeskeywrap-off-cap.s390x-latest.args
index de274c6336..8f7d72fa85 100644
--- a/tests/qemuxml2argvdata/machine-aeskeywrap-off-cap.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-aeskeywrap-off-cap.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-aeskeywrap-off-caps.s390x-latest.args b/tests/qemuxml2argvdata/machine-aeskeywrap-off-caps.s390x-latest.args
index de274c6336..8f7d72fa85 100644
--- a/tests/qemuxml2argvdata/machine-aeskeywrap-off-caps.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-aeskeywrap-off-caps.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-aeskeywrap-on-cap.s390x-latest.args b/tests/qemuxml2argvdata/machine-aeskeywrap-on-cap.s390x-latest.args
index fb9b8fdc7a..6bd21d6c8d 100644
--- a/tests/qemuxml2argvdata/machine-aeskeywrap-on-cap.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-aeskeywrap-on-cap.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-aeskeywrap-on-caps.s390x-latest.args b/tests/qemuxml2argvdata/machine-aeskeywrap-on-caps.s390x-latest.args
index fb9b8fdc7a..6bd21d6c8d 100644
--- a/tests/qemuxml2argvdata/machine-aeskeywrap-on-caps.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-aeskeywrap-on-caps.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-deakeywrap-off-cap.s390x-latest.args b/tests/qemuxml2argvdata/machine-deakeywrap-off-cap.s390x-latest.args
index 4ffb2f3609..7040889685 100644
--- a/tests/qemuxml2argvdata/machine-deakeywrap-off-cap.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-deakeywrap-off-cap.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-deakeywrap-off-caps.s390x-latest.args b/tests/qemuxml2argvdata/machine-deakeywrap-off-caps.s390x-latest.args
index 4ffb2f3609..7040889685 100644
--- a/tests/qemuxml2argvdata/machine-deakeywrap-off-caps.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-deakeywrap-off-caps.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-deakeywrap-on-cap.s390x-latest.args b/tests/qemuxml2argvdata/machine-deakeywrap-on-cap.s390x-latest.args
index bb79e9e886..bd4b8d2c7c 100644
--- a/tests/qemuxml2argvdata/machine-deakeywrap-on-cap.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-deakeywrap-on-cap.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-deakeywrap-on-caps.s390x-latest.args b/tests/qemuxml2argvdata/machine-deakeywrap-on-caps.s390x-latest.args
index bb79e9e886..bd4b8d2c7c 100644
--- a/tests/qemuxml2argvdata/machine-deakeywrap-on-caps.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-deakeywrap-on-caps.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-keywrap-none-caps.s390x-latest.args b/tests/qemuxml2argvdata/machine-keywrap-none-caps.s390x-latest.args
index 516768833a..ab29708a83 100644
--- a/tests/qemuxml2argvdata/machine-keywrap-none-caps.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-keywrap-none-caps.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-keywrap-none.s390x-latest.args b/tests/qemuxml2argvdata/machine-keywrap-none.s390x-latest.args
index 516768833a..ab29708a83 100644
--- a/tests/qemuxml2argvdata/machine-keywrap-none.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-keywrap-none.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-loadparm-hostdev.s390x-latest.args b/tests/qemuxml2argvdata/machine-loadparm-hostdev.s390x-latest.args
index 3580db8e21..21ac5590b4 100644
--- a/tests/qemuxml2argvdata/machine-loadparm-hostdev.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-loadparm-hostdev.s390x-latest.args
@@ -30,4 +30,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"vfio-ccw","id":"hostdev0","sysfsdev":"/sys/bus/mdev/devices/90c6c135-ad44-41d0-b1b7-bae47de48627","bootindex":1,"devno":"fe.0.0000"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-loadparm-multiple-disks-nets-s390.s390x-latest.args b/tests/qemuxml2argvdata/machine-loadparm-multiple-disks-nets-s390.s390x-latest.args
index 1e651e7870..071d1037fe 100644
--- a/tests/qemuxml2argvdata/machine-loadparm-multiple-disks-nets-s390.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-loadparm-multiple-disks-nets-s390.s390x-latest.args
@@ -39,4 +39,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-loadparm-net-s390.s390x-latest.args b/tests/qemuxml2argvdata/machine-loadparm-net-s390.s390x-latest.args
index bdd2782f5a..2c183b736b 100644
--- a/tests/qemuxml2argvdata/machine-loadparm-net-s390.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-loadparm-net-s390.s390x-latest.args
@@ -31,4 +31,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/machine-loadparm-s390.s390x-latest.args b/tests/qemuxml2argvdata/machine-loadparm-s390.s390x-latest.args
index b11d958117..1a7a00d0de 100644
--- a/tests/qemuxml2argvdata/machine-loadparm-s390.s390x-latest.args
+++ b/tests/qemuxml2argvdata/machine-loadparm-s390.s390x-latest.args
@@ -32,4 +32,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/net-virtio-ccw.s390x-latest.args b/tests/qemuxml2argvdata/net-virtio-ccw.s390x-latest.args
index 891d755501..507879ba75 100644
--- a/tests/qemuxml2argvdata/net-virtio-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/net-virtio-ccw.s390x-latest.args
@@ -33,4 +33,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.000a"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-allow-bogus-usb-controller.s390x-latest.args b/tests/qemuxml2argvdata/s390-allow-bogus-usb-controller.s390x-latest.args
index 1e7eaacad0..b2f5a36057 100644
--- a/tests/qemuxml2argvdata/s390-allow-bogus-usb-controller.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-allow-bogus-usb-controller.s390x-latest.args
@@ -37,4 +37,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-test/.config \
-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/hwrng"}' \
-device '{"driver":"virtio-rng-ccw","rng":"objrng0","id":"rng0","devno":"fe.0.0003"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-allow-bogus-usb-none.s390x-latest.args b/tests/qemuxml2argvdata/s390-allow-bogus-usb-none.s390x-latest.args
index 1e7eaacad0..b2f5a36057 100644
--- a/tests/qemuxml2argvdata/s390-allow-bogus-usb-none.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-allow-bogus-usb-none.s390x-latest.args
@@ -37,4 +37,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-test/.config \
-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/hwrng"}' \
-device '{"driver":"virtio-rng-ccw","rng":"objrng0","id":"rng0","devno":"fe.0.0003"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-default-cpu-kvm-ccw-virtio-2.7.s390x-latest.args b/tests/qemuxml2argvdata/s390-default-cpu-kvm-ccw-virtio-2.7.s390x-latest.args
index 0d44697425..1de56f1df5 100644
--- a/tests/qemuxml2argvdata/s390-default-cpu-kvm-ccw-virtio-2.7.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-default-cpu-kvm-ccw-virtio-2.7.s390x-latest.args
@@ -29,4 +29,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-test/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-default-cpu-kvm-ccw-virtio-4.2.s390x-latest.args b/tests/qemuxml2argvdata/s390-default-cpu-kvm-ccw-virtio-4.2.s390x-latest.args
index 7f70323720..f0142f2baf 100644
--- a/tests/qemuxml2argvdata/s390-default-cpu-kvm-ccw-virtio-4.2.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-default-cpu-kvm-ccw-virtio-4.2.s390x-latest.args
@@ -29,4 +29,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-test/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-default-cpu-tcg-ccw-virtio-2.7.s390x-latest.args b/tests/qemuxml2argvdata/s390-default-cpu-tcg-ccw-virtio-2.7.s390x-latest.args
index 06b3f5733e..6f347fae3c 100644
--- a/tests/qemuxml2argvdata/s390-default-cpu-tcg-ccw-virtio-2.7.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-default-cpu-tcg-ccw-virtio-2.7.s390x-latest.args
@@ -29,4 +29,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-test/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-default-cpu-tcg-ccw-virtio-4.2.s390x-latest.args b/tests/qemuxml2argvdata/s390-default-cpu-tcg-ccw-virtio-4.2.s390x-latest.args
index 61e38d908b..50c1297be9 100644
--- a/tests/qemuxml2argvdata/s390-default-cpu-tcg-ccw-virtio-4.2.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-default-cpu-tcg-ccw-virtio-4.2.s390x-latest.args
@@ -29,4 +29,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-test/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-no-async-teardown-autogen.s390x-6.0.0.args b/tests/qemuxml2argvdata/s390-no-async-teardown-autogen.s390x-6.0.0.args
new file mode 100644
index 0000000000..1505b7cd78
--- /dev/null
+++ b/tests/qemuxml2argvdata/s390-no-async-teardown-autogen.s390x-6.0.0.args
@@ -0,0 +1,32 @@
+LC_ALL=C \
+PATH=/bin \
+HOME=/var/lib/libvirt/qemu/domain--1-test \
+USER=test \
+LOGNAME=test \
+XDG_DATA_HOME=/var/lib/libvirt/qemu/domain--1-test/.local/share \
+XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain--1-test/.cache \
+XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-test/.config \
+/usr/bin/qemu-system-s390x \
+-name guest=test,debug-threads=on \
+-S \
+-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain--1-test/master-key.aes"}' \
+-machine s390-ccw-virtio-6.0,usb=off,dump-guest-core=off,memory-backend=s390.ram \
+-accel kvm \
+-cpu gen15a-base,aen=on,cmmnt=on,vxpdeh=on,aefsi=on,diag318=on,csske=on,mepoch=on,msa9=on,msa8=on,msa7=on,msa6=on,msa5=on,msa4=on,msa3=on,msa2=on,msa1=on,sthyi=on,edat=on,ri=on,deflate=on,edat2=on,etoken=on,vx=on,ipter=on,mepochptff=on,ap=on,vxeh=on,vxpd=on,esop=on,msa9_pckmo=on,vxeh2=on,esort=on,apqi=on,apft=on,els=on,iep=on,apqci=on,cte=on,ais=on,bpb=on,gs=on,ppa15=on,zpci=on,sea_esop2=on,te=on,cmm=on \
+-m size=262144k \
+-object '{"qom-type":"memory-backend-ram","id":"s390.ram","size":268435456}' \
+-overcommit mem-lock=off \
+-smp 1,sockets=1,cores=1,threads=1 \
+-uuid 9aa4b45c-b9dd-45ef-91fe-862b27b4231f \
+-display none \
+-no-user-config \
+-nodefaults \
+-chardev socket,id=charmonitor,fd=1729,server=on,wait=off \
+-mon chardev=charmonitor,id=monitor,mode=control \
+-rtc base=utc \
+-no-shutdown \
+-boot strict=on \
+-audiodev '{"id":"audio1","driver":"none"}' \
+-device virtio-balloon-ccw,id=balloon0,devno=fe.0.0000 \
+-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-no-async-teardown-autogen.s390x-latest.args b/tests/qemuxml2argvdata/s390-no-async-teardown-autogen.s390x-latest.args
new file mode 100644
index 0000000000..3d15dec9cc
--- /dev/null
+++ b/tests/qemuxml2argvdata/s390-no-async-teardown-autogen.s390x-latest.args
@@ -0,0 +1,33 @@
+LC_ALL=C \
+PATH=/bin \
+HOME=/var/lib/libvirt/qemu/domain--1-test \
+USER=test \
+LOGNAME=test \
+XDG_DATA_HOME=/var/lib/libvirt/qemu/domain--1-test/.local/share \
+XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain--1-test/.cache \
+XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-test/.config \
+/usr/bin/qemu-system-s390x \
+-name guest=test,debug-threads=on \
+-S \
+-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain--1-test/master-key.aes"}' \
+-machine s390-ccw-virtio,usb=off,dump-guest-core=off,memory-backend=s390.ram \
+-accel kvm \
+-cpu gen16a-base,nnpa=on,aen=on,cmmnt=on,vxpdeh=on,aefsi=on,diag318=on,csske=on,mepoch=on,msa9=on,msa8=on,msa7=on,msa6=on,msa5=on,msa4=on,msa3=on,msa2=on,msa1=on,sthyi=on,edat=on,ri=on,deflate=on,edat2=on,etoken=on,vx=on,ipter=on,pai=on,paie=on,mepochptff=on,ap=on,vxeh=on,vxpd=on,esop=on,msa9_pckmo=on,vxeh2=on,esort=on,apqi=on,apft=on,els=on,iep=on,apqci=on,cte=on,ais=on,bpb=on,gs=on,ppa15=on,zpci=on,rdp=on,sea_esop2=on,beareh=on,te=on,cmm=on,vxpdeh2=on \
+-m size=262144k \
+-object '{"qom-type":"memory-backend-ram","id":"s390.ram","size":268435456}' \
+-overcommit mem-lock=off \
+-smp 1,sockets=1,cores=1,threads=1 \
+-uuid 9aa4b45c-b9dd-45ef-91fe-862b27b4231f \
+-display none \
+-no-user-config \
+-nodefaults \
+-chardev socket,id=charmonitor,fd=1729,server=on,wait=off \
+-mon chardev=charmonitor,id=monitor,mode=control \
+-rtc base=utc \
+-no-shutdown \
+-boot strict=on \
+-audiodev '{"id":"audio1","driver":"none"}' \
+-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0000"}' \
+-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
+-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-no-async-teardown-autogen.xml b/tests/qemuxml2argvdata/s390-no-async-teardown-autogen.xml
new file mode 100644
index 0000000000..e8e76cb372
--- /dev/null
+++ b/tests/qemuxml2argvdata/s390-no-async-teardown-autogen.xml
@@ -0,0 +1,18 @@
+<domain type='kvm'>
+ <name>test</name>
+ <uuid>9aa4b45c-b9dd-45ef-91fe-862b27b4231f</uuid>
+ <memory unit='KiB'>262144</memory>
+ <currentMemory unit='KiB'>262144</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu-system-s390x</emulator>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2argvdata/s390-panic-missing.s390x-latest.args b/tests/qemuxml2argvdata/s390-panic-missing.s390x-latest.args
index a2d6a10038..8a33057a93 100644
--- a/tests/qemuxml2argvdata/s390-panic-missing.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-panic-missing.s390x-latest.args
@@ -32,4 +32,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0002"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-panic-no-address.s390x-latest.args b/tests/qemuxml2argvdata/s390-panic-no-address.s390x-latest.args
index 7f7dedfa2b..cc7866499f 100644
--- a/tests/qemuxml2argvdata/s390-panic-no-address.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-panic-no-address.s390x-latest.args
@@ -32,4 +32,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-serial-2.s390x-latest.args b/tests/qemuxml2argvdata/s390-serial-2.s390x-latest.args
index 3d57c421d6..07c9e24e43 100644
--- a/tests/qemuxml2argvdata/s390-serial-2.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-serial-2.s390x-latest.args
@@ -32,4 +32,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"sclplmconsole","chardev":"charserial1","id":"serial1"}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-serial-console.s390x-latest.args b/tests/qemuxml2argvdata/s390-serial-console.s390x-latest.args
index 8ee435d467..514865917b 100644
--- a/tests/qemuxml2argvdata/s390-serial-console.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-serial-console.s390x-latest.args
@@ -30,4 +30,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"sclpconsole","chardev":"charserial0","id":"serial0"}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390-serial.s390x-latest.args b/tests/qemuxml2argvdata/s390-serial.s390x-latest.args
index 8ee435d467..514865917b 100644
--- a/tests/qemuxml2argvdata/s390-serial.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390-serial.s390x-latest.args
@@ -30,4 +30,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"sclpconsole","chardev":"charserial0","id":"serial0"}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390x-ccw-graphics.s390x-latest.args b/tests/qemuxml2argvdata/s390x-ccw-graphics.s390x-latest.args
index d80f459d12..3c9938e63c 100644
--- a/tests/qemuxml2argvdata/s390x-ccw-graphics.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390x-ccw-graphics.s390x-latest.args
@@ -44,4 +44,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-guest/.config \
-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' \
-device '{"driver":"virtio-rng-ccw","rng":"objrng0","id":"rng0","devno":"fe.0.0007"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/s390x-ccw-headless.s390x-latest.args b/tests/qemuxml2argvdata/s390x-ccw-headless.s390x-latest.args
index b39b36db1e..d1f2efcd3e 100644
--- a/tests/qemuxml2argvdata/s390x-ccw-headless.s390x-latest.args
+++ b/tests/qemuxml2argvdata/s390x-ccw-headless.s390x-latest.args
@@ -41,4 +41,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-guest/.config \
-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' \
-device '{"driver":"virtio-rng-ccw","rng":"objrng0","id":"rng0","devno":"fe.0.0004"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/vhost-vsock-ccw-auto.s390x-latest.args b/tests/qemuxml2argvdata/vhost-vsock-ccw-auto.s390x-latest.args
index 928686ebac..12758ceb8a 100644
--- a/tests/qemuxml2argvdata/vhost-vsock-ccw-auto.s390x-latest.args
+++ b/tests/qemuxml2argvdata/vhost-vsock-ccw-auto.s390x-latest.args
@@ -33,4 +33,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-device '{"driver":"vhost-vsock-ccw","id":"vsock0","guest-cid":42,"vhostfd":"6789","devno":"fe.0.0002"}' \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/vhost-vsock-ccw-iommu.s390x-latest.args b/tests/qemuxml2argvdata/vhost-vsock-ccw-iommu.s390x-latest.args
index 4fec97f50e..85e7fe5825 100644
--- a/tests/qemuxml2argvdata/vhost-vsock-ccw-iommu.s390x-latest.args
+++ b/tests/qemuxml2argvdata/vhost-vsock-ccw-iommu.s390x-latest.args
@@ -33,4 +33,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-device '{"driver":"vhost-vsock-ccw","iommu_platform":true,"id":"vsock0","guest-cid":4,"vhostfd":"6789","devno":"fe.0.0002"}' \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/vhost-vsock-ccw-iommu.xml b/tests/qemuxml2argvdata/vhost-vsock-ccw-iommu.xml
index cc299dcba9..968442c707 100644
--- a/tests/qemuxml2argvdata/vhost-vsock-ccw-iommu.xml
+++ b/tests/qemuxml2argvdata/vhost-vsock-ccw-iommu.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2argvdata/vhost-vsock-ccw.s390x-latest.args b/tests/qemuxml2argvdata/vhost-vsock-ccw.s390x-latest.args
index 9d2cd4e125..e423ee7b2f 100644
--- a/tests/qemuxml2argvdata/vhost-vsock-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/vhost-vsock-ccw.s390x-latest.args
@@ -33,4 +33,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-device '{"driver":"vhost-vsock-ccw","id":"vsock0","guest-cid":4,"vhostfd":"6789","devno":"fe.0.0003"}' \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/video-virtio-gpu-ccw.s390x-latest.args b/tests/qemuxml2argvdata/video-virtio-gpu-ccw.s390x-latest.args
index 4e186b3452..2b5feaaaae 100644
--- a/tests/qemuxml2argvdata/video-virtio-gpu-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/video-virtio-gpu-ccw.s390x-latest.args
@@ -34,4 +34,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-device '{"driver":"virtio-gpu-ccw","id":"video1","max_outputs":1,"devno":"fe.0.0003"}' \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/virtio-rng-ccw.s390x-latest.args b/tests/qemuxml2argvdata/virtio-rng-ccw.s390x-latest.args
index 69bdfc8ac3..aed4a3957e 100644
--- a/tests/qemuxml2argvdata/virtio-rng-ccw.s390x-latest.args
+++ b/tests/qemuxml2argvdata/virtio-rng-ccw.s390x-latest.args
@@ -37,4 +37,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/hwrng"}' \
-device '{"driver":"virtio-rng-ccw","rng":"objrng0","id":"rng0","devno":"fe.0.0002"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvdata/watchdog-diag288.s390x-latest.args b/tests/qemuxml2argvdata/watchdog-diag288.s390x-latest.args
index bc848b8e82..cecfa5a9f7 100644
--- a/tests/qemuxml2argvdata/watchdog-diag288.s390x-latest.args
+++ b/tests/qemuxml2argvdata/watchdog-diag288.s390x-latest.args
@@ -34,4 +34,5 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \
-watchdog-action inject-nmi \
-device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0001"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-run-with async-teardown=on \
-msg timestamp=on
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 9abaa72674..232fce2476 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -2707,6 +2707,8 @@ mymain(void)
DO_TEST_CAPS_ARCH_VER_PARSE_ERROR("s390-async-teardown", "s390x", "6.0.0");
DO_TEST_CAPS_ARCH_LATEST("s390-async-teardown-disabled", "s390x");
DO_TEST_CAPS_ARCH_VER("s390-async-teardown-disabled", "s390x", "6.0.0");
+ DO_TEST_CAPS_ARCH_LATEST("s390-no-async-teardown-autogen", "s390x");
+ DO_TEST_CAPS_ARCH_VER("s390-no-async-teardown-autogen", "s390x", "6.0.0");
qemuTestDriverFree(&driver);
virFileWrapperClearPrefixes();
diff --git a/tests/qemuxml2xmloutdata/default-video-type-s390x.s390x-latest.xml b/tests/qemuxml2xmloutdata/default-video-type-s390x.s390x-latest.xml
index c8aac8f1bf..6a6b9d2a2b 100644
--- a/tests/qemuxml2xmloutdata/default-video-type-s390x.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/default-video-type-s390x.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
diff --git a/tests/qemuxml2xmloutdata/disk-virtio-s390-zpci.s390x-latest.xml b/tests/qemuxml2xmloutdata/disk-virtio-s390-zpci.s390x-latest.xml
index c98bf78160..c7aa466579 100644
--- a/tests/qemuxml2xmloutdata/disk-virtio-s390-zpci.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/disk-virtio-s390-zpci.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-scsi-vhost-scsi-ccw.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-scsi-vhost-scsi-ccw.s390x-latest.xml
index efd3027d3e..e5f58ede0d 100644
--- a/tests/qemuxml2xmloutdata/hostdev-scsi-vhost-scsi-ccw.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-scsi-vhost-scsi-ccw.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ap.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ap.s390x-latest.xml
index 96cd88bfdd..0cee4da951 100644
--- a/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ap.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ap.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ccw-boot.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ccw-boot.s390x-latest.xml
index f2ae0b7d09..4827b6e2a6 100644
--- a/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ccw-boot.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ccw-boot.s390x-latest.xml
@@ -7,6 +7,9 @@
<os>
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ccw.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ccw.s390x-latest.xml
index b411a2a348..e4526d8bce 100644
--- a/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ccw.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-subsys-mdev-vfio-ccw.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate-fids.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate-fids.s390x-latest.xml
index dd1dea4e99..902d2227ee 100644
--- a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate-fids.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate-fids.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate-uids.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate-uids.s390x-latest.xml
index 1a52487692..136e56dedc 100644
--- a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate-uids.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate-uids.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate.s390x-latest.xml
index 670f8c68b4..3c93c5e868 100644
--- a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-autogenerate.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-boundaries.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-boundaries.s390x-latest.xml
index df55f79501..a868c7d585 100644
--- a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-boundaries.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-boundaries.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-ccw-memballoon.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-ccw-memballoon.s390x-latest.xml
index 7df6491b68..2b97ebb30e 100644
--- a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-ccw-memballoon.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-ccw-memballoon.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
diff --git a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-multidomain-many.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-multidomain-many.s390x-latest.xml
index e64d7de561..937ed64ecc 100644
--- a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-multidomain-many.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci-multidomain-many.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci.s390x-latest.xml b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci.s390x-latest.xml
index 5e14a63810..266f8cf1af 100644
--- a/tests/qemuxml2xmloutdata/hostdev-vfio-zpci.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/hostdev-vfio-zpci.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/input-virtio-ccw.s390x-latest.xml b/tests/qemuxml2xmloutdata/input-virtio-ccw.s390x-latest.xml
index bca07c8fd8..2b95beb0bd 100644
--- a/tests/qemuxml2xmloutdata/input-virtio-ccw.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/input-virtio-ccw.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/iothreads-disk-virtio-ccw.s390x-latest.xml b/tests/qemuxml2xmloutdata/iothreads-disk-virtio-ccw.s390x-latest.xml
index cdcee3bbb4..45d7238ded 100644
--- a/tests/qemuxml2xmloutdata/iothreads-disk-virtio-ccw.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/iothreads-disk-virtio-ccw.s390x-latest.xml
@@ -9,6 +9,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/iothreads-virtio-scsi-ccw.s390x-latest.xml b/tests/qemuxml2xmloutdata/iothreads-virtio-scsi-ccw.s390x-latest.xml
index d73f43f235..19e8d1246b 100644
--- a/tests/qemuxml2xmloutdata/iothreads-virtio-scsi-ccw.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/iothreads-virtio-scsi-ccw.s390x-latest.xml
@@ -9,6 +9,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/machine-loadparm-hostdev.s390x-latest.xml b/tests/qemuxml2xmloutdata/machine-loadparm-hostdev.s390x-latest.xml
index f564d6deb0..47a45e72d4 100644
--- a/tests/qemuxml2xmloutdata/machine-loadparm-hostdev.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/machine-loadparm-hostdev.s390x-latest.xml
@@ -7,6 +7,9 @@
<os>
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/machine-loadparm-multiple-disks-nets-s390.s390x-latest.xml b/tests/qemuxml2xmloutdata/machine-loadparm-multiple-disks-nets-s390.s390x-latest.xml
index 039968d7e4..8c06ab3fa5 100644
--- a/tests/qemuxml2xmloutdata/machine-loadparm-multiple-disks-nets-s390.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/machine-loadparm-multiple-disks-nets-s390.s390x-latest.xml
@@ -7,6 +7,9 @@
<os>
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/s390-default-cpu-kvm-ccw-virtio-2.7.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-default-cpu-kvm-ccw-virtio-2.7.s390x-latest.xml
index ae39e6277d..75c4c79c32 100644
--- a/tests/qemuxml2xmloutdata/s390-default-cpu-kvm-ccw-virtio-2.7.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-default-cpu-kvm-ccw-virtio-2.7.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio-2.7'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='host-passthrough' check='none'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
diff --git a/tests/qemuxml2xmloutdata/s390-default-cpu-kvm-ccw-virtio-4.2.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-default-cpu-kvm-ccw-virtio-4.2.s390x-latest.xml
index 4906206ada..0acc8d5abd 100644
--- a/tests/qemuxml2xmloutdata/s390-default-cpu-kvm-ccw-virtio-4.2.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-default-cpu-kvm-ccw-virtio-4.2.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio-4.2'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
diff --git a/tests/qemuxml2xmloutdata/s390-default-cpu-tcg-ccw-virtio-2.7.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-default-cpu-tcg-ccw-virtio-2.7.s390x-latest.xml
index f4f9e724a9..704e06a4c4 100644
--- a/tests/qemuxml2xmloutdata/s390-default-cpu-tcg-ccw-virtio-2.7.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-default-cpu-tcg-ccw-virtio-2.7.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio-2.7'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/s390-default-cpu-tcg-ccw-virtio-4.2.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-default-cpu-tcg-ccw-virtio-4.2.s390x-latest.xml
index 65dd30a3fb..4a2d567641 100644
--- a/tests/qemuxml2xmloutdata/s390-default-cpu-tcg-ccw-virtio-4.2.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-default-cpu-tcg-ccw-virtio-4.2.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio-4.2'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/s390-defaultconsole.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-defaultconsole.s390x-latest.xml
index 212b294291..ab84711155 100644
--- a/tests/qemuxml2xmloutdata/s390-defaultconsole.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-defaultconsole.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
diff --git a/tests/qemuxml2xmloutdata/s390-no-async-teardown-autogen.s390x-6.0.0.xml b/tests/qemuxml2xmloutdata/s390-no-async-teardown-autogen.s390x-6.0.0.xml
new file mode 100644
index 0000000000..8fc0c6fe8f
--- /dev/null
+++ b/tests/qemuxml2xmloutdata/s390-no-async-teardown-autogen.s390x-6.0.0.xml
@@ -0,0 +1,25 @@
+<domain type='kvm'>
+ <name>test</name>
+ <uuid>9aa4b45c-b9dd-45ef-91fe-862b27b4231f</uuid>
+ <memory unit='KiB'>262144</memory>
+ <currentMemory unit='KiB'>262144</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='s390x' machine='s390-ccw-virtio-6.0'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <cpu mode='host-model' check='partial'/>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu-system-s390x</emulator>
+ <controller type='pci' index='0' model='pci-root'/>
+ <audio id='1' type='none'/>
+ <memballoon model='virtio'>
+ <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
+ </memballoon>
+ <panic model='s390'/>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2xmloutdata/s390-no-async-teardown-autogen.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-no-async-teardown-autogen.s390x-latest.xml
new file mode 100644
index 0000000000..4f79e2e4f4
--- /dev/null
+++ b/tests/qemuxml2xmloutdata/s390-no-async-teardown-autogen.s390x-latest.xml
@@ -0,0 +1,28 @@
+<domain type='kvm'>
+ <name>test</name>
+ <uuid>9aa4b45c-b9dd-45ef-91fe-862b27b4231f</uuid>
+ <memory unit='KiB'>262144</memory>
+ <currentMemory unit='KiB'>262144</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
+ <cpu mode='host-model' check='partial'/>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu-system-s390x</emulator>
+ <controller type='pci' index='0' model='pci-root'/>
+ <audio id='1' type='none'/>
+ <memballoon model='virtio'>
+ <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
+ </memballoon>
+ <panic model='s390'/>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2xmloutdata/s390-panic-missing.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-panic-missing.s390x-latest.xml
index b36c12e435..a7dec81555 100644
--- a/tests/qemuxml2xmloutdata/s390-panic-missing.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-panic-missing.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/s390-panic-no-address.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-panic-no-address.s390x-latest.xml
index 9b9fbf3243..510396a9a8 100644
--- a/tests/qemuxml2xmloutdata/s390-panic-no-address.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-panic-no-address.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/s390-panic.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-panic.s390x-latest.xml
index 2f27890ceb..1374d966fc 100644
--- a/tests/qemuxml2xmloutdata/s390-panic.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-panic.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='host-model' check='partial'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
diff --git a/tests/qemuxml2xmloutdata/s390-serial-2.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-serial-2.s390x-latest.xml
index bf67ed8c12..db1d4e32c9 100644
--- a/tests/qemuxml2xmloutdata/s390-serial-2.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-serial-2.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/s390-serial-console.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-serial-console.s390x-latest.xml
index 9ce55598bc..36c4b85dc7 100644
--- a/tests/qemuxml2xmloutdata/s390-serial-console.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-serial-console.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/s390-serial.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390-serial.s390x-latest.xml
index 9ce55598bc..36c4b85dc7 100644
--- a/tests/qemuxml2xmloutdata/s390-serial.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390-serial.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/s390x-ccw-graphics.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390x-ccw-graphics.s390x-latest.xml
index c4c4c4cfdb..375f293855 100644
--- a/tests/qemuxml2xmloutdata/s390x-ccw-graphics.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390x-ccw-graphics.s390x-latest.xml
@@ -13,6 +13,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/s390x-ccw-headless.s390x-latest.xml b/tests/qemuxml2xmloutdata/s390x-ccw-headless.s390x-latest.xml
index 48d9cb86f2..3b092cb574 100644
--- a/tests/qemuxml2xmloutdata/s390x-ccw-headless.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/s390x-ccw-headless.s390x-latest.xml
@@ -13,6 +13,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/vhost-vsock-ccw-auto.s390x-latest.xml b/tests/qemuxml2xmloutdata/vhost-vsock-ccw-auto.s390x-latest.xml
index c384522a42..30ca0c7caf 100644
--- a/tests/qemuxml2xmloutdata/vhost-vsock-ccw-auto.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/vhost-vsock-ccw-auto.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/vhost-vsock-ccw.s390x-latest.xml b/tests/qemuxml2xmloutdata/vhost-vsock-ccw.s390x-latest.xml
index d519028396..31a29da0e6 100644
--- a/tests/qemuxml2xmloutdata/vhost-vsock-ccw.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/vhost-vsock-ccw.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/video-virtio-gpu-ccw-auto.s390x-latest.xml b/tests/qemuxml2xmloutdata/video-virtio-gpu-ccw-auto.s390x-latest.xml
index 87ee9eee54..a2227a3eff 100644
--- a/tests/qemuxml2xmloutdata/video-virtio-gpu-ccw-auto.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/video-virtio-gpu-ccw-auto.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmloutdata/video-virtio-gpu-ccw.s390x-latest.xml b/tests/qemuxml2xmloutdata/video-virtio-gpu-ccw.s390x-latest.xml
index 9b6bf6c980..d469060008 100644
--- a/tests/qemuxml2xmloutdata/video-virtio-gpu-ccw.s390x-latest.xml
+++ b/tests/qemuxml2xmloutdata/video-virtio-gpu-ccw.s390x-latest.xml
@@ -8,6 +8,9 @@
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <async-teardown enabled='yes'/>
+ </features>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu</model>
</cpu>
diff --git a/tests/qemuxml2xmltest.c b/tests/qemuxml2xmltest.c
index b66274beb8..a6e02fc01d 100644
--- a/tests/qemuxml2xmltest.c
+++ b/tests/qemuxml2xmltest.c
@@ -1246,6 +1246,8 @@ mymain(void)
DO_TEST_CAPS_ARCH_LATEST("s390-async-teardown-no-attrib", "s390x");
DO_TEST_CAPS_ARCH_LATEST("s390-async-teardown-disabled", "s390x");
DO_TEST_CAPS_ARCH_VER("s390-async-teardown-disabled", "s390x", "6.0.0");
+ DO_TEST_CAPS_ARCH_LATEST("s390-no-async-teardown-autogen", "s390x");
+ DO_TEST_CAPS_ARCH_VER("s390-no-async-teardown-autogen", "s390x", "6.0.0");
cleanup:
qemuTestDriverFree(&driver);
--
2.41.0
On 7/5/23 8:20 AM, Boris Fiuczynski wrote:
> Enable by default asynchronous teardown on S390 hosts and add tests for
> asynchronous teardown autogeneration support.
> On S390 hosts, Secure Execution guests can take a long time to shutdown,
> since the memory cleanup can take a long time. Since there is no
> practical way to determine whether a S390 guest is running in Secure
> Execution mode, and since the asynchronous teardown does not impact
> normal (not Secure Execution) guests or guests without large memory
> configurations, we enable asynchronous teardown by default on S390.
> A user can select to override the default in the guest domain XML.
>
> Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
> ---
As it turns out the discussion seems to be on this patch only.
Would it be acceptable to push the first four patches of the series
without the fifth patch?
--
Mit freundlichen Grüßen/Kind regards
Boris Fiuczynski
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Gregor Pillen
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294
On Mon, Jul 10, 2023 at 12:02:35PM +0200, Boris Fiuczynski wrote:
> On 7/5/23 8:20 AM, Boris Fiuczynski wrote:
> > Enable by default asynchronous teardown on S390 hosts and add tests for
> > asynchronous teardown autogeneration support.
> > On S390 hosts, Secure Execution guests can take a long time to shutdown,
> > since the memory cleanup can take a long time. Since there is no
> > practical way to determine whether a S390 guest is running in Secure
> > Execution mode, and since the asynchronous teardown does not impact
> > normal (not Secure Execution) guests or guests without large memory
> > configurations, we enable asynchronous teardown by default on S390.
> > A user can select to override the default in the guest domain XML.
> >
> > Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
> > ---
>
> As it turns out the discussion seems to be on this patch only.
> Would it be acceptable to push the first four patches of the series
> without the fifth patch?

Yes, that's fine.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On 7/10/23 12:07, Daniel P. Berrangé wrote:
> On Mon, Jul 10, 2023 at 12:02:35PM +0200, Boris Fiuczynski wrote:
>> On 7/5/23 8:20 AM, Boris Fiuczynski wrote:
>>> Enable by default asynchronous teardown on S390 hosts and add tests for
>>> asynchronous teardown autogeneration support.
>>> On S390 hosts, Secure Execution guests can take a long time to shutdown,
>>> since the memory cleanup can take a long time. Since there is no
>>> practical way to determine whether a S390 guest is running in Secure
>>> Execution mode, and since the asynchronous teardown does not impact
>>> normal (not Secure Execution) guests or guests without large memory
>>> configurations, we enable asynchronous teardown by default on S390.
>>> A user can select to override the default in the guest domain XML.
>>>
>>> Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
>>> ---
>>
>> As it turns out the discussion seems to be on this patch only.
>> Would it be acceptable to push the first four patches of the series
>> without the fifth patch?
>
> Yes, that's fine.
>
> With regards,
> Daniel

Merged now.

Michal
On Wed, Jul 05, 2023 at 08:20:27AM +0200, Boris Fiuczynski wrote:
> Enable by default asynchronous teardown on S390 hosts and add tests for
> asynchronous teardown autogeneration support.
> On S390 hosts, Secure Execution guests can take a long time to shutdown,
> since the memory cleanup can take a long time.

Can you elaborate on this? What makes it slow, and what kind of
magnitude of slowness are we talking about? e.g. for a 500 GB guest,
what is the shutdown time for a normal vs a protected guest?

> Since there is no
> practical way to determine whether a S390 guest is running in Secure
> Execution mode, and since the asynchronous teardown does not impact
> normal (not Secure Execution) guests or guests without large memory
> configurations, we enable asynchronous teardown by default on S390.
> A user can select to override the default in the guest domain XML.

It feels pretty sketchy to me to be doing async teardown as a
guest-arch-specific behavioural change.

It's been a while since the original QEMU discussions, but IIRC,
async teardown is not transparent to mgmt apps.

Even if the guest has gone from QEMU/libvirt's POV, if the host
is still reclaiming memory, the guest RAM is still not available
for starting new guests. I fear this is liable to trip up memory
accounting logic in mgmt apps, in a hard to understand way, because
it will be a designed-in race condition.

I rather think mgmt apps need to explicitly opt-in to async teardown,
so they're aware that they need to take account of delayed RAM
availability in their accounting / guest placement logic.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
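(For reference, the per-guest knob being discussed is the feature element
this series autogenerates; a guest definition can override the new S390
default explicitly. The enabled='no' form below mirrors the
s390-async-teardown-disabled test case from this series.)

  <features>
    <async-teardown enabled='no'/>
  </features>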
On Wed, 5 Jul 2023 10:18:37 +0100
Daniel P. Berrangé <berrange@redhat.com> wrote:

> On Wed, Jul 05, 2023 at 08:20:27AM +0200, Boris Fiuczynski wrote:
> > Enable by default asynchronous teardown on S390 hosts and add tests for
> > asynchronous teardown autogeneration support.
> > On S390 hosts, Secure Execution guests can take a long time to shutdown,
> > since the memory cleanup can take a long time.
>
> Can you elaborate on this? What makes it slow, and what kind of
> magnitude of slowness are we talking about? e.g. for a 500 GB guest,
> what is the shutdown time for a normal vs a protected guest?

depending on the size of the guest it can go from seconds for small
guests to dozens of minutes for huge guests

I don't have the numbers at hand for 500G

> > Since there is no
> > practical way to determine whether a S390 guest is running in Secure
> > Execution mode, and since the asynchronous teardown does not impact
> > normal (not Secure Execution) guests or guests without large memory
> > configurations, we enable asynchronous teardown by default on S390.
> > A user can select to override the default in the guest domain XML.
>
> It feels pretty sketchy to me to be doing async teardown as a
> guest-arch-specific behavioural change.
>
> It's been a while since the original QEMU discussions, but IIRC,
> async teardown is not transparent to mgmt apps.
>
> Even if the guest has gone from QEMU/libvirt's POV, if the host
> is still reclaiming memory, the guest RAM is still not available
> for starting new guests. I fear this is liable to trip up memory
> accounting logic in mgmt apps, in a hard to understand way, because
> it will be a designed-in race condition.
>
> I rather think mgmt apps need to explicitly opt-in to async teardown,
> so they're aware that they need to take account of delayed RAM
> availability in their accounting / guest placement logic.

what would you think about enabling it by default only for guests that
are capable of running in Secure Execution mode?

> With regards,
> Daniel
On Wed, Jul 05, 2023 at 02:22:37PM +0200, Claudio Imbrenda wrote:
> On Wed, 5 Jul 2023 10:18:37 +0100
> Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> > On Wed, Jul 05, 2023 at 08:20:27AM +0200, Boris Fiuczynski wrote:
> > > Enable by default asynchronous teardown on S390 hosts and add tests for
> > > asynchronous teardown autogeneration support.
> > > On S390 hosts, Secure Execution guests can take a long time to shutdown,
> > > since the memory cleanup can take a long time.
> >
> > Can you elaborate on this? What makes it slow, and what kind of
> > magnitude of slowness are we talking about? e.g. for a 500 GB guest,
> > what is the shutdown time for a normal vs a protected guest?
>
> depending on the size of the guest it can go from seconds for small
> guests to dozens of minutes for huge guests
>
> I don't have the numbers at hand for 500G

Doesn't have to be for 500G - that was just a value I plucked out of
the air. Just interested in any concrete example timings for a
non-trivially small guest.

> > > Since there is no
> > > practical way to determine whether a S390 guest is running in Secure
> > > Execution mode, and since the asynchronous teardown does not impact
> > > normal (not Secure Execution) guests or guests without large memory
> > > configurations, we enable asynchronous teardown by default on S390.
> > > A user can select to override the default in the guest domain XML.
> >
> > It feels pretty sketchy to me to be doing async teardown as a
> > guest-arch-specific behavioural change.
> >
> > It's been a while since the original QEMU discussions, but IIRC,
> > async teardown is not transparent to mgmt apps.
> >
> > Even if the guest has gone from QEMU/libvirt's POV, if the host
> > is still reclaiming memory, the guest RAM is still not available
> > for starting new guests. I fear this is liable to trip up memory
> > accounting logic in mgmt apps, in a hard to understand way, because
> > it will be a designed-in race condition.
> >
> > I rather think mgmt apps need to explicitly opt-in to async teardown,
> > so they're aware that they need to take account of delayed RAM
> > availability in their accounting / guest placement logic.
>
> what would you think about enabling it by default only for guests that
> are capable of running in Secure Execution mode?

IIUC, that's basically /all/ guests if running on new enough hardware
with prot_virt=1 enabled on the host OS, so will still present challenges
to mgmt apps needing to be aware of this behaviour AFAICS.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On Wed, 5 Jul 2023 13:26:32 +0100
Daniel P. Berrangé <berrange@redhat.com> wrote:

[...]

> > > I rather think mgmt apps need to explicitly opt-in to async teardown,
> > > so they're aware that they need to take account of delayed RAM
> > > availability in their accounting / guest placement logic.
> >
> > what would you think about enabling it by default only for guests that
> > are capable of running in Secure Execution mode?
>
> IIUC, that's basically /all/ guests if running on new enough hardware
> with prot_virt=1 enabled on the host OS, so will still present challenges
> to mgmt apps needing to be aware of this behaviour AFAICS.

I think there is some fencing still? I don't think it's automatic

> With regards,
> Daniel
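(On the fencing question: a minimal host-side sketch, assuming a recent
s390 kernel that exposes the ultravisor state in sysfs; the exact
attribute path is an assumption here and may vary by kernel version.)

  # was the host booted with protected virtualization support enabled?
  grep -o 'prot_virt=[^ ]*' /proc/cmdline
  # assumed sysfs attribute reflecting the effective host ultravisor state
  cat /sys/firmware/uv/prot_virt_host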
On Wed, Jul 05, 2023 at 02:46:03PM +0200, Claudio Imbrenda wrote:
> On Wed, 5 Jul 2023 13:26:32 +0100
> Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> [...]
>
> > > > I rather think mgmt apps need to explicitly opt-in to async teardown,
> > > > so they're aware that they need to take account of delayed RAM
> > > > availability in their accounting / guest placement logic.
> > >
> > > what would you think about enabling it by default only for guests that
> > > are capable of running in Secure Execution mode?
> >
> > IIUC, that's basically /all/ guests if running on new enough hardware
> > with prot_virt=1 enabled on the host OS, so will still present challenges
> > to mgmt apps needing to be aware of this behaviour AFAICS.
>
> I think there is some fencing still? I don't think it's automatic
IIUC, the following sequence is possible

  1. Start QEMU with -m 500G
     -> QEMU spawns async teardown helper process
  2. Stop QEMU
     -> Async teardown helper process remains running while
        kernel releases RAM
  3. Start QEMU with -m 500G
     -> Fails with ENOMEM
  ...time passes...
  4. Async teardown helper finally terminates
     -> The full original 500G is only now released for use

Basically if you can't do

  while true
  do
    virsh start $guest
    virsh stop $guest
  done

then it is a change in libvirt API semantics, and so will require
explicit opt-in from the mgmt app to use this feature.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On 7/5/23 3:08 PM, Daniel P. Berrangé wrote:
> On Wed, Jul 05, 2023 at 02:46:03PM +0200, Claudio Imbrenda wrote:
>> On Wed, 5 Jul 2023 13:26:32 +0100
>> Daniel P. Berrangé <berrange@redhat.com> wrote:
>>
>> [...]
>>
>>>>> I rather think mgmt apps need to explicitly opt-in to async teardown,
>>>>> so they're aware that they need to take account of delayed RAM
>>>>> availability in their accounting / guest placement logic.
>>>>
>>>> what would you think about enabling it by default only for guests that
>>>> are capable of running in Secure Execution mode?
>>>
>>> IIUC, that's basically /all/ guests if running on new enough hardware
>>> with prot_virt=1 enabled on the host OS, so will still present challenges
>>> to mgmt apps needing to be aware of this behaviour AFAICS.
>>
>> I think there is some fencing still? I don't think it's automatic
>
> IIUC, the following sequence is possible
>
> 1. Start QEMU with -m 500G
> -> QEMU spawns async teardown helper process
> 2. Stop QEMU
> -> Async teardown helper process remains running while
> kernel releases RAM
> 3. Start QEMU with -m 500G
> -> Fails with ENOMEM
> ...time passes...
> 4. Async teardown helper finally terminates
> -> The full original 500G is only now released for use
>
> Basically if you can't do
>
> while true
> do
> virsh start $guest
> virsh stop $guest
> done
>
> then it is a change in libvirt API semantics, and so will require
> explicit opt-in from the mgmt app to use this feature.
What is your expectation if libvirt ["virsh stop $guest"] fails to wait
for qemu to terminate, e.g. after 20+ minutes? I think that libvirt does
have a timeout trying to stop qemu and then gives up.
Wouldn't you encounter the same problem that way?
>
> With regards,
> Daniel
>
--
Mit freundlichen Grüßen/Kind regards
Boris Fiuczynski
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Gregor Pillen
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294
On Wed, Jul 05, 2023 at 04:27:46PM +0200, Boris Fiuczynski wrote:
> On 7/5/23 3:08 PM, Daniel P. Berrangé wrote:
> > On Wed, Jul 05, 2023 at 02:46:03PM +0200, Claudio Imbrenda wrote:
> > > [...]
> > >
> > > I think there is some fencing still? I don't think it's automatic
> >
> > IIUC, the following sequence is possible
> >
> >   1. Start QEMU with -m 500G
> >      -> QEMU spawns async teardown helper process
> >   2. Stop QEMU
> >      -> Async teardown helper process remains running while
> >         kernel releases RAM
> >   3. Start QEMU with -m 500G
> >      -> Fails with ENOMEM
> >   ...time passes...
> >   4. Async teardown helper finally terminates
> >      -> The full original 500G is only now released for use
> >
> > Basically if you can't do
> >
> >   while true
> >   do
> >     virsh start $guest
> >     virsh stop $guest
> >   done
> >
> > then it is a change in libvirt API semantics, and so will require
> > explicit opt-in from the mgmt app to use this feature.
>
> What is your expectation if libvirt ["virsh stop $guest"] fails to wait
> for qemu to terminate, e.g. after 20+ minutes? I think that libvirt does
> have a timeout trying to stop qemu and then gives up.
> Wouldn't you encounter the same problem that way?

Yes, that would be a bug. We've tried to address these in the past.
For example, when there are PCI host devs assigned, the kernel takes
quite a bit longer to terminate QEMU. In that case, we extended the
timeout we wait for QEMU to exit.

Essentially the idea is that when 'virsh destroy' returns we want the
caller to have a strong guarantee that all resources are released.
IOW, if it sees an error code the expectation is that QEMU has suffered
a serious problem - such as being stuck in an uninterruptible sleep in
kernel space. We don't want the caller to see errors in "normal"
scenarios.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
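(A caller-side sketch of the contract described above, using the public
libvirt C API. The domain name, retry count and delay are illustrative
assumptions, not part of any API contract; a real mgmt app would more
likely watch domain lifecycle events than poll.)

  /* Hedged sketch: rely on virDomainDestroy()'s guarantee that success
   * means QEMU is gone and its resources are released; on failure,
   * retry or fall back to monitoring lifecycle events. */
  #include <stdio.h>
  #include <unistd.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      if (!conn)
          return 1;

      virDomainPtr dom = virDomainLookupByName(conn, "guest1"); /* name assumed */
      if (!dom) {
          virConnectClose(conn);
          return 1;
      }

      int rc = -1;
      for (int i = 0; i < 3 && rc < 0; i++) {   /* retry policy assumed */
          rc = virDomainDestroy(dom);           /* 0 => resources released */
          if (rc < 0)
              sleep(10);
      }

      if (rc < 0)
          fprintf(stderr, "destroy still failing; monitor lifecycle events\n");

      virDomainFree(dom);
      virConnectClose(conn);
      return rc == 0 ? 0 : 1;
  }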
On 7/5/23 4:47 PM, Daniel P. Berrangé wrote:
> On Wed, Jul 05, 2023 at 04:27:46PM +0200, Boris Fiuczynski wrote:
>> On 7/5/23 3:08 PM, Daniel P. Berrangé wrote:
>>> On Wed, Jul 05, 2023 at 02:46:03PM +0200, Claudio Imbrenda wrote:
>>>> On Wed, 5 Jul 2023 13:26:32 +0100
>>>> Daniel P. Berrangé <berrange@redhat.com> wrote:
>>>>
>>>> [...]
>>>>
>>>>>>> I rather think mgmt apps need to explicitly opt-in to async teardown,
>>>>>>> so they're aware that they need to take account of delayed RAM
>>>>>>> availability in their accounting / guest placement logic.
>>>>>>
>>>>>> what would you think about enabling it by default only for guests that
>>>>>> are capable of running in Secure Execution mode?
>>>>>
>>>>> IIUC, that's basically /all/ guests if running on new enough hardware
>>>>> with prot_virt=1 enabled on the host OS, so will still present challenges
>>>>> to mgmt apps needing to be aware of this behaviour AFAICS.
>>>>
>>>> I think there is some fencing still? I don't think it's automatic
>>>
>>> IIUC, the following sequence is possible
>>>
>>> 1. Start QEMU with -m 500G
>>> -> QEMU spawns async teardown helper process
>>> 2. Stop QEMU
>>> -> Async teardown helper process remains running while
>>> kernel releases RAM
>>> 3. Start QEMU with -m 500G
>>> -> Fails with ENOMEM
>>> ...time passes...
>>> 4. Async teardown helper finally terminates
>>> -> The full original 500G is only now released for use
>>>
>>> Basically if you can't do
>>>
>>> while true
>>> do
>>> virsh start $guest
>>> virsh stop $guest
>>> done
>>>
>>> then it is a change in libvirt API semantics, and so will require
>>> explicit opt-in from the mgmt app to use this feature.
>>
>> What is your expectation if libvirt ["virsh stop $guest"] fails to wait for
>> qemu to terminate, e.g. after 20+ minutes? I think that libvirt does have a
>> timeout trying to stop qemu and then gives up.
>> Wouldn't you encounter the same problem that way?
>
> Yes, that would be a bug. We've tried to address these in the past.
> For example, when there are PCI host devs assigned, the kernel takes
> quite a bit longer to terminate QEMU. In that case, we extended the
> timeout we wait for QEMU to exit.
>
> Essentially the idea is that when 'virsh destroy' returns we want the
> caller to have a strong guarantee that all resources are released.
> IOW, if it sees an error code the expectation is that QEMU has suffered
> a serious problem - such as stuck in an uninterruptible sleep in kernel
> space. We don't want the caller to see errors in "normal" scenarios.
>
> With regards,
> Daniel
>
Daniel,
so the idea is to extend the wait until QEMU terminates?
What is your proposal for how to fix the bug?
We had a scenario with a 2TB guest running NOT in Secure Execution mode
whose termination resulted in libvirt giving up on terminating the guest
after 40 seconds (10s SIGTERM and 30s SIGKILL) and systemd was able to
"kill" the QEMU process after about 140s.
We could add additional time depending on the guest memory size BUT with
Secure Execution the timeout would need to be increased by a two-digit
factor. Also for libvirt it is not possible to detect if the guest is
in Secure Execution mode.
I also assume that timeouts of 1h or more are not acceptable. Wouldn't a
long timeout cause other trouble like stalling "virsh list" run in
parallel?
--
Mit freundlichen Grüßen/Kind regards
Boris Fiuczynski
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Gregor Pillen
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294
On Mon, Jul 10, 2023 at 11:57:34AM +0200, Boris Fiuczynski wrote:
> On 7/5/23 4:47 PM, Daniel P. Berrangé wrote:
> > [...]
> >
> > Yes, that would be a bug. We've tried to address these in the past.
> > For example, when there are PCI host devs assigned, the kernel takes
> > quite a bit longer to terminate QEMU. In that case, we extended the
> > timeout we wait for QEMU to exit.
> >
> > Essentially the idea is that when 'virsh destroy' returns we want the
> > caller to have a strong guarantee that all resources are released.
> > IOW, if it sees an error code the expectation is that QEMU has suffered
> > a serious problem - such as being stuck in an uninterruptible sleep in
> > kernel space. We don't want the caller to see errors in "normal"
> > scenarios.
>
> so the idea is to extend the wait until QEMU terminates?
> What is your proposal for how to fix the bug?

There is no bug currently.

If virDomainDestroy returns success, then the caller is guaranteed
that QEMU has gone and all resources are released.

If virDomainDestroy returns failure, then QEMU may or may not be gone.
They can call virDomainDestroy again, or monitor for the domain
lifecycle events to discover when it has finally gone and all
resources are released.

To be more amenable to mgmt apps, we want virDomainDestroy to return
success as frequently as is practical. If there are some scenarios
where we timeout because QEMU is too slow, then that's not a bug,
just a less desirable outcome.

> We had a scenario with a 2TB guest running NOT in Secure Execution mode
> whose termination resulted in libvirt giving up on terminating the guest
> after 40 seconds (10s SIGTERM and 30s SIGKILL) and systemd was able to
> "kill" the QEMU process after about 140s.

When you say systemd killed the process, do you mean this was when
libvirt talks to systemd to invoke "TerminateMachine"? If so then
presumably virDomainDestroy would have returned success, which is OK.
Or am I mis-understanding what you refer to here?

> We could add additional time depending on the guest memory size BUT with
> Secure Execution the timeout would need to be increased by a two-digit
> factor. Also for libvirt it is not possible to detect if the guest is
> in Secure Execution mode.

What component is causing this 2 orders of magnitude delay in shutting
down a guest? If the host can't tell if Secure Execution mode is
enabled or not, why would any code path be different & slower?

> I also assume that timeouts of 1h or more are not acceptable. Wouldn't a
> long timeout cause other trouble like stalling "virsh list" run in
> parallel?

Well a 1 hour timeout is pretty insane; even with the async teardown
that's terrible, as RAM is unable to be used for any new guest for
an incredibly long time.

AFAIR, 'virsh list' should not be stalled by virDomainDestroy, as we
release the exclusive locks during the wait loop.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
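(For context on "TerminateMachine": when QEMU does not exit after
libvirt's SIGTERM/SIGKILL attempts, libvirt can ask systemd-machined to
tear the machine down. A hedged sketch of the equivalent manual D-Bus
call follows; the machine name is an illustrative assumption, following
libvirt's qemu-<id>-<name> naming convention.)

  busctl call org.freedesktop.machine1 /org/freedesktop/machine1 \
      org.freedesktop.machine1.Manager TerminateMachine s 'qemu-1-guest1'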
On Tue, 11 Jul 2023 09:17:00 +0100
Daniel P. Berrangé <berrange@redhat.com> wrote:

[...]

> > We could add additional time depending on the guest memory size BUT with
> > Secure Execution the timeout would need to be increased by a two-digit
> > factor. Also for libvirt it is not possible to detect if the guest is
> > in Secure Execution mode.
>
> What component is causing this 2 orders of magnitude delay in shutting

Secure Execution (protected VMs)

> down a guest? If the host can't tell if Secure Execution mode is
> enabled or not, why would any code path be different & slower?

The host kernel (and QEMU) know if a specific VM is running in
secure mode, but there is no meaningful way for this information to be
communicated outwards (e.g. to libvirt)

During teardown, the host kernel will need to do some time-consuming
extra cleanup for each page that belonged to a secure guest.

> > I also assume that timeouts of 1h or more are not acceptable. Wouldn't a
> > long timeout cause other trouble like stalling "virsh list" run in
> > parallel?
>
> Well a 1 hour timeout is pretty insane; even with the async teardown

I think we all agree, and that's why asynchronous teardown was
implemented

> that's terrible, as RAM is unable to be used for any new guest for
> an incredibly long time.

I'm not sure what you mean here. RAM is not kept aside until the
teardown is complete; cleared pages are returned to the free pool
immediately as they are cleared. i.e. when the cleanup is halfway
through, half of the memory will have been freed.

I just wanted to clear up those details; how libvirt can/should
implement it is outside of my domain :)
On Tue, Jul 11, 2023 at 03:48:25PM +0200, Claudio Imbrenda wrote:
> On Tue, 11 Jul 2023 09:17:00 +0100
> Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> [...]
>
> > What component is causing this 2 orders of magnitude delay in shutting
>
> Secure Execution (protected VMs)

So it's the hardware that imposes the penalty, rather than something
the kernel is doing?

Can anything else mitigate this? e.g. does using huge pages make it
faster than normal pages?

> > down a guest? If the host can't tell if Secure Execution mode is
> > enabled or not, why would any code path be different & slower?
>
> The host kernel (and QEMU) know if a specific VM is running in
> secure mode, but there is no meaningful way for this information to be
> communicated outwards (e.g. to libvirt)

Can we expose this in one of the QMP commands, or a new one? It feels
like a mgmt app is going to want to know if a guest is running in secure
mode or not, so it can know if this shutdown penalty is going to be
present.

> During teardown, the host kernel will need to do some time-consuming
> extra cleanup for each page that belonged to a secure guest.
>
> [...]
>
> I'm not sure what you mean here. RAM is not kept aside until the
> teardown is complete; cleared pages are returned to the free pool
> immediately as they are cleared. i.e. when the cleanup is halfway
> through, half of the memory will have been freed.

Yes, it is incrementally released, but in practice most hypervisors are
memory constrained. So if you stop a 2 TB guest, and want to then boot it
again, unless you have a couple of free TB of RAM hanging around, you're
going to need to wait for most all of the original RAM to be reclaimed.

Async cleanup definitely helps, but there's only so much it can do.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On Tue, 11 Jul 2023 14:57:45 +0100
Daniel P. Berrangé <berrange@redhat.com> wrote:

> On Tue, Jul 11, 2023 at 03:48:25PM +0200, Claudio Imbrenda wrote:
> > [...]
> >
> > Secure Execution (protected VMs)
>
> So it's the hardware that imposes the penalty, rather than something
> the kernel is doing?
>
> Can anything else mitigate this? e.g. does using huge pages make it
> faster than normal pages?

unfortunately huge pages cannot be used for Secure Execution, it's a
hardware limitation.

> > > down a guest? If the host can't tell if Secure Execution mode is
> > > enabled or not, why would any code path be different & slower?
> >
> > The host kernel (and QEMU) know if a specific VM is running in
> > secure mode, but there is no meaningful way for this information to be
> > communicated outwards (e.g. to libvirt)
>
> Can we expose this in one of the QMP commands, or a new one? It feels
> like a mgmt app is going to want to know if a guest is running in secure
> mode or not, so it can know if this shutdown penalty is going to be
> present.

I guess it would be possible (no idea how easy/clean it would be). the
issue would be that when the guest is running, it's too late to enable
asynchronous teardown.

also notice that the same guest can jump in and out of secure mode
without needing to shut down (a reboot is enough)

> > During teardown, the host kernel will need to do some time-consuming
> > extra cleanup for each page that belonged to a secure guest.
> >
> > [...]
> >
> > I'm not sure what you mean here. RAM is not kept aside until the
> > teardown is complete; cleared pages are returned to the free pool
> > immediately as they are cleared. i.e. when the cleanup is halfway
> > through, half of the memory will have been freed.
>
> Yes, it is incrementally released, but in practice most hypervisors are
> memory constrained. So if you stop a 2 TB guest, and want to then boot it
> again, unless you have a couple of free TB of RAM hanging around, you're
> going to need to wait for most all of the original RAM to be reclaimed.

if it's a secure guest, it will take time to actually use the memory
anyway. it's a similar issue to the teardown, but in reverse.

> Async cleanup definitely helps, but there's only so much it can do.
>
> With regards,
> Daniel
On Tue, Jul 11, 2023 at 04:22:12PM +0200, Claudio Imbrenda wrote:
> On Tue, 11 Jul 2023 14:57:45 +0100
> Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> [...]
>
> > Can we expose this in one of the QMP commands, or a new one? It feels
> > like a mgmt app is going to want to know if a guest is running in secure
> > mode or not, so it can know if this shutdown penalty is going to be
> > present.
>
> I guess it would be possible (no idea how easy/clean it would be). The
> issue would be that when the guest is running, it's too late to enable
> asynchronous teardown.

I think we just need to document that async teardown is highly
recommended regardless. The ability to query secure virt is more about
helping the application know whether async teardown will be fast or very
very slow.

> also notice that the same guest can jump in and out of secure mode
> without needing to shut down (a reboot is enough)

Yep, though I imagine that's going to be fairly unlikely in practice.

> [...]
>
> if it's a secure guest, it will take time to actually use the memory
> anyway. It's a similar issue to the teardown, but in reverse.

Unless the guest is started with memory preallocation on the QEMU side,
which would make QEMU touch every page to fault it into RAM.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On Tue, 11 Jul 2023 15:33:02 +0100
Daniel P. Berrangé <berrange@redhat.com> wrote:

> On Tue, Jul 11, 2023 at 04:22:12PM +0200, Claudio Imbrenda wrote:
>
> [...]
>
> > also notice that the same guest can jump in and out of secure mode
> > without needing to shut down (a reboot is enough)
>
> Yep, though I imagine that's going to be fairly unlikely in practice.

true

> > if it's a secure guest, it will take time to actually use the memory
> > anyway. It's a similar issue to the teardown, but in reverse.
>
> Unless the guest is started with memory preallocation on the QEMU side,
> which would make QEMU touch every page to fault it into RAM.

that would be unfortunate, indeed.
On Wed, 5 Jul 2023 14:08:27 +0100
Daniel P. Berrangé <berrange@redhat.com> wrote:

> On Wed, Jul 05, 2023 at 02:46:03PM +0200, Claudio Imbrenda wrote:
> > On Wed, 5 Jul 2023 13:26:32 +0100
> > Daniel P. Berrangé <berrange@redhat.com> wrote:
> >
> > [...]
> >
> > > > > I rather think mgmt apps need to explicitly opt-in to async teardown,
> > > > > so they're aware that they need to take account of delayed RAM
> > > > > availability in their accounting / guest placement logic.
> > > >
> > > > what would you think about enabling it by default only for guests that
> > > > are capable to run in Secure Execution mode?
> > >
> > > IIUC, that's basically /all/ guests if running on new enough hardware
> > > with prot_virt=1 enabled on the host OS, so will still present challenges
> > > to mgmt apps needing to be aware of this behaviour AFAICS.
> >
> > I think there is some fencing still? I don't think it's automatic
>
> IIUC, the following sequence is possible
>
> 1. Start QEMU with -m 500G
>    -> QEMU spawns async teardown helper process
> 2. Stop QEMU
>    -> Async teardown helper process remains running while

not running, the process terminates immediately as soon as QEMU
terminates. the termination takes some time, because of the memory
cleanup.

>    kernel releases RAM
> 3. Start QEMU with -m 500G
>    -> Fails with ENOMEM

why though? the new VM will not manage to instantly use all of the
memory

> ...time passes...
> 4. Async teardown helper finally terminates
>    -> The full original 500G is only now released for use

memory starts to get freed as soon as the helper process terminates
(which is as immediately as possible after QEMU terminates).

so unless you have a guest that will allocate and use all of its memory
immediately as fast as possible at boot, this won't be a concern.

> Basically if you can't do
>
> while true
> do
>   virsh start $guest
>   virsh stop $guest
> done
>
> then it is a change in libvirt API semantics, and so will require
> explicit opt-in from the mgmt app to use this feature.

this is still true, though, because you __could__ have a guest that
zeroes out (or otherwise touches) all memory immediately when booting,
I guess? See Thomas' comment, though.

> With regards,
> Daniel
On Wed, Jul 05, 2023 at 03:29:39PM +0200, Claudio Imbrenda wrote:
> On Wed, 5 Jul 2023 14:08:27 +0100
> Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> [...]
>
> memory starts to get freed as soon as the helper process terminates
> (which is as immediately as possible after QEMU terminates).
>
> so unless you have a guest that will allocate and use all of its memory
> immediately as fast as possible at boot, this won't be a concern.

When using huge pages, QEMU should be fully allocating memory
immediately, regardless of whether the guest OS touches all RAM.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On 05/07/2023 16.35, Daniel P. Berrangé wrote:
> On Wed, Jul 05, 2023 at 03:29:39PM +0200, Claudio Imbrenda wrote:
>
> [...]
>
>> so unless you have a guest that will allocate and use all of its memory
>> immediately as fast as possible at boot, this won't be a concern.
>
> When using huge pages, QEMU should be fully allocating memory
> immediately, regardless of whether the guest OS touches all RAM.

IIRC huge pages cannot be used with protected guests yet (Claudio,
Janosch, please confirm), so this should not be a problem here.

 Thomas
On Wed, Jul 05, 2023 at 05:21:21PM +0200, Thomas Huth wrote:
> On 05/07/2023 16.35, Daniel P. Berrangé wrote:
>
> [...]
>
> > When using huge pages, QEMU should be fully allocating memory
> > immediately, regardless of whether the guest OS touches all RAM.
>
> IIRC huge pages cannot be used with protected guests yet (Claudio,
> Janosch, please confirm), so this should not be a problem here.

Another non-hugepage scenario is where <allocation mode="immediate"/> is
set in the guest config.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
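For reference, a minimal sketch of the domain XML fragment Daniel refers
to; the surrounding <domain> definition is elided:

  <memoryBacking>
    <!-- make QEMU allocate (touch) all guest RAM at startup -->
    <allocation mode='immediate'/>
  </memoryBacking>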
On 05/07/2023 14.46, Claudio Imbrenda wrote:
> On Wed, 5 Jul 2023 13:26:32 +0100
> Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> [...]
>
>>>> I rather think mgmt apps need to explicitly opt-in to async teardown,
>>>> so they're aware that they need to take account of delayed RAM
>>>> availability in their accounting / guest placement logic.
>>>
>>> what would you think about enabling it by default only for guests that
>>> are capable to run in Secure Execution mode?
>>
>> IIUC, that's basically /all/ guests if running on new enough hardware
>> with prot_virt=1 enabled on the host OS, so will still present challenges
>> to mgmt apps needing to be aware of this behaviour AFAICS.
>
> I think there is some fencing still? I don't think it's automatic

Could we maybe enable it by default if the user specified the
<launchSecurity type='s390-pv'/> tag?

 Thomas
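For reference, a minimal sketch of a guest definition combining the tag
Thomas mentions with an explicit setting for the teardown behaviour; the
async-teardown feature element is the one this series builds on, and the
rest of the domain definition is elided:

  <domain type='kvm'>
    <!-- run the guest as an IBM Secure Execution (protected) guest -->
    <launchSecurity type='s390-pv'/>
    <features>
      <!-- explicit opt-in; with this series it also becomes the
           autogenerated default on S390 hosts -->
      <async-teardown enabled='yes'/>
    </features>
  </domain>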
On 7/5/23 8:20 AM, Boris Fiuczynski wrote:
> @@ -6694,6 +6706,13 @@ qemuDomainDefFormatBufInternal(virQEMUDriver *driver,
>              }
>          }
>      }
> +
> +    /* Remove asynchronous teardown enablement for backwards compatibility
> +     * on S390 as it gets autogenerated on S390 if supported anyway.
> +     */
> +    if (ARCH_IS_S390(def->os.arch) &&
> +        def->features[VIR_DOMAIN_FEATURE_ASYNC_TEARDOWN] != VIR_TRISTATE_BOOL_YES)
> +        def->features[VIR_DOMAIN_FEATURE_ASYNC_TEARDOWN] = VIR_TRISTATE_BOOL_ABSENT;
> }
Just realized that this is incorrect.
It must be:
    if (ARCH_IS_S390(def->os.arch) &&
        def->features[VIR_DOMAIN_FEATURE_ASYNC_TEARDOWN] == VIR_TRISTATE_BOOL_YES)
        def->features[VIR_DOMAIN_FEATURE_ASYNC_TEARDOWN] = VIR_TRISTATE_BOOL_ABSENT;
Please let me know if I need to resend this series. Thanks.
--
Mit freundlichen Grüßen/Kind regards
Boris Fiuczynski
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Gregor Pillen
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294
On 05/07/2023 08.20, Boris Fiuczynski wrote:
> [...]
>
> diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
> index 94587638c3..884f1599b4 100644
> --- a/src/qemu/qemu_domain.c
> +++ b/src/qemu/qemu_domain.c
> @@ -4402,6 +4402,18 @@ qemuDomainDefEnableDefaultFeatures(virDomainDef *def,
>           * capabilities, we still want to enable this */
>          def->features[VIR_DOMAIN_FEATURE_GIC] = VIR_TRISTATE_SWITCH_ON;
>      }
> +
> +    /* Enabled asynchronous teardown by default on S390 hosts as Secure
> +     * Execution guests can take a long time to shutdown, since the memory
> +     * cleanup can take a long time. Since there is no üractical way to

s/üractical/practical/

With the typo fixed:
Reviewed-by: Thomas Huth <thuth@redhat.com>
On 7/5/23 8:45 AM, Thomas Huth wrote:
> On 05/07/2023 08.20, Boris Fiuczynski wrote:
>> [...]
>>
>> diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
>> index 94587638c3..884f1599b4 100644
>> --- a/src/qemu/qemu_domain.c
>> +++ b/src/qemu/qemu_domain.c
>> @@ -4402,6 +4402,18 @@ qemuDomainDefEnableDefaultFeatures(virDomainDef *def,
>>           * capabilities, we still want to enable this */
>>          def->features[VIR_DOMAIN_FEATURE_GIC] = VIR_TRISTATE_SWITCH_ON;
>>      }
>> +
>> +    /* Enabled asynchronous teardown by default on S390 hosts as Secure
>> +     * Execution guests can take a long time to shutdown, since the memory
>> +     * cleanup can take a long time. Since there is no üractical way to
>
> s/üractical/practical/
>
> With the typo fixed:
> Reviewed-by: Thomas Huth <thuth@redhat.com>
Sorry about that and thanks for catching it.
Instead of resending the series I hope that the person pushing it could
do the fixup with the inlined patch below. Let me know otherwise.
From f08a53cd4954ade366bc794f3a006851f8e7e914 Mon Sep 17 00:00:00 2001
From: Boris Fiuczynski <fiuczy@linux.ibm.com>
Date: Wed, 5 Jul 2023 09:34:12 +0200
Subject: [PATCH] qemu: fixup comment
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Replacing üractical with practical.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
---
src/qemu/qemu_domain.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 884f1599b4..d3f9421943 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -4405,7 +4405,7 @@ qemuDomainDefEnableDefaultFeatures(virDomainDef *def,
     /* Enabled asynchronous teardown by default on S390 hosts as Secure
      * Execution guests can take a long time to shutdown, since the memory
-     * cleanup can take a long time. Since there is no üractical way to
+     * cleanup can take a long time. Since there is no practical way to
      * determine whether a S390 guest is running in Secure Execution mode,
      * and since the asynchronous teardown does not impact normal (not Secure
      * Execution) guests or guests without large memory configurations. */
--
2.41.0
--
Mit freundlichen Grüßen/Kind regards
Boris Fiuczynski
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Gregor Pillen
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294