[PATCH v1] automation: add a test for HVM domU on PVH dom0

Marek Marczykowski-Górecki posted 1 patch 5 months, 2 weeks ago
Patches applied successfully
git fetch https://gitlab.com/xen-project/patchew/xen tags/patchew/20240610133210.724346-1-marmarek@invisiblethingslab.com

[PATCH v1] automation: add a test for HVM domU on PVH dom0
Posted by Marek Marczykowski-Górecki 5 months, 2 weeks ago
This tests if QEMU works in PVH dom0. QEMU in dom0 requires enabling TUN
in the kernel, so do that too.

Add it to both x86 runners, similar to the PVH domU test.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
Requires rebuilding test-artifacts/kernel/6.1.19

I'm actually not sure if there is a sense in testing HVM domU on both
runners, when PVH domU variant is already tested on both. Are there any
differences between Intel and AMD relevant for QEMU in dom0?
---
 automation/gitlab-ci/test.yaml                | 16 ++++++++++++++++
 automation/scripts/qubes-x86-64.sh            | 19 ++++++++++++++++---
 .../tests-artifacts/kernel/6.1.19.dockerfile  |  1 +
 3 files changed, 33 insertions(+), 3 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 902139e14893..898d2adc8c5b 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -175,6 +175,14 @@ adl-smoke-x86-64-dom0pvh-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.18-gcc-debug
 
+adl-smoke-x86-64-dom0pvh-hvm-gcc-debug:
+  extends: .adl-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh dom0pvh-hvm 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.18-gcc-debug
+
 adl-suspend-x86-64-gcc-debug:
   extends: .adl-x86-64
   script:
@@ -215,6 +223,14 @@ zen3p-smoke-x86-64-dom0pvh-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.18-gcc-debug
 
+zen3p-smoke-x86-64-dom0pvh-hvm-gcc-debug:
+  extends: .zen3p-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh dom0pvh-hvm 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.18-gcc-debug
+
 zen3p-pci-hvm-x86-64-gcc-debug:
   extends: .zen3p-x86-64
   script:
diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 087ab2dc171c..816c5dd9aa77 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -19,8 +19,8 @@ vif = [ "bridge=xenbr0", ]
 disk = [ ]
 '
 
-### test: smoke test & smoke test PVH
-if [ -z "${test_variant}" ] || [ "${test_variant}" = "dom0pvh" ]; then
+### test: smoke test & smoke test PVH & smoke test HVM
+if [ -z "${test_variant}" ] || [ "${test_variant}" = "dom0pvh" ] || [ "${test_variant}" = "dom0pvh-hvm" ]; then
     passed="ping test passed"
     domU_check="
 ifconfig eth0 192.168.0.2
@@ -37,10 +37,23 @@ done
 set -x
 echo \"${passed}\"
 "
-if [ "${test_variant}" = "dom0pvh" ]; then
+if [ "${test_variant}" = "dom0pvh" ] || [ "${test_variant}" = "dom0pvh-hvm" ]; then
     extra_xen_opts="dom0=pvh"
 fi
 
+if [ "${test_variant}" = "dom0pvh-hvm" ]; then
+    domU_config='
+type = "hvm"
+name = "domU"
+kernel = "/boot/vmlinuz"
+ramdisk = "/boot/initrd-domU"
+extra = "root=/dev/ram0 console=hvc0"
+memory = 512
+vif = [ "bridge=xenbr0", ]
+disk = [ ]
+'
+fi
+
 ### test: S3
 elif [ "${test_variant}" = "s3" ]; then
     passed="suspend test passed"
diff --git a/automation/tests-artifacts/kernel/6.1.19.dockerfile b/automation/tests-artifacts/kernel/6.1.19.dockerfile
index 3a4096780d20..021bde26c790 100644
--- a/automation/tests-artifacts/kernel/6.1.19.dockerfile
+++ b/automation/tests-artifacts/kernel/6.1.19.dockerfile
@@ -32,6 +32,7 @@ RUN curl -fsSLO https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-"$LINUX_VERSI
     make xen.config && \
     scripts/config --enable BRIDGE && \
     scripts/config --enable IGC && \
+    scripts/config --enable TUN && \
     cp .config .config.orig && \
     cat .config.orig | grep XEN | grep =m |sed 's/=m/=y/g' >> .config && \
     make -j$(nproc) bzImage && \
-- 
2.44.0
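
For reference, a rough manual equivalent of what the new dom0pvh-hvm variant
exercises inside the test dom0 (a sketch only; the temporary config path, the
/dev/net/tun check and the qemu process-name match are illustrative
assumptions, not taken from the patch):

# TUN must be built into the dom0 kernel for QEMU's network backend.
[ -c /dev/net/tun ] || echo "TUN missing from dom0 kernel"

# Start the HVM guest using the config added by the patch.
cat > /tmp/domU.cfg <<'EOF'
type = "hvm"
name = "domU"
kernel = "/boot/vmlinuz"
ramdisk = "/boot/initrd-domU"
extra = "root=/dev/ram0 console=hvc0"
memory = 512
vif = [ "bridge=xenbr0", ]
disk = [ ]
EOF
xl create /tmp/domU.cfg

# With an HVM guest, a device model (QEMU) should now be running in dom0.
pgrep -af qemu || echo "no device model found in dom0"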


Re: [PATCH for-4.19 v1] automation: add a test for HVM domU on PVH dom0
Posted by Andrew Cooper 5 months, 2 weeks ago
On 10/06/2024 2:32 pm, Marek Marczykowski-Górecki wrote:
> This tests if QEMU works in PVH dom0. QEMU in dom0 requires enabling TUN
> in the kernel, so do that too.
>
> Add it to both x86 runners, similar to the PVH domU test.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

CC Oleksii.

> ---
> Requires rebuilding test-artifacts/kernel/6.1.19

Ok.

But on a tangent, shouldn't that move forwards somewhat?

>
> I'm actually not sure if there is a sense in testing HVM domU on both
> runners, when PVH domU variant is already tested on both. Are there any
> differences between Intel and AMD relevant for QEMU in dom0?

It's not just Qemu, it's also HVMLoader, and the particulars of VT-x/SVM
VMExit decode information in order to generate ioreqs.

I'd firmly suggest having both.

~Andrew
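
A quick way to see that the ioreq path really is being exercised on such a
setup is to look for the guest's device model from dom0 (a sketch; the
xenstore path is from memory and may differ across toolstack versions):

# Resolve the guest's domid and find the PID of its device model in dom0.
domid=$(xl domid domU)
dm_pid=$(xenstore-read /local/domain/${domid}/image/device-model-pid)
# The PID should belong to a qemu process serving the guest's ioreqs.
ps -p "${dm_pid}" -o args=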

Re: [PATCH for-4.19 v1] automation: add a test for HVM domU on PVH dom0
Posted by Oleksii K. 5 months, 2 weeks ago
On Mon, 2024-06-10 at 16:25 +0100, Andrew Cooper wrote:
> On 10/06/2024 2:32 pm, Marek Marczykowski-Górecki wrote:
> > This tests if QEMU works in PVH dom0. QEMU in dom0 requires enabling
> > TUN in the kernel, so do that too.
> > 
> > Add it to both x86 runners, similar to the PVH domU test.
> > 
> > Signed-off-by: Marek Marczykowski-Górecki
> > <marmarek@invisiblethingslab.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> CC Oleksii.
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii 
> 
> > ---
> > Requires rebuilding test-artifacts/kernel/6.1.19
> 
> Ok.
> 
> But on a tangent, shouldn't that move forwards somewhat?
> 
> > 
> > I'm actually not sure if there is a sense in testing HVM domU on both
> > runners, when PVH domU variant is already tested on both. Are there
> > any differences between Intel and AMD relevant for QEMU in dom0?
> 
> It's not just Qemu, it's also HVMLoader, and the particulars of VT-x/SVM
> VMExit decode information in order to generate ioreqs.
> 
> I'd firmly suggest having both.
> 
> ~Andrew
Re: [PATCH for-4.19 v1] automation: add a test for HVM domU on PVH dom0
Posted by Marek Marczykowski-Górecki 5 months, 2 weeks ago
On Mon, Jun 10, 2024 at 04:25:01PM +0100, Andrew Cooper wrote:
> On 10/06/2024 2:32 pm, Marek Marczykowski-Górecki wrote:
> > This tests if QEMU works in PVH dom0. QEMU in dom0 requires enabling TUN
> > in the kernel, so do that too.
> >
> > Add it to both x86 runners, similar to the PVH domU test.
> >
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> CC Oleksii.
> 
> > ---
> > Requires rebuilding test-artifacts/kernel/6.1.19
> 
> Ok.
> 
> But on a tangent, shouldn't that move forwards somewhat?

There is already "[PATCH 08/12] automation: update kernel for x86 tests"
in the stubdom test series. And as noted in the cover letter there, most
patches can be applied independently, and also they got R-by/A-by from
Stefano already.

> > I'm actually not sure if there is a sense in testing HVM domU on both
> > runners, when PVH domU variant is already tested on both. Are there any
> > differences between Intel and AMD relevant for QEMU in dom0?
> 
> It's not just Qemu, it's also HVMLoader, and the particulars of VT-x/SVM
> VMExit decode information in order to generate ioreqs.

For just HVM, we have PCI passthrough tests on both - they run HVM (but
on PV dom0). My question was more about PVH-dom0 specific parts.

> I'd firmly suggest having both.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
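
(For anyone comparing runner logs: the PV-dom0 PCI passthrough jobs and the
new PVH-dom0 job can be told apart by the hypervisor command line; a sketch,
with the grep target as an illustrative assumption:)

# Inside dom0, or grepping a captured serial log:
xl info | grep xen_commandline   # contains "dom0=pvh" only for the dom0pvh* variants
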
Re: [PATCH for-4.19 v1] automation: add a test for HVM domU on PVH dom0
Posted by Andrew Cooper 5 months, 2 weeks ago
On 10/06/2024 7:47 pm, Marek Marczykowski-Górecki wrote:
> On Mon, Jun 10, 2024 at 04:25:01PM +0100, Andrew Cooper wrote:
>> On 10/06/2024 2:32 pm, Marek Marczykowski-Górecki wrote:
>>> This tests if QEMU works in PVH dom0. QEMU in dom0 requires enabling TUN
>>> in the kernel, so do that too.
>>>
>>> Add it to both x86 runners, similar to the PVH domU test.
>>>
>>> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> CC Oleksii.
>>
>>> ---
>>> Requires rebuilding test-artifacts/kernel/6.1.19
>> Ok.
>>
>> But on a tangent, shouldn't that move forwards somewhat?
> There is already "[PATCH 08/12] automation: update kernel for x86 tests"
> in the stubdom test series. And as noted in the cover letter there, most
> patches can be applied independently, and also they got R-by/A-by from
> Stefano already.

I've got yet more fixes to come too.  I'll chase down some CI R-ack's in
due course.

>
>>> I'm actually not sure if there is a sense in testing HVM domU on both
>>> runners, when PVH domU variant is already tested on both. Are there any
>>> differences between Intel and AMD relevant for QEMU in dom0?
>> It's not just Qemu, it's also HVMLoader, and the particulars of VT-x/SVM
>> VMExit decode information in order to generate ioreqs.
> For just HVM, we have PCI passthrough tests on both - they run HVM (but
> on PV dom0). My question was more about PVH-dom0 specific parts.

Still a firm recommendation for both.

Dom0 is a very different set of codepaths to other domains, and unlike
PV (where almost all logic is common), for PVH there's large variety of
VT-x/SVM specifics both in terms of configuring dom0 to start with, and
at runtime.

Within XenRT, it's been very rare that we've found a PVH dom0 bug
affecting Intel and AMD equally.

~Andrew
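
(As a quick sanity check of which vendor-specific HVM path a given runner is
exercising, a sketch; the exact wording of Xen's boot messages is an
assumption:)

# Xen's boot log shows which vendor support was enabled (VT-x vs SVM).
xl dmesg | grep -i -E 'vmx|svm' | head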