[Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip

Cleber Rosa posted 3 patches 6 years, 9 months ago
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/20170721034730.25612-1-crosa@redhat.com
Test FreeBSD passed
Test checkpatch failed
Test docker passed
Test s390x passed
[Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Cleber Rosa 6 years, 9 months ago
This is a follow-up to a previous discussion about reported failures when
running some qemu-iotests.  It turns out the failures were due to missing
libraries, which, in turn, were reflected in the host build configuration.

This series introduces a tool that can check both host and target level
build configurations.  On top of that, it adds a function to be used
in qemu-iotests.  Finally, as an example, it sets a test to be skipped
if the required feature is not enabled in the host build configuration.
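
As an illustration, the requirement added to test 087 in patch 3 boils
down to a single line using the helper from patch 2 (a sketch; the exact
spelling may differ from the final version):

  _require_feature CONFIG_LINUX_AIO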

Cleber Rosa (3):
  scripts: introduce buildconf.py
  qemu-iotests: add _require_feature() function
  qemu-iotests: require CONFIG_LINUX_AIO for test 087

 scripts/buildconf.py         | 278 +++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/087       |   1 +
 tests/qemu-iotests/check     |   2 +
 tests/qemu-iotests/common.rc |   7 ++
 4 files changed, 288 insertions(+)

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by no-reply@patchew.org 6 years, 9 months ago
Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Subject: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Message-id: 20170721034730.25612-1-crosa@redhat.com

=== TEST SCRIPT BEGIN ===
#!/bin/bash

BASE=base
n=1
total=$(git log --oneline $BASE.. | wc -l)
failed=0

git config --local diff.renamelimit 0
git config --local diff.renames True

commits="$(git log --format=%H --reverse $BASE..)"
for c in $commits; do
    echo "Checking PATCH $n/$total: $(git log -n 1 --format=%s $c)..."
    if ! git show $c --format=email | ./scripts/checkpatch.pl --mailback -; then
        failed=1
        echo
    fi
    n=$((n+1))
done

exit $failed
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag]         patchew/20170721041952.45950-1-aik@ozlabs.ru -> patchew/20170721041952.45950-1-aik@ozlabs.ru
Switched to a new branch 'test'
8e1715d qemu-iotests: require CONFIG_LINUX_AIO for test 087
2ba2d91 qemu-iotests: add _require_feature() function
9041490 scripts: introduce buildconf.py

=== OUTPUT BEGIN ===
Checking PATCH 1/3: scripts: introduce buildconf.py...
ERROR: code indent should never use tabs
#107: FILE: scripts/buildconf.py:59:
+^I@echo $({conf})$

WARNING: line over 80 characters
#277: FILE: scripts/buildconf.py:229:
+                                  'set to "y".  This causes this tool to be silent '

WARNING: line over 80 characters
#278: FILE: scripts/buildconf.py:230:
+                                  'and return only a status code of either 0 (if '

WARNING: line over 80 characters
#279: FILE: scripts/buildconf.py:231:
+                                  'configuration is set) or non-zero otherwise.'))

WARNING: line over 80 characters
#281: FILE: scripts/buildconf.py:233:
+                            help=('Do not attempt to use a default target if one '

WARNING: line over 80 characters
#282: FILE: scripts/buildconf.py:234:
+                                  'was not explicitly given in the command line'))

total: 1 errors, 5 warnings, 278 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 2/3: qemu-iotests: add _require_feature() function...
Checking PATCH 3/3: qemu-iotests: require CONFIG_LINUX_AIO for test 087...
=== OUTPUT END ===

Test command exited with code: 1


---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@freelists.org
Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Stefan Hajnoczi 6 years, 9 months ago
On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
> This is a follow up to a previous discussion about reported failures when
> running some qemu-iotests.  Turns out the failures were due to missing
> libraries, which in turn, reflected on the host build configuration.
> 
> This series introduces a tool that can check both host and target level
> build configurations.  On top of that, it adds a function to to be used
> on qemu-iotests.  Finally, as an example, it sets a test to be skipped
> if the required feature is not enable on the host build configuration.
> 
> Cleber Rosa (3):
>   scripts: introduce buildconf.py
>   qemu-iotests: add _require_feature() function
>   qemu-iotests: require CONFIG_LINUX_AIO for test 087
> 
>  scripts/buildconf.py         | 278 +++++++++++++++++++++++++++++++++++++++++++
>  tests/qemu-iotests/087       |   1 +
>  tests/qemu-iotests/check     |   2 +
>  tests/qemu-iotests/common.rc |   7 ++
>  4 files changed, 288 insertions(+)
> 

It should be possible to run iotests against any
qemu/qemu-img/qemu-io/qemu-nbd binaries - even if no build root is
available.

How about invoking qemu-img and tools to determine their capabilities?

At the beginning of ./check, query the qemu/qemu-img/qemu-io/qemu-nbd
binaries for specific features.  This produces a set of available
features and tests can say:

  _supported_feature aio_native

This feature can be checked by opening an image file:

  qemu-io --format raw --nocache --native-aio --cmd quit test.img
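
A rough sketch of how ./check could collect such probes into a feature
set (illustrative only; $QEMU_IO_PROG, $TEST_IMG and _notrun are assumed
to come from the existing common.rc/common.config machinery):

  AVAILABLE_FEATURES=""
  if "$QEMU_IO_PROG" --format raw --nocache --native-aio --cmd quit \
          "$TEST_IMG" >/dev/null 2>&1; then
      AVAILABLE_FEATURES="$AVAILABLE_FEATURES aio_native"
  fi

  # tests would then declare what they need
  _supported_feature()
  {
      case " $AVAILABLE_FEATURES " in
          *" $1 "*) ;;
          *) _notrun "feature $1 not available" ;;
      esac
  }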
Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Cleber Rosa 6 years, 9 months ago

On 07/21/2017 08:33 AM, Stefan Hajnoczi wrote:
> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
>> This is a follow up to a previous discussion about reported failures when
>> running some qemu-iotests.  Turns out the failures were due to missing
>> libraries, which in turn, reflected on the host build configuration.
>>
>> This series introduces a tool that can check both host and target level
>> build configurations.  On top of that, it adds a function to to be used
>> on qemu-iotests.  Finally, as an example, it sets a test to be skipped
>> if the required feature is not enable on the host build configuration.
>>
>> Cleber Rosa (3):
>>   scripts: introduce buildconf.py
>>   qemu-iotests: add _require_feature() function
>>   qemu-iotests: require CONFIG_LINUX_AIO for test 087
>>
>>  scripts/buildconf.py         | 278 +++++++++++++++++++++++++++++++++++++++++++
>>  tests/qemu-iotests/087       |   1 +
>>  tests/qemu-iotests/check     |   2 +
>>  tests/qemu-iotests/common.rc |   7 ++
>>  4 files changed, 288 insertions(+)
>>
> 
> It should be possible to run iotests against any
> qemu/qemu-img/qemu-io/qemu-nbd binaries - even if no build root is
> available.
> 

Yes, I actually overlooked that point.

> How about invoking qemu-img and tools to determine their capabilities?
> 

Can capabilities be consistently queried?  I would love not to depend on
a build root if the same information can be reliably obtained from the
binaries themselves.

> At the beginning of ./check, query the qemu/qemu-img/qemu-io/qemu-nbd
> binaries for specific features.  This produces a set of available
> features and tests can say:
> 

That would be another ad hoc mechanism, limited to qemu-iotests.  From a
test writer's perspective, what QEMU lacks is a uniform way to introspect
its capabilities.

>   _supported_feature aio_native
> 
> This feature can be checked by opening an image file:
> 
>   qemu-io --format raw --nocache --native-aio --cmd quit test.img
> 

While the solution I proposed is not cheap in terms of what it executes to
query capabilities (it runs make on every query), it was cheap to write,
it sets a universal standard, and it's mostly maintenance-free.  A key
point is that the entire build configuration (capabilities?) is
predictable and available across all subsystems and all targets.

To be honest, I think your suggestion is terribly expensive in the long
run.  In the best-case scenario, it requires one explicit check to be
written for each capability, which at some point may start to look like
a test itself.  The capability naming and behavior will probably end up
becoming inconsistent.

I feel a lot safer relying on a "capability statement" as the foundation
of tests than on a number of custom "capability checks".

But I agree that the build root requirement is an issue.

Would embedding the configured capabilities in the binaries themselves be
acceptable?  Something like a standard option such as
`-query-capabilities` or `-debug-build-info` that would basically list
the contents of "config-host.h" and similar files?
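
Purely as an illustration of how a test could consume that (both option
names above are hypothetical, and so are the output format and the
$QEMU_PROG variable assumed here):

  if "$QEMU_PROG" -query-capabilities 2>/dev/null | \
          grep -q '^CONFIG_LINUX_AIO=y$'; then
      echo "native AIO was compiled in"
  fi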

Thanks for reviewing the idea and pointing out this important limitation!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Markus Armbruster 6 years, 8 months ago
Cleber Rosa <crosa@redhat.com> writes:

> On 07/21/2017 08:33 AM, Stefan Hajnoczi wrote:
>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
>>> This is a follow up to a previous discussion about reported failures when
>>> running some qemu-iotests.  Turns out the failures were due to missing
>>> libraries, which in turn, reflected on the host build configuration.
>>>
>>> This series introduces a tool that can check both host and target level
>>> build configurations.  On top of that, it adds a function to to be used
>>> on qemu-iotests.  Finally, as an example, it sets a test to be skipped
>>> if the required feature is not enable on the host build configuration.
>>>
>>> Cleber Rosa (3):
>>>   scripts: introduce buildconf.py
>>>   qemu-iotests: add _require_feature() function
>>>   qemu-iotests: require CONFIG_LINUX_AIO for test 087
>>>
>>>  scripts/buildconf.py         | 278 +++++++++++++++++++++++++++++++++++++++++++
>>>  tests/qemu-iotests/087       |   1 +
>>>  tests/qemu-iotests/check     |   2 +
>>>  tests/qemu-iotests/common.rc |   7 ++
>>>  4 files changed, 288 insertions(+)
>>>
>> 
>> It should be possible to run iotests against any
>> qemu/qemu-img/qemu-io/qemu-nbd binaries - even if no build root is
>> available.
>> 
>
> Yes, I actually overlooked that point.
>
>> How about invoking qemu-img and tools to determine their capabilities?
>> 
>
> Can capabilities be consistently queried?  I would love to not count on
> a build root if the same information can be consistently queried from
> the binaries themselves.
>
>> At the beginning of ./check, query the qemu/qemu-img/qemu-io/qemu-nbd
>> binaries for specific features.  This produces a set of available
>> features and tests can say:
>> 
>
> Which would be another ad-hoc thing, limited to qemu-iotests.  From a
> test writer perspective, QEMU lacks is a uniform way to introspect its
> capabilities.

The closest we have is query-qmp-schema.  It's uniform, but limited to
the qapified part of QMP (see my KVM Forum 2015 talk[*] for details).
Something similar for the command line would be nice, and I hope to get
there some day.  Until then, you can often reason like "if QMP supports
X, then surely the command line supports X'".

We commonly reason "if INTERFACE supports FEATURE-API, then QEMU surely
supports FEATURE".

[...]

[*] https://events.linuxfoundation.org/sites/events/files/slides/armbru-qemu-introspection.pdf

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Daniel P. Berrange 6 years, 9 months ago
On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
> > This is a follow up to a previous discussion about reported failures when
> > running some qemu-iotests.  Turns out the failures were due to missing
> > libraries, which in turn, reflected on the host build configuration.
> > 
> > This series introduces a tool that can check both host and target level
> > build configurations.  On top of that, it adds a function to to be used
> > on qemu-iotests.  Finally, as an example, it sets a test to be skipped
> > if the required feature is not enable on the host build configuration.
> > 
> > Cleber Rosa (3):
> >   scripts: introduce buildconf.py
> >   qemu-iotests: add _require_feature() function
> >   qemu-iotests: require CONFIG_LINUX_AIO for test 087
> > 
> >  scripts/buildconf.py         | 278 +++++++++++++++++++++++++++++++++++++++++++
> >  tests/qemu-iotests/087       |   1 +
> >  tests/qemu-iotests/check     |   2 +
> >  tests/qemu-iotests/common.rc |   7 ++
> >  4 files changed, 288 insertions(+)
> > 
> 
> It should be possible to run iotests against any
> qemu/qemu-img/qemu-io/qemu-nbd binaries - even if no build root is
> available.

For the sake of argument, two options for the non-buildroot scenario:

 - assume all features are present, so we're no worse than we are today.
 - install config.h (or same data in a structured format) to
   /usr/share/qemu so its available for query

The downside of 2, of course, is that other non-iotests apps might start
to depend on it.
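
From a test's point of view, option 2 might look something like this
(path and file format are made up for the sake of the example):

  conf=/usr/share/qemu/config-host.mak   # hypothetical install location
  grep -q '^CONFIG_LINUX_AIO=y$' "$conf" || echo "skip: no native AIO in build"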

> How about invoking qemu-img and tools to determine their capabilities?
> 
> At the beginning of ./check, query the qemu/qemu-img/qemu-io/qemu-nbd
> binaries for specific features.  This produces a set of available
> features and tests can say:
> 
>   _supported_feature aio_native
> 
> This feature can be checked by opening an image file:
> 
>   qemu-io --format raw --nocache --native-aio --cmd quit test.img

I think this is useful as a general approach, because there are bound
to be a number of features which are available at compile time but
cannot actually be used at runtime, e.g. not every filesystem supports
O_DIRECT, even if we've built support for it.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Cleber Rosa 6 years, 9 months ago

On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
>>> This is a follow up to a previous discussion about reported failures when
>>> running some qemu-iotests.  Turns out the failures were due to missing
>>> libraries, which in turn, reflected on the host build configuration.
>>>
>>> This series introduces a tool that can check both host and target level
>>> build configurations.  On top of that, it adds a function to to be used
>>> on qemu-iotests.  Finally, as an example, it sets a test to be skipped
>>> if the required feature is not enable on the host build configuration.
>>>
>>> Cleber Rosa (3):
>>>   scripts: introduce buildconf.py
>>>   qemu-iotests: add _require_feature() function
>>>   qemu-iotests: require CONFIG_LINUX_AIO for test 087
>>>
>>>  scripts/buildconf.py         | 278 +++++++++++++++++++++++++++++++++++++++++++
>>>  tests/qemu-iotests/087       |   1 +
>>>  tests/qemu-iotests/check     |   2 +
>>>  tests/qemu-iotests/common.rc |   7 ++
>>>  4 files changed, 288 insertions(+)
>>>
>>
>> It should be possible to run iotests against any
>> qemu/qemu-img/qemu-io/qemu-nbd binaries - even if no build root is
>> available.
> 
> For sake of argument, two options for non-buildroot scenario
> 
>  - assume all features are present, so we're no worse than we are today.
>  - install config.h (or same data in a structured format) to
>    /usr/share/qemu so its available for query
> 
> Downside of 2 of course is that other non-iotests apps might start
> to depend on it
> 

Actually, I see #2 as a worthy goal.  Not in the strict sense of the
implementation you suggested, but as a way for *any code* (including
non-iotests) to have a baseline to work with.

>> How about invoking qemu-img and tools to determine their capabilities?
>>
>> At the beginning of ./check, query the qemu/qemu-img/qemu-io/qemu-nbd
>> binaries for specific features.  This produces a set of available
>> features and tests can say:
>>
>>   _supported_feature aio_native
>>
>> This feature can be checked by opening an image file:
>>
>>   qemu-io --format raw --nocache --native-aio --cmd quit test.img
> 
> I think this is useful as a general approach, because there are bound
> to be a number of features which are available at compile time, but
> cannot actually be used at runtime. eg not every filesystem supports
> O_DIRECT, even if we've built support for it.
> 

I strongly believe this kind of ad hoc check has value; it complements
what is being proposed here, but it doesn't replace it at all.  Using the
previous example, suppose a test is being written to exercise aio on
various filesystems.  It'd be extremely useful to rely on the information
that qemu-io itself has been built with native aio support.  With that
information as a safe baseline, plus run time information about the
filesystem it's operating on, a much cleaner expected outcome can be
defined.

Without the static capabilities defined, the dynamic check would be
influenced by the run time environment.  It would really mean "qemu-io
running on this environment (filesystem?) can do native aio".  Again,
that's not the best type of information to depend on when writing tests.
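
A concrete sketch of that combination (names are illustrative;
_require_feature is the helper from this series, the O_DIRECT probe is
an ad hoc runtime check assuming GNU dd):

  _require_feature CONFIG_LINUX_AIO     # static baseline from the build
  if ! dd if=/dev/zero of="$TEST_DIR/odirect-probe" bs=4k count=1 \
          oflag=direct 2>/dev/null; then
      echo "no O_DIRECT on the filesystem under $TEST_DIR: expect fallback"
  fi
  rm -f "$TEST_DIR/odirect-probe"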

> Regards,
> Daniel
> 

Regards!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Stefan Hajnoczi 6 years, 9 months ago
On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
> > On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
> >> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
> Without the static capabilities defined, the dynamic check would be
> influenced by the run time environment.  It would really mean "qemu-io
> running on this environment (filesystem?) can do native aio".  Again,
> that's not the best type of information to depend on when writing tests.

Can you explain this more?

It seems logical to me that if qemu-io in this environment cannot do
aio=native then we must skip those tests.

Stefan
Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Cleber Rosa 6 years, 9 months ago

On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
> On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
>> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
>>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
>>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
>> Without the static capabilities defined, the dynamic check would be
>> influenced by the run time environment.  It would really mean "qemu-io
>> running on this environment (filesystem?) can do native aio".  Again,
>> that's not the best type of information to depend on when writing tests.
> 
> Can you explain this more?
> 
> It seems logical to me that if qemu-io in this environment cannot do
> aio=native then we must skip those tests.
> 
> Stefan
> 

OK, let's abstract a bit more.  Let's take this part of your statement:

 "if qemu-io in this environment cannot do aio=native"

Let's call that a feature check.  Depending on how the *feature check*
is written, a negative result may hide a test failure, because it would
now be skipped.

Suppose that a feature check for "SDL display" is such that you run
"qemu -display sdl".  A *feature failure* here (SDL init is broken), or
an environment issue (an unset DISPLAY), will cause an SDL test skip.

If you base the test skip decision on a simple lookup in a list of
features (I'm not calling them build configuration anymore, as that is
clearly not attractive), this won't happen.  A "feature statement check"
will let the test proceed, and the *failure* will be reported.

I hope the pattern is visible.
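
Sketching the contrast with the aio example (helper and variable names
are illustrative, not code from this series):

  # (a) ad hoc feature check: a failure in this probe, whatever its
  #     cause, silently turns the test into a skip
  "$QEMU_IO_PROG" --format raw --nocache --native-aio --cmd quit \
      "$TEST_IMG" >/dev/null 2>&1 || _notrun "aio=native not usable here"

  # (b) feature statement: only the recorded build configuration is
  #     consulted, so a runtime breakage still shows up as a failure
  _require_feature CONFIG_LINUX_AIO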

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Daniel P. Berrange 6 years, 9 months ago
On Tue, Jul 25, 2017 at 12:16:13PM -0400, Cleber Rosa wrote:
> 
> 
> On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
> > On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
> >> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
> >>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
> >>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
> >> Without the static capabilities defined, the dynamic check would be
> >> influenced by the run time environment.  It would really mean "qemu-io
> >> running on this environment (filesystem?) can do native aio".  Again,
> >> that's not the best type of information to depend on when writing tests.
> > 
> > Can you explain this more?
> > 
> > It seems logical to me that if qemu-io in this environment cannot do
> > aio=native then we must skip those tests.
> > 
> > Stefan
> > 
> 
> OK, let's abstract a bit more.  Let's take this part of your statement:
> 
>  "if qemu-io in this environment cannot do aio=native"
> 
> Let's call that a feature check.  Depending on how the *feature check*
> is written, a negative result may hide a test failure, because it would
> now be skipped.
> 
> Suppose that a feature check for "SDL display" is such that you run
> "qemu -display sdl".  A *feature failure* here (SDL init is broken), or
> an environment issue (DISPLAY=), will cause a SDL test skip.

You could have a way to statically define what features any given run
of the test suite should enable, then report failure if they were not
detected.

This is a similar situation to that seen with configure scripts. If invoked
with no --enable-xxx flags, it will probe for features & enable them if
found.  This means you can accidentally build without expected features if
you have a missing -devel package, or a header/library is broken in some
way. This is why configure prints a summary of which features it actually
found. It is also why when building binary packages like RPMs, it is common
to explicitly give --enable-xxx flags for all features you expect to see.
Automatic enablement is nonetheless still useful for people in general.

So if we applied this kind of approach to testing, then automated test
systems, at least, ought to provide a fixed list of features they expect
to be present for the tests.  Then, if any feature accidentally broke,
the tests would error.
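
For example (flag names here are illustrative; ./configure --help has the
authoritative list):

  ./configure --enable-linux-aio --enable-sdl --enable-vnc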

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Cleber Rosa 6 years, 9 months ago

On 07/25/2017 12:24 PM, Daniel P. Berrange wrote:
>>
>> OK, let's abstract a bit more.  Let's take this part of your statement:
>>
>>  "if qemu-io in this environment cannot do aio=native"
>>
>> Let's call that a feature check.  Depending on how the *feature check*
>> is written, a negative result may hide a test failure, because it would
>> now be skipped.
>>
>> Suppose that a feature check for "SDL display" is such that you run
>> "qemu -display sdl".  A *feature failure* here (SDL init is broken), or
>> an environment issue (DISPLAY=), will cause a SDL test skip.
> 
> You could have a way to statically define what features any given run
> of the test suite should enable, then report failure if they were not
> detected.
> 

You hit a key point here: statically define(d).  As I said before,
feature statements are a safer foundation upon which to base tests.  Ad
hoc checks, as suggested by Stefan, are definitely not.

> This is a similar situation to that seen with configure scripts. If invoked
> with no --enable-xxx flags, it will probe for features & enable them if
> found.  This means you can accidentally build without expected features if
> you have a missing -devel package, or a header/library is broken in some
> way. This is why configure prints a summary of which features it actually
> found. It is also why when building binary packages like RPMs, it is common
> to explicitly give --enable-xxx flags for all features you expect to see.
> Automatic enablement is none the less still useful for people in general.
> 
> So if we applied this kind of approach for testing, then any automated
> test systems at least, ought to provide a fixed list of features they
> expect to be present for tests. So if any features accidentally broke
> the tests would error.
> 
> Regards,
> Daniel
> 

Right.  The key question here seems to be the distance of the "fixed
list of features" from the test itself.  For instance, think of this
workflow/approach:

 1) ./scripts/configured-features-to-feature-list.sh > ~/feature_list
 2) tweak ~/feature_list
 3) rpm -e SDL-devel
 4) ./configure --enable-sdl
 5) make
 6) ./scripts/run-test-suite.sh --only-features=~/feature_list

This would only run tests that are expected to PASS within the given
feature list.  The test runner (run-test-suite.sh) would select only
tests that match the features given.  No SKIPs would be expected as the
outcome of *any test*.

The other approach is to let the features be matched per test, and SKIPs
would then be OK.  The downside to this is that a "--enable-xxx" with a
missing "-devel" package, as you exemplified, would not show up as an
ERROR.

Does that make sense?

Regards!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Stefan Hajnoczi 6 years, 9 months ago
On Tue, Jul 25, 2017 at 12:16:13PM -0400, Cleber Rosa wrote:
> On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
> > On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
> >> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
> >>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
> >>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
> >> Without the static capabilities defined, the dynamic check would be
> >> influenced by the run time environment.  It would really mean "qemu-io
> >> running on this environment (filesystem?) can do native aio".  Again,
> >> that's not the best type of information to depend on when writing tests.
> > 
> > Can you explain this more?
> > 
> > It seems logical to me that if qemu-io in this environment cannot do
> > aio=native then we must skip those tests.
> > 
> > Stefan
> > 
> 
> OK, let's abstract a bit more.  Let's take this part of your statement:
> 
>  "if qemu-io in this environment cannot do aio=native"
> 
> Let's call that a feature check.  Depending on how the *feature check*
> is written, a negative result may hide a test failure, because it would
> now be skipped.

You are saying a pass->skip transition can hide a failure, but ./check
tracks skipped tests.  See tests/qemu-iotests/check.log for a
pass/fail/skip history.

It is the job of the CI system to flag up pass/fail/skip transitions.
You're no worse off using feature tests.

Stefan
Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Cleber Rosa 6 years, 9 months ago

On 07/26/2017 01:58 PM, Stefan Hajnoczi wrote:
> On Tue, Jul 25, 2017 at 12:16:13PM -0400, Cleber Rosa wrote:
>> On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
>>> On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
>>>> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
>>>>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
>>>>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
>>>> Without the static capabilities defined, the dynamic check would be
>>>> influenced by the run time environment.  It would really mean "qemu-io
>>>> running on this environment (filesystem?) can do native aio".  Again,
>>>> that's not the best type of information to depend on when writing tests.
>>>
>>> Can you explain this more?
>>>
>>> It seems logical to me that if qemu-io in this environment cannot do
>>> aio=native then we must skip those tests.
>>>
>>> Stefan
>>>
>>
>> OK, let's abstract a bit more.  Let's take this part of your statement:
>>
>>  "if qemu-io in this environment cannot do aio=native"
>>
>> Let's call that a feature check.  Depending on how the *feature check*
>> is written, a negative result may hide a test failure, because it would
>> now be skipped.
> 
> You are saying a pass->skip transition can hide a failure but ./check
> tracks skipped tests.  See tests/qemu-iotests/check.log for a
> pass/fail/skip history.
> 

You're not focusing on the problem here.  The problem is that a test
that *was not* supposed to be skipped would be skipped.

Let me reinforce my point, and you can address it directly: feature
checks like the ones you proposed can easily produce false negatives.
That is nothing hypothetical or far-fetched; I gave very reasonable
examples in this same thread.

Do you think that's OK because the skip count will get an increment?
That's exactly one of the main concerns raised in the original thread
(and in break room conversations) that motivated this experiment.

> It is the job of the CI system to flag up pass/fail/skip transitions.
> You're no worse off using feature tests.
> 
> Stefan
> 

What I'm trying to help us achieve here is a reliable and predictable
way for the same test job execution to be comparable across
environments: from the individual developer's workstation to CI, QA, etc.

Please let me know if you really believe this should *not* be done here
(upstream QEMU).

Regards!

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Stefan Hajnoczi 6 years, 9 months ago
On Wed, Jul 26, 2017 at 02:24:02PM -0400, Cleber Rosa wrote:
> 
> 
> On 07/26/2017 01:58 PM, Stefan Hajnoczi wrote:
> > On Tue, Jul 25, 2017 at 12:16:13PM -0400, Cleber Rosa wrote:
> >> On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
> >>> On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
> >>>> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
> >>>>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
> >>>>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
> >>>> Without the static capabilities defined, the dynamic check would be
> >>>> influenced by the run time environment.  It would really mean "qemu-io
> >>>> running on this environment (filesystem?) can do native aio".  Again,
> >>>> that's not the best type of information to depend on when writing tests.
> >>>
> >>> Can you explain this more?
> >>>
> >>> It seems logical to me that if qemu-io in this environment cannot do
> >>> aio=native then we must skip those tests.
> >>>
> >>> Stefan
> >>>
> >>
> >> OK, let's abstract a bit more.  Let's take this part of your statement:
> >>
> >>  "if qemu-io in this environment cannot do aio=native"
> >>
> >> Let's call that a feature check.  Depending on how the *feature check*
> >> is written, a negative result may hide a test failure, because it would
> >> now be skipped.
> > 
> > You are saying a pass->skip transition can hide a failure but ./check
> > tracks skipped tests.  See tests/qemu-iotests/check.log for a
> > pass/fail/skip history.
> > 
> 
> You're not focusing on the problem here.  The problem is that a test
> that *was not* supposed to be skipped, would be skipped.

As Daniel Berrange mentioned, ./configure has the same problem.  You
cannot just run it blindly because it silently disables features.

What I'm saying is that in addition to watching ./configure closely, you
also need to look at the skipped tests that ./check reports.  If you do
that then you can be sure the expected set of tests is passing.

> > It is the job of the CI system to flag up pass/fail/skip transitions.
> > You're no worse off using feature tests.
> > 
> > Stefan
> > 
> 
> What I'm trying to help us achieve here is a reliable and predictable
> way for the same test job execution to be comparable across
> environments.  From the individual developer workstation, CI, QA etc.

1. Use ./configure --enable-foo options for all desired features.
2. Run the ./check command-line and there should be no unexpected skips
   like this:

087         [not run] missing aio=native support

To me this seems to address the problem.

I have mentioned the issues with the build flags solution: it creates a
dependency on the build environment and forces test feature checks to
duplicate build dependency logic.  This is why I think feature tests are
a cleaner solution.

Stefan
Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Markus Armbruster 6 years, 8 months ago
Stefan Hajnoczi <stefanha@gmail.com> writes:

> On Wed, Jul 26, 2017 at 02:24:02PM -0400, Cleber Rosa wrote:
>> 
>> 
>> On 07/26/2017 01:58 PM, Stefan Hajnoczi wrote:
>> > On Tue, Jul 25, 2017 at 12:16:13PM -0400, Cleber Rosa wrote:
>> >> On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
>> >>> On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
>> >>>> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
>> >>>>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
>> >>>>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
>> >>>> Without the static capabilities defined, the dynamic check would be
>> >>>> influenced by the run time environment.  It would really mean "qemu-io
>> >>>> running on this environment (filesystem?) can do native aio".  Again,
>> >>>> that's not the best type of information to depend on when writing tests.
>> >>>
>> >>> Can you explain this more?
>> >>>
>> >>> It seems logical to me that if qemu-io in this environment cannot do
>> >>> aio=native then we must skip those tests.
>> >>>
>> >>> Stefan
>> >>>
>> >>
>> >> OK, let's abstract a bit more.  Let's take this part of your statement:
>> >>
>> >>  "if qemu-io in this environment cannot do aio=native"
>> >>
>> >> Let's call that a feature check.  Depending on how the *feature check*
>> >> is written, a negative result may hide a test failure, because it would
>> >> now be skipped.
>> > 
>> > You are saying a pass->skip transition can hide a failure but ./check
>> > tracks skipped tests.  See tests/qemu-iotests/check.log for a
>> > pass/fail/skip history.
>> > 
>> 
>> You're not focusing on the problem here.  The problem is that a test
>> that *was not* supposed to be skipped, would be skipped.
>
> As Daniel Berrange mentioned, ./configure has the same problem.  You
> cannot just run it blindly because it silently disables features.
>
> What I'm saying is that in addition to watching ./configure closely, you
> also need to look at the skipped tests that ./check reports.  If you do
> that then you can be sure the expected set of tests is passing.
>
>> > It is the job of the CI system to flag up pass/fail/skip transitions.
>> > You're no worse off using feature tests.
>> > 
>> > Stefan
>> > 
>> 
>> What I'm trying to help us achieve here is a reliable and predictable
>> way for the same test job execution to be comparable across
>> environments.  From the individual developer workstation, CI, QA etc.
>
> 1. Use ./configure --enable-foo options for all desired features.
> 2. Run the ./check command-line and there should be no unexpected skips
>    like this:
>
> 087         [not run] missing aio=native support
>
> To me this seems to address the problem.
>
> I have mentioned the issues with the build flags solution: it creates a
> dependency on the build environment and forces test feature checks to
> duplicate build dependency logic.  This is why I think feature tests are
> a cleaner solution.

I suspect the actual problem here is that the qemu-iotests harness is
not integrated in the build process.  For other tests, we specify the
tests to run in a Makefile, and use the same configuration mechanism as
for building stuff conditionally.

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Stefan Hajnoczi 6 years, 8 months ago
On Tue, Aug 08, 2017 at 10:06:04AM +0200, Markus Armbruster wrote:
> Stefan Hajnoczi <stefanha@gmail.com> writes:
> 
> > On Wed, Jul 26, 2017 at 02:24:02PM -0400, Cleber Rosa wrote:
> >> 
> >> 
> >> On 07/26/2017 01:58 PM, Stefan Hajnoczi wrote:
> >> > On Tue, Jul 25, 2017 at 12:16:13PM -0400, Cleber Rosa wrote:
> >> >> On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
> >> >>> On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
> >> >>>> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
> >> >>>>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
> >> >>>>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
> >> >>>> Without the static capabilities defined, the dynamic check would be
> >> >>>> influenced by the run time environment.  It would really mean "qemu-io
> >> >>>> running on this environment (filesystem?) can do native aio".  Again,
> >> >>>> that's not the best type of information to depend on when writing tests.
> >> >>>
> >> >>> Can you explain this more?
> >> >>>
> >> >>> It seems logical to me that if qemu-io in this environment cannot do
> >> >>> aio=native then we must skip those tests.
> >> >>>
> >> >>> Stefan
> >> >>>
> >> >>
> >> >> OK, let's abstract a bit more.  Let's take this part of your statement:
> >> >>
> >> >>  "if qemu-io in this environment cannot do aio=native"
> >> >>
> >> >> Let's call that a feature check.  Depending on how the *feature check*
> >> >> is written, a negative result may hide a test failure, because it would
> >> >> now be skipped.
> >> > 
> >> > You are saying a pass->skip transition can hide a failure but ./check
> >> > tracks skipped tests.  See tests/qemu-iotests/check.log for a
> >> > pass/fail/skip history.
> >> > 
> >> 
> >> You're not focusing on the problem here.  The problem is that a test
> >> that *was not* supposed to be skipped, would be skipped.
> >
> > As Daniel Berrange mentioned, ./configure has the same problem.  You
> > cannot just run it blindly because it silently disables features.
> >
> > What I'm saying is that in addition to watching ./configure closely, you
> > also need to look at the skipped tests that ./check reports.  If you do
> > that then you can be sure the expected set of tests is passing.
> >
> >> > It is the job of the CI system to flag up pass/fail/skip transitions.
> >> > You're no worse off using feature tests.
> >> > 
> >> > Stefan
> >> > 
> >> 
> >> What I'm trying to help us achieve here is a reliable and predictable
> >> way for the same test job execution to be comparable across
> >> environments.  From the individual developer workstation, CI, QA etc.
> >
> > 1. Use ./configure --enable-foo options for all desired features.
> > 2. Run the ./check command-line and there should be no unexpected skips
> >    like this:
> >
> > 087         [not run] missing aio=native support
> >
> > To me this seems to address the problem.
> >
> > I have mentioned the issues with the build flags solution: it creates a
> > dependency on the build environment and forces test feature checks to
> > duplicate build dependency logic.  This is why I think feature tests are
> > a cleaner solution.
> 
> I suspect the actual problem here is that the qemu-iotests harness is
> not integrated in the build process.  For other tests, we specify the
> tests to run in a Makefile, and use the same configuration mechanism as
> for building stuff conditionally.

The ability to run tests against QEMU binaries without a build
environment is useful though.  It would still be possible to symlink to
external binaries but then the build feature information could be
incorrect.

Stefan
Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Markus Armbruster 6 years, 8 months ago
Stefan Hajnoczi <stefanha@gmail.com> writes:

> On Tue, Aug 08, 2017 at 10:06:04AM +0200, Markus Armbruster wrote:
>> Stefan Hajnoczi <stefanha@gmail.com> writes:
>> 
>> > On Wed, Jul 26, 2017 at 02:24:02PM -0400, Cleber Rosa wrote:
>> >> 
>> >> 
>> >> On 07/26/2017 01:58 PM, Stefan Hajnoczi wrote:
>> >> > On Tue, Jul 25, 2017 at 12:16:13PM -0400, Cleber Rosa wrote:
>> >> >> On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
>> >> >>> On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
>> >> >>>> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
>> >> >>>>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
>> >> >>>>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
>> >> >>>> Without the static capabilities defined, the dynamic check would be
>> >> >>>> influenced by the run time environment.  It would really mean "qemu-io
>> >> >>>> running on this environment (filesystem?) can do native aio".  Again,
>> >> >>>> that's not the best type of information to depend on when writing tests.
>> >> >>>
>> >> >>> Can you explain this more?
>> >> >>>
>> >> >>> It seems logical to me that if qemu-io in this environment cannot do
>> >> >>> aio=native then we must skip those tests.
>> >> >>>
>> >> >>> Stefan
>> >> >>>
>> >> >>
>> >> >> OK, let's abstract a bit more.  Let's take this part of your statement:
>> >> >>
>> >> >>  "if qemu-io in this environment cannot do aio=native"
>> >> >>
>> >> >> Let's call that a feature check.  Depending on how the *feature check*
>> >> >> is written, a negative result may hide a test failure, because it would
>> >> >> now be skipped.
>> >> > 
>> >> > You are saying a pass->skip transition can hide a failure but ./check
>> >> > tracks skipped tests.  See tests/qemu-iotests/check.log for a
>> >> > pass/fail/skip history.
>> >> > 
>> >> 
>> >> You're not focusing on the problem here.  The problem is that a test
>> >> that *was not* supposed to be skipped, would be skipped.
>> >
>> > As Daniel Berrange mentioned, ./configure has the same problem.  You
>> > cannot just run it blindly because it silently disables features.
>> >
>> > What I'm saying is that in addition to watching ./configure closely, you
>> > also need to look at the skipped tests that ./check reports.  If you do
>> > that then you can be sure the expected set of tests is passing.
>> >
>> >> > It is the job of the CI system to flag up pass/fail/skip transitions.
>> >> > You're no worse off using feature tests.
>> >> > 
>> >> > Stefan
>> >> > 
>> >> 
>> >> What I'm trying to help us achieve here is a reliable and predictable
>> >> way for the same test job execution to be comparable across
>> >> environments.  From the individual developer workstation, CI, QA etc.
>> >
>> > 1. Use ./configure --enable-foo options for all desired features.
>> > 2. Run the ./check command-line and there should be no unexpected skips
>> >    like this:
>> >
>> > 087         [not run] missing aio=native support
>> >
>> > To me this seems to address the problem.
>> >
>> > I have mentioned the issues with the build flags solution: it creates a
>> > dependency on the build environment and forces test feature checks to
>> > duplicate build dependency logic.  This is why I think feature tests are
>> > a cleaner solution.
>> 
>> I suspect the actual problem here is that the qemu-iotests harness is
>> not integrated in the build process.  For other tests, we specify the
>> tests to run in a Makefile, and use the same configuration mechanism as
>> for building stuff conditionally.
>
> The ability to run tests against QEMU binaries without a build
> environment is useful though.  It would still be possible to symlink to
> external binaries but then the build feature information could be
> incorrect.

I don't dispute it's useful.  "make check" doesn't do it, though.

I think we can either have a standalone test suite (introspects the
binaries under test to figure out what to test), or an integrated test
suite (tests exactly what is configured).  "make check" is the latter.
qemu-iotests is kind-of-sort-of the former.

Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Posted by Stefan Hajnoczi 6 years, 8 months ago
On Tue, Aug 08, 2017 at 04:52:25PM +0200, Markus Armbruster wrote:
> Stefan Hajnoczi <stefanha@gmail.com> writes:
> 
> > On Tue, Aug 08, 2017 at 10:06:04AM +0200, Markus Armbruster wrote:
> >> Stefan Hajnoczi <stefanha@gmail.com> writes:
> >> 
> >> > On Wed, Jul 26, 2017 at 02:24:02PM -0400, Cleber Rosa wrote:
> >> >> 
> >> >> 
> >> >> On 07/26/2017 01:58 PM, Stefan Hajnoczi wrote:
> >> >> > On Tue, Jul 25, 2017 at 12:16:13PM -0400, Cleber Rosa wrote:
> >> >> >> On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
> >> >> >>> On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
> >> >> >>>> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
> >> >> >>>>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
> >> >> >>>>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
> >> >> >>>> Without the static capabilities defined, the dynamic check would be
> >> >> >>>> influenced by the run time environment.  It would really mean "qemu-io
> >> >> >>>> running on this environment (filesystem?) can do native aio".  Again,
> >> >> >>>> that's not the best type of information to depend on when writing tests.
> >> >> >>>
> >> >> >>> Can you explain this more?
> >> >> >>>
> >> >> >>> It seems logical to me that if qemu-io in this environment cannot do
> >> >> >>> aio=native then we must skip those tests.
> >> >> >>>
> >> >> >>> Stefan
> >> >> >>>
> >> >> >>
> >> >> >> OK, let's abstract a bit more.  Let's take this part of your statement:
> >> >> >>
> >> >> >>  "if qemu-io in this environment cannot do aio=native"
> >> >> >>
> >> >> >> Let's call that a feature check.  Depending on how the *feature check*
> >> >> >> is written, a negative result may hide a test failure, because it would
> >> >> >> now be skipped.
> >> >> > 
> >> >> > You are saying a pass->skip transition can hide a failure but ./check
> >> >> > tracks skipped tests.  See tests/qemu-iotests/check.log for a
> >> >> > pass/fail/skip history.
> >> >> > 
> >> >> 
> >> >> You're not focusing on the problem here.  The problem is that a test
> >> >> that *was not* supposed to be skipped, would be skipped.
> >> >
> >> > As Daniel Berrange mentioned, ./configure has the same problem.  You
> >> > cannot just run it blindly because it silently disables features.
> >> >
> >> > What I'm saying is that in addition to watching ./configure closely, you
> >> > also need to look at the skipped tests that ./check reports.  If you do
> >> > that then you can be sure the expected set of tests is passing.
> >> >
> >> >> > It is the job of the CI system to flag up pass/fail/skip transitions.
> >> >> > You're no worse off using feature tests.
> >> >> > 
> >> >> > Stefan
> >> >> > 
> >> >> 
> >> >> What I'm trying to help us achieve here is a reliable and predictable
> >> >> way for the same test job execution to be comparable across
> >> >> environments.  From the individual developer workstation, CI, QA etc.
> >> >
> >> > 1. Use ./configure --enable-foo options for all desired features.
> >> > 2. Run the ./check command-line and there should be no unexpected skips
> >> >    like this:
> >> >
> >> > 087         [not run] missing aio=native support
> >> >
> >> > To me this seems to address the problem.
> >> >
> >> > I have mentioned the issues with the build flags solution: it creates a
> >> > dependency on the build environment and forces test feature checks to
> >> > duplicate build dependency logic.  This is why I think feature tests are
> >> > a cleaner solution.
> >> 
> >> I suspect the actual problem here is that the qemu-iotests harness is
> >> not integrated in the build process.  For other tests, we specify the
> >> tests to run in a Makefile, and use the same configuration mechanism as
> >> for building stuff conditionally.
> >
> > The ability to run tests against QEMU binaries without a build
> > environment is useful though.  It would still be possible to symlink to
> > external binaries but then the build feature information could be
> > incorrect.
> 
> I don't dispute it's useful.  "make check" doesn't do it, though.
> 
> I think we can either have a standalone test suite (introspects the
> binaries under test to figure out what to test), or an integrated test
> suite (tests exactly what is configured).  "make check" is the latter.
> qemu-iotests is kind-of-sort-of the former.

Yes, originally qemu-iotests was a separate repo.  It was moved into
qemu.git so that it's easier to include tests in a patch series.  But as
a result of this history it has the ability to run against any QEMU.

Actually I'm not sure how important that ability is anymore.  Some
testing teams use qemu-iotests against QEMU binaries from elsewhere, so
we'd inconvenience them by tying it to a build.  But they could update
their process to get the QEMU tree that matches their binaries, if
necessary.

Stefan