lavocado aims to be an alternative test framework for the libvirt
project using Python, python-libvirt and Avocado. It can be used to
write unit, functional and integration tests, and it is inspired by the
libvirt-tck framework.

This series introduces the basic framework along with some basic test
examples that I took from libvirt-tck. I would appreciate your comments
on this RFC, to see if it fits this project's needs. Also, names and
locations are just a proposal and can be changed.

For now, this framework assumes that you are going to run the tests in
a fresh, clean environment, i.e. a VM. If you decide to use your local
system, beware that execution of the tests may affect it.

One of the future goals of this framework is to utilize nested
virtualization technologies: make sure an L1 guest is provisioned
automatically, so that the tests are executed in that environment and
do not tamper with your main system.

I'm adding more information with some details inside the README file.
Beraldo Leal (4):
  tests: introduce lavocado: initial code structure
  tests.lavocado: adding basic transient domain tests
  tests.lavocado: adding a .gitignore
  tests.lavocado: adding a README and Makefile for convenience

 tests/lavocado/.gitignore                   |   3 +
 tests/lavocado/Makefile                     |   2 +
 tests/lavocado/README.md                    | 124 +++++++++++++++++
 tests/lavocado/lavocado/__init__.py         |   0
 tests/lavocado/lavocado/defaults.py         |  11 ++
 tests/lavocado/lavocado/exceptions.py       |  20 +++
 tests/lavocado/lavocado/helpers/__init__.py |   0
 tests/lavocado/lavocado/helpers/domains.py  |  75 ++++++++++
 tests/lavocado/lavocado/test.py             | 144 ++++++++++++++++++++
 tests/lavocado/requirements.txt             |   3 +
 tests/lavocado/templates/domain.xml.jinja   |  20 +++
 tests/lavocado/tests/domain/transient.py    | 102 ++++++++++++++
 12 files changed, 504 insertions(+)
 create mode 100644 tests/lavocado/.gitignore
 create mode 100644 tests/lavocado/Makefile
 create mode 100644 tests/lavocado/README.md
 create mode 100644 tests/lavocado/lavocado/__init__.py
 create mode 100644 tests/lavocado/lavocado/defaults.py
 create mode 100644 tests/lavocado/lavocado/exceptions.py
 create mode 100644 tests/lavocado/lavocado/helpers/__init__.py
 create mode 100644 tests/lavocado/lavocado/helpers/domains.py
 create mode 100644 tests/lavocado/lavocado/test.py
 create mode 100644 tests/lavocado/requirements.txt
 create mode 100644 tests/lavocado/templates/domain.xml.jinja
 create mode 100644 tests/lavocado/tests/domain/transient.py

-- 
2.26.3
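[Editor's note: the diffstat above shows a Jinja domain template plus domain helpers. As a rough, dependency-free sketch of the rendering step those files imply — using `string.Template` instead of Jinja, with illustrative names that are not the series' actual API:]

```python
# Hypothetical sketch of what a lavocado domain helper might do: render a
# minimal transient-domain XML definition from a template. The real series
# uses a Jinja template (templates/domain.xml.jinja); string.Template is
# substituted here only to keep the sketch dependency-free.
from string import Template

DOMAIN_TEMPLATE = Template("""\
<domain type='$domain_type'>
  <name>$name</name>
  <memory unit='MiB'>$memory_mib</memory>
  <vcpu>$vcpus</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
""")

def render_domain_xml(name, domain_type="qemu", memory_mib=128, vcpus=1):
    """Return a domain XML string, e.g. for virDomain createXML()."""
    return DOMAIN_TEMPLATE.substitute(name=name, domain_type=domain_type,
                                      memory_mib=memory_mib, vcpus=vcpus)

xml = render_domain_xml("lavocado-demo")
print(xml)
```

A test would pass the rendered XML to python-libvirt's `createXML()` to obtain a transient domain.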
On Wed, Jun 30, 2021 at 01:36:30PM -0300, Beraldo Leal wrote:
> lavocado aims to be an alternative test framework for the libvirt
> project using Python, python-libvirt and Avocado. This can be used to
> write unit, functional and integration tests and it is inspired by
> the libvirt-tck framework.
>
> This series introduces the basic framework along with some basic test
> examples that I got from libvirt-tck. I would appreciate your
> comments on this RFC, to see if this fits this project's needs. Also,
> names and locations are just a proposal and can be changed.

Some high level thoughts:

- More extensive functional integration testing coverage is good.

- We need to actually run the functional tests regularly, reporting via
  GitLab pipelines in some way.

- Using Python is way more contributor friendly than Perl.

- This does not need to live in libvirt.git, as we don't follow a
  monolithic repo approach in libvirt, and it already depends on
  functionality provided by other repos.

When it comes to testing, I feel like there are several distinct pieces
to think about:

- The execution & reporting harness
- Supporting infrastructure to aid writing tests
- The set of tests themselves

If I look at the TCK:

- The harness is essentially the standard Perl harness with a thin CLI
  wrapper, and thus essentially works with any test emitting TAP
  format.
- The support infra is all custom APIs using libvirt-perl.
- The tests are mostly written in Perl, but some are written in shell
  (yuk). They all output TAP format.

One key thing here is that the test harness is fairly loosely coupled
to the support infra & tests. The TAP data format bridged the two,
letting us write tests in essentially any language. Of course, writing
tests in non-Perl was/is tedious, since no support infra for that
exists today. The TAP data format bridge also means we can easily throw
away the current TCK harness and replace it with anything else that can
consume tests emitting TAP data.
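[Editor's note: the loose TAP coupling described above is easy to make concrete. A hand-rolled sketch of the producer side — not TCK or Avocado code — showing the plan line and per-assertion status lines any TAP-speaking harness can consume:]

```python
# Minimal sketch of a TAP producer: a test program prints a plan line
# ("1..N") followed by one "ok"/"not ok" line per assertion. Any harness
# that speaks TAP can consume this, regardless of the language the test
# itself is written in.
def emit_tap(results):
    """results: list of (description, passed) tuples -> TAP text."""
    lines = ["1..%d" % len(results)]
    for num, (desc, passed) in enumerate(results, start=1):
        status = "ok" if passed else "not ok"
        lines.append("%s %d - %s" % (status, num, desc))
    return "\n".join(lines)

report = emit_tap([
    ("created transient domain object", True),
    ("NO_DOMAIN error raised from missing domain", True),
])
print(report)
```

This is the whole contract: the harness only needs to count `ok`/`not ok` lines against the plan, which is why the TCK harness can be swapped out without touching the tests.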
If I look at Avocado, I think (correct me if I'm wrong):

1. The harness is essentially the standard Python harness with a thin
   CLI wrapper, and thus needs all tests to implement the Python test
   APIs.
2. The support infra is all custom APIs using libvirt-python.
3. The tests are to be written entirely in Python, to integrate with
   the Python test harness.

IIUC, 90% of this patch series is essentially about (2), since (1) is
fairly trivial and for (3) there are just simple demo tests.

Libvirt has consumers writing applications in a variety of languages,
and periodically reporting bugs. My general wish for a test harness
would be for something that can invoke and consume arbitrary test
cases. Essentially, if someone can provide us a piece of code that
demonstrates a problem in libvirt, I would like to be able to use that
directly as a functional test, regardless of the language it was
written in.

In theory the libvirt TCK allowed for that, and indeed the few tests
written in shell essentially came to exist because someone at IBM had
written some shell code to test firewalls and contributed that as a
functional test. They just hacked their script to emit TAP data.

> For now, this framework assumes that you are going to run the tests
> in a fresh clean environment, i.e. a VM. If you decide to use your
> local system, beware that execution of the tests may affect your
> system.

It is always good to be wary of functional test frameworks trashing
your system, but at the same time I think it makes things more flexible
if they are able to be reasonably safely run on /any/ host.

For the TCK we tried to be quite conservative in this regard, because
it was actually an explicit goal that you can run it on any Fedora host
to validate it correctly functioning for KVM. To achieve that we tried
to use some standard naming conventions for any resources that tests
created, and if we saw pre-existing resources named differently we
didn't touch them.
That is, all VMs were named 'tck-XXX' and were free to be deleted, but
VMs with other names were ignored.

For anything special, such as testing PCI device assignment, the test
configuration file had to explicitly contain details of host devices
that were safe to use. This was also done for host block devices, NICs,
etc. Thus a default invocation only ran the subset of tests which were
safe; the more dangerous tests required you to modify the config file
to grant access.

I think it would be good if the Avocado supporting test APIs had a
similar conceptual approach, especially wrt a config file granting
access to host resources such as NICs/block devs/PCI devs, where you
need prior explicit admin permission to avoid danger.

> One of the future goals of this framework is to utilize nested
> virtualization technologies and hence make sure an L1 guest is
> provisioned automatically for the tests to be executed in this
> environment and not tamper with your main system.

I think the test framework should not concern itself with this kind of
thing directly. We already have libvirt-ci, which has tools for
provisioning VMs with pre-defined package sets. The test harness should
just expect that this has been done, and that it is already running in
such an environment if the user wanted it to be.

In the TCK config file we provided a setting for a URI to connect to
other hosts, to enable multi-host tests like live migration to be done.

> I'm adding more information with some details inside the README file.

Overall, I'm more enthusiastic about writing tests in Python than Perl
for the long term, but would also potentially like to write tests in Go
too.

I'm wondering if we can't bridge the divide between what we have
already in libvirt-tck and what you're bringing to the table with
avocado here. While we've not done much development with the TCK
recently, there are some very valuable tests there, especially related
to firewall support, and I don't fancy rewriting them.
Thus my suggestion is that we:

- Put this avocado code into the libvirt-tck repository, with focus on
  the supporting infra for making it easy to write Python tests.

- Declare that all tests need a way to emit TAP format, no matter what
  language they're written in. This could either be the test directly
  emitting TAP, or it could be via use of a plugin. For example,
  'tappy' can make existing Python tests emit TAP, with no
  modifications to the tests themselves:

    https://tappy.readthedocs.io/en/latest/consumers.html

  IOW, you can still invoke the Python tests using the standard Python
  test runner, and still invoke the Perl tests using the standard Perl
  test runner if desired.

- Switch the TCK configuration file to use JSON/YAML instead of its
  current custom format. This is to enable the Python tests to share a
  config file with the Perl and shell tests. Thus they can all have the
  same way to figure out what block devs, NICs, PCI devs, etc. they are
  permitted to use, and what dirs it is safe to create VM images in,
  etc.

- Replace the current Perl based execution harness with something using
  Python, so that it is less of a pain to install in terms of
  dependencies. Looks like we could just do something using the 'tappy'
  Python APIs to consume the TAP format from all the tests and generate
  a JUnit report that GitLab can consume.

- Rename the RPMs so they match the project name "libvirt-tck",
  instead of "perl-Sys-Virt-TCK", to emphasize that this is not a Perl
  project, it is a testing project, of which some parts were
  historically Perl, but new parts will be mostly Python (and possibly
  Go/etc. in future if desired). Potentially have the RPMs split:

  * libvirt-tck - just the execution harness, currently the Perl one,
    but to be replaced with a Python one.
  * libvirt-tck-tests-python - supporting APIs and tests written in
    Python - essentially all of this series, plus future tests.
  * libvirt-tck-tests-perl - supporting APIs and tests written in Perl
    - essentially most of the existing TCK stuff.
  * libvirt-tck-tests-shell - supporting APIs and tests written in
    shell - mostly the network/firewall related TCK pieces.

Given the existence of the 'tappy' tool, it feels like the first two
points ought to be reasonably doable without any significant changes to
what you've already written. Just import it as is, and then use tappy
as a shim to let it be invoked from libvirt-tck. We can still have the
existing way to invoke it directly too, as an alternative.

The common JSON/YAML config file, and adopting some standard naming and
resource usage conventions, are probably where the real work would lie.
That's likely not too bad at this stage though, since you've not
actually written a large number of test cases.

> Beraldo Leal (4):
>   tests: introduce lavocado: initial code structure
>   tests.lavocado: adding basic transient domain tests
>   tests.lavocado: adding a .gitignore
>   tests.lavocado: adding a README and Makefile for convenience

Regards,
Daniel

-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
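[Editor's note: the tappy shim idea above — running stock unittest tests and emitting TAP with no changes to the tests — can be approximated with the stdlib alone. This is a rough illustration of the concept, not tappy's implementation, and tappy's real API (a `TAPTestRunner`) differs:]

```python
# Rough stdlib illustration of the shim idea behind tappy: run ordinary
# unittest tests and translate each result into a TAP line, with no
# modification to the tests themselves.
import unittest

class Demo(unittest.TestCase):
    def test_passes(self):
        self.assertEqual(1 + 1, 2)
    def test_also_passes(self):
        self.assertTrue("tck-demo".startswith("tck-"))

def run_as_tap(case_class):
    """Run a TestCase's tests and return TAP output lines."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(case_class)
    tests = list(suite)
    result = unittest.TestResult()
    lines = ["1..%d" % len(tests)]
    for num, test in enumerate(tests, start=1):
        test.run(result)
        # A test failed if it shows up in the result's failure/error lists.
        failed = any(t is test for t, _ in result.failures + result.errors)
        lines.append("%s %d %s" % ("not ok" if failed else "ok", num, test.id()))
    return lines

tap_lines = run_as_tap(Demo)
print("\n".join(tap_lines))
```

The same tests remain runnable under the plain unittest runner, which is exactly the "invoke with the standard runner or via a TAP shim" property argued for above.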
On Thu, Jul 01, 2021 at 07:04:32PM +0100, Daniel P. Berrangé wrote:
> On Wed, Jun 30, 2021 at 01:36:30PM -0300, Beraldo Leal wrote:
>
> If I look at Avocado, I think (correct me if I'm wrong):
>
> 1. The harness is essentially the standard Python harness with a thin
>    CLI wrapper, and thus needs all tests to implement the Python test
>    APIs.
> 2. The support infra is all custom APIs using libvirt-python.
> 3. The tests are to be written entirely in Python, to integrate with
>    the Python test harness.

The harness proposed here would be Avocado itself. That doesn't just
run Python tests; it can run any executable, and it also has support
for multiple test result formats, including TAP. Developers writing
Python tests, however, will get some support infrastructure classes to
help them.

> IIUC, 90% of this patch series is essentially about (2), since (1) is
> fairly trivial and for (3) there are just simple demo tests.

Yes, and (1) is trivial because I'm delegating it to Avocado.

> Libvirt has consumers writing applications in a variety of languages,
> and periodically reporting bugs. My general wish for a test harness
> would be for something that can invoke and consume arbitrary test
> cases. Essentially, if someone can provide us a piece of code that
> demonstrates a problem in libvirt, I would like to be able to use
> that directly as a functional test, regardless of the language it was
> written in.
>
> In theory the libvirt TCK allowed for that, and indeed the few tests
> written in shell essentially came to exist because someone at IBM had
> written some shell code to test firewalls and contributed that as a
> functional test. They just hacked their script to emit TAP data.

For sure, I completely agree and understand. We took that into account
as well, but maybe it wasn't so clear in the RFC. With Avocado you can
do that, and there is no need to hack the scripts to emit TAP data.
Avocado has a --tap option that will output results in TAP format:

    $ avocado run --tap - /bin/true /bin/false
    1..2
    ok 1 /bin/true
    not ok 2 /bin/false

> > For now, this framework assumes that you are going to run the tests
> > in a fresh clean environment, i.e. a VM. If you decide to use your
> > local system, beware that execution of the tests may affect your
> > system.
>
> It is always good to be wary of functional test frameworks trashing
> your system, but at the same time I think it makes things more
> flexible if they are able to be reasonably safely run on /any/ host.
>
> For the TCK we tried to be quite conservative in this regard, because
> it was actually an explicit goal that you can run it on any Fedora
> host to validate it correctly functioning for KVM. To achieve that we
> tried to use some standard naming conventions for any resources that
> tests created, and if we saw pre-existing resources named differently
> we didn't touch them. That is, all VMs were named 'tck-XXX' and were
> free to be deleted, but VMs with other names were ignored.
>
> For anything special, such as testing PCI device assignment, the test
> configuration file had to explicitly contain details of host devices
> that were safe to use. This was also done for host block devices,
> NICs, etc. Thus a default invocation only ran the subset of tests
> which were safe; the more dangerous tests required you to modify the
> config file to grant access.
>
> I think it would be good if the Avocado supporting test APIs had a
> similar conceptual approach, especially wrt a config file granting
> access to host resources such as NICs/block devs/PCI devs, where you
> need prior explicit admin permission to avoid danger.

Indeed, it is good practice to isolate those things. Again, it was my
fault that this wasn't clear in the RFC, and yes, we have that in mind.
As mentioned, lavocado was inspired by libvirt-tck, and I'm already
using the same prefix approach: when creating domains, I prefix the
domain name with the `test.id` (and this can be customized).

On top of that, Avocado has the concept of spawners; right now we
support subprocess and podman, but a new spawner could be developed to
create a more isolated environment. We also have the Job API feature
where, using Python code, you can create custom job suites based on
your own logic.

> > One of the future goals of this framework is to utilize nested
> > virtualization technologies and hence make sure an L1 guest is
> > provisioned automatically for the tests to be executed in this
> > environment and not tamper with your main system.
>
> I think the test framework should not concern itself with this kind
> of thing directly. We already have libvirt-ci, which has tools for
> provisioning VMs with pre-defined package sets. The test harness
> should just expect that this has been done, and that it is already
> running in such an environment if the user wanted it to be.

Sure; again, I wasn't clear. We intend to delegate this to tools like
lcitool.

> In the TCK config file we provided a setting for a URI to connect to
> other hosts, to enable multi-host tests like live migration to be
> done.

I noticed that, and I did the same here with the LIBVIRT_URI config
option. By default it points to "qemu:///system", but it can be
customized. I plan to extend this to support more than one URI, for
live migration.

> > I'm adding more information with some details inside the README
> > file.
>
> Overall, I'm more enthusiastic about writing tests in Python than
> Perl for the long term, but would also potentially like to write
> tests in Go too.
>
> I'm wondering if we can't bridge the divide between what we have
> already in libvirt-tck and what you're bringing to the table with
> avocado here.
> While we've not done much development with the TCK recently, there
> are some very valuable tests there, especially related to firewall
> support, and I don't fancy rewriting them.
>
> Thus my suggestion is that we:
>
> - Put this avocado code into the libvirt-tck repository, with focus
>   on the supporting infra for making it easy to write Python tests.
>
> - Declare that all tests need a way to emit TAP format, no matter
>   what language they're written in. This could either be the test
>   directly emitting TAP, or it could be via use of a plugin. For
>   example, 'tappy' can make existing Python tests emit TAP, with no
>   modifications to the tests themselves:
>
>     https://tappy.readthedocs.io/en/latest/consumers.html
>
>   IOW, you can still invoke the Python tests using the standard
>   Python test runner, and still invoke the Perl tests using the
>   standard Perl test runner if desired.

This is supported already:

    $ avocado run --tap - --test-runner='nrunner' tests/domain/transient.py
    1..3
    ok 1 tests/domain/transient.py:TransientDomain.test_autostart
    ok 2 tests/domain/transient.py:TransientDomain.test_lifecycle
    ok 3 tests/domain/transient.py:TransientDomain.test_convert_transient_to_persistent

Thank you for your comments and review here.

-- 
Beraldo
On Thu, Jul 01, 2021 at 06:09:47PM -0300, Beraldo Leal wrote:
> On Thu, Jul 01, 2021 at 07:04:32PM +0100, Daniel P. Berrangé wrote:
> > On Wed, Jun 30, 2021 at 01:36:30PM -0300, Beraldo Leal wrote:
> > > I'm adding more information with some details inside the README
> > > file.
> >
> > Overall, I'm more enthusiastic about writing tests in Python than
> > Perl for the long term, but would also potentially like to write
> > tests in Go too.
> >
> > I'm wondering if we can't bridge the divide between what we have
> > already in libvirt-tck and what you're bringing to the table with
> > avocado here. While we've not done much development with the TCK
> > recently, there are some very valuable tests there, especially
> > related to firewall support, and I don't fancy rewriting them.
> >
> > Thus my suggestion is that we:
> >
> > - Put this avocado code into the libvirt-tck repository, with focus
> >   on the supporting infra for making it easy to write Python tests.
> >
> > - Declare that all tests need a way to emit TAP format, no matter
> >   what language they're written in. This could either be the test
> >   directly emitting TAP, or it could be via use of a plugin. For
> >   example, 'tappy' can make existing Python tests emit TAP, with no
> >   modifications to the tests themselves:
> >
> >     https://tappy.readthedocs.io/en/latest/consumers.html
> >
> >   IOW, you can still invoke the Python tests using the standard
> >   Python test runner, and still invoke the Perl tests using the
> >   standard Perl test runner if desired.
>
> This is supported already:
>
>     $ avocado run --tap - --test-runner='nrunner' tests/domain/transient.py
>     1..3
>     ok 1 tests/domain/transient.py:TransientDomain.test_autostart
>     ok 2 tests/domain/transient.py:TransientDomain.test_lifecycle
>     ok 3 tests/domain/transient.py:TransientDomain.test_convert_transient_to_persistent

This is nice, showing fine grained TAP output lines for each individual
test within the test program.

I tried using the hints file that Cleber pointed to, to make avocado
*consume* TAP format for the Perl/shell scripts:

    $ cd libvirt-tck
    $ cat .avocado.hint
    [kinds]
    tap = scripts/*/*.t

    [tap]
    uri = $testpath

And I can indeed invoke the scripts:

    $ avocado run ./scripts/domain/05*.t
    JOB ID     : b5d596d909dc8024d986957c909fc8fb6b31e2dd
    JOB LOG    : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/job.log
     (1/2) ./scripts/domain/050-transient-lifecycle.t: PASS (0.70 s)
     (2/2) ./scripts/domain/051-transient-autostart.t: PASS (0.76 s)
    RESULTS    : PASS 2 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
    JOB HTML   : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/results.html
    JOB TIME   : 1.90 s

which is good. And I can also ask it to produce TAP output:

    $ avocado run --tap - ./scripts/domain/05*.t
    1..2
    ok 1 ./scripts/domain/050-transient-lifecycle.t
    ok 2 ./scripts/domain/051-transient-autostart.t

But this output isn't entirely what I was after; it is just summarizing
the results of each test program.
I can't find a way to make it show the fine grained TAP output for the
individual tests, like it does for the Python program. E.g. I'd like to
be able to see something similar to:

    $ ./scripts/domain/050-transient-lifecycle.t
    1..2
    # Creating a new transient domain
    ok 1 - created transient domain object
    # Destroying the transient domain
    # Checking that transient domain has gone away
    ok 2 - NO_DOMAIN error raised from missing domain

    $ ./scripts/domain/051-transient-autostart.t
    1..4
    # Creating a new transient domain
    ok 1 - created transient domain object
    ok 2 - autostart is disabled for transient VMs
    ok 3 - Set autostart not supported on transient VMs
    # Destroying the transient domain
    # Checking that transient domain has gone away
    ok 4 - NO_DOMAIN error raised from missing domain

Nonetheless, this does seem like we're pretty close to being able to do
something useful in integration.

Regards,
Daniel

-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
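[Editor's note: the per-assertion granularity shown in the desired output above is straightforward for a consumer to parse. A minimal sketch of the consumer side — not avocado's (or tappy's) actual TAP parser:]

```python
# Minimal sketch of a TAP consumer: parse per-assertion lines like the
# ones quoted above into pass/fail counts against the plan. Comment
# lines ("# ...") are diagnostics and are skipped.
import re

def summarize_tap(text):
    """Return (passed, failed, planned) from a TAP stream."""
    passed = failed = planned = 0
    for line in text.splitlines():
        if re.match(r"^\d+\.\.\d+$", line):
            planned = int(line.split("..")[1])
        elif line.startswith("not ok"):
            failed += 1
        elif line.startswith("ok"):
            passed += 1
    return passed, failed, planned

stream = """\
1..2
# Creating a new transient domain
ok 1 - created transient domain object
# Checking that transient domain has gone away
ok 2 - NO_DOMAIN error raised from missing domain
"""
summary = summarize_tap(stream)
print(summary)
```

A harness with such a parser could report each `ok` line as an individual result, which is the granularity being asked for here.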
Daniel P. Berrangé writes:

> On Thu, Jul 01, 2021 at 06:09:47PM -0300, Beraldo Leal wrote:
>>
>> This is supported already:
>>
>>     $ avocado run --tap - --test-runner='nrunner' tests/domain/transient.py
>>     1..3
>>     ok 1 tests/domain/transient.py:TransientDomain.test_autostart
>>     ok 2 tests/domain/transient.py:TransientDomain.test_lifecycle
>>     ok 3 tests/domain/transient.py:TransientDomain.test_convert_transient_to_persistent
>
> This is nice, showing fine grained TAP output lines for each
> individual test within the test program.
>
> I tried using the hints file that Cleber pointed to, to make avocado
> *consume* TAP format for the Perl/shell scripts:
>
>     $ cd libvirt-tck
>     $ cat .avocado.hint
>     [kinds]
>     tap = scripts/*/*.t
>
>     [tap]
>     uri = $testpath
>
> And I can indeed invoke the scripts:
>
>     $ avocado run ./scripts/domain/05*.t
>     JOB ID     : b5d596d909dc8024d986957c909fc8fb6b31e2dd
>     JOB LOG    : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/job.log
>      (1/2) ./scripts/domain/050-transient-lifecycle.t: PASS (0.70 s)
>      (2/2) ./scripts/domain/051-transient-autostart.t: PASS (0.76 s)
>     RESULTS    : PASS 2 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
>     JOB HTML   : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/results.html
>     JOB TIME   : 1.90 s
>
> which is good. And I can also ask it to produce TAP output:
>
>     $ avocado run --tap - ./scripts/domain/05*.t
>     1..2
>     ok 1 ./scripts/domain/050-transient-lifecycle.t
>     ok 2 ./scripts/domain/051-transient-autostart.t
>
> But this output isn't entirely what I was after; it is just
> summarizing the results of each test program.

Hi Daniel,

From the user experience perspective, I completely understand your
point. I would side with this expectation and output any day if acting
purely as a user. But, as a framework, Avocado came up with some well
defined concepts, and one of them is what a test is.
To be able to "replay only the failed tests of the latest job", or to
"run a test completely isolated in a container or VM", there needs to
be a specific and indistinguishable entry point for anything that is
considered a test.

> I can't find a way to make it show the fine grained TAP output for
> the individual tests, like it does for the Python program. E.g. I'd
> like to be able to see something similar to:
>
>     $ ./scripts/domain/050-transient-lifecycle.t
>     1..2
>     # Creating a new transient domain
>     ok 1 - created transient domain object
>     # Destroying the transient domain
>     # Checking that transient domain has gone away
>     ok 2 - NO_DOMAIN error raised from missing domain
>
>     $ ./scripts/domain/051-transient-autostart.t
>     1..4
>     # Creating a new transient domain
>     ok 1 - created transient domain object
>     ok 2 - autostart is disabled for transient VMs
>     ok 3 - Set autostart not supported on transient VMs
>     # Destroying the transient domain
>     # Checking that transient domain has gone away
>     ok 4 - NO_DOMAIN error raised from missing domain

Results for tests that produce TAP can be seen in the granularity you
list here, given some conditions like the one explained before. For
instance, the GLib plugin[1] knows that it's possible to run a single
test case on tests that use the g_test_*() APIs, by providing a
"-p /test/case/name".

We could in theory reorganize the current Perl based tests, so that a
test matches a function or some other type of code marker that can be
used as an entry point. Given that the overall result will always be
valid and the log files will always contain the more granular
information, I have mixed feelings about the cost/benefit of doing
that, but you are surely a much better judge of that.

> Nonetheless, this does seem like we're pretty close to being able to
> do something useful in integration.
>
> Regards,
> Daniel

Thanks for the feedback!
- Cleber.
PS: The GLib plugin was deprecated because we observed that most users
would do fine with the TAP plugin by itself, but it can be reinstated
if there is demand.

PS2: If a test needs to produce more detailed information about its
status, such as for a long running test with different stages, we
recommend that advanced logging is used[2], instead of splitting the
different stages of a single test (from Avocado's PoV) into separate
tests.

[1] https://avocado-framework.readthedocs.io/en/82.0/plugins/optional/glib.html
[2] https://avocado-framework.readthedocs.io/en/89.0/guides/writer/chapters/logging.html
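[Editor's note: the PS2 recommendation — stage progress reported through Python logging rather than extra test entry points — can be sketched as below. The logger name is an assumption for illustration; which namespaces Avocado actually captures depends on its configuration.]

```python
# Illustration of the "advanced logging" recommendation: a long-running
# test reports stage progress through Python's logging module rather
# than splitting its stages into separate tests. The logger name below
# is illustrative; which namespaces Avocado captures is configurable.
import io
import logging

log = logging.getLogger("avocado.test.progress")  # assumed name
log.setLevel(logging.INFO)
captured = io.StringIO()
log.addHandler(logging.StreamHandler(captured))

def provision_and_check():
    # Each stage is a log record, not a separate test entry point.
    log.info("stage 1: defining transient domain")
    log.info("stage 2: destroying domain")
    log.info("stage 3: verifying NO_DOMAIN is raised")

provision_and_check()
print(captured.getvalue())
```

In a real Avocado run these records would land in the job log, giving the per-stage detail without changing what counts as "a test".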
On Wed, Jul 21, 2021 at 06:01:34PM -0400, Cleber Rosa wrote:
> Daniel P. Berrangé writes:
>
> > On Thu, Jul 01, 2021 at 06:09:47PM -0300, Beraldo Leal wrote:
> >>
> >> This is supported already:
> >>
> >>     $ avocado run --tap - --test-runner='nrunner' tests/domain/transient.py
> >>     1..3
> >>     ok 1 tests/domain/transient.py:TransientDomain.test_autostart
> >>     ok 2 tests/domain/transient.py:TransientDomain.test_lifecycle
> >>     ok 3 tests/domain/transient.py:TransientDomain.test_convert_transient_to_persistent
> >
> > This is nice, showing fine grained TAP output lines for each
> > individual test within the test program.
> >
> > I tried using the hints file that Cleber pointed to, to make
> > avocado *consume* TAP format for the Perl/shell scripts:
> >
> >     $ cd libvirt-tck
> >     $ cat .avocado.hint
> >     [kinds]
> >     tap = scripts/*/*.t
> >
> >     [tap]
> >     uri = $testpath
> >
> > And I can indeed invoke the scripts:
> >
> >     $ avocado run ./scripts/domain/05*.t
> >     JOB ID     : b5d596d909dc8024d986957c909fc8fb6b31e2dd
> >     JOB LOG    : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/job.log
> >      (1/2) ./scripts/domain/050-transient-lifecycle.t: PASS (0.70 s)
> >      (2/2) ./scripts/domain/051-transient-autostart.t: PASS (0.76 s)
> >     RESULTS    : PASS 2 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
> >     JOB HTML   : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/results.html
> >     JOB TIME   : 1.90 s
> >
> > which is good. And I can also ask it to produce TAP output:
> >
> >     $ avocado run --tap - ./scripts/domain/05*.t
> >     1..2
> >     ok 1 ./scripts/domain/050-transient-lifecycle.t
> >     ok 2 ./scripts/domain/051-transient-autostart.t
> >
> > But this output isn't entirely what I was after; it is just
> > summarizing the results of each test program.
>
> Hi Daniel,
>
> From the user experience perspective, I completely understand your
> point. I would side with this expectation and output any day if
> acting purely as a user.
> But, as a framework, Avocado came up with some well defined concepts,
> and one of them is what a test is. To be able to "replay only the
> failed tests of the latest job", or to "run a test completely
> isolated in a container or VM", there needs to be a specific and
> indistinguishable entry point for anything that is considered a test.

Ok, I see what you mean. Regardless of what it's written in, it is
common that a test will print arbitrary stuff to stdout or stderr if
things go very badly wrong, possibly including stack traces, etc. What
is the expected approach for debugging failed tests such that you can
see the full stdout/stderr? Are you supposed to just directly invoke
the individual test program, without using the avocado harness, to see
the full output?

> > I can't find a way to make it show the fine grained TAP output for
> > the individual tests, like it does for the Python program. E.g. I'd
> > like to be able to see something similar to:
> >
> >     $ ./scripts/domain/050-transient-lifecycle.t
> >     1..2
> >     # Creating a new transient domain
> >     ok 1 - created transient domain object
> >     # Destroying the transient domain
> >     # Checking that transient domain has gone away
> >     ok 2 - NO_DOMAIN error raised from missing domain
> >
> >     $ ./scripts/domain/051-transient-autostart.t
> >     1..4
> >     # Creating a new transient domain
> >     ok 1 - created transient domain object
> >     ok 2 - autostart is disabled for transient VMs
> >     ok 3 - Set autostart not supported on transient VMs
> >     # Destroying the transient domain
> >     # Checking that transient domain has gone away
> >     ok 4 - NO_DOMAIN error raised from missing domain
>
> Results for tests that produce TAP can be seen in the granularity you
> list here, given some conditions like the one explained before. For
> instance, the GLib plugin[1] knows that it's possible to run a single
> test case on tests that use the g_test_*() APIs, by providing a
> "-p /test/case/name".
> We could in theory reorganize the current Perl based tests, so that a test matches a function or some other type of code marker that can be used as an entry point. Given that the overall result will always be valid and the log files will always contain the more granular information, I have mixed feelings about the cost/benefit of doing that, but you are surely a much better judge for that.

Yeah, that concept is alien to the Perl tests and not worth the effort to try to change that - you'd be better off just rewriting from scratch at that point. The Perl re-execution granularity is purely at an individual script level - the implication is that while in Python you might have a single test_foo.py with many tests inside, in Perl you'd probably have many xxxx.t files.

> > None the less this does seem like we're pretty close to being able
> > to do something useful in integration

> PS: The GLib plugin was deprecated because we observed that most users would do fine with the TAP plugin by itself, but it can be reinstated if there is a demand.

GLib's test harness has two output formats - it can output its own custom data format, or it can output in TAP format. The granularity of its TAP format is pretty much the same as the way avocado reports TAP format granularity for Python programs. Using TAP format, however, requires the test harness to pass an extra --tap arg to the individual test program. You can also still use the -p arg. If you're deprecating the GLib plugin, does the avocado TAP harness have a way to be told to pass --tap, and a way to set -p to select individual tests? I agree with not needing to support the GLib test output format - it is more typical for GLib apps to be using its TAP format instead these days.
> PS2: If a test needs to produce more detailed information about its status, such as for a long running test with different stages, we recommend that advanced logging is used[2], instead of calling the different stages of a single test (from Avocado's PoV).
>
> [1] - https://avocado-framework.readthedocs.io/en/82.0/plugins/optional/glib.html
> [2] - https://avocado-framework.readthedocs.io/en/89.0/guides/writer/chapters/logging.html

Does that logging assume a Python program? Is there a way for non-Python programs to get logging back into avocado?

Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
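For reference, the "advanced logging" recommended in the quoted PS2 boils down to reporting a long test's stages through a named logger rather than splitting it into many tests. A minimal sketch using only stdlib logging; the logger name follows the convention in the Avocado docs but should be treated as an assumption here:

```python
import logging

# A named logger for progress reporting; the "avocado.test.*" naming is an
# assumption based on Avocado's logging docs -- the code itself is plain stdlib.
progress = logging.getLogger("avocado.test.progress")
progress.setLevel(logging.INFO)

def test_lifecycle(log=progress):
    """One long-running test reporting its stages via log records."""
    log.info("creating a new transient domain")
    # ... the real libvirt create/destroy calls would go here ...
    log.info("destroying the transient domain")

# Under Avocado these records would land in the job's logs; standalone,
# attach any handler (e.g. logging.StreamHandler) to see them.
```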
On Wed, Jul 21, 2021 at 06:50:03PM +0100, Daniel P. Berrangé wrote: > On Thu, Jul 01, 2021 at 06:09:47PM -0300, Beraldo Leal wrote: > > On Thu, Jul 01, 2021 at 07:04:32PM +0100, Daniel P. Berrangé wrote: > > > On Wed, Jun 30, 2021 at 01:36:30PM -0300, Beraldo Leal wrote: > > > > I'm adding more information with some details inside the README file. > > > > > > Overall, I'm more enthusiastic about writing tests in Python > > > than Perl, for the long term, but would also potentially like > > > to write tests in Go too. > > > > > > I'm wondering if we can't bridge the divide between what we > > > have already in libvirt-tck, and what you're bringing to the > > > table with avocado here. While we've not done much development > > > with the TCK recently, there are some very valuable tests > > > there, especially related to firewall support and I don't > > > fancy rewriting them. > > > > > > Thus my suggestion is that we: > > > > > > - Put this avocado code into the libvirt-tck repository, > > > with focus on the supporting infra for making it easy to > > > write Python tests > > > > > > - Declare that all tests need a way to emit TAP format, > > > no matter what language they're written in. This could > > > either be the test directly emitting TAP, or it could > > > be via use of a plugin. For example 'tappy' can make > > > existing Python tests emit TAP, with no modifications > > > to the tests themselves. > > > > > > https://tappy.readthedocs.io/en/latest/consumers.html > > > > > > IOW, you can still invoke the python tests using the > > > standard Python test runner, and still invoke the perl > > > tests using the stnadard Perl test runner if desired. 
> > This is supported already:
> >
> > $ avocado run --tap - --test-runner='nrunner' tests/domain/transient.py
> > 1..3
> > ok 1 tests/domain/transient.py:TransientDomain.test_autostart
> > ok 2 tests/domain/transient.py:TransientDomain.test_lifecycle
> > ok 3 tests/domain/transient.py:TransientDomain.test_convert_transient_to_persistent

> This is nice, showing fine grained TAP output lines for each individual test within the test program
>
> I tried using the hints file that Cleber pointed to make avocado *consume* TAP format for the Perl/Shell scripts:
>
> $ cd libvirt-tck
> $ cat .avocado.hint
> [kinds]
> tap = scripts/*/*.t
>
> [tap]
> uri = $testpath
>
> And I can indeed invoke the scripts:
>
> $ avocado run ./scripts/domain/05*.t
> JOB ID : b5d596d909dc8024d986957c909fc8fb6b31e2dd
> JOB LOG : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/job.log
> (1/2) ./scripts/domain/050-transient-lifecycle.t: PASS (0.70 s)
> (2/2) ./scripts/domain/051-transient-autostart.t: PASS (0.76 s)
> RESULTS : PASS 2 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
> JOB HTML : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/results.html
> JOB TIME : 1.90 s
>
> which is good.
>
> And also I can ask it to produce tap output too:
>
> $ avocado run --tap - ./scripts/domain/05*.t
> 1..2
> ok 1 ./scripts/domain/050-transient-lifecycle.t
> ok 2 ./scripts/domain/051-transient-autostart.t
>
> But this output isn't entirely what I was after. This is just summarizing the results of each test program.
>
> I can't find a way to make it show the fine grained tap output for the individual tests, like it does for the python program

Actually, the first Python TAP output example is showing a coarse TAP result. I have combined the three transient tests into a single file, split into three different methods, so each one can be executed individually, e.g. tests/domain/transient.py:TransientDomain.test_autostart.
So, I did some reorganization when migrating to the Python tests. To achieve the same with Perl, we would need a similar reorganization there, because the way the individual tests are currently written doesn't allow for individual execution.

Yes, we could do some tricks to parse and combine outputs and list them as if they were more fine grained, but AFAICT we could not individually execute those. This is part of Avocado's test definition: in order to be called a test, it needs to be individually executable as well.

--
Beraldo
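The reorganization described here (one file, several independently addressable test methods) can be sketched with stdlib unittest standing in for avocado.Test, so the snippet runs anywhere; the method names mirror the ones from the series, and the bodies are placeholders:

```python
import unittest

class TransientDomain(unittest.TestCase):
    """Placeholder mirror of tests/domain/transient.py: three methods,
    each one an individually executable test entry point."""

    def test_autostart(self):
        # real test: autostart must be refused/disabled for a transient VM
        self.assertTrue(True)

    def test_lifecycle(self):
        # real test: create, destroy, then expect NO_DOMAIN on lookup
        self.assertTrue(True)

    def test_convert_transient_to_persistent(self):
        # real test: define the running transient domain as persistent
        self.assertTrue(True)

# Each method is addressable on its own, which is what lets a runner list
# and re-execute a single failure, e.g. (module name hypothetical):
#   python -m unittest transient.TransientDomain.test_autostart
```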
On Wed, Jul 21, 2021 at 04:22:19PM -0300, Beraldo Leal wrote: > On Wed, Jul 21, 2021 at 06:50:03PM +0100, Daniel P. Berrangé wrote: > > On Thu, Jul 01, 2021 at 06:09:47PM -0300, Beraldo Leal wrote: > > > On Thu, Jul 01, 2021 at 07:04:32PM +0100, Daniel P. Berrangé wrote: > > > > On Wed, Jun 30, 2021 at 01:36:30PM -0300, Beraldo Leal wrote: > > > > > I'm adding more information with some details inside the README file. > > > > > > > > Overall, I'm more enthusiastic about writing tests in Python > > > > than Perl, for the long term, but would also potentially like > > > > to write tests in Go too. > > > > > > > > I'm wondering if we can't bridge the divide between what we > > > > have already in libvirt-tck, and what you're bringing to the > > > > table with avocado here. While we've not done much development > > > > with the TCK recently, there are some very valuable tests > > > > there, especially related to firewall support and I don't > > > > fancy rewriting them. > > > > > > > > Thus my suggestion is that we: > > > > > > > > - Put this avocado code into the libvirt-tck repository, > > > > with focus on the supporting infra for making it easy to > > > > write Python tests > > > > > > > > - Declare that all tests need a way to emit TAP format, > > > > no matter what language they're written in. This could > > > > either be the test directly emitting TAP, or it could > > > > be via use of a plugin. For example 'tappy' can make > > > > existing Python tests emit TAP, with no modifications > > > > to the tests themselves. > > > > > > > > https://tappy.readthedocs.io/en/latest/consumers.html > > > > > > > > IOW, you can still invoke the python tests using the > > > > standard Python test runner, and still invoke the perl > > > > tests using the stnadard Perl test runner if desired. 
> > > > > > This is supported already: > > > > > > $ avocado run --tap - --test-runner='nrunner' tests/domain/transient.py > > > 1..3 > > > ok 1 tests/domain/transient.py:TransientDomain.test_autostart > > > ok 2 tests/domain/transient.py:TransientDomain.test_lifecycle > > > ok 3 tests/domain/transient.py:TransientDomain.test_convert_transient_to_persistent > > > > This is nice, showing fine grained TAP output lines for each > > individual test within the test program > > > > > > I tried using the hints file that Cleber pointed to make avocado > > *consume* TAP format for the Perl/Shell scripts: > > > > $ cd libvirt-tck > > $ cat .avocado.hint > > [kinds] > > tap = scripts/*/*.t > > > > [tap] > > uri = $testpath > > > > > > And I can indeed invoke the scripts: > > > > $ avocado run ./scripts/domain/05*.t > > JOB ID : b5d596d909dc8024d986957c909fc8fb6b31e2dd > > JOB LOG : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/job.log > > (1/2) ./scripts/domain/050-transient-lifecycle.t: PASS (0.70 s) > > (2/2) ./scripts/domain/051-transient-autostart.t: PASS (0.76 s) > > RESULTS : PASS 2 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0 > > JOB HTML : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/results.html > > JOB TIME : 1.90 s > > > > which is good. > > > > And also I can ask it to produce tap output too: > > > > $ avocado run --tap - ./scripts/domain/05*.t > > 1..2 > > ok 1 ./scripts/domain/050-transient-lifecycle.t > > ok 2 ./scripts/domain/051-transient-autostart.t > > > > > > But this output isn't entirely what I was after. This is just summarizing > > the results of each test program. > > > > I can't find a way to make it show the fine grained tap output for the > > individual tests, like it does for the python program > > Actually, the first Python TAP output example is showing a coarse TAP > result. 
> I have combined three transient tests into one single file but splitting them into three different methods, so we could execute each one individually. i.e: tests/domain/transient.py:TransientDomain.test_autostart.
>
> So, I did some reorg when migrating to Python test.
>
> In order to archive the same with Perl, we could do the same there, because the way individual tests are written there, doesn't allow for individual execution.
>
> Yes, we could do some tricks, to parse and combine outputs and list as it was a more fine graned, but afaict, we could not individually execute those. This is part of Avocado test definition where in order to be called a test, we need to be able to execute those individually as well.

Ok, I'm not so fussed about whether avocado can ultimately preserve the fine grained TAP output. Mostly I'm looking to understand how you should debug failures when they go wrong, because the default output I see from avocado is very terse, giving no indication of the failure - merely that there has been a failure.

In an interactive environment you can just re-run the individual failed test directly. In an automated CI environment you need the test harness to display enough info to debug the failure directly. Ideally it should dump the full stdout+stderr either to the console or to a log file that can be published as an artifact from CI. Meson tends to do the latter, creating a log file with full stdout/err from all tests.

Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On Thu, Jul 22, 2021 at 09:44:28AM +0100, Daniel P. Berrangé wrote: > On Wed, Jul 21, 2021 at 04:22:19PM -0300, Beraldo Leal wrote: > > On Wed, Jul 21, 2021 at 06:50:03PM +0100, Daniel P. Berrangé wrote: > > > On Thu, Jul 01, 2021 at 06:09:47PM -0300, Beraldo Leal wrote: > > > > On Thu, Jul 01, 2021 at 07:04:32PM +0100, Daniel P. Berrangé wrote: > > > > > On Wed, Jun 30, 2021 at 01:36:30PM -0300, Beraldo Leal wrote: > > > > > > I'm adding more information with some details inside the README file. > > > > > > > > > > Overall, I'm more enthusiastic about writing tests in Python > > > > > than Perl, for the long term, but would also potentially like > > > > > to write tests in Go too. > > > > > > > > > > I'm wondering if we can't bridge the divide between what we > > > > > have already in libvirt-tck, and what you're bringing to the > > > > > table with avocado here. While we've not done much development > > > > > with the TCK recently, there are some very valuable tests > > > > > there, especially related to firewall support and I don't > > > > > fancy rewriting them. > > > > > > > > > > Thus my suggestion is that we: > > > > > > > > > > - Put this avocado code into the libvirt-tck repository, > > > > > with focus on the supporting infra for making it easy to > > > > > write Python tests > > > > > > > > > > - Declare that all tests need a way to emit TAP format, > > > > > no matter what language they're written in. This could > > > > > either be the test directly emitting TAP, or it could > > > > > be via use of a plugin. For example 'tappy' can make > > > > > existing Python tests emit TAP, with no modifications > > > > > to the tests themselves. > > > > > > > > > > https://tappy.readthedocs.io/en/latest/consumers.html > > > > > > > > > > IOW, you can still invoke the python tests using the > > > > > standard Python test runner, and still invoke the perl > > > > > tests using the stnadard Perl test runner if desired. 
> > > > > > > > This is supported already: > > > > > > > > $ avocado run --tap - --test-runner='nrunner' tests/domain/transient.py > > > > 1..3 > > > > ok 1 tests/domain/transient.py:TransientDomain.test_autostart > > > > ok 2 tests/domain/transient.py:TransientDomain.test_lifecycle > > > > ok 3 tests/domain/transient.py:TransientDomain.test_convert_transient_to_persistent > > > > > > This is nice, showing fine grained TAP output lines for each > > > individual test within the test program > > > > > > > > > I tried using the hints file that Cleber pointed to make avocado > > > *consume* TAP format for the Perl/Shell scripts: > > > > > > $ cd libvirt-tck > > > $ cat .avocado.hint > > > [kinds] > > > tap = scripts/*/*.t > > > > > > [tap] > > > uri = $testpath > > > > > > > > > And I can indeed invoke the scripts: > > > > > > $ avocado run ./scripts/domain/05*.t > > > JOB ID : b5d596d909dc8024d986957c909fc8fb6b31e2dd > > > JOB LOG : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/job.log > > > (1/2) ./scripts/domain/050-transient-lifecycle.t: PASS (0.70 s) > > > (2/2) ./scripts/domain/051-transient-autostart.t: PASS (0.76 s) > > > RESULTS : PASS 2 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0 > > > JOB HTML : /home/berrange/avocado/job-results/job-2021-07-21T18.45-b5d596d/results.html > > > JOB TIME : 1.90 s > > > > > > which is good. > > > > > > And also I can ask it to produce tap output too: > > > > > > $ avocado run --tap - ./scripts/domain/05*.t > > > 1..2 > > > ok 1 ./scripts/domain/050-transient-lifecycle.t > > > ok 2 ./scripts/domain/051-transient-autostart.t > > > > > > > > > But this output isn't entirely what I was after. This is just summarizing > > > the results of each test program. > > > > > > I can't find a way to make it show the fine grained tap output for the > > > individual tests, like it does for the python program > > > > Actually, the first Python TAP output example is showing a coarse TAP > > result. 
I have combined three transient tests into one single file but > > splitting them into three different methods, so we could execute each one > > individually. i.e: tests/domain/transient.py:TransientDomain.test_autostart. > > > > So, I did some reorg when migrating to Python test. > > > > In order to archive the same with Perl, we could do the same there, > > because the way individual tests are written there, doesn't allow for > > individual execution. > > > > Yes, we could do some tricks, to parse and combine outputs and list as > > it was a more fine graned, but afaict, we could not individually execute > > those. This is part of Avocado test definition where in order to be > > called a test, we need to be able to execute those individually as well. > > Ok, I'm not so fussed about whether avocado can ultimately preserve the > fine grained TAP output. Mostly I'm looking to understand how you should > debug failures when they go wrong, becuase the default output I see from > avocado is very terse giving no indiciation of the the failure - merely > that there has been a failure. > > In an interactive environment you can just re-rnu the individual failed > test directly. In an automated CI environment you need the test harness > to display enough info to debug the failure directly. Ideally it should > dump the full stdout+stderr either to the console or a log file that > can be publish as an artifact from CI. Meson tends todo the latter > creating a log file will full stdout/err from all tests. Sure, I completely understand your concerns. In this case, the output here is "very terse" because you are running the "--tap" version. When using the "default/human" output, Avocado outputs some pointers to where you could get more details. We understand that debugging is a critical part of the process, so Avocado stores the job information inside "~/avocado/job-results/[job-id|latest]/" folder by default (regardless of the output format used). 
Inside this folder, you can find a "debug.log" file and a "test-results" sub-folder (with per-test stdout/stderr files). The entire folder could be uploaded as an artifact for any CI. A results.xml (xUnit format) is also generated by default, for better debugging on CI systems that support that format (e.g. GitLab). If you would like to see an example, the QEMU project's test list[1] on GitLab is based on this XML file.

I understand that there is some room for improvements here and we are glad to address them. In the meantime, I have been working on a proposal for the libvirt-tck[2] based on the progress of this thread. Hopefully, this will help with the Avocado bootstrap phase.

[1] - https://gitlab.com/qemu-project/qemu/-/pipelines/340613922/test_report
[2] - https://gitlab.com/beraldoleal/libvirt-tck/-/tree/avocado-libvirt/

Regards,
--
Beraldo
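Since the results.xml follows the usual xUnit/JUnit shape, a CI step can pull the failing case names out with a few lines of stdlib Python. The sample document below is hand-written to that convention, not taken from a real Avocado job:

```python
# Sketch: how a CI job might mine an xUnit-style results.xml for failures.
# The element and attribute names follow the common JUnit/xUnit convention;
# they are not verified against actual Avocado output.
import xml.etree.ElementTree as ET

SAMPLE = """\
<testsuite name="job" tests="2" errors="0" failures="1">
  <testcase classname="TransientDomain" name="test_autostart"/>
  <testcase classname="TransientDomain" name="test_lifecycle">
    <failure message="domain still present after destroy"/>
  </testcase>
</testsuite>"""

def failed_cases(xml_text):
    """Return the names of testcases carrying a <failure> or <error> child."""
    root = ET.fromstring(xml_text)
    return [tc.get("name")
            for tc in root.iter("testcase")
            if tc.find("failure") is not None or tc.find("error") is not None]

print(failed_cases(SAMPLE))  # -> ['test_lifecycle']
```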
Hi Daniel, On Thu, Jul 1, 2021 at 2:05 PM Daniel P. Berrangé <berrange@redhat.com> wrote: > > On Wed, Jun 30, 2021 at 01:36:30PM -0300, Beraldo Leal wrote: > > lavocado aims to be an alternative test framework for the libvirt > > project using Python, python-libvirt and Avocado. This can be used to > > write unit, functional and integration tests and it is inspired by the > > libvirt-tck framework. > > > > This series introduces the basic framework along with some basic test > > examples that I got from libvirt-tck. I would appreciate your comments > > on this RFC, to see if this fits this project's needs. Also, names and > > locations are just a proposal and can be changed. > > Some high level thoughts > > - More extensive functional integration testing coverage is good > > - We need to actually run the functional tests regularly reporting > via GitLab pipelines in some way > > - Using Python is way more contributor friendly than Perl > > - This does not need to live in libvirt.git as we don't follow > a monolithic repo approach in libvirt, and it already depends > on functionality provided by other repos. > > > When it comes to testing, I feel like there are several distinct > pieces to think about > > - The execution & reporting harness > - Supporting infrastructure to aid writing tests > - The set of tests themselves > > If I look at the TCK > > - The harness is essentially the standard Perl harness > with a thin CLI wrapper, thus essentially works with > any test emitting TAP format > - The support infra is all custom APIs using libvirt-perl > - The tests are mostly written in Perl, but some are > written in shell (yuk). They all output TAP format. > > One key thing here is that the test harness is fairly loosely > coupled to the support infra & tests. > > The TAP data format bridged the two, letting us write tests > in essentially any language. 
> Of course writing tests in non-Perl was/is tedious today, since there's no support infra for that which exists today.
>
> The TAP data format bridge also means we can easily throw away the current TCK harness and replace it with anything else that can consume tests emitting TAP data.
>
> If I look at Avocado, I think (correct me if i'm wrong)
>
> 1. The harness is essentially the standard Python harness
> with a thin CLI wrapper. Thus needs all tests to implement
> the Python test APIs

Not really. Even though Avocado is mostly written in Python, there have been provisions for accommodating foreign types of tests (in different forms) since its inception. The most basic way is, of course, simply treating a test as an executable. But this is far from the only way. For instance, these are other possibilities:

a) if an executable generates TAP, Avocado can consume the test's TAP output and determine the test results

b) you can write support for completely new test types[1]. If you want your test to be "discoverable" when running `avocado list ...` you will have to write some Python code. But you can skip that part, and have an executable named "avocado-runner-$test-type" written in any language whatsoever, which will be used to execute whatever your tests are and feed the information to Avocado.

BTW, we already have a plugin that can list go tests[2] and run them (via a Python wrapper that uses `go test`).

> 2. The support infra is all custom APIs using libvirt-python
> 3. The tests are to be written entirely in Python, to integrate
> with the python test harness
>
> IIUC, 90% of this patch series is essentially about (2), since (1) is fairly trivial and for (3) there's just simple demo tests.
>
> Libvirt has consumers writing applications in a variety of languages, and periodically reporting bugs. My general wish for a test harness would be for something that can invoke and consume arbitrary test cases.
> Essentially if someone can provide us a piece of code that demonstrates a problem in libvirt, I would like to be able to use that directly as a functional test, regardless of language it was written in.
>
> In theory the libvirt TCK allowed for that, and indeed the few tests written in shell essentially came to exist because someone at IBM had written some shell code to test firewalls and contributed that as a functional test. They just hacked their script to emit TAP data.
>
> I agree there's no reason to *not* continue to allow that.

I'd even argue that Avocado can give libvirt even more options wrt integrating different types of tests.

> > For now, this framework assumes that you are going to run the tests in a
> > fresh clean environment, i.e. a VM. If you decide to use your local
> > system, beware that execution of the tests may affect your system.

> It is always good to be wary of functional test frameworks trashing your system, but at the same time I think makes things more flexible if they are able to be reasonably safely run on /any/ host.

I'm in the same boat here. All the QEMU/libvirt testing infrastructure that I was involved with evolved from something that needed to run as a privileged user and could easily trash your system, to something that is runnable as a regular user after a `pip install --user`. There are challenges, but most situations are handled with tests being canceled if they do not have what they need to run, and by always assuming a "good neighbor" policy (that is, you're not alone in the system).

Having said that, I've seen how much care has been dedicated to libvirt-ci so that test execution environments would be predictable, reliable and easy to obtain. I guess the original text from Beraldo tried to make it clear that we envision an obvious/default integration with those environments created by libvirt-ci, while not restricting other environments or assuming every execution will be on those environments.
> For the TCK we tried to be quite conservative in this regard, because it was actually an explicit goal that you can run it on any Fedora host to validate it correctly functioning for KVM. To achieve that we tried to use some standard naming conventions for any resources that tests created, and if we saw pre-existing resources named differently we didn't touch them. ie all VMs were named 'tck-XXX', and were free to be deleted, but other named VMs were ignored.
>
> For anything special, such as testing PCI device assignment, the test configuration file had to explicitly contain details of host devices that were safe to use. This was also done for host block devices or NICs, etc. Thus a default invokation only ran a subset of tests which were safe. The more dangerous tests required you to modify the config file to grant access.
>
> I think it would be good if the Avocado supporting test APIs had a similar conceptual approach, especially wrt to a config file granting access to host resources such as NICs/Block devs/PCI devs, where you need prior explicit admin permission to avoid danger.

Absolutely agreed.

> > One of the future goals of this framework is to utilize nested
> > virtualization technologies and hence make sure an L1 guest is
> > provisioned automatically for the tests to be executed in this
> > environment and not tamper with your main system.

> I think the test framework should not concern itself with this kind of thing directly. We already have libvirt-ci, which has tools for provisioning VMs with pre-defined package sets. The test harness should just expect that this has been done, and that it is already running in such an environment if the user wanted it to be.

Right. I know for a fact the original text was not about depending exclusively on those VM environments, but about having an obvious default experience based on them.
> In the TCK config file we provided setting for a URI to connect > to other hosts, to enable multi-hosts tests like live migration > to be done. > > > I'm adding more information with some details inside the README file. > > Overall, I'm more enthusiastic about writing tests in Python > than Perl, for the long term, but would also potentially like > to write tests in Go too. > > I'm wondering if we can't bridge the divide between what we > have already in libvirt-tck, and what you're bringing to the > table with avocado here. While we've not done much development > with the TCK recently, there are some very valuable tests > there, especially related to firewall support and I don't > fancy rewriting them. > > Thus my suggestion is that we: > > - Put this avocado code into the libvirt-tck repository, > with focus on the supporting infra for making it easy to > write Python tests > One drawback would be the various supporting files for a perl+Python package. I mean, a Makefile.PL and a setup.py in the same repo can be a source of confusion. But, maybe I'm overreacting. > - Declare that all tests need a way to emit TAP format, > no matter what language they're written in. This could > either be the test directly emitting TAP, or it could > be via use of a plugin. For example 'tappy' can make > existing Python tests emit TAP, with no modifications > to the tests themselves. > > https://tappy.readthedocs.io/en/latest/consumers.html > I honestly think Avocado can provide a better range of integration opportunities. > IOW, you can still invoke the python tests using the > standard Python test runner, and still invoke the perl > tests using the stnadard Perl test runner if desired. > > - Switch the TCK configuration file to use JSON/YAML > instead of its current custom format. This is to enable > the Python tests to share a config file with the Perl > and Shell tests. 
Thus they can all have the same way > to figure out what block devs, NICs, PCI devs, etc > they are permitted to use, and what dirs it is safe > to create VM images in, etc. > Makes sense. > - Replace the current Perl based execution harness > with something using Python, so that it is less > of a pain to install in terms of dependancies. > Looks like we could just do something using the > 'tappy' Python APIs to consume the TAP format > from all the tests and generate a junit report > that GitLab can consume. > These are all things Avocado can do right away. > - Rename the RPMs so they match the project name > "libvirt-tck", instead of "perl-Sys-Virt-TCK" > to emphasis that this is not a Perl project, it > is a testing project, of which some parts were > historically Perl, but new parts will be mostly > Python (and possibly Go/etc in future if desired). > Potentially have RPMs split: > > * libvirt-tck - just the execution harness, > currently the Perl one, but to be replaced > with a Python one. > > * libvirt-tck-tests-python - supporting APIs > and tests written in Python - essentially > all of this series, and future tests written > > * libvirt-tck-tests-perl - supporting APIs > and tests written in Perl - essentially > most of existing TCK stuff > > * libvirt-tck-tests-shell - supporting APIs > and tests written in shell - mostly the > network/firewall related TCK pieces > > > Given the existance of the 'tappy' tool, it feels like the first > two points ought to be reasonably doable without any significant > changes to what you've already written. Just need to import it > as is, and then using tappy as a shim to let it invoked from > libvirt-tck. Can still have the existing way to invoke it > directly too as an alternative. > > The common JSON/YAML config file, and adopting some standard > naming conventions and resource usage conventions are probably > where the real work would lie. 
That's likely not too bad at > this stage though, since you've not actually written a large > number of test cases. > > > Beraldo Leal (4): > > tests: introduce lavocado: initial code structure > > tests.lavocado: adding basic transient domain tests > > tests.lavocado: adding a .gitignore > > tests.lavocado: adding a README and Makefile for convenience > > Regards, > Daniel > -- > |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| > |: https://libvirt.org -o- https://fstop138.berrange.com :| > |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :| > Cheers, - Cleber. [1] - https://avocado-framework.readthedocs.io/en/89.0/guides/contributor/chapters/plugins.html#new-test-type-plugin-example [2] - https://avocado-framework.readthedocs.io/en/89.0/plugins/optional/golang.html
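To make the shared-config idea concrete, one possible shape for such a JSON/YAML file; every key name below is invented for illustration, the point being a single document that Perl, Python, and shell tests can all parse to learn which host resources they are allowed to touch:

```yaml
# Hypothetical shared TCK config -- all key names are illustrative only.
uri: qemu:///system          # libvirt connection the tests should use
scratch_dir: /var/tmp/tck    # where it is safe to create VM images
resources:
  block_devices: []          # host disks tests may safely overwrite
  nics: []                   # host NICs tests may attach to bridges
  pci_devices: []            # devices safe for assignment tests
hosts:
  migration_target: null     # URI of a second host for live migration tests
```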
On Fri, Jul 02, 2021 at 07:23:57AM -0400, Cleber Rosa wrote: > On Thu, Jul 1, 2021 at 2:05 PM Daniel P. Berrangé <berrange@redhat.com> wrote: > > > > Overall, I'm more enthusiastic about writing tests in Python > > than Perl, for the long term, but would also potentially like > > to write tests in Go too. > > > > I'm wondering if we can't bridge the divide between what we > > have already in libvirt-tck, and what you're bringing to the > > table with avocado here. While we've not done much development > > with the TCK recently, there are some very valuable tests > > there, especially related to firewall support and I don't > > fancy rewriting them. > > > > Thus my suggestion is that we: > > > > - Put this avocado code into the libvirt-tck repository, > > with focus on the supporting infra for making it easy to > > write Python tests > > > > One drawback would be the various supporting files for a perl+Python > package. I mean, a Makefile.PL and a setup.py in the same repo can be > a source of confusion. But, maybe I'm overreacting. I don't think that's an issue. We're not actually trying to ship Perl modules or Python modules, we're providing an application that happens to use some Perl and Python and $whatever code. As such, I would not use either Makefile.PL or setup.py for this task. Instead I think we should just use meson to deal with it all, keeping it all private to the application. Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
On Fri, Jul 02, 2021 at 07:23:57AM -0400, Cleber Rosa wrote:
> Hi Daniel,
>
> On Thu, Jul 1, 2021 at 2:05 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
> >
> > On Wed, Jun 30, 2021 at 01:36:30PM -0300, Beraldo Leal wrote:
> > > lavocado aims to be an alternative test framework for the libvirt project using Python, python-libvirt and Avocado. This can be used to write unit, functional and integration tests, and it is inspired by the libvirt-tck framework.
> > >
> > > This series introduces the basic framework along with some basic test examples that I got from libvirt-tck. I would appreciate your comments on this RFC, to see if this fits this project's needs. Also, names and locations are just a proposal and can be changed.
> >
> > Some high level thoughts:
> >
> >  - More extensive functional integration testing coverage is good
> >
> >  - We need to actually run the functional tests regularly, reporting via GitLab pipelines in some way
> >
> >  - Using Python is way more contributor friendly than Perl
> >
> >  - This does not need to live in libvirt.git, as we don't follow a monolithic repo approach in libvirt, and it already depends on functionality provided by other repos.
> >
> > When it comes to testing, I feel like there are several distinct pieces to think about:
> >
> >  - The execution & reporting harness
> >  - Supporting infrastructure to aid writing tests
> >  - The set of tests themselves
> >
> > If I look at the TCK:
> >
> >  - The harness is essentially the standard Perl harness with a thin CLI wrapper, and thus essentially works with any test emitting TAP format
> >  - The support infra is all custom APIs using libvirt-perl
> >  - The tests are mostly written in Perl, but some are written in shell (yuk). They all output TAP format.
> >
> > One key thing here is that the test harness is fairly loosely coupled to the support infra & tests.
> >
> > The TAP data format bridged the two, letting us write tests in essentially any language. Of course writing tests in non-Perl was/is tedious, since there's no support infra for that which exists today.
> >
> > The TAP data format bridge also means we can easily throw away the current TCK harness and replace it with anything else that can consume tests emitting TAP data.
> >
> > If I look at Avocado, I think (correct me if I'm wrong):
> >
> >  1. The harness is essentially the standard Python harness with a thin CLI wrapper. Thus it needs all tests to implement the Python test APIs.
>
> Not really. Even though Avocado is mostly written in Python, there have been provisions for accommodating foreign types of tests (in different forms) since its inception. The most basic way is, of course, simply treating a test as an executable. But this is far from the only way. For instance, these are other possibilities:
>
> a) if an executable generates TAP, it can consume the test's TAP output and determine the test results

Can you show how to actually make this work? From the manpage I can only see how to make it emit TAP, not consume it. I see there is a 'tap' plugin, but that doesn't seem to provide a runner.

Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
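[Editor's illustration of the TAP "bridge" discussed above. This is a hypothetical sketch — the function name and result representation are invented here and are not part of the TCK, lavocado, or Avocado — showing how trivially any language can emit the format a TAP-aware harness consumes.]

```python
# Hypothetical sketch: render test results as a TAP stream.
# Any language that can print these lines can participate in
# a TAP-based harness, which is what decouples the TCK harness
# from the tests themselves.
def emit_tap(results):
    """Render a list of (name, passed) pairs as TAP text."""
    lines = ["1..%d" % len(results)]          # the TAP "plan" line
    for num, (name, passed) in enumerate(results, start=1):
        status = "ok" if passed else "not ok"  # TAP "test point" lines
        lines.append("%s %d - %s" % (status, num, name))
    return "\n".join(lines)

print(emit_tap([("/bin/true", True), ("/bin/uname", True)]))
# -> 1..2
#    ok 1 - /bin/true
#    ok 2 - /bin/uname
```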
On Fri, Jul 2, 2021 at 7:55 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Fri, Jul 02, 2021 at 07:23:57AM -0400, Cleber Rosa wrote:
> > Not really. Even though Avocado is mostly written in Python, there have been provisions for accommodating foreign types of tests (in different forms) since its inception. The most basic way is, of course, simply treating a test as an executable. But this is far from the only way. For instance, these are other possibilities:
> >
> > a) if an executable generates TAP, it can consume the test's TAP output and determine the test results
>
> Can you show how to actually make this work, since from the manpage I can only see how to make it emit TAP, not consume it. I see there is a 'tap' plugin but that doesn't seem to provide a runner.

There are a couple of ways. The simplest is hinting to Avocado that a file is of kind "tap" with a hintfile.
Suppose you have a "test-suite" directory, and in it, you have "test.sh":

  #!/bin/sh -e
  echo "1..2"
  echo "ok 1 /bin/true"
  echo "ok 2 /bin/uname"

And ".avocado.hint":

  [kinds]
  tap = *.sh

  [tap]
  uri = $testpath

If you "cd test-suite", and do "avocado list --resolver *sh", you get:

  Type Test    Tag(s)
  tap  test.sh

  Resolver Reference Info

  TEST TYPES SUMMARY
  ==================
  tap: 1

And then you can run it with:

  $ avocado run --test-runner=nrunner ./test.sh
  JOB ID     : 2166ad93ffc25da4d5fc8e7db073f4555d55e81a
  JOB LOG    : /home/cleber/avocado/job-results/job-2021-07-02T08.39-2166ad9/job.log
   (1/1) ./test.sh: STARTED
   (1/1) ./test.sh: PASS (0.01 s)
  RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
  JOB HTML   : /home/cleber/avocado/job-results/job-2021-07-02T08.39-2166ad9/results.html
  JOB TIME   : 1.31 s

This is a relevant documentation pointer:
https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/introduction.html#the-hint-files

And I'll make sure the man page is updated, thanks for noticing it.

Thanks,
- Cleber.
On Fri, Jul 2, 2021 at 8:40 AM Cleber Rosa <crosa@redhat.com> wrote:
>
> There are a couple of ways. The simplest is hinting to Avocado that a file is of kind "tap" with a hintfile. Suppose you have a "test-suite" directory, and in it, you have "test.sh":
>
> [...]
>
> If you "cd test-suite", and do "avocado list --resolver *sh", you get:

Actually, I ran the verbose version, that is, "avocado -V list --resolver *sh".

Thanks,
- Cleber.
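[Editor's note: for intuition about what "consume the test's TAP output and determine the test results" involves, here is a minimal hypothetical consumer in Python. This is not Avocado's actual parser — the function name and return shape are invented for illustration — it only shows the idea behind a TAP-consuming runner.]

```python
# Hypothetical minimal TAP consumer (not Avocado's implementation):
# scan a TAP stream and count passing and failing test points.
def parse_tap(stream):
    """Return (passed, failed) counts from TAP text."""
    passed = failed = 0
    for line in stream.splitlines():
        line = line.strip()
        # "not ok" must be checked first, since it also starts with "ok"
        # after the negation; a plain prefix test on "ok" would miscount.
        if line.startswith("not ok"):
            failed += 1
        elif line.startswith("ok"):
            passed += 1
    return passed, failed

tap_output = """\
1..2
ok 1 /bin/true
ok 2 /bin/uname
"""
print(parse_tap(tap_output))  # -> (2, 0)
```

A real runner would also validate the plan line ("1..2") against the number of test points seen, and map the counts onto an overall PASS/FAIL job result.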