[Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test

Aleksandar Markovic posted 2 patches 4 years, 8 months ago
Test docker-clang@ubuntu passed
Test s390x passed
Test asan passed
Test docker-mingw@fedora passed
Test FreeBSD passed
Test checkpatch passed
Patches applied successfully (tree, apply log)
git fetch https://github.com/patchew-project/qemu tags/patchew/1564760158-27536-1-git-send-email-aleksandar.markovic@rt-rk.com
Maintainers: Aleksandar Rikalo <arikalo@wavecomp.com>, Aurelien Jarno <aurelien@aurel32.net>
tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
1 file changed, 66 insertions(+), 15 deletions(-)
[Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Posted by Aleksandar Markovic 4 years, 8 months ago
From: Aleksandar Markovic <amarkovic@wavecomp.com>

This little series improves linux_ssh_mips_malta.py, both in terms of
code organization and in the number of tests executed.

Aleksandar Markovic (2):
  tests/acceptance: Refactor and improve reporting in
    linux_ssh_mips_malta.py
  tests/acceptance: Add new test cases in linux_ssh_mips_malta.py

 tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
 1 file changed, 66 insertions(+), 15 deletions(-)

-- 
2.7.4


Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Posted by Aleksandar Markovic 4 years, 8 months ago
On 02.08.2019. 17.37, "Aleksandar Markovic" <aleksandar.markovic@rt-rk.com> wrote:
>
> From: Aleksandar Markovic <amarkovic@wavecomp.com>
>
> This little series improves linux_ssh_mips_malta.py, both in terms of
> code organization and in the number of tests executed.
>

Hello, all.

I am going to send a new version in a few days, and I have a question for
the test team:

Currently, the outcome of the script execution is either PASS:1 FAIL:0 or
PASS:0 FAIL:1. But the test actually consists of several subtests. Is there
any way for this single Python script to treat these subtests as separate
tests (test cases), reporting something like PASS:12 FAIL:7? If so, what
would be the best way to achieve that?

Thanks in advance,
Aleksandar

> Aleksandar Markovic (2):
>   tests/acceptance: Refactor and improve reporting in
>     linux_ssh_mips_malta.py
>   tests/acceptance: Add new test cases in linux_ssh_mips_malta.py
>
>  tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
>  1 file changed, 66 insertions(+), 15 deletions(-)
>
> --
> 2.7.4
>
>
Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Posted by Eduardo Habkost 4 years, 8 months ago
On Wed, Aug 21, 2019 at 10:27:11PM +0200, Aleksandar Markovic wrote:
> On 02.08.2019. 17.37, "Aleksandar Markovic" <aleksandar.markovic@rt-rk.com> wrote:
> >
> > From: Aleksandar Markovic <amarkovic@wavecomp.com>
> >
> > This little series improves linux_ssh_mips_malta.py, both in terms of
> > code organization and in the number of tests executed.
> >
> 
> Hello, all.
> 
> I am going to send a new version in a few days, and I have a question for
> the test team:
> 
> Currently, the outcome of the script execution is either PASS:1 FAIL:0 or
> PASS:0 FAIL:1. But the test actually consists of several subtests. Is there
> any way for this single Python script to treat these subtests as separate
> tests (test cases), reporting something like PASS:12 FAIL:7? If so, what
> would be the best way to achieve that?

If you are talking about each test_*() method, they are already
treated like separate tests.  If you mean treating each
ssh_command_output_contains() call as a separate test, this might
be difficult.
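
To illustrate the first point (a sketch only; the method names below are
made up, not the ones in the actual test file):

---
# Each test_* method of an avocado_qemu test class is collected and
# reported by Avocado as its own test case, with its own PASS/FAIL.
from avocado_qemu import Test


class LinuxSSH(Test):

    def test_checks_over_ssh_eb(self):
        # boots one VM configuration and runs its ssh checks
        pass

    def test_checks_over_ssh_el(self):
        # a separate VM configuration, reported as a separate test
        pass
---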

Cleber, is there something already available in the Avocado API
that would help us report more fine-grained results inside each
test case?


> 
> Thanks in advance,
> Aleksandar
> 
> > Aleksandar Markovic (2):
> >   tests/acceptance: Refactor and improve reporting in
> >     linux_ssh_mips_malta.py
> >   tests/acceptance: Add new test cases in linux_ssh_mips_malta.py
> >
> >  tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
> >  1 file changed, 66 insertions(+), 15 deletions(-)
> >
> > --
> > 2.7.4
> >
> >

-- 
Eduardo

Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Posted by Aleksandar Markovic 4 years, 8 months ago
On 21.08.2019. 23.00, "Eduardo Habkost" <ehabkost@redhat.com> wrote:
>
> On Wed, Aug 21, 2019 at 10:27:11PM +0200, Aleksandar Markovic wrote:
> > On 02.08.2019. 17.37, "Aleksandar Markovic" <aleksandar.markovic@rt-rk.com> wrote:
> > >
> > > From: Aleksandar Markovic <amarkovic@wavecomp.com>
> > >
> > > This little series improves linux_ssh_mips_malta.py, both in terms of
> > > code organization and in the number of tests executed.
> > >
> >
> > Hello, all.
> >
> > I am going to send a new version in a few days, and I have a question for
> > the test team:
> >
> > Currently, the outcome of the script execution is either PASS:1 FAIL:0 or
> > PASS:0 FAIL:1. But the test actually consists of several subtests. Is there
> > any way for this single Python script to treat these subtests as separate
> > tests (test cases), reporting something like PASS:12 FAIL:7? If so, what
> > would be the best way to achieve that?
>
> If you are talking about each test_*() method, they are already
> treated like separate tests.  If you mean treating each
> ssh_command_output_contains() call as a separate test, this might
> be difficult.
>

Yes, I meant the latter: individual code segments involving an invocation
of ssh_command_output_contains() being treated as separate tests.

> Cleber, is there something already available in the Avocado API
> that would help us report more fine-grained results inside each
> test case?
>

Thanks, that would be a better way of expressing my question.

>
> >
> > Thanks in advance,
> > Aleksandar
> >
> > > Aleksandar Markovic (2):
> > >   tests/acceptance: Refactor and improve reporting in
> > >     linux_ssh_mips_malta.py
> > >   tests/acceptance: Add new test cases in linux_ssh_mips_malta.py
> > >
> > >  tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
> > >  1 file changed, 66 insertions(+), 15 deletions(-)
> > >
> > > --
> > > 2.7.4
> > >
> > >
>
> --
> Eduardo
Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Posted by Aleksandar Markovic 4 years, 8 months ago
On 22.08.2019. 05.15, "Aleksandar Markovic" <aleksandar.m.mail@gmail.com> wrote:
>
>
> On 21.08.2019. 23.00, "Eduardo Habkost" <ehabkost@redhat.com> wrote:
> >
> > On Wed, Aug 21, 2019 at 10:27:11PM +0200, Aleksandar Markovic wrote:
> > > On 02.08.2019. 17.37, "Aleksandar Markovic" <aleksandar.markovic@rt-rk.com> wrote:
> > > >
> > > > From: Aleksandar Markovic <amarkovic@wavecomp.com>
> > > >
> > > > This little series improves linux_ssh_mips_malta.py, both in terms of
> > > > code organization and in the number of tests executed.
> > > >
> > >
> > > Hello, all.
> > >
> > > I am going to send a new version in a few days, and I have a question
> > > for the test team:
> > >
> > > Currently, the outcome of the script execution is either PASS:1 FAIL:0 or
> > > PASS:0 FAIL:1. But the test actually consists of several subtests. Is there
> > > any way for this single Python script to treat these subtests as separate
> > > tests (test cases), reporting something like PASS:12 FAIL:7? If so, what
> > > would be the best way to achieve that?
> >
> > If you are talking about each test_*() method, they are already
> > treated like separate tests.  If you mean treating each
> > ssh_command_output_contains() call as a separate test, this might
> > be difficult.
> >
>
> Yes, I meant the latter: individual code segments involving an invocation
> of ssh_command_output_contains() being treated as separate tests.
>

Hello, Cleber,

I am willing to revamp the Python file structure if needed.

The only thing I feel a little uncomfortable about is having to reboot the
virtual machine for each ssh_command_output_contains() check.

Grateful in advance,
Aleksandar

> > Cleber, is there something already available in the Avocado API
> > that would help us report more fine-grained results inside each
> > test case?
> >
>
> Thanks, that would be a better way of expressing my question.
>
> >
> > >
> > > Thanks in advance,
> > > Aleksandar
> > >
> > > > Aleksandar Markovic (2):
> > > >   tests/acceptance: Refactor and improve reporting in
> > > >     linux_ssh_mips_malta.py
> > > >   tests/acceptance: Add new test cases in linux_ssh_mips_malta.py
> > > >
> > > >  tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
> > > >  1 file changed, 66 insertions(+), 15 deletions(-)
> > > >
> > > > --
> > > > 2.7.4
> > > >
> > > >
> >
> > --
> > Eduardo
Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Posted by Aleksandar Markovic 4 years, 8 months ago
ping

On 22.08.2019. 19.59, "Aleksandar Markovic" <aleksandar.m.mail@gmail.com> wrote:
>
>
> On 22.08.2019. 05.15, "Aleksandar Markovic" <aleksandar.m.mail@gmail.com> wrote:
> >
> >
> > On 21.08.2019. 23.00, "Eduardo Habkost" <ehabkost@redhat.com> wrote:
> > >
> > > On Wed, Aug 21, 2019 at 10:27:11PM +0200, Aleksandar Markovic wrote:
> > > > On 02.08.2019. 17.37, "Aleksandar Markovic" <aleksandar.markovic@rt-rk.com> wrote:
> > > > >
> > > > > From: Aleksandar Markovic <amarkovic@wavecomp.com>
> > > > >
> > > > > This little series improves linux_ssh_mips_malta.py, both in terms of
> > > > > code organization and in the number of tests executed.
> > > > >
> > > >
> > > > Hello, all.
> > > >
> > > > I am going to send a new version in a few days, and I have a question
> > > > for the test team:
> > > >
> > > > Currently, the outcome of the script execution is either PASS:1 FAIL:0 or
> > > > PASS:0 FAIL:1. But the test actually consists of several subtests. Is there
> > > > any way for this single Python script to treat these subtests as separate
> > > > tests (test cases), reporting something like PASS:12 FAIL:7? If so, what
> > > > would be the best way to achieve that?
> > >
> > > If you are talking about each test_*() method, they are already
> > > treated like separate tests.  If you mean treating each
> > > ssh_command_output_contains() call as a separate test, this might
> > > be difficult.
> > >
> >
> > Yes, I meant the latter: individual code segments involving an invocation
> > of ssh_command_output_contains() being treated as separate tests.
> >
>
> Hello, Cleber,
>
> I am willing to revamp the Python file structure if needed.
>
> The only thing I feel a little uncomfortable about is having to reboot the
> virtual machine for each ssh_command_output_contains() check.
>
> Grateful in advance,
> Aleksandar
>
> > > Cleber, is there something already available in the Avocado API
> > > that would help us report more fine-grained results inside each
> > > test case?
> > >
> >
> > Thanks, that would be a better way of expressing my question.
> >
> > >
> > > >
> > > > Thanks in advance,
> > > > Aleksandar
> > > >
> > > > > Aleksandar Markovic (2):
> > > > >   tests/acceptance: Refactor and improve reporting in
> > > > >     linux_ssh_mips_malta.py
> > > > >   tests/acceptance: Add new test cases in linux_ssh_mips_malta.py
> > > > >
> > > > >  tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
> > > > >  1 file changed, 66 insertions(+), 15 deletions(-)
> > > > >
> > > > > --
> > > > > 2.7.4
> > > > >
> > > > >
> > >
> > > --
> > > Eduardo
Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Posted by Cleber Rosa 4 years, 8 months ago
On Thu, Aug 22, 2019 at 07:59:07PM +0200, Aleksandar Markovic wrote:
> On 22.08.2019. 05.15, "Aleksandar Markovic" <aleksandar.m.mail@gmail.com> wrote:
> >
> >
> > On 21.08.2019. 23.00, "Eduardo Habkost" <ehabkost@redhat.com> wrote:
> > >
> > > On Wed, Aug 21, 2019 at 10:27:11PM +0200, Aleksandar Markovic wrote:
> > > > On 02.08.2019. 17.37, "Aleksandar Markovic" <aleksandar.markovic@rt-rk.com> wrote:
> > > > >
> > > > > From: Aleksandar Markovic <amarkovic@wavecomp.com>
> > > > >
> > > > > This little series improves linux_ssh_mips_malta.py, both in terms of
> > > > > code organization and in the number of tests executed.
> > > > >
> > > >
> > > > Hello, all.
> > > >
> > > > I am going to send a new version in a few days, and I have a question
> > > > for the test team:
> > > >
> > > > Currently, the outcome of the script execution is either PASS:1 FAIL:0 or
> > > > PASS:0 FAIL:1. But the test actually consists of several subtests. Is there
> > > > any way for this single Python script to treat these subtests as separate
> > > > tests (test cases), reporting something like PASS:12 FAIL:7? If so, what
> > > > would be the best way to achieve that?
> > >
> > > If you are talking about each test_*() method, they are already
> > > treated like separate tests.  If you mean treating each
> > > ssh_command_output_contains() call as a separate test, this might
> > > be difficult.
> > >
> >
> > Yes, I meant the latter: individual code segments involving an invocation
> > of ssh_command_output_contains() being treated as separate tests.
> >
> 
> Hello, Cleber,
> 
> I am willing to revamp the Python file structure if needed.
> 
> The only thing I feel a little uncomfortable about is having to reboot the
> virtual machine for each ssh_command_output_contains() check.
>

Hi Aleksandar,

The short answer is that Avocado provides no way to report "subtest"
statuses (as a formal concept), nor does the current
"avocado_qemu" infrastructure allow for management of VMs across
tests.  The latter is an Avocado-VT feature and, to be honest, it
brings a good deal of problems of its own, which we decided to avoid
here.

About the lack of subtests, we (the autotest project, then the Avocado
project) found that this concept, to be applied well, needs more than
we could deal with initially.  For instance, Avocado has the concept
of "pre_test" and "post_test" hooks; should those be applied to
subtests as well?  Also, there's support for capturing system
information (a feature called sysinfo) before and after the tests...
again, should it be applied to subtests?  Avocado also keeps a
well-defined results directory, and we'd have to deal with something
like that for subtests.  With regards to the variants feature, should
they also be applied to subtests?  The list of questions goes on and
on.

The fact that one test should not (as much as possible) be able to
influence another test also played into our initial decision
to avoid subtests.

IMO, the best way to handle this is to keep a separate logger
with the test progress:

  https://avocado-framework.readthedocs.io/en/71.0/WritingTests.html#advanced-logging-capabilities

With a change similar to:

---
diff --git a/tests/acceptance/linux_ssh_mips_malta.py b/tests/acceptance/linux_ssh_mips_malta.py
index 509ff929cf..0683586c35 100644
--- a/tests/acceptance/linux_ssh_mips_malta.py
+++ b/tests/acceptance/linux_ssh_mips_malta.py
@@ -17,6 +17,7 @@ from avocado_qemu import Test
 from avocado.utils import process
 from avocado.utils import archive
 
+progress_log = logging.getLogger("progress")
 
 class LinuxSSH(Test):
 
@@ -149,6 +150,7 @@ class LinuxSSH(Test):
         stdout, _ = self.ssh_command(cmd)
         for line in stdout:
             if exp in line:
+                progress_log.info('Check successful for "%s"', cmd)
                 break
         else:
             self.fail('"%s" output does not contain "%s"' % (cmd, exp))
---
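
For reference, here is the same pattern as a self-contained sketch (the
ssh_command() stub and the test method are just placeholders for what
the real test file already provides):

---
import logging

from avocado import Test

progress_log = logging.getLogger("progress")


class LinuxSSHSketch(Test):

    def ssh_command(self, cmd):
        # Stand-in for the real SSH helper: returns (stdout, stderr)
        # as lists of lines.
        return ["Linux malta 3.2.0-4-4kc-malta mips GNU/Linux"], []

    def ssh_command_output_contains(self, cmd, exp):
        stdout, _ = self.ssh_command(cmd)
        for line in stdout:
            if exp in line:
                # One entry in the "progress" logger per successful check
                progress_log.info('Check successful for "%s"', cmd)
                break
        else:
            self.fail('"%s" output does not contain "%s"' % (cmd, exp))

    def test_uname(self):
        self.ssh_command_output_contains("uname -a", "Linux")
---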

You could run tests with:

  $ ./tests/venv/bin/avocado --show=console,progress run --store-logging-stream progress -- tests/acceptance/linux_ssh_mips_malta.py

And at the same time:

  $ tail -f ~/avocado/job-results/latest/progress.INFO 
  17:20:44 INFO | Check successful for "uname -a"
  17:20:44 INFO | Check successful for "cat /proc/cpuinfo"
  ...

I hope this helps somehow.

Best regards,
- Cleber.

> Grateful in advance,
> Aleksandar
> 
> > > Cleber, is there something already available in the Avocado API
> > > that would help us report more fine-grained results inside each
> > > test case?
> > >
> >
> > Thanks, that would be a better way of expressing my question.
> >
> > >
> > > >
> > > > Thanks in advance,
> > > > Aleksandar
> > > >
> > > > > Aleksandar Markovic (2):
> > > > >   tests/acceptance: Refactor and improve reporting in
> > > > >     linux_ssh_mips_malta.py
> > > > >   tests/acceptance: Add new test cases in linux_ssh_mips_malta.py
> > > > >
> > > > >  tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
> > > > >  1 file changed, 66 insertions(+), 15 deletions(-)
> > > > >
> > > > > --
> > > > > 2.7.4
> > > > >
> > > > >
> > >
> > > --
> > > Eduardo

Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Posted by Aleksandar Markovic 4 years, 7 months ago
On 28.08.2019. 23.24, "Cleber Rosa" <crosa@redhat.com> wrote:
>
> On Thu, Aug 22, 2019 at 07:59:07PM +0200, Aleksandar Markovic wrote:
> > On 22.08.2019. 05.15, "Aleksandar Markovic" <aleksandar.m.mail@gmail.com> wrote:
> > >
> > >
> > > On 21.08.2019. 23.00, "Eduardo Habkost" <ehabkost@redhat.com> wrote:
> > > >
> > > > On Wed, Aug 21, 2019 at 10:27:11PM +0200, Aleksandar Markovic wrote:
> > > > > On 02.08.2019. 17.37, "Aleksandar Markovic" <aleksandar.markovic@rt-rk.com> wrote:
> > > > > >
> > > > > > From: Aleksandar Markovic <amarkovic@wavecomp.com>
> > > > > >
> > > > > > This little series improves linux_ssh_mips_malta.py, both in terms of
> > > > > > code organization and in the number of tests executed.
> > > > > >
> > > > >
> > > > > Hello, all.
> > > > >
> > > > > I am going to send a new version in a few days, and I have a question
> > > > > for the test team:
> > > > >
> > > > > Currently, the outcome of the script execution is either PASS:1 FAIL:0 or
> > > > > PASS:0 FAIL:1. But the test actually consists of several subtests. Is there
> > > > > any way for this single Python script to treat these subtests as separate
> > > > > tests (test cases), reporting something like PASS:12 FAIL:7? If so, what
> > > > > would be the best way to achieve that?
> > > >
> > > > If you are talking about each test_*() method, they are already
> > > > treated like separate tests.  If you mean treating each
> > > > ssh_command_output_contains() call as a separate test, this might
> > > > be difficult.
> > > >
> > >
> > > Yes, I meant the latter: individual code segments involving an invocation
> > > of ssh_command_output_contains() being treated as separate tests.
> > >
> >
> > Hello, Cleber,
> >
> > I am willing to revamp the Python file structure if needed.
> >
> > The only thing I feel a little uncomfortable about is having to reboot the
> > virtual machine for each ssh_command_output_contains() check.
> >
>
> Hi Aleksandar,
>
> The short answer is that Avocado provides no way to report "subtest"
> statuses (as a formal concept), nor does the current
> "avocado_qemu" infrastructure allow for management of VMs across
> tests.  The latter is an Avocado-VT feature and, to be honest, it
> brings a good deal of problems of its own, which we decided to avoid
> here.
>
> About the lack of subtests, we (the autotest project, then the Avocado
> project) found that this concept, to be applied well, needs more than
> we could deal with initially.  For instance, Avocado has the concept
> of "pre_test" and "post_test" hooks; should those be applied to
> subtests as well?  Also, there's support for capturing system
> information (a feature called sysinfo) before and after the tests...
> again, should it be applied to subtests?  Avocado also keeps a
> well-defined results directory, and we'd have to deal with something
> like that for subtests.  With regards to the variants feature, should
> they also be applied to subtests?  The list of questions goes on and
> on.
>
> The fact that one test should not (as much as possible) be able to
> influence another test also played into our initial decision
> to avoid subtests.
>
> IMO, the best way to handle this is to keep a separate logger
> with the test progress:
>
> https://avocado-framework.readthedocs.io/en/71.0/WritingTests.html#advanced-logging-capabilities
>
> With a change similar to:
>
> ---
> diff --git a/tests/acceptance/linux_ssh_mips_malta.py b/tests/acceptance/linux_ssh_mips_malta.py
> index 509ff929cf..0683586c35 100644
> --- a/tests/acceptance/linux_ssh_mips_malta.py
> +++ b/tests/acceptance/linux_ssh_mips_malta.py
> @@ -17,6 +17,7 @@ from avocado_qemu import Test
>  from avocado.utils import process
>  from avocado.utils import archive
>
> +progress_log = logging.getLogger("progress")
>
>  class LinuxSSH(Test):
>
> @@ -149,6 +150,7 @@ class LinuxSSH(Test):
>          stdout, _ = self.ssh_command(cmd)
>          for line in stdout:
>              if exp in line:
> +                progress_log.info('Check successful for "%s"', cmd)
>                  break
>          else:
>              self.fail('"%s" output does not contain "%s"' % (cmd, exp))
> ---
>
> You could run tests with:
>
>   $ ./tests/venv/bin/avocado --show=console,progress run --store-logging-stream progress -- tests/acceptance/linux_ssh_mips_malta.py
>
> And at the same time:
>
>   $ tail -f ~/avocado/job-results/latest/progress.INFO
>   17:20:44 INFO | Check successful for "uname -a"
>   17:20:44 INFO | Check successful for "cat /proc/cpuinfo"
>   ...
>
> I hope this helps somehow.
>
> Best regards,
> - Cleber.
>

Thanks, Cleber, for your detailed response. I'll use whatever is available,
along the lines you highlighted. I will most likely gradually modify this
test until I find the sweet spot where I am satisfied with the test behavior
and reporting, and everything fits well into the Avocado framework.

Thanks again, both to you and Eduardo,
Aleksandar

> > Grateful in advance,
> > Aleksandar
> >
> > > > Cleber, is there something already available in the Avocado API
> > > > that would help us report more fine-grained results inside each
> > > > test case?
> > > >
> > >
> > > Thanks, that would be a better way of expressing my question.
> > >
> > > >
> > > > >
> > > > > Thanks in advance,
> > > > > Aleksandar
> > > > >
> > > > > > Aleksandar Markovic (2):
> > > > > >   tests/acceptance: Refactor and improve reporting in
> > > > > >     linux_ssh_mips_malta.py
> > > > > >   tests/acceptance: Add new test cases in linux_ssh_mips_malta.py
> > > > > >
> > > > > >  tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
> > > > > >  1 file changed, 66 insertions(+), 15 deletions(-)
> > > > > >
> > > > > > --
> > > > > > 2.7.4
> > > > > >
> > > > > >
> > > >
> > > > --
> > > > Eduardo
Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Posted by Eduardo Habkost 4 years, 7 months ago
On Fri, Aug 02, 2019 at 05:35:56PM +0200, Aleksandar Markovic wrote:
> From: Aleksandar Markovic <amarkovic@wavecomp.com>
> 
> This little series improves linux_ssh_mips_malta.py, both in terms of
> code organization and in the number of tests executed.

Thanks!  I'm queueing it on python-next.  The changes suggested
by others can be implemented as follow-up patches.


> 
> Aleksandar Markovic (2):
>   tests/acceptance: Refactor and improve reporting in
>     linux_ssh_mips_malta.py
>   tests/acceptance: Add new test cases in linux_ssh_mips_malta.py
> 
>  tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
>  1 file changed, 66 insertions(+), 15 deletions(-)
> 
> -- 
> 2.7.4
> 
> 

-- 
Eduardo