This turns run_linters() into a bit of a hybrid test; returning non-zero
on failed execution while also printing diffable information. This is
done for the benefit of the avocado simple test runner, which will soon
be attempting to execute this test from a different environment.
(Note: universal_newlines is added to the pylint invocation for type
consistency with the mypy run -- it's not strictly necessary, but it
avoids some typing errors caused by our re-use of the 'p' variable.)
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
tests/qemu-iotests/297 | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/tests/qemu-iotests/297 b/tests/qemu-iotests/297
index e05c99972e..f9ddfb53a0 100755
--- a/tests/qemu-iotests/297
+++ b/tests/qemu-iotests/297
@@ -68,19 +68,22 @@ def run_linters(
files: List[str],
directory: str = '.',
env: Optional[Mapping[str, str]] = None,
-) -> None:
+) -> int:
+ ret = 0
print('=== pylint ===')
sys.stdout.flush()
# Todo notes are fine, but fixme's or xxx's should probably just be
# fixed (in tests, at least)
- subprocess.run(
+ p = subprocess.run(
('python3', '-m', 'pylint', '--score=n', '--notes=FIXME,XXX', *files),
cwd=directory,
env=env,
check=False,
+ universal_newlines=True,
)
+ ret += p.returncode
print('=== mypy ===')
sys.stdout.flush()
@@ -113,9 +116,12 @@ def run_linters(
universal_newlines=True
)
+ ret += p.returncode
if p.returncode != 0:
print(p.stdout)
+ return ret
+
def main() -> None:
for linter in ('pylint-3', 'mypy'):
--
2.31.1
On 16.09.21 06:09, John Snow wrote:
> This turns run_linters() into a bit of a hybrid test; returning non-zero
> on failed execution while also printing diffable information. This is
> done for the benefit of the avocado simple test runner, which will soon
> be attempting to execute this test from a different environment.
>
> (Note: universal_newlines is added to the pylint invocation for type
> consistency with the mypy run -- it's not strictly necessary, but it
> avoids some typing errors caused by our re-use of the 'p' variable.)
>
> Signed-off-by: John Snow <jsnow@redhat.com>
> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
> tests/qemu-iotests/297 | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
I don’t think I like this very much. Returning an integer error code
seems archaic.
(You can perhaps already see that this is going to be one of these
reviews of mine where I won’t say anything is really wrong, but where I
will just lament subjectively missing beauty.)
As you say, run_linters() to me seems very iotests-specific still: It
emits a specific output that is compared against a reference output.
Fine for 297, but not fine for a function provided by a linters.py, I’d say.
I’d prefer run_linters() to return something like a Mapping[str,
Optional[str]], that would map a tool to its output in case of error,
i.e. ideally:
`{'pylint': None, 'mypy': None}`
297 could format it something like
```
for tool, output in run_linters().items():
print(f'=== {tool} ===')
if output is not None:
print(output)
```
Or something.
To check for error, you could put a Python script in python/tests that
checks `any(output is not None for output in run_linters().values())` or
something (and on error print the output).
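To make this concrete, here is a rough sketch of the shape I have in mind
(the mypy invocation is abbreviated and the names are hypothetical, so
read it as a sketch, not the exact code):
```python
# Sketch only: run each tool, collect its output keyed by tool name;
# None means "this tool passed".
import subprocess
from typing import Dict, List, Mapping, Optional

def run_linters(files: List[str]) -> Mapping[str, Optional[str]]:
    invocations = {
        'pylint': ('python3', '-m', 'pylint',
                   '--score=n', '--notes=FIXME,XXX', *files),
        'mypy': ('python3', '-m', 'mypy', *files),  # real flags elided
    }
    results: Dict[str, Optional[str]] = {}
    for tool, args in invocations.items():
        p = subprocess.run(args, stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT,
                           universal_newlines=True, check=False)
        results[tool] = None if p.returncode == 0 else p.stdout
    return results
```
That way nothing is printed from inside the helper, and each caller
decides what to do with the output.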
Pulling out run_linters() into an external file and having it print
something to stdout just seems too iotests-centric to me. I suppose as
long as the return code is right (which this patch is for) it should
work for Avocado’s simple tests, too (which I don’t like very much
either, by the way, because they too seem archaic to me), but, well. It
almost seems like the Avocado test should just run ./check then.
Come to think of it, to be absolutely blasphemous, why not. I could say
all of this seems like quite some work that could be done by a
python/tests script that does this:
```
#!/bin/sh
set -e
cat >/tmp/qemu-parrot.sh <<EOF
#!/bin/sh
echo ': qcow2'
echo ': qcow2'
echo 'virtio-blk'
EOF
cd $QEMU_DIR/tests/qemu-iotests
QEMU_PROG="/tmp/qemu-parrot.sh" \
QEMU_IMG_PROG=$(which false) \
QEMU_IO_PROG=$(which false) \
QEMU_NBD_PROG=$(which false) \
QSD_PROG=$(which false) \
./check 297
```
And, no, I don’t want that! But the point of this series seems to just
be to rip the core of 297 out so it can run without ./check, because
./check requires some environment variables to be set. Doing that just
seems wrong to me.
Like, can’t we have a Python script in python/tests that imports
linters.py, invokes run_linters() and sensibly checks the output? Or,
you know, at the very least not have run_linters() print anything to
stdout and not have it return an integer code. linters.py:main() can do
that conversion.
Or, something completely different, perhaps my problem is that you put
linters.py as a fully standalone test into the iotests directory,
without it being an iotest. So, I think I could also agree on putting
linters.py into python/tests, and then having 297 execute that. Or you
know, we just drop 297 altogether, as you suggest in patch 13 – if
that’s what it takes, then so be it.
Hanna
PS: Also, summing up processes’ return codes makes me feel not good.
> diff --git a/tests/qemu-iotests/297 b/tests/qemu-iotests/297
> index e05c99972e..f9ddfb53a0 100755
> --- a/tests/qemu-iotests/297
> +++ b/tests/qemu-iotests/297
> @@ -68,19 +68,22 @@ def run_linters(
> files: List[str],
> directory: str = '.',
> env: Optional[Mapping[str, str]] = None,
> -) -> None:
> +) -> int:
> + ret = 0
>
> print('=== pylint ===')
> sys.stdout.flush()
>
> # Todo notes are fine, but fixme's or xxx's should probably just be
> # fixed (in tests, at least)
> - subprocess.run(
> + p = subprocess.run(
> ('python3', '-m', 'pylint', '--score=n', '--notes=FIXME,XXX', *files),
> cwd=directory,
> env=env,
> check=False,
> + universal_newlines=True,
> )
> + ret += p.returncode
>
> print('=== mypy ===')
> sys.stdout.flush()
> @@ -113,9 +116,12 @@ def run_linters(
> universal_newlines=True
> )
>
> + ret += p.returncode
> if p.returncode != 0:
> print(p.stdout)
>
> + return ret
> +
>
> def main() -> None:
> for linter in ('pylint-3', 'mypy'):
On Fri, Sep 17, 2021 at 7:00 AM Hanna Reitz <hreitz@redhat.com> wrote:
> On 16.09.21 06:09, John Snow wrote:
> > This turns run_linters() into a bit of a hybrid test; returning non-zero
> > on failed execution while also printing diffable information. This is
> > done for the benefit of the avocado simple test runner, which will soon
> > be attempting to execute this test from a different environment.
> >
> > (Note: universal_newlines is added to the pylint invocation for type
> > consistency with the mypy run -- it's not strictly necessary, but it
> > avoids some typing errors caused by our re-use of the 'p' variable.)
> >
> > Signed-off-by: John Snow <jsnow@redhat.com>
> > Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > ---
> > tests/qemu-iotests/297 | 10 ++++++++--
> > 1 file changed, 8 insertions(+), 2 deletions(-)
>
> I don’t think I like this very much. Returning an integer error code
> seems archaic.
>
> (You can perhaps already see that this is going to be one of these
> reviews of mine where I won’t say anything is really wrong, but where I
> will just lament subjectively missing beauty.)
>
>
Haha. It's fine, Vladimir didn't like the smell of this one either. I just
didn't want to prematurely optimize or overcomplicate.
> As you say, run_linters() to me seems very iotests-specific still: It
> emits a specific output that is compared against a reference output.
> Fine for 297, but not fine for a function provided by a linters.py, I’d
> say.
>
> I’d prefer run_linters() to return something like a Mapping[str,
> Optional[str]], that would map a tool to its output in case of error,
> i.e. ideally:
>
> `{'pylint': None, 'mypy': None}`
>
>
Splitting the test apart into two sub-tests is completely reasonable.
Python CI right now has individual tests for pylint, mypy, etc.
> 297 could format it something like
>
> ```
> for tool, output in run_linters().items():
> print(f'=== {tool} ===')
> if output is not None:
> print(output)
> ```
>
> Or something.
>
> To check for error, you could put a Python script in python/tests that
> checks `any(output is not None for output in run_linters().values())` or
> something (and on error print the output).
>
>
> Pulling out run_linters() into an external file and having it print
> something to stdout just seems too iotests-centric to me. I suppose as
> long as the return code is right (which this patch is for) it should
> work for Avocado’s simple tests, too (which I don’t like very much
> either, by the way, because they too seem archaic to me), but, well. It
> almost seems like the Avocado test should just run ./check then.
>
>
Yeh. Ideally, we'd just have a mypy.ini and a pylintrc that configures the
tests adequately, and then 297 (or whomever else) would just call the
linters which would in turn read the same configuration. This series is
somewhat of a stop-gap to measure the temperature of the room to see how
important it was to leave 297 inside of iotests. So, I did the conservative
thing that's faster to review even if it now looks *slightly* fishy.
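For illustration, roughly the kind of thing I mean (hypothetical contents,
not flags anyone has agreed on; both files shown in one snippet):
```ini
# pylintrc -- sketch of moving 297's hardcoded flags into configuration
[MISCELLANEOUS]
# equivalent of --notes=FIXME,XXX
notes=FIXME,XXX

[REPORTS]
# equivalent of --score=n
score=no

# mypy.ini -- sketch; the real invocation carries many more options
[mypy]
disallow_untyped_defs = True
warn_unused_ignores = True
```
With that in place, any caller -- 297, CI, or a developer's shell -- would
pick up the same configuration just by running the tools.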
As for things being archaic or not ... possibly, but why involve extra
complexity if it isn't warranted? A simple pass/fail works perfectly well.
(And the human can read the output to understand WHY it failed.) If you
want more rigorous analytics for some reason, we can discuss the use cases
and figure out how to represent that information, but that's definitely a
bit beyond scope here.
>
> Come to think of it, to be absolutely blasphemous, why not. I could say
> all of this seems like quite some work that could be done by a
> python/tests script that does this:
>
> ```
> #!/bin/sh
> set -e
>
> cat >/tmp/qemu-parrot.sh <<EOF
> #!/bin/sh
> echo ': qcow2'
> echo ': qcow2'
> echo 'virtio-blk'
> EOF
>
> cd $QEMU_DIR/tests/qemu-iotests
>
> QEMU_PROG="/tmp/qemu-parrot.sh" \
> QEMU_IMG_PROG=$(which false) \
> QEMU_IO_PROG=$(which false) \
> QEMU_NBD_PROG=$(which false) \
> QSD_PROG=$(which false) \
> ./check 297
> ```
>
> And, no, I don’t want that! But the point of this series seems to just
> be to rip the core of 297 out so it can run without ./check, because
> ./check requires some environment variables to be set. Doing that just
> seems wrong to me.
>
>
Right, the point of this series is effectively to split out the linting
configuration and separate it from the "test" which executes the linters
with that configuration. Simplest possible thing was to just take the
configuration as it exists in its current form -- hardcoded in a python
script -- and isolate it. To my delight, it worked quite well!
> Like, can’t we have a Python script in python/tests that imports
> linters.py, invokes run_linters() and sensibly checks the output? Or,
> you know, at the very least not have run_linters() print anything to
> stdout and not have it return an integer code. linters.py:main() can do
> that conversion.
>
>
Well, I certainly don't want to bother parsing output from python tools and
trying to make sure that it works sensibly cross-version and cross-distro.
The return code being 0/non-zero is vastly simpler. Let the human read the
output log on failure cases to get a sense of what exactly went wrong. Is
there some reason why parsing the output would be beneficial to anyone?
(The Python GitLab CI hooks don't even bother printing output to the
console unless it returns non-zero, and then it will just print whatever it
saw. Seems to work well in practice.)
>
> Or, something completely different, perhaps my problem is that you put
> linters.py as a fully standalone test into the iotests directory,
> without it being an iotest. So, I think I could also agree on putting
> linters.py into python/tests, and then having 297 execute that. Or you
> know, we just drop 297 altogether, as you suggest in patch 13 – if
> that’s what it takes, then so be it.
>
>
I can definitely drop 297 if you'd like, and work on making the linter
configuration more declarative. I erred on the side of less movement
instead of more so that disruption would be minimal. It might take me some
extra time to work out how to un-scriptify the test, though. I'd like to
get a quick temperature check from kwolf on this before I put the work in.
> Hanna
>
>
> PS: Also, summing up processes’ return codes makes me feel not good.
>
>
That's the part Vladimir didn't like. There was no real reason for it,
other than it was "easy". I can make it a binary 0/1 return instead, if
that'd grease the wheels.
(I'll sit on respinning the series for now until we've had some time to
discuss it. I would rather like a chance to involve kwolf as the other
major user of this subsystem.)
> > diff --git a/tests/qemu-iotests/297 b/tests/qemu-iotests/297
> > index e05c99972e..f9ddfb53a0 100755
> > --- a/tests/qemu-iotests/297
> > +++ b/tests/qemu-iotests/297
> > @@ -68,19 +68,22 @@ def run_linters(
> > files: List[str],
> > directory: str = '.',
> > env: Optional[Mapping[str, str]] = None,
> > -) -> None:
> > +) -> int:
> > + ret = 0
> >
> > print('=== pylint ===')
> > sys.stdout.flush()
> >
> > # Todo notes are fine, but fixme's or xxx's should probably just be
> > # fixed (in tests, at least)
> > - subprocess.run(
> > + p = subprocess.run(
> > ('python3', '-m', 'pylint', '--score=n', '--notes=FIXME,XXX', *files),
> > cwd=directory,
> > env=env,
> > check=False,
> > + universal_newlines=True,
> > )
> > + ret += p.returncode
> >
> > print('=== mypy ===')
> > sys.stdout.flush()
> > @@ -113,9 +116,12 @@ def run_linters(
> > universal_newlines=True
> > )
> >
> > + ret += p.returncode
> > if p.returncode != 0:
> > print(p.stdout)
> >
> > + return ret
> > +
> >
> > def main() -> None:
> > for linter in ('pylint-3', 'mypy'):
>
>
On 22.09.21 22:18, John Snow wrote:
>
>
> On Fri, Sep 17, 2021 at 7:00 AM Hanna Reitz <hreitz@redhat.com> wrote:
[...]
>
> As you say, run_linters() to me seems very iotests-specific still: It
> emits a specific output that is compared against a reference output.
> Fine for 297, but not fine for a function provided by a
> linters.py, I’d say.
>
> I’d prefer run_linters() to return something like a Mapping[str,
> Optional[str]], that would map a tool to its output in case of error,
> i.e. ideally:
>
> `{'pylint': None, 'mypy': None}`
>
>
> Splitting the test apart into two sub-tests is completely reasonable.
> Python CI right now has individual tests for pylint, mypy, etc.
>
> 297 could format it something like
>
> ```
> for tool, output in run_linters().items():
> print(f'=== {tool} ===')
> if output is not None:
> print(output)
> ```
>
> Or something.
>
> To check for error, you could put a Python script in python/tests
> that
> checks `any(output is not None for output in
> run_linters().values())` or
> something (and on error print the output).
>
>
> Pulling out run_linters() into an external file and having it print
> something to stdout just seems too iotests-centric to me. I
> suppose as
> long as the return code is right (which this patch is for) it should
> work for Avocado’s simple tests, too (which I don’t like very much
> either, by the way, because they too seem archaic to me), but,
> well. It
> almost seems like the Avocado test should just run ./check then.
>
>
> Yeh. Ideally, we'd just have a mypy.ini and a pylintrc that configures
> the tests adequately, and then 297 (or whomever else) would just call
> the linters which would in turn read the same configuration. This
> series is somewhat of a stop-gap to measure the temperature of the
> room to see how important it was to leave 297 inside of iotests. So, I
> did the conservative thing that's faster to review even if it now
> looks *slightly* fishy.
>
> As for things being archaic or not ... possibly, but why involve extra
> complexity if it isn't warranted?
My opinion is that I find an interface of “prints something to stdout
and returns an integer status code” to be non-intuitive and thus rather
complex actually. That’s why I’d prefer different complexity, namely
one which has a more intuitive interface.
I’m also not sure where the extra complexity would be for a
`run_linters() -> Mapping[str, Optional[str]]`, because 297 just needs the
loop suggested above over `run_linters().items()`, and as for the
Avocado test...
> A simple pass/fail works perfectly well.
I don’t find `any(error_msg for error_msg in run_linters().values())`
much more complex than pass/fail.
(Note: Above, I called it `output`. Probably should have called it
`error_msg` like I did now to clarify that it’s `None` in case of
success and a string otherwise.)
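As a sketch, the whole checking script in python/tests could then be about
this small (file and import names hypothetical; argument handling elided):
```python
# Hypothetical python/tests entry point: fail if any tool produced
# error output, and print that output for the human reading the log.
import sys

from linters import run_linters  # wherever linters.py ends up living

def main() -> int:
    failed = False
    for tool, error_msg in run_linters().items():
        if error_msg is not None:
            failed = True
            print(f'=== {tool} ===')
            print(error_msg)
    return 1 if failed else 0

if __name__ == '__main__':
    sys.exit(main())
```
And its exit status would still satisfy the Avocado simple runner's
pass/fail contract.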
> (And the human can read the output to understand WHY it failed.) If
> you want more rigorous analytics for some reason, we can discuss the
> use cases and figure out how to represent that information, but that's
> definitely a bit beyond scope here.
[...]
> Like, can’t we have a Python script in python/tests that imports
> linters.py, invokes run_linters() and sensibly checks the output? Or,
> you know, at the very least not have run_linters() print anything to
> stdout and not have it return an integer code. linters.py:main()
> can do
> that conversion.
>
>
> Well, I certainly don't want to bother parsing output from python
> tools and trying to make sure that it works sensibly cross-version and
> cross-distro. The return code being 0/non-zero is vastly simpler. Let
> the human read the output log on failure cases to get a sense of what
> exactly went wrong. Is there some reason why parsing the output would
> be beneficial to anyone?
Perhaps there was a misunderstanding here, because there’s no need to
parse the output in my suggestion. `run_linters() -> Mapping[str,
Optional[str]]` would map a tool name to its potential error output; if
the tool exited successfully (exit code 0), then that `Optional[str]`
error output would be `None`. It would only be something if there was
an error.
> (The Python GitLab CI hooks don't even bother printing output to the
> console unless it returns non-zero, and then it will just print
> whatever it saw. Seems to work well in practice.)
>
>
> Or, something completely different, perhaps my problem is that you
> put
> linters.py as a fully standalone test into the iotests directory,
> without it being an iotest. So, I think I could also agree on
> putting
> linters.py into python/tests, and then having 297 execute that.
> Or you
> know, we just drop 297 altogether, as you suggest in patch 13 – if
> that’s what it takes, then so be it.
>
>
> I can definitely drop 297 if you'd like, and work on making the linter
> configuration more declarative. I erred on the side of less movement
> instead of more so that disruption would be minimal. It might take me
> some extra time to work out how to un-scriptify the test, though. I'd
> like to get a quick temperature check from kwolf on this before I put
> the work in.
So since we seem to want to keep 297, would it be possible to have 297
run a linters.py that’s in python/tests?
> Hanna
>
>
> PS: Also, summing up processes’ return codes makes me feel not good.
>
>
> That's the part Vladimir didn't like. There was no real reason for it,
> other than it was "easy".
I very much don’t find it easy, because it’s semantically wrong and thus
comparatively hard to understand.
> I can make it a binary 0/1 return instead, if that'd grease the wheels.
Well, while I consider it necessary, it doesn’t really make the patch
more palatable to me.
Hanna
On Mon, Oct 4, 2021 at 3:45 AM Hanna Reitz <hreitz@redhat.com> wrote:
> On 22.09.21 22:18, John Snow wrote:
> >
> >
> > On Fri, Sep 17, 2021 at 7:00 AM Hanna Reitz <hreitz@redhat.com> wrote:
>
> [...]
>
> >
> > As you say, run_linters() to me seems very iotests-specific still: It
> > emits a specific output that is compared against a reference output.
> > Fine for 297, but not fine for a function provided by a
> > linters.py, I’d say.
> >
>
> > I’d prefer run_linters() to return something like a Mapping[str,
> > Optional[str]], that would map a tool to its output in case of error,
> > i.e. ideally:
> >
> > `{'pylint': None, 'mypy': None}`
> >
>
>
> > Splitting the test apart into two sub-tests is completely reasonable.
> > Python CI right now has individual tests for pylint, mypy, etc.
> >
> > 297 could format it something like
> >
> > ```
> > for tool, output in run_linters().items():
> > print(f'=== {tool} ===')
> > if output is not None:
> > print(output)
> > ```
> >
> > Or something.
> >
> > To check for error, you could put a Python script in python/tests
> > that
> > checks `any(output is not None for output in
> > run_linters().values())` or
> > something (and on error print the output).
> >
> >
> > Pulling out run_linters() into an external file and having it print
> > something to stdout just seems too iotests-centric to me. I
> > suppose as
> > long as the return code is right (which this patch is for) it should
> > work for Avocado’s simple tests, too (which I don’t like very much
> > either, by the way, because they too seem archaic to me), but,
> > well. It
> > almost seems like the Avocado test should just run ./check then.
> >
> >
> > Yeh. Ideally, we'd just have a mypy.ini and a pylintrc that configures
> > the tests adequately, and then 297 (or whomever else) would just call
> > the linters which would in turn read the same configuration. This
> > series is somewhat of a stop-gap to measure the temperature of the
> > room to see how important it was to leave 297 inside of iotests. So, I
> > did the conservative thing that's faster to review even if it now
> > looks *slightly* fishy.
> >
> > As for things being archaic or not ... possibly, but why involve extra
> > complexity if it isn't warranted?
>
> My opinion is that I find an interface of “prints something to stdout
> and returns an integer status code” to be non-intuitive and thus rather
> complex actually. That’s why I’d prefer different complexity, namely
> one which has a more intuitive interface.
>
>
I'm not sure I follow, though, because ultimately what we're trying to do
is run terminal commands as part of a broader test suite. Returning status
codes and printing output is what they do. We can't escape that paradigm,
so is it really necessary to abstract away from it?
> I’m also not sure where the extra complexity would be for a
> `run_linters() -> Mapping[str, Optional[str]]`, because 297 just needs the
> loop suggested above over `run_linters().items()`, and as for the
> Avocado test...
>
> > A simple pass/fail works perfectly well.
>
> I don’t find `any(error_msg for error_msg in run_linters().values())`
> much more complex than pass/fail.
>
> (Note: Above, I called it `output`. Probably should have called it
> `error_msg` like I did now to clarify that it’s `None` in case of
> success and a string otherwise.)
>
> > (And the human can read the output to understand WHY it failed.) If
> > you want more rigorous analytics for some reason, we can discuss the
> > use cases and figure out how to represent that information, but that's
> > definitely a bit beyond scope here.
>
> [...]
>
> > Like, can’t we have a Python script in python/tests that imports
> > linters.py, invokes run_linters() and sensibly checks the output? Or,
> > you know, at the very least not have run_linters() print anything to
> > stdout and not have it return an integer code. linters.py:main()
> > can do
> > that conversion.
> >
> >
> > Well, I certainly don't want to bother parsing output from python
> > tools and trying to make sure that it works sensibly cross-version and
> > cross-distro. The return code being 0/non-zero is vastly simpler. Let
> > the human read the output log on failure cases to get a sense of what
> > exactly went wrong. Is there some reason why parsing the output would
> > be beneficial to anyone?
>
> Perhaps there was a misunderstanding here, because there’s no need to
> parse the output in my suggestion. `run_linters() -> Mapping[str,
> Optional[str]]` would map a tool name to its potential error output; if
> the tool exited successfully (exit code 0), then that `Optional[str]`
> error output would be `None`. It would only be something if there was
> an error.
>
>
I misunderstood based on "checks the output." I might still be approaching
this from the standpoint of "I don't see a reason to capture the output" --
beyond letting iotests use it for the diff phase at the end, but I don't
think I need to encapsulate it in a return value anywhere for that to
happen -- I can just let it print to sys.[stdout|stderr] and let the diff
handle the rest, right?
Is there specific value in replicating that 'diff' feature ourselves? We
already don't do that, so is it really necessary for me to begin doing it?
> > (The Python GitLab CI hooks don't even bother printing output to the
> > console unless it returns non-zero, and then it will just print
> > whatever it saw. Seems to work well in practice.)
> >
> >
> > Or, something completely different, perhaps my problem is that you
> > put
> > linters.py as a fully standalone test into the iotests directory,
> > without it being an iotest. So, I think I could also agree on
> > putting
> > linters.py into python/tests, and then having 297 execute that.
> > Or you
> > know, we just drop 297 altogether, as you suggest in patch 13 – if
> > that’s what it takes, then so be it.
> >
> >
> > I can definitely drop 297 if you'd like, and work on making the linter
> > configuration more declarative. I erred on the side of less movement
> > instead of more so that disruption would be minimal. It might take me
> > some extra time to work out how to un-scriptify the test, though. I'd
> > like to get a quick temperature check from kwolf on this before I put
> > the work in.
>
> So since we seem to want to keep 297, would it be possible to have 297
> run a linters.py that’s in python/tests?
>
>
Maybe ... I felt like that'd be a bad idea, though, because it puts
an iotest-related thing quite far away from the iotests directory. I didn't
want anyone to have to hunt for this stuff. I try to explain my case for
this a bit better in the commit messages for v2.
I'm sympathetic to the dislike of having something "test-like, but isn't an
iotest" in the folder, though, and tried to address that in v2, but I'm not
confident it'll be to your satisfaction.
> > Hanna
> >
> >
> > PS: Also, summing up processes’ return codes makes me feel not good.
> >
> >
> > That's the part Vladimir didn't like. There was no real reason for it,
> > other than it was "easy".
>
> I very much don’t find it easy, because it’s semantically wrong and thus
> comparatively hard to understand.
>
> > I can make it a binary 0/1 return instead, if that'd grease the wheels.
>
> Well, while I consider it necessary, it doesn’t really make the patch
> more palatable to me.
>
>
OK, I am going to send a V2 that may-or-may-not precisely address your core
critique, but I think it's quite a bit tidier and goes quite a bit further
than what I did here in V1. I think I am still misunderstanding a core
complaint here, but I tried to address the things I thought I grokked:
Separate mypy and pylint tests, no funky return code manipulation, no
iotest prints inside of linters.py, etc. If it's still untenable for you,
I'll just have to go from there.
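Roughly the direction v2 heads in, shape-wise (hypothetical sketch, not
the literal patches; paths and names are made up):
```python
# Approximate shape of the v2 split: one test per tool, sharing one
# helper. check=True turns a failing linter into an exception, so there
# is no return-code arithmetic anywhere.
import subprocess
import sys

def run_linter(*args: str) -> None:
    subprocess.run(args, check=True, universal_newlines=True)

def test_pylint() -> None:
    run_linter('python3', '-m', 'pylint',
               '--score=n', '--notes=FIXME,XXX', 'qemu/')

def test_mypy() -> None:
    run_linter('python3', '-m', 'mypy', 'qemu/')

if __name__ == '__main__':
    {'pylint': test_pylint, 'mypy': test_mypy}[sys.argv[1]]()
```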
--js