When a developer's environment is already inside a podman container, it
is not possible to use 'podman' again to create containers. Doing so
usually fails with weird errors such as:

  Error: fatal error, invalid internal status, unable to create a new pause process: cannot re-exec process to join the existing user namespace. Try running "podman system migrate" and if that doesn't work reboot to recover
Podman does, however, offer the ability to talk to a podman service
running outside the container, which the QEMU container tooling can
leverage. This remote mode is selected by invoking "podman --remote",
or equivalently by using the separate "podman-remote" binary:
https://github.com/containers/podman/blob/main/docs/tutorials/remote_client.md
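
As a purely illustrative aside (not part of this change), the two
spellings are interchangeable when driven from Python the same way
docker.py drives its engine binary; both send the request to the
service outside the container instead of forking podman in the
current namespaces:

  import subprocess

  # Both commands list containers via the remote podman service;
  # neither tries to set up container namespaces locally.
  subprocess.call(["podman", "--remote", "ps"])
  subprocess.call(["podman-remote", "ps"])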
The current 'podman version' check is insufficient to detect the
inability to launch containers, so it is replaced with the stronger
'podman info' check.
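
For illustration, the resulting selection logic behaves roughly like
the standalone sketch below. The candidate commands and the 'info'
probe are the ones used in docker.py; the working_engine() helper name
and the final print are just scaffolding for trying it by hand:

  import subprocess

  # Candidate engine commands in preference order; the remote variants
  # talk to a podman service running outside the container.
  CANDIDATES = [["podman"],
                ["podman-remote"],
                ["podman", "--remote"]]

  def working_engine():
      for cmd in CANDIDATES:
          try:
              # For podman, 'version' succeeds even when containers
              # cannot be created, so probe with 'info', a stronger
              # check that better correlates with the ability to
              # launch containers.
              if subprocess.call(cmd + ["info"],
                                 stdout=subprocess.DEVNULL,
                                 stderr=subprocess.DEVNULL) == 0:
                  return cmd
          except OSError:
              pass  # binary not installed, try the next candidate
      return None

  print(working_engine())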
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
tests/docker/docker.py | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tests/docker/docker.py b/tests/docker/docker.py
index ff68c7bf6f..9e18b984f4 100755
--- a/tests/docker/docker.py
+++ b/tests/docker/docker.py
@@ -76,14 +76,16 @@ def _guess_engine_command():
     commands = []
     if USE_ENGINE in [EngineEnum.AUTO, EngineEnum.PODMAN]:
-        commands += [["podman"]]
+        commands += [["podman"], ["podman-remote"], ["podman", "--remote"]]
     if USE_ENGINE in [EngineEnum.AUTO, EngineEnum.DOCKER]:
         commands += [["docker"], ["sudo", "-n", "docker"]]
     for cmd in commands:
         try:
-            # docker version will return the client details in stdout
-            # but still report a status of 1 if it can't contact the daemon
-            if subprocess.call(cmd + ["version"],
+            # 'version' is not sufficient to prove a working binary
+            # for podman. 'info' is a stronger check that is more
+            # likely to correlate with ability to create containers,
+            # and required to detect the need for podman remote
+            if subprocess.call(cmd + ["info"],
                                stdout=DEVNULL, stderr=DEVNULL) == 0:
                 return cmd
         except OSError:
--
2.52.0