On Wed, Aug 07, 2024 at 12:11:26AM +0200, Stefano Brivio wrote:
> On Mon, 5 Aug 2024 22:36:45 +1000
> David Gibson wrote:
>
> > Add a new test script to run the equivalent of the tests in build/all
> > using exeter and Avocado.  This new version of the tests is more robust
> > than the original, since it makes a temporary copy of the source tree so
> > will not be affected by concurrent manual builds.
>
> I think this is much more readable than the previous Python attempt.

That's encouraging.

> On the other hand, I guess it's not an ideal candidate for a fair
> comparison because this is exactly the kind of stuff where shell
> scripting shines: it's a simple test that needs a few basic shell
> commands.

Right.

> On that subject, the shell test is about half the lines of code (just
> skipping headers, it's 48 lines instead of 90... and yes, this version

Even ignoring the fact that this case is particularly suited to shell,
I don't think that's really an accurate comparison, though getting to
a fair one is pretty hard.  The existing test isn't 48 lines of shell,
but 48 lines of the "passt test DSL"; there are several hundred
additional lines of shell to interpret that.  Now, obviously, we don't
need all of that for just this test.  Likewise, the new Python test
needs at least exeter - that's only a couple of hundred lines - but
also Avocado (huge, but only a small amount of it is really relevant
here).

> now uses a copy of the source code, but that would be two lines).

I feel like it would be a bit more than two lines, to copy exactly
what you want, and to clean up after yourself.

> In terms of time overhead, dropping delays to make the display capture
> nice (a feature that we would anyway lose with exeter plus Avocado, if
> I understood correctly):

Yes.  Unlike you, I'm really not convinced of the value of the display
capture versus log files, at least in the majority of cases.  I
certainly don't think it's worth slowing down the test runs in the
normal case.

> $ time (make clean; make passt; make clean; make pasta; make clean; make qrap; make clean; make; d=$(mktemp -d); prefix=$d make install; prefix=$d make uninstall; )
> [...]
> real	0m17.449s
> user	0m15.616s
> sys	0m2.136s

On my system:

[...]
real	0m20.325s
user	0m15.595s
sys	0m5.287s

> compared to:
>
> $ time ./run
> [...]
> real	0m18.217s
> user	0m0.010s
> sys	0m0.001s
>
> ...which I would call essentially no overhead. I didn't try out this
> version yet, I suspect it would be somewhere in between.

Well..

$ time PYTHONPATH=test/exeter/py3 test/venv/bin/avocado run test/build/build.json
[...]
RESULTS    : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 10.85 s

real	0m11.000s
user	0m23.439s
sys	0m7.315s

Because parallel.  It looks like the Avocado start-up time is
reasonably substantial too, so that should look better with a larger
set of tests.
> > Signed-off-by: David Gibson
> > ---
> >  test/Makefile         |  19 +++++---
> >  test/build/.gitignore |   1 +
> >  test/build/build.py   | 105 ++++++++++++++++++++++++++++++++++++++++++
> >  test/run_avocado      |   2 +-
> >  4 files changed, 120 insertions(+), 7 deletions(-)
> >  create mode 100644 test/build/build.py
> >
> > diff --git a/test/Makefile b/test/Makefile
> > index dae25312..d24fce14 100644
> > --- a/test/Makefile
> > +++ b/test/Makefile
> > @@ -64,15 +64,19 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
> >  ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
> >
> >  EXETER_SH = build/static_checkers.sh
> > -EXETER_JOBS = $(EXETER_SH:%.sh=%.json)
> > +EXETER_PY = build/build.py
> > +EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
> >
> >  AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
> >
> > -SYSTEM_PYTHON = python3
> > +PYTHON = python3
> >  VENV = venv
> > -PYTHON = $(VENV)/bin/python3
> >  PIP = $(VENV)/bin/pip3
> > -RUN_AVOCADO = cd .. && test/$(PYTHON) test/run_avocado
> > +PYPATH = exeter/py3
> > +SPACE = $(subst ,, )
> > +PYPATH_TEST = $(subst $(SPACE),:,$(PYPATH))
> > +PYPATH_BASE = $(subst $(SPACE),:,$(PYPATH:%=test/%))
> > +RUN_AVOCADO = cd .. && PYTHONPATH=$(PYPATH_BASE) test/$(VENV)/bin/python3 test/run_avocado
>
> At least intuitively, I would have no clue what this all does. But it
> doesn't matter so much, I could try to find out the day that something
> doesn't work.

Yeah, this makefile stuff is very mucky.  I'm certainly hoping it can
be improved.

> >  CFLAGS = -Wall -Werror -Wextra -pedantic -std=c99
> >
> > @@ -131,13 +135,16 @@ big.bin:
> >  	dd if=/dev/urandom bs=1M count=10 of=$@
> >
> >  .PHONY: venv
> > -venv:
> > -	$(SYSTEM_PYTHON) -m venv $(VENV)
> > +venv: pull-exeter
> > +	$(PYTHON) -m venv $(VENV)
> >  	$(PIP) install avocado-framework
> >
> >  %.json: %.sh pull-exeter
> >  	cd ..; sh test/$< --avocado > test/$@
> >
> > +%.json: %.py pull-exeter
> > +	cd ..; PYTHONPATH=$(PYPATH_BASE) $(PYTHON) test/$< --avocado > test/$@
> > +
>
> Same here.

It looks messy because of the (interim, I hope) path & cwd wrangling
stuff, but the basis is very simple.  We run the exeter program:

	$(PYTHON) test/$<

with the '--avocado' flag

	--avocado

and send the output to a json file

	> $@

Later..

> >  .PHONY: avocado
> >  avocado: venv $(AVOCADO_JOBS)
> >  	$(RUN_AVOCADO) $(AVOCADO_JOBS)

..we feed that json file to avocado to actually run the tests.
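Spelled out from the top of the tree with the make variables expanded,
the two steps come out roughly as (the second actually goes via the
test/run_avocado wrapper, but it amounts to much the same thing):

	$ PYTHONPATH=test/exeter/py3 python3 test/build/build.py --avocado > test/build/build.json
	$ PYTHONPATH=test/exeter/py3 test/venv/bin/avocado run test/build/build.json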
> > diff --git a/test/build/.gitignore b/test/build/.gitignore
> > index a6c57f5f..4ef40dd0 100644
> > --- a/test/build/.gitignore
> > +++ b/test/build/.gitignore
> > @@ -1 +1,2 @@
> >  *.json
> > +build.exeter
> > diff --git a/test/build/build.py b/test/build/build.py
> > new file mode 100644
> > index 00000000..79668672
> > --- /dev/null
> > +++ b/test/build/build.py
> > @@ -0,0 +1,105 @@
> > +#! /usr/bin/env python3
> > +#
> > +# SPDX-License-Identifier: GPL-2.0-or-later
> > +#
> > +# PASST - Plug A Simple Socket Transport
> > +#  for qemu/UNIX domain socket mode
> > +#
> > +# PASTA - Pack A Subtle Tap Abstraction
> > +#  for network namespace/tap device mode
> > +#
> > +# test/build/build.sh - Test build and install targets
> > +#
> > +# Copyright Red Hat
> > +# Author: David Gibson
> > +
> > +import contextlib
> > +import os.path
> > +import shutil
> > +import subprocess
> > +import tempfile
> > +
> > +import exeter
> > +
> > +
> > +def host_run(*cmd, **kwargs):
> > +    return subprocess.run(cmd, check=True, encoding='UTF-8', **kwargs)
> > +
> > +
> > +def host_out(*cmd, **kwargs):
> > +    return host_run(*cmd, capture_output=True, **kwargs).stdout
>
> A vague idea only, so far, but I guess it's fine to have some amount of
> boilerplate.

Right.  These are loosely equivalent to the implementation of the
"host" and "hout" directives in the existing DSL.

> > +@contextlib.contextmanager
> > +def clone_source_tree():
> > +    with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) as tmpdir:
> > +        # Make a temporary copy of the sources
> > +        srcfiles = host_out('git', 'ls-files').splitlines()
> > +        for src in srcfiles:
> > +            dst = os.path.join(tmpdir, src)
> > +            os.makedirs(os.path.dirname(dst), exist_ok=True)
> > +            shutil.copy(src, dst)
> > +        os.chdir(tmpdir)
> > +        yield tmpdir
>
> This all makes sense.
>
> Of course it would be more readable in shell script (including the trap
> to remove the temporary directory on failure/interrupt), but I think
> it's as clear as it can get in any other language.
>
> > +
> > +
> > +def build_target(target, outputs):
> > +    with clone_source_tree():
> > +        for o in outputs:
> > +            assert not os.path.exists(o)
> > +        host_run('make', f'{target}', 'CFLAGS="-Werror"')
>
> Compared to:
>
>	host	CFLAGS="-Werror" make
>
> I would say it's not great, but again, it makes sense, and it's as good
> as it gets, I suppose.

I don't think that's a fair comparison.  The Python equivalent to the
DSL line is just:

	host_run('make', f'{target}', 'CFLAGS="-Werror"')

The loop before it verifies that the targets didn't exist before the
make - i.e. that we won't spuriously pass because of a stale build.
The shell version didn't do that.  The

	with clone_source_tree():

is essentially equivalent to saying which (elsewhere defined) setup we
want.  Invoking that explicitly in each test is more verbose, but
makes it much easier to see what setup each test needs, and much
easier to have lots of different tests with lots of different setups.

> > +        for o in outputs:
> > +            assert os.path.exists(o)
> > +        host_run('make', 'clean')
> > +        for o in outputs:
> > +            assert not os.path.exists(o)
>
> Same here,
>
>	check	[ -f passt ]
>	check	[ -h pasta ]
>	check	[ -f qrap ]
>
> > +
> > +
> > +@exeter.test
> > +def test_make_passt():
> > +    build_target('passt', ['passt'])
> > +
> > +
> > +@exeter.test
> > +def test_make_pasta():
> > +    build_target('pasta', ['pasta'])
> > +
> > +
> > +@exeter.test
> > +def test_make_qrap():
> > +    build_target('qrap', ['qrap'])
> > +
> > +
> > +@exeter.test
> > +def test_make_all():
> > +    build_target('all', ['passt', 'pasta', 'qrap'])
>
> These all make sense and look relatively readable (while not as...
> writable as shell commands "everybody" is familiar with).

So, unlike the shell version, I'm using a parameterized helper rather
than copy-pasting each case.  That's a readability / brevity trade-off
independent of the shell vs. Python difference.
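As an aside, the explicit setup also means a test with different
requirements stays self-describing.  A sketch (hypothetical, not part
of this patch, and assuming the existing cppcheck target in the top
level Makefile):

	@exeter.test
	def test_make_cppcheck():
	    # Hypothetical example: run the static checker in its own
	    # scratch copy of the tree, like the build tests do
	    with clone_source_tree():
	        host_run('make', 'cppcheck')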
> > +
> > +
> > +@exeter.test
> > +def test_make_install_uninstall():
> > +    with clone_source_tree():
> > +        with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) \
> > +                as prefix:
> > +            bindir = os.path.join(prefix, 'bin')
> > +            mandir = os.path.join(prefix, 'share', 'man')
> > +            exes = ['passt', 'pasta', 'qrap']
> > +
> > +            # Install
> > +            host_run('make', 'install', 'CFLAGS="-Werror"', f'prefix={prefix}')
> > +
> > +            for t in exes:
> > +                assert os.path.isfile(os.path.join(bindir, t))
> > +            host_run('man', '-M', f'{mandir}', '-W', 'passt')
> > +
> > +            # Uninstall
> > +            host_run('make', 'uninstall', f'prefix={prefix}')
> > +
> > +            for t in exes:
> > +                assert not os.path.exists(os.path.join(bindir, t))
> > +            cmd = ['man', '-M', f'{mandir}', '-W', 'passt']
>
> Same, up to here: it's much more readable and obvious to write in shell
> script, but I don't find it impossible to grasp in Python, either.
>
> > +            exeter.assert_raises(subprocess.CalledProcessError,
> > +                                 host_run, *cmd)
>
> This, I have no idea why. Why is it only in this loop? How does it
> affect the control flow?

So, this is essentially

	check	! man -M ...

Now that we've uninstalled, we're re-running (host_run) the same man
command (*cmd) as we used before, and checking that it fails (raises
the CalledProcessError exception).

Come to think of it, I can definitely write this more simply.  I'll
improve it in the next spin.

[snip]

> Patch 22/22 will take me a bit longer (I'm just looking at these two
> for the moment, as you suggested).

Sure.

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson