* [PATCH v2 01/22] nstool: Fix some trivial typos
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 02/22] nstool: Propagate SIGTERM to processes executed in the namespace David Gibson
` (21 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/nstool.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/test/nstool.c b/test/nstool.c
index 1bdf44e8..a6aca981 100644
--- a/test/nstool.c
+++ b/test/nstool.c
@@ -359,7 +359,7 @@ static void wait_for_child(pid_t pid)
if (rc != pid)
die("waitpid() on %d returned %d", pid, rc);
if (WIFSTOPPED(status)) {
- /* Stop the parent to patch */
+ /* Stop the parent to match */
kill(getpid(), SIGSTOP);
/* We must have resumed, resume the child */
kill(pid, SIGCONT);
@@ -508,7 +508,7 @@ static void cmd_exec(int argc, char *argv[])
/* CHILD */
if (argc > optind + 1) {
exe = argv[optind + 1];
- xargs = (const char * const*)(argv + optind + 1);
+ xargs = (const char *const *)(argv + optind + 1);
} else {
exe = getenv("SHELL");
if (!exe)
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 02/22] nstool: Propagate SIGTERM to processes executed in the namespace
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
2024-08-05 12:36 ` [PATCH v2 01/22] nstool: Fix some trivial typos David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-07 7:23 ` Stefano Brivio
2024-08-05 12:36 ` [PATCH v2 03/22] test: run static checkers with Avocado and JSON definitions David Gibson
` (20 subsequent siblings)
22 siblings, 1 reply; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Particularly in shell, it's sometimes natural to save the pid of a process
you run and later kill it. If doing this with nstool exec, however, the
kill will hit nstool itself, not the program it is running, which usually
isn't what you want or expect.
Address this by having nstool propagate SIGTERM to its child process. It
may make sense to propagate some other signals, but some introduce extra
complications, so we'll worry about them when and if it seems useful.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/nstool.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/test/nstool.c b/test/nstool.c
index a6aca981..fc357d8a 100644
--- a/test/nstool.c
+++ b/test/nstool.c
@@ -345,17 +345,39 @@ static int openns(const char *fmt, ...)
return fd;
}
+static pid_t sig_pid;
+static void sig_handler(int signum)
+{
+ int err;
+
+ err = kill(sig_pid, signum);
+ if (err)
+ die("Propagating %s: %s\n", strsignal(signum), strerror(errno));
+}
+
static void wait_for_child(pid_t pid)
{
- int status;
+ struct sigaction sa = {
+ .sa_handler = sig_handler,
+ .sa_flags = SA_RESETHAND,
+ };
+ int status, err;
+
+ sig_pid = pid;
+ err = sigaction(SIGTERM, &sa, NULL);
+ if (err)
+ die("sigaction(SIGTERM): %s\n", strerror(errno));
/* Match the child's exit status, if possible */
for (;;) {
pid_t rc;
rc = waitpid(pid, &status, WUNTRACED);
- if (rc < 0)
+ if (rc < 0) {
+ if (errno == EINTR)
+ continue;
die("waitpid() on %d: %s\n", pid, strerror(errno));
+ }
if (rc != pid)
die("waitpid() on %d returned %d", pid, rc);
if (WIFSTOPPED(status)) {
--
2.45.2
* Re: [PATCH v2 02/22] nstool: Propagate SIGTERM to processes executed in the namespace
2024-08-05 12:36 ` [PATCH v2 02/22] nstool: Propagate SIGTERM to processes executed in the namespace David Gibson
@ 2024-08-07 7:23 ` Stefano Brivio
0 siblings, 0 replies; 31+ messages in thread
From: Stefano Brivio @ 2024-08-07 7:23 UTC (permalink / raw)
To: David Gibson; +Cc: passt-dev, Cleber Rosa
On Mon, 5 Aug 2024 22:36:41 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:
> Particularly in shell it's sometimes natural to save the pid from a process
> run and later kill it. If doing this with nstool exec, however, it will
> kill nstool itself, not the program it is running, which isn't usually what
> you want or expect.
>
> Address this by having nstool propagate SIGTERM to its child process. It
> may make sense to propagate some other signals, but some introduce extra
> complications, so we'll worry about them when and if it seems useful.
>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
> test/nstool.c | 26 ++++++++++++++++++++++++--
> 1 file changed, 24 insertions(+), 2 deletions(-)
>
> diff --git a/test/nstool.c b/test/nstool.c
> index a6aca981..fc357d8a 100644
> --- a/test/nstool.c
> +++ b/test/nstool.c
> @@ -345,17 +345,39 @@ static int openns(const char *fmt, ...)
> return fd;
> }
>
> +static pid_t sig_pid;
> +static void sig_handler(int signum)
> +{
> + int err;
> +
> + err = kill(sig_pid, signum);
> + if (err)
> + die("Propagating %s: %s\n", strsignal(signum), strerror(errno));
As I've just been bitten by this (f30ed68c5273, "pasta: Save errno on
signal handler entry, restore on return when needed"), I was wondering
if we should save and restore errno here too, regardless of the fact
it's not needed at the moment (if kill() sets errno, we won't return
anyway).
On the other hand, this handler is currently simple enough that we
would notice if further changes made saving errno necessary.
--
Stefano
* [PATCH v2 03/22] test: run static checkers with Avocado and JSON definitions
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
2024-08-05 12:36 ` [PATCH v2 01/22] nstool: Fix some trivial typos David Gibson
2024-08-05 12:36 ` [PATCH v2 02/22] nstool: Propagate SIGTERM to processes executed in the namespace David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 04/22] test: Extend make targets to run Avocado tests David Gibson
` (19 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa
From: Cleber Rosa <crosa@redhat.com>
This adds a script and configuration to use the Avocado Testing
Framework to run, at this time, the static checkers.
The actual tests are defined using (JSON based) files, that are known
to Avocado as "recipes". The JSON files are parsed and "resolved"
into tests by Avocado's "runnables-recipe" resolver. The syntax
allows for any kind of test supported by Avocado to be defined there,
including a mix of different test types.
By the nature of Avocado's default configuration, those will run in
parallel in the host system. For more complex tests or different use
cases, Avocado could help in future versions by running those in
different environments such as containers.
The entry point ("test/run_avocado") is intended as an optional tool
at this point, coexisting with the current way of running tests. It
uses Avocado's Job API to create a job containing, for now, the static
checkers suite.
The installation of Avocado itself is left to users, given that the
details of how to install it (virtual environments and specific
tooling) are a long discussion of their own.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-ID: <20240629121342.3284907-1-crosa@redhat.com>
---
test/avocado/static_checkers.json | 16 ++++++++++
test/run_avocado | 49 +++++++++++++++++++++++++++++++
2 files changed, 65 insertions(+)
create mode 100644 test/avocado/static_checkers.json
create mode 100755 test/run_avocado
diff --git a/test/avocado/static_checkers.json b/test/avocado/static_checkers.json
new file mode 100644
index 00000000..5fae43ed
--- /dev/null
+++ b/test/avocado/static_checkers.json
@@ -0,0 +1,16 @@
+[
+ {
+ "kind": "exec-test",
+ "uri": "make",
+ "args": [
+ "clang-tidy"
+ ]
+ },
+ {
+ "kind": "exec-test",
+ "uri": "make",
+ "args": [
+ "cppcheck"
+ ]
+ }
+]
diff --git a/test/run_avocado b/test/run_avocado
new file mode 100755
index 00000000..37db17c3
--- /dev/null
+++ b/test/run_avocado
@@ -0,0 +1,49 @@
+#!/usr/bin/env python3
+
+import os
+import sys
+
+
+def check_avocado_version():
+ minimum_version = 106.0
+
+ def error_out():
+ print(
+ f"Avocado version {minimum_version} or later is required.\n"
+ f"You may install it with: \n"
+ f" python3 -m pip install avocado-framework",
+ file=sys.stderr,
+ )
+ sys.exit(1)
+
+ try:
+ from avocado import VERSION
+
+ if (float(VERSION)) < minimum_version:
+ error_out()
+ except ImportError:
+ error_out()
+
+
+check_avocado_version()
+from avocado.core.job import Job
+from avocado.core.suite import TestSuite
+
+
+def main():
+ repo_root_path = os.path.abspath(
+ os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
+ )
+ config = {
+ "resolver.references": [
+ os.path.join(repo_root_path, "test", "avocado", "static_checkers.json")
+ ],
+ "runner.identifier_format": "{args[0]}",
+ }
+ suite = TestSuite.from_config(config, name="static_checkers")
+ with Job(config, [suite]) as j:
+ return j.run()
+
+
+if __name__ == "__main__":
+ sys.exit(main())
--
2.45.2
* [PATCH v2 04/22] test: Extend make targets to run Avocado tests
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (2 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 03/22] test: run static checkers with Avocado and JSON definitions David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 05/22] test: Exeter based static tests David Gibson
` (18 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Add a new 'avocado' target to the test/ Makefile, which will install
Avocado into a Python venv and run the Avocado-based tests with it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/.gitignore | 1 +
test/Makefile | 16 ++++++++++++++++
test/run_avocado | 9 +++++----
3 files changed, 22 insertions(+), 4 deletions(-)
diff --git a/test/.gitignore b/test/.gitignore
index 6dd4790b..a79d5b6f 100644
--- a/test/.gitignore
+++ b/test/.gitignore
@@ -10,3 +10,4 @@ QEMU_EFI.fd
nstool
guest-key
guest-key.pub
+/venv/
diff --git a/test/Makefile b/test/Makefile
index 35a3b559..fda62984 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -63,6 +63,12 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
+SYSTEM_PYTHON = python3
+VENV = venv
+PYTHON = $(VENV)/bin/python3
+PIP = $(VENV)/bin/pip3
+RUN_AVOCADO = cd .. && test/$(PYTHON) test/run_avocado
+
CFLAGS = -Wall -Werror -Wextra -pedantic -std=c99
assets: $(ASSETS)
@@ -116,6 +122,15 @@ medium.bin:
big.bin:
dd if=/dev/urandom bs=1M count=10 of=$@
+.PHONY: venv
+venv:
+ $(SYSTEM_PYTHON) -m venv $(VENV)
+ $(PIP) install avocado-framework
+
+.PHONY: avocado
+avocado: venv
+ $(RUN_AVOCADO) avocado
+
check: assets
./run
@@ -127,6 +142,7 @@ clean:
rm -f $(LOCAL_ASSETS)
rm -rf test_logs
rm -f prepared-*.qcow2 prepared-*.img
+ rm -rf $(VENV)
realclean: clean
rm -rf $(DOWNLOAD_ASSETS)
diff --git a/test/run_avocado b/test/run_avocado
index 37db17c3..19a94a8f 100755
--- a/test/run_avocado
+++ b/test/run_avocado
@@ -32,12 +32,13 @@ from avocado.core.suite import TestSuite
def main():
repo_root_path = os.path.abspath(
- os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
+ os.path.dirname(os.path.dirname(__file__))
)
+
+ references = [os.path.join(repo_root_path, 'test', x) for x in sys.argv[1:]]
+
config = {
- "resolver.references": [
- os.path.join(repo_root_path, "test", "avocado", "static_checkers.json")
- ],
+ "resolver.references": references,
"runner.identifier_format": "{args[0]}",
}
suite = TestSuite.from_config(config, name="static_checkers")
--
2.45.2
* [PATCH v2 05/22] test: Exeter based static tests
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (3 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 04/22] test: Extend make targets to run Avocado tests David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 06/22] test: Add exeter+Avocado based build tests David Gibson
` (17 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Introduce some trivial testcases based on the exeter library. These run
the C static checkers, which duplicates the included Avocado JSON
file, but they are useful as an example. We extend the make avocado target to
generate Avocado job files from the exeter tests and include them in the
test run.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/.gitignore | 1 +
test/Makefile | 18 +++++++++++++++---
test/build/.gitignore | 1 +
test/build/static_checkers.sh | 30 ++++++++++++++++++++++++++++++
test/run_avocado | 2 +-
5 files changed, 48 insertions(+), 4 deletions(-)
create mode 100644 test/build/.gitignore
create mode 100644 test/build/static_checkers.sh
diff --git a/test/.gitignore b/test/.gitignore
index a79d5b6f..bded349b 100644
--- a/test/.gitignore
+++ b/test/.gitignore
@@ -11,3 +11,4 @@ nstool
guest-key
guest-key.pub
/venv/
+/exeter/
diff --git a/test/Makefile b/test/Makefile
index fda62984..dae25312 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -52,7 +52,7 @@ UBUNTU_NEW_IMGS = xenial-server-cloudimg-powerpc-disk1.img \
jammy-server-cloudimg-s390x.img
UBUNTU_IMGS = $(UBUNTU_OLD_IMGS) $(UBUNTU_NEW_IMGS)
-DOWNLOAD_ASSETS = mbuto podman \
+DOWNLOAD_ASSETS = exeter mbuto podman \
$(DEBIAN_IMGS) $(FEDORA_IMGS) $(OPENSUSE_IMGS) $(UBUNTU_IMGS)
TESTDATA_ASSETS = small.bin big.bin medium.bin
LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
@@ -63,6 +63,11 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
+EXETER_SH = build/static_checkers.sh
+EXETER_JOBS = $(EXETER_SH:%.sh=%.json)
+
+AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
+
SYSTEM_PYTHON = python3
VENV = venv
PYTHON = $(VENV)/bin/python3
@@ -77,6 +82,9 @@ assets: $(ASSETS)
pull-%: %
git -C $* pull
+exeter:
+ git clone https://gitlab.com/dgibson/exeter.git
+
mbuto:
git clone git://mbuto.sh/mbuto
@@ -127,9 +135,12 @@ venv:
$(SYSTEM_PYTHON) -m venv $(VENV)
$(PIP) install avocado-framework
+%.json: %.sh pull-exeter
+ cd ..; sh test/$< --avocado > test/$@
+
.PHONY: avocado
-avocado: venv
- $(RUN_AVOCADO) avocado
+avocado: venv $(AVOCADO_JOBS)
+ $(RUN_AVOCADO) $(AVOCADO_JOBS)
check: assets
./run
@@ -143,6 +154,7 @@ clean:
rm -rf test_logs
rm -f prepared-*.qcow2 prepared-*.img
rm -rf $(VENV)
+ rm -f $(EXETER_JOBS)
realclean: clean
rm -rf $(DOWNLOAD_ASSETS)
diff --git a/test/build/.gitignore b/test/build/.gitignore
new file mode 100644
index 00000000..a6c57f5f
--- /dev/null
+++ b/test/build/.gitignore
@@ -0,0 +1 @@
+*.json
diff --git a/test/build/static_checkers.sh b/test/build/static_checkers.sh
new file mode 100644
index 00000000..ec159ea2
--- /dev/null
+++ b/test/build/static_checkers.sh
@@ -0,0 +1,30 @@
+#! /bin/sh
+#
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# PASST - Plug A Simple Socket Transport
+# for qemu/UNIX domain socket mode
+#
+# PASTA - Pack A Subtle Tap Abstraction
+# for network namespace/tap device mode
+#
+# test/build/static_checkers.sh - Run static checkers
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+source $(dirname $0)/../exeter/sh/exeter.sh
+
+cppcheck () {
+ make cppcheck
+}
+exeter_register cppcheck
+
+clang_tidy () {
+ make clang-tidy
+}
+exeter_register clang_tidy
+
+exeter_main "$@"
+
+
diff --git a/test/run_avocado b/test/run_avocado
index 19a94a8f..d518b9ec 100755
--- a/test/run_avocado
+++ b/test/run_avocado
@@ -39,7 +39,7 @@ def main():
config = {
"resolver.references": references,
- "runner.identifier_format": "{args[0]}",
+ "runner.identifier_format": "{args}",
}
suite = TestSuite.from_config(config, name="static_checkers")
with Job(config, [suite]) as j:
--
2.45.2
* [PATCH v2 06/22] test: Add exeter+Avocado based build tests
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (4 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 05/22] test: Exeter based static tests David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-06 22:11 ` Stefano Brivio
2024-08-05 12:36 ` [PATCH v2 07/22] test: Add linters for Python code David Gibson
` (16 subsequent siblings)
22 siblings, 1 reply; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Add a new test script to run the equivalent of the tests in build/all
using exeter and Avocado. This new version of the tests is more robust
than the original, since it makes a temporary copy of the source tree,
so it will not be affected by concurrent manual builds.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 19 +++++---
test/build/.gitignore | 1 +
test/build/build.py | 105 ++++++++++++++++++++++++++++++++++++++++++
test/run_avocado | 2 +-
4 files changed, 120 insertions(+), 7 deletions(-)
create mode 100644 test/build/build.py
diff --git a/test/Makefile b/test/Makefile
index dae25312..d24fce14 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -64,15 +64,19 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
EXETER_SH = build/static_checkers.sh
-EXETER_JOBS = $(EXETER_SH:%.sh=%.json)
+EXETER_PY = build/build.py
+EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
-SYSTEM_PYTHON = python3
+PYTHON = python3
VENV = venv
-PYTHON = $(VENV)/bin/python3
PIP = $(VENV)/bin/pip3
-RUN_AVOCADO = cd .. && test/$(PYTHON) test/run_avocado
+PYPATH = exeter/py3
+SPACE = $(subst ,, )
+PYPATH_TEST = $(subst $(SPACE),:,$(PYPATH))
+PYPATH_BASE = $(subst $(SPACE),:,$(PYPATH:%=test/%))
+RUN_AVOCADO = cd .. && PYTHONPATH=$(PYPATH_BASE) test/$(VENV)/bin/python3 test/run_avocado
CFLAGS = -Wall -Werror -Wextra -pedantic -std=c99
@@ -131,13 +135,16 @@ big.bin:
dd if=/dev/urandom bs=1M count=10 of=$@
.PHONY: venv
-venv:
- $(SYSTEM_PYTHON) -m venv $(VENV)
+venv: pull-exeter
+ $(PYTHON) -m venv $(VENV)
$(PIP) install avocado-framework
%.json: %.sh pull-exeter
cd ..; sh test/$< --avocado > test/$@
+%.json: %.py pull-exeter
+ cd ..; PYTHONPATH=$(PYPATH_BASE) $(PYTHON) test/$< --avocado > test/$@
+
.PHONY: avocado
avocado: venv $(AVOCADO_JOBS)
$(RUN_AVOCADO) $(AVOCADO_JOBS)
diff --git a/test/build/.gitignore b/test/build/.gitignore
index a6c57f5f..4ef40dd0 100644
--- a/test/build/.gitignore
+++ b/test/build/.gitignore
@@ -1 +1,2 @@
*.json
+build.exeter
diff --git a/test/build/build.py b/test/build/build.py
new file mode 100644
index 00000000..79668672
--- /dev/null
+++ b/test/build/build.py
@@ -0,0 +1,105 @@
+#! /usr/bin/env python3
+#
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# PASST - Plug A Simple Socket Transport
+# for qemu/UNIX domain socket mode
+#
+# PASTA - Pack A Subtle Tap Abstraction
+# for network namespace/tap device mode
+#
+# test/build/build.sh - Test build and install targets
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+import contextlib
+import os.path
+import shutil
+import subprocess
+import tempfile
+
+import exeter
+
+
+def host_run(*cmd, **kwargs):
+ return subprocess.run(cmd, check=True, encoding='UTF-8', **kwargs)
+
+
+def host_out(*cmd, **kwargs):
+ return host_run(*cmd, capture_output=True, **kwargs).stdout
+
+
+@contextlib.contextmanager
+def clone_source_tree():
+ with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) as tmpdir:
+ # Make a temporary copy of the sources
+ srcfiles = host_out('git', 'ls-files').splitlines()
+ for src in srcfiles:
+ dst = os.path.join(tmpdir, src)
+ os.makedirs(os.path.dirname(dst), exist_ok=True)
+ shutil.copy(src, dst)
+ os.chdir(tmpdir)
+ yield tmpdir
+
+
+def build_target(target, outputs):
+ with clone_source_tree():
+ for o in outputs:
+ assert not os.path.exists(o)
+ host_run('make', f'{target}', 'CFLAGS="-Werror"')
+ for o in outputs:
+ assert os.path.exists(o)
+ host_run('make', 'clean')
+ for o in outputs:
+ assert not os.path.exists(o)
+
+
+@exeter.test
+def test_make_passt():
+ build_target('passt', ['passt'])
+
+
+@exeter.test
+def test_make_pasta():
+ build_target('pasta', ['pasta'])
+
+
+@exeter.test
+def test_make_qrap():
+ build_target('qrap', ['qrap'])
+
+
+@exeter.test
+def test_make_all():
+ build_target('all', ['passt', 'pasta', 'qrap'])
+
+
+@exeter.test
+def test_make_install_uninstall():
+ with clone_source_tree():
+ with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) \
+ as prefix:
+ bindir = os.path.join(prefix, 'bin')
+ mandir = os.path.join(prefix, 'share', 'man')
+ exes = ['passt', 'pasta', 'qrap']
+
+ # Install
+ host_run('make', 'install', 'CFLAGS="-Werror"', f'prefix={prefix}')
+
+ for t in exes:
+ assert os.path.isfile(os.path.join(bindir, t))
+ host_run('man', '-M', f'{mandir}', '-W', 'passt')
+
+ # Uninstall
+ host_run('make', 'uninstall', f'prefix={prefix}')
+
+ for t in exes:
+ assert not os.path.exists(os.path.join(bindir, t))
+ cmd = ['man', '-M', f'{mandir}', '-W', 'passt']
+ exeter.assert_raises(subprocess.CalledProcessError,
+ host_run, *cmd)
+
+
+if __name__ == '__main__':
+ exeter.main()
diff --git a/test/run_avocado b/test/run_avocado
index d518b9ec..26a226ce 100755
--- a/test/run_avocado
+++ b/test/run_avocado
@@ -41,7 +41,7 @@ def main():
"resolver.references": references,
"runner.identifier_format": "{args}",
}
- suite = TestSuite.from_config(config, name="static_checkers")
+ suite = TestSuite.from_config(config, name="all")
with Job(config, [suite]) as j:
return j.run()
--
2.45.2
* Re: [PATCH v2 06/22] test: Add exeter+Avocado based build tests
2024-08-05 12:36 ` [PATCH v2 06/22] test: Add exeter+Avocado based build tests David Gibson
@ 2024-08-06 22:11 ` Stefano Brivio
2024-08-07 10:51 ` David Gibson
0 siblings, 1 reply; 31+ messages in thread
From: Stefano Brivio @ 2024-08-06 22:11 UTC (permalink / raw)
To: David Gibson; +Cc: passt-dev, Cleber Rosa
On Mon, 5 Aug 2024 22:36:45 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:
> Add a new test script to run the equivalent of the tests in build/all
> using exeter and Avocado. This new version of the tests is more robust
> than the original, since it makes a temporary copy of the source tree so
> will not be affected by concurrent manual builds.
I think this is much more readable than the previous Python attempt.
On the other hand, I guess it's not an ideal candidate for a fair
comparison because this is exactly the kind of stuff where shell
scripting shines: it's a simple test that needs a few basic shell
commands.
On that subject, the shell test is about half the lines of code (just
skipping headers, it's 48 lines instead of 90... and yes, this version
now uses a copy of the source code, but that would be two lines).
In terms of time overhead, dropping delays to make the display capture
nice (a feature that we would anyway lose with exeter plus Avocado, if
I understood correctly):
$ time (make clean; make passt; make clean; make pasta; make clean; make qrap; make clean; make; d=$(mktemp -d); prefix=$d make install; prefix=$d make uninstall; )
[...]
real 0m17.449s
user 0m15.616s
sys 0m2.136s
compared to:
$ time ./run
[...]
real 0m18.217s
user 0m0.010s
sys 0m0.001s
...which I would call essentially no overhead. I didn't try out this
version yet, I suspect it would be somewhere in between.
>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
> test/Makefile | 19 +++++---
> test/build/.gitignore | 1 +
> test/build/build.py | 105 ++++++++++++++++++++++++++++++++++++++++++
> test/run_avocado | 2 +-
> 4 files changed, 120 insertions(+), 7 deletions(-)
> create mode 100644 test/build/build.py
>
> diff --git a/test/Makefile b/test/Makefile
> index dae25312..d24fce14 100644
> --- a/test/Makefile
> +++ b/test/Makefile
> @@ -64,15 +64,19 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
> ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
>
> EXETER_SH = build/static_checkers.sh
> -EXETER_JOBS = $(EXETER_SH:%.sh=%.json)
> +EXETER_PY = build/build.py
> +EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
>
> AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
>
> -SYSTEM_PYTHON = python3
> +PYTHON = python3
> VENV = venv
> -PYTHON = $(VENV)/bin/python3
> PIP = $(VENV)/bin/pip3
> -RUN_AVOCADO = cd .. && test/$(PYTHON) test/run_avocado
> +PYPATH = exeter/py3
> +SPACE = $(subst ,, )
> +PYPATH_TEST = $(subst $(SPACE),:,$(PYPATH))
> +PYPATH_BASE = $(subst $(SPACE),:,$(PYPATH:%=test/%))
> +RUN_AVOCADO = cd .. && PYTHONPATH=$(PYPATH_BASE) test/$(VENV)/bin/python3 test/run_avocado
At least intuitively, I would have no clue what this all does. But it
doesn't matter so much, I could try to find out the day that something
doesn't work.
>
> CFLAGS = -Wall -Werror -Wextra -pedantic -std=c99
>
> @@ -131,13 +135,16 @@ big.bin:
> dd if=/dev/urandom bs=1M count=10 of=$@
>
> .PHONY: venv
> -venv:
> - $(SYSTEM_PYTHON) -m venv $(VENV)
> +venv: pull-exeter
> + $(PYTHON) -m venv $(VENV)
> $(PIP) install avocado-framework
>
> %.json: %.sh pull-exeter
> cd ..; sh test/$< --avocado > test/$@
>
> +%.json: %.py pull-exeter
> + cd ..; PYTHONPATH=$(PYPATH_BASE) $(PYTHON) test/$< --avocado > test/$@
> +
Same here.
> .PHONY: avocado
> avocado: venv $(AVOCADO_JOBS)
> $(RUN_AVOCADO) $(AVOCADO_JOBS)
> diff --git a/test/build/.gitignore b/test/build/.gitignore
> index a6c57f5f..4ef40dd0 100644
> --- a/test/build/.gitignore
> +++ b/test/build/.gitignore
> @@ -1 +1,2 @@
> *.json
> +build.exeter
> diff --git a/test/build/build.py b/test/build/build.py
> new file mode 100644
> index 00000000..79668672
> --- /dev/null
> +++ b/test/build/build.py
> @@ -0,0 +1,105 @@
> +#! /usr/bin/env python3
> +#
> +# SPDX-License-Identifier: GPL-2.0-or-later
> +#
> +# PASST - Plug A Simple Socket Transport
> +# for qemu/UNIX domain socket mode
> +#
> +# PASTA - Pack A Subtle Tap Abstraction
> +# for network namespace/tap device mode
> +#
> +# test/build/build.sh - Test build and install targets
> +#
> +# Copyright Red Hat
> +# Author: David Gibson <david@gibson.dropbear.id.au>
> +
> +import contextlib
> +import os.path
> +import shutil
> +import subprocess
> +import tempfile
> +
> +import exeter
> +
> +
> +def host_run(*cmd, **kwargs):
> + return subprocess.run(cmd, check=True, encoding='UTF-8', **kwargs)
> +
> +
> +def host_out(*cmd, **kwargs):
> + return host_run(*cmd, capture_output=True, **kwargs).stdout
A vague idea only, so far, but I guess it's fine to have some amount of
boilerplate.
> +
> +
> +@contextlib.contextmanager
> +def clone_source_tree():
> + with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) as tmpdir:
> + # Make a temporary copy of the sources
> + srcfiles = host_out('git', 'ls-files').splitlines()
> + for src in srcfiles:
> + dst = os.path.join(tmpdir, src)
> + os.makedirs(os.path.dirname(dst), exist_ok=True)
> + shutil.copy(src, dst)
> + os.chdir(tmpdir)
> + yield tmpdir
This all makes sense.
Of course it would be more readable in shell script (including the trap
to remove the temporary directory on failure/interrupt), but I think
it's as clear as it can get in any other language.
> +
> +
> +def build_target(target, outputs):
> + with clone_source_tree():
> + for o in outputs:
> + assert not os.path.exists(o)
> + host_run('make', f'{target}', 'CFLAGS="-Werror"')
Compared to:
host CFLAGS="-Werror" make
I would say it's not great, but again, it makes sense, and it's as good
as it gets, I suppose.
> + for o in outputs:
> + assert os.path.exists(o)
> + host_run('make', 'clean')
> + for o in outputs:
> + assert not os.path.exists(o)
Same here,
check [ -f passt ]
check [ -h pasta ]
check [ -f qrap ]
> +
> +
> +@exeter.test
> +def test_make_passt():
> + build_target('passt', ['passt'])
> +
> +
> +@exeter.test
> +def test_make_pasta():
> + build_target('pasta', ['pasta'])
> +
> +
> +@exeter.test
> +def test_make_qrap():
> + build_target('qrap', ['qrap'])
> +
> +
> +@exeter.test
> +def test_make_all():
> + build_target('all', ['passt', 'pasta', 'qrap'])
These all make sense and look relatively readable (while not as...
writable as shell commands "everybody" is familiar with).
> +
> +@exeter.test
> +def test_make_install_uninstall():
> + with clone_source_tree():
> + with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) \
> + as prefix:
> + bindir = os.path.join(prefix, 'bin')
> + mandir = os.path.join(prefix, 'share', 'man')
> + exes = ['passt', 'pasta', 'qrap']
> +
> + # Install
> + host_run('make', 'install', 'CFLAGS="-Werror"', f'prefix={prefix}')
> +
> + for t in exes:
> + assert os.path.isfile(os.path.join(bindir, t))
> + host_run('man', '-M', f'{mandir}', '-W', 'passt')
> +
> + # Uninstall
> + host_run('make', 'uninstall', f'prefix={prefix}')
> +
> + for t in exes:
> + assert not os.path.exists(os.path.join(bindir, t))
> + cmd = ['man', '-M', f'{mandir}', '-W', 'passt']
Same, up to here: it's much more readable and obvious to write in shell
script, but I don't find it impossible to grasp in Python, either.
> + exeter.assert_raises(subprocess.CalledProcessError,
> + host_run, *cmd)
This, I have no idea why. Why is it only in this loop? How does it
affect the control flow?
> +
> +
> +if __name__ == '__main__':
> + exeter.main()
> diff --git a/test/run_avocado b/test/run_avocado
> index d518b9ec..26a226ce 100755
> --- a/test/run_avocado
> +++ b/test/run_avocado
> @@ -41,7 +41,7 @@ def main():
> "resolver.references": references,
> "runner.identifier_format": "{args}",
> }
> - suite = TestSuite.from_config(config, name="static_checkers")
> + suite = TestSuite.from_config(config, name="all")
> with Job(config, [suite]) as j:
> return j.run()
>
Patch 22/22 will take me a bit longer (I'm just looking at these two
for the moment, as you suggested).
--
Stefano
* Re: [PATCH v2 06/22] test: Add exeter+Avocado based build tests
2024-08-06 22:11 ` Stefano Brivio
@ 2024-08-07 10:51 ` David Gibson
2024-08-07 13:06 ` Stefano Brivio
0 siblings, 1 reply; 31+ messages in thread
From: David Gibson @ 2024-08-07 10:51 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-dev, Cleber Rosa
On Wed, Aug 07, 2024 at 12:11:26AM +0200, Stefano Brivio wrote:
> On Mon, 5 Aug 2024 22:36:45 +1000
> David Gibson <david@gibson.dropbear.id.au> wrote:
>
> > Add a new test script to run the equivalent of the tests in build/all
> > using exeter and Avocado. This new version of the tests is more robust
> > than the original, since it makes a temporary copy of the source tree so
> > will not be affected by concurrent manual builds.
>
> I think this is much more readable than the previous Python attempt.
That's encouraging.
> On the other hand, I guess it's not an ideal candidate for a fair
> comparison because this is exactly the kind of stuff where shell
> scripting shines: it's a simple test that needs a few basic shell
> commands.
Right.
> On that subject, the shell test is about half the lines of code (just
> skipping headers, it's 48 lines instead of 90... and yes, this version
Even ignoring the fact that this case is particularly suited to shell,
I don't think that's really an accurate comparison, but getting to one
is pretty hard.
The existing test isn't 48 lines of shell, but of "passt test DSL".
There are several hundred additional lines of shell to interpret that.
Now obviously we don't need all of that for just this test. Likewise
the new Python test needs at least exeter - that's only a couple of
hundred lines - but also Avocado (huge, but only a small amount is
really relevant here).
> now uses a copy of the source code, but that would be two lines).
I feel like it would be a bit more than two lines, to copy exactly
what you want, and to clean up after yourself.
> In terms of time overhead, dropping delays to make the display capture
> nice (a feature that we would anyway lose with exeter plus Avocado, if
> I understood correctly):
Yes. Unlike you, I'm really not convinced of the value of the display
capture versus log files, at least in the majority of cases. I
certainly don't think it's worth slowing down the test running in the
normal case.
>
> $ time (make clean; make passt; make clean; make pasta; make clean; make qrap; make clean; make; d=$(mktemp -d); prefix=$d make install; prefix=$d make uninstall; )
> [...]
> real 0m17.449s
> user 0m15.616s
> sys 0m2.136s
On my system:
[...]
real 0m20.325s
user 0m15.595s
sys 0m5.287s
> compared to:
>
> $ time ./run
> [...]
> real 0m18.217s
> user 0m0.010s
> sys 0m0.001s
>
> ...which I would call essentially no overhead. I didn't try out this
> version yet, I suspect it would be somewhere in between.
Well..
$ time PYTHONPATH=test/exeter/py3 test/venv/bin/avocado run test/build/build.json
[...]
RESULTS : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME : 10.85 s
real 0m11.000s
user 0m23.439s
sys 0m7.315s
Because parallel. It looks like the avocado start up time is
reasonably substantial too, so that should look better with a larger
set of tests.
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> > test/Makefile | 19 +++++---
> > test/build/.gitignore | 1 +
> > test/build/build.py | 105 ++++++++++++++++++++++++++++++++++++++++++
> > test/run_avocado | 2 +-
> > 4 files changed, 120 insertions(+), 7 deletions(-)
> > create mode 100644 test/build/build.py
> >
> > diff --git a/test/Makefile b/test/Makefile
> > index dae25312..d24fce14 100644
> > --- a/test/Makefile
> > +++ b/test/Makefile
> > @@ -64,15 +64,19 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
> > ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
> >
> > EXETER_SH = build/static_checkers.sh
> > -EXETER_JOBS = $(EXETER_SH:%.sh=%.json)
> > +EXETER_PY = build/build.py
> > +EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
> >
> > AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
> >
> > -SYSTEM_PYTHON = python3
> > +PYTHON = python3
> > VENV = venv
> > -PYTHON = $(VENV)/bin/python3
> > PIP = $(VENV)/bin/pip3
> > -RUN_AVOCADO = cd .. && test/$(PYTHON) test/run_avocado
> > +PYPATH = exeter/py3
> > +SPACE = $(subst ,, )
> > +PYPATH_TEST = $(subst $(SPACE),:,$(PYPATH))
> > +PYPATH_BASE = $(subst $(SPACE),:,$(PYPATH:%=test/%))
> > +RUN_AVOCADO = cd .. && PYTHONPATH=$(PYPATH_BASE) test/$(VENV)/bin/python3 test/run_avocado
>
> At least intuitively, I would have no clue what this all does. But it
> doesn't matter so much, I could try to find out the day that something
> doesn't work.
Yeah, this makefile stuff is very mucky, I'm certainly hoping this can
be improved.
> > CFLAGS = -Wall -Werror -Wextra -pedantic -std=c99
> >
> > @@ -131,13 +135,16 @@ big.bin:
> > dd if=/dev/urandom bs=1M count=10 of=$@
> >
> > .PHONY: venv
> > -venv:
> > - $(SYSTEM_PYTHON) -m venv $(VENV)
> > +venv: pull-exeter
> > + $(PYTHON) -m venv $(VENV)
> > $(PIP) install avocado-framework
> >
> > %.json: %.sh pull-exeter
> > cd ..; sh test/$< --avocado > test/$@
> >
> > +%.json: %.py pull-exeter
> > + cd ..; PYTHONPATH=$(PYPATH_BASE) $(PYTHON) test/$< --avocado > test/$@
> > +
>
> Same here.
It looks messy because of the (interim, I hope) path & cwd wrangling
stuff. But the basis is very simple. We run the exeter program:
$(PYTHON) test/$<
with the '--avocado' flag
--avocado
and send the output to a json file
> $@
Later..
> > .PHONY: avocado
> > avocado: venv $(AVOCADO_JOBS)
> > $(RUN_AVOCADO) $(AVOCADO_JOBS)
..we feed that json file to avocado to actually run the tests.
> > diff --git a/test/build/.gitignore b/test/build/.gitignore
> > index a6c57f5f..4ef40dd0 100644
> > --- a/test/build/.gitignore
> > +++ b/test/build/.gitignore
> > @@ -1 +1,2 @@
> > *.json
> > +build.exeter
> > diff --git a/test/build/build.py b/test/build/build.py
> > new file mode 100644
> > index 00000000..79668672
> > --- /dev/null
> > +++ b/test/build/build.py
> > @@ -0,0 +1,105 @@
> > +#! /usr/bin/env python3
> > +#
> > +# SPDX-License-Identifier: GPL-2.0-or-later
> > +#
> > +# PASST - Plug A Simple Socket Transport
> > +# for qemu/UNIX domain socket mode
> > +#
> > +# PASTA - Pack A Subtle Tap Abstraction
> > +# for network namespace/tap device mode
> > +#
> > +# test/build/build.sh - Test build and install targets
> > +#
> > +# Copyright Red Hat
> > +# Author: David Gibson <david@gibson.dropbear.id.au>
> > +
> > +import contextlib
> > +import os.path
> > +import shutil
> > +import subprocess
> > +import tempfile
> > +
> > +import exeter
> > +
> > +
> > +def host_run(*cmd, **kwargs):
> > + return subprocess.run(cmd, check=True, encoding='UTF-8', **kwargs)
> > +
> > +
> > +def host_out(*cmd, **kwargs):
> > + return host_run(*cmd, capture_output=True, **kwargs).stdout
>
> A vague idea only, so far, but I guess it's fine to have some amount of
> boilerplate.
Right. These are loosely equivalent to the implementation of the
"host" and "hout" directives in the existing DSL.
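For the archives, the correspondence can be sketched like this (a
rough sketch only, restating the two helpers from the patch above;
the echo command is just a stand-in so the example runs anywhere):

```python
import subprocess


def host_run(*cmd, **kwargs):
    # DSL "host": run a command on the host, fail the test if it fails
    return subprocess.run(cmd, check=True, encoding='UTF-8', **kwargs)


def host_out(*cmd, **kwargs):
    # DSL "hout": run a command on the host and capture its stdout
    return host_run(*cmd, capture_output=True, **kwargs).stdout


# DSL:  host make clean          ->  host_run('make', 'clean')
# DSL:  hout FILES git ls-files  ->  files = host_out('git', 'ls-files')
files = host_out('echo', 'passt', 'pasta').split()
```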
> > +@contextlib.contextmanager
> > +def clone_source_tree():
> > + with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) as tmpdir:
> > + # Make a temporary copy of the sources
> > + srcfiles = host_out('git', 'ls-files').splitlines()
> > + for src in srcfiles:
> > + dst = os.path.join(tmpdir, src)
> > + os.makedirs(os.path.dirname(dst), exist_ok=True)
> > + shutil.copy(src, dst)
> > + os.chdir(tmpdir)
> > + yield tmpdir
>
> This all makes sense.
>
> Of course it would be more readable in shell script (including the trap
> to remove the temporary directory on failure/interrupt), but I think
> it's as clear as it can get in any other language.
>
> > +
> > +
> > +def build_target(target, outputs):
> > + with clone_source_tree():
> > + for o in outputs:
> > + assert not os.path.exists(o)
> > + host_run('make', f'{target}', 'CFLAGS="-Werror"')
>
> Compared to:
>
> host CFLAGS="-Werror" make
>
> I would say it's not great, but again, it makes sense, and it's as good
> as it gets, I suppose.
I don't think that's a fair comparison. The Python equivalent to the DSL
line is just:
host_run('make', f'{target}', 'CFLAGS="-Werror"')
The loop before it is verifying that the targets didn't exist before
the make - i.e. we won't spuriously pass because of a stale build.
The shell version didn't do that. The
with clone_source_tree():
is essentially equivalent to saying which (elsewhere defined) setup we
want. Invoking that explicitly in each test is more verbose, but
makes it much easier to see what setup each test needs, and much
easier to have lots of different tests with lots of different setups.
>
> > + for o in outputs:
> > + assert os.path.exists(o)
> > + host_run('make', 'clean')
> > + for o in outputs:
> > + assert not os.path.exists(o)
>
> Same here,
>
> check [ -f passt ]
> check [ -h pasta ]
> check [ -f qrap ]
>
> > +
> > +
> > +@exeter.test
> > +def test_make_passt():
> > + build_target('passt', ['passt'])
> > +
> > +
> > +@exeter.test
> > +def test_make_pasta():
> > + build_target('pasta', ['pasta'])
> > +
> > +
> > +@exeter.test
> > +def test_make_qrap():
> > + build_target('qrap', ['qrap'])
> > +
> > +
> > +@exeter.test
> > +def test_make_all():
> > + build_target('all', ['passt', 'pasta', 'qrap'])
>
> These all make sense and look relatively readable (while not as...
> writable as shell commands "everybody" is familiar with).
So, unlike the shell version I'm using a parameterized helper rather
than copy-pasting each case. So, that's a readability / brevity
trade-off independent of the shell vs. python difference.
> > +
> > +@exeter.test
> > +def test_make_install_uninstall():
> > + with clone_source_tree():
> > + with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) \
> > + as prefix:
> > + bindir = os.path.join(prefix, 'bin')
> > + mandir = os.path.join(prefix, 'share', 'man')
> > + exes = ['passt', 'pasta', 'qrap']
> > +
> > + # Install
> > + host_run('make', 'install', 'CFLAGS="-Werror"', f'prefix={prefix}')
> > +
> > + for t in exes:
> > + assert os.path.isfile(os.path.join(bindir, t))
> > + host_run('man', '-M', f'{mandir}', '-W', 'passt')
> > +
> > + # Uninstall
> > + host_run('make', 'uninstall', f'prefix={prefix}')
> > +
> > + for t in exes:
> > + assert not os.path.exists(os.path.join(bindir, t))
> > + cmd = ['man', '-M', f'{mandir}', '-W', 'passt']
>
> Same, up to here: it's much more readable and obvious to write in shell
> script, but I don't find it impossible to grasp in Python, either.
>
> > + exeter.assert_raises(subprocess.CalledProcessError,
> > + host_run, *cmd)
>
> This, I have no idea why. Why is it only in this loop? How does it
> affect the control flow?
So, this is essentially
check ! man -M ...
Now that we've uninstalled, we're re-running (host_run), the same man
command (*cmd) as we used before, and checking that it fails (raises
the CalledProcessError exception).
Come to think of it, I can definitely write this more simply. I'll
improve it in the next spin.
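One simpler formulation, for the sake of argument, would be to skip
check=True and assert on the exit status directly (a sketch, not the
next spin; 'sh -c "exit 1"' stands in for the real man invocation):

```python
import subprocess

# Sketch of a plainer "check ! cmd": instead of expecting
# CalledProcessError, run with check=False and assert a non-zero exit.
cmd = ['sh', '-c', 'exit 1']  # placeholder for: man -M ... -W passt
res = subprocess.run(cmd, capture_output=True, encoding='UTF-8',
                     check=False)
assert res.returncode != 0, f"{cmd} unexpectedly succeeded"
```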
[snip]
> Patch 22/22 will take me a bit longer (I'm just looking at these two
> for the moment, as you suggested).
Sure.
--
David Gibson (he or they) | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you, not the other way
| around.
http://www.ozlabs.org/~dgibson
* Re: [PATCH v2 06/22] test: Add exeter+Avocado based build tests
2024-08-07 10:51 ` David Gibson
@ 2024-08-07 13:06 ` Stefano Brivio
2024-08-08 1:28 ` David Gibson
0 siblings, 1 reply; 31+ messages in thread
From: Stefano Brivio @ 2024-08-07 13:06 UTC (permalink / raw)
To: David Gibson; +Cc: passt-dev, Cleber Rosa
On Wed, 7 Aug 2024 20:51:08 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:
> On Wed, Aug 07, 2024 at 12:11:26AM +0200, Stefano Brivio wrote:
> > On Mon, 5 Aug 2024 22:36:45 +1000
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> >
> > > Add a new test script to run the equivalent of the tests in build/all
> > > using exeter and Avocado. This new version of the tests is more robust
> > > than the original, since it makes a temporary copy of the source tree so
> > > will not be affected by concurrent manual builds.
> >
> > I think this is much more readable than the previous Python attempt.
>
> That's encouraging.
>
> > On the other hand, I guess it's not an ideal candidate for a fair
> > comparison because this is exactly the kind of stuff where shell
> > scripting shines: it's a simple test that needs a few basic shell
> > commands.
>
> Right.
>
> > On that subject, the shell test is about half the lines of code (just
> > skipping headers, it's 48 lines instead of 90... and yes, this version
>
> Even ignoring the fact that this case is particularly suited to shell,
> I don't think that's really an accurate comparison, but getting to one
> is pretty hard.
>
> The existing test isn't 48 lines of shell, but of "passt test DSL".
> There are several hundred additional lines of shell to interpret that.
Yeah, but the 48 lines is all I have to look at, which is what matters,
I would argue. That's exactly why I wrote that interpreter.
Here, it's 90 lines of *test file*.
> Now obviously we don't need all of that for just this test. Likewise
> the new Python test needs at least exeter - that's only a couple of
> hundred lines - but also Avocado (huge, but only a small amount is
> really relevant here).
>
> > now uses a copy of the source code, but that would be two lines).
>
> I feel like it would be a bit more than two lines, to copy exactly
> > what you want, and to clean up after yourself.
host mkdir __STATEDIR__/sources
host cp --parents $(git ls-files) __STATEDIR__/sources
...which is actually an improvement on the original as __STATEDIR__ can
be handled in a centralised way, if one wants to keep that after the
single test case, after the whole test run, or not at all.
>
> > In terms of time overhead, dropping delays to make the display capture
> > nice (a feature that we would anyway lose with exeter plus Avocado, if
> > I understood correctly):
>
> Yes. Unlike you, I'm really not convinced of the value of the display
> capture versus log files, at least in the majority of cases.
Well, but I use that...
By the way, openQA nowadays takes periodic screenshots. That's certainly
not as useful, but I'm indeed not the only one who benefits from
_seeing_ tests as they run instead of correlating log files from
different contexts, especially when you have a client, a server, and
what you're testing in between.
> I certainly don't think it's worth slowing down the test running in the
> normal case.
It doesn't significantly slow things down, but it certainly makes it
more complicated to run test cases in parallel... which you can't do
anyway for throughput and latency tests (which take 22 out of the 37
minutes of a current CI run), unless you set up VMs with CPU pinning and
cgroups, or a server farm.
I mean, I see the value of running things in parallel in a general
case, but I don't think you should just ignore everything else.
> > $ time (make clean; make passt; make clean; make pasta; make clean; make qrap; make clean; make; d=$(mktemp -d); prefix=$d make install; prefix=$d make uninstall; )
> > [...]
> > real 0m17.449s
> > user 0m15.616s
> > sys 0m2.136s
>
> On my system:
> [...]
> real 0m20.325s
> user 0m15.595s
> sys 0m5.287s
>
> > compared to:
> >
> > $ time ./run
> > [...]
> > real 0m18.217s
> > user 0m0.010s
> > sys 0m0.001s
> >
> > ...which I would call essentially no overhead. I didn't try out this
> > version yet, I suspect it would be somewhere in between.
>
> Well..
>
> $ time PYTHONPATH=test/exeter/py3 test/venv/bin/avocado run test/build/build.json
> [...]
> RESULTS : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
> JOB TIME : 10.85 s
>
> real 0m11.000s
> user 0m23.439s
> sys 0m7.315s
>
> Because parallel. It looks like the avocado start up time is
> reasonably substantial too, so that should look better with a larger
> set of tests.
With the current set of tests, I doubt it's ever going to pay off. Even
if you run the non-perf tests in 10% of the time, it's going to be 24
minutes instead of 37.
I guess it will start making sense with larger matrices of network
environments, or with more test cases (but really a lot of them).
> > > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > > ---
> > > test/Makefile | 19 +++++---
> > > test/build/.gitignore | 1 +
> > > test/build/build.py | 105 ++++++++++++++++++++++++++++++++++++++++++
> > > test/run_avocado | 2 +-
> > > 4 files changed, 120 insertions(+), 7 deletions(-)
> > > create mode 100644 test/build/build.py
> > >
> > > diff --git a/test/Makefile b/test/Makefile
> > > index dae25312..d24fce14 100644
> > > --- a/test/Makefile
> > > +++ b/test/Makefile
> > > @@ -64,15 +64,19 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
> > > ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
> > >
> > > EXETER_SH = build/static_checkers.sh
> > > -EXETER_JOBS = $(EXETER_SH:%.sh=%.json)
> > > +EXETER_PY = build/build.py
> > > +EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
> > >
> > > AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
> > >
> > > -SYSTEM_PYTHON = python3
> > > +PYTHON = python3
> > > VENV = venv
> > > -PYTHON = $(VENV)/bin/python3
> > > PIP = $(VENV)/bin/pip3
> > > -RUN_AVOCADO = cd .. && test/$(PYTHON) test/run_avocado
> > > +PYPATH = exeter/py3
> > > +SPACE = $(subst ,, )
> > > +PYPATH_TEST = $(subst $(SPACE),:,$(PYPATH))
> > > +PYPATH_BASE = $(subst $(SPACE),:,$(PYPATH:%=test/%))
> > > +RUN_AVOCADO = cd .. && PYTHONPATH=$(PYPATH_BASE) test/$(VENV)/bin/python3 test/run_avocado
> >
> > At least intuitively, I would have no clue what this all does. But it
> > doesn't matter so much, I could try to find out the day that something
> > doesn't work.
>
> Yeah, this makefile stuff is very mucky, I'm certainly hoping this can
> be improved.
>
> > > CFLAGS = -Wall -Werror -Wextra -pedantic -std=c99
> > >
> > > @@ -131,13 +135,16 @@ big.bin:
> > > dd if=/dev/urandom bs=1M count=10 of=$@
> > >
> > > .PHONY: venv
> > > -venv:
> > > - $(SYSTEM_PYTHON) -m venv $(VENV)
> > > +venv: pull-exeter
> > > + $(PYTHON) -m venv $(VENV)
> > > $(PIP) install avocado-framework
> > >
> > > %.json: %.sh pull-exeter
> > > cd ..; sh test/$< --avocado > test/$@
> > >
> > > +%.json: %.py pull-exeter
> > > + cd ..; PYTHONPATH=$(PYPATH_BASE) $(PYTHON) test/$< --avocado > test/$@
> > > +
> >
> > Same here.
>
> It looks messy because of the (interim, I hope) path & cwd wrangling
> stuff. But the basis is very simple. We run the exeter program:
> $(PYTHON) test/$<
> with the '--avocado' flag
> --avocado
> and send the output to a json file
> > $@
>
> Later..
>
> > > .PHONY: avocado
> > > avocado: venv $(AVOCADO_JOBS)
> > > $(RUN_AVOCADO) $(AVOCADO_JOBS)
>
> ..we feed that json file to avocado to actually run the tests.
>
> > > diff --git a/test/build/.gitignore b/test/build/.gitignore
> > > index a6c57f5f..4ef40dd0 100644
> > > --- a/test/build/.gitignore
> > > +++ b/test/build/.gitignore
> > > @@ -1 +1,2 @@
> > > *.json
> > > +build.exeter
> > > diff --git a/test/build/build.py b/test/build/build.py
> > > new file mode 100644
> > > index 00000000..79668672
> > > --- /dev/null
> > > +++ b/test/build/build.py
> > > @@ -0,0 +1,105 @@
> > > +#! /usr/bin/env python3
> > > +#
> > > +# SPDX-License-Identifier: GPL-2.0-or-later
> > > +#
> > > +# PASST - Plug A Simple Socket Transport
> > > +# for qemu/UNIX domain socket mode
> > > +#
> > > +# PASTA - Pack A Subtle Tap Abstraction
> > > +# for network namespace/tap device mode
> > > +#
> > > +# test/build/build.sh - Test build and install targets
> > > +#
> > > +# Copyright Red Hat
> > > +# Author: David Gibson <david@gibson.dropbear.id.au>
> > > +
> > > +import contextlib
> > > +import os.path
> > > +import shutil
> > > +import subprocess
> > > +import tempfile
> > > +
> > > +import exeter
> > > +
> > > +
> > > +def host_run(*cmd, **kwargs):
> > > + return subprocess.run(cmd, check=True, encoding='UTF-8', **kwargs)
> > > +
> > > +
> > > +def host_out(*cmd, **kwargs):
> > > + return host_run(*cmd, capture_output=True, **kwargs).stdout
> >
> > A vague idea only, so far, but I guess it's fine to have some amount of
> > boilerplate.
>
> Right. These are loosely equivalent to the implementation of the
> "host" and "hout" directives in the existing DSL.
>
> > > +@contextlib.contextmanager
> > > +def clone_source_tree():
> > > + with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) as tmpdir:
> > > + # Make a temporary copy of the sources
> > > + srcfiles = host_out('git', 'ls-files').splitlines()
> > > + for src in srcfiles:
> > > + dst = os.path.join(tmpdir, src)
> > > + os.makedirs(os.path.dirname(dst), exist_ok=True)
> > > + shutil.copy(src, dst)
> > > + os.chdir(tmpdir)
> > > + yield tmpdir
> >
> > This all makes sense.
> >
> > Of course it would be more readable in shell script (including the trap
> > to remove the temporary directory on failure/interrupt), but I think
> > it's as clear as it can get in any other language.
> >
> > > +
> > > +
> > > +def build_target(target, outputs):
> > > + with clone_source_tree():
> > > + for o in outputs:
> > > + assert not os.path.exists(o)
> > > + host_run('make', f'{target}', 'CFLAGS="-Werror"')
> >
> > Compared to:
> >
> > host CFLAGS="-Werror" make
> >
> > I would say it's not great, but again, it makes sense, and it's as good
> > as it gets, I suppose.
>
> I don't think that's a fair comparison. The Python equivalent to the DSL
> line is just:
> host_run('make', f'{target}', 'CFLAGS="-Werror"')
Yes, that's exactly what I meant...
> The loop before it is verifying that the targets didn't exist before
> the make - i.e. we won't spuriously pass because of a stale build.
> The shell version didn't do that. The
> with clone_source_tree():
> is essentially equivalent to saying which (elsewhere defined) setup we
> want. Invoking that explicitly in each test is more verbose, but
> makes it much easier to see what setup each test needs, and much
> easier to have lots of different tests with lots of different setups.
No, absolutely, the rest is actually clear enough, I guess.
> > > + for o in outputs:
> > > + assert os.path.exists(o)
> > > + host_run('make', 'clean')
> > > + for o in outputs:
> > > + assert not os.path.exists(o)
> >
> > Same here,
> >
> > check [ -f passt ]
> > check [ -h pasta ]
> > check [ -f qrap ]
> >
> > > +
> > > +
> > > +@exeter.test
> > > +def test_make_passt():
> > > + build_target('passt', ['passt'])
> > > +
> > > +
> > > +@exeter.test
> > > +def test_make_pasta():
> > > + build_target('pasta', ['pasta'])
> > > +
> > > +
> > > +@exeter.test
> > > +def test_make_qrap():
> > > + build_target('qrap', ['qrap'])
> > > +
> > > +
> > > +@exeter.test
> > > +def test_make_all():
> > > + build_target('all', ['passt', 'pasta', 'qrap'])
> >
> > These all make sense and look relatively readable (while not as...
> > writable as shell commands "everybody" is familiar with).
>
> So, unlike the shell version I'm using a parameterized helper rather
> than copy-pasting each case. So, that's a readability / brevity
> trade-off independent of the shell vs. python difference.
>
> > > +
> > > +@exeter.test
> > > +def test_make_install_uninstall():
> > > + with clone_source_tree():
> > > + with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) \
> > > + as prefix:
> > > + bindir = os.path.join(prefix, 'bin')
> > > + mandir = os.path.join(prefix, 'share', 'man')
> > > + exes = ['passt', 'pasta', 'qrap']
> > > +
> > > + # Install
> > > + host_run('make', 'install', 'CFLAGS="-Werror"', f'prefix={prefix}')
> > > +
> > > + for t in exes:
> > > + assert os.path.isfile(os.path.join(bindir, t))
> > > + host_run('man', '-M', f'{mandir}', '-W', 'passt')
> > > +
> > > + # Uninstall
> > > + host_run('make', 'uninstall', f'prefix={prefix}')
> > > +
> > > + for t in exes:
> > > + assert not os.path.exists(os.path.join(bindir, t))
> > > + cmd = ['man', '-M', f'{mandir}', '-W', 'passt']
> >
> > Same, up to here: it's much more readable and obvious to write in shell
> > script, but I don't find it impossible to grasp in Python, either.
> >
> > > + exeter.assert_raises(subprocess.CalledProcessError,
> > > + host_run, *cmd)
> >
> > This, I have no idea why. Why is it only in this loop? How does it
> > affect the control flow?
>
> So, this is essentially
> check ! man -M ...
>
> Now that we've uninstalled, we're re-running (host_run), the same man
> command (*cmd) as we used before, and checking that it fails (raises
> the CalledProcessError exception).
>
> Come to think of it, I can definitely write this more simply. I'll
> improve it in the next spin.
>
> [snip]
> > Patch 22/22 will take me a bit longer (I'm just looking at these two
> > for the moment, as you suggested).
>
> Sure.
--
Stefano
* Re: [PATCH v2 06/22] test: Add exeter+Avocado based build tests
2024-08-07 13:06 ` Stefano Brivio
@ 2024-08-08 1:28 ` David Gibson
2024-08-08 22:55 ` Stefano Brivio
0 siblings, 1 reply; 31+ messages in thread
From: David Gibson @ 2024-08-08 1:28 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-dev, Cleber Rosa
On Wed, Aug 07, 2024 at 03:06:44PM +0200, Stefano Brivio wrote:
> On Wed, 7 Aug 2024 20:51:08 +1000
> David Gibson <david@gibson.dropbear.id.au> wrote:
>
> > On Wed, Aug 07, 2024 at 12:11:26AM +0200, Stefano Brivio wrote:
> > > On Mon, 5 Aug 2024 22:36:45 +1000
> > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > >
> > > > Add a new test script to run the equivalent of the tests in build/all
> > > > using exeter and Avocado. This new version of the tests is more robust
> > > > than the original, since it makes a temporary copy of the source tree so
> > > > will not be affected by concurrent manual builds.
> > >
> > > I think this is much more readable than the previous Python attempt.
> >
> > That's encouraging.
> >
> > > On the other hand, I guess it's not an ideal candidate for a fair
> > > comparison because this is exactly the kind of stuff where shell
> > > scripting shines: it's a simple test that needs a few basic shell
> > > commands.
> >
> > Right.
> >
> > > On that subject, the shell test is about half the lines of code (just
> > > skipping headers, it's 48 lines instead of 90... and yes, this version
> >
> > Even ignoring the fact that this case is particularly suited to shell,
> > I don't think that's really an accurate comparison, but getting to one
> > is pretty hard.
> >
> > The existing test isn't 48 lines of shell, but of "passt test DSL".
> > There are several hundred additional lines of shell to interpret that.
>
> Yeah, but the 48 lines is all I have to look at, which is what matters
> I would argue. That's exactly why I wrote that interpreter.
>
> Here, it's 90 lines of *test file*.
Fair point. Fwiw, it's down to 77 so far for my next draft.
> > Now obviously we don't need all of that for just this test. Likewise
> > the new Python test needs at least exeter - that's only a couple of
> > hundred lines - but also Avocado (huge, but only a small amount is
> > really relevant here).
> >
> > > now uses a copy of the source code, but that would be two lines).
> >
> > I feel like it would be a bit more than two lines, to copy exactly
> > what you want, and to clean up after yourself.
>
> host mkdir __STATEDIR__/sources
> host cp --parents $(git ls-files) __STATEDIR__/sources
>
> ...which is actually an improvement on the original as __STATEDIR__ can
> be handled in a centralised way, if one wants to keep that after the
> single test case, after the whole test run, or not at all.
Huh, I didn't know about cp --parents, which does exactly what's
needed. In the Python library there are, alas, several things that do
almost but not quite what's needed. I guess I could just invoke 'cp
--parents' myself.
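Something along these lines, perhaps (a sketch only: 'cp --parents' is
GNU coreutils specific, and the literal file list stands in for the
output of 'git ls-files' so the example is self-contained):

```python
import os
import subprocess
import tempfile

# Sketch of the clone step delegating to 'cp --parents' instead of the
# per-file os.makedirs()/shutil.copy() loop.
with tempfile.TemporaryDirectory() as srcdir, \
        tempfile.TemporaryDirectory() as tmpdir:
    # Set up a stand-in source tree
    os.makedirs(os.path.join(srcdir, 'test'))
    for f in ('Makefile', os.path.join('test', 'build.py')):
        open(os.path.join(srcdir, f), 'w').close()

    srcfiles = ['Makefile', 'test/build.py']   # normally: git ls-files
    # 'cp --parents' recreates the relative directory structure in tmpdir
    subprocess.run(['cp', '--parents'] + srcfiles + [tmpdir],
                   cwd=srcdir, check=True)
    copied = os.path.isfile(os.path.join(tmpdir, 'test', 'build.py'))
print('copied:', copied)
```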
> > > In terms of time overhead, dropping delays to make the display capture
> > > nice (a feature that we would anyway lose with exeter plus Avocado, if
> > > I understood correctly):
> >
> > Yes. Unlike you, I'm really not convinced of the value of the display
> > capture versus log files, at least in the majority of cases.
>
> Well, but I use that...
>
> By the way, openQA nowadays takes periodic screenshots. That's certainly
> not as useful, but I'm indeed not the only one who benefits from
> _seeing_ tests as they run instead of correlating log files from
> different contexts, especially when you have a client, a server, and
> what you're testing in between.
If you have to correlate multiple logs that's a pain, yes. My
approach here is, as much as possible, to have a single "log"
(actually stdout & stderr) from the top level test logic, so the
logical ordering is kind of built in.
> > I certainly don't think it's worth slowing down the test running in the
> > normal case.
>
> It doesn't significantly slow things down,
It does if you explicitly add delays to make the display capture nice
as mentioned above.
> but it certainly makes it
> more complicated to run test cases in parallel... which you can't do
> anyway for throughput and latency tests (which take 22 out of the 37
> minutes of a current CI run), unless you set up VMs with CPU pinning and
> cgroups, or a server farm.
So, yes, the perf tests take the majority of the runtime for CI, but
I'm less concerned about runtime for CI tests. I'm more interested in
runtime for a subset of functional tests you can run repeatedly while
developing. I routinely disable the perf and other slow tests, to get
a subset taking 5-7 minutes. That's ok, but I'm pretty confident I
can get better coverage in significantly less time using parallel
tests.
> I mean, I see the value of running things in parallel in a general
> case, but I don't think you should just ignore everything else.
>
> > > $ time (make clean; make passt; make clean; make pasta; make clean; make qrap; make clean; make; d=$(mktemp -d); prefix=$d make install; prefix=$d make uninstall; )
> > > [...]
> > > real 0m17.449s
> > > user 0m15.616s
> > > sys 0m2.136s
> >
> > On my system:
> > [...]
> > real 0m20.325s
> > user 0m15.595s
> > sys 0m5.287s
> >
> > > compared to:
> > >
> > > $ time ./run
> > > [...]
> > > real 0m18.217s
> > > user 0m0.010s
> > > sys 0m0.001s
> > >
> > > ...which I would call essentially no overhead. I didn't try out this
> > > version yet, I suspect it would be somewhere in between.
> >
> > Well..
> >
> > $ time PYTHONPATH=test/exeter/py3 test/venv/bin/avocado run test/build/build.json
> > [...]
> > RESULTS : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
> > JOB TIME : 10.85 s
> >
> > real 0m11.000s
> > user 0m23.439s
> > sys 0m7.315s
> >
> > Because parallel. It looks like the avocado start up time is
> > reasonably substantial too, so that should look better with a larger
> > set of tests.
>
> With the current set of tests, I doubt it's ever going to pay off. Even
> if you run the non-perf tests in 10% of the time, it's going to be 24
> minutes instead of 37.
Including the perf tests, probably not. Excluding them (which is
extremely useful when actively coding) I think it will.
> I guess it will start making sense with larger matrices of network
> environments, or with more test cases (but really a lot of them).
We could certainly do with a lot more tests, though I expect it will
take a while to get them.
--
David Gibson (he or they) | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you, not the other way
| around.
http://www.ozlabs.org/~dgibson
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v2 06/22] test: Add exeter+Avocado based build tests
2024-08-08 1:28 ` David Gibson
@ 2024-08-08 22:55 ` Stefano Brivio
0 siblings, 0 replies; 31+ messages in thread
From: Stefano Brivio @ 2024-08-08 22:55 UTC (permalink / raw)
To: David Gibson; +Cc: passt-dev, Cleber Rosa
On Thu, 8 Aug 2024 11:28:50 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:
> On Wed, Aug 07, 2024 at 03:06:44PM +0200, Stefano Brivio wrote:
> > On Wed, 7 Aug 2024 20:51:08 +1000
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> >
> > > On Wed, Aug 07, 2024 at 12:11:26AM +0200, Stefano Brivio wrote:
> > > > On Mon, 5 Aug 2024 22:36:45 +1000
> > > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > > >
> > > > > Add a new test script to run the equivalent of the tests in build/all
> > > > > using exeter and Avocado. This new version of the tests is more robust
> > > > > than the original, since it makes a temporary copy of the source tree so
> > > > > will not be affected by concurrent manual builds.
> > > >
> > > > I think this is much more readable than the previous Python attempt.
> > >
> > > That's encouraging.
> > >
> > > > On the other hand, I guess it's not an ideal candidate for a fair
> > > > comparison because this is exactly the kind of stuff where shell
> > > > scripting shines: it's a simple test that needs a few basic shell
> > > > commands.
> > >
> > > Right.
> > >
> > > > On that subject, the shell test is about half the lines of code (just
> > > > skipping headers, it's 48 lines instead of 90... and yes, this version
> > >
> > > Even ignoring the fact that this case is particularly suited to shell,
> > > I don't think that's really an accurate comparison, but getting to one
> > > is pretty hard.
> > >
> > > The existing test isn't 48 lines of shell, but of "passt test DSL".
> > > There are several hundred additional lines of shell to interpret that.
> >
> > Yeah, but the 48 lines is all I have to look at, which is what matters
> > I would argue. That's exactly why I wrote that interpreter.
> >
> > Here, it's 90 lines of *test file*.
>
> Fair point. Fwiw, it's down to 77 so far for my next draft.
>
> > > Now obviously we don't need all of that for just this test. Likewise
> > > the new Python test needs at least exeter - that's only a couple of
> > > hundred lines - but also Avocado (huge, but only a small amount is
> > > really relevant here).
> > >
> > > > now uses a copy of the source code, but that would be two lines).
> > >
> > > I feel like it would be a bit more than two lines, to copy exactly
> > > what you want, and to clean up after yourself.
> >
> > host mkdir __STATEDIR__/sources
> > host cp --parents $(git ls-files) __STATEDIR__/sources
> >
> > ...which is actually an improvement on the original as __STATEDIR__ can
> > be handled in a centralised way, if one wants to keep that after the
> > single test case, after the whole test run, or not at all.
>
> Huh, I didn't know about cp --parents, which does exactly what's
> needed. In the Python library there are, alas, several things that do
> almost but not quite what's needed. I guess I could just invoke 'cp
> --parents' myself.
>
> > > > In terms of time overhead, dropping delays to make the display capture
> > > > nice (a feature that we would anyway lose with exeter plus Avocado, if
> > > > I understood correctly):
> > >
> > > Yes. Unlike you, I'm really not convinced of the value of the display
> > > capture versus log files, at least in the majority of cases.
> >
> > Well, but I use that...
> >
> > By the way, openQA nowadays takes periodic screenshots. That's certainly
> > not as useful, but I'm indeed not the only one who benefits from
> > _seeing_ tests as they run instead of correlating log files from
> > different contexts, especially when you have a client, a server, and
> > what you're testing in between.
>
> If you have to correlate multiple logs that's a pain, yes. My
> approach here is, as much as possible, to have a single "log"
> (actually stdout & stderr) from the top level test logic, so the
> logical ordering is kind of built in.
That's not necessarily helpful: if I have a client and a server, things
are much clearer to me if I have two different logs, side-by-side. Even
more so if you have a guest, a host, and a namespace "in between".
I see the difference as I'm often digging through Podman CI's logs,
where there's a single log (including stdout and stderr), because bats
doesn't offer a context facility like the one we have right now.
It's sometimes really not easy to understand what's going on in Podman's
tests without copying and pasting into an editor and manually marking
things.
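[Editor's note: the correlation work described here usually ends up as a small ad-hoc tool; a hypothetical sketch — assuming each log line starts with an epoch timestamp, which real logs may not — that interleaves per-context logs the way one does by hand:]

```python
import heapq


def merge_logs(named_logs):
    """Interleave (context, lines) logs by leading epoch timestamp,
    prefixing each line with its context so the ordering stays visible."""
    def keyed(name, lines):
        for line in lines:
            ts = float(line.split(None, 1)[0])  # assumed leading timestamp
            yield ts, '[%s] %s' % (name, line)

    streams = [keyed(name, lines) for name, lines in named_logs]
    return [line for _, line in heapq.merge(*streams)]
```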
> > > I certainly don't think it's worth slowing down the test running in the
> > > normal case.
> >
> > It doesn't significantly slow things down,
>
> It does if you explicitly add delays to make the display capture nice
> as mentioned above.
Okay, I didn't realise the amount of eye-candy I left in even when
${FAST} is set (which probably only makes sense when run as './ci').
With the patch attached I get:
$ time ./run
[...]
real 17m17.686s
user 0m0.010s
sys 0m0.014s
I also cut the duration of throughput and latency tests down to one
second. After we fixed a lot of issues in passt, and some in QEMU and
the kernel, results are now surprisingly consistent.
Still, a significant part of it is Podman's tests (which I'm working on
speeding up, for the sake of Podman's own CI), and performance tests
anyway. Without those:
$ time ./run
[...]
real 5m57.612s
user 0m0.011s
sys 0m0.009s
> > but it certainly makes it
> > more complicated to run test cases in parallel... which you can't do
> > anyway for throughput and latency tests (which take 22 out of the 37
> > minutes of a current CI run), unless you set up VMs with CPU pinning and
> > cgroups, or a server farm.
>
> So, yes, the perf tests take the majority of the runtime for CI, but
> I'm less concerned about runtime for CI tests. I'm more interested in
> runtime for a subset of functional tests you can run repeatedly while
> developing. I routinely disable the perf and other slow tests, to get
> a subset taking 5-7 minutes. That's ok, but I'm pretty confident I
> can get better coverage in significantly less time using parallel
> tests.
Probably, yes, but still I would like to point out that the difference
between five and ten minutes is not as relevant in terms of workflow as
the difference between one and five minutes.
> > I mean, I see the value of running things in parallel in a general
> > case, but I don't think you should just ignore everything else.
> >
> > > > $ time (make clean; make passt; make clean; make pasta; make clean; make qrap; make clean; make; d=$(mktemp -d); prefix=$d make install; prefix=$d make uninstall; )
> > > > [...]
> > > > real 0m17.449s
> > > > user 0m15.616s
> > > > sys 0m2.136s
> > >
> > > On my system:
> > > [...]
> > > real 0m20.325s
> > > user 0m15.595s
> > > sys 0m5.287s
> > >
> > > > compared to:
> > > >
> > > > $ time ./run
> > > > [...]
> > > > real 0m18.217s
> > > > user 0m0.010s
> > > > sys 0m0.001s
> > > >
> > > > ...which I would call essentially no overhead. I didn't try out this
> > > > version yet, I suspect it would be somewhere in between.
> > >
> > > Well..
> > >
> > > $ time PYTHONPATH=test/exeter/py3 test/venv/bin/avocado run test/build/build.json
> > > [...]
> > > RESULTS : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
> > > JOB TIME : 10.85 s
> > >
> > > real 0m11.000s
> > > user 0m23.439s
> > > sys 0m7.315s
> > >
> > > Because parallel. It looks like the avocado start up time is
> > > reasonably substantial too, so that should look better with a larger
> > > set of tests.
> >
> > With the current set of tests, I doubt it's ever going to pay off. Even
> > if you run the non-perf tests in 10% of the time, it's going to be 24
> > minutes instead of 37.
>
> Including the perf tests, probably not. Excluding them (which is
> extremely useful when actively coding) I think it will.
>
> > I guess it will start making sense with larger matrices of network
> > environments, or with more test cases (but really a lot of them).
>
> We could certainly do with a lot more tests, though I expect it will
> take a while to get them.
>
--
Stefano
[-- Attachment #2: test_speedup.patch --]
[-- Type: text/x-patch, Size: 14213 bytes --]
diff --git a/test/lib/layout b/test/lib/layout
index f9a1cf1..4d03572 100644
--- a/test/lib/layout
+++ b/test/lib/layout
@@ -15,7 +15,7 @@
# layout_pasta() - Panes for host, pasta, and separate one for namespace
layout_pasta() {
- sleep 3
+ sleep 1
tmux kill-pane -a -t 0
cmd_write 0 clear
@@ -46,7 +46,7 @@ layout_pasta() {
# layout_passt() - Panes for host, passt, and guest
layout_passt() {
- sleep 3
+ sleep 1
tmux kill-pane -a -t 0
cmd_write 0 clear
@@ -77,7 +77,7 @@ layout_passt() {
# layout_passt_in_pasta() - Host, passt within pasta, namespace and guest
layout_passt_in_pasta() {
- sleep 3
+ sleep 1
tmux kill-pane -a -t 0
cmd_write 0 clear
@@ -113,7 +113,7 @@ layout_passt_in_pasta() {
# layout_two_guests() - Two guest panes, two passt panes, plus host and log
layout_two_guests() {
- sleep 3
+ sleep 1
tmux kill-pane -a -t 0
cmd_write 0 clear
@@ -152,7 +152,7 @@ layout_two_guests() {
# layout_demo_pasta() - Four panes for pasta demo
layout_demo_pasta() {
- sleep 3
+ sleep 1
cmd_write 0 cd ${BASEPATH}
cmd_write 0 clear
@@ -188,7 +188,7 @@ layout_demo_pasta() {
# layout_demo_passt() - Four panes for passt demo
layout_demo_passt() {
- sleep 3
+ sleep 1
cmd_write 0 cd ${BASEPATH}
cmd_write 0 clear
@@ -224,7 +224,7 @@ layout_demo_passt() {
# layout_demo_podman() - Four panes for pasta demo with Podman
layout_demo_podman() {
- sleep 3
+ sleep 1
cmd_write 0 cd ${BASEPATH}
cmd_write 0 clear
diff --git a/test/lib/term b/test/lib/term
index 262937e..95f9a01 100755
--- a/test/lib/term
+++ b/test/lib/term
@@ -97,7 +97,6 @@ display_delay() {
switch_pane() {
tmux select-pane -t ${1}
PR_DELAY=${PR_DELAY_INIT}
- display_delay "0.2"
}
# cmd_write() - Write a command to a pane, letter by letter, and execute it
@@ -199,7 +198,7 @@ pane_run() {
# $1: Pane name
pane_wait() {
__lc="$(echo "${1}" | tr [A-Z] [a-z])"
- sleep 0.1 || sleep 1
+ sleep 0.01 || sleep 1
__done=0
while
@@ -207,7 +206,7 @@ pane_wait() {
case ${__l} in
*"$ " | *"# ") return ;;
esac
- do sleep 0.1 || sleep 1; done
+ do sleep 0.01 || sleep 1; done
}
# pane_parse() - Print last line, @EMPTY@ if command had no output
@@ -231,7 +230,7 @@ pane_status() {
__status="$(pane_parse "${1}")"
while ! [ "${__status}" -eq "${__status}" ] 2>/dev/null; do
- sleep 1
+ sleep 0.1
pane_run "${1}" 'echo $?'
pane_wait "${1}"
__status="$(pane_parse "${1}")"
@@ -390,13 +389,6 @@ info_passed() {
info_nolog "...${PR_GREEN}passed${PR_NC}.\n"
log "...passed."
log
-
- for i in `seq 1 3`; do
- tmux set status-right-style 'bg=colour1 fg=colour2 bold'
- sleep "0.1"
- tmux set status-right-style 'bg=colour1 fg=colour233 bold'
- sleep "0.1"
- done
}
# info_failed() - Display, log, and make status bar blink when a test passes
@@ -407,13 +399,6 @@ info_failed() {
log "...failed."
log
- for i in `seq 1 3`; do
- tmux set status-right-style 'bg=colour1 fg=colour196 bold'
- sleep "0.1"
- tmux set status-right-style 'bg=colour1 fg=colour233 bold'
- sleep "0.1"
- done
-
pause_continue \
"Press any key to pause test session" \
"Resuming in " \
diff --git a/test/lib/test b/test/lib/test
index c525f8e..e6726be 100755
--- a/test/lib/test
+++ b/test/lib/test
@@ -33,7 +33,7 @@ test_iperf3k() {
pane_or_context_run "${__sctx}" 'kill -INT $(cat s.pid); rm s.pid'
- sleep 3 # Wait for kernel to free up ports
+ sleep 1 # Wait for kernel to free up ports
}
# test_iperf3() - Ugly helper for iperf3 directive
diff --git a/test/pasta_options/log_to_file b/test/pasta_options/log_to_file
index fe50e50..3ead06c 100644
--- a/test/pasta_options/log_to_file
+++ b/test/pasta_options/log_to_file
@@ -19,7 +19,7 @@ sleep 1
endef
def flood_log_client
-host tcp_crr --nolog -P 10001 -C 10002 -6 -c -H ::1
+host tcp_crr --nolog -l1 -P 10001 -C 10002 -6 -c -H ::1
endef
def check_log_size_mountns
@@ -42,7 +42,7 @@ pout PID2 echo $!
check head -1 __LOG_FILE__ | grep '^pasta .* [(]__PID2__[)]$'
test Maximum log size
-passtb ./pasta --config-net -d -f -l __LOG_FILE__ --log-size $((100 * 1024)) -- sh -c 'while true; do tcp_crr --nolog -P 10001 -C 10002 -6; done'
+passtb ./pasta --config-net -d -f -l __LOG_FILE__ --log-size $((100 * 1024)) -- sh -c 'while true; do tcp_crr --nolog -l1 -P 10001 -C 10002 -6; done'
sleep 1
flood_log_client
diff --git a/test/perf/passt_tcp b/test/perf/passt_tcp
index 14343cb..695479f 100644
--- a/test/perf/passt_tcp
+++ b/test/perf/passt_tcp
@@ -38,7 +38,7 @@ hout FREQ_CPUFREQ (echo "scale=1"; printf '( %i + 10^5 / 2 ) / 10^6\n' $(cat /sy
hout FREQ [ -n "__FREQ_CPUFREQ__" ] && echo __FREQ_CPUFREQ__ || echo __FREQ_PROCFS__
set THREADS 4
-set TIME 10
+set TIME 1
set OMIT 0.1
set OPTS -Z -P __THREADS__ -l 1M -O__OMIT__
@@ -75,7 +75,7 @@ lat -
lat -
lat -
nsb tcp_rr --nolog -6
-gout LAT tcp_rr --nolog -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
+gout LAT tcp_rr --nolog -l1 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
lat __LAT__ 200 150
tl TCP CRR latency over IPv6: guest to host
@@ -85,7 +85,7 @@ lat -
lat -
lat -
nsb tcp_crr --nolog -6
-gout LAT tcp_crr --nolog -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
+gout LAT tcp_crr --nolog -l1 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
lat __LAT__ 500 400
tr TCP throughput over IPv4: guest to host
@@ -119,7 +119,7 @@ lat -
lat -
lat -
nsb tcp_rr --nolog -4
-gout LAT tcp_rr --nolog -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
+gout LAT tcp_rr --nolog -l1 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
lat __LAT__ 200 150
tl TCP CRR latency over IPv4: guest to host
@@ -129,7 +129,7 @@ lat -
lat -
lat -
nsb tcp_crr --nolog -4
-gout LAT tcp_crr --nolog -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
+gout LAT tcp_crr --nolog -l1 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
lat __LAT__ 500 400
tr TCP throughput over IPv6: host to guest
@@ -153,7 +153,7 @@ lat -
lat -
guestb tcp_rr --nolog -P 10001 -C 10011 -6
sleep 1
-nsout LAT tcp_rr --nolog -P 10001 -C 10011 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_rr --nolog -l1 -P 10001 -C 10011 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
lat __LAT__ 200 150
tl TCP CRR latency over IPv6: host to guest
@@ -164,7 +164,7 @@ lat -
lat -
guestb tcp_crr --nolog -P 10001 -C 10011 -6
sleep 1
-nsout LAT tcp_crr --nolog -P 10001 -C 10011 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_crr --nolog -l1 -P 10001 -C 10011 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
lat __LAT__ 500 350
@@ -189,7 +189,7 @@ lat -
lat -
guestb tcp_rr --nolog -P 10001 -C 10011 -4
sleep 1
-nsout LAT tcp_rr --nolog -P 10001 -C 10011 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_rr --nolog -l1 -P 10001 -C 10011 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
lat __LAT__ 200 150
tl TCP CRR latency over IPv6: host to guest
@@ -200,7 +200,7 @@ lat -
lat -
guestb tcp_crr --nolog -P 10001 -C 10011 -4
sleep 1
-nsout LAT tcp_crr --nolog -P 10001 -C 10011 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_crr --nolog -l1 -P 10001 -C 10011 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
lat __LAT__ 500 300
te
diff --git a/test/perf/passt_udp b/test/perf/passt_udp
index 8919280..f25c903 100644
--- a/test/perf/passt_udp
+++ b/test/perf/passt_udp
@@ -31,7 +31,7 @@ hout FREQ_CPUFREQ (echo "scale=1"; printf '( %i + 10^5 / 2 ) / 10^6\n' $(cat /sy
hout FREQ [ -n "__FREQ_CPUFREQ__" ] && echo __FREQ_CPUFREQ__ || echo __FREQ_PROCFS__
set THREADS 2
-set TIME 10
+set TIME 1
set OPTS -u -P __THREADS__ --pacing-timer 1000
info Throughput in Gbps, latency in µs, __THREADS__ threads at __FREQ__ GHz
diff --git a/test/perf/pasta_tcp b/test/perf/pasta_tcp
index 8d2f911..a443f5a 100644
--- a/test/perf/pasta_tcp
+++ b/test/perf/pasta_tcp
@@ -22,7 +22,7 @@ ns /sbin/sysctl -w net.ipv4.tcp_timestamps=0
set THREADS 4
-set TIME 10
+set TIME 1
set OMIT 0.1
set OPTS -Z -w 4M -l 1M -P __THREADS__ -O__OMIT__
@@ -46,13 +46,13 @@ iperf3k host
tl TCP RR latency over IPv6: ns to host
hostb tcp_rr --nolog -P 10003 -C 10013 -6
-nsout LAT tcp_rr --nolog -P 10003 -C 10013 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_rr --nolog -l1 -P 10003 -C 10013 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
hostw
lat __LAT__ 150 100
tl TCP CRR latency over IPv6: ns to host
hostb tcp_crr --nolog -P 10003 -C 10013 -6
-nsout LAT tcp_crr --nolog -P 10003 -C 10013 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_crr --nolog -l1 -P 10003 -C 10013 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
hostw
lat __LAT__ 500 350
@@ -67,13 +67,13 @@ iperf3k host
tl TCP RR latency over IPv4: ns to host
hostb tcp_rr --nolog -P 10003 -C 10013 -4
-nsout LAT tcp_rr --nolog -P 10003 -C 10013 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_rr --nolog -l1 -P 10003 -C 10013 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
hostw
lat __LAT__ 150 100
tl TCP CRR latency over IPv4: ns to host
hostb tcp_crr --nolog -P 10003 -C 10013 -4
-nsout LAT tcp_crr --nolog -P 10003 -C 10013 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_crr --nolog -l1 -P 10003 -C 10013 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
hostw
lat __LAT__ 500 350
@@ -87,13 +87,13 @@ iperf3k ns
tl TCP RR latency over IPv6: host to ns
nsb tcp_rr --nolog -P 10002 -C 10012 -6
-hout LAT tcp_rr --nolog -P 10002 -C 10012 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+hout LAT tcp_rr --nolog -l1 -P 10002 -C 10012 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
nsw
lat __LAT__ 150 100
tl TCP CRR latency over IPv6: host to ns
nsb tcp_crr --nolog -P 10002 -C 10012 -6
-hout LAT tcp_crr --nolog -P 10002 -C 10012 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+hout LAT tcp_crr --nolog -l1 -P 10002 -C 10012 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
nsw
lat __LAT__ 1000 700
@@ -108,13 +108,13 @@ iperf3k ns
tl TCP RR latency over IPv4: host to ns
nsb tcp_rr --nolog -P 10002 -C 10012 -4
-hout LAT tcp_rr --nolog -P 10002 -C 10012 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+hout LAT tcp_rr --nolog -l1 -P 10002 -C 10012 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
nsw
lat __LAT__ 150 100
tl TCP CRR latency over IPv4: host to ns
nsb tcp_crr --nolog -P 10002 -C 10012 -4
-hout LAT tcp_crr --nolog -P 10002 -C 10012 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+hout LAT tcp_crr --nolog -l1 -P 10002 -C 10012 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
nsw
lat __LAT__ 1000 700
@@ -156,7 +156,7 @@ lat -
lat -
lat -
hostb tcp_rr --nolog -P 10003 -C 10013 -6
-nsout LAT tcp_rr --nolog -P 10003 -C 10013 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_rr --nolog -l1 -P 10003 -C 10013 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
hostw
lat __LAT__ 150 100
@@ -165,7 +165,7 @@ lat -
lat -
lat -
hostb tcp_crr --nolog -P 10003 -C 10013 -6
-nsout LAT tcp_crr --nolog -P 10003 -C 10013 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_crr --nolog -l1 -P 10003 -C 10013 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
hostw
lat __LAT__ 1500 500
@@ -193,7 +193,7 @@ lat -
lat -
lat -
hostb tcp_rr --nolog -P 10003 -C 10013 -4
-nsout LAT tcp_rr --nolog -P 10003 -C 10013 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_rr --nolog -l1 -P 10003 -C 10013 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
hostw
lat __LAT__ 150 100
@@ -202,7 +202,7 @@ lat -
lat -
lat -
hostb tcp_crr --nolog -P 10003 -C 10013 -4
-nsout LAT tcp_crr --nolog -P 10003 -C 10013 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout LAT tcp_crr --nolog -l1 -P 10003 -C 10013 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
hostw
lat __LAT__ 1500 500
@@ -224,7 +224,7 @@ lat -
lat -
lat -
nsb tcp_rr --nolog -P 10002 -C 10012 -6
-hout LAT tcp_rr --nolog -P 10002 -C 10012 -6 -c -H __ADDR6__ | sed -n 's/^throughput=\(.*\)/\1/p'
+hout LAT tcp_rr --nolog -l1 -P 10002 -C 10012 -6 -c -H __ADDR6__ | sed -n 's/^throughput=\(.*\)/\1/p'
nsw
lat __LAT__ 150 100
@@ -234,7 +234,7 @@ lat -
lat -
sleep 1
nsb tcp_crr --nolog -P 10002 -C 10012 -6
-hout LAT tcp_crr --nolog -P 10002 -C 10012 -6 -c -H __ADDR6__ | sed -n 's/^throughput=\(.*\)/\1/p'
+hout LAT tcp_crr --nolog -l1 -P 10002 -C 10012 -6 -c -H __ADDR6__ | sed -n 's/^throughput=\(.*\)/\1/p'
nsw
lat __LAT__ 5000 10000
@@ -256,7 +256,7 @@ lat -
lat -
lat -
nsb tcp_rr --nolog -P 10002 -C 10012 -4
-hout LAT tcp_rr --nolog -P 10002 -C 10012 -4 -c -H __ADDR__ | sed -n 's/^throughput=\(.*\)/\1/p'
+hout LAT tcp_rr --nolog -l1 -P 10002 -C 10012 -4 -c -H __ADDR__ | sed -n 's/^throughput=\(.*\)/\1/p'
nsw
lat __LAT__ 150 100
@@ -266,7 +266,7 @@ lat -
lat -
sleep 1
nsb tcp_crr --nolog -P 10002 -C 10012 -4
-hout LAT tcp_crr --nolog -P 10002 -C 10012 -4 -c -H __ADDR__ | sed -n 's/^throughput=\(.*\)/\1/p'
+hout LAT tcp_crr --nolog -l1 -P 10002 -C 10012 -4 -c -H __ADDR__ | sed -n 's/^throughput=\(.*\)/\1/p'
nsw
lat __LAT__ 5000 10000
diff --git a/test/perf/pasta_udp b/test/perf/pasta_udp
index 6acbfd3..9fed62e 100644
--- a/test/perf/pasta_udp
+++ b/test/perf/pasta_udp
@@ -21,7 +21,7 @@ hout FREQ_CPUFREQ (echo "scale=1"; printf '( %i + 10^5 / 2 ) / 10^6\n' $(cat /sy
hout FREQ [ -n "__FREQ_CPUFREQ__" ] && echo __FREQ_CPUFREQ__ || echo __FREQ_PROCFS__
set THREADS 1
-set TIME 10
+set TIME 1
set OPTS -u -P __THREADS__
info Throughput in Gbps, latency in µs, one thread at __FREQ__ GHz
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 07/22] test: Add linters for Python code
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (5 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 06/22] test: Add exeter+Avocado based build tests David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 08/22] tasst: Introduce library of common test helpers David Gibson
` (15 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
We use both cppcheck and clang-tidy to lint our C code. Now that we're
introducing Python code in the tests, use the pycodestyle and flake8 linters.
Add a "make meta" target to run tests of the test infrastructure. For now
it just has the linters, but we'll add more in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 22 +++++++++++++++++++---
test/build/static_checkers.sh | 2 --
test/meta/.gitignore | 1 +
test/meta/lint.sh | 28 ++++++++++++++++++++++++++++
test/run_avocado | 5 +++--
5 files changed, 51 insertions(+), 7 deletions(-)
create mode 100644 test/meta/.gitignore
create mode 100644 test/meta/lint.sh
diff --git a/test/Makefile b/test/Makefile
index d24fce14..0b3ed3d0 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -6,6 +6,8 @@
# Author: David Gibson <david@gibson.dropbear.id.au>
WGET = wget -c
+FLAKE8 = flake8-3
+PYCODESTYLE = pycodestyle-3
DEBIAN_IMGS = debian-8.11.0-openstack-amd64.qcow2 \
debian-9-nocloud-amd64-daily-20200210-166.qcow2 \
@@ -66,9 +68,13 @@ ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
EXETER_SH = build/static_checkers.sh
EXETER_PY = build/build.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
-
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
+EXETER_META = meta/lint.json
+META_JOBS = $(EXETER_META)
+
+PYPKGS = $(EXETER_PY)
+
PYTHON = python3
VENV = venv
PIP = $(VENV)/bin/pip3
@@ -147,7 +153,17 @@ venv: pull-exeter
.PHONY: avocado
avocado: venv $(AVOCADO_JOBS)
- $(RUN_AVOCADO) $(AVOCADO_JOBS)
+ $(RUN_AVOCADO) all $(AVOCADO_JOBS)
+
+.PHONY: meta
+meta: venv $(META_JOBS)
+ $(RUN_AVOCADO) meta $(META_JOBS)
+
+flake8:
+ $(FLAKE8) $(PYPKGS)
+
+pycodestyle:
+ $(PYCODESTYLE) $(PYPKGS)
check: assets
./run
@@ -161,7 +177,7 @@ clean:
rm -rf test_logs
rm -f prepared-*.qcow2 prepared-*.img
rm -rf $(VENV)
- rm -f $(EXETER_JOBS)
+ rm -f $(EXETER_JOBS) $(EXETER_META)
realclean: clean
rm -rf $(DOWNLOAD_ASSETS)
diff --git a/test/build/static_checkers.sh b/test/build/static_checkers.sh
index ec159ea2..fa07f8fd 100644
--- a/test/build/static_checkers.sh
+++ b/test/build/static_checkers.sh
@@ -26,5 +26,3 @@ clang_tidy () {
exeter_register clang_tidy
exeter_main "$@"
-
-
diff --git a/test/meta/.gitignore b/test/meta/.gitignore
new file mode 100644
index 00000000..a6c57f5f
--- /dev/null
+++ b/test/meta/.gitignore
@@ -0,0 +1 @@
+*.json
diff --git a/test/meta/lint.sh b/test/meta/lint.sh
new file mode 100644
index 00000000..6cbaa5d4
--- /dev/null
+++ b/test/meta/lint.sh
@@ -0,0 +1,28 @@
+#! /bin/sh
+#
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# PASST - Plug A Simple Socket Transport
+# for qemu/UNIX domain socket mode
+#
+# PASTA - Pack A Subtle Tap Abstraction
+# for network namespace/tap device mode
+#
+# test/meta/lint.sh - Linters for the test code
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+source $(dirname $0)/../exeter/sh/exeter.sh
+
+flake8 () {
+ make -C test flake8
+}
+exeter_register flake8
+
+pycodestyle () {
+ make -C test pycodestyle
+}
+exeter_register pycodestyle
+
+exeter_main "$@"
diff --git a/test/run_avocado b/test/run_avocado
index 26a226ce..b62864f6 100755
--- a/test/run_avocado
+++ b/test/run_avocado
@@ -35,13 +35,14 @@ def main():
os.path.dirname(os.path.dirname(__file__))
)
- references = [os.path.join(repo_root_path, 'test', x) for x in sys.argv[1:]]
+ suitename = sys.argv[1]
+ references = [os.path.join(repo_root_path, 'test', x) for x in sys.argv[2:]]
config = {
"resolver.references": references,
"runner.identifier_format": "{args}",
}
- suite = TestSuite.from_config(config, name="all")
+ suite = TestSuite.from_config(config, name=suitename)
with Job(config, [suite]) as j:
return j.run()
--
@@ -35,13 +35,14 @@ def main():
os.path.dirname(os.path.dirname(__file__))
)
- references = [os.path.join(repo_root_path, 'test', x) for x in sys.argv[1:]]
+ suitename = sys.argv[1]
+ references = [os.path.join(repo_root_path, 'test', x) for x in sys.argv[2:]]
config = {
"resolver.references": references,
"runner.identifier_format": "{args}",
}
- suite = TestSuite.from_config(config, name="all")
+ suite = TestSuite.from_config(config, name=suitename)
with Job(config, [suite]) as j:
return j.run()
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 08/22] tasst: Introduce library of common test helpers
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (6 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 07/22] test: Add linters for Python code David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 09/22] tasst: "snh" module for simulated network hosts David Gibson
` (14 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Create a Python package "tasst" with common helper code for use in passt
and pasta. Initially it just has a placeholder selftest.
Extend the meta tests to include selftests within the tasst library. This
lets us test the functionality of the library itself without involving
actual passt or pasta.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 13 +++++++++----
test/tasst/.gitignore | 1 +
test/tasst/__init__.py | 11 +++++++++++
test/tasst/__main__.py | 22 ++++++++++++++++++++++
4 files changed, 43 insertions(+), 4 deletions(-)
create mode 100644 test/tasst/.gitignore
create mode 100644 test/tasst/__init__.py
create mode 100644 test/tasst/__main__.py
diff --git a/test/Makefile b/test/Makefile
index 0b3ed3d0..81f94f70 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -70,15 +70,17 @@ EXETER_PY = build/build.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
-EXETER_META = meta/lint.json
+TASST_SRCS = __init__.py __main__.py
+
+EXETER_META = meta/lint.json meta/tasst.json
META_JOBS = $(EXETER_META)
-PYPKGS = $(EXETER_PY)
+PYPKGS = tasst $(EXETER_PY)
PYTHON = python3
VENV = venv
PIP = $(VENV)/bin/pip3
-PYPATH = exeter/py3
+PYPATH = . exeter/py3
SPACE = $(subst ,, )
PYPATH_TEST = $(subst $(SPACE),:,$(PYPATH))
PYPATH_BASE = $(subst $(SPACE),:,$(PYPATH:%=test/%))
@@ -151,6 +153,9 @@ venv: pull-exeter
%.json: %.py pull-exeter
cd ..; PYTHONPATH=$(PYPATH_BASE) $(PYTHON) test/$< --avocado > test/$@
+meta/tasst.json: $(TASST_SRCS:%=tasst/%) $(VENV) pull-exeter
+ cd ..; PYTHONPATH=$(PYPATH_BASE) $(PYTHON) -m tasst --avocado > test/$@
+
.PHONY: avocado
avocado: venv $(AVOCADO_JOBS)
$(RUN_AVOCADO) all $(AVOCADO_JOBS)
@@ -176,7 +181,7 @@ clean:
rm -f $(LOCAL_ASSETS)
rm -rf test_logs
rm -f prepared-*.qcow2 prepared-*.img
- rm -rf $(VENV)
+ rm -rf $(VENV) tasst/__pycache__
rm -f $(EXETER_JOBS) $(EXETER_META)
realclean: clean
diff --git a/test/tasst/.gitignore b/test/tasst/.gitignore
new file mode 100644
index 00000000..c18dd8d8
--- /dev/null
+++ b/test/tasst/.gitignore
@@ -0,0 +1 @@
+__pycache__/
diff --git a/test/tasst/__init__.py b/test/tasst/__init__.py
new file mode 100644
index 00000000..c1d5d9dd
--- /dev/null
+++ b/test/tasst/__init__.py
@@ -0,0 +1,11 @@
+#! /usr/bin/env python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+library of test helpers for passt & pasta
+"""
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
new file mode 100644
index 00000000..c365b986
--- /dev/null
+++ b/test/tasst/__main__.py
@@ -0,0 +1,22 @@
+#! /usr/bin/env python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+library of test helpers for passt & pasta
+"""
+
+import exeter
+
+
+@exeter.test
+def placeholder():
+ pass
+
+
+if __name__ == '__main__':
+ exeter.main()
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 09/22] tasst: "snh" module for simulated network hosts
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (7 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 08/22] tasst: Introduce library of common test helpers David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 10/22] tasst: Add helper to get network interface names for a site David Gibson
` (13 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Add to the tasst library a SimNetHost class used to represent a
simulated network host of some type (e.g. namespaces, VMs). For now
all it does is let you execute commands, either in the foreground or
the background, in the context of the simulated host. We also add
some "meta" exeter tests for it.
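The core pattern here — a context-managed background process whose result is
collected and checked on demand — can be sketched with plain subprocess,
independent of the tasst infrastructure (BgProcess is a hypothetical,
stripped-down stand-in for the SnhProcess class in the patch below):

```python
import subprocess


class BgProcess:
    """Minimal sketch of the SnhProcess idea: start a command in the
    background, collect a checked CompletedProcess on demand."""

    def __init__(self, *cmd, check=True, **kwargs):
        self.cmd, self.check, self.kwargs = cmd, check, kwargs

    def __enter__(self):
        self.popen = subprocess.Popen(self.cmd, **self.kwargs)
        return self

    def run(self, **kwargs):
        # communicate() reaps the process and gathers any piped output
        stdout, stderr = self.popen.communicate(**kwargs)
        cp = subprocess.CompletedProcess(self.popen.args,
                                         self.popen.returncode,
                                         stdout, stderr)
        if self.check:
            cp.check_returncode()
        return cp

    def __exit__(self, *exc_details):
        self.popen.wait(timeout=1.0)


with BgProcess('echo', 'hello', stdout=subprocess.PIPE) as proc:
    res = proc.run()
print(res.stdout)  # b'hello\n'
```

The real class adds termination escalation (SIGTERM, then SIGKILL) when the
context exits with the process still running.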
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 2 +-
test/tasst/__main__.py | 6 +-
test/tasst/snh.py | 187 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 190 insertions(+), 5 deletions(-)
create mode 100644 test/tasst/snh.py
diff --git a/test/Makefile b/test/Makefile
index 81f94f70..8373ae77 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -70,7 +70,7 @@ EXETER_PY = build/build.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
-TASST_SRCS = __init__.py __main__.py
+TASST_SRCS = __init__.py __main__.py snh.py
EXETER_META = meta/lint.json meta/tasst.json
META_JOBS = $(EXETER_META)
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
index c365b986..91499128 100644
--- a/test/tasst/__main__.py
+++ b/test/tasst/__main__.py
@@ -12,10 +12,8 @@ library of test helpers for passt & pasta
import exeter
-
-@exeter.test
-def placeholder():
- pass
+# We import just to get the exeter tests, which flake8 can't see
+from . import snh # noqa: F401
if __name__ == '__main__':
diff --git a/test/tasst/snh.py b/test/tasst/snh.py
new file mode 100644
index 00000000..dfbe2c84
--- /dev/null
+++ b/test/tasst/snh.py
@@ -0,0 +1,187 @@
+#! /usr/bin/env python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+tasst/snh.py - Simulated network hosts for testing
+"""
+
+
+import contextlib
+import subprocess
+import sys
+
+import exeter
+
+
+STDOUT = 1
+
+
+class SnhProcess(contextlib.AbstractContextManager):
+ """
+ A background process running on a SimNetHost
+ """
+
+ def __init__(self, snh, *cmd, check=True, context_timeout=1.0, **kwargs):
+ self.snh = snh
+ self.cmd = cmd
+ self.check = check
+ self.context_timeout = float(context_timeout)
+
+ self.kwargs = kwargs
+
+ def __enter__(self):
+ self.popen = subprocess.Popen(self.cmd, **self.kwargs)
+ return self
+
+ def run(self, **kwargs):
+ stdout, stderr = self.popen.communicate(**kwargs)
+ cp = subprocess.CompletedProcess(self.popen.args,
+ self.popen.returncode,
+ stdout, stderr)
+ if self.check:
+ cp.check_returncode()
+ return cp
+
+ def terminate(self):
+ self.popen.terminate()
+
+ def kill(self):
+ self.popen.kill()
+
+ def __exit__(self, *exc_details):
+ try:
+ self.popen.wait(timeout=self.context_timeout)
+ except subprocess.TimeoutExpired as e:
+ self.terminate()
+ try:
+ self.popen.wait(timeout=self.context_timeout)
+ except subprocess.TimeoutExpired:
+ self.kill()
+ raise e
+
+
+class SimNetHost(contextlib.AbstractContextManager):
+ """
+ A (usually virtual or simulated) location where we can execute
+ commands and configure networks.
+
+ """
+
+ def __init__(self, name):
+ self.name = name # For debugging
+
+ def hostify(self, *cmd, **kwargs):
+ raise NotImplementedError
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, *exc_details):
+ pass
+
+ def output(self, *cmd, **kwargs):
+ proc = self.fg(*cmd, capture=STDOUT, **kwargs)
+ return proc.stdout
+
+ def fg(self, *cmd, timeout=None, **kwargs):
+ # We don't use subprocess.run() because it kills without
+ # attempting to terminate on timeout
+ with self.bg(*cmd, **kwargs) as proc:
+ res = proc.run(timeout=timeout)
+ return res
+
+ def bg(self, *cmd, capture=None, **kwargs):
+ if capture == STDOUT:
+ kwargs['stdout'] = subprocess.PIPE
+ hostcmd, kwargs = self.hostify(*cmd, **kwargs)
+ proc = SnhProcess(self, *hostcmd, **kwargs)
+ print(f"SimNetHost {self.name}: Started {cmd} => {proc}",
+ file=sys.stderr)
+ return proc
+
+ # Internal tests
+ def test_true(self):
+ with self as snh:
+ snh.fg('true')
+
+ def test_false(self):
+ with self as snh:
+ exeter.assert_raises(subprocess.CalledProcessError,
+ snh.fg, 'false')
+
+ def test_echo(self):
+ msg = 'Hello tasst'
+ with self as snh:
+ out = snh.output('echo', f'{msg}')
+ exeter.assert_eq(out, msg.encode('utf-8') + b'\n')
+
+ def test_timeout(self):
+ with self as snh:
+ exeter.assert_raises(subprocess.TimeoutExpired, snh.fg,
+ 'sleep', 'infinity', timeout=0.1, check=False)
+
+ def test_bg_true(self):
+ with self as snh:
+ with snh.bg('true'):
+ pass
+
+ def test_bg_false(self):
+ with self as snh:
+ with snh.bg('false') as proc:
+ exeter.assert_raises(subprocess.CalledProcessError, proc.run)
+
+ def test_bg_echo(self):
+ msg = 'Hello tasst'
+ with self as snh:
+ with snh.bg('echo', f'{msg}', capture=STDOUT) as proc:
+ res = proc.run()
+ exeter.assert_eq(res.stdout, msg.encode('utf-8') + b'\n')
+
+ def test_bg_timeout(self):
+ with self as snh:
+ with snh.bg('sleep', 'infinity') as proc:
+ exeter.assert_raises(subprocess.TimeoutExpired,
+ proc.run, timeout=0.1)
+ proc.terminate()
+
+ def test_bg_context_timeout(self):
+ with self as snh:
+ def run_timeout():
+ with snh.bg('sleep', 'infinity', context_timeout=0.1):
+ pass
+ exeter.assert_raises(subprocess.TimeoutExpired, run_timeout)
+
+ SELFTESTS = [test_true, test_false, test_echo, test_timeout,
+ test_bg_true, test_bg_false, test_bg_echo, test_bg_timeout,
+ test_bg_context_timeout]
+
+ @classmethod
+ def selftest(cls, setup):
+ "Register standard snh tests for instance returned by setup"
+ for t in cls.SELFTESTS:
+ testid = f'{setup.__qualname__}|{t.__qualname__}'
+ exeter.register_pipe(testid, setup, t)
+
+
+class RealHost(SimNetHost):
+ """Represents the host on which the tests are running (as opposed
+ to some simulated host created by the tests)
+
+ """
+
+ def __init__(self):
+ super().__init__('REAL_HOST')
+
+ def hostify(self, *cmd, capable=False, **kwargs):
+ assert not capable, \
+ "BUG: Shouldn't run commands with capabilities on host"
+ return cmd, kwargs
+
+
+SimNetHost.selftest(RealHost)
--
2.45.2
* [PATCH v2 10/22] tasst: Add helper to get network interface names for a site
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (8 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 09/22] tasst: "snh" module for simulated network hosts David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 11/22] tasst: Add helpers to run commands with nstool David Gibson
` (12 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Start adding convenience functions for handling sites as places with
network setup, beginning with a simple helper which lists the network
interface names for a site.
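The helper relies on iproute2's JSON output mode (`ip -j link show`); a
minimal sketch of the parsing it performs, using abridged sample output
(the real command emits many more fields per interface):

```python
import json

# Abridged sample in the shape produced by `ip -j link show`
sample = '''
[{"ifindex": 1, "ifname": "lo", "operstate": "UNKNOWN"},
 {"ifindex": 2, "ifname": "veth1", "operstate": "UP"}]
'''


def ifnames(ip_json):
    """Extract interface names, as the snh ifs() helper does."""
    return [i['ifname'] for i in json.loads(ip_json)]


print(ifnames(sample))  # ['lo', 'veth1']
```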
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/tasst/snh.py | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/test/tasst/snh.py b/test/tasst/snh.py
index dfbe2c84..8ee9802a 100644
--- a/test/tasst/snh.py
+++ b/test/tasst/snh.py
@@ -13,6 +13,7 @@ tasst/snh.py - Simulated network hosts for testing
import contextlib
+import json
import subprocess
import sys
@@ -105,6 +106,10 @@ class SimNetHost(contextlib.AbstractContextManager):
file=sys.stderr)
return proc
+ def ifs(self):
+ info = json.loads(self.output('ip', '-j', 'link', 'show'))
+ return [i['ifname'] for i in info]
+
# Internal tests
def test_true(self):
with self as snh:
@@ -157,9 +162,14 @@ class SimNetHost(contextlib.AbstractContextManager):
pass
exeter.assert_raises(subprocess.TimeoutExpired, run_timeout)
+ def test_has_lo(self):
+ with self as snh:
+ assert 'lo' in snh.ifs()
+
SELFTESTS = [test_true, test_false, test_echo, test_timeout,
test_bg_true, test_bg_false, test_bg_echo, test_bg_timeout,
- test_bg_context_timeout]
+ test_bg_context_timeout,
+ test_has_lo]
@classmethod
def selftest(cls, setup):
--
2.45.2
* [PATCH v2 11/22] tasst: Add helpers to run commands with nstool
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (9 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 10/22] tasst: Add helper to get network interface names for a site David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 12/22] tasst: Add ifup and network address helpers to SimNetHost David Gibson
` (11 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Using our existing nstool C helper, add Python wrappers to easily run
commands in various namespaces.
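The central trick in the wrapper is rewriting a command so that it runs
inside the target namespaces: prefix it with an `nstool exec` invocation
against the control socket. A self-contained sketch of that command
construction (NSTOOL_BIN, the --keep-caps flag and the argument order
follow the hostify() method in the patch below; the socket path is made
up for illustration):

```python
NSTOOL_BIN = 'test/nstool'


def hostify(sockpath, *cmd, capable=False):
    """Rewrite cmd to run inside the namespaces held at sockpath."""
    hostcmd = [NSTOOL_BIN, 'exec']
    if capable:
        hostcmd.append('--keep-caps')   # retain capabilities in the ns
    hostcmd += [sockpath, '--']
    return hostcmd + list(cmd)


print(hostify('/tmp/ns.sock', 'ip', 'link', 'show'))
# ['test/nstool', 'exec', '/tmp/ns.sock', '--', 'ip', 'link', 'show']
```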
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 8 +-
test/tasst/__main__.py | 2 +-
test/tasst/nstool.py | 170 +++++++++++++++++++++++++++++++++++++++++
test/tasst/snh.py | 16 ++++
4 files changed, 192 insertions(+), 4 deletions(-)
create mode 100644 test/tasst/nstool.py
diff --git a/test/Makefile b/test/Makefile
index 8373ae77..83725f59 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -64,13 +64,15 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
$(TESTDATA_ASSETS)
ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
+AVOCADO_ASSETS =
+META_ASSETS = nstool
EXETER_SH = build/static_checkers.sh
EXETER_PY = build/build.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
-TASST_SRCS = __init__.py __main__.py snh.py
+TASST_SRCS = __init__.py __main__.py nstool.py snh.py
EXETER_META = meta/lint.json meta/tasst.json
META_JOBS = $(EXETER_META)
@@ -157,11 +159,11 @@ meta/tasst.json: $(TASST_SRCS:%=tasst/%) $(VENV) pull-exeter
cd ..; PYTHONPATH=$(PYPATH_BASE) $(PYTHON) -m tasst --avocado > test/$@
.PHONY: avocado
-avocado: venv $(AVOCADO_JOBS)
+avocado: venv $(AVOCADO_ASSETS) $(AVOCADO_JOBS)
$(RUN_AVOCADO) all $(AVOCADO_JOBS)
.PHONY: meta
-meta: venv $(META_JOBS)
+meta: venv $(META_ASSETS) $(META_JOBS)
$(RUN_AVOCADO) meta $(META_JOBS)
flake8:
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
index 91499128..9fd6174e 100644
--- a/test/tasst/__main__.py
+++ b/test/tasst/__main__.py
@@ -13,7 +13,7 @@ library of test helpers for passt & pasta
import exeter
# We import just to get the exeter tests, which flake8 can't see
-from . import snh # noqa: F401
+from . import nstool, snh # noqa: F401
if __name__ == '__main__':
diff --git a/test/tasst/nstool.py b/test/tasst/nstool.py
new file mode 100644
index 00000000..0b23fbfb
--- /dev/null
+++ b/test/tasst/nstool.py
@@ -0,0 +1,170 @@
+#! /usr/bin/env python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+nstool.py - Run commands in namespaces via 'nstool'
+"""
+
+import contextlib
+import os
+import subprocess
+import tempfile
+
+import exeter
+
+from .snh import RealHost, SimNetHost
+
+# FIXME: Can this be made more portable?
+UNIX_PATH_MAX = 108
+
+NSTOOL_BIN = 'test/nstool'
+
+
+class NsTool(SimNetHost):
+ """A bundle of Linux namespaces managed by nstool"""
+
+ def __init__(self, name, sockpath, parent=RealHost()):
+ if len(sockpath) > UNIX_PATH_MAX:
+ raise ValueError(
+ f'Unix domain socket path "{sockpath}" is too long'
+ )
+
+ super().__init__(name)
+ self.sockpath = sockpath
+ self.parent = parent
+ self._pid = None
+
+ def __enter__(self):
+ cmd = [f'{NSTOOL_BIN}', 'info', '-wp', f'{self.sockpath}']
+ pid = self.parent.output(*cmd, timeout=1)
+ self._pid = int(pid)
+ return self
+
+ def __exit__(self, *exc_details):
+ pass
+
+ # PID of the nstool hold process as seen by the parent snh
+ def pid(self):
+ return self._pid
+
+ # PID of the nstool hold process as seen by another snh which can
+ # see the nstool socket (important when using PID namespaces)
+ def relative_pid(self, relative_to):
+ cmd = [f'{NSTOOL_BIN}', 'info', '-p', f'{self.sockpath}']
+ relpid = relative_to.output(*cmd)
+ return int(relpid)
+
+ def hostify(self, *cmd, capable=False, **kwargs):
+ hostcmd = [f'{NSTOOL_BIN}', 'exec']
+ if capable:
+ hostcmd.append('--keep-caps')
+ hostcmd += [self.sockpath, '--']
+ hostcmd += list(cmd)
+ return hostcmd, kwargs
+
+
+@contextlib.contextmanager
+def unshare_snh(name, *opts, parent=RealHost(), capable=False):
+ # Create path for temporary nstool Unix socket
+ with tempfile.TemporaryDirectory() as tmpd:
+ sockpath = os.path.join(tmpd, name)
+ cmd = ['unshare'] + list(opts)
+ cmd += ['--', f'{NSTOOL_BIN}', 'hold', f'{sockpath}']
+ with parent.bg(*cmd, capable=capable) as holder:
+ try:
+ with NsTool(name, sockpath, parent=parent) as snh:
+ yield snh
+ finally:
+ try:
+ parent.fg(f'{NSTOOL_BIN}', 'stop', f'{sockpath}')
+ finally:
+ try:
+ holder.run(timeout=0.1)
+ holder.kill()
+ finally:
+ try:
+ os.remove(sockpath)
+ except FileNotFoundError:
+ pass
+
+
+TEST_EXC = ValueError
+
+
+def test_sockdir_cleanup(s):
+ def mess(sockpaths):
+ with s as snh:
+ ns = snh
+ while isinstance(ns, NsTool):
+ sockpaths.append(ns.sockpath)
+ ns = ns.parent
+ raise TEST_EXC
+
+ sockpaths = []
+ exeter.assert_raises(TEST_EXC, mess, sockpaths)
+ assert sockpaths
+ for path in sockpaths:
+ assert not os.path.exists(os.path.dirname(path))
+
+
+def userns_snh():
+ return unshare_snh('usernetns', '-Ucn')
+
+
+@exeter.test
+def test_userns():
+ cmd = ['capsh', '--has-p=CAP_SETUID']
+ with RealHost() as realhost:
+ status = realhost.fg(*cmd, check=False)
+ assert status.returncode != 0
+ with userns_snh() as ns:
+ ns.fg(*cmd, capable=True)
+
+
+@contextlib.contextmanager
+def nested_snh():
+ with unshare_snh('userns', '-Uc') as userns:
+ with unshare_snh('netns', '-n', parent=userns, capable=True) as netns:
+ yield netns
+
+
+def pidns_snh():
+ return unshare_snh('pidns', '-Upfn')
+
+
+@exeter.test
+def test_relative_pid():
+ with pidns_snh() as snh:
+ # The holder is init (pid 1) within its own pidns
+ exeter.assert_eq(snh.relative_pid(snh), 1)
+
+
+# General tests for all the nstool examples
+for setup in [userns_snh, nested_snh, pidns_snh]:
+ # Common snh tests
+ SimNetHost.selftest_isolated(setup)
+ exeter.register_pipe(f'{setup.__qualname__}|test_sockdir_cleanup',
+ setup, test_sockdir_cleanup)
+
+
+@contextlib.contextmanager
+def connect_snh():
+ with tempfile.TemporaryDirectory() as tmpd:
+ sockpath = os.path.join(tmpd, 'nons')
+ holdcmd = [f'{NSTOOL_BIN}', 'hold', f'{sockpath}']
+ with subprocess.Popen(holdcmd) as holder:
+ try:
+ with NsTool("fakens", sockpath) as snh:
+ yield snh
+ finally:
+ holder.kill()
+ os.remove(sockpath)
+
+
+SimNetHost.selftest(connect_snh)
diff --git a/test/tasst/snh.py b/test/tasst/snh.py
index 8ee9802a..598ea979 100644
--- a/test/tasst/snh.py
+++ b/test/tasst/snh.py
@@ -178,6 +178,22 @@ class SimNetHost(contextlib.AbstractContextManager):
testid = f'{setup.__qualname__}|{t.__qualname__}'
exeter.register_pipe(testid, setup, t)
+ # Additional tests only valid if the snh is isolated (no outside
+ # network connections)
+ def test_is_isolated(self):
+ with self as snh:
+ exeter.assert_eq(snh.ifs(), ['lo'])
+
+ ISOLATED_SELFTESTS = [test_is_isolated]
+
+ @classmethod
+ def selftest_isolated(cls, setup):
+ "Register self tests for an isolated snh example"
+ cls.selftest(setup)
+ for t in cls.ISOLATED_SELFTESTS:
+ testid = f'{setup.__qualname__}|{t.__qualname__}'
+ exeter.register_pipe(testid, setup, t)
+
class RealHost(SimNetHost):
"""Represents the host on which the tests are running (as opposed
--
2.45.2
* [PATCH v2 12/22] tasst: Add ifup and network address helpers to SimNetHost
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (10 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 11/22] tasst: Add helpers to run commands with nstool David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 13/22] tasst: Helper for creating veth devices between namespaces David Gibson
` (10 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Add helpers to bring network interfaces up on an snh and to retrieve
its configured IP addresses.
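The address helpers parse iproute2's `ip -j addr show <ifname>` JSON and
turn each non-tentative entry into an ipaddress interface object. A sketch
of that parsing over abridged sample output:

```python
import ipaddress
import json

# Abridged sample in the shape of `ip -j addr show lo` output
sample = '''
[{"ifname": "lo",
  "addr_info": [{"family": "inet", "local": "127.0.0.1", "prefixlen": 8},
                {"family": "inet6", "local": "::1", "prefixlen": 128}]}]
'''


def addrs(ip_json):
    """Parse non-tentative addresses, as the snh addrs() helper does."""
    (info,) = json.loads(ip_json)   # exactly one interface was requested
    return [ipaddress.ip_interface(f'{ai["local"]}/{ai["prefixlen"]}')
            for ai in info['addr_info'] if 'tentative' not in ai]


print(addrs(sample))
# [IPv4Interface('127.0.0.1/8'), IPv6Interface('::1/128')]
```

Skipping entries flagged 'tentative' matters for IPv6, where an address is
unusable until duplicate address detection completes.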
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/tasst/nstool.py | 11 +++++++++--
test/tasst/snh.py | 28 +++++++++++++++++++++++++++-
2 files changed, 36 insertions(+), 3 deletions(-)
diff --git a/test/tasst/nstool.py b/test/tasst/nstool.py
index 0b23fbfb..d852d81e 100644
--- a/test/tasst/nstool.py
+++ b/test/tasst/nstool.py
@@ -113,8 +113,11 @@ def test_sockdir_cleanup(s):
assert not os.path.exists(os.path.dirname(path))
+@contextlib.contextmanager
def userns_snh():
- return unshare_snh('usernetns', '-Ucn')
+ with unshare_snh('usernetns', '-Ucn') as ns:
+ ns.ifup('lo')
+ yield ns
@exeter.test
@@ -131,11 +134,15 @@ def test_userns():
def nested_snh():
with unshare_snh('userns', '-Uc') as userns:
with unshare_snh('netns', '-n', parent=userns, capable=True) as netns:
+ netns.ifup('lo')
yield netns
+@contextlib.contextmanager
def pidns_snh():
- return unshare_snh('pidns', '-Upfn')
+ with unshare_snh('pidns', '-Upfn') as ns:
+ ns.ifup('lo')
+ yield ns
@exeter.test
diff --git a/test/tasst/snh.py b/test/tasst/snh.py
index 598ea979..fd8f6f13 100644
--- a/test/tasst/snh.py
+++ b/test/tasst/snh.py
@@ -13,6 +13,7 @@ tasst/snh.py - Simulated network hosts for testing
import contextlib
+import ipaddress
import json
import subprocess
import sys
@@ -110,6 +111,25 @@ class SimNetHost(contextlib.AbstractContextManager):
info = json.loads(self.output('ip', '-j', 'link', 'show'))
return [i['ifname'] for i in info]
+ def ifup(self, ifname):
+ self.fg('ip', 'link', 'set', f'{ifname}', 'up', capable=True)
+
+ def addrinfos(self, ifname, **criteria):
+ info = json.loads(self.output('ip', '-j', 'addr', 'show', f'{ifname}'))
+ assert len(info) == 1 # We specified a specific interface
+
+ ais = list(ai for ai in info[0]['addr_info'])
+ for key, value in criteria.items():
+ ais = [ai for ai in ais if key in ai and ai[key] == value]
+
+ return ais
+
+ def addrs(self, ifname, **criteria):
+ # Return just the parsed, non-tentative addresses
+ return [ipaddress.ip_interface(f'{ai["local"]}/{ai["prefixlen"]}')
+ for ai in self.addrinfos(ifname, **criteria)
+ if 'tentative' not in ai]
+
# Internal tests
def test_true(self):
with self as snh:
@@ -166,10 +186,16 @@ class SimNetHost(contextlib.AbstractContextManager):
with self as snh:
assert 'lo' in snh.ifs()
+ def test_lo_addrs(self):
+ expected = set(ipaddress.ip_interface(a)
+ for a in ['127.0.0.1/8', '::1/128'])
+ with self as snh:
+ assert set(snh.addrs('lo')) == expected
+
SELFTESTS = [test_true, test_false, test_echo, test_timeout,
test_bg_true, test_bg_false, test_bg_echo, test_bg_timeout,
test_bg_context_timeout,
- test_has_lo]
+ test_has_lo, test_lo_addrs]
@classmethod
def selftest(cls, setup):
--
2.45.2
* [PATCH v2 13/22] tasst: Helper for creating veth devices between namespaces
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (11 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 12/22] tasst: Add ifup and network address helpers to SimNetHost David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 14/22] tasst: Add helper for getting MTU of a network interface David Gibson
` (9 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 3 ++-
test/tasst/__main__.py | 1 +
test/tasst/nstool.py | 9 +++++++++
test/tasst/selftest/__init__.py | 16 ++++++++++++++++
test/tasst/selftest/veth.py | 33 +++++++++++++++++++++++++++++++++
5 files changed, 61 insertions(+), 1 deletion(-)
create mode 100644 test/tasst/selftest/__init__.py
create mode 100644 test/tasst/selftest/veth.py
diff --git a/test/Makefile b/test/Makefile
index 83725f59..e13c49c8 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -72,7 +72,8 @@ EXETER_PY = build/build.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
-TASST_SRCS = __init__.py __main__.py nstool.py snh.py
+TASST_SRCS = __init__.py __main__.py nstool.py snh.py \
+ selftest/__init__.py selftest/veth.py
EXETER_META = meta/lint.json meta/tasst.json
META_JOBS = $(EXETER_META)
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
index 9fd6174e..d52f9c55 100644
--- a/test/tasst/__main__.py
+++ b/test/tasst/__main__.py
@@ -14,6 +14,7 @@ import exeter
# We import just to get the exeter tests, which flake8 can't see
from . import nstool, snh # noqa: F401
+from .selftest import veth # noqa: F401
if __name__ == '__main__':
diff --git a/test/tasst/nstool.py b/test/tasst/nstool.py
index d852d81e..bf0174eb 100644
--- a/test/tasst/nstool.py
+++ b/test/tasst/nstool.py
@@ -68,6 +68,15 @@ class NsTool(SimNetHost):
hostcmd += list(cmd)
return hostcmd, kwargs
+ def veth(self, ifname, peername, peer=None):
+ self.fg('ip', 'link', 'add', f'{ifname}', 'type', 'veth',
+ 'peer', 'name', f'{peername}', capable=True)
+ if peer is not None:
+ if not isinstance(peer, NsTool):
+ raise TypeError
+ self.fg('ip', 'link', 'set', f'{peername}',
+ 'netns', f'{peer.relative_pid(self)}', capable=True)
+
@contextlib.contextmanager
def unshare_snh(name, *opts, parent=RealHost(), capable=False):
diff --git a/test/tasst/selftest/__init__.py b/test/tasst/selftest/__init__.py
new file mode 100644
index 00000000..d7742930
--- /dev/null
+++ b/test/tasst/selftest/__init__.py
@@ -0,0 +1,16 @@
+#! /usr/bin/python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""Test A Simple Socket Transport
+
+selftest/ - Selftests for the tasst library
+
+Usually, tests for the tasst helper library itself should go next to
+the implementation of the thing being tested. Sometimes that's
+inconvenient or impossible (usually because it would cause a circular
+module dependency). In that case those tests can go here.
+"""
diff --git a/test/tasst/selftest/veth.py b/test/tasst/selftest/veth.py
new file mode 100644
index 00000000..3c0b3f5b
--- /dev/null
+++ b/test/tasst/selftest/veth.py
@@ -0,0 +1,33 @@
+#! /usr/bin/env python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+selftest/veth.py - Test various veth configurations
+"""
+
+import contextlib
+
+import exeter
+
+from tasst import nstool
+
+
+@contextlib.contextmanager
+def unconfigured_veth():
+ with nstool.unshare_snh('ns1', '-Un') as ns1:
+ with nstool.unshare_snh('ns2', '-n', parent=ns1, capable=True) as ns2:
+ ns1.veth('veth1', 'veth2', ns2)
+ yield (ns1, ns2)
+
+
+@exeter.test
+def test_ifs():
+ with unconfigured_veth() as (ns1, ns2):
+ exeter.assert_eq(set(ns1.ifs()), set(['lo', 'veth1']))
+ exeter.assert_eq(set(ns2.ifs()), set(['lo', 'veth2']))
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 14/22] tasst: Add helper for getting MTU of a network interface
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (12 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 13/22] tasst: Helper for creating veth devices between namespaces David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 15/22] tasst: Add helper to wait for IP address to appear David Gibson
` (8 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/tasst/selftest/veth.py | 7 +++++++
test/tasst/snh.py | 11 ++++++++++-
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/test/tasst/selftest/veth.py b/test/tasst/selftest/veth.py
index 3c0b3f5b..5c8f0c0b 100644
--- a/test/tasst/selftest/veth.py
+++ b/test/tasst/selftest/veth.py
@@ -31,3 +31,10 @@ def test_ifs():
with unconfigured_veth() as (ns1, ns2):
exeter.assert_eq(set(ns1.ifs()), set(['lo', 'veth1']))
exeter.assert_eq(set(ns2.ifs()), set(['lo', 'veth2']))
+
+
+@exeter.test
+def test_mtu():
+ with unconfigured_veth() as (ns1, ns2):
+ exeter.assert_eq(ns1.mtu('veth1'), 1500)
+ exeter.assert_eq(ns2.mtu('veth2'), 1500)
diff --git a/test/tasst/snh.py b/test/tasst/snh.py
index fd8f6f13..0554fbd0 100644
--- a/test/tasst/snh.py
+++ b/test/tasst/snh.py
@@ -130,6 +130,11 @@ class SimNetHost(contextlib.AbstractContextManager):
for ai in self.addrinfos(ifname, **criteria)
if 'tentative' not in ai]
+ def mtu(self, ifname):
+ cmd = ['ip', '-j', 'link', 'show', f'{ifname}']
+ (info,) = json.loads(self.output(*cmd))
+ return info['mtu']
+
# Internal tests
def test_true(self):
with self as snh:
@@ -192,10 +197,14 @@ class SimNetHost(contextlib.AbstractContextManager):
with self as snh:
assert set(snh.addrs('lo')) == expected
+ def test_lo_mtu(self):
+ with self as snh:
+ exeter.assert_eq(snh.mtu('lo'), 65536)
+
SELFTESTS = [test_true, test_false, test_echo, test_timeout,
test_bg_true, test_bg_false, test_bg_echo, test_bg_timeout,
test_bg_context_timeout,
- test_has_lo, test_lo_addrs]
+ test_has_lo, test_lo_addrs, test_lo_mtu]
@classmethod
def selftest(cls, setup):
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 15/22] tasst: Add helper to wait for IP address to appear
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (13 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 14/22] tasst: Add helper for getting MTU of a network interface David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 16/22] tasst: Add helpers for getting a SimNetHost's routes David Gibson
` (7 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Add a helper to the SimNetHost class to wait for an address with specified
characteristics to be ready on an interface. In particular this is useful
for waiting for IPv6 SLAAC & DAD (Duplicate Address Detection) to complete.
Because DAD is not going to be useful in many of our scenarios, also extend
SimNetHost.ifup() to allow DAD to be switched to optimistic mode or disabled.
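As a rough standalone sketch of the polling pattern the new addr_wait() helper uses (the timeout and interval parameters here are hypothetical additions for illustration; the helper in the patch simply loops until addresses appear):

```python
import time

def addr_wait(get_addrs, timeout=10.0, interval=0.1):
    """Poll get_addrs() until it returns a non-empty address list.

    Sketch only: in the patch, get_addrs corresponds to
    self.addrs(ifname, **criteria), i.e. filtered 'ip -j addr' output,
    and there is no timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        addrs = get_addrs()
        if addrs:
            return addrs
        if time.monotonic() >= deadline:
            raise TimeoutError('no matching address appeared')
        time.sleep(interval)
```

Waiting like this is what makes SLAAC/DAD completion observable to a test: the address only shows up in the (non-tentative) list once DAD finishes.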
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 2 +-
test/tasst/__main__.py | 2 +-
test/tasst/selftest/static_ifup.py | 40 ++++++++++++++++++++++++++++++
test/tasst/selftest/veth.py | 27 ++++++++++++++++++++
test/tasst/snh.py | 24 +++++++++++++++++-
5 files changed, 92 insertions(+), 3 deletions(-)
create mode 100644 test/tasst/selftest/static_ifup.py
diff --git a/test/Makefile b/test/Makefile
index e13c49c8..139a0b14 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -73,7 +73,7 @@ EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
TASST_SRCS = __init__.py __main__.py nstool.py snh.py \
- selftest/__init__.py selftest/veth.py
+ selftest/__init__.py selftest/static_ifup.py selftest/veth.py
EXETER_META = meta/lint.json meta/tasst.json
META_JOBS = $(EXETER_META)
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
index d52f9c55..f3f88424 100644
--- a/test/tasst/__main__.py
+++ b/test/tasst/__main__.py
@@ -14,7 +14,7 @@ import exeter
# We import just to get the exeter tests, which flake8 can't see
from . import nstool, snh # noqa: F401
-from .selftest import veth # noqa: F401
+from .selftest import static_ifup, veth # noqa: F401
if __name__ == '__main__':
diff --git a/test/tasst/selftest/static_ifup.py b/test/tasst/selftest/static_ifup.py
new file mode 100644
index 00000000..0c6375d4
--- /dev/null
+++ b/test/tasst/selftest/static_ifup.py
@@ -0,0 +1,40 @@
+#! /usr/bin/env python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+selftest/static_ifup.py - Static address configuration
+"""
+
+import contextlib
+import ipaddress
+
+import exeter
+
+from tasst import nstool
+
+
+IFNAME = 'testveth'
+IFNAME_PEER = 'vethpeer'
+TEST_IPS = set([ipaddress.ip_interface('192.0.2.1/24'),
+ ipaddress.ip_interface('2001:db8:9a55::1/112'),
+ ipaddress.ip_interface('10.1.2.3/8')])
+
+
+@contextlib.contextmanager
+def setup_ns():
+ with nstool.unshare_snh('ns', '-Un') as ns:
+ ns.veth(IFNAME, IFNAME_PEER)
+ ns.ifup(IFNAME, *TEST_IPS, dad='disable')
+ yield ns
+
+
+@exeter.test
+def test_addr():
+ with setup_ns() as ns:
+ exeter.assert_eq(set(ns.addrs(IFNAME, scope='global')), TEST_IPS)
diff --git a/test/tasst/selftest/veth.py b/test/tasst/selftest/veth.py
index 5c8f0c0b..24bbdc27 100644
--- a/test/tasst/selftest/veth.py
+++ b/test/tasst/selftest/veth.py
@@ -12,6 +12,7 @@ selftest/veth.py - Test various veth configurations
"""
import contextlib
+import ipaddress
import exeter
@@ -38,3 +39,29 @@ def test_mtu():
with unconfigured_veth() as (ns1, ns2):
exeter.assert_eq(ns1.mtu('veth1'), 1500)
exeter.assert_eq(ns2.mtu('veth2'), 1500)
+
+
+@exeter.test
+def test_slaac(dad=None):
+ TESTMAC = '02:aa:bb:cc:dd:ee'
+ TESTIP = ipaddress.ip_interface('fe80::aa:bbff:fecc:ddee/64')
+
+ with unconfigured_veth() as (ns1, ns2):
+ ns1.fg('ip', 'link', 'set', 'dev', 'veth1', 'address', f'{TESTMAC}',
+ capable=True)
+
+ ns1.ifup('veth1', dad=dad)
+ ns2.ifup('veth2')
+
+ addrs = ns1.addr_wait('veth1', family='inet6', scope='link')
+ exeter.assert_eq(addrs, [TESTIP])
+
+
+@exeter.test
+def test_optimistic_dad():
+ test_slaac(dad='optimistic')
+
+
+@exeter.test
+def test_no_dad():
+ test_slaac(dad='disable')
diff --git a/test/tasst/snh.py b/test/tasst/snh.py
index 0554fbd0..a1225ff0 100644
--- a/test/tasst/snh.py
+++ b/test/tasst/snh.py
@@ -111,7 +111,23 @@ class SimNetHost(contextlib.AbstractContextManager):
info = json.loads(self.output('ip', '-j', 'link', 'show'))
return [i['ifname'] for i in info]
- def ifup(self, ifname):
+ def ifup(self, ifname, *addrs, dad=None):
+ if dad == 'disable':
+ self.fg('sysctl', f'net.ipv6.conf.{ifname}.accept_dad=0',
+ capable=True)
+ elif dad == 'optimistic':
+ self.fg('sysctl', f'net.ipv6.conf.{ifname}.optimistic_dad=1',
+ capable=True)
+ elif dad is not None:
+ raise ValueError
+
+ for a in addrs:
+ if not isinstance(a, ipaddress.IPv4Interface) \
+ and not isinstance(a, ipaddress.IPv6Interface):
+ raise TypeError
+ self.fg('ip', 'addr', 'add', f'{a.with_prefixlen}',
+ 'dev', f'{ifname}', capable=True)
+
self.fg('ip', 'link', 'set', f'{ifname}', 'up', capable=True)
def addrinfos(self, ifname, **criteria):
@@ -135,6 +151,12 @@ class SimNetHost(contextlib.AbstractContextManager):
(info,) = json.loads(self.output(*cmd))
return info['mtu']
+ def addr_wait(self, ifname, **criteria):
+ while True:
+ addrs = self.addrs(ifname, **criteria)
+ if addrs:
+ return addrs
+
# Internal tests
def test_true(self):
with self as snh:
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 16/22] tasst: Add helpers for getting a SimNetHost's routes
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (14 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 15/22] tasst: Add helper to wait for IP address to appear David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 17/22] tasst: Helpers to test transferring data between sites David Gibson
` (6 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/tasst/selftest/static_ifup.py | 20 ++++++++++++++++++++
test/tasst/snh.py | 13 +++++++++++++
2 files changed, 33 insertions(+)
diff --git a/test/tasst/selftest/static_ifup.py b/test/tasst/selftest/static_ifup.py
index 0c6375d4..2627b579 100644
--- a/test/tasst/selftest/static_ifup.py
+++ b/test/tasst/selftest/static_ifup.py
@@ -38,3 +38,23 @@ def setup_ns():
def test_addr():
with setup_ns() as ns:
exeter.assert_eq(set(ns.addrs(IFNAME, scope='global')), TEST_IPS)
+
+
+@exeter.test
+def test_routes4():
+ with setup_ns() as ns:
+ expected_routes = set(i.network for i in TEST_IPS
+ if isinstance(i, ipaddress.IPv4Interface))
+ actual_routes = set(ipaddress.ip_interface(r['dst']).network
+ for r in ns.routes4(dev=IFNAME))
+ exeter.assert_eq(expected_routes, actual_routes)
+
+
+@exeter.test
+def test_routes6():
+ with setup_ns() as ns:
+ expected_routes = set(i.network for i in TEST_IPS
+ if isinstance(i, ipaddress.IPv6Interface))
+ actual_routes = set(ipaddress.ip_interface(r['dst']).network
+ for r in ns.routes6(dev=IFNAME))
+ exeter.assert_eq(expected_routes, actual_routes)
diff --git a/test/tasst/snh.py b/test/tasst/snh.py
index a1225ff0..4ddcbb16 100644
--- a/test/tasst/snh.py
+++ b/test/tasst/snh.py
@@ -157,6 +157,19 @@ class SimNetHost(contextlib.AbstractContextManager):
if addrs:
return addrs
+ def _routes(self, ipv, **criteria):
+ routes = json.loads(self.output('ip', '-j', f'-{ipv}', 'route'))
+ for key, value in criteria.items():
+ routes = [r for r in routes if key in r and r[key] == value]
+
+ return routes
+
+ def routes4(self, **criteria):
+ return self._routes('4', **criteria)
+
+ def routes6(self, **criteria):
+ return self._routes('6', **criteria)
+
# Internal tests
def test_true(self):
with self as snh:
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 17/22] tasst: Helpers to test transferring data between sites
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (15 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 16/22] tasst: Add helpers for getting a SimNetHost's routes David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 18/22] tasst: IP address allocation helpers David Gibson
` (5 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Many of our existing tests are based on using socat to transfer data between
various locations connected via pasta or passt. Add helpers for writing
Avocado tests that perform similar transfers. Add selftests to verify those
work as expected before pasta or passt is involved.
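The socat address strings these helpers build follow a fixed pattern; a simplified standalone sketch of the TCP case (omitting the bind= and separate listen-port handling that the real _tcp_socat() supports):

```python
from ipaddress import ip_address, IPv6Address

def tcp_socat_addrs(connect_ip, port):
    # socat wants IPv6 literals in square brackets, and ipv6only keeps
    # a TCP6 listener from also accepting IPv4-mapped connections.
    ip = ip_address(connect_ip)
    if isinstance(ip, IPv6Address):
        return (f'TCP6:[{ip}]:{port},ipv6only',
                f'TCP6-LISTEN:{port},ipv6only')
    return (f'TCP4:{ip}:{port}', f'TCP4-LISTEN:{port}')
```

For example, tcp_socat_addrs('::1', 10000) yields the connect/listen pair ('TCP6:[::1]:10000,ipv6only', 'TCP6-LISTEN:10000,ipv6only'). The UDP variant in the patch additionally uses shut-null/null-eof so a zero-length datagram marks end of transfer.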
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 4 +-
test/tasst/__main__.py | 2 +-
test/tasst/transfer.py | 194 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 197 insertions(+), 3 deletions(-)
create mode 100644 test/tasst/transfer.py
diff --git a/test/Makefile b/test/Makefile
index 139a0b14..584f56e9 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -65,14 +65,14 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
AVOCADO_ASSETS =
-META_ASSETS = nstool
+META_ASSETS = nstool small.bin medium.bin big.bin
EXETER_SH = build/static_checkers.sh
EXETER_PY = build/build.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
-TASST_SRCS = __init__.py __main__.py nstool.py snh.py \
+TASST_SRCS = __init__.py __main__.py nstool.py snh.py transfer.py \
selftest/__init__.py selftest/static_ifup.py selftest/veth.py
EXETER_META = meta/lint.json meta/tasst.json
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
index f3f88424..98a94011 100644
--- a/test/tasst/__main__.py
+++ b/test/tasst/__main__.py
@@ -13,7 +13,7 @@ library of test helpers for passt & pasta
import exeter
# We import just to get the exeter tests, which flake8 can't see
-from . import nstool, snh # noqa: F401
+from . import nstool, snh, transfer # noqa: F401
from .selftest import static_ifup, veth # noqa: F401
diff --git a/test/tasst/transfer.py b/test/tasst/transfer.py
new file mode 100644
index 00000000..be3eebc2
--- /dev/null
+++ b/test/tasst/transfer.py
@@ -0,0 +1,194 @@
+#! /usr/bin/env python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+transfer.py - Helpers for testing data transfers
+"""
+
+import contextlib
+from ipaddress import IPv4Address, IPv6Address
+import time
+
+import exeter
+
+from . import nstool, snh
+
+
+# HACK: how long to wait for the server to be ready and listening (s)
+SERVER_READY_DELAY = 0.1 # 1/10th of a second
+
+
+# socat needs IPv6 addresses in square brackets
+def socat_ip(ip):
+ if isinstance(ip, IPv6Address):
+ return f'[{ip}]'
+ if isinstance(ip, IPv4Address):
+ return f'{ip}'
+ raise TypeError
+
+
+def socat_upload(datafile, csnh, ssnh, connect, listen):
+ srcdata = csnh.output('cat', f'{datafile}')
+ with ssnh.bg('socat', '-u', f'{listen}', 'STDOUT',
+ capture=snh.STDOUT) as server:
+ time.sleep(SERVER_READY_DELAY)
+
+ # Can't use csnh.fg() here, because while we wait for the
+ # client to complete we won't be reading from the output pipe
+ # of the server, meaning it will freeze once the buffers fill
+ with csnh.bg('socat', '-u', f'OPEN:{datafile}', f'{connect}') \
+ as client:
+ res = server.run()
+ client.run()
+ exeter.assert_eq(srcdata, res.stdout)
+
+
+def socat_download(datafile, csnh, ssnh, connect, listen):
+ srcdata = ssnh.output('cat', f'{datafile}')
+ with ssnh.bg('socat', '-u', f'OPEN:{datafile}', f'{listen}'):
+ time.sleep(SERVER_READY_DELAY)
+ dstdata = csnh.output('socat', '-u', f'{connect}', 'STDOUT')
+ exeter.assert_eq(srcdata, dstdata)
+
+
+def _tcp_socat(connectip, connectport, listenip, listenport, fromip):
+ v6 = isinstance(connectip, IPv6Address)
+ if listenport is None:
+ listenport = connectport
+ if v6:
+ connect = f'TCP6:[{connectip}]:{connectport},ipv6only'
+ listen = f'TCP6-LISTEN:{listenport},ipv6only'
+ else:
+ connect = f'TCP4:{connectip}:{connectport}'
+ listen = f'TCP4-LISTEN:{listenport}'
+ if listenip is not None:
+ listen += f',bind={socat_ip(listenip)}'
+ if fromip is not None:
+ connect += f',bind={socat_ip(fromip)}'
+ return (connect, listen)
+
+
+def tcp_upload(datafile, cs, ss, connectip, connectport,
+ listenip=None, listenport=None, fromip=None):
+ connect, listen = _tcp_socat(connectip, connectport, listenip, listenport,
+ fromip)
+ socat_upload(datafile, cs, ss, connect, listen)
+
+
+def tcp_download(datafile, cs, ss, connectip, connectport,
+ listenip=None, listenport=None, fromip=None):
+ connect, listen = _tcp_socat(connectip, connectport, listenip, listenport,
+ fromip)
+ socat_download(datafile, cs, ss, connect, listen)
+
+
+def udp_transfer(datafile, cs, ss, connectip, connectport,
+ listenip=None, listenport=None, fromip=None):
+ v6 = isinstance(connectip, IPv6Address)
+ if listenport is None:
+ listenport = connectport
+ if v6:
+ connect = f'UDP6:[{connectip}]:{connectport},ipv6only,shut-null'
+ listen = f'UDP6-LISTEN:{listenport},ipv6only,null-eof'
+ else:
+ connect = f'UDP4:{connectip}:{connectport},shut-null'
+ listen = f'UDP4-LISTEN:{listenport},null-eof'
+ if listenip is not None:
+ listen += f',bind={socat_ip(listenip)}'
+ if fromip is not None:
+ connect += f',bind={socat_ip(fromip)}'
+
+ socat_upload(datafile, cs, ss, connect, listen)
+
+
+SMALL_DATA = 'test/small.bin'
+BIG_DATA = 'test/big.bin'
+UDP_DATA = 'test/medium.bin'
+
+
+class TransferTestScenario:
+ def __init__(self, *, client, server, connect_ip, connect_port,
+ listen_ip=None, listen_port=None, from_ip=None):
+ self.client = client
+ self.server = server
+ if isinstance(connect_ip, IPv4Address):
+ self.ip = connect_ip
+ self.listen_ip = listen_ip
+ self.from_ip = from_ip
+ elif isinstance(connect_ip, IPv6Address):
+ self.ip = connect_ip
+ self.listen_ip = listen_ip
+ self.from_ip = from_ip
+ self.port = connect_port
+ self.listen_port = listen_port
+
+
+def test_tcp_upload(setup, datafile=SMALL_DATA):
+ with setup as scn:
+ tcp_upload(datafile, scn.client, scn.server, scn.ip, scn.port,
+ listenip=scn.listen_ip, listenport=scn.listen_port,
+ fromip=scn.from_ip)
+
+
+def test_tcp_big_upload(setup):
+ return test_tcp_upload(setup, datafile=BIG_DATA)
+
+
+def test_tcp_download(setup, datafile=SMALL_DATA):
+ with setup as scn:
+ tcp_download(datafile, scn.client, scn.server, scn.ip, scn.port,
+ listenip=scn.listen_ip, listenport=scn.listen_port,
+ fromip=scn.from_ip)
+
+
+def test_tcp_big_download(setup):
+ return test_tcp_download(setup, datafile=BIG_DATA)
+
+
+def test_udp_transfer(setup, datafile=UDP_DATA):
+ with setup as scn:
+ udp_transfer(datafile, scn.client, scn.server,
+ scn.ip, scn.port,
+ listenip=scn.listen_ip, listenport=scn.listen_port,
+ fromip=scn.from_ip)
+
+
+TRANSFER_TESTS = [test_tcp_upload, test_tcp_big_upload,
+ test_tcp_download, test_tcp_big_download,
+ test_udp_transfer]
+
+
+def transfer_tests(setup):
+ for t in TRANSFER_TESTS:
+ testid = f'{setup.__qualname__}|{t.__qualname__}'
+ exeter.register_pipe(testid, setup, t)
+
+
+@contextlib.contextmanager
+def local_transfer4():
+ with nstool.unshare_snh('ns', '-Un') as ns:
+ ns.ifup('lo')
+ yield TransferTestScenario(client=ns, server=ns,
+ connect_ip=IPv4Address('127.0.0.1'),
+ connect_port=10000)
+
+
+transfer_tests(local_transfer4)
+
+
+@contextlib.contextmanager
+def local_transfer6():
+ with nstool.unshare_snh('ns', '-Un') as ns:
+ ns.ifup('lo')
+ yield TransferTestScenario(client=ns, server=ns,
+ connect_ip=IPv6Address('::1'),
+ connect_port=10000)
+
+
+transfer_tests(local_transfer6)
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 18/22] tasst: IP address allocation helpers
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (16 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 17/22] tasst: Helpers to test transferring data between sites David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 19/22] tasst: Helpers for testing NDP behaviour David Gibson
` (4 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
A number of our test scenarios will require us to allocate IPv4 and IPv6
addresses in example networks. Add helpers to do this easily.
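Condensed from the address.py added below, the allocator hands out one address per configured network on each call, so parallel IPv4/IPv6 setups stay in step:

```python
import ipaddress

# Defaults as in address.py: an RFC 5737 IPv4 test net and a subnet of
# the RFC 3849 IPv6 documentation prefix.
TEST_NET_1 = ipaddress.ip_network('192.0.2.0/24')
TEST_NET6_TASST_A = ipaddress.ip_network('2001:db8:9a55:aaaa::/64')

class IpiAllocator:
    """Hand out the next unused host from each network in parallel."""
    def __init__(self, *nets):
        if not nets:
            nets = (TEST_NET_1, TEST_NET6_TASST_A)
        self.nets = [ipaddress.ip_network(n) for n in nets]
        self.hostses = [n.hosts() for n in self.nets]

    def next_ipis(self):
        # ip_interface() keeps the prefix length with each address
        return [ipaddress.ip_interface(f'{next(h)}/{n.prefixlen}')
                for h, n in zip(self.hostses, self.nets)]
```

The first call to next_ipis() returns the .1 / ::1 host of each network; later calls keep counting up, so every caller gets unique addresses within the same subnets.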
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 2 +-
test/tasst/address.py | 79 +++++++++++++++++++++++++++++++++++++
test/tasst/selftest/veth.py | 41 ++++++++++++++++++-
test/tasst/transfer.py | 6 +--
4 files changed, 123 insertions(+), 5 deletions(-)
create mode 100644 test/tasst/address.py
diff --git a/test/Makefile b/test/Makefile
index 584f56e9..f3a3cc58 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -72,7 +72,7 @@ EXETER_PY = build/build.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
-TASST_SRCS = __init__.py __main__.py nstool.py snh.py transfer.py \
+TASST_SRCS = __init__.py __main__.py address.py nstool.py snh.py transfer.py \
selftest/__init__.py selftest/static_ifup.py selftest/veth.py
EXETER_META = meta/lint.json meta/tasst.json
diff --git a/test/tasst/address.py b/test/tasst/address.py
new file mode 100644
index 00000000..70899789
--- /dev/null
+++ b/test/tasst/address.py
@@ -0,0 +1,79 @@
+#! /usr/bin/env python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+address.py - Address allocation helpers
+"""
+
+import ipaddress
+
+import exeter
+
+# Loopback addresses, for convenience
+LOOPBACK4 = ipaddress.ip_address('127.0.0.1')
+LOOPBACK6 = ipaddress.ip_address('::1')
+
+# Documentation test networks defined by RFC 5737
+TEST_NET_1 = ipaddress.ip_network('192.0.2.0/24')
+TEST_NET_2 = ipaddress.ip_network('198.51.100.0/24')
+TEST_NET_3 = ipaddress.ip_network('203.0.113.0/24')
+
+# Documentation test network defined by RFC 3849
+TEST_NET6 = ipaddress.ip_network('2001:db8::/32')
+# Some subnets of that for our usage
+TEST_NET6_TASST_A = ipaddress.ip_network('2001:db8:9a55:aaaa::/64')
+TEST_NET6_TASST_B = ipaddress.ip_network('2001:db8:9a55:bbbb::/64')
+TEST_NET6_TASST_C = ipaddress.ip_network('2001:db8:9a55:cccc::/64')
+
+
+class IpiAllocator:
+ """IP address allocator"""
+
+ DEFAULT_NETS = [TEST_NET_1, TEST_NET6_TASST_A]
+
+ def __init__(self, *nets):
+ if not nets:
+ nets = self.DEFAULT_NETS
+
+ self.nets = [ipaddress.ip_network(n) for n in nets]
+ self.hostses = [n.hosts() for n in self.nets]
+
+ def next_ipis(self):
+ addrs = [next(h) for h in self.hostses]
+ return [ipaddress.ip_interface(f'{a}/{n.prefixlen}')
+ for a, n in zip(addrs, self.nets)]
+
+
+@exeter.test
+def ipa_test(nets=None, count=12):
+ if nets is None:
+ ipa = IpiAllocator()
+ nets = IpiAllocator.DEFAULT_NETS
+ else:
+ ipa = IpiAllocator(*nets)
+
+ addrsets = [set() for i in range(len(nets))]
+ for i in range(count):
+ addrs = ipa.next_ipis()
+ # Check we got as many addresses as expected
+ exeter.assert_eq(len(addrs), len(nets))
+ for s, a, n in zip(addrsets, addrs, nets):
+ # Check the addresses belong to the right network
+ exeter.assert_eq(a.network, ipaddress.ip_network(n))
+ s.add(a)
+
+ print(addrsets)
+ # Check the addresses are unique
+ for s in addrsets:
+ exeter.assert_eq(len(s), count)
+
+
+@exeter.test
+def ipa_test_custom():
+ ipa_test(nets=['10.55.0.0/16', '192.168.55.0/24', 'fd00:9a57:a000::/48'])
diff --git a/test/tasst/selftest/veth.py b/test/tasst/selftest/veth.py
index 24bbdc27..39ac947d 100644
--- a/test/tasst/selftest/veth.py
+++ b/test/tasst/selftest/veth.py
@@ -16,7 +16,7 @@ import ipaddress
import exeter
-from tasst import nstool
+from tasst import address, nstool, transfer
@contextlib.contextmanager
@@ -65,3 +65,42 @@ def test_optimistic_dad():
@exeter.test
def test_no_dad():
test_slaac(dad='disable')
+
+
+@contextlib.contextmanager
+def configured_veth(ip1, ip2):
+ with unconfigured_veth() as (ns1, ns2):
+ ns1.ifup('lo')
+ ns1.ifup('veth1', ip1, dad='disable')
+
+ ns2.ifup('lo')
+ ns2.ifup('veth2', ip2, dad='disable')
+
+ yield (ns1, ns2)
+
+
+@contextlib.contextmanager
+def veth_transfer(ip1, ip2):
+ with configured_veth(ip1, ip2) as (ns1, ns2):
+ yield transfer.TransferTestScenario(client=ns1, server=ns2,
+ connect_ip=ip2.ip,
+ connect_port=10000)
+
+
+ipa = address.IpiAllocator()
+NS1_IP4, NS1_IP6 = ipa.next_ipis()
+NS2_IP4, NS2_IP6 = ipa.next_ipis()
+
+
+def veth_transfer4():
+ return veth_transfer(NS1_IP4, NS2_IP4)
+
+
+transfer.transfer_tests(veth_transfer4)
+
+
+def veth_transfer6():
+ return veth_transfer(NS1_IP6, NS2_IP6)
+
+
+transfer.transfer_tests(veth_transfer6)
diff --git a/test/tasst/transfer.py b/test/tasst/transfer.py
index be3eebc2..a5aa0614 100644
--- a/test/tasst/transfer.py
+++ b/test/tasst/transfer.py
@@ -17,7 +17,7 @@ import time
import exeter
-from . import nstool, snh
+from . import address, nstool, snh
# HACK: how long to wait for the server to be ready and listening (s)
@@ -175,7 +175,7 @@ def local_transfer4():
with nstool.unshare_snh('ns', '-Un') as ns:
ns.ifup('lo')
yield TransferTestScenario(client=ns, server=ns,
- connect_ip=IPv4Address('127.0.0.1'),
+ connect_ip=address.LOOPBACK4,
connect_port=10000)
@@ -187,7 +187,7 @@ def local_transfer6():
with nstool.unshare_snh('ns', '-Un') as ns:
ns.ifup('lo')
yield TransferTestScenario(client=ns, server=ns,
- connect_ip=IPv6Address('::1'),
+ connect_ip=address.LOOPBACK6,
connect_port=10000)
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 19/22] tasst: Helpers for testing NDP behaviour
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (17 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 18/22] tasst: IP address allocation helpers David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:36 ` [PATCH v2 20/22] tasst: Helpers for testing DHCP & DHCPv6 behaviour David Gibson
` (3 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 3 +-
test/tasst/__main__.py | 2 +-
test/tasst/ndp.py | 116 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 119 insertions(+), 2 deletions(-)
create mode 100644 test/tasst/ndp.py
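As an aside on what test_ndp_addr below actually checks: the client's SLAAC
address is derived from its MAC, so only the /64 prefix is predictable. A
standalone sketch of that check (the EUI-64 derivation and the MAC value here
are illustrative, not part of this patch):

```python
import ipaddress

def slaac_address(prefix: ipaddress.IPv6Network,
                  mac: str) -> ipaddress.IPv6Address:
    """Derive the SLAAC (EUI-64) address for a MAC within a /64 prefix."""
    assert prefix.prefixlen == 64
    b = bytes(int(x, 16) for x in mac.split(':'))
    # Flip the universal/local bit and insert ff:fe in the middle (RFC 4291)
    iid = bytes([b[0] ^ 0x02]) + b[1:3] + b'\xff\xfe' + b[3:6]
    return prefix.network_address + int.from_bytes(iid, 'big')

net = ipaddress.ip_network('2001:db8:9a55:aaaa::/64')  # TEST_NET6_TASST_A
addr = slaac_address(net, '02:00:00:00:00:01')

# The check used by test_ndp_addr: the derived address must fall inside the
# advertised prefix, even though its host bits depend on the MAC.
assert addr in net
```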
diff --git a/test/Makefile b/test/Makefile
index f3a3cc58..248329e5 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -72,7 +72,8 @@ EXETER_PY = build/build.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
-TASST_SRCS = __init__.py __main__.py address.py nstool.py snh.py transfer.py \
+TASST_SRCS = __init__.py __main__.py address.py ndp.py nstool.py snh.py \
+ transfer.py \
selftest/__init__.py selftest/static_ifup.py selftest/veth.py
EXETER_META = meta/lint.json meta/tasst.json
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
index 98a94011..6a95eec1 100644
--- a/test/tasst/__main__.py
+++ b/test/tasst/__main__.py
@@ -13,7 +13,7 @@ library of test helpers for passt & pasta
import exeter
# We import just to get the exeter tests, which flake8 can't see
-from . import nstool, snh, transfer # noqa: F401
+from . import ndp, nstool, snh, transfer # noqa: F401
from .selftest import static_ifup, veth # noqa: F401
diff --git a/test/tasst/ndp.py b/test/tasst/ndp.py
new file mode 100644
index 00000000..1c18385c
--- /dev/null
+++ b/test/tasst/ndp.py
@@ -0,0 +1,116 @@
+#! /usr/bin/env avocado-runner-avocado-classless
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+ndp.py - Helpers for testing NDP
+"""
+
+import contextlib
+import ipaddress
+import os
+import tempfile
+
+import exeter
+
+from . import address, nstool
+
+
+class NdpTestScenario:
+ def __init__(self, *, client, ifname, network, gateway):
+ self.client = client
+ self.ifname = ifname
+ self.network = network
+ self.gateway = gateway
+
+
+def test_ndp_addr(setup):
+ with setup as scn:
+ # Wait for NDP to do its thing
+ (addr,) = scn.client.addr_wait(scn.ifname, family='inet6',
+ scope='global')
+
+ # The SLAAC address is derived from the guest ns MAC, so
+ # probably won't exactly match the host address (we need
+ # DHCPv6 for that). It should be in the right network though.
+ exeter.assert_eq(addr.network, scn.network)
+
+
+def test_ndp_route(setup):
+ with setup as scn:
+ defroutes = scn.client.routes6(dst='default')
+ while not defroutes:
+ defroutes = scn.client.routes6(dst='default')
+
+ exeter.assert_eq(len(defroutes), 1)
+ gw = ipaddress.ip_address(defroutes[0]['gateway'])
+ exeter.assert_eq(gw, scn.gateway)
+
+
+NDP_TESTS = [test_ndp_addr, test_ndp_route]
+
+
+def ndp_tests(setup):
+ for t in NDP_TESTS:
+ testid = f'{setup.__qualname__}|{t.__qualname__}'
+ exeter.register_pipe(testid, setup, t)
+
+
+IFNAME = 'clientif'
+NETWORK = address.TEST_NET6_TASST_A
+ipa = address.IpiAllocator(NETWORK)
+(ROUTER_IP6,) = ipa.next_ipis()
+
+
+@contextlib.contextmanager
+def setup_radvd():
+ router_ifname = 'routerif'
+
+ with nstool.unshare_snh('client', '-Un') as client, \
+ nstool.unshare_snh('router', '-n',
+ parent=client, capable=True) as router:
+ with tempfile.TemporaryDirectory() as tmpdir:
+ client.veth(IFNAME, router_ifname, router)
+
+ # Configure the simulated router
+ confpath = os.path.join(tmpdir, 'radvd.conf')
+ pidfile = os.path.join(tmpdir, 'radvd.pid')
+ open(confpath, 'w', encoding='UTF-8').write(
+ f'''
+ interface {router_ifname} {{
+ AdvSendAdvert on;
+ prefix {NETWORK} {{
+ }};
+ }};
+ '''
+ )
+
+ router.ifup('lo')
+ router.ifup('routerif', ROUTER_IP6)
+
+ # Configure the client
+ client.ifup('lo')
+ client.ifup(IFNAME)
+
+            # Get the router's link-local address
+ (router_ll,) = router.addr_wait(router_ifname,
+ family='inet6', scope='link')
+
+ # Run radvd
+            router.fg('radvd', '-c', '-C', f'{confpath}')  # test config
+ radvd_cmd = ['radvd', '-C', f'{confpath}', '-n',
+ '-p', f'{pidfile}', '-d', '5']
+ with router.bg(*radvd_cmd, capable=True) as radvd:
+ yield NdpTestScenario(client=client,
+ ifname=IFNAME,
+ network=NETWORK,
+ gateway=router_ll.ip)
+ radvd.terminate()
+
+
+ndp_tests(setup_radvd)
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 20/22] tasst: Helpers for testing DHCP & DHCPv6 behaviour
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (18 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 19/22] tasst: Helpers for testing NDP behaviour David Gibson
@ 2024-08-05 12:36 ` David Gibson
2024-08-05 12:37 ` [PATCH v2 21/22] tasst: Helpers to construct a simple network environment for tests David Gibson
` (2 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:36 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 4 +-
test/tasst/__main__.py | 2 +-
test/tasst/dhcp.py | 132 +++++++++++++++++++++++++++++++++++++++++
test/tasst/dhcpv6.py | 89 +++++++++++++++++++++++++++
4 files changed, 224 insertions(+), 3 deletions(-)
create mode 100644 test/tasst/dhcp.py
create mode 100644 test/tasst/dhcpv6.py
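Before the diff, a standalone sketch of the dhcpd.conf generation done in
setup_dhcpd() below: the ipaddress module provides every piece dhcpd's syntax
needs (the concrete addresses mirror what IpiAllocator would hand out of
TEST_NET_1, but are hardcoded here for illustration):

```python
import ipaddress

subnet = ipaddress.ip_network('192.0.2.0/24')    # TEST_NET_1
server = ipaddress.ip_interface('192.0.2.1/24')  # first host from the allocator
client = ipaddress.ip_interface('192.0.2.2/24')  # second host

# dhcpd wants a dotted-quad netmask and bare addresses; network_address,
# netmask and .ip render exactly those forms.
conf = (f'subnet {subnet.network_address} netmask {subnet.netmask} {{\n'
        f'    option routers {server.ip};\n'
        f'    range {client.ip} {client.ip};\n'
        f'}}\n')
assert 'netmask 255.255.255.0' in conf
```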
diff --git a/test/Makefile b/test/Makefile
index 248329e5..0eeaf82e 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -72,8 +72,8 @@ EXETER_PY = build/build.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
-TASST_SRCS = __init__.py __main__.py address.py ndp.py nstool.py snh.py \
- transfer.py \
+TASST_SRCS = __init__.py __main__.py address.py dhcp.py dhcpv6.py ndp.py \
+ nstool.py snh.py transfer.py \
selftest/__init__.py selftest/static_ifup.py selftest/veth.py
EXETER_META = meta/lint.json meta/tasst.json
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
index 6a95eec1..8c4efd74 100644
--- a/test/tasst/__main__.py
+++ b/test/tasst/__main__.py
@@ -13,7 +13,7 @@ library of test helpers for passt & pasta
import exeter
# We import just to get the exeter tests, which flake8 can't see
-from . import ndp, nstool, snh, transfer # noqa: F401
+from . import dhcp, dhcpv6, ndp, nstool, snh, transfer # noqa: F401
from .selftest import static_ifup, veth # noqa: F401
diff --git a/test/tasst/dhcp.py b/test/tasst/dhcp.py
new file mode 100644
index 00000000..d86df2de
--- /dev/null
+++ b/test/tasst/dhcp.py
@@ -0,0 +1,132 @@
+#! /usr/bin/env avocado-runner-avocado-classless
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+dhcp.py - Helpers for testing DHCP
+"""
+
+import contextlib
+import ipaddress
+import os
+import tempfile
+
+import exeter
+
+from . import address, nstool
+
+
+DHCLIENT = '/sbin/dhclient'
+
+
+@contextlib.contextmanager
+def dhclient(snh, ifname, ipv='4'):
+ with tempfile.TemporaryDirectory() as tmpdir:
+ pidfile = os.path.join(tmpdir, 'dhclient.pid')
+ leasefile = os.path.join(tmpdir, 'dhclient.leases')
+
+ # We need '-nc' because we may be running with
+ # capabilities but not UID 0. Without -nc dhclient drops
+ # capabilities before invoking dhclient-script, so it's
+ # unable to actually configure the interface
+ opts = [f'-{ipv}', '-v', '-nc', '-pf', f'{pidfile}',
+ '-lf', f'{leasefile}', f'{ifname}']
+ snh.fg(f'{DHCLIENT}', *opts, capable=True)
+ yield
+ snh.fg(f'{DHCLIENT}', '-x', '-pf', f'{pidfile}', capable=True)
+
+
+class DhcpTestScenario:
+ def __init__(self, *, client, ifname, addr, gateway, mtu):
+ self.client = client
+ self.ifname = ifname
+ self.addr = addr
+ self.gateway = gateway
+ self.mtu = mtu
+
+
+def test_dhcp_addr(setup):
+ with setup as scn, dhclient(scn.client, scn.ifname):
+ (actual_addr,) = scn.client.addrs(scn.ifname,
+ family='inet', scope='global')
+ exeter.assert_eq(actual_addr.ip, scn.addr)
+
+
+def test_dhcp_route(setup):
+ with setup as scn, dhclient(scn.client, scn.ifname):
+ (defroute,) = scn.client.routes4(dst='default')
+ exeter.assert_eq(ipaddress.ip_address(defroute['gateway']),
+ scn.gateway)
+
+
+def test_dhcp_mtu(setup):
+ with setup as scn, dhclient(scn.client, scn.ifname):
+ exeter.assert_eq(scn.client.mtu(scn.ifname), scn.mtu)
+
+
+DHCP_TESTS = [test_dhcp_addr, test_dhcp_route, test_dhcp_mtu]
+
+
+def dhcp_tests(setup):
+ for t in DHCP_TESTS:
+ testid = f'{setup.__qualname__}|{t.__qualname__}'
+ exeter.register_pipe(testid, setup, t)
+
+
+DHCPD = 'dhcpd'
+SUBNET = address.TEST_NET_1
+ipa = address.IpiAllocator(SUBNET)
+(SERVER_IP4,) = ipa.next_ipis()
+(CLIENT_IP4,) = ipa.next_ipis()
+IFNAME = 'clientif'
+
+
+@contextlib.contextmanager
+def setup_dhcpd_common(ifname, server_ifname):
+ with nstool.unshare_snh('client', '-Un') as client, \
+ nstool.unshare_snh('server', '-n',
+ parent=client, capable=True) as server:
+ client.veth(ifname, server_ifname, server)
+
+ with tempfile.TemporaryDirectory() as tmpdir:
+ yield (client, server, tmpdir)
+
+
+@contextlib.contextmanager
+def setup_dhcpd():
+ server_ifname = 'serverif'
+
+ with setup_dhcpd_common(IFNAME, server_ifname) as (client, server, tmpdir):
+ # Configure dhcpd
+ confpath = os.path.join(tmpdir, 'dhcpd.conf')
+ open(confpath, 'w', encoding='UTF-8').write(
+ f'''subnet {SUBNET.network_address} netmask {SUBNET.netmask} {{
+ option routers {SERVER_IP4.ip};
+ range {CLIENT_IP4.ip} {CLIENT_IP4.ip};
+ }}'''
+ )
+ pidfile = os.path.join(tmpdir, 'dhcpd.pid')
+ leasepath = os.path.join(tmpdir, 'dhcpd.leases')
+ open(leasepath, 'wb').write(b'')
+
+ server.ifup('lo')
+ server.ifup(server_ifname, SERVER_IP4)
+
+ opts = ['-f', '-d', '-4', '-cf', f'{confpath}',
+ '-lf', f'{leasepath}', '-pf', f'{pidfile}']
+ server.fg(f'{DHCPD}', '-t', *opts) # test config
+ with server.bg(f'{DHCPD}', *opts, capable=True, check=False) as dhcpd:
+ # Configure the client
+ client.ifup('lo')
+ yield DhcpTestScenario(client=client, ifname=IFNAME,
+ addr=CLIENT_IP4.ip,
+ gateway=SERVER_IP4.ip, mtu=1500)
+ dhcpd.terminate()
+
+
+dhcp_tests(setup_dhcpd)
diff --git a/test/tasst/dhcpv6.py b/test/tasst/dhcpv6.py
new file mode 100644
index 00000000..ab119ae7
--- /dev/null
+++ b/test/tasst/dhcpv6.py
@@ -0,0 +1,89 @@
+#! /usr/bin/env avocado-runner-avocado-classless
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+dhcpv6.py - Helpers for testing DHCPv6
+"""
+
+import contextlib
+import os
+
+import exeter
+
+from . import address, dhcp
+
+
+def dhclientv6(snh, ifname):
+ return dhcp.dhclient(snh, ifname, '6')
+
+
+class Dhcpv6TestScenario:
+ def __init__(self, *, client, ifname, addr):
+ self.client = client
+ self.ifname = ifname
+ self.addr = addr
+
+
+def test_dhcp6_addr(setup):
+ with setup as scn, dhclientv6(scn.client, scn.ifname):
+ addrs = [a.ip for a in scn.client.addrs(scn.ifname, family='inet6',
+ scope='global')]
+ assert scn.addr in addrs # Might also have a SLAAC address
+
+
+DHCP6_TESTS = [test_dhcp6_addr]
+
+
+def dhcp6_tests(setup):
+ for t in DHCP6_TESTS:
+ testid = f'{setup.__qualname__}|{t.__qualname__}'
+ exeter.register_pipe(testid, setup, t)
+
+
+DHCPD = 'dhcpd'
+SUBNET = address.TEST_NET6_TASST_A
+ipa = address.IpiAllocator(SUBNET)
+(SERVER_IP6,) = ipa.next_ipis()
+(CLIENT_IP6,) = ipa.next_ipis()
+IFNAME = 'clientif'
+
+
+@contextlib.contextmanager
+def setup_dhcpdv6():
+ server_ifname = 'serverif'
+
+ with dhcp.setup_dhcpd_common(IFNAME, server_ifname) \
+ as (client, server, tmpdir):
+ # Sort out link local addressing
+ server.ifup('lo')
+ server.ifup(server_ifname, SERVER_IP6)
+ client.ifup('lo')
+ client.ifup(IFNAME)
+ server.addr_wait(server_ifname, family='inet6', scope='link')
+
+ # Configure the DHCP server
+ confpath = os.path.join(tmpdir, 'dhcpd.conf')
+ open(confpath, 'w', encoding='UTF-8').write(
+ f'''subnet6 {SUBNET} {{
+ range6 {CLIENT_IP6.ip} {CLIENT_IP6.ip};
+ }}''')
+ pidfile = os.path.join(tmpdir, 'dhcpd.pid')
+ leasepath = os.path.join(tmpdir, 'dhcpd.leases')
+ open(leasepath, 'wb').write(b'')
+
+ opts = ['-f', '-d', '-6', '-cf', f'{confpath}',
+ '-lf', f'{leasepath}', '-pf', f'{pidfile}']
+ server.fg(f'{DHCPD}', '-t', *opts) # test config
+ with server.bg(f'{DHCPD}', *opts, capable=True, check=False) as dhcpd:
+ yield Dhcpv6TestScenario(client=client, ifname=IFNAME,
+ addr=CLIENT_IP6.ip)
+ dhcpd.terminate()
+
+
+dhcp6_tests(setup_dhcpdv6)
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 21/22] tasst: Helpers to construct a simple network environment for tests
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (19 preceding siblings ...)
2024-08-05 12:36 ` [PATCH v2 20/22] tasst: Helpers for testing DHCP & DHCPv6 behaviour David Gibson
@ 2024-08-05 12:37 ` David Gibson
2024-08-05 12:37 ` [PATCH v2 22/22] avocado: Convert basic pasta tests David Gibson
2024-08-06 12:28 ` [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:37 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
This constructs essentially the simplest sensible network for passt/pasta
to operate in. We have one netns "simhost" to represent the host where we
will run passt or pasta, and a second "gw" to represent its default
gateway.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 3 +-
test/tasst/__main__.py | 1 +
test/tasst/scenario/__init__.py | 12 ++++
test/tasst/scenario/simple.py | 109 ++++++++++++++++++++++++++++++++
4 files changed, 124 insertions(+), 1 deletion(-)
create mode 100644 test/tasst/scenario/__init__.py
create mode 100644 test/tasst/scenario/simple.py
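The point of the separate "remote" subnet can be shown in a few lines: because
TEST_NET_2 doesn't overlap the link subnet, no connected route on simhost can
match it, so reaching it proves the default route is in use. A sketch using the
RFC 5737 networks from address.py:

```python
import ipaddress

link = ipaddress.ip_network('192.0.2.0/24')       # TEST_NET_1: simhost <-> gw link
remote = ipaddress.ip_network('198.51.100.0/24')  # TEST_NET_2: "remote" side of gw
remote_ip = next(remote.hosts())

# The remote address lies outside the link subnet, so simhost's connected
# route can never match it; only the default route via gw can.
assert remote_ip not in link
assert not remote.overlaps(link)
```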
diff --git a/test/Makefile b/test/Makefile
index 0eeaf82e..6748d38a 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -74,7 +74,8 @@ AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
TASST_SRCS = __init__.py __main__.py address.py dhcp.py dhcpv6.py ndp.py \
nstool.py snh.py transfer.py \
- selftest/__init__.py selftest/static_ifup.py selftest/veth.py
+ selftest/__init__.py selftest/static_ifup.py selftest/veth.py \
+ scenario/__init__.py scenario/simple.py
EXETER_META = meta/lint.json meta/tasst.json
META_JOBS = $(EXETER_META)
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
index 8c4efd74..491c68c9 100644
--- a/test/tasst/__main__.py
+++ b/test/tasst/__main__.py
@@ -14,6 +14,7 @@ import exeter
# We import just to get the exeter tests, which flake8 can't see
from . import dhcp, dhcpv6, ndp, nstool, snh, transfer # noqa: F401
+from .scenario import simple # noqa: F401
from .selftest import static_ifup, veth # noqa: F401
diff --git a/test/tasst/scenario/__init__.py b/test/tasst/scenario/__init__.py
new file mode 100644
index 00000000..4ea4584d
--- /dev/null
+++ b/test/tasst/scenario/__init__.py
@@ -0,0 +1,12 @@
+#! /usr/bin/python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+scenario/ - Helpers to set up various sample network topologies
+"""
diff --git a/test/tasst/scenario/simple.py b/test/tasst/scenario/simple.py
new file mode 100644
index 00000000..d8b78568
--- /dev/null
+++ b/test/tasst/scenario/simple.py
@@ -0,0 +1,109 @@
+#! /usr/bin/env python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+scenario/simple.py - Smallest sensible network to use passt/pasta
+"""
+
+import contextlib
+
+from .. import address, nstool, transfer
+
+
+class __SimpleNet: # pylint: disable=R0903
+ """A simple network setup scenario
+
+ The sample network has 2 snhs (network namespaces) connected with
+ a veth link:
+ [simhost] <-veth-> [gw]
+
+ gw is set up as the default router for simhost.
+
+ simhost has addresses:
+ self.IP4 (IPv4), self.IP6 (IPv6), self.ip6_ll (IPv6 link local)
+
+ gw has addresses:
+ self.GW_IP4 (IPv4), self.GW_IP6 (IPv6),
+ self.gw_ip6_ll (IPv6 link local)
+ self.REMOTE_IP4 (IPv4), self.REMOTE_IP6 (IPv6)
+
+ The "remote" addresses are on a different subnet from the others,
+ so the only way for simhost to reach them is via its default
+ route. This helps to exercise that we're actually using that,
+ rather than just local net routes.
+
+ """
+
+ IFNAME = 'veth'
+ ipa_local = address.IpiAllocator()
+ (IP4, IP6) = ipa_local.next_ipis()
+ (GW_IP4, GW_IP6) = ipa_local.next_ipis()
+
+ ipa_remote = address.IpiAllocator(address.TEST_NET_2,
+ address.TEST_NET6_TASST_B)
+ (REMOTE_IP4, REMOTE_IP6) = ipa_remote.next_ipis()
+
+ def __init__(self, simhost, gw):
+ self.simhost = simhost
+ self.gw = gw
+
+ ifname = self.IFNAME
+ self.gw_ifname = 'gw' + ifname
+ self.simhost.veth(self.IFNAME, self.gw_ifname, self.gw)
+
+ self.gw.ifup('lo')
+ self.gw.ifup(self.gw_ifname, self.GW_IP4, self.GW_IP6,
+ self.REMOTE_IP4, self.REMOTE_IP6)
+
+ self.simhost.ifup('lo')
+ self.simhost.ifup(ifname, self.IP4, self.IP6)
+
+ # Once link is up on both sides, SLAAC will run
+ self.gw_ip6_ll = self.gw.addr_wait(self.gw_ifname,
+ family='inet6', scope='link')[0]
+ self.ip6_ll = self.simhost.addr_wait(ifname,
+ family='inet6', scope='link')[0]
+
+ # Set up the default route
+ self.simhost.fg('ip', '-4', 'route', 'add', 'default',
+ 'via', f'{self.GW_IP4.ip}', capable=True)
+ self.simhost.fg('ip', '-6', 'route', 'add', 'default',
+ 'via', f'{self.gw_ip6_ll.ip}', 'dev', f'{ifname}',
+ capable=True)
+
+
+@contextlib.contextmanager
+def simple_net():
+ with nstool.unshare_snh('simhost', '-Ucnpf', '--mount-proc') as simhost, \
+ nstool.unshare_snh('gw', '-n', parent=simhost, capable=True) as gw:
+ yield __SimpleNet(simhost, gw)
+
+
+@contextlib.contextmanager
+def simple_transfer4_setup():
+ with simple_net() as snet:
+ yield transfer.TransferTestScenario(client=snet.simhost,
+ server=snet.gw,
+ connect_ip=snet.REMOTE_IP4.ip,
+ connect_port=10000)
+
+
+transfer.transfer_tests(simple_transfer4_setup)
+
+
+@contextlib.contextmanager
+def simple_transfer6_setup():
+ with simple_net() as snet:
+ yield transfer.TransferTestScenario(client=snet.simhost,
+ server=snet.gw,
+ connect_ip=snet.REMOTE_IP6.ip,
+ connect_port=10000)
+
+
+transfer.transfer_tests(simple_transfer6_setup)
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 22/22] avocado: Convert basic pasta tests
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (20 preceding siblings ...)
2024-08-05 12:37 ` [PATCH v2 21/22] tasst: Helpers to construct a simple network environment for tests David Gibson
@ 2024-08-05 12:37 ` David Gibson
2024-08-06 12:28 ` [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
22 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2024-08-05 12:37 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa, David Gibson
Convert the old-style tests for pasta (DHCP, NDP, TCP and UDP transfers)
to use Avocado. There are a few differences in what we test, but this
should generally improve coverage:
* We run in a constructed network environment, so we no longer depend on
the real host's networking configuration
* We do independent setup for each individual test
* We add explicit tests for --config-net, which we use to accelerate that
setup for the TCP and UDP tests
* The TCP and UDP tests now test transfers between the guest and a
(simulated) remote site that's on a different network from the simulated
pasta host, thus exercising the no-NAT case that passt/pasta emphasizes.
(We still need to add tests for the NAT cases back in.)
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
test/Makefile | 4 +-
test/pasta/.gitignore | 1 +
test/pasta/pasta.py | 138 +++++++++++++++++++++++++++++++++++++++++
test/tasst/__main__.py | 2 +-
test/tasst/pasta.py | 52 ++++++++++++++++
5 files changed, 194 insertions(+), 3 deletions(-)
create mode 100644 test/pasta/.gitignore
create mode 100644 test/pasta/pasta.py
create mode 100644 test/tasst/pasta.py
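One structural note on pasta_unconfigured() below: it nests context managers so
that pasta is always torn down before the namespaces it runs inside. A toy
sketch of that ordering guarantee (the names are illustrative):

```python
import contextlib

@contextlib.contextmanager
def resource(name, log):
    log.append(f'+{name}')
    try:
        yield name
    finally:
        log.append(f'-{name}')

@contextlib.contextmanager
def scenario(log):
    # Same shape as pasta_unconfigured(): nested 'with' guarantees teardown
    # runs strictly in reverse order of setup.
    with resource('simnet', log) as net, resource('pasta', log) as p:
        yield (net, p)

log = []
with scenario(log):
    pass
assert log == ['+simnet', '+pasta', '-pasta', '-simnet']
```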
diff --git a/test/Makefile b/test/Makefile
index 6748d38a..ba249a5d 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -64,11 +64,11 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
$(TESTDATA_ASSETS)
ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
-AVOCADO_ASSETS =
+AVOCADO_ASSETS = nstool small.bin medium.bin big.bin
META_ASSETS = nstool small.bin medium.bin big.bin
EXETER_SH = build/static_checkers.sh
-EXETER_PY = build/build.py
+EXETER_PY = build/build.py pasta/pasta.py
EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
diff --git a/test/pasta/.gitignore b/test/pasta/.gitignore
new file mode 100644
index 00000000..a6c57f5f
--- /dev/null
+++ b/test/pasta/.gitignore
@@ -0,0 +1 @@
+*.json
diff --git a/test/pasta/pasta.py b/test/pasta/pasta.py
new file mode 100644
index 00000000..491927a6
--- /dev/null
+++ b/test/pasta/pasta.py
@@ -0,0 +1,138 @@
+#! /usr/bin/env avocado-runner-avocado-classless
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+avocado/pasta.py - Basic tests for pasta mode
+"""
+
+import contextlib
+import ipaddress
+
+import exeter
+
+from tasst import dhcp, dhcpv6, ndp, nstool
+from tasst.pasta import Pasta
+from tasst.scenario.simple import simple_net
+
+IN_FWD_PORT = 10002
+SPLICE_FWD_PORT = 10003
+FWD_OPTS = ['-t', f'{IN_FWD_PORT}', '-u', f'{IN_FWD_PORT}',
+ '-T', f'{SPLICE_FWD_PORT}', '-U', f'{SPLICE_FWD_PORT}']
+
+
+@contextlib.contextmanager
+def pasta_unconfigured(*opts):
+ with simple_net() as simnet:
+ with nstool.unshare_snh('pastans', '-Ucnpf', '--mount-proc',
+ parent=simnet.simhost, capable=True) \
+ as guestns:
+ with Pasta(host=simnet.simhost, opts=opts, ns=guestns) as pasta:
+ yield simnet, pasta.ns
+
+
+@exeter.test
+def test_ifname():
+ with pasta_unconfigured() as (simnet, ns):
+ expected = set(['lo', simnet.IFNAME])
+ exeter.assert_eq(set(ns.ifs()), expected)
+
+
+@contextlib.contextmanager
+def pasta_ndp_setup():
+ with pasta_unconfigured() as (simnet, guestns):
+ guestns.ifup(simnet.IFNAME)
+ yield ndp.NdpTestScenario(client=guestns,
+ ifname=simnet.IFNAME,
+ network=simnet.IP6.network,
+ gateway=simnet.gw_ip6_ll.ip)
+
+
+ndp.ndp_tests(pasta_ndp_setup)
+
+
+@contextlib.contextmanager
+def pasta_dhcp():
+ with pasta_unconfigured() as (simnet, guestns):
+ yield dhcp.DhcpTestScenario(client=guestns,
+ ifname=simnet.IFNAME,
+ addr=simnet.IP4.ip,
+ gateway=simnet.GW_IP4.ip,
+ mtu=65520)
+
+
+dhcp.dhcp_tests(pasta_dhcp)
+
+
+@contextlib.contextmanager
+def pasta_dhcpv6():
+ with pasta_unconfigured() as (simnet, guestns):
+ yield dhcpv6.Dhcpv6TestScenario(client=guestns,
+ ifname=simnet.IFNAME,
+ addr=simnet.IP6.ip)
+
+
+dhcpv6.dhcp6_tests(pasta_dhcpv6)
+
+
+@contextlib.contextmanager
+def pasta_configured():
+ with pasta_unconfigured('--config-net', *FWD_OPTS) as (simnet, ns):
+ # Wait for DAD to complete on the --config-net address
+ ns.addr_wait(simnet.IFNAME, family='inet6', scope='global')
+ yield simnet, ns
+
+
+@exeter.test
+def test_config_net_addr():
+ with pasta_configured() as (simnet, ns):
+ addrs = ns.addrs(simnet.IFNAME, scope='global')
+ exeter.assert_eq(set(addrs), set([simnet.IP4, simnet.IP6]))
+
+
+@exeter.test
+def test_config_net_route4():
+ with pasta_configured() as (simnet, ns):
+ (defroute,) = ns.routes4(dst='default')
+ gateway = ipaddress.ip_address(defroute['gateway'])
+ exeter.assert_eq(gateway, simnet.GW_IP4.ip)
+
+
+@exeter.test
+def test_config_net_route6():
+ with pasta_configured() as (simnet, ns):
+ (defroute,) = ns.routes6(dst='default')
+ gateway = ipaddress.ip_address(defroute['gateway'])
+ exeter.assert_eq(gateway, simnet.gw_ip6_ll.ip)
+
+
+@exeter.test
+def test_config_net_mtu():
+ with pasta_configured() as (simnet, ns):
+ mtu = ns.mtu(simnet.IFNAME)
+ exeter.assert_eq(mtu, 65520)
+
+
+@contextlib.contextmanager
+def outward_transfer():
+ with pasta_configured() as (simnet, ns):
+ yield ns, simnet.gw
+
+
+@contextlib.contextmanager
+def inward_transfer():
+ with pasta_configured() as (simnet, ns):
+ yield simnet.gw, ns
+
+
+@contextlib.contextmanager
+def spliced_transfer():
+ with pasta_configured() as (simnet, ns):
+ yield ns, simnet.simhost
+
+
+if __name__ == '__main__':
+ exeter.main()
diff --git a/test/tasst/__main__.py b/test/tasst/__main__.py
index 491c68c9..058b3746 100644
--- a/test/tasst/__main__.py
+++ b/test/tasst/__main__.py
@@ -13,7 +13,7 @@ library of test helpers for passt & pasta
import exeter
# We import just to get the exeter tests, which flake8 can't see
-from . import dhcp, dhcpv6, ndp, nstool, snh, transfer # noqa: F401
+from . import dhcp, dhcpv6, ndp, nstool, pasta, snh, transfer # noqa: F401
from .scenario import simple # noqa: F401
from .selftest import static_ifup, veth # noqa: F401
diff --git a/test/tasst/pasta.py b/test/tasst/pasta.py
new file mode 100644
index 00000000..030affce
--- /dev/null
+++ b/test/tasst/pasta.py
@@ -0,0 +1,52 @@
+#! /usr/bin/python3
+
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# Copyright Red Hat
+# Author: David Gibson <david@gibson.dropbear.id.au>
+
+"""
+Test A Simple Socket Transport
+
+pasta.py - Helpers for starting pasta
+"""
+
+import contextlib
+import os.path
+import tempfile
+
+
+PASTA_BIN = './pasta'
+
+
+class Pasta(contextlib.AbstractContextManager):
+ """A managed pasta instance"""
+
+ def __init__(self, *, host, ns, opts):
+ self.host = host
+ self.ns = ns
+ self.opts = opts
+ self.proc = None
+
+ def __enter__(self):
+ self.tmpdir = tempfile.TemporaryDirectory()
+ piddir = self.tmpdir.__enter__()
+ pidfile = os.path.join(piddir, 'pasta.pid')
+ relpid = self.ns.relative_pid(self.host)
+ cmd = [f'{PASTA_BIN}', '-f', '-P', f'{pidfile}'] + list(self.opts) + \
+ [f'{relpid}']
+ self.proc = self.host.bg(*cmd)
+ self.proc.__enter__()
+ # Wait for the PID file to be written
+ pidstr = None
+ while not pidstr:
+ pidstr = self.host.output('cat', f'{pidfile}', check=False)
+ self.pid = int(pidstr)
+ return self
+
+ def __exit__(self, *exc_details):
+ try:
+ self.host.fg('kill', '-TERM', f'{self.pid}')
+ self.proc.__exit__(*exc_details)
+ finally:
+ self.tmpdir.__exit__(*exc_details)
--
2.45.2
^ permalink raw reply related [flat|nested] 31+ messages in thread
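The Pasta helper in this patch waits for pasta to write its PID file before
returning from __enter__, by polling `cat` on the file until it is non-empty.
A self-contained sketch of that wait-for-pidfile pattern (plain Python, no
tasst dependencies; a demo shell child stands in for pasta, and the timeout
is an addition not present in the patch):

```python
import os
import subprocess
import tempfile
import time


def wait_for_pidfile(pidfile, timeout=5.0):
    # Poll until the pidfile exists and is non-empty, as Pasta.__enter__ does
    # (the patch busy-loops without a timeout; one is added here for safety)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with open(pidfile) as f:
                pidstr = f.read().strip()
            if pidstr:
                return int(pidstr)
        except FileNotFoundError:
            pass
        time.sleep(0.01)
    raise TimeoutError(f"no PID appeared in {pidfile}")


with tempfile.TemporaryDirectory() as piddir:
    pidfile = os.path.join(piddir, "demo.pid")
    # Background child that writes its own PID after a short delay,
    # standing in for 'pasta -f -P <pidfile>'
    proc = subprocess.Popen(
        ["sh", "-c", f"sleep 0.1; echo $$ > {pidfile}; sleep 10"])
    pid = wait_for_pidfile(pidfile)
    # The PID read back from the file is the shell child we started
    assert pid == proc.pid
    proc.terminate()
    proc.wait()
```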
* Re: [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests
2024-08-05 12:36 [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
` (21 preceding siblings ...)
2024-08-05 12:37 ` [PATCH v2 22/22] avocado: Convert basic pasta tests David Gibson
@ 2024-08-06 12:28 ` David Gibson
2024-08-07 8:17 ` Stefano Brivio
22 siblings, 1 reply; 31+ messages in thread
From: David Gibson @ 2024-08-06 12:28 UTC (permalink / raw)
To: Stefano Brivio, passt-dev; +Cc: Cleber Rosa
[-- Attachment #1: Type: text/plain, Size: 1654 bytes --]
On Mon, Aug 05, 2024 at 10:36:39PM +1000, David Gibson wrote:
> Here's a rough proof of concept showing how we could run tests for
> passt with Avocado and the exeter library I recently created. It
> includes Cleber's patch adding some basic Avocado tests and builds on
> that.
>
> The current draft is pretty janky:
> * The build rules to download and install the necessary pieces are messy
> * We create the Avocado job files from the exeter sources in the
> Makefile. Ideally Avocado would eventually be extended to handle
> this itself
> * The names that Avocado sees for each test are overlong
> * There's some hacks to make sure things are executed from the
> right working directory
>
> But, it's a starting point.
>
> Stefano,
>
> If you could look particularly at 6/22 and 22/22 which add the real
> tests for passt/pasta, that would be great. The more specific you can
> be about what you find ugly about how the tests are written, then
> better I can try to address that.
>
> I suspect it will be easier to actually apply the series, then look at
> the new test files (test/build/build.py, and test/pasta/pasta.py
> particularly). From there you can look at as much of the support
> library as you need to, rather than digging through the actual patches
> to look for that.
Forgot to mention. Patches 1 & 2 should be good to go regardless of
what we do with the rest of the testing stuff.
--
David Gibson (he or they) | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you, not the other way
| around.
http://www.ozlabs.org/~dgibson
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests
2024-08-06 12:28 ` [PATCH v2 00/22] RFC: Proof-of-concept based exeter+Avocado tests David Gibson
@ 2024-08-07 8:17 ` Stefano Brivio
0 siblings, 0 replies; 31+ messages in thread
From: Stefano Brivio @ 2024-08-07 8:17 UTC (permalink / raw)
To: David Gibson; +Cc: passt-dev, Cleber Rosa
On Tue, 6 Aug 2024 22:28:19 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:
> On Mon, Aug 05, 2024 at 10:36:39PM +1000, David Gibson wrote:
> > Here's a rough proof of concept showing how we could run tests for
> > passt with Avocado and the exeter library I recently created. It
> > includes Cleber's patch adding some basic Avocado tests and builds on
> > that.
> >
> > The current draft is pretty janky:
> > * The build rules to download and install the necessary pieces are messy
> > * We create the Avocado job files from the exeter sources in the
> > Makefile. Ideally Avocado would eventually be extended to handle
> > this itself
> > * The names that Avocado sees for each test are overlong
> > * There's some hacks to make sure things are executed from the
> > right working directory
> >
> > But, it's a starting point.
> >
> > Stefano,
> >
> > If you could look particularly at 6/22 and 22/22 which add the real
> > tests for passt/pasta, that would be great. The more specific you can
> > be about what you find ugly about how the tests are written, the
> > better I can try to address that.
> >
> > I suspect it will be easier to actually apply the series, then look at
> > the new test files (test/build/build.py, and test/pasta/pasta.py
> > particularly). From there you can look at as much of the support
> > library as you need to, rather than digging through the actual patches
> > to look for that.
>
> Forgot to mention. Patches 1 & 2 should be good to go regardless of
> what we do with the rest of the testing stuff.
Applied up to 2/22.
--
Stefano
^ permalink raw reply [flat|nested] 31+ messages in thread