From: Stefano Brivio <sbrivio@redhat.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: passt-dev@passt.top, Cleber Rosa <crosa@redhat.com>
Subject: Re: [PATCH v2 06/22] test: Add exeter+Avocado based build tests
Date: Fri, 9 Aug 2024 00:55:37 +0200
Message-ID: <20240809005537.137f68ce@elisabeth>
In-Reply-To: <ZrQfUmbu6u-TGJPR@zatzit.fritz.box>

On Thu, 8 Aug 2024 11:28:50 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:

> On Wed, Aug 07, 2024 at 03:06:44PM +0200, Stefano Brivio wrote:
> > On Wed, 7 Aug 2024 20:51:08 +1000
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> >   
> > > On Wed, Aug 07, 2024 at 12:11:26AM +0200, Stefano Brivio wrote:  
> > > > On Mon,  5 Aug 2024 22:36:45 +1000
> > > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > > >     
> > > > > Add a new test script to run the equivalent of the tests in build/all
> > > > > using exeter and Avocado.  This new version of the tests is more robust
> > > > > than the original, since it makes a temporary copy of the source tree so
> > > > > will not be affected by concurrent manual builds.    
> > > > 
> > > > I think this is much more readable than the previous Python attempt.    
> > > 
> > > That's encouraging.
> > >   
> > > > On the other hand, I guess it's not an ideal candidate for a fair
> > > > comparison because this is exactly the kind of stuff where shell
> > > > scripting shines: it's a simple test that needs a few basic shell
> > > > commands.    
> > > 
> > > Right.
> > >   
> > > > On that subject, the shell test is about half the lines of code (just
> > > > skipping headers, it's 48 lines instead of 90... and yes, this version    
> > > 
> > > Even ignoring the fact that this case is particularly suited to shell,
> > > I don't think that's really an accurate comparison, but getting to one
> > > is pretty hard.
> > > 
> > > The existing test isn't 48 lines of shell, but of "passt test DSL".
> > > There are several hundred additional lines of shell to interpret that.  
> > 
> > Yeah, but the 48 lines are all I have to look at, which is what
> > matters, I would argue. That's exactly why I wrote that interpreter.
> > 
> > Here, it's 90 lines of *test file*.  
> 
> Fair point.  Fwiw, it's down to 77 so far for my next draft.
> 
> > > Now obviously we don't need all of that for just this test.  Likewise
> > > the new Python test needs at least exeter - that's only a couple of
> > > hundred lines - but also Avocado (huge, but only a small amount is
> > > really relevant here).
> > >   
> > > > now uses a copy of the source code, but that would be two lines).    
> > > 
> > > I feel like it would be a bit more than two lines, to copy exactly
> > > what you want, and to clean up after yourself.  
> > 
> > host    mkdir __STATEDIR__/sources
> > host    cp --parents $(git ls-files) __STATEDIR__/sources
> > 
> > ...which is actually an improvement on the original as __STATEDIR__ can
> > be handled in a centralised way, if one wants to keep that after the
> > single test case, after the whole test run, or not at all.  
> 
> Huh, I didn't know about cp --parents, which does exactly what's
> needed.  In the Python library there are, alas, several things that do
> almost but not quite what's needed.  I guess I could just invoke 'cp
> --parents' myself.
> 
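Right, and if you end up shelling out to 'cp --parents' anyway, the
Python side can stay fairly small. Just a sketch (helper name and
layout made up, nothing to do with the actual exeter code), copying
the tracked files into a scratch directory:

  import subprocess, tempfile

  def copy_sources(srcdir):
      # Scratch copy of the tracked files only, so that concurrent
      # manual builds in the original tree can't interfere
      destdir = tempfile.mkdtemp(prefix='passt-build-')
      files = subprocess.run(['git', 'ls-files'], cwd=srcdir,
                             capture_output=True, text=True,
                             check=True).stdout.splitlines()
      # 'cp --parents' recreates the relative paths under destdir
      subprocess.run(['cp', '--parents', *files, destdir],
                     cwd=srcdir, check=True)
      return destdir

Whether that copy is kept after the test, and where, could then be
decided in one place, much like __STATEDIR__ above.
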
> > > > In terms of time overhead, dropping delays to make the display capture
> > > > nice (a feature that we would anyway lose with exeter plus Avocado, if
> > > > I understood correctly):    
> > > 
> > > Yes.  Unlike you, I'm really not convinced of the value of the display
> > > capture versus log files, at least in the majority of cases.  
> > 
> > Well, but I use that...
> > 
> > By the way, openQA nowadays takes periodic screenshots. That's certainly
> > not as useful, but I'm indeed not the only one who benefits from
> > _seeing_ tests as they run instead of correlating log files from
> > different contexts, especially when you have a client, a server, and
> > what you're testing in between.  
> 
> If you have to correlate multiple logs that's a pain, yes.  My
> approach here is, as much as possible, to have a single "log"
> (actually stdout & stderr) from the top level test logic, so the
> logical ordering is kind of built in.

That's not necessarily helpful: if I have a client and a server, things
are much clearer to me if I have two different logs, side-by-side. Even
more so if you have a guest, a host, and a namespace "in between".

I see the difference because I'm often digging through Podman CI's
logs, where there's a single log (including stdout and stderr), since
bats doesn't offer context functionality like the one we have right now.

It's sometimes really not easy to understand what's going on in Podman's
tests without copying and pasting into an editor and manually marking
things.
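
To give an idea of what I mean by context: even a single merged log
becomes readable if every line says where it comes from. As a sketch
in Python (nothing like what our context helpers actually do), a
hypothetical ctx_run() could write each line both to a per-context log
and to a combined one:

  import datetime, subprocess

  def ctx_run(ctx, cmd, combined='test.log'):
      # Run a command, tag its output with the context name, and
      # append it to both the per-context log and the combined log
      ts = datetime.datetime.now().strftime('%H:%M:%S.%f')[:-3]
      out = subprocess.run(cmd, shell=True, capture_output=True,
                           text=True).stdout
      with open(ctx + '.log', 'a') as per_ctx, \
           open(combined, 'a') as log:
          for line in out.splitlines():
              per_ctx.write(ts + ' ' + line + '\n')
              log.write(ts + ' [' + ctx + '] ' + line + '\n')

That way you can still read, say, client and server side-by-side, or
grep a single context out of the combined log.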

> > > I certainly don't think it's worth slowing down the test running in the
> > > normal case.  
> > 
> > It doesn't significantly slow things down,  
> 
> It does if you explicitly add delays to make the display capture nice
> as mentioned above.

Okay, I didn't realise how much eye candy I left in even when ${FAST}
is set (which probably only makes sense when run as './ci').
With the patch attached I get:

$ time ./run
[...]
real	17m17.686s
user	0m0.010s
sys	0m0.014s

I also cut the duration of the throughput and latency tests down to one
second. After we fixed a lot of issues in passt, and some in QEMU and
the kernel, results are now surprisingly consistent.

Still, a significant part of that is Podman's tests (which I'm working
on speeding up, for the sake of Podman's own CI), plus the performance
tests anyway. Without those:

$ time ./run
[...]
real	5m57.612s
user	0m0.011s
sys	0m0.009s

> > but it certainly makes it
> > more complicated to run test cases in parallel... which you can't do
> > anyway for throughput and latency tests (which take 22 out of the 37
> > minutes of a current CI run), unless you set up VMs with CPU pinning and
> > cgroups, or a server farm.  
> 
> So, yes, the perf tests take the majority of the runtime for CI, but
> I'm less concerned about runtime for CI tests.  I'm more interested in
> runtime for a subset of functional tests you can run repeatedly while
> developing.  I routinely disable the perf and other slow tests, to get
> a subset taking 5-7 minutes.  That's ok, but I'm pretty confident I
> can get better coverage in significantly less time using parallel
> tests.

Probably, yes, but still I would like to point out that the difference
between five and ten minutes is not as relevant in terms of workflow as
the difference between one and five minutes.

> > I mean, I see the value of running things in parallel in a general
> > case, but I don't think you should just ignore everything else.
> >   
> > > > $ time (make clean; make passt; make clean; make pasta; make clean; make qrap; make clean; make; d=$(mktemp -d); prefix=$d make install; prefix=$d make uninstall; )
> > > > [...]
> > > > real	0m17.449s
> > > > user	0m15.616s
> > > > sys	0m2.136s    
> > > 
> > > On my system:
> > > [...]
> > > real	0m20.325s
> > > user	0m15.595s
> > > sys	0m5.287s
> > >   
> > > > compared to:
> > > > 
> > > > $ time ./run
> > > > [...]
> > > > real	0m18.217s
> > > > user	0m0.010s
> > > > sys	0m0.001s
> > > > 
> > > > ...which I would call essentially no overhead. I didn't try out this
> > > > version yet, I suspect it would be somewhere in between.    
> > > 
> > > Well..
> > > 
> > > $ time PYTHONPATH=test/exeter/py3 test/venv/bin/avocado run test/build/build.json 
> > > [...]
> > > RESULTS    : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
> > > JOB TIME   : 10.85 s
> > > 
> > > real	0m11.000s
> > > user	0m23.439s
> > > sys	0m7.315s
> > > 
> > > Because parallel.  It looks like the avocado start up time is
> > > reasonably substantial too, so that should look better with a larger
> > > set of tests.  
> > 
> > With the current set of tests, I doubt it's ever going to pay off. Even
> > if you run the non-perf tests in 10% of the time, it's going to be 24
> > minutes instead of 37.  
> 
> Including the perf tests, probably not.  Excluding them (which is
> extremely useful when actively coding) I think it will.
> 
> > I guess it will start making sense with larger matrices of network
> > environments, or with more test cases (but really a lot of them).  
> 
> We could certainly do with a lot more tests, though I expect it will
> take a while to get them.
> 

-- 
Stefano

[-- Attachment #2: test_speedup.patch --]
[-- Type: text/x-patch, Size: 14213 bytes --]

diff --git a/test/lib/layout b/test/lib/layout
index f9a1cf1..4d03572 100644
--- a/test/lib/layout
+++ b/test/lib/layout
@@ -15,7 +15,7 @@
 
 # layout_pasta() - Panes for host, pasta, and separate one for namespace
 layout_pasta() {
-	sleep 3
+	sleep 1
 
 	tmux kill-pane -a -t 0
 	cmd_write 0 clear
@@ -46,7 +46,7 @@ layout_pasta() {
 
 # layout_passt() - Panes for host, passt, and guest
 layout_passt() {
-	sleep 3
+	sleep 1
 
 	tmux kill-pane -a -t 0
 	cmd_write 0 clear
@@ -77,7 +77,7 @@ layout_passt() {
 
 # layout_passt_in_pasta() - Host, passt within pasta, namespace and guest
 layout_passt_in_pasta() {
-	sleep 3
+	sleep 1
 
 	tmux kill-pane -a -t 0
 	cmd_write 0 clear
@@ -113,7 +113,7 @@ layout_passt_in_pasta() {
 
 # layout_two_guests() - Two guest panes, two passt panes, plus host and log
 layout_two_guests() {
-	sleep 3
+	sleep 1
 
 	tmux kill-pane -a -t 0
 	cmd_write 0 clear
@@ -152,7 +152,7 @@ layout_two_guests() {
 
 # layout_demo_pasta() - Four panes for pasta demo
 layout_demo_pasta() {
-	sleep 3
+	sleep 1
 
 	cmd_write 0 cd ${BASEPATH}
 	cmd_write 0 clear
@@ -188,7 +188,7 @@ layout_demo_pasta() {
 
 # layout_demo_passt() - Four panes for passt demo
 layout_demo_passt() {
-	sleep 3
+	sleep 1
 
 	cmd_write 0 cd ${BASEPATH}
 	cmd_write 0 clear
@@ -224,7 +224,7 @@ layout_demo_passt() {
 
 # layout_demo_podman() - Four panes for pasta demo with Podman
 layout_demo_podman() {
-	sleep 3
+	sleep 1
 
 	cmd_write 0 cd ${BASEPATH}
 	cmd_write 0 clear
diff --git a/test/lib/term b/test/lib/term
index 262937e..95f9a01 100755
--- a/test/lib/term
+++ b/test/lib/term
@@ -97,7 +97,6 @@ display_delay() {
 switch_pane() {
 	tmux select-pane -t ${1}
 	PR_DELAY=${PR_DELAY_INIT}
-	display_delay "0.2"
 }
 
 # cmd_write() - Write a command to a pane, letter by letter, and execute it
@@ -199,7 +198,7 @@ pane_run() {
 # $1:	Pane name
 pane_wait() {
 	__lc="$(echo "${1}" | tr [A-Z] [a-z])"
-	sleep 0.1 || sleep 1
+	sleep 0.01 || sleep 1
 
 	__done=0
 	while
@@ -207,7 +206,7 @@ pane_wait() {
 		case ${__l} in
 		*"$ " | *"# ") return ;;
 		esac
-	do sleep 0.1 || sleep 1; done
+	do sleep 0.01 || sleep 1; done
 }
 
 # pane_parse() - Print last line, @EMPTY@ if command had no output
@@ -231,7 +230,7 @@ pane_status() {
 
 	__status="$(pane_parse "${1}")"
 	while ! [ "${__status}" -eq "${__status}" ] 2>/dev/null; do
-		sleep 1
+		sleep 0.1
 		pane_run "${1}" 'echo $?'
 		pane_wait "${1}"
 		__status="$(pane_parse "${1}")"
@@ -390,13 +389,6 @@ info_passed() {
 	info_nolog "...${PR_GREEN}passed${PR_NC}.\n"
 	log "...passed."
 	log
-
-	for i in `seq 1 3`; do
-		tmux set status-right-style 'bg=colour1 fg=colour2 bold'
-		sleep "0.1"
-		tmux set status-right-style 'bg=colour1 fg=colour233 bold'
-		sleep "0.1"
-	done
 }
 
 # info_failed() - Display, log, and make status bar blink when a test passes
@@ -407,13 +399,6 @@ info_failed() {
 	log "...failed."
 	log
 
-	for i in `seq 1 3`; do
-		tmux set status-right-style 'bg=colour1 fg=colour196 bold'
-		sleep "0.1"
-		tmux set status-right-style 'bg=colour1 fg=colour233 bold'
-		sleep "0.1"
-	done
-
 	pause_continue \
 		"Press any key to pause test session"		\
 		"Resuming in "					\
diff --git a/test/lib/test b/test/lib/test
index c525f8e..e6726be 100755
--- a/test/lib/test
+++ b/test/lib/test
@@ -33,7 +33,7 @@ test_iperf3k() {
 
 	pane_or_context_run "${__sctx}" 'kill -INT $(cat s.pid); rm s.pid'
 
-	sleep 3		# Wait for kernel to free up ports
+	sleep 1		# Wait for kernel to free up ports
 }
 
 # test_iperf3() - Ugly helper for iperf3 directive
diff --git a/test/pasta_options/log_to_file b/test/pasta_options/log_to_file
index fe50e50..3ead06c 100644
--- a/test/pasta_options/log_to_file
+++ b/test/pasta_options/log_to_file
@@ -19,7 +19,7 @@ sleep	1
 endef
 
 def	flood_log_client
-host	tcp_crr --nolog -P 10001 -C 10002 -6 -c -H ::1
+host	tcp_crr --nolog -l1 -P 10001 -C 10002 -6 -c -H ::1
 endef
 
 def	check_log_size_mountns
@@ -42,7 +42,7 @@ pout	PID2 echo $!
 check	head -1 __LOG_FILE__ | grep '^pasta .* [(]__PID2__[)]$'
 
 test	Maximum log size
-passtb	./pasta --config-net -d -f -l __LOG_FILE__ --log-size $((100 * 1024)) -- sh -c 'while true; do tcp_crr --nolog -P 10001 -C 10002 -6; done'
+passtb	./pasta --config-net -d -f -l __LOG_FILE__ --log-size $((100 * 1024)) -- sh -c 'while true; do tcp_crr --nolog -l1 -P 10001 -C 10002 -6; done'
 sleep	1
 
 flood_log_client
diff --git a/test/perf/passt_tcp b/test/perf/passt_tcp
index 14343cb..695479f 100644
--- a/test/perf/passt_tcp
+++ b/test/perf/passt_tcp
@@ -38,7 +38,7 @@ hout	FREQ_CPUFREQ (echo "scale=1"; printf '( %i + 10^5 / 2 ) / 10^6\n' $(cat /sy
 hout	FREQ [ -n "__FREQ_CPUFREQ__" ] && echo __FREQ_CPUFREQ__ || echo __FREQ_PROCFS__
 
 set	THREADS 4
-set	TIME 10
+set	TIME 1
 set	OMIT 0.1
 set	OPTS -Z -P __THREADS__ -l 1M -O__OMIT__
 
@@ -75,7 +75,7 @@ lat	-
 lat	-
 lat	-
 nsb	tcp_rr --nolog -6
-gout	LAT tcp_rr --nolog -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
+gout	LAT tcp_rr --nolog -l1 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
 lat	__LAT__ 200 150
 
 tl	TCP CRR latency over IPv6: guest to host
@@ -85,7 +85,7 @@ lat	-
 lat	-
 lat	-
 nsb	tcp_crr --nolog -6
-gout	LAT tcp_crr --nolog -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
+gout	LAT tcp_crr --nolog -l1 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
 lat	__LAT__ 500 400
 
 tr	TCP throughput over IPv4: guest to host
@@ -119,7 +119,7 @@ lat	-
 lat	-
 lat	-
 nsb	tcp_rr --nolog -4
-gout	LAT tcp_rr --nolog -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
+gout	LAT tcp_rr --nolog -l1 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
 lat	__LAT__ 200 150
 
 tl	TCP CRR latency over IPv4: guest to host
@@ -129,7 +129,7 @@ lat	-
 lat	-
 lat	-
 nsb	tcp_crr --nolog -4
-gout	LAT tcp_crr --nolog -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
+gout	LAT tcp_crr --nolog -l1 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
 lat	__LAT__ 500 400
 
 tr	TCP throughput over IPv6: host to guest
@@ -153,7 +153,7 @@ lat	-
 lat	-
 guestb	tcp_rr --nolog -P 10001 -C 10011 -6
 sleep	1
-nsout	LAT tcp_rr --nolog -P 10001 -C 10011 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_rr --nolog -l1 -P 10001 -C 10011 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
 lat	__LAT__ 200 150
 
 tl	TCP CRR latency over IPv6: host to guest
@@ -164,7 +164,7 @@ lat	-
 lat	-
 guestb	tcp_crr --nolog -P 10001 -C 10011 -6
 sleep	1
-nsout	LAT tcp_crr --nolog -P 10001 -C 10011 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_crr --nolog -l1 -P 10001 -C 10011 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
 lat	__LAT__ 500 350
 
 
@@ -189,7 +189,7 @@ lat	-
 lat	-
 guestb	tcp_rr --nolog -P 10001 -C 10011 -4
 sleep	1
-nsout	LAT tcp_rr --nolog -P 10001 -C 10011 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_rr --nolog -l1 -P 10001 -C 10011 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
 lat	__LAT__ 200 150
 
 tl	TCP CRR latency over IPv6: host to guest
@@ -200,7 +200,7 @@ lat	-
 lat	-
 guestb	tcp_crr --nolog -P 10001 -C 10011 -4
 sleep	1
-nsout	LAT tcp_crr --nolog -P 10001 -C 10011 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_crr --nolog -l1 -P 10001 -C 10011 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
 lat	__LAT__ 500 300
 
 te
diff --git a/test/perf/passt_udp b/test/perf/passt_udp
index 8919280..f25c903 100644
--- a/test/perf/passt_udp
+++ b/test/perf/passt_udp
@@ -31,7 +31,7 @@ hout	FREQ_CPUFREQ (echo "scale=1"; printf '( %i + 10^5 / 2 ) / 10^6\n' $(cat /sy
 hout	FREQ [ -n "__FREQ_CPUFREQ__" ] && echo __FREQ_CPUFREQ__ || echo __FREQ_PROCFS__
 
 set	THREADS 2
-set	TIME 10
+set	TIME 1
 set	OPTS -u -P __THREADS__ --pacing-timer 1000
 
 info	Throughput in Gbps, latency in µs, __THREADS__ threads at __FREQ__ GHz
diff --git a/test/perf/pasta_tcp b/test/perf/pasta_tcp
index 8d2f911..a443f5a 100644
--- a/test/perf/pasta_tcp
+++ b/test/perf/pasta_tcp
@@ -22,7 +22,7 @@ ns	/sbin/sysctl -w net.ipv4.tcp_timestamps=0
 
 
 set	THREADS 4
-set	TIME 10
+set	TIME 1
 set	OMIT 0.1
 set	OPTS -Z -w 4M -l 1M -P __THREADS__ -O__OMIT__
 
@@ -46,13 +46,13 @@ iperf3k	host
 
 tl	TCP RR latency over IPv6: ns to host
 hostb	tcp_rr --nolog -P 10003 -C 10013 -6
-nsout	LAT tcp_rr --nolog -P 10003 -C 10013 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_rr --nolog -l1 -P 10003 -C 10013 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
 hostw
 lat	__LAT__ 150 100
 
 tl	TCP CRR latency over IPv6: ns to host
 hostb	tcp_crr --nolog -P 10003 -C 10013 -6
-nsout	LAT tcp_crr --nolog -P 10003 -C 10013 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_crr --nolog -l1 -P 10003 -C 10013 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
 hostw
 lat	__LAT__ 500 350
 
@@ -67,13 +67,13 @@ iperf3k	host
 
 tl	TCP RR latency over IPv4: ns to host
 hostb	tcp_rr --nolog -P 10003 -C 10013 -4
-nsout	LAT tcp_rr --nolog -P 10003 -C 10013 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_rr --nolog -l1 -P 10003 -C 10013 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
 hostw
 lat	__LAT__ 150 100
 
 tl	TCP CRR latency over IPv4: ns to host
 hostb	tcp_crr --nolog -P 10003 -C 10013 -4
-nsout	LAT tcp_crr --nolog -P 10003 -C 10013 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_crr --nolog -l1 -P 10003 -C 10013 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
 hostw
 lat	__LAT__ 500 350
 
@@ -87,13 +87,13 @@ iperf3k	ns
 
 tl	TCP RR latency over IPv6: host to ns
 nsb	tcp_rr --nolog -P 10002 -C 10012 -6
-hout	LAT tcp_rr --nolog -P 10002 -C 10012 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+hout	LAT tcp_rr --nolog -l1 -P 10002 -C 10012 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
 nsw
 lat	__LAT__ 150 100
 
 tl	TCP CRR latency over IPv6: host to ns
 nsb	tcp_crr --nolog -P 10002 -C 10012 -6
-hout	LAT tcp_crr --nolog -P 10002 -C 10012 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
+hout	LAT tcp_crr --nolog -l1 -P 10002 -C 10012 -6 -c -H ::1 | sed -n 's/^throughput=\(.*\)/\1/p'
 nsw
 lat	__LAT__ 1000 700
 
@@ -108,13 +108,13 @@ iperf3k	ns
 
 tl	TCP RR latency over IPv4: host to ns
 nsb	tcp_rr --nolog -P 10002 -C 10012 -4
-hout	LAT tcp_rr --nolog -P 10002 -C 10012 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+hout	LAT tcp_rr --nolog -l1 -P 10002 -C 10012 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
 nsw
 lat	__LAT__ 150 100
 
 tl	TCP CRR latency over IPv4: host to ns
 nsb	tcp_crr --nolog -P 10002 -C 10012 -4
-hout	LAT tcp_crr --nolog -P 10002 -C 10012 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
+hout	LAT tcp_crr --nolog -l1 -P 10002 -C 10012 -4 -c -H 127.0.0.1 | sed -n 's/^throughput=\(.*\)/\1/p'
 nsw
 lat	__LAT__ 1000 700
 
@@ -156,7 +156,7 @@ lat	-
 lat	-
 lat	-
 hostb	tcp_rr --nolog -P 10003 -C 10013 -6
-nsout	LAT tcp_rr --nolog -P 10003 -C 10013 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_rr --nolog -l1 -P 10003 -C 10013 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
 hostw
 lat	__LAT__ 150 100
 
@@ -165,7 +165,7 @@ lat	-
 lat	-
 lat	-
 hostb	tcp_crr --nolog -P 10003 -C 10013 -6
-nsout	LAT tcp_crr --nolog -P 10003 -C 10013 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_crr --nolog -l1 -P 10003 -C 10013 -6 -c -H __GW6__%__IFNAME__ | sed -n 's/^throughput=\(.*\)/\1/p'
 hostw
 lat	__LAT__ 1500 500
 
@@ -193,7 +193,7 @@ lat	-
 lat	-
 lat	-
 hostb	tcp_rr --nolog -P 10003 -C 10013 -4
-nsout	LAT tcp_rr --nolog -P 10003 -C 10013 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_rr --nolog -l1 -P 10003 -C 10013 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
 hostw
 lat	__LAT__ 150 100
 
@@ -202,7 +202,7 @@ lat	-
 lat	-
 lat	-
 hostb	tcp_crr --nolog -P 10003 -C 10013 -4
-nsout	LAT tcp_crr --nolog -P 10003 -C 10013 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
+nsout	LAT tcp_crr --nolog -l1 -P 10003 -C 10013 -4 -c -H __GW__ | sed -n 's/^throughput=\(.*\)/\1/p'
 hostw
 lat	__LAT__ 1500 500
 
@@ -224,7 +224,7 @@ lat	-
 lat	-
 lat	-
 nsb	tcp_rr --nolog -P 10002 -C 10012 -6
-hout	LAT tcp_rr --nolog -P 10002 -C 10012 -6 -c -H __ADDR6__ | sed -n 's/^throughput=\(.*\)/\1/p'
+hout	LAT tcp_rr --nolog -l1 -P 10002 -C 10012 -6 -c -H __ADDR6__ | sed -n 's/^throughput=\(.*\)/\1/p'
 nsw
 lat	__LAT__ 150 100
 
@@ -234,7 +234,7 @@ lat	-
 lat	-
 sleep	1
 nsb	tcp_crr --nolog -P 10002 -C 10012 -6
-hout	LAT tcp_crr --nolog -P 10002 -C 10012 -6 -c -H __ADDR6__ | sed -n 's/^throughput=\(.*\)/\1/p'
+hout	LAT tcp_crr --nolog -l1 -P 10002 -C 10012 -6 -c -H __ADDR6__ | sed -n 's/^throughput=\(.*\)/\1/p'
 nsw
 lat	__LAT__ 5000 10000
 
@@ -256,7 +256,7 @@ lat	-
 lat	-
 lat	-
 nsb	tcp_rr --nolog -P 10002 -C 10012 -4
-hout	LAT tcp_rr --nolog -P 10002 -C 10012 -4 -c -H __ADDR__ | sed -n 's/^throughput=\(.*\)/\1/p'
+hout	LAT tcp_rr --nolog -l1 -P 10002 -C 10012 -4 -c -H __ADDR__ | sed -n 's/^throughput=\(.*\)/\1/p'
 nsw
 lat	__LAT__ 150 100
 
@@ -266,7 +266,7 @@ lat	-
 lat	-
 sleep	1
 nsb	tcp_crr --nolog -P 10002 -C 10012 -4
-hout	LAT tcp_crr --nolog -P 10002 -C 10012 -4 -c -H __ADDR__ | sed -n 's/^throughput=\(.*\)/\1/p'
+hout	LAT tcp_crr --nolog -l1 -P 10002 -C 10012 -4 -c -H __ADDR__ | sed -n 's/^throughput=\(.*\)/\1/p'
 nsw
 lat	__LAT__ 5000 10000
 
diff --git a/test/perf/pasta_udp b/test/perf/pasta_udp
index 6acbfd3..9fed62e 100644
--- a/test/perf/pasta_udp
+++ b/test/perf/pasta_udp
@@ -21,7 +21,7 @@ hout	FREQ_CPUFREQ (echo "scale=1"; printf '( %i + 10^5 / 2 ) / 10^6\n' $(cat /sy
 hout	FREQ [ -n "__FREQ_CPUFREQ__" ] && echo __FREQ_CPUFREQ__ || echo __FREQ_PROCFS__
 
 set	THREADS 1
-set	TIME 10
+set	TIME 1
 set	OPTS -u -P __THREADS__
 
 info	Throughput in Gbps, latency in µs, one thread at __FREQ__ GHz
