From: David Gibson <david@gibson.dropbear.id.au>
To: passt-dev@passt.top, Stefano Brivio <sbrivio@redhat.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Subject: [PATCH 2/8] test/perf: Get iperf3 stats from client side
Date: Mon,  6 Nov 2023 18:08:27 +1100	[thread overview]
Message-ID: <20231106070834.1270986-3-david@gibson.dropbear.id.au> (raw)
In-Reply-To: <20231106070834.1270986-1-david@gibson.dropbear.id.au>

iperf3 generates statistics about its run on both the client and server
sides.  They don't have exactly the same information, but both have the
pieces we need (AFAICT the server communicates some information to the
client over the control socket, so the most important information is in
the client-side output, even if it was measured by the server).
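
For reference, a quick way to eyeball what the client-side report
contains is to run a short iperf3 client by hand against an already
running server and list the keys of its final summary (the host and
duration here are purely illustrative):

  iperf3 -J -c 127.0.0.1 -t 2 | jq '.end | keys'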

Currently we use the server-side information for our measurements.
Using the client-side information has several advantages, though:

 * We can wait directly for the client to complete and know we'll have
   the output we want.  We don't need to sleep to give the server time
   to write out its results.
 * That in turn means we can wrap up as soon as the client is done; we
   don't need to wait overlong to make sure everything is finished.
 * The slightly different organisation of the data in the client output
   means that we always want the same JSON value, rather than slightly
   different ones for UDP and TCP (see the example just below).
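
As an illustration (using the file names introduced further down in
this patch), the whole extraction reduces to a single jq invocation
over the per-stream client reports, for both TCP and UDP:

  cat c*.json | jq -rMs 'map(.end.sum_received.bits_per_second) | add'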

Avoiding those extra delays speeds up the overall run of the perf tests
by around 7 minutes (out of around 35 minutes) on my laptop.

The fact that we no longer unconditionally kill client and server after
a certain time means that the client could run indefinitely if the
server doesn't respond.  We mitigate that by setting a 1s connect
timeout on the client.  This isn't foolproof: if we get an initial
response but then lose connectivity, this could still run indefinitely;
however, it does cover by far the most likely failure cases.
--snd-timeout would provide more robustness, but I've hit odd failures
when trying to use it.
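
Roughly, each client ends up being invoked along these lines (the
variable names here are only illustrative; the real command is in the
diff below):

  iperf3 -J -c "$dest" -p "$port" --connect-timeout 1000 \
         -t "$time" -i0 -T c0 > c0.json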

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 .gitignore    |  2 +-
 test/lib/test | 32 ++++++++++++++------------------
 2 files changed, 15 insertions(+), 19 deletions(-)

diff --git a/.gitignore b/.gitignore
index d3d0e2c..d1c8be9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -6,5 +6,5 @@
 /qrap
 /pasta.1
 /seccomp.h
-/s*.json
+/c*.json
 README.plain.md
diff --git a/test/lib/test b/test/lib/test
index 115dd21..3ca5dbc 100755
--- a/test/lib/test
+++ b/test/lib/test
@@ -31,41 +31,37 @@ test_iperf3() {
 	__procs="$((${1} - 1))"; shift
 	__time="${1}"; shift
 
-	pane_or_context_run "${__sctx}" 'rm -f s*.json'
+	pane_or_context_run "${__cctx}" 'rm -f c*.json'
 
 	pane_or_context_run_bg "${__sctx}" 				\
 		 'for i in $(seq 0 '${__procs}'); do'			\
-		 '	(iperf3 -s1J -p'${__port}' -i'${__time}		\
-		 '	 > s${i}.json) &'				\
-		 '	echo $! > s${i}.pid &'				\
+		 '	(iperf3 -s1 -p'${__port}' -i'${__time}') &'	\
+		 '	echo $! > s${i}.pid; '				\
 		 'done'							\
 
 	sleep 1		# Wait for server to be ready
 
-	pane_or_context_run_bg "${__cctx}" 				\
+        # A 1s wait for connection on what's basically a local link
+        # indicates something is pretty wrong
+        __timeout=1000
+	pane_or_context_run "${__cctx}" 				\
 		 '('							\
 		 '	for i in $(seq 0 '${__procs}'); do'		\
-		 '		iperf3 -c '${__dest}' -p '${__port}	\
-		 '		 -t'${__time}' -i0 -T s${i} '"${@}"' &' \
+		 '		iperf3 -J -c '${__dest}' -p '${__port}	\
+		 '		 --connect-timeout '${__timeout}	\
+		 '		 -t'${__time}' -i0 -T c${i} '"${@}"	\
+                 ' 		> c${i}.json &'				\
 		 '	done;'						\
 		 '	wait'						\
 		 ')'
 
-	sleep $((__time + 5))
-
-	# If client fails to deliver control message, tell server we're done
+	# Kill the server, just in case -1 didn't work right
 	pane_or_context_run "${__sctx}" 'kill -INT $(cat s*.pid); rm s*.pid'
 
-	sleep 1		# ...and wait for output to be flushed
-
 	__jval=".end.sum_received.bits_per_second"
-	for __opt in ${@}; do
-		# UDP test
-		[ "${__opt}" = "-u" ] && __jval=".intervals[0].sum.bits_per_second"
-	done
 
-	__bw=$(pane_or_context_output "${__sctx}"			\
-		 'cat s*.json | jq -rMs "map('${__jval}') | add"')
+	__bw=$(pane_or_context_output "${__cctx}"			\
+		 'cat c*.json | jq -rMs "map('${__jval}') | add"')
 
 	TEST_ONE_subs="$(list_add_pair "${TEST_ONE_subs}" "__${__var}__" "${__bw}" )"
 
-- 
2.41.0


Thread overview: 10+ messages
2023-11-06  7:08 [PATCH 0/8] Clean ups and speed ups to benchmarks David Gibson
2023-11-06  7:08 ` [PATCH 1/8] test/perf: Remove stale iperf3c/iperf3s directives David Gibson
2023-11-06  7:08 ` David Gibson [this message]
2023-11-06  7:08 ` [PATCH 3/8] test/perf: Start iperf3 server less often David Gibson
2023-11-06  7:08 ` [PATCH 4/8] test/perf: Small MTUs for spliced TCP aren't interesting David Gibson
2023-11-06  7:08 ` [PATCH 5/8] test/perf: Explicitly control UDP packet length, instead of MTU David Gibson
2023-11-06  7:08 ` [PATCH 6/8] test/perf: "MTU" changes in passt_tcp host to guest aren't useful David Gibson
2023-11-06  7:08 ` [PATCH 7/8] test/perf: Remove unnecessary --pacing-timer options David Gibson
2023-11-06  7:08 ` [PATCH 8/8] test/perf: Simplify calculation of "omit" time for TCP throughput David Gibson
2023-11-07 12:45 ` [PATCH 0/8] Clean ups and speed ups to benchmarks Stefano Brivio
