* Problematic throughput numbers
@ 2024-09-25 16:36 Jon Maloy
2024-09-25 17:47 ` Stefano Brivio
2024-09-26 1:59 ` David Gibson
0 siblings, 2 replies; 5+ messages in thread
From: Jon Maloy @ 2024-09-25 16:36 UTC (permalink / raw)
To: passt-dev, sbrivio, dgibson, lvivier
I made many runs with iperf3 ns->host, and the results are puzzling me.
Over and over again, I see throughput practically collapse, with a
decrease of two orders of magnitude.
Just to make sure this wasn't something introduced by me, I went back to
the master branch and disabled the SO_PEEK_OFF feature.
The result was the same.
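For context, SO_PEEK_OFF makes successive MSG_PEEK reads walk forward through the receive queue instead of re-reading from its head. A minimal, self-contained sketch of those semantics (this is illustrative code, not passt's; it uses AF_UNIX, where the option has long been supported, whereas TCP support is what the kernel fix concerns):

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_PEEK_OFF
#define SO_PEEK_OFF 42	/* Linux-specific option */
#endif

/* Demonstrate SO_PEEK_OFF: once a peek offset is set, each MSG_PEEK
 * advances it, so two consecutive peeks return different data.
 * Returns 0 on success, -1 on any failure.
 */
int demo_so_peek_off(void)
{
	int sv[2], zero = 0;
	char buf[6] = { 0 };

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
		return -1;

	if (send(sv[0], "helloworld", 10, 0) != 10)
		return -1;

	/* Setting SO_PEEK_OFF to 0 enables peek-offset tracking */
	if (setsockopt(sv[1], SOL_SOCKET, SO_PEEK_OFF, &zero, sizeof(zero)))
		return -1;

	/* First peek returns the head of the queue... */
	if (recv(sv[1], buf, 5, MSG_PEEK) != 5 || memcmp(buf, "hello", 5))
		return -1;

	/* ...and the second peek continues where the first stopped,
	 * instead of returning "hello" again. */
	if (recv(sv[1], buf, 5, MSG_PEEK) != 5 || memcmp(buf, "world", 5))
		return -1;

	close(sv[0]);
	close(sv[1]);
	return 0;
}
```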
The log below shows a typical run, but it is highly variable.
Sometimes almost the whole series is in the 50-70 Gb/s range, and
sometimes almost all in the 100-300 Mb/s range.
When I added the kernel fix it didn't seem to make any difference.
To me this is really worrying, and should be investigated.
///jon
pasta NS->host (master branch, SO_PEEK_OFF disabled)
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
Accepted connection from 127.0.0.1, port 48354
[ 5] local 127.0.0.1 port 5201 connected to 127.0.0.1 port 48360
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 7.61 GBytes 65.3 Gbits/sec
[ 5] 1.00-2.00 sec 7.59 GBytes 65.2 Gbits/sec
[ 5] 2.00-3.00 sec 7.55 GBytes 64.8 Gbits/sec
[ 5] 3.00-4.00 sec 7.69 GBytes 66.1 Gbits/sec
[ 5] 4.00-5.00 sec 7.55 GBytes 64.8 Gbits/sec
[ 5] 5.00-6.00 sec 7.60 GBytes 65.3 Gbits/sec
[ 5] 6.00-7.00 sec 7.52 GBytes 64.6 Gbits/sec
[ 5] 7.00-8.00 sec 1.02 GBytes 8.73 Gbits/sec
[ 5] 8.00-9.00 sec 3.50 MBytes 29.4 Mbits/sec
[ 5] 9.00-10.00 sec 17.2 MBytes 145 Mbits/sec
[ 5] 10.00-11.00 sec 64.0 MBytes 537 Mbits/sec
[ 5] 11.00-12.00 sec 24.1 MBytes 202 Mbits/sec
[ 5] 12.00-13.00 sec 25.4 MBytes 213 Mbits/sec
[ 5] 13.00-14.00 sec 24.9 MBytes 209 Mbits/sec
[ 5] 14.00-15.00 sec 21.4 MBytes 179 Mbits/sec
[ 5] 15.00-16.00 sec 32.0 MBytes 268 Mbits/sec
[ 5] 16.00-17.00 sec 10.9 MBytes 91.2 Mbits/sec
[ 5] 17.00-18.00 sec 27.6 MBytes 232 Mbits/sec
[ 5] 18.00-19.00 sec 75.6 MBytes 634 Mbits/sec
[ 5] 19.00-20.00 sec 21.1 MBytes 177 Mbits/sec
[ 5] 20.00-21.00 sec 109 MBytes 912 Mbits/sec
[ 5] 21.00-22.00 sec 23.9 MBytes 200 Mbits/sec
[ 5] 22.00-23.00 sec 62.6 MBytes 525 Mbits/sec
[ 5] 23.00-24.00 sec 34.5 MBytes 289 Mbits/sec
[ 5] 24.00-25.00 sec 54.4 MBytes 456 Mbits/sec
[ 5] 25.00-26.00 sec 14.1 MBytes 118 Mbits/sec
[ 5] 26.00-27.00 sec 38.8 MBytes 325 Mbits/sec
[ 5] 27.00-28.00 sec 95.0 MBytes 797 Mbits/sec
[ 5] 28.00-29.00 sec 23.2 MBytes 195 Mbits/sec
[ 5] 29.00-30.00 sec 71.9 MBytes 603 Mbits/sec
[ 5] 30.00-31.00 sec 28.8 MBytes 241 Mbits/sec
[ 5] 31.00-32.00 sec 34.8 MBytes 292 Mbits/sec
[ 5] 32.00-33.00 sec 19.4 MBytes 163 Mbits/sec
[ 5] 33.00-34.00 sec 39.1 MBytes 328 Mbits/sec
[ 5] 34.00-35.00 sec 31.4 MBytes 263 Mbits/sec
[ 5] 35.00-36.00 sec 28.2 MBytes 237 Mbits/sec
[ 5] 36.00-37.00 sec 48.5 MBytes 407 Mbits/sec
[ 5] 37.00-38.00 sec 23.4 MBytes 196 Mbits/sec
[ 5] 38.00-39.00 sec 71.4 MBytes 599 Mbits/sec
[ 5] 39.00-40.00 sec 41.4 MBytes 347 Mbits/sec
[ 5] 40.00-41.00 sec 15.2 MBytes 128 Mbits/sec
[ 5] 41.00-42.00 sec 34.1 MBytes 286 Mbits/sec
[ 5] 42.00-43.00 sec 31.1 MBytes 261 Mbits/sec
[ 5] 43.00-44.00 sec 61.5 MBytes 516 Mbits/sec
[ 5] 44.00-45.00 sec 38.2 MBytes 321 Mbits/sec
[ 5] 45.00-46.00 sec 34.2 MBytes 287 Mbits/sec
[ 5] 46.00-47.00 sec 33.8 MBytes 283 Mbits/sec
[ 5] 47.00-48.00 sec 7.12 MBytes 59.8 Mbits/sec
[ 5] 48.00-49.00 sec 97.4 MBytes 817 Mbits/sec
[ 5] 49.00-50.00 sec 13.6 MBytes 114 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-50.00 sec 55.7 GBytes 9.57 Gbits/sec
receiver
-----------------------------------------------------------
Server listening on 5201 (test #3)
-----------------------------------------------------------
* Re: Problematic throughput numbers
2024-09-25 16:36 Problematic throughput numbers Jon Maloy
@ 2024-09-25 17:47 ` Stefano Brivio
2024-09-26 1:59 ` David Gibson
1 sibling, 0 replies; 5+ messages in thread
From: Stefano Brivio @ 2024-09-25 17:47 UTC (permalink / raw)
To: Jon Maloy; +Cc: passt-dev, dgibson, lvivier
On Wed, 25 Sep 2024 12:36:43 -0400
Jon Maloy <jmaloy@redhat.com> wrote:
> I made many runs with iperf3 ns->host, and the results are puzzling me.
This has nothing to do with the path affected by your patches, because
you're connecting to a loopback address, and that's dealt with by
spliced connections. See tcp_splice.c, and:
https://passt.top/#pasta-pack-a-subtle-tap-abstraction
"Handling of local traffic in pasta" in pasta(1)
for more details. If you want to try out your patches, you could run
the test suite, or use a non-loopback address from the container.
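To illustrate why captures show nothing here: for loopback traffic, pasta just shuttles bytes between two sockets inside the kernel. A toy sketch of the splice()-through-a-pipe pattern that tcp_splice.c builds on (this is not passt code; the function name and echo setup are made up for illustration):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Echo 12 bytes back to the client by splicing socket -> pipe -> socket,
 * so the payload never enters user space. Returns 0 on success.
 */
int demo_splice_echo(void)
{
	struct sockaddr_in a = { .sin_family = AF_INET,
				 .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
	socklen_t len = sizeof(a);
	int lsn, c, s, p[2];
	char buf[13] = { 0 };
	ssize_t n, done = 0;

	/* Listener on an ephemeral loopback port */
	lsn = socket(AF_INET, SOCK_STREAM, 0);
	if (lsn < 0 || bind(lsn, (struct sockaddr *)&a, sizeof(a)) ||
	    listen(lsn, 1) || getsockname(lsn, (struct sockaddr *)&a, &len))
		return -1;

	c = socket(AF_INET, SOCK_STREAM, 0);
	if (c < 0 || connect(c, (struct sockaddr *)&a, sizeof(a)))
		return -1;

	s = accept(lsn, NULL, NULL);
	if (s < 0 || pipe(p))
		return -1;

	if (send(c, "hello splice", 12, 0) != 12)
		return -1;

	/* Move data from the server socket back into it through a pipe;
	 * a real proxy loops like this until the connection closes. */
	while (done < 12) {
		n = splice(s, NULL, p[1], NULL, 12 - done, 0);
		if (n <= 0 || splice(p[0], NULL, s, NULL, n, 0) != n)
			return -1;
		done += n;
	}

	if (recv(c, buf, 12, MSG_WAITALL) != 12 ||
	    memcmp(buf, "hello splice", 12))
		return -1;

	close(c); close(s); close(lsn); close(p[0]); close(p[1]);
	return 0;
}
```

Throughput on this path is bounded by socket and pipe buffer sizes and scheduling, not by the tap device, which is why a non-loopback address is needed to exercise the patched path.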
> Over and over again, I see throughput practically collapse, with a
> decrease of two orders of magnitude.
> Just to make sure this wasn't something introduced by me, I went back to
> the master branch and disabled the SO_PEEK_OFF feature.
> The result was the same.
> The log below shows a typical run, but it is highly variable.
> Sometimes almost the whole series is in the 50-70 Gb/s range, and
> sometimes almost all in the 100-300 Mb/s range.
> When I added the kernel fix it didn't seem to make any difference.
>
> To me this is really worrying, and should be investigated.
I can't reproduce this, and we haven't had user reports of anything of
this sort so far.
To investigate this, I would suggest that you have a look with strace
(as root) at what pasta is doing once the throughput decreases.
Packet captures with -p / --pcap won't show anything, because we don't
actually deal with packets on the spliced path.
Using --trace and a log file might help, but that will probably
decrease throughput so much that the "good" condition won't be
distinguishable from the "bad" one.
--
Stefano
* Re: Problematic throughput numbers
2024-09-25 16:36 Problematic throughput numbers Jon Maloy
2024-09-25 17:47 ` Stefano Brivio
@ 2024-09-26 1:59 ` David Gibson
2024-09-26 3:45 ` Stefano Brivio
1 sibling, 1 reply; 5+ messages in thread
From: David Gibson @ 2024-09-26 1:59 UTC (permalink / raw)
To: Jon Maloy; +Cc: passt-dev, sbrivio, dgibson, lvivier
On Wed, Sep 25, 2024 at 12:36:43PM -0400, Jon Maloy wrote:
> I made many runs with iperf3 ns->host, and the results are puzzling me.
> Over and over again, I see throughput practically collapse, with a
> decrease of two orders of magnitude.
> Just to make sure this wasn't something introduced by me, I went back to the
> master branch and disabled the SO_PEEK_OFF feature.
> The result was the same.
> The log below shows a typical run, but it is highly variable.
> Sometimes almost the whole series is in the 50-70 Gb/s range, and sometimes
> almost all in the 100-300 Mb/s range.
> When I added the kernel fix it didn't seem to make any difference.
>
> To me this is really worrying, and should be investigated.
Those numbers are certainly worrying. Like Stefano, though, I have
encountered this myself.
>
> ///jon
>
>
>
>
> pasta NS->host (master branch, SO_PEEK_OFF disabled)
>
> -----------------------------------------------------------
> Server listening on 5201 (test #2)
> -----------------------------------------------------------
> Accepted connection from 127.0.0.1, port 48354
> [ 5] local 127.0.0.1 port 5201 connected to 127.0.0.1 port 48360
> [ ID] Interval Transfer Bitrate
> [ 5] 0.00-1.00 sec 7.61 GBytes 65.3 Gbits/sec
> [ 5] 1.00-2.00 sec 7.59 GBytes 65.2 Gbits/sec
> [ 5] 2.00-3.00 sec 7.55 GBytes 64.8 Gbits/sec
> [ 5] 3.00-4.00 sec 7.69 GBytes 66.1 Gbits/sec
> [ 5] 4.00-5.00 sec 7.55 GBytes 64.8 Gbits/sec
> [ 5] 5.00-6.00 sec 7.60 GBytes 65.3 Gbits/sec
> [ 5] 6.00-7.00 sec 7.52 GBytes 64.6 Gbits/sec
> [ 5] 7.00-8.00 sec 1.02 GBytes 8.73 Gbits/sec
> [ 5] 8.00-9.00 sec 3.50 MBytes 29.4 Mbits/sec
> [ 5] 9.00-10.00 sec 17.2 MBytes 145 Mbits/sec
> [ 5] 10.00-11.00 sec 64.0 MBytes 537 Mbits/sec
> [ 5] 11.00-12.00 sec 24.1 MBytes 202 Mbits/sec
> [ 5] 12.00-13.00 sec 25.4 MBytes 213 Mbits/sec
> [ 5] 13.00-14.00 sec 24.9 MBytes 209 Mbits/sec
> [ 5] 14.00-15.00 sec 21.4 MBytes 179 Mbits/sec
> [ 5] 15.00-16.00 sec 32.0 MBytes 268 Mbits/sec
> [ 5] 16.00-17.00 sec 10.9 MBytes 91.2 Mbits/sec
> [ 5] 17.00-18.00 sec 27.6 MBytes 232 Mbits/sec
> [ 5] 18.00-19.00 sec 75.6 MBytes 634 Mbits/sec
> [ 5] 19.00-20.00 sec 21.1 MBytes 177 Mbits/sec
> [ 5] 20.00-21.00 sec 109 MBytes 912 Mbits/sec
> [ 5] 21.00-22.00 sec 23.9 MBytes 200 Mbits/sec
> [ 5] 22.00-23.00 sec 62.6 MBytes 525 Mbits/sec
> [ 5] 23.00-24.00 sec 34.5 MBytes 289 Mbits/sec
> [ 5] 24.00-25.00 sec 54.4 MBytes 456 Mbits/sec
> [ 5] 25.00-26.00 sec 14.1 MBytes 118 Mbits/sec
> [ 5] 26.00-27.00 sec 38.8 MBytes 325 Mbits/sec
> [ 5] 27.00-28.00 sec 95.0 MBytes 797 Mbits/sec
> [ 5] 28.00-29.00 sec 23.2 MBytes 195 Mbits/sec
> [ 5] 29.00-30.00 sec 71.9 MBytes 603 Mbits/sec
> [ 5] 30.00-31.00 sec 28.8 MBytes 241 Mbits/sec
> [ 5] 31.00-32.00 sec 34.8 MBytes 292 Mbits/sec
> [ 5] 32.00-33.00 sec 19.4 MBytes 163 Mbits/sec
> [ 5] 33.00-34.00 sec 39.1 MBytes 328 Mbits/sec
> [ 5] 34.00-35.00 sec 31.4 MBytes 263 Mbits/sec
> [ 5] 35.00-36.00 sec 28.2 MBytes 237 Mbits/sec
> [ 5] 36.00-37.00 sec 48.5 MBytes 407 Mbits/sec
> [ 5] 37.00-38.00 sec 23.4 MBytes 196 Mbits/sec
> [ 5] 38.00-39.00 sec 71.4 MBytes 599 Mbits/sec
> [ 5] 39.00-40.00 sec 41.4 MBytes 347 Mbits/sec
> [ 5] 40.00-41.00 sec 15.2 MBytes 128 Mbits/sec
> [ 5] 41.00-42.00 sec 34.1 MBytes 286 Mbits/sec
> [ 5] 42.00-43.00 sec 31.1 MBytes 261 Mbits/sec
> [ 5] 43.00-44.00 sec 61.5 MBytes 516 Mbits/sec
> [ 5] 44.00-45.00 sec 38.2 MBytes 321 Mbits/sec
> [ 5] 45.00-46.00 sec 34.2 MBytes 287 Mbits/sec
> [ 5] 46.00-47.00 sec 33.8 MBytes 283 Mbits/sec
> [ 5] 47.00-48.00 sec 7.12 MBytes 59.8 Mbits/sec
> [ 5] 48.00-49.00 sec 97.4 MBytes 817 Mbits/sec
> [ 5] 49.00-50.00 sec 13.6 MBytes 114 Mbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval Transfer Bitrate
> [ 5] 0.00-50.00 sec 55.7 GBytes 9.57 Gbits/sec
> receiver
> -----------------------------------------------------------
> Server listening on 5201 (test #3)
> -----------------------------------------------------------
>
--
David Gibson (he or they) | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you, not the other way
| around.
http://www.ozlabs.org/~dgibson
* Re: Problematic throughput numbers
2024-09-26 1:59 ` David Gibson
@ 2024-09-26 3:45 ` Stefano Brivio
2024-09-26 3:50 ` David Gibson
0 siblings, 1 reply; 5+ messages in thread
From: Stefano Brivio @ 2024-09-26 3:45 UTC (permalink / raw)
To: David Gibson; +Cc: Jon Maloy, passt-dev, dgibson, lvivier
On Thu, 26 Sep 2024 11:59:17 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:
> On Wed, Sep 25, 2024 at 12:36:43PM -0400, Jon Maloy wrote:
> > I made many runs with iperf3 ns->host, and the results are puzzling me.
> > Over and over again, I see throughput practically collapse, with a
> > decrease of two orders of magnitude.
> > Just to make sure this wasn't something introduced by me, I went back to the
> > master branch and disabled the SO_PEEK_OFF feature.
> > The result was the same.
> > The log below shows a typical run, but it is highly variable.
> > Sometimes almost the whole series is in the 50-70 Gb/s range, and sometimes
> > almost all in the 100-300 Mb/s range.
> > When I added the kernel fix it didn't seem to make any difference.
> >
> > To me this is really worrying, and should be investigated.
>
> Those numbers are certainly worrying. Like Stefano, though, I have
...not...?
> encountered this myself.
--
Stefano
* Re: Problematic throughput numbers
2024-09-26 3:45 ` Stefano Brivio
@ 2024-09-26 3:50 ` David Gibson
0 siblings, 0 replies; 5+ messages in thread
From: David Gibson @ 2024-09-26 3:50 UTC (permalink / raw)
To: Stefano Brivio; +Cc: Jon Maloy, passt-dev, dgibson, lvivier
On Thu, Sep 26, 2024 at 05:45:52AM +0200, Stefano Brivio wrote:
> On Thu, 26 Sep 2024 11:59:17 +1000
> David Gibson <david@gibson.dropbear.id.au> wrote:
>
> > On Wed, Sep 25, 2024 at 12:36:43PM -0400, Jon Maloy wrote:
> > > I made many runs with iperf3 ns->host, and the results are puzzling me.
> > > Over and over again, I see throughput practically collapse, with a
> > > decrease of two orders of magnitude.
> > > Just to make sure this wasn't something introduced by me, I went back to the
> > > master branch and disabled the SO_PEEK_OFF feature.
> > > The result was the same.
> > > The log below shows a typical run, but it is highly variable.
> > > Sometimes almost the whole series is in the 50-70 Gb/s range, and sometimes
> > > almost all in the 100-300 Mb/s range.
> > > When I added the kernel fix it didn't seem to make any difference.
> > >
> > > To me this is really worrying, and should be investigated.
> >
> > Those numbers are certainly worrying. Like Stefano, though, I have
>
> ...not...?
Oops, yes, that's what I meant.
>
> > encountered this myself.
>
--
David Gibson (he or they) | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you, not the other way
| around.
http://www.ozlabs.org/~dgibson
Thread overview:
2024-09-25 16:36 Problematic throughput numbers Jon Maloy
2024-09-25 17:47 ` Stefano Brivio
2024-09-26 1:59 ` David Gibson
2024-09-26 3:45 ` Stefano Brivio
2024-09-26 3:50 ` David Gibson
Code repositories for project(s) associated with this public inbox
https://passt.top/passt