From: David Gibson <david@gibson.dropbear.id.au>
To: Stefano Brivio <sbrivio@redhat.com>
Cc: Matej Hrica <mhrica@redhat.com>, passt-dev@passt.top
Subject: Re: [PATCH RFT 3/5] tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag
Date: Thu, 28 Sep 2023 11:51:50 +1000
Message-ID: <ZRTcNgaPjRTDw1bd@zatzit>
In-Reply-To: <20230927190550.65cebdce@elisabeth>
On Wed, Sep 27, 2023 at 07:05:50PM +0200, Stefano Brivio wrote:
> On Mon, 25 Sep 2023 14:21:47 +1000
> David Gibson <david@gibson.dropbear.id.au> wrote:
>
> > On Mon, Sep 25, 2023 at 02:09:41PM +1000, David Gibson wrote:
> > > On Sat, Sep 23, 2023 at 12:06:08AM +0200, Stefano Brivio wrote:
> > > > It looks like we need it as workaround for this situation, readily
> > > > reproducible at least with a 6.5 Linux kernel, with default rmem_max
> > > > and wmem_max values:
> > > >
> > > > - an iperf3 client on the host sends about 160 KiB, typically
> > > > segmented into five frames by passt. We read this data using
> > > > MSG_PEEK
> > > >
> > > > - the iperf3 server on the guest starts receiving
> > > >
> > > > - meanwhile, the host kernel advertises a zero-sized window to the
> > > >   sender (the iperf3 client), as expected
> > > >
> > > > - eventually, the guest acknowledges all the data sent so far, and
> > > > we drop it from the buffer, courtesy of tcp_sock_consume(), using
> > > > recv() with MSG_TRUNC
> > > >
> > > > - the client, however, doesn't get an updated window value, and
> > > > even keepalive packets are answered with zero-window segments,
> > > > until the connection is closed
> > > >
> > > > It looks like dropping data from a socket using MSG_TRUNC doesn't
> > > > cause a recalculation of the window, which would be expected as a
> > > > result of any receiving operation that invalidates data on a buffer
> > > > (that is, not with MSG_PEEK).
> > > >
> > > > Strangely enough, setting TCP_WINDOW_CLAMP via setsockopt(), even to
> > > > the previous value we clamped to, forces a recalculation of the
> > > > window which is advertised to the guest.
> > > >
> > > > I couldn't quite confirm this issue yet by following all the
> > > > possible code paths in the kernel. If confirmed, this should be fixed in
> > > > the kernel, but meanwhile this workaround looks robust to me (and it
> > > > will be needed for backward compatibility anyway).
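For reference, the workaround described above boils down to something
like this (a minimal sketch, not the actual passt code; the function
name, the clamp value and the omitted error handling are illustrative):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch of the workaround: after dropping acknowledged data with
     * MSG_TRUNC, re-apply TCP_WINDOW_CLAMP -- even with the value it
     * already had -- so that the kernel recalculates and re-advertises
     * the receive window.  Error handling omitted for brevity. */
    static void consume_and_kick_window(int s, size_t acked, int clamp)
    {
            /* Discard data the guest has acknowledged, without copying it */
            recv(s, NULL, acked, MSG_DONTWAIT | MSG_TRUNC);

            /* Force a window recalculation on the host-side socket */
            setsockopt(s, IPPROTO_TCP, TCP_WINDOW_CLAMP, &clamp, sizeof(clamp));
    }
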
> > >
> > > So, I tested this, and things got a bit complicated.
> > >
> > > First, I reproduced the "read side" problem by setting
> > > net.core.rmem_max to 256kiB while setting net.core.wmem_max to 16MiB.
> > > The "160kiB" stall happened almost every time. Applying this patch
> > > appears to fix it completely, getting GiB/s throughput consistently.
> > > So, yah.
> > >
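(For anyone trying to reproduce this: the setup above corresponds to
roughly the following on the host -- the exact commands are just an
illustration, values in bytes:)

    sysctl -w net.core.rmem_max=262144    # 256kiB
    sysctl -w net.core.wmem_max=16777216  # 16MiB
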
> > > Then I tried reproducing it differently: by setting both
> > > net.core.rmem_max and net.core.wmem_max to 16MiB, but setting
> > > SO_RCVBUF to 128kiB explicitly in tcp_sock_set_bufsize() (which
> > > actually results in a 256kiB buffer, because the kernel doubles the
> > > requested value).
> > >
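In code, that clamp would look roughly like this -- a sketch only, not
the actual tcp_sock_set_bufsize():

    #include <stdio.h>
    #include <sys/socket.h>

    /* Illustrative only: request a 128kiB receive buffer.  The kernel
     * stores double the requested value (to account for bookkeeping
     * overhead), so getsockopt(SO_RCVBUF) reports 256kiB afterwards. */
    static void clamp_rcvbuf(int s)
    {
            int v = 128 * 1024;

            if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &v, sizeof(v)))
                    perror("SO_RCVBUF");
    }
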
> > > With the SO_RCVBUF clamp and without this patch, I don't get the
> > > 160kiB stall consistently any more. What I *do* get, nearly every
> > > time - but not *every* time - is slow transfers, ~40Mbps vs. ~12Gbps.
> > > Sometimes it stalls after several seconds. The stall is slightly
> > > different from the 160kiB stall, though: the 160kiB stall shows
> > > 0 bytes transferred on both sides, while with the RCVBUF stall I get
> > > a trickle of bytes (620 bytes/s) on the receiver/guest side, with
> > > mostly 0 bytes per interval on the sender but occasionally an
> > > interval with several hundred KB.
> > >
> > > That is, it seems like there's a buffer somewhere that's very slowly
> > > draining into the receiver, then getting topped up in an instant once
> > > it gets low enough.
> > >
> > > When I have both this patch and the RCVBUF clamp, I don't seem to be
> > > able to reproduce the trickle-stall anymore, but I still get the slow
> > > transfer speeds most of the time, though not every time. Sometimes,
> > > but only rarely, I do seem to still get a complete stall (0 bytes on
> > > both sides).
> >
> > I noted another oddity. With this patch applied, _no_ RCVBUF clamp,
> > and wmem_max fixed at 16MiB, things seem to behave much better with a
> > small rmem_max than with a large one. With rmem_max=256KiB I get
> > pretty consistent 37Gbps throughput and iperf3 -c reports 0 retransmits.
> >
> > With rmem_max=16MiB, the throughput fluctuates from second to second
> > between ~3Gbps and ~30Gbps. The client reports retransmits in some
> > intervals, which is pretty weird over lo.
> >
> > Urgh... so many variables.
>
> This is probably due to the receive buffer getting bigger than
> TCP_FRAMES_MEM * MSS4 (or MSS6), so the amount of data we can read in
> one shot from the sockets isn't optimally sized anymore.
Hm, ok. Not really sure why that would cause such nasty behaviour.
> We should have a look at the difference between not clamping at all
> (and if that yields the same throughput, great), and clamping to, I
> guess, TCP_FRAMES_MEM * MIN(MSS4, MSS6).
Right. We currently ask for the largest RCVBUF we can get, which
might not really be what we want.
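Something like the following is what I'd compare against -- a sketch
only, with TCP_FRAMES_MEM, MSS4 and MSS6 standing in for the passt-side
values (the placeholder numbers below are purely illustrative), and with
the open question of whether the clamp belongs on SO_RCVBUF, on
TCP_WINDOW_CLAMP, or both:

    #include <stdio.h>
    #include <sys/socket.h>

    /* Placeholder values for illustration; in passt these come from the
     * build-time configuration and the per-connection MSS */
    #define TCP_FRAMES_MEM  128
    #define MSS4            1460
    #define MSS6            1440
    #define MIN(a, b)       ((a) < (b) ? (a) : (b))

    /* Sketch: instead of asking for the largest RCVBUF we can get, limit
     * it to what one batch of reads can move towards the guest in one
     * shot */
    static void clamp_rcvbuf_to_batch(int s)
    {
            int lim = TCP_FRAMES_MEM * MIN(MSS4, MSS6);

            if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &lim, sizeof(lim)))
                    perror("SO_RCVBUF");
    }
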
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson