Date: Thu, 28 Sep 2023 11:51:50 +1000
From: David Gibson
To: Stefano Brivio
Cc: Matej Hrica, passt-dev@passt.top
Subject: Re: [PATCH RFT 3/5] tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag
In-Reply-To: <20230927190550.65cebdce@elisabeth>
References: <20230922220610.58767-1-sbrivio@redhat.com>
 <20230922220610.58767-4-sbrivio@redhat.com>
 <20230927190550.65cebdce@elisabeth>

On Wed, Sep 27, 2023 at 07:05:50PM +0200, Stefano Brivio wrote:
> On Mon, 25 Sep 2023 14:21:47 +1000
> David Gibson wrote:
> 
> > On Mon, Sep 25, 2023 at 02:09:41PM +1000, David Gibson wrote:
> > > On Sat, Sep 23, 2023 at 12:06:08AM +0200, Stefano Brivio wrote:
> > > > It looks like we need it as a workaround for this situation, readily
> > > > reproducible at least with a 6.5 Linux kernel, with default rmem_max
> > > > and wmem_max values:
> > > >
> > > > - an iperf3 client on the host sends about 160 KiB, typically
> > > >   segmented into five frames by passt. We read this data using
> > > >   MSG_PEEK
> > > >
> > > > - the iperf3 server on the guest starts receiving
> > > >
> > > > - meanwhile, the host kernel advertised a zero-sized window to the
> > > >   receiver, as expected
> > > >
> > > > - eventually, the guest acknowledges all the data sent so far, and
> > > >   we drop it from the buffer, courtesy of tcp_sock_consume(), using
> > > >   recv() with MSG_TRUNC
> > > >
> > > > - the client, however, doesn't get an updated window value, and
> > > >   even keepalive packets are answered with zero-window segments,
> > > >   until the connection is closed
> > > >
> > > > It looks like dropping data from a socket using MSG_TRUNC doesn't
> > > > cause a recalculation of the window, which would be expected as a
> > > > result of any receiving operation that invalidates data on a buffer
> > > > (that is, not with MSG_PEEK).
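In plain socket calls, the peek-then-discard pattern described here
amounts to something like the sketch below (illustrative only, not
passt's actual code; the helper names, the flags and the lack of error
handling are simplifications):

#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Peek at pending data without consuming it: the bytes stay in the
 * kernel's receive queue, so the advertised window stays shrunk.
 */
static ssize_t sock_peek(int s, void *buf, size_t len)
{
	return recv(s, buf, len, MSG_PEEK);
}

/* Discard @len bytes once the guest has acknowledged them.  MSG_TRUNC
 * drops the data without copying it anywhere but, as described above,
 * doesn't seem to trigger a fresh window advertisement.
 */
static ssize_t sock_consume(int s, size_t len)
{
	return recv(s, NULL, len, MSG_TRUNC);
}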
> > > >
> > > > Strangely enough, setting TCP_WINDOW_CLAMP via setsockopt(), even to
> > > > the previous value we clamped to, forces a recalculation of the
> > > > window which is advertised to the guest.
> > > >
> > > > I couldn't quite confirm this issue by following all the possible
> > > > code paths in the kernel yet. If confirmed, this should be fixed in
> > > > the kernel, but meanwhile this workaround looks robust to me (and it
> > > > will be needed for backward compatibility anyway).
> > >
> > > So, I tested this, and things got a bit complicated.
> > >
> > > First, I reproduced the "read side" problem by setting
> > > net.core.rmem_max to 256 KiB while setting net.core.wmem_max to
> > > 16 MiB.  The "160 KiB" stall happened almost every time. Applying
> > > this patch appears to fix it completely, getting GiB/s throughput
> > > consistently.  So, yah.
> > >
> > > Then I tried reproducing it differently: by setting both
> > > net.core.rmem_max and net.core.wmem_max to 16 MiB, but setting
> > > SO_RCVBUF to 128 KiB explicitly in tcp_sock_set_bufsize() (which
> > > actually results in a 256 KiB buffer, because of the kernel's weird
> > > interpretation).
> > >
> > > With the SO_RCVBUF clamp and without this patch, I don't get the
> > > 160 KiB stall consistently any more. What I *do* get, nearly every
> > > time - but not *every* time - is slow transfers, ~40 Mbps vs.
> > > ~12 Gbps.  Sometimes it stalls after several seconds. The stall is
> > > slightly different from the 160 KiB stall, though: the 160 KiB stall
> > > shows 0 bytes transferred on both sides. With the RCVBUF stall I get
> > > a trickle of bytes (620 bytes/s) on the receiver/guest side, with
> > > mostly 0 bytes per interval on the sender, but occasionally an
> > > interval with several hundred KB.
> > >
> > > That is, it seems like there's a buffer somewhere that's very slowly
> > > draining into the receiver, then getting topped up in an instant once
> > > it gets low enough.
> > >
> > > When I have both this patch and the RCVBUF clamp, I don't seem to be
> > > able to reproduce the trickle-stall anymore, but I still get the slow
> > > transfer speeds most, but not every, time. Sometimes, but only
> > > rarely, I do seem to still get a complete stall (0 bytes on both
> > > sides).
> >
> > I noted another oddity. With this patch, _no_ RCVBUF clamp and 16 MiB
> > wmem_max fixed, things seem to behave much better with a small
> > rmem_max than with a large one. With rmem_max=256KiB I get pretty
> > consistent 37 Gbps throughput, and iperf3 -c reports 0 retransmits.
> >
> > With rmem_max=16MiB, the throughput fluctuates from second to second
> > between ~3 Gbps and ~30 Gbps. The client reports retransmits in some
> > intervals, which is pretty weird over lo.
> >
> > Urgh... so many variables.
> 
> This is probably due to the receive buffer getting bigger than
> TCP_FRAMES_MEM * MSS4 (or MSS6), so the amount of data we can read in
> one shot from the sockets isn't optimally sized anymore.

Hm, ok.  Not really sure why that would cause such nasty behaviour.

> We should have a look at the difference between not clamping at all
> (and if that yields the same throughput, great), and clamping to, I
> guess, TCP_FRAMES_MEM * MIN(MSS4, MSS6).

Right.  We currently ask for the largest RCVBUF we can get, which
might not really be what we want.
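For illustration, clamping along those lines might look roughly like
the sketch below (not the actual patch; TCP_FRAMES_MEM here is a
stand-in value, the mss argument is whatever the connection's MSS works
out to, and error handling is omitted):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#define TCP_FRAMES_MEM	128	/* stand-in for passt's frame batch size */

/* Clamp the window to what we can read in one batch; mss would be
 * MIN(MSS4, MSS6) per the suggestion above.  Re-applying the clamp --
 * even with an unchanged value -- also appears to make the kernel
 * recompute the window it advertises, which is the workaround this
 * patch relies on.
 */
static int tcp_clamp_window(int s, unsigned int mss)
{
	int clamp = TCP_FRAMES_MEM * (int)mss;

	return setsockopt(s, IPPROTO_TCP, TCP_WINDOW_CLAMP,
			  &clamp, sizeof(clamp));
}

(For reference, the "weird interpretation" mentioned earlier is, as far
as I know, just the kernel doubling whatever value is passed to
setsockopt(SO_RCVBUF) to leave room for bookkeeping overhead, which is
why asking for 128 KiB ends up as a 256 KiB buffer.)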
-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson