From: Stefano Brivio
To: Jon Maloy
Subject: Re: [PATCH v2] tcp: Use SO_MEMINFO for accurate send buffer overhead accounting
Message-ID: <20260425110604.63ab62b1@elisabeth>
In-Reply-To: <20260424010615.253127-1-jmaloy@redhat.com>
References: <20260424010615.253127-1-jmaloy@redhat.com>
Organization: Red Hat
Cc: david@gibson.dropbear.id.au, passt-dev@passt.top
List-Id: Development discussion and patches for passt
Date: Sat, 25 Apr 2026 11:06:05 +0200 (CEST)

On Thu, 23 Apr 2026 21:06:15 -0400
Jon Maloy wrote:

> The TCP window advertised to the guest/container must balance two
> competing needs: large enough to trigger kernel socket buffer
> auto-tuning, but not so large that sendmsg() partially fails, causing
> retransmissions.
>
> The current approach uses the difference (SNDBUF_GET() - SIOCOUTQ), but
> SNDBUF_GET() returns a scaled value that only roughly accounts for
> per-skb overhead.
> The clamped_scale approximation doesn't accurately
> track the actual per-segment overhead, which can lead to both excessive
> retransmissions and reduced throughput.
>
> We now introduce the use of SO_MEMINFO to obtain SK_MEMINFO_SNDBUF and
> SK_MEMINFO_WMEM_QUEUED from the kernel. The latter is presented in the
> kernel's own accounting units, i.e. including the sk_buff overhead,
> and matches exactly what the kernel's own sk_stream_memory_free()
> function is using.
>
> When data is queued and the overhead ratio is observable, we calculate
> the per-segment overhead as (wmem_queued - sendq) / num_segments, then
> determine how many additional segments should fit in the remaining
> buffer space, considering the calculated per-mss overhead. This approach
> treats segments as discrete quantities, and produces a more accurate
> estimate of available buffer space than a linear scaling factor does.
>
> When the ratio cannot be observed, e.g. because the queue is empty or
> we are in a transient state, we fall back to the existing clamped_scale
> calculation (scaling between 100% and 75% of buffer capacity).
>
> When SO_MEMINFO succeeds, we also use SK_MEMINFO_SNDBUF directly to
> set SNDBUF, avoiding a separate SO_SNDBUF getsockopt() call.
>
> If SO_MEMINFO is unavailable, we fall back to the pre-existing
> SNDBUF_GET() - SIOCOUTQ calculation.
>
> Link: https://bugs.passt.top/show_bug.cgi?id=138

We should also add:

Link: https://github.com/containers/podman/issues/28219

> Signed-off-by: Jon Maloy
>
> ---
> v2: Updated according to feedback from Stefano. My own measurements
> indicate that this approach largely solves both the retransmission
> and throughput issues observed with the previous version.

I ran quite extensive tests (not yet with sub-millisecond RTTs though,
and we definitely should) and this looks like magic.
Not a retransmission, throughput converges very quickly, and in general
(especially with higher RTTs) the throughput is better than without
pasta (!). But I can't understand why. :)

I think the approach I suggested, considering segments as discrete,
should be slightly more accurate, but I can't see why it would make
such a big difference.

Is it about the "early" (!sendq) phase perhaps, where you now went back
to the previous approach instead of keeping a flat x / 3 * 4
proportion? Any clue?

A few preliminary comments below:

> ---
>  tcp.c      | 42 ++++++++++++++++++++++++++++++++++--------
>  tcp_conn.h |  2 +-
>  2 files changed, 35 insertions(+), 9 deletions(-)
>
> diff --git a/tcp.c b/tcp.c
> index 43b8fdb..2ba08fd 100644
> --- a/tcp.c
> +++ b/tcp.c
> @@ -295,6 +295,7 @@
>  #include
>
>  #include
> +#include
>
>  #include "checksum.h"
>  #include "util.h"
> @@ -1128,19 +1129,44 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
>  		new_wnd_to_tap = tinfo->tcpi_snd_wnd;
>  	} else {
>  		unsigned rtt_ms_ceiling = DIV_ROUND_UP(tinfo->tcpi_rtt, 1000);
> +		uint32_t mem[SK_MEMINFO_VARS];
> +		socklen_t mem_sl = sizeof(mem);
> +		int mss = MSS_GET(conn);
>  		uint32_t sendq;
> -		int limit;
> +		uint32_t limit;
>
>  		if (ioctl(s, SIOCOUTQ, &sendq)) {
>  			debug_perror("SIOCOUTQ on socket %i, assuming 0", s);
>  			sendq = 0;
>  		}
> -		tcp_get_sndbuf(conn);
>
> -		if ((int)sendq > SNDBUF_GET(conn)) /* Due to memory pressure? */

I think it would be good to preserve this comment because I found it
quite surprising that it would actually happen in my tests and that we
even had to check for it.
> -			limit = 0;
> -		else
> -			limit = SNDBUF_GET(conn) - (int)sendq;
> +		if (getsockopt(s, SOL_SOCKET, SO_MEMINFO, &mem, &mem_sl)) {
> +			tcp_get_sndbuf(conn);
> +			if (sendq > SNDBUF_GET(conn))
> +				limit = 0;
> +			else
> +				limit = SNDBUF_GET(conn) - sendq;

This functionally makes sense, but I wonder if we could structure
things differently because it's becoming a bit hard to follow:

- (long overdue even before your patch) turn this into a new function
- handle the exceptional case (SO_MEMINFO failing) more clearly as an
  exception
- decouple checks from calculations

Something like (I didn't finish or really think it through):

---
/**
 * tcp_wnd_from_sndbuf() - Calculate window value from available sending buffer
 * @tinfo:	tcp_info from kernel
 *
 * Return: window value to advertise, not scaled
 *
 * #syscalls ioctl
 */
uint32_t tcp_wnd_from_sndbuf(struct tcp_info *tinfo)
{
	unsigned rtt_ms_ceiling = DIV_ROUND_UP(tinfo->tcpi_rtt, 1000);
	uint32_t mem[SK_MEMINFO_VARS];
	socklen_t mem_sl = sizeof(mem);
	int mss = MSS_GET(conn);
	uint32_t sendq, sndbuf;
	uint32_t limit;

	if (getsockopt(s, SOL_SOCKET, SO_MEMINFO, &mem, &mem_sl)) {
		if (ioctl(s, SIOCOUTQ, &sendq)) {
			debug_perror("SIOCOUTQ on socket %i, assuming 0", s);
			sendq = 0;
		}

		tcp_get_sndbuf(conn);
		if (sendq > SNDBUF_GET(conn)) /* Due to memory pressure? */
			limit = 0;
		else
			limit = SNDBUF_GET(conn) - sendq;

		goto clamp;
	}

	sndbuf = mem[SK_MEMINFO_SNDBUF];
	sendq = mem[SK_MEMINFO_WMEM_QUEUED];

	if (sendq > sndbuf)
		limit = 0;

	...

clamp:
	...
}

...

int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
			  bool force_seq, struct tcp_info_linux *tinfo)
{
	...
	if ((conn->flags & LOCAL) || tcp_rtt_dst_low(conn))
		new_wnd_to_tap = tinfo->tcpi_snd_wnd;
	else
		new_wnd_to_tap = tcp_wnd_from_sndbuf(tinfo);
	...
}
---

> +		} else {
> +			uint32_t sb = mem[SK_MEMINFO_SNDBUF];
> +			uint32_t wq = mem[SK_MEMINFO_WMEM_QUEUED];
> +			uint32_t cs = clamped_scale(sb, sb, SNDBUF_SMALL,
> +						    SNDBUF_BIG, 75);
> +
> +			SNDBUF_SET(conn, MIN(INT_MAX, cs));
> +
> +			if (wq > sb) {
> +				limit = 0;
> +			} else if (!sendq || wq <= sendq || !mss) {
> +				limit = SNDBUF_GET(conn) - sendq;

Isn't the new 'wq' value (which I would call 'sendq' if doable, while
not fetching SIOCOUTQ) more accurate than sendq anyway? That is,
shouldn't we rather use SNDBUF_GET(conn) - wq here?

> +			} else {
> +				uint32_t nsegs = MAX(sendq / mss, 1);
> +				uint32_t overhead = (wq - sendq) / nsegs;
> +				uint32_t remaining = sb - wq;
> +
> +				nsegs = remaining / (mss + overhead);
> +				limit = nsegs * mss;

...magic. Maybe it's really this.

> +			}
> +		}
>
>  		/* If the sender uses mechanisms to prevent Silly Window
>  		 * Syndrome (SWS, described in RFC 813 Section 3) it's critical
> @@ -1168,11 +1194,11 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
>  		 * but we won't send enough to fill one because we're stuck
>  		 * with pending data in the outbound queue
>  		 */
> -		if (limit < MSS_GET(conn) && sendq &&
> +		if (limit < (uint32_t)MSS_GET(conn) && sendq &&
>  		    tinfo->tcpi_last_data_sent < rtt_ms_ceiling * 10)
>  			limit = 0;
>
> -		new_wnd_to_tap = MIN((int)tinfo->tcpi_snd_wnd, limit);
> +		new_wnd_to_tap = MIN(tinfo->tcpi_snd_wnd, limit);
>  	}
>
>  	new_wnd_to_tap = MIN(new_wnd_to_tap, MAX_WINDOW);
> diff --git a/tcp_conn.h b/tcp_conn.h
> index 6985426..9f5bee0 100644
> --- a/tcp_conn.h
> +++ b/tcp_conn.h
> @@ -98,7 +98,7 @@ struct tcp_tap_conn {
>  #define SNDBUF_BITS		24
>  	unsigned int		sndbuf		:SNDBUF_BITS;
>  #define SNDBUF_SET(conn, bytes)	(conn->sndbuf = ((bytes) >> (32 - SNDBUF_BITS)))
> -#define SNDBUF_GET(conn)	(conn->sndbuf << (32 - SNDBUF_BITS))
> +#define SNDBUF_GET(conn)	((uint32_t)(conn->sndbuf << (32 - SNDBUF_BITS)))
>
>  	uint8_t			seq_dup_ack_approx;

-- 
Stefano