public inbox for passt-dev@passt.top
* [PATCH v2] tcp: Use SO_MEMINFO for accurate send buffer overhead accounting
@ 2026-04-24  1:06 Jon Maloy
From: Jon Maloy @ 2026-04-24  1:06 UTC (permalink / raw)
  To: sbrivio, david, jmaloy, passt-dev

The TCP window advertised to the guest/container must balance two
competing needs: it must be large enough to trigger kernel socket
buffer auto-tuning, but not so large that sendmsg() partially fails
and causes retransmissions.

The current approach uses the difference (SNDBUF_GET() - SIOCOUTQ), but
SNDBUF_GET() returns a scaled value that only roughly accounts for
per-skb overhead. The clamped_scale approximation doesn't accurately
track the actual per-segment overhead, which can lead to both excessive
retransmissions and reduced throughput.

We now use SO_MEMINFO to obtain SK_MEMINFO_SNDBUF and
SK_MEMINFO_WMEM_QUEUED from the kernel. The latter is reported in the
kernel's own accounting units, i.e. including the sk_buff overhead, and
matches exactly what the kernel's own sk_stream_memory_free() function
uses.

When data is queued and the overhead ratio is observable, we calculate
the per-segment overhead as (wmem_queued - sendq) / num_segments, then
determine how many additional segments should fit in the remaining
buffer space, considering the calculated per-mss overhead. This approach
treats segments as discrete quantities, and produces a more accurate
estimate of available buffer space than a linear scaling factor does.

When the ratio cannot be observed, e.g. because the queue is empty or
we are in a transient state, we fall back to the existing clamped_scale
calculation (scaling between 100% and 75% of buffer capacity).

When SO_MEMINFO succeeds, we also use SK_MEMINFO_SNDBUF directly to
set SNDBUF, avoiding a separate SO_SNDBUF getsockopt() call.

If SO_MEMINFO is unavailable, we fall back to the pre-existing
SNDBUF_GET() - SIOCOUTQ calculation.

Link: https://bugs.passt.top/show_bug.cgi?id=138
Signed-off-by: Jon Maloy <jmaloy@redhat.com>

---
v2: Updated according to feedback from Stefano. My own measurements
    indicate that this approach largely solves both the retransmission
    and throughput issues observed with the previous version.
---
 tcp.c      | 42 ++++++++++++++++++++++++++++++++++--------
 tcp_conn.h |  2 +-
 2 files changed, 35 insertions(+), 9 deletions(-)

diff --git a/tcp.c b/tcp.c
index 43b8fdb..2ba08fd 100644
--- a/tcp.c
+++ b/tcp.c
@@ -295,6 +295,7 @@
 #include <arpa/inet.h>
 
 #include <linux/sockios.h>
+#include <linux/sock_diag.h>
 
 #include "checksum.h"
 #include "util.h"
@@ -1128,19 +1129,44 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 		new_wnd_to_tap = tinfo->tcpi_snd_wnd;
 	} else {
 		unsigned rtt_ms_ceiling = DIV_ROUND_UP(tinfo->tcpi_rtt, 1000);
+		uint32_t mem[SK_MEMINFO_VARS];
+		socklen_t mem_sl = sizeof(mem);
+		int mss = MSS_GET(conn);
 		uint32_t sendq;
-		int limit;
+		uint32_t limit;
 
 		if (ioctl(s, SIOCOUTQ, &sendq)) {
 			debug_perror("SIOCOUTQ on socket %i, assuming 0", s);
 			sendq = 0;
 		}
-		tcp_get_sndbuf(conn);
 
-		if ((int)sendq > SNDBUF_GET(conn)) /* Due to memory pressure? */
-			limit = 0;
-		else
-			limit = SNDBUF_GET(conn) - (int)sendq;
+		if (getsockopt(s, SOL_SOCKET, SO_MEMINFO, &mem, &mem_sl)) {
+			tcp_get_sndbuf(conn);
+			if (sendq > SNDBUF_GET(conn))
+				limit = 0;
+			else
+				limit = SNDBUF_GET(conn) - sendq;
+		} else {
+			uint32_t sb = mem[SK_MEMINFO_SNDBUF];
+			uint32_t wq = mem[SK_MEMINFO_WMEM_QUEUED];
+			uint32_t cs = clamped_scale(sb, sb, SNDBUF_SMALL,
+						    SNDBUF_BIG, 75);
+
+			SNDBUF_SET(conn, MIN(INT_MAX, cs));
+
+			if (wq > sb) {
+				limit = 0;
+			} else if (!sendq || wq <= sendq || !mss) {
+				limit = SNDBUF_GET(conn) - sendq;
+			} else {
+				uint32_t nsegs = MAX(sendq / mss, 1);
+				uint32_t overhead = (wq - sendq) / nsegs;
+				uint32_t remaining = sb - wq;
+
+				nsegs = remaining / (mss + overhead);
+				limit = nsegs * mss;
+			}
+		}
 
 		/* If the sender uses mechanisms to prevent Silly Window
 		 * Syndrome (SWS, described in RFC 813 Section 3) it's critical
@@ -1168,11 +1194,11 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 		 *   but we won't send enough to fill one because we're stuck
 		 *   with pending data in the outbound queue
 		 */
-		if (limit < MSS_GET(conn) && sendq &&
+		if (limit < (uint32_t)MSS_GET(conn) && sendq &&
 		    tinfo->tcpi_last_data_sent < rtt_ms_ceiling * 10)
 			limit = 0;
 
-		new_wnd_to_tap = MIN((int)tinfo->tcpi_snd_wnd, limit);
+		new_wnd_to_tap = MIN(tinfo->tcpi_snd_wnd, limit);
 	}
 
 	new_wnd_to_tap = MIN(new_wnd_to_tap, MAX_WINDOW);
diff --git a/tcp_conn.h b/tcp_conn.h
index 6985426..9f5bee0 100644
--- a/tcp_conn.h
+++ b/tcp_conn.h
@@ -98,7 +98,7 @@ struct tcp_tap_conn {
 #define SNDBUF_BITS		24
 	unsigned int	sndbuf		:SNDBUF_BITS;
 #define SNDBUF_SET(conn, bytes)	(conn->sndbuf = ((bytes) >> (32 - SNDBUF_BITS)))
-#define SNDBUF_GET(conn)	(conn->sndbuf << (32 - SNDBUF_BITS))
+#define SNDBUF_GET(conn)	((uint32_t)(conn->sndbuf << (32 - SNDBUF_BITS)))
 
 	uint8_t		seq_dup_ack_approx;
 
-- 
2.52.0

