From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jon Maloy <jmaloy@redhat.com>
To: sbrivio@redhat.com, david@gibson.dropbear.id.au, jmaloy@redhat.com,
	passt-dev@passt.top
Subject: [PATCH v2] tcp: Use SO_MEMINFO for accurate send buffer overhead accounting
Date: Thu, 23 Apr 2026 21:06:15 -0400
Message-ID: <20260424010615.253127-1-jmaloy@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit

The TCP window advertised to the guest/container must balance two
competing needs: it must be large enough to trigger kernel socket
buffer auto-tuning, but not so large that sendmsg() partially fails,
causing retransmissions.

The current approach uses the difference (SNDBUF_GET() - SIOCOUTQ),
but SNDBUF_GET() returns a scaled value that only roughly accounts for
per-skb overhead. The clamped_scale() approximation doesn't accurately
track the actual per-segment overhead, which can lead to both
excessive retransmissions and reduced throughput.

We now introduce the use of SO_MEMINFO to obtain SK_MEMINFO_SNDBUF and
SK_MEMINFO_WMEM_QUEUED from the kernel.
The latter is reported in the kernel's own accounting units, i.e.
including the sk_buff overhead, and matches exactly what the kernel's
own sk_stream_memory_free() function uses.

When data is queued and the overhead ratio is observable, we calculate
the per-segment overhead as (wmem_queued - sendq) / num_segments, then
determine how many additional segments fit in the remaining buffer
space, given the calculated per-MSS overhead. This approach treats
segments as discrete quantities, and produces a more accurate estimate
of available buffer space than a linear scaling factor does.

When the ratio cannot be observed, e.g. because the queue is empty or
we are in a transient state, we fall back to the existing
clamped_scale() calculation (scaling between 100% and 75% of buffer
capacity).

When SO_MEMINFO succeeds, we also use SK_MEMINFO_SNDBUF directly to
set SNDBUF, avoiding a separate SO_SNDBUF getsockopt() call. If
SO_MEMINFO is unavailable, we fall back to the pre-existing
SNDBUF_GET() - SIOCOUTQ calculation.

Link: https://bugs.passt.top/show_bug.cgi?id=138
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
---
v2: Updated according to feedback from Stefano. My own measurements
    indicate that this approach largely solves both the retransmission
    and throughput issues observed with the previous version.
---
 tcp.c      | 42 ++++++++++++++++++++++++++++++++++--------
 tcp_conn.h |  2 +-
 2 files changed, 35 insertions(+), 9 deletions(-)

diff --git a/tcp.c b/tcp.c
index 43b8fdb..2ba08fd 100644
--- a/tcp.c
+++ b/tcp.c
@@ -295,6 +295,7 @@
 #include
 #include
+#include

 #include "checksum.h"
 #include "util.h"
@@ -1128,19 +1129,44 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 		new_wnd_to_tap = tinfo->tcpi_snd_wnd;
 	} else {
 		unsigned rtt_ms_ceiling = DIV_ROUND_UP(tinfo->tcpi_rtt, 1000);
+		uint32_t mem[SK_MEMINFO_VARS];
+		socklen_t mem_sl = sizeof(mem);
+		int mss = MSS_GET(conn);
 		uint32_t sendq;
-		int limit;
+		uint32_t limit;

 		if (ioctl(s, SIOCOUTQ, &sendq)) {
 			debug_perror("SIOCOUTQ on socket %i, assuming 0", s);
 			sendq = 0;
 		}

-		tcp_get_sndbuf(conn);
-		if ((int)sendq > SNDBUF_GET(conn))	/* Due to memory pressure? */
-			limit = 0;
-		else
-			limit = SNDBUF_GET(conn) - (int)sendq;
+		if (getsockopt(s, SOL_SOCKET, SO_MEMINFO, &mem, &mem_sl)) {
+			tcp_get_sndbuf(conn);
+			if (sendq > SNDBUF_GET(conn))
+				limit = 0;
+			else
+				limit = SNDBUF_GET(conn) - sendq;
+		} else {
+			uint32_t sb = mem[SK_MEMINFO_SNDBUF];
+			uint32_t wq = mem[SK_MEMINFO_WMEM_QUEUED];
+			uint32_t cs = clamped_scale(sb, sb, SNDBUF_SMALL,
+						    SNDBUF_BIG, 75);
+
+			SNDBUF_SET(conn, MIN(INT_MAX, cs));
+
+			if (wq > sb) {
+				limit = 0;
+			} else if (!sendq || wq <= sendq || !mss) {
+				limit = SNDBUF_GET(conn) - sendq;
+			} else {
+				uint32_t nsegs = MAX(sendq / mss, 1);
+				uint32_t overhead = (wq - sendq) / nsegs;
+				uint32_t remaining = sb - wq;
+
+				nsegs = remaining / (mss + overhead);
+				limit = nsegs * mss;
+			}
+		}

 		/* If the sender uses mechanisms to prevent Silly Window
 		 * Syndrome (SWS, described in RFC 813 Section 3) it's critical
@@ -1168,11 +1194,11 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 		 * but we won't send enough to fill one because we're stuck
 		 * with pending data in the outbound queue
 		 */
-		if (limit < MSS_GET(conn) && sendq &&
+		if (limit < (uint32_t)MSS_GET(conn) && sendq &&
 		    tinfo->tcpi_last_data_sent < rtt_ms_ceiling * 10)
 			limit = 0;

-		new_wnd_to_tap = MIN((int)tinfo->tcpi_snd_wnd, limit);
+		new_wnd_to_tap = MIN(tinfo->tcpi_snd_wnd, limit);
 	}

 	new_wnd_to_tap = MIN(new_wnd_to_tap, MAX_WINDOW);
diff --git a/tcp_conn.h b/tcp_conn.h
index 6985426..9f5bee0 100644
--- a/tcp_conn.h
+++ b/tcp_conn.h
@@ -98,7 +98,7 @@ struct tcp_tap_conn {
 #define SNDBUF_BITS 24
 	unsigned int sndbuf :SNDBUF_BITS;

 #define SNDBUF_SET(conn, bytes)	(conn->sndbuf = ((bytes) >> (32 - SNDBUF_BITS)))
-#define SNDBUF_GET(conn)	(conn->sndbuf << (32 - SNDBUF_BITS))
+#define SNDBUF_GET(conn)	((uint32_t)(conn->sndbuf << (32 - SNDBUF_BITS)))

 	uint8_t seq_dup_ack_approx;
-- 
2.52.0