From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jon Maloy <jmaloy@redhat.com>
To: sbrivio@redhat.com, david@gibson.dropbear.id.au, jmaloy@redhat.com,
	passt-dev@passt.top
Subject: [PATCH] tcp: Use SO_MEMINFO for accurate send buffer overhead accounting
Date: Tue, 21 Apr 2026 22:23:42 -0400
Message-ID: <20260422022342.72046-1-jmaloy@redhat.com>
List-Id: Development discussion and patches for passt

The TCP window advertised to the guest/container must balance two
competing needs: it must be large enough to trigger kernel socket
buffer auto-tuning, but not so large that sendmsg() fails partially,
causing retransmissions.

The current approach uses the difference SNDBUF_GET() - SIOCOUTQ, but
these two values are expressed in different units: SO_SNDBUF includes
the buffer overhead (sk_buff head, alignment, skb_shared_info), while
SIOCOUTQ returns only the actual payload bytes.
The clamped_scale() factor of 75% is a rough approximation of this
overhead, but it is inaccurate: too generous for large buffers, causing
retransmissions at higher RTTs, and too conservative for small ones,
inhibiting auto-tuning.

We now use SO_MEMINFO to obtain SK_MEMINFO_SNDBUF and
SK_MEMINFO_WMEM_QUEUED from the kernel. Both are expressed in the
kernel's own accounting units, i.e. including the per-skb overhead, and
match exactly what the kernel's own sk_stream_memory_free() uses.

Combined with the payload byte count reported by SIOCOUTQ, the observed
overhead ratio self-calibrates to whatever gso_segs, cache line size
and sk_buff layout the kernel happens to use, independently of the
architecture.

When data is queued and the overhead ratio is observable
(wmem_queued > sendq), the available payload window is calculated as:

    (sk_sndbuf - wmem_queued) * sendq / wmem_queued

When the ratio cannot be observed, e.g. because the queue is empty or
we are in a transient state, we fall back to 75% of the remaining
buffer capacity, as before. If SO_MEMINFO is unavailable, we fall back
to the pre-existing SNDBUF_GET() - SIOCOUTQ calculation.
Link: https://bugs.passt.top/show_bug.cgi?id=138
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
---
 tcp.c  | 33 ++++++++++++++++++++++++++-------
 util.c |  1 +
 2 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/tcp.c b/tcp.c
index 43b8fdb..3b47a3b 100644
--- a/tcp.c
+++ b/tcp.c
@@ -295,6 +295,7 @@
 #include
 #include
+#include

 #include "checksum.h"
 #include "util.h"
@@ -1128,19 +1129,37 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 		new_wnd_to_tap = tinfo->tcpi_snd_wnd;
 	} else {
 		unsigned rtt_ms_ceiling = DIV_ROUND_UP(tinfo->tcpi_rtt, 1000);
+		uint32_t mem[SK_MEMINFO_VARS];
+		socklen_t mem_sl;
 		uint32_t sendq;
-		int limit;
+		uint32_t sndbuf;
+		uint32_t limit;

 		if (ioctl(s, SIOCOUTQ, &sendq)) {
 			debug_perror("SIOCOUTQ on socket %i, assuming 0", s);
 			sendq = 0;
 		}

 		tcp_get_sndbuf(conn);
+		sndbuf = SNDBUF_GET(conn);

-		if ((int)sendq > SNDBUF_GET(conn)) /* Due to memory pressure? */
-			limit = 0;
-		else
-			limit = SNDBUF_GET(conn) - (int)sendq;
+		mem_sl = sizeof(mem);
+		if (getsockopt(s, SOL_SOCKET, SO_MEMINFO, &mem, &mem_sl)) {
+			if (sendq > sndbuf)
+				limit = 0;
+			else
+				limit = sndbuf - sendq;
+		} else {
+			uint32_t sb = mem[SK_MEMINFO_SNDBUF];
+			uint32_t wq = mem[SK_MEMINFO_WMEM_QUEUED];
+
+			if (wq > sb)
+				limit = 0;
+			else if (!sendq || wq <= sendq)
+				limit = (sb - wq) * 3 / 4;
+			else
+				limit = (uint64_t)(sb - wq) *
+					sendq / wq;
+		}

 		/* If the sender uses mechanisms to prevent Silly Window
 		 * Syndrome (SWS, described in RFC 813 Section 3) it's critical
@@ -1168,11 +1187,11 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 		 * but we won't send enough to fill one because we're stuck
 		 * with pending data in the outbound queue
 		 */
-		if (limit < MSS_GET(conn) && sendq &&
+		if (limit < (unsigned int)MSS_GET(conn) && sendq &&
 		    tinfo->tcpi_last_data_sent < rtt_ms_ceiling * 10)
 			limit = 0;

-		new_wnd_to_tap = MIN((int)tinfo->tcpi_snd_wnd, limit);
+		new_wnd_to_tap = MIN(tinfo->tcpi_snd_wnd, limit);
 	}

 	new_wnd_to_tap = MIN(new_wnd_to_tap, MAX_WINDOW);
diff --git a/util.c b/util.c
index 73c9d51..036fac1 100644
--- a/util.c
+++ b/util.c
@@ -1137,3 +1137,4 @@ long clamped_scale(long x, long y, long lo, long hi, long f)
 	return x - (x * (y - lo) / (hi - lo)) * (100 - f) / 100;
 }
+
-- 
2.52.0