From: Jon Maloy <jmaloy@redhat.com>
To: sbrivio@redhat.com, david@gibson.dropbear.id.au, jmaloy@redhat.com, passt-dev@passt.top
Subject: [PATCH v3] tcp: Use SO_MEMINFO for accurate send buffer overhead accounting
Message-ID: <20260425195818.572409-1-jmaloy@redhat.com>
Date: Sat, 25 Apr 2026 15:58:18 -0400

The TCP window advertised to the guest/container must balance two
competing needs: it must be large enough to trigger kernel socket
buffer auto-tuning, but not so large that sendmsg() partially fails,
causing retransmissions.

The current approach uses the difference SNDBUF_GET() - SIOCOUTQ, but
SNDBUF_GET() returns a scaled value that only roughly accounts for
per-skb overhead. The clamped_scale() approximation doesn't accurately
track the actual per-segment overhead, which can lead to both excessive
retransmissions and reduced throughput.

We now introduce the use of SO_MEMINFO to obtain SK_MEMINFO_SNDBUF and
SK_MEMINFO_WMEM_QUEUED from the kernel. The latter is expressed in the
kernel's own accounting units, i.e.
including the sk_buff overhead, and matches exactly what the kernel's
own sk_stream_memory_free() function uses.

When data is queued and the overhead ratio is observable, we calculate
the per-segment overhead as (wmem_queued - sendq) / num_segments, then
determine how many additional segments should fit in the remaining
buffer space, given the calculated per-MSS overhead. This approach
treats segments as discrete quantities, and produces a more accurate
estimate of the available buffer space than a linear scaling factor
does. When the ratio cannot be observed, e.g. because the queue is
empty or we are in a transient state, we fall back to the existing
clamped_scale() calculation (scaling between 100% and 75% of buffer
capacity).

When SO_MEMINFO succeeds, we also use SK_MEMINFO_SNDBUF directly to
set SNDBUF, avoiding a separate SO_SNDBUF getsockopt() call. If
SO_MEMINFO is unavailable, we fall back to the pre-existing
SNDBUF_GET() - SIOCOUTQ calculation.

Link: https://bugs.passt.top/show_bug.cgi?id=138
Link: https://github.com/containers/podman/issues/28219
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
---
v2: Updated according to feedback from Stefano.
    Segment-based discrete overhead calculation instead of linear
    ratio.
v3: Addressed Stefano's v2 feedback:
    - Extracted window calculation into tcp_wnd_from_sndbuf()
    - Use wmem_queued instead of SIOCOUTQ for fallback and SWS check
---
 tcp.c      | 137 ++++++++++++++++++++++++++++++++++-------------------
 tcp_conn.h |   2 +-
 2 files changed, 89 insertions(+), 50 deletions(-)

diff --git a/tcp.c b/tcp.c
index 43b8fdb..61160cf 100644
--- a/tcp.c
+++ b/tcp.c
@@ -295,6 +295,7 @@
 #include
 #include
+#include

 #include "checksum.h"
 #include "util.h"
@@ -1017,6 +1018,90 @@ size_t tcp_fill_headers(const struct ctx *c, struct tcp_tap_conn *conn,
 	return MAX(l3len + sizeof(struct ethhdr), ETH_ZLEN);
 }

+/**
+ * tcp_wnd_from_sndbuf() - Calculate window from available send buffer space
+ * @s:		Socket file descriptor
+ * @conn:	Connection pointer
+ * @tinfo:	tcp_info from kernel
+ *
+ * Return: window value to advertise, not scaled
+ */
+static uint32_t tcp_wnd_from_sndbuf(int s, struct tcp_tap_conn *conn,
+				    const struct tcp_info_linux *tinfo)
+{
+	uint32_t rtt_ms_ceiling = DIV_ROUND_UP(tinfo->tcpi_rtt, 1000);
+	uint32_t mem[SK_MEMINFO_VARS];
+	socklen_t mem_sl = sizeof(mem);
+	int mss = MSS_GET(conn);
+	uint32_t limit, sendq;
+
+	if (ioctl(s, SIOCOUTQ, &sendq)) {
+		debug_perror("SIOCOUTQ on socket %i, assuming 0", s);
+		sendq = 0;
+	}
+
+	if (getsockopt(s, SOL_SOCKET, SO_MEMINFO, &mem, &mem_sl)) {
+		tcp_get_sndbuf(conn);
+
+		if (sendq > SNDBUF_GET(conn)) /* Due to memory pressure? */
+			limit = 0;
+		else
+			limit = SNDBUF_GET(conn) - sendq;
+	} else {
+		uint32_t sndbuf = mem[SK_MEMINFO_SNDBUF];
+		uint32_t wmemq = mem[SK_MEMINFO_WMEM_QUEUED];
+		uint32_t scaled = clamped_scale(sndbuf, sndbuf, SNDBUF_SMALL,
+						SNDBUF_BIG, 75);
+
+		SNDBUF_SET(conn, MIN(INT_MAX, scaled));
+
+		if (wmemq > sndbuf) {
+			limit = 0;
+		} else if (!sendq || !mss || wmemq <= sendq) {
+			limit = SNDBUF_GET(conn) - wmemq;
+		} else {
+			uint32_t used_segs = MAX(sendq / mss, 1);
+			uint32_t overhead = (wmemq - sendq) / used_segs;
+			uint32_t remaining = sndbuf - wmemq;
+			uint32_t avail_segs = remaining / (mss + overhead);
+
+			limit = avail_segs * mss;
+		}
+	}
+
+	/* If the sender uses mechanisms to prevent Silly Window
+	 * Syndrome (SWS, described in RFC 813 Section 3) it's critical
+	 * that, should the window ever become less than the MSS, we
+	 * advertise a new value once it increases again to be above it.
+	 *
+	 * The mechanism to avoid SWS in the kernel is, implicitly,
+	 * implemented by Nagle's algorithm (which was proposed after
+	 * RFC 813).
+	 *
+	 * To this end, for simplicity, approximate a window value below
+	 * the MSS to zero, as we already have mechanisms in place to
+	 * force updates after the window becomes zero. This matches the
+	 * suggestion from RFC 813, Section 4.
+	 *
+	 * But don't do this if, either:
+	 *
+	 * - there's nothing in the outbound queue: the size of the
+	 *   sending buffer is limiting us, and it won't increase if we
+	 *   don't send data, so there's no point in waiting, or
+	 *
+	 * - we haven't sent data in a while (somewhat arbitrarily, ten
+	 *   times the RTT), as that might indicate that the receiver
+	 *   will only process data in batches that are large enough,
+	 *   but we won't send enough to fill one because we're stuck
+	 *   with pending data in the outbound queue
+	 */
+	if (limit < (uint32_t)MSS_GET(conn) && sendq &&
+	    tinfo->tcpi_last_data_sent < rtt_ms_ceiling * 10)
+		limit = 0;
+
+	return MIN(tinfo->tcpi_snd_wnd, limit);
+}
+
 /**
  * tcp_update_seqack_wnd() - Update ACK sequence and window to guest/tap
  * @c:		Execution context
@@ -1124,56 +1209,10 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 		}
 	}

-	if ((conn->flags & LOCAL) || tcp_rtt_dst_low(conn)) {
+	if ((conn->flags & LOCAL) || tcp_rtt_dst_low(conn))
 		new_wnd_to_tap = tinfo->tcpi_snd_wnd;
-	} else {
-		unsigned rtt_ms_ceiling = DIV_ROUND_UP(tinfo->tcpi_rtt, 1000);
-		uint32_t sendq;
-		int limit;
-
-		if (ioctl(s, SIOCOUTQ, &sendq)) {
-			debug_perror("SIOCOUTQ on socket %i, assuming 0", s);
-			sendq = 0;
-		}
-		tcp_get_sndbuf(conn);
-
-		if ((int)sendq > SNDBUF_GET(conn)) /* Due to memory pressure? */
-			limit = 0;
-		else
-			limit = SNDBUF_GET(conn) - (int)sendq;
-
-		/* If the sender uses mechanisms to prevent Silly Window
-		 * Syndrome (SWS, described in RFC 813 Section 3) it's critical
-		 * that, should the window ever become less than the MSS, we
-		 * advertise a new value once it increases again to be above it.
-		 *
-		 * The mechanism to avoid SWS in the kernel is, implicitly,
-		 * implemented by Nagle's algorithm (which was proposed after
-		 * RFC 813).
-		 *
-		 * To this end, for simplicity, approximate a window value below
-		 * the MSS to zero, as we already have mechanisms in place to
-		 * force updates after the window becomes zero. This matches the
-		 * suggestion from RFC 813, Section 4.
-		 *
-		 * But don't do this if, either:
-		 *
-		 * - there's nothing in the outbound queue: the size of the
-		 *   sending buffer is limiting us, and it won't increase if we
-		 *   don't send data, so there's no point in waiting, or
-		 *
-		 * - we haven't sent data in a while (somewhat arbitrarily, ten
-		 *   times the RTT), as that might indicate that the receiver
-		 *   will only process data in batches that are large enough,
-		 *   but we won't send enough to fill one because we're stuck
-		 *   with pending data in the outbound queue
-		 */
-		if (limit < MSS_GET(conn) && sendq &&
-		    tinfo->tcpi_last_data_sent < rtt_ms_ceiling * 10)
-			limit = 0;
-
-		new_wnd_to_tap = MIN((int)tinfo->tcpi_snd_wnd, limit);
-	}
+	else
+		new_wnd_to_tap = tcp_wnd_from_sndbuf(s, conn, tinfo);

 	new_wnd_to_tap = MIN(new_wnd_to_tap, MAX_WINDOW);

 	if (!(conn->events & ESTABLISHED))
diff --git a/tcp_conn.h b/tcp_conn.h
index 6985426..9f5bee0 100644
--- a/tcp_conn.h
+++ b/tcp_conn.h
@@ -98,7 +98,7 @@ struct tcp_tap_conn {
 #define SNDBUF_BITS		24
 	unsigned int		sndbuf		:SNDBUF_BITS;
 #define SNDBUF_SET(conn, bytes)	(conn->sndbuf = ((bytes) >> (32 - SNDBUF_BITS)))
-#define SNDBUF_GET(conn)	(conn->sndbuf << (32 - SNDBUF_BITS))
+#define SNDBUF_GET(conn)	((uint32_t)(conn->sndbuf << (32 - SNDBUF_BITS)))

 	uint8_t			seq_dup_ack_approx;
--
2.52.0