public inbox for passt-dev@passt.top
* [RFC PATCH] tcp: Replace send buffer boost with EPOLLOUT monitoring
@ 2026-03-20 10:32 Yumei Huang
  2026-04-20 22:33 ` Stefano Brivio
  0 siblings, 1 reply; 2+ messages in thread
From: Yumei Huang @ 2026-03-20 10:32 UTC (permalink / raw)
  To: passt-dev, sbrivio; +Cc: david, yuhuang

Currently we use the SNDBUF boost mechanism to force TCP auto-tuning.
However, it doesn't always work, and sometimes causes a lot of
retransmissions. As a result, the throughput suffers.

This patch replaces it by monitoring EPOLLOUT whenever sendmsg() fails
with EAGAIN or EWOULDBLOCK, or completes only a partial send.

Tested with iperf3 inside pasta: throughput is now comparable to running
iperf3 directly on the host without pasta. However, retransmissions can
still be elevated when RTT >= 50ms. For example, when RTT is between
200ms and 500ms, retransmission count varies from 30 to 120 in roughly
80% of test runs.

Link: https://bugs.passt.top/show_bug.cgi?id=138
Suggested-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Yumei Huang <yuhuang@redhat.com>
---
 tcp.c | 57 +++++++++++++++++----------------------------------------
 1 file changed, 17 insertions(+), 40 deletions(-)

diff --git a/tcp.c b/tcp.c
index 9d91c3c..f7e3932 100644
--- a/tcp.c
+++ b/tcp.c
@@ -353,13 +353,6 @@ enum {
 #define LOW_RTT_TABLE_SIZE		8
 #define LOW_RTT_THRESHOLD		10 /* us */
 
-/* Parameters to temporarily exceed sending buffer to force TCP auto-tuning */
-#define SNDBUF_BOOST_BYTES_RTT_LO	2500 /* B * s: no boost until here */
-/* ...examples:  5 MB sent * 500 ns RTT, 250 kB * 10 ms,  8 kB * 300 ms */
-#define SNDBUF_BOOST_FACTOR		150 /* % */
-#define SNDBUF_BOOST_BYTES_RTT_HI	6000 /* apply full boost factor */
-/*		12 MB sent * 500 ns RTT, 600 kB * 10 ms, 20 kB * 300 ms */
-
 /* Ratio of buffer to bandwidth * delay product implying interactive traffic */
 #define SNDBUF_TO_BW_DELAY_INTERACTIVE	/* > */ 20 /* (i.e. < 5% of buffer) */
 
@@ -1023,35 +1016,6 @@ size_t tcp_fill_headers(const struct ctx *c, struct tcp_tap_conn *conn,
 	return MAX(l3len + sizeof(struct ethhdr), ETH_ZLEN);
 }
 
-/**
- * tcp_sndbuf_boost() - Calculate limit of sending buffer to force auto-tuning
- * @conn:	Connection pointer
- * @tinfo:	tcp_info from kernel, must be pre-fetched
- *
- * Return: increased sending buffer to use as a limit for advertised window
- */
-static unsigned long tcp_sndbuf_boost(const struct tcp_tap_conn *conn,
-				      const struct tcp_info_linux *tinfo)
-{
-	unsigned long bytes_rtt_product;
-
-	if (!bytes_acked_cap)
-		return SNDBUF_GET(conn);
-
-	/* This is *not* a bandwidth-delay product, but it's somewhat related:
-	 * as we send more data (usually at the beginning of a connection), we
-	 * try to make the sending buffer progressively grow, with the RTT as a
-	 * factor (longer delay, bigger buffer needed).
-	 */
-	bytes_rtt_product = (long long)tinfo->tcpi_bytes_acked *
-			    tinfo->tcpi_rtt / 1000 / 1000;
-
-	return clamped_scale(SNDBUF_GET(conn), bytes_rtt_product,
-			     SNDBUF_BOOST_BYTES_RTT_LO,
-			     SNDBUF_BOOST_BYTES_RTT_HI,
-			     SNDBUF_BOOST_FACTOR);
-}
-
 /**
  * tcp_update_seqack_wnd() - Update ACK sequence and window to guest/tap
  * @c:		Execution context
@@ -1174,8 +1138,6 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 
 		if ((int)sendq > SNDBUF_GET(conn)) /* Due to memory pressure? */
 			limit = 0;
-		else if ((int)tinfo->tcpi_snd_wnd > SNDBUF_GET(conn))
-			limit = tcp_sndbuf_boost(conn, tinfo) - (int)sendq;
 		else
 			limit = SNDBUF_GET(conn) - (int)sendq;
 
@@ -2050,14 +2012,28 @@ eintr:
 
 		if (errno == EAGAIN || errno == EWOULDBLOCK) {
 			tcp_send_flag(c, conn, ACK | DUP_ACK);
+			uint32_t events = tcp_conn_epoll_events(conn->events,
+								conn->flags);
+			events |= EPOLLOUT;
+			if (flow_epoll_set(&conn->f, EPOLL_CTL_MOD, events,
+			    conn->sock, !TAPSIDE(conn)) < 0)
+				debug("Failed to add EPOLLOUT");
 			return p->count - idx;
-
 		}
 		return -1;
 	}
 
-	if (n < (int)(seq_from_tap - conn->seq_from_tap))
+	if (n < (int)(seq_from_tap - conn->seq_from_tap)) {
 		partial_send = 1;
+		uint32_t events = tcp_conn_epoll_events(conn->events,
+							conn->flags);
+		events |= EPOLLOUT;
+		if (flow_epoll_set(&conn->f, EPOLL_CTL_MOD, events, conn->sock,
+		    !TAPSIDE(conn)) < 0)
+			debug("Failed to add EPOLLOUT");
+	} else {
+		tcp_epoll_ctl(conn);
+	}
 
 	conn->seq_from_tap += n;
 
@@ -2661,6 +2637,7 @@ void tcp_sock_handler(const struct ctx *c, union epoll_ref ref,
 			tcp_data_from_sock(c, conn);
 
 		if (events & EPOLLOUT) {
+			tcp_epoll_ctl(conn);
 			if (tcp_update_seqack_wnd(c, conn, false, NULL))
 				tcp_send_flag(c, conn, ACK);
 		}
-- 
2.53.0



* Re: [RFC PATCH] tcp: Replace send buffer boost with EPOLLOUT monitoring
  2026-03-20 10:32 [RFC PATCH] tcp: Replace send buffer boost with EPOLLOUT monitoring Yumei Huang
@ 2026-04-20 22:33 ` Stefano Brivio
  0 siblings, 0 replies; 2+ messages in thread
From: Stefano Brivio @ 2026-04-20 22:33 UTC (permalink / raw)
  To: Yumei Huang; +Cc: passt-dev, david

On Fri, 20 Mar 2026 18:32:14 +0800
Yumei Huang <yuhuang@redhat.com> wrote:

> Currently we use the SNDBUF boost mechanism to force TCP auto-tuning.
> However, it doesn't always work, and sometimes causes a lot of
> retransmissions. As a result, the throughput suffers.
> 
> This patch replaces it by monitoring EPOLLOUT whenever sendmsg() fails
> with EAGAIN or EWOULDBLOCK, or completes only a partial send.
> 
> Tested with iperf3 inside pasta: throughput is now comparable to running
> iperf3 directly on the host without pasta. However, retransmissions can
> still be elevated when RTT >= 50ms. For example, when RTT is between
> 200ms and 500ms, retransmission count varies from 30 to 120 in roughly
> 80% of test runs.
> 
> Link: https://bugs.passt.top/show_bug.cgi?id=138
> Suggested-by: Stefano Brivio <sbrivio@redhat.com>
> Signed-off-by: Yumei Huang <yuhuang@redhat.com>
> ---
>  tcp.c | 57 +++++++++++++++++----------------------------------------
>  1 file changed, 17 insertions(+), 40 deletions(-)

Thanks a lot! This is definitely a massive improvement, and a much
needed simplification over the original, so I applied this as it is (I
also tested it quite thoroughly).

I'm still looking into how we can replace the 75% to 100% linearly
scaled usage factor from tcp_get_sndbuf() with a more accurate
calculation (assuming it's doable) as a follow-up change, but even
if and when we do that, properly reacting to EPOLLOUT as this patch
does is something we'll need anyway.

-- 
Stefano




Code repositories for project(s) associated with this public inbox

	https://passt.top/passt
