On Wed, Feb 25, 2026 at 07:15:41AM +0100, Stefano Brivio wrote:
> On Fri, 6 Feb 2026 17:17:37 +1100
> David Gibson wrote:
> 
> > We previously had a mechanism to remove TCP connections which were
> > inactive for 2 hours.  That was broken for a long time, due to poor
> > interactions with the timerfd handling, so we removed it.
> > 
> > Adding this long-scale timer onto the timerfd handling, which mostly
> > handles much shorter timeouts, is tricky to reason about.  However,
> > for the inactivity timeouts, we don't require precision.  Instead, we
> > can use a 1-bit page replacement / "clock" algorithm.  Every
> > INACTIVITY_INTERVAL (2 hours), a global timer marks every TCP
> > connection as tentatively inactive.  That flag is cleared if we get
> > any events, either tap side or socket side.
> > 
> > If the inactive flag is still set when the next INACTIVITY_INTERVAL
> > expires, then the connection has been inactive for an extended period
> > and we reset and close it.  In practice this means that connections
> > will be removed after 2-4 hours of inactivity.
> > 
> > This is not a true fix for bug 179, but it does mitigate the damage
> > by limiting the time that inactive connections will remain around.
> > 
> > Link: https://bugs.passt.top/show_bug.cgi?id=179
> > Signed-off-by: David Gibson
> > ---
> >  tcp.c      | 53 +++++++++++++++++++++++++++++++++++++++++++++++++----
> >  tcp.h      |  4 +++-
> >  tcp_conn.h |  3 +++
> >  3 files changed, 55 insertions(+), 5 deletions(-)
> > 
> > diff --git a/tcp.c b/tcp.c
> > index f8663369..09929ee9 100644
> > --- a/tcp.c
> > +++ b/tcp.c
> > @@ -198,6 +198,13 @@
> >   * TCP_INFO, with a representable range from RTT_STORE_MIN (100 us) to
> >   * RTT_STORE_MAX (3276.8 ms). The timeout value is clamped accordingly.
> >   *
> > + * We also use a global interval timer for an activity timeout which doesn't
> > + * require precision:
> > + *
> > + * - INACTIVITY_INTERVAL: if a connection has had no activity for an entire
> > + *   interval, close and reset it.  This means that idle connections (without
> > + *   keepalives) will be removed between INACTIVITY_INTERVAL seconds and
> > + *   2*INACTIVITY_INTERVAL seconds after the last activity.
> >   *
> >   * Summary of data flows (with ESTABLISHED event)
> >   * ----------------------------------------------
> > @@ -333,7 +340,8 @@ enum {
> >  
> >  #define RTO_INIT			1	/* s, RFC 6298 */
> >  #define RTO_INIT_AFTER_SYN_RETRIES	3	/* s, RFC 6298 */
> > -#define ACT_TIMEOUT			7200
> > +
> > +#define INACTIVITY_INTERVAL		7200	/* s */
> >  
> >  #define LOW_RTT_TABLE_SIZE		8
> >  #define LOW_RTT_THRESHOLD		10	/* us */
> > @@ -2254,6 +2262,8 @@ int tcp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af,
> >  		return 1;
> >  	}
> >  
> > +	conn->inactive = false;
> > +
> >  	if (th->ack && !(conn->events & ESTABLISHED))
> >  		tcp_update_seqack_from_tap(c, conn, ntohl(th->ack_seq));
> >  
> > @@ -2622,6 +2632,8 @@ void tcp_sock_handler(const struct ctx *c, union epoll_ref ref,
> >  		return;
> >  	}
> >  
> > +	conn->inactive = false;
> > +
> >  	if ((conn->events & TAP_FIN_ACKED) && (events & EPOLLHUP)) {
> >  		conn_event(c, conn, CLOSED);
> >  		return;
> >  	}
> > @@ -2872,18 +2884,51 @@ int tcp_init(struct ctx *c)
> >  	return 0;
> >  }
> >  
> > +/**
> > + * tcp_inactivity() - Scan for and close long-inactive connections
> > + * @c:		Execution context
> > + * @now:	Current timestamp
> > + */
> > +static void tcp_inactivity(struct ctx *c, const struct timespec *now)
> > +{
> > +	union flow *flow;
> > +
> > +	if (now->tv_sec - c->tcp.inactivity_run < INACTIVITY_INTERVAL)
> > +		return;
> > +
> > +	debug("TCP inactivity scan");
> > +	c->tcp.inactivity_run = now->tv_sec;
> > +
> > +	flow_foreach(flow) {
> 
> Nit: this could be flow_foreach_of_type((flow), FLOW_TCP), or, given
> that it's the second usage of that, we could finally introduce a
> foreach_tcp_flow() macro, and rebuild foreach_established_tcp_flow() on
> top of that.

Oops, I forgot I created that.

I looked into making a foreach_tcp_flow() macro that used a
struct tcp_conn * instead of a union flow *.  I think it's possible,
but it was pretty fiddly, so I gave up.  Given that, I'm more
comfortable keeping flow_foreach_of_type().  Patch using it for these
cases posted.

> Using foreach_established_tcp_flow() should be equivalent here by the
> way, because in all non-established cases we should have shorter
> timeouts, but it looks unnecessarily fragile.

Agreed.

> Same for tcp_keepalive() from 4/4.

Done.

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.  http://www.ozlabs.org/~dgibson