From: Jon Maloy <jmaloy@redhat.com>
To: Stefano Brivio <sbrivio@redhat.com>, passt-dev@passt.top
Cc: Paul Holzinger <pholzing@redhat.com>,
David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [PATCH v4 5/8] tcp: Don't try to transmit right after the peer shrank the window to zero
Date: Wed, 10 Sep 2025 20:12:13 -0400 [thread overview]
Message-ID: <2aa49410-9bf9-40d0-bfcf-c88c80c0430a@redhat.com> (raw)
In-Reply-To: <20250909181655.2990223-6-sbrivio@redhat.com>
On 2025-09-09 14:16, Stefano Brivio wrote:
> If the peer shrinks the window to zero, we'll skip storing the new
> window, as a convenient
Is this really convenient? It looks more like an inconsistency with
potential for future trouble to me. Wouldn't it be better to just use a
SEND_WIN_PROBE flag or similar, to be reset as soon as the window goes
non-zero again?
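Roughly what I have in mind, only as a sketch (SEND_WIN_PROBE would be a
new flag, and I'm assuming the existing conn_flag() pattern and the
current field names):

    if (!wnd && SEQ_LT(conn->seq_ack_from_tap, conn->seq_to_tap)) {
            tcp_rewind_seq(c, conn);
            conn_flag(c, conn, SEND_WIN_PROBE);   /* hypothetical flag: probe later */
    } else if (wnd) {
            conn_flag(c, conn, ~SEND_WIN_PROBE);  /* window opened again */
    }

    /* store the advertised window unconditionally, zero included */
    conn->wnd_from_tap = MIN(wnd >> conn->ws_from_tap, USHRT_MAX);

That way wnd_from_tap always reflects what the peer actually advertised,
and the probing behaviour is driven by an explicit flag instead of a
stale window value.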
> way to cause window probes (which exceed any
> zero-sized window, strictly speaking) if we don't get window updates
> in a while.
>
> As we do so, though, we need to ensure we don't try to queue more data
> from the socket right after we process this window update, as the
> entire point of a zero-window advertisement is to keep us from sending
> more data.
>
> Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
> tcp.c | 16 +++++++++-------
> 1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/tcp.c b/tcp.c
> index b83510b..9c70a25 100644
> --- a/tcp.c
> +++ b/tcp.c
> @@ -1271,8 +1271,10 @@ static void tcp_get_tap_ws(struct tcp_tap_conn *conn,
> * @c: Execution context
> * @conn: Connection pointer
> * @wnd: Window value, host order, unscaled
> + *
> + * Return: false on zero window (not stored to wnd_from_tap), true otherwise
> */
> -static void tcp_tap_window_update(const struct ctx *c,
> +static bool tcp_tap_window_update(const struct ctx *c,
> struct tcp_tap_conn *conn, unsigned wnd)
> {
> wnd = MIN(MAX_WINDOW, wnd << conn->ws_from_tap);
> @@ -1285,13 +1287,14 @@ static void tcp_tap_window_update(const struct ctx *c,
> */
> if (!wnd && SEQ_LT(conn->seq_ack_from_tap, conn->seq_to_tap)) {
> tcp_rewind_seq(c, conn);
> - return;
> + return false;
> }
>
> conn->wnd_from_tap = MIN(wnd >> conn->ws_from_tap, USHRT_MAX);
>
> /* FIXME: reflect the tap-side receiver's window back to the sock-side
> * sender by adjusting SO_RCVBUF? */
Not so sure. That sender will stop in due time anyway, with no harm
done. Starting to fiddle with SO_RCVBUF sounds like something to avoid.
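For the record, what that FIXME hints at would be something along these
lines (sketch only, socket field and trace() helper assumed, and not
something I'm arguing for):

    int rcvbuf = (int)wnd;  /* clamp tap-side window onto the socket */

    if (setsockopt(conn->sock, SOL_SOCKET, SO_RCVBUF,
                   &rcvbuf, sizeof(rcvbuf)))
            trace("TCP: failed to set SO_RCVBUF to %i", rcvbuf);

and since the kernel doubles whatever value we set there, keeping the
two windows in sync that way looks fragile to me anyway.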
> + return true;
> }
>
> /**
> @@ -2101,9 +2104,8 @@ int tcp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af,
> if (!th->ack)
> goto reset;
>
> - tcp_tap_window_update(c, conn, ntohs(th->window));
> -
> - tcp_data_from_sock(c, conn);
> + if (tcp_tap_window_update(c, conn, ntohs(th->window)))
> + tcp_data_from_sock(c, conn);
>
> if (p->count - idx == 1)
> return 1;
> @@ -2113,8 +2115,8 @@ int tcp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af,
> if (conn->events & TAP_FIN_RCVD) {
> tcp_sock_consume(conn, ntohl(th->ack_seq));
> tcp_update_seqack_from_tap(c, conn, ntohl(th->ack_seq));
> - tcp_tap_window_update(c, conn, ntohs(th->window));
> - tcp_data_from_sock(c, conn);
> + if (tcp_tap_window_update(c, conn, ntohs(th->window)))
> + tcp_data_from_sock(c, conn);
>
> if (conn->seq_ack_from_tap == conn->seq_to_tap) {
> if (th->ack && conn->events & TAP_FIN_SENT)