* [PATCH v2 0/2] SO_PEEK_OFF support
@ 2024-05-01 20:28 Jon Maloy
2024-05-01 20:28 ` [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available Jon Maloy
2024-05-01 20:28 ` [PATCH v2 2/2] tcp: allow retransmit when peer receive window is zero Jon Maloy
0 siblings, 2 replies; 10+ messages in thread
From: Jon Maloy @ 2024-05-01 20:28 UTC (permalink / raw)
To: passt-dev, sbrivio, lvivier, dgibson, jmaloy
SO_PEEK_OFF support and a fix for the zero-window problem.
Jon Maloy (2):
tcp: leverage support of SO_PEEK_OFF socket option when available
tcp: allow retransmit when peer receive window is zero
tcp.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++--------
tcp_conn.h | 2 ++
2 files changed, 77 insertions(+), 12 deletions(-)
--
2.42.0
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available
2024-05-01 20:28 [PATCH v2 0/2] SO_PEEK_OFF support Jon Maloy
@ 2024-05-01 20:28 ` Jon Maloy
2024-05-02 1:31 ` David Gibson
2024-05-01 20:28 ` [PATCH v2 2/2] tcp: allow retransmit when peer receive window is zero Jon Maloy
1 sibling, 1 reply; 10+ messages in thread
From: Jon Maloy @ 2024-05-01 20:28 UTC (permalink / raw)
To: passt-dev, sbrivio, lvivier, dgibson, jmaloy
From linux-6.9.0 onwards, the kernel contains
commit 05ea491641d3 ("tcp: add support for SO_PEEK_OFF socket option").
This new feature makes it possible to call recvmsg(MSG_PEEK) and have
it start reading data at a given offset set by the SO_PEEK_OFF socket
option. This way, we can avoid repeated reading of already read bytes of
a received message, hence saving read cycles when forwarding TCP messages
in the host->namespace direction.
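As a minimal sketch of the mechanism (illustration only, not passt
code; assuming a connected TCP socket 's' on a new enough kernel),
successive MSG_PEEK reads continue where the previous one stopped once
the offset is enabled:

#include <sys/socket.h>
#include <stdio.h>

static void peek_twice(int s)
{
	char buf[1000];
	int off = 0;

	/* Enable peek-with-offset semantics, starting at offset 0 */
	if (setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off)))
		perror("SO_PEEK_OFF");

	recv(s, buf, sizeof(buf), MSG_PEEK);	/* peeks bytes 0..999 */
	recv(s, buf, sizeof(buf), MSG_PEEK);	/* peeks bytes 1000..1999;
						 * without SO_PEEK_OFF this
						 * would re-read 0..999
						 */
}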
In this commit, we add functionality to leverage this feature when
available, while we fall back to the previous behavior when not.
Measurements with iperf3 show that throughput increases by 15-20 percent
in the host->namespace direction when this feature is used.
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
---
v2: - Some smaller changes as suggested by David Gibson and Stefano Brivio.
- Moved set_peek_offset() to only the locations where the socket is set
to ESTABLISHED.
- Removed the per-packet synchronization between sk_peek_off and
already_sent. Instead only doing it in retransmit situations.
- The problem I found when troubleshooting the occasionally occurring
out-of-sync values between 'already_sent' and 'sk_peek_offset' may
have deeper implications that we may need to investigate.
---
tcp.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 55 insertions(+), 3 deletions(-)
diff --git a/tcp.c b/tcp.c
index 905d26f..535f1a2 100644
--- a/tcp.c
+++ b/tcp.c
@@ -513,6 +513,9 @@ static struct iovec tcp6_l2_iov [TCP_FRAMES_MEM];
static struct iovec tcp4_l2_flags_iov [TCP_FRAMES_MEM];
static struct iovec tcp6_l2_flags_iov [TCP_FRAMES_MEM];
+/* Does the kernel support SO_PEEK_OFF? */
+static bool peek_offset_cap;
+
/* sendmsg() to socket */
static struct iovec tcp_iov [UIO_MAXIOV];
@@ -582,6 +585,22 @@ static_assert(ARRAY_SIZE(tc_hash) >= FLOW_MAX,
int init_sock_pool4 [TCP_SOCK_POOL_SIZE];
int init_sock_pool6 [TCP_SOCK_POOL_SIZE];
+/**
+ * set_peek_offset() - Set SO_PEEK_OFF offset on a socket if supported
+ * @conn Connection struct with reference to socket and flags
+ * @offset Offset in bytes
+ */
+static void set_peek_offset(struct tcp_tap_conn *conn, int offset)
+{
+ int s;
+
+ if (!peek_offset_cap)
+ return;
+ s = conn->sock;
+ if (setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &offset, sizeof(offset)))
+ perror("Failed to set SO_PEEK_OFF\n");
+}
+
/**
* tcp_conn_epoll_events() - epoll events mask for given connection state
* @events: Current connection events
@@ -2174,6 +2193,12 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
if (iov_rem)
iov_sock[fill_bufs].iov_len = iov_rem;
+ if (peek_offset_cap) {
+ /* Don't use discard buffer */
+ mh_sock.msg_iov = &iov_sock[1];
+ mh_sock.msg_iovlen -= 1;
+ }
+
/* Receive into buffers, don't dequeue until acknowledged by guest. */
do
len = recvmsg(s, &mh_sock, MSG_PEEK);
@@ -2195,7 +2220,10 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
return 0;
}
- sendlen = len - already_sent;
+ sendlen = len;
+ if (!peek_offset_cap)
+ sendlen -= already_sent;
+
if (sendlen <= 0) {
conn_flag(c, conn, STALLED);
return 0;
@@ -2365,9 +2393,17 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
flow_trace(conn,
"fast re-transmit, ACK: %u, previous sequence: %u",
max_ack_seq, conn->seq_to_tap);
+
+ /* Ensure seq_from_tap isn't updated twice after call */
+ tcp_l2_data_buf_flush(c);
+
conn->seq_ack_from_tap = max_ack_seq;
conn->seq_to_tap = max_ack_seq;
+ set_peek_offset(conn, 0);
tcp_data_from_sock(c, conn);
+
+ /* Empty queue before any POLL event tries to send it again */
+ tcp_l2_data_buf_flush(c);
}
if (!iov_i)
@@ -2459,6 +2495,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
conn->seq_ack_to_tap = conn->seq_from_tap;
conn_event(c, conn, ESTABLISHED);
+ set_peek_offset(conn, 0);
/* The client might have sent data already, which we didn't
* dequeue waiting for SYN,ACK from tap -- check now.
@@ -2539,6 +2576,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
goto reset;
conn_event(c, conn, ESTABLISHED);
+ set_peek_offset(conn, 0);
if (th->fin) {
conn->seq_from_tap++;
@@ -2705,7 +2743,7 @@ void tcp_listen_handler(struct ctx *c, union epoll_ref ref,
struct sockaddr_storage sa;
socklen_t sl = sizeof(sa);
union flow *flow;
- int s;
+ int s = 0;
if (c->no_tcp || !(flow = flow_alloc()))
return;
@@ -2767,7 +2805,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
flow_dbg(conn, "ACK timeout, retry");
conn->retrans++;
conn->seq_to_tap = conn->seq_ack_from_tap;
+ set_peek_offset(conn, 0);
+ tcp_l2_data_buf_flush(c);
tcp_data_from_sock(c, conn);
+ tcp_l2_data_buf_flush(c);
tcp_timer_ctl(c, conn);
}
} else {
@@ -3041,7 +3082,8 @@ static void tcp_sock_refill_init(const struct ctx *c)
*/
int tcp_init(struct ctx *c)
{
- unsigned b;
+ unsigned int b, optv = 0;
+ int s;
for (b = 0; b < TCP_HASH_TABLE_SIZE; b++)
tc_hash[b] = FLOW_SIDX_NONE;
@@ -3065,6 +3107,16 @@ int tcp_init(struct ctx *c)
NS_CALL(tcp_ns_socks_init, c);
}
+ /* Probe for SO_PEEK_OFF support */
+ s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
+ if (s < 0) {
+ warn("Temporary TCP socket creation failed");
+ } else {
+ if (!setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &optv, sizeof(int)))
+ peek_offset_cap = true;
+ close(s);
+ }
+ debug("SO_PEEK_OFF%ssupported", peek_offset_cap ? " " : " not ");
return 0;
}
--
2.42.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v2 2/2] tcp: allow retransmit when peer receive window is zero
2024-05-01 20:28 [PATCH v2 0/2] SO_PEEK_OFF support Jon Maloy
2024-05-01 20:28 ` [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available Jon Maloy
@ 2024-05-01 20:28 ` Jon Maloy
2024-05-03 13:43 ` Stefano Brivio
1 sibling, 1 reply; 10+ messages in thread
From: Jon Maloy @ 2024-05-01 20:28 UTC (permalink / raw)
To: passt-dev, sbrivio, lvivier, dgibson, jmaloy
A bug in kernel TCP may lead to a deadlock where a zero window is sent
from the peer, while he is unable to send out window updates even after
reads have freed up enough buffer space to permit a larger window.
In this situation, new window advertisements from the peer can only be
triggered by packets arriving from this side.
However, such packets are never sent, because the zero-window condition
currently prevents this side from sending out any packets whatsoever
to the peer.
We notice that the above bug is triggered *only* after the peer has
dropped an arriving packet because of severe memory squeeze, and that we
hence always enter a retransmission situation when this occurs. This
also means that RFC 9293 is violated, since the zero-advertisement
shrinks the previously advertised window within which the dropped packet
originally was sent.
RFC 9293 gives the solution to this situation. In chapter 3.6.1 we find
the following statement:
"A TCP receiver SHOULD NOT shrink the window, i.e., move the right
window edge to the left (SHLD-14). However, a sending TCP peer MUST
be robust against window shrinking, which may cause the
"usable window" (see Section 3.8.6.2.1) to become negative (MUST-34).
If this happens, the sender SHOULD NOT send new data (SHLD-15), but
SHOULD retransmit normally the old unacknowledged data between SND.UNA
and SND.UNA+SND.WND (SHLD-16). The sender MAY also retransmit old data
beyond SND.UNA+SND.WND (MAY-7)"
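In code terms, the "usable window" computation this implies looks
roughly as follows (a stand-alone sketch; SEQ_GT() here is a stand-in
for the wraparound-safe sequence comparison used in tcp.c, 'nxt' for
SND.NXT and 'wup' for the remembered right edge SND.UNA + SND.WND):

#include <stdint.h>

#define SEQ_GT(a, b)	((int32_t)((a) - (b)) > 0)

static uint32_t usable_window(uint32_t nxt, uint32_t wup)
{
	if (SEQ_GT(wup, nxt))
		return wup - nxt;	/* room left under the right edge */

	return 0;	/* shrunk window: usable window would be negative */
}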
It should be noted that although this solves the problem we have at
hand, it is not a genuine solution to the kernel bug. There may well
be TCP stacks around in other OSes which don't do this, nor have
keep-alive probing as an alternative way to solve the situation.
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
---
v2: - Using previously advertised window during retransmission, instead
of the highest send sequence number in the cycle.
---
tcp.c | 29 ++++++++++++++++++++---------
tcp_conn.h | 2 ++
2 files changed, 22 insertions(+), 9 deletions(-)
diff --git a/tcp.c b/tcp.c
index 535f1a2..703c62f 100644
--- a/tcp.c
+++ b/tcp.c
@@ -1749,9 +1749,15 @@ static void tcp_get_tap_ws(struct tcp_tap_conn *conn,
*/
static void tcp_tap_window_update(struct tcp_tap_conn *conn, unsigned wnd)
{
+ uint32_t wnd_upper;
+
wnd = MIN(MAX_WINDOW, wnd << conn->ws_from_tap);
conn->wnd_from_tap = MIN(wnd >> conn->ws_from_tap, USHRT_MAX);
+ wnd_upper = conn->seq_ack_from_tap + wnd;
+ if (wnd && SEQ_GT(wnd_upper, conn->seq_wup_from_tap))
+ conn->seq_wup_from_tap = wnd_upper;
+
/* FIXME: reflect the tap-side receiver's window back to the sock-side
* sender by adjusting SO_RCVBUF? */
}
@@ -1784,6 +1790,7 @@ static void tcp_seq_init(const struct ctx *c, struct tcp_tap_conn *conn,
ns = (now->tv_sec * 1000000000 + now->tv_nsec) >> 5;
conn->seq_to_tap = ((uint32_t)(hash >> 32) ^ (uint32_t)hash) + ns;
+ conn->seq_wup_from_tap = conn->seq_to_tap;
}
/**
@@ -2133,9 +2140,8 @@ static void tcp_data_to_tap(const struct ctx *c, struct tcp_tap_conn *conn,
*
* #syscalls recvmsg
*/
-static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
+static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn, uint32_t wnd_scaled)
{
- uint32_t wnd_scaled = conn->wnd_from_tap << conn->ws_from_tap;
int fill_bufs, send_bufs = 0, last_len, iov_rem = 0;
int sendlen, len, plen, v4 = CONN_V4(conn);
int s = conn->sock, i, ret = 0;
@@ -2282,6 +2288,7 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
uint32_t max_ack_seq = conn->seq_ack_from_tap;
uint32_t seq_from_tap = conn->seq_from_tap;
struct msghdr mh = { .msg_iov = tcp_iov };
+ uint32_t wnd;
size_t len;
ssize_t n;
@@ -2325,8 +2332,7 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
SEQ_GE(ack_seq, max_ack_seq)) {
/* Fast re-transmit */
retr = !len && !th->fin &&
- ack_seq == max_ack_seq &&
- ntohs(th->window) == max_ack_seq_wnd;
+ ack_seq == max_ack_seq;
max_ack_seq_wnd = ntohs(th->window);
max_ack_seq = ack_seq;
@@ -2400,7 +2406,8 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
conn->seq_ack_from_tap = max_ack_seq;
conn->seq_to_tap = max_ack_seq;
set_peek_offset(conn, 0);
- tcp_data_from_sock(c, conn);
+ wnd = conn->seq_wup_from_tap - max_ack_seq;
+ tcp_data_from_sock(c, conn, wnd);
/* Empty queue before any POLL event tries to send it again */
tcp_l2_data_buf_flush(c);
@@ -2500,7 +2507,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
/* The client might have sent data already, which we didn't
* dequeue waiting for SYN,ACK from tap -- check now.
*/
- tcp_data_from_sock(c, conn);
+ tcp_data_from_sock(c, conn, conn->wnd_from_tap << conn->ws_from_tap);
tcp_send_flag(c, conn, ACK);
}
@@ -2593,7 +2600,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
tcp_tap_window_update(conn, ntohs(th->window));
- tcp_data_from_sock(c, conn);
+ tcp_data_from_sock(c, conn, conn->wnd_from_tap << conn->ws_from_tap);
if (p->count - idx == 1)
return 1;
@@ -2776,6 +2783,7 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
{
struct itimerspec check_armed = { { 0 }, { 0 } };
struct tcp_tap_conn *conn = CONN(ref.flow);
+ uint32_t wnd;
if (c->no_tcp)
return;
@@ -2807,7 +2815,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
conn->seq_to_tap = conn->seq_ack_from_tap;
set_peek_offset(conn, 0);
tcp_l2_data_buf_flush(c);
- tcp_data_from_sock(c, conn);
+
+ /* RFC 9293, 3.8.6: Send 1 byte of new data if needed */
+ wnd = conn->seq_wup_from_tap - conn->seq_ack_from_tap;
+ tcp_data_from_sock(c, conn, MAX(1, wnd));
tcp_l2_data_buf_flush(c);
tcp_timer_ctl(c, conn);
}
@@ -2863,7 +2874,7 @@ void tcp_sock_handler(struct ctx *c, union epoll_ref ref, uint32_t events)
conn_event(c, conn, SOCK_FIN_RCVD);
if (events & EPOLLIN)
- tcp_data_from_sock(c, conn);
+ tcp_data_from_sock(c, conn, conn->wnd_from_tap << conn->ws_from_tap);
if (events & EPOLLOUT)
tcp_update_seqack_wnd(c, conn, 0, NULL);
diff --git a/tcp_conn.h b/tcp_conn.h
index a5f5cfe..d6b627a 100644
--- a/tcp_conn.h
+++ b/tcp_conn.h
@@ -30,6 +30,7 @@
* @wnd_to_tap: Sending window advertised to tap, unscaled (as sent)
* @seq_to_tap: Next sequence for packets to tap
* @seq_ack_from_tap: Last ACK number received from tap
+ * @seq_wup_from_tap: Right edge (+1) of last non-zero window from tap
* @seq_from_tap: Next sequence for packets from tap (not actually sent)
* @seq_ack_to_tap: Last ACK number sent to tap
* @seq_init_from_tap: Initial sequence number from tap
@@ -101,6 +102,7 @@ struct tcp_tap_conn {
uint32_t seq_to_tap;
uint32_t seq_ack_from_tap;
+ uint32_t seq_wup_from_tap;
uint32_t seq_from_tap;
uint32_t seq_ack_to_tap;
uint32_t seq_init_from_tap;
--
2.42.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available
2024-05-01 20:28 ` [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available Jon Maloy
@ 2024-05-02 1:31 ` David Gibson
2024-05-03 13:42 ` Stefano Brivio
0 siblings, 1 reply; 10+ messages in thread
From: David Gibson @ 2024-05-02 1:31 UTC (permalink / raw)
To: Jon Maloy; +Cc: passt-dev, sbrivio, lvivier, dgibson
On Wed, May 01, 2024 at 04:28:38PM -0400, Jon Maloy wrote:
> From linux-6.9.0 onwards, the kernel contains
> commit 05ea491641d3 ("tcp: add support for SO_PEEK_OFF socket option").
>
> This new feature makes it possible to call recvmsg(MSG_PEEK) and have
> it start reading data at a given offset set by the SO_PEEK_OFF socket
> option. This way, we can avoid repeated reading of already read bytes of
> a received message, hence saving read cycles when forwarding TCP messages
> in the host->namespace direction.
>
> In this commit, we add functionality to leverage this feature when
> available, while we fall back to the previous behavior when not.
>
> Measurements with iperf3 show that throughput increases by 15-20 percent
> in the host->namespace direction when this feature is used.
>
> Signed-off-by: Jon Maloy <jmaloy@redhat.com>
>
> ---
> v2: - Some smaller changes as suggested by David Gibson and Stefano Brivio.
> - Moved set_peek_offset() to only the locations where the socket is set
> to ESTABLISHED.
> - Removed the per-packet synchronization between sk_peek_off and
> already_sent. Instead only doing it in retransmit situations.
> - The problem I found when troubleshooting the occasionally occurring
> out-of-sync values between 'already_sent' and 'sk_peek_offset' may
> have deeper implications that we may need to investigate.
> ---
> tcp.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 55 insertions(+), 3 deletions(-)
>
> diff --git a/tcp.c b/tcp.c
> index 905d26f..535f1a2 100644
> --- a/tcp.c
> +++ b/tcp.c
> @@ -513,6 +513,9 @@ static struct iovec tcp6_l2_iov [TCP_FRAMES_MEM];
> static struct iovec tcp4_l2_flags_iov [TCP_FRAMES_MEM];
> static struct iovec tcp6_l2_flags_iov [TCP_FRAMES_MEM];
>
> +/* Does the kernel support SO_PEEK_OFF? */
> +static bool peek_offset_cap;
> +
> /* sendmsg() to socket */
> static struct iovec tcp_iov [UIO_MAXIOV];
>
> @@ -582,6 +585,22 @@ static_assert(ARRAY_SIZE(tc_hash) >= FLOW_MAX,
> int init_sock_pool4 [TCP_SOCK_POOL_SIZE];
> int init_sock_pool6 [TCP_SOCK_POOL_SIZE];
>
> +/**
> + * set_peek_offset() - Set SO_PEEK_OFF offset on a socket if supported
> + * @conn Connection struct with reference to socket and flags
> + * @offset Offset in bytes
> + */
> +static void set_peek_offset(struct tcp_tap_conn *conn, int offset)
> +{
> + int s;
> +
> + if (!peek_offset_cap)
> + return;
> + s = conn->sock;
> + if (setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &offset, sizeof(offset)))
> + perror("Failed to set SO_PEEK_OFF\n");
> +}
> +
> /**
> * tcp_conn_epoll_events() - epoll events mask for given connection state
> * @events: Current connection events
> @@ -2174,6 +2193,12 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> if (iov_rem)
> iov_sock[fill_bufs].iov_len = iov_rem;
>
> + if (peek_offset_cap) {
> + /* Don't use discard buffer */
> + mh_sock.msg_iov = &iov_sock[1];
> + mh_sock.msg_iovlen -= 1;
> + }
> +
I think this would be a little clearer if moved up to where we
currently initialise mh_sock.msg_iov and iov_sock[0], and make that an
else clause to this if. That would make it more obvious that we have
two different - and mutually exclusive - mechanisms for dealing with
un-acked data in the socket buffer. Not worth a respin on its own,
though.
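Something like this, perhaps (a sketch only, assuming the fallback
path keeps using the existing tcp_buf_discard buffer; untested):

	if (peek_offset_cap) {
		/* Kernel skips already-peeked bytes: no discard buffer */
		mh_sock.msg_iov = &iov_sock[1];
		mh_sock.msg_iovlen = fill_bufs;
	} else {
		/* Peek restarts from 0: route the first already_sent
		 * bytes into the discard buffer
		 */
		iov_sock[0].iov_base = tcp_buf_discard;
		iov_sock[0].iov_len = already_sent;
		mh_sock.msg_iov = iov_sock;
		mh_sock.msg_iovlen = fill_bufs + 1;
	}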
> /* Receive into buffers, don't dequeue until acknowledged by guest. */
> do
> len = recvmsg(s, &mh_sock, MSG_PEEK);
> @@ -2195,7 +2220,10 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> return 0;
> }
>
> - sendlen = len - already_sent;
> + sendlen = len;
> + if (!peek_offset_cap)
> + sendlen -= already_sent;
> +
> if (sendlen <= 0) {
> conn_flag(c, conn, STALLED);
> return 0;
> @@ -2365,9 +2393,17 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
> flow_trace(conn,
> "fast re-transmit, ACK: %u, previous sequence: %u",
> max_ack_seq, conn->seq_to_tap);
> +
> + /* Ensure seq_from_tap isn't updated twice after call */
> + tcp_l2_data_buf_flush(c);
tcp_l2_data_buf_flush() was replaced by tcp_payload_flush() in a
recently merged change from Laurent.
IIUC, this is necessary because otherwise our update to seq_to_tap can
be clobbered from tcp_payload_flush() when we process the
queued-but-not-sent frames. This seems like a correct fix, but not an
optimal one: we're flushing out data we've already determined we're
going to retransmit. Instead, I think we want a different helper that
simply discards the queued frames - I'm thinking maybe we actually
want a helper that's called from both the fast and slow retransmit
paths and handles that.
Ah, wait, we only want to discard queued frames that belong to this
connection, that's trickier.
It seems to me this is a pre-existing bug, we just managed to get away
with it previously. I think this is at least one cause of the weirdly
jumping forwarding sequence numbers you observed. So I think we want
to make a patch fixing this that goes before the SO_PEEK_OFF changes.
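For the retransmit paths themselves, the safe ordering would then be
something like (sketch only, using the function names from this
thread):

	/* Flush queued frames first, so the flush can't advance
	 * seq_to_tap past the point we're about to rewind it to
	 */
	tcp_payload_flush(c);

	conn->seq_to_tap = conn->seq_ack_from_tap;
	set_peek_offset(conn, 0);

	/* Re-read and queue data from the retransmit point */
	tcp_data_from_sock(c, conn);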
> +
> conn->seq_ack_from_tap = max_ack_seq;
> conn->seq_to_tap = max_ack_seq;
> + set_peek_offset(conn, 0);
> tcp_data_from_sock(c, conn);
> +
> + /* Empty queue before any POLL event tries to send it again */
> + tcp_l2_data_buf_flush(c);
I'm not clear on what the second flush call is for. The only frames
queued should be those added by the tcp_data_from_sock() just above,
and those should be flushed when we get to tcp_defer_handler() before
we return to the epoll loop.
> }
>
> if (!iov_i)
> @@ -2459,6 +2495,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
> conn->seq_ack_to_tap = conn->seq_from_tap;
>
> conn_event(c, conn, ESTABLISHED);
> + set_peek_offset(conn, 0);
>
> /* The client might have sent data already, which we didn't
> * dequeue waiting for SYN,ACK from tap -- check now.
> @@ -2539,6 +2576,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
> goto reset;
>
> conn_event(c, conn, ESTABLISHED);
> + set_peek_offset(conn, 0);
The set_peek_offset() could go into conn_event() to avoid the
duplication. Not sure if it's worth it or not.
> if (th->fin) {
> conn->seq_from_tap++;
> @@ -2705,7 +2743,7 @@ void tcp_listen_handler(struct ctx *c, union epoll_ref ref,
> struct sockaddr_storage sa;
> socklen_t sl = sizeof(sa);
> union flow *flow;
> - int s;
> + int s = 0;
>
> if (c->no_tcp || !(flow = flow_alloc()))
> return;
> @@ -2767,7 +2805,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
> flow_dbg(conn, "ACK timeout, retry");
> conn->retrans++;
> conn->seq_to_tap = conn->seq_ack_from_tap;
> + set_peek_offset(conn, 0);
> + tcp_l2_data_buf_flush(c);
Uh.. doesn't this flush have to go *before* the seq_to_tap update, for
the reasons discussed above?
> tcp_data_from_sock(c, conn);
> + tcp_l2_data_buf_flush(c);
> tcp_timer_ctl(c, conn);
> }
> } else {
> @@ -3041,7 +3082,8 @@ static void tcp_sock_refill_init(const struct ctx *c)
> */
> int tcp_init(struct ctx *c)
> {
> - unsigned b;
> + unsigned int b, optv = 0;
> + int s;
>
> for (b = 0; b < TCP_HASH_TABLE_SIZE; b++)
> tc_hash[b] = FLOW_SIDX_NONE;
> @@ -3065,6 +3107,16 @@ int tcp_init(struct ctx *c)
> NS_CALL(tcp_ns_socks_init, c);
> }
>
> + /* Probe for SO_PEEK_OFF support */
> + s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
> + if (s < 0) {
> + warn("Temporary TCP socket creation failed");
> + } else {
> + if (!setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &optv, sizeof(int)))
> + peek_offset_cap = true;
> + close(s);
> + }
> + debug("SO_PEEK_OFF%ssupported", peek_offset_cap ? " " : " not ");
> return 0;
> }
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available
2024-05-02 1:31 ` David Gibson
@ 2024-05-03 13:42 ` Stefano Brivio
2024-05-03 14:43 ` Jon Maloy
2024-05-06 6:51 ` David Gibson
0 siblings, 2 replies; 10+ messages in thread
From: Stefano Brivio @ 2024-05-03 13:42 UTC (permalink / raw)
To: David Gibson, Jon Maloy; +Cc: passt-dev, lvivier, dgibson
On Thu, 2 May 2024 11:31:52 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:
> On Wed, May 01, 2024 at 04:28:38PM -0400, Jon Maloy wrote:
> > From linux-6.9.0 onwards, the kernel contains
> > commit 05ea491641d3 ("tcp: add support for SO_PEEK_OFF socket option").
> >
> > This new feature makes it possible to call recvmsg(MSG_PEEK) and have
> > it start reading data at a given offset set by the SO_PEEK_OFF socket
> > option. This way, we can avoid repeated reading of already read bytes of
> > a received message, hence saving read cycles when forwarding TCP messages
> > in the host->namespace direction.
> >
> > In this commit, we add functionality to leverage this feature when
> > available, while we fall back to the previous behavior when not.
> >
> > Measurements with iperf3 show that throughput increases by 15-20 percent
> > in the host->namespace direction when this feature is used.
> >
> > Signed-off-by: Jon Maloy <jmaloy@redhat.com>
> >
> > ---
> > v2: - Some smaller changes as suggested by David Gibson and Stefano Brivio.
> > - Moved set_peek_offset() to only the locations where the socket is set
> > to ESTABLISHED.
> > - Removed the per-packet synchronization between sk_peek_off and
> > already_sent. Instead only doing it in retransmit situations.
> > - The problem I found when troubleshooting the occasionally occurring
> > out-of-sync values between 'already_sent' and 'sk_peek_offset' may
> > have deeper implications that we may need to investigate.
> > ---
> > tcp.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++++++++---
> > 1 file changed, 55 insertions(+), 3 deletions(-)
> >
> > diff --git a/tcp.c b/tcp.c
> > index 905d26f..535f1a2 100644
> > --- a/tcp.c
> > +++ b/tcp.c
> > @@ -513,6 +513,9 @@ static struct iovec tcp6_l2_iov [TCP_FRAMES_MEM];
> > static struct iovec tcp4_l2_flags_iov [TCP_FRAMES_MEM];
> > static struct iovec tcp6_l2_flags_iov [TCP_FRAMES_MEM];
> >
> > +/* Does the kernel support SO_PEEK_OFF? */
> > +static bool peek_offset_cap;
> > +
> > /* sendmsg() to socket */
> > static struct iovec tcp_iov [UIO_MAXIOV];
> >
> > @@ -582,6 +585,22 @@ static_assert(ARRAY_SIZE(tc_hash) >= FLOW_MAX,
> > int init_sock_pool4 [TCP_SOCK_POOL_SIZE];
> > int init_sock_pool6 [TCP_SOCK_POOL_SIZE];
> >
> > +/**
> > + * set_peek_offset() - Set SO_PEEK_OFF offset on a socket if supported
> > + * @conn Connection struct with reference to socket and flags
> > + * @offset Offset in bytes
Sorry, my bad: I forgot colons after the variable names in my proposal
for this comment: @conn:, @offset:.
> > + */
> > +static void set_peek_offset(struct tcp_tap_conn *conn, int offset)
> > +{
> > + int s;
> > +
> > + if (!peek_offset_cap)
> > + return;
Usually we have one extra newline after an early return between a
condition and some other code that's not so closely related.
> > + s = conn->sock;
> > + if (setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &offset, sizeof(offset)))
> > + perror("Failed to set SO_PEEK_OFF\n");
Don't print to stderr directly, use err().
> > +}
> > +
> > /**
> > * tcp_conn_epoll_events() - epoll events mask for given connection state
> > * @events: Current connection events
> > @@ -2174,6 +2193,12 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> > if (iov_rem)
> > iov_sock[fill_bufs].iov_len = iov_rem;
> >
> > + if (peek_offset_cap) {
> > + /* Don't use discard buffer */
> > + mh_sock.msg_iov = &iov_sock[1];
> > + mh_sock.msg_iovlen -= 1;
> > + }
> > +
>
> I think this would be a little clearer if moved up to where we
> currently initialise mh_sock.msg_iov and iov_sock[0], and make that an
> else clause to this if. That would make it more obvious that we have
> two different - and mutually exclusive - mechanisms for dealing with
> un-acked data in the socket buffer. Not worth a respin on its own,
> though.
>
> > /* Receive into buffers, don't dequeue until acknowledged by guest. */
> > do
> > len = recvmsg(s, &mh_sock, MSG_PEEK);
> > @@ -2195,7 +2220,10 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> > return 0;
> > }
> >
> > - sendlen = len - already_sent;
> > + sendlen = len;
> > + if (!peek_offset_cap)
> > + sendlen -= already_sent;
> > +
> > if (sendlen <= 0) {
> > conn_flag(c, conn, STALLED);
> > return 0;
> > @@ -2365,9 +2393,17 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
> > flow_trace(conn,
> > "fast re-transmit, ACK: %u, previous sequence: %u",
> > max_ack_seq, conn->seq_to_tap);
> > +
> > + /* Ensure seq_from_tap isn't updated twice after call */
> > + tcp_l2_data_buf_flush(c);
>
> tcp_l2_data_buf_flush() was replaced by tcp_payload_flush() in a
> recently merged change from Laurent.
>
> IIUC, this is necessary because otherwise our update to seq_to_tap can
...but Jon's comment refers to seq_from_tap (not seq_to_tap)? I'm
confused.
> be clobbered from tcp_payload_flush() when we process the
> queued-but-not-sent frames.
...how? I don't quite understand the issue here: tcp_payload_flush()
updates seq_to_tap once we send the frames, not before, right?
> This seems like a correct fix, but not an
> optimal one: we're flushing out data we've already determined we're
> going to retransmit. Instead, I think we want a different helper that
> simply discards the queued frames
Don't we always send (within the same epoll_wait() cycle) what we
queued? What am I missing?
> - I'm thinking maybe we actually
> want a helper that's called from both the fast and slow retransmit
> paths and handles that.
>
> Ah, wait, we only want to discard queued frames that belong to this
> connection, that's trickier.
>
> It seems to me this is a pre-existing bug, we just managed to get away
> with it previously. I think this is at least one cause of the weirdly
> jumping forwarding sequence numbers you observed. So I think we want
> to make a patch fixing this that goes before the SO_PEEK_OFF changes.
>
> > +
> > conn->seq_ack_from_tap = max_ack_seq;
> > conn->seq_to_tap = max_ack_seq;
> > + set_peek_offset(conn, 0);
> > tcp_data_from_sock(c, conn);
> > +
> > + /* Empty queue before any POLL event tries to send it again */
> > + tcp_l2_data_buf_flush(c);
>
> I'm not clear on what the second flush call is for. The only frames
> queued should be those added by the tcp_data_from_sock() just above,
> and those should be flushed when we get to tcp_defer_handler() before
> we return to the epoll loop.
>
> > }
> >
> > if (!iov_i)
> > @@ -2459,6 +2495,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
> > conn->seq_ack_to_tap = conn->seq_from_tap;
> >
> > conn_event(c, conn, ESTABLISHED);
> > + set_peek_offset(conn, 0);
> >
> > /* The client might have sent data already, which we didn't
> > * dequeue waiting for SYN,ACK from tap -- check now.
> > @@ -2539,6 +2576,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
> > goto reset;
> >
> > conn_event(c, conn, ESTABLISHED);
> > + set_peek_offset(conn, 0);
>
> The set_peek_offset() could go into conn_event() to avoid the
> duplication. Not sure if it's worth it or not.
I wouldn't do that in conn_event(), the existing side effects there are
kind of expected, but set_peek_offset() isn't so closely related to TCP
events I'd say.
> > if (th->fin) {
> > conn->seq_from_tap++;
> > @@ -2705,7 +2743,7 @@ void tcp_listen_handler(struct ctx *c, union epoll_ref ref,
> > struct sockaddr_storage sa;
> > socklen_t sl = sizeof(sa);
> > union flow *flow;
> > - int s;
> > + int s = 0;
> >
> > if (c->no_tcp || !(flow = flow_alloc()))
> > return;
> > @@ -2767,7 +2805,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
> > flow_dbg(conn, "ACK timeout, retry");
> > conn->retrans++;
> > conn->seq_to_tap = conn->seq_ack_from_tap;
> > + set_peek_offset(conn, 0);
> > + tcp_l2_data_buf_flush(c);
>
> Uh.. doesn't this flush have to go *before* the seq_to_tap update, for
> the reasons discussed above?
>
> > tcp_data_from_sock(c, conn);
> > + tcp_l2_data_buf_flush(c);
I don't understand the purpose of these new tcp_l2_data_buf_flush()
calls. If they fix a pre-existing issue (but I'm not sure which one),
they should be in a different patch.
> > tcp_timer_ctl(c, conn);
> > }
> > } else {
> > @@ -3041,7 +3082,8 @@ static void tcp_sock_refill_init(const struct ctx *c)
> > */
> > int tcp_init(struct ctx *c)
> > {
> > - unsigned b;
> > + unsigned int b, optv = 0;
> > + int s;
> >
> > for (b = 0; b < TCP_HASH_TABLE_SIZE; b++)
> > tc_hash[b] = FLOW_SIDX_NONE;
> > @@ -3065,6 +3107,16 @@ int tcp_init(struct ctx *c)
> > NS_CALL(tcp_ns_socks_init, c);
> > }
> >
> > + /* Probe for SO_PEEK_OFF support */
> > + s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
> > + if (s < 0) {
> > + warn("Temporary TCP socket creation failed");
> > + } else {
> > + if (!setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &optv, sizeof(int)))
> > + peek_offset_cap = true;
> > + close(s);
> > + }
> > + debug("SO_PEEK_OFF%ssupported", peek_offset_cap ? " " : " not ");
> > return 0;
> > }
> >
>
--
Stefano
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 2/2] tcp: allow retransmit when peer receive window is zero
2024-05-01 20:28 ` [PATCH v2 2/2] tcp: allow retransmit when peer receive window is zero Jon Maloy
@ 2024-05-03 13:43 ` Stefano Brivio
2024-05-03 15:30 ` Jon Maloy
0 siblings, 1 reply; 10+ messages in thread
From: Stefano Brivio @ 2024-05-03 13:43 UTC (permalink / raw)
To: Jon Maloy; +Cc: passt-dev, lvivier, dgibson
On Wed, 1 May 2024 16:28:39 -0400
Jon Maloy <jmaloy@redhat.com> wrote:
> A bug in kernel TCP may lead to a deadlock where a zero window is sent
> from the peer, while he is unable to send out window updates even after
s/he/it/
> reads have freed up enough buffer space to permit a larger window.
> In this situation, new window advertisements from the peer can only be
> triggered by packets arriving from this side.
>
> However, such packets are never sent, because the zero-window condition
> currently prevents this side from sending out any packets whatsoever
> to the peer.
>
> We notice that the above bug is triggered *only* after the peer has
> dropped an arriving packet because of severe memory squeeze, and that we
> hence always enter a retransmission situation when this occurs.
This is clearly an issue, but:
> This
> also means that RFC 9293 is violated, since the zero-advertisement
> shrinks the previously advertised window within which the dropped packet
> originally was sent.
...I don't think the RFC is violated, it just says "SHOULD NOT".
> RFC 9293 gives the solution to this situation. In chapter 3.6.1 we find
> the following statement:
> "A TCP receiver SHOULD NOT shrink the window, i.e., move the right
> window edge to the left (SHLD-14). However, a sending TCP peer MUST
> be robust against window shrinking, which may cause the
> "usable window" (see Section 3.8.6.2.1) to become negative (MUST-34).
>
> If this happens, the sender SHOULD NOT send new data (SHLD-15), but
My understanding of this "this" is that it refers to the case where the
usable window becomes negative, which I'm not sure it's the case here.
That is, if the zero window update you get includes an acknowledgement
for all the data that was sent so far, the usable window is not
negative, it's zero, and we shouldn't retransmit anything.
If previously sent data is not acknowledged, then yes, we should
retransmit it at this point.
In both cases, we _must_ implement zero-window probing (3.8.6.1), which
is not entirely covered by this patch, if I understand correctly.
> SHOULD retransmit normally the old unacknowledged data between SND.UNA
> and SND.UNA+SND.WND (SHLD-16). The sender MAY also retransmit old data
> beyond SND.UNA+SND.WND (MAY-7)"
>
> It should be noted that although this solves the problem we have at
> hand, it is not a genuine solution to the kernel bug. There may well
> be TCP stacks around in other OSes which don't do this, nor have
> keep-alive probing as an alternative way to solve the situation.
>
> Signed-off-by: Jon Maloy <jmaloy@redhat.com>
>
> ---
> v2: - Using previously advertised window during retransmission, instead
> of the highest send sequence number in the cycle.
> ---
> tcp.c | 29 ++++++++++++++++++++---------
> tcp_conn.h | 2 ++
> 2 files changed, 22 insertions(+), 9 deletions(-)
>
> diff --git a/tcp.c b/tcp.c
> index 535f1a2..703c62f 100644
> --- a/tcp.c
> +++ b/tcp.c
> @@ -1749,9 +1749,15 @@ static void tcp_get_tap_ws(struct tcp_tap_conn *conn,
> */
> static void tcp_tap_window_update(struct tcp_tap_conn *conn, unsigned wnd)
> {
> + uint32_t wnd_upper;
> +
> wnd = MIN(MAX_WINDOW, wnd << conn->ws_from_tap);
> conn->wnd_from_tap = MIN(wnd >> conn->ws_from_tap, USHRT_MAX);
>
> + wnd_upper = conn->seq_ack_from_tap + wnd;
> + if (wnd && SEQ_GT(wnd_upper, conn->seq_wup_from_tap))
> + conn->seq_wup_from_tap = wnd_upper;
> +
> /* FIXME: reflect the tap-side receiver's window back to the sock-side
> * sender by adjusting SO_RCVBUF? */
> }
> @@ -1784,6 +1790,7 @@ static void tcp_seq_init(const struct ctx *c, struct tcp_tap_conn *conn,
> ns = (now->tv_sec * 1000000000 + now->tv_nsec) >> 5;
>
> conn->seq_to_tap = ((uint32_t)(hash >> 32) ^ (uint32_t)hash) + ns;
> + conn->seq_wup_from_tap = conn->seq_to_tap;
> }
>
> /**
> @@ -2133,9 +2140,8 @@ static void tcp_data_to_tap(const struct ctx *c, struct tcp_tap_conn *conn,
> *
> * #syscalls recvmsg
> */
> -static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> +static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn, uint32_t wnd_scaled)
Function comment should be updated.
While at it, I wonder if instead of overriding the window value, we
shouldn't rather add a 'zwp' or 'force' flag here, because the regular
call sites become a bit awkward otherwise: why would you pass the
receive window to a a function that's named tcp_data_from_sock()?
I see that there's another point where you now need to call it with a
value greater than one (below, from tcp_data_from_tap()), but I wonder
if the window to be used, here, could (always?) be inferred by the value
of seq_wup_from_tap.
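For example, with a hypothetical flag (signature invented here just to
sketch the suggestion):

	static int tcp_data_from_sock(struct ctx *c,
				      struct tcp_tap_conn *conn, bool zwp);

	tcp_data_from_sock(c, conn, false);	/* regular data path */
	tcp_data_from_sock(c, conn, true);	/* retransmit/probe: window
						 * derived internally from
						 * seq_wup_from_tap, >= 1
						 */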
> {
> - uint32_t wnd_scaled = conn->wnd_from_tap << conn->ws_from_tap;
> int fill_bufs, send_bufs = 0, last_len, iov_rem = 0;
> int sendlen, len, plen, v4 = CONN_V4(conn);
> int s = conn->sock, i, ret = 0;
> @@ -2282,6 +2288,7 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
> uint32_t max_ack_seq = conn->seq_ack_from_tap;
> uint32_t seq_from_tap = conn->seq_from_tap;
> struct msghdr mh = { .msg_iov = tcp_iov };
> + uint32_t wnd;
> size_t len;
> ssize_t n;
>
> @@ -2325,8 +2332,7 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
> SEQ_GE(ack_seq, max_ack_seq)) {
> /* Fast re-transmit */
> retr = !len && !th->fin &&
> - ack_seq == max_ack_seq &&
> - ntohs(th->window) == max_ack_seq_wnd;
The reason for this condition is to tell apart mere window updates from
other segments.
That is, if you get:
- an acknowledgement for sequence x and advertised window y0
- an acknowledgement for sequence x and advertised window y1
...then it's not a request to retransmit anything, it's a window
update.
If you get:
- an acknowledgement for sequence x and advertised window y0
- an acknowledgement for sequence x and advertised window y0
...then it's a trigger for fast re-transmit.
Yes, we should retransmit something in some cases (what you quoted
above), but not on every kind of window update (usually the window
grows).
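In code form, the distinction is roughly (a self-contained sketch, not
the patch itself):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A fast re-transmit trigger is an exact duplicate: same ACK number,
 * same advertised window, no payload, no FIN. The same ACK number with
 * a different window is a window update, not a retransmit request.
 */
static bool is_dup_ack(uint32_t ack_seq, uint32_t prev_ack,
		       uint16_t wnd, uint16_t prev_wnd,
		       size_t len, bool fin)
{
	return !len && !fin && ack_seq == prev_ack && wnd == prev_wnd;
}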
> + ack_seq == max_ack_seq;
>
> max_ack_seq_wnd = ntohs(th->window);
> max_ack_seq = ack_seq;
> @@ -2400,7 +2406,8 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
> conn->seq_ack_from_tap = max_ack_seq;
> conn->seq_to_tap = max_ack_seq;
> set_peek_offset(conn, 0);
> - tcp_data_from_sock(c, conn);
> + wnd = conn->seq_wup_from_tap - max_ack_seq;
> + tcp_data_from_sock(c, conn, wnd);
>
> /* Empty queue before any POLL event tries to send it again */
> tcp_l2_data_buf_flush(c);
> @@ -2500,7 +2507,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
> /* The client might have sent data already, which we didn't
> * dequeue waiting for SYN,ACK from tap -- check now.
> */
> - tcp_data_from_sock(c, conn);
> + tcp_data_from_sock(c, conn, conn->wnd_from_tap << conn->ws_from_tap);
> tcp_send_flag(c, conn, ACK);
> }
>
> @@ -2593,7 +2600,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
>
> tcp_tap_window_update(conn, ntohs(th->window));
>
> - tcp_data_from_sock(c, conn);
> + tcp_data_from_sock(c, conn, conn->wnd_from_tap << conn->ws_from_tap);
>
> if (p->count - idx == 1)
> return 1;
> @@ -2776,6 +2783,7 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
> {
> struct itimerspec check_armed = { { 0 }, { 0 } };
> struct tcp_tap_conn *conn = CONN(ref.flow);
> + uint32_t wnd;
>
> if (c->no_tcp)
> return;
> @@ -2807,7 +2815,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
> conn->seq_to_tap = conn->seq_ack_from_tap;
> set_peek_offset(conn, 0);
> tcp_l2_data_buf_flush(c);
> - tcp_data_from_sock(c, conn);
> +
> + /* RFC 9293, 3.8.6: Send 1 byte of new data if needed */
...if available. But, if it's not available, we should send a
zero-window probe without payload.
And to check if it's needed, we should check that the advertised
receive window is zero, right? I understand that the outcome of this:
if (!conn->wnd_from_tap)
/* RFC 9293, 3.8.6.1, zero-window probing */
tcp_data_from_sock(c, conn, 1);
else
tcp_data_from_sock(c, conn, wnd);
is the same as:
tcp_data_from_sock(c, conn, MAX(1, wnd));
but I find it a bit complicated to understand what this does otherwise.
> + wnd = conn->seq_wup_from_tap - conn->seq_ack_from_tap;
> + tcp_data_from_sock(c, conn, MAX(1, wnd));
> tcp_l2_data_buf_flush(c);
> tcp_timer_ctl(c, conn);
> }
> @@ -2863,7 +2874,7 @@ void tcp_sock_handler(struct ctx *c, union epoll_ref ref, uint32_t events)
> conn_event(c, conn, SOCK_FIN_RCVD);
>
> if (events & EPOLLIN)
> - tcp_data_from_sock(c, conn);
> + tcp_data_from_sock(c, conn, conn->wnd_from_tap << conn->ws_from_tap);
>
> if (events & EPOLLOUT)
> tcp_update_seqack_wnd(c, conn, 0, NULL);
> diff --git a/tcp_conn.h b/tcp_conn.h
> index a5f5cfe..d6b627a 100644
> --- a/tcp_conn.h
> +++ b/tcp_conn.h
> @@ -30,6 +30,7 @@
> * @wnd_to_tap: Sending window advertised to tap, unscaled (as sent)
> * @seq_to_tap: Next sequence for packets to tap
> * @seq_ack_from_tap: Last ACK number received from tap
> + * @seq_wup_from_tap: Right edge (+1) of last non-zero window from tap
It's not +1 anymore, right?
> * @seq_from_tap: Next sequence for packets from tap (not actually sent)
> * @seq_ack_to_tap: Last ACK number sent to tap
> * @seq_init_from_tap: Initial sequence number from tap
> @@ -101,6 +102,7 @@ struct tcp_tap_conn {
>
> uint32_t seq_to_tap;
> uint32_t seq_ack_from_tap;
> + uint32_t seq_wup_from_tap;
> uint32_t seq_from_tap;
> uint32_t seq_ack_to_tap;
> uint32_t seq_init_from_tap;
--
Stefano
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available
2024-05-03 13:42 ` Stefano Brivio
@ 2024-05-03 14:43 ` Jon Maloy
2024-05-06 7:15 ` David Gibson
2024-05-06 6:51 ` David Gibson
1 sibling, 1 reply; 10+ messages in thread
From: Jon Maloy @ 2024-05-03 14:43 UTC (permalink / raw)
To: Stefano Brivio, David Gibson; +Cc: passt-dev, lvivier, dgibson
On 2024-05-03 09:42, Stefano Brivio wrote:
> On Thu, 2 May 2024 11:31:52 +1000
> David Gibson <david@gibson.dropbear.id.au> wrote:
>
>> On Wed, May 01, 2024 at 04:28:38PM -0400, Jon Maloy wrote:
>>> From linux-6.9.0 onwards, the kernel contains
>>> commit 05ea491641d3 ("tcp: add support for SO_PEEK_OFF socket option").
>>>
>>> This new feature makes it possible to call recvmsg(MSG_PEEK) and have
>>> it start reading data at a given offset set by the SO_PEEK_OFF socket
>>> option. This way, we can avoid repeated reading of already read bytes of
>>> a received message, hence saving read cycles when forwarding TCP messages
>>> in the host->namespace direction.
>>>
>>> In this commit, we add functionality to leverage this feature when
>>> available, while we fall back to the previous behavior when not.
>>>
>>> Measurements with iperf3 show that throughput increases by 15-20 percent
>>> in the host->namespace direction when this feature is used.
>>>
>>> Signed-off-by: Jon Maloy <jmaloy@redhat.com>
>>>
>>> ---
>>> v2: - Some smaller changes as suggested by David Gibson and Stefano Brivio.
>>> - Moved set_peek_offset() to only the locations where the socket is set
>>> to ESTABLISHED.
>>> - Removed the per-packet synchronization between sk_peek_off and
>>> already_sent. Instead only doing it in retransmit situations.
>>> - The problem I found when troubleshooting the occasionally occurring
>>> out-of-sync values between 'already_sent' and 'sk_peek_offset' may
>>> have deeper implications that we may need to investigate.
>>> ---
>>> tcp.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++++++++---
>>> 1 file changed, 55 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/tcp.c b/tcp.c
>>> index 905d26f..535f1a2 100644
>>> --- a/tcp.c
>>> +++ b/tcp.c
>>> @@ -513,6 +513,9 @@ static struct iovec tcp6_l2_iov [TCP_FRAMES_MEM];
>>> static struct iovec tcp4_l2_flags_iov [TCP_FRAMES_MEM];
>>> static struct iovec tcp6_l2_flags_iov [TCP_FRAMES_MEM];
>>>
>>> +/* Does the kernel support SO_PEEK_OFF? */
>>> +static bool peek_offset_cap;
>>> +
>>> /* sendmsg() to socket */
>>> static struct iovec tcp_iov [UIO_MAXIOV];
>>>
>>> @@ -582,6 +585,22 @@ static_assert(ARRAY_SIZE(tc_hash) >= FLOW_MAX,
>>> int init_sock_pool4 [TCP_SOCK_POOL_SIZE];
>>> int init_sock_pool6 [TCP_SOCK_POOL_SIZE];
>>>
>>> +/**
>>> + * set_peek_offset() - Set SO_PEEK_OFF offset on a socket if supported
>>> + * @conn Connection struct with reference to socket and flags
>>> + * @offset Offset in bytes
> Sorry, my bad: I forgot colons after the variable names in my proposal
> for this comment: @conn:, @offset:.
ok. I'll fix that.
>
>>> + */
>>> +static void set_peek_offset(struct tcp_tap_conn *conn, int offset)
>>> +{
>>> + int s;
>>> +
>>> + if (!peek_offset_cap)
>>> + return;
> Usually we have one extra newline after an early return between a
> condition and some other code that's not so closely related.
ok
>
>>> + s = conn->sock;
>>> + if (setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &offset, sizeof(offset)))
>>> + perror("Failed to set SO_PEEK_OFF\n");
> Don't print to stderr directly, use err().
Ok. Maybe we end up with the same weird behavior as between info() and
printf(), although it hardly matters in this case.
>
>>> +}
>>> +
>>> /**
>>> * tcp_conn_epoll_events() - epoll events mask for given connection state
>>> * @events: Current connection events
>>> @@ -2174,6 +2193,12 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
>>> if (iov_rem)
>>> iov_sock[fill_bufs].iov_len = iov_rem;
>>>
>>> + if (peek_offset_cap) {
>>> + /* Don't use discard buffer */
>>> + mh_sock.msg_iov = &iov_sock[1];
>>> + mh_sock.msg_iovlen -= 1;
>>> + }
>>> +
>> I think this would be a little clearer if moved up to where we
>> currently initialise mh_sock.msg_iov and iov_sock[0], and make that an
>> else clause to this if. That would make it more obvious that we have
>> two different - and mutually exclusive - mechanisms for dealing with
>> un-acked data in the socket buffer. Not worth a respin on its own,
>> though.
ok
>>
>>> /* Receive into buffers, don't dequeue until acknowledged by guest. */
>>> do
>>> len = recvmsg(s, &mh_sock, MSG_PEEK);
>>> @@ -2195,7 +2220,10 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
>>> return 0;
>>> }
>>>
>>> - sendlen = len - already_sent;
>>> + sendlen = len;
>>> + if (!peek_offset_cap)
>>> + sendlen -= already_sent;
>>> +
>>> if (sendlen <= 0) {
>>> conn_flag(c, conn, STALLED);
>>> return 0;
>>> @@ -2365,9 +2393,17 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
>>> flow_trace(conn,
>>> "fast re-transmit, ACK: %u, previous sequence: %u",
>>> max_ack_seq, conn->seq_to_tap);
>>> +
>>> + /* Ensure seq_from_tap isn't updated twice after call */
>>> + tcp_l2_data_buf_flush(c);
>> tcp_l2_data_buf_flush() was replaced by tcp_payload_flush() in a
>> recently merged change from Laurent.
>>
>> IIUC, this is necessary because otherwise our update to seq_to_tap can
> ...but Jon's comment refers to seq_from_tap (not seq_to_tap)? I'm
> confused.
Right. It should be seq_to_tap.
>> be clobbered from tcp_payload_flush() when we process the
>> queued-but-not-sent frames.
> ...how? I don't quite understand the issue here: tcp_payload_flush()
> updates seq_to_tap once we send the frames, not before, right?
If we don't flush, we may have a frame there, e.g. seqno 17, followed
by a lower-numbered frame, e.g. seqno 14. Both will point to a
seq_to_tap we just gave the value 14. When the buffer queue is flushed,
we update seq_to_tap twice, so the next sent packet will be 16.
This would have worked in the old code, because we calculate the offset
value (already_sent) based on the seq_to_tap value, so we just skip
ahead one packet and continue transmitting. If we are lucky, pkt #15 is
already in the receiver's out-of-order queue, and we are ok.
It will *not* work in my code, because the kernel offset is advanced
linearly, so we will resend a packet called #16, but with the contents
of the original pkt #15.
>
>> This seems like a correct fix, but not an
>> optimal one: we're flushing out data we've already determined we're
>> going to retransmit. Instead, I think we want a different helper that
>> simply discards the queued frames
> Don't we always send (within the same epoll_wait() cycle) what we
> queued? What am I missing?
No. Evidently not.
>
>> - I'm thinking maybe we actually
>> want a helper that's called from both the fast and slow retransmit
>> paths and handles that.
>>
>> Ah, wait, we only want to discard queued frames that belong to this
>> connection, that's trickier.
>>
>> It seems to me this is a pre-existing bug, we just managed to get away
>> with it previously. I think this is at least one cause of the weirdly
>> jumping forwarding sequence numbers you observed. So I think we want
>> to make a patch fixing this that goes before the SO_PEEK_OFF changes.
This was exactly the reason for my v2: comment in the commit log.
But it may even be worse. See below.
>>
>>> +
>>> conn->seq_ack_from_tap = max_ack_seq;
>>> conn->seq_to_tap = max_ack_seq;
>>> + set_peek_offset(conn, 0);
>>> tcp_data_from_sock(c, conn);
>>> +
>>> + /* Empty queue before any POLL event tries to send it again */
>>> + tcp_l2_data_buf_flush(c);
>> I'm not clear on what the second flush call is for. The only frames
>> queued should be those added by the tcp_data_from_sock() just above,
>> and those should be flushed when we get to tcp_defer_handler() before
>> we return to the epoll loop.
Sadly no. My debugging clearly shows that an epoll() may come in
between, and try to transmit a pkt #14 (from the example above), but
now with the contents of the original pkt #15. All sorts of weirdness
may happen after that.
I am wondering if this is a generic problem: is it possible that two
consecutive epolls() may queue up two packets with the same number in
the tap queue, whereafter the number will be incremented twice when
flushed, and we create a gap in the sequence causing spurious
retransmissions? I haven't checked this theory yet, but that is part
of my plan for today.
Anyway, I don't understand the point of the delayed update of
seq_to_tap at all. To me it looks plain wrong. But I am sure somebody
can explain.
>>
>>> }
>>>
>>> if (!iov_i)
>>> @@ -2459,6 +2495,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
>>> conn->seq_ack_to_tap = conn->seq_from_tap;
>>>
>>> conn_event(c, conn, ESTABLISHED);
>>> + set_peek_offset(conn, 0);
>>>
>>> /* The client might have sent data already, which we didn't
>>> * dequeue waiting for SYN,ACK from tap -- check now.
>>> @@ -2539,6 +2576,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
>>> goto reset;
>>>
>>> conn_event(c, conn, ESTABLISHED);
>>> + set_peek_offset(conn, 0);
>> The set_peek_offset() could go into conn_event() to avoid the
>> duplication. Not sure if it's worth it or not.
> I wouldn't do that in conn_event(), the existing side effects there are
> kind of expected, but set_peek_offset() isn't so closely related to TCP
> events I'd say.
>
>>> if (th->fin) {
>>> conn->seq_from_tap++;
>>> @@ -2705,7 +2743,7 @@ void tcp_listen_handler(struct ctx *c, union epoll_ref ref,
>>> struct sockaddr_storage sa;
>>> socklen_t sl = sizeof(sa);
>>> union flow *flow;
>>> - int s;
>>> + int s = 0;
>>>
>>> if (c->no_tcp || !(flow = flow_alloc()))
>>> return;
>>> @@ -2767,7 +2805,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
>>> flow_dbg(conn, "ACK timeout, retry");
>>> conn->retrans++;
>>> conn->seq_to_tap = conn->seq_ack_from_tap;
>>> + set_peek_offset(conn, 0);
>>> + tcp_l2_data_buf_flush(c);
>> Uh.. doesn't this flush have to go *before* the seq_to_tap update, for
>> the reasons discussed above?
Damn. Yes. It never degraded to the level where timer retransmit was
triggered, so I only copy-pasted this, and obviously didn't do it
right.
>>
>>> tcp_data_from_sock(c, conn);
>>> + tcp_l2_data_buf_flush(c);
> I don't understand the purpose of these new tcp_l2_data_buf_flush()
> calls. If they fix a pre-existing issue (but I'm not sure which one),
> they should be in a different patch.
Maybe so.
///jon
>
>>> tcp_timer_ctl(c, conn);
>>> }
>>> } else {
>>> @@ -3041,7 +3082,8 @@ static void tcp_sock_refill_init(const struct ctx *c)
>>> */
>>> int tcp_init(struct ctx *c)
>>> {
>>> - unsigned b;
>>> + unsigned int b, optv = 0;
>>> + int s;
>>>
>>> for (b = 0; b < TCP_HASH_TABLE_SIZE; b++)
>>> tc_hash[b] = FLOW_SIDX_NONE;
>>> @@ -3065,6 +3107,16 @@ int tcp_init(struct ctx *c)
>>> NS_CALL(tcp_ns_socks_init, c);
>>> }
>>>
>>> + /* Probe for SO_PEEK_OFF support */
>>> + s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
>>> + if (s < 0) {
>>> + warn("Temporary TCP socket creation failed");
>>> + } else {
>>> + if (!setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &optv, sizeof(int)))
>>> + peek_offset_cap = true;
>>> + close(s);
>>> + }
>>> + debug("SO_PEEK_OFF%ssupported", peek_offset_cap ? " " : " not ");
>>> return 0;
>>> }
>>>
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 2/2] tcp: allow retransmit when peer receive window is zero
2024-05-03 13:43 ` Stefano Brivio
@ 2024-05-03 15:30 ` Jon Maloy
0 siblings, 0 replies; 10+ messages in thread
From: Jon Maloy @ 2024-05-03 15:30 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-dev, lvivier, dgibson
On 2024-05-03 09:43, Stefano Brivio wrote:
> On Wed, 1 May 2024 16:28:39 -0400
> Jon Maloy <jmaloy@redhat.com> wrote:
>
>> A bug in kernel TCP may lead to a deadlock where a zero window is sent
>> from the peer, while he is unable to send out window updates even after
> s/he/it/
they? ;-)
>
>> reads have freed up enough buffer space to permit a larger window.
>> In this situation, new window advertisements from the peer can only be
>> triggered by packets arriving from this side.
>>
>> However, such packets are never sent, because the zero-window condition
>> currently prevents this side from sending out any packets whatsoever
>> to the peer.
>>
>> We notice that the above bug is triggered *only* after the peer has
>> dropped an arriving packet because of severe memory squeeze, and that we
>> hence always enter a retransmission situation when this occurs.
> This is clearly an issue, but:
>
>> This
>> also means that RFC 9293 is violated, since the zero-advertisement
>> shrinks the previously advertised window within which the dropped packet
>> originally was sent.
> ...I don't think the RFC is violated, it just says "SHOULD NOT".
Right. I can rephrase it. But it is clearly a bad situation that we have
to remedy.
>
>> RFC 9293 gives the solution to this situation. In chapter 3.6.1 we find
>> the following statement:
>> "A TCP receiver SHOULD NOT shrink the window, i.e., move the right
>> window edge to the left (SHLD-14). However, a sending TCP peer MUST
>> be robust against window shrinking, which may cause the
>> "usable window" (see Section 3.8.6.2.1) to become negative (MUST-34).
>>
>> If this happens, the sender SHOULD NOT send new data (SHLD-15), but
> My understanding of this "this" is that it refers to the case where the
> usable window becomes negative, which I'm not sure it's the case here.
Window shrinking *may* cause the window to become negative, but not
necessarily. I agree this may be a matter of interpretation, but I cannot
see any reason why we should handle the situation differently if the
shrinking leads to a zero, or even a positive, window.
>
> That is, if the zero window update you get includes an acknowledgement
> for all the data that was sent so far, the usable window is not
> negative, it's zero, and we shouldn't retransmit anything.
>
> If previously sent data is not acknowledged, then yes, we should
> retransmit it at this point.
That is exactly the situation we are dealing with here.
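(In passt's terms, the usable window Stefano describes would be computed
roughly as in this sketch, reusing the existing connection fields; 'wnd'
here is assumed to be the scaled window from the tap side:)

	/* RFC 9293, 3.8.6.2.1: usable window = SND.UNA + SND.WND - SND.NXT.
	 * Negative means already-sent data now lies beyond the shrunk
	 * right edge, and is what we need to retransmit.
	 */
	int32_t usable = (int32_t)(conn->seq_ack_from_tap + wnd -
				   conn->seq_to_tap);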
>
> In both cases, we _must_ implement zero-window probing (3.8.6.1), which
> is not entirely covered by this patch, if I understand correctly.
Right. I was planning to add that in a separate patch.
>
>> SHOULD retransmit normally the old unacknowledged data between SND.UNA
>> and SND.UNA+SND.WND (SHLD-16). The sender MAY also retransmit old data
>> beyond SND.UNA+SND.WND (MAY-7)"
>>
>> It should be noted that although this solves the problem we have at
>> hand, it is not a genuine solution to the kernel bug. There may well
>> be TCP stacks around in other OSes which don't do this, nor have
>> keep-alive probing as an alternative way to solve the situation.
>>
>> Signed-off-by: Jon Maloy <jmaloy@redhat.com>
>>
>> ---
>> v2: - Using previously advertised window during retransmission, instead
>> of the highest sent sequence number in the cycle.
>> ---
>> tcp.c | 29 ++++++++++++++++++++---------
>> tcp_conn.h | 2 ++
>> 2 files changed, 22 insertions(+), 9 deletions(-)
>>
>> diff --git a/tcp.c b/tcp.c
>> index 535f1a2..703c62f 100644
>> --- a/tcp.c
>> +++ b/tcp.c
>> @@ -1749,9 +1749,15 @@ static void tcp_get_tap_ws(struct tcp_tap_conn *conn,
>> */
>> static void tcp_tap_window_update(struct tcp_tap_conn *conn, unsigned wnd)
>> {
>> + uint32_t wnd_upper;
>> +
>> wnd = MIN(MAX_WINDOW, wnd << conn->ws_from_tap);
>> conn->wnd_from_tap = MIN(wnd >> conn->ws_from_tap, USHRT_MAX);
>>
>> + wnd_upper = conn->seq_ack_from_tap + wnd;
>> + if (wnd && SEQ_GT(wnd_upper, conn->seq_wup_from_tap))
>> + conn->seq_wup_from_tap = wnd_upper;
>> +
>> /* FIXME: reflect the tap-side receiver's window back to the sock-side
>> * sender by adjusting SO_RCVBUF? */
>> }
>> @@ -1784,6 +1790,7 @@ static void tcp_seq_init(const struct ctx *c, struct tcp_tap_conn *conn,
>> ns = (now->tv_sec * 1000000000 + now->tv_nsec) >> 5;
>>
>> conn->seq_to_tap = ((uint32_t)(hash >> 32) ^ (uint32_t)hash) + ns;
>> + conn->seq_wup_from_tap = conn->seq_to_tap;
>> }
>>
>> /**
>> @@ -2133,9 +2140,8 @@ static void tcp_data_to_tap(const struct ctx *c, struct tcp_tap_conn *conn,
>> *
>> * #syscalls recvmsg
>> */
>> -static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
>> +static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn, uint32_t wnd_scaled)
> Function comment should be updated.
Right.
I also noted that the prototype is > 80 characters, although
checkpatch.pl didn't catch that.
>
> While at it, I wonder if instead of overriding the window value, we
> shouldn't rather add a 'zwp' or 'force' flag here, because the regular
> call sites become a bit awkward otherwise: why would you pass the
> receive window to a function that's named tcp_data_from_sock()?
It would cause an extra test to decide whether we need to calculate it
or not. Besides, by the same logic, why would we even *calculate* a
receive window in that function?
Maybe we should just call it something other than 'wnd_scaled'?
In this function, it is only about how much data we are allowed to read
and forward, without any further interpretation.
>
> I see that there's another point where you now need to call it with a
> value greater than one (below, from tcp_data_from_tap()), but I wonder
> if the window to be used, here, could (always?) be inferred by the value
> of seq_wup_from_tap.
Good point. It probably could.
>
>> {
>> - uint32_t wnd_scaled = conn->wnd_from_tap << conn->ws_from_tap;
>> int fill_bufs, send_bufs = 0, last_len, iov_rem = 0;
>> int sendlen, len, plen, v4 = CONN_V4(conn);
>> int s = conn->sock, i, ret = 0;
>> @@ -2282,6 +2288,7 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
>> uint32_t max_ack_seq = conn->seq_ack_from_tap;
>> uint32_t seq_from_tap = conn->seq_from_tap;
>> struct msghdr mh = { .msg_iov = tcp_iov };
>> + uint32_t wnd;
>> size_t len;
>> ssize_t n;
>>
>> @@ -2325,8 +2332,7 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
>> SEQ_GE(ack_seq, max_ack_seq)) {
>> /* Fast re-transmit */
>> retr = !len && !th->fin &&
>> - ack_seq == max_ack_seq &&
>> - ntohs(th->window) == max_ack_seq_wnd;
> The reason for this condition is to tell apart mere window updates from
> other segments.
>
> That is, if you get:
>
> - an acknowledgement for sequence x and advertised window y0
> - an acknowledgement for sequence x and advertised window y1
>
> ...then it's not a request to retransmit anything, it's a window
> update.
>
> If you get:
>
> - an acknowledgement for sequence x and advertised window y0
> - an acknowledgement for sequence x and advertised window y0
>
> ...then it's a trigger for fast re-transmit.
>
> Yes, we should retransmit something in some cases (what you quoted
> above), but not on every kind of window update (usually the window
> grows).
You are right. I need to add a condition here.
Now I am slightly puzzled: why don't I see massive amounts of
retransmits happening all the time?
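(What I have in mind is something along these lines -- a sketch only,
untested: keep fast re-transmit for true duplicate ACKs, and also catch
the shrink-to-zero case this series targets:)

	retr = !len && !th->fin && ack_seq == max_ack_seq &&
	       (ntohs(th->window) == max_ack_seq_wnd ||	/* dup ACK */
		!ntohs(th->window));	/* window shrunk to zero */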
>
>> + ack_seq == max_ack_seq;
>>
>> max_ack_seq_wnd = ntohs(th->window);
>> max_ack_seq = ack_seq;
>> @@ -2400,7 +2406,8 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
>> conn->seq_ack_from_tap = max_ack_seq;
>> conn->seq_to_tap = max_ack_seq;
>> set_peek_offset(conn, 0);
>> - tcp_data_from_sock(c, conn);
>> + wnd = conn->seq_wup_from_tap - max_ack_seq;
>> + tcp_data_from_sock(c, conn, wnd);
>>
>> /* Empty queue before any POLL event tries to send it again */
>> tcp_l2_data_buf_flush(c);
>> @@ -2500,7 +2507,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
>> /* The client might have sent data already, which we didn't
>> * dequeue waiting for SYN,ACK from tap -- check now.
>> */
>> - tcp_data_from_sock(c, conn);
>> + tcp_data_from_sock(c, conn, conn->wnd_from_tap << conn->ws_from_tap);
>> tcp_send_flag(c, conn, ACK);
>> }
>>
>> @@ -2593,7 +2600,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
>>
>> tcp_tap_window_update(conn, ntohs(th->window));
>>
>> - tcp_data_from_sock(c, conn);
>> + tcp_data_from_sock(c, conn, conn->wnd_from_tap << conn->ws_from_tap);
>>
>> if (p->count - idx == 1)
>> return 1;
>> @@ -2776,6 +2783,7 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
>> {
>> struct itimerspec check_armed = { { 0 }, { 0 } };
>> struct tcp_tap_conn *conn = CONN(ref.flow);
>> + uint32_t wnd;
>>
>> if (c->no_tcp)
>> return;
>> @@ -2807,7 +2815,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
>> conn->seq_to_tap = conn->seq_ack_from_tap;
>> set_peek_offset(conn, 0);
>> tcp_l2_data_buf_flush(c);
>> - tcp_data_from_sock(c, conn);
>> +
>> + /* RFC 9293, 3.8.6: Send 1 byte of new data if needed */
> ...if available. But, if it's not available, we should send a
> zero-window probe without payload.
As mentioned, I am planning a separate patch for this. I think this
also affects the timer value, so it justifies an extra patch.
>
> And to check if it's needed, we should check that the advertised
> receive window is zero, right? I understand that the outcome of this:
>
> if (!conn->wnd_from_tap)
> /* RFC 9293, 3.8.6.1, zero-window probing */
> tcp_data_from_sock(c, conn, 1);
> else
> tcp_data_from_sock(c, conn, wnd);
>
> is the same as:
>
> tcp_data_from_sock(c, conn, MAX(1, wnd));
This still won't send any data if there is none available, so an extra
patch is merited.
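(For the record, a rough sketch of what that extra patch might do --
assuming, which needs checking, that tcp_data_from_sock() reports
whether it queued anything:)

	/* RFC 9293, 3.8.6.1: if the peer advertises a zero window and
	 * no data is available, send a flag-only probe so the peer
	 * still gets a segment that can trigger a window update.
	 */
	if (!conn->wnd_from_tap && tcp_data_from_sock(c, conn, 1) <= 0)
		tcp_send_flag(c, conn, ACK);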
///jon
>
> but I find it a bit complicated to understand what this does otherwise.
>
>> + wnd = conn->seq_wup_from_tap - conn->seq_ack_from_tap;
>> + tcp_data_from_sock(c, conn, MAX(1, wnd));
>> tcp_l2_data_buf_flush(c);
>> tcp_timer_ctl(c, conn);
>> }
>> @@ -2863,7 +2874,7 @@ void tcp_sock_handler(struct ctx *c, union epoll_ref ref, uint32_t events)
>> conn_event(c, conn, SOCK_FIN_RCVD);
>>
>> if (events & EPOLLIN)
>> - tcp_data_from_sock(c, conn);
>> + tcp_data_from_sock(c, conn, conn->wnd_from_tap << conn->ws_from_tap);
>>
>> if (events & EPOLLOUT)
>> tcp_update_seqack_wnd(c, conn, 0, NULL);
>> diff --git a/tcp_conn.h b/tcp_conn.h
>> index a5f5cfe..d6b627a 100644
>> --- a/tcp_conn.h
>> +++ b/tcp_conn.h
>> @@ -30,6 +30,7 @@
>> * @wnd_to_tap: Sending window advertised to tap, unscaled (as sent)
>> * @seq_to_tap: Next sequence for packets to tap
>> * @seq_ack_from_tap: Last ACK number received from tap
>> + * @seq_wup_from_tap: Right edge (+1) of last non-zero window from tap
> It's not +1 anymore, right?
>
>> * @seq_from_tap: Next sequence for packets from tap (not actually sent)
>> * @seq_ack_to_tap: Last ACK number sent to tap
>> * @seq_init_from_tap: Initial sequence number from tap
>> @@ -101,6 +102,7 @@ struct tcp_tap_conn {
>>
>> uint32_t seq_to_tap;
>> uint32_t seq_ack_from_tap;
>> + uint32_t seq_wup_from_tap;
>> uint32_t seq_from_tap;
>> uint32_t seq_ack_to_tap;
>> uint32_t seq_init_from_tap;
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available
2024-05-03 13:42 ` Stefano Brivio
2024-05-03 14:43 ` Jon Maloy
@ 2024-05-06 6:51 ` David Gibson
1 sibling, 0 replies; 10+ messages in thread
From: David Gibson @ 2024-05-06 6:51 UTC (permalink / raw)
To: Stefano Brivio; +Cc: Jon Maloy, passt-dev, lvivier, dgibson
On Fri, May 03, 2024 at 03:42:55PM +0200, Stefano Brivio wrote:
> On Thu, 2 May 2024 11:31:52 +1000
> David Gibson <david@gibson.dropbear.id.au> wrote:
> > On Wed, May 01, 2024 at 04:28:38PM -0400, Jon Maloy wrote:
[snip]
> > > /* Receive into buffers, don't dequeue until acknowledged by guest. */
> > > do
> > > len = recvmsg(s, &mh_sock, MSG_PEEK);
> > > @@ -2195,7 +2220,10 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> > > return 0;
> > > }
> > >
> > > - sendlen = len - already_sent;
> > > + sendlen = len;
> > > + if (!peek_offset_cap)
> > > + sendlen -= already_sent;
> > > +
> > > if (sendlen <= 0) {
> > > conn_flag(c, conn, STALLED);
> > > return 0;
> > > @@ -2365,9 +2393,17 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
> > > flow_trace(conn,
> > > "fast re-transmit, ACK: %u, previous sequence: %u",
> > > max_ack_seq, conn->seq_to_tap);
> > > +
> > > + /* Ensure seq_from_tap isn't updated twice after call */
> > > + tcp_l2_data_buf_flush(c);
> >
> > tcp_l2_data_buf_flush() was replaced by tcp_payload_flush() in a
> > recently merged change from Laurent.
> >
> > IIUC, this is necessary because otherwise our update to seq_to_tap can
>
> ...but Jon's comment refers to seq_from_tap (not seq_to_tap)? I'm
> confused.
I missed this, but as Jon clarified it's supposed to be seq_to_tap
here.
> > be clobbered from tcp_payload_flush() when we process the
> > queued-but-not-sent frames.
>
> ...how? I don't quite understand the issue here: tcp_payload_flush()
> updates seq_to_tap once we send the frames, not before, right?
Yes, but that's not relevant. The problem arises when (1) we queue
some frames without knowledge of the fast re-transmit, *then* (2) we
realize we need to re-transmit. This can happen if we have activity
on both sock-side and tap-side of the same connection in the same
epoll cycle. If we process the socket side first, we will do (1),
then while processing the tap side we could see dup acks triggering
(2).
At (2) we rewind seq_to_tap, but when we flush the frames queued at
(1) we advance it again, incorrectly - we should only be advancing it
when we *re*-transmit new data.
So, either we need to flush out everything *before* we wind back
seq_to_tap (Jon's approach), or we need to cancel those queued frames
(more optimal, but more complex to implement).
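Roughly, the "cancel" variant could look like the sketch below.
tcp_frame_conn() is a hypothetical per-frame connection lookup that
doesn't exist today, and tcp4_l2_buf_used is assumed to be the counter
paired with tcp4_l2_iov; the IPv6 queue would need the same treatment:

	/* Drop queued-but-unsent frames belonging to @conn, keeping
	 * frames of other connections intact.
	 */
	static void tcp_l2_data_buf_discard(const struct tcp_tap_conn *conn)
	{
		unsigned i, j = 0;

		for (i = 0; i < tcp4_l2_buf_used; i++) {
			if (tcp_frame_conn(i) == conn)	/* hypothetical */
				continue;		/* discard frame */
			if (i != j)
				tcp4_l2_iov[j] = tcp4_l2_iov[i];
			j++;
		}
		tcp4_l2_buf_used = j;
	}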
> > This seems like a correct fix, but not an
> > optimal one: we're flushing out data we've already determined we're
> > going to retransmit. Instead, I think we want a different helper that
> > simply discards the queued frames
>
> Don't we always send (within the same epoll_wait() cycle) what we
> queued? What am I missing?
We do (or at least should), yes. But in a single epoll cycle we can
queue frames (triggered by socket activity) before we realise we have
to retransmit (triggered by tap or timer activity).
> > - I'm thinking maybe we actually
> > want a helper that's called from both the fast and slow retransmit
> > paths and handles that.
> >
> > Ah, wait, we only want to discard queued frames that belong to this
> > connection, that's trickier.
> >
> > It seems to me this is a pre-existing bug, we just managed to get away
> > with it previously. I think this is at least one cause of the weirdly
> > jumping forwarding sequence numbers you observed. So I think we want
> > to make a patch fixing this that goes before the SO_PEEK_OFF changes.
> >
> > > +
> > > conn->seq_ack_from_tap = max_ack_seq;
> > > conn->seq_to_tap = max_ack_seq;
> > > + set_peek_offset(conn, 0);
> > > tcp_data_from_sock(c, conn);
> > > +
> > > + /* Empty queue before any POLL event tries to send it again */
> > > + tcp_l2_data_buf_flush(c);
> >
> > I'm not clear on what the second flush call is for. The only frames
> > queued should be those added by the tcp_data_from_sock() just above,
> > and those should be flushed when we get to tcp_defer_handler() before
> > we return to the epoll loop.
> >
> > > }
> > >
> > > if (!iov_i)
> > > @@ -2459,6 +2495,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
> > > conn->seq_ack_to_tap = conn->seq_from_tap;
> > >
> > > conn_event(c, conn, ESTABLISHED);
> > > + set_peek_offset(conn, 0);
> > >
> > > /* The client might have sent data already, which we didn't
> > > * dequeue waiting for SYN,ACK from tap -- check now.
> > > @@ -2539,6 +2576,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
> > > goto reset;
> > >
> > > conn_event(c, conn, ESTABLISHED);
> > > + set_peek_offset(conn, 0);
> >
> > The set_peek_offset() could go into conn_event() to avoid the
> > duplication. Not sure if it's worth it or not.
>
> I wouldn't do that in conn_event(), the existing side effects there are
> kind of expected, but set_peek_offset() isn't so closely related to TCP
> events I'd say.
Fair enough.
> > > if (th->fin) {
> > > conn->seq_from_tap++;
> > > @@ -2705,7 +2743,7 @@ void tcp_listen_handler(struct ctx *c, union epoll_ref ref,
> > > struct sockaddr_storage sa;
> > > socklen_t sl = sizeof(sa);
> > > union flow *flow;
> > > - int s;
> > > + int s = 0;
> > >
> > > if (c->no_tcp || !(flow = flow_alloc()))
> > > return;
> > > @@ -2767,7 +2805,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
> > > flow_dbg(conn, "ACK timeout, retry");
> > > conn->retrans++;
> > > conn->seq_to_tap = conn->seq_ack_from_tap;
> > > + set_peek_offset(conn, 0);
> > > + tcp_l2_data_buf_flush(c);
> >
> > Uh.. doesn't this flush have to go *before* the seq_to_tap update, for
> > the reasons discussed above?
> >
> > > tcp_data_from_sock(c, conn);
> > > + tcp_l2_data_buf_flush(c);
>
> I don't understand the purpose of these new tcp_l2_data_buf_flush()
> calls. If they fix a pre-existing issue (but I'm not sure which one),
> they should be in a different patch.
As noted above I understand the purpose of the first one, but not the second.
> > > tcp_timer_ctl(c, conn);
> > > }
> > > } else {
> > > @@ -3041,7 +3082,8 @@ static void tcp_sock_refill_init(const struct ctx *c)
> > > */
> > > int tcp_init(struct ctx *c)
> > > {
> > > - unsigned b;
> > > + unsigned int b, optv = 0;
> > > + int s;
> > >
> > > for (b = 0; b < TCP_HASH_TABLE_SIZE; b++)
> > > tc_hash[b] = FLOW_SIDX_NONE;
> > > @@ -3065,6 +3107,16 @@ int tcp_init(struct ctx *c)
> > > NS_CALL(tcp_ns_socks_init, c);
> > > }
> > >
> > > + /* Probe for SO_PEEK_OFF support */
> > > + s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
> > > + if (s < 0) {
> > > + warn("Temporary TCP socket creation failed");
> > > + } else {
> > > + if (!setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &optv, sizeof(int)))
> > > + peek_offset_cap = true;
> > > + close(s);
> > > + }
> > > + debug("SO_PEEK_OFF%ssupported", peek_offset_cap ? " " : " not ");
> > > return 0;
> > > }
> > >
> >
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available
2024-05-03 14:43 ` Jon Maloy
@ 2024-05-06 7:15 ` David Gibson
0 siblings, 0 replies; 10+ messages in thread
From: David Gibson @ 2024-05-06 7:15 UTC (permalink / raw)
To: Jon Maloy; +Cc: Stefano Brivio, passt-dev, lvivier, dgibson
On Fri, May 03, 2024 at 10:43:52AM -0400, Jon Maloy wrote:
>
>
> On 2024-05-03 09:42, Stefano Brivio wrote:
> > On Thu, 2 May 2024 11:31:52 +1000
> > David Gibson <david@gibson.dropbear.id.au> wrote:
[snip]
> > > > /* Receive into buffers, don't dequeue until acknowledged by guest. */
> > > > do
> > > > len = recvmsg(s, &mh_sock, MSG_PEEK);
> > > > @@ -2195,7 +2220,10 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> > > > return 0;
> > > > }
> > > > - sendlen = len - already_sent;
> > > > + sendlen = len;
> > > > + if (!peek_offset_cap)
> > > > + sendlen -= already_sent;
> > > > +
> > > > if (sendlen <= 0) {
> > > > conn_flag(c, conn, STALLED);
> > > > return 0;
> > > > @@ -2365,9 +2393,17 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
> > > > flow_trace(conn,
> > > > "fast re-transmit, ACK: %u, previous sequence: %u",
> > > > max_ack_seq, conn->seq_to_tap);
> > > > +
> > > > + /* Ensure seq_from_tap isn't updated twice after call */
> > > > + tcp_l2_data_buf_flush(c);
> > > tcp_l2_data_buf_flush() was replaced by tcp_payload_flush() in a
> > > recently merged change from Laurent.
> > >
> > > IIUC, this is necessary because otherwise our update to seq_to_tap can
> > ...but Jon's comment refers to seq_from_tap (not seq_to_tap)? I'm
> > confused.
> Right. It should be seq_to_tap.
> > > be clobbered from tcp_payload_flush() when we process the
> > > queued-but-not-sent frames.
> > ...how? I don't quite understand the issue here: tcp_payload_flush()
> > updates seq_to_tap once we send the frames, not before, right?
> If we don't flush, we may have a frame there, e.g. seqno 17, followed
> by a lower-numbered frame, e.g. seqno 14.
> Both will point to a seq_to_tap we just gave the value 14.
> When the buffer queue is flushed we update seq_to_tap twice, so the
> next sent packet will be 16.
> This would have worked in the old code, because we calculate the offset
> value (already_sent) based on the seq_to_tap value, so we just skip
> ahead one packet and continue transmitting.
> If we are lucky, pkt #15 is already in the receiver's out-of-order
> queue, and we are ok.
I'm struggling to follow the description above. As noted in my other
mail, I think the problem here is that we can queue frames before we
trigger the retransmit, but then send them and advance seq_to_tap
after we trigger the retransmit.
> It will *not* work in my code, because the kernel offset is advanced
> linearly, so we will resend a packet called #16, but with the contents
> of the original pkt #15.
So when I say it is a pre-existing bug, I mean that even without your
changes it meant that in this situation we could skip re-transmitting
part of what we're supposed to retransmit. The consequences are less
severe though, because we at least recalculate where we are in the
peek buffer based on the messed-up seq_to_tap value. We don't
behave correctly, but the receiver will probably be able to sort it out
(to them it may not be distinguishable from things that could happen
due to packet re-ordering). With Jon's change we wind back
SO_PEEK_OFF in step with seq_to_tap at the re-transmit, but when we
incorrectly push seq_to_tap forward again, we *don't* update the
kernel. So the two are out of sync, hence horrible breakage.
> > > This seems like a correct fix, but not an
> > > optimal one: we're flushing out data we've already determined we're
> > > going to retransmit. Instead, I think we want a different helper that
> > > simply discards the queued frames
> > Don't we always send (within the same epoll_wait() cycle) what we
> > queued? What am I missing?
> No. Evidently not.
Hrm. If that's true then that's another different bug from the one
I'm describing.
> > > - I'm thinking maybe we actually
> > > want a helper that's called from both the fast and slow retransmit
> > > paths and handles that.
> > >
> > > Ah, wait, we only want to discard queued frames that belong to this
> > > connection, that's trickier.
> > >
> > > It seems to me this is a pre-existing bug, we just managed to get away
> > > with it previously. I think this is at least one cause of the weirdly
> > > jumping forwarding sequence numbers you observed. So I think we want
> > > to make a patch fixing this that goes before the SO_PEEK_OFF changes.
> This was exactly the reason for the 'v2:' comment in my commit log.
> But it may be even worse. See below.
> > >
> > > > +
> > > > conn->seq_ack_from_tap = max_ack_seq;
> > > > conn->seq_to_tap = max_ack_seq;
> > > > + set_peek_offset(conn, 0);
> > > > tcp_data_from_sock(c, conn);
> > > > +
> > > > + /* Empty queue before any POLL event tries to send it again */
> > > > + tcp_l2_data_buf_flush(c);
> > > I'm not clear on what the second flush call is for. The only frames
> > > queued should be those added by the tcp_data_from_sock() just above,
> > > and those should be flushed when we get to tcp_defer_handler() before
> > > we return to the epoll loop.
> Sadly, no. My debugging clearly shows that an epoll() event may come in between,
Hrm.. an epoll in between what and what, exactly? I can easily see
how we get a data_from_sock(), then a data_from_tap() on the same
connection during a single epoll cycle, leading to stale queued
frames. I suspect there may also be paths where we enter
data_from_sock() for the same connection twice in the same epoll
cycle.
I don't (so far) see any way we could have queued frames persisting
across an epoll cycle.
> and try to transmit a pkt #14 (from the example above), but now with
> the contents of the original pkt #15.
> All sorts of weirdness may happen after that.
>
> I am wondering if this is a generic problem: is it possible that two
> consecutive epoll() cycles may queue up two packets with the same
> number in the tap queue, whereafter the number is incremented twice
> when flushed, so that we create a gap in the sequence, causing
> spurious retransmissions?
> I haven't checked this theory yet, but that is part of my plan for today.
>
> Anyway, I don't understand the point of the delayed update of
> seq_to_tap at all. To me it looks plain wrong. But I am sure somebody
> can explain.
This is actually a relatively recent change: it's there so that if we
get a low-level error trying to push the frames out to the tap device
we don't advance seq_to_tap. In particular this can occur if we
overfill the socket send buffer on the tap socket with qemu.
It's not technically necessary to do this: we can treat such a failure
as packet loss that TCP will eventually deal with. This is an
optimization: given that in this case we already know the packets
didn't get through we don't want to wait for TCP to signal a
retransmit. Instead we avoid advancing seq_to_tap, meaning that we'll
carry on from the last point at which the guest at least might get the
data.
...and writing the above, I just realised this is another potential
source of desync between the kernel SO_PEEK_OFF pointer and
seq_to_tap, although I don't know if it's one you're hitting in
practice, Jon. Such a low-level transmit failure is essentially an
internally triggered re-transmit, so it's another case where we need
to wind back SO_PEEK_OFF.
To tackle this sanely, I think we have to invert how we're handling
the seq_to_tap update. Instead of deferring advancing it until the
frames are sent, we should advance it immediately upon queuing. Then
in the error path we need to explicitly treat this as a sort of
retransmit, where we wind back both seq_to_tap and SO_PEEK_OFF in sync
with each other.
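Concretely, something like this hypothetical helper, called from the
fast re-transmit, timer re-transmit and transmit-failure paths alike:

	/* Rewind the tap-side sequence and the kernel's SO_PEEK_OFF
	 * together, so the two can't drift apart.  The peek offset is
	 * relative to the first byte not yet acknowledged by the guest.
	 */
	static void tcp_rewind_seq(struct tcp_tap_conn *conn, uint32_t seq)
	{
		conn->seq_to_tap = seq;
		set_peek_offset(conn, seq - conn->seq_ack_from_tap);
	}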
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2024-05-06 7:15 UTC | newest]
Thread overview: 10+ messages
2024-05-01 20:28 [PATCH v2 0/2] SO_PEEK_OFF support Jon Maloy
2024-05-01 20:28 ` [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available Jon Maloy
2024-05-02 1:31 ` David Gibson
2024-05-03 13:42 ` Stefano Brivio
2024-05-03 14:43 ` Jon Maloy
2024-05-06 7:15 ` David Gibson
2024-05-06 6:51 ` David Gibson
2024-05-01 20:28 ` [PATCH v2 2/2] tcp: allow retransmit when peer receive window is zero Jon Maloy
2024-05-03 13:43 ` Stefano Brivio
2024-05-03 15:30 ` Jon Maloy
Code repositories for project(s) associated with this public inbox
https://passt.top/passt