Date: Mon, 6 May 2024 16:51:03 +1000
From: David Gibson <david@gibson.dropbear.id.au>
To: Stefano Brivio
Cc: Jon Maloy <jmaloy@redhat.com>, passt-dev@passt.top, lvivier@redhat.com, dgibson@redhat.com
Subject: Re: [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available
References: <20240501202839.3491885-1-jmaloy@redhat.com>
 <20240501202839.3491885-2-jmaloy@redhat.com>
 <20240503154255.3d062430@elisabeth>
In-Reply-To: <20240503154255.3d062430@elisabeth>

On Fri, May 03, 2024 at 03:42:55PM +0200, Stefano Brivio wrote:
> On Thu, 2 May 2024 11:31:52 +1000
> David Gibson wrote:
> > On Wed, May 01, 2024 at 04:28:38PM -0400, Jon Maloy wrote:

[snip]

> > >  	/* Receive into buffers, don't dequeue until acknowledged by guest. */
> > >  	do
> > >  		len = recvmsg(s, &mh_sock, MSG_PEEK);
> > > @@ -2195,7 +2220,10 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> > >  		return 0;
> > >  	}
> > > 
> > > -	sendlen = len - already_sent;
> > > +	sendlen = len;
> > > +	if (!peek_offset_cap)
> > > +		sendlen -= already_sent;
> > > +
> > >  	if (sendlen <= 0) {
> > >  		conn_flag(c, conn, STALLED);
> > >  		return 0;
> > > @@ -2365,9 +2393,17 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
> > >  			flow_trace(conn,
> > >  				   "fast re-transmit, ACK: %u, previous sequence: %u",
> > >  				   max_ack_seq, conn->seq_to_tap);
> > > +
> > > +		/* Ensure seq_from_tap isn't updated twice after call */
> > > +		tcp_l2_data_buf_flush(c);
> > 
> > tcp_l2_data_buf_flush() was replaced by tcp_payload_flush() in a
> > recently merged change from Laurent.
> > 
> > IIUC, this is necessary because otherwise our update to seq_to_tap can
> 
> ...but Jon's comment refers to seq_from_tap (not seq_to_tap)? I'm
> confused.

I missed this, but as Jon clarified it's supposed to be seq_to_tap
here.

> > be clobbered from tcp_payload_flush() when we process the
> > queued-but-not-sent frames.
> 
> ...how? I don't quite understand the issue here: tcp_payload_flush()
> updates seq_to_tap once we send the frames, not before, right?

Yes, but that's not relevant.  The problem arises when (1) we queue
some frames without knowledge of the fast re-transmit, *then* (2) we
realize we need to re-transmit.  This can happen if we have activity
on both the sock side and the tap side of the same connection in the
same epoll cycle: if we process the socket side first, we will do (1),
then, while processing the tap side, we could see duplicate acks
triggering (2).

At (2) we rewind seq_to_tap, but when we flush the frames queued at
(1) we advance it again, incorrectly: we should only advance it when
we actually *re*-transmit that data.  So, either we need to flush out
everything *before* we wind back seq_to_tap (Jon's approach), or we
need to cancel those queued frames (more optimal, but more complex to
implement).
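To make the ordering concrete, the sketch below is roughly what Jon's
approach amounts to in tcp_data_from_tap()'s fast re-transmit branch.
This is my illustration, not the patch as posted: I'm keeping the
pre-rename tcp_l2_data_buf_flush() name and the patch's
set_peek_offset() helper, and leaving out the second flush, which I
don't think we need:

	flow_trace(conn, "fast re-transmit, ACK: %u, previous sequence: %u",
		   max_ack_seq, conn->seq_to_tap);

	/* (1) Send out frames queued earlier in this epoll cycle first:
	 * they advance seq_to_tap as they go out, so they must be
	 * flushed while it still refers to them...
	 */
	tcp_l2_data_buf_flush(c);

	/* (2) ...and only then rewind to the last byte the guest
	 * acknowledged, and start re-reading socket data from there.
	 */
	conn->seq_ack_from_tap = max_ack_seq;
	conn->seq_to_tap = max_ack_seq;
	set_peek_offset(conn, 0);
	tcp_data_from_sock(c, conn);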
> > This seems like a correct fix, but not an optimal one: we're
> > flushing out data we've already determined we're going to
> > retransmit.  Instead, I think we want a different helper that
> > simply discards the queued frames
> 
> Don't we always send (within the same epoll_wait() cycle) what we
> queued? What am I missing?

We do (or at least should), yes.  But in a single epoll cycle we can
queue frames (triggered by socket activity) before we realise we have
to retransmit (triggered by tap or timer activity).

> > - I'm thinking maybe we actually want a helper that's called from
> >   both the fast and slow retransmit paths and handles that.
> > 
> > Ah, wait, we only want to discard queued frames that belong to this
> > connection, that's trickier.
> > 
> > It seems to me this is a pre-existing bug, we just managed to get
> > away with it previously.  I think this is at least one cause of the
> > weirdly jumping forwarding sequence numbers you observed.  So I
> > think we want to make a patch fixing this that goes before the
> > SO_PEEK_OFF changes.
> > 
> > > +
> > >  		conn->seq_ack_from_tap = max_ack_seq;
> > >  		conn->seq_to_tap = max_ack_seq;
> > > +		set_peek_offset(conn, 0);
> > >  		tcp_data_from_sock(c, conn);
> > > +
> > > +		/* Empty queue before any POLL event tries to send it again */
> > > +		tcp_l2_data_buf_flush(c);
> > 
> > I'm not clear on what the second flush call is for.  The only frames
> > queued should be those added by the tcp_data_from_sock() just above,
> > and those should be flushed when we get to tcp_defer_handler()
> > before we return to the epoll loop.
> > 
> > >  	}
> > > 
> > >  	if (!iov_i)
> > > @@ -2459,6 +2495,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
> > >  	conn->seq_ack_to_tap = conn->seq_from_tap;
> > > 
> > >  	conn_event(c, conn, ESTABLISHED);
> > > +	set_peek_offset(conn, 0);
> > > 
> > >  	/* The client might have sent data already, which we didn't
> > >  	 * dequeue waiting for SYN,ACK from tap -- check now.
> > > @@ -2539,6 +2576,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
> > >  		goto reset;
> > > 
> > >  	conn_event(c, conn, ESTABLISHED);
> > > +	set_peek_offset(conn, 0);
> > 
> > The set_peek_offset() could go into conn_event() to avoid the
> > duplication.  Not sure if it's worth it or not.
> 
> I wouldn't do that in conn_event(), the existing side effects there
> are kind of expected, but set_peek_offset() isn't so closely related
> to TCP events, I'd say.

Fair enough.
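As an aside, since the helper itself isn't visible in these hunks: my
understanding of what set_peek_offset() does is roughly the below.
The name and call sites are from the patch, but the body is my sketch,
and I'm assuming the usual conn->sock field and the flow_err() logging
macro.  Once SO_PEEK_OFF is enabled on a socket, every MSG_PEEK
receive starts that many bytes past the read head, and the kernel
advances the offset by the amount peeked, which is exactly why
tcp_data_from_sock() can skip the already_sent subtraction when the
capability is available:

	/* set_peek_offset() - Set SO_PEEK_OFF for a connection's socket
	 * @conn:	Connection pointer
	 * @offset:	Offset, in bytes, from head of not-yet-dequeued data
	 */
	static void set_peek_offset(struct tcp_tap_conn *conn, int offset)
	{
		if (!peek_offset_cap)
			return;	/* fall back to skipping already_sent bytes */

		if (setsockopt(conn->sock, SOL_SOCKET, SO_PEEK_OFF,
			       &offset, sizeof(offset)))
			flow_err(conn, "Failed to set SO_PEEK_OFF");
	}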
> > >  	if (th->fin) {
> > >  		conn->seq_from_tap++;
> > > 
> > > @@ -2705,7 +2743,7 @@ void tcp_listen_handler(struct ctx *c, union epoll_ref ref,
> > >  	struct sockaddr_storage sa;
> > >  	socklen_t sl = sizeof(sa);
> > >  	union flow *flow;
> > > -	int s;
> > > +	int s = 0;
> > > 
> > >  	if (c->no_tcp || !(flow = flow_alloc()))
> > >  		return;
> > > @@ -2767,7 +2805,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
> > >  			flow_dbg(conn, "ACK timeout, retry");
> > >  			conn->retrans++;
> > >  			conn->seq_to_tap = conn->seq_ack_from_tap;
> > > +			set_peek_offset(conn, 0);
> > > +			tcp_l2_data_buf_flush(c);
> > 
> > Uh.. doesn't this flush have to go *before* the seq_to_tap update,
> > for the reasons discussed above?
> > 
> > >  			tcp_data_from_sock(c, conn);
> > > +			tcp_l2_data_buf_flush(c);
> 
> I don't understand the purpose of these new tcp_l2_data_buf_flush()
> calls.  If they fix a pre-existing issue (but I'm not sure which
> one), they should be in a different patch.

As noted above, I understand the purpose of the first one, but not
the second.

> > >  			tcp_timer_ctl(c, conn);
> > >  		}
> > >  	} else {
> > > @@ -3041,7 +3082,8 @@ static void tcp_sock_refill_init(const struct ctx *c)
> > >   */
> > >  int tcp_init(struct ctx *c)
> > >  {
> > > -	unsigned b;
> > > +	unsigned int b, optv = 0;
> > > +	int s;
> > > 
> > >  	for (b = 0; b < TCP_HASH_TABLE_SIZE; b++)
> > >  		tc_hash[b] = FLOW_SIDX_NONE;
> > > @@ -3065,6 +3107,16 @@ int tcp_init(struct ctx *c)
> > >  		NS_CALL(tcp_ns_socks_init, c);
> > >  	}
> > > 
> > > +	/* Probe for SO_PEEK_OFF support */
> > > +	s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
> > > +	if (s < 0) {
> > > +		warn("Temporary TCP socket creation failed");
> > > +	} else {
> > > +		if (!setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &optv, sizeof(int)))
> > > +			peek_offset_cap = true;
> > > +		close(s);
> > > +	}
> > > +	debug("SO_PEEK_OFF%ssupported", peek_offset_cap ? " " : " not ");
> > >  	return 0;
> > >  }

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson