public inbox for passt-dev@passt.top
From: David Gibson <david@gibson.dropbear.id.au>
To: Jon Maloy <jmaloy@redhat.com>
Cc: Stefano Brivio <sbrivio@redhat.com>,
	passt-dev@passt.top, lvivier@redhat.com, dgibson@redhat.com
Subject: Re: [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available
Date: Mon, 6 May 2024 17:15:13 +1000
Message-ID: <ZjiDgSrFCUIC0TlO@zatzit>
In-Reply-To: <767934ce-269a-cc9e-0cf3-1cb062103802@redhat.com>

On Fri, May 03, 2024 at 10:43:52AM -0400, Jon Maloy wrote:
> 
> 
> On 2024-05-03 09:42, Stefano Brivio wrote:
> > On Thu, 2 May 2024 11:31:52 +1000
> > David Gibson <david@gibson.dropbear.id.au> wrote:
[snip]
> > > >   	/* Receive into buffers, don't dequeue until acknowledged by guest. */
> > > >   	do
> > > >   		len = recvmsg(s, &mh_sock, MSG_PEEK);
> > > > @@ -2195,7 +2220,10 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> > > >   		return 0;
> > > >   	}
> > > > -	sendlen = len - already_sent;
> > > > +	sendlen = len;
> > > > +	if (!peek_offset_cap)
> > > > +		sendlen -= already_sent;
> > > > +
> > > >   	if (sendlen <= 0) {
> > > >   		conn_flag(c, conn, STALLED);
> > > >   		return 0;
> > > > @@ -2365,9 +2393,17 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
> > > >   		flow_trace(conn,
> > > >   			   "fast re-transmit, ACK: %u, previous sequence: %u",
> > > >   			   max_ack_seq, conn->seq_to_tap);
> > > > +
> > > > +		/* Ensure seq_from_tap isn't updated twice after call */
> > > > +		tcp_l2_data_buf_flush(c);
> > > tcp_l2_data_buf_flush() was replaced by tcp_payload_flush() in a
> > > recently merged change from Laurent.
> > > 
> > > IIUC, this is necessary because otherwise our update to seq_to_tap can
> > ...but Jon's comment refers to seq_from_tap (not seq_to_tap)? I'm
> > confused.
> Right. It should be seq_to_tap.
> > > be clobbered from tcp_payload_flush() when we process the
> > > queued-but-not-sent frames.
> > ...how? I don't quite understand the issue here: tcp_payload_flush()
> > updates seq_to_tap once we send the frames, not before, right?
> If we don't flush, we may have a frame there, e.g. seqno 17, followed
> by a lower-numbered frame, e.g. seqno 14.
> Both will point to a seq_to_tap we just gave the value 14.
> When the buffer queue is flushed we update seq_to_tap twice, so the
> next sent packet will be 16.
> This would have worked in the old code, because we calculate the
> offset value (already_sent) based on the seq_to_tap value, so we just
> skip ahead one packet and continue transmitting.
> If we are lucky, pkt #15 is already in the receiver's out-of-order
> queue, and we are ok.

I'm struggling to follow the description above.  As noted in my other
mail, I think the problem here is that we can queue frames before we
trigger the retransmit, but then send them and advance seq_to_tap
after we trigger the retransmit.
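
Spelling that out (an illustrative ordering, all within a single epoll
cycle):

  tcp_data_from_sock()  queues frames; the seq_to_tap advance is
                        deferred until the flush
  tcp_data_from_tap()   fast re-transmit: rewinds seq_to_tap (and,
                        with your patch, SO_PEEK_OFF), then queues new
                        frames from the rewound point
  tcp_payload_flush()   sends both the stale and the new frames,
                        advancing seq_to_tap for each, so it ends up
                        ahead of the kernel's peek offset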

> It will *not* work in my code, because the kernel offset is advanced
> linearly, so we will resend a packet called #16, but with the contents
> of the original pkt #15.

So when I say it is a pre-existing bug, I mean that even without your
changes, in this situation we could skip re-transmitting part of what
we're supposed to retransmit.  The consequences are less severe
though, because we at least recalculate where we are in the peek
buffer based on the messed-up seq_to_tap value.  We don't
behave correctly but the receiver will probably be able to sort it out
(to them it may not be distinguishable from things that could happen
due to packet re-ordering).  With Jon's change we wind back
SO_PEEK_OFF in step with seq_to_tap at the re-transmit, but when we
incorrectly push seq_to_tap forward again, we *don't* update the
kernel.  So the two are out of sync, hence horrible breakage.
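
For illustration, a minimal sketch of keeping the two in sync (not a
patch; tcp_rewind_seq() is a name I just made up, while
set_peek_offset() and peek_offset_cap are from your series, and I'm
relying on the peek offset being kept relative to seq_ack_from_tap, as
in your fast re-transmit hunk):

	static void tcp_rewind_seq(struct tcp_tap_conn *conn, uint32_t seq)
	{
		/* Move seq_to_tap and the kernel's SO_PEEK_OFF pointer
		 * in the same step, so the two can't drift apart */
		conn->seq_to_tap = seq;
		if (peek_offset_cap)
			set_peek_offset(conn, seq - conn->seq_ack_from_tap);
	}

Every path that rewinds (fast re-transmit, RTO, and the failed-flush
case below) would then go through this one helper.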

> > > This seems like a correct fix, but not an
> > > optimal one: we're flushing out data we've already determined we're
> > > going to retransmit.  Instead, I think we want a different helper that
> > > simply discards the queued frames
> > Don't we always send (within the same epoll_wait() cycle) what we
> > queued? What am I missing?
> No. Evidently not.

Hrm.  If that's true then that's another different bug from the one
I'm describing.

> > > - I'm thinking maybe we actually
> > > want a helper that's called from both the fast and slow retransmit
> > > paths and handles that.
> > > 
> > > Ah, wait, we only want to discard queued frames that belong to this
> > > connection, that's trickier.
> > > 
> > > It seems to me this is a pre-existing bug, we just managed to get away
> > > with it previously.  I think this is at least one cause of the weirdly
> > > jumping forwarding sequence numbers you observed.  So I think we want
> > > to make a patch fixing this that goes before the SO_PEEK_OFF changes.
> This was exactly the reason for my v2: comment in the commit log.
> But it may even be worse. See below.
> > > 
> > > > +
> > > >   		conn->seq_ack_from_tap = max_ack_seq;
> > > >   		conn->seq_to_tap = max_ack_seq;
> > > > +		set_peek_offset(conn, 0);
> > > >   		tcp_data_from_sock(c, conn);
> > > > +
> > > > +		/* Empty queue before any POLL event tries to send it again */
> > > > +		tcp_l2_data_buf_flush(c);
> > > I'm not clear on what the second flush call is for.  The only frames
> > > queued should be those added by the tcp_data_from_sock() just above,
> > > and those should be flushed when we get to tcp_defer_handler() before
> > > we return to the epoll loop.
> Sadly no. My debugging clearly shows that an epoll() may come in between,

Hrm.. an epoll in between what and what, exactly?  I can easily see
how we get a data_from_sock(), then a data_from_tap() on the same
connection during a single epoll cycle, leading to stale queued
frames.  I suspect there may also be paths where we enter
data_from_sock() for the same connection twice in the same epoll
cycle.

I don't (so far) see any way we could have queued frames persisting
across an epoll cycle.


> and try to transmit a pkt #14 (from the example above), but now with
> the contents of the original pkt #15.
> All sorts of oddities may happen after that.
> 
> I am wondering if this is a generic problem: is it possible that two
> consecutive epolls() may queue up two packets with the same number in
> the tap queue, whereafter the number will be incremented twice when
> flushed, and we create a gap in the sequence, causing spurious
> retransmissions?
> I haven't checked this theory yet, but that is part of my plan for
> today.
> 
> Anyway, I don't understand the point of the delayed update of
> seq_to_tap at all. To me it looks plain wrong. But I am sure somebody
> can explain.

This is actually a relatively recent change: it's there so that if we
get a low-level error trying to push the frames out to the tap device
we don't advance seq_to_tap.  In particular this can occur if we
overfill the socket send buffer on the tap socket with qemu.

It's not technically necessary to do this: we can treat such a failure
as packet loss that TCP will eventually deal with.  This is an
optimization: given that in this case we already know the packets
didn't get through we don't want to wait for TCP to signal a
retransmit.  Instead we avoid advancing seq_to_tap, meaning that we'll
carry on from the last point at which the guest at least might get the
data.
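
Roughly, the current shape is (a simplified sketch with made-up helper
names, not the real code):

	/* Queue frames without touching seq_to_tap... */
	nqueued = tcp_queue_frames(c, conn);
	nsent = tap_send_frames(c, frames, nqueued);
	/* ...then advance it only for frames the tap actually took */
	for (i = 0; i < nsent; i++)
		conn->seq_to_tap += frame_seq_len(frames[i]);
	/* The unsent tail [nsent..nqueued) is dropped: the next pass
	 * re-peeks the same data from the unchanged seq_to_tap */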

...and writing the above, I just realised this is another potential
source of desync between the kernel SO_PEEK_OFF pointer and
seq_to_tap, although I don't know if it's one you're hitting in
practice, Jon.  Such a low-level transmit failure is essentially an
internally triggered re-transmit, so it's another case where we need
to wind back SO_PEEK_OFF.

To tackle this sanely, I think we have to invert how we're handling
the seq_to_tap update.  Instead of deferring advancing it until the
frames are sent, we should advance it immediately upon queuing.  Then
in the error path we need to explicitly treat this as a sort of
retransmit, where we wind back both seq_to_tap and SO_PEEK_OFF in sync
with each other.
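
As a sketch (same made-up names as above, with tcp_rewind_seq() being
the hypothetical helper that moves seq_to_tap and SO_PEEK_OFF
together):

	/* Advance seq_to_tap optimistically as frames are queued */
	nqueued = tcp_queue_frames(c, conn);
	nsent = tap_send_frames(c, frames, nqueued);
	if (nsent < nqueued)
		/* Internal "re-transmit": wind back seq_to_tap *and*
		 * the kernel's peek offset, in sync */
		tcp_rewind_seq(conn, seq_of_frame(frames[nsent]));

where seq_of_frame() is, again, hypothetical: the sequence number at
the start of the first frame that didn't make it out.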

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

Thread overview: 10+ messages
2024-05-01 20:28 [PATCH v2 0/2] SO_PEEK_OFF support Jon Maloy
2024-05-01 20:28 ` [PATCH v2 1/2] tcp: leverage support of SO_PEEK_OFF socket option when available Jon Maloy
2024-05-02  1:31   ` David Gibson
2024-05-03 13:42     ` Stefano Brivio
2024-05-03 14:43       ` Jon Maloy
2024-05-06  7:15         ` David Gibson [this message]
2024-05-06  6:51       ` David Gibson
2024-05-01 20:28 ` [PATCH v2 2/2] tcp: allow retransmit when peer receive window is zero Jon Maloy
2024-05-03 13:43   ` Stefano Brivio
2024-05-03 15:30     ` Jon Maloy
