public inbox for passt-dev@passt.top
From: Stefano Brivio <sbrivio@redhat.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: Matej Hrica <mhrica@redhat.com>, passt-dev@passt.top
Subject: Re: [PATCH RFT 4/5] tcp, tap: Don't increase tap-side sequence counter for dropped frames
Date: Thu, 5 Oct 2023 08:19:00 +0200	[thread overview]
Message-ID: <20231005081900.01f7431a@elisabeth> (raw)
In-Reply-To: <ZRuJE7T2zJVY/cJF@zatzit>

On Tue, 3 Oct 2023 14:22:59 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:

> On Fri, Sep 29, 2023 at 05:19:50PM +0200, Stefano Brivio wrote:
> > On Thu, 28 Sep 2023 11:58:45 +1000
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> >   
> > > On Wed, Sep 27, 2023 at 07:06:03PM +0200, Stefano Brivio wrote:  
> > > > On Mon, 25 Sep 2023 14:47:52 +1000
> > > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > > >     
> > > > > On Sat, Sep 23, 2023 at 12:06:09AM +0200, Stefano Brivio wrote:    
> > > > > > ...so that we'll retry sending them, instead of more-or-less silently
> > > > > > dropping them. This happens quite frequently if our sending buffer on
> > > > > > the UNIX domain socket is heavily constrained (for instance, by the
> > > > > > 208 KiB default memory limit).
> > > > > > 
> > > > > > It might be argued that dropping frames is part of the expected TCP
> > > > > > flow: we don't dequeue those from the socket anyway, so we'll
> > > > > > eventually retransmit them.
> > > > > > 
> > > > > > But we don't need the receiver to tell us (by way of duplicate
> > > > > > or missing ACKs) that we couldn't send them: we already know,
> > > > > > because sendmsg() reports that. This seems to considerably increase
> > > > > > throughput stability and throughput itself for TCP connections with
> > > > > > default wmem_max values.
> > > > > > 
> > > > > > Unfortunately, the 16 bits left as padding in the frame descriptors      
> > > > > 
> > > > > I assume you're referring to the 'pad' fields in tcp[46]_l2_buf_t,
> > > > > yes?    
> > > > 
> > > > Right, that.
> > > >     
> > > > > For AVX2 we have substantially more space here.  Couldn't we put
> > > > > a conn (or seq) pointer in here at the cost of a few bytes of MSS
> > > > > for non-AVX2 and zero cost for AVX2 (which is probably the
> > > > > majority case)?    
> > > > 
> > > > Yes, true. On the other hand, having this parallel array only affects
> > > > readability I guess, whereas inserting pointers and lengths in
> > > > tcp[46]_l2_buf_t actually decreases the usable MSS (not just on
> > > > non-AVX2 x86, but also on other architectures). So I'd rather stick to
> > > > this.    
> > > 
> > > Yeah, I guess so.
> > > 
> > > Actually.. I did just think of one other option.  It avoids both any
> > > extra padding and a parallel array, but at the cost of additional work
> > > when frames are dropped.  We could use that 16-bits of padding to
> > > store the TCP payload length.  Then when we don't manage to send all
> > > our frames, we do another loop through and add up how many stream
> > > bytes we actually sent to update the seq pointer.  
> > 
> > Hmm, yes. It's slightly more memory efficient, but the complexity seems
> > a bit overkill to me.  
> 
> More importantly, I forgot the fact that by the time we're sending the
> frames, we don't know what connection they're associated with any
> more.

Oh, I thought you wanted to rebuild the information about the
connection by looking into the hash table or something like that.
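For the record, the accounting that approach would need could look roughly like this. All names here are made up for illustration and don't match the actual structures in tcp.c; the idea is just that each queued frame remembers its TCP payload length (say, in those 16 bits of padding), and after a short sendmsg() we walk the frames to see how many stream bytes actually went out:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: one entry per frame queued for sendmsg() */
struct frame {
	uint16_t plen;	/* TCP payload length, stored in the padding */
	size_t total;	/* total frame length handed to sendmsg() */
};

/* Return the number of TCP payload bytes covered by 'sent' bytes of
 * sendmsg() output.  A partially written frame counts as not sent at
 * all: it stays queued and is retried as a whole.
 */
static size_t stream_bytes_sent(const struct frame *f, size_t n, size_t sent)
{
	size_t bytes = 0;
	size_t i;

	for (i = 0; i < n && sent >= f[i].total; i++) {
		sent -= f[i].total;	/* this frame went out in full */
		bytes += f[i].plen;	/* count its stream bytes only */
	}
	return bytes;
}
```

The caller would then advance the sequence pointer by the returned amount only, instead of by everything it queued.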

> [snip]
> > > > > > @@ -2282,14 +2310,15 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> > > > > >  
> > > > > >  	/* Finally, queue to tap */
> > > > > >  	plen = mss;
> > > > > > +	seq = conn->seq_to_tap;      
> > > > > 
> > > > > This will only be correct if tcp_l2_data_buf_flush() is *always*
> > > > > called between tcp_data_from_sock() calls for the same socket.  That
> > > > > should be true for the normal course of things.  However, couldn't it
> > > > > happen that we get a normal socket EPOLLIN event for a particular
> > > > > connection - calling tcp_data_from_sock() - but in the same epoll()
> > > > > round we also get a tap ack for the same connection which causes
> > > > > another call to tcp_data_from_sock() (with the change from patch
> > > > > 2/5).  IIRC those would both happen before the deferred handling and
> > > > > therefore the data_buf_flush().    
> > > > 
> > > > Ah, yes, I actually wrote this before 2/5 and concluded it was okay :/
> > > > but with that change, it's not. Unless we drop that change from 2/5.    
> > > 
> > > Even if we drop the change, it's a worryingly subtle constraint.  
> > 
> > Another option to avoid this...
> >   
> > > > > Not sure how to deal with that short of separate 'seq_queued' and
> > > > > 'seq_sent' counters in the connection structure, which is a bit
> > > > > unfortunate.    
> > > > 
> > > > I wonder how bad it is if we call tcp_l2_data_buf_flush()
> > > > unconditionally before calling tcp_data_from_sock() from
> > > > tcp_tap_handler(). But again, maybe this is not needed at all, we
> > > > should check that epoll detail from 2/5 first...  
> > 
> > other than this one, would be to use that external table to update
> > sequence numbers *in the frames* as we send stuff out.  
> 
> Not really sure what you're proposing there.

That tcp_l2_buf_fill_headers() calculates the sequence from
conn->seq_to_tap plus a cumulative count from that table, instead of
passing it from the caller.
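Something along these lines, with hypothetical names (not the actual prototypes in tcp.c): the header-filling path derives each frame's sequence from conn->seq_to_tap plus the payload bytes already queued in the current batch, tracked in that external table, so nothing needs to be updated in the connection until the batch is actually flushed:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical connection and per-batch bookkeeping */
struct conn_sketch {
	uint32_t seq_to_tap;	/* next sequence to send towards the tap */
};

struct batch_sketch {
	uint32_t queued_bytes;	/* payload bytes queued, not yet flushed */
};

/* Return the sequence number for the next frame of this connection,
 * and account for its payload in the batch table.
 */
static uint32_t frame_seq(const struct conn_sketch *conn,
			  struct batch_sketch *batch, uint16_t plen)
{
	uint32_t seq = conn->seq_to_tap + batch->queued_bytes;

	batch->queued_bytes += plen;
	return seq;
}
```

On flush, seq_to_tap would be advanced by the bytes that sendmsg() actually accepted, and the leftover entries requeued.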

-- 
Stefano


Thread overview: 32+ messages
2023-09-22 22:06 [PATCH RFT 0/5] Fixes and a workaround for TCP stalls with small buffers Stefano Brivio
2023-09-22 22:06 ` [PATCH RFT 1/5] tcp: Fix comment to tcp_sock_consume() Stefano Brivio
2023-09-23  2:48   ` David Gibson
2023-09-22 22:06 ` [PATCH RFT 2/5] tcp: Reset STALLED flag on ACK only, check for pending socket data Stefano Brivio
2023-09-25  3:07   ` David Gibson
2023-09-27 17:05     ` Stefano Brivio
2023-09-28  1:48       ` David Gibson
2023-09-29 15:20         ` Stefano Brivio
2023-10-03  3:20           ` David Gibson
2023-10-05  6:18             ` Stefano Brivio
2023-10-05  7:36               ` David Gibson
2023-09-22 22:06 ` [PATCH RFT 3/5] tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag Stefano Brivio
2023-09-22 22:31   ` Stefano Brivio
2023-09-23  7:55   ` David Gibson
2023-09-25  4:09   ` David Gibson
2023-09-25  4:10     ` David Gibson
2023-09-25  4:21     ` David Gibson
2023-09-27 17:05       ` Stefano Brivio
2023-09-28  1:51         ` David Gibson
2023-09-22 22:06 ` [PATCH RFT 4/5] tcp, tap: Don't increase tap-side sequence counter for dropped frames Stefano Brivio
2023-09-25  4:47   ` David Gibson
2023-09-27 17:06     ` Stefano Brivio
2023-09-28  1:58       ` David Gibson
2023-09-29 15:19         ` Stefano Brivio
2023-10-03  3:22           ` David Gibson
2023-10-05  6:19             ` Stefano Brivio [this message]
2023-10-05  7:38               ` David Gibson
2023-09-22 22:06 ` [PATCH RFT 5/5] passt.1: Add note about tuning rmem_max and wmem_max for throughput Stefano Brivio
2023-09-25  4:57   ` David Gibson
2023-09-27 17:06     ` Stefano Brivio
2023-09-28  2:02       ` David Gibson
2023-09-25  5:52 ` [PATCH RFT 0/5] Fixes and a workaround for TCP stalls with small buffers David Gibson

Code repositories for project(s) associated with this public inbox

	https://passt.top/passt
