Date: Thu, 5 Oct 2023 08:19:00 +0200
From: Stefano Brivio <sbrivio@redhat.com>
To: David Gibson
Cc: Matej Hrica, passt-dev@passt.top
Subject: Re: [PATCH RFT 4/5] tcp, tap: Don't increase tap-side sequence counter for dropped frames
Message-ID: <20231005081900.01f7431a@elisabeth>
References: <20230922220610.58767-1-sbrivio@redhat.com>
 <20230922220610.58767-5-sbrivio@redhat.com>
 <20230927190603.10a1ed74@elisabeth>
 <20230929171950.5086d408@elisabeth>
Organization: Red Hat

On Tue, 3 Oct 2023 14:22:59 +1100
David Gibson wrote:

> On Fri, Sep 29, 2023 at 05:19:50PM +0200, Stefano Brivio wrote:
> > On Thu, 28 Sep 2023 11:58:45 +1000
> > David Gibson wrote:
> > 
> > > On Wed, Sep 27, 2023 at 07:06:03PM +0200, Stefano Brivio wrote:
> > > > On Mon, 25 Sep 2023 14:47:52 +1000
> > > > David Gibson wrote:
> > > > 
> > > > > On Sat, Sep 23, 2023 at 12:06:09AM +0200, Stefano Brivio wrote:
> > > > > > ...so that we'll retry sending them, instead of more or less
> > > > > > silently dropping them. This happens quite frequently if our
> > > > > > sending buffer on the UNIX domain socket is heavily
> > > > > > constrained (for instance, by the 208 KiB default memory
> > > > > > limit).
> > > > > > 
> > > > > > It might be argued that dropping frames is part of the
> > > > > > expected TCP flow: we don't dequeue those from the socket
> > > > > > anyway, so we'll eventually retransmit them.
> > > > > > 
> > > > > > But we don't need the receiver to tell us (by way of
> > > > > > duplicate or missing ACKs) that we couldn't send them: we
> > > > > > already know, as sendmsg() reports that. This seems to
> > > > > > considerably increase throughput stability, and throughput
> > > > > > itself, for TCP connections with default wmem_max values.
> > > > > > 
> > > > > > Unfortunately, the 16 bits left as padding in the frame
> > > > > > descriptors
> > > > > 
> > > > > I assume you're referring to the 'pad' fields in
> > > > > tcp[46]_l2_buf_t, yes?
> > > > 
> > > > Right, that.
> > > > 
> > > > > For AVX2 we have substantially more space here. Couldn't we
> > > > > put a conn (or seq) pointer in here at the cost of a few
> > > > > bytes of MSS for non-AVX2, and zero cost for AVX2 (which is
> > > > > probably the majority case)?
> > > > 
> > > > Yes, true. On the other hand, having this parallel array only
> > > > affects readability, I guess, whereas inserting pointers and
> > > > lengths in tcp[46]_l2_buf_t actually decreases the usable MSS
> > > > (not just on non-AVX2 x86, but also on other architectures). So
> > > > I'd rather stick to this.
> > > 
> > > Yeah, I guess so.
> > > 
> > > Actually... I did just think of one other option. It avoids both
> > > any extra padding and a parallel array, but at the cost of
> > > additional work when frames are dropped. We could use those 16
> > > bits of padding to store the TCP payload length. Then, when we
> > > don't manage to send all our frames, we do another loop through
> > > and add up how many stream bytes we actually sent, to update the
> > > seq pointer.
> > 
> > Hmm, yes. It's slightly more memory efficient, but the complexity
> > seems a bit overkill to me.
> 
> More importantly, I forgot the fact that by the time we're sending
> the frames, we don't know what connection they're associated with
> any more.

Oh, I thought you wanted to rebuild the information about the
connection by looking into the hash table, or something like that.
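Something on these lines, I mean -- just a sketch, names and the
lookup arguments are guesses, I didn't check what the hash lookup
actually takes:

	/* In the flush path, after a partial sendmsg() on the tap
	 * socket: for each frame we actually sent, find the
	 * connection again from the headers we already filled in,
	 * and advance the sequence by the payload length we would
	 * have stashed in 'pad'.
	 */
	for (i = 0; i < frames_sent; i++) {
		struct tcp4_l2_buf_t *b = &tcp4_l2_buf[i];
		struct tcp_tap_conn *conn;

		conn = tcp_hash_lookup(c, AF_INET, &b->iph.daddr,
				       ntohs(b->th.dest),
				       ntohs(b->th.source));
		if (conn)
			conn->seq_to_tap += b->pad; /* stashed length */
	}

...which is more or less the amount of extra work I meant.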
> [snip]
> 
> > > > > > @@ -2282,14 +2310,15 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> > > > > >  
> > > > > >  	/* Finally, queue to tap */
> > > > > >  	plen = mss;
> > > > > > +	seq = conn->seq_to_tap;
> > > > > 
> > > > > This will only be correct if tcp_l2_data_buf_flush() is
> > > > > *always* called between tcp_data_from_sock() calls for the
> > > > > same socket. That should be true for the normal course of
> > > > > things. However, couldn't it happen that we get a normal
> > > > > socket EPOLLIN event for a particular connection - calling
> > > > > tcp_data_from_sock() - but in the same epoll() round we also
> > > > > get a tap ack for the same connection, which causes another
> > > > > call to tcp_data_from_sock() (with the change from patch
> > > > > 2/5)? IIRC those would both happen before the deferred
> > > > > handling, and therefore before the data_buf_flush().
> > > > 
> > > > Ah, yes, I actually wrote this before 2/5 and concluded it was
> > > > okay :/ but with that change, it's not. Unless we drop that
> > > > change from 2/5.
> > > 
> > > Even if we drop the change, it's a worryingly subtle constraint.
> > 
> > Another option to avoid this...
> > 
> > > > > Not sure how to deal with that short of separate 'seq_queued'
> > > > > and 'seq_sent' counters in the connection structure, which is
> > > > > a bit unfortunate.
> > > > 
> > > > I wonder how bad it is if we call tcp_l2_data_buf_flush()
> > > > unconditionally before calling tcp_data_from_sock() from
> > > > tcp_tap_handler(). But again, maybe this is not needed at all,
> > > > we should check that epoll detail from 2/5 first...
> > 
> > ...other than this one, would be to use that external table to
> > update sequence numbers *in the frames* as we send stuff out.
> 
> Not really sure what you're proposing there.

That tcp_l2_buf_fill_headers() calculates the sequence from
conn->seq_to_tap plus a cumulative count from that table, instead of
taking it from the caller.
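That is, something like this -- again just a sketch: 'queued_bytes'
is made up, CONN_IDX() stands for whatever we use as connection
index, and I didn't think about where exactly to reset things:

	/* Hypothetical cumulative count of bytes queued to the tap,
	 * per connection, since the last flush: seq_to_tap would only
	 * move once frames are actually sent.
	 */
	static uint32_t queued_bytes[TCP_MAX_CONNS];

	/* ...in tcp_l2_buf_fill_headers(), derive the sequence
	 * instead of getting it as an argument:
	 */
	seq = conn->seq_to_tap + queued_bytes[CONN_IDX(conn)];
	th->seq = htonl(seq);
	queued_bytes[CONN_IDX(conn)] += plen;

	/* ...and in tcp_l2_data_buf_flush(), advance seq_to_tap only
	 * by the bytes sendmsg() actually wrote, then zero the count:
	 * frames we couldn't send are built again later, with the
	 * same sequence numbers.
	 */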
-- 
Stefano