Date: Wed, 27 Sep 2023 19:06:03 +0200
From: Stefano Brivio <sbrivio@redhat.com>
To: David Gibson
Cc: Matej Hrica, passt-dev@passt.top
Subject: Re: [PATCH RFT 4/5] tcp, tap: Don't increase tap-side sequence counter for dropped frames
Message-ID: <20230927190603.10a1ed74@elisabeth>
References: <20230922220610.58767-1-sbrivio@redhat.com>
	<20230922220610.58767-5-sbrivio@redhat.com>
Organization: Red Hat

On Mon, 25 Sep 2023 14:47:52 +1000
David Gibson wrote:

> On Sat, Sep 23, 2023 at 12:06:09AM +0200, Stefano Brivio wrote:
> > ...so that we'll retry sending them, instead of more-or-less silently
> > dropping them. This happens quite frequently if our sending buffer on
> > the UNIX domain socket is heavily constrained (for instance, by the
> > 208 KiB default memory limit).
> > 
> > It might be argued that dropping frames is part of the expected TCP
> > flow: we don't dequeue those from the socket anyway, so we'll
> > eventually retransmit them.
> > 
> > But we don't need the receiver to tell us (by the way of duplicate or
> > missing ACKs) that we couldn't send them: we already know as
> > sendmsg() reports that. This seems to considerably increase
> > throughput stability and throughput itself for TCP connections with
> > default wmem_max values.
> > 
> > Unfortunately, the 16 bits left as padding in the frame descriptors
> 
> I assume you're referring to the 'pad' fields in tcp[46]_l2_buf_t,
> yes?

Right, that.

> For AVX2 we have substantially more space here.  Couldn't we put
> a conn (or seq) pointer in here at the cost of a few bytes MSS for
> non-AVX2 and zero cost for AVX2 (which is probably the majority case)?

Yes, true. On the other hand, having this parallel array only affects
readability I guess, whereas inserting pointers and lengths in
tcp[46]_l2_buf_t actually decreases the usable MSS (not just on
non-AVX2 x86, but also on other architectures). So I'd rather stick to
this.
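
Just to make the trade-off concrete, a rough sketch, with an invented,
simplified descriptor layout and size (not the real tcp[46]_l2_buf_t):
the only spare room in the descriptor is the 16-bit pad, so a full
pointer-plus-length back-reference either has to come out of the
payload area or stays outside the descriptor, in a parallel array.

/* Illustrative only: invented descriptors, not the real tcp[46]_l2_buf_t */
#include <stdint.h>

#define DESC_SIZE 2048			/* assumed fixed descriptor size */

/* (a) as in this patch: descriptor untouched, back-reference kept in a
 * parallel array indexed the same way as the descriptor array
 */
struct desc_parallel {
	uint16_t pad;			/* the existing 16 spare bits */
	uint8_t frame[DESC_SIZE - sizeof(uint16_t)];
};

struct seq_update {
	uint32_t *seq;			/* &conn->seq_to_tap for this frame */
	uint16_t len;			/* L4 payload length queued */
};

/* (b) back-reference embedded in the descriptor: whatever it occupies
 * beyond the 16 spare bits is payload (MSS) lost on non-AVX2 builds
 */
struct desc_embedded {
	uint32_t *seq;
	uint16_t len;
	uint8_t frame[DESC_SIZE - sizeof(uint32_t *) - sizeof(uint16_t)];
};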

> > we use internally aren't enough to uniquely identify for which
> > connection we should update sequence numbers: create a parallel
> > array of pointers to sequence numbers and L4 lengths, of
> > TCP_FRAMES_MEM size, and go through it after calling sendmsg().
> > 
> > Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
> > ---
> >  tap.c | 10 +++++++---
> >  tap.h |  2 +-
> >  tcp.c | 43 ++++++++++++++++++++++++++++++++++++-------
> >  3 files changed, 44 insertions(+), 11 deletions(-)
> > 
> > diff --git a/tap.c b/tap.c
> > index 93db989..b30ff81 100644
> > --- a/tap.c
> > +++ b/tap.c
> > @@ -413,13 +413,15 @@ static size_t tap_send_frames_passt(const struct ctx *c,
> >   * @c:		Execution context
> >   * @iov:	Array of buffers, each containing one frame (with L2 headers)
> >   * @n:		Number of buffers/frames in @iov
> > + *
> > + * Return: number of frames actually sent
> >   */
> > -void tap_send_frames(struct ctx *c, const struct iovec *iov, size_t n)
> > +size_t tap_send_frames(struct ctx *c, const struct iovec *iov, size_t n)
> >  {
> >  	size_t m;
> > 
> >  	if (!n)
> > -		return;
> > +		return 0;
> > 
> >  	if (c->mode == MODE_PASST)
> >  		m = tap_send_frames_passt(c, iov, n);
> > @@ -427,9 +429,11 @@ void tap_send_frames(struct ctx *c, const struct iovec *iov, size_t n)
> >  		m = tap_send_frames_pasta(c, iov, n);
> > 
> >  	if (m < n)
> > -		debug("tap: dropped %lu frames of %lu due to short send", n - m, n);
> > +		debug("tap: failed to send %lu frames of %lu", n - m, n);
> > 
> >  	pcap_multiple(iov, m, c->mode == MODE_PASST ? sizeof(uint32_t) : 0);
> > +
> > +	return m;
> >  }
> > 
> >  /**
> > diff --git a/tap.h b/tap.h
> > index 021fb7c..952fafc 100644
> > --- a/tap.h
> > +++ b/tap.h
> > @@ -73,7 +73,7 @@ void tap_icmp6_send(const struct ctx *c,
> > 		    const struct in6_addr *src, const struct in6_addr *dst,
> > 		    void *in, size_t len);
> >  int tap_send(const struct ctx *c, const void *data, size_t len);
> > -void tap_send_frames(struct ctx *c, const struct iovec *iov, size_t n);
> > +size_t tap_send_frames(struct ctx *c, const struct iovec *iov, size_t n);
> >  void tap_update_mac(struct tap_hdr *taph,
> > 		    const unsigned char *eth_d, const unsigned char *eth_s);
> >  void tap_listen_handler(struct ctx *c, uint32_t events);
> > diff --git a/tcp.c b/tcp.c
> > index 4606f17..76b7b8d 100644
> > --- a/tcp.c
> > +++ b/tcp.c
> > @@ -434,6 +434,16 @@ static int tcp_sock_ns		[NUM_PORTS][IP_VERSIONS];
> >   */
> >  static union inany_addr low_rtt_dst[LOW_RTT_TABLE_SIZE];
> > 
> > +/**
> > + * tcp_buf_seq_update - Sequences to update with length of frames once sent
> > + * @seq:	Pointer to sequence number sent to tap-side, to be updated
> > + * @len:	TCP payload length
> > + */
> > +struct tcp_buf_seq_update {
> > +	uint32_t *seq;
> > +	uint16_t len;
> > +};
> > +
> >  /* Static buffers */
> > 
> >  /**
> > @@ -462,6 +472,8 @@ static struct tcp4_l2_buf_t {
> >  #endif
> >  tcp4_l2_buf[TCP_FRAMES_MEM];
> > 
> > +static struct tcp_buf_seq_update tcp4_l2_buf_seq_update[TCP_FRAMES_MEM];
> > +
> >  static unsigned int tcp4_l2_buf_used;
> > 
> >  /**
> > @@ -490,6 +502,8 @@ struct tcp6_l2_buf_t {
> >  #endif
> >  tcp6_l2_buf[TCP_FRAMES_MEM];
> > 
> > +static struct tcp_buf_seq_update tcp6_l2_buf_seq_update[TCP_FRAMES_MEM];
> > +
> >  static unsigned int tcp6_l2_buf_used;
> > 
> >  /* recvmsg()/sendmsg() data for tap */
> > @@ -1369,10 +1383,17 @@ static void tcp_l2_flags_buf_flush(struct ctx *c)
> >   */
> >  static void tcp_l2_data_buf_flush(struct ctx *c)
> >  {
> > -	tap_send_frames(c, tcp6_l2_iov, tcp6_l2_buf_used);
> > +	unsigned i;
> > +	size_t m;
> > +
> > +	m = tap_send_frames(c, tcp6_l2_iov, tcp6_l2_buf_used);
> > +	for (i = 0; i < m; i++)
> > +		*tcp6_l2_buf_seq_update[i].seq += tcp6_l2_buf_seq_update[i].len;
> >  	tcp6_l2_buf_used = 0;
> > 
> > -	tap_send_frames(c, tcp4_l2_iov, tcp4_l2_buf_used);
> > +	m = tap_send_frames(c, tcp4_l2_iov, tcp4_l2_buf_used);
> > +	for (i = 0; i < m; i++)
> > +		*tcp4_l2_buf_seq_update[i].seq += tcp4_l2_buf_seq_update[i].len;
> >  	tcp4_l2_buf_used = 0;
> >  }
> > 
> > @@ -2149,10 +2170,11 @@ static int tcp_sock_consume(struct tcp_tap_conn *conn, uint32_t ack_seq)
> >   * @plen:	Payload length at L4
> >   * @no_csum:	Don't compute IPv4 checksum, use the one from previous buffer
> >   * @seq:	Sequence number to be sent
> > - * @now:	Current timestamp
> > + * @seq_update:	Pointer to sequence number to update on successful send
> >   */
> >  static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn,
> > -			    ssize_t plen, int no_csum, uint32_t seq)
> > +			    ssize_t plen, int no_csum, uint32_t seq,
> > +			    uint32_t *seq_update)
> 
> seq_update is always &conn->seq_to_tap, so there's no need for an
> additional parameter.

Oh, right, I'll drop that.
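
As a side note, a minimal standalone model of how that simplification
might look (a sketch with made-up names, not the actual passt code):
the queueing side always records &conn->seq_to_tap itself, so no extra
parameter is needed, and the flush side then only advances the counters
behind the frames that were actually sent.

#include <stddef.h>
#include <stdint.h>

#define FRAMES_MEM 256

struct conn {				/* stand-in for struct tcp_tap_conn */
	uint32_t seq_to_tap;
};

static struct {
	uint32_t *seq;			/* counter to bump once the frame is sent */
	uint16_t len;			/* L4 payload length queued for it */
} seq_update[FRAMES_MEM];

static unsigned buf_used;		/* would be passed as @n to the send path */

static void queue_frame(struct conn *conn, uint16_t plen)
{
	seq_update[buf_used].seq = &conn->seq_to_tap;	/* no extra parameter */
	seq_update[buf_used].len = plen;
	buf_used++;
}

/* @m: number of frames actually sent, as reported by the send path */
static void flush_done(size_t m)
{
	size_t i;

	for (i = 0; i < m; i++)
		*seq_update[i].seq += seq_update[i].len;

	buf_used = 0;
}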

> >  {
> >  	struct iovec *iov;
> > 
> > @@ -2160,6 +2182,9 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn,
> >  		struct tcp4_l2_buf_t *b = &tcp4_l2_buf[tcp4_l2_buf_used];
> >  		uint16_t *check = no_csum ? &(b - 1)->iph.check : NULL;
> > 
> > +		tcp4_l2_buf_seq_update[tcp4_l2_buf_used].seq = seq_update;
> > +		tcp4_l2_buf_seq_update[tcp4_l2_buf_used].len = plen;
> > +
> >  		iov = tcp4_l2_iov + tcp4_l2_buf_used++;
> >  		iov->iov_len = tcp_l2_buf_fill_headers(c, conn, b, plen,
> > 						       check, seq);
> > @@ -2168,6 +2193,9 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn,
> >  	} else if (CONN_V6(conn)) {
> >  		struct tcp6_l2_buf_t *b = &tcp6_l2_buf[tcp6_l2_buf_used];
> > 
> > +		tcp6_l2_buf_seq_update[tcp6_l2_buf_used].seq = seq_update;
> > +		tcp6_l2_buf_seq_update[tcp6_l2_buf_used].len = plen;
> > +
> >  		iov = tcp6_l2_iov + tcp6_l2_buf_used++;
> >  		iov->iov_len = tcp_l2_buf_fill_headers(c, conn, b, plen,
> > 						       NULL, seq);
> > @@ -2193,7 +2221,7 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> >  	int s = conn->sock, i, ret = 0;
> >  	struct msghdr mh_sock = { 0 };
> >  	uint16_t mss = MSS_GET(conn);
> > -	uint32_t already_sent;
> > +	uint32_t already_sent, seq;
> >  	struct iovec *iov;
> > 
> >  	already_sent = conn->seq_to_tap - conn->seq_ack_from_tap;
> > @@ -2282,14 +2310,15 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
> > 
> >  		/* Finally, queue to tap */
> >  		plen = mss;
> > +		seq = conn->seq_to_tap;
> 
> This will only be correct if tcp_l2_data_buf_flush() is *always*
> called between tcp_data_from_sock() calls for the same socket.  That
> should be true for the normal course of things.  However, couldn't it
> happen that we get a normal socket EPOLLIN event for a particular
> connection - calling tcp_data_from_sock() - but in the same epoll()
> round we also get a tap ack for the same connection which causes
> another call to tcp_data_from_sock() (with the change from patch
> 2/5).  IIRC those would both happen before the deferred handling and
> therefore the data_buf_flush().

Ah, yes, I actually wrote this before 2/5 and concluded it was okay :/
but with that change, it's not. Unless we drop that change from 2/5.

> Not sure how to deal with that short of separate 'seq_queued' and
> 'seq_sent' counters in the connection structure, which is a bit
> unfortunate.

I wonder how bad it is if we call tcp_l2_data_buf_flush()
unconditionally before calling tcp_data_from_sock() from
tcp_tap_handler().

But again, maybe this is not needed at all, we should check that epoll
detail from 2/5 first...
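
For the record, one possible shape of the 'seq_queued'/'seq_sent' idea
above, just to make it concrete (hypothetical names, nothing of this is
in the series): data is framed against seq_queued, only seq_sent moves
for frames that were actually written, and dropped frames pull
seq_queued back so the same data is picked up from the socket again.

#include <stdint.h>

/* Hypothetical per-connection counters, sketch only */
struct tap_seq {
	uint32_t seq_queued;	/* highest sequence handed to the tap queue */
	uint32_t seq_sent;	/* highest sequence actually sent to the tap */
};

/* queue time: each new frame starts at seq_queued */
static uint32_t frame_seq(struct tap_seq *s, uint16_t plen)
{
	uint32_t seq = s->seq_queued;

	s->seq_queued += plen;
	return seq;
}

/* flush time: catch seq_sent up for each frame that was written... */
static void frame_sent(struct tap_seq *s, uint16_t plen)
{
	s->seq_sent += plen;
}

/* ...and rewind seq_queued for the ones that were dropped, so the same
 * data is queued (and eventually sent) again on the next pass
 */
static void frames_dropped(struct tap_seq *s)
{
	s->seq_queued = s->seq_sent;
}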

-- 
Stefano