From: Stefano Brivio
To: Laurent Vivier
Cc: passt-dev@passt.top
Subject: Re: [PATCH 10/10] vhost-user: Centralise Ethernet frame padding in vu_collect() and vu_pad()
Date: Fri, 03 Apr 2026 08:20:53 +0200 (CEST)
Message-ID: <20260403082052.3cfebb68@elisabeth>
In-Reply-To: <20260401191826.1782394-11-lvivier@redhat.com>
References: <20260401191826.1782394-1-lvivier@redhat.com>
	<20260401191826.1782394-11-lvivier@redhat.com>
Organization: Red Hat
List-Id: Development discussion and patches for passt

On Wed, 1 Apr 2026 21:18:26 +0200
Laurent Vivier wrote:

> The previous per-protocol padding done by vu_pad() in tcp_vu.c and
> udp_vu.c was only correct for single-buffer frames: it assumed the
> padding area always fell within the first iov, writing past its end
> with a plain memset().
> 
> It also required each caller to compute MAX(..., ETH_ZLEN + VNET_HLEN)
> for vu_collect() and to call vu_pad() at the right point, duplicating
> the minimum-size logic across protocols.
> 
> Move the Ethernet minimum size enforcement into vu_collect() itself, so
> that enough buffer space is always reserved for padding regardless of
> the requested frame size.
> 
> Rewrite vu_pad() to take a full iovec array and use iov_memset(),
> making it safe for multi-buffer (mergeable rx buffer) frames.
> 
> In tcp_vu_sock_recv(), replace iov_truncate() with iov_skip_bytes():
> now that all consumers receive explicit data lengths, truncating the
> iovecs is no longer needed. In tcp_vu_data_from_sock(), cap each
> frame's data length against the remaining bytes actually received from
> the socket, so that the last partial frame gets correct headers and
> sequence number advancement.
> 
> Signed-off-by: Laurent Vivier
> ---
>  iov.c       |  1 -
>  tcp_vu.c    | 29 ++++++++++++++---------------
>  udp_vu.c    | 14 ++++++++------
>  vu_common.c | 32 +++++++++++++++-----------------
>  vu_common.h |  2 +-
>  5 files changed, 38 insertions(+), 40 deletions(-)
> 
> diff --git a/iov.c b/iov.c
> index 83b683f3976a..2289b425529e 100644
> --- a/iov.c
> +++ b/iov.c
> @@ -180,7 +180,6 @@ size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size)
>   * Will write less than @length bytes if it runs out of space in
>   * the iov
>   */
> -/* cppcheck-suppress unusedFunction */
>  void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset, int c,
>  		size_t length)
>  {
> diff --git a/tcp_vu.c b/tcp_vu.c
> index ae79a6d856b0..cae6926334b9 100644
> --- a/tcp_vu.c
> +++ b/tcp_vu.c
> @@ -72,12 +72,12 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	struct vu_dev *vdev = c->vdev;
>  	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
>  	struct vu_virtq_element flags_elem[2];
> -	size_t optlen, hdrlen, l2len;
>  	struct ipv6hdr *ip6h = NULL;
>  	struct iphdr *ip4h = NULL;
>  	struct iovec flags_iov[2];
>  	struct tcp_syn_opts *opts;
>  	struct iov_tail payload;
> +	size_t optlen, hdrlen;
>  	struct tcphdr *th;
>  	struct ethhdr *eh;
>  	uint32_t seq;
> @@ -88,7 +88,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
> 
>  	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
>  			      &flags_iov[0], 1, NULL,
> -			      MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
> +			      hdrlen + sizeof(*opts), NULL);
>  	if (elem_cnt != 1)
>  		return -1;
> 
> @@ -128,7 +128,6 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  		return ret;
>  	}
> 
> -	iov_truncate(&flags_iov[0], 1, hdrlen + optlen);
>  	payload = IOV_TAIL(flags_elem[0].in_sg, 1, hdrlen);
> 
>  	if (flags & KEEPALIVE)
> @@ -137,9 +136,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	tcp_fill_headers(c, conn, eh, ip4h, ip6h, th, &payload,
>  			 optlen, NULL, seq, !*c->pcap);
> 
> -	l2len = optlen + hdrlen - VNET_HLEN;
> -	vu_pad(&flags_elem[0].in_sg[0], l2len);
> -
> +	vu_pad(flags_elem[0].in_sg, 1, hdrlen + optlen);
>  	vu_flush(vdev, vq, flags_elem, 1, hdrlen + optlen);
> 
>  	if (*c->pcap)
> @@ -149,7 +146,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	if (flags & DUP_ACK) {
>  		elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
>  				      &flags_iov[1], 1, NULL,
> -				      flags_elem[0].in_sg[0].iov_len, NULL);
> +				      hdrlen + optlen, NULL);
>  		if (elem_cnt == 1 &&
>  		    flags_elem[1].in_sg[0].iov_len >=
>  		    flags_elem[0].in_sg[0].iov_len) {
> @@ -213,7 +210,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>  				 ARRAY_SIZE(elem) - elem_cnt,
>  				 &iov_vu[DISCARD_IOV_NUM + iov_used],
>  				 VIRTQUEUE_MAX_SIZE - iov_used, &in_total,
> -				 MAX(MIN(mss, fillsize) + hdrlen, ETH_ZLEN + VNET_HLEN),
> +				 MIN(mss, fillsize) + hdrlen,
>  				 &frame_size);
>  		if (cnt == 0)
>  			break;
> @@ -249,8 +246,11 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>  	if (!peek_offset_cap)
>  		ret -= already_sent;
> 
> -	/* adjust iov number and length of the last iov */
> -	i = iov_truncate(&iov_vu[DISCARD_IOV_NUM], iov_used, ret);
> +	i = iov_skip_bytes(&iov_vu[DISCARD_IOV_NUM], iov_used,
> +		MAX(hdrlen + ret, VNET_HLEN + ETH_ZLEN),
> +		NULL);

Nit: this should be aligned like this:

	i = iov_skip_bytes(&iov_vu[DISCARD_IOV_NUM], iov_used,
			   MAX(hdrlen + ret, VNET_HLEN + ETH_ZLEN),
			   NULL);

> +	if ((size_t)i < iov_used)
> +		i++;

I'm a bit lost here. I see that this increment restores the
iov_truncate() convention of returning the number of iov items (which
we need later), but... what happens if we have i >= iov_used (even
though my assumption is that it should never happen)? We're throwing
away data?

> 
>  	/* adjust head count */
>  	while (*head_cnt > 0 && head[*head_cnt - 1] >= i)
> @@ -447,11 +447,13 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
>  		size_t frame_size = iov_size(iov, buf_cnt);
>  		bool push = i == head_cnt - 1;
>  		ssize_t dlen;
> -		size_t l2len;
> 
>  		assert(frame_size >= hdrlen);
> 
>  		dlen = frame_size - hdrlen;
> +		if (dlen > len)
> +			dlen = len;
> +		len -= dlen;
> 
>  		/* The IPv4 header checksum varies only with dlen */
>  		if (previous_dlen != dlen)
> @@ -460,10 +462,7 @@
> 
>  		tcp_vu_prepare(c, conn, iov, buf_cnt, dlen, &check, !*c->pcap, push);
> 
> -		/* Pad first/single buffer only, it's at least ETH_ZLEN long */
> -		l2len = dlen + hdrlen - VNET_HLEN;
> -		vu_pad(iov, l2len);
> -
> +		vu_pad(elem[head[i]].in_sg, buf_cnt, dlen + hdrlen);
>  		vu_flush(vdev, vq, &elem[head[i]], buf_cnt, dlen + hdrlen);
> 
>  		if (*c->pcap)
> diff --git a/udp_vu.c b/udp_vu.c
> index 4641f42eb5c4..30af64034516 100644
> --- a/udp_vu.c
> +++ b/udp_vu.c
> @@ -65,7 +65,7 @@ static size_t udp_vu_hdrlen(bool v6)
>  static ssize_t udp_vu_sock_recv(struct iovec *iov, size_t *cnt, int s, bool v6)
>  {
>  	struct msghdr msg = { 0 };
> -	size_t hdrlen, l2len;
> +	size_t hdrlen, iov_used;
>  	ssize_t dlen;
> 
>  	/* compute L2 header length */
> @@ -88,11 +88,12 @@ static ssize_t udp_vu_sock_recv(struct iovec *iov, size_t *cnt, int s, bool v6)
>  	iov[0].iov_base = (char *)iov[0].iov_base - hdrlen;
>  	iov[0].iov_len += hdrlen;
> 
> -	*cnt = iov_truncate(iov, *cnt, dlen + hdrlen);
> -
> -	/* pad frame to 60 bytes: first buffer is at least ETH_ZLEN long */
> -	l2len = dlen + hdrlen - VNET_HLEN;
> -	vu_pad(&iov[0], l2len);
> +	iov_used = iov_skip_bytes(iov, *cnt,
> +				  MAX(dlen + hdrlen, VNET_HLEN + ETH_ZLEN),
> +				  NULL);
> +	if (iov_used < *cnt)
> +		iov_used++;

(I would have the same question here)

> +	*cnt = iov_used; /* one iovec per element */
> 
>  	return dlen;
>  }
> @@ -234,6 +235,7 @@ void udp_vu_sock_to_tap(const struct ctx *c, int s, int n, flow_sidx_t tosidx)
>  			pcap_iov(iov_vu, iov_cnt, VNET_HLEN,
>  				 hdrlen + dlen - VNET_HLEN);
>  		}
> +		vu_pad(iov_vu, iov_cnt, hdrlen + dlen);
>  		vu_flush(vdev, vq, elem, elem_used, hdrlen + dlen);
>  		vu_queue_notify(vdev, vq);
>  	}
> diff --git a/vu_common.c b/vu_common.c
> index d371a59a1813..ca0aab369d3c 100644
> --- a/vu_common.c
> +++ b/vu_common.c
> @@ -74,6 +74,7 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>  	size_t current_iov = 0;
>  	int elem_cnt = 0;
> 
> +	size = MAX(size, ETH_ZLEN + VNET_HLEN); /* Ethernet minimum size */

I think this (if needed):

	size = MAX(size, ETH_ZLEN /* Ethernet minimum size */ + VNET_HLEN);

would be more accurate.
>  	while (current_size < size && elem_cnt < max_elem &&
>  	       current_iov < max_in_sg) {
>  		int ret;
> @@ -262,29 +263,27 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
>  		return -1;
>  	}
> 
> -	size += VNET_HLEN;
>  	elem_cnt = vu_collect(vdev, vq, elem, ARRAY_SIZE(elem), in_sg,
> -			      ARRAY_SIZE(in_sg), &in_total, size, &total);
> -	if (elem_cnt == 0 || total < size) {
> +			      ARRAY_SIZE(in_sg), &in_total, VNET_HLEN + size, &total);
> +	if (elem_cnt == 0 || total < VNET_HLEN + size) {
>  		debug("vu_send_single: no space to send the data "
>  		      "elem_cnt %d size %zu", elem_cnt, total);
>  		goto err;
>  	}
> 
> -	total -= VNET_HLEN;
> -
>  	/* copy data from the buffer to the iovec */
> -	iov_from_buf(in_sg, in_total, VNET_HLEN, buf, total);
> +	iov_from_buf(in_sg, in_total, VNET_HLEN, buf, size);
> 
>  	if (*c->pcap)
>  		pcap_iov(in_sg, in_total, VNET_HLEN, size);
> 
> +	vu_pad(in_sg, in_total, VNET_HLEN + size);
>  	vu_flush(vdev, vq, elem, elem_cnt, VNET_HLEN + size);
>  	vu_queue_notify(vdev, vq);
> 
> -	trace("vhost-user sent %zu", total);
> +	trace("vhost-user sent %zu", size);
> 
> -	return total;
> +	return size;
>  err:
>  	for (i = 0; i < elem_cnt; i++)
>  		vu_queue_detach_element(vq);
> @@ -293,15 +292,14 @@ err:
>  }
> 
>  /**
> - * vu_pad() - Pad 802.3 frame to minimum length (60 bytes) if needed
> - * @iov:	Buffer in iovec array where end of 802.3 frame is stored
> - * @l2len:	Layer-2 length already filled in frame
> + * vu_pad() - Pad short frames to minimum Ethernet length and truncate iovec
> + * @iov:	Pointer to iovec array
> + * @cnt:	Number of entries in @iov
> + * @frame_len:	Data length in @iov (including virtio-net header)
>   */
> -void vu_pad(struct iovec *iov, size_t l2len)
> +void vu_pad(const struct iovec *iov, size_t cnt, size_t frame_len)
>  {
> -	if (l2len >= ETH_ZLEN)
> -		return;
> -
> -	memset((char *)iov->iov_base + iov->iov_len, 0, ETH_ZLEN - l2len);
> -	iov->iov_len += ETH_ZLEN - l2len;
> +	if (frame_len < ETH_ZLEN + VNET_HLEN)

Nit: curly brackets.
Perhaps better: use a temporary variable for ETH_ZLEN + VNET_HLEN -
frame_len ("padding"?).

> +		iov_memset(iov, cnt, frame_len, 0,
> +			   ETH_ZLEN + VNET_HLEN - frame_len);
>  }
> diff --git a/vu_common.h b/vu_common.h
> index 77d1849e6115..51f70084a7cb 100644
> --- a/vu_common.h
> +++ b/vu_common.h
> @@ -44,6 +44,6 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
>  void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
>  		const struct timespec *now);
>  int vu_send_single(const struct ctx *c, const void *buf, size_t size);
> -void vu_pad(struct iovec *iov, size_t l2len);
> +void vu_pad(const struct iovec *iov, size_t cnt, size_t frame_len);
> 
>  #endif /* VU_COMMON_H */

-- 
Stefano