From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 12 Mar 2026 13:05:13 +1100
From: David Gibson
To: Laurent Vivier
Subject: Re: [PATCH v2 02/13] vhost-user: Centralise 802.3 frame padding in vu_collect() and vu_flush()
References: <20260309094744.1907754-1-lvivier@redhat.com>
 <20260309094744.1907754-3-lvivier@redhat.com>
In-Reply-To: <20260309094744.1907754-3-lvivier@redhat.com>
CC: passt-dev@passt.top

On Mon, Mar 09, 2026 at 10:47:33AM +0100, Laurent Vivier wrote:
> The per-protocol padding done by vu_pad() in tcp_vu.c and udp_vu.c was
> only correct for single-buffer frames, and assumed the padding area always
> fell within the first iov. It also relied on each caller computing the
> right MAX(..., ETH_ZLEN + VNET_HLEN) size for vu_collect() and calling
> vu_pad() at the right point.
> 
> Centralise padding logic into the two shared vhost-user helpers instead:
> 
> - vu_collect() now ensures at least ETH_ZLEN + VNET_HLEN bytes of buffer
>   space are collected, so there is always room for a minimum-sized frame.
> 
> - vu_flush() computes the actual frame length (accounting for
>   VIRTIO_NET_F_MRG_RXBUF multi-buffer frames) and passes the padded
>   length to vu_queue_fill().
> 
> A new iov_memset() helper in iov.c zero-fills the padding area in each
> buffer before iov_truncate() sets the logical frame size. The callers in
> tcp_vu.c, udp_vu.c and vu_send_single() use iov_memset() directly,
> replacing the now-removed vu_pad() helper and the MAX(..., ETH_ZLEN +
> VNET_HLEN) size calculations passed to vu_collect().
> 
> Centralising padding here will also ease the move to multi-iovec per
> element support, since there will be a single place to update.
> 
> In vu_send_single(), fix padding, truncation and data copy to use the
> requested frame size rather than the total available buffer space from
> vu_collect(), which could be larger.
> Also add matching padding, truncation
> and explicit size to vu_collect() for the DUP_ACK path in
> tcp_vu_send_flag().
> 
> Signed-off-by: Laurent Vivier

AFAICT this is correct, but some notes for polish below.

> ---
>  iov.c       | 24 +++++++++++++++++++
>  iov.h       |  2 ++
>  tcp_vu.c    | 35 +++++++++++++++++----------
>  udp_vu.c    | 12 ++++++----
>  vu_common.c | 69 +++++++++++++++++++++++++++++++-----------------------
>  vu_common.h |  1 -
>  6 files changed, 96 insertions(+), 47 deletions(-)
> 
> diff --git a/iov.c b/iov.c
> index 31a3f5bc29e5..cd48667226f3 100644
> --- a/iov.c
> +++ b/iov.c
> @@ -169,6 +169,30 @@ size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size)
>  	return i;
>  }
>  
> +/**
> + * iov_memset() - Set bytes of an IO vector to a given value
> + * @iov:	IO vector
> + * @iov_cnt:	Number of elements in @iov
> + * @offset:	Byte offset in the iovec at which to start
> + * @c:		Byte value to fill with
> + * @length:	Number of bytes to set
> + */

Nit: This will write less than @length bytes if it runs out of space
in the iov.  I think that's the correct choice, but it might be worth
noting that explicitly in the description.  Not worth a respin on its
own, obviously.
> +void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset, int c,
> +		size_t length)
> +{
> +	size_t i;
> +
> +	i = iov_skip_bytes(iov, iov_cnt, offset, &offset);
> +
> +	for ( ; i < iov_cnt; i++) {
> +		size_t n = MIN(iov[i].iov_len - offset, length);
> +
> +		memset((char *)iov[i].iov_base + offset, c, n);
> +		offset = 0;
> +		length -= n;
> +	}
> +}
> +
>  /**
>   * iov_tail_prune() - Remove any unneeded buffers from an IOV tail
>   * @tail:	IO vector tail (modified)
> diff --git a/iov.h b/iov.h
> index b4e50b0fca5a..d295d05b3bab 100644
> --- a/iov.h
> +++ b/iov.h
> @@ -30,6 +30,8 @@ size_t iov_to_buf(const struct iovec *iov, size_t iov_cnt,
>  		  size_t offset, void *buf, size_t bytes);
>  size_t iov_size(const struct iovec *iov, size_t iov_cnt);
>  size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size);
> +void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset, int c,
> +		size_t length);
>  
>  /*
>   * DOC: Theory of Operation, struct iov_tail
> diff --git a/tcp_vu.c b/tcp_vu.c
> index fd734e857b3b..3adead5f33fa 100644
> --- a/tcp_vu.c
> +++ b/tcp_vu.c
> @@ -72,12 +72,12 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	struct vu_dev *vdev = c->vdev;
>  	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
>  	struct vu_virtq_element flags_elem[2];
> -	size_t optlen, hdrlen, l2len;
>  	struct ipv6hdr *ip6h = NULL;
>  	struct iphdr *ip4h = NULL;
>  	struct iovec flags_iov[2];
>  	struct tcp_syn_opts *opts;
>  	struct iov_tail payload;
> +	size_t optlen, hdrlen;
>  	struct tcphdr *th;
>  	struct ethhdr *eh;
>  	uint32_t seq;
> @@ -90,7 +90,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	vu_set_element(&flags_elem[0], NULL, &flags_iov[0]);
>  
>  	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
> -			      MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
> +			      hdrlen + sizeof(*opts), NULL);
>  	if (elem_cnt != 1)
>  		return -1;
>  
> @@ -131,6 +131,11 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  		return ret;
>  	}
>  
> +	/* Pad short frames to ETH_ZLEN */
> +	if (ETH_ZLEN + VNET_HLEN > hdrlen + optlen) {
> +		iov_memset(&flags_iov[0], 1, hdrlen + optlen, 0,
> +			   ETH_ZLEN + VNET_HLEN - (hdrlen + optlen));
> +	}

Nit: this is a mildly bulky construction for a conceptually simple
operation that you need to repeat several times.  I wonder if it might
be worth having an iov_memset() variant that takes an end point
instead of a length (and safely no-ops if end < start).

>  	iov_truncate(&flags_iov[0], 1, hdrlen + optlen);
>  	payload = IOV_TAIL(flags_elem[0].in_sg, 1, hdrlen);
>  
> @@ -140,9 +145,6 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	tcp_fill_headers(c, conn, eh, ip4h, ip6h, th, &payload,
>  			 NULL, seq, !*c->pcap);
>  
> -	l2len = optlen + hdrlen - VNET_HLEN;
> -	vu_pad(&flags_elem[0].in_sg[0], l2len);
> -
>  	if (*c->pcap)
>  		pcap_iov(&flags_elem[0].in_sg[0], 1, VNET_HLEN);
>  	nb_ack = 1;
> @@ -151,10 +153,17 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  		vu_set_element(&flags_elem[1], NULL, &flags_iov[1]);
>  
>  		elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
> -				      flags_elem[0].in_sg[0].iov_len, NULL);
> +				      hdrlen + optlen, NULL);
>  		if (elem_cnt == 1 &&
>  		    flags_elem[1].in_sg[0].iov_len >=
>  		    flags_elem[0].in_sg[0].iov_len) {
> +			/* Pad short frames to ETH_ZLEN */
> +			if (ETH_ZLEN + VNET_HLEN > hdrlen + optlen) {
> +				iov_memset(&flags_iov[1], 1, hdrlen + optlen, 0,
> +					   ETH_ZLEN + VNET_HLEN -
> +					   (hdrlen + optlen));
> +			}
> +			iov_truncate(&flags_iov[1], 1, hdrlen + optlen);
>  			memcpy(flags_elem[1].in_sg[0].iov_base,
>  			       flags_elem[0].in_sg[0].iov_base,
>  			       flags_elem[0].in_sg[0].iov_len);
> @@ -212,8 +221,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>  
>  		cnt = vu_collect(vdev, vq, &elem[elem_cnt],
>  				 VIRTQUEUE_MAX_SIZE - elem_cnt,
> -				 MAX(MIN(mss, fillsize) + hdrlen, ETH_ZLEN + VNET_HLEN),
> -				 &frame_size);
> +				 MIN(mss, fillsize) + hdrlen, &frame_size);
>  		if (cnt == 0)
>  			break;
>  
> @@ -222,6 +230,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>  		/* reserve space for headers in iov */
>  		iov = &elem[elem_cnt].in_sg[0];
>  		ASSERT(iov->iov_len >= hdrlen);
> +
>  		iov->iov_base = (char *)iov->iov_base + hdrlen;
>  		iov->iov_len -= hdrlen;
>  		head[(*head_cnt)++] = elem_cnt;
> @@ -246,6 +255,11 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>  	if (!peek_offset_cap)
>  		ret -= already_sent;
>  
> +	/* Pad short frames to ETH_ZLEN */
> +	if (ETH_ZLEN + VNET_HLEN > (size_t)ret + hdrlen) {
> +		iov_memset(&iov_vu[DISCARD_IOV_NUM], elem_cnt, ret, 0,
> +			   ETH_ZLEN + VNET_HLEN - (ret + hdrlen));
> +	}
>  	/* adjust iov number and length of the last iov */
>  	i = iov_truncate(&iov_vu[DISCARD_IOV_NUM], elem_cnt, ret);
>  
> @@ -443,7 +457,6 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
>  		size_t frame_size = iov_size(iov, buf_cnt);
>  		bool push = i == head_cnt - 1;
>  		ssize_t dlen;
> -		size_t l2len;
>  
>  		ASSERT(frame_size >= hdrlen);
>  
> @@ -457,10 +470,6 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
>  
>  		tcp_vu_prepare(c, conn, iov, buf_cnt, &check, !*c->pcap, push);
>  
> -		/* Pad first/single buffer only, it's at least ETH_ZLEN long */
> -		l2len = dlen + hdrlen - VNET_HLEN;
> -		vu_pad(iov, l2len);
> -
>  		if (*c->pcap)
>  			pcap_iov(iov, buf_cnt, VNET_HLEN);
>  
> diff --git a/udp_vu.c b/udp_vu.c
> index 5effca777e0a..ef9d26118eaf 100644
> --- a/udp_vu.c
> +++ b/udp_vu.c
> @@ -73,7 +73,7 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
>  	const struct vu_dev *vdev = c->vdev;
>  	struct msghdr msg = { 0 };
>  	int iov_cnt, iov_used;
> -	size_t hdrlen, l2len;
> +	size_t hdrlen;
>  
>  	ASSERT(!c->no_udp);
>  
> @@ -98,6 +98,7 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
>  
>  	/* reserve space for the headers */
>  	ASSERT(iov_vu[0].iov_len >= MAX(hdrlen, ETH_ZLEN + VNET_HLEN));
> +
>  	iov_vu[0].iov_base = (char *)iov_vu[0].iov_base + hdrlen;
>  	iov_vu[0].iov_len -= hdrlen;
>  
> @@ -115,12 +116,13 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
>  	iov_vu[0].iov_base = (char *)iov_vu[0].iov_base - hdrlen;
>  	iov_vu[0].iov_len += hdrlen;
>  
> +	/* Pad short frames to ETH_ZLEN */
> +	if (ETH_ZLEN + VNET_HLEN > *dlen + hdrlen) {
> +		iov_memset(iov_vu, iov_cnt, *dlen + hdrlen, 0,
> +			   ETH_ZLEN + VNET_HLEN - (*dlen + hdrlen));
> +	}
>  	iov_used = iov_truncate(iov_vu, iov_cnt, *dlen + hdrlen);
>  
> -	/* pad frame to 60 bytes: first buffer is at least ETH_ZLEN long */
> -	l2len = *dlen + hdrlen - VNET_HLEN;
> -	vu_pad(&iov_vu[0], l2len);
> -
>  	vu_set_vnethdr(iov_vu[0].iov_base, iov_used);
>  
>  	/* release unused buffers */
> diff --git a/vu_common.c b/vu_common.c
> index 5f2ce18e5b71..8ea05dd30890 100644
> --- a/vu_common.c
> +++ b/vu_common.c
> @@ -87,8 +87,8 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>  	size_t current_size = 0;
>  	int elem_cnt = 0;
>  
> +	size = MAX(size, ETH_ZLEN + VNET_HLEN); /* 802.3 minimum size */

Nit: I usually prefer "Ethernet" to "802.3", since in practice most
frames we actually use are in Ethernet-II format (ethertype field),
rather than 802.3 format (length field).

>  	while (current_size < size && elem_cnt < max_elem) {
> -		struct iovec *iov;
>  		int ret;
>  
>  		ret = vu_queue_pop(vdev, vq, &elem[elem_cnt]);
> @@ -101,12 +101,12 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>  			break;
>  		}
>  
> -		iov = &elem[elem_cnt].in_sg[0];
> +		elem[elem_cnt].in_num = iov_truncate(elem[elem_cnt].in_sg,
> +						     elem[elem_cnt].in_num,
> +						     size - current_size);
> -		if (iov->iov_len > size - current_size)
> -			iov->iov_len = size - current_size;
> -
> -		current_size += iov->iov_len;
> +		current_size += iov_size(elem[elem_cnt].in_sg,
> +					 elem[elem_cnt].in_num);

Double scanning the iovs of the element (once for iov_truncate(), once
for iov_size()) is a pity.  I guess it's cache hot, so it's probably
not a big deal.  Could be avoided by adding a "truncated length"
return parameter to iov_truncate(), but not sure it's worth the uglier
interface.

>  		elem_cnt++;
>  
>  		if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
> @@ -143,10 +143,30 @@ void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr, int num_buffers)
>  void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
>  	      struct vu_virtq_element *elem, int elem_cnt)
>  {
> -	int i;
> -
> -	for (i = 0; i < elem_cnt; i++)
> -		vu_queue_fill(vdev, vq, &elem[i], elem[i].in_sg[0].iov_len, i);
> +	int i, j, num_buffers;
> +
> +	for (i = 0; i < elem_cnt; i += num_buffers) {

The name "num_buffers" is slightly confusing.  AFAICT this is the
number of elements in the.. group (?is there a proper term?).  Each
element in that group could have multiple buffers in its in_sg list.

> +		const struct virtio_net_hdr_mrg_rxbuf *vnethdr;
> +		size_t len, padding, elem_size;
> +
> +		vnethdr = elem[i].in_sg[0].iov_base;

This assumes that the vnethdr itself fits in the first element.  I'm
guessing that really is a constraint of the vhost protocol, though?
> +		num_buffers = le16toh(vnethdr->num_buffers);
> +
> +		len = 0;
> +		for (j = 0; j < num_buffers - 1; j++) {
> +			elem_size = iov_size(elem[i + j].in_sg,
> +					     elem[i + j].in_num);
> +			vu_queue_fill(vdev, vq, &elem[i + j],
> +				      elem_size, i + j);
> +			len += elem_size;
> +		}
> +		/* pad the last element to have an 802.3 minimum frame size */
> +		elem_size = iov_size(elem[i + j].in_sg, elem[i + j].in_num);

elem_size should already have this value from the inner loop, no?

> +		padding = MAX(0, (ssize_t)(ETH_ZLEN + VNET_HLEN) -
> +			      (ssize_t)(len + elem_size));

I tend to prefer an x > y test followed by unsigned subtraction,
rather than signed subtraction followed by checking for negative,
because it avoids thinking about whether each of the signed/unsigned
casts is strictly safe.

> +		vu_queue_fill(vdev, vq, &elem[i + j], elem_size + padding,
> +			      i + j);
> +	}

I'm not entirely clear on what makes using the padded size here safe.

>  
>  	vu_queue_flush(vdev, vq, elem_cnt);
>  	vu_queue_notify(vdev, vq);
> @@ -268,38 +288,31 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
>  		goto err;
>  	}
>  
> +	/* Pad short frames to ETH_ZLEN */
> +	if (size < ETH_ZLEN + VNET_HLEN) {
> +		iov_memset(in_sg, elem_cnt, size, 0,
> +			   ETH_ZLEN + VNET_HLEN - size);
> +	}
> +	elem_cnt = iov_truncate(in_sg, elem_cnt, size);

Truncating to the unpadded size here seems odd.
>  	vu_set_vnethdr(in_sg[0].iov_base, elem_cnt);
>  
> -	total -= VNET_HLEN;
> +	size -= VNET_HLEN;
>  
>  	/* copy data from the buffer to the iovec */
> -	iov_from_buf(in_sg, elem_cnt, VNET_HLEN, buf, total);
> +	iov_from_buf(in_sg, elem_cnt, VNET_HLEN, buf, size);
>  
>  	if (*c->pcap)
>  		pcap_iov(in_sg, elem_cnt, VNET_HLEN);
>  
>  	vu_flush(vdev, vq, elem, elem_cnt);
>  
> -	trace("vhost-user sent %zu", total);
> +	trace("vhost-user sent %zu", size);
>  
> -	return total;
> +	return size;
>  err:
>  	for (i = 0; i < elem_cnt; i++)
>  		vu_queue_detach_element(vq);
>  
>  	return -1;
>  }
> -
> -/**
> - * vu_pad() - Pad 802.3 frame to minimum length (60 bytes) if needed
> - * @iov:	Buffer in iovec array where end of 802.3 frame is stored
> - * @l2len:	Layer-2 length already filled in frame
> - */
> -void vu_pad(struct iovec *iov, size_t l2len)
> -{
> -	if (l2len >= ETH_ZLEN)
> -		return;
> -
> -	memset((char *)iov->iov_base + iov->iov_len, 0, ETH_ZLEN - l2len);
> -	iov->iov_len += ETH_ZLEN - l2len;
> -}
> diff --git a/vu_common.h b/vu_common.h
> index 865d9771fa89..5de0c987b936 100644
> --- a/vu_common.h
> +++ b/vu_common.h
> @@ -61,6 +61,5 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
>  void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
>  		const struct timespec *now);
>  int vu_send_single(const struct ctx *c, const void *buf, size_t size);
> -void vu_pad(struct iovec *iov, size_t l2len);
>  
>  #endif /* VU_COMMON_H */
> -- 
> 2.53.0
> 

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson