public inbox for passt-dev@passt.top
From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH v5 3/8] vhost-user: Centralise Ethernet frame padding in vu_collect(), vu_pad() and vu_flush()
Date: Fri, 27 Mar 2026 18:58:29 +0100	[thread overview]
Message-ID: <20260327175834.831995-4-lvivier@redhat.com> (raw)
In-Reply-To: <20260327175834.831995-1-lvivier@redhat.com>

The per-protocol padding done by vu_pad() in tcp_vu.c and udp_vu.c was
only correct for single-buffer frames, as it assumed the padding area
always fell within the first iovec entry.  It also relied on each caller
computing the right MAX(..., ETH_ZLEN + VNET_HLEN) size to pass to
vu_collect() and calling vu_pad() at the right point.

Centralise padding logic into three shared vhost-user helpers instead:

 - vu_collect() now ensures at least ETH_ZLEN + VNET_HLEN bytes of buffer
   space are collected, so there is always room for a minimum-sized frame.

 - vu_pad() replaces the old single-iov helper with a new implementation
   that takes a full iovec array plus a 'skipped' byte count.  It uses the
   new iov_memset() helper in iov.c to zero-fill the padding area across
   iovec boundaries, then calls iov_truncate() to set the logical frame
   size (see the sketch after this list).

 - vu_flush() computes the actual frame length (accounting for
   VIRTIO_NET_F_MRG_RXBUF multi-buffer frames) and passes the padded
   length to vu_queue_fill().
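
As a rough illustration of the mechanics (a standalone approximation of
what iov_memset() does, with pad_sketch() as a hypothetical name, not
code from this series), zero-filling a byte range that may start in one
iovec entry and spill into the next looks like:

	#include <stddef.h>
	#include <string.h>
	#include <sys/uio.h>

	/* Zero-fill @length bytes starting @offset bytes into the area
	 * described by @iov, crossing entry boundaries as needed.
	 */
	static void pad_sketch(const struct iovec *iov, size_t iov_cnt,
			       size_t offset, size_t length)
	{
		size_t i;

		for (i = 0; i < iov_cnt && length; i++) {
			size_t n;

			if (offset >= iov[i].iov_len) {
				offset -= iov[i].iov_len;
				continue;	/* pad area starts later */
			}
			n = iov[i].iov_len - offset;
			if (n > length)
				n = length;
			memset((char *)iov[i].iov_base + offset, 0, n);
			offset = 0;
			length -= n;
		}
	}

On top of that, vu_pad() only has to compute the pad length as
ETH_ZLEN + VNET_HLEN - (skipped + size) and truncate the iovec to the
logical frame size.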

Callers in tcp_vu.c, udp_vu.c and vu_send_single() now use the new
vu_pad() in place of the old pad-then-truncate sequences, and no longer
need the MAX(..., ETH_ZLEN + VNET_HLEN) size calculations for
vu_collect().
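
The resulting caller pattern is roughly the following (a simplified
sketch with error handling trimmed; frame_len stands for the
per-protocol frame size, e.g. hdrlen + optlen in tcp_vu_send_flag()):

	elem_cnt = vu_collect(vdev, vq, elem, 1, iov, 1, NULL,
			      frame_len, NULL);
	if (elem_cnt != 1)
		return -1;

	vu_pad(iov, 1, 0, frame_len);	/* zero-fill padding, truncate */
	/* ... fill Ethernet/IP/L4 headers and payload ... */
	vu_flush(vdev, vq, elem, 1);	/* set vnet header, fill, flush */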

Centralising the padding here will also ease the move to
multi-iovec-per-element support, since there will be a single place to
update.

In vu_send_single(), fix padding, truncation and data copy to use the
requested frame size rather than the total available buffer space from
vu_collect(), which could be larger.  Also add matching padding and
truncation, and an explicit size for vu_collect(), on the DUP_ACK path
in tcp_vu_send_flag().
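
For concreteness, assuming VNET_HLEN is the usual 12-byte
struct virtio_net_hdr_mrg_rxbuf and ETH_ZLEN the usual 60 bytes, a
42-byte ARP reply handed to vu_send_single() works out as:

	size    = 42 + VNET_HLEN;		/* 54 bytes requested */
	padding = ETH_ZLEN + VNET_HLEN - size;	/* 72 - 54 = 18 bytes */

vu_pad() zero-fills those 18 bytes in the guest buffers and truncates
the iovec to the 54 data bytes; vu_flush() then reports the padded
72-byte length to the guest via vu_queue_fill().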

Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
 iov.c       |  1 -
 tcp_vu.c    | 23 ++++++-------------
 udp_vu.c    |  9 ++------
 vu_common.c | 63 ++++++++++++++++++++++++++++++++++-------------------
 vu_common.h |  2 +-
 5 files changed, 50 insertions(+), 48 deletions(-)

diff --git a/iov.c b/iov.c
index 0188acdf5eba..8134b8c9f988 100644
--- a/iov.c
+++ b/iov.c
@@ -180,7 +180,6 @@ size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size)
  * 		Will write less than @length bytes if it runs out of space in
  * 		the iov
  */
-/* cppcheck-suppress unusedFunction */
 void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset, int c,
 		size_t length)
 {
diff --git a/tcp_vu.c b/tcp_vu.c
index 0cd01190d612..7d3285152ad9 100644
--- a/tcp_vu.c
+++ b/tcp_vu.c
@@ -72,12 +72,12 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	struct vu_dev *vdev = c->vdev;
 	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
 	struct vu_virtq_element flags_elem[2];
-	size_t optlen, hdrlen, l2len;
 	struct ipv6hdr *ip6h = NULL;
 	struct iphdr *ip4h = NULL;
 	struct iovec flags_iov[2];
 	struct tcp_syn_opts *opts;
 	struct iov_tail payload;
+	size_t optlen, hdrlen;
 	struct tcphdr *th;
 	struct ethhdr *eh;
 	uint32_t seq;
@@ -88,7 +88,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 
 	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
 			      &flags_iov[0], 1, NULL,
-			      MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
+			      hdrlen + sizeof(*opts), NULL);
 	if (elem_cnt != 1)
 		return -1;
 
@@ -128,7 +128,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 		return ret;
 	}
 
-	iov_truncate(&flags_iov[0], 1, hdrlen + optlen);
+	vu_pad(&flags_iov[0], 1, 0, hdrlen + optlen);
 	payload = IOV_TAIL(flags_elem[0].in_sg, 1, hdrlen);
 
 	if (flags & KEEPALIVE)
@@ -137,9 +137,6 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	tcp_fill_headers(c, conn, eh, ip4h, ip6h, th, &payload,
 			 NULL, seq, !*c->pcap);
 
-	l2len = optlen + hdrlen - VNET_HLEN;
-	vu_pad(&flags_elem[0].in_sg[0], l2len);
-
 	vu_flush(vdev, vq, flags_elem, 1);
 
 	if (*c->pcap)
@@ -148,14 +145,14 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	if (flags & DUP_ACK) {
 		elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
 				      &flags_iov[1], 1, NULL,
-				      flags_elem[0].in_sg[0].iov_len, NULL);
+				      hdrlen + optlen, NULL);
 		if (elem_cnt == 1 &&
 		    flags_elem[1].in_sg[0].iov_len >=
 		    flags_elem[0].in_sg[0].iov_len) {
+			vu_pad(&flags_iov[1], 1, 0, hdrlen + optlen);
 			memcpy(flags_elem[1].in_sg[0].iov_base,
 			       flags_elem[0].in_sg[0].iov_base,
 			       flags_elem[0].in_sg[0].iov_len);
-
 			vu_flush(vdev, vq, &flags_elem[1], 1);
 
 			if (*c->pcap)
@@ -211,7 +208,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
 				 ARRAY_SIZE(elem) - elem_cnt,
 				 &iov_vu[DISCARD_IOV_NUM + iov_used],
 				 VIRTQUEUE_MAX_SIZE - iov_used, &in_total,
-				 MAX(MIN(mss, fillsize) + hdrlen, ETH_ZLEN + VNET_HLEN),
+				 MIN(mss, fillsize) + hdrlen,
 				 &frame_size);
 		if (cnt == 0)
 			break;
@@ -247,8 +244,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
 	if (!peek_offset_cap)
 		ret -= already_sent;
 
-	/* adjust iov number and length of the last iov */
-	i = iov_truncate(&iov_vu[DISCARD_IOV_NUM], iov_used, ret);
+	i = vu_pad(&iov_vu[DISCARD_IOV_NUM], iov_used, hdrlen, ret);
 
 	/* adjust head count */
 	while (*head_cnt > 0 && head[*head_cnt - 1] >= i)
@@ -444,7 +440,6 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
 		size_t frame_size = iov_size(iov, buf_cnt);
 		bool push = i == head_cnt - 1;
 		ssize_t dlen;
-		size_t l2len;
 
 		assert(frame_size >= hdrlen);
 
@@ -457,10 +452,6 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
 
 		tcp_vu_prepare(c, conn, iov, buf_cnt, &check, !*c->pcap, push);
 
-		/* Pad first/single buffer only, it's at least ETH_ZLEN long */
-		l2len = dlen + hdrlen - VNET_HLEN;
-		vu_pad(iov, l2len);
-
 		vu_flush(vdev, vq, &elem[head[i]], buf_cnt);
 
 		if (*c->pcap)
diff --git a/udp_vu.c b/udp_vu.c
index f8629af58ab5..537e9c92cfa6 100644
--- a/udp_vu.c
+++ b/udp_vu.c
@@ -73,8 +73,7 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
 	const struct vu_dev *vdev = c->vdev;
 	int elem_cnt, elem_used, iov_used;
 	struct msghdr msg  = { 0 };
-	size_t hdrlen, l2len;
-	size_t iov_cnt;
+	size_t iov_cnt, hdrlen;
 
 	assert(!c->no_udp);
 
@@ -117,13 +116,9 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
 	iov_vu[0].iov_base = (char *)iov_vu[0].iov_base - hdrlen;
 	iov_vu[0].iov_len += hdrlen;
 
-	iov_used = iov_truncate(iov_vu, iov_cnt, *dlen + hdrlen);
+	iov_used = vu_pad(iov_vu, iov_cnt, 0, *dlen + hdrlen);
 	elem_used = iov_used; /* one iovec per element */
 
-	/* pad frame to 60 bytes: first buffer is at least ETH_ZLEN long */
-	l2len = *dlen + hdrlen - VNET_HLEN;
-	vu_pad(&iov_vu[0], l2len);
-
 	/* release unused buffers */
 	vu_queue_rewind(vq, elem_cnt - elem_used);
 
diff --git a/vu_common.c b/vu_common.c
index 7627fad5976b..3bc6f1f42a8e 100644
--- a/vu_common.c
+++ b/vu_common.c
@@ -74,6 +74,7 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
 	size_t current_iov = 0;
 	int elem_cnt = 0;
 
+	size = MAX(size, ETH_ZLEN + VNET_HLEN); /* Ethernet minimum size */
 	while (current_size < size && elem_cnt < max_elem &&
 	       current_iov < max_in_sg) {
 		int ret;
@@ -113,13 +114,31 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
 	return elem_cnt;
 }
 
+/**
+ * vu_pad() - Pad short frames to minimum Ethernet length and truncate iovec
+ * @iov:	Pointer to iovec array
+ * @cnt:	Number of entries in @iov
+ * @skipped:	Bytes already accounted for in the frame but not in @iov
+ * @size:	Data length in @iov
+ *
+ * Return: number of iovec entries after truncation
+ */
+size_t vu_pad(struct iovec *iov, size_t cnt, size_t skipped, size_t size)
+{
+	if (skipped + size < ETH_ZLEN + VNET_HLEN) {
+		iov_memset(iov, cnt, size, 0,
+			   ETH_ZLEN + VNET_HLEN - (skipped + size));
+	}
+
+	return iov_truncate(iov, cnt, size);
+}
+
 /**
  * vu_set_vnethdr() - set virtio-net headers
  * @vnethdr:		Address of the header to set
  * @num_buffers:	Number of guest buffers of the frame
  */
-static void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr,
-			   int num_buffers)
+static void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr, int num_buffers)
 {
 	vnethdr->hdr = VU_HEADER;
 	/* Note: if VIRTIO_NET_F_MRG_RXBUF is not negotiated,
@@ -138,15 +157,25 @@ static void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr,
 void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
 	      struct vu_virtq_element *elem, int elem_cnt)
 {
+	size_t len, padding, elem_size;
 	int i;
 
 	vu_set_vnethdr(elem[0].in_sg[0].iov_base, elem_cnt);
 
-	for (i = 0; i < elem_cnt; i++) {
-		size_t elem_size = iov_size(elem[i].in_sg, elem[i].in_num);
-
+	len = 0;
+	for (i = 0; i < elem_cnt - 1; i++) {
+		elem_size = iov_size(elem[i].in_sg, elem[i].in_num);
 		vu_queue_fill(vdev, vq, &elem[i], elem_size, i);
+		len += elem_size;
 	}
+	/* pad the last element so the frame reaches Ethernet minimum size */
+	elem_size = iov_size(elem[i].in_sg, elem[i].in_num);
+	if (ETH_ZLEN + VNET_HLEN > len + elem_size)
+		padding = ETH_ZLEN + VNET_HLEN - (len + elem_size);
+	else
+		padding = 0;
+
+	vu_queue_fill(vdev, vq, &elem[i], elem_size + padding, i);
 
 	vu_queue_flush(vdev, vq, elem_cnt);
 }
@@ -262,10 +291,12 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
 		goto err;
 	}
 
-	total -= VNET_HLEN;
+	in_total = vu_pad(in_sg, in_total, 0, size);
+
+	size -= VNET_HLEN;
 
 	/* copy data from the buffer to the iovec */
-	iov_from_buf(in_sg, in_total, VNET_HLEN, buf, total);
+	iov_from_buf(in_sg, in_total, VNET_HLEN, buf, size);
 
 	if (*c->pcap)
 		pcap_iov(in_sg, in_total, VNET_HLEN);
@@ -273,26 +304,12 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
 	vu_flush(vdev, vq, elem, elem_cnt);
 	vu_queue_notify(vdev, vq);
 
-	trace("vhost-user sent %zu", total);
+	trace("vhost-user sent %zu", size);
 
-	return total;
+	return size;
 err:
 	for (i = 0; i < elem_cnt; i++)
 		vu_queue_detach_element(vq);
 
 	return -1;
 }
-
-/**
- * vu_pad() - Pad 802.3 frame to minimum length (60 bytes) if needed
- * @iov:	Buffer in iovec array where end of 802.3 frame is stored
- * @l2len:	Layer-2 length already filled in frame
- */
-void vu_pad(struct iovec *iov, size_t l2len)
-{
-	if (l2len >= ETH_ZLEN)
-		return;
-
-	memset((char *)iov->iov_base + iov->iov_len, 0, ETH_ZLEN - l2len);
-	iov->iov_len += ETH_ZLEN - l2len;
-}
diff --git a/vu_common.h b/vu_common.h
index 4037ab765b7d..13e0126fb16c 100644
--- a/vu_common.h
+++ b/vu_common.h
@@ -39,11 +39,11 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
 	       struct vu_virtq_element *elem, int max_elem,
 	       struct iovec *in_sg, size_t max_in_sg, size_t *in_total,
 	       size_t size, size_t *collected);
+size_t vu_pad(struct iovec *iov, size_t cnt, size_t skipped, size_t size);
 void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
 	      struct vu_virtq_element *elem, int elem_cnt);
 void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
 		const struct timespec *now);
 int vu_send_single(const struct ctx *c, const void *buf, size_t size);
-void vu_pad(struct iovec *iov, size_t l2len);
 
 #endif /* VU_COMMON_H */
-- 
2.53.0


Thread overview: 9+ messages
2026-03-27 17:58 [PATCH v5 0/8] vhost-user,udp: Handle multiple iovec entries per virtqueue element Laurent Vivier
2026-03-27 17:58 ` [PATCH v5 1/8] iov: Introduce iov_memset() Laurent Vivier
2026-03-27 17:58 ` [PATCH v5 2/8] vu_common: Move vnethdr setup into vu_flush() Laurent Vivier
2026-03-27 17:58 ` [PATCH v5 3/8] vhost-user: Centralise Ethernet frame padding in vu_collect(), vu_pad() and vu_flush() Laurent Vivier [this message]
2026-03-27 17:58 ` [PATCH v5 4/8] udp_vu: Move virtqueue management from udp_vu_sock_recv() to its caller Laurent Vivier
2026-03-27 17:58 ` [PATCH v5 5/8] udp_vu: Pass iov explicitly to helpers instead of using file-scoped array Laurent Vivier
2026-03-27 17:58 ` [PATCH v5 6/8] udp_vu: Allow virtqueue elements with multiple iovec entries Laurent Vivier
2026-03-27 17:58 ` [PATCH v5 7/8] iov: Introduce IOV_PUSH_HEADER() macro Laurent Vivier
2026-03-27 17:58 ` [PATCH v5 8/8] udp: Pass iov_tail to udp_update_hdr4()/udp_update_hdr6() Laurent Vivier
