From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH v3 4/8] vhost-user: Centralise Ethernet frame padding in vu_collect(), vu_pad() and vu_flush()
Date: Mon, 16 Mar 2026 19:07:17 +0100
Message-ID: <20260316180721.2230640-5-lvivier@redhat.com>
In-Reply-To: <20260316180721.2230640-1-lvivier@redhat.com>
References: <20260316180721.2230640-1-lvivier@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit

The per-protocol padding done by vu_pad() in tcp_vu.c and udp_vu.c was
only correct for single-buffer frames, and assumed that the padding
area always fell within the first iov. It also relied on each caller
computing the right MAX(..., ETH_ZLEN + VNET_HLEN) size for
vu_collect() and calling vu_pad() at the right point.

Centralise the padding logic in three shared vhost-user helpers
instead:

- vu_collect() now ensures at least ETH_ZLEN + VNET_HLEN bytes of
  buffer space are collected, so there is always room for a
  minimum-sized frame.

- vu_pad() replaces the old single-iov helper with a new
  implementation that takes a full iovec array plus a 'skipped' byte
  count. It uses a new iov_memset() helper in iov.c to zero-fill the
  padding area across iovec boundaries, then calls iov_truncate() to
  set the logical frame size.

- vu_flush() computes the actual frame length (accounting for
  VIRTIO_NET_F_MRG_RXBUF multi-buffer frames) and passes the padded
  length to vu_queue_fill().

Callers in tcp_vu.c, udp_vu.c and vu_send_single() now use the new
vu_pad() in place of the old pad-then-truncate sequences, and drop the
MAX(..., ETH_ZLEN + VNET_HLEN) size calculations they passed to
vu_collect(). Centralising the padding here will also ease the move to
multi-iovec-per-element support, since there will be a single place to
update.

In vu_send_single(), fix padding, truncation and data copy to use the
requested frame size rather than the total available buffer space
returned by vu_collect(), which can be larger. Also add matching
padding and truncation, and an explicit size for vu_collect(), to the
DUP_ACK path in tcp_vu_send_flag().

Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
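Illustration for reviewers, not part of the patch: a minimal,
self-contained sketch of the cross-iovec zero-fill that the new
iov_memset() performs, and of the padded length that vu_flush() ends
up reporting to the guest. The ETH_ZLEN and VNET_HLEN values are
hard-coded assumptions here, and demo_iov_memset() open-codes the
offset skip instead of calling iov_skip_bytes():

/* pad-demo.c - illustration only, not part of the patch */
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

#define ETH_ZLEN	60	/* minimum Ethernet frame length */
#define VNET_HLEN	12	/* virtio-net header, mergeable buffers */
#define MIN(a, b)	((a) < (b) ? (a) : (b))

/* Zero @length bytes starting @offset bytes into @iov, crossing buffers */
static void demo_iov_memset(const struct iovec *iov, size_t iov_cnt,
			    size_t offset, int c, size_t length)
{
	size_t i;

	for (i = 0; i < iov_cnt && length; i++) {
		size_t n;

		if (offset >= iov[i].iov_len) {
			offset -= iov[i].iov_len;	/* skip this buffer */
			continue;
		}

		n = MIN(iov[i].iov_len - offset, length);
		memset((char *)iov[i].iov_base + offset, c, n);
		offset = 0;
		length -= n;
	}
}

int main(void)
{
	char a[40], b[40];
	struct iovec iov[2] = {
		{ .iov_base = a, .iov_len = sizeof(a) },
		{ .iov_base = b, .iov_len = sizeof(b) },
	};
	size_t size = 35;	/* vnet header + 23-byte frame: too short */
	size_t padded = size;

	if (size < ETH_ZLEN + VNET_HLEN) {
		/* zero bytes 35..71: 5 bytes at the end of a[], 32 in b[] */
		demo_iov_memset(iov, 2, size, 0, ETH_ZLEN + VNET_HLEN - size);
		padded = ETH_ZLEN + VNET_HLEN;
	}

	/* vu_pad() would now iov_truncate() to 35; vu_flush() reports 72 */
	printf("data bytes: %zu, length reported to the guest: %zu\n",
	       size, padded);
	return 0;
}

With size = 35 the fill starts five bytes before the end of the first
buffer and finishes 32 bytes into the second one, which is exactly the
cross-boundary case the old single-iov vu_pad() could not handle.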
 iov.c       | 26 ++++++++++++++++++
 iov.h       |  2 ++
 tcp_vu.c    | 23 ++++++----------
 udp_vu.c    | 10 +++----
 vu_common.c | 76 ++++++++++++++++++++++++++++++++++++-----------------
 vu_common.h |  2 +-
 6 files changed, 92 insertions(+), 47 deletions(-)

diff --git a/iov.c b/iov.c
index 31a3f5bc29e5..34d42c7a2264 100644
--- a/iov.c
+++ b/iov.c
@@ -169,6 +169,32 @@ size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size)
 	return i;
 }
 
+/**
+ * iov_memset() - Set bytes of an IO vector to a given value
+ * @iov:	IO vector
+ * @iov_cnt:	Number of elements in @iov
+ * @offset:	Byte offset in the iovec at which to start
+ * @c:		Byte value to fill with
+ * @length:	Number of bytes to set
+ * Will write less than @length bytes if it runs out of space in
+ * the iov
+ */
+void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset, int c,
+		size_t length)
+{
+	size_t i;
+
+	i = iov_skip_bytes(iov, iov_cnt, offset, &offset);
+
+	for ( ; i < iov_cnt && length; i++) {
+		size_t n = MIN(iov[i].iov_len - offset, length);
+
+		memset((char *)iov[i].iov_base + offset, c, n);
+		offset = 0;
+		length -= n;
+	}
+}
+
 /**
  * iov_tail_prune() - Remove any unneeded buffers from an IOV tail
  * @tail:	IO vector tail (modified)
diff --git a/iov.h b/iov.h
index b4e50b0fca5a..d295d05b3bab 100644
--- a/iov.h
+++ b/iov.h
@@ -30,6 +30,8 @@ size_t iov_to_buf(const struct iovec *iov, size_t iov_cnt,
 		  size_t offset, void *buf, size_t bytes);
 size_t iov_size(const struct iovec *iov, size_t iov_cnt);
 size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size);
+void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset, int c,
+		size_t length);
 
 /*
  * DOC: Theory of Operation, struct iov_tail
diff --git a/tcp_vu.c b/tcp_vu.c
index d470ab54bcea..80464d450fc9 100644
--- a/tcp_vu.c
+++ b/tcp_vu.c
@@ -72,12 +72,12 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	struct vu_dev *vdev = c->vdev;
 	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
 	struct vu_virtq_element flags_elem[2];
-	size_t optlen, hdrlen, l2len;
 	struct ipv6hdr *ip6h = NULL;
 	struct iphdr *ip4h = NULL;
 	struct iovec flags_iov[2];
 	struct tcp_syn_opts *opts;
 	struct iov_tail payload;
+	size_t optlen, hdrlen;
 	struct tcphdr *th;
 	struct ethhdr *eh;
 	uint32_t seq;
@@ -89,7 +89,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 
 	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
 			      &flags_iov[0], 1, NULL,
-			      MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
+			      hdrlen + sizeof(*opts), NULL);
 	if (elem_cnt != 1)
 		return -1;
 
@@ -131,7 +131,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 		return ret;
 	}
 
-	iov_truncate(&flags_iov[0], 1, hdrlen + optlen);
+	vu_pad(&flags_iov[0], 1, 0, hdrlen + optlen);
 
 	payload = IOV_TAIL(flags_elem[0].in_sg, 1, hdrlen);
 	if (flags & KEEPALIVE)
@@ -140,9 +140,6 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	tcp_fill_headers(c, conn, eh, ip4h, ip6h, th, &payload, NULL,
 			 seq, !*c->pcap);
 
-	l2len = optlen + hdrlen - VNET_HLEN;
-	vu_pad(&flags_elem[0].in_sg[0], l2len);
-
 	if (*c->pcap)
 		pcap_iov(&flags_elem[0].in_sg[0], 1, VNET_HLEN);
 	nb_ack = 1;
@@ -150,10 +147,11 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	if (flags & DUP_ACK) {
 		elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
 				      &flags_iov[1], 1, NULL,
-				      flags_elem[0].in_sg[0].iov_len, NULL);
+				      hdrlen + optlen, NULL);
 		if (elem_cnt == 1 &&
 		    flags_elem[1].in_sg[0].iov_len >=
 		    flags_elem[0].in_sg[0].iov_len) {
+			vu_pad(&flags_iov[1], 1, 0, hdrlen + optlen);
 			memcpy(flags_elem[1].in_sg[0].iov_base,
 			       flags_elem[0].in_sg[0].iov_base,
 			       flags_elem[0].in_sg[0].iov_len);
@@ -213,7 +211,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
 				ARRAY_SIZE(elem) - elem_cnt,
 				&iov_vu[DISCARD_IOV_NUM + iov_used],
 				VIRTQUEUE_MAX_SIZE - iov_used, &in_num,
-				MAX(MIN(mss, fillsize) + hdrlen, ETH_ZLEN + VNET_HLEN),
+				MIN(mss, fillsize) + hdrlen,
 				&frame_size);
 		if (cnt == 0)
 			break;
@@ -225,6 +223,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
 		/* reserve space for headers in iov */
 		iov = &elem[elem_cnt].in_sg[0];
 		ASSERT(iov->iov_len >= hdrlen);
+		iov->iov_base = (char *)iov->iov_base + hdrlen;
 		iov->iov_len -= hdrlen;
 
 		head[(*head_cnt)++] = elem_cnt;
@@ -249,8 +248,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
 	if (!peek_offset_cap)
 		ret -= already_sent;
 
-	/* adjust iov number and length of the last iov */
-	i = iov_truncate(&iov_vu[DISCARD_IOV_NUM], iov_used, ret);
+	i = vu_pad(&iov_vu[DISCARD_IOV_NUM], iov_used, hdrlen, ret);
 
 	/* adjust head count */
 	while (*head_cnt > 0 && head[*head_cnt - 1] >= i)
@@ -446,7 +444,6 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
 		size_t frame_size = iov_size(iov, buf_cnt);
 		bool push = i == head_cnt - 1;
 		ssize_t dlen;
-		size_t l2len;
 
 		ASSERT(frame_size >= hdrlen);
 
@@ -460,10 +457,6 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
 		tcp_vu_prepare(c, conn, iov, buf_cnt, &check,
 			       !*c->pcap, push);
 
-		/* Pad first/single buffer only, it's at least ETH_ZLEN long */
-		l2len = dlen + hdrlen - VNET_HLEN;
-		vu_pad(iov, l2len);
-
 		if (*c->pcap)
 			pcap_iov(iov, buf_cnt, VNET_HLEN);
 
diff --git a/udp_vu.c b/udp_vu.c
index acc18d375a0f..5e030effa703 100644
--- a/udp_vu.c
+++ b/udp_vu.c
@@ -73,8 +73,7 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
 	const struct vu_dev *vdev = c->vdev;
 	int elem_cnt, elem_used, iov_used;
 	struct msghdr msg = { 0 };
-	size_t hdrlen, l2len;
-	size_t iov_cnt;
+	size_t iov_cnt, hdrlen;
 
 	ASSERT(!c->no_udp);
 
@@ -100,6 +99,7 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
 
 	/* reserve space for the headers */
 	ASSERT(iov_vu[0].iov_len >= MAX(hdrlen, ETH_ZLEN + VNET_HLEN));
+	iov_vu[0].iov_base = (char *)iov_vu[0].iov_base + hdrlen;
 	iov_vu[0].iov_len -= hdrlen;
 
@@ -117,13 +117,9 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
 	iov_vu[0].iov_base = (char *)iov_vu[0].iov_base - hdrlen;
 	iov_vu[0].iov_len += hdrlen;
 
-	iov_used = iov_truncate(iov_vu, iov_cnt, *dlen + hdrlen);
+	iov_used = vu_pad(iov_vu, iov_cnt, 0, *dlen + hdrlen);
 	elem_used = iov_used;	/* one iovec per element */
 
-	/* pad frame to 60 bytes: first buffer is at least ETH_ZLEN long */
-	l2len = *dlen + hdrlen - VNET_HLEN;
-	vu_pad(&iov_vu[0], l2len);
-
 	vu_set_vnethdr(iov_vu[0].iov_base, elem_used);
 
 	/* release unused buffers */
diff --git a/vu_common.c b/vu_common.c
index d5fca9e52004..4ba407f6faf7 100644
--- a/vu_common.c
+++ b/vu_common.c
@@ -74,6 +74,7 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
 	size_t current_iov = 0;
 	int elem_cnt = 0;
 
+	size = MAX(size, ETH_ZLEN + VNET_HLEN); /* Ethernet minimum size */
 	while (current_size < size && elem_cnt < max_elem &&
 	       current_iov < max_in_sg) {
 		int ret;
@@ -113,6 +114,25 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
 	return elem_cnt;
 }
 
+/**
+ * vu_pad() - Pad short frames to minimum Ethernet length and truncate iovec
+ * @iov:	Pointer to iovec array
+ * @cnt:	Number of entries in @iov
+ * @skipped:	Bytes already accounted for in the frame but not in @iov
+ * @size:	Data length in @iov
+ *
+ * Return: number of iovec entries after truncation
+ */
+size_t vu_pad(struct iovec *iov, size_t cnt, size_t skipped, size_t size)
+{
+	if (skipped + size < ETH_ZLEN + VNET_HLEN) {
+		iov_memset(iov, cnt, size, 0,
+			   ETH_ZLEN + VNET_HLEN - (skipped + size));
+	}
+
+	return iov_truncate(iov, cnt, size);
+}
+
 /**
  * vu_set_vnethdr() - set virtio-net headers
  * @vnethdr:	Address of the header to set
@@ -137,12 +157,32 @@
 void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
 	      struct vu_virtq_element *elem, int elem_cnt)
 {
-	int i;
-
-	for (i = 0; i < elem_cnt; i++) {
-		size_t elem_size = iov_size(elem[i].in_sg, elem[i].in_num);
-
-		vu_queue_fill(vdev, vq, &elem[i], elem_size, i);
+	int i, j, num_elements;
+
+	for (i = 0; i < elem_cnt; i += num_elements) {
+		const struct virtio_net_hdr_mrg_rxbuf *vnethdr;
+		size_t len, padding, elem_size;
+
+		vnethdr = elem[i].in_sg[0].iov_base;
+		num_elements = le16toh(vnethdr->num_buffers);
+
+		len = 0;
+		for (j = 0; j < num_elements - 1; j++) {
+			elem_size = iov_size(elem[i + j].in_sg,
+					     elem[i + j].in_num);
+			vu_queue_fill(vdev, vq, &elem[i + j],
+				      elem_size, i + j);
+			len += elem_size;
+		}
+		/* pad the last element to have an Ethernet minimum size */
+		elem_size = iov_size(elem[i + j].in_sg, elem[i + j].in_num);
+		if (ETH_ZLEN + VNET_HLEN > len + elem_size)
+			padding = ETH_ZLEN + VNET_HLEN - (len + elem_size);
+		else
+			padding = 0;
+
+		vu_queue_fill(vdev, vq, &elem[i + j], elem_size + padding,
+			      i + j);
 	}
 
 	vu_queue_flush(vdev, vq, elem_cnt);
@@ -261,38 +301,26 @@
 		goto err;
 	}
 
+	in_num = vu_pad(in_sg, in_num, 0, size);
+
 	vu_set_vnethdr(in_sg[0].iov_base, elem_cnt);
 
-	total -= VNET_HLEN;
+	size -= VNET_HLEN;
 
 	/* copy data from the buffer to the iovec */
-	iov_from_buf(in_sg, in_num, VNET_HLEN, buf, total);
+	iov_from_buf(in_sg, in_num, VNET_HLEN, buf, size);
 
 	if (*c->pcap)
 		pcap_iov(in_sg, in_num, VNET_HLEN);
 
 	vu_flush(vdev, vq, elem, elem_cnt);
 
-	trace("vhost-user sent %zu", total);
+	trace("vhost-user sent %zu", size);
 
-	return total;
+	return size;
 err:
 	for (i = 0; i < elem_cnt; i++)
 		vu_queue_detach_element(vq);
 
 	return -1;
 }
-
-/**
- * vu_pad() - Pad 802.3 frame to minimum length (60 bytes) if needed
- * @iov:	Buffer in iovec array where end of 802.3 frame is stored
- * @l2len:	Layer-2 length already filled in frame
- */
-void vu_pad(struct iovec *iov, size_t l2len)
-{
-	if (l2len >= ETH_ZLEN)
-		return;
-
-	memset((char *)iov->iov_base + iov->iov_len, 0, ETH_ZLEN - l2len);
-	iov->iov_len += ETH_ZLEN - l2len;
-}
diff --git a/vu_common.h b/vu_common.h
index 6c31630e8712..73cc6ea9949b 100644
--- a/vu_common.h
+++ b/vu_common.h
@@ -39,12 +39,12 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
 	       struct vu_virtq_element *elem, int max_elem,
 	       struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
 	       size_t size, size_t *collected);
+size_t vu_pad(struct iovec *iov, size_t cnt, size_t skipped, size_t size);
 void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr, int num_buffers);
 void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
 	      struct vu_virtq_element *elem, int elem_cnt);
 void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
 		const struct timespec *now);
 int vu_send_single(const struct ctx *c, const void *buf, size_t size);
-void vu_pad(struct iovec *iov, size_t l2len);
 
 #endif /* VU_COMMON_H */
-- 
2.53.0