From: Laurent Vivier <lvivier@redhat.com>
To: Stefano Brivio <sbrivio@redhat.com>
Cc: passt-dev@passt.top
Subject: Re: [PATCH 10/10] vhost-user: Centralise Ethernet frame padding in vu_collect() and vu_pad()
Date: Fri, 3 Apr 2026 12:25:37 +0200	[thread overview]
Message-ID: <385c54b8-4bc7-4a8f-af21-94696eaed75d@redhat.com> (raw)
In-Reply-To: <20260403082052.3cfebb68@elisabeth>

On 4/3/26 08:20, Stefano Brivio wrote:
> On Wed,  1 Apr 2026 21:18:26 +0200
> Laurent Vivier <lvivier@redhat.com> wrote:
> 
>> The previous per-protocol padding done by vu_pad() in tcp_vu.c and
>> udp_vu.c was only correct for single-buffer frames: it assumed the
>> padding area always fell within the first iov, writing past its end
>> with a plain memset().
>>
>> It also required each caller to compute MAX(..., ETH_ZLEN + VNET_HLEN)
>> for vu_collect() and to call vu_pad() at the right point, duplicating
>> the minimum-size logic across protocols.
>>
>> Move the Ethernet minimum size enforcement into vu_collect() itself, so
>> that enough buffer space is always reserved for padding regardless of
>> the requested frame size.
>>
>> Rewrite vu_pad() to take a full iovec array and use iov_memset(),
>> making it safe for multi-buffer (mergeable rx buffer) frames.
>>
>> In tcp_vu_sock_recv(), replace iov_truncate() with iov_skip_bytes():
>> now that all consumers receive explicit data lengths, truncating the
>> iovecs is no longer needed.  In tcp_vu_data_from_sock(), cap each
>> frame's data length against the remaining bytes actually received from
>> the socket, so that the last partial frame gets correct headers and
>> sequence number advancement.
>>
>> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
>> ---
>>   iov.c       |  1 -
>>   tcp_vu.c    | 29 ++++++++++++++---------------
>>   udp_vu.c    | 14 ++++++++------
>>   vu_common.c | 32 +++++++++++++++-----------------
>>   vu_common.h |  2 +-
>>   5 files changed, 38 insertions(+), 40 deletions(-)
>>
>> diff --git a/iov.c b/iov.c
>> index 83b683f3976a..2289b425529e 100644
>> --- a/iov.c
>> +++ b/iov.c
>> @@ -180,7 +180,6 @@ size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size)
>>    * 		Will write less than @length bytes if it runs out of space in
>>    * 		the iov
>>    */
>> -/* cppcheck-suppress unusedFunction */
>>   void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset, int c,
>>   		size_t length)
>>   {
>> diff --git a/tcp_vu.c b/tcp_vu.c
>> index ae79a6d856b0..cae6926334b9 100644
>> --- a/tcp_vu.c
>> +++ b/tcp_vu.c
>> @@ -72,12 +72,12 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>>   	struct vu_dev *vdev = c->vdev;
>>   	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
>>   	struct vu_virtq_element flags_elem[2];
>> -	size_t optlen, hdrlen, l2len;
>>   	struct ipv6hdr *ip6h = NULL;
>>   	struct iphdr *ip4h = NULL;
>>   	struct iovec flags_iov[2];
>>   	struct tcp_syn_opts *opts;
>>   	struct iov_tail payload;
>> +	size_t optlen, hdrlen;
>>   	struct tcphdr *th;
>>   	struct ethhdr *eh;
>>   	uint32_t seq;
>> @@ -88,7 +88,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>>   
>>   	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
>>   			      &flags_iov[0], 1, NULL,
>> -			      MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
>> +			      hdrlen + sizeof(*opts), NULL);
>>   	if (elem_cnt != 1)
>>   		return -1;
>>   
>> @@ -128,7 +128,6 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>>   		return ret;
>>   	}
>>   
>> -	iov_truncate(&flags_iov[0], 1, hdrlen + optlen);
>>   	payload = IOV_TAIL(flags_elem[0].in_sg, 1, hdrlen);
>>   
>>   	if (flags & KEEPALIVE)
>> @@ -137,9 +136,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>>   	tcp_fill_headers(c, conn, eh, ip4h, ip6h, th, &payload,
>>   			 optlen, NULL, seq, !*c->pcap);
>>   
>> -	l2len = optlen + hdrlen - VNET_HLEN;
>> -	vu_pad(&flags_elem[0].in_sg[0], l2len);
>> -
>> +	vu_pad(flags_elem[0].in_sg, 1, hdrlen + optlen);
>>   	vu_flush(vdev, vq, flags_elem, 1, hdrlen + optlen);
>>   
>>   	if (*c->pcap)
>> @@ -149,7 +146,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>>   	if (flags & DUP_ACK) {
>>   		elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
>>   				      &flags_iov[1], 1, NULL,
>> -				      flags_elem[0].in_sg[0].iov_len, NULL);
>> +				      hdrlen + optlen, NULL);
>>   		if (elem_cnt == 1 &&
>>   		    flags_elem[1].in_sg[0].iov_len >=
>>   		    flags_elem[0].in_sg[0].iov_len) {
>> @@ -213,7 +210,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>>   				 ARRAY_SIZE(elem) - elem_cnt,
>>   				 &iov_vu[DISCARD_IOV_NUM + iov_used],
>>   				 VIRTQUEUE_MAX_SIZE - iov_used, &in_total,
>> -				 MAX(MIN(mss, fillsize) + hdrlen, ETH_ZLEN + VNET_HLEN),
>> +				 MIN(mss, fillsize) + hdrlen,
>>   				 &frame_size);
>>   		if (cnt == 0)
>>   			break;
>> @@ -249,8 +246,11 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>>   	if (!peek_offset_cap)
>>   		ret -= already_sent;
>>   
>> -	/* adjust iov number and length of the last iov */
>> -	i = iov_truncate(&iov_vu[DISCARD_IOV_NUM], iov_used, ret);
>> +	i = iov_skip_bytes(&iov_vu[DISCARD_IOV_NUM], iov_used,
>> +			     MAX(hdrlen + ret, VNET_HLEN + ETH_ZLEN),
>> +			     NULL);
> 
> Nit: this should be aligned like this:
> 
> 	i = iov_skip_bytes(&iov_vu[DISCARD_IOV_NUM], iov_used,
> 			   MAX(hdrlen + ret, VNET_HLEN + ETH_ZLEN),
> 			   NULL);
> 
>> +	if ((size_t)i < iov_used)
>> +		i++;
> 
> I'm a bit lost here. I see that this increment restores the
> iov_truncate() convention of returning the number of iov items (which

iov_truncate() truncated the iovec array (reducing the count and the iov_len of the last 
iovec) to fit the actual size of the data.
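
For reference, that old behaviour boils down to something like this (a rough 
sketch of the idea only, not the actual passt implementation):

#include <stddef.h>
#include <sys/uio.h>

size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size)
{
	size_t i;

	/* Walk the array until @size bytes are covered, clip the iov_len of
	 * the last entry that is kept, and return the number of entries
	 * still in use.
	 */
	for (i = 0; i < iov_cnt && size; i++) {
		if (iov[i].iov_len >= size) {
			iov[i].iov_len = size;
			return i + 1;
		}
		size -= iov[i].iov_len;
	}

	return i;
}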

Here we are counting the number of elements: we have collected more elements than needed 
to store the data, so we need to know how many of them we actually use, in order to give 
the unused ones back to the virtqueue.
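
In other words, all that is needed here is an element count, roughly like this 
(sketch only; iov_entries_used() is a hypothetical helper for illustration, not 
code from the patch):

#include <stddef.h>
#include <sys/uio.h>

/* Count how many iovec entries are needed to hold @len bytes of data; the
 * remaining (iov_cnt - used) entries can be handed back to the virtqueue.
 */
static size_t iov_entries_used(const struct iovec *iov, size_t iov_cnt,
			       size_t len)
{
	size_t i;

	for (i = 0; i < iov_cnt && len; i++) {
		size_t n = iov[i].iov_len < len ? iov[i].iov_len : len;

		len -= n;
	}

	return i;	/* never greater than iov_cnt */
}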

Again, the confusing point is that we have the same number of elements as iovecs. This is 
addressed in the following series.

> we need later), but... what happens if we have i >= iov_used (even
> though my assumption is that it should never happen)? We're throwing
> away data?

i cannot be greater than iov_used. If i == iov_used, it means we need all the elements.

Thanks,
Laurent


Thread overview: 20+ messages
2026-04-01 19:18 [PATCH 00/10] vhost-user: Preparatory series for multiple iovec entries per virtqueue element Laurent Vivier
2026-04-01 19:18 ` [PATCH 01/10] iov: Introduce iov_memset() Laurent Vivier
2026-04-03 12:35   ` David Gibson
2026-04-01 19:18 ` [PATCH 02/10] iov: Add iov_memcopy() to copy data between iovec arrays Laurent Vivier
2026-04-03  6:20   ` Stefano Brivio
2026-04-01 19:18 ` [PATCH 03/10] vu_common: Move vnethdr setup into vu_flush() Laurent Vivier
2026-04-03  6:20   ` Stefano Brivio
2026-04-03 10:16     ` Laurent Vivier
2026-04-01 19:18 ` [PATCH 04/10] udp_vu: Move virtqueue management from udp_vu_sock_recv() to its caller Laurent Vivier
2026-04-01 19:18 ` [PATCH 05/10] udp_vu: Pass iov explicitly to helpers instead of using file-scoped array Laurent Vivier
2026-04-01 19:18 ` [PATCH 06/10] checksum: Pass explicit L4 length to checksum functions Laurent Vivier
2026-04-01 19:18 ` [PATCH 07/10] pcap: Pass explicit L2 length to pcap_iov() Laurent Vivier
2026-04-03  6:20   ` Stefano Brivio
2026-04-03 10:19     ` Laurent Vivier
2026-04-01 19:18 ` [PATCH 08/10] vu_common: Pass explicit frame length to vu_flush() Laurent Vivier
2026-04-03  6:20   ` Stefano Brivio
2026-04-01 19:18 ` [PATCH 09/10] tcp: Pass explicit data length to tcp_fill_headers() Laurent Vivier
2026-04-01 19:18 ` [PATCH 10/10] vhost-user: Centralise Ethernet frame padding in vu_collect() and vu_pad() Laurent Vivier
2026-04-03  6:20   ` Stefano Brivio
2026-04-03 10:25     ` Laurent Vivier [this message]
