public inbox for passt-dev@passt.top
From: David Gibson <david@gibson.dropbear.id.au>
To: Laurent Vivier <lvivier@redhat.com>
Cc: passt-dev@passt.top
Subject: Re: [PATCH v2 3/3] vu_common: Move iovec management into vu_collect()
Date: Tue, 17 Mar 2026 13:40:16 +1100	[thread overview]
Message-ID: <abi_EIwrlJ9jFpJr@zatzit> (raw)
In-Reply-To: <20260313182618.4157365-4-lvivier@redhat.com>


On Fri, Mar 13, 2026 at 07:26:18PM +0100, Laurent Vivier wrote:
> Previously, callers had to pre-initialize virtqueue elements with iovec
> entries using vu_set_element() or vu_init_elem() before calling
> vu_collect().  This meant each element owned a fixed, pre-assigned iovec
> slot.
> 
> Move the iovec array into vu_collect() as explicit parameters (in_sg,
> max_in_sg, and in_num), letting it pass the remaining iovec capacity
> directly to vu_queue_pop().  A running current_iov counter tracks
> consumed entries across elements, so multiple elements share a single
> iovec pool.  The optional in_num output parameter reports how many iovec
> entries were consumed, allowing callers to track usage across multiple
> vu_collect() calls.
> 
> This removes vu_set_element() and vu_init_elem() which are no longer
> needed, and is a prerequisite for multi-buffer support where a single
> virtqueue element can use more than one iovec entry.  For now, callers
> assert the current single-iovec-per-element invariant until they are
> updated to handle multiple iovecs.
> 
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

A couple of thoughts on possible polish below.

[snip]
>  /**
>   * vu_collect() - collect virtio buffers from a given virtqueue
>   * @vdev:		vhost-user device
>   * @vq:			virtqueue to collect from
> - * @elem:		Array of virtqueue element
> - * 			each element must be initialized with one iovec entry
> - * 			in the in_sg array.
> + * @elem:		Array of @max_elem virtqueue elements
>   * @max_elem:		Number of virtqueue elements in the array
> + * @in_sg:		Incoming iovec array for device-writable descriptors
> + * @max_in_sg:		Maximum number of entries in @in_sg
> + * @in_num:		Number of collected entries from @in_sg (output)
>   * @size:		Maximum size of the data in the frame
>   * @collected:		Collected buffer length, up to @size, set on return
>   *
> @@ -80,20 +67,21 @@ void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov, int elem_cnt
>   */
>  int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>  	       struct vu_virtq_element *elem, int max_elem,
> +	       struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
>  	       size_t size, size_t *collected)
>  {
>  	size_t current_size = 0;
> +	size_t current_iov = 0;
>  	int elem_cnt = 0;
>  
> -	while (current_size < size && elem_cnt < max_elem) {
> -		struct iovec *iov;
> +	while (current_size < size && elem_cnt < max_elem &&
> +	       current_iov < max_in_sg) {
>  		int ret;
>  
>  		ret = vu_queue_pop(vdev, vq, &elem[elem_cnt],
> -				   elem[elem_cnt].in_sg,
> -				   elem[elem_cnt].in_num,
> -				   elem[elem_cnt].out_sg,
> -				   elem[elem_cnt].out_num);
> +				   &in_sg[current_iov],
> +				   max_in_sg - current_iov,
> +				   NULL, 0);
>  		if (ret < 0)
>  			break;
>  
> @@ -103,18 +91,22 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>  			break;
>  		}
>  
> -		iov = &elem[elem_cnt].in_sg[0];
> -
> -		if (iov->iov_len > size - current_size)
> -			iov->iov_len = size - current_size;
> +		elem[elem_cnt].in_num = iov_truncate(elem[elem_cnt].in_sg,
> +						     elem[elem_cnt].in_num,
> +						     size - current_size);

Will elem[].in_num always end up with the same value as the @in_num
parameter?  If so, do we need the explicit parameter?

>  
> -		current_size += iov->iov_len;
> +		current_size += iov_size(elem[elem_cnt].in_sg,
> +					 elem[elem_cnt].in_num);
> +		current_iov += elem[elem_cnt].in_num;
>  		elem_cnt++;
>  
>  		if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
>  			break;
>  	}
>  
> +	if (in_num)
> +		*in_num = current_iov;
> +
>  	if (collected)
>  		*collected = current_size;
>  
> @@ -147,8 +139,11 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
>  {
>  	int i;
>  
> -	for (i = 0; i < elem_cnt; i++)
> -		vu_queue_fill(vdev, vq, &elem[i], elem[i].in_sg[0].iov_len, i);
> +	for (i = 0; i < elem_cnt; i++) {
> +		size_t elem_size = iov_size(elem[i].in_sg, elem[i].in_num);

IIUC, the elem structure itself isn't shared with vhost, so we can
alter it.  Would it make sense to cache the number of bytes allocated
to the element there, to avoid repeated calls to iov_size()?

> +
> +		vu_queue_fill(vdev, vq, &elem[i], elem_size, i);
> +	}
>  
>  	vu_queue_flush(vdev, vq, elem_cnt);
>  	vu_queue_notify(vdev, vq);
> @@ -246,7 +241,7 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
>  	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
>  	struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
>  	struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
> -	size_t total;
> +	size_t total, in_num;
>  	int elem_cnt;
>  	int i;
>  
> @@ -257,11 +252,10 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
>  		return -1;
>  	}
>  
> -	vu_init_elem(elem, in_sg, VIRTQUEUE_MAX_SIZE);
> -
>  	size += VNET_HLEN;
> -	elem_cnt = vu_collect(vdev, vq, elem, VIRTQUEUE_MAX_SIZE, size, &total);
> -	if (total < size) {
> +	elem_cnt = vu_collect(vdev, vq, elem, ARRAY_SIZE(elem), in_sg,
> +			      ARRAY_SIZE(in_sg), &in_num, size, &total);
> +	if (elem_cnt == 0 || total < size) {
>  		debug("vu_send_single: no space to send the data "
>  		      "elem_cnt %d size %zd", elem_cnt, total);
>  		goto err;
> @@ -272,10 +266,10 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
>  	total -= VNET_HLEN;
>  
>  	/* copy data from the buffer to the iovec */
> -	iov_from_buf(in_sg, elem_cnt, VNET_HLEN, buf, total);
> +	iov_from_buf(in_sg, in_num, VNET_HLEN, buf, total);
>  
>  	if (*c->pcap)
> -		pcap_iov(in_sg, elem_cnt, VNET_HLEN);
> +		pcap_iov(in_sg, in_num, VNET_HLEN);
>  
>  	vu_flush(vdev, vq, elem, elem_cnt);
>  
> diff --git a/vu_common.h b/vu_common.h
> index 865d9771fa89..6c31630e8712 100644
> --- a/vu_common.h
> +++ b/vu_common.h
> @@ -35,26 +35,10 @@ static inline void *vu_payloadv6(void *base)
>  	return (struct ipv6hdr *)vu_ip(base) + 1;
>  }
>  
> -/**
> - * vu_set_element() - Initialize a vu_virtq_element
> - * @elem:	Element to initialize
> - * @out_sg:	One out iovec entry to set in elem
> - * @in_sg:	One in iovec entry to set in elem
> - */
> -static inline void vu_set_element(struct vu_virtq_element *elem,
> -				  struct iovec *out_sg, struct iovec *in_sg)
> -{
> -	elem->out_num = !!out_sg;
> -	elem->out_sg = out_sg;
> -	elem->in_num = !!in_sg;
> -	elem->in_sg = in_sg;
> -}
> -
> -void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov,
> -		  int elem_cnt);
>  int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
> -	       struct vu_virtq_element *elem, int max_elem, size_t size,
> -	       size_t *collected);
> +	       struct vu_virtq_element *elem, int max_elem,
> +	       struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
> +	       size_t size, size_t *collected);
>  void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr, int num_buffers);
>  void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
>  	      struct vu_virtq_element *elem, int elem_cnt);
> -- 
> 2.53.0
> 

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson



Thread overview: 13+ messages
2026-03-13 18:26 [PATCH v2 0/3] Decouple iovec management from virtqueue elements Laurent Vivier
2026-03-13 18:26 ` [PATCH v2 1/3] virtio: Pass iovec arrays as separate parameters to vu_queue_pop() Laurent Vivier
2026-03-16  8:25   ` David Gibson
2026-03-13 18:26 ` [PATCH v2 2/3] vu_handle_tx: Pass actual remaining out_sg capacity " Laurent Vivier
2026-03-16  9:15   ` David Gibson
2026-03-17  0:02   ` Stefano Brivio
2026-03-13 18:26 ` [PATCH v2 3/3] vu_common: Move iovec management into vu_collect() Laurent Vivier
2026-03-17  2:40   ` David Gibson [this message]
2026-03-17  7:25     ` Laurent Vivier
2026-03-17 15:23       ` Stefano Brivio
2026-03-17 15:23   ` Stefano Brivio
2026-03-17 16:18     ` Laurent Vivier
2026-03-17 16:21       ` Stefano Brivio
