On Tue, Mar 17, 2026 at 08:25:49AM +0100, Laurent Vivier wrote:
> On 3/17/26 03:40, David Gibson wrote:
> > On Fri, Mar 13, 2026 at 07:26:18PM +0100, Laurent Vivier wrote:
> > > Previously, callers had to pre-initialize virtqueue elements with iovec
> > > entries using vu_set_element() or vu_init_elem() before calling
> > > vu_collect(). This meant each element owned a fixed, pre-assigned iovec
> > > slot.
> > > 
> > > Move the iovec array into vu_collect() as explicit parameters (in_sg,
> > > max_in_sg, and in_num), letting it pass the remaining iovec capacity
> > > directly to vu_queue_pop(). A running current_iov counter tracks
> > > consumed entries across elements, so multiple elements share a single
> > > iovec pool. The optional in_num output parameter reports how many iovec
> > > entries were consumed, allowing callers to track usage across multiple
> > > vu_collect() calls.
> > > 
> > > This removes vu_set_element() and vu_init_elem() which are no longer
> > > needed, and is a prerequisite for multi-buffer support where a single
> > > virtqueue element can use more than one iovec entry. For now, callers
> > > assert the current single-iovec-per-element invariant until they are
> > > updated to handle multiple iovecs.
> > > 
> > > Signed-off-by: Laurent Vivier
> > 
> > Reviewed-by: David Gibson
> > 
> > Couple of thoughts on possible polish below.
> > 
> > [snip]
> > > /**
> > >  * vu_collect() - collect virtio buffers from a given virtqueue
> > >  * @vdev:	vhost-user device
> > >  * @vq:		virtqueue to collect from
> > > - * @elem:	Array of virtqueue element
> > > - *		each element must be initialized with one iovec entry
> > > - *		in the in_sg array.
> > > + * @elem:	Array of @max_elem virtqueue elements
> > >  * @max_elem:	Number of virtqueue elements in the array
> > > + * @in_sg:	Incoming iovec array for device-writable descriptors
> > > + * @max_in_sg:	Maximum number of entries in @in_sg
> > > + * @in_num:	Number of collected entries from @in_sg (output)
> > >  * @size:	Maximum size of the data in the frame
> > >  * @collected:	Collected buffer length, up to @size, set on return
> > >  *
> > > @@ -80,20 +67,21 @@ void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov, int elem_cnt
> > >  */
> > > int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
> > > 	       struct vu_virtq_element *elem, int max_elem,
> > > +	       struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
> > > 	       size_t size, size_t *collected)
> > > {
> > > 	size_t current_size = 0;
> > > +	size_t current_iov = 0;
> > > 	int elem_cnt = 0;
> > > 
> > > -	while (current_size < size && elem_cnt < max_elem) {
> > > -		struct iovec *iov;
> > > +	while (current_size < size && elem_cnt < max_elem &&
> > > +	       current_iov < max_in_sg) {
> > > 		int ret;
> > > 
> > > 		ret = vu_queue_pop(vdev, vq, &elem[elem_cnt],
> > > -				   elem[elem_cnt].in_sg,
> > > -				   elem[elem_cnt].in_num,
> > > -				   elem[elem_cnt].out_sg,
> > > -				   elem[elem_cnt].out_num);
> > > +				   &in_sg[current_iov],
> > > +				   max_in_sg - current_iov,
> > > +				   NULL, 0);
> > > 		if (ret < 0)
> > > 			break;
> > > @@ -103,18 +91,22 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
> > > 			break;
> > > 		}
> > > 
> > > -		iov = &elem[elem_cnt].in_sg[0];
> > > -
> > > -		if (iov->iov_len > size - current_size)
> > > -			iov->iov_len = size - current_size;
> > > +		elem[elem_cnt].in_num = iov_truncate(elem[elem_cnt].in_sg,
> > > +						     elem[elem_cnt].in_num,
> > > +						     size - current_size);
> > 
> > Will elem[].in_num always end up with the same value as the @in_num
> > parameter?  If so, do we need the explicit parameter?
> 
> @in_num parameter of vu_collect()?
> 
> @in_num is the sum of all elem[].in_num; it can be computed by the calling
> function from elem, but it is simpler to return it, as we need to compute
> it in the loop.

Oh, right, sorry.  I'm getting confused again by the two-level hierarchy -
this gathers multiple elems as well as multiple iovs.

> > 
> > > -		current_size += iov->iov_len;
> > > +		current_size += iov_size(elem[elem_cnt].in_sg,
> > > +					 elem[elem_cnt].in_num);
> > > +		current_iov += elem[elem_cnt].in_num;
> > > 		elem_cnt++;
> > > 
> > > 		if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
> > > 			break;
> > > 	}
> > > 
> > > +	if (in_num)
> > > +		*in_num = current_iov;
> > > +
> > > 	if (collected)
> > > 		*collected = current_size;
> > > 
> > > @@ -147,8 +139,11 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
> > > {
> > > 	int i;
> > > 
> > > -	for (i = 0; i < elem_cnt; i++)
> > > -		vu_queue_fill(vdev, vq, &elem[i], elem[i].in_sg[0].iov_len, i);
> > > +	for (i = 0; i < elem_cnt; i++) {
> > > +		size_t elem_size = iov_size(elem[i].in_sg, elem[i].in_num);
> > 
> > IIUC, the elem structure itself isn't shared with vhost, so we can
> > alter it.  Would it make sense to cache the number of bytes allocated
> > to the element there, to avoid repeated calls to iov_size()?
> 
> It's possible. But I think it could be complicated to keep in sync the
> actual size of the iovec array and the value we store in elem, as we alter
> the array at several points.

Ok.  We do expect the iovs to be pretty short in practice, so
iov_size() shouldn't be too expensive.

-- 
David Gibson (he or they)		| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au		| minimalist, thank you, not the other way
					| around.
http://www.ozlabs.org/~dgibson