From: David Gibson <david@gibson.dropbear.id.au>
To: Laurent Vivier <lvivier@redhat.com>
Cc: passt-dev@passt.top
Subject: Re: [PATCH v2 3/3] vu_common: Move iovec management into vu_collect()
Date: Wed, 18 Mar 2026 12:15:30 +1100
Message-ID: <abn8si7d3hcg9204@zatzit>
In-Reply-To: <b0331657-8a2b-4c9c-ae88-89590fd7a56a@redhat.com>
On Tue, Mar 17, 2026 at 08:25:49AM +0100, Laurent Vivier wrote:
> On 3/17/26 03:40, David Gibson wrote:
> > On Fri, Mar 13, 2026 at 07:26:18PM +0100, Laurent Vivier wrote:
> > > Previously, callers had to pre-initialize virtqueue elements with iovec
> > > entries using vu_set_element() or vu_init_elem() before calling
> > > vu_collect(). This meant each element owned a fixed, pre-assigned iovec
> > > slot.
> > >
> > > Move the iovec array into vu_collect() as explicit parameters (in_sg,
> > > max_in_sg, and in_num), letting it pass the remaining iovec capacity
> > > directly to vu_queue_pop(). A running current_iov counter tracks
> > > consumed entries across elements, so multiple elements share a single
> > > iovec pool. The optional in_num output parameter reports how many iovec
> > > entries were consumed, allowing callers to track usage across multiple
> > > vu_collect() calls.
> > >
> > > This removes vu_set_element() and vu_init_elem(), which are no longer
> > > needed, and is a prerequisite for multi-buffer support where a single
> > > virtqueue element can use more than one iovec entry. For now, callers
> > > assert the current single-iovec-per-element invariant until they are
> > > updated to handle multiple iovecs.
> > >
> > > Signed-off-by: Laurent Vivier <lvivier@redhat.com>
> >
> > Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> >
> > Couple of thoughts on possible polish below.
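
Just to check I'm reading the new calling convention correctly, a rough
caller-side sketch; rx_elem, rx_iov, frame_size and the surrounding
context are placeholder names of mine, only the vu_collect()/vu_flush()
signatures are taken from the patch:

	struct vu_virtq_element rx_elem[VIRTQUEUE_MAX_SIZE];
	struct iovec rx_iov[VIRTQUEUE_MAX_SIZE];
	size_t in_num, collected;
	int elem_cnt;

	/* All elements popped here share the single rx_iov pool */
	elem_cnt = vu_collect(vdev, vq, rx_elem, VIRTQUEUE_MAX_SIZE,
			      rx_iov, VIRTQUEUE_MAX_SIZE, &in_num,
			      frame_size, &collected);
	if (elem_cnt <= 0)
		return;

	/* rx_iov[0..in_num) now covers 'collected' bytes spread over
	 * 'elem_cnt' elements; write the frame into it, then hand the
	 * elements back to the guest:
	 */
	vu_flush(vdev, vq, rx_elem, elem_cnt);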
> >
> > [snip]
> > > /**
> > > * vu_collect() - collect virtio buffers from a given virtqueue
> > > * @vdev: vhost-user device
> > > * @vq: virtqueue to collect from
> > > - * @elem: Array of virtqueue element
> > > - * each element must be initialized with one iovec entry
> > > - * in the in_sg array.
> > > + * @elem: Array of @max_elem virtqueue elements
> > > * @max_elem: Number of virtqueue elements in the array
> > > + * @in_sg: Incoming iovec array for device-writable descriptors
> > > + * @max_in_sg: Maximum number of entries in @in_sg
> > > + * @in_num: Number of collected entries from @in_sg (output)
> > > * @size: Maximum size of the data in the frame
> > > * @collected: Collected buffer length, up to @size, set on return
> > > *
> > > @@ -80,20 +67,21 @@ void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov, int elem_cnt
> > > */
> > > int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
> > > struct vu_virtq_element *elem, int max_elem,
> > > + struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
> > > size_t size, size_t *collected)
> > > {
> > > size_t current_size = 0;
> > > + size_t current_iov = 0;
> > > int elem_cnt = 0;
> > > - while (current_size < size && elem_cnt < max_elem) {
> > > - struct iovec *iov;
> > > + while (current_size < size && elem_cnt < max_elem &&
> > > + current_iov < max_in_sg) {
> > > int ret;
> > > ret = vu_queue_pop(vdev, vq, &elem[elem_cnt],
> > > - elem[elem_cnt].in_sg,
> > > - elem[elem_cnt].in_num,
> > > - elem[elem_cnt].out_sg,
> > > - elem[elem_cnt].out_num);
> > > + &in_sg[current_iov],
> > > + max_in_sg - current_iov,
> > > + NULL, 0);
> > > if (ret < 0)
> > > break;
> > > @@ -103,18 +91,22 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
> > > break;
> > > }
> > > - iov = &elem[elem_cnt].in_sg[0];
> > > -
> > > - if (iov->iov_len > size - current_size)
> > > - iov->iov_len = size - current_size;
> > > + elem[elem_cnt].in_num = iov_truncate(elem[elem_cnt].in_sg,
> > > + elem[elem_cnt].in_num,
> > > + size - current_size);
> >
> > Will elem[].in_num always end up with the same value as the @in_num
> > parameter? If so, do we need the explicit parameter?
>
> @in_num parameter of vu_collect()?
>
> @in_num is the sum of all elem[].in_num; the caller could compute it from
> elem, but it is simpler to return it, since we already compute it in the
> loop.
Oh, right, sorry. I'm getting confused again by the two-level
hierarchy - this gathers multiple elems as well as multiple iovs.
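
So each popped element claims a contiguous slice of the shared in_sg
pool; roughly, from the caller's side (the loop and names below are
mine, purely to illustrate):

	size_t iov_idx = 0;
	int i;

	for (i = 0; i < elem_cnt; i++) {
		/* elem[i].in_sg points at &in_sg[iov_idx] and its slice
		 * holds elem[i].in_num entries
		 */
		iov_idx += elem[i].in_num;
	}
	/* iov_idx ends up equal to the *in_num vu_collect() reports */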
> >
> > > - current_size += iov->iov_len;
> > > + current_size += iov_size(elem[elem_cnt].in_sg,
> > > + elem[elem_cnt].in_num);
> > > + current_iov += elem[elem_cnt].in_num;
> > > elem_cnt++;
> > > if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
> > > break;
> > > }
> > > + if (in_num)
> > > + *in_num = current_iov;
> > > +
> > > if (collected)
> > > *collected = current_size;
> > > @@ -147,8 +139,11 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
> > > {
> > > int i;
> > > - for (i = 0; i < elem_cnt; i++)
> > > - vu_queue_fill(vdev, vq, &elem[i], elem[i].in_sg[0].iov_len, i);
> > > + for (i = 0; i < elem_cnt; i++) {
> > > + size_t elem_size = iov_size(elem[i].in_sg, elem[i].in_num);
> >
> > IIUC, the elem structure itself isn't shared with vhost, so we can
> > alter it. Would it make sense to cache the number of bytes allocated
> > to the element there, to avoid repeated calls to iov_size()?
>
> It's possible. But I think it could be complicated to keep the actual size
> of the iovec array and the value we store in elem in sync, as we alter the
> array at several points.
Ok. We do expect the iovs to be pretty short in practice, so
iov_size() shouldn't be too expensive.
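From memory it's just a linear walk, something like (paraphrased, not
the actual iov.c source):

	size_t iov_size(const struct iovec *iov, size_t iov_cnt)
	{
		size_t len = 0, i;

		for (i = 0; i < iov_cnt; i++)
			len += iov[i].iov_len;

		return len;
	}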
--
David Gibson (he or they) | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you, not the other way
| around.
http://www.ozlabs.org/~dgibson