public inbox for passt-dev@passt.top
From: Stefano Brivio <sbrivio@redhat.com>
To: Laurent Vivier <lvivier@redhat.com>
Cc: David Gibson <david@gibson.dropbear.id.au>, passt-dev@passt.top
Subject: Re: [PATCH v2 3/3] vu_common: Move iovec management into vu_collect()
Date: Tue, 17 Mar 2026 17:35:39 +0100 (CET)	[thread overview]
Message-ID: <20260317173538.7938b2a2@elisabeth> (raw)
In-Reply-To: <4705f7ab-9277-4462-ada6-6bee39342627@redhat.com>

On Tue, 17 Mar 2026 17:30:32 +0100
Laurent Vivier <lvivier@redhat.com> wrote:

> On 3/17/26 16:23, Stefano Brivio wrote:
> > On Tue, 17 Mar 2026 08:25:49 +0100
> > Laurent Vivier <lvivier@redhat.com> wrote:
> >   
> >> On 3/17/26 03:40, David Gibson wrote:  
> >>> On Fri, Mar 13, 2026 at 07:26:18PM +0100, Laurent Vivier wrote:  
> >>>> Previously, callers had to pre-initialize virtqueue elements with iovec
> >>>> entries using vu_set_element() or vu_init_elem() before calling
> >>>> vu_collect().  This meant each element owned a fixed, pre-assigned iovec
> >>>> slot.
> >>>>
> >>>> Move the iovec array into vu_collect() as explicit parameters (in_sg,
> >>>> max_in_sg, and in_num), letting it pass the remaining iovec capacity
> >>>> directly to vu_queue_pop().  A running current_iov counter tracks
> >>>> consumed entries across elements, so multiple elements share a single
> >>>> iovec pool.  The optional in_num output parameter reports how many iovec
> >>>> entries were consumed, allowing callers to track usage across multiple
> >>>> vu_collect() calls.
> >>>>
> >>>> This removes vu_set_element() and vu_init_elem() which are no longer
> >>>> needed, and is a prerequisite for multi-buffer support where a single
> >>>> virtqueue element can use more than one iovec entry.  For now, callers
> >>>> assert the current single-iovec-per-element invariant until they are
> >>>> updated to handle multiple iovecs.
> >>>>
> >>>> Signed-off-by: Laurent Vivier <lvivier@redhat.com>  
> >>>
> >>> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> >>>
> >>> Couple of thoughts on possible polish below.
> >>>
> >>> [snip]  
> >>>>    /**
> >>>>     * vu_collect() - collect virtio buffers from a given virtqueue
> >>>>     * @vdev:		vhost-user device
> >>>>     * @vq:			virtqueue to collect from
> >>>> - * @elem:		Array of virtqueue element
> >>>> - * 			each element must be initialized with one iovec entry
> >>>> - * 			in the in_sg array.
> >>>> + * @elem:		Array of @max_elem virtqueue elements
> >>>>     * @max_elem:		Number of virtqueue elements in the array
> >>>> + * @in_sg:		Incoming iovec array for device-writable descriptors
> >>>> + * @max_in_sg:		Maximum number of entries in @in_sg
> >>>> + * @in_num:		Number of collected entries from @in_sg (output)
> >>>>     * @size:		Maximum size of the data in the frame
> >>>>     * @collected:		Collected buffer length, up to @size, set on return
> >>>>     *
> >>>> @@ -80,20 +67,21 @@ void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov, int elem_cnt
> >>>>     */
> >>>>    int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
> >>>>    	       struct vu_virtq_element *elem, int max_elem,
> >>>> +	       struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
> >>>>    	       size_t size, size_t *collected)
> >>>>    {
> >>>>    	size_t current_size = 0;
> >>>> +	size_t current_iov = 0;
> >>>>    	int elem_cnt = 0;
> >>>>    
> >>>> -	while (current_size < size && elem_cnt < max_elem) {
> >>>> -		struct iovec *iov;
> >>>> +	while (current_size < size && elem_cnt < max_elem &&
> >>>> +	       current_iov < max_in_sg) {
> >>>>    		int ret;
> >>>>    
> >>>>    		ret = vu_queue_pop(vdev, vq, &elem[elem_cnt],
> >>>> -				   elem[elem_cnt].in_sg,
> >>>> -				   elem[elem_cnt].in_num,
> >>>> -				   elem[elem_cnt].out_sg,
> >>>> -				   elem[elem_cnt].out_num);
> >>>> +				   &in_sg[current_iov],
> >>>> +				   max_in_sg - current_iov,
> >>>> +				   NULL, 0);
> >>>>    		if (ret < 0)
> >>>>    			break;
> >>>>    
> >>>> @@ -103,18 +91,22 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
> >>>>    			break;
> >>>>    		}
> >>>>    
> >>>> -		iov = &elem[elem_cnt].in_sg[0];
> >>>> -
> >>>> -		if (iov->iov_len > size - current_size)
> >>>> -			iov->iov_len = size - current_size;
> >>>> +		elem[elem_cnt].in_num = iov_truncate(elem[elem_cnt].in_sg,
> >>>> +						     elem[elem_cnt].in_num,
> >>>> +						     size - current_size);  
> >>>
> >>> Will elem[].in_num always end up with the same value as the @in_num
> >>> parameter?  If so, do we need the explicit parameter?  
> >>
> >> @in_num parameter of vu_collect()?
> >>
> >> @in_num is the sum of all elem[].in_num. The caller could compute it
> >> from elem, but it is simpler to return it, as we already compute it in
> >> the loop.
> > 
> > I'm not sure I understood the point of David's comment here, and this
> > explanation makes sense to me now, but it took me a bit to figure that
> > out.
> > 
> > Could it be that @in_num is a bit confusing as it has "in" and "num" in
> > it, but it's actually an output representing how many "in" entries we
> > used/need?  
> 
> For an element, *in_*num is the number of *in_*sg entries we have read from the ring.
> 
> It's virtio semantics, so *in_* means sg going *into* the guest.

Sure, that's fair, and:

> For *out_*sg we have *out_*num.

...we certainly can't call it "out_" because of that. My problem is
that "num", combined with the "in_" prefix, becomes quite unspecific.

> > What if we rename it to @in_used or @in_collected?
> 
> The idea was to keep the same name as in the element. But we can change this to @in_used.

Oh, I see now. Let's wait for David to comment, though, as he might
have been confused by the naming as well (my guess at least).

-- 
Stefano



Thread overview: 15+ messages
2026-03-13 18:26 [PATCH v2 0/3] Decouple iovec management from virtqueue elements Laurent Vivier
2026-03-13 18:26 ` [PATCH v2 1/3] virtio: Pass iovec arrays as separate parameters to vu_queue_pop() Laurent Vivier
2026-03-16  8:25   ` David Gibson
2026-03-13 18:26 ` [PATCH v2 2/3] vu_handle_tx: Pass actual remaining out_sg capacity " Laurent Vivier
2026-03-16  9:15   ` David Gibson
2026-03-17  0:02   ` Stefano Brivio
2026-03-13 18:26 ` [PATCH v2 3/3] vu_common: Move iovec management into vu_collect() Laurent Vivier
2026-03-17  2:40   ` David Gibson
2026-03-17  7:25     ` Laurent Vivier
2026-03-17 15:23       ` Stefano Brivio
2026-03-17 16:30         ` Laurent Vivier
2026-03-17 16:35           ` Stefano Brivio [this message]
2026-03-17 15:23   ` Stefano Brivio
2026-03-17 16:18     ` Laurent Vivier
2026-03-17 16:21       ` Stefano Brivio

Code repositories for project(s) associated with this public inbox

	https://passt.top/passt
