From: Stefano Brivio <sbrivio@redhat.com>
To: Laurent Vivier <lvivier@redhat.com>
Cc: passt-dev@passt.top
Subject: Re: [PATCH v2 3/3] vu_common: Move iovec management into vu_collect()
Date: Tue, 17 Mar 2026 16:23:55 +0100 (CET)
Message-ID: <20260317162354.117a8604@elisabeth>
In-Reply-To: <20260313182618.4157365-4-lvivier@redhat.com>
On Fri, 13 Mar 2026 19:26:18 +0100
Laurent Vivier <lvivier@redhat.com> wrote:
> Previously, callers had to pre-initialize virtqueue elements with iovec
> entries using vu_set_element() or vu_init_elem() before calling
> vu_collect(). This meant each element owned a fixed, pre-assigned iovec
> slot.
>
> Move the iovec array into vu_collect() as explicit parameters (in_sg,
> max_in_sg, and in_num), letting it pass the remaining iovec capacity
> directly to vu_queue_pop(). A running current_iov counter tracks
> consumed entries across elements, so multiple elements share a single
> iovec pool. The optional in_num output parameter reports how many iovec
> entries were consumed, allowing callers to track usage across multiple
> vu_collect() calls.
>
> This removes vu_set_element() and vu_init_elem() which are no longer
> needed, and is a prerequisite for multi-buffer support where a single
> virtqueue element can use more than one iovec entry. For now, callers
> assert the current single-iovec-per-element invariant until they are
> updated to handle multiple iovecs.
>
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
This looks fine to me apart from my comment about @in_num (not sure if
that comment makes sense) and the pending clarification with David
regarding that argument to vu_collect(), so once that's sorted, if you
don't want further changes, I would apply it as it is. But I have a
question, just to be sure:
> ---
> tcp_vu.c | 25 +++++++++++---------
> udp_vu.c | 21 ++++++++++-------
> vu_common.c | 68 ++++++++++++++++++++++++-----------------------------
> vu_common.h | 22 +++--------------
> 4 files changed, 60 insertions(+), 76 deletions(-)
>
> diff --git a/tcp_vu.c b/tcp_vu.c
> index fd734e857b3b..d470ab54bcea 100644
> --- a/tcp_vu.c
> +++ b/tcp_vu.c
> @@ -87,13 +87,13 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>
> hdrlen = tcp_vu_hdrlen(CONN_V6(conn));
>
> - vu_set_element(&flags_elem[0], NULL, &flags_iov[0]);
> -
> elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
> + &flags_iov[0], 1, NULL,
> MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
> if (elem_cnt != 1)
> return -1;
>
> + ASSERT(flags_elem[0].in_num == 1);
> ASSERT(flags_elem[0].in_sg[0].iov_len >=
> MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN));
>
> @@ -148,9 +148,8 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
> nb_ack = 1;
>
> if (flags & DUP_ACK) {
> - vu_set_element(&flags_elem[1], NULL, &flags_iov[1]);
> -
> elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
> + &flags_iov[1], 1, NULL,
> flags_elem[0].in_sg[0].iov_len, NULL);
> if (elem_cnt == 1 &&
> flags_elem[1].in_sg[0].iov_len >=
> @@ -191,8 +190,8 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
> const struct vu_dev *vdev = c->vdev;
> struct msghdr mh_sock = { 0 };
> uint16_t mss = MSS_GET(conn);
> + size_t hdrlen, iov_used;
> int s = conn->sock;
> - size_t hdrlen;
> int elem_cnt;
> ssize_t ret;
> int i;
> @@ -201,22 +200,26 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>
> hdrlen = tcp_vu_hdrlen(v6);
>
> - vu_init_elem(elem, &iov_vu[DISCARD_IOV_NUM], VIRTQUEUE_MAX_SIZE);
> -
> + iov_used = 0;
> elem_cnt = 0;
> *head_cnt = 0;
> - while (fillsize > 0 && elem_cnt < VIRTQUEUE_MAX_SIZE) {
> + while (fillsize > 0 && elem_cnt < ARRAY_SIZE(elem) &&
> + iov_used < VIRTQUEUE_MAX_SIZE) {
> + size_t frame_size, dlen, in_num;
> struct iovec *iov;
> - size_t frame_size, dlen;
> int cnt;
>
> cnt = vu_collect(vdev, vq, &elem[elem_cnt],
> - VIRTQUEUE_MAX_SIZE - elem_cnt,
> + ARRAY_SIZE(elem) - elem_cnt,
> + &iov_vu[DISCARD_IOV_NUM + iov_used],
> + VIRTQUEUE_MAX_SIZE - iov_used, &in_num,
> MAX(MIN(mss, fillsize) + hdrlen, ETH_ZLEN + VNET_HLEN),
> &frame_size);
> if (cnt == 0)
> break;
> + ASSERT((size_t)cnt == in_num); /* one iovec per element */
...this (and the UDP equivalent) will trigger if there are multiple
iovecs per element, until the next pending series deals with that case.
My assumption is that, if there are multiple iovecs per element, we
crash anyway in the way you reported in
https://bugs.passt.top/show_bug.cgi?id=196, so triggering this
assertion here doesn't make things any worse for the moment.
Is my assumption correct, or do we risk adding further cases where we
crash in the meantime? If we do, it might be better to merge this patch
only together with the next series.
--
Stefano
Thread overview: 15+ messages
2026-03-13 18:26 [PATCH v2 0/3] Decouple iovec management from virtqueue elements Laurent Vivier
2026-03-13 18:26 ` [PATCH v2 1/3] virtio: Pass iovec arrays as separate parameters to vu_queue_pop() Laurent Vivier
2026-03-16 8:25 ` David Gibson
2026-03-13 18:26 ` [PATCH v2 2/3] vu_handle_tx: Pass actual remaining out_sg capacity " Laurent Vivier
2026-03-16 9:15 ` David Gibson
2026-03-17 0:02 ` Stefano Brivio
2026-03-13 18:26 ` [PATCH v2 3/3] vu_common: Move iovec management into vu_collect() Laurent Vivier
2026-03-17 2:40 ` David Gibson
2026-03-17 7:25 ` Laurent Vivier
2026-03-17 15:23 ` Stefano Brivio
2026-03-17 16:30 ` Laurent Vivier
2026-03-17 16:35 ` Stefano Brivio
2026-03-17 15:23 ` Stefano Brivio [this message]
2026-03-17 16:18 ` Laurent Vivier
2026-03-17 16:21 ` Stefano Brivio