From: Stefano Brivio
To: Laurent Vivier
Cc: passt-dev@passt.top
Subject: Re: [PATCH v2 3/3] vu_common: Move iovec management into vu_collect()
Date: Tue, 17 Mar 2026 16:23:55 +0100 (CET)
Message-ID: <20260317162354.117a8604@elisabeth>
In-Reply-To: <20260313182618.4157365-4-lvivier@redhat.com>
References: <20260313182618.4157365-1-lvivier@redhat.com>
 <20260313182618.4157365-4-lvivier@redhat.com>
Organization: Red Hat
List-Id: Development discussion and patches for passt

On Fri, 13 Mar 2026 19:26:18 +0100
Laurent Vivier wrote:

> Previously, callers had to pre-initialize virtqueue elements with iovec
> entries using vu_set_element() or vu_init_elem() before calling
> vu_collect(). This meant each element owned a fixed, pre-assigned iovec
> slot.
>
> Move the iovec array into vu_collect() as explicit parameters (in_sg,
> max_in_sg, and in_num), letting it pass the remaining iovec capacity
> directly to vu_queue_pop().
> A running current_iov counter tracks consumed entries across elements,
> so multiple elements share a single iovec pool. The optional in_num
> output parameter reports how many iovec entries were consumed, allowing
> callers to track usage across multiple vu_collect() calls.
>
> This removes vu_set_element() and vu_init_elem(), which are no longer
> needed, and is a prerequisite for multi-buffer support where a single
> virtqueue element can use more than one iovec entry. For now, callers
> assert the current single-iovec-per-element invariant until they are
> updated to handle multiple iovecs.
>
> Signed-off-by: Laurent Vivier

This looks fine to me other than the remark about the @in_num comment
(I'm not sure it makes sense) and the pending clarification with David
regarding that argument to vu_collect(), so once that's sorted, if you
don't want further changes, I would apply it as it is.

But I have a question, just to be sure:

> ---
>  tcp_vu.c    | 25 +++++++++++---------
>  udp_vu.c    | 21 ++++++++++-------
>  vu_common.c | 68 ++++++++++++++++++++++++-----------------------
>  vu_common.h | 22 +++--------------
>  4 files changed, 60 insertions(+), 76 deletions(-)
>
> diff --git a/tcp_vu.c b/tcp_vu.c
> index fd734e857b3b..d470ab54bcea 100644
> --- a/tcp_vu.c
> +++ b/tcp_vu.c
> @@ -87,13 +87,13 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>
>  	hdrlen = tcp_vu_hdrlen(CONN_V6(conn));
>
> -	vu_set_element(&flags_elem[0], NULL, &flags_iov[0]);
> -
>  	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
> +			      &flags_iov[0], 1, NULL,
>  			      MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
>  	if (elem_cnt != 1)
>  		return -1;
>
> +	ASSERT(flags_elem[0].in_num == 1);
>  	ASSERT(flags_elem[0].in_sg[0].iov_len >=
>  	       MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN));
>
> @@ -148,9 +148,8 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	nb_ack = 1;
>
>  	if (flags & DUP_ACK) {
> -		vu_set_element(&flags_elem[1], NULL,
> -			       &flags_iov[1]);
> -
>  		elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
> +				      &flags_iov[1], 1, NULL,
>  				      flags_elem[0].in_sg[0].iov_len, NULL);
>  		if (elem_cnt == 1 &&
>  		    flags_elem[1].in_sg[0].iov_len >=
> @@ -191,8 +190,8 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>  	const struct vu_dev *vdev = c->vdev;
>  	struct msghdr mh_sock = { 0 };
>  	uint16_t mss = MSS_GET(conn);
> +	size_t hdrlen, iov_used;
>  	int s = conn->sock;
> -	size_t hdrlen;
>  	int elem_cnt;
>  	ssize_t ret;
>  	int i;
> @@ -201,22 +200,26 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>
>  	hdrlen = tcp_vu_hdrlen(v6);
>
> -	vu_init_elem(elem, &iov_vu[DISCARD_IOV_NUM], VIRTQUEUE_MAX_SIZE);
> -
> +	iov_used = 0;
>  	elem_cnt = 0;
>  	*head_cnt = 0;
> -	while (fillsize > 0 && elem_cnt < VIRTQUEUE_MAX_SIZE) {
> +	while (fillsize > 0 && elem_cnt < ARRAY_SIZE(elem) &&
> +	       iov_used < VIRTQUEUE_MAX_SIZE) {
> +		size_t frame_size, dlen, in_num;
>  		struct iovec *iov;
> -		size_t frame_size, dlen;
>  		int cnt;
>
>  		cnt = vu_collect(vdev, vq, &elem[elem_cnt],
> -				 VIRTQUEUE_MAX_SIZE - elem_cnt,
> +				 ARRAY_SIZE(elem) - elem_cnt,
> +				 &iov_vu[DISCARD_IOV_NUM + iov_used],
> +				 VIRTQUEUE_MAX_SIZE - iov_used, &in_num,
>  				 MAX(MIN(mss, fillsize) + hdrlen, ETH_ZLEN + VNET_HLEN),
>  				 &frame_size);
>  		if (cnt == 0)
>  			break;
> +		ASSERT((size_t)cnt == in_num); /* one iovec per element */

...this (and the UDP equivalent) will trigger if there are multiple
iovecs per element, until the next pending series deals with that.

My assumption is that, if there are multiple iovecs per element, we
crash in the way you reported in
https://bugs.passt.top/show_bug.cgi?id=196 anyway, so triggering this
assertion here doesn't make it any worse for the moment.

Is my assumption correct, or do we risk adding further cases where we
crash in the meantime? If we do, maybe it would be better to merge this
patch only together with the next series.

-- 
Stefano