From: Stefano Brivio <sbrivio@redhat.com>
To: Laurent Vivier
Cc: passt-dev@passt.top
Subject: Re: [PATCH v2 3/3] vu_common: Move iovec management into vu_collect()
Date: Tue, 17 Mar 2026 17:21:12 +0100 (CET)
Message-ID: <20260317172110.63cb10a3@elisabeth>
In-Reply-To: <0ad98126-2a3b-44a8-b1f2-ab15400f3f97@redhat.com>
References: <20260313182618.4157365-1-lvivier@redhat.com>
 <20260313182618.4157365-4-lvivier@redhat.com>
 <20260317162354.117a8604@elisabeth>
 <0ad98126-2a3b-44a8-b1f2-ab15400f3f97@redhat.com>

On Tue, 17 Mar 2026 17:18:58 +0100
Laurent Vivier wrote:

> On 3/17/26 16:23, Stefano Brivio wrote:
> > On Fri, 13 Mar 2026 19:26:18 +0100
> > Laurent Vivier wrote:
> >
> >> Previously, callers had to pre-initialize virtqueue elements with iovec
> >> entries using vu_set_element() or vu_init_elem() before calling
> >> vu_collect(). This meant each element owned a fixed, pre-assigned iovec
> >> slot.
> >>
> >> Move the iovec array into vu_collect() as explicit parameters (in_sg,
> >> max_in_sg, and in_num), letting it pass the remaining iovec capacity
> >> directly to vu_queue_pop(). A running current_iov counter tracks
> >> consumed entries across elements, so multiple elements share a single
> >> iovec pool. The optional in_num output parameter reports how many iovec
> >> entries were consumed, allowing callers to track usage across multiple
> >> vu_collect() calls.
> >>
> >> This removes vu_set_element() and vu_init_elem(), which are no longer
> >> needed, and is a prerequisite for multi-buffer support, where a single
> >> virtqueue element can use more than one iovec entry. For now, callers
> >> assert the current single-iovec-per-element invariant until they are
> >> updated to handle multiple iovecs.
> >>
> >> Signed-off-by: Laurent Vivier
> >
> > This looks fine to me other than the comment about @in_num (not sure
> > if it makes sense) and pending clarification with David regarding
> > that argument to vu_collect(), so once that's sorted, if you don't
> > want further changes, I would apply it like it is.
> > But I have a question, just to be sure:
> >
> >> ---
> >>  tcp_vu.c    | 25 +++++++++++---------
> >>  udp_vu.c    | 21 ++++++++++-------
> >>  vu_common.c | 68 ++++++++++++++++++++++++-----------------------
> >>  vu_common.h | 22 +++-------------
> >>  4 files changed, 60 insertions(+), 76 deletions(-)
> >>
> >> diff --git a/tcp_vu.c b/tcp_vu.c
> >> index fd734e857b3b..d470ab54bcea 100644
> >> --- a/tcp_vu.c
> >> +++ b/tcp_vu.c
> >> @@ -87,13 +87,13 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
> >>  
> >>          hdrlen = tcp_vu_hdrlen(CONN_V6(conn));
> >>  
> >> -        vu_set_element(&flags_elem[0], NULL, &flags_iov[0]);
> >> -
> >>          elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
> >> +                              &flags_iov[0], 1, NULL,
> >>                                MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
> >>          if (elem_cnt != 1)
> >>                  return -1;
> >>  
> >> +        ASSERT(flags_elem[0].in_num == 1);
> >>          ASSERT(flags_elem[0].in_sg[0].iov_len >=
> >>                 MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN));
> >>  
> >> @@ -148,9 +148,8 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
> >>          nb_ack = 1;
> >>  
> >>          if (flags & DUP_ACK) {
> >> -                vu_set_element(&flags_elem[1], NULL, &flags_iov[1]);
> >> -
> >>                  elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
> >> +                                      &flags_iov[1], 1, NULL,
> >>                                        flags_elem[0].in_sg[0].iov_len, NULL);
> >>                  if (elem_cnt == 1 &&
> >>                      flags_elem[1].in_sg[0].iov_len >=
> >> @@ -191,8 +190,8 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
> >>          const struct vu_dev *vdev = c->vdev;
> >>          struct msghdr mh_sock = { 0 };
> >>          uint16_t mss = MSS_GET(conn);
> >> +        size_t hdrlen, iov_used;
> >>          int s = conn->sock;
> >> -        size_t hdrlen;
> >>          int elem_cnt;
> >>          ssize_t ret;
> >>          int i;
> >> @@ -201,22 +200,26 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
> >>  
> >>          hdrlen = tcp_vu_hdrlen(v6);
> >>  
> >> -        vu_init_elem(elem, &iov_vu[DISCARD_IOV_NUM], VIRTQUEUE_MAX_SIZE);
> >> -
> >> +        iov_used = 0;
> >>          elem_cnt = 0;
> >>          *head_cnt = 0;
> >> -        while (fillsize > 0 && elem_cnt < VIRTQUEUE_MAX_SIZE) {
> >> +        while (fillsize > 0 && elem_cnt < ARRAY_SIZE(elem) &&
> >> +               iov_used < VIRTQUEUE_MAX_SIZE) {
> >> +                size_t frame_size, dlen, in_num;
> >>                  struct iovec *iov;
> >> -                size_t frame_size, dlen;
> >>                  int cnt;
> >>  
> >>                  cnt = vu_collect(vdev, vq, &elem[elem_cnt],
> >> -                                 VIRTQUEUE_MAX_SIZE - elem_cnt,
> >> +                                 ARRAY_SIZE(elem) - elem_cnt,
> >> +                                 &iov_vu[DISCARD_IOV_NUM + iov_used],
> >> +                                 VIRTQUEUE_MAX_SIZE - iov_used, &in_num,
> >>                                   MAX(MIN(mss, fillsize) + hdrlen, ETH_ZLEN + VNET_HLEN),
> >>                                   &frame_size);
> >>                  if (cnt == 0)
> >>                          break;
> >> +                ASSERT((size_t)cnt == in_num); /* one iovec per element */
> >
> > ...this (and the UDP equivalent) will trigger if there are multiple
> > iovecs per element, until the point where the next pending series
> > deals with it.
> >
> > My assumption is that, if there are multiple iovecs per element, we
> > crash in the way you reported in
> > https://bugs.passt.top/show_bug.cgi?id=196 anyway, so triggering this
> > assertion here doesn't make it any worse for the moment.
> >
> > Is my assumption correct, or do we risk adding additional cases where
> > we crash meanwhile? If we do, maybe it would be better to only merge
> > this patch with the next series.
>
> You're right. The idea is to mark clearly that udp_vu.c and tcp_vu.c
> only manage one iovec per element. But they can be removed from this
> series if you prefer.

No, no, then it's convenient!
It's also more convenient for any debugging in the meantime to have
things crash right there, I suppose.

-- 
Stefano
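
For reference, here is a minimal sketch of what the reworked
vu_collect() loop could look like. It is inferred purely from the
commit message and the tcp_vu.c call sites quoted above: the
vu_common.c hunk itself is not quoted in this message, so the exact
signature, the way vu_queue_pop() is invoked, and the use of the
iov_size() and MIN() helpers are assumptions, and the "collected"
counter is a made-up name. Descriptor handling, error paths, and the
real frame-size accounting are omitted.

int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
               struct vu_virtq_element *elem, int max_elem,
               struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
               size_t size, size_t *frame_size)
{
        size_t current_iov = 0; /* entries of in_sg[] consumed so far */
        size_t collected = 0;   /* buffer space collected so far */
        int elem_cnt = 0;

        while (collected < size &&
               elem_cnt < max_elem && current_iov < max_in_sg) {
                struct vu_virtq_element *cur = &elem[elem_cnt];

                /* Point the element at the unused tail of the shared
                 * iovec pool, handing the remaining capacity to
                 * vu_queue_pop(), which fills cur->in_sg and sets
                 * cur->in_num to the count actually used
                 */
                cur->in_sg = &in_sg[current_iov];
                cur->in_num = max_in_sg - current_iov;
                if (vu_queue_pop(vdev, vq, cur) < 0)
                        break;

                current_iov += cur->in_num;     /* can be > 1 per element */
                collected += iov_size(cur->in_sg, cur->in_num);
                elem_cnt++;
        }

        if (in_num)
                *in_num = current_iov;  /* iovec entries consumed in total */
        if (frame_size)
                *frame_size = collected;

        return elem_cnt;
}

Under these assumptions, the assertion in the quoted tcp_vu.c hunk,
ASSERT((size_t)cnt == in_num), holds exactly when every popped element
consumed a single entry of the shared pool, which is the invariant the
series keeps until multi-buffer support lands.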