From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH 12/12] vhost-user,udp: Use 2 iovec entries per element
Date: Fri, 27 Feb 2026 15:03:30 +0100
Message-ID: <20260227140330.2216753-13-lvivier@redhat.com>
In-Reply-To: <20260227140330.2216753-1-lvivier@redhat.com>
References: <20260227140330.2216753-1-lvivier@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit
List-Id: Development discussion and patches for passt

iPXE places the vnet header in one virtqueue descriptor and the payload
in another. When passt maps these descriptors, it needs two iovecs per
virtqueue element to handle this layout.
Without this, passt crashes with:

ASSERTION FAILED in virtqueue_map_desc (virtio.c:403): num_sg < max_num_sg

Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
 udp_vu.c    |  8 ++++----
 vu_common.c | 34 +++++++++++++++++++++++-----------
 2 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/udp_vu.c b/udp_vu.c
index 7e486b74883e..13fea87e1b9f 100644
--- a/udp_vu.c
+++ b/udp_vu.c
@@ -34,7 +34,7 @@
 #include "vu_common.h"
 
 static struct iovec iov_vu [VIRTQUEUE_MAX_SIZE];
-static struct vu_virtq_element elem [VIRTQUEUE_MAX_SIZE];
+static struct vu_virtq_element elem [VIRTQUEUE_MAX_SIZE / IOV_PER_ELEM];
 
 /**
  * udp_vu_hdrlen() - Sum size of all headers, from UDP to virtio-net
@@ -214,21 +214,21 @@ void udp_vu_sock_to_tap(const struct ctx *c, int s, int n, flow_sidx_t tosidx)
 		int elem_cnt, elem_used;
 		ssize_t dlen;
 
-		vu_init_elem(elem, iov_vu, ARRAY_SIZE(elem), 1);
+		vu_init_elem(elem, iov_vu, ARRAY_SIZE(elem), IOV_PER_ELEM);
 
 		elem_cnt = vu_collect(vdev, vq, elem, ARRAY_SIZE(elem),
 				      IP_MAX_MTU + ETH_HLEN + VNET_HLEN,
 				      NULL);
 		if (elem_cnt == 0)
 			break;
 
-		data = IOV_TAIL(iov_vu, elem_cnt, 0);
+		data = IOV_TAIL(iov_vu, (size_t)(elem_cnt * IOV_PER_ELEM), 0);
 		dlen = udp_vu_sock_recv(&data, s, v6);
 		if (dlen < 0) {
 			vu_queue_rewind(vq, elem_cnt);
 			continue;
 		}
-		elem_used = data.cnt;
+		elem_used = DIV_ROUND_UP(data.cnt, IOV_PER_ELEM);
 
 		/* release unused buffers */
 		vu_queue_rewind(vq, elem_cnt - elem_used);
diff --git a/vu_common.c b/vu_common.c
index 67d8f3e47338..3f50d31da633 100644
--- a/vu_common.c
+++ b/vu_common.c
@@ -63,8 +63,15 @@ void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov,
 {
 	int i, j;
 
-	for (i = 0, j = 0; i < elem_cnt; i++, j += iov_per_elem)
+	for (i = 0, j = 0; i < elem_cnt; i++, j += iov_per_elem) {
+		int k;
+
+		for (k = 0; k < iov_per_elem; k++) {
+			iov[j + k].iov_base = NULL;
+			iov[j + k].iov_len = 0;
+		}
 		vu_set_element(&elem[i], 0, NULL, iov_per_elem, &iov[j]);
+	}
 }
 
 /**
@@ -88,7 +95,8 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
 	int elem_cnt = 0;
 
 	while (current_size < size && elem_cnt < max_elem) {
-		struct iovec *iov;
+		struct iov_tail tail;
+		size_t elem_size;
 		int ret;
 
 		ret = vu_queue_pop(vdev, vq, &elem[elem_cnt]);
@@ -101,12 +109,14 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
 			break;
 		}
 
-		iov = &elem[elem_cnt].in_sg[0];
+		tail = IOV_TAIL(elem[elem_cnt].in_sg, elem[elem_cnt].in_num, 0);
+		iov_tail_truncate(&tail, size - current_size);
+		elem[elem_cnt].in_num = tail.cnt;
 
-		if (iov->iov_len > size - current_size)
-			iov->iov_len = size - current_size;
+		elem_size = iov_size(elem[elem_cnt].in_sg,
+				     elem[elem_cnt].in_num);
 
-		current_size += iov->iov_len;
+		current_size += elem_size;
 		elem_cnt++;
 
 		if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
@@ -153,8 +163,10 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
 {
 	int i;
 
-	for (i = 0; i < elem_cnt; i++)
-		vu_queue_fill(vdev, vq, &elem[i], elem[i].in_sg[0].iov_len, i);
+	for (i = 0; i < elem_cnt; i++) {
+		size_t elem_size = iov_size(elem[i].in_sg, elem[i].in_num);
+		vu_queue_fill(vdev, vq, &elem[i], elem_size, i);
+	}
 
 	vu_queue_flush(vdev, vq, elem_cnt);
 	vu_queue_notify(vdev, vq);
@@ -253,7 +265,7 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
 {
 	struct vu_dev *vdev = c->vdev;
 	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
-	struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
+	struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE / IOV_PER_ELEM];
 	struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
 	struct iov_tail data;
 	size_t total;
@@ -267,7 +279,7 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
 		return -1;
 	}
 
-	vu_init_elem(elem, in_sg, ARRAY_SIZE(elem), 1);
+	vu_init_elem(elem, in_sg, ARRAY_SIZE(elem), IOV_PER_ELEM);
 	size += VNET_HLEN;
 
 	elem_cnt = vu_collect(vdev, vq, elem, ARRAY_SIZE(elem), size, &total);
@@ -277,7 +289,7 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
 		goto err;
 	}
 
-	data = IOV_TAIL(&in_sg[0], elem_cnt, 0);
+	data = IOV_TAIL(&in_sg[0], (size_t)(elem_cnt * IOV_PER_ELEM), 0);
 	vu_set_vnethdr(vdev, &data, elem_cnt);
 
 	total -= VNET_HLEN;
-- 
2.53.0