From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Subject: [PATCH v3 0/8] vhost-user,udp: Handle multiple iovec entries per virtqueue element
Date: Mon, 16 Mar 2026 19:07:13 +0100
Message-ID: <20260316180721.2230640-1-lvivier@redhat.com>

Some virtio-net drivers (notably iPXE) provide descriptors where the
vnet header and the frame payload are in separate buffers, resulting in
two iovec entries per virtqueue element. Currently, the RX (host to
guest) path assumes a single iovec per element, which triggers:

  ASSERTION FAILED in virtqueue_map_desc (virtio.c:403):
  num_sg < max_num_sg

This series reworks the UDP vhost-user receive path to support multiple
iovec entries per element, fixing the iPXE crash.

This series only addresses the UDP path.
TCP vhost-user will be updated to use multi-iov elements in a
subsequent series.

v3:
  - include the series "Decouple iovec management from virtqueues elements"
  - because of this series, drop:
      "vu_common: Accept explicit iovec counts in vu_set_element()"
      "vu_common: Accept explicit iovec count per element in vu_init_elem()"
      "vu_common: Prepare to use multibuffer with guest RX"
      "vhost-user,udp: Use 2 iovec entries per element"
  - drop "vu_common: Pass iov_tail to vu_set_vnethdr()",
    as the spec ensures a buffer is big enough to contain the vnet header
  - introduce "with_header()" and merge
      "udp: Pass iov_tail to udp_update_hdr4()/udp_update_hdr6()" and
      "udp_vu: Use iov_tail in udp_vu_prepare()"
    to use it

v2:
  - add iov_truncate(), iov_memset()
  - remove iov_tail_truncate() and iov_tail_zero_end()
  - manage 802.3 minimum frame size

Laurent Vivier (8):
  virtio: Pass iovec arrays as separate parameters to vu_queue_pop()
  vu_handle_tx: Pass actual remaining out_sg capacity to vu_queue_pop()
  vu_common: Move iovec management into vu_collect()
  vhost-user: Centralise Ethernet frame padding in vu_collect(),
    vu_pad() and vu_flush()
  udp_vu: Use iov_tail to manage virtqueue buffers
  udp_vu: Move virtqueue management from udp_vu_sock_recv() to its
    caller
  iov: Add IOV_PUT_HEADER() and with_header() to write header data back
    to iov_tail
  udp: Pass iov_tail to udp_update_hdr4()/udp_update_hdr6()

 iov.c          |  47 +++++++++++
 iov.h          |  27 ++++++-
 tcp_vu.c       |  46 +++++------
 udp.c          | 129 ++++++++++++++++--------------
 udp_internal.h |   6 +-
 udp_vu.c       | 207 +++++++++++++++++++++++++------------------------
 virtio.c       |  29 +++++--
 virtio.h       |   4 +-
 vu_common.c    | 149 ++++++++++++++++++++---------------
 vu_common.h    |  24 +-----
 10 files changed, 385 insertions(+), 283 deletions(-)

-- 
2.53.0