From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH v6 0/3] vhost-user,udp: Handle multiple iovec entries per virtqueue element
Date: Wed, 1 Apr 2026 21:23:23 +0200
Message-ID: <20260401192326.1783350-1-lvivier@redhat.com>
List-Id: Development discussion and patches for passt

Some virtio-net drivers (notably iPXE) provide descriptors in which the
vnet header and the frame payload sit in separate buffers, resulting in
two iovec entries per virtqueue element. The RX (host-to-guest) path
currently assumes a single iovec per element, which triggers:

  ASSERTION FAILED in virtqueue_map_desc (virtio.c:403): num_sg < max_num_sg

This series reworks the UDP vhost-user receive path to support multiple
iovec entries per element, fixing the iPXE crash.

This series only addresses the UDP path; TCP vhost-user will be updated
to use multi-iovec elements in a subsequent series.
Based-on: 20260401191826.1782394-1-lvivier@redhat.com

v6:
  - Rebased on top of "[PATCH 00/10] vhost-user: Preparatory series for
    multiple iovec entries per virtqueue element"

v5:
  - This version doesn't change the padding handling compared to v4;
    that is a complex task that will be addressed in a later version
  - Reorder patches and add new patches
  - Remove IOV_PUT_HEADER()/with_header() and introduce IOV_PUSH_HEADER()
  - Don't use the iov_tail to provide the headers to the functions
  - Move vu_set_vnethdr() to vu_flush(), extract vu_queue_notify()
  - Move vu_flush() inside the loop in tcp_vu_data_from_sock() to flush
    data frame by frame rather than by full data length

v4:
  - Rebase
  - Replace ASSERT() by assert()

v3:
  - Include the series "Decouple iovec management from virtqueues
    elements"
  - Because of that series, drop:
      "vu_common: Accept explicit iovec counts in vu_set_element()"
      "vu_common: Accept explicit iovec count per element in vu_init_elem()"
      "vu_common: Prepare to use multibuffer with guest RX"
      "vhost-user,udp: Use 2 iovec entries per element"
  - Drop "vu_common: Pass iov_tail to vu_set_vnethdr()" as the spec
    ensures a buffer is big enough to contain the vnet header
  - Introduce with_header() and merge "udp: Pass iov_tail to
    udp_update_hdr4()/udp_update_hdr6()" and "udp_vu: Use iov_tail in
    udp_vu_prepare()" to use it

v2:
  - Add iov_truncate(), iov_memset()
  - Remove iov_tail_truncate() and iov_tail_zero_end()
  - Handle the 802.3 minimum frame size

Laurent Vivier (3):
  udp_vu: Allow virtqueue elements with multiple iovec entries
  iov: Introduce IOV_PUSH_HEADER() macro
  udp: Pass iov_tail to udp_update_hdr4()/udp_update_hdr6()

 iov.c          |  22 ++++++++++
 iov.h          |  11 +++++
 udp.c          |  70 +++++++++++++++++--------------
 udp_internal.h |   4 +-
 udp_vu.c       | 110 +++++++++++++++++++++++------------------------
 5 files changed, 131 insertions(+), 86 deletions(-)

-- 
2.53.0