From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH v5 0/8] vhost-user,udp: Handle multiple iovec entries per virtqueue element
Date: Fri, 27 Mar 2026 18:58:26 +0100
Message-ID: <20260327175834.831995-1-lvivier@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
List-Id: Development discussion and patches for passt

Some virtio-net drivers (notably iPXE) provide descriptors where the
vnet header and the frame payload are in separate buffers, resulting in
two iovec entries per virtqueue element. Currently, the RX (host to
guest) path assumes a single iovec per element, which triggers:

  ASSERTION FAILED in virtqueue_map_desc (virtio.c:403): num_sg < max_num_sg

This series reworks the UDP vhost-user receive path to support multiple
iovec entries per element, fixing the iPXE crash.

This series only addresses the UDP path. TCP vhost-user will be updated
to use multi-iov elements in a subsequent series.
v5:
  - This version doesn't change the padding handling relative to v4;
    that is a complex task that will be addressed in a later version
  - Reorder patches and add new patches
  - Remove IOV_PUT_HEADER()/with_header() and introduce IOV_PUSH_HEADER()
  - Don't use the iov_tail to provide the headers to the functions
  - Move vu_set_vnethdr() to vu_flush(), extract vu_queue_notify()
  - Move vu_flush() inside the loop in tcp_vu_data_from_sock() to flush
    data frame by frame rather than once for the full data length

v4:
  - Rebase
  - Replace ASSERT() by assert()

v3:
  - Include the series "Decouple iovec management from virtqueues
    elements"
  - Because of that series, drop:
      "vu_common: Accept explicit iovec counts in vu_set_element()"
      "vu_common: Accept explicit iovec count per element in vu_init_elem()"
      "vu_common: Prepare to use multibuffer with guest RX"
      "vhost-user,udp: Use 2 iovec entries per element"
  - Drop "vu_common: Pass iov_tail to vu_set_vnethdr()" as the spec
    ensures a buffer is big enough to contain the vnet header
  - Introduce with_header() and merge "udp: Pass iov_tail to
    udp_update_hdr4()/udp_update_hdr6()" and "udp_vu: Use iov_tail in
    udp_vu_prepare()" to use it

v2:
  - Add iov_truncate(), iov_memset()
  - Remove iov_tail_truncate() and iov_tail_zero_end()
  - Handle the 802.3 minimum frame size

Laurent Vivier (8):
  iov: Introduce iov_memset()
  vu_common: Move vnethdr setup into vu_flush()
  vhost-user: Centralise Ethernet frame padding in vu_collect(),
    vu_pad() and vu_flush()
  udp_vu: Move virtqueue management from udp_vu_sock_recv() to its
    caller
  udp_vu: Pass iov explicitly to helpers instead of using file-scoped
    array
  udp_vu: Allow virtqueue elements with multiple iovec entries
  iov: Introduce IOV_PUSH_HEADER() macro
  udp: Pass iov_tail to udp_update_hdr4()/udp_update_hdr6()

 iov.c          |  48 ++++++++++++
 iov.h          |  13 ++++
 tcp_vu.c       |  36 +++------
 udp.c          |  81 ++++++++++----------
 udp_internal.h |  10 +--
 udp_vu.c       | 201 +++++++++++++++++++++++++------------------------
 vu_common.c    |  64 ++++++++++------
 vu_common.h    |   3 +-
 8 files changed, 263 insertions(+), 193 deletions(-)

-- 
2.53.0