From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier
Subject: [PATCH v2 0/3] Decouple iovec management from virtqueue elements
Date: Fri, 13 Mar 2026 19:26:15 +0100
Message-ID: <20260313182618.4157365-1-lvivier@redhat.com>

This series prepares the vhost-user path for multi-buffer support,
where a single virtqueue element can use more than one iovec entry.

Currently, iovec arrays are tightly coupled to virtqueue elements:
callers must pre-initialize each element's in_sg/out_sg pointers
before calling vu_queue_pop(), and each element is assumed to own
exactly one iovec slot.
This makes it impossible for a single element
to span multiple iovec entries, which is needed for UDP multi-buffer
reception.

The series decouples iovec storage from elements in three patches:

- Patch 1 passes iovec arrays as separate parameters to vu_queue_pop()
  and vu_queue_map_desc(), so the caller controls where descriptors
  are mapped rather than reading them from pre-initialized element
  fields.

- Patch 2 passes the actual remaining out_sg capacity to
  vu_queue_pop() in vu_handle_tx() instead of a fixed per-element
  constant, enabling dynamic iovec allocation.

- Patch 3 moves iovec pool management into vu_collect(), which now
  accepts the iovec array and tracks consumed entries across elements
  with a running counter. This removes vu_set_element() and
  vu_init_elem() entirely. Callers that still assume one iovec per
  element assert this invariant explicitly until they are updated for
  multi-buffer.

The follow-up udp-iov_vu series builds on this to implement actual
multi-buffer support in the UDP vhost-user path.

v2:
- in patch 3, use iov_used in iov_truncate() rather than elem_cnt,
  as vu_collect() now provides the number of iovecs collected.

Laurent Vivier (3):
  virtio: Pass iovec arrays as separate parameters to vu_queue_pop()
  vu_handle_tx: Pass actual remaining out_sg capacity to vu_queue_pop()
  vu_common: Move iovec management into vu_collect()

 tcp_vu.c    | 25 +++++++++-------
 udp_vu.c    | 21 ++++++++------
 virtio.c    | 29 ++++++++++++++-----
 virtio.h    |  4 ++-
 vu_common.c | 83 ++++++++++++++++++++++++-----------------------------
 vu_common.h | 22 ++------------
 6 files changed, 92 insertions(+), 92 deletions(-)

-- 
2.53.0