From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH 0/3] Decouple iovec management from virtqueue elements
Date: Fri, 13 Mar 2026 08:21:33 +0100
Message-ID: <20260313072136.4075535-1-lvivier@redhat.com>

This series prepares the vhost-user path for multi-buffer support,
where a single virtqueue element can use more than one iovec entry.

Currently, iovec arrays are tightly coupled to virtqueue elements:
callers must pre-initialize each element's in_sg/out_sg pointers
before calling vu_queue_pop(), and each element is assumed to own
exactly one iovec slot.
This makes it impossible for a single element
to span multiple iovec entries, which is needed for UDP multi-buffer
reception.

The series decouples iovec storage from elements in three patches:

- Patch 1 passes iovec arrays as separate parameters to vu_queue_pop()
  and vu_queue_map_desc(), so the caller controls where descriptors
  are mapped rather than reading them from pre-initialized element
  fields.

- Patch 2 passes the actual remaining out_sg capacity to
  vu_queue_pop() in vu_handle_tx() instead of a fixed per-element
  constant, enabling dynamic iovec allocation.

- Patch 3 moves iovec pool management into vu_collect(), which now
  accepts the iovec array and tracks consumed entries across elements
  with a running counter. This removes vu_set_element() and
  vu_init_elem() entirely. Callers that still assume one iovec per
  element assert this invariant explicitly until they are updated for
  multi-buffer.

The follow-up udp-iov_vu series builds on this to implement actual
multi-buffer support in the UDP vhost-user path.

Laurent Vivier (3):
  virtio: Pass iovec arrays as separate parameters to vu_queue_pop()
  vu_handle_tx: Pass actual remaining out_sg capacity to vu_queue_pop()
  vu_common: Move iovec management into vu_collect()

 tcp_vu.c    | 23 ++++++++-------
 udp_vu.c    | 21 ++++++++------
 virtio.c    | 29 ++++++++++++++-----
 virtio.h    |  4 ++-
 vu_common.c | 83 ++++++++++++++++++++++++-----------------------------
 vu_common.h | 22 ++------------
 6 files changed, 91 insertions(+), 91 deletions(-)

-- 
2.53.0