From: Stefano Brivio
To: Laurent Vivier
CC: David Gibson, passt-dev@passt.top
Subject: Re: [PATCH v2 3/3] vu_common: Move iovec management into vu_collect()
Date: Wed, 18 Mar 2026 10:04:39 +0100 (CET)
Message-ID: <20260318100439.3a13730b@elisabeth>
In-Reply-To: <062fad4c-39f9-489e-8c98-544a4c6ada30@redhat.com>
References: <20260313182618.4157365-1-lvivier@redhat.com>
	<20260313182618.4157365-4-lvivier@redhat.com>
	<20260317162350.058e10e0@elisabeth>
	<4705f7ab-9277-4462-ada6-6bee39342627@redhat.com>
	<062fad4c-39f9-489e-8c98-544a4c6ada30@redhat.com>
Organization: Red Hat
List-Id: Development discussion and patches for passt

On Wed, 18 Mar 2026 08:21:10 +0100
Laurent Vivier wrote:

> On 3/18/26 02:16, David Gibson wrote:
> > On Tue, Mar 17, 2026 at 05:30:32PM +0100, Laurent Vivier wrote:
> >> On 3/17/26 16:23, Stefano Brivio wrote:
> >>> On Tue, 17 Mar 2026 08:25:49 +0100
> >>> Laurent Vivier wrote:
> >>>
> >>>> On 3/17/26 03:40, David Gibson wrote:
> >>>>> On Fri, Mar 13, 2026 at 07:26:18PM +0100, Laurent Vivier wrote:
> >>>>>> Previously, callers had to pre-initialize virtqueue elements with iovec
> >>>>>> entries using vu_set_element() or vu_init_elem() before calling
> >>>>>> vu_collect(). This meant each element owned a fixed, pre-assigned iovec
> >>>>>> slot.
> >>>>>>
> >>>>>> Move the iovec array into vu_collect() as explicit parameters (in_sg,
> >>>>>> max_in_sg, and in_num), letting it pass the remaining iovec capacity
> >>>>>> directly to vu_queue_pop(). A running current_iov counter tracks
> >>>>>> consumed entries across elements, so multiple elements share a single
> >>>>>> iovec pool. The optional in_num output parameter reports how many iovec
> >>>>>> entries were consumed, allowing callers to track usage across multiple
> >>>>>> vu_collect() calls.
> >>>>>>
> >>>>>> This removes vu_set_element() and vu_init_elem() which are no longer
> >>>>>> needed, and is a prerequisite for multi-buffer support where a single
> >>>>>> virtqueue element can use more than one iovec entry. For now, callers
> >>>>>> assert the current single-iovec-per-element invariant until they are
> >>>>>> updated to handle multiple iovecs.
> >>>>>>
> >>>>>> Signed-off-by: Laurent Vivier
> >>>>>
> >>>>> Reviewed-by: David Gibson
> >>>>>
> >>>>> Couple of thoughts on possible polish below.
> >>>>>
> >>>>> [snip]
> >>>>>>  /**
> >>>>>>   * vu_collect() - collect virtio buffers from a given virtqueue
> >>>>>>   * @vdev:		vhost-user device
> >>>>>>   * @vq:			virtqueue to collect from
> >>>>>> - * @elem:		Array of virtqueue element
> >>>>>> - * 			each element must be initialized with one iovec entry
> >>>>>> - * 			in the in_sg array.
> >>>>>> + * @elem:		Array of @max_elem virtqueue elements
> >>>>>>   * @max_elem:		Number of virtqueue elements in the array
> >>>>>> + * @in_sg:		Incoming iovec array for device-writable descriptors
> >>>>>> + * @max_in_sg:		Maximum number of entries in @in_sg
> >>>>>> + * @in_num:		Number of collected entries from @in_sg (output)
> >>>>>>   * @size:		Maximum size of the data in the frame
> >>>>>>   * @collected:		Collected buffer length, up to @size, set on return
> >>>>>>   *
> >>>>>> @@ -80,20 +67,21 @@ void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov, int elem_cnt
> >>>>>>   */
> >>>>>>  int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
> >>>>>>  	       struct vu_virtq_element *elem, int max_elem,
> >>>>>> +	       struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
> >>>>>>  	       size_t size, size_t *collected)
> >>>>>>  {
> >>>>>>  	size_t current_size = 0;
> >>>>>> +	size_t current_iov = 0;
> >>>>>>  	int elem_cnt = 0;
> >>>>>> -	while (current_size < size && elem_cnt < max_elem) {
> >>>>>> -		struct iovec *iov;
> >>>>>> +	while (current_size < size && elem_cnt < max_elem &&
> >>>>>> +	       current_iov < max_in_sg) {
> >>>>>>  		int ret;
> >>>>>>  		ret = vu_queue_pop(vdev, vq, &elem[elem_cnt],
> >>>>>> -				   elem[elem_cnt].in_sg,
> >>>>>> -				   elem[elem_cnt].in_num,
> >>>>>> -				   elem[elem_cnt].out_sg,
> >>>>>> -				   elem[elem_cnt].out_num);
> >>>>>> +				   &in_sg[current_iov],
> >>>>>> +				   max_in_sg - current_iov,
> >>>>>> +				   NULL, 0);
> >>>>>>  		if (ret < 0)
> >>>>>>  			break;
> >>>>>> @@ -103,18 +91,22 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
> >>>>>>  			break;
> >>>>>>  		}
> >>>>>> -		iov = &elem[elem_cnt].in_sg[0];
> >>>>>> -
> >>>>>> -		if (iov->iov_len > size - current_size)
> >>>>>> -			iov->iov_len = size - current_size;
> >>>>>> +		elem[elem_cnt].in_num = iov_truncate(elem[elem_cnt].in_sg,
> >>>>>> +						     elem[elem_cnt].in_num,
> >>>>>> +						     size - current_size);
> >>>>>
> >>>>> Will elem[].in_num always end up with the same value as the @in_num
> >>>>> parameter? If so, do we need the explicit parameter?
> >>>>
> >>>> @in_num parameter of vu_collect()?
> >>>>
> >>>> @in_num is the sum of all elem[].in_num; it could be computed by the caller from
> >>>> elem, but it is simpler to return it, as we already compute it in the loop.
> >>>
> >>> I'm not sure I understood the point of David's comment here, and this
> >>> explanation makes sense to me now, but it took me a bit to figure that
> >>> out.
> >>>
> >>> Could it be that @in_num is a bit confusing as it has "in" and "num" in
> >>> it, but it's actually an output representing how many "in" entries we
> >>> used/need?
> >>
> >> For an element, *in_*num is the number of *in_*sg entries we have read from the ring for that element.
> >>
> >> It's virtio semantics, so *in_* means an sg going *in*to the guest.
> >>
> >> For *out_*sg we have *out_*num.
> >>
> >>>
> >>> What if we rename it to @in_used or @in_collected?
> >>>
> >>
> >> The idea was to keep the same name as in the element. But we can change this to @in_used.
> >
> > Would "in_total" work better to suggest that it's the sum of all the
> > elements' in_num?
>
> Yes, I think it gives the information that it's the sum of the in_num values.

Okay, should I change this on merge, or do you plan to repost (which would
be slightly more convenient for me but not really needed)?

-- 
Stefano