From: Laurent Vivier <lvivier@redhat.com>
To: David Gibson
Cc: passt-dev@passt.top
Subject: Re: [PATCH v2 3/3] vu_common: Move iovec management into vu_collect()
Date: Tue, 17 Mar 2026 08:25:49 +0100

On 3/17/26 03:40, David Gibson wrote:
> On Fri, Mar 13, 2026 at 07:26:18PM +0100, Laurent Vivier wrote:
>> Previously, callers had to pre-initialize virtqueue elements with iovec
>> entries using vu_set_element() or vu_init_elem() before calling
>> vu_collect(). This meant each element owned a fixed, pre-assigned iovec
>> slot.
>>
>> Move the iovec array into vu_collect() as explicit parameters (in_sg,
>> max_in_sg, and in_num), letting it pass the remaining iovec capacity
>> directly to vu_queue_pop(). A running current_iov counter tracks
>> consumed entries across elements, so multiple elements share a single
>> iovec pool. The optional in_num output parameter reports how many iovec
>> entries were consumed, allowing callers to track usage across multiple
>> vu_collect() calls.
>>
>> This removes vu_set_element() and vu_init_elem(), which are no longer
>> needed, and is a prerequisite for multi-buffer support, where a single
>> virtqueue element can use more than one iovec entry. For now, callers
>> assert the current single-iovec-per-element invariant until they are
>> updated to handle multiple iovecs.
>>
>> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
>
> Reviewed-by: David Gibson
>
> Couple of thoughts on possible polish below.
>
> [snip]
>>  /**
>>   * vu_collect() - collect virtio buffers from a given virtqueue
>>   * @vdev:	vhost-user device
>>   * @vq:		virtqueue to collect from
>> - * @elem:	Array of virtqueue element
>> - *		each element must be initialized with one iovec entry
>> - *		in the in_sg array.
>> + * @elem:	Array of @max_elem virtqueue elements
>>   * @max_elem:	Number of virtqueue elements in the array
>> + * @in_sg:	Incoming iovec array for device-writable descriptors
>> + * @max_in_sg:	Maximum number of entries in @in_sg
>> + * @in_num:	Number of collected entries from @in_sg (output)
>>   * @size:	Maximum size of the data in the frame
>>   * @collected:	Collected buffer length, up to @size, set on return
>>   *
>> @@ -80,20 +67,21 @@ void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov, int elem_cnt
>>   */
>>  int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>>  	       struct vu_virtq_element *elem, int max_elem,
>> +	       struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
>>  	       size_t size, size_t *collected)
>>  {
>>  	size_t current_size = 0;
>> +	size_t current_iov = 0;
>>  	int elem_cnt = 0;
>>
>> -	while (current_size < size && elem_cnt < max_elem) {
>> -		struct iovec *iov;
>> +	while (current_size < size && elem_cnt < max_elem &&
>> +	       current_iov < max_in_sg) {
>>  		int ret;
>>
>>  		ret = vu_queue_pop(vdev, vq, &elem[elem_cnt],
>> -				   elem[elem_cnt].in_sg,
>> -				   elem[elem_cnt].in_num,
>> -				   elem[elem_cnt].out_sg,
>> -				   elem[elem_cnt].out_num);
>> +				   &in_sg[current_iov],
>> +				   max_in_sg - current_iov,
>> +				   NULL, 0);
>>  		if (ret < 0)
>>  			break;
>>
>> @@ -103,18 +91,22 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>>  			break;
>>  		}
>>
>> -		iov = &elem[elem_cnt].in_sg[0];
>> -
>> -		if (iov->iov_len > size - current_size)
>> -			iov->iov_len = size - current_size;
>> +		elem[elem_cnt].in_num = iov_truncate(elem[elem_cnt].in_sg,
>> +						     elem[elem_cnt].in_num,
>> +						     size - current_size);
>
> Will elem[].in_num always end up with the same value as the @in_num
> parameter? If so, do we need the explicit parameter?

The @in_num parameter of vu_collect()?

@in_num is the sum of all elem[].in_num, so the caller could compute it
from elem, but it is simpler to return it, as we need to compute it in
the loop anyway.
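The caller-side equivalent would be something like this (untested
sketch):

	size_t in_num = 0;

	/* sum the per-element iovec counts to recover @in_num */
	for (i = 0; i < elem_cnt; i++)
		in_num += elem[i].in_num;
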
>
>>
>> -		current_size += iov->iov_len;
>> +		current_size += iov_size(elem[elem_cnt].in_sg,
>> +					 elem[elem_cnt].in_num);
>> +		current_iov += elem[elem_cnt].in_num;
>>  		elem_cnt++;
>>
>>  		if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
>>  			break;
>>  	}
>>
>> +	if (in_num)
>> +		*in_num = current_iov;
>> +
>>  	if (collected)
>>  		*collected = current_size;
>>
>> @@ -147,8 +139,11 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
>>  {
>>  	int i;
>>
>> -	for (i = 0; i < elem_cnt; i++)
>> -		vu_queue_fill(vdev, vq, &elem[i], elem[i].in_sg[0].iov_len, i);
>> +	for (i = 0; i < elem_cnt; i++) {
>> +		size_t elem_size = iov_size(elem[i].in_sg, elem[i].in_num);
>
> IIUC, the elem structure itself isn't shared with vhost, so we can
> alter it. Would it make sense to cache the number of bytes allocated
> to the element there, to avoid repeated calls to iov_size()?

It's possible, but I think it could be complicated to keep the cached
value in elem in sync with the actual size of the iovec array, as we
alter the array at several points.
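For illustration, the caching would look something like this (untested
sketch, assuming a hypothetical 'size' field added to struct
vu_virtq_element):

	/* in vu_collect(): record the element size once... */
	elem[elem_cnt].size = iov_size(elem[elem_cnt].in_sg,
				       elem[elem_cnt].in_num);
	current_size += elem[elem_cnt].size;

	/* ...and in vu_flush(): reuse it instead of calling iov_size() */
	for (i = 0; i < elem_cnt; i++)
		vu_queue_fill(vdev, vq, &elem[i], elem[i].size, i);

Every place that later modifies in_sg/in_num would then also have to
update elem[].size, which is the synchronization problem above.
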
Thanks,
Laurent

>
>> +
>> +		vu_queue_fill(vdev, vq, &elem[i], elem_size, i);
>> +	}
>>
>>  	vu_queue_flush(vdev, vq, elem_cnt);
>>  	vu_queue_notify(vdev, vq);
>> @@ -246,7 +241,7 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
>>  	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
>>  	struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
>>  	struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
>> -	size_t total;
>> +	size_t total, in_num;
>>  	int elem_cnt;
>>  	int i;
>>
>> @@ -257,11 +252,10 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
>>  		return -1;
>>  	}
>>
>> -	vu_init_elem(elem, in_sg, VIRTQUEUE_MAX_SIZE);
>> -
>>  	size += VNET_HLEN;
>> -	elem_cnt = vu_collect(vdev, vq, elem, VIRTQUEUE_MAX_SIZE, size, &total);
>> -	if (total < size) {
>> +	elem_cnt = vu_collect(vdev, vq, elem, ARRAY_SIZE(elem), in_sg,
>> +			      ARRAY_SIZE(in_sg), &in_num, size, &total);
>> +	if (elem_cnt == 0 || total < size) {
>>  		debug("vu_send_single: no space to send the data "
>>  		      "elem_cnt %d size %zd", elem_cnt, total);
>>  		goto err;
>> @@ -272,10 +266,10 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
>>  	total -= VNET_HLEN;
>>
>>  	/* copy data from the buffer to the iovec */
>> -	iov_from_buf(in_sg, elem_cnt, VNET_HLEN, buf, total);
>> +	iov_from_buf(in_sg, in_num, VNET_HLEN, buf, total);
>>
>>  	if (*c->pcap)
>> -		pcap_iov(in_sg, elem_cnt, VNET_HLEN);
>> +		pcap_iov(in_sg, in_num, VNET_HLEN);
>>
>>  	vu_flush(vdev, vq, elem, elem_cnt);
>>
>> diff --git a/vu_common.h b/vu_common.h
>> index 865d9771fa89..6c31630e8712 100644
>> --- a/vu_common.h
>> +++ b/vu_common.h
>> @@ -35,26 +35,10 @@ static inline void *vu_payloadv6(void *base)
>>  	return (struct ipv6hdr *)vu_ip(base) + 1;
>>  }
>>
>> -/**
>> - * vu_set_element() - Initialize a vu_virtq_element
>> - * @elem:	Element to initialize
>> - * @out_sg:	One out iovec entry to set in elem
>> - * @in_sg:	One in iovec entry to set in elem
>> - */
>> -static inline void vu_set_element(struct vu_virtq_element *elem,
>> -				  struct iovec *out_sg, struct iovec *in_sg)
>> -{
>> -	elem->out_num = !!out_sg;
>> -	elem->out_sg = out_sg;
>> -	elem->in_num = !!in_sg;
>> -	elem->in_sg = in_sg;
>> -}
>> -
>> -void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov,
>> -		  int elem_cnt);
>>  int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>> -	       struct vu_virtq_element *elem, int max_elem, size_t size,
>> -	       size_t *collected);
>> +	       struct vu_virtq_element *elem, int max_elem,
>> +	       struct iovec *in_sg, size_t max_in_sg, size_t *in_num,
>> +	       size_t size, size_t *collected);
>>  void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr, int num_buffers);
>>  void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
>>  	       struct vu_virtq_element *elem, int elem_cnt);
>> --
>> 2.53.0