From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH] vhost_user: fix multibuffer from Linux
Date: Wed, 15 Jan 2025 17:22:30 +0100
Message-ID: <20250115162230.813861-1-lvivier@redhat.com>

Under some conditions, Linux can provide several buffers in the same
element (multiple entries in the iovec array).

I didn't identify what changed between the guest kernel that provides
one buffer and the one that provides several (it doesn't seem to be a
kernel change or a configuration change).

Fix the following assert:

  ASSERTION FAILED in virtqueue_map_desc (virtio.c:402): num_sg < max_num_sg

What I can see is that the buffer can be split into two iovecs:
- the vnet header
- the packet data

This change handles this special case, but the real fix will be to
allow tap_add_packet() to manage an iovec array.

Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
 vu_common.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/vu_common.c b/vu_common.c
index 6d365bea5fe2..431fba6be0c0 100644
--- a/vu_common.c
+++ b/vu_common.c
@@ -18,6 +18,8 @@
 #include "pcap.h"
 #include "vu_common.h"
 
+#define VU_MAX_TX_BUFFER_NB	2
+
 /**
  * vu_packet_check_range() - Check if a given memory zone is contained in
  *                           a mapped guest memory region
@@ -168,10 +170,15 @@ static void vu_handle_tx(struct vu_dev *vdev, int index,
 	count = 0;
 	out_sg_count = 0;
 
-	while (count < VIRTQUEUE_MAX_SIZE) {
+	while (count < VIRTQUEUE_MAX_SIZE &&
+	       out_sg_count + VU_MAX_TX_BUFFER_NB <= VIRTQUEUE_MAX_SIZE) {
 		int ret;
 
-		vu_set_element(&elem[count], &out_sg[out_sg_count], NULL);
+		elem[count].out_num = VU_MAX_TX_BUFFER_NB;
+		elem[count].out_sg = &out_sg[out_sg_count];
+		elem[count].in_num = 0;
+		elem[count].in_sg = NULL;
+
 		ret = vu_queue_pop(vdev, vq, &elem[count]);
 		if (ret < 0)
 			break;
@@ -181,11 +188,20 @@ static void vu_handle_tx(struct vu_dev *vdev, int index,
 			warn("virtio-net transmit queue contains no out buffers");
 			break;
 		}
-		ASSERT(elem[count].out_num == 1);
+		if (elem[count].out_num == 1) {
+			tap_add_packet(vdev->context,
+				       elem[count].out_sg[0].iov_len - hdrlen,
+				       (char *)elem[count].out_sg[0].iov_base +
+				       hdrlen);
+		} else {
+			/* The vnet header can be in a separate iovec */
+			ASSERT(elem[count].out_num == 2);
+			ASSERT(elem[count].out_sg[0].iov_len == (size_t)hdrlen);
+			tap_add_packet(vdev->context,
+				       elem[count].out_sg[1].iov_len,
+				       (char *)elem[count].out_sg[1].iov_base);
+		}
 
-		tap_add_packet(vdev->context,
-			       elem[count].out_sg[0].iov_len - hdrlen,
-			       (char *)elem[count].out_sg[0].iov_base + hdrlen);
 		count++;
 	}
 	tap_handler(vdev->context, now);
-- 
2.47.1
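
Not part of this patch, but as an illustration of the longer-term fix
mentioned above (letting tap_add_packet() deal with an iovec array), here
is a minimal sketch of what such a helper could look like. The name
tap_add_packet_iov(), the headers it pulls in and the copy-based
linearisation are assumptions for illustration, not existing passt code:

#include <limits.h>
#include <string.h>
#include <sys/uio.h>

#include "passt.h"	/* struct ctx (assumed location) */
#include "tap.h"	/* tap_add_packet() (assumed location) */

/* tap_add_packet_iov() - Hypothetical variant of tap_add_packet() that
 *			  accepts a whole out_sg[] array, skips hdrlen bytes
 *			  of vnet header wherever they fall, and hands the
 *			  linearised payload to tap_add_packet().  A real
 *			  fix would avoid the extra copy.
 */
static void tap_add_packet_iov(struct ctx *c, const struct iovec *iov,
			       size_t iov_cnt, size_t hdrlen)
{
	static char buf[USHRT_MAX];	/* assumed upper bound on frame size */
	size_t len = 0, skip = hdrlen, i;

	for (i = 0; i < iov_cnt; i++) {
		const char *base = iov[i].iov_base;
		size_t l = iov[i].iov_len;

		if (skip) {		/* drop vnet header bytes first */
			size_t s = l < skip ? l : skip;

			base += s;
			l -= s;
			skip -= s;
		}

		if (len + l > sizeof(buf))
			return;		/* oversized frame: drop it */

		memcpy(buf + len, base, l);
		len += l;
	}

	tap_add_packet(c, len, buf);
}

With something along these lines, vu_handle_tx() could pass
elem[count].out_sg and elem[count].out_num straight through instead of
special-casing out_num == 1 and out_num == 2.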