From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH] vhost_user: fix multibuffer from linux
Date: Wed, 15 Jan 2025 17:22:30 +0100
Message-ID: <20250115162230.813861-1-lvivier@redhat.com>
Under some conditions, a Linux guest can provide several buffers
in the same element (multiple entries in the iovec array).
I couldn't identify what changed between the guest kernel that
provides one buffer and the one that provides several (it doesn't
seem to be a kernel change or a configuration change).
Fix the following assert:
ASSERTION FAILED in virtqueue_map_desc (virtio.c:402): num_sg < max_num_sg
What I can see is that the buffer can be split into two iovecs:
- vnet header
- packet data
This change handles this special case, but the real fix will be to
allow tap_add_packet() to accept an iovec array, as sketched below.
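For illustration, a minimal sketch of the iovec handling such a
generalised tap_add_packet() would need (not passt code;
skip_vnet_hdr() is a hypothetical helper name used only here):

#include <stddef.h>
#include <sys/uio.h>

/* Sketch only: find where the packet data starts once hdrlen bytes
 * of vnet header have been skipped, wherever the header lands in the
 * guest-provided iovec array.  Returns the index of the iovec holding
 * the first data byte and sets *data_off to the offset inside it, or
 * returns -1 if the chain is shorter than the header.
 */
static int skip_vnet_hdr(const struct iovec *iov, size_t iov_cnt,
			 size_t hdrlen, size_t *data_off)
{
	size_t i, skip = hdrlen;

	for (i = 0; i < iov_cnt; i++) {
		if (skip < iov[i].iov_len) {
			*data_off = skip;
			return (int)i;
		}
		skip -= iov[i].iov_len;
	}

	return -1;
}

With a helper like this, the one-buffer and two-buffer layouts handled
separately in the hunk below become the same case, as long as the
packet data itself is contiguous once the header is skipped.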
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
vu_common.c | 28 ++++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/vu_common.c b/vu_common.c
index 6d365bea5fe2..431fba6be0c0 100644
--- a/vu_common.c
+++ b/vu_common.c
@@ -18,6 +18,8 @@
#include "pcap.h"
#include "vu_common.h"
+#define VU_MAX_TX_BUFFER_NB 2
+
/**
* vu_packet_check_range() - Check if a given memory zone is contained in
* a mapped guest memory region
@@ -168,10 +170,15 @@ static void vu_handle_tx(struct vu_dev *vdev, int index,
count = 0;
out_sg_count = 0;
- while (count < VIRTQUEUE_MAX_SIZE) {
+ while (count < VIRTQUEUE_MAX_SIZE &&
+ out_sg_count + VU_MAX_TX_BUFFER_NB <= VIRTQUEUE_MAX_SIZE) {
int ret;
- vu_set_element(&elem[count], &out_sg[out_sg_count], NULL);
+ elem[count].out_num = VU_MAX_TX_BUFFER_NB;
+ elem[count].out_sg = &out_sg[out_sg_count];
+ elem[count].in_num = 0;
+ elem[count].in_sg = NULL;
+
ret = vu_queue_pop(vdev, vq, &elem[count]);
if (ret < 0)
break;
@@ -181,11 +188,20 @@ static void vu_handle_tx(struct vu_dev *vdev, int index,
warn("virtio-net transmit queue contains no out buffers");
break;
}
- ASSERT(elem[count].out_num == 1);
+ if (elem[count].out_num == 1) {
+ tap_add_packet(vdev->context,
+ elem[count].out_sg[0].iov_len - hdrlen,
+ (char *)elem[count].out_sg[0].iov_base +
+ hdrlen);
+ } else {
+ /* vnet header can be in a separate iovec */
+ ASSERT(elem[count].out_num == 2);
+ ASSERT(elem[count].out_sg[0].iov_len == (size_t)hdrlen);
+ tap_add_packet(vdev->context,
+ elem[count].out_sg[1].iov_len,
+ (char *)elem[count].out_sg[1].iov_base);
+ }
- tap_add_packet(vdev->context,
- elem[count].out_sg[0].iov_len - hdrlen,
- (char *)elem[count].out_sg[0].iov_base + hdrlen);
count++;
}
tap_handler(vdev->context, now);
--
2.47.1