* [PATCH 1/3] vhost-user: Fix VHOST_USER_GET_QUEUE_NUM to return number of queues
2025-09-05 15:49 [PATCH 0/3] This series contains fixes and improvements to the vhost-user implementation Laurent Vivier
@ 2025-09-05 15:49 ` Laurent Vivier
2025-09-08 1:58 ` David Gibson
2025-09-05 15:49 ` [PATCH 2/3] udp_vu: Pass virtqueue pointer to udp_vu_sock_recv() Laurent Vivier
` (2 subsequent siblings)
3 siblings, 1 reply; 8+ messages in thread
From: Laurent Vivier @ 2025-09-05 15:49 UTC
To: passt-dev; +Cc: Laurent Vivier
The vhost-user specification states that VHOST_USER_GET_QUEUE_NUM should
return the maximum number of queues supported by the back-end, not the
number of virtqueues. Since each queue pair consists of RX and TX
virtqueues, we need to divide VHOST_USER_MAX_QUEUES by 2 to get the
correct queue count.
Also rename VHOST_USER_MAX_QUEUES to VHOST_USER_MAX_VQS throughout the
codebase to better reflect that it represents the maximum number of
virtqueues, not queue pairs.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
vhost_user.c | 16 +++++++++-------
virtio.h | 4 ++--
2 files changed, 11 insertions(+), 9 deletions(-)
diff --git a/vhost_user.c b/vhost_user.c
index f97ec6064cac..fa343a86fac2 100644
--- a/vhost_user.c
+++ b/vhost_user.c
@@ -345,7 +345,7 @@ static void vu_set_enable_all_rings(struct vu_dev *vdev, bool enable)
{
uint16_t i;
- for (i = 0; i < VHOST_USER_MAX_QUEUES; i++)
+ for (i = 0; i < VHOST_USER_MAX_VQS; i++)
vdev->vq[i].enable = enable;
}
@@ -477,7 +477,7 @@ static bool vu_set_mem_table_exec(struct vu_dev *vdev,
close(vmsg->fds[i]);
}
- for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
+ for (i = 0; i < VHOST_USER_MAX_VQS; i++) {
if (vdev->vq[i].vring.desc) {
if (map_ring(vdev, &vdev->vq[i]))
die("remapping queue %d during setmemtable", i);
@@ -770,7 +770,7 @@ static void vu_check_queue_msg_file(struct vhost_user_msg *vmsg)
bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
int idx = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
- if (idx >= VHOST_USER_MAX_QUEUES)
+ if (idx >= VHOST_USER_MAX_VQS)
die("Invalid vhost-user queue index: %u", idx);
if (nofd) {
@@ -939,7 +939,9 @@ static bool vu_get_queue_num_exec(struct vu_dev *vdev,
{
(void)vdev;
- vmsg_set_reply_u64(vmsg, VHOST_USER_MAX_QUEUES);
+ vmsg_set_reply_u64(vmsg, VHOST_USER_MAX_VQS / 2);
+
+ debug("VHOST_USER_MAX_VQS %u", VHOST_USER_MAX_VQS / 2);
return true;
}
@@ -960,7 +962,7 @@ static bool vu_set_vring_enable_exec(struct vu_dev *vdev,
debug("State.index: %u", idx);
debug("State.enable: %u", enable);
- if (idx >= VHOST_USER_MAX_QUEUES)
+ if (idx >= VHOST_USER_MAX_VQS)
die("Invalid vring_enable index: %u", idx);
vdev->vq[idx].enable = enable;
@@ -1052,7 +1054,7 @@ void vu_init(struct ctx *c)
c->vdev = &vdev_storage;
c->vdev->context = c;
- for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
+ for (i = 0; i < VHOST_USER_MAX_VQS; i++) {
c->vdev->vq[i] = (struct vu_virtq){
.call_fd = -1,
.kick_fd = -1,
@@ -1075,7 +1077,7 @@ void vu_cleanup(struct vu_dev *vdev)
{
unsigned int i;
- for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
+ for (i = 0; i < VHOST_USER_MAX_VQS; i++) {
struct vu_virtq *vq = &vdev->vq[i];
vq->started = false;
diff --git a/virtio.h b/virtio.h
index b55cc4042521..12caaa0b6def 100644
--- a/virtio.h
+++ b/virtio.h
@@ -88,7 +88,7 @@ struct vu_dev_region {
uint64_t mmap_addr;
};
-#define VHOST_USER_MAX_QUEUES 2
+#define VHOST_USER_MAX_VQS 2
/*
* Set a reasonable maximum number of ram slots, which will be supported by
@@ -121,7 +121,7 @@ struct vdev_memory {
struct vu_dev {
struct ctx *context;
struct vdev_memory memory;
- struct vu_virtq vq[VHOST_USER_MAX_QUEUES];
+ struct vu_virtq vq[VHOST_USER_MAX_VQS];
uint64_t features;
uint64_t protocol_features;
int log_call_fd;
--
2.50.1
* Re: [PATCH 1/3] vhost-user: Fix VHOST_USER_GET_QUEUE_NUM to return number of queues
2025-09-05 15:49 ` [PATCH 1/3] vhost-user: Fix VHOST_USER_GET_QUEUE_NUM to return number of queues Laurent Vivier
@ 2025-09-08 1:58 ` David Gibson
0 siblings, 0 replies; 8+ messages in thread
From: David Gibson @ 2025-09-08 1:58 UTC
To: Laurent Vivier; +Cc: passt-dev
On Fri, Sep 05, 2025 at 05:49:33PM +0200, Laurent Vivier wrote:
> The vhost-user specification states that VHOST_USER_GET_QUEUE_NUM should
> return the maximum number of queues supported by the back-end, not the
> number of virtqueues. Since each queue pair consists of RX and TX
> virtqueues, we need to divide VHOST_USER_MAX_QUEUES by 2 to get the
> correct queue count.
>
> Also rename VHOST_USER_MAX_QUEUES to VHOST_USER_MAX_VQS throughout the
> codebase to better reflect that it represents the maximum number of
> virtqueues, not queue pairs.
>
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
> vhost_user.c | 16 +++++++++-------
> virtio.h | 4 ++--
> 2 files changed, 11 insertions(+), 9 deletions(-)
>
> diff --git a/vhost_user.c b/vhost_user.c
> index f97ec6064cac..fa343a86fac2 100644
> --- a/vhost_user.c
> +++ b/vhost_user.c
> @@ -345,7 +345,7 @@ static void vu_set_enable_all_rings(struct vu_dev *vdev, bool enable)
> {
> uint16_t i;
>
> - for (i = 0; i < VHOST_USER_MAX_QUEUES; i++)
> + for (i = 0; i < VHOST_USER_MAX_VQS; i++)
> vdev->vq[i].enable = enable;
> }
>
> @@ -477,7 +477,7 @@ static bool vu_set_mem_table_exec(struct vu_dev *vdev,
> close(vmsg->fds[i]);
> }
>
> - for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
> + for (i = 0; i < VHOST_USER_MAX_VQS; i++) {
> if (vdev->vq[i].vring.desc) {
> if (map_ring(vdev, &vdev->vq[i]))
> die("remapping queue %d during setmemtable", i);
> @@ -770,7 +770,7 @@ static void vu_check_queue_msg_file(struct vhost_user_msg *vmsg)
> bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> int idx = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
>
> - if (idx >= VHOST_USER_MAX_QUEUES)
> + if (idx >= VHOST_USER_MAX_VQS)
> die("Invalid vhost-user queue index: %u", idx);
>
> if (nofd) {
> @@ -939,7 +939,9 @@ static bool vu_get_queue_num_exec(struct vu_dev *vdev,
> {
> (void)vdev;
>
> - vmsg_set_reply_u64(vmsg, VHOST_USER_MAX_QUEUES);
> + vmsg_set_reply_u64(vmsg, VHOST_USER_MAX_VQS / 2);
> +
> + debug("VHOST_USER_MAX_VQS %u", VHOST_USER_MAX_VQS / 2);
>
> return true;
> }
> @@ -960,7 +962,7 @@ static bool vu_set_vring_enable_exec(struct vu_dev *vdev,
> debug("State.index: %u", idx);
> debug("State.enable: %u", enable);
>
> - if (idx >= VHOST_USER_MAX_QUEUES)
> + if (idx >= VHOST_USER_MAX_VQS)
> die("Invalid vring_enable index: %u", idx);
>
> vdev->vq[idx].enable = enable;
> @@ -1052,7 +1054,7 @@ void vu_init(struct ctx *c)
>
> c->vdev = &vdev_storage;
> c->vdev->context = c;
> - for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
> + for (i = 0; i < VHOST_USER_MAX_VQS; i++) {
> c->vdev->vq[i] = (struct vu_virtq){
> .call_fd = -1,
> .kick_fd = -1,
> @@ -1075,7 +1077,7 @@ void vu_cleanup(struct vu_dev *vdev)
> {
> unsigned int i;
>
> - for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
> + for (i = 0; i < VHOST_USER_MAX_VQS; i++) {
> struct vu_virtq *vq = &vdev->vq[i];
>
> vq->started = false;
> diff --git a/virtio.h b/virtio.h
> index b55cc4042521..12caaa0b6def 100644
> --- a/virtio.h
> +++ b/virtio.h
> @@ -88,7 +88,7 @@ struct vu_dev_region {
> uint64_t mmap_addr;
> };
>
> -#define VHOST_USER_MAX_QUEUES 2
> +#define VHOST_USER_MAX_VQS 2
>
> /*
> * Set a reasonable maximum number of ram slots, which will be supported by
> @@ -121,7 +121,7 @@ struct vdev_memory {
> struct vu_dev {
> struct ctx *context;
> struct vdev_memory memory;
> - struct vu_virtq vq[VHOST_USER_MAX_QUEUES];
> + struct vu_virtq vq[VHOST_USER_MAX_VQS];
> uint64_t features;
> uint64_t protocol_features;
> int log_call_fd;
> --
> 2.50.1
>
--
David Gibson (he or they) | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you, not the other way
| around.
http://www.ozlabs.org/~dgibson
* [PATCH 2/3] udp_vu: Pass virtqueue pointer to udp_vu_sock_recv()
2025-09-05 15:49 [PATCH 0/3] This series contains fixes and improvements to the vhost-user implementation Laurent Vivier
2025-09-05 15:49 ` [PATCH 1/3] vhost-user: Fix VHOST_USER_GET_QUEUE_NUM to return number of queues Laurent Vivier
@ 2025-09-05 15:49 ` Laurent Vivier
2025-09-08 1:59 ` David Gibson
2025-09-05 15:49 ` [PATCH 3/3] tcp_vu: Pass virtqueue pointer to tcp_vu_sock_recv() Laurent Vivier
2025-09-09 20:24 ` [PATCH 0/3] This series contains fixes and improvements to the vhost-user implementation Stefano Brivio
3 siblings, 1 reply; 8+ messages in thread
From: Laurent Vivier @ 2025-09-05 15:49 UTC
To: passt-dev; +Cc: Laurent Vivier
Pass the virtqueue pointer to udp_vu_sock_recv() to enable proper
queue selection for multiqueue support. This ensures that received
packets are processed on the same virtqueue as the caller.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
udp_vu.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/udp_vu.c b/udp_vu.c
index 2fb7b900ca31..099677f914e7 100644
--- a/udp_vu.c
+++ b/udp_vu.c
@@ -60,16 +60,17 @@ static size_t udp_vu_hdrlen(bool v6)
/**
* udp_vu_sock_recv() - Receive datagrams from socket into vhost-user buffers
* @c: Execution context
+ * @vq: virtqueue to use to receive data
* @s: Socket to receive from
* @v6: Set for IPv6 connections
* @dlen: Size of received data (output)
*
* Return: number of iov entries used to store the datagram
*/
-static int udp_vu_sock_recv(const struct ctx *c, int s, bool v6, ssize_t *dlen)
+static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
+ bool v6, ssize_t *dlen)
{
- struct vu_dev *vdev = c->vdev;
- struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+ const struct vu_dev *vdev = c->vdev;
int iov_cnt, idx, iov_used;
struct msghdr msg = { 0 };
size_t off, hdrlen;
@@ -210,7 +211,7 @@ void udp_vu_sock_to_tap(const struct ctx *c, int s, int n, flow_sidx_t tosidx)
ssize_t dlen;
int iov_used;
- iov_used = udp_vu_sock_recv(c, s, v6, &dlen);
+ iov_used = udp_vu_sock_recv(c, vq, s, v6, &dlen);
if (iov_used <= 0)
break;
--
2.50.1
* Re: [PATCH 2/3] udp_vu: Pass virtqueue pointer to udp_vu_sock_recv()
2025-09-05 15:49 ` [PATCH 2/3] udp_vu: Pass virtqueue pointer to udp_vu_sock_recv() Laurent Vivier
@ 2025-09-08 1:59 ` David Gibson
0 siblings, 0 replies; 8+ messages in thread
From: David Gibson @ 2025-09-08 1:59 UTC
To: Laurent Vivier; +Cc: passt-dev
On Fri, Sep 05, 2025 at 05:49:34PM +0200, Laurent Vivier wrote:
> Pass the virtqueue pointer to udp_vu_sock_recv() to enable proper
> queue selection for multiqueue support. This ensures that received
> packets are processed on the same virtqueue as the caller.
>
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Regardless of future plans, I think it's cleaner to pass the vq than
to rederive it.
> ---
> udp_vu.c | 9 +++++----
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/udp_vu.c b/udp_vu.c
> index 2fb7b900ca31..099677f914e7 100644
> --- a/udp_vu.c
> +++ b/udp_vu.c
> @@ -60,16 +60,17 @@ static size_t udp_vu_hdrlen(bool v6)
> /**
> * udp_vu_sock_recv() - Receive datagrams from socket into vhost-user buffers
> * @c: Execution context
> + * @vq: virtqueue to use to receive data
> * @s: Socket to receive from
> * @v6: Set for IPv6 connections
> * @dlen: Size of received data (output)
> *
> * Return: number of iov entries used to store the datagram
> */
> -static int udp_vu_sock_recv(const struct ctx *c, int s, bool v6, ssize_t *dlen)
> +static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
> + bool v6, ssize_t *dlen)
> {
> - struct vu_dev *vdev = c->vdev;
> - struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
> + const struct vu_dev *vdev = c->vdev;
> int iov_cnt, idx, iov_used;
> struct msghdr msg = { 0 };
> size_t off, hdrlen;
> @@ -210,7 +211,7 @@ void udp_vu_sock_to_tap(const struct ctx *c, int s, int n, flow_sidx_t tosidx)
> ssize_t dlen;
> int iov_used;
>
> - iov_used = udp_vu_sock_recv(c, s, v6, &dlen);
> + iov_used = udp_vu_sock_recv(c, vq, s, v6, &dlen);
> if (iov_used <= 0)
> break;
>
> --
> 2.50.1
>
* [PATCH 3/3] tcp_vu: Pass virtqueue pointer to tcp_vu_sock_recv()
2025-09-05 15:49 [PATCH 0/3] This series contains fixes and improvements to the vhost-user implementation Laurent Vivier
2025-09-05 15:49 ` [PATCH 1/3] vhost-user: Fix VHOST_USER_GET_QUEUE_NUM to return number of queues Laurent Vivier
2025-09-05 15:49 ` [PATCH 2/3] udp_vu: Pass virtqueue pointer to udp_vu_sock_recv() Laurent Vivier
@ 2025-09-05 15:49 ` Laurent Vivier
2025-09-08 2:00 ` David Gibson
2025-09-09 20:24 ` [PATCH 0/3] This series contains fixes and improvements to the vhost-user implementation Stefano Brivio
3 siblings, 1 reply; 8+ messages in thread
From: Laurent Vivier @ 2025-09-05 15:49 UTC
To: passt-dev; +Cc: Laurent Vivier
Pass the virtqueue pointer to tcp_vu_sock_recv() to enable proper
queue selection for multiqueue support. This ensures that received
packets are processed on the same virtqueue as the caller.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
tcp_vu.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tcp_vu.c b/tcp_vu.c
index cb39bc20355b..c6b5b91ec266 100644
--- a/tcp_vu.c
+++ b/tcp_vu.c
@@ -171,6 +171,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
/** tcp_vu_sock_recv() - Receive datastream from socket into vhost-user buffers
* @c: Execution context
+ * @vq: virtqueue to use to receive data
* @conn: Connection pointer
* @v6: Set for IPv6 connections
* @already_sent: Number of bytes already sent
@@ -181,13 +182,12 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
* Return: number of bytes received from the socket, or a negative error code
* on failure.
*/
-static ssize_t tcp_vu_sock_recv(const struct ctx *c,
+static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
const struct tcp_tap_conn *conn, bool v6,
uint32_t already_sent, size_t fillsize,
int *iov_cnt, int *head_cnt)
{
- struct vu_dev *vdev = c->vdev;
- struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+ const struct vu_dev *vdev = c->vdev;
struct msghdr mh_sock = { 0 };
uint16_t mss = MSS_GET(conn);
int s = conn->sock;
@@ -398,7 +398,7 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
/* collect the buffers from vhost-user and fill them with the
* data from the socket
*/
- len = tcp_vu_sock_recv(c, conn, v6, already_sent, fillsize,
+ len = tcp_vu_sock_recv(c, vq, conn, v6, already_sent, fillsize,
&iov_cnt, &head_cnt);
if (len < 0) {
if (len != -EAGAIN && len != -EWOULDBLOCK) {
--
2.50.1
* Re: [PATCH 3/3] tcp_vu: Pass virtqueue pointer to tcp_vu_sock_recv()
2025-09-05 15:49 ` [PATCH 3/3] tcp_vu: Pass virtqueue pointer to tcp_vu_sock_recv() Laurent Vivier
@ 2025-09-08 2:00 ` David Gibson
0 siblings, 0 replies; 8+ messages in thread
From: David Gibson @ 2025-09-08 2:00 UTC
To: Laurent Vivier; +Cc: passt-dev
On Fri, Sep 05, 2025 at 05:49:35PM +0200, Laurent Vivier wrote:
> Pass the virtqueue pointer to tcp_vu_sock_recv() to enable proper
> queue selection for multiqueue support. This ensures that received
> packets are processed on the same virtqueue as the caller.
>
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
> tcp_vu.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/tcp_vu.c b/tcp_vu.c
> index cb39bc20355b..c6b5b91ec266 100644
> --- a/tcp_vu.c
> +++ b/tcp_vu.c
> @@ -171,6 +171,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>
> /** tcp_vu_sock_recv() - Receive datastream from socket into vhost-user buffers
> * @c: Execution context
> + * @vq: virtqueue to use to receive data
> * @conn: Connection pointer
> * @v6: Set for IPv6 connections
> * @already_sent: Number of bytes already sent
> @@ -181,13 +182,12 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
> * Return: number of bytes received from the socket, or a negative error code
> * on failure.
> */
> -static ssize_t tcp_vu_sock_recv(const struct ctx *c,
> +static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
> const struct tcp_tap_conn *conn, bool v6,
> uint32_t already_sent, size_t fillsize,
> int *iov_cnt, int *head_cnt)
> {
> - struct vu_dev *vdev = c->vdev;
> - struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
> + const struct vu_dev *vdev = c->vdev;
> struct msghdr mh_sock = { 0 };
> uint16_t mss = MSS_GET(conn);
> int s = conn->sock;
> @@ -398,7 +398,7 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
> /* collect the buffers from vhost-user and fill them with the
> * data from the socket
> */
> - len = tcp_vu_sock_recv(c, conn, v6, already_sent, fillsize,
> + len = tcp_vu_sock_recv(c, vq, conn, v6, already_sent, fillsize,
> &iov_cnt, &head_cnt);
> if (len < 0) {
> if (len != -EAGAIN && len != -EWOULDBLOCK) {
> --
> 2.50.1
>
* Re: [PATCH 0/3] This series contains fixes and improvements to the vhost-user implementation
2025-09-05 15:49 [PATCH 0/3] This series contains fixes and improvements to the vhost-user implementation Laurent Vivier
` (2 preceding siblings ...)
2025-09-05 15:49 ` [PATCH 3/3] tcp_vu: Pass virtqueue pointer to tcp_vu_sock_recv() Laurent Vivier
@ 2025-09-09 20:24 ` Stefano Brivio
3 siblings, 0 replies; 8+ messages in thread
From: Stefano Brivio @ 2025-09-09 20:24 UTC
To: Laurent Vivier; +Cc: passt-dev
On Fri, 5 Sep 2025 17:49:32 +0200
Laurent Vivier <lvivier@redhat.com> wrote:
> The first patch fixes a protocol compliance issue where VHOST_USER_GET_QUEUE_NUM
> was incorrectly returning the number of virtqueues instead of queue pairs. The
> vhost-user specification clearly states this should return the maximum number
> of queues supported by the backend. Since each queue consists of an RX/TX pair,
> the correct value is VHOST_USER_MAX_VQS / 2. This patch also renames the
> constant to better reflect its meaning.
>
> The second and third patches address virtqueue handling in the UDP and TCP
> receive paths. These changes pass the virtqueue pointer explicitly to the
> receive functions, enabling proper queue selection for future multiqueue
> support. This ensures packets are processed on the same virtqueue as the
> caller.
>
> These changes lay the groundwork for full multiqueue support while fixing
> existing protocol compliance issues.
>
> Laurent Vivier (3):
> vhost-user: Fix VHOST_USER_GET_QUEUE_NUM to return number of queues
> udp_vu: Pass virtqueue pointer to udp_vu_sock_recv()
> tcp_vu: Pass virtqueue pointer to tcp_vu_sock_recv()
Applied.
--
Stefano