* [PATCH v4 0/4] Add vhost-user support to passt. (part 3)
@ 2024-09-06 16:04 Laurent Vivier
2024-09-06 16:04 ` [PATCH v4 1/4] packet: replace struct desc by struct iovec Laurent Vivier
` (3 more replies)
0 siblings, 4 replies; 15+ messages in thread
From: Laurent Vivier @ 2024-09-06 16:04 UTC (permalink / raw)
To: passt-dev; +Cc: Laurent Vivier
This series of patches adds vhost-user support to passt,
allowing passt to exchange packets with the QEMU network backend
over virtqueues rather than over a socket.
With QEMU, rather than connecting with:
-netdev stream,id=s,server=off,addr.type=unix,addr.path=/tmp/passt_1.socket
we will use:
-chardev socket,id=chr0,path=/tmp/passt_1.socket
-netdev vhost-user,id=netdev0,chardev=chr0
-device virtio-net,netdev=netdev0
-object memory-backend-memfd,id=memfd0,share=on,size=$RAMSIZE
-numa node,memdev=memfd0
The memory backend is needed to share data between passt and QEMU.
Performance comparison between "-netdev stream" and "-netdev vhost-user":
$ iperf3 -c localhost -p 10001 -t 60 -6 -u -b 50G
socket:
[ 5] 0.00-60.05 sec 95.6 GBytes 13.7 Gbits/sec 0.017 ms 6998988/10132413 (69%) receiver
vhost-user:
[ 5] 0.00-60.04 sec 237 GBytes 33.9 Gbits/sec 0.006 ms 53673/7813770 (0.69%) receiver
$ iperf3 -c localhost -p 10001 -t 60 -4 -u -b 50G
socket:
[ 5] 0.00-60.05 sec 98.9 GBytes 14.1 Gbits/sec 0.018 ms 6260735/9501832 (66%) receiver
vhost-user:
[ 5] 0.00-60.05 sec 235 GBytes 33.7 Gbits/sec 0.008 ms 37581/7752699 (0.48%) receiver
$ iperf3 -c localhost -p 10001 -t 60 -6
socket:
[ 5] 0.00-60.00 sec 17.3 GBytes 2.48 Gbits/sec 0 sender
[ 5] 0.00-60.06 sec 17.3 GBytes 2.48 Gbits/sec receiver
vhost-user:
[ 5] 0.00-60.00 sec 191 GBytes 27.4 Gbits/sec 0 sender
[ 5] 0.00-60.05 sec 191 GBytes 27.3 Gbits/sec receiver
$ iperf3 -c localhost -p 10001 -t 60 -4
socket:
[ 5] 0.00-60.00 sec 15.6 GBytes 2.24 Gbits/sec 0 sender
[ 5] 0.00-60.06 sec 15.6 GBytes 2.24 Gbits/sec receiver
vhost-user:
[ 5] 0.00-60.00 sec 189 GBytes 27.1 Gbits/sec 0 sender
[ 5] 0.00-60.04 sec 189 GBytes 27.0 Gbits/sec receiver
v4:
- rebase on top of 2024_08_21.1d6142f
(rebasing on top of 620e19a1b48a ("udp: Merge udp[46]_mh_recv arrays")
introduces a regression in the UDP latency measurement, because I
don't think I correctly replace ref.udp.v6, which that commit removes)
- Addressed most of the comments from David and Stefano
(I didn't want to postpone this version to next week,
so I'll address the remaining comments in the next version).
v3:
- rebase on top of flow table
- update tcp_vu.c to look like udp_vu.c (recv()/prepare()/send_frame())
- address comments from Stefano and David on version 2
v2:
- remove PATCH 4
- rewrite PATCH 2 and 3 to follow passt coding style
- move some code from PATCH 3 to PATCH 4 (previously PATCH 5)
- partially addressed David's comment on PATCH 5
Laurent Vivier (4):
packet: replace struct desc by struct iovec
vhost-user: introduce virtio API
vhost-user: introduce vhost-user API
vhost-user: add vhost-user
Makefile | 6 +-
checksum.c | 1 -
conf.c | 23 +-
epoll_type.h | 4 +
iov.c | 1 -
isolation.c | 15 +-
packet.c | 91 ++--
packet.h | 22 +-
passt.1 | 10 +-
passt.c | 26 +-
passt.h | 6 +
pcap.c | 1 -
tap.c | 111 ++++-
tap.h | 5 +-
tcp.c | 31 +-
tcp_buf.c | 8 +-
tcp_internal.h | 3 +-
tcp_vu.c | 656 +++++++++++++++++++++++++
tcp_vu.h | 12 +
udp.c | 76 +--
udp.h | 8 +-
udp_internal.h | 34 ++
udp_vu.c | 386 +++++++++++++++
udp_vu.h | 13 +
util.h | 8 +
vhost_user.c | 1267 ++++++++++++++++++++++++++++++++++++++++++++++++
vhost_user.h | 203 ++++++++
virtio.c | 659 +++++++++++++++++++++++++
virtio.h | 185 +++++++
vu_common.c | 35 ++
vu_common.h | 34 ++
31 files changed, 3801 insertions(+), 139 deletions(-)
create mode 100644 tcp_vu.c
create mode 100644 tcp_vu.h
create mode 100644 udp_internal.h
create mode 100644 udp_vu.c
create mode 100644 udp_vu.h
create mode 100644 vhost_user.c
create mode 100644 vhost_user.h
create mode 100644 virtio.c
create mode 100644 virtio.h
create mode 100644 vu_common.c
create mode 100644 vu_common.h
--
2.46.0
* [PATCH v4 1/4] packet: replace struct desc by struct iovec
2024-09-06 16:04 [PATCH v4 0/4] Add vhost-user support to passt. (part 3) Laurent Vivier
@ 2024-09-06 16:04 ` Laurent Vivier
2024-09-06 16:04 ` [PATCH v4 2/4] vhost-user: introduce virtio API Laurent Vivier
` (2 subsequent siblings)
3 siblings, 0 replies; 15+ messages in thread
From: Laurent Vivier @ 2024-09-06 16:04 UTC (permalink / raw)
To: passt-dev; +Cc: Laurent Vivier, David Gibson
To manage buffers inside shared memory provided by a VM via a
vhost-user interface, we can no longer rely on buffers being located
in a pre-defined memory area and addressed with a base address plus
a 32-bit offset.
We need a full 64-bit address, so replace struct desc by struct iovec
and update the range checking.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
packet.c | 80 ++++++++++++++++++++++++++++++--------------------------
packet.h | 14 ++--------
2 files changed, 45 insertions(+), 49 deletions(-)
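For illustration only, not part of this patch: the sketch below contrasts
the old offset-based addressing with the new iovec-based one. The helper
names are made up; only the arithmetic mirrors packet_get_do().

#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

struct desc {			/* old: 32-bit offset into one buffer */
	uint32_t offset;
	uint16_t len;
};

static const char *old_data_ptr(const char *buf, const struct desc *d,
				size_t offset)
{
	/* data has to live inside the single area starting at @buf */
	return buf + d->offset + offset;
}

static const char *new_data_ptr(const struct iovec *pkt, size_t offset)
{
	/* iov_base is a full pointer: data can live anywhere, e.g. in
	 * guest memory mapped by the vhost-user code
	 */
	return (const char *)pkt->iov_base + offset;
}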
diff --git a/packet.c b/packet.c
index ccfc84607709..37489961a37e 100644
--- a/packet.c
+++ b/packet.c
@@ -22,6 +22,35 @@
#include "util.h"
#include "log.h"
+/**
+ * packet_check_range() - Check if a packet memory range is valid
+ * @p: Packet pool
+ * @offset: Offset of data range in packet descriptor
+ * @len: Length of desired data range
+ * @start: Start of the packet descriptor
+ * @func: For tracing: name of calling function
+ * @line: For tracing: caller line of function call
+ *
+ * Return: 0 if the range is valid, -1 otherwise
+ */
+static int packet_check_range(const struct pool *p, size_t offset, size_t len,
+ const char *start, const char *func, int line)
+{
+ if (start < p->buf) {
+ trace("packet start %p before buffer start %p, "
+ "%s:%i", (void *)start, (void *)p->buf, func, line);
+ return -1;
+ }
+
+ if (start + len + offset > p->buf + p->buf_size) {
+ trace("packet offset plus length %lu from size %lu, "
+ "%s:%i", start - p->buf + len + offset,
+ p->buf_size, func, line);
+ return -1;
+ }
+
+ return 0;
+}
/**
* packet_add_do() - Add data as packet descriptor to given pool
* @p: Existing pool
@@ -41,34 +70,16 @@ void packet_add_do(struct pool *p, size_t len, const char *start,
return;
}
- if (start < p->buf) {
- trace("add packet start %p before buffer start %p, %s:%i",
- (void *)start, (void *)p->buf, func, line);
+ if (packet_check_range(p, 0, len, start, func, line))
return;
- }
-
- if (start + len > p->buf + p->buf_size) {
- trace("add packet start %p, length: %zu, buffer end %p, %s:%i",
- (void *)start, len, (void *)(p->buf + p->buf_size),
- func, line);
- return;
- }
if (len > UINT16_MAX) {
trace("add packet length %zu, %s:%i", len, func, line);
return;
}
-#if UINTPTR_MAX == UINT64_MAX
- if ((uintptr_t)start - (uintptr_t)p->buf > UINT32_MAX) {
- trace("add packet start %p, buffer start %p, %s:%i",
- (void *)start, (void *)p->buf, func, line);
- return;
- }
-#endif
-
- p->pkt[idx].offset = start - p->buf;
- p->pkt[idx].len = len;
+ p->pkt[idx].iov_base = (void *)start;
+ p->pkt[idx].iov_len = len;
p->count++;
}
@@ -96,36 +107,31 @@ void *packet_get_do(const struct pool *p, size_t idx, size_t offset,
return NULL;
}
- if (len > UINT16_MAX || len + offset > UINT32_MAX) {
+ if (len > UINT16_MAX) {
if (func) {
- trace("packet data length %zu, offset %zu, %s:%i",
- len, offset, func, line);
+ trace("packet data length %zu, %s:%i",
+ len, func, line);
}
return NULL;
}
- if (p->pkt[idx].offset + len + offset > p->buf_size) {
+ if (len + offset > p->pkt[idx].iov_len) {
if (func) {
- trace("packet offset plus length %zu from size %zu, "
- "%s:%i", p->pkt[idx].offset + len + offset,
- p->buf_size, func, line);
+ trace("data length %zu, offset %zu from length %zu, "
+ "%s:%i", len, offset, p->pkt[idx].iov_len,
+ func, line);
}
return NULL;
}
- if (len + offset > p->pkt[idx].len) {
- if (func) {
- trace("data length %zu, offset %zu from length %u, "
- "%s:%i", len, offset, p->pkt[idx].len,
- func, line);
- }
+ if (packet_check_range(p, offset, len, p->pkt[idx].iov_base,
+ func, line))
return NULL;
- }
if (left)
- *left = p->pkt[idx].len - offset - len;
+ *left = p->pkt[idx].iov_len - offset - len;
- return p->buf + p->pkt[idx].offset + offset;
+ return (char *)p->pkt[idx].iov_base + offset;
}
/**
diff --git a/packet.h b/packet.h
index a784b07bbed5..8377dcf678bb 100644
--- a/packet.h
+++ b/packet.h
@@ -6,16 +6,6 @@
#ifndef PACKET_H
#define PACKET_H
-/**
- * struct desc - Generic offset-based descriptor within buffer
- * @offset: Offset of descriptor relative to buffer start, 32-bit limit
- * @len: Length of descriptor, host order, 16-bit limit
- */
-struct desc {
- uint32_t offset;
- uint16_t len;
-};
-
/**
* struct pool - Generic pool of packets stored in a buffer
* @buf: Buffer storing packet descriptors
@@ -29,7 +19,7 @@ struct pool {
size_t buf_size;
size_t size;
size_t count;
- struct desc pkt[1];
+ struct iovec pkt[1];
};
void packet_add_do(struct pool *p, size_t len, const char *start,
@@ -54,7 +44,7 @@ struct _name ## _t { \
size_t buf_size; \
size_t size; \
size_t count; \
- struct desc pkt[_size]; \
+ struct iovec pkt[_size]; \
}
#define PACKET_POOL_INIT_NOCAST(_size, _buf, _buf_size) \
--
2.46.0
* [PATCH v4 2/4] vhost-user: introduce virtio API
2024-09-06 16:04 [PATCH v4 0/4] Add vhost-user support to passt. (part 3) Laurent Vivier
2024-09-06 16:04 ` [PATCH v4 1/4] packet: replace struct desc by struct iovec Laurent Vivier
@ 2024-09-06 16:04 ` Laurent Vivier
2024-09-10 15:47 ` Stefano Brivio
2024-09-06 16:04 ` [PATCH v4 3/4] vhost-user: introduce vhost-user API Laurent Vivier
2024-09-06 16:04 ` [PATCH v4 4/4] vhost-user: add vhost-user Laurent Vivier
3 siblings, 1 reply; 15+ messages in thread
From: Laurent Vivier @ 2024-09-06 16:04 UTC (permalink / raw)
To: passt-dev; +Cc: Laurent Vivier
Add virtio.c and virtio.h that define the functions needed
to manage virtqueues.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
Makefile | 4 +-
util.h | 8 +
virtio.c | 665 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
virtio.h | 185 ++++++++++++++++
4 files changed, 860 insertions(+), 2 deletions(-)
create mode 100644 virtio.c
create mode 100644 virtio.h
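For illustration only, not part of this patch: a sketch of how a later
patch is expected to drive this API to drain a queue filled by the guest.
example_drain_queue() and process_frame() are made-up names; error paths
and sizing are simplified.

#include "virtio.h"

/* placeholder for whatever consumes the guest buffers */
void process_frame(const struct iovec *sg, unsigned int n);

static void example_drain_queue(struct vu_dev *vdev, struct vu_virtq *vq)
{
	unsigned int count = 0;

	for (;;) {
		struct iovec sg[VIRTQUEUE_MAX_SIZE];
		struct vu_virtq_element elem = {
			.out_sg = sg,
			.out_num = VIRTQUEUE_MAX_SIZE,	/* capacity on input */
			.in_sg = NULL,
			.in_num = 0,
		};

		if (vu_queue_pop(vdev, vq, &elem) < 0)
			break;		/* queue empty or not ready */

		/* elem.out_sg[0..elem.out_num - 1] now point at one
		 * descriptor chain, already translated to our address space
		 */
		process_frame(elem.out_sg, elem.out_num);

		/* mark the chain as used, writing back 0 bytes */
		vu_queue_fill(vq, &elem, 0, count++);
	}

	if (count) {
		vu_queue_flush(vq, count);	/* publish the new used index */
		vu_queue_notify(vdev, vq);	/* kick the guest if needed */
	}
}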
diff --git a/Makefile b/Makefile
index 01fada45adc7..e9a154bdd718 100644
--- a/Makefile
+++ b/Makefile
@@ -47,7 +47,7 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS)
PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \
icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \
ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \
- tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c
+ tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c virtio.c
QRAP_SRCS = qrap.c
SRCS = $(PASST_SRCS) $(QRAP_SRCS)
@@ -57,7 +57,7 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \
flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \
lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \
siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \
- udp.h udp_flow.h util.h
+ udp.h udp_flow.h util.h virtio.h
HEADERS = $(PASST_HEADERS) seccomp.h
C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 };
diff --git a/util.h b/util.h
index 1463c92153d5..0960903ccaec 100644
--- a/util.h
+++ b/util.h
@@ -134,6 +134,14 @@ static inline uint32_t ntohl_unaligned(const void *p)
return ntohl(val);
}
+static inline void barrier(void) { __asm__ __volatile__("" ::: "memory"); }
+#define smp_mb() do { barrier(); __atomic_thread_fence(__ATOMIC_SEQ_CST); } while (0)
+#define smp_mb_release() do { barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); } while (0)
+#define smp_mb_acquire() do { barrier(); __atomic_thread_fence(__ATOMIC_ACQUIRE); } while (0)
+
+#define smp_wmb() smp_mb_release()
+#define smp_rmb() smp_mb_acquire()
+
#define NS_FN_STACK_SIZE (RLIMIT_STACK_VAL * 1024 / 8)
int do_clone(int (*fn)(void *), char *stack_area, size_t stack_size, int flags,
void *arg);
diff --git a/virtio.c b/virtio.c
new file mode 100644
index 000000000000..380590afbca3
--- /dev/null
+++ b/virtio.c
@@ -0,0 +1,665 @@
+// SPDX-License-Identifier: GPL-2.0-or-later AND BSD-3-Clause
+/*
+ * virtio API, vring and virtqueue functions definition
+ *
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ */
+
+/* Some parts copied from QEMU subprojects/libvhost-user/libvhost-user.c
+ * originally licensed under the following terms:
+ *
+ * --
+ *
+ * Copyright IBM, Corp. 2007
+ * Copyright (c) 2016 Red Hat, Inc.
+ *
+ * Authors:
+ * Anthony Liguori <aliguori@us.ibm.com>
+ * Marc-André Lureau <mlureau@redhat.com>
+ * Victor Kaplansky <victork@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later. See the COPYING file in the top-level directory.
+ *
+ * Some parts copied from QEMU hw/virtio/virtio.c
+ * licensed under the following terms:
+ *
+ * Copyright IBM, Corp. 2007
+ *
+ * Authors:
+ * Anthony Liguori <aliguori@us.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ *
+ * --
+ *
+ * virtq_used_event() and virtq_avail_event() from
+ * https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html#x1-712000A
+ * licensed under the following terms:
+ *
+ * --
+ *
+ * This header is BSD licensed so anyone can use the definitions
+ * to implement compatible drivers/servers.
+ *
+ * Copyright 2007, 2009, IBM Corporation
+ * Copyright 2011, Red Hat, Inc
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of IBM nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ‘‘AS IS’’ AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <stddef.h>
+#include <endian.h>
+#include <string.h>
+#include <errno.h>
+#include <sys/eventfd.h>
+#include <sys/socket.h>
+
+#include "util.h"
+#include "virtio.h"
+
+#define VIRTQUEUE_MAX_SIZE 1024
+
+/**
+ * vu_gpa_to_va() - Translate guest physical address to our virtual address.
+ * @dev: Vhost-user device
+ * @plen: Physical length to map (input), capped to region (output)
+ * @guest_addr: Guest physical address
+ *
+ * Return: virtual address in our address space of the guest physical address
+ */
+static void *vu_gpa_to_va(struct vu_dev *dev, uint64_t *plen, uint64_t guest_addr)
+{
+ unsigned int i;
+
+ if (*plen == 0)
+ return NULL;
+
+ /* Find matching memory region. */
+ for (i = 0; i < dev->nregions; i++) {
+ const struct vu_dev_region *r = &dev->regions[i];
+
+ if ((guest_addr >= r->gpa) &&
+ (guest_addr < (r->gpa + r->size))) {
+ if ((guest_addr + *plen) > (r->gpa + r->size))
+ *plen = r->gpa + r->size - guest_addr;
+ /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
+ return (void *)(guest_addr - r->gpa + r->mmap_addr +
+ r->mmap_offset);
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * vring_avail_flags() - Read the available ring flags
+ * @vq: Virtqueue
+ *
+ * Return: the available ring descriptor flags of the given virtqueue
+ */
+static inline uint16_t vring_avail_flags(const struct vu_virtq *vq)
+{
+ return le16toh(vq->vring.avail->flags);
+}
+
+/**
+ * vring_avail_idx() - Read the available ring index
+ * @vq: Virtqueue
+ *
+ * Return: the available ring index of the given virtqueue
+ */
+static inline uint16_t vring_avail_idx(struct vu_virtq *vq)
+{
+ vq->shadow_avail_idx = le16toh(vq->vring.avail->idx);
+
+ return vq->shadow_avail_idx;
+}
+
+/**
+ * vring_avail_ring() - Read an available ring entry
+ * @vq: Virtqueue
+ * @i: Index of the entry to read
+ *
+ * Return: the ring entry content (head of the descriptor chain)
+ */
+static inline uint16_t vring_avail_ring(const struct vu_virtq *vq, int i)
+{
+ return le16toh(vq->vring.avail->ring[i]);
+}
+
+/**
+ * virtq_used_event - Get location of used event indices
+ * (only with VIRTIO_F_EVENT_IDX)
+ * @vq: Virtqueue
+ *
+ * Return: return the location of the used event index
+ */
+static inline uint16_t *virtq_used_event(const struct vu_virtq *vq)
+{
+ /* For backwards compat, used event index is at *end* of avail ring. */
+ return &vq->vring.avail->ring[vq->vring.num];
+}
+
+/**
+ * vring_get_used_event() - Get the used event from the available ring
+ * @vq: Virtqueue
+ *
+ * Return: the used event (available only if VIRTIO_RING_F_EVENT_IDX is set)
+ * used_event is a performant alternative where the driver
+ * specifies how far the device can progress before a notification
+ * is required.
+ */
+static inline uint16_t vring_get_used_event(const struct vu_virtq *vq)
+{
+ return le16toh(*virtq_used_event(vq));
+}
+
+/**
+ * virtqueue_get_head() - Get the head of the descriptor chain for a given
+ * index
+ * @vq: Virtqueue
+ * @idx: Available ring entry index
+ * @head: Head of the descriptor chain
+ */
+static void virtqueue_get_head(const struct vu_virtq *vq,
+ unsigned int idx, unsigned int *head)
+{
+ /* Grab the next descriptor number they're advertising, and increment
+ * the index we've seen.
+ */
+ *head = vring_avail_ring(vq, idx % vq->vring.num);
+
+ /* If their number is silly, that's a fatal mistake. */
+ if (*head >= vq->vring.num)
+ die("vhost-user: Guest says index %u is available", *head);
+}
+
+/**
+ * virtqueue_read_indirect_desc() - Copy virtio ring descriptors from guest
+ * memory
+ * @dev: Vhost-user device
+ * @desc: Destination address to copy the descriptors to
+ * @addr: Guest memory address to copy from
+ * @len: Length of memory to copy
+ *
+ * Return: -1 if there is an error, 0 otherwise
+ */
+static int virtqueue_read_indirect_desc(struct vu_dev *dev, struct vring_desc *desc,
+ uint64_t addr, size_t len)
+{
+ uint64_t read_len;
+
+ if (len > (VIRTQUEUE_MAX_SIZE * sizeof(struct vring_desc)))
+ return -1;
+
+ if (len == 0)
+ return -1;
+
+ while (len) {
+ const struct vring_desc *orig_desc;
+
+ read_len = len;
+ orig_desc = vu_gpa_to_va(dev, &read_len, addr);
+ if (!orig_desc)
+ return -1;
+
+ memcpy(desc, orig_desc, read_len);
+ len -= read_len;
+ addr += read_len;
+ desc += read_len / sizeof(struct vring_desc);
+ }
+
+ return 0;
+}
+
+/**
+ * enum virtqueue_read_desc_state - State in the descriptor chain
+ * @VIRTQUEUE_READ_DESC_ERROR Found an invalid descriptor
+ * @VIRTQUEUE_READ_DESC_DONE No more descriptors in the chain
+ * @VIRTQUEUE_READ_DESC_MORE there are more descriptors in the chain
+ */
+enum virtqueue_read_desc_state {
+ VIRTQUEUE_READ_DESC_ERROR = -1,
+ VIRTQUEUE_READ_DESC_DONE = 0, /* end of chain */
+ VIRTQUEUE_READ_DESC_MORE = 1, /* more buffers in chain */
+};
+
+/**
+ * virtqueue_read_next_desc() - Read the next descriptor in the chain
+ * @desc: Virtio ring descriptors
+ * @i: Index of the current descriptor
+ * @max: Maximum value of the descriptor index
+ * @next: Index of the next descriptor in the chain (output value)
+ *
+ * Return: current chain descriptor state (error, next, done)
+ */
+static int virtqueue_read_next_desc(const struct vring_desc *desc,
+ int i, unsigned int max, unsigned int *next)
+{
+ /* If this descriptor says it doesn't chain, we're done. */
+ if (!(le16toh(desc[i].flags) & VRING_DESC_F_NEXT))
+ return VIRTQUEUE_READ_DESC_DONE;
+
+ /* Check they're not leading us off end of descriptors. */
+ *next = le16toh(desc[i].next);
+ /* Make sure compiler knows to grab that: we don't want it changing! */
+ smp_wmb();
+
+ if (*next >= max)
+ return VIRTQUEUE_READ_DESC_ERROR;
+
+ return VIRTQUEUE_READ_DESC_MORE;
+}
+
+/**
+ * vu_queue_empty() - Check if virtqueue is empty
+ * @vq: Virtqueue
+ *
+ * Return: true if the virtqueue is empty, false otherwise
+ */
+bool vu_queue_empty(struct vu_virtq *vq)
+{
+ if (!vq->vring.avail)
+ return true;
+
+ if (vq->shadow_avail_idx != vq->last_avail_idx)
+ return false;
+
+ return vring_avail_idx(vq) == vq->last_avail_idx;
+}
+
+/**
+ * vring_can_notify() - Check if a notification can be sent
+ * @dev: Vhost-user device
+ * @vq: Virtqueue
+ *
+ * Return: true if notification can be sent
+ */
+static bool vring_can_notify(const struct vu_dev *dev, struct vu_virtq *vq)
+{
+ uint16_t old, new;
+ bool v;
+
+ /* We need to expose used array entries before checking used event. */
+ smp_mb();
+
+ /* Always notify when queue is empty (when feature acknowledge) */
+ if (vu_has_feature(dev, VIRTIO_F_NOTIFY_ON_EMPTY) &&
+ !vq->inuse && vu_queue_empty(vq))
+ return true;
+
+ if (!vu_has_feature(dev, VIRTIO_RING_F_EVENT_IDX))
+ return !(vring_avail_flags(vq) & VRING_AVAIL_F_NO_INTERRUPT);
+
+ v = vq->signalled_used_valid;
+ vq->signalled_used_valid = true;
+ old = vq->signalled_used;
+ new = vq->signalled_used = vq->used_idx;
+ return !v || vring_need_event(vring_get_used_event(vq), new, old);
+}
+
+/**
+ * vu_queue_notify() - Send a notification to the given virtqueue
+ * @dev: Vhost-user device
+ * @vq: Virtqueue
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq)
+{
+ if (!vq->vring.avail)
+ return;
+
+ if (!vring_can_notify(dev, vq)) {
+ debug("vhost-user: virtqueue can skip notify...");
+ return;
+ }
+
+ if (eventfd_write(vq->call_fd, 1) < 0)
+ die_perror("Error writing vhost-user queue eventfd");
+}
+
+/* virtq_avail_event() - Get location of available event indices
+ * (only with VIRTIO_F_EVENT_IDX)
+ * @vq: Virtqueue
+ *
+ * Return: return the location of the available event index
+ */
+static inline uint16_t *virtq_avail_event(const struct vu_virtq *vq)
+{
+ /* For backwards compat, avail event index is at *end* of used ring. */
+ return (uint16_t *)&vq->vring.used->ring[vq->vring.num];
+}
+
+/**
+ * vring_set_avail_event() - Set avail_event
+ * @vq: Virtqueue
+ * @val: Value to set to avail_event
+ * avail_event is used in the same way the used_event is in the
+ * avail_ring.
+ * avail_event is used to advise the driver that notifications
+ * are unnecessary until the driver writes entry with an index
+ * specified by avail_event into the available ring.
+ */
+static inline void vring_set_avail_event(const struct vu_virtq *vq,
+ uint16_t val)
+{
+ uint16_t val_le = htole16(val);
+
+ if (!vq->notification)
+ return;
+
+ memcpy(virtq_avail_event(vq), &val_le, sizeof(val_le));
+}
+
+/**
+ * virtqueue_map_desc() - Translate descriptor ring physical address into our
+ * virtual address space
+ * @dev: Vhost-user device
+ * @p_num_sg: First iov entry to use (input),
+ * first iov entry not used (output)
+ * @iov: Iov array to use to store buffer virtual addresses
+ * @max_num_sg: Maximum number of iov entries
+ * @pa: Guest physical address of the buffer to map into our virtual
+ * address
+ * @sz: Size of the buffer
+ *
+ * Return: false on error, true otherwise
+ */
+static bool virtqueue_map_desc(struct vu_dev *dev,
+ unsigned int *p_num_sg, struct iovec *iov,
+ unsigned int max_num_sg,
+ uint64_t pa, size_t sz)
+{
+ unsigned int num_sg = *p_num_sg;
+
+ ASSERT(num_sg < max_num_sg);
+ ASSERT(sz);
+
+ while (sz) {
+ uint64_t len = sz;
+
+ iov[num_sg].iov_base = vu_gpa_to_va(dev, &len, pa);
+ if (iov[num_sg].iov_base == NULL)
+ die("vhost-user: invalid address for buffers");
+ iov[num_sg].iov_len = len;
+ num_sg++;
+ sz -= len;
+ pa += len;
+ }
+
+ *p_num_sg = num_sg;
+ return true;
+}
+
+/**
+ * vu_queue_map_desc - Map the virtqueue descriptor ring into our virtual
+ * address space
+ * @dev: Vhost-user device
+ * @vq: Virtqueue
+ * @idx: First descriptor ring entry to map
+ * @elem: Virtqueue element to store descriptor ring iov
+ *
+ * Return: -1 if there is an error, 0 otherwise
+ */
+static int vu_queue_map_desc(struct vu_dev *dev, struct vu_virtq *vq, unsigned int idx,
+ struct vu_virtq_element *elem)
+{
+ const struct vring_desc *desc = vq->vring.desc;
+ struct vring_desc desc_buf[VIRTQUEUE_MAX_SIZE];
+ unsigned int out_num = 0, in_num = 0;
+ unsigned int max = vq->vring.num;
+ unsigned int i = idx;
+ uint64_t read_len;
+ int rc;
+
+ if (le16toh(desc[i].flags) & VRING_DESC_F_INDIRECT) {
+ unsigned int desc_len;
+ uint64_t desc_addr;
+
+ if (le32toh(desc[i].len) % sizeof(struct vring_desc))
+ die("vhost-user: Invalid size for indirect buffer table");
+
+ /* loop over the indirect descriptor table */
+ desc_addr = le64toh(desc[i].addr);
+ desc_len = le32toh(desc[i].len);
+ max = desc_len / sizeof(struct vring_desc);
+ read_len = desc_len;
+ desc = vu_gpa_to_va(dev, &read_len, desc_addr);
+ if (desc && read_len != desc_len) {
+ /* Failed to use zero copy */
+ desc = NULL;
+ if (!virtqueue_read_indirect_desc(dev, desc_buf, desc_addr, desc_len))
+ desc = desc_buf;
+ }
+ if (!desc)
+ die("vhost-user: Invalid indirect buffer table");
+ i = 0;
+ }
+
+ /* Collect all the descriptors */
+ do {
+ if (le16toh(desc[i].flags) & VRING_DESC_F_WRITE) {
+ if (!virtqueue_map_desc(dev, &in_num, elem->in_sg,
+ elem->in_num,
+ le64toh(desc[i].addr),
+ le32toh(desc[i].len)))
+ return -1;
+ } else {
+ if (in_num)
+ die("Incorrect order for descriptors");
+ if (!virtqueue_map_desc(dev, &out_num, elem->out_sg,
+ elem->out_num,
+ le64toh(desc[i].addr),
+ le32toh(desc[i].len))) {
+ return -1;
+ }
+ }
+
+ /* If we've got too many, that implies a descriptor loop. */
+ if ((in_num + out_num) > max)
+ die("vhost-user: Loop in queue descriptor list");
+ rc = virtqueue_read_next_desc(desc, i, max, &i);
+ } while (rc == VIRTQUEUE_READ_DESC_MORE);
+
+ if (rc == VIRTQUEUE_READ_DESC_ERROR)
+ die("vhost-user: Failed to read descriptor list");
+
+ elem->index = idx;
+ elem->in_num = in_num;
+ elem->out_num = out_num;
+
+ return 0;
+}
+
+/**
+ * vu_queue_pop() - Pop an entry from the virtqueue
+ * @dev: Vhost-user device
+ * @vq: Virtqueue
+ * @elem: Virtqueue element to fill with the entry information
+ *
+ * Return: -1 if there is an error, 0 otherwise
+ */
+/* cppcheck-suppress unusedFunction */
+int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq, struct vu_virtq_element *elem)
+{
+ unsigned int head;
+ int ret;
+
+ if (!vq->vring.avail)
+ return -1;
+
+ if (vu_queue_empty(vq))
+ return -1;
+
+ /* Needed after vu_queue_empty(), see comment in
+ * virtqueue_num_heads().
+ */
+ smp_rmb();
+
+ if (vq->inuse >= vq->vring.num)
+ die("vhost-user queue size exceeded");
+
+ virtqueue_get_head(vq, vq->last_avail_idx++, &head);
+
+ if (vu_has_feature(dev, VIRTIO_RING_F_EVENT_IDX))
+ vring_set_avail_event(vq, vq->last_avail_idx);
+
+ ret = vu_queue_map_desc(dev, vq, head, elem);
+
+ if (ret < 0)
+ return ret;
+
+ vq->inuse++;
+
+ return 0;
+}
+
+/**
+ * vu_queue_detach_element() - Detach an element from the virtqueue
+ * @vq: Virtqueue
+ */
+void vu_queue_detach_element(struct vu_virtq *vq)
+{
+ vq->inuse--;
+ /* unmap, when DMA support is added */
+}
+
+/**
+ * vu_queue_unpop() - Push back the previously popped element from the virtqueue
+ * @vq: Virtqueue
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_queue_unpop(struct vu_virtq *vq)
+{
+ vq->last_avail_idx--;
+ vu_queue_detach_element(vq);
+}
+
+/**
+ * vu_queue_rewind() - Push back a given number of popped elements
+ * @vq: Virtqueue
+ * @num: Number of elements to unpop
+ */
+/* cppcheck-suppress unusedFunction */
+bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num)
+{
+ if (num > vq->inuse)
+ return false;
+
+ vq->last_avail_idx -= num;
+ vq->inuse -= num;
+ return true;
+}
+
+/**
+ * vring_used_write() - Write an entry in the used ring
+ * @vq: Virtqueue
+ * @uelem: Entry to write
+ * @i: Index of the entry in the used ring
+ */
+static inline void vring_used_write(struct vu_virtq *vq,
+ const struct vring_used_elem *uelem, int i)
+{
+ struct vring_used *used = vq->vring.used;
+
+ used->ring[i] = *uelem;
+}
+
+/**
+ * vu_queue_fill_by_index() - Update information of a descriptor ring entry
+ * in the used ring
+ * @vq: Virtqueue
+ * @index: Descriptor ring index
+ * @len: Size of the element
+ * @idx: Used ring entry index
+ */
+void vu_queue_fill_by_index(struct vu_virtq *vq, unsigned int index,
+ unsigned int len, unsigned int idx)
+{
+ struct vring_used_elem uelem;
+
+ if (!vq->vring.avail)
+ return;
+
+ idx = (idx + vq->used_idx) % vq->vring.num;
+
+ uelem.id = htole32(index);
+ uelem.len = htole32(len);
+ vring_used_write(vq, &uelem, idx);
+}
+
+/**
+ * vu_queue_fill() - Update information of a given element in the used ring
+ * @vq: Virtqueue
+ * @elem: Element information to fill
+ * @len: Size of the element
+ * @idx: Used ring entry index
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_queue_fill(struct vu_virtq *vq, const struct vu_virtq_element *elem,
+ unsigned int len, unsigned int idx)
+{
+ vu_queue_fill_by_index(vq, elem->index, len, idx);
+}
+
+/**
+ * vring_used_idx_set() - Set the descriptor ring current index
+ * @vq: Virtqueue
+ * @val: Value to set in the index
+ */
+static inline void vring_used_idx_set(struct vu_virtq *vq, uint16_t val)
+{
+ vq->vring.used->idx = htole16(val);
+
+ vq->used_idx = val;
+}
+
+/**
+ * vu_queue_flush() - Flush the virtqueue
+ * @vq: Virtqueue
+ * @count: Number of entries to flush
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_queue_flush(struct vu_virtq *vq, unsigned int count)
+{
+ uint16_t old, new;
+
+ if (!vq->vring.avail)
+ return;
+
+ /* Make sure buffer is written before we update index. */
+ smp_wmb();
+
+ old = vq->used_idx;
+ new = old + count;
+ vring_used_idx_set(vq, new);
+ vq->inuse -= count;
+ if ((uint16_t)(new - vq->signalled_used) < (uint16_t)(new - old))
+ vq->signalled_used_valid = false;
+}
diff --git a/virtio.h b/virtio.h
new file mode 100644
index 000000000000..0e5705581bd2
--- /dev/null
+++ b/virtio.h
@@ -0,0 +1,185 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * virtio API, vring and virtqueue functions definition
+ *
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ */
+
+#ifndef VIRTIO_H
+#define VIRTIO_H
+
+#include <stdbool.h>
+#include <linux/vhost_types.h>
+
+/* Maximum size of a virtqueue */
+#define VIRTQUEUE_MAX_SIZE 1024
+
+/**
+ * struct vu_ring - Virtqueue rings
+ * @num: Size of the queue
+ * @desc: Descriptor ring
+ * @avail: Available ring
+ * @used: Used ring
+ * @log_guest_addr: Guest address for logging
+ * @flags: Vring flags
+ * VHOST_VRING_F_LOG is set if log address is valid
+ */
+struct vu_ring {
+ unsigned int num;
+ struct vring_desc *desc;
+ struct vring_avail *avail;
+ struct vring_used *used;
+ uint64_t log_guest_addr;
+ uint32_t flags;
+};
+
+/**
+ * struct vu_virtq - Virtqueue definition
+ * @vring: Virtqueue rings
+ * @last_avail_idx: Next head to pop
+ * @shadow_avail_idx: Last avail_idx read from VQ.
+ * @used_idx: Descriptor ring current index
+ * @signalled_used: Last used index value we have signalled on
+ * @signalled_used_valid: True if signalled_used is valid
+ * @notification: True if the queues notify (via event
+ * index or interrupt)
+ * @inuse: Number of entries in use
+ * @call_fd: The event file descriptor to signal when
+ * buffers are used.
+ * @kick_fd: The event file descriptor for adding
+ * buffers to the vring
+ * @err_fd: The event file descriptor to signal when
+ * error occurs
+ * @enable: True if the virtqueue is enabled
+ * @started: True if the virtqueue is started
+ * @vra: QEMU address of our rings
+ */
+struct vu_virtq {
+ struct vu_ring vring;
+ uint16_t last_avail_idx;
+ uint16_t shadow_avail_idx;
+ uint16_t used_idx;
+ uint16_t signalled_used;
+ bool signalled_used_valid;
+ bool notification;
+ unsigned int inuse;
+ int call_fd;
+ int kick_fd;
+ int err_fd;
+ unsigned int enable;
+ bool started;
+ struct vhost_vring_addr vra;
+};
+
+/**
+ * struct vu_dev_region - guest shared memory region
+ * @gpa: Guest physical address of the region
+ * @size: Memory size in bytes
+ * @qva: QEMU virtual address
+ * @mmap_offset: Offset where the region starts in the mapped memory
+ * @mmap_addr: Address of the mapped memory
+ */
+struct vu_dev_region {
+ uint64_t gpa;
+ uint64_t size;
+ uint64_t qva;
+ uint64_t mmap_offset;
+ uint64_t mmap_addr;
+};
+
+#define VHOST_USER_MAX_QUEUES 2
+
+/*
+ * Set a reasonable maximum number of ram slots, which will be supported by
+ * any architecture.
+ */
+#define VHOST_USER_MAX_RAM_SLOTS 32
+
+/**
+ * struct vu_dev - vhost-user device information
+ * @nregions: Number of shared memory regions
+ * @regions: Guest shared memory regions
+ * @vq: Virtqueues
+ * @features: Vhost-user features
+ * @protocol_features: Vhost-user protocol features
+ * @hdrlen: Virtio-net header length
+ */
+struct vu_dev {
+ uint32_t nregions;
+ struct vu_dev_region regions[VHOST_USER_MAX_RAM_SLOTS];
+ struct vu_virtq vq[VHOST_USER_MAX_QUEUES];
+ uint64_t features;
+ uint64_t protocol_features;
+ int hdrlen;
+};
+
+/**
+ * struct vu_virtq_element - virtqueue element
+ * @index: Descriptor ring index
+ * @out_num: Number of outgoing iovec buffers
+ * @in_num: Number of incoming iovec buffers
+ * @in_sg: Incoming iovec buffers
+ * @out_sg: Outgoing iovec buffers
+ */
+struct vu_virtq_element {
+ unsigned int index;
+ unsigned int out_num;
+ unsigned int in_num;
+ struct iovec *in_sg;
+ struct iovec *out_sg;
+};
+
+/**
+ * has_feature() - Check a feature bit in a features set
+ * @features: Features set
+ * @fbit: Feature bit to check
+ *
+ * Return: True if the feature bit is set
+ */
+static inline bool has_feature(uint64_t features, unsigned int fbit)
+{
+ return !!(features & (1ULL << fbit));
+}
+
+/**
+ * vu_has_feature() - Check if a virtio-net feature is available
+ * @vdev: Vhost-user device
+ * @fbit: Feature to check
+ *
+ * Return: True if the feature is available
+ */
+static inline bool vu_has_feature(const struct vu_dev *vdev,
+ unsigned int fbit)
+{
+ return has_feature(vdev->features, fbit);
+}
+
+/**
+ * vu_has_protocol_feature() - Check if a vhost-user feature is available
+ * @vdev: Vhost-user device
+ * @fbit: Feature to check
+ *
+ * Return: True if the feature is available
+ */
+/* cppcheck-suppress unusedFunction */
+static inline bool vu_has_protocol_feature(const struct vu_dev *vdev,
+ unsigned int fbit)
+{
+ return has_feature(vdev->protocol_features, fbit);
+}
+
+bool vu_queue_empty(struct vu_virtq *vq);
+void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq);
+int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq,
+ struct vu_virtq_element *elem);
+void vu_queue_detach_element(struct vu_virtq *vq);
+void vu_queue_unpop(struct vu_virtq *vq);
+bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num);
+void vu_queue_fill_by_index(struct vu_virtq *vq, unsigned int index,
+ unsigned int len, unsigned int idx);
+void vu_queue_fill(struct vu_virtq *vq,
+ const struct vu_virtq_element *elem, unsigned int len,
+ unsigned int idx);
+void vu_queue_flush(struct vu_virtq *vq, unsigned int count);
+#endif /* VIRTIO_H */
--
2.46.0
* [PATCH v4 3/4] vhost-user: introduce vhost-user API
2024-09-06 16:04 [PATCH v4 0/4] Add vhost-user support to passt. (part 3) Laurent Vivier
2024-09-06 16:04 ` [PATCH v4 1/4] packet: replace struct desc by struct iovec Laurent Vivier
2024-09-06 16:04 ` [PATCH v4 2/4] vhost-user: introduce virtio API Laurent Vivier
@ 2024-09-06 16:04 ` Laurent Vivier
2024-09-10 15:47 ` Stefano Brivio
2024-09-06 16:04 ` [PATCH v4 4/4] vhost-user: add vhost-user Laurent Vivier
3 siblings, 1 reply; 15+ messages in thread
From: Laurent Vivier @ 2024-09-06 16:04 UTC (permalink / raw)
To: passt-dev; +Cc: Laurent Vivier
Add vhost_user.c and vhost_user.h that define the functions needed
to implement a vhost-user backend.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
Makefile | 4 +-
iov.c | 1 -
vhost_user.c | 1265 ++++++++++++++++++++++++++++++++++++++++++++++++++
vhost_user.h | 203 ++++++++
virtio.c | 5 -
virtio.h | 2 +-
6 files changed, 1471 insertions(+), 9 deletions(-)
create mode 100644 vhost_user.c
create mode 100644 vhost_user.h
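For illustration only, not part of this patch: the address translation
that vu_set_mem_table_exec() sets up and qva_to_va() performs, written
as a standalone sketch. All names and values below are invented; only
the arithmetic mirrors vhost_user.c.

#include <stdint.h>
#include <stdio.h>

struct region {
	uint64_t qva;		/* front-end (QEMU) virtual address */
	uint64_t size;		/* region size in bytes */
	uint64_t mmap_addr;	/* where we mmap()ed the region's fd */
	uint64_t mmap_offset;	/* start of the region in that mapping */
};

static void *example_qva_to_va(const struct region *r, uint64_t qemu_addr)
{
	if (qemu_addr < r->qva || qemu_addr >= r->qva + r->size)
		return NULL;	/* not in this region */

	/* same arithmetic as qva_to_va() */
	return (void *)(uintptr_t)(qemu_addr - r->qva + r->mmap_addr +
				   r->mmap_offset);
}

int main(void)
{
	struct region r = { .qva = 0x7f0000000000, .size = 1ULL << 30,
			    .mmap_addr = 0x7e0000000000, .mmap_offset = 0 };

	printf("%p\n", example_qva_to_va(&r, 0x7f0000001000));
	return 0;
}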
diff --git a/Makefile b/Makefile
index e9a154bdd718..01e95ac1b62c 100644
--- a/Makefile
+++ b/Makefile
@@ -47,7 +47,7 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS)
PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \
icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \
ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \
- tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c virtio.c
+ tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c vhost_user.c virtio.c
QRAP_SRCS = qrap.c
SRCS = $(PASST_SRCS) $(QRAP_SRCS)
@@ -57,7 +57,7 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \
flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \
lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \
siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \
- udp.h udp_flow.h util.h virtio.h
+ udp.h udp_flow.h util.h vhost_user.h virtio.h
HEADERS = $(PASST_HEADERS) seccomp.h
C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 };
diff --git a/iov.c b/iov.c
index 3f9e229a305f..3741db21790f 100644
--- a/iov.c
+++ b/iov.c
@@ -68,7 +68,6 @@ size_t iov_skip_bytes(const struct iovec *iov, size_t n,
*
* Returns: The number of bytes successfully copied.
*/
-/* cppcheck-suppress unusedFunction */
size_t iov_from_buf(const struct iovec *iov, size_t iov_cnt,
size_t offset, const void *buf, size_t bytes)
{
diff --git a/vhost_user.c b/vhost_user.c
new file mode 100644
index 000000000000..6008a8adc967
--- /dev/null
+++ b/vhost_user.c
@@ -0,0 +1,1265 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * vhost-user API, command management and virtio interface
+ *
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ */
+/* some parts from QEMU subprojects/libvhost-user/libvhost-user.c
+ * licensed under the following terms:
+ *
+ * Copyright IBM, Corp. 2007
+ * Copyright (c) 2016 Red Hat, Inc.
+ *
+ * Authors:
+ * Anthony Liguori <aliguori@us.ibm.com>
+ * Marc-André Lureau <mlureau@redhat.com>
+ * Victor Kaplansky <victork@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later. See the COPYING file in the top-level directory.
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <stddef.h>
+#include <string.h>
+#include <assert.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <time.h>
+#include <net/ethernet.h>
+#include <netinet/in.h>
+#include <sys/epoll.h>
+#include <sys/eventfd.h>
+#include <sys/mman.h>
+#include <linux/vhost_types.h>
+#include <linux/virtio_net.h>
+
+#include "util.h"
+#include "passt.h"
+#include "tap.h"
+#include "vhost_user.h"
+
+/* vhost-user version we are compatible with */
+#define VHOST_USER_VERSION 1
+
+/**
+ * vu_print_capabilities() - print vhost-user capabilities
+ * this is part of the vhost-user backend
+ * convention.
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_print_capabilities(void)
+{
+ info("{");
+ info(" \"type\": \"net\"");
+ info("}");
+ exit(EXIT_SUCCESS);
+}
+
+/**
+ * vu_request_to_string() - convert a vhost-user request number to its name
+ * @req: request number
+ *
+ * Return: the name of the request number
+ */
+static const char *vu_request_to_string(unsigned int req)
+{
+ if (req < VHOST_USER_MAX) {
+#define REQ(req) [req] = #req
+ static const char * const vu_request_str[VHOST_USER_MAX] = {
+ REQ(VHOST_USER_NONE),
+ REQ(VHOST_USER_GET_FEATURES),
+ REQ(VHOST_USER_SET_FEATURES),
+ REQ(VHOST_USER_SET_OWNER),
+ REQ(VHOST_USER_RESET_OWNER),
+ REQ(VHOST_USER_SET_MEM_TABLE),
+ REQ(VHOST_USER_SET_LOG_BASE),
+ REQ(VHOST_USER_SET_LOG_FD),
+ REQ(VHOST_USER_SET_VRING_NUM),
+ REQ(VHOST_USER_SET_VRING_ADDR),
+ REQ(VHOST_USER_SET_VRING_BASE),
+ REQ(VHOST_USER_GET_VRING_BASE),
+ REQ(VHOST_USER_SET_VRING_KICK),
+ REQ(VHOST_USER_SET_VRING_CALL),
+ REQ(VHOST_USER_SET_VRING_ERR),
+ REQ(VHOST_USER_GET_PROTOCOL_FEATURES),
+ REQ(VHOST_USER_SET_PROTOCOL_FEATURES),
+ REQ(VHOST_USER_GET_QUEUE_NUM),
+ REQ(VHOST_USER_SET_VRING_ENABLE),
+ REQ(VHOST_USER_SEND_RARP),
+ REQ(VHOST_USER_NET_SET_MTU),
+ REQ(VHOST_USER_SET_BACKEND_REQ_FD),
+ REQ(VHOST_USER_IOTLB_MSG),
+ REQ(VHOST_USER_SET_VRING_ENDIAN),
+ REQ(VHOST_USER_GET_CONFIG),
+ REQ(VHOST_USER_SET_CONFIG),
+ REQ(VHOST_USER_POSTCOPY_ADVISE),
+ REQ(VHOST_USER_POSTCOPY_LISTEN),
+ REQ(VHOST_USER_POSTCOPY_END),
+ REQ(VHOST_USER_GET_INFLIGHT_FD),
+ REQ(VHOST_USER_SET_INFLIGHT_FD),
+ REQ(VHOST_USER_GPU_SET_SOCKET),
+ REQ(VHOST_USER_VRING_KICK),
+ REQ(VHOST_USER_GET_MAX_MEM_SLOTS),
+ REQ(VHOST_USER_ADD_MEM_REG),
+ REQ(VHOST_USER_REM_MEM_REG),
+ };
+#undef REQ
+ return vu_request_str[req];
+ }
+
+ return "unknown";
+}
+
+/**
+ * qva_to_va() - Translate front-end (QEMU) virtual address to our virtual
+ * address
+ * @dev: vhost-user device
+ * @qemu_addr: front-end userspace address
+ *
+ * Return: the memory address in our process virtual address space.
+ */
+static void *qva_to_va(struct vu_dev *dev, uint64_t qemu_addr)
+{
+ unsigned int i;
+
+ /* Find matching memory region. */
+ for (i = 0; i < dev->nregions; i++) {
+ const struct vu_dev_region *r = &dev->regions[i];
+
+ if ((qemu_addr >= r->qva) && (qemu_addr < (r->qva + r->size))) {
+ /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
+ return (void *)(qemu_addr - r->qva + r->mmap_addr +
+ r->mmap_offset);
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * vmsg_close_fds() - Close all file descriptors of a given message
+ * @vmsg: vhost-user message with the list of the file descriptors
+ */
+static void vmsg_close_fds(const struct vhost_user_msg *vmsg)
+{
+ int i;
+
+ for (i = 0; i < vmsg->fd_num; i++)
+ close(vmsg->fds[i]);
+}
+
+/**
+ * vu_remove_watch() - Remove a file descriptor from our passt epoll
+ * file descriptor
+ * @vdev: vhost-user device
+ * @fd: file descriptor to remove
+ */
+static void vu_remove_watch(const struct vu_dev *vdev, int fd)
+{
+ /* Placeholder to add passt related code */
+ (void)vdev;
+ (void)fd;
+}
+
+/**
+ * vmsg_set_reply_u64() - Set reply payload.u64 and clear request flags
+ * and fd_num
+ * @vmsg: vhost-user message
+ * @val: 64-bit value to reply
+ */
+static void vmsg_set_reply_u64(struct vhost_user_msg *vmsg, uint64_t val)
+{
+ vmsg->hdr.flags = 0; /* defaults will be set by vu_send_reply() */
+ vmsg->hdr.size = sizeof(vmsg->payload.u64);
+ vmsg->payload.u64 = val;
+ vmsg->fd_num = 0;
+}
+
+/**
+ * vu_message_read_default() - Read incoming vhost-user message from the
+ * front-end
+ * @conn_fd: vhost-user command socket
+ * @vmsg: vhost-user message
+ *
+ * Return: -1 if there is an error,
+ * 0 if recvmsg() has been interrupted or if there's no data to read,
+ * 1 if a message has been received
+ */
+static int vu_message_read_default(int conn_fd, struct vhost_user_msg *vmsg)
+{
+ char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS *
+ sizeof(int))] = { 0 };
+ struct iovec iov = {
+ .iov_base = (char *)vmsg,
+ .iov_len = VHOST_USER_HDR_SIZE,
+ };
+ struct msghdr msg = {
+ .msg_iov = &iov,
+ .msg_iovlen = 1,
+ .msg_control = control,
+ .msg_controllen = sizeof(control),
+ };
+ ssize_t ret, sz_payload;
+ struct cmsghdr *cmsg;
+
+ ret = recvmsg(conn_fd, &msg, MSG_DONTWAIT);
+ if (ret < 0) {
+ if (errno == EINTR || errno == EAGAIN || errno == EWOULDBLOCK)
+ return 0;
+ die_perror("vhost-user message receive (recvmsg)");
+ }
+
+ vmsg->fd_num = 0;
+ for (cmsg = CMSG_FIRSTHDR(&msg); cmsg != NULL;
+ cmsg = CMSG_NXTHDR(&msg, cmsg)) {
+ if (cmsg->cmsg_level == SOL_SOCKET &&
+ cmsg->cmsg_type == SCM_RIGHTS) {
+ size_t fd_size;
+
+ ASSERT(cmsg->cmsg_len >= CMSG_LEN(0));
+ fd_size = cmsg->cmsg_len - CMSG_LEN(0);
+ ASSERT(fd_size <= sizeof(vmsg->fds));
+ vmsg->fd_num = fd_size / sizeof(int);
+ memcpy(vmsg->fds, CMSG_DATA(cmsg), fd_size);
+ break;
+ }
+ }
+
+ sz_payload = vmsg->hdr.size;
+ if ((size_t)sz_payload > sizeof(vmsg->payload)) {
+ die("vhost-user message request too big: %d,"
+ " size: vmsg->size: %zd, "
+ "while sizeof(vmsg->payload) = %zu",
+ vmsg->hdr.request, sz_payload, sizeof(vmsg->payload));
+ }
+
+ if (sz_payload) {
+ do
+ ret = recv(conn_fd, &vmsg->payload, sz_payload, 0);
+ while (ret < 0 && (errno == EINTR || errno == EAGAIN));
+
+ if (ret < 0)
+ die_perror("vhost-user message receive");
+
+ if (ret < sz_payload)
+ die("EOF on vhost-user message receive");
+ }
+
+ return 1;
+}
+
+/**
+ * vu_message_write() - Send a message to the front-end
+ * @conn_fd: vhost-user command socket
+ * @vmsg: vhost-user message
+ *
+ * #syscalls:vu sendmsg
+ */
+static void vu_message_write(int conn_fd, struct vhost_user_msg *vmsg)
+{
+ char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * sizeof(int))] = { 0 };
+ struct iovec iov = {
+ .iov_base = (char *)vmsg,
+ .iov_len = VHOST_USER_HDR_SIZE + vmsg->hdr.size,
+ };
+ struct msghdr msg = {
+ .msg_iov = &iov,
+ .msg_iovlen = 1,
+ .msg_control = control,
+ };
+ int rc;
+
+ ASSERT(vmsg->fd_num <= VHOST_MEMORY_BASELINE_NREGIONS);
+ if (vmsg->fd_num > 0) {
+ size_t fdsize = vmsg->fd_num * sizeof(int);
+ struct cmsghdr *cmsg;
+
+ msg.msg_controllen = CMSG_SPACE(fdsize);
+ cmsg = CMSG_FIRSTHDR(&msg);
+ cmsg->cmsg_len = CMSG_LEN(fdsize);
+ cmsg->cmsg_level = SOL_SOCKET;
+ cmsg->cmsg_type = SCM_RIGHTS;
+ memcpy(CMSG_DATA(cmsg), vmsg->fds, fdsize);
+ }
+
+ do
+ rc = sendmsg(conn_fd, &msg, 0);
+ while (rc < 0 && (errno == EINTR || errno == EAGAIN));
+
+ if (rc < 0)
+ die_perror("vhost-user message send");
+
+ if ((uint32_t)rc < VHOST_USER_HDR_SIZE + vmsg->hdr.size)
+ die("EOF on vhost-user message send");
+}
+
+/**
+ * vu_send_reply() - Update message flags and send it to front-end
+ * @conn_fd: vhost-user command socket
+ * @vmsg: vhost-user message
+ */
+static void vu_send_reply(int conn_fd, struct vhost_user_msg *msg)
+{
+ msg->hdr.flags &= ~VHOST_USER_VERSION_MASK;
+ msg->hdr.flags |= VHOST_USER_VERSION;
+ msg->hdr.flags |= VHOST_USER_REPLY_MASK;
+
+ vu_message_write(conn_fd, msg);
+}
+
+/**
+ * vu_get_features_exec() - Provide back-end features bitmask to front-end
+ * @vdev: vhost-user device
+ * @vmsg: vhost-user message
+ *
+ * Return: True as a reply is requested
+ */
+static bool vu_get_features_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ uint64_t features =
+ 1ULL << VIRTIO_F_VERSION_1 |
+ 1ULL << VIRTIO_NET_F_MRG_RXBUF |
+ 1ULL << VHOST_USER_F_PROTOCOL_FEATURES;
+
+ (void)vdev;
+
+ vmsg_set_reply_u64(msg, features);
+
+ debug("Sending back to guest u64: 0x%016"PRIx64, msg->payload.u64);
+
+ return true;
+}
+
+/**
+ * vu_set_enable_all_rings() - Enable/disable all the virtqueues
+ * @vdev: vhost-user device
+ * @enable: New virtqueues state
+ */
+static void vu_set_enable_all_rings(struct vu_dev *vdev, bool enable)
+{
+ uint16_t i;
+
+ for (i = 0; i < VHOST_USER_MAX_QUEUES; i++)
+ vdev->vq[i].enable = enable;
+}
+
+/**
+ * vu_set_features_exec() - Enable features of the back-end
+ * @vdev: vhost-user device
+ * @vmsg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_features_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ debug("u64: 0x%016"PRIx64, msg->payload.u64);
+
+ vdev->features = msg->payload.u64;
+ /* We only support devices conforming to VIRTIO 1.0 or
+ * later
+ */
+ if (!vu_has_feature(vdev, VIRTIO_F_VERSION_1))
+ die("virtio legacy devices aren't supported by passt");
+
+ if (!vu_has_feature(vdev, VHOST_USER_F_PROTOCOL_FEATURES))
+ vu_set_enable_all_rings(vdev, true);
+
+ /* virtio-net features */
+
+ /* VIRTIO_F_VERSION_1 always uses struct virtio_net_hdr_mrg_rxbuf */
+ vdev->hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+
+ return false;
+}
+
+/**
+ * vu_set_owner_exec() - Session start flag, do nothing in our case
+ * @vdev: vhost-user device
+ * @vmsg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_owner_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ (void)vdev;
+ (void)msg;
+
+ return false;
+}
+
+/**
+ * map_ring() - Convert ring front-end (QEMU) addresses to our process
+ * virtual address space.
+ * @vdev: vhost-user device
+ * @vq: Virtqueue
+ *
+ * Return: True if ring cannot be mapped to our address space
+ */
+static bool map_ring(struct vu_dev *vdev, struct vu_virtq *vq)
+{
+ vq->vring.desc = qva_to_va(vdev, vq->vra.desc_user_addr);
+ vq->vring.used = qva_to_va(vdev, vq->vra.used_user_addr);
+ vq->vring.avail = qva_to_va(vdev, vq->vra.avail_user_addr);
+
+ debug("Setting virtq addresses:");
+ debug(" vring_desc at %p", (void *)vq->vring.desc);
+ debug(" vring_used at %p", (void *)vq->vring.used);
+ debug(" vring_avail at %p", (void *)vq->vring.avail);
+
+ return !(vq->vring.desc && vq->vring.used && vq->vring.avail);
+}
+
+/**
+ * vu_packet_check_range() - Check if a given memory zone is contained in
+ * a mapped guest memory region
+ * @buf: Array of the available memory regions
+ * @offset: Offset of data range in packet descriptor
+ * @len: Length of desired data range
+ * @start: Start of the packet descriptor
+ *
+ * Return: 0 if the zone is in a mapped memory region, -1 otherwise
+ */
+/* cppcheck-suppress unusedFunction */
+int vu_packet_check_range(void *buf, size_t offset, size_t len,
+ const char *start)
+{
+ struct vu_dev_region *dev_region;
+
+ for (dev_region = buf; dev_region->mmap_addr; dev_region++) {
+ /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
+ char *m = (char *)dev_region->mmap_addr;
+
+ if (m <= start &&
+ start + offset + len <= m + dev_region->mmap_offset +
+ dev_region->size)
+ return 0;
+ }
+
+ return -1;
+}
+
+/**
+ * vu_set_mem_table_exec() - Sets the memory map regions to be able to
+ * translate the vring addresses.
+ * @vdev: vhost-user device
+ * @vmsg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ *
+ * #syscalls:vu mmap munmap
+ */
+static bool vu_set_mem_table_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ struct vhost_user_memory m = msg->payload.memory, *memory = &m;
+ unsigned int i;
+
+ for (i = 0; i < vdev->nregions; i++) {
+ struct vu_dev_region *r = &vdev->regions[i];
+ /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
+ void *mm = (void *)r->mmap_addr;
+
+ if (mm)
+ munmap(mm, r->size + r->mmap_offset);
+ }
+ vdev->nregions = memory->nregions;
+
+ debug("vhost-user nregions: %u", memory->nregions);
+ for (i = 0; i < vdev->nregions; i++) {
+ struct vhost_user_memory_region *msg_region = &memory->regions[i];
+ struct vu_dev_region *dev_region = &vdev->regions[i];
+ void *mmap_addr;
+
+ debug("vhost-user region %d", i);
+ debug(" guest_phys_addr: 0x%016"PRIx64,
+ msg_region->guest_phys_addr);
+ debug(" memory_size: 0x%016"PRIx64,
+ msg_region->memory_size);
+ debug(" userspace_addr 0x%016"PRIx64,
+ msg_region->userspace_addr);
+ debug(" mmap_offset 0x%016"PRIx64,
+ msg_region->mmap_offset);
+
+ dev_region->gpa = msg_region->guest_phys_addr;
+ dev_region->size = msg_region->memory_size;
+ dev_region->qva = msg_region->userspace_addr;
+ dev_region->mmap_offset = msg_region->mmap_offset;
+
+ /* We don't use offset argument of mmap() since the
+ * mapped address has to be page aligned.
+ */
+ mmap_addr = mmap(0, dev_region->size + dev_region->mmap_offset,
+ PROT_READ | PROT_WRITE, MAP_SHARED |
+ MAP_NORESERVE, msg->fds[i], 0);
+
+ if (mmap_addr == MAP_FAILED)
+ die_perror("vhost-user region mmap error");
+
+ dev_region->mmap_addr = (uint64_t)(uintptr_t)mmap_addr;
+ debug(" mmap_addr: 0x%016"PRIx64,
+ dev_region->mmap_addr);
+
+ close(msg->fds[i]);
+ }
+
+ for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
+ if (vdev->vq[i].vring.desc) {
+ if (map_ring(vdev, &vdev->vq[i]))
+ die("remapping queue %d during setmemtable", i);
+ }
+ }
+
+ return false;
+}
+
+/**
+ * vu_set_vring_num_exec() - Set the size of the queue (vring size)
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_num_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ unsigned int idx = msg->payload.state.index;
+ unsigned int num = msg->payload.state.num;
+
+ debug("State.index: %u", idx);
+ debug("State.num: %u", num);
+ vdev->vq[idx].vring.num = num;
+
+ return false;
+}
+
+/**
+ * vu_set_vring_addr_exec() - Set the addresses of the vring
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_addr_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ /* We need to copy the payload to vhost_vring_addr structure
+ * to access index because address of msg->payload.addr
+ * can be unaligned as it is packed.
+ */
+ struct vhost_vring_addr addr = msg->payload.addr;
+ struct vu_virtq *vq = &vdev->vq[addr.index];
+
+ debug("vhost_vring_addr:");
+ debug(" index: %d", addr.index);
+ debug(" flags: %d", addr.flags);
+ debug(" desc_user_addr: 0x%016" PRIx64,
+ (uint64_t)addr.desc_user_addr);
+ debug(" used_user_addr: 0x%016" PRIx64,
+ (uint64_t)addr.used_user_addr);
+ debug(" avail_user_addr: 0x%016" PRIx64,
+ (uint64_t)addr.avail_user_addr);
+ debug(" log_guest_addr: 0x%016" PRIx64,
+ (uint64_t)addr.log_guest_addr);
+
+ vq->vra = msg->payload.addr;
+ vq->vring.flags = addr.flags;
+ vq->vring.log_guest_addr = addr.log_guest_addr;
+
+ if (map_ring(vdev, vq))
+ die("Invalid vring_addr message");
+
+ vq->used_idx = le16toh(vq->vring.used->idx);
+
+ if (vq->last_avail_idx != vq->used_idx) {
+ debug("Last avail index != used index: %u != %u",
+ vq->last_avail_idx, vq->used_idx);
+ }
+
+ return false;
+}
+
+/**
+ * vu_set_vring_base_exec() - Sets the next index to use for descriptors
+ * in this vring
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_base_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ unsigned int idx = msg->payload.state.index;
+ unsigned int num = msg->payload.state.num;
+
+ debug("State.index: %u", idx);
+ debug("State.num: %u", num);
+ vdev->vq[idx].shadow_avail_idx = vdev->vq[idx].last_avail_idx = num;
+
+ return false;
+}
+
+/**
+ * vu_get_vring_base_exec() - Stops the vring and returns the current
+ * descriptor index or indices
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: True as a reply is requested
+ */
+static bool vu_get_vring_base_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ unsigned int idx = msg->payload.state.index;
+
+ debug("State.index: %u", idx);
+ msg->payload.state.num = vdev->vq[idx].last_avail_idx;
+ msg->hdr.size = sizeof(msg->payload.state);
+
+ vdev->vq[idx].started = false;
+
+ if (vdev->vq[idx].call_fd != -1) {
+ close(vdev->vq[idx].call_fd);
+ vdev->vq[idx].call_fd = -1;
+ }
+ if (vdev->vq[idx].kick_fd != -1) {
+ vu_remove_watch(vdev, vdev->vq[idx].kick_fd);
+ close(vdev->vq[idx].kick_fd);
+ vdev->vq[idx].kick_fd = -1;
+ }
+
+ return true;
+}
+
+/**
+ * vu_set_watch() - Add a file descriptor to the passt epoll file descriptor
+ * @vdev: vhost-user device
+ * @fd: file descriptor to add
+ */
+static void vu_set_watch(const struct vu_dev *vdev, int fd)
+{
+ /* Placeholder to add passt related code */
+ (void)vdev;
+ (void)fd;
+}
+
+/**
+ * vu_wait_queue() - Wait for new free entries in the virtqueue
+ * @vq: virtqueue to wait on
+ *
+ * Return: 0 on success (kick received), -1 on error
+ */
+static int vu_wait_queue(const struct vu_virtq *vq)
+{
+ eventfd_t kick_data;
+ ssize_t rc;
+ int status;
+
+ /* wait for the kernel to put new entries in the queue */
+ status = fcntl(vq->kick_fd, F_GETFL);
+ if (status == -1)
+ return -1;
+
+ if (fcntl(vq->kick_fd, F_SETFL, status & ~O_NONBLOCK))
+ return -1;
+
+ rc = eventfd_read(vq->kick_fd, &kick_data);
+
+ if (fcntl(vq->kick_fd, F_SETFL, status))
+ return -1;
+
+ if (rc == -1)
+ return -1;
+
+ return 0;
+}
+
+/**
+ * vu_send() - Send a buffer to the front-end using the RX virtqueue
+ * @vdev: vhost-user device
+ * @buf: address of the buffer
+ * @size: size of the buffer
+ *
+ * Return: number of bytes sent, -1 if there is an error
+ */
+/* cppcheck-suppress unusedFunction */
+int vu_send(struct vu_dev *vdev, const void *buf, size_t size)
+{
+ struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+ struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
+ struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
+ size_t lens[VIRTQUEUE_MAX_SIZE];
+ __virtio16 *num_buffers_ptr = NULL;
+ size_t hdrlen = vdev->hdrlen;
+ int in_sg_count = 0;
+ size_t offset = 0;
+ int i = 0, j;
+
+ debug("vu_send size %zu hdrlen %zu", size, hdrlen);
+
+ if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) {
+ err("Got packet, but RX virtqueue not usable");
+ return 0;
+ }
+
+ while (offset < size) {
+ size_t len;
+ int total;
+ int ret;
+
+ total = 0;
+
+ if (i == ARRAY_SIZE(elem) ||
+ in_sg_count == ARRAY_SIZE(in_sg)) {
+ err("virtio-net unexpected long buffer chain");
+ goto err;
+ }
+
+ elem[i].out_num = 0;
+ elem[i].out_sg = NULL;
+ elem[i].in_num = ARRAY_SIZE(in_sg) - in_sg_count;
+ elem[i].in_sg = &in_sg[in_sg_count];
+
+ ret = vu_queue_pop(vdev, vq, &elem[i]);
+ if (ret < 0) {
+ if (vu_wait_queue(vq) != -1)
+ continue;
+ if (i) {
+ err("virtio-net unexpected empty queue: "
+ "i %d mergeable %d offset %zd, size %zd, "
+ "features 0x%" PRIx64,
+ i, vu_has_feature(vdev,
+ VIRTIO_NET_F_MRG_RXBUF),
+ offset, size, vdev->features);
+ }
+ offset = -1;
+ goto err;
+ }
+ in_sg_count += elem[i].in_num;
+
+ if (elem[i].in_num < 1) {
+ err("virtio-net receive queue contains no in buffers");
+ vu_queue_detach_element(vq);
+ offset = -1;
+ goto err;
+ }
+
+ if (i == 0) {
+ struct virtio_net_hdr hdr = {
+ .flags = VIRTIO_NET_HDR_F_DATA_VALID,
+ .gso_type = VIRTIO_NET_HDR_GSO_NONE,
+ };
+
+ ASSERT(offset == 0);
+ ASSERT(elem[i].in_sg[0].iov_len >= hdrlen);
+
+ len = iov_from_buf(elem[i].in_sg, elem[i].in_num, 0,
+ &hdr, sizeof(hdr));
+
+ num_buffers_ptr = (__virtio16 *)((char *)elem[i].in_sg[0].iov_base +
+ len);
+
+ total += hdrlen;
+ }
+
+ len = iov_from_buf(elem[i].in_sg, elem[i].in_num, total,
+ (char *)buf + offset, size - offset);
+
+ total += len;
+ offset += len;
+
+ /* If buffers can't be merged, at this point we
+ * must have consumed the complete packet.
+ * Otherwise, drop it.
+ */
+ if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) &&
+ offset < size) {
+ vu_queue_unpop(vq);
+ goto err;
+ }
+
+ lens[i] = total;
+ i++;
+ }
+
+ if (num_buffers_ptr && vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
+ *num_buffers_ptr = htole16(i);
+
+ for (j = 0; j < i; j++) {
+ debug("filling total %zd idx %d", lens[j], j);
+ vu_queue_fill(vq, &elem[j], lens[j], j);
+ }
+
+ vu_queue_flush(vq, i);
+ vu_queue_notify(vdev, vq);
+
+ debug("vhost-user sent %zu", offset);
+
+ return offset;
+err:
+ for (j = 0; j < i; j++)
+ vu_queue_detach_element(vq);
+
+ return offset;
+}
+
+/**
+ * vu_handle_tx() - Receive data from the TX virtqueue
+ * @vdev: vhost-user device
+ * @index: index of the virtqueue
+ * @now: Current timestamp
+ */
+static void vu_handle_tx(struct vu_dev *vdev, int index,
+ const struct timespec *now)
+{
+ struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
+ struct iovec out_sg[VIRTQUEUE_MAX_SIZE];
+ struct vu_virtq *vq = &vdev->vq[index];
+ int hdrlen = vdev->hdrlen;
+ int out_sg_count;
+ int count;
+
+ if (!VHOST_USER_IS_QUEUE_TX(index)) {
+ debug("vhost-user: index %d is not a TX queue", index);
+ return;
+ }
+
+ tap_flush_pools();
+
+ count = 0;
+ out_sg_count = 0;
+ while (count < VIRTQUEUE_MAX_SIZE) {
+ int ret;
+
+ elem[count].out_num = 1;
+ elem[count].out_sg = &out_sg[out_sg_count];
+ elem[count].in_num = 0;
+ elem[count].in_sg = NULL;
+ ret = vu_queue_pop(vdev, vq, &elem[count]);
+ if (ret < 0)
+ break;
+ out_sg_count += elem[count].out_num;
+
+ if (elem[count].out_num < 1) {
+ debug("virtio-net header not in first element");
+ break;
+ }
+ ASSERT(elem[count].out_num == 1);
+
+ tap_add_packet(vdev->context,
+ elem[count].out_sg[0].iov_len - hdrlen,
+ (char *)elem[count].out_sg[0].iov_base + hdrlen);
+ count++;
+ }
+ tap_handler(vdev->context, now);
+
+ if (count) {
+ int i;
+
+ for (i = 0; i < count; i++)
+ vu_queue_fill(vq, &elem[i], 0, i);
+ vu_queue_flush(vq, count);
+ vu_queue_notify(vdev, vq);
+ }
+}
+
+/**
+ * vu_kick_cb() - Called on a kick event to start receiving data
+ * @vdev: vhost-user device
+ * @ref: epoll reference information
+ * @now: Current timestamp
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
+ const struct timespec *now)
+{
+ eventfd_t kick_data;
+ ssize_t rc;
+ int idx;
+
+ for (idx = 0; idx < VHOST_USER_MAX_QUEUES; idx++) {
+ if (vdev->vq[idx].kick_fd == ref.fd)
+ break;
+ }
+
+ if (idx == VHOST_USER_MAX_QUEUES)
+ return;
+
+ rc = eventfd_read(ref.fd, &kick_data);
+ if (rc == -1)
+ die_perror("vhost-user kick eventfd_read()");
+
+ debug("vhost-user: got kick_data: %016"PRIx64" idx:%d",
+ kick_data, idx);
+ if (VHOST_USER_IS_QUEUE_TX(idx))
+ vu_handle_tx(vdev, idx, now);
+}
+
+/**
+ * vu_check_queue_msg_file() - Check if a message is valid,
+ * close fds if NOFD bit is set
+ * @msg: vhost-user message
+ */
+static void vu_check_queue_msg_file(struct vhost_user_msg *msg)
+{
+ bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
+ int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+
+ if (idx >= VHOST_USER_MAX_QUEUES)
+ die("Invalid vhost-user queue index: %u", idx);
+
+ if (nofd) {
+ vmsg_close_fds(msg);
+ return;
+ }
+
+ if (msg->fd_num != 1)
+ die("Invalid fds in vhost-user request: %d", msg->hdr.request);
+}
+
+/**
+ * vu_set_vring_kick_exec() - Set the event file descriptor for adding buffers
+ * to the vring
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_kick_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
+ int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+
+ debug("u64: 0x%016"PRIx64, msg->payload.u64);
+
+ vu_check_queue_msg_file(msg);
+
+ if (vdev->vq[idx].kick_fd != -1) {
+ vu_remove_watch(vdev, vdev->vq[idx].kick_fd);
+ close(vdev->vq[idx].kick_fd);
+ }
+
+ vdev->vq[idx].kick_fd = nofd ? -1 : msg->fds[0];
+ debug("Got kick_fd: %d for vq: %d", vdev->vq[idx].kick_fd, idx);
+
+ vdev->vq[idx].started = true;
+
+ if (vdev->vq[idx].kick_fd != -1 && VHOST_USER_IS_QUEUE_TX(idx)) {
+ vu_set_watch(vdev, vdev->vq[idx].kick_fd);
+ debug("Waiting for kicks on fd: %d for vq: %d",
+ vdev->vq[idx].kick_fd, idx);
+ }
+
+ return false;
+}
+
+/**
+ * vu_set_vring_call_exec() - Set the event file descriptor to signal when
+ * buffers are used
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_call_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
+ int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+
+ debug("u64: 0x%016"PRIx64, msg->payload.u64);
+
+ vu_check_queue_msg_file(msg);
+
+ if (vdev->vq[idx].call_fd != -1)
+ close(vdev->vq[idx].call_fd);
+
+ vdev->vq[idx].call_fd = nofd ? -1 : msg->fds[0];
+
+ /* in case of I/O hang after reconnecting */
+ if (vdev->vq[idx].call_fd != -1)
+ eventfd_write(msg->fds[0], 1);
+
+ debug("Got call_fd: %d for vq: %d", vdev->vq[idx].call_fd, idx);
+
+ return false;
+}
+
+/**
+ * vu_set_vring_err_exec() - Set the event file descriptor to signal when
+ * an error occurs
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_err_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
+ int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+
+ debug("u64: 0x%016"PRIx64, msg->payload.u64);
+
+ vu_check_queue_msg_file(msg);
+
+ if (vdev->vq[idx].err_fd != -1) {
+ close(vdev->vq[idx].err_fd);
+ vdev->vq[idx].err_fd = -1;
+ }
+
+ /* cppcheck-suppress redundantAssignment */
+ vdev->vq[idx].err_fd = nofd ? -1 : msg->fds[0];
+
+ return false;
+}
+
+/**
+ * vu_get_protocol_features_exec() - Provide the protocol (vhost-user) features
+ * to the front-end
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: True as a reply is requested
+ */
+static bool vu_get_protocol_features_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ uint64_t features = 1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK;
+
+ (void)vdev;
+ vmsg_set_reply_u64(msg, features);
+
+ return true;
+}
+
+/**
+ * vu_set_protocol_features_exec() - Enable protocol (vhost-user) features
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_protocol_features_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ uint64_t features = msg->payload.u64;
+
+ debug("u64: 0x%016"PRIx64, features);
+
+ vdev->protocol_features = msg->payload.u64;
+
+ if (vu_has_protocol_feature(vdev,
+ VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS) &&
+ (!vu_has_protocol_feature(vdev, VHOST_USER_PROTOCOL_F_BACKEND_REQ) ||
+ !vu_has_protocol_feature(vdev, VHOST_USER_PROTOCOL_F_REPLY_ACK))) {
+ /*
+ * The use case for using messages for kick/call is simulation, to make
+ * the kick and call synchronous. To actually get that behaviour, both
+ * of the other features are required.
+ * Theoretically, one could use only kick messages, or do them
+ * without F_REPLY_ACK, but too many (possibly pending) messages on
+ * the socket would eventually cause the front-end to hang. To avoid
+ * this in scenarios where it's not desired, enforce settings that
+ * actually enable the simulation case.
+ */
+ die("F_IN_BAND_NOTIFICATIONS requires F_BACKEND_REQ && F_REPLY_ACK");
+ }
+
+ return false;
+}
+
+/**
+ * vu_get_queue_num_exec() - Tell how many queues we support
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: True as a reply is requested
+ */
+static bool vu_get_queue_num_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ (void)vdev;
+
+ vmsg_set_reply_u64(msg, VHOST_USER_MAX_QUEUES);
+
+ return true;
+}
+
+/**
+ * vu_set_vring_enable_exec() - Enable or disable corresponding vring
+ * @vdev: vhost-user device
+ * @msg: vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_enable_exec(struct vu_dev *vdev,
+ struct vhost_user_msg *msg)
+{
+ unsigned int enable = msg->payload.state.num;
+ unsigned int idx = msg->payload.state.index;
+
+ debug("State.index: %u", idx);
+ debug("State.enable: %u", enable);
+
+ if (idx >= VHOST_USER_MAX_QUEUES)
+ die("Invalid vring_enable index: %u", idx);
+
+ vdev->vq[idx].enable = enable;
+ return false;
+}
+
+/**
+ * vu_init() - Initialize vhost-user device structure
+ * @c: execution context
+ * @vdev: vhost-user device
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_init(struct ctx *c, struct vu_dev *vdev)
+{
+ int i;
+
+ vdev->context = c;
+ vdev->hdrlen = 0;
+ for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
+ vdev->vq[i] = (struct vu_virtq){
+ .call_fd = -1,
+ .kick_fd = -1,
+ .err_fd = -1,
+ .notification = true,
+ };
+ }
+}
+
+/**
+ * vu_cleanup() - Reset vhost-user device
+ * @vdev: vhost-user device
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_cleanup(struct vu_dev *vdev)
+{
+ unsigned int i;
+
+ for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
+ struct vu_virtq *vq = &vdev->vq[i];
+
+ vq->started = false;
+ vq->notification = true;
+
+ if (vq->call_fd != -1) {
+ close(vq->call_fd);
+ vq->call_fd = -1;
+ }
+ if (vq->err_fd != -1) {
+ close(vq->err_fd);
+ vq->err_fd = -1;
+ }
+ if (vq->kick_fd != -1) {
+ vu_remove_watch(vdev, vq->kick_fd);
+ close(vq->kick_fd);
+ vq->kick_fd = -1;
+ }
+
+ vq->vring.desc = 0;
+ vq->vring.used = 0;
+ vq->vring.avail = 0;
+ }
+ vdev->hdrlen = 0;
+
+ for (i = 0; i < vdev->nregions; i++) {
+ const struct vu_dev_region *r = &vdev->regions[i];
+ /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
+ void *m = (void *)r->mmap_addr;
+
+ if (m)
+ munmap(m, r->size + r->mmap_offset);
+ }
+ vdev->nregions = 0;
+}
+
+/**
+ * vu_sock_reset() - Reset connection socket
+ * @vdev: vhost-user device
+ */
+static void vu_sock_reset(struct vu_dev *vdev)
+{
+ /* Placeholder to add passt related code */
+ (void)vdev;
+}
+
+static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev,
+ struct vhost_user_msg *msg) = {
+ [VHOST_USER_GET_FEATURES] = vu_get_features_exec,
+ [VHOST_USER_SET_FEATURES] = vu_set_features_exec,
+ [VHOST_USER_GET_PROTOCOL_FEATURES] = vu_get_protocol_features_exec,
+ [VHOST_USER_SET_PROTOCOL_FEATURES] = vu_set_protocol_features_exec,
+ [VHOST_USER_GET_QUEUE_NUM] = vu_get_queue_num_exec,
+ [VHOST_USER_SET_OWNER] = vu_set_owner_exec,
+ [VHOST_USER_SET_MEM_TABLE] = vu_set_mem_table_exec,
+ [VHOST_USER_SET_VRING_NUM] = vu_set_vring_num_exec,
+ [VHOST_USER_SET_VRING_ADDR] = vu_set_vring_addr_exec,
+ [VHOST_USER_SET_VRING_BASE] = vu_set_vring_base_exec,
+ [VHOST_USER_GET_VRING_BASE] = vu_get_vring_base_exec,
+ [VHOST_USER_SET_VRING_KICK] = vu_set_vring_kick_exec,
+ [VHOST_USER_SET_VRING_CALL] = vu_set_vring_call_exec,
+ [VHOST_USER_SET_VRING_ERR] = vu_set_vring_err_exec,
+ [VHOST_USER_SET_VRING_ENABLE] = vu_set_vring_enable_exec,
+};
+
+/**
+ * vu_control_handler() - Handle control commands for vhost-user
+ * @vdev: vhost-user device
+ * @fd: vhost-user message socket
+ * @events: epoll events
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events)
+{
+ struct vhost_user_msg msg = { 0 };
+ bool need_reply, reply_requested;
+ int ret;
+
+ if (events & (EPOLLRDHUP | EPOLLHUP | EPOLLERR)) {
+ vu_sock_reset(vdev);
+ return;
+ }
+
+ ret = vu_message_read_default(fd, &msg);
+ if (ret == 0) {
+ vu_sock_reset(vdev);
+ return;
+ }
+ debug("================ Vhost user message ================");
+ debug("Request: %s (%d)", vu_request_to_string(msg.hdr.request),
+ msg.hdr.request);
+ debug("Flags: 0x%x", msg.hdr.flags);
+ debug("Size: %u", msg.hdr.size);
+
+ need_reply = msg.hdr.flags & VHOST_USER_NEED_REPLY_MASK;
+
+ if (msg.hdr.request >= 0 && msg.hdr.request < VHOST_USER_MAX &&
+ vu_handle[msg.hdr.request])
+ reply_requested = vu_handle[msg.hdr.request](vdev, &msg);
+ else
+ die("Unhandled request: %d", msg.hdr.request);
+
+ /* cppcheck-suppress legacyUninitvar */
+ if (!reply_requested && need_reply) {
+ msg.payload.u64 = 0;
+ msg.hdr.flags = 0;
+ msg.hdr.size = sizeof(msg.payload.u64);
+ msg.fd_num = 0;
+ reply_requested = true;
+ }
+
+ if (reply_requested)
+ vu_send_reply(fd, &msg);
+}
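
For illustration, a minimal usage sketch (not a definitive call site): vu_send()
above is the entry point the tap layer uses to hand a complete Ethernet frame to
the guest through the RX virtqueue; patch 4/4 below wires it into
tap_send_single(). Assuming c->vdev has been set up by vu_init():

	/* sketch: push one Ethernet frame of l2len bytes to the guest;
	 * unlike the stream socket path, no length prefix is prepended,
	 * vu_send() writes the virtio-net header itself
	 */
	if (c->mode == MODE_VU)
		vu_send(c->vdev, data, l2len);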
diff --git a/vhost_user.h b/vhost_user.h
new file mode 100644
index 000000000000..ed4074c6b915
--- /dev/null
+++ b/vhost_user.h
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * vhost-user API, command management and virtio interface
+ *
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ */
+
+/* some parts from subprojects/libvhost-user/libvhost-user.h */
+
+#ifndef VHOST_USER_H
+#define VHOST_USER_H
+
+#include "virtio.h"
+#include "iov.h"
+
+#define VHOST_USER_F_PROTOCOL_FEATURES 30
+
+#define VHOST_MEMORY_BASELINE_NREGIONS 8
+
+/**
+ * enum vhost_user_protocol_feature - List of available vhost-user features
+ */
+enum vhost_user_protocol_feature {
+ VHOST_USER_PROTOCOL_F_MQ = 0,
+ VHOST_USER_PROTOCOL_F_LOG_SHMFD = 1,
+ VHOST_USER_PROTOCOL_F_RARP = 2,
+ VHOST_USER_PROTOCOL_F_REPLY_ACK = 3,
+ VHOST_USER_PROTOCOL_F_NET_MTU = 4,
+ VHOST_USER_PROTOCOL_F_BACKEND_REQ = 5,
+ VHOST_USER_PROTOCOL_F_CROSS_ENDIAN = 6,
+ VHOST_USER_PROTOCOL_F_CRYPTO_SESSION = 7,
+ VHOST_USER_PROTOCOL_F_PAGEFAULT = 8,
+ VHOST_USER_PROTOCOL_F_CONFIG = 9,
+ VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD = 10,
+ VHOST_USER_PROTOCOL_F_HOST_NOTIFIER = 11,
+ VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12,
+ VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS = 14,
+ VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15,
+
+ VHOST_USER_PROTOCOL_F_MAX
+};
+
+/**
+ * enum vhost_user_request - List of available vhost-user requests
+ */
+enum vhost_user_request {
+ VHOST_USER_NONE = 0,
+ VHOST_USER_GET_FEATURES = 1,
+ VHOST_USER_SET_FEATURES = 2,
+ VHOST_USER_SET_OWNER = 3,
+ VHOST_USER_RESET_OWNER = 4,
+ VHOST_USER_SET_MEM_TABLE = 5,
+ VHOST_USER_SET_LOG_BASE = 6,
+ VHOST_USER_SET_LOG_FD = 7,
+ VHOST_USER_SET_VRING_NUM = 8,
+ VHOST_USER_SET_VRING_ADDR = 9,
+ VHOST_USER_SET_VRING_BASE = 10,
+ VHOST_USER_GET_VRING_BASE = 11,
+ VHOST_USER_SET_VRING_KICK = 12,
+ VHOST_USER_SET_VRING_CALL = 13,
+ VHOST_USER_SET_VRING_ERR = 14,
+ VHOST_USER_GET_PROTOCOL_FEATURES = 15,
+ VHOST_USER_SET_PROTOCOL_FEATURES = 16,
+ VHOST_USER_GET_QUEUE_NUM = 17,
+ VHOST_USER_SET_VRING_ENABLE = 18,
+ VHOST_USER_SEND_RARP = 19,
+ VHOST_USER_NET_SET_MTU = 20,
+ VHOST_USER_SET_BACKEND_REQ_FD = 21,
+ VHOST_USER_IOTLB_MSG = 22,
+ VHOST_USER_SET_VRING_ENDIAN = 23,
+ VHOST_USER_GET_CONFIG = 24,
+ VHOST_USER_SET_CONFIG = 25,
+ VHOST_USER_CREATE_CRYPTO_SESSION = 26,
+ VHOST_USER_CLOSE_CRYPTO_SESSION = 27,
+ VHOST_USER_POSTCOPY_ADVISE = 28,
+ VHOST_USER_POSTCOPY_LISTEN = 29,
+ VHOST_USER_POSTCOPY_END = 30,
+ VHOST_USER_GET_INFLIGHT_FD = 31,
+ VHOST_USER_SET_INFLIGHT_FD = 32,
+ VHOST_USER_GPU_SET_SOCKET = 33,
+ VHOST_USER_VRING_KICK = 35,
+ VHOST_USER_GET_MAX_MEM_SLOTS = 36,
+ VHOST_USER_ADD_MEM_REG = 37,
+ VHOST_USER_REM_MEM_REG = 38,
+ VHOST_USER_MAX
+};
+
+/**
+ * struct vhost_user_header - vhost-user message header
+ * @request: Request type of the message
+ * @flags: Request flags
+ * @size: The following payload size
+ */
+struct vhost_user_header {
+ enum vhost_user_request request;
+
+#define VHOST_USER_VERSION_MASK 0x3
+#define VHOST_USER_REPLY_MASK (0x1 << 2)
+#define VHOST_USER_NEED_REPLY_MASK (0x1 << 3)
+ uint32_t flags;
+ uint32_t size;
+} __attribute__ ((__packed__));
+
+/**
+ * struct vhost_user_memory_region - Front-end shared memory region information
+ * @guest_phys_addr: Guest physical address of the region
+ * @memory_size: Memory size
+ * @userspace_addr: front-end (QEMU) userspace address
+ * @mmap_offset: region offset in the shared memory area
+ */
+struct vhost_user_memory_region {
+ uint64_t guest_phys_addr;
+ uint64_t memory_size;
+ uint64_t userspace_addr;
+ uint64_t mmap_offset;
+};
+
+/**
+ * struct vhost_user_memory - List of all the shared memory regions
+ * @nregions: Number of memory regions
+ * @padding: Padding
+ * @regions: Memory regions list
+ */
+struct vhost_user_memory {
+ uint32_t nregions;
+ uint32_t padding;
+ struct vhost_user_memory_region regions[VHOST_MEMORY_BASELINE_NREGIONS];
+};
+
+/**
+ * union vhost_user_payload - vhost-user message payload
+ * @u64: 64-bit payload
+ * @state: vring state payload
+ * @addr: vring addresses payload
+ * @memory: Memory regions information payload
+ */
+union vhost_user_payload {
+#define VHOST_USER_VRING_IDX_MASK 0xff
+#define VHOST_USER_VRING_NOFD_MASK (0x1 << 8)
+ uint64_t u64;
+ struct vhost_vring_state state;
+ struct vhost_vring_addr addr;
+ struct vhost_user_memory memory;
+};
+
+/**
+ * struct vhost_user_msg - vhost-user message
+ * @hdr: Message header
+ * @payload: Message payload
+ * @fds: File descriptors associated with the message
+ * in the ancillary data.
+ * (shared memory or event file descriptors)
+ * @fd_num: Number of file descriptors
+ */
+struct vhost_user_msg {
+ struct vhost_user_header hdr;
+ union vhost_user_payload payload;
+
+ int fds[VHOST_MEMORY_BASELINE_NREGIONS];
+ int fd_num;
+} __attribute__ ((__packed__));
+#define VHOST_USER_HDR_SIZE sizeof(struct vhost_user_header)
+
+/* index of the RX virtqueue */
+#define VHOST_USER_RX_QUEUE 0
+/* index of the TX virtqueue */
+#define VHOST_USER_TX_QUEUE 1
+
+/* in case of multiqueue, the RX and TX queues are interleaved */
+#define VHOST_USER_IS_QUEUE_TX(n) ((n) % 2)
+#define VHOST_USER_IS_QUEUE_RX(n) (!((n) % 2))
+
+/**
+ * vu_queue_enabled() - Return state of a virtqueue
+ * @vq: virtqueue to check
+ *
+ * Return: true if the virtqueue is enabled, false otherwise
+ */
+static inline bool vu_queue_enabled(const struct vu_virtq *vq)
+{
+ return vq->enable;
+}
+
+/**
+ * vu_queue_started() - Return state of a virtqueue
+ * @vq: virtqueue to check
+ *
+ * Return: true if the virtqueue is started, false otherwise
+ */
+static inline bool vu_queue_started(const struct vu_virtq *vq)
+{
+ return vq->started;
+}
+
+int vu_send(struct vu_dev *vdev, const void *buf, size_t size);
+void vu_print_capabilities(void);
+void vu_init(struct ctx *c, struct vu_dev *vdev);
+void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
+ const struct timespec *now);
+void vu_cleanup(struct vu_dev *vdev);
+void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events);
+#endif /* VHOST_USER_H */
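
To illustrate the wire format defined above: a vhost-user message is the packed
header, optionally followed by a payload, with any file descriptors passed as
ancillary data rather than in the body. A hedged sketch of a front-end side
VHOST_USER_GET_FEATURES exchange over a connected UNIX stream socket ("sock" is
assumed to be already connected, error handling omitted, <sys/socket.h> needed):

	static uint64_t get_backend_features(int sock)
	{
		struct vhost_user_msg msg = {
			.hdr.request = VHOST_USER_GET_FEATURES,
			.hdr.flags = 0x1,	/* vhost-user version in the low bits */
			.hdr.size = 0,		/* no payload in the request */
		};

		/* only the fixed header goes on the wire for this request */
		send(sock, &msg, VHOST_USER_HDR_SIZE, 0);

		/* the reply carries a 64-bit feature bitmap after the header */
		recv(sock, &msg, VHOST_USER_HDR_SIZE + sizeof(msg.payload.u64), 0);

		return msg.payload.u64;
	}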
diff --git a/virtio.c b/virtio.c
index 380590afbca3..237395396606 100644
--- a/virtio.c
+++ b/virtio.c
@@ -328,7 +328,6 @@ static bool vring_can_notify(const struct vu_dev *dev, struct vu_virtq *vq)
* @dev: Vhost-user device
* @vq: Virtqueue
*/
-/* cppcheck-suppress unusedFunction */
void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq)
{
if (!vq->vring.avail)
@@ -504,7 +503,6 @@ static int vu_queue_map_desc(struct vu_dev *dev, struct vu_virtq *vq, unsigned i
*
* Return: -1 if there is an error, 0 otherwise
*/
-/* cppcheck-suppress unusedFunction */
int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq, struct vu_virtq_element *elem)
{
unsigned int head;
@@ -553,7 +551,6 @@ void vu_queue_detach_element(struct vu_virtq *vq)
* vu_queue_unpop() - Push back the previously popped element from the virqueue
* @vq: Virtqueue
*/
-/* cppcheck-suppress unusedFunction */
void vu_queue_unpop(struct vu_virtq *vq)
{
vq->last_avail_idx--;
@@ -621,7 +618,6 @@ void vu_queue_fill_by_index(struct vu_virtq *vq, unsigned int index,
* @len: Size of the element
* @idx: Used ring entry index
*/
-/* cppcheck-suppress unusedFunction */
void vu_queue_fill(struct vu_virtq *vq, const struct vu_virtq_element *elem,
unsigned int len, unsigned int idx)
{
@@ -645,7 +641,6 @@ static inline void vring_used_idx_set(struct vu_virtq *vq, uint16_t val)
* @vq: Virtqueue
* @count: Number of entry to flush
*/
-/* cppcheck-suppress unusedFunction */
void vu_queue_flush(struct vu_virtq *vq, unsigned int count)
{
uint16_t old, new;
diff --git a/virtio.h b/virtio.h
index 0e5705581bd2..d58b9ef7fc1d 100644
--- a/virtio.h
+++ b/virtio.h
@@ -106,6 +106,7 @@ struct vu_dev_region {
* @hdrlen: Virtio -net header length
*/
struct vu_dev {
+ struct ctx *context;
uint32_t nregions;
struct vu_dev_region regions[VHOST_USER_MAX_RAM_SLOTS];
struct vu_virtq vq[VHOST_USER_MAX_QUEUES];
@@ -162,7 +163,6 @@ static inline bool vu_has_feature(const struct vu_dev *vdev,
*
* Return: True if the feature is available
*/
-/* cppcheck-suppress unusedFunction */
static inline bool vu_has_protocol_feature(const struct vu_dev *vdev,
unsigned int fbit)
{
--
2.46.0
* [PATCH v4 4/4] vhost-user: add vhost-user
2024-09-06 16:04 [PATCH v4 0/4] Add vhost-user support to passt. (part 3) Laurent Vivier
` (2 preceding siblings ...)
2024-09-06 16:04 ` [PATCH v4 3/4] vhost-user: introduce vhost-user API Laurent Vivier
@ 2024-09-06 16:04 ` Laurent Vivier
2024-09-10 15:47 ` Stefano Brivio
3 siblings, 1 reply; 15+ messages in thread
From: Laurent Vivier @ 2024-09-06 16:04 UTC (permalink / raw)
To: passt-dev; +Cc: Laurent Vivier
Add virtio and vhost-user functions to connect with QEMU.
$ ./passt --vhost-user
and
# qemu-system-x86_64 ... -m 4G \
-object memory-backend-memfd,id=memfd0,share=on,size=4G \
-numa node,memdev=memfd0 \
-chardev socket,id=chr0,path=/tmp/passt_1.socket \
-netdev vhost-user,id=netdev0,chardev=chr0 \
-device virtio-net,mac=9a:2b:2c:2d:2e:2f,netdev=netdev0 \
...
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
Makefile | 6 +-
checksum.c | 1 -
conf.c | 23 +-
epoll_type.h | 4 +
isolation.c | 15 +-
packet.c | 11 +
packet.h | 8 +-
passt.1 | 10 +-
passt.c | 26 +-
passt.h | 6 +
pcap.c | 1 -
tap.c | 111 +++++++--
tap.h | 5 +-
tcp.c | 31 ++-
tcp_buf.c | 8 +-
tcp_internal.h | 3 +-
tcp_vu.c | 656 +++++++++++++++++++++++++++++++++++++++++++++++++
tcp_vu.h | 12 +
udp.c | 76 +++---
udp.h | 8 +-
udp_internal.h | 34 +++
udp_vu.c | 386 +++++++++++++++++++++++++++++
udp_vu.h | 13 +
vhost_user.c | 32 +--
virtio.c | 1 -
vu_common.c | 35 +++
vu_common.h | 34 +++
27 files changed, 1451 insertions(+), 105 deletions(-)
create mode 100644 tcp_vu.c
create mode 100644 tcp_vu.h
create mode 100644 udp_internal.h
create mode 100644 udp_vu.c
create mode 100644 udp_vu.h
create mode 100644 vu_common.c
create mode 100644 vu_common.h
diff --git a/Makefile b/Makefile
index 01e95ac1b62c..e481a9430174 100644
--- a/Makefile
+++ b/Makefile
@@ -47,7 +47,8 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS)
PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \
icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \
ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \
- tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c vhost_user.c virtio.c
+ tcp_buf.c tcp_splice.c tcp_vu.c udp.c udp_flow.c udp_vu.c util.c \
+ vhost_user.c virtio.c vu_common.c
QRAP_SRCS = qrap.c
SRCS = $(PASST_SRCS) $(QRAP_SRCS)
@@ -57,7 +58,8 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \
flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \
lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \
siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \
- udp.h udp_flow.h util.h vhost_user.h virtio.h
+ tcp_vu.h udp.h udp_flow.h udp_internal.h udp_vu.h util.h vhost_user.h \
+ virtio.h vu_common.h
HEADERS = $(PASST_HEADERS) seccomp.h
C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 };
diff --git a/checksum.c b/checksum.c
index 006614fcbb28..aa5b7ae1cb66 100644
--- a/checksum.c
+++ b/checksum.c
@@ -501,7 +501,6 @@ uint16_t csum(const void *buf, size_t len, uint32_t init)
*
* Return: 16-bit folded, complemented checksum
*/
-/* cppcheck-suppress unusedFunction */
uint16_t csum_iov(const struct iovec *iov, size_t n, uint32_t init)
{
unsigned int i;
diff --git a/conf.c b/conf.c
index e29b6a9201e0..0633f82d231a 100644
--- a/conf.c
+++ b/conf.c
@@ -45,6 +45,7 @@
#include "lineread.h"
#include "isolation.h"
#include "log.h"
+#include "vhost_user.h"
/**
* next_chunk - Return the next piece of a string delimited by a character
@@ -759,9 +760,14 @@ static void usage(const char *name, FILE *f, int status)
" default: same interface name as external one\n");
} else {
fprintf(f,
- " -s, --socket PATH UNIX domain socket path\n"
+ " -s, --socket, --socket-path PATH UNIX domain socket path\n"
" default: probe free path starting from "
UNIX_SOCK_PATH "\n", 1);
+ fprintf(f,
+ " --vhost-user Enable vhost-user mode\n"
+ " UNIX domain socket is provided by -s option\n"
+ " --print-capabilities print back-end capabilities in JSON format,\n"
+ " only meaningful for vhost-user mode\n");
}
fprintf(f,
@@ -1281,6 +1287,10 @@ void conf(struct ctx *c, int argc, char **argv)
{"netns-only", no_argument, NULL, 20 },
{"map-host-loopback", required_argument, NULL, 21 },
{"map-guest-addr", required_argument, NULL, 22 },
+ {"vhost-user", no_argument, NULL, 23 },
+ /* vhost-user backend program convention */
+ {"print-capabilities", no_argument, NULL, 24 },
+ {"socket-path", required_argument, NULL, 's' },
{ 0 },
};
const char *logname = (c->mode == MODE_PASTA) ? "pasta" : "passt";
@@ -1419,7 +1429,6 @@ void conf(struct ctx *c, int argc, char **argv)
sizeof(c->ip6.ifname_out), "%s", optarg);
if (ret <= 0 || ret >= (int)sizeof(c->ip6.ifname_out))
die("Invalid interface name: %s", optarg);
-
break;
case 17:
if (c->mode != MODE_PASTA)
@@ -1458,6 +1467,16 @@ void conf(struct ctx *c, int argc, char **argv)
conf_nat(optarg, &c->ip4.map_guest_addr,
&c->ip6.map_guest_addr, NULL);
break;
+ case 23:
+ if (c->mode == MODE_PASTA) {
+ err("--vhost-user is for passt mode only");
+ usage(argv[0], stderr, EXIT_FAILURE);
+ }
+ c->mode = MODE_VU;
+ break;
+ case 24:
+ vu_print_capabilities();
+ break;
case 'd':
c->debug = 1;
c->quiet = 0;
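
--print-capabilities follows the vhost-user backend program convention of
printing the device capabilities as JSON and exiting. As a hedged sketch only
(the exact output is an assumption based on that convention; the real
implementation is vu_print_capabilities() in vhost_user.c, and <stdio.h> and
<stdlib.h> are assumed), it can be as small as:

	void vu_print_capabilities(void)
	{
		/* minimal capabilities announcement: device type only */
		printf("{\n  \"type\": \"net\"\n}\n");
		exit(EXIT_SUCCESS);
	}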
diff --git a/epoll_type.h b/epoll_type.h
index 0ad1efa0ccec..f3ef41584757 100644
--- a/epoll_type.h
+++ b/epoll_type.h
@@ -36,6 +36,10 @@ enum epoll_type {
EPOLL_TYPE_TAP_PASST,
/* socket listening for qemu socket connections */
EPOLL_TYPE_TAP_LISTEN,
+ /* vhost-user command socket */
+ EPOLL_TYPE_VHOST_CMD,
+ /* vhost-user kick event socket */
+ EPOLL_TYPE_VHOST_KICK,
EPOLL_NUM_TYPES,
};
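
These two types let the main loop tell the vhost-user command socket apart from
the per-queue kick eventfds. Registering a kick file descriptor is what the
vu_set_watch() placeholder in patch 3/4 is meant to do; a rough sketch of what
it could look like (an assumption, the actual code lands in the new
vu_common.c), following the epoll_ref/epoll_event pattern used elsewhere in the
tree:

	static void vu_set_watch(const struct vu_dev *vdev, int fd)
	{
		union epoll_ref ref = { .type = EPOLL_TYPE_VHOST_KICK };
		struct epoll_event ev = { 0 };

		/* route kick events on this eventfd to vu_kick_cb() */
		ref.fd = fd;
		ev.data.u64 = ref.u64;
		ev.events = EPOLLIN;
		epoll_ctl(vdev->context->epollfd, EPOLL_CTL_ADD, fd, &ev);
	}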
diff --git a/isolation.c b/isolation.c
index 45fba1e68b9d..c2a3c7b7911d 100644
--- a/isolation.c
+++ b/isolation.c
@@ -379,12 +379,19 @@ void isolate_postfork(const struct ctx *c)
prctl(PR_SET_DUMPABLE, 0);
- if (c->mode == MODE_PASTA) {
- prog.len = (unsigned short)ARRAY_SIZE(filter_pasta);
- prog.filter = filter_pasta;
- } else {
+ switch (c->mode) {
+ case MODE_PASST:
prog.len = (unsigned short)ARRAY_SIZE(filter_passt);
prog.filter = filter_passt;
+ break;
+ case MODE_PASTA:
+ prog.len = (unsigned short)ARRAY_SIZE(filter_pasta);
+ prog.filter = filter_pasta;
+ break;
+ case MODE_VU:
+ prog.len = (unsigned short)ARRAY_SIZE(filter_vu);
+ prog.filter = filter_vu;
+ break;
}
if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
diff --git a/packet.c b/packet.c
index 37489961a37e..e5a78d079231 100644
--- a/packet.c
+++ b/packet.c
@@ -36,6 +36,17 @@
static int packet_check_range(const struct pool *p, size_t offset, size_t len,
const char *start, const char *func, int line)
{
+ if (p->buf_size == 0) {
+ int ret;
+
+ ret = vu_packet_check_range((void *)p->buf, offset, len, start);
+
+ if (ret == -1)
+ trace("cannot find region, %s:%i", func, line);
+
+ return ret;
+ }
+
if (start < p->buf) {
trace("packet start %p before buffer start %p, "
"%s:%i", (void *)start, (void *)p->buf, func, line);
diff --git a/packet.h b/packet.h
index 8377dcf678bb..3f70e949c066 100644
--- a/packet.h
+++ b/packet.h
@@ -8,8 +8,10 @@
/**
* struct pool - Generic pool of packets stored in a buffer
- * @buf: Buffer storing packet descriptors
- * @buf_size: Total size of buffer
+ * @buf: Buffer storing packet descriptors,
+ * a struct vu_dev_region array for passt vhost-user mode
+ * @buf_size: Total size of buffer,
+ * 0 for passt vhost-user mode
* @size: Number of usable descriptors for the pool
* @count: Number of used descriptors for the pool
* @pkt: Descriptors: see macros below
@@ -22,6 +24,8 @@ struct pool {
struct iovec pkt[1];
};
+int vu_packet_check_range(void *buf, size_t offset, size_t len,
+ const char *start);
void packet_add_do(struct pool *p, size_t len, const char *start,
const char *func, int line);
void *packet_get_do(const struct pool *p, const size_t idx,
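
In vhost-user mode, then, the pool no longer describes a contiguous buffer:
@buf points at the vu_dev_region array describing the mapped guest memory and
@buf_size is 0, which makes packet_check_range() defer to
vu_packet_check_range() instead of checking against pkt_buf. A sketch of the
expected wiring (the exact call site is an assumption; tap_sock_update_buf()
is added by this patch in tap.c):

	/* once the guest memory map from SET_MEM_TABLE is mmap()ed, point
	 * all the tap pools at the region table, size 0 selecting the
	 * vu_packet_check_range() path
	 */
	tap_sock_update_buf(vdev->regions, 0);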
diff --git a/passt.1 b/passt.1
index 79d134dbe098..822714147be8 100644
--- a/passt.1
+++ b/passt.1
@@ -378,12 +378,20 @@ interface address are configured on a given host interface.
.SS \fBpasst\fR-only options
.TP
-.BR \-s ", " \-\-socket " " \fIpath
+.BR \-s ", " \-\-socket-path ", " \-\-socket " " \fIpath
Path for UNIX domain socket used by \fBqemu\fR(1) or \fBqrap\fR(1) to connect to
\fBpasst\fR.
Default is to probe a free socket, not accepting connections, starting from
\fI/tmp/passt_1.socket\fR to \fI/tmp/passt_64.socket\fR.
+.TP
+.BR \-\-vhost-user
+Enable vhost-user. The vhost-user command socket is provided by \fB--socket\fR.
+
+.TP
+.BR \-\-print-capabilities
+Print back-end capabilities in JSON format, only meaningful for vhost-user mode.
+
.TP
.BR \-F ", " \-\-fd " " \fIFD
Pass a pre-opened, connected socket to \fBpasst\fR. Usually the socket is opened
diff --git a/passt.c b/passt.c
index ad6f0bc32df6..b64efeaf346c 100644
--- a/passt.c
+++ b/passt.c
@@ -74,6 +74,8 @@ char *epoll_type_str[] = {
[EPOLL_TYPE_TAP_PASTA] = "/dev/net/tun device",
[EPOLL_TYPE_TAP_PASST] = "connected qemu socket",
[EPOLL_TYPE_TAP_LISTEN] = "listening qemu socket",
+ [EPOLL_TYPE_VHOST_CMD] = "vhost-user command socket",
+ [EPOLL_TYPE_VHOST_KICK] = "vhost-user kick socket",
};
static_assert(ARRAY_SIZE(epoll_type_str) == EPOLL_NUM_TYPES,
"epoll_type_str[] doesn't match enum epoll_type");
@@ -206,6 +208,7 @@ int main(int argc, char **argv)
struct rlimit limit;
struct timespec now;
struct sigaction sa;
+ struct vu_dev vdev;
clock_gettime(CLOCK_MONOTONIC, &log_start);
@@ -262,6 +265,8 @@ int main(int argc, char **argv)
pasta_netns_quit_init(&c);
tap_sock_init(&c);
+ if (c.mode == MODE_VU)
+ vu_init(&c, &vdev);
secret_init(&c);
@@ -352,14 +357,31 @@ loop:
tcp_timer_handler(&c, ref);
break;
case EPOLL_TYPE_UDP_LISTEN:
- udp_listen_sock_handler(&c, ref, eventmask, &now);
+ if (c.mode == MODE_VU) {
+ udp_vu_listen_sock_handler(&c, ref, eventmask,
+ &now);
+ } else {
+ udp_buf_listen_sock_handler(&c, ref, eventmask,
+ &now);
+ }
break;
case EPOLL_TYPE_UDP_REPLY:
- udp_reply_sock_handler(&c, ref, eventmask, &now);
+ if (c.mode == MODE_VU)
+ udp_vu_reply_sock_handler(&c, ref, eventmask,
+ &now);
+ else
+ udp_buf_reply_sock_handler(&c, ref, eventmask,
+ &now);
break;
case EPOLL_TYPE_PING:
icmp_sock_handler(&c, ref);
break;
+ case EPOLL_TYPE_VHOST_CMD:
+ vu_control_handler(&vdev, c.fd_tap, eventmask);
+ break;
+ case EPOLL_TYPE_VHOST_KICK:
+ vu_kick_cb(&vdev, ref, &now);
+ break;
default:
/* Can't happen */
ASSERT(0);
diff --git a/passt.h b/passt.h
index 031c9b669cc4..a98f043c7e64 100644
--- a/passt.h
+++ b/passt.h
@@ -25,6 +25,8 @@ union epoll_ref;
#include "fwd.h"
#include "tcp.h"
#include "udp.h"
+#include "udp_vu.h"
+#include "vhost_user.h"
/* Default address for our end on the tap interface. Bit 0 of byte 0 must be 0
* (unicast) and bit 1 of byte 1 must be 1 (locally administered). Otherwise
@@ -94,6 +96,7 @@ struct fqdn {
enum passt_modes {
MODE_PASST,
MODE_PASTA,
+ MODE_VU,
};
/**
@@ -227,6 +230,7 @@ struct ip6_ctx {
* @no_ra: Disable router advertisements
* @low_wmem: Low probed net.core.wmem_max
* @low_rmem: Low probed net.core.rmem_max
+ * @vdev: vhost-user device
*/
struct ctx {
enum passt_modes mode;
@@ -287,6 +291,8 @@ struct ctx {
int low_wmem;
int low_rmem;
+
+ struct vu_dev *vdev;
};
void proto_update_l2_buf(const unsigned char *eth_d,
diff --git a/pcap.c b/pcap.c
index 46cc4b0d72b6..7e9c56090041 100644
--- a/pcap.c
+++ b/pcap.c
@@ -140,7 +140,6 @@ void pcap_multiple(const struct iovec *iov, size_t frame_parts, unsigned int n,
* containing packet data to write, including L2 header
* @iovcnt: Number of buffers (@iov entries)
*/
-/* cppcheck-suppress unusedFunction */
void pcap_iov(const struct iovec *iov, size_t iovcnt)
{
struct timespec now;
diff --git a/tap.c b/tap.c
index 852d83769c29..4ad5e9f4e148 100644
--- a/tap.c
+++ b/tap.c
@@ -58,6 +58,7 @@
#include "packet.h"
#include "tap.h"
#include "log.h"
+#include "vhost_user.h"
/* IPv4 (plus ARP) and IPv6 message batches from tap/guest to IP handlers */
static PACKET_POOL_NOINIT(pool_tap4, TAP_MSGS, pkt_buf);
@@ -78,16 +79,22 @@ void tap_send_single(const struct ctx *c, const void *data, size_t l2len)
struct iovec iov[2];
size_t iovcnt = 0;
- if (c->mode == MODE_PASST) {
+ switch (c->mode) {
+ case MODE_PASST:
iov[iovcnt] = IOV_OF_LVALUE(vnet_len);
iovcnt++;
- }
-
- iov[iovcnt].iov_base = (void *)data;
- iov[iovcnt].iov_len = l2len;
- iovcnt++;
+ /* fall through */
+ case MODE_PASTA:
+ iov[iovcnt].iov_base = (void *)data;
+ iov[iovcnt].iov_len = l2len;
+ iovcnt++;
- tap_send_frames(c, iov, iovcnt, 1);
+ tap_send_frames(c, iov, iovcnt, 1);
+ break;
+ case MODE_VU:
+ vu_send(c->vdev, data, l2len);
+ break;
+ }
}
/**
@@ -406,10 +413,18 @@ size_t tap_send_frames(const struct ctx *c, const struct iovec *iov,
if (!nframes)
return 0;
- if (c->mode == MODE_PASTA)
+ switch (c->mode) {
+ case MODE_PASTA:
m = tap_send_frames_pasta(c, iov, bufs_per_frame, nframes);
- else
+ break;
+ case MODE_PASST:
m = tap_send_frames_passt(c, iov, bufs_per_frame, nframes);
+ break;
+ case MODE_VU:
+ /* fall through */
+ default:
+ ASSERT(0);
+ }
if (m < nframes)
debug("tap: failed to send %zu frames of %zu",
@@ -968,7 +983,7 @@ void tap_add_packet(struct ctx *c, ssize_t l2len, char *p)
* tap_sock_reset() - Handle closing or failure of connect AF_UNIX socket
* @c: Execution context
*/
-static void tap_sock_reset(struct ctx *c)
+void tap_sock_reset(struct ctx *c)
{
info("Client connection closed%s", c->one_off ? ", exiting" : "");
@@ -979,6 +994,8 @@ static void tap_sock_reset(struct ctx *c)
epoll_ctl(c->epollfd, EPOLL_CTL_DEL, c->fd_tap, NULL);
close(c->fd_tap);
c->fd_tap = -1;
+ if (c->mode == MODE_VU)
+ vu_cleanup(c->vdev);
}
/**
@@ -1178,11 +1195,17 @@ static void tap_sock_unix_init(struct ctx *c)
ev.data.u64 = ref.u64;
epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap_listen, &ev);
- info("\nYou can now start qemu (>= 7.2, with commit 13c6be96618c):");
- info(" kvm ... -device virtio-net-pci,netdev=s -netdev stream,id=s,server=off,addr.type=unix,addr.path=%s",
- c->sock_path);
- info("or qrap, for earlier qemu versions:");
- info(" ./qrap 5 kvm ... -net socket,fd=5 -net nic,model=virtio");
+ if (c->mode == MODE_VU) {
+ info("You can start qemu with:");
+ info(" kvm ... -chardev socket,id=chr0,path=%s -netdev vhost-user,id=netdev0,chardev=chr0 -device virtio-net,netdev=netdev0 -object memory-backend-memfd,id=memfd0,share=on,size=$RAMSIZE -numa node,memdev=memfd0\n",
+ c->sock_path);
+ } else {
+ info("\nYou can now start qemu (>= 7.2, with commit 13c6be96618c):");
+ info(" kvm ... -device virtio-net-pci,netdev=s -netdev stream,id=s,server=off,addr.type=unix,addr.path=%s",
+ c->sock_path);
+ info("or qrap, for earlier qemu versions:");
+ info(" ./qrap 5 kvm ... -net socket,fd=5 -net nic,model=virtio");
+ }
}
/**
@@ -1192,8 +1215,8 @@ static void tap_sock_unix_init(struct ctx *c)
*/
void tap_listen_handler(struct ctx *c, uint32_t events)
{
- union epoll_ref ref = { .type = EPOLL_TYPE_TAP_PASST };
struct epoll_event ev = { 0 };
+ union epoll_ref ref;
int v = INT_MAX / 2;
struct ucred ucred;
socklen_t len;
@@ -1233,6 +1256,10 @@ void tap_listen_handler(struct ctx *c, uint32_t events)
trace("tap: failed to set SO_SNDBUF to %i", v);
ref.fd = c->fd_tap;
+ if (c->mode == MODE_VU)
+ ref.type = EPOLL_TYPE_VHOST_CMD;
+ else
+ ref.type = EPOLL_TYPE_TAP_PASST;
ev.events = EPOLLIN | EPOLLRDHUP;
ev.data.u64 = ref.u64;
epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap, &ev);
@@ -1294,21 +1321,52 @@ static void tap_sock_tun_init(struct ctx *c)
epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap, &ev);
}
+/**
+ * tap_sock_update_buf() - Set the buffer base and size for the pool of packets
+ * @base: Buffer base
+ * @size: Buffer size
+ */
+void tap_sock_update_buf(void *base, size_t size)
+{
+ int i;
+
+ pool_tap4_storage.buf = base;
+ pool_tap4_storage.buf_size = size;
+ pool_tap6_storage.buf = base;
+ pool_tap6_storage.buf_size = size;
+
+ for (i = 0; i < TAP_SEQS; i++) {
+ tap4_l4[i].p.buf = base;
+ tap4_l4[i].p.buf_size = size;
+ tap6_l4[i].p.buf = base;
+ tap6_l4[i].p.buf_size = size;
+ }
+}
+
/**
* tap_sock_init() - Create and set up AF_UNIX socket or tuntap file descriptor
* @c: Execution context
*/
void tap_sock_init(struct ctx *c)
{
- size_t sz = sizeof(pkt_buf);
+ size_t sz;
+ char *buf;
int i;
- pool_tap4_storage = PACKET_INIT(pool_tap4, TAP_MSGS, pkt_buf, sz);
- pool_tap6_storage = PACKET_INIT(pool_tap6, TAP_MSGS, pkt_buf, sz);
+ if (c->mode == MODE_VU) {
+ buf = NULL;
+ sz = 0;
+ } else {
+ buf = pkt_buf;
+ sz = sizeof(pkt_buf);
+ }
+
+ pool_tap4_storage = PACKET_INIT(pool_tap4, TAP_MSGS, buf, sz);
+ pool_tap6_storage = PACKET_INIT(pool_tap6, TAP_MSGS, buf, sz);
for (i = 0; i < TAP_SEQS; i++) {
- tap4_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, pkt_buf, sz);
- tap6_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, pkt_buf, sz);
+ tap4_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, buf, sz);
+ tap6_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, buf, sz);
}
if (c->fd_tap != -1) { /* Passed as --fd */
@@ -1317,10 +1375,17 @@ void tap_sock_init(struct ctx *c)
ASSERT(c->one_off);
ref.fd = c->fd_tap;
- if (c->mode == MODE_PASST)
+ switch (c->mode) {
+ case MODE_PASST:
ref.type = EPOLL_TYPE_TAP_PASST;
- else
+ break;
+ case MODE_PASTA:
ref.type = EPOLL_TYPE_TAP_PASTA;
+ break;
+ case MODE_VU:
+ ref.type = EPOLL_TYPE_VHOST_CMD;
+ break;
+ }
ev.events = EPOLLIN | EPOLLRDHUP;
ev.data.u64 = ref.u64;
diff --git a/tap.h b/tap.h
index ec9e2acec460..c5447f7077eb 100644
--- a/tap.h
+++ b/tap.h
@@ -40,7 +40,8 @@ static inline struct iovec tap_hdr_iov(const struct ctx *c,
*/
static inline void tap_hdr_update(struct tap_hdr *thdr, size_t l2len)
{
- thdr->vnet_len = htonl(l2len);
+ if (thdr)
+ thdr->vnet_len = htonl(l2len);
}
void tap_udp4_send(const struct ctx *c, struct in_addr src, in_port_t sport,
@@ -68,6 +69,8 @@ void tap_handler_pasta(struct ctx *c, uint32_t events,
void tap_handler_passt(struct ctx *c, uint32_t events,
const struct timespec *now);
int tap_sock_unix_open(char *sock_path);
+void tap_sock_reset(struct ctx *c);
+void tap_sock_update_buf(void *base, size_t size);
void tap_sock_init(struct ctx *c);
void tap_flush_pools(void);
void tap_handler(struct ctx *c, const struct timespec *now);
diff --git a/tcp.c b/tcp.c
index 77c62f053f15..2a4a5c8b46b7 100644
--- a/tcp.c
+++ b/tcp.c
@@ -304,6 +304,7 @@
#include "flow_table.h"
#include "tcp_internal.h"
#include "tcp_buf.h"
+#include "tcp_vu.h"
/* MSS rounding: see SET_MSS() */
#define MSS_DEFAULT 536
@@ -903,6 +904,7 @@ static void tcp_fill_header(struct tcphdr *th,
* @dlen: TCP payload length
* @check: Checksum, if already known
* @seq: Sequence number for this segment
+ * @no_tcp_csum: Do not set TCP checksum
*
* Return: The IPv4 payload length, host order
*/
@@ -910,7 +912,7 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn,
struct tap_hdr *taph,
struct iphdr *iph, struct tcphdr *th,
size_t dlen, const uint16_t *check,
- uint32_t seq)
+ uint32_t seq, bool no_tcp_csum)
{
const struct flowside *tapside = TAPFLOW(conn);
const struct in_addr *src4 = inany_v4(&tapside->oaddr);
@@ -929,7 +931,10 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn,
tcp_fill_header(th, conn, seq);
- tcp_update_check_tcp4(iph, th);
+ if (no_tcp_csum)
+ th->check = 0;
+ else
+ tcp_update_check_tcp4(iph, th);
tap_hdr_update(taph, l3len + sizeof(struct ethhdr));
@@ -945,13 +950,14 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn,
* @dlen: TCP payload length
* @check: Checksum, if already known
* @seq: Sequence number for this segment
+ * @no_tcp_csum: Do not set TCP checksum
*
* Return: The IPv6 payload length, host order
*/
static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn,
struct tap_hdr *taph,
struct ipv6hdr *ip6h, struct tcphdr *th,
- size_t dlen, uint32_t seq)
+ size_t dlen, uint32_t seq, bool no_tcp_csum)
{
const struct flowside *tapside = TAPFLOW(conn);
size_t l4len = dlen + sizeof(*th);
@@ -970,7 +976,10 @@ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn,
tcp_fill_header(th, conn, seq);
- tcp_update_check_tcp6(ip6h, th);
+ if (no_tcp_csum)
+ th->check = 0;
+ else
+ tcp_update_check_tcp6(ip6h, th);
tap_hdr_update(taph, l4len + sizeof(*ip6h) + sizeof(struct ethhdr));
@@ -984,12 +993,14 @@ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn,
* @dlen: TCP payload length
* @check: Checksum, if already known
* @seq: Sequence number for this segment
+ * @no_tcp_csum: Do not set TCP checksum
*
* Return: IP payload length, host order
*/
size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn,
struct iovec *iov, size_t dlen,
- const uint16_t *check, uint32_t seq)
+ const uint16_t *check, uint32_t seq,
+ bool no_tcp_csum)
{
const struct flowside *tapside = TAPFLOW(conn);
const struct in_addr *a4 = inany_v4(&tapside->oaddr);
@@ -998,13 +1009,13 @@ size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn,
return tcp_fill_headers4(conn, iov[TCP_IOV_TAP].iov_base,
iov[TCP_IOV_IP].iov_base,
iov[TCP_IOV_PAYLOAD].iov_base, dlen,
- check, seq);
+ check, seq, no_tcp_csum);
}
return tcp_fill_headers6(conn, iov[TCP_IOV_TAP].iov_base,
iov[TCP_IOV_IP].iov_base,
iov[TCP_IOV_PAYLOAD].iov_base, dlen,
- seq);
+ seq, no_tcp_csum);
}
/**
@@ -1237,6 +1248,9 @@ int tcp_prepare_flags(struct ctx *c, struct tcp_tap_conn *conn,
*/
int tcp_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags)
{
+ if (c->mode == MODE_VU)
+ return tcp_vu_send_flag(c, conn, flags);
+
return tcp_buf_send_flag(c, conn, flags);
}
@@ -1630,6 +1644,9 @@ static int tcp_sock_consume(const struct tcp_tap_conn *conn, uint32_t ack_seq)
*/
static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
{
+ if (c->mode == MODE_VU)
+ return tcp_vu_data_from_sock(c, conn);
+
return tcp_buf_data_from_sock(c, conn);
}
diff --git a/tcp_buf.c b/tcp_buf.c
index c31e9f31b438..7aa750596af8 100644
--- a/tcp_buf.c
+++ b/tcp_buf.c
@@ -321,7 +321,7 @@ int tcp_buf_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags)
return ret;
}
- l4len = tcp_l2_buf_fill_headers(conn, iov, optlen, NULL, seq);
+ l4len = tcp_l2_buf_fill_headers(conn, iov, optlen, NULL, seq, false);
iov[TCP_IOV_PAYLOAD].iov_len = l4len;
if (flags & DUP_ACK) {
@@ -378,7 +378,8 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn,
tcp4_frame_conns[tcp4_payload_used] = conn;
iov = tcp4_l2_iov[tcp4_payload_used++];
- l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, check, seq);
+ l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, check, seq,
+ false);
iov[TCP_IOV_PAYLOAD].iov_len = l4len;
if (tcp4_payload_used > TCP_FRAMES_MEM - 1)
tcp_payload_flush(c);
@@ -386,7 +387,8 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn,
tcp6_frame_conns[tcp6_payload_used] = conn;
iov = tcp6_l2_iov[tcp6_payload_used++];
- l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, NULL, seq);
+ l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, NULL, seq,
+ false);
iov[TCP_IOV_PAYLOAD].iov_len = l4len;
if (tcp6_payload_used > TCP_FRAMES_MEM - 1)
tcp_payload_flush(c);
diff --git a/tcp_internal.h b/tcp_internal.h
index aa8bb64f1f33..e7fe735bfcb4 100644
--- a/tcp_internal.h
+++ b/tcp_internal.h
@@ -91,7 +91,8 @@ void tcp_rst_do(struct ctx *c, struct tcp_tap_conn *conn);
size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn,
struct iovec *iov, size_t dlen,
- const uint16_t *check, uint32_t seq);
+ const uint16_t *check, uint32_t seq,
+ bool no_tcp_csum);
int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
int force_seq, struct tcp_info *tinfo);
int tcp_prepare_flags(struct ctx *c, struct tcp_tap_conn *conn, int flags,
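
The new no_tcp_csum argument lets the vhost-user data path skip the TCP
checksum: frames handed to the guest carry VIRTIO_NET_HDR_F_DATA_VALID in the
virtio-net header, so a valid TCP checksum isn't needed on that path. A usage
sketch contrasting the two callers (mirroring tcp_buf.c above and tcp_vu.c
below):

	/* socket/tap path (tcp_buf.c): checksum computed as before */
	l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, check, seq, false);

	/* vhost-user path (tcp_vu.c): th->check is left at zero, the
	 * virtio-net header's DATA_VALID flag covers it
	 */
	l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, NULL, seq, true);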
diff --git a/tcp_vu.c b/tcp_vu.c
new file mode 100644
index 000000000000..206f95980eaf
--- /dev/null
+++ b/tcp_vu.c
@@ -0,0 +1,656 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* tcp_vu.c - TCP L2 vhost-user management functions
+ *
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ */
+
+#include <errno.h>
+#include <stddef.h>
+#include <stdint.h>
+
+#include <netinet/ip.h>
+
+#include <sys/socket.h>
+
+#include <linux/tcp.h>
+#include <linux/virtio_net.h>
+
+#include "util.h"
+#include "ip.h"
+#include "passt.h"
+#include "siphash.h"
+#include "inany.h"
+#include "vhost_user.h"
+#include "tcp.h"
+#include "pcap.h"
+#include "flow.h"
+#include "tcp_conn.h"
+#include "flow_table.h"
+#include "tcp_vu.h"
+#include "tcp_internal.h"
+#include "checksum.h"
+#include "vu_common.h"
+
+/**
+ * struct tcp_payload_t - TCP header and data to send segments with payload
+ * @th: TCP header
+ * @data: TCP data
+ */
+struct tcp_payload_t {
+ struct tcphdr th;
+ uint8_t data[IP_MAX_MTU - sizeof(struct tcphdr)];
+};
+
+/**
+ * struct tcp_flags_t - TCP header and data to send zero-length
+ * segments (flags)
+ * @th: TCP header
+ * @opts: TCP options
+ */
+struct tcp_flags_t {
+ struct tcphdr th;
+ char opts[OPT_MSS_LEN + OPT_WS_LEN + 1];
+};
+
+/* vhost-user */
+static const struct virtio_net_hdr vu_header = {
+ .flags = VIRTIO_NET_HDR_F_DATA_VALID,
+ .gso_type = VIRTIO_NET_HDR_GSO_NONE,
+};
+
+static struct iovec iov_vu[VIRTQUEUE_MAX_SIZE];
+static struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
+
+/**
+ * tcp_vu_l2_hdrlen() - Return the size of the headers in a level-2 frame (TCP)
+ * @vdev: vhost-user device
+ * @v6: Set for IPv6 packet
+ *
+ * Return: Size of the level-2 headers
+ */
+static size_t tcp_vu_l2_hdrlen(const struct vu_dev *vdev, bool v6)
+{
+ size_t l2_hdrlen;
+
+ l2_hdrlen = vdev->hdrlen + sizeof(struct ethhdr) +
+ sizeof(struct tcphdr);
+
+ if (v6)
+ l2_hdrlen += sizeof(struct ipv6hdr);
+ else
+ l2_hdrlen += sizeof(struct iphdr);
+
+ return l2_hdrlen;
+}
+
+/**
+ * tcp_vu_pcap() - Capture a single frame to pcap file (TCP)
+ * @c: Execution context
+ * @tapside: Address information for one side of the flow
+ * @iov: Pointer to the array of IO vectors
+ * @iov_used: Length of the array
+ * @l4len: L4 (TCP) payload length
+ */
+static void tcp_vu_pcap(const struct ctx *c, const struct flowside *tapside,
+ struct iovec *iov, int iov_used, size_t l4len)
+{
+ const struct in_addr *src = inany_v4(&tapside->oaddr);
+ const struct in_addr *dst = inany_v4(&tapside->eaddr);
+ const struct vu_dev *vdev = c->vdev;
+ char *base = iov[0].iov_base;
+ size_t size = iov[0].iov_len;
+ struct tcp_payload_t *bp;
+ uint32_t sum;
+
+ if (!*c->pcap)
+ return;
+
+ if (src && dst) {
+ bp = vu_payloadv4(vdev, base);
+ sum = proto_ipv4_header_psum(l4len, IPPROTO_TCP,
+ *src, *dst);
+ } else {
+ bp = vu_payloadv6(vdev, base);
+ sum = proto_ipv6_header_psum(l4len, IPPROTO_TCP,
+ &tapside->oaddr.a6,
+ &tapside->eaddr.a6);
+ }
+ iov[0].iov_base = &bp->th;
+ iov[0].iov_len = size - ((char *)iov[0].iov_base - base);
+ bp->th.check = 0;
+ bp->th.check = csum_iov(iov, iov_used, sum);
+
+ /* set iov for pcap logging */
+ iov[0].iov_base = base + vdev->hdrlen;
+ iov[0].iov_len = size - vdev->hdrlen;
+
+ pcap_iov(iov, iov_used);
+
+ /* restore iov[0] */
+ iov[0].iov_base = base;
+ iov[0].iov_len = size;
+}
+
+/**
+ * tcp_vu_send_flag() - Send segment with flags to vhost-user (no payload)
+ * @c: Execution context
+ * @conn: Connection pointer
+ * @flags: TCP flags: if not set, send segment only if ACK is due
+ *
+ * Return: negative error code on connection reset, 0 otherwise
+ */
+int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags)
+{
+ struct vu_dev *vdev = c->vdev;
+ struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+ const struct flowside *tapside = TAPFLOW(conn);
+ struct virtio_net_hdr_mrg_rxbuf *vh;
+ struct iovec l2_iov[TCP_NUM_IOVS];
+ size_t l2len, l4len, optlen;
+ struct iovec in_sg;
+ struct ethhdr *eh;
+ int nb_ack;
+ int ret;
+
+ elem[0].out_num = 0;
+ elem[0].out_sg = NULL;
+ elem[0].in_num = 1;
+ elem[0].in_sg = &in_sg;
+ ret = vu_queue_pop(vdev, vq, &elem[0]);
+ if (ret < 0)
+ return 0;
+
+ if (elem[0].in_num < 1) {
+ debug("virtio-net receive queue contains no in buffers");
+ vu_queue_rewind(vq, 1);
+ return 0;
+ }
+
+ vh = elem[0].in_sg[0].iov_base;
+
+ vh->hdr = vu_header;
+ if (vdev->hdrlen == sizeof(struct virtio_net_hdr_mrg_rxbuf))
+ vh->num_buffers = htole16(1);
+
+ l2_iov[TCP_IOV_TAP].iov_base = NULL;
+ l2_iov[TCP_IOV_TAP].iov_len = 0;
+ l2_iov[TCP_IOV_ETH].iov_base = (char *)elem[0].in_sg[0].iov_base + vdev->hdrlen;
+ l2_iov[TCP_IOV_ETH].iov_len = sizeof(struct ethhdr);
+
+ eh = l2_iov[TCP_IOV_ETH].iov_base;
+
+ memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest));
+ memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source));
+
+ if (CONN_V4(conn)) {
+ struct tcp_flags_t *payload;
+ struct iphdr *iph;
+ uint32_t seq;
+
+ l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base +
+ l2_iov[TCP_IOV_ETH].iov_len;
+ l2_iov[TCP_IOV_IP].iov_len = sizeof(struct iphdr);
+ l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base +
+ l2_iov[TCP_IOV_IP].iov_len;
+
+ eh->h_proto = htons(ETH_P_IP);
+
+ iph = l2_iov[TCP_IOV_IP].iov_base;
+ *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP);
+
+ payload = l2_iov[TCP_IOV_PAYLOAD].iov_base;
+ payload->th = (struct tcphdr){
+ .doff = offsetof(struct tcp_flags_t, opts) / 4,
+ .ack = 1
+ };
+
+ seq = conn->seq_to_tap;
+ ret = tcp_prepare_flags(c, conn, flags, &payload->th, payload->opts, &optlen);
+ if (ret <= 0) {
+ vu_queue_rewind(vq, 1);
+ return ret;
+ }
+
+ l4len = tcp_l2_buf_fill_headers(conn, l2_iov, optlen, NULL, seq,
+ true);
+ /* keep the following assignment for clarity */
+ /* cppcheck-suppress unreadVariable */
+ l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len;
+
+ l2len = l4len + sizeof(*iph) + sizeof(struct ethhdr);
+ } else {
+ struct tcp_flags_t *payload;
+ struct ipv6hdr *ip6h;
+ uint32_t seq;
+
+ l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base +
+ l2_iov[TCP_IOV_ETH].iov_len;
+ l2_iov[TCP_IOV_IP].iov_len = sizeof(struct ipv6hdr);
+ l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base +
+ l2_iov[TCP_IOV_IP].iov_len;
+
+ eh->h_proto = htons(ETH_P_IPV6);
+
+ ip6h = l2_iov[TCP_IOV_IP].iov_base;
+ *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP);
+
+ payload = l2_iov[TCP_IOV_PAYLOAD].iov_base;
+ payload->th = (struct tcphdr){
+ .doff = offsetof(struct tcp_flags_t, opts) / 4,
+ .ack = 1
+ };
+
+ seq = conn->seq_to_tap;
+ ret = tcp_prepare_flags(c, conn, flags, &payload->th, payload->opts, &optlen);
+ if (ret <= 0) {
+ vu_queue_rewind(vq, 1);
+ return ret;
+ }
+
+ l4len = tcp_l2_buf_fill_headers(conn, l2_iov, optlen, NULL, seq,
+ true);
+ /* keep the following assignment for clarity */
+ /* cppcheck-suppress unreadVariable */
+ l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len;
+
+ l2len = l4len + sizeof(*ip6h) + sizeof(struct ethhdr);
+ }
+ l2len += vdev->hdrlen;
+ ASSERT(l2len <= elem[0].in_sg[0].iov_len);
+
+ elem[0].in_sg[0].iov_len = l2len;
+ tcp_vu_pcap(c, tapside, &elem[0].in_sg[0], 1, l4len);
+
+ vu_queue_fill(vq, &elem[0], l2len, 0);
+ nb_ack = 1;
+
+ if (flags & DUP_ACK) {
+ struct iovec in_sg_dup;
+
+ elem[1].out_num = 0;
+ elem[1].out_sg = NULL;
+ elem[1].in_num = 1;
+ elem[1].in_sg = &in_sg_dup;
+ ret = vu_queue_pop(vdev, vq, &elem[1]);
+ if (ret == 0) {
+ if (elem[1].in_num < 1 || elem[1].in_sg[0].iov_len < l2len) {
+ vu_queue_rewind(vq, 1);
+ } else {
+ memcpy(elem[1].in_sg[0].iov_base, vh, l2len);
+ nb_ack++;
+
+ tcp_vu_pcap(c, tapside, &elem[1].in_sg[0], 1,
+ l4len);
+
+ vu_queue_fill(vq, &elem[1], l2len, 1);
+ }
+ }
+ }
+
+ vu_queue_flush(vq, nb_ack);
+ vu_queue_notify(vdev, vq);
+
+ return 0;
+}
+
+/** tcp_vu_sock_recv() - Receive datastream from socket into vhost-user buffers
+ * @c: Execution context
+ * @conn: Connection pointer
+ * @v4: Set for IPv4 connections
+ * @fillsize: Number of bytes we can receive
+ * @data_len: Size of received data (output)
+ *
+ * Return: number of iov entries used to store the data, negative on error
+ */
+static ssize_t tcp_vu_sock_recv(struct ctx *c,
+ struct tcp_tap_conn *conn, bool v4,
+ size_t fillsize, ssize_t *data_len)
+{
+ struct vu_dev *vdev = c->vdev;
+ struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+ static struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
+ struct msghdr mh_sock = { 0 };
+ uint16_t mss = MSS_GET(conn);
+ static int in_sg_count;
+ int s = conn->sock;
+ size_t l2_hdrlen;
+ int segment_size;
+ int iov_cnt;
+ ssize_t ret;
+
+ l2_hdrlen = tcp_vu_l2_hdrlen(vdev, !v4);
+
+ iov_cnt = 0;
+ in_sg_count = 0;
+ segment_size = 0;
+ *data_len = 0;
+ while (fillsize > 0 && iov_cnt < VIRTQUEUE_MAX_SIZE - 1 &&
+ in_sg_count < ARRAY_SIZE(in_sg)) {
+
+ elem[iov_cnt].out_num = 0;
+ elem[iov_cnt].out_sg = NULL;
+ elem[iov_cnt].in_num = ARRAY_SIZE(in_sg) - in_sg_count;
+ elem[iov_cnt].in_sg = &in_sg[in_sg_count];
+ ret = vu_queue_pop(vdev, vq, &elem[iov_cnt]);
+ if (ret < 0)
+ break;
+
+ if (elem[iov_cnt].in_num < 1) {
+ warn("virtio-net receive queue contains no in buffers");
+ break;
+ }
+
+ in_sg_count += elem[iov_cnt].in_num;
+
+ ASSERT(elem[iov_cnt].in_num == 1);
+ ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen);
+
+ if (segment_size == 0) {
+ iov_vu[iov_cnt + 1].iov_base =
+ (char *)elem[iov_cnt].in_sg[0].iov_base + l2_hdrlen;
+ iov_vu[iov_cnt + 1].iov_len =
+ elem[iov_cnt].in_sg[0].iov_len - l2_hdrlen;
+ } else {
+ iov_vu[iov_cnt + 1].iov_base = elem[iov_cnt].in_sg[0].iov_base;
+ iov_vu[iov_cnt + 1].iov_len = elem[iov_cnt].in_sg[0].iov_len;
+ }
+
+ if (iov_vu[iov_cnt + 1].iov_len > fillsize)
+ iov_vu[iov_cnt + 1].iov_len = fillsize;
+
+ segment_size += iov_vu[iov_cnt + 1].iov_len;
+ if (vdev->hdrlen != sizeof(struct virtio_net_hdr_mrg_rxbuf)) {
+ segment_size = 0;
+ } else if (segment_size >= mss) {
+ iov_vu[iov_cnt + 1].iov_len -= segment_size - mss;
+ segment_size = 0;
+ }
+ fillsize -= iov_vu[iov_cnt + 1].iov_len;
+
+ iov_cnt++;
+ }
+ if (iov_cnt == 0)
+ return 0;
+
+ mh_sock.msg_iov = iov_vu;
+ mh_sock.msg_iovlen = iov_cnt + 1;
+
+ do
+ ret = recvmsg(s, &mh_sock, MSG_PEEK);
+ while (ret < 0 && errno == EINTR);
+
+ if (ret < 0) {
+ vu_queue_rewind(vq, iov_cnt);
+ if (errno != EAGAIN && errno != EWOULDBLOCK) {
+ ret = -errno;
+ tcp_rst(c, conn);
+ }
+ return ret;
+ }
+ if (!ret) {
+ vu_queue_rewind(vq, iov_cnt);
+
+ if ((conn->events & (SOCK_FIN_RCVD | TAP_FIN_SENT)) == SOCK_FIN_RCVD) {
+ int retf = tcp_vu_send_flag(c, conn, FIN | ACK);
+ if (retf) {
+ tcp_rst(c, conn);
+ return retf;
+ }
+
+ conn_event(c, conn, TAP_FIN_SENT);
+ }
+ return 0;
+ }
+
+ *data_len = ret;
+ return iov_cnt;
+}
+
+/**
+ * tcp_vu_prepare() - Prepare the packet header
+ * @c: Execution context
+ * @conn: Connection pointer
+ * @first: Pointer to the array of IO vectors
+ * @data_len: Packet data length
+ * @check: Checksum, if already known
+ *
+ * Return: Level-4 length
+ */
+static size_t tcp_vu_prepare(const struct ctx *c,
+ struct tcp_tap_conn *conn, struct iovec *first,
+ size_t data_len, const uint16_t **check)
+{
+ const struct flowside *toside = TAPFLOW(conn);
+ const struct vu_dev *vdev = c->vdev;
+ struct iovec l2_iov[TCP_NUM_IOVS];
+ char *base = first->iov_base;
+ struct ethhdr *eh;
+ size_t l4len;
+
+ /* we assume the first iovec provided by the guest is large enough
+ * to hold all the headers of the L2 frame
+ */
+
+ l2_iov[TCP_IOV_TAP].iov_base = NULL;
+ l2_iov[TCP_IOV_TAP].iov_len = 0;
+ l2_iov[TCP_IOV_ETH].iov_base = base + vdev->hdrlen;
+ l2_iov[TCP_IOV_ETH].iov_len = sizeof(struct ethhdr);
+
+ eh = l2_iov[TCP_IOV_ETH].iov_base;
+
+ memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest));
+ memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source));
+
+ /* initialize header */
+ if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) {
+ struct tcp_payload_t *payload;
+ struct iphdr *iph;
+
+ ASSERT(first[0].iov_len >= vdev->hdrlen +
+ sizeof(struct ethhdr) + sizeof(struct iphdr) +
+ sizeof(struct tcphdr));
+
+ l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base +
+ l2_iov[TCP_IOV_ETH].iov_len;
+ l2_iov[TCP_IOV_IP].iov_len = sizeof(struct iphdr);
+ l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base +
+ l2_iov[TCP_IOV_IP].iov_len;
+
+
+ eh->h_proto = htons(ETH_P_IP);
+
+ iph = l2_iov[TCP_IOV_IP].iov_base;
+ *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP);
+ payload = l2_iov[TCP_IOV_PAYLOAD].iov_base;
+ payload->th = (struct tcphdr){
+ .doff = offsetof(struct tcp_payload_t, data) / 4,
+ .ack = 1
+ };
+
+ l4len = tcp_l2_buf_fill_headers(conn, l2_iov, data_len, *check,
+ conn->seq_to_tap, true);
+ /* keep the following assignment for clarity */
+ /* cppcheck-suppress unreadVariable */
+ l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len;
+
+ *check = &iph->check;
+ } else {
+ struct tcp_payload_t *payload;
+ struct ipv6hdr *ip6h;
+
+ ASSERT(first[0].iov_len >= vdev->hdrlen +
+ sizeof(struct ethhdr) + sizeof(struct ipv6hdr) +
+ sizeof(struct tcphdr));
+
+ l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base +
+ l2_iov[TCP_IOV_ETH].iov_len;
+ l2_iov[TCP_IOV_IP].iov_len = sizeof(struct ipv6hdr);
+ l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base +
+ l2_iov[TCP_IOV_IP].iov_len;
+
+
+ eh->h_proto = htons(ETH_P_IPV6);
+
+ ip6h = l2_iov[TCP_IOV_IP].iov_base;
+ *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP);
+
+ payload = l2_iov[TCP_IOV_PAYLOAD].iov_base;
+ payload->th = (struct tcphdr){
+ .doff = offsetof(struct tcp_payload_t, data) / 4,
+ .ack = 1
+ };
+
+ l4len = tcp_l2_buf_fill_headers(conn, l2_iov, data_len, NULL,
+ conn->seq_to_tap, true);
+ /* keep the following assignment for clarity */
+ /* cppcheck-suppress unreadVariable */
+ l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len;
+ }
+
+ return l4len;
+}
+
+/**
+ * tcp_vu_data_from_sock() - Handle new data from socket, queue to vhost-user,
+ * in window
+ * @c: Execution context
+ * @conn: Connection pointer
+ *
+ * Return: Negative on connection reset, 0 otherwise
+ */
+int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
+{
+ uint32_t wnd_scaled = conn->wnd_from_tap << conn->ws_from_tap;
+ struct vu_dev *vdev = c->vdev;
+ struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+ const struct flowside *tapside = TAPFLOW(conn);
+ uint16_t mss = MSS_GET(conn);
+ size_t l2_hdrlen, fillsize;
+ int i, iov_cnt, iov_used;
+ int v4 = CONN_V4(conn);
+ uint32_t already_sent = 0;
+ const uint16_t *check;
+ struct iovec *first;
+ int segment_size;
+ int num_buffers;
+ ssize_t len;
+
+ if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) {
+ flow_err(conn,
+ "Got packet, but RX virtqueue not usable yet");
+ return 0;
+ }
+
+ already_sent = conn->seq_to_tap - conn->seq_ack_from_tap;
+
+ if (SEQ_LT(already_sent, 0)) {
+ /* RFC 761, section 2.1. */
+ flow_trace(conn, "ACK sequence gap: ACK for %u, sent: %u",
+ conn->seq_ack_from_tap, conn->seq_to_tap);
+ conn->seq_to_tap = conn->seq_ack_from_tap;
+ already_sent = 0;
+ }
+
+ if (!wnd_scaled || already_sent >= wnd_scaled) {
+ conn_flag(c, conn, STALLED);
+ conn_flag(c, conn, ACK_FROM_TAP_DUE);
+ return 0;
+ }
+
+ /* Set up buffer descriptors we'll fill completely and partially. */
+
+ fillsize = wnd_scaled;
+
+ if (peek_offset_cap)
+ already_sent = 0;
+
+ iov_vu[0].iov_base = tcp_buf_discard;
+ iov_vu[0].iov_len = already_sent;
+ fillsize -= already_sent;
+
+ /* collect the buffers from vhost-user and fill them with the
+ * data from the socket
+ */
+ iov_cnt = tcp_vu_sock_recv(c, conn, v4, fillsize, &len);
+ if (iov_cnt <= 0)
+ return iov_cnt;
+
+ len -= already_sent;
+ if (len <= 0) {
+ conn_flag(c, conn, STALLED);
+ vu_queue_rewind(vq, iov_cnt);
+ return 0;
+ }
+
+ conn_flag(c, conn, ~STALLED);
+
+ /* Likely, some new data was acked too. */
+ tcp_update_seqack_wnd(c, conn, 0, NULL);
+
+ /* initialize headers */
+ l2_hdrlen = tcp_vu_l2_hdrlen(vdev, !v4);
+ iov_used = 0;
+ num_buffers = 0;
+ check = NULL;
+ segment_size = 0;
+
+ /* iov_vu is an array of buffers whose size can be smaller than the
+ * segment size we want to send. With mergeable RX buffers, several
+ * virtio iov entries can be merged into one packet: only the first
+ * iov carries the packet headers, and num_buffers is set to the
+ * number of iov entries used for the segment.
+ */
+ for (i = 0; i < iov_cnt && len; i++) {
+
+ if (segment_size == 0)
+ first = &iov_vu[i + 1];
+
+ if (iov_vu[i + 1].iov_len > (size_t)len)
+ iov_vu[i + 1].iov_len = len;
+
+ len -= iov_vu[i + 1].iov_len;
+ iov_used++;
+
+ segment_size += iov_vu[i + 1].iov_len;
+ num_buffers++;
+
+ if (segment_size >= mss || len == 0 ||
+ i + 1 == iov_cnt || vdev->hdrlen != sizeof(struct virtio_net_hdr_mrg_rxbuf)) {
+ struct virtio_net_hdr_mrg_rxbuf *vh;
+ size_t l4len;
+
+ if (i + 1 == iov_cnt)
+ check = NULL;
+
+ /* restore first iovec base: point to vnet header */
+ first->iov_base = (char *)first->iov_base - l2_hdrlen;
+ first->iov_len = first->iov_len + l2_hdrlen;
+
+ vh = first->iov_base;
+
+ vh->hdr = vu_header;
+ if (vdev->hdrlen == sizeof(struct virtio_net_hdr_mrg_rxbuf))
+ vh->num_buffers = htole16(num_buffers);
+
+ l4len = tcp_vu_prepare(c, conn, first, segment_size, &check);
+
+ tcp_vu_pcap(c, tapside, first, num_buffers, l4len);
+
+ conn->seq_to_tap += segment_size;
+
+ segment_size = 0;
+ num_buffers = 0;
+ }
+ }
+
+ /* release unused buffers */
+ vu_queue_rewind(vq, iov_cnt - iov_used);
+
+ /* send packets */
+ vu_send_frame(vdev, vq, elem, &iov_vu[1], iov_used);
+
+ conn_flag(c, conn, ACK_FROM_TAP_DUE);
+
+ return 0;
+}
diff --git a/tcp_vu.h b/tcp_vu.h
new file mode 100644
index 000000000000..b433c3e0d06f
--- /dev/null
+++ b/tcp_vu.h
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ */
+
+#ifndef TCP_VU_H
+#define TCP_VU_H
+
+int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags);
+int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn);
+
+#endif /*TCP_VU_H */
diff --git a/udp.c b/udp.c
index 8a93aad6272b..b7df5c117596 100644
--- a/udp.c
+++ b/udp.c
@@ -109,8 +109,7 @@
#include "pcap.h"
#include "log.h"
#include "flow_table.h"
-
-#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */
+#include "udp_internal.h"
/* "Spliced" sockets indexed by bound port (host order) */
static int udp_splice_ns [IP_VERSIONS][NUM_PORTS];
@@ -118,20 +117,8 @@ static int udp_splice_init[IP_VERSIONS][NUM_PORTS];
/* Static buffers */
-/**
- * struct udp_payload_t - UDP header and data for inbound messages
- * @uh: UDP header
- * @data: UDP data
- */
-static struct udp_payload_t {
- struct udphdr uh;
- char data[USHRT_MAX - sizeof(struct udphdr)];
-#ifdef __AVX2__
-} __attribute__ ((packed, aligned(32)))
-#else
-} __attribute__ ((packed, aligned(__alignof__(unsigned int))))
-#endif
-udp_payload[UDP_MAX_FRAMES];
+/* UDP header and data for inbound messages */
+static struct udp_payload_t udp_payload[UDP_MAX_FRAMES];
/* Ethernet header for IPv4 frames */
static struct ethhdr udp4_eth_hdr;
@@ -311,6 +298,7 @@ static void udp_splice_send(const struct ctx *c, size_t start, size_t n,
/**
* udp_update_hdr4() - Update headers for one IPv4 datagram
+ * @c: Execution context
* @ip4h: Pre-filled IPv4 header (except for tot_len and saddr)
* @bp: Pointer to udp_payload_t to update
* @toside: Flowside for destination side
@@ -318,8 +306,9 @@ static void udp_splice_send(const struct ctx *c, size_t start, size_t n,
*
* Return: size of IPv4 payload (UDP header + data)
*/
-static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp,
- const struct flowside *toside, size_t dlen)
+size_t udp_update_hdr4(const struct ctx *c,
+ struct iphdr *ip4h, struct udp_payload_t *bp,
+ const struct flowside *toside, size_t dlen)
{
const struct in_addr *src = inany_v4(&toside->oaddr);
const struct in_addr *dst = inany_v4(&toside->eaddr);
@@ -336,13 +325,17 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp,
bp->uh.source = htons(toside->oport);
bp->uh.dest = htons(toside->eport);
bp->uh.len = htons(l4len);
- csum_udp4(&bp->uh, *src, *dst, bp->data, dlen);
+ if (c->mode != MODE_VU)
+ csum_udp4(&bp->uh, *src, *dst, bp->data, dlen);
+ else
+ bp->uh.check = 0;
return l4len;
}
/**
* udp_update_hdr6() - Update headers for one IPv6 datagram
+ * @c: Execution context
* @ip6h: Pre-filled IPv6 header (except for payload_len and addresses)
* @bp: Pointer to udp_payload_t to update
* @toside: Flowside for destination side
@@ -350,8 +343,9 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp,
*
* Return: size of IPv6 payload (UDP header + data)
*/
-static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp,
- const struct flowside *toside, size_t dlen)
+size_t udp_update_hdr6(const struct ctx *c,
+ struct ipv6hdr *ip6h, struct udp_payload_t *bp,
+ const struct flowside *toside, size_t dlen)
{
uint16_t l4len = dlen + sizeof(bp->uh);
@@ -365,19 +359,29 @@ static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp,
bp->uh.source = htons(toside->oport);
bp->uh.dest = htons(toside->eport);
bp->uh.len = ip6h->payload_len;
- csum_udp6(&bp->uh, &toside->oaddr.a6, &toside->eaddr.a6, bp->data, dlen);
+ if (c->mode != MODE_VU) {
+ csum_udp6(&bp->uh, &toside->oaddr.a6, &toside->eaddr.a6,
+ bp->data, dlen);
+ } else {
+ /* 0 is an invalid checksum for UDP over IPv6: the kernel drops
+ * such datagrams even if checksum validation is disabled by the
+ * virtio flags, so any non-zero value has to be put here.
+ */
+ bp->uh.check = 0xffff;
+ }
return l4len;
}
/**
* udp_tap_prepare() - Convert one datagram into a tap frame
+ * @c: Execution context
* @mmh: Receiving mmsghdr array
* @idx: Index of the datagram to prepare
* @toside: Flowside for destination side
*/
-static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx,
- const struct flowside *toside)
+static void udp_tap_prepare(const struct ctx *c, const struct mmsghdr *mmh,
+ unsigned idx, const struct flowside *toside)
{
struct iovec (*tap_iov)[UDP_NUM_IOVS] = &udp_l2_iov[idx];
struct udp_payload_t *bp = &udp_payload[idx];
@@ -385,13 +389,15 @@ static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx,
size_t l4len;
if (!inany_v4(&toside->eaddr) || !inany_v4(&toside->oaddr)) {
- l4len = udp_update_hdr6(&bm->ip6h, bp, toside, mmh[idx].msg_len);
+ l4len = udp_update_hdr6(c, &bm->ip6h, bp, toside,
+ mmh[idx].msg_len);
tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip6h) +
sizeof(udp6_eth_hdr));
(*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp6_eth_hdr);
(*tap_iov)[UDP_IOV_IP] = IOV_OF_LVALUE(bm->ip6h);
} else {
- l4len = udp_update_hdr4(&bm->ip4h, bp, toside, mmh[idx].msg_len);
+ l4len = udp_update_hdr4(c, &bm->ip4h, bp, toside,
+ mmh[idx].msg_len);
tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip4h) +
sizeof(udp4_eth_hdr));
(*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp4_eth_hdr);
@@ -408,7 +414,7 @@ static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx,
*
* #syscalls recvmsg
*/
-static bool udp_sock_recverr(int s)
+bool udp_sock_recverr(int s)
{
const struct sock_extended_err *ee;
const struct cmsghdr *hdr;
@@ -495,7 +501,7 @@ static int udp_sock_recv(const struct ctx *c, int s, uint32_t events,
}
/**
- * udp_listen_sock_handler() - Handle new data from socket
+ * udp_buf_listen_sock_handler() - Handle new data from socket
* @c: Execution context
* @ref: epoll reference
* @events: epoll events bitmap
@@ -503,8 +509,8 @@ static int udp_sock_recv(const struct ctx *c, int s, uint32_t events,
*
* #syscalls recvmmsg
*/
-void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref,
- uint32_t events, const struct timespec *now)
+void udp_buf_listen_sock_handler(const struct ctx *c, union epoll_ref ref,
+ uint32_t events, const struct timespec *now)
{
struct mmsghdr *mmh_recv = ref.udp.v6 ? udp6_mh_recv : udp4_mh_recv;
int n, i;
@@ -527,7 +533,7 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref,
if (pif_is_socket(batchpif)) {
udp_splice_prepare(mmh_recv, i);
} else if (batchpif == PIF_TAP) {
- udp_tap_prepare(mmh_recv, i,
+ udp_tap_prepare(c, mmh_recv, i,
flowside_at_sidx(batchsidx));
}
@@ -561,7 +567,7 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref,
}
/**
- * udp_reply_sock_handler() - Handle new data from flow specific socket
+ * udp_buf_reply_sock_handler() - Handle new data from flow specific socket
* @c: Execution context
* @ref: epoll reference
* @events: epoll events bitmap
@@ -569,8 +575,8 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref,
*
* #syscalls recvmmsg
*/
-void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref,
- uint32_t events, const struct timespec *now)
+void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref,
+ uint32_t events, const struct timespec *now)
{
const struct flowside *fromside = flowside_at_sidx(ref.flowside);
flow_sidx_t tosidx = flow_sidx_opposite(ref.flowside);
@@ -594,7 +600,7 @@ void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref,
if (pif_is_socket(topif))
udp_splice_prepare(mmh_recv, i);
else if (topif == PIF_TAP)
- udp_tap_prepare(mmh_recv, i, toside);
+ udp_tap_prepare(c, mmh_recv, i, toside);
}
if (pif_is_socket(topif)) {
diff --git a/udp.h b/udp.h
index fb42e1c50d70..77b29260e8d1 100644
--- a/udp.h
+++ b/udp.h
@@ -9,10 +9,10 @@
#define UDP_TIMER_INTERVAL 1000 /* ms */
void udp_portmap_clear(void);
-void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref,
- uint32_t events, const struct timespec *now);
-void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref,
- uint32_t events, const struct timespec *now);
+void udp_buf_listen_sock_handler(const struct ctx *c, union epoll_ref ref,
+ uint32_t events, const struct timespec *now);
+void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref,
+ uint32_t events, const struct timespec *now);
int udp_tap_handler(const struct ctx *c, uint8_t pif,
sa_family_t af, const void *saddr, const void *daddr,
const struct pool *p, int idx, const struct timespec *now);
diff --git a/udp_internal.h b/udp_internal.h
new file mode 100644
index 000000000000..7dd45753698f
--- /dev/null
+++ b/udp_internal.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later
+ * Copyright (c) 2021 Red Hat GmbH
+ * Author: Stefano Brivio <sbrivio@redhat.com>
+ */
+
+#ifndef UDP_INTERNAL_H
+#define UDP_INTERNAL_H
+
+#include "tap.h" /* needed by udp_meta_t */
+
+#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */
+
+/**
+ * struct udp_payload_t - UDP header and data for inbound messages
+ * @uh: UDP header
+ * @data: UDP data
+ */
+struct udp_payload_t {
+ struct udphdr uh;
+ char data[USHRT_MAX - sizeof(struct udphdr)];
+#ifdef __AVX2__
+} __attribute__ ((packed, aligned(32)));
+#else
+} __attribute__ ((packed, aligned(__alignof__(unsigned int))));
+#endif
+
+size_t udp_update_hdr4(const struct ctx *c,
+ struct iphdr *ip4h, struct udp_payload_t *bp,
+ const struct flowside *toside, size_t dlen);
+size_t udp_update_hdr6(const struct ctx *c,
+ struct ipv6hdr *ip6h, struct udp_payload_t *bp,
+ const struct flowside *toside, size_t dlen);
+bool udp_sock_recverr(int s);
+#endif /* UDP_INTERNAL_H */
diff --git a/udp_vu.c b/udp_vu.c
new file mode 100644
index 000000000000..39a7a30b209c
--- /dev/null
+++ b/udp_vu.c
@@ -0,0 +1,386 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* udp_vu.c - UDP L2 vhost-user management functions
+ *
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ */
+
+#include <unistd.h>
+#include <assert.h>
+#include <net/ethernet.h>
+#include <net/if.h>
+#include <netinet/in.h>
+#include <netinet/ip.h>
+#include <netinet/udp.h>
+#include <stdint.h>
+#include <stddef.h>
+#include <sys/uio.h>
+#include <linux/virtio_net.h>
+
+#include "checksum.h"
+#include "util.h"
+#include "ip.h"
+#include "siphash.h"
+#include "inany.h"
+#include "passt.h"
+#include "pcap.h"
+#include "log.h"
+#include "vhost_user.h"
+#include "udp_internal.h"
+#include "flow.h"
+#include "flow_table.h"
+#include "udp_flow.h"
+#include "udp_vu.h"
+#include "vu_common.h"
+
+/* vhost-user */
+static const struct virtio_net_hdr vu_header = {
+ .flags = VIRTIO_NET_HDR_F_DATA_VALID,
+ .gso_type = VIRTIO_NET_HDR_GSO_NONE,
+};
+
+static struct iovec iov_vu[VIRTQUEUE_MAX_SIZE];
+static struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
+static struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
+static int in_sg_count;
+
+/**
+ * udp_vu_l2_hdrlen() - Return the size of the headers of a level-2 frame (UDP)
+ * @vdev: vhost-user device
+ * @v6: Set for IPv6 packet
+ *
+ * Return: size of the L2 headers
+ */
+static size_t udp_vu_l2_hdrlen(const struct vu_dev *vdev, bool v6)
+{
+ size_t l2_hdrlen;
+
+ l2_hdrlen = vdev->hdrlen + sizeof(struct ethhdr) +
+ sizeof(struct udphdr);
+
+ if (v6)
+ l2_hdrlen += sizeof(struct ipv6hdr);
+ else
+ l2_hdrlen += sizeof(struct iphdr);
+
+ return l2_hdrlen;
+}
+
+/**
+ * udp_vu_sock_recv() - Receive datagrams from socket into vhost-user buffers
+ * @c: Execution context
+ * @s_in: Source socket address, filled in by recvmsg()
+ * @s: Socket to receive from
+ * @events: epoll events bitmap
+ * @v6: Set for IPv6 connections
+ * @data_len: Size of received data (output)
+ *
+ * Return: Number of iov entries used to store the datagram
+ */
+static int udp_vu_sock_recv(const struct ctx *c, union sockaddr_inany *s_in,
+ int s, uint32_t events, bool v6, ssize_t *data_len)
+{
+ struct vu_dev *vdev = c->vdev;
+ struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+ int virtqueue_max, iov_cnt, idx, iov_used;
+ size_t fillsize, size, off, l2_hdrlen;
+ struct virtio_net_hdr_mrg_rxbuf *vh;
+ struct msghdr msg = { 0 };
+ char *base;
+
+ ASSERT(!c->no_udp);
+
+ /* Clear any errors first */
+ if (events & EPOLLERR) {
+ while (udp_sock_recverr(s))
+ ;
+ }
+
+ if (!(events & EPOLLIN))
+ return 0;
+
+ /* compute L2 header length */
+
+ if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
+ virtqueue_max = VIRTQUEUE_MAX_SIZE;
+ else
+ virtqueue_max = 1;
+
+ l2_hdrlen = udp_vu_l2_hdrlen(vdev, v6);
+
+ msg.msg_name = s_in;
+ msg.msg_namelen = sizeof(union sockaddr_inany);
+
+ fillsize = USHRT_MAX;
+ iov_cnt = 0;
+ in_sg_count = 0;
+ while (fillsize && iov_cnt < virtqueue_max &&
+ in_sg_count < ARRAY_SIZE(in_sg)) {
+ int ret;
+
+ elem[iov_cnt].out_num = 0;
+ elem[iov_cnt].out_sg = NULL;
+ elem[iov_cnt].in_num = ARRAY_SIZE(in_sg) - in_sg_count;
+ elem[iov_cnt].in_sg = &in_sg[in_sg_count];
+ ret = vu_queue_pop(vdev, vq, &elem[iov_cnt]);
+ if (ret < 0)
+ break;
+ in_sg_count += elem[iov_cnt].in_num;
+
+ if (elem[iov_cnt].in_num < 1) {
+ err("virtio-net receive queue contains no in buffers");
+ vu_queue_rewind(vq, iov_cnt);
+ return 0;
+ }
+ ASSERT(elem[iov_cnt].in_num == 1);
+ ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen);
+
+ if (iov_cnt == 0) {
+ base = elem[iov_cnt].in_sg[0].iov_base;
+ size = elem[iov_cnt].in_sg[0].iov_len;
+
+ /* keep space for the headers */
+ iov_vu[0].iov_base = base + l2_hdrlen;
+ iov_vu[0].iov_len = size - l2_hdrlen;
+ } else {
+ iov_vu[iov_cnt].iov_base = elem[iov_cnt].in_sg[0].iov_base;
+ iov_vu[iov_cnt].iov_len = elem[iov_cnt].in_sg[0].iov_len;
+ }
+
+ if (iov_vu[iov_cnt].iov_len > fillsize)
+ iov_vu[iov_cnt].iov_len = fillsize;
+
+ fillsize -= iov_vu[iov_cnt].iov_len;
+
+ iov_cnt++;
+ }
+ if (iov_cnt == 0)
+ return 0;
+
+ msg.msg_iov = iov_vu;
+ msg.msg_iovlen = iov_cnt;
+
+ *data_len = recvmsg(s, &msg, 0);
+ if (*data_len < 0) {
+ vu_queue_rewind(vq, iov_cnt);
+ return 0;
+ }
+
+ /* restore original values */
+ iov_vu[0].iov_base = base;
+ iov_vu[0].iov_len = size;
+
+ /* count the number of buffers filled by recvmsg() */
+ idx = iov_skip_bytes(iov_vu, iov_cnt, l2_hdrlen + *data_len,
+ &off);
+ /* adjust last iov length */
+ if (idx < iov_cnt)
+ iov_vu[idx].iov_len = off;
+ iov_used = idx + !!off;
+
+ /* release unused buffers */
+ vu_queue_rewind(vq, iov_cnt - iov_used);
+
+ vh = (struct virtio_net_hdr_mrg_rxbuf *)base;
+ vh->hdr = vu_header;
+ if (vdev->hdrlen == sizeof(struct virtio_net_hdr_mrg_rxbuf))
+ vh->num_buffers = htole16(iov_used);
+
+ return iov_used;
+}
+
+/**
+ * udp_vu_prepare() - Prepare the packet header
+ * @c: Execution context
+ * @toside: Address information for one side of the flow
+ * @data_len: Packet data length
+ *
+ * Return: Level-4 length
+ */
+static size_t udp_vu_prepare(const struct ctx *c,
+ const struct flowside *toside, ssize_t data_len)
+{
+ const struct vu_dev *vdev = c->vdev;
+ struct ethhdr *eh;
+ size_t l4len;
+
+ /* ethernet header */
+ eh = vu_eth(vdev, iov_vu[0].iov_base);
+
+ memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest));
+ memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source));
+
+ /* initialize header */
+ if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) {
+ struct iphdr *iph = vu_ip(vdev, iov_vu[0].iov_base);
+ struct udp_payload_t *bp = vu_payloadv4(vdev,
+ iov_vu[0].iov_base);
+
+ eh->h_proto = htons(ETH_P_IP);
+
+ *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_UDP);
+
+ l4len = udp_update_hdr4(c, iph, bp, toside, data_len);
+ } else {
+ struct ipv6hdr *ip6h = vu_ip(vdev, iov_vu[0].iov_base);
+ struct udp_payload_t *bp = vu_payloadv6(vdev,
+ iov_vu[0].iov_base);
+
+ eh->h_proto = htons(ETH_P_IPV6);
+
+ *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_UDP);
+
+ l4len = udp_update_hdr6(c, ip6h, bp, toside, data_len);
+ }
+
+ return l4len;
+}
+
+/**
+ * udp_vu_pcap() - Capture a single frame to pcap file (UDP)
+ * @c: Execution context
+ * @toside: Address information for one side of the flow
+ * @l4len: L4 payload length
+ * @iov_used: Length of the array
+ */
+static void udp_vu_pcap(const struct ctx *c, const struct flowside *toside,
+ size_t l4len, int iov_used)
+{
+ const struct in_addr *src4 = inany_v4(&toside->oaddr);
+ const struct in_addr *dst4 = inany_v4(&toside->eaddr);
+ const struct vu_dev *vdev = c->vdev;
+ char *base = iov_vu[0].iov_base;
+ size_t size = iov_vu[0].iov_len;
+ struct udp_payload_t *bp;
+ uint32_t sum;
+
+ if (!*c->pcap)
+ return;
+
+ if (src4 && dst4) {
+ bp = vu_payloadv4(vdev, base);
+ sum = proto_ipv4_header_psum(l4len, IPPROTO_UDP, *src4, *dst4);
+ } else {
+ bp = vu_payloadv6(vdev, base);
+ sum = proto_ipv6_header_psum(l4len, IPPROTO_UDP,
+ &toside->oaddr.a6,
+ &toside->eaddr.a6);
+ bp->uh.check = 0; /* reset the 0xffff set by udp_update_hdr6() */
+ }
+
+ iov_vu[0].iov_base = &bp->uh;
+ iov_vu[0].iov_len = size - ((char *)iov_vu[0].iov_base - base);
+
+ bp->uh.check = csum_iov(iov_vu, iov_used, sum);
+
+ /* set iov for pcap logging */
+ iov_vu[0].iov_base = base + vdev->hdrlen;
+ iov_vu[0].iov_len = size - vdev->hdrlen;
+ pcap_iov(iov_vu, iov_used);
+
+ /* restore iov_vu[0] */
+ iov_vu[0].iov_base = base;
+ iov_vu[0].iov_len = size;
+}
+
+/**
+ * udp_vu_listen_sock_handler() - Handle new data from socket
+ * @c: Execution context
+ * @ref: epoll reference
+ * @events: epoll events bitmap
+ * @now: Current timestamp
+ */
+void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref,
+ uint32_t events, const struct timespec *now)
+{
+ struct vu_dev *vdev = c->vdev;
+ struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+ bool v6 = ref.udp.v6;
+ int i;
+
+ for (i = 0; i < UDP_MAX_FRAMES; i++) {
+ union sockaddr_inany s_in;
+ flow_sidx_t batchsidx;
+ uint8_t batchpif;
+ ssize_t data_len;
+ int iov_used;
+
+ iov_used = udp_vu_sock_recv(c, &s_in, ref.fd,
+ events, v6, &data_len);
+ if (iov_used <= 0)
+ return;
+
+ batchsidx = udp_flow_from_sock(c, ref, &s_in, now);
+ batchpif = pif_at_sidx(batchsidx);
+
+ if (batchpif == PIF_TAP) {
+ size_t l4len;
+
+ l4len = udp_vu_prepare(c, flowside_at_sidx(batchsidx),
+ data_len);
+ udp_vu_pcap(c, flowside_at_sidx(batchsidx), l4len,
+ iov_used);
+ vu_send_frame(vdev, vq, elem, iov_vu, iov_used);
+ } else if (flow_sidx_valid(batchsidx)) {
+ flow_sidx_t fromsidx = flow_sidx_opposite(batchsidx);
+ struct udp_flow *uflow = udp_at_sidx(batchsidx);
+
+ flow_err(uflow,
+ "No support for forwarding UDP from %s to %s",
+ pif_name(pif_at_sidx(fromsidx)),
+ pif_name(batchpif));
+ } else {
+ debug("Discarding 1 datagram without flow");
+ }
+ }
+}
+
+/**
+ * udp_vu_reply_sock_handler() - Handle new data from flow specific socket
+ * @c: Execution context
+ * @ref: epoll reference
+ * @events: epoll events bitmap
+ * @now: Current timestamp
+ */
+void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref,
+ uint32_t events, const struct timespec *now)
+{
+ flow_sidx_t tosidx = flow_sidx_opposite(ref.flowside);
+ const struct flowside *toside = flowside_at_sidx(tosidx);
+ struct vu_dev *vdev = c->vdev;
+ struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+ struct udp_flow *uflow = udp_at_sidx(ref.flowside);
+ uint8_t topif = pif_at_sidx(tosidx);
+ bool v6 = ref.udp.v6;
+ int i;
+
+ ASSERT(!c->no_udp);
+ ASSERT(uflow);
+
+ for (i = 0; i < UDP_MAX_FRAMES; i++) {
+ union sockaddr_inany s_in;
+ ssize_t data_len;
+ int iov_used;
+
+ iov_used = udp_vu_sock_recv(c, &s_in, ref.fd,
+ events, v6, &data_len);
+ if (iov_used <= 0)
+ return;
+ flow_trace(uflow, "Received 1 datagram on reply socket");
+ uflow->ts = now->tv_sec;
+
+ if (topif == PIF_TAP) {
+ size_t l4len;
+
+ l4len = udp_vu_prepare(c, toside, data_len);
+ udp_vu_pcap(c, toside, l4len, iov_used);
+ vu_send_frame(vdev, vq, elem, iov_vu, iov_used);
+ } else {
+ uint8_t frompif = pif_at_sidx(ref.flowside);
+
+ flow_err(uflow,
+ "No support for forwarding UDP from %s to %s",
+ pif_name(frompif), pif_name(topif));
+ }
+ }
+}
diff --git a/udp_vu.h b/udp_vu.h
new file mode 100644
index 000000000000..ba7018d3bf01
--- /dev/null
+++ b/udp_vu.h
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ */
+
+#ifndef UDP_VU_H
+#define UDP_VU_H
+
+void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref,
+ uint32_t events, const struct timespec *now);
+void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref,
+ uint32_t events, const struct timespec *now);
+#endif /* UDP_VU_H */
diff --git a/vhost_user.c b/vhost_user.c
index 6008a8adc967..7df503e3d8b1 100644
--- a/vhost_user.c
+++ b/vhost_user.c
@@ -52,7 +52,6 @@
* this is part of the vhost-user backend
* convention.
*/
-/* cppcheck-suppress unusedFunction */
void vu_print_capabilities(void)
{
info("{");
@@ -162,9 +161,7 @@ static void vmsg_close_fds(const struct vhost_user_msg *vmsg)
*/
static void vu_remove_watch(const struct vu_dev *vdev, int fd)
{
- /* Placeholder to add passt related code */
- (void)vdev;
- (void)fd;
+ epoll_ctl(vdev->context->epollfd, EPOLL_CTL_DEL, fd, NULL);
}
/**
@@ -428,7 +425,6 @@ static bool map_ring(struct vu_dev *vdev, struct vu_virtq *vq)
*
* Return: 0 if the zone is in a mapped memory region, -1 otherwise
*/
-/* cppcheck-suppress unusedFunction */
int vu_packet_check_range(void *buf, size_t offset, size_t len,
const char *start)
{
@@ -518,6 +514,14 @@ static bool vu_set_mem_table_exec(struct vu_dev *vdev,
}
}
+ /* As vu_packet_check_range() has no access to the number of
+ * memory regions, mark the end of the array with mmap_addr = 0
+ */
+ ASSERT(vdev->nregions < VHOST_USER_MAX_RAM_SLOTS - 1);
+ vdev->regions[vdev->nregions].mmap_addr = 0;
+
+ tap_sock_update_buf(vdev->regions, 0);
+
return false;
}
@@ -646,9 +650,12 @@ static bool vu_get_vring_base_exec(struct vu_dev *vdev,
*/
static void vu_set_watch(const struct vu_dev *vdev, int fd)
{
- /* Placeholder to add passt related code */
- (void)vdev;
- (void)fd;
+ union epoll_ref ref = { .type = EPOLL_TYPE_VHOST_KICK, .fd = fd };
+ struct epoll_event ev = { 0 };
+
+ ev.data.u64 = ref.u64;
+ ev.events = EPOLLIN;
+ epoll_ctl(vdev->context->epollfd, EPOLL_CTL_ADD, fd, &ev);
}
/**
@@ -688,7 +695,6 @@ static int vu_wait_queue(const struct vu_virtq *vq)
*
* Return: number of bytes sent, -1 if there is an error
*/
-/* cppcheck-suppress unusedFunction */
int vu_send(struct vu_dev *vdev, const void *buf, size_t size)
{
struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
@@ -876,7 +882,6 @@ static void vu_handle_tx(struct vu_dev *vdev, int index,
* @ref: epoll reference information
* @now: Current timestamp
*/
-/* cppcheck-suppress unusedFunction */
void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
const struct timespec *now)
{
@@ -1122,11 +1127,11 @@ static bool vu_set_vring_enable_exec(struct vu_dev *vdev,
* @c: execution context
* @vdev: vhost-user device
*/
-/* cppcheck-suppress unusedFunction */
void vu_init(struct ctx *c, struct vu_dev *vdev)
{
int i;
+ c->vdev = vdev;
vdev->context = c;
vdev->hdrlen = 0;
for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
@@ -1143,7 +1148,6 @@ void vu_init(struct ctx *c, struct vu_dev *vdev)
* vu_cleanup() - Reset vhost-user device
* @vdev: vhost-user device
*/
-/* cppcheck-suppress unusedFunction */
void vu_cleanup(struct vu_dev *vdev)
{
unsigned int i;
@@ -1191,8 +1195,7 @@ void vu_cleanup(struct vu_dev *vdev)
*/
static void vu_sock_reset(struct vu_dev *vdev)
{
- /* Placeholder to add passt related code */
- (void)vdev;
+ tap_sock_reset(vdev->context);
}
static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev,
@@ -1220,7 +1223,6 @@ static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev,
* @fd: vhost-user message socket
* @events: epoll events
*/
-/* cppcheck-suppress unusedFunction */
void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events)
{
struct vhost_user_msg msg = { 0 };
diff --git a/virtio.c b/virtio.c
index 237395396606..31e56def2c23 100644
--- a/virtio.c
+++ b/virtio.c
@@ -562,7 +562,6 @@ void vu_queue_unpop(struct vu_virtq *vq)
* @vq: Virtqueue
* @num: Number of element to unpop
*/
-/* cppcheck-suppress unusedFunction */
bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num)
{
if (num > vq->inuse)
diff --git a/vu_common.c b/vu_common.c
new file mode 100644
index 000000000000..5b469da9731f
--- /dev/null
+++ b/vu_common.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ *
+ * vu_common.c - vhost-user common UDP and TCP functions
+ */
+
+#include <unistd.h>
+#include <sys/uio.h>
+
+#include "util.h"
+#include "passt.h"
+#include "vhost_user.h"
+#include "vu_common.h"
+
+/**
+ * vu_send_frame() - Send one frame to the vhost-user interface
+ * @vdev: vhost-user device
+ * @vq: vhost-user virtqueue
+ * @elem: virtqueue element array to send back to the virtqueue
+ * @iov_vu: iovec array containing the data to send
+ * @iov_used: Length of the array
+ */
+void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq,
+ struct vu_virtq_element *elem, const struct iovec *iov_vu,
+ int iov_used)
+{
+ int i;
+
+ for (i = 0; i < iov_used; i++)
+ vu_queue_fill(vq, &elem[i], iov_vu[i].iov_len, i);
+
+ vu_queue_flush(vq, iov_used);
+ vu_queue_notify(vdev, vq);
+}
diff --git a/vu_common.h b/vu_common.h
new file mode 100644
index 000000000000..d2ea46bd379b
--- /dev/null
+++ b/vu_common.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ *
+ * vhost-user common UDP and TCP functions
+ */
+
+#ifndef VU_COMMON_H
+#define VU_COMMON_H
+
+static inline void *vu_eth(const struct vu_dev *vdev, void *base)
+{
+ return ((char *)base + vdev->hdrlen);
+}
+
+static inline void *vu_ip(const struct vu_dev *vdev, void *base)
+{
+ return (struct ethhdr *)vu_eth(vdev, base) + 1;
+}
+
+static inline void *vu_payloadv4(const struct vu_dev *vdev, void *base)
+{
+ return (struct iphdr *)vu_ip(vdev, base) + 1;
+}
+
+static inline void *vu_payloadv6(const struct vu_dev *vdev, void *base)
+{
+ return (struct ipv6hdr *)vu_ip(vdev, base) + 1;
+}
+
+void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq,
+ struct vu_virtq_element *elem, const struct iovec *iov_vu,
+ int iov_used);
+#endif /* VU_COMMON_H */
--
2.46.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v4 2/4] vhost-user: introduce virtio API
2024-09-06 16:04 ` [PATCH v4 2/4] vhost-user: introduce virtio API Laurent Vivier
@ 2024-09-10 15:47 ` Stefano Brivio
2024-09-12 11:23 ` Laurent Vivier
0 siblings, 1 reply; 15+ messages in thread
From: Stefano Brivio @ 2024-09-10 15:47 UTC (permalink / raw)
To: Laurent Vivier; +Cc: passt-dev
Just one comment here:
On Fri, 6 Sep 2024 18:04:47 +0200
Laurent Vivier <lvivier@redhat.com> wrote:
> Add virtio.c and virtio.h that define the functions needed
> to manage virtqueues.
>
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
> ---
> Makefile | 4 +-
> util.h | 8 +
> virtio.c | 665 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
> virtio.h | 185 ++++++++++++++++
> 4 files changed, 860 insertions(+), 2 deletions(-)
> create mode 100644 virtio.c
> create mode 100644 virtio.h
>
> diff --git a/Makefile b/Makefile
> index 01fada45adc7..e9a154bdd718 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -47,7 +47,7 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS)
> PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \
> icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \
> ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \
> - tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c
> + tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c virtio.c
> QRAP_SRCS = qrap.c
> SRCS = $(PASST_SRCS) $(QRAP_SRCS)
>
> @@ -57,7 +57,7 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \
> flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \
> lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \
> siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \
> - udp.h udp_flow.h util.h
> + udp.h udp_flow.h util.h virtio.h
> HEADERS = $(PASST_HEADERS) seccomp.h
>
> C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 };
> diff --git a/util.h b/util.h
> index 1463c92153d5..0960903ccaec 100644
> --- a/util.h
> +++ b/util.h
> @@ -134,6 +134,14 @@ static inline uint32_t ntohl_unaligned(const void *p)
> return ntohl(val);
> }
>
> +static inline void barrier(void) { __asm__ __volatile__("" ::: "memory"); }
> +#define smp_mb() do { barrier(); __atomic_thread_fence(__ATOMIC_SEQ_CST); } while (0)
> +#define smp_mb_release() do { barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); } while (0)
> +#define smp_mb_acquire() do { barrier(); __atomic_thread_fence(__ATOMIC_ACQUIRE); } while (0)
> +
> +#define smp_wmb() smp_mb_release()
> +#define smp_rmb() smp_mb_acquire()
> +
> #define NS_FN_STACK_SIZE (RLIMIT_STACK_VAL * 1024 / 8)
> int do_clone(int (*fn)(void *), char *stack_area, size_t stack_size, int flags,
> void *arg);
> diff --git a/virtio.c b/virtio.c
> new file mode 100644
> index 000000000000..380590afbca3
> --- /dev/null
> +++ b/virtio.c
> @@ -0,0 +1,665 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later AND BSD-3-Clause
> +/*
> + * virtio API, vring and virtqueue functions definition
> + *
> + * Copyright Red Hat
> + * Author: Laurent Vivier <lvivier@redhat.com>
> + */
> +
> +/* Some parts copied from QEMU subprojects/libvhost-user/libvhost-user.c
> + * originally licensed under the following terms:
> + *
> + * --
> + *
> + * Copyright IBM, Corp. 2007
> + * Copyright (c) 2016 Red Hat, Inc.
> + *
> + * Authors:
> + * Anthony Liguori <aliguori@us.ibm.com>
> + * Marc-André Lureau <mlureau@redhat.com>
> + * Victor Kaplansky <victork@redhat.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * later. See the COPYING file in the top-level directory.
> + *
> + * Some parts copied from QEMU hw/virtio/virtio.c
> + * licensed under the following terms:
> + *
> + * Copyright IBM, Corp. 2007
> + *
> + * Authors:
> + * Anthony Liguori <aliguori@us.ibm.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2. See
> + * the COPYING file in the top-level directory.
> + *
> + * --
> + *
> + * virtq_used_event() and virtq_avail_event() from
> + * https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html#x1-712000A
> + * licensed under the following terms:
> + *
> + * --
> + *
> + * This header is BSD licensed so anyone can use the definitions
> + * to implement compatible drivers/servers.
> + *
> + * Copyright 2007, 2009, IBM Corporation
> + * Copyright 2011, Red Hat, Inc
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in the
> + * documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of IBM nor the names of its contributors
> + * may be used to endorse or promote products derived from this software
> + * without specific prior written permission.
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ‘‘AS IS’’ AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED. IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + */
> +
> +#include <stddef.h>
> +#include <endian.h>
> +#include <string.h>
> +#include <errno.h>
> +#include <sys/eventfd.h>
> +#include <sys/socket.h>
> +
> +#include "util.h"
> +#include "virtio.h"
> +
> +#define VIRTQUEUE_MAX_SIZE 1024
> +
> +/**
> + * vu_gpa_to_va() - Translate guest physical address to our virtual address.
> + * @dev: Vhost-user device
> + * @plen: Physical length to map (input), capped to region (output)
> + * @guest_addr: Guest physical address
> + *
> + * Return: virtual address in our address space of the guest physical address
> + */
> +static void *vu_gpa_to_va(struct vu_dev *dev, uint64_t *plen, uint64_t guest_addr)
> +{
> + unsigned int i;
> +
> + if (*plen == 0)
> + return NULL;
> +
> + /* Find matching memory region. */
> + for (i = 0; i < dev->nregions; i++) {
> + const struct vu_dev_region *r = &dev->regions[i];
> +
> + if ((guest_addr >= r->gpa) &&
> + (guest_addr < (r->gpa + r->size))) {
> + if ((guest_addr + *plen) > (r->gpa + r->size))
> + *plen = r->gpa + r->size - guest_addr;
> + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
> + return (void *)(guest_addr - r->gpa + r->mmap_addr +
> + r->mmap_offset);
> + }
> + }
> +
> + return NULL;
> +}
> +
> +/**
> + * vring_avail_flags() - Read the available ring flags
> + * @vq: Virtqueue
> + *
> + * Return: the available ring descriptor flags of the given virtqueue
> + */
> +static inline uint16_t vring_avail_flags(const struct vu_virtq *vq)
> +{
> + return le16toh(vq->vring.avail->flags);
> +}
> +
> +/**
> + * vring_avail_idx() - Read the available ring index
> + * @vq: Virtqueue
> + *
> + * Return: the available ring index of the given virtqueue
> + */
> +static inline uint16_t vring_avail_idx(struct vu_virtq *vq)
> +{
> + vq->shadow_avail_idx = le16toh(vq->vring.avail->idx);
> +
> + return vq->shadow_avail_idx;
> +}
> +
> +/**
> + * vring_avail_ring() - Read an available ring entry
> + * @vq: Virtqueue
> + * @i: Index of the entry to read
> + *
> + * Return: the ring entry content (head of the descriptor chain)
> + */
> +static inline uint16_t vring_avail_ring(const struct vu_virtq *vq, int i)
> +{
> + return le16toh(vq->vring.avail->ring[i]);
> +}
> +
> +/**
> + * virtq_used_event - Get location of used event indices
> + * (only with VIRTIO_F_EVENT_IDX)
> + * @vq Virtqueue
> + *
> + * Return: return the location of the used event index
> + */
> +static inline uint16_t *virtq_used_event(const struct vu_virtq *vq)
> +{
> + /* For backwards compat, used event index is at *end* of avail ring. */
> + return &vq->vring.avail->ring[vq->vring.num];
> +}
> +
> +/**
> + * vring_get_used_event() - Get the used event from the available ring
> + * @vq Virtqueue
> + *
> + * Return: the used event (available only if VIRTIO_RING_F_EVENT_IDX is set)
> + * used_event is a performant alternative where the driver
> + * specifies how far the device can progress before a notification
> + * is required.
> + */
> +static inline uint16_t vring_get_used_event(const struct vu_virtq *vq)
> +{
> + return le16toh(*virtq_used_event(vq));
> +}
> +
> +/**
> + * virtqueue_get_head() - Get the head of the descriptor chain for a given
> + * index
> + * @vq: Virtqueue
> + * @idx: Available ring entry index
> + * @head: Head of the descriptor chain
> + */
> +static void virtqueue_get_head(const struct vu_virtq *vq,
> + unsigned int idx, unsigned int *head)
> +{
> + /* Grab the next descriptor number they're advertising, and increment
> + * the index we've seen.
> + */
> + *head = vring_avail_ring(vq, idx % vq->vring.num);
> +
> + /* If their number is silly, that's a fatal mistake. */
> + if (*head >= vq->vring.num)
> + die("vhost-user: Guest says index %u is available", *head);
> +}
> +
> +/**
> + * virtqueue_read_indirect_desc() - Copy virtio ring descriptors from guest
> + * memory
> + * @dev: Vhost-user device
> + * @desc: Destination address to copy the descriptors to
> + * @addr: Guest memory address to copy from
> + * @len: Length of memory to copy
> + *
> + * Return: -1 if there is an error, 0 otherwise
> + */
> +static int virtqueue_read_indirect_desc(struct vu_dev *dev, struct vring_desc *desc,
> + uint64_t addr, size_t len)
> +{
> + uint64_t read_len;
> +
> + if (len > (VIRTQUEUE_MAX_SIZE * sizeof(struct vring_desc)))
> + return -1;
> +
> + if (len == 0)
> + return -1;
> +
> + while (len) {
> + const struct vring_desc *orig_desc;
> +
> + read_len = len;
> + orig_desc = vu_gpa_to_va(dev, &read_len, addr);
In case you missed this in my review of v3 (I'm not sure if it's a
valid concern):
--
Should we also return if read_len < sizeof(struct vring_desc) after
this call? Can that ever happen, if we pick a particular value of addr
so that it's almost at the end of a region?
--
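[Editorial note: a minimal hypothetical sketch of the guard being discussed,
not part of the posted series. vu_gpa_to_va() caps read_len to what is left
of the matching memory region, so a chain entry starting less than
sizeof(struct vring_desc) bytes before the end of a region would otherwise
be copied truncated:

	read_len = len;
	orig_desc = vu_gpa_to_va(dev, &read_len, addr);
	if (!orig_desc)	/* NULL check assumed to already exist in the loop */
		return -1;
	/* hypothetical extra guard: reject a truncated descriptor when
	 * read_len was capped at a region boundary
	 */
	if (read_len < sizeof(struct vring_desc))
		return -1;
]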
--
Stefano
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v4 3/4] vhost-user: introduce vhost-user API
2024-09-06 16:04 ` [PATCH v4 3/4] vhost-user: introduce vhost-user API Laurent Vivier
@ 2024-09-10 15:47 ` Stefano Brivio
2024-09-12 12:41 ` Laurent Vivier
0 siblings, 1 reply; 15+ messages in thread
From: Stefano Brivio @ 2024-09-10 15:47 UTC (permalink / raw)
To: Laurent Vivier; +Cc: passt-dev
Nits and a couple of questions only:
On Fri, 6 Sep 2024 18:04:48 +0200
Laurent Vivier <lvivier@redhat.com> wrote:
> Add vhost_user.c and vhost_user.h that define the functions needed
> to implement vhost-user backend.
>
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
> ---
> Makefile | 4 +-
> iov.c | 1 -
> vhost_user.c | 1265 ++++++++++++++++++++++++++++++++++++++++++++++++++
> vhost_user.h | 203 ++++++++
> virtio.c | 5 -
> virtio.h | 2 +-
> 6 files changed, 1471 insertions(+), 9 deletions(-)
> create mode 100644 vhost_user.c
> create mode 100644 vhost_user.h
>
> diff --git a/Makefile b/Makefile
> index e9a154bdd718..01e95ac1b62c 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -47,7 +47,7 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS)
> PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \
> icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \
> ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \
> - tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c virtio.c
> + tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c vhost_user.c virtio.c
> QRAP_SRCS = qrap.c
> SRCS = $(PASST_SRCS) $(QRAP_SRCS)
>
> @@ -57,7 +57,7 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \
> flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \
> lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \
> siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \
> - udp.h udp_flow.h util.h virtio.h
> + udp.h udp_flow.h util.h vhost_user.h virtio.h
> HEADERS = $(PASST_HEADERS) seccomp.h
>
> C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 };
> diff --git a/iov.c b/iov.c
> index 3f9e229a305f..3741db21790f 100644
> --- a/iov.c
> +++ b/iov.c
> @@ -68,7 +68,6 @@ size_t iov_skip_bytes(const struct iovec *iov, size_t n,
> *
> * Returns: The number of bytes successfully copied.
> */
> -/* cppcheck-suppress unusedFunction */
> size_t iov_from_buf(const struct iovec *iov, size_t iov_cnt,
> size_t offset, const void *buf, size_t bytes)
> {
> diff --git a/vhost_user.c b/vhost_user.c
> new file mode 100644
> index 000000000000..6008a8adc967
> --- /dev/null
> +++ b/vhost_user.c
> @@ -0,0 +1,1265 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * vhost-user API, command management and virtio interface
> + *
> + * Copyright Red Hat
> + * Author: Laurent Vivier <lvivier@redhat.com>
> + */
> +/* some parts from QEMU subprojects/libvhost-user/libvhost-user.c
s/some/Some/, no need to split comments (just leave one extra line...).
> + * licensed under the following terms:
> + *
> + * Copyright IBM, Corp. 2007
> + * Copyright (c) 2016 Red Hat, Inc.
> + *
> + * Authors:
> + * Anthony Liguori <aliguori@us.ibm.com>
> + * Marc-André Lureau <mlureau@redhat.com>
> + * Victor Kaplansky <victork@redhat.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * later. See the COPYING file in the top-level directory.
> + */
> +
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <stdlib.h>
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <stddef.h>
> +#include <string.h>
> +#include <assert.h>
> +#include <stdbool.h>
> +#include <inttypes.h>
> +#include <time.h>
> +#include <net/ethernet.h>
> +#include <netinet/in.h>
> +#include <sys/epoll.h>
> +#include <sys/eventfd.h>
> +#include <sys/mman.h>
> +#include <linux/vhost_types.h>
> +#include <linux/virtio_net.h>
> +
> +#include "util.h"
> +#include "passt.h"
> +#include "tap.h"
> +#include "vhost_user.h"
> +
> +/* vhost-user version we are compatible with */
> +#define VHOST_USER_VERSION 1
> +
> +/**
> + * vu_print_capabilities() - print vhost-user capabilities
> + * this is part of the vhost-user backend
> + * convention.
> + */
> +/* cppcheck-suppress unusedFunction */
> +void vu_print_capabilities(void)
> +{
> + info("{");
> + info(" \"type\": \"net\"");
> + info("}");
> + exit(EXIT_SUCCESS);
> +}
> +
> +/**
> + * vu_request_to_string() - convert a vhost-user request number to its name
> + * @req: request number
> + *
> + * Return: the name of request number
> + */
> +static const char *vu_request_to_string(unsigned int req)
> +{
> + if (req < VHOST_USER_MAX) {
> +#define REQ(req) [req] = #req
> + static const char * const vu_request_str[VHOST_USER_MAX] = {
> + REQ(VHOST_USER_NONE),
> + REQ(VHOST_USER_GET_FEATURES),
> + REQ(VHOST_USER_SET_FEATURES),
> + REQ(VHOST_USER_SET_OWNER),
> + REQ(VHOST_USER_RESET_OWNER),
> + REQ(VHOST_USER_SET_MEM_TABLE),
> + REQ(VHOST_USER_SET_LOG_BASE),
> + REQ(VHOST_USER_SET_LOG_FD),
> + REQ(VHOST_USER_SET_VRING_NUM),
> + REQ(VHOST_USER_SET_VRING_ADDR),
> + REQ(VHOST_USER_SET_VRING_BASE),
> + REQ(VHOST_USER_GET_VRING_BASE),
> + REQ(VHOST_USER_SET_VRING_KICK),
> + REQ(VHOST_USER_SET_VRING_CALL),
> + REQ(VHOST_USER_SET_VRING_ERR),
> + REQ(VHOST_USER_GET_PROTOCOL_FEATURES),
> + REQ(VHOST_USER_SET_PROTOCOL_FEATURES),
> + REQ(VHOST_USER_GET_QUEUE_NUM),
> + REQ(VHOST_USER_SET_VRING_ENABLE),
> + REQ(VHOST_USER_SEND_RARP),
> + REQ(VHOST_USER_NET_SET_MTU),
> + REQ(VHOST_USER_SET_BACKEND_REQ_FD),
> + REQ(VHOST_USER_IOTLB_MSG),
> + REQ(VHOST_USER_SET_VRING_ENDIAN),
> + REQ(VHOST_USER_GET_CONFIG),
> + REQ(VHOST_USER_SET_CONFIG),
> + REQ(VHOST_USER_POSTCOPY_ADVISE),
> + REQ(VHOST_USER_POSTCOPY_LISTEN),
> + REQ(VHOST_USER_POSTCOPY_END),
> + REQ(VHOST_USER_GET_INFLIGHT_FD),
> + REQ(VHOST_USER_SET_INFLIGHT_FD),
> + REQ(VHOST_USER_GPU_SET_SOCKET),
> + REQ(VHOST_USER_VRING_KICK),
> + REQ(VHOST_USER_GET_MAX_MEM_SLOTS),
> + REQ(VHOST_USER_ADD_MEM_REG),
> + REQ(VHOST_USER_REM_MEM_REG),
> + };
> +#undef REQ
> + return vu_request_str[req];
> + }
> +
> + return "unknown";
> +}
> +
> +/**
> + * qva_to_va() - Translate front-end (QEMU) virtual address to our virtual
> + * address
> + * @dev: vhost-user device
> + * @qemu_addr: front-end userspace address
> + *
> + * Return: the memory address in our process virtual address space.
> + */
> +static void *qva_to_va(struct vu_dev *dev, uint64_t qemu_addr)
> +{
> + unsigned int i;
> +
> + /* Find matching memory region. */
> + for (i = 0; i < dev->nregions; i++) {
> + const struct vu_dev_region *r = &dev->regions[i];
> +
> + if ((qemu_addr >= r->qva) && (qemu_addr < (r->qva + r->size))) {
> + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
> + return (void *)(qemu_addr - r->qva + r->mmap_addr +
> + r->mmap_offset);
> + }
> + }
> +
> + return NULL;
> +}
> +
> +/**
> + * vmsg_close_fds() - Close all file descriptors of a given message
> + * @vmsg: vhost-user message with the list of the file descriptors
> + */
> +static void vmsg_close_fds(const struct vhost_user_msg *vmsg)
> +{
> + int i;
> +
> + for (i = 0; i < vmsg->fd_num; i++)
> + close(vmsg->fds[i]);
> +}
> +
> +/**
> + * vu_remove_watch() - Remove a file descriptor from our passt epoll
> + * file descriptor
> + * @vdev: vhost-user device
> + * @fd: file descriptor to remove
> + */
> +static void vu_remove_watch(const struct vu_dev *vdev, int fd)
> +{
> + /* Placeholder to add passt related code */
> + (void)vdev;
> + (void)fd;
> +}
> +
> +/**
> + * vmsg_set_reply_u64() - Set reply payload.u64 and clear request flags
> + * and fd_num
> + * @vmsg: vhost-user message
> + * @val: 64-bit value to reply
> + */
> +static void vmsg_set_reply_u64(struct vhost_user_msg *vmsg, uint64_t val)
> +{
> + vmsg->hdr.flags = 0; /* defaults will be set by vu_send_reply() */
> + vmsg->hdr.size = sizeof(vmsg->payload.u64);
> + vmsg->payload.u64 = val;
> + vmsg->fd_num = 0;
> +}
> +
> +/**
> + * vu_message_read_default() - Read incoming vhost-user message from the
> + * front-end
> + * @conn_fd: vhost-user command socket
> + * @vmsg: vhost-user message
> + *
> + * Return: -1 if there is an error,
It doesn't return on error anymore.
> + * 0 if recvmsg() has been interrupted or if there's no data to read,
> + * 1 if a message has been received
> + */
> +static int vu_message_read_default(int conn_fd, struct vhost_user_msg *vmsg)
> +{
> + char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS *
> + sizeof(int))] = { 0 };
> + struct iovec iov = {
> + .iov_base = (char *)vmsg,
> + .iov_len = VHOST_USER_HDR_SIZE,
> + };
> + struct msghdr msg = {
> + .msg_iov = &iov,
> + .msg_iovlen = 1,
> + .msg_control = control,
> + .msg_controllen = sizeof(control),
> + };
> + ssize_t ret, sz_payload;
> + struct cmsghdr *cmsg;
> +
> + ret = recvmsg(conn_fd, &msg, MSG_DONTWAIT);
> + if (ret < 0) {
> + if (errno == EINTR || errno == EAGAIN || errno == EWOULDBLOCK)
> + return 0;
> + die_perror("vhost-user message receive (recvmsg)");
> + }
> +
> + vmsg->fd_num = 0;
> + for (cmsg = CMSG_FIRSTHDR(&msg); cmsg != NULL;
> + cmsg = CMSG_NXTHDR(&msg, cmsg)) {
> + if (cmsg->cmsg_level == SOL_SOCKET &&
> + cmsg->cmsg_type == SCM_RIGHTS) {
> + size_t fd_size;
> +
> + ASSERT(cmsg->cmsg_len >= CMSG_LEN(0));
> + fd_size = cmsg->cmsg_len - CMSG_LEN(0);
> + ASSERT(fd_size <= sizeof(vmsg->fds));
> + vmsg->fd_num = fd_size / sizeof(int);
> + memcpy(vmsg->fds, CMSG_DATA(cmsg), fd_size);
> + break;
> + }
> + }
> +
> + sz_payload = vmsg->hdr.size;
> + if ((size_t)sz_payload > sizeof(vmsg->payload)) {
> + die("vhost-user message request too big: %d,"
> + " size: vmsg->size: %zd, "
> + "while sizeof(vmsg->payload) = %zu",
> + vmsg->hdr.request, sz_payload, sizeof(vmsg->payload));
> + }
> +
> + if (sz_payload) {
> + do
> + ret = recv(conn_fd, &vmsg->payload, sz_payload, 0);
> + while (ret < 0 && (errno == EINTR || errno == EAGAIN));
Perhaps you missed this from my comment to v3: the socket is blocking,
so checking for EAGAIN shouldn't be needed.
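That is, if I'm not missing anything, something like this should be enough
(untested sketch, same variables as in the code above):

do
        ret = recv(conn_fd, &vmsg->payload, sz_payload, 0);
while (ret < 0 && errno == EINTR);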
> + if (ret < 0)
> + die_perror("vhost-user message receive");
> +
> + if (ret < sz_payload)
> + die("EOF on vhost-user message receive");
I guess you want to terminate on a short read (which, as far as I
understand, you never expect as normal behaviour), but if ret > 0, can
you still call it EOF?
Perhaps we should distinguish the two cases here, ret == 0 (EOF) and
ret < sz_payload (short read).
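Something along these lines, maybe (again just a sketch, untested, only to
show what I mean):

if (ret == 0)
        die("EOF on vhost-user message receive");
if (ret < sz_payload)
        die("Short read on vhost-user message receive: %zd of %zd bytes",
            ret, sz_payload);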
> + }
> +
> + return 1;
> +}
> +
> +/**
> + * vu_message_write() - Send a message to the front-end
> + * @conn_fd: vhost-user command socket
> + * @vmsg: vhost-user message
> + *
> + * #syscalls:vu sendmsg
> + */
> +static void vu_message_write(int conn_fd, struct vhost_user_msg *vmsg)
> +{
> + char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * sizeof(int))] = { 0 };
> + struct iovec iov = {
> + .iov_base = (char *)vmsg,
> + .iov_len = VHOST_USER_HDR_SIZE + vmsg->hdr.size,
> + };
> + struct msghdr msg = {
> + .msg_iov = &iov,
> + .msg_iovlen = 1,
> + .msg_control = control,
> + };
> + int rc;
> +
> + ASSERT(vmsg->fd_num <= VHOST_MEMORY_BASELINE_NREGIONS);
> + if (vmsg->fd_num > 0) {
> + size_t fdsize = vmsg->fd_num * sizeof(int);
> + struct cmsghdr *cmsg;
> +
> + msg.msg_controllen = CMSG_SPACE(fdsize);
> + cmsg = CMSG_FIRSTHDR(&msg);
> + cmsg->cmsg_len = CMSG_LEN(fdsize);
> + cmsg->cmsg_level = SOL_SOCKET;
> + cmsg->cmsg_type = SCM_RIGHTS;
> + memcpy(CMSG_DATA(cmsg), vmsg->fds, fdsize);
> + }
> +
> + do
> + rc = sendmsg(conn_fd, &msg, 0);
> + while (rc < 0 && (errno == EINTR || errno == EAGAIN));
Same as above with EAGAIN.
> +
> + if (rc < 0)
> + die_perror("vhost-user message send");
> +
> + if ((uint32_t)rc < VHOST_USER_HDR_SIZE + vmsg->hdr.size)
> + die("EOF on vhost-user message send");
> +}
> +
> +/**
> + * vu_send_reply() - Update message flags and send it to front-end
> + * @conn_fd: vhost-user command socket
> + * @vmsg: vhost-user message
> + */
> +static void vu_send_reply(int conn_fd, struct vhost_user_msg *msg)
> +{
> + msg->hdr.flags &= ~VHOST_USER_VERSION_MASK;
> + msg->hdr.flags |= VHOST_USER_VERSION;
> + msg->hdr.flags |= VHOST_USER_REPLY_MASK;
> +
> + vu_message_write(conn_fd, msg);
> +}
> +
> +/**
> + * vu_get_features_exec() - Provide back-end features bitmask to front-end
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: True as a reply is requested
> + */
> +static bool vu_get_features_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + uint64_t features =
> + 1ULL << VIRTIO_F_VERSION_1 |
> + 1ULL << VIRTIO_NET_F_MRG_RXBUF |
> + 1ULL << VHOST_USER_F_PROTOCOL_FEATURES;
> +
> + (void)vdev;
> +
> + vmsg_set_reply_u64(msg, features);
> +
> + debug("Sending back to guest u64: 0x%016"PRIx64, msg->payload.u64);
> +
> + return true;
> +}
> +
> +/**
> + * vu_set_enable_all_rings() - Enable/disable all the virtqueues
> + * @vdev: vhost-user device
> + * @enable: New virtqueues state
> + */
> +static void vu_set_enable_all_rings(struct vu_dev *vdev, bool enable)
> +{
> + uint16_t i;
> +
> + for (i = 0; i < VHOST_USER_MAX_QUEUES; i++)
> + vdev->vq[i].enable = enable;
> +}
> +
> +/**
> + * vu_set_features_exec() - Enable features of the back-end
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_features_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + debug("u64: 0x%016"PRIx64, msg->payload.u64);
> +
> + vdev->features = msg->payload.u64;
> + /* We only support devices conforming to VIRTIO 1.0 or
> + * later
> + */
> + if (!vu_has_feature(vdev, VIRTIO_F_VERSION_1))
> + die("virtio legacy devices aren't supported by passt");
> +
> + if (!vu_has_feature(vdev, VHOST_USER_F_PROTOCOL_FEATURES))
> + vu_set_enable_all_rings(vdev, true);
> +
> + /* virtio-net features */
> +
> + /* VIRTIO_F_VERSION_1 always uses struct virtio_net_hdr_mrg_rxbuf */
> + vdev->hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
> +
> + return false;
> +}
> +
> +/**
> + * vu_set_owner_exec() - Session start flag, do nothing in our case
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_owner_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + (void)vdev;
> + (void)msg;
> +
> + return false;
> +}
> +
> +/**
> + * map_ring() - Convert ring front-end (QEMU) addresses to our process
> + * virtual address space.
> + * @vdev: vhost-user device
> + * @vq: Virtqueue
> + *
> + * Return: True if ring cannot be mapped to our address space
> + */
> +static bool map_ring(struct vu_dev *vdev, struct vu_virtq *vq)
> +{
> + vq->vring.desc = qva_to_va(vdev, vq->vra.desc_user_addr);
> + vq->vring.used = qva_to_va(vdev, vq->vra.used_user_addr);
> + vq->vring.avail = qva_to_va(vdev, vq->vra.avail_user_addr);
> +
> + debug("Setting virtq addresses:");
> + debug(" vring_desc at %p", (void *)vq->vring.desc);
> + debug(" vring_used at %p", (void *)vq->vring.used);
> + debug(" vring_avail at %p", (void *)vq->vring.avail);
> +
> + return !(vq->vring.desc && vq->vring.used && vq->vring.avail);
> +}
> +
> +/**
> + * vu_packet_check_range() - Check if a given memory zone is contained in
> + * a mapped guest memory region
> + * @buf: Array of the available memory regions
> + * @offset: Offset of data range in packet descriptor
> + * @len: Length of desired data range
> + * @start: Start of the packet descriptor
> + *
> + * Return: 0 if the zone is in a mapped memory region, -1 otherwise
> + */
> +/* cppcheck-suppress unusedFunction */
> +int vu_packet_check_range(void *buf, size_t offset, size_t len,
> + const char *start)
> +{
> + struct vu_dev_region *dev_region;
> +
> + for (dev_region = buf; dev_region->mmap_addr; dev_region++) {
> + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
> + char *m = (char *)dev_region->mmap_addr;
> +
> + if (m <= start &&
> + start + offset + len <= m + dev_region->mmap_offset +
> + dev_region->size)
> + return 0;
> + }
> +
> + return -1;
> +}
> +
> +/**
> + * vu_set_mem_table_exec() - Sets the memory map regions to be able to
> + * translate the vring addresses.
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + *
> + * #syscalls:vu mmap munmap
> + */
> +static bool vu_set_mem_table_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + struct vhost_user_memory m = msg->payload.memory, *memory = &m;
> + unsigned int i;
> +
> + for (i = 0; i < vdev->nregions; i++) {
> + struct vu_dev_region *r = &vdev->regions[i];
> + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
> + void *mm = (void *)r->mmap_addr;
> +
> + if (mm)
> + munmap(mm, r->size + r->mmap_offset);
> + }
> + vdev->nregions = memory->nregions;
> +
> + debug("vhost-user nregions: %u", memory->nregions);
> + for (i = 0; i < vdev->nregions; i++) {
> + struct vhost_user_memory_region *msg_region = &memory->regions[i];
> + struct vu_dev_region *dev_region = &vdev->regions[i];
> + void *mmap_addr;
> +
> + debug("vhost-user region %d", i);
> + debug(" guest_phys_addr: 0x%016"PRIx64,
> + msg_region->guest_phys_addr);
> + debug(" memory_size: 0x%016"PRIx64,
> + msg_region->memory_size);
> + debug(" userspace_addr 0x%016"PRIx64,
> + msg_region->userspace_addr);
> + debug(" mmap_offset 0x%016"PRIx64,
> + msg_region->mmap_offset);
> +
> + dev_region->gpa = msg_region->guest_phys_addr;
> + dev_region->size = msg_region->memory_size;
> + dev_region->qva = msg_region->userspace_addr;
> + dev_region->mmap_offset = msg_region->mmap_offset;
> +
> + /* We don't use offset argument of mmap() since the
> + * mapped address has to be page aligned.
> + */
> + mmap_addr = mmap(0, dev_region->size + dev_region->mmap_offset,
> + PROT_READ | PROT_WRITE, MAP_SHARED |
> + MAP_NORESERVE, msg->fds[i], 0);
> +
> + if (mmap_addr == MAP_FAILED)
> + die_perror("vhost-user region mmap error");
> +
> + dev_region->mmap_addr = (uint64_t)(uintptr_t)mmap_addr;
> + debug(" mmap_addr: 0x%016"PRIx64,
> + dev_region->mmap_addr);
> +
> + close(msg->fds[i]);
> + }
> +
> + for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
> + if (vdev->vq[i].vring.desc) {
> + if (map_ring(vdev, &vdev->vq[i]))
> + die("remapping queue %d during setmemtable", i);
> + }
> + }
> +
> + return false;
> +}
> +
> +/**
> + * vu_set_vring_num_exec() - Set the size of the queue (vring size)
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_vring_num_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + unsigned int idx = msg->payload.state.index;
> + unsigned int num = msg->payload.state.num;
> +
> + debug("State.index: %u", idx);
> + debug("State.num: %u", num);
> + vdev->vq[idx].vring.num = num;
> +
> + return false;
> +}
> +
> +/**
> + * vu_set_vring_addr_exec() - Set the addresses of the vring
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_vring_addr_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + /* We need to copy the payload to vhost_vring_addr structure
> + * to access index because address of msg->payload.addr
> + * can be unaligned as it is packed.
> + */
> + struct vhost_vring_addr addr = msg->payload.addr;
> + struct vu_virtq *vq = &vdev->vq[addr.index];
> +
> + debug("vhost_vring_addr:");
> + debug(" index: %d", addr.index);
> + debug(" flags: %d", addr.flags);
> + debug(" desc_user_addr: 0x%016" PRIx64,
> + (uint64_t)addr.desc_user_addr);
> + debug(" used_user_addr: 0x%016" PRIx64,
> + (uint64_t)addr.used_user_addr);
> + debug(" avail_user_addr: 0x%016" PRIx64,
> + (uint64_t)addr.avail_user_addr);
> + debug(" log_guest_addr: 0x%016" PRIx64,
> + (uint64_t)addr.log_guest_addr);
> +
> + vq->vra = msg->payload.addr;
> + vq->vring.flags = addr.flags;
> + vq->vring.log_guest_addr = addr.log_guest_addr;
> +
> + if (map_ring(vdev, vq))
> + die("Invalid vring_addr message");
> +
> + vq->used_idx = le16toh(vq->vring.used->idx);
> +
> + if (vq->last_avail_idx != vq->used_idx) {
> + debug("Last avail index != used index: %u != %u",
> + vq->last_avail_idx, vq->used_idx);
> + }
> +
> + return false;
> +}
> +/**
> + * vu_set_vring_base_exec() - Sets the next index to use for descriptors
> + * in this vring
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_vring_base_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + unsigned int idx = msg->payload.state.index;
> + unsigned int num = msg->payload.state.num;
> +
> + debug("State.index: %u", idx);
> + debug("State.num: %u", num);
> + vdev->vq[idx].shadow_avail_idx = vdev->vq[idx].last_avail_idx = num;
> +
> + return false;
> +}
> +
> +/**
> + * vu_get_vring_base_exec() - Stops the vring and returns the current
> + * descriptor index or indices
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: True as a reply is requested
> + */
> +static bool vu_get_vring_base_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + unsigned int idx = msg->payload.state.index;
> +
> + debug("State.index: %u", idx);
> + msg->payload.state.num = vdev->vq[idx].last_avail_idx;
> + msg->hdr.size = sizeof(msg->payload.state);
> +
> + vdev->vq[idx].started = false;
> +
> + if (vdev->vq[idx].call_fd != -1) {
> + close(vdev->vq[idx].call_fd);
> + vdev->vq[idx].call_fd = -1;
> + }
> + if (vdev->vq[idx].kick_fd != -1) {
> + vu_remove_watch(vdev, vdev->vq[idx].kick_fd);
> + close(vdev->vq[idx].kick_fd);
> + vdev->vq[idx].kick_fd = -1;
> + }
> +
> + return true;
> +}
> +
> +/**
> + * vu_set_watch() - Add a file descriptor to the passt epoll file descriptor
> + * @vdev: vhost-user device
> + * @fd: file descriptor to add
> + */
> +static void vu_set_watch(const struct vu_dev *vdev, int fd)
> +{
> + /* Placeholder to add passt related code */
> + (void)vdev;
> + (void)fd;
> +}
> +
> +/**
> + * vu_wait_queue() - wait for new free entries in the virtqueue
> + * @vq: virtqueue to wait on
> + */
> +static int vu_wait_queue(const struct vu_virtq *vq)
> +{
> + eventfd_t kick_data;
> + ssize_t rc;
> + int status;
> +
> + /* wait for the kernel to put new entries in the queue */
> + status = fcntl(vq->kick_fd, F_GETFL);
> + if (status == -1)
> + return -1;
Same as on v3 (I see you changed this below, but not here): if you
don't use status later, you can omit storing it.
> +
> + if (fcntl(vq->kick_fd, F_SETFL, status & ~O_NONBLOCK))
> + return -1;
> +
> + rc = eventfd_read(vq->kick_fd, &kick_data);
> +
> + if (fcntl(vq->kick_fd, F_SETFL, status))
> + return -1;
> +
> + if (rc == -1)
> + return -1;
> +
> + return 0;
> +}
> +
> +/**
> + * vu_send() - Send a buffer to the front-end using the RX virtqueue
> + * @vdev: vhost-user device
> + * @buf: address of the buffer
> + * @size: size of the buffer
> + *
> + * Return: number of bytes sent, -1 if there is an error
> + */
> +/* cppcheck-suppress unusedFunction */
> +int vu_send(struct vu_dev *vdev, const void *buf, size_t size)
> +{
> + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
> + struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
> + struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
> + size_t lens[VIRTQUEUE_MAX_SIZE];
> + __virtio16 *num_buffers_ptr = NULL;
> + size_t hdrlen = vdev->hdrlen;
> + int in_sg_count = 0;
> + size_t offset = 0;
> + int i = 0, j;
> +
> + debug("vu_send size %zu hdrlen %zu", size, hdrlen);
> +
> + if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) {
> + err("Got packet, but no available descriptors on RX virtq.");
> + return 0;
> + }
> +
> + while (offset < size) {
> + size_t len;
> + int total;
> + int ret;
> +
> + total = 0;
> +
> + if (i == ARRAY_SIZE(elem) ||
> + in_sg_count == ARRAY_SIZE(in_sg)) {
> + err("virtio-net unexpected long buffer chain");
> + goto err;
> + }
> +
> + elem[i].out_num = 0;
> + elem[i].out_sg = NULL;
> + elem[i].in_num = ARRAY_SIZE(in_sg) - in_sg_count;
> + elem[i].in_sg = &in_sg[in_sg_count];
> +
> + ret = vu_queue_pop(vdev, vq, &elem[i]);
> + if (ret < 0) {
> + if (vu_wait_queue(vq) != -1)
> + continue;
> + if (i) {
> + err("virtio-net unexpected empty queue: "
> + "i %d mergeable %d offset %zd, size %zd, "
> + "features 0x%" PRIx64,
> + i, vu_has_feature(vdev,
> + VIRTIO_NET_F_MRG_RXBUF),
> + offset, size, vdev->features);
> + }
> + offset = -1;
> + goto err;
> + }
> + in_sg_count += elem[i].in_num;
> +
> + if (elem[i].in_num < 1) {
> + err("virtio-net receive queue contains no in buffers");
> + vu_queue_detach_element(vq);
> + offset = -1;
> + goto err;
> + }
> +
> + if (i == 0) {
> + struct virtio_net_hdr hdr = {
> + .flags = VIRTIO_NET_HDR_F_DATA_VALID,
> + .gso_type = VIRTIO_NET_HDR_GSO_NONE,
> + };
> +
> + ASSERT(offset == 0);
> + ASSERT(elem[i].in_sg[0].iov_len >= hdrlen);
> +
> + len = iov_from_buf(elem[i].in_sg, elem[i].in_num, 0,
> + &hdr, sizeof(hdr));
> +
> + num_buffers_ptr = (__virtio16 *)((char *)elem[i].in_sg[0].iov_base +
> + len);
> +
> + total += hdrlen;
> + }
> +
> + len = iov_from_buf(elem[i].in_sg, elem[i].in_num, total,
> + (char *)buf + offset, size - offset);
> +
> + total += len;
> + offset += len;
> +
> + /* If buffers can't be merged, at this point we
> + * must have consumed the complete packet.
> + * Otherwise, drop it.
> + */
> + if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) &&
> + offset < size) {
> + vu_queue_unpop(vq);
> + goto err;
> + }
> +
> + lens[i] = total;
> + i++;
> + }
> +
> + if (num_buffers_ptr && vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
> + *num_buffers_ptr = htole16(i);
> +
> + for (j = 0; j < i; j++) {
> + debug("filling total %zd idx %d", lens[j], j);
> + vu_queue_fill(vq, &elem[j], lens[j], j);
> + }
> +
> + vu_queue_flush(vq, i);
> + vu_queue_notify(vdev, vq);
> +
> + debug("vhost-user sent %zu", offset);
> +
> + return offset;
> +err:
> + for (j = 0; j < i; j++)
> + vu_queue_detach_element(vq);
> +
> + return offset;
> +}
> +
> +/**
> + * vu_handle_tx() - Receive data from the TX virtqueue
> + * @vdev: vhost-user device
> + * @index: index of the virtqueue
> + * @now: Current timestamp
> + */
> +static void vu_handle_tx(struct vu_dev *vdev, int index,
> + const struct timespec *now)
> +{
> + struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
> + struct iovec out_sg[VIRTQUEUE_MAX_SIZE];
> + struct vu_virtq *vq = &vdev->vq[index];
> + int hdrlen = vdev->hdrlen;
> + int out_sg_count;
> + int count;
> +
Excess newline (same as v3).
> +
> + if (!VHOST_USER_IS_QUEUE_TX(index)) {
> + debug("vhost-user: index %d is not a TX queue", index);
> + return;
> + }
> +
> + tap_flush_pools();
> +
> + count = 0;
> + out_sg_count = 0;
> + while (count < VIRTQUEUE_MAX_SIZE) {
So, I see that this is limited to 1024 iterations now (it was limited
also earlier, but I didn't realise that).
If we loop at most VIRTQUEUE_MAX_SIZE times, that means, I guess, that
while we're popping elements, the queue can't be written to, correct?
Or it can be written to, but we'll get an additional kick after
vu_queue_notify() if that happens?
> + int ret;
> +
> + elem[count].out_num = 1;
> + elem[count].out_sg = &out_sg[out_sg_count];
> + elem[count].in_num = 0;
> + elem[count].in_sg = NULL;
> + ret = vu_queue_pop(vdev, vq, &elem[count]);
> + if (ret < 0)
> + break;
> + out_sg_count += elem[count].out_num;
> +
> + if (elem[count].out_num < 1) {
> + debug("virtio-net header not in first element");
> + break;
> + }
> + ASSERT(elem[count].out_num == 1);
> +
> + tap_add_packet(vdev->context,
> + elem[count].out_sg[0].iov_len - hdrlen,
> + (char *)elem[count].out_sg[0].iov_base + hdrlen);
> + count++;
> + }
> + tap_handler(vdev->context, now);
> +
> + if (count) {
> + int i;
> +
> + for (i = 0; i < count; i++)
> + vu_queue_fill(vq, &elem[i], 0, i);
> + vu_queue_flush(vq, count);
> + vu_queue_notify(vdev, vq);
> + }
> +}
> +
> +/**
> + * vu_kick_cb() - Called on a kick event to start to receive data
> + * @vdev: vhost-user device
> + * @ref: epoll reference information
> + * @now: Current timestamp
> + */
> +/* cppcheck-suppress unusedFunction */
> +void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
> + const struct timespec *now)
> +{
> + eventfd_t kick_data;
> + ssize_t rc;
> + int idx;
> +
> + for (idx = 0; idx < VHOST_USER_MAX_QUEUES; idx++) {
> + if (vdev->vq[idx].kick_fd == ref.fd)
> + break;
> + }
> +
> + if (idx == VHOST_USER_MAX_QUEUES)
> + return;
> +
> + rc = eventfd_read(ref.fd, &kick_data);
> + if (rc == -1)
> + die_perror("vhost-user kick eventfd_read()");
> +
> + debug("vhost-user: ot kick_data: %016"PRIx64" idx:%d",
> + kick_data, idx);
> + if (VHOST_USER_IS_QUEUE_TX(idx))
> + vu_handle_tx(vdev, idx, now);
> +}
> +
> +/**
> + * vu_check_queue_msg_file() - Check if a message is valid,
> + * close fds if NOFD bit is set
> + * @vmsg: vhost-user message
> + */
> +static void vu_check_queue_msg_file(struct vhost_user_msg *msg)
> +{
> + bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> + int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> +
> + if (idx >= VHOST_USER_MAX_QUEUES)
> + die("Invalid vhost-user queue index: %u", idx);
> +
> + if (nofd) {
> + vmsg_close_fds(msg);
> + return;
> + }
> +
> + if (msg->fd_num != 1)
> + die("Invalid fds in vhost-user request: %d", msg->hdr.request);
> +}
> +
> +/**
> + * vu_set_vring_kick_exec() - Set the event file descriptor for adding buffers
> + * to the vring
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_vring_kick_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> + int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> +
> + debug("u64: 0x%016"PRIx64, msg->payload.u64);
> +
> + vu_check_queue_msg_file(msg);
> +
> + if (vdev->vq[idx].kick_fd != -1) {
> + vu_remove_watch(vdev, vdev->vq[idx].kick_fd);
> + close(vdev->vq[idx].kick_fd);
> + }
> +
> + vdev->vq[idx].kick_fd = nofd ? -1 : msg->fds[0];
> + debug("Got kick_fd: %d for vq: %d", vdev->vq[idx].kick_fd, idx);
> +
> + vdev->vq[idx].started = true;
> +
> + if (vdev->vq[idx].kick_fd != -1 && VHOST_USER_IS_QUEUE_TX(idx)) {
> + vu_set_watch(vdev, vdev->vq[idx].kick_fd);
> + debug("Waiting for kicks on fd: %d for vq: %d",
> + vdev->vq[idx].kick_fd, idx);
> + }
> +
> + return false;
> +}
> +
> +/**
> + * vu_set_vring_call_exec() - Set the event file descriptor to signal when
> + * buffers are used
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_vring_call_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> + int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> +
> + debug("u64: 0x%016"PRIx64, msg->payload.u64);
> +
> + vu_check_queue_msg_file(msg);
> +
> + if (vdev->vq[idx].call_fd != -1)
> + close(vdev->vq[idx].call_fd);
> +
> + vdev->vq[idx].call_fd = nofd ? -1 : msg->fds[0];
> +
> + /* in case of I/O hang after reconnecting */
> + if (vdev->vq[idx].call_fd != -1)
> + eventfd_write(msg->fds[0], 1);
> +
> + debug("Got call_fd: %d for vq: %d", vdev->vq[idx].call_fd, idx);
> +
> + return false;
> +}
> +
> +/**
> + * vu_set_vring_err_exec() - Set the event file descriptor to signal when
> + * error occurs
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_vring_err_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> + int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> +
> + debug("u64: 0x%016"PRIx64, msg->payload.u64);
> +
> + vu_check_queue_msg_file(msg);
> +
> + if (vdev->vq[idx].err_fd != -1) {
> + close(vdev->vq[idx].err_fd);
> + vdev->vq[idx].err_fd = -1;
> + }
> +
> + /* cppcheck-suppress redundantAssignment */
> + vdev->vq[idx].err_fd = nofd ? -1 : msg->fds[0];
Maybe you missed this comment to v3:
--
Wouldn't it be easier (and not require a suppression) to say:
if (!nofd)
vdev->vq[idx].err_fd = msg->fds[0];
?
--
> +
> + return false;
> +}
> +
> +/**
> + * vu_get_protocol_features_exec() - Provide the protocol (vhost-user) features
> + * to the front-end
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: True as a reply is requested
> + */
> +static bool vu_get_protocol_features_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + uint64_t features = 1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK;
> +
> + (void)vdev;
> + vmsg_set_reply_u64(msg, features);
> +
> + return true;
> +}
> +
> +/**
> + * vu_set_protocol_features_exec() - Enable protocol (vhost-user) features
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_protocol_features_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + uint64_t features = msg->payload.u64;
> +
> + debug("u64: 0x%016"PRIx64, features);
> +
> + vdev->protocol_features = msg->payload.u64;
> +
> + if (vu_has_protocol_feature(vdev,
> + VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS) &&
> + (!vu_has_protocol_feature(vdev, VHOST_USER_PROTOCOL_F_BACKEND_REQ) ||
> + !vu_has_protocol_feature(vdev, VHOST_USER_PROTOCOL_F_REPLY_ACK))) {
Same as v3:
--
Do we actually care about VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS at
all, I wonder? This whole part (coming from ff1320050a3a "libvhost-user:
implement in-band notifications") is rather hard to read/understand, so
it would be great if we could just get rid of it altogether.
But if not, sure, let's leave it like the original, I'd say.
--
> + /*
> + * The use case for using messages for kick/call is simulation, to make
> + * the kick and call synchronous. To actually get that behaviour, both
> + * of the other features are required.
> + * Theoretically, one could use only kick messages, or do them without
> + * having F_REPLY_ACK, but too many (possibly pending) messages on the
> + * socket will eventually cause the master to hang, to avoid this in
> + * scenarios where not desired enforce that the settings are in a way
> + * that actually enables the simulation case.
> + */
> + die("F_IN_BAND_NOTIFICATIONS requires F_BACKEND_REQ && F_REPLY_ACK");
> + }
> +
> + return false;
> +}
> +
> +/**
> + * vu_get_queue_num_exec() - Tell how many queues we support
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: True as a reply is requested
> + */
> +static bool vu_get_queue_num_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + (void)vdev;
> +
> + vmsg_set_reply_u64(msg, VHOST_USER_MAX_QUEUES);
> +
> + return true;
> +}
> +
> +/**
> + * vu_set_vring_enable_exec() - Enable or disable corresponding vring
> + * @vdev: vhost-user device
> + * @vmsg: vhost-user message
> + *
> + * Return: False as no reply is requested
> + */
> +static bool vu_set_vring_enable_exec(struct vu_dev *vdev,
> + struct vhost_user_msg *msg)
> +{
> + unsigned int enable = msg->payload.state.num;
> + unsigned int idx = msg->payload.state.index;
> +
> + debug("State.index: %u", idx);
> + debug("State.enable: %u", enable);
> +
> + if (idx >= VHOST_USER_MAX_QUEUES)
> + die("Invalid vring_enable index: %u", idx);
> +
> + vdev->vq[idx].enable = enable;
> + return false;
> +}
> +
> +/**
> + * vu_init() - Initialize vhost-user device structure
> + * @c: execution context
> + * @vdev: vhost-user device
> + */
> +/* cppcheck-suppress unusedFunction */
> +void vu_init(struct ctx *c, struct vu_dev *vdev)
> +{
> + int i;
> +
> + vdev->context = c;
> + vdev->hdrlen = 0;
> + for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
> + vdev->vq[i] = (struct vu_virtq){
> + .call_fd = -1,
> + .kick_fd = -1,
> + .err_fd = -1,
> + .notification = true,
> + };
> + }
> +}
> +
> +/**
> + * vu_cleanup() - Reset vhost-user device
> + * @vdev: vhost-user device
> + */
> +/* cppcheck-suppress unusedFunction */
> +void vu_cleanup(struct vu_dev *vdev)
> +{
> + unsigned int i;
> +
> + for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
> + struct vu_virtq *vq = &vdev->vq[i];
> +
> + vq->started = false;
> + vq->notification = true;
> +
> + if (vq->call_fd != -1) {
> + close(vq->call_fd);
> + vq->call_fd = -1;
> + }
> + if (vq->err_fd != -1) {
> + close(vq->err_fd);
> + vq->err_fd = -1;
> + }
> + if (vq->kick_fd != -1) {
> + vu_remove_watch(vdev, vq->kick_fd);
> + close(vq->kick_fd);
> + vq->kick_fd = -1;
> + }
> +
> + vq->vring.desc = 0;
> + vq->vring.used = 0;
> + vq->vring.avail = 0;
> + }
> + vdev->hdrlen = 0;
> +
> + for (i = 0; i < vdev->nregions; i++) {
> + const struct vu_dev_region *r = &vdev->regions[i];
> + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */
> + void *m = (void *)r->mmap_addr;
> +
> + if (m)
> + munmap(m, r->size + r->mmap_offset);
> + }
> + vdev->nregions = 0;
> +}
> +
> +/**
> + * vu_sock_reset() - Reset connection socket
> + * @vdev: vhost-user device
> + */
> +static void vu_sock_reset(struct vu_dev *vdev)
> +{
> + /* Placeholder to add passt related code */
> + (void)vdev;
> +}
> +
> +static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev,
> + struct vhost_user_msg *msg) = {
> + [VHOST_USER_GET_FEATURES] = vu_get_features_exec,
> + [VHOST_USER_SET_FEATURES] = vu_set_features_exec,
> + [VHOST_USER_GET_PROTOCOL_FEATURES] = vu_get_protocol_features_exec,
> + [VHOST_USER_SET_PROTOCOL_FEATURES] = vu_set_protocol_features_exec,
> + [VHOST_USER_GET_QUEUE_NUM] = vu_get_queue_num_exec,
> + [VHOST_USER_SET_OWNER] = vu_set_owner_exec,
> + [VHOST_USER_SET_MEM_TABLE] = vu_set_mem_table_exec,
> + [VHOST_USER_SET_VRING_NUM] = vu_set_vring_num_exec,
> + [VHOST_USER_SET_VRING_ADDR] = vu_set_vring_addr_exec,
> + [VHOST_USER_SET_VRING_BASE] = vu_set_vring_base_exec,
> + [VHOST_USER_GET_VRING_BASE] = vu_get_vring_base_exec,
> + [VHOST_USER_SET_VRING_KICK] = vu_set_vring_kick_exec,
> + [VHOST_USER_SET_VRING_CALL] = vu_set_vring_call_exec,
> + [VHOST_USER_SET_VRING_ERR] = vu_set_vring_err_exec,
> + [VHOST_USER_SET_VRING_ENABLE] = vu_set_vring_enable_exec,
> +};
> +
> +/**
> + * vu_control_handler() - Handle control commands for vhost-user
> + * @vdev: vhost-user device
> + * @fd: vhost-user message socket
> + * @events: epoll events
> + */
> +/* cppcheck-suppress unusedFunction */
> +void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events)
> +{
> + struct vhost_user_msg msg = { 0 };
> + bool need_reply, reply_requested;
> + int ret;
> +
> + if (events & (EPOLLRDHUP | EPOLLHUP | EPOLLERR)) {
> + vu_sock_reset(vdev);
> + return;
> + }
> +
> + ret = vu_message_read_default(fd, &msg);
> + if (ret == 0) {
> + vu_sock_reset(vdev);
> + return;
> + }
> + debug("================ Vhost user message ================");
> + debug("Request: %s (%d)", vu_request_to_string(msg.hdr.request),
> + msg.hdr.request);
> + debug("Flags: 0x%x", msg.hdr.flags);
> + debug("Size: %u", msg.hdr.size);
> +
> + need_reply = msg.hdr.flags & VHOST_USER_NEED_REPLY_MASK;
> +
> + if (msg.hdr.request >= 0 && msg.hdr.request < VHOST_USER_MAX &&
> + vu_handle[msg.hdr.request])
> + reply_requested = vu_handle[msg.hdr.request](vdev, &msg);
> + else
> + die("Unhandled request: %d", msg.hdr.request);
> +
> + /* cppcheck-suppress legacyUninitvar */
> + if (!reply_requested && need_reply) {
> + msg.payload.u64 = 0;
> + msg.hdr.flags = 0;
> + msg.hdr.size = sizeof(msg.payload.u64);
> + msg.fd_num = 0;
> + reply_requested = true;
> + }
> +
> + if (reply_requested)
> + vu_send_reply(fd, &msg);
> +}
> diff --git a/vhost_user.h b/vhost_user.h
> new file mode 100644
> index 000000000000..ed4074c6b915
> --- /dev/null
> +++ b/vhost_user.h
> @@ -0,0 +1,203 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * vhost-user API, command management and virtio interface
> + *
> + * Copyright Red Hat
> + * Author: Laurent Vivier <lvivier@redhat.com>
> + */
> +
> +/* some parts from subprojects/libvhost-user/libvhost-user.h */
> +
> +#ifndef VHOST_USER_H
> +#define VHOST_USER_H
> +
> +#include "virtio.h"
> +#include "iov.h"
> +
> +#define VHOST_USER_F_PROTOCOL_FEATURES 30
> +
> +#define VHOST_MEMORY_BASELINE_NREGIONS 8
> +
> +/**
> + * enum vhost_user_protocol_feature - List of available vhost-user features
> + */
> +enum vhost_user_protocol_feature {
> + VHOST_USER_PROTOCOL_F_MQ = 0,
> + VHOST_USER_PROTOCOL_F_LOG_SHMFD = 1,
> + VHOST_USER_PROTOCOL_F_RARP = 2,
> + VHOST_USER_PROTOCOL_F_REPLY_ACK = 3,
> + VHOST_USER_PROTOCOL_F_NET_MTU = 4,
> + VHOST_USER_PROTOCOL_F_BACKEND_REQ = 5,
> + VHOST_USER_PROTOCOL_F_CROSS_ENDIAN = 6,
> + VHOST_USER_PROTOCOL_F_CRYPTO_SESSION = 7,
> + VHOST_USER_PROTOCOL_F_PAGEFAULT = 8,
> + VHOST_USER_PROTOCOL_F_CONFIG = 9,
> + VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD = 10,
> + VHOST_USER_PROTOCOL_F_HOST_NOTIFIER = 11,
> + VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12,
> + VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS = 14,
> + VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15,
> +
> + VHOST_USER_PROTOCOL_F_MAX
> +};
> +
> +/**
> + * enum vhost_user_request - List of available vhost-user requests
> + */
> +enum vhost_user_request {
> + VHOST_USER_NONE = 0,
> + VHOST_USER_GET_FEATURES = 1,
> + VHOST_USER_SET_FEATURES = 2,
> + VHOST_USER_SET_OWNER = 3,
> + VHOST_USER_RESET_OWNER = 4,
> + VHOST_USER_SET_MEM_TABLE = 5,
> + VHOST_USER_SET_LOG_BASE = 6,
> + VHOST_USER_SET_LOG_FD = 7,
> + VHOST_USER_SET_VRING_NUM = 8,
> + VHOST_USER_SET_VRING_ADDR = 9,
> + VHOST_USER_SET_VRING_BASE = 10,
> + VHOST_USER_GET_VRING_BASE = 11,
> + VHOST_USER_SET_VRING_KICK = 12,
> + VHOST_USER_SET_VRING_CALL = 13,
> + VHOST_USER_SET_VRING_ERR = 14,
> + VHOST_USER_GET_PROTOCOL_FEATURES = 15,
> + VHOST_USER_SET_PROTOCOL_FEATURES = 16,
> + VHOST_USER_GET_QUEUE_NUM = 17,
> + VHOST_USER_SET_VRING_ENABLE = 18,
> + VHOST_USER_SEND_RARP = 19,
> + VHOST_USER_NET_SET_MTU = 20,
> + VHOST_USER_SET_BACKEND_REQ_FD = 21,
> + VHOST_USER_IOTLB_MSG = 22,
> + VHOST_USER_SET_VRING_ENDIAN = 23,
> + VHOST_USER_GET_CONFIG = 24,
> + VHOST_USER_SET_CONFIG = 25,
> + VHOST_USER_CREATE_CRYPTO_SESSION = 26,
> + VHOST_USER_CLOSE_CRYPTO_SESSION = 27,
> + VHOST_USER_POSTCOPY_ADVISE = 28,
> + VHOST_USER_POSTCOPY_LISTEN = 29,
> + VHOST_USER_POSTCOPY_END = 30,
> + VHOST_USER_GET_INFLIGHT_FD = 31,
> + VHOST_USER_SET_INFLIGHT_FD = 32,
> + VHOST_USER_GPU_SET_SOCKET = 33,
> + VHOST_USER_VRING_KICK = 35,
> + VHOST_USER_GET_MAX_MEM_SLOTS = 36,
> + VHOST_USER_ADD_MEM_REG = 37,
> + VHOST_USER_REM_MEM_REG = 38,
> + VHOST_USER_MAX
> +};
> +
> +/**
> + * struct vhost_user_header - vhost-user message header
> + * @request: Request type of the message
> + * @flags: Request flags
> + * @size: The following payload size
> + */
> +struct vhost_user_header {
> + enum vhost_user_request request;
> +
> +#define VHOST_USER_VERSION_MASK 0x3
> +#define VHOST_USER_REPLY_MASK (0x1 << 2)
> +#define VHOST_USER_NEED_REPLY_MASK (0x1 << 3)
> + uint32_t flags;
> + uint32_t size;
> +} __attribute__ ((__packed__));
> +
> +/**
> + * struct vhost_user_memory_region - Front-end shared memory region information
> + * @guest_phys_addr: Guest physical address of the region
> + * @memory_size: Memory size
> + * @userspace_addr: front-end (QEMU) userspace address
> + * @mmap_offset: region offset in the shared memory area
> + */
> +struct vhost_user_memory_region {
> + uint64_t guest_phys_addr;
> + uint64_t memory_size;
> + uint64_t userspace_addr;
> + uint64_t mmap_offset;
> +};
> +
> +/**
> + * struct vhost_user_memory - List of all the shared memory regions
> + * @nregions: Number of memory regions
> + * @padding: Padding
> + * @regions: Memory regions list
> + */
> +struct vhost_user_memory {
> + uint32_t nregions;
> + uint32_t padding;
> + struct vhost_user_memory_region regions[VHOST_MEMORY_BASELINE_NREGIONS];
> +};
> +
> +/**
> + * union vhost_user_payload - vhost-user message payload
> + * @u64: 64-bit payload
> + * @state: vring state payload
> + * @addr: vring addresses payload
> + * @memory: Memory regions information payload
> + */
> +union vhost_user_payload {
> +#define VHOST_USER_VRING_IDX_MASK 0xff
> +#define VHOST_USER_VRING_NOFD_MASK (0x1 << 8)
> + uint64_t u64;
> + struct vhost_vring_state state;
> + struct vhost_vring_addr addr;
> + struct vhost_user_memory memory;
> +};
> +
> +/**
> + * struct vhost_user_msg - vhost-user message
> + * @hdr: Message header
> + * @payload: Message payload
> + * @fds: File descriptors associated with the message
> + * in the ancillary data.
> + * (shared memory or event file descriptors)
> + * @fd_num: Number of file descriptors
> + */
> +struct vhost_user_msg {
> + struct vhost_user_header hdr;
> + union vhost_user_payload payload;
> +
> + int fds[VHOST_MEMORY_BASELINE_NREGIONS];
> + int fd_num;
> +} __attribute__ ((__packed__));
> +#define VHOST_USER_HDR_SIZE sizeof(struct vhost_user_header)
> +
> +/* index of the RX virtqueue */
> +#define VHOST_USER_RX_QUEUE 0
> +/* index of the TX virtqueue */
> +#define VHOST_USER_TX_QUEUE 1
> +
> +/* in case of multiqueue, the RX and TX queues are interleaved */
> +#define VHOST_USER_IS_QUEUE_TX(n) (n % 2)
> +#define VHOST_USER_IS_QUEUE_RX(n) (!(n % 2))
> +
> +/**
> + * vu_queue_enabled - Return state of a virtqueue
> + * @vq: virtqueue to check
> + *
> + * Return: true if the virtqueue is enabled, false otherwise
> + */
> +static inline bool vu_queue_enabled(const struct vu_virtq *vq)
> +{
> + return vq->enable;
> +}
> +
> +/**
> + * vu_queue_started - Return state of a virtqueue
> + * @vq: virtqueue to check
> + *
> + * Return: true if the virtqueue is started, false otherwise
> + */
> +static inline bool vu_queue_started(const struct vu_virtq *vq)
> +{
> + return vq->started;
> +}
> +
> +int vu_send(struct vu_dev *vdev, const void *buf, size_t size);
> +void vu_print_capabilities(void);
> +void vu_init(struct ctx *c, struct vu_dev *vdev);
> +void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
> + const struct timespec *now);
> +void vu_cleanup(struct vu_dev *vdev);
> +void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events);
> +#endif /* VHOST_USER_H */
> diff --git a/virtio.c b/virtio.c
> index 380590afbca3..237395396606 100644
> --- a/virtio.c
> +++ b/virtio.c
> @@ -328,7 +328,6 @@ static bool vring_can_notify(const struct vu_dev *dev, struct vu_virtq *vq)
> * @dev: Vhost-user device
> * @vq: Virtqueue
> */
> -/* cppcheck-suppress unusedFunction */
> void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq)
> {
> if (!vq->vring.avail)
> @@ -504,7 +503,6 @@ static int vu_queue_map_desc(struct vu_dev *dev, struct vu_virtq *vq, unsigned i
> *
> * Return: -1 if there is an error, 0 otherwise
> */
> -/* cppcheck-suppress unusedFunction */
> int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq, struct vu_virtq_element *elem)
> {
> unsigned int head;
> @@ -553,7 +551,6 @@ void vu_queue_detach_element(struct vu_virtq *vq)
> * vu_queue_unpop() - Push back the previously popped element from the virqueue
> * @vq: Virtqueue
> */
> -/* cppcheck-suppress unusedFunction */
> void vu_queue_unpop(struct vu_virtq *vq)
> {
> vq->last_avail_idx--;
> @@ -621,7 +618,6 @@ void vu_queue_fill_by_index(struct vu_virtq *vq, unsigned int index,
> * @len: Size of the element
> * @idx: Used ring entry index
> */
> -/* cppcheck-suppress unusedFunction */
> void vu_queue_fill(struct vu_virtq *vq, const struct vu_virtq_element *elem,
> unsigned int len, unsigned int idx)
> {
> @@ -645,7 +641,6 @@ static inline void vring_used_idx_set(struct vu_virtq *vq, uint16_t val)
> * @vq: Virtqueue
> * @count: Number of entry to flush
> */
> -/* cppcheck-suppress unusedFunction */
> void vu_queue_flush(struct vu_virtq *vq, unsigned int count)
> {
> uint16_t old, new;
> diff --git a/virtio.h b/virtio.h
> index 0e5705581bd2..d58b9ef7fc1d 100644
> --- a/virtio.h
> +++ b/virtio.h
> @@ -106,6 +106,7 @@ struct vu_dev_region {
> * @hdrlen: Virtio -net header length
> */
> struct vu_dev {
> + struct ctx *context;
> uint32_t nregions;
> struct vu_dev_region regions[VHOST_USER_MAX_RAM_SLOTS];
> struct vu_virtq vq[VHOST_USER_MAX_QUEUES];
> @@ -162,7 +163,6 @@ static inline bool vu_has_feature(const struct vu_dev *vdev,
> *
> * Return: True if the feature is available
> */
> -/* cppcheck-suppress unusedFunction */
> static inline bool vu_has_protocol_feature(const struct vu_dev *vdev,
> unsigned int fbit)
> {
The rest looks good to me.
--
Stefano
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v4 4/4] vhost-user: add vhost-user
2024-09-06 16:04 ` [PATCH v4 4/4] vhost-user: add vhost-user Laurent Vivier
@ 2024-09-10 15:47 ` Stefano Brivio
2024-09-12 14:05 ` Laurent Vivier
0 siblings, 1 reply; 15+ messages in thread
From: Stefano Brivio @ 2024-09-10 15:47 UTC (permalink / raw)
To: Laurent Vivier; +Cc: passt-dev
On Fri, 6 Sep 2024 18:04:49 +0200
Laurent Vivier <lvivier@redhat.com> wrote:
> add virtio and vhost-user functions to connect with QEMU.
>
> $ ./passt --vhost-user
>
> and
>
> # qemu-system-x86_64 ... -m 4G \
> -object memory-backend-memfd,id=memfd0,share=on,size=4G \
> -numa node,memdev=memfd0 \
> -chardev socket,id=chr0,path=/tmp/passt_1.socket \
> -netdev vhost-user,id=netdev0,chardev=chr0 \
> -device virtio-net,mac=9a:2b:2c:2d:2e:2f,netdev=netdev0 \
> ...
>
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
I reviewed it a bit, but it looks like you didn't have time yet to
address my comments from v3, so I guess I'd better wait with this one.
--
Stefano
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v4 2/4] vhost-user: introduce virtio API
2024-09-10 15:47 ` Stefano Brivio
@ 2024-09-12 11:23 ` Laurent Vivier
2024-09-12 13:36 ` Stefano Brivio
0 siblings, 1 reply; 15+ messages in thread
From: Laurent Vivier @ 2024-09-12 11:23 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-dev
On 10/09/2024 17:47, Stefano Brivio wrote:
>> +
>> +/**
>> + * virtqueue_read_indirect_desc() - Copy virtio ring descriptors from guest
>> + * memory
>> + * @dev: Vhost-user device
>> + * @desc: Destination address to copy the descriptors to
>> + * @addr: Guest memory address to copy from
>> + * @len: Length of memory to copy
>> + *
>> + * Return: -1 if there is an error, 0 otherwise
>> + */
>> +static int virtqueue_read_indirect_desc(struct vu_dev *dev, struct vring_desc *desc,
>> + uint64_t addr, size_t len)
>> +{
>> + uint64_t read_len;
>> +
>> + if (len > (VIRTQUEUE_MAX_SIZE * sizeof(struct vring_desc)))
>> + return -1;
>> +
>> + if (len == 0)
>> + return -1;
>> +
>> + while (len) {
>> + const struct vring_desc *orig_desc;
>> +
>> + read_len = len;
>> + orig_desc = vu_gpa_to_va(dev, &read_len, addr);
> In case you missed this in my review of v3 (I'm not sure if it's a
> valid concern):
>
> --
> Should we also return if read_len < sizeof(struct vring_desc) after
> this call? Can that ever happen, if we pick a particular value of addr
> so that it's almost at the end of a region?
> --
In fact, read_len can be < sizeof(struct vring_desc) after this call, but if orig_desc !=
NULL it means we can continue in another region to finish filling the structure.
If there is not enough memory to fill "len" bytes it exits with -1.
Thanks,
Laurent
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v4 3/4] vhost-user: introduce vhost-user API
2024-09-10 15:47 ` Stefano Brivio
@ 2024-09-12 12:41 ` Laurent Vivier
2024-09-12 13:40 ` Stefano Brivio
0 siblings, 1 reply; 15+ messages in thread
From: Laurent Vivier @ 2024-09-12 12:41 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-dev
On 10/09/2024 17:47, Stefano Brivio wrote:
> Nits and a couple of questions only:
>
> On Fri, 6 Sep 2024 18:04:48 +0200
> Laurent Vivier <lvivier@redhat.com> wrote:
>
>> Add vhost_user.c and vhost_user.h that define the functions needed
>> to implement vhost-user backend.
>>
>> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
>> ---
>> Makefile | 4 +-
>> iov.c | 1 -
>> vhost_user.c | 1265 ++++++++++++++++++++++++++++++++++++++++++++++++++
>> vhost_user.h | 203 ++++++++
>> virtio.c | 5 -
>> virtio.h | 2 +-
>> 6 files changed, 1471 insertions(+), 9 deletions(-)
>> create mode 100644 vhost_user.c
>> create mode 100644 vhost_user.h
...
>> diff --git a/vhost_user.c b/vhost_user.c
>> new file mode 100644
>> index 000000000000..6008a8adc967
>> --- /dev/null
>> +++ b/vhost_user.c
...
>> +/**
>> + * vu_wait_queue() - wait for new free entries in the virtqueue
>> + * @vq: virtqueue to wait on
>> + */
>> +static int vu_wait_queue(const struct vu_virtq *vq)
>> +{
>> + eventfd_t kick_data;
>> + ssize_t rc;
>> + int status;
>> +
>> + /* wait for the kernel to put new entries in the queue */
>> + status = fcntl(vq->kick_fd, F_GETFL);
>> + if (status == -1)
>> + return -1;
>
> Same as on v3 (I see you changed this below, but not here): if you
> don't use status later, you can omit storing it.
We need status with F_SETFL below:
>
>> +
>> + if (fcntl(vq->kick_fd, F_SETFL, status & ~O_NONBLOCK))
>> + return -1;
>> +
>> + rc = eventfd_read(vq->kick_fd, &kick_data);
>> +
>> + if (fcntl(vq->kick_fd, F_SETFL, status))
>> + return -1;
>> +
>> + if (rc == -1)
>> + return -1;
>> +
>> + return 0;
>> +}
...
>> +/**
>> + * vu_handle_tx() - Receive data from the TX virtqueue
>> + * @vdev: vhost-user device
>> + * @index: index of the virtqueue
>> + * @now: Current timestamp
>> + */
>> +static void vu_handle_tx(struct vu_dev *vdev, int index,
>> + const struct timespec *now)
>> +{
>> + struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
>> + struct iovec out_sg[VIRTQUEUE_MAX_SIZE];
>> + struct vu_virtq *vq = &vdev->vq[index];
>> + int hdrlen = vdev->hdrlen;
>> + int out_sg_count;
>> + int count;
>> +
>
> Excess newline (same as v3).
Done.
>
>> +
>> + if (!VHOST_USER_IS_QUEUE_TX(index)) {
>> + debug("vhost-user: index %d is not a TX queue", index);
>> + return;
>> + }
>> +
>> + tap_flush_pools();
>> +
>> + count = 0;
>> + out_sg_count = 0;
>> + while (count < VIRTQUEUE_MAX_SIZE) {
>
> So, I see that this is limited to 1024 iterations now (it was limited
> also earlier, but I didn't realise that).
>
> If we loop at most VIRTQUEUE_MAX_SIZE times, that means, I guess, that
> while we're popping elements, the queue can't be written to, correct?
No, I think the queue can be read and written to at the same time.
>
> Or it can be written to, but we'll get an additional kick after
> vu_queue_notify() if that happens?
I could check the protocol and the code, but I think it should work like that.
>
>> + int ret;
>> +
>> + elem[count].out_num = 1;
>> + elem[count].out_sg = &out_sg[out_sg_count];
>> + elem[count].in_num = 0;
>> + elem[count].in_sg = NULL;
>> + ret = vu_queue_pop(vdev, vq, &elem[count]);
>> + if (ret < 0)
>> + break;
>> + out_sg_count += elem[count].out_num;
>> +
>> + if (elem[count].out_num < 1) {
>> + debug("virtio-net header not in first element");
>> + break;
>> + }
>> + ASSERT(elem[count].out_num == 1);
>> +
>> + tap_add_packet(vdev->context,
>> + elem[count].out_sg[0].iov_len - hdrlen,
>> + (char *)elem[count].out_sg[0].iov_base + hdrlen);
>> + count++;
>> + }
>> + tap_handler(vdev->context, now);
>> +
>> + if (count) {
>> + int i;
>> +
>> + for (i = 0; i < count; i++)
>> + vu_queue_fill(vq, &elem[i], 0, i);
>> + vu_queue_flush(vq, count);
>> + vu_queue_notify(vdev, vq);
>> + }
>> +}
>> +
...
>> +/**
>> + * vu_set_vring_err_exec() - Set the event file descriptor to signal when
>> + * error occurs
>> + * @vdev: vhost-user device
>> + * @vmsg: vhost-user message
>> + *
>> + * Return: False as no reply is requested
>> + */
>> +static bool vu_set_vring_err_exec(struct vu_dev *vdev,
>> + struct vhost_user_msg *msg)
>> +{
>> + bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
>> + int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
>> +
>> + debug("u64: 0x%016"PRIx64, msg->payload.u64);
>> +
>> + vu_check_queue_msg_file(msg);
>> +
>> + if (vdev->vq[idx].err_fd != -1) {
>> + close(vdev->vq[idx].err_fd);
>> + vdev->vq[idx].err_fd = -1;
>> + }
>> +
>> + /* cppcheck-suppress redundantAssignment */
>> + vdev->vq[idx].err_fd = nofd ? -1 : msg->fds[0];
>
> Maybe you missed this comment to v3:
>
> --
> Wouldn't it be easier (and not require a suppression) to say:
>
> if (!nofd)
> vdev->vq[idx].err_fd = msg->fds[0];
>
Yes, you're right. I thought I fixed that but I think I have overwritten my changes...
(I also changed call_fd and kick_fd in the same way).
...
>> +/**
>> + * vu_set_protocol_features_exec() - Enable protocol (vhost-user) features
>> + * @vdev: vhost-user device
>> + * @vmsg: vhost-user message
>> + *
>> + * Return: False as no reply is requested
>> + */
>> +static bool vu_set_protocol_features_exec(struct vu_dev *vdev,
>> + struct vhost_user_msg *msg)
>> +{
>> + uint64_t features = msg->payload.u64;
>> +
>> + debug("u64: 0x%016"PRIx64, features);
>> +
>> + vdev->protocol_features = msg->payload.u64;
>> +
>> + if (vu_has_protocol_feature(vdev,
>> + VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS) &&
>> + (!vu_has_protocol_feature(vdev, VHOST_USER_PROTOCOL_F_BACKEND_REQ) ||
>> + !vu_has_protocol_feature(vdev, VHOST_USER_PROTOCOL_F_REPLY_ACK))) {
>
> Same as v3:
>
> --
> Do we actually care about VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS at
> all, I wonder? This whole part (coming from ff1320050a3a "libvhost-user:
> implement in-band notifications") is rather hard to read/understand, so
> it would be great if we could just get rid of it altogether.
>
> But if not, sure, let's leave it like the original, I'd say.
I'll remove it.
> --
>
>> + /*
>> + * The use case for using messages for kick/call is simulation, to make
>> + * the kick and call synchronous. To actually get that behaviour, both
>> + * of the other features are required.
>> + * Theoretically, one could use only kick messages, or do them without
>> + * having F_REPLY_ACK, but too many (possibly pending) messages on the
>> + * socket will eventually cause the master to hang, to avoid this in
>> + * scenarios where not desired enforce that the settings are in a way
>> + * that actually enables the simulation case.
>> + */
>> + die("F_IN_BAND_NOTIFICATIONS requires F_BACKEND_REQ && F_REPLY_ACK");
>> + }
>> +
>> + return false;
>> +}
Thanks,
Laurent
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v4 2/4] vhost-user: introduce virtio API
2024-09-12 11:23 ` Laurent Vivier
@ 2024-09-12 13:36 ` Stefano Brivio
2024-09-12 14:03 ` Laurent Vivier
0 siblings, 1 reply; 15+ messages in thread
From: Stefano Brivio @ 2024-09-12 13:36 UTC (permalink / raw)
To: Laurent Vivier; +Cc: passt-dev
On Thu, 12 Sep 2024 13:23:58 +0200
Laurent Vivier <lvivier@redhat.com> wrote:
> On 10/09/2024 17:47, Stefano Brivio wrote:
> >> +
> >> +/**
> >> + * virtqueue_read_indirect_desc() - Copy virtio ring descriptors from guest
> >> + * memory
> >> + * @dev: Vhost-user device
> >> + * @desc: Destination address to copy the descriptors to
> >> + * @addr: Guest memory address to copy from
> >> + * @len: Length of memory to copy
> >> + *
> >> + * Return: -1 if there is an error, 0 otherwise
> >> + */
> >> +static int virtqueue_read_indirect_desc(struct vu_dev *dev, struct vring_desc *desc,
> >> + uint64_t addr, size_t len)
> >> +{
> >> + uint64_t read_len;
> >> +
> >> + if (len > (VIRTQUEUE_MAX_SIZE * sizeof(struct vring_desc)))
> >> + return -1;
> >> +
> >> + if (len == 0)
> >> + return -1;
> >> +
> >> + while (len) {
> >> + const struct vring_desc *orig_desc;
> >> +
> >> + read_len = len;
> >> + orig_desc = vu_gpa_to_va(dev, &read_len, addr);
> > In case you missed this in my review of v3 (I'm not sure if it's a
> > valid concern):
> >
> > --
> > Should we also return if read_len < sizeof(struct vring_desc) after
> > this call? Can that ever happen, if we pick a particular value of addr
> > so that it's almost at the end of a region?
> > --
>
> In fact, read_len can be < sizeof(struct vring_desc) after this call, but if orig_desc !=
> NULL it means we can continue in another region to finish filling the structure.
Right, I see that.
> If there is not enough memory to fill "len" bytes it exits with -1.
...and this as well. But let's say that read_len is 1 (and struct
vring_desc is 16 bytes). Then:
memcpy(desc, orig_desc, read_len);
copies one byte
[...]
desc += read_len / sizeof(struct vring_desc);
doesn't increase desc.
At the next iteration with len > 0 and read_len > 0, the memcpy() will
overwrite that one byte, as we didn't increase desc. Or it's not
possible for some other reason?
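If that can actually happen, one way to avoid it might be to advance the
destination by bytes rather than by whole descriptors. Rough, untested
sketch ("dst" is a new local I'm introducing here, and I'm assuming the
rest of the loop advances addr and decrements len by read_len, as the
quoted lines suggest):

char *dst = (char *)desc;

while (len) {
        read_len = len;
        orig_desc = vu_gpa_to_va(dev, &read_len, addr);
        if (!orig_desc)
                return -1;

        memcpy(dst, orig_desc, read_len);
        dst += read_len;
        addr += read_len;
        len -= read_len;
}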
--
Stefano
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v4 3/4] vhost-user: introduce vhost-user API
2024-09-12 12:41 ` Laurent Vivier
@ 2024-09-12 13:40 ` Stefano Brivio
0 siblings, 0 replies; 15+ messages in thread
From: Stefano Brivio @ 2024-09-12 13:40 UTC (permalink / raw)
To: Laurent Vivier; +Cc: passt-dev
On Thu, 12 Sep 2024 14:41:53 +0200
Laurent Vivier <lvivier@redhat.com> wrote:
> On 10/09/2024 17:47, Stefano Brivio wrote:
> > Nits and a couple of questions only:
> >
> > On Fri, 6 Sep 2024 18:04:48 +0200
> > Laurent Vivier <lvivier@redhat.com> wrote:
> >
> >> Add vhost_user.c and vhost_user.h that define the functions needed
> >> to implement vhost-user backend.
> >>
> >> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
> >> ---
> >> Makefile | 4 +-
> >> iov.c | 1 -
> >> vhost_user.c | 1265 ++++++++++++++++++++++++++++++++++++++++++++++++++
> >> vhost_user.h | 203 ++++++++
> >> virtio.c | 5 -
> >> virtio.h | 2 +-
> >> 6 files changed, 1471 insertions(+), 9 deletions(-)
> >> create mode 100644 vhost_user.c
> >> create mode 100644 vhost_user.h
> ...
> >> diff --git a/vhost_user.c b/vhost_user.c
> >> new file mode 100644
> >> index 000000000000..6008a8adc967
> >> --- /dev/null
> >> +++ b/vhost_user.c
> ...
> >> +/**
> >> + * vu_wait_queue() - wait for new free entries in the virtqueue
> >> + * @vq: virtqueue to wait on
> >> + */
> >> +static int vu_wait_queue(const struct vu_virtq *vq)
> >> +{
> >> + eventfd_t kick_data;
> >> + ssize_t rc;
> >> + int status;
> >> +
> >> + /* wait for the kernel to put new entries in the queue */
> >> + status = fcntl(vq->kick_fd, F_GETFL);
> >> + if (status == -1)
> >> + return -1;
> >
> > Same as on v3 (I see you changed this below, but not here): if you
> > don't use status later, you can omit storing it.
>
> We need status with F_SETFL below:
Oops, sorry, of course!
> >> +
> >> + if (fcntl(vq->kick_fd, F_SETFL, status & ~O_NONBLOCK))
> >> + return -1;
> >> +
> >> + rc = eventfd_read(vq->kick_fd, &kick_data);
> >> +
> >> + if (fcntl(vq->kick_fd, F_SETFL, status))
> >> + return -1;
> >> +
> >> + if (rc == -1)
> >> + return -1;
> >> +
> >> + return 0;
> >> +}
> ...
> >> +/**
> >> + * vu_handle_tx() - Receive data from the TX virtqueue
> >> + * @vdev: vhost-user device
> >> + * @index: index of the virtqueue
> >> + * @now: Current timestamp
> >> + */
> >> +static void vu_handle_tx(struct vu_dev *vdev, int index,
> >> + const struct timespec *now)
> >> +{
> >> + struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
> >> + struct iovec out_sg[VIRTQUEUE_MAX_SIZE];
> >> + struct vu_virtq *vq = &vdev->vq[index];
> >> + int hdrlen = vdev->hdrlen;
> >> + int out_sg_count;
> >> + int count;
> >> +
> >
> > Excess newline (same as v3).
>
> Done.
>
> >
> >> +
> >> + if (!VHOST_USER_IS_QUEUE_TX(index)) {
> >> + debug("vhost-user: index %d is not a TX queue", index);
> >> + return;
> >> + }
> >> +
> >> + tap_flush_pools();
> >> +
> >> + count = 0;
> >> + out_sg_count = 0;
> >> + while (count < VIRTQUEUE_MAX_SIZE) {
> >
> > So, I see that this is limited to 1024 iterations now (it was limited
> > also earlier, but I didn't realise that).
> >
> > If we loop at most VIRTQUEUE_MAX_SIZE times, that means, I guess, that
> > while we're popping elements, the queue can't be written to, correct?
>
> No, I think the queue can be read from and written to at the same time.
>
> > Or it can be written to, but we'll get an additional kick after
> > vu_queue_notify() if that happens?
>
> I could check the protocol and the code, but I think it should work like that.
Well, okay, it should be obvious enough.
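For reference, here is a minimal sketch of the pattern described above, assuming a
hypothetical epoll kick callback; the name vu_kick_cb_sketch() is made up, only
vu_handle_tx(), kick_fd, the vq array and VIRTQUEUE_MAX_SIZE come from the patch, and this
is an illustration of the reasoning rather than the code under review:

	#include <sys/eventfd.h>

	/* Hypothetical kick handler: one bounded pass per wakeup.  Buffers
	 * the guest makes available during or after the pass raise a new
	 * kick on kick_fd, so the handler simply runs again and nothing is
	 * lost by capping a single pass at VIRTQUEUE_MAX_SIZE elements.
	 */
	static void vu_kick_cb_sketch(struct vu_dev *vdev, int index,
				      const struct timespec *now)
	{
		eventfd_t kick_data;

		/* One read drains any number of pending kicks for this queue */
		if (eventfd_read(vdev->vq[index].kick_fd, &kick_data) == -1)
			return;

		/* Pop and process at most VIRTQUEUE_MAX_SIZE elements */
		vu_handle_tx(vdev, index, now);
	}

If the guest adds descriptors between the eventfd_read() and the end of the pass, either
the pass already picks them up or the next kick wakes the handler again, which is why one
pass never needs to cover more than a ring's worth of elements.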
--
Stefano
* Re: [PATCH v4 2/4] vhost-user: introduce virtio API
2024-09-12 13:36 ` Stefano Brivio
@ 2024-09-12 14:03 ` Laurent Vivier
2024-09-12 14:08 ` Stefano Brivio
0 siblings, 1 reply; 15+ messages in thread
From: Laurent Vivier @ 2024-09-12 14:03 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-dev
On 12/09/2024 15:36, Stefano Brivio wrote:
> On Thu, 12 Sep 2024 13:23:58 +0200
> Laurent Vivier <lvivier@redhat.com> wrote:
>
>> On 10/09/2024 17:47, Stefano Brivio wrote:
>>>> +
>>>> +/**
>>>> + * virtqueue_read_indirect_desc() - Copy virtio ring descriptors from guest
>>>> + * memory
>>>> + * @dev: Vhost-user device
>>>> + * @desc: Destination address to copy the descriptors to
>>>> + * @addr: Guest memory address to copy from
>>>> + * @len: Length of memory to copy
>>>> + *
>>>> + * Return: -1 if there is an error, 0 otherwise
>>>> + */
>>>> +static int virtqueue_read_indirect_desc(struct vu_dev *dev, struct vring_desc *desc,
>>>> + uint64_t addr, size_t len)
>>>> +{
>>>> + uint64_t read_len;
>>>> +
>>>> + if (len > (VIRTQUEUE_MAX_SIZE * sizeof(struct vring_desc)))
>>>> + return -1;
>>>> +
>>>> + if (len == 0)
>>>> + return -1;
>>>> +
>>>> + while (len) {
>>>> + const struct vring_desc *orig_desc;
>>>> +
>>>> + read_len = len;
>>>> + orig_desc = vu_gpa_to_va(dev, &read_len, addr);
>>> In case you missed this in my review of v3 (I'm not sure if it's a
>>> valid concern):
>>>
>>> --
>>> Should we also return if read_len < sizeof(struct vring_desc) after
>>> this call? Can that ever happen, if we pick a particular value of addr
>>> so that it's almost at the end of a region?
>>> --
>>
>> In fact, read_len can be < sizeof(struct vring_desc) after this call, but if orig_desc !=
>> NULL it means we can continue filling the structure from another region.
>
> Right, I see that.
>
>> If there is not enough memory to fill "len" bytes it exits with -1.
>
> ...and this as well. But let's say that read_len is 1 (and struct
> vring_desc is 16 bytes). Then:
>
> memcpy(desc, orig_desc, read_len);
>
> copies one byte
>
> [...]
>
> desc += read_len / sizeof(struct vring_desc);
>
> doesn't increase desc.
>
> At the next iteration with len > 0 and read_len > 0, the memcpy() will
> overwrite that one byte, as we didn't increase desc. Or it's not
> possible for some other reason?
>
We can add a check for that case, but my guess is that the memory region size is a multiple
of the page size (4k or 64k or ...), so I think we will always be able to read 16 bytes (if
we loop on 16-byte reads).
Thanks,
Laurent
* Re: [PATCH v4 4/4] vhost-user: add vhost-user
2024-09-10 15:47 ` Stefano Brivio
@ 2024-09-12 14:05 ` Laurent Vivier
0 siblings, 0 replies; 15+ messages in thread
From: Laurent Vivier @ 2024-09-12 14:05 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-dev
On 10/09/2024 17:47, Stefano Brivio wrote:
> On Fri, 6 Sep 2024 18:04:49 +0200
> Laurent Vivier <lvivier@redhat.com> wrote:
>
>> add virtio and vhost-user functions to connect with QEMU.
>>
>> $ ./passt --vhost-user
>>
>> and
>>
>> # qemu-system-x86_64 ... -m 4G \
>> -object memory-backend-memfd,id=memfd0,share=on,size=4G \
>> -numa node,memdev=memfd0 \
>> -chardev socket,id=chr0,path=/tmp/passt_1.socket \
>> -netdev vhost-user,id=netdev0,chardev=chr0 \
>> -device virtio-net,mac=9a:2b:2c:2d:2e:2f,netdev=netdev0 \
>> ...
>>
>> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
>
> I reviewed it a bit, but it looks like you didn't have time yet to
> address my comments from v3, so I guess I'd better wait with this one.
>
Right, I needed to rewrite some parts because of the unification of the IPv4 and IPv6 sockets.
I'll send v5 soon.
Thanks,
Laurent
* Re: [PATCH v4 2/4] vhost-user: introduce virtio API
2024-09-12 14:03 ` Laurent Vivier
@ 2024-09-12 14:08 ` Stefano Brivio
0 siblings, 0 replies; 15+ messages in thread
From: Stefano Brivio @ 2024-09-12 14:08 UTC (permalink / raw)
To: Laurent Vivier; +Cc: passt-dev
On Thu, 12 Sep 2024 16:03:46 +0200
Laurent Vivier <lvivier@redhat.com> wrote:
> On 12/09/2024 15:36, Stefano Brivio wrote:
> > On Thu, 12 Sep 2024 13:23:58 +0200
> > Laurent Vivier <lvivier@redhat.com> wrote:
> >
> >> On 10/09/2024 17:47, Stefano Brivio wrote:
> >>>> +
> >>>> +/**
> >>>> + * virtqueue_read_indirect_desc() - Copy virtio ring descriptors from guest
> >>>> + * memory
> >>>> + * @dev: Vhost-user device
> >>>> + * @desc: Destination address to copy the descriptors to
> >>>> + * @addr: Guest memory address to copy from
> >>>> + * @len: Length of memory to copy
> >>>> + *
> >>>> + * Return: -1 if there is an error, 0 otherwise
> >>>> + */
> >>>> +static int virtqueue_read_indirect_desc(struct vu_dev *dev, struct vring_desc *desc,
> >>>> + uint64_t addr, size_t len)
> >>>> +{
> >>>> + uint64_t read_len;
> >>>> +
> >>>> + if (len > (VIRTQUEUE_MAX_SIZE * sizeof(struct vring_desc)))
> >>>> + return -1;
> >>>> +
> >>>> + if (len == 0)
> >>>> + return -1;
> >>>> +
> >>>> + while (len) {
> >>>> + const struct vring_desc *orig_desc;
> >>>> +
> >>>> + read_len = len;
> >>>> + orig_desc = vu_gpa_to_va(dev, &read_len, addr);
> >>> In case you missed this in my review of v3 (I'm not sure if it's a
> >>> valid concern):
> >>>
> >>> --
> >>> Should we also return if read_len < sizeof(struct vring_desc) after
> >>> this call? Can that ever happen, if we pick a particular value of addr
> >>> so that it's almost at the end of a region?
> >>> --
> >>
> >> In fact, read_len can be < sizeof(struct vring_desc) after this call, but if orig_desc !=
> >> NULL it means we can continue filling the structure from another region.
> >
> > Right, I see that.
> >
> >> If there is not enough memory to fill "len" bytes it exits with -1.
> >
> > ...and this as well. But let's say that read_len is 1 (and struct
> > vring_desc is 16 bytes). Then:
> >
> > memcpy(desc, orig_desc, read_len);
> >
> > copies one byte
> >
> > [...]
> >
> > desc += read_len / sizeof(struct vring_desc);
> >
> > doesn't increase desc.
> >
> > At the next iteration with len > 0 and read_len > 0, the memcpy() will
> > overwrite that one byte, as we didn't increase desc. Or it's not
> > possible for some other reason?
> >
>
> We can add a check for that case, but my guess is that the memory region size is a multiple
> of the page size (4k or 64k or ...), so I think we will always be able to read 16 bytes (if
> we loop on 16-byte reads).
Ah, okay, it makes sense. I would still suggest adding an explicit
check, so that we don't crash if there's an issue in the hypervisor,
but it's not a strong preference.
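For reference, a minimal sketch of such a check, built around the loop fragments quoted
above; the memcpy()/addr/len bookkeeping between them is not quoted in this thread, so
those lines are reconstructed assumptions rather than the patch's code, and the sketch
also trims read_len down to whole descriptors so a region boundary inside a descriptor
cannot misalign the copy:

	while (len) {
		const struct vring_desc *orig_desc;

		read_len = len;
		orig_desc = vu_gpa_to_va(dev, &read_len, addr);
		if (!orig_desc)
			return -1;

		/* Only copy whole descriptors: refuse a mapping shorter
		 * than one descriptor, and trim longer mappings to a
		 * multiple of the descriptor size, so that a short copy
		 * can never be overwritten or misaligned by the next
		 * iteration (desc only advances by whole descriptors).
		 */
		if (read_len < sizeof(struct vring_desc))
			return -1;
		read_len -= read_len % sizeof(struct vring_desc);

		memcpy(desc, orig_desc, read_len);
		desc += read_len / sizeof(struct vring_desc);
		addr += read_len;
		len -= read_len;
	}

With region sizes that are multiples of the page size, as assumed above, the extra branch
never fires; it only turns a misbehaving hypervisor into a clean -1 instead of silent
corruption of the copied descriptors.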
--
Stefano
end of thread
Thread overview: 15+ messages
2024-09-06 16:04 [PATCH v4 0/4] Add vhost-user support to passt. (part 3) Laurent Vivier
2024-09-06 16:04 ` [PATCH v4 1/4] packet: replace struct desc by struct iovec Laurent Vivier
2024-09-06 16:04 ` [PATCH v4 2/4] vhost-user: introduce virtio API Laurent Vivier
2024-09-10 15:47 ` Stefano Brivio
2024-09-12 11:23 ` Laurent Vivier
2024-09-12 13:36 ` Stefano Brivio
2024-09-12 14:03 ` Laurent Vivier
2024-09-12 14:08 ` Stefano Brivio
2024-09-06 16:04 ` [PATCH v4 3/4] vhost-user: introduce vhost-user API Laurent Vivier
2024-09-10 15:47 ` Stefano Brivio
2024-09-12 12:41 ` Laurent Vivier
2024-09-12 13:40 ` Stefano Brivio
2024-09-06 16:04 ` [PATCH v4 4/4] vhost-user: add vhost-user Laurent Vivier
2024-09-10 15:47 ` Stefano Brivio
2024-09-12 14:05 ` Laurent Vivier
Code repositories for project(s) associated with this public inbox
https://passt.top/passt