From: "Eugenio Pérez" <eperezma@redhat.com>
To: passt-dev@passt.top
Cc: jasowang@redhat.com
Subject: [RFC v2 01/11] tap: implement vhost_call_cb
Date: Wed, 9 Jul 2025 19:47:38 +0200 [thread overview]
Message-ID: <20250709174748.3514693-2-eperezma@redhat.com> (raw)
In-Reply-To: <20250709174748.3514693-1-eperezma@redhat.com>
Implement the rx side of vhost-kernel: the namespace sends packets
through its tap device, and the kernel passes them to pasta using a
virtqueue. The virtqueue is built on a virtual ring (vring), which
includes:
* A descriptor table (buffer info)
* An available ring (buffers ready for the kernel)
* A used ring (buffers that the kernel has filled)
The descriptor table holds an array of buffers defined by address and
length. The kernel writes the packets transmitted by the namespace into
these buffers. The number of descriptors (vq size) is set by
VHOST_NDESCS. Pasta fills this table using pkt_buf, splitting it
evenly across all descriptors. This table is read-only for the kernel.
The available ring is where pasta marks which buffers the kernel can
use. It is read-only for the kernel. It consists of a ring[] array of
descriptor indexes and an avail->idx index. Pasta increments avail->idx
whenever new buffers are added; the ring slot a buffer occupies is its
index modulo the virtqueue size. As pasta writes the rx buffers
sequentially, ring[] is always [0, 1, 2, ...] and only avail->idx moves
when new buffers become available to the kernel. avail->idx can be
incremented by more than one at a time.
Pasta also notifies the kernel of new available buffers by writing to
the kick eventfd.
Once the kernel has written a frame into a descriptor, it writes that
descriptor's index into used_ring->ring[] and increments used_ring->idx
accordingly. As with avail->idx, the kernel may increment it by more
than one at a time. Pasta gets a notification on the call eventfd, so
it is added to the epoll context. Pasta assumes buffers are used in
order; QEMU makes the same assumption in its virtio-net migration code,
so this is safe.
Now, vhost-kernel addresses the virtqueues and the buffers by *guest*
physical address (GPA), not host virtual address (HVA). QEMU conveys
these translations to the kernel through memory regions. Since pasta
has no GPAs, just send memory regions describing a 1:1 GPA-to-HVA
translation.
TODO: Evaluate if we can reuse the tap fd code instead of making a new
epoll event type.
TODO: Split a new file for vhost (Stefano)
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
RFC v2:
* Need to integrate "now" parameter in tap_add_packet and replace
TAP_MSGS to TAP_MSGS{4,6}.
* Actually refill rx queue at the end of operation.
* Explain virtqueues and memory regions theory of operation.
* Extract vhost_kick so it can be reused by tx.
* Add macro for VHOST regions.
* Only register call_cb on the rx queue, as tx calls are handled only
  when needed.
* Renamed vhost_call_cb to tap_vhost_input (David)
* Move inuse and last_used_idx out of the vhost tx function into a
  writable struct. The context is read-only for the vhost tx code path.
* Changed from inuse to num_free tracking, more aligned with the kernel
  driver.
* Always use the same tap_pool instead of two separate pools for v4
  and v6.
* Space between (struct vhost_memory_region) {.XXX = ...}. (Stefano)
* Expand comments about memory region size (David)
* Add some TODOs.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
epoll_type.h | 2 +
passt.c | 7 +-
passt.h | 10 +-
tap.c | 311 ++++++++++++++++++++++++++++++++++++++++++++++++++-
tap.h | 8 ++
5 files changed, 335 insertions(+), 3 deletions(-)
diff --git a/epoll_type.h b/epoll_type.h
index 12ac59b..0371c14 100644
--- a/epoll_type.h
+++ b/epoll_type.h
@@ -44,6 +44,8 @@ enum epoll_type {
EPOLL_TYPE_REPAIR_LISTEN,
/* TCP_REPAIR helper socket */
EPOLL_TYPE_REPAIR,
+ /* vhost-kernel call socket */
+ EPOLL_TYPE_VHOST_CALL,
EPOLL_NUM_TYPES,
};
diff --git a/passt.c b/passt.c
index 388d10f..0f2659c 100644
--- a/passt.c
+++ b/passt.c
@@ -79,6 +79,7 @@ char *epoll_type_str[] = {
[EPOLL_TYPE_VHOST_KICK] = "vhost-user kick socket",
[EPOLL_TYPE_REPAIR_LISTEN] = "TCP_REPAIR helper listening socket",
[EPOLL_TYPE_REPAIR] = "TCP_REPAIR helper socket",
+ [EPOLL_TYPE_VHOST_CALL] = "vhost-kernel call socket",
};
static_assert(ARRAY_SIZE(epoll_type_str) == EPOLL_NUM_TYPES,
"epoll_type_str[] doesn't match enum epoll_type");
@@ -310,7 +311,8 @@ loop:
switch (ref.type) {
case EPOLL_TYPE_TAP_PASTA:
- tap_handler_pasta(&c, eventmask, &now);
+		/* TODO: Find a better way, maybe reuse the fd. */
+		/* tap_handler_pasta(&c, eventmask, &now); */
break;
case EPOLL_TYPE_TAP_PASST:
tap_handler_passt(&c, eventmask, &now);
@@ -357,6 +359,9 @@ loop:
case EPOLL_TYPE_REPAIR:
repair_handler(&c, eventmask);
break;
+ case EPOLL_TYPE_VHOST_CALL:
+ tap_vhost_input(&c, ref, &now);
+ break;
default:
/* Can't happen */
ASSERT(0);
diff --git a/passt.h b/passt.h
index 8693794..7bb86c4 100644
--- a/passt.h
+++ b/passt.h
@@ -45,7 +45,7 @@ union epoll_ref;
* @icmp: ICMP-specific reference part
* @data: Data handled by protocol handlers
* @nsdir_fd: netns dirfd for fallback timer checking if namespace is gone
- * @queue: vhost-user queue index for this fd
+ * @queue: vhost queue index for this fd
* @u64: Opaque reference for epoll_ctl() and epoll_wait()
*/
union epoll_ref {
@@ -269,11 +269,14 @@ struct ctx {
int fd_tap;
int fd_repair_listen;
int fd_repair;
+ /* TODO document all added fields */
+ int fd_vhost;
unsigned char our_tap_mac[ETH_ALEN];
unsigned char guest_mac[ETH_ALEN];
uint16_t mtu;
uint64_t hash_secret[2];
+ uint64_t virtio_features;
int ifi4;
struct ip4_ctx ip4;
@@ -297,6 +300,11 @@ struct ctx {
int no_icmp;
struct icmp_ctx icmp;
+ struct {
+ int kick_fd;
+ int call_fd;
+ } vq[2];
+
int no_dns;
int no_dns_search;
int no_dhcp_dns;
diff --git a/tap.c b/tap.c
index 6db5d88..e4a3822 100644
--- a/tap.c
+++ b/tap.c
@@ -31,6 +31,7 @@
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
+#include <sys/eventfd.h>
#include <sys/uio.h>
#include <stdbool.h>
#include <stdlib.h>
@@ -101,6 +102,51 @@ static PACKET_POOL_NOINIT(pool_tap6, TAP_MSGS_IP6, pkt_buf);
#define TAP_SEQS 128 /* Different L4 tuples in one batch */
#define FRAGMENT_MSG_RATE 10 /* # seconds between fragment warnings */
+#define VHOST_VIRTIO 0xAF
+#define VHOST_GET_FEATURES _IOR(VHOST_VIRTIO, 0x00, __u64)
+#define VHOST_SET_FEATURES _IOW(VHOST_VIRTIO, 0x00, __u64)
+#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)
+#define VHOST_SET_MEM_TABLE _IOW(VHOST_VIRTIO, 0x03, struct vhost_memory)
+#define VHOST_SET_VRING_NUM _IOW(VHOST_VIRTIO, 0x10, struct vhost_vring_state)
+#define VHOST_SET_VRING_ADDR _IOW(VHOST_VIRTIO, 0x11, struct vhost_vring_addr)
+#define VHOST_SET_VRING_KICK _IOW(VHOST_VIRTIO, 0x20, struct vhost_vring_file)
+#define VHOST_SET_VRING_CALL _IOW(VHOST_VIRTIO, 0x21, struct vhost_vring_file)
+#define VHOST_SET_VRING_ERR _IOW(VHOST_VIRTIO, 0x22, struct vhost_vring_file)
+#define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
+#define VHOST_NET_SET_BACKEND _IOW(VHOST_VIRTIO, 0x30, struct vhost_vring_file)
+
+#define VHOST_NDESCS (PKT_BUF_BYTES / 65520)
+static_assert(!(VHOST_NDESCS & (VHOST_NDESCS - 1)),
+	      "Number of vhost descs must be a power of two per the virtio spec");
+static struct {
+ /* Number of free descriptors */
+ uint16_t num_free;
+
+ /* Last used idx processed */
+ uint16_t last_used_idx;
+} vqs[2];
+
+static struct vring_desc vring_desc[2][VHOST_NDESCS] __attribute__((aligned(PAGE_SIZE)));
+static union {
+ struct vring_avail avail;
+ char buf[offsetof(struct vring_avail, ring[VHOST_NDESCS])];
+} vring_avail_0 __attribute__((aligned(PAGE_SIZE))), vring_avail_1 __attribute__((aligned(PAGE_SIZE)));
+static union {
+ struct vring_used used;
+ char buf[offsetof(struct vring_used, ring[VHOST_NDESCS])];
+} vring_used_0 __attribute__((aligned(PAGE_SIZE))), vring_used_1 __attribute__((aligned(PAGE_SIZE)));
+
+/* descriptor table region + 2 rings * 2 vqs + the shared pkt buffer */
+#define N_VHOST_REGIONS 6
+union {
+ struct vhost_memory mem;
+ char buf[offsetof(struct vhost_memory, regions[N_VHOST_REGIONS])];
+} vhost_memory = {
+ .mem = {
+ .nregions = N_VHOST_REGIONS,
+ },
+};
+
/**
* tap_l2_max_len() - Maximum frame size (including L2 header) for current mode
* @c: Execution context
@@ -399,6 +445,18 @@ void tap_icmp6_send(const struct ctx *c,
tap_send_single(c, buf, l4len + ((char *)icmp6h - buf));
}
+static void vhost_kick(struct vring_used *used, int kick_fd) {
+ /* We need to expose available array entries before checking avail
+ * event.
+ *
+ * TODO: Does eventfd_write already do this?
+ */
+ smp_mb();
+
+ if (!(used->flags & VRING_USED_F_NO_NOTIFY))
+ eventfd_write(kick_fd, 1);
+}
+
/**
* tap_send_frames_pasta() - Send multiple frames to the pasta tap
* @c: Execution context
@@ -1386,6 +1444,89 @@ void tap_listen_handler(struct ctx *c, uint32_t events)
tap_start_connection(c);
}
+static void *virtqueue_get_rx_buf(unsigned qid, unsigned *len)
+{
+ struct vring_used *used = !qid ? &vring_used_0.used : &vring_used_1.used;
+ uint32_t i;
+ uint16_t used_idx, last_used;
+
+ /* TODO think if this has races with previous eventfd_read */
+ /* TODO we could improve performance with a shadow_used_idx */
+ used_idx = le16toh(used->idx);
+
+ smp_rmb();
+
+	if (used_idx == vqs[qid].last_used_idx) {
+		*len = 0;
+		return NULL;
+	}
+
+	last_used = vqs[qid].last_used_idx % VHOST_NDESCS;
+	i = le32toh(used->ring[last_used].id);
+	*len = le32toh(used->ring[last_used].len);
+
+	/* Make sure the kernel is consuming the descriptors in order */
+	if (i != last_used) {
+		die("vhost: out-of-order id %u at used position %u", i, last_used);
+		return NULL;
+	}
+
+	if (*len > PKT_BUF_BYTES / VHOST_NDESCS) {
+		warn("vhost: id %u len %u > %zu", i, *len, PKT_BUF_BYTES / VHOST_NDESCS);
+		return NULL;
+	}
+
+	/* TODO check if the id is valid and it has not been double used */
+	vqs[qid].last_used_idx++;
+	vqs[qid].num_free++;
+	return pkt_buf + i * (PKT_BUF_BYTES / VHOST_NDESCS);
+}
+
+/* TODO this assumes the kernel consumes descriptors in order */
+static void rx_pkt_refill(struct ctx *c)
+{
+ /* TODO: tune this threshold */
+ if (!vqs[0].num_free)
+ return;
+
+ vring_avail_0.avail.idx += vqs[0].num_free;
+ vqs[0].num_free = 0;
+ vhost_kick(&vring_used_0.used, c->vq[0].kick_fd);
+}
+
+void tap_vhost_input(struct ctx *c, union epoll_ref ref, const struct timespec *now)
+{
+ eventfd_read(ref.fd, (eventfd_t[]){ 0 });
+
+ tap_flush_pools();
+
+ while (true) {
+ struct virtio_net_hdr_mrg_rxbuf *hdr;
+ unsigned len;
+
+ hdr = virtqueue_get_rx_buf(ref.queue, &len);
+ if (!hdr)
+ break;
+
+ if (len < sizeof(*hdr)) {
+ warn("vhost: invalid len %u", len);
+ continue;
+ }
+
+ /* TODO this will break from this moment */
+ if (hdr->num_buffers != 1) {
+ warn("vhost: Too many buffers %u, %zu bytes should be enough for everybody!", hdr->num_buffers, PKT_BUF_BYTES/VHOST_NDESCS);
+ continue;
+ }
+
+ /* TODO fix the v6 pool to support ipv6 */
+ tap_add_packet(c, len - sizeof(*hdr), (void *)(hdr+1), now);
+ }
+
+ tap_handler(c, now);
+ rx_pkt_refill(c);
+}
+
/**
* tap_ns_tun() - Get tuntap fd in namespace
* @c: Execution context
@@ -1396,10 +1537,14 @@ void tap_listen_handler(struct ctx *c, uint32_t events)
*/
static int tap_ns_tun(void *arg)
{
+	/* TODO check whether vhost actually supports VIRTIO_NET_F_MRG_RXBUF and VHOST_NET_F_VIRTIO_NET_HDR */
+ static const uint64_t features =
+ (1ULL << VIRTIO_F_VERSION_1) | (1ULL << VIRTIO_NET_F_MRG_RXBUF) | (1ULL << VHOST_NET_F_VIRTIO_NET_HDR);
struct ifreq ifr = { .ifr_flags = IFF_TAP | IFF_NO_PI };
int flags = O_RDWR | O_NONBLOCK | O_CLOEXEC;
struct ctx *c = (struct ctx *)arg;
- int fd, rc;
+ unsigned i;
+ int fd, vhost_fd, rc;
c->fd_tap = -1;
memcpy(ifr.ifr_name, c->pasta_ifn, IFNAMSIZ);
@@ -1409,6 +1554,143 @@ static int tap_ns_tun(void *arg)
if (fd < 0)
die_perror("Failed to open() /dev/net/tun");
+ vhost_fd = open("/dev/vhost-net", flags);
+ if (vhost_fd < 0)
+ die_perror("Failed to open() /dev/vhost-net");
+
+ rc = ioctl(vhost_fd, VHOST_SET_OWNER, NULL);
+ if (rc < 0)
+ die_perror("VHOST_SET_OWNER ioctl on /dev/vhost-net failed");
+
+ rc = ioctl(vhost_fd, VHOST_GET_FEATURES, &c->virtio_features);
+ if (rc < 0)
+ die_perror("VHOST_GET_FEATURES ioctl on /dev/vhost-net failed");
+
+	/* TODO inform more explicitly */
+ fprintf(stderr, "vhost features: %lx\n", c->virtio_features);
+ fprintf(stderr, "req features: %lx\n", features);
+ c->virtio_features &= features;
+ if (c->virtio_features != features)
+ die("vhost does not support required features");
+
+ for (i = 0; i < ARRAY_SIZE(c->vq); i++) {
+ struct vhost_vring_file file = {
+ .index = i,
+ };
+ union epoll_ref ref = { .type = EPOLL_TYPE_VHOST_CALL,
+ .queue = i };
+ struct epoll_event ev;
+
+ file.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ ref.fd = file.fd;
+ if (file.fd < 0)
+ die_perror("Failed to create call eventfd");
+
+ rc = ioctl(vhost_fd, VHOST_SET_VRING_CALL, &file);
+ if (rc < 0)
+ die_perror(
+ "VHOST_SET_VRING_CALL ioctl on /dev/vhost-net failed");
+
+ ev = (struct epoll_event){ .data.u64 = ref.u64, .events = EPOLLIN };
+ if (i == 0) {
+ rc = epoll_ctl(c->epollfd, EPOLL_CTL_ADD, ref.fd, &ev);
+ if (rc < 0)
+ die_perror("Failed to add call eventfd to epoll");
+ }
+ c->vq[i].call_fd = file.fd;
+ }
+
+ /*
+	 * Unlike most C-style APIs, userspace_addr + memory_size is also
+	 * accessible by the kernel. Include a -1 to adjust.
+ */
+#define VHOST_MEMORY_REGION_PTR(addr, size) \
+ (struct vhost_memory_region) { \
+ .guest_phys_addr = (uintptr_t)addr, \
+ .memory_size = size - 1, \
+ .userspace_addr = (uintptr_t)addr, \
+ }
+#define VHOST_MEMORY_REGION(elem) VHOST_MEMORY_REGION_PTR(&elem, sizeof(elem))
+
+ /* 1:1 translation */
+ vhost_memory.mem.regions[0] = VHOST_MEMORY_REGION(vring_desc);
+ vhost_memory.mem.regions[1] = VHOST_MEMORY_REGION(vring_avail_0);
+ vhost_memory.mem.regions[2] = VHOST_MEMORY_REGION(vring_avail_1);
+ vhost_memory.mem.regions[3] = VHOST_MEMORY_REGION(vring_used_0);
+ vhost_memory.mem.regions[4] = VHOST_MEMORY_REGION(vring_used_1);
+ vhost_memory.mem.regions[5] = VHOST_MEMORY_REGION(pkt_buf);
+	static_assert(5 < N_VHOST_REGIONS, "vhost region index out of range");
+#undef VHOST_MEMORY_REGION
+#undef VHOST_MEMORY_REGION_PTR
+
+ rc = ioctl(vhost_fd, VHOST_SET_MEM_TABLE, &vhost_memory.mem);
+ if (rc < 0)
+ die_perror(
+ "VHOST_SET_MEM_TABLE ioctl on /dev/vhost-net failed");
+
+ /* TODO: probably it increases RX perf */
+#if 0
+ struct ifreq ifr;
+ memset(&ifr, 0, sizeof(ifr));
+
+ if (ioctl(fd, TUNGETIFF, &ifr) != 0)
+ die_perror("Unable to query TUNGETIFF on FD %d", fd);
+ }
+
+ if (ifr.ifr_flags & IFF_VNET_HDR)
+ net->dev.features &= ~(1ULL << VIRTIO_NET_F_MRG_RXBUF);
+#endif
+ rc = ioctl(vhost_fd, VHOST_SET_FEATURES, &c->virtio_features);
+ if (rc < 0)
+ die_perror("VHOST_SET_FEATURES ioctl on /dev/vhost-net failed");
+
+	/* Configure each queue in its own loop to follow the exact order QEMU uses */
+ for (i = 0; i < ARRAY_SIZE(c->vq); i++) {
+ struct vhost_vring_addr addr = {
+ .index = i,
+ .desc_user_addr = (unsigned long)vring_desc[i],
+ .avail_user_addr = i == 0 ? (unsigned long)&vring_avail_0 :
+ (unsigned long)&vring_avail_1,
+ .used_user_addr = i == 0 ? (unsigned long)&vring_used_0 :
+ (unsigned long)&vring_used_1,
+ /* GPA addr */
+ .log_guest_addr = i == 0 ? (unsigned long)&vring_used_0 :
+ (unsigned long)&vring_used_1,
+ };
+ struct vhost_vring_state state = {
+ .index = i,
+ .num = VHOST_NDESCS,
+ };
+ struct vhost_vring_file file = {
+ .index = i,
+ };
+
+ rc = ioctl(vhost_fd, VHOST_SET_VRING_NUM, &state);
+ if (rc < 0)
+ die_perror(
+ "VHOST_SET_VRING_NUM ioctl on /dev/vhost-net failed");
+
+ fprintf(stderr, "qid: %d\n", i);
+ fprintf(stderr, "vhost desc addr: 0x%llx\n", addr.desc_user_addr);
+ fprintf(stderr, "vhost avail addr: 0x%llx\n", addr.avail_user_addr);
+ fprintf(stderr, "vhost used addr: 0x%llx\n", addr.used_user_addr);
+ rc = ioctl(vhost_fd, VHOST_SET_VRING_ADDR, &addr);
+ if (rc < 0)
+ die_perror(
+ "VHOST_SET_VRING_ADDR ioctl on /dev/vhost-net failed");
+
+ file.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (file.fd < 0)
+ die_perror("Failed to create kick eventfd");
+ rc = ioctl(vhost_fd, VHOST_SET_VRING_KICK, &file);
+ if (rc < 0)
+ die_perror(
+ "VHOST_SET_VRING_KICK ioctl on /dev/vhost-net failed");
+ c->vq[i].kick_fd = file.fd;
+
+ vqs[i].num_free = VHOST_NDESCS;
+ }
+
rc = ioctl(fd, (int)TUNSETIFF, &ifr);
if (rc < 0)
die_perror("TUNSETIFF ioctl on /dev/net/tun failed");
@@ -1416,7 +1698,34 @@ static int tap_ns_tun(void *arg)
if (!(c->pasta_ifi = if_nametoindex(c->pasta_ifn)))
die("Tap device opened but no network interface found");
+ for (i = 0; i < VHOST_NDESCS; ++i) {
+ vring_desc[0][i].addr = (uintptr_t)pkt_buf + i * (PKT_BUF_BYTES/VHOST_NDESCS);
+ vring_desc[0][i].len = PKT_BUF_BYTES/VHOST_NDESCS;
+ vring_desc[0][i].flags = VRING_DESC_F_WRITE;
+ }
+ for (i = 0; i < VHOST_NDESCS; ++i) {
+ vring_avail_0.avail.ring[i] = i;
+ }
+
+ rx_pkt_refill(c);
+
+	/* Set the backend of each queue in its own loop to follow the exact order QEMU uses */
+	for (i = 0; i < ARRAY_SIZE(c->vq); i++) {
+ struct vhost_vring_file file = {
+ .index = i,
+ .fd = fd,
+ };
+
+ fprintf(stderr, "qid: %d\n", file.index);
+ fprintf(stderr, "tap fd: %d\n", file.fd);
+ rc = ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &file);
+ if (rc < 0)
+ die_perror(
+ "VHOST_NET_SET_BACKEND ioctl on /dev/vhost-net failed");
+ }
+
c->fd_tap = fd;
+ c->fd_vhost = vhost_fd;
return 0;
}
diff --git a/tap.h b/tap.h
index 6fe3d15..ff8cee5 100644
--- a/tap.h
+++ b/tap.h
@@ -69,6 +69,14 @@ static inline void tap_hdr_update(struct tap_hdr *thdr, size_t l2len)
thdr->vnet_len = htonl(l2len);
}
+/**
+ * tap_vhost_input() - Handler for new data on the socket to hypervisor vq
+ * @c: Execution context
+ * @ref: epoll reference
+ * @now: Current timestamp
+ */
+void tap_vhost_input(struct ctx *c, union epoll_ref ref, const struct timespec *now);
+
unsigned long tap_l2_max_len(const struct ctx *c);
void *tap_push_l2h(const struct ctx *c, void *buf, uint16_t proto);
void *tap_push_ip4h(struct iphdr *ip4h, struct in_addr src,
--
2.50.0