From mboxrd@z Thu Jan 1 00:00:00 1970 Date: Tue, 15 Oct 2024 14:23:11 +1100 From: David Gibson To: Laurent Vivier Subject: Re: [PATCH v8 7/8] vhost-user: add vhost-user Message-ID: References: <20241010122903.1188992-1-lvivier@redhat.com> <20241010122903.1188992-8-lvivier@redhat.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha256; protocol="application/pgp-signature"; boundary="D8Xvp+ufqIIx3Vuq" Content-Disposition: inline In-Reply-To: <20241010122903.1188992-8-lvivier@redhat.com> X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation; nonmember-moderation; administrivia; implicit-dest; max-recipients; max-size; 
news-moderation; no-subject; digests; suspicious-header CC: passt-dev@passt.top X-Mailman-Version: 3.3.8 Precedence: list List-Id: Development discussion and patches for passt Archived-At: Archived-At: List-Archive: List-Archive: List-Help: List-Owner: List-Post: List-Subscribe: List-Unsubscribe: --D8Xvp+ufqIIx3Vuq Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Oct 10, 2024 at 02:29:01PM +0200, Laurent Vivier wrote: > add virtio and vhost-user functions to connect with QEMU. >=20 > $ ./passt --vhost-user >=20 > and >=20 > # qemu-system-x86_64 ... -m 4G \ > -object memory-backend-memfd,id=3Dmemfd0,share=3Don,size=3D4G \ > -numa node,memdev=3Dmemfd0 \ > -chardev socket,id=3Dchr0,path=3D/tmp/passt_1.socket \ > -netdev vhost-user,id=3Dnetdev0,chardev=3Dchr0 \ > -device virtio-net,mac=3D9a:2b:2c:2d:2e:2f,netdev=3Dnetdev0 \ > ... >=20 > Signed-off-by: Laurent Vivier > --- > Makefile | 6 +- > conf.c | 21 ++- > epoll_type.h | 4 + > iov.c | 1 - > isolation.c | 15 +- > packet.c | 11 ++ > packet.h | 8 +- > passt.1 | 10 +- > passt.c | 9 + > passt.h | 6 + > pcap.c | 1 - > tap.c | 80 +++++++-- > tap.h | 5 +- > tcp.c | 7 + > tcp_vu.c | 476 +++++++++++++++++++++++++++++++++++++++++++++++++++ > tcp_vu.h | 12 ++ > udp.c | 10 ++ > udp_vu.c | 336 ++++++++++++++++++++++++++++++++++++ > udp_vu.h | 13 ++ > vhost_user.c | 37 ++-- > vhost_user.h | 4 +- > virtio.c | 5 - > vu_common.c | 385 +++++++++++++++++++++++++++++++++++++++++ > vu_common.h | 47 +++++ > 24 files changed, 1454 insertions(+), 55 deletions(-) > create mode 100644 tcp_vu.c > create mode 100644 tcp_vu.h > create mode 100644 udp_vu.c > create mode 100644 udp_vu.h > create mode 100644 vu_common.c > create mode 100644 vu_common.h >=20 > diff --git a/Makefile b/Makefile > index 0e8ed60a0da1..1e8910dda1f4 100644 > --- a/Makefile > +++ b/Makefile > @@ -54,7 +54,8 @@ FLAGS +=3D -DDUAL_STACK_SOCKETS=3D$(DUAL_STACK_SOCKETS) > PASST_SRCS =3D arch.c arp.c 
checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd= =2Ec \ > icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \ > ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \ > - tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c vhost_user.c virtio.c > + tcp_buf.c tcp_splice.c tcp_vu.c udp.c udp_flow.c udp_vu.c util.c \ > + vhost_user.c virtio.c vu_common.c > QRAP_SRCS =3D qrap.c > SRCS =3D $(PASST_SRCS) $(QRAP_SRCS) > =20 > @@ -64,7 +65,8 @@ PASST_HEADERS =3D arch.h arp.h checksum.h conf.h dhcp.h= dhcpv6.h flow.h fwd.h \ > flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \ > lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \ > siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \ > - udp.h udp_flow.h util.h vhost_user.h virtio.h > + tcp_vu.h udp.h udp_flow.h udp_internal.h udp_vu.h util.h vhost_user.h \ > + virtio.h vu_common.h > HEADERS =3D $(PASST_HEADERS) seccomp.h > =20 > C :=3D \#include \nstruct tcp_info x =3D { .tcpi_snd_wnd = =3D 0 }; > diff --git a/conf.c b/conf.c > index c63101970155..29d6e41f5770 100644 > --- a/conf.c > +++ b/conf.c > @@ -45,6 +45,7 @@ > #include "lineread.h" > #include "isolation.h" > #include "log.h" > +#include "vhost_user.h" > =20 > /** > * next_chunk - Return the next piece of a string delimited by a charact= er > @@ -762,9 +763,14 @@ static void usage(const char *name, FILE *f, int sta= tus) > " default: same interface name as external one\n"); > } else { > fprintf(f, > - " -s, --socket PATH UNIX domain socket path\n" > + " -s, --socket, --socket-path PATH UNIX domain socket path\n" > " default: probe free path starting from " > UNIX_SOCK_PATH "\n", 1); > + fprintf(f, > + " --vhost-user Enable vhost-user mode\n" > + " UNIX domain socket is provided by -s option\n" > + " --print-capabilities print back-end capabilities in JSON format,\n" > + " only meaningful for vhost-user mode\n"); > } > =20 > fprintf(f, > @@ -1290,6 +1296,10 @@ void conf(struct ctx *c, int 
argc, char **argv) > {"map-host-loopback", required_argument, NULL, 21 }, > {"map-guest-addr", required_argument, NULL, 22 }, > {"dns-host", required_argument, NULL, 24 }, > + {"vhost-user", no_argument, NULL, 25 }, > + /* vhost-user backend program convention */ > + {"print-capabilities", no_argument, NULL, 26 }, > + {"socket-path", required_argument, NULL, 's' }, > { 0 }, > }; > const char *logname =3D (c->mode =3D=3D MODE_PASTA) ? "pasta" : "passt"; > @@ -1478,6 +1488,15 @@ void conf(struct ctx *c, int argc, char **argv) > break; > =20 > die("Invalid host nameserver address: %s", optarg); > + case 25: > + if (c->mode =3D=3D MODE_PASTA) { > + err("--vhost-user is for passt mode only"); > + usage(argv[0], stdout, EXIT_SUCCESS); > + } > + c->mode =3D MODE_VU; > + break; > + case 26: > + vu_print_capabilities(); > break; > case 'd': > c->debug =3D 1; > diff --git a/epoll_type.h b/epoll_type.h > index 0ad1efa0ccec..f3ef41584757 100644 > --- a/epoll_type.h > +++ b/epoll_type.h > @@ -36,6 +36,10 @@ enum epoll_type { > EPOLL_TYPE_TAP_PASST, > /* socket listening for qemu socket connections */ > EPOLL_TYPE_TAP_LISTEN, > + /* vhost-user command socket */ > + EPOLL_TYPE_VHOST_CMD, > + /* vhost-user kick event socket */ > + EPOLL_TYPE_VHOST_KICK, > =20 > EPOLL_NUM_TYPES, > }; > diff --git a/iov.c b/iov.c > index 3f9e229a305f..3741db21790f 100644 > --- a/iov.c > +++ b/iov.c > @@ -68,7 +68,6 @@ size_t iov_skip_bytes(const struct iovec *iov, size_t n, > * > * Returns: The number of bytes successfully copied. 
> */ > -/* cppcheck-suppress unusedFunction */ > size_t iov_from_buf(const struct iovec *iov, size_t iov_cnt, > size_t offset, const void *buf, size_t bytes) > { > diff --git a/isolation.c b/isolation.c > index 45fba1e68b9d..c2a3c7b7911d 100644 > --- a/isolation.c > +++ b/isolation.c > @@ -379,12 +379,19 @@ void isolate_postfork(const struct ctx *c) > =20 > prctl(PR_SET_DUMPABLE, 0); > =20 > - if (c->mode =3D=3D MODE_PASTA) { > - prog.len =3D (unsigned short)ARRAY_SIZE(filter_pasta); > - prog.filter =3D filter_pasta; > - } else { > + switch (c->mode) { > + case MODE_PASST: > prog.len =3D (unsigned short)ARRAY_SIZE(filter_passt); > prog.filter =3D filter_passt; > + break; > + case MODE_PASTA: > + prog.len =3D (unsigned short)ARRAY_SIZE(filter_pasta); > + prog.filter =3D filter_pasta; > + break; > + case MODE_VU: > + prog.len =3D (unsigned short)ARRAY_SIZE(filter_vu); > + prog.filter =3D filter_vu; > + break; I'd feel more comfortable with a default case that calls die() or ASSERT(). Obviously that shouldn't happen, but not applying any filter doesn't seem like a good way to fail if somehow we get an unexpected mode. 
> } > =20 > if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) || > diff --git a/packet.c b/packet.c > index 37489961a37e..e5a78d079231 100644 > --- a/packet.c > +++ b/packet.c > @@ -36,6 +36,17 @@ > static int packet_check_range(const struct pool *p, size_t offset, size_= t len, > const char *start, const char *func, int line) > { > + if (p->buf_size =3D=3D 0) { > + int ret; > + > + ret =3D vu_packet_check_range((void *)p->buf, offset, len, start); > + > + if (ret =3D=3D -1) > + trace("cannot find region, %s:%i", func, line); > + > + return ret; > + } > + > if (start < p->buf) { > trace("packet start %p before buffer start %p, " > "%s:%i", (void *)start, (void *)p->buf, func, line); > diff --git a/packet.h b/packet.h > index 8377dcf678bb..3f70e949c066 100644 > --- a/packet.h > +++ b/packet.h > @@ -8,8 +8,10 @@ > =20 > /** > * struct pool - Generic pool of packets stored in a buffer > - * @buf: Buffer storing packet descriptors > - * @buf_size: Total size of buffer > + * @buf: Buffer storing packet descriptors, > + * a struct vu_dev_region array for passt vhost-user mode > + * @buf_size: Total size of buffer, > + * 0 for passt vhost-user mode > * @size: Number of usable descriptors for the pool > * @count: Number of used descriptors for the pool > * @pkt: Descriptors: see macros below > @@ -22,6 +24,8 @@ struct pool { > struct iovec pkt[1]; > }; > =20 > +int vu_packet_check_range(void *buf, size_t offset, size_t len, > + const char *start); > void packet_add_do(struct pool *p, size_t len, const char *start, > const char *func, int line); > void *packet_get_do(const struct pool *p, const size_t idx, > diff --git a/passt.1 b/passt.1 > index ef33267e9cd7..96532dd39aa2 100644 > --- a/passt.1 > +++ b/passt.1 > @@ -397,12 +397,20 @@ interface address are configured on a given host in= terface. 
> .SS \fBpasst\fR-only options > =20 > .TP > -.BR \-s ", " \-\-socket " " \fIpath > +.BR \-s ", " \-\-socket-path ", " \-\-socket " " \fIpath > Path for UNIX domain socket used by \fBqemu\fR(1) or \fBqrap\fR(1) to co= nnect to > \fBpasst\fR. > Default is to probe a free socket, not accepting connections, starting f= rom > \fI/tmp/passt_1.socket\fR to \fI/tmp/passt_64.socket\fR. > =20 > +.TP > +.BR \-\-vhost-user > +Enable vhost-user. The vhost-user command socket is provided by \fB--soc= ket\fR. > + > +.TP > +.BR \-\-print-capabilities > +Print back-end capabilities in JSON format, only meaningful for vhost-us= er mode. > + > .TP > .BR \-F ", " \-\-fd " " \fIFD > Pass a pre-opened, connected socket to \fBpasst\fR. Usually the socket i= s opened > diff --git a/passt.c b/passt.c > index 79093ee02d62..2d105e81218d 100644 > --- a/passt.c > +++ b/passt.c > @@ -52,6 +52,7 @@ > #include "arch.h" > #include "log.h" > #include "tcp_splice.h" > +#include "vu_common.h" > =20 > #define EPOLL_EVENTS 8 > =20 > @@ -74,6 +75,8 @@ char *epoll_type_str[] =3D { > [EPOLL_TYPE_TAP_PASTA] =3D "/dev/net/tun device", > [EPOLL_TYPE_TAP_PASST] =3D "connected qemu socket", > [EPOLL_TYPE_TAP_LISTEN] =3D "listening qemu socket", > + [EPOLL_TYPE_VHOST_CMD] =3D "vhost-user command socket", > + [EPOLL_TYPE_VHOST_KICK] =3D "vhost-user kick socket", > }; > static_assert(ARRAY_SIZE(epoll_type_str) =3D=3D EPOLL_NUM_TYPES, > "epoll_type_str[] doesn't match enum epoll_type"); > @@ -360,6 +363,12 @@ loop: > case EPOLL_TYPE_PING: > icmp_sock_handler(&c, ref); > break; > + case EPOLL_TYPE_VHOST_CMD: > + vu_control_handler(c.vdev, c.fd_tap, eventmask); > + break; > + case EPOLL_TYPE_VHOST_KICK: > + vu_kick_cb(c.vdev, ref, &now); > + break; > default: > /* Can't happen */ > ASSERT(0); > diff --git a/passt.h b/passt.h > index 4908ed937dc8..311482d36257 100644 > --- a/passt.h > +++ b/passt.h > @@ -25,6 +25,8 @@ union epoll_ref; > #include "fwd.h" > #include "tcp.h" > #include "udp.h" > +#include "udp_vu.h" 
Why does udp_vu.h need to be included here? > +#include "vhost_user.h" > =20 > /* Default address for our end on the tap interface. Bit 0 of byte 0 mu= st be 0 > * (unicast) and bit 1 of byte 1 must be 1 (locally administered). Othe= rwise > @@ -94,6 +96,7 @@ struct fqdn { > enum passt_modes { > MODE_PASST, > MODE_PASTA, > + MODE_VU, > }; > =20 > /** > @@ -228,6 +231,7 @@ struct ip6_ctx { > * @freebind: Allow binding of non-local addresses for forwarding > * @low_wmem: Low probed net.core.wmem_max > * @low_rmem: Low probed net.core.rmem_max > + * @vdev: vhost-user device > */ > struct ctx { > enum passt_modes mode; > @@ -289,6 +293,8 @@ struct ctx { > =20 > int low_wmem; > int low_rmem; > + > + struct vu_dev *vdev; > }; > =20 > void proto_update_l2_buf(const unsigned char *eth_d, > diff --git a/pcap.c b/pcap.c > index 6ee6cdfd261a..718d6ad61732 100644 > --- a/pcap.c > +++ b/pcap.c > @@ -140,7 +140,6 @@ void pcap_multiple(const struct iovec *iov, size_t fr= ame_parts, unsigned int n, > * @iovcnt: Number of buffers (@iov entries) > * @offset: Offset of the L2 frame within the full data length > */ > -/* cppcheck-suppress unusedFunction */ > void pcap_iov(const struct iovec *iov, size_t iovcnt, size_t offset) > { > struct timespec now; > diff --git a/tap.c b/tap.c > index 4b826fdf7adc..22d19f1833f7 100644 > --- a/tap.c > +++ b/tap.c > @@ -58,6 +58,8 @@ > #include "packet.h" > #include "tap.h" > #include "log.h" > +#include "vhost_user.h" > +#include "vu_common.h" > =20 > /* IPv4 (plus ARP) and IPv6 message batches from tap/guest to IP handler= s */ > static PACKET_POOL_NOINIT(pool_tap4, TAP_MSGS, pkt_buf); > @@ -78,16 +80,22 @@ void tap_send_single(const struct ctx *c, const void = *data, size_t l2len) > struct iovec iov[2]; > size_t iovcnt =3D 0; > =20 > - if (c->mode =3D=3D MODE_PASST) { > + switch (c->mode) { > + case MODE_PASST: > iov[iovcnt] =3D IOV_OF_LVALUE(vnet_len); > iovcnt++; > - } > - > - iov[iovcnt].iov_base =3D (void *)data; > - iov[iovcnt].iov_len =3D 
l2len; > - iovcnt++; > + /* fall through */ > + case MODE_PASTA: > + iov[iovcnt].iov_base =3D (void *)data; > + iov[iovcnt].iov_len =3D l2len; > + iovcnt++; > =20 > - tap_send_frames(c, iov, iovcnt, 1); > + tap_send_frames(c, iov, iovcnt, 1); > + break; > + case MODE_VU: > + vu_send_single(c, data, l2len); > + break; > + } > } > =20 > /** > @@ -414,10 +422,18 @@ size_t tap_send_frames(const struct ctx *c, const s= truct iovec *iov, > if (!nframes) > return 0; > =20 > - if (c->mode =3D=3D MODE_PASTA) > + switch (c->mode) { > + case MODE_PASTA: > m =3D tap_send_frames_pasta(c, iov, bufs_per_frame, nframes); > - else > + break; > + case MODE_PASST: > m =3D tap_send_frames_passt(c, iov, bufs_per_frame, nframes); > + break; > + case MODE_VU: > + /* fall through */ > + default: > + ASSERT(0); > + } > =20 > if (m < nframes) > debug("tap: failed to send %zu frames of %zu", > @@ -976,7 +992,7 @@ void tap_add_packet(struct ctx *c, ssize_t l2len, cha= r *p) > * tap_sock_reset() - Handle closing or failure of connect AF_UNIX socket > * @c: Execution context > */ > -static void tap_sock_reset(struct ctx *c) > +void tap_sock_reset(struct ctx *c) > { > info("Client connection closed%s", c->one_off ? ", exiting" : ""); > =20 > @@ -987,6 +1003,8 @@ static void tap_sock_reset(struct ctx *c) > epoll_ctl(c->epollfd, EPOLL_CTL_DEL, c->fd_tap, NULL); > close(c->fd_tap); > c->fd_tap =3D -1; > + if (c->mode =3D=3D MODE_VU) > + vu_cleanup(c->vdev); > } > =20 > /** > @@ -1205,6 +1223,11 @@ static void tap_backend_show_hints(struct ctx *c) > info("or qrap, for earlier qemu versions:"); > info(" ./qrap 5 kvm ... -net socket,fd=3D5 -net nic,model=3Dvirtio"= ); > break; > + case MODE_VU: > + info("You can start qemu with:"); > + info(" kvm ... 
-chardev socket,id=3Dchr0,path=3D%s -netdev vhost-us= er,id=3Dnetdev0,chardev=3Dchr0 -device virtio-net,netdev=3Dnetdev0 -object = memory-backend-memfd,id=3Dmemfd0,share=3Don,size=3D$RAMSIZE -numa node,memd= ev=3Dmemfd0\n", > + c->sock_path); > + break; > } > } > =20 > @@ -1232,8 +1255,8 @@ static void tap_sock_unix_init(const struct ctx *c) > */ > void tap_listen_handler(struct ctx *c, uint32_t events) > { > - union epoll_ref ref =3D { .type =3D EPOLL_TYPE_TAP_PASST }; > struct epoll_event ev =3D { 0 }; > + union epoll_ref ref =3D { 0 }; > int v =3D INT_MAX / 2; > struct ucred ucred; > socklen_t len; > @@ -1273,6 +1296,10 @@ void tap_listen_handler(struct ctx *c, uint32_t ev= ents) > trace("tap: failed to set SO_SNDBUF to %i", v); > =20 > ref.fd =3D c->fd_tap; > + if (c->mode =3D=3D MODE_VU) > + ref.type =3D EPOLL_TYPE_VHOST_CMD; > + else > + ref.type =3D EPOLL_TYPE_TAP_PASST; > ev.events =3D EPOLLIN | EPOLLRDHUP; > ev.data.u64 =3D ref.u64; > epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap, &ev); > @@ -1339,7 +1366,7 @@ static void tap_sock_tun_init(struct ctx *c) > * @base: Buffer base > * @size Buffer size > */ > -static void tap_sock_update_pool(void *base, size_t size) > +void tap_sock_update_pool(void *base, size_t size) > { > int i; > =20 > @@ -1353,13 +1380,15 @@ static void tap_sock_update_pool(void *base, size= _t size) > } > =20 > /** > - * tap_backend_init() - Create and set up AF_UNIX socket or > - * tuntap file descriptor > + * tap_sock_init() - Create and set up AF_UNIX socket or tuntap file des= criptor I'm guessing you didn't mean to revert the name change in the comment here. 
> * @c: Execution context > */ > void tap_backend_init(struct ctx *c) > { > - tap_sock_update_pool(pkt_buf, sizeof(pkt_buf)); > + if (c->mode =3D=3D MODE_VU) > + tap_sock_update_pool(NULL, 0); > + else > + tap_sock_update_pool(pkt_buf, sizeof(pkt_buf)); > =20 > if (c->fd_tap !=3D -1) { /* Passed as --fd */ > struct epoll_event ev =3D { 0 }; > @@ -1367,10 +1396,17 @@ void tap_backend_init(struct ctx *c) > =20 > ASSERT(c->one_off); > ref.fd =3D c->fd_tap; > - if (c->mode =3D=3D MODE_PASST) > + switch (c->mode) { > + case MODE_PASST: > ref.type =3D EPOLL_TYPE_TAP_PASST; > - else > + break; > + case MODE_PASTA: > ref.type =3D EPOLL_TYPE_TAP_PASTA; > + break; > + case MODE_VU: > + ref.type =3D EPOLL_TYPE_VHOST_CMD; > + break; > + } > =20 > ev.events =3D EPOLLIN | EPOLLRDHUP; > ev.data.u64 =3D ref.u64; > @@ -1378,9 +1414,14 @@ void tap_backend_init(struct ctx *c) > return; > } > =20 > - if (c->mode =3D=3D MODE_PASTA) { > + switch (c->mode) { > + case MODE_PASTA: > tap_sock_tun_init(c); > - } else { > + break; > + case MODE_VU: > + vu_init(c); > + /* fall through */ > + case MODE_PASST: > tap_sock_unix_init(c); > =20 > /* In passt mode, we don't know the guest's MAC address until it > @@ -1388,6 +1429,7 @@ void tap_backend_init(struct ctx *c) > * first packets will reach it. > */ > memset(&c->guest_mac, 0xff, sizeof(c->guest_mac)); > + break; > } > =20 > tap_backend_show_hints(c); > diff --git a/tap.h b/tap.h > index 8728cc5c09c3..dfbd8b9ebd72 100644 > --- a/tap.h > +++ b/tap.h > @@ -40,7 +40,8 @@ static inline struct iovec tap_hdr_iov(const struct ctx= *c, > */ > static inline void tap_hdr_update(struct tap_hdr *thdr, size_t l2len) > { > - thdr->vnet_len =3D htonl(l2len); > + if (thdr) > + thdr->vnet_len =3D htonl(l2len); So.. I do think you could treat the virtio_net_hdr_mrg_rxbuf structure as the "tap" header for vhost-user, though it will require some refactoring. I think that could simplify a bunch of iov juggling further on. 
Obviously, this function would need more parameters: at least ctx for the mode and the number of (payload) buffers. The thdr parameter would need to become a void * or similar too. Note that this function is already conceptually dependent on the mode: for passt it needs to fill in the vnet_len, for pasta it doesn't need to do anything. For now it fills in the vnet_len in both cases just for simplicity because it's harmless though pointless in the pasta case. More comments further down, where I think that approach will simplify things a bit. > } > =20 > void tap_udp4_send(const struct ctx *c, struct in_addr src, in_port_t sp= ort, > @@ -68,6 +69,8 @@ void tap_handler_pasta(struct ctx *c, uint32_t events, > void tap_handler_passt(struct ctx *c, uint32_t events, > const struct timespec *now); > int tap_sock_unix_open(char *sock_path); > +void tap_sock_reset(struct ctx *c); > +void tap_sock_update_pool(void *base, size_t size); > void tap_backend_init(struct ctx *c); > void tap_flush_pools(void); > void tap_handler(struct ctx *c, const struct timespec *now); > diff --git a/tcp.c b/tcp.c > index eae02b1647e3..fd2def0d8a39 100644 > --- a/tcp.c > +++ b/tcp.c > @@ -304,6 +304,7 @@ > #include "flow_table.h" > #include "tcp_internal.h" > #include "tcp_buf.h" > +#include "tcp_vu.h" > =20 > /* MSS rounding: see SET_MSS() */ > #define MSS_DEFAULT 536 > @@ -1328,6 +1329,9 @@ int tcp_prepare_flags(const struct ctx *c, struct t= cp_tap_conn *conn, > static int tcp_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, > int flags) > { > + if (c->mode =3D=3D MODE_VU) > + return tcp_vu_send_flag(c, conn, flags); > + > return tcp_buf_send_flag(c, conn, flags); > } > =20 > @@ -1721,6 +1725,9 @@ static int tcp_sock_consume(const struct tcp_tap_co= nn *conn, uint32_t ack_seq) > */ > static int tcp_data_from_sock(const struct ctx *c, struct tcp_tap_conn *= conn) > { > + if (c->mode =3D=3D MODE_VU) > + return tcp_vu_data_from_sock(c, conn); > + > return tcp_buf_data_from_sock(c, conn); 
> } > =20 > diff --git a/tcp_vu.c b/tcp_vu.c > new file mode 100644 > index 000000000000..1126fb39d138 > --- /dev/null > +++ b/tcp_vu.c > @@ -0,0 +1,476 @@ > +// SPDX-License-Identifier: GPL-2.0-or-later > +/* tcp_vu.c - TCP L2 vhost-user management functions > + * > + * Copyright Red Hat > + * Author: Laurent Vivier > + */ > + > +#include > +#include > +#include > + > +#include > + > +#include > + > +#include > +#include > + > +#include "util.h" > +#include "ip.h" > +#include "passt.h" > +#include "siphash.h" > +#include "inany.h" > +#include "vhost_user.h" > +#include "tcp.h" > +#include "pcap.h" > +#include "flow.h" > +#include "tcp_conn.h" > +#include "flow_table.h" > +#include "tcp_vu.h" > +#include "tap.h" > +#include "tcp_internal.h" > +#include "checksum.h" > +#include "vu_common.h" > + > +static struct iovec iov_vu[VIRTQUEUE_MAX_SIZE + 1]; > +static struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE]; > + > +/** > + * tcp_vu_l2_hdrlen() - return the size of the header in level 2 frame (= TDP) I don't love this name. To be meaningful "header length" needs to say both the first and last header that's relevant here, so "l2" doesn't really disambiguate anything. I'd suggest just "tcp_vu_hdrlen()" then in the longer comment say it's the total length of the L2, L3 & L4 headers. Although... looking deeper it seems like most (all?) times you use the output from this you add the size of virtio_net_hdr_mrg_rxbuf to it anyway. So would it be simpler to just include that here in the first place. Also s/TDP/TCP/? 
> + * @v6: Set for IPv6 packet > + * > + * Return: Return the size of the header > + */ > +static size_t tcp_vu_l2_hdrlen(bool v6) > +{ > + size_t l2_hdrlen; > + > + l2_hdrlen =3D sizeof(struct ethhdr) + sizeof(struct tcphdr); > + > + if (v6) > + l2_hdrlen +=3D sizeof(struct ipv6hdr); > + else > + l2_hdrlen +=3D sizeof(struct iphdr); > + > + return l2_hdrlen; > +} > + > +/** > + * tcp_vu_update_check() - Calculate TCP checksum > + * @tapside: Address information for one side of the flow > + * @iov: Pointer to the array of IO vectors > + * @iov_used: Length of the array > + */ > +static void tcp_vu_update_check(const struct flowside *tapside, > + struct iovec *iov, int iov_used) AFAICT this is only used for the pcap path. Rather than filling in the checksum at a different point from normal, I think it would be easier to just clear the no_tcp_csum flag when pcap is enabled. That would, AFAICT, remove the need for this function entirely. > +{ > + char *base =3D iov[0].iov_base; > + > + if (inany_v4(&tapside->oaddr)) { > + const struct iphdr *iph =3D vu_ip(base); > + > + tcp_update_check_tcp4(iph, iov, iov_used, > + (char *)vu_payloadv4(base) - base); > + } else { > + const struct ipv6hdr *ip6h =3D vu_ip(base); > + > + tcp_update_check_tcp6(ip6h, iov, iov_used, > + (char *)vu_payloadv6(base) - base); > + } > +} > + > +/** > + * tcp_vu_send_flag() - Send segment with flags to vhost-user (no payloa= d) > + * @c: Execution context > + * @conn: Connection pointer > + * @flags: TCP flags: if not set, send segment only if ACK is due > + * > + * Return: negative error code on connection reset, 0 otherwise > + */ > +int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int= flags) > +{ > + struct vu_dev *vdev =3D c->vdev; > + struct vu_virtq *vq =3D &vdev->vq[VHOST_USER_RX_QUEUE]; > + const struct flowside *tapside =3D TAPFLOW(conn); > + size_t l2len, l4len, optlen, hdrlen; > + struct ethhdr *eh; > + int elem_cnt; > + int nb_ack; > + int ret; > + > + hdrlen =3D 
tcp_vu_l2_hdrlen(CONN_V6(conn)); > + > + vu_init_elem(elem, iov_vu, 2); I'm finding the use of iov_vu here confusing. AFAICT when transferring data from sock to vu, it represents the "sock side" IOV - so it points to just the pieces of VU memory that are used for the actual data payloads. Here, where there's no socket side data, I'm not sure what it's supposed to mean. I think it might be better to just use a local 2-element iovec. > + elem_cnt =3D vu_collect_one_frame(vdev, vq, elem, 1, > + hdrlen + OPT_MSS_LEN + OPT_WS_LEN + 1, > + 0, NULL); AFAICT the size passed to vu_collect_one_frame() doesn't include the headers, but here you do include it. Also, IIUC, this carefully removes space for the headers from the IOV... > + if (elem_cnt < 1) > + return 0; > + > + vu_set_vnethdr(vdev, &iov_vu[0], 1, 0); =2E.. then this puts it right back again. That also seems confusing. > + > + eh =3D vu_eth(iov_vu[0].iov_base); > + > + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); > + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source)); > + > + if (CONN_V4(conn)) { > + struct tcp_payload_t *payload; > + struct iphdr *iph; > + uint32_t seq; > + > + eh->h_proto =3D htons(ETH_P_IP); > + > + iph =3D vu_ip(iov_vu[0].iov_base); > + *iph =3D (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP); > + > + payload =3D vu_payloadv4(iov_vu[0].iov_base); I'm not sure using tcp_payload_t is a good idea here, since it kind of implies a buffer for a max-size frame, which is not necessary and might not be the case here. tcp_flags_t might make more sense. > + memset(&payload->th, 0, sizeof(payload->th)); > + payload->th.doff =3D offsetof(struct tcp_flags_t, opts) / 4; > + payload->th.ack =3D 1; > + > + seq =3D conn->seq_to_tap; > + ret =3D tcp_prepare_flags(c, conn, flags, &payload->th, > + (char *)payload->data, &optlen); vu_collect_one_frame() verifies that the fixed size portion of the headers are contiguous and in the first buffer. 
However, here, you're also requiring that there's contiguous space in that first buffer for the TCP header options as well. That, too, needs to be verified. > + if (ret <=3D 0) { > + vu_queue_rewind(vq, 1); > + return ret; > + } > + > + l4len =3D tcp_fill_headers4(conn, NULL, iph, payload, optlen, > + NULL, seq, true); > + l2len =3D sizeof(*iph); > + } else { > + struct tcp_payload_t *payload; > + struct ipv6hdr *ip6h; > + uint32_t seq; > + > + eh->h_proto =3D htons(ETH_P_IPV6); > + > + ip6h =3D vu_ip(iov_vu[0].iov_base); > + *ip6h =3D (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP); > + > + payload =3D vu_payloadv6(iov_vu[0].iov_base); > + memset(&payload->th, 0, sizeof(payload->th)); > + payload->th.doff =3D offsetof(struct tcp_flags_t, opts) / 4; > + payload->th.ack =3D 1; > + > + seq =3D conn->seq_to_tap; > + ret =3D tcp_prepare_flags(c, conn, flags, &payload->th, > + (char *)payload->data, &optlen); > + if (ret <=3D 0) { > + vu_queue_rewind(vq, 1); > + return ret; > + } > + > + l4len =3D tcp_fill_headers6(conn, NULL, ip6h, payload, optlen, > + seq, true); > + l2len =3D sizeof(*ip6h); > + } > + l2len +=3D l4len + sizeof(struct ethhdr); > + > + elem[0].in_sg[0].iov_len =3D l2len + > + sizeof(struct virtio_net_hdr_mrg_rxbuf); > + if (*c->pcap) { > + tcp_vu_update_check(tapside, &elem[0].in_sg[0], 1); > + pcap_iov(&elem[0].in_sg[0], 1, > + sizeof(struct virtio_net_hdr_mrg_rxbuf)); > + } > + nb_ack =3D 1; > + > + if (flags & DUP_ACK) { > + elem_cnt =3D vu_collect_one_frame(vdev, vq, &elem[1], 1, l2len, > + 0, NULL); > + if (elem_cnt =3D=3D 1) { > + memcpy(elem[1].in_sg[0].iov_base, > + elem[0].in_sg[0].iov_base, l2len); IIUC, the iovs in in_sg will include the virtio_net_hdr_mrg_rxbuf, but l2len does not include it, so I think this might fail to copy the tail of the packet. 
> + vu_set_vnethdr(vdev, &elem[1].in_sg[0], 1, 0); > + nb_ack++; > + > + if (*c->pcap) > + pcap_iov(&elem[1].in_sg[0], 1, 0); > + } > + } > + > + vu_flush(vdev, vq, elem, nb_ack); Is there a VU specific reason to flush immediately, rather than processing more stuff and flushing everything in the deferred handler? > + return 0; > +} > + > +/** tcp_vu_sock_recv() - Receive datastream from socket into vhost-user = buffers > + * @c: Execution context > + * @conn: Connection pointer > + * @v4: Set for IPv4 connections Most places we have a flag for v6 rather than a flag for v4, so maybe invert this for consistency? > + * @fillsize: Number of bytes we can receive > + * @datalen: Size of received data (output) s/datalen/dlen. ? > + * > + * Return: Number of iov entries used to store the data > + */ > +static ssize_t tcp_vu_sock_recv(const struct ctx *c, > + struct tcp_tap_conn *conn, bool v4, > + size_t fillsize, ssize_t *dlen) > +{ > + struct vu_dev *vdev =3D c->vdev; > + struct vu_virtq *vq =3D &vdev->vq[VHOST_USER_RX_QUEUE]; > + struct msghdr mh_sock =3D { 0 }; > + uint16_t mss =3D MSS_GET(conn); > + int s =3D conn->sock; > + size_t l2_hdrlen; > + int elem_cnt; > + ssize_t ret; > + > + *dlen =3D 0; > + > + l2_hdrlen =3D tcp_vu_l2_hdrlen(!v4); > + > + vu_init_elem(elem, &iov_vu[1], VIRTQUEUE_MAX_SIZE); > + > + elem_cnt =3D vu_collect(vdev, vq, elem, VIRTQUEUE_MAX_SIZE, mss, > + l2_hdrlen, fillsize); > + if (elem_cnt < 0) { > + tcp_rst(c, conn); > + return -ENOMEM; Does this warrant a reset? Couldn't this happen due to a temporary shortage of receive buffers on the guest side, in which case we'd want to just resend later. > + } > + > + mh_sock.msg_iov =3D iov_vu; > + mh_sock.msg_iovlen =3D elem_cnt + 1; > + > + do > + ret =3D recvmsg(s, &mh_sock, MSG_PEEK); > + while (ret < 0 && errno =3D=3D EINTR); > + > + if (ret < 0) { > + vu_queue_rewind(vq, elem_cnt); > + if (errno !=3D EAGAIN && errno !=3D EWOULDBLOCK) { I'm assuming this is supposed to be the errno from recvmsg()? 
It's generally not a good idea to check errno after calling any other functions, because it's so easy to clobber accidentally. > + ret =3D -errno; > + tcp_rst(c, conn); > + } > + return ret; > + } > + if (!ret) { > + vu_queue_rewind(vq, elem_cnt); > + > + if ((conn->events & (SOCK_FIN_RCVD | TAP_FIN_SENT)) =3D=3D SOCK_FIN_RC= VD) { > + int retf =3D tcp_vu_send_flag(c, conn, FIN | ACK); > + if (retf) { > + tcp_rst(c, conn); > + return retf; > + } > + > + conn_event(c, conn, TAP_FIN_SENT); > + } > + return 0; > + } > + > + *dlen =3D ret; > + > + return elem_cnt; > +} > + > +/** > + * tcp_vu_prepare() - Prepare the packet header > + * @c: Execution context > + * @conn: Connection pointer > + * @first: Pointer to the array of IO vectors > + * @dlen: Packet data length > + * @check: Checksum, if already known > + */ > +static void tcp_vu_prepare(const struct ctx *c, > + struct tcp_tap_conn *conn, struct iovec *first, > + size_t dlen, const uint16_t **check) > +{ > + const struct flowside *toside =3D TAPFLOW(conn); > + char *base =3D first->iov_base; > + struct ethhdr *eh; > + > + /* we guess the first iovec provided by the guest can embed > + * all the headers needed by L2 frame > + */ > + > + eh =3D vu_eth(base); > + > + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); > + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source)); > + > + /* initialize header */ > + if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) { > + struct tcp_payload_t *payload; > + struct iphdr *iph; > + > + ASSERT(first[0].iov_len >=3D sizeof(struct virtio_net_hdr_mrg_rxbuf) + > + sizeof(struct ethhdr) + sizeof(struct iphdr) + > + sizeof(struct tcphdr)); > + > + eh->h_proto =3D htons(ETH_P_IP); > + > + iph =3D vu_ip(base); > + *iph =3D (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP); > + payload =3D vu_payloadv4(base); > + memset(&payload->th, 0, sizeof(payload->th)); > + payload->th.doff =3D offsetof(struct tcp_payload_t, data) / 4; > + payload->th.ack =3D 1; > + > + 
tcp_fill_headers4(conn, NULL, iph, payload, dlen, > + *check, conn->seq_to_tap, true); I wonder if there's another opportunity for more logic sharing between the buf and vu paths here: tcp_fill_headers[46]() fills in the dynamic (vary per packet) parts of the header. Could we add a function that fills in the static (ish) parts. VU would call it on every packet, buf would call it during buffer setup. > + *check =3D &iph->check; > + } else { > + struct tcp_payload_t *payload; > + struct ipv6hdr *ip6h; > + > + ASSERT(first[0].iov_len >=3D sizeof(struct virtio_net_hdr_mrg_rxbuf) + > + sizeof(struct ethhdr) + sizeof(struct ipv6hdr) + > + sizeof(struct tcphdr)); > + > + eh->h_proto =3D htons(ETH_P_IPV6); > + > + ip6h =3D vu_ip(base); > + *ip6h =3D (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP); > + > + payload =3D vu_payloadv6(base); > + memset(&payload->th, 0, sizeof(payload->th)); > + payload->th.doff =3D offsetof(struct tcp_payload_t, data) / 4; > + payload->th.ack =3D 1; > + > + tcp_fill_headers6(conn, NULL, ip6h, payload, dlen, > + conn->seq_to_tap, true); > + } > +} > + > +/** > + * tcp_vu_data_from_sock() - Handle new data from socket, queue to vhost= -user, > + * in window > + * @c: Execution context > + * @conn: Connection pointer > + * > + * Return: Negative on connection reset, 0 otherwise > + */ > +int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn) > +{ > + uint32_t wnd_scaled =3D conn->wnd_from_tap << conn->ws_from_tap; > + struct vu_dev *vdev =3D c->vdev; > + struct vu_virtq *vq =3D &vdev->vq[VHOST_USER_RX_QUEUE]; > + const struct flowside *tapside =3D TAPFLOW(conn); > + uint16_t mss =3D MSS_GET(conn); > + size_t l2_hdrlen, fillsize; > + int i, iov_cnt, iov_used; > + int v4 =3D CONN_V4(conn); > + uint32_t already_sent =3D 0; > + const uint16_t *check; > + struct iovec *first; > + int frame_size; > + int num_buffers; > + ssize_t len; > + > + if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) { Under what circumstances could we get here? 
I'm wondering if we risk either: 1. Spinning on a socket EPOLLIN event until the virtqueue is ready or 2. Essentially losing the event here due to EPOLLET, and never coming back to read the data queued on the socket > + flow_err(conn, > + "Got packet, but RX virtqueue not usable yet"); > + return 0; > + } > + > + already_sent =3D conn->seq_to_tap - conn->seq_ack_from_tap; > + > + if (SEQ_LT(already_sent, 0)) { > + /* RFC 761, section 2.1. */ > + flow_trace(conn, "ACK sequence gap: ACK for %u, sent: %u", > + conn->seq_ack_from_tap, conn->seq_to_tap); > + conn->seq_to_tap =3D conn->seq_ack_from_tap; > + already_sent =3D 0; > + } > + > + if (!wnd_scaled || already_sent >=3D wnd_scaled) { > + conn_flag(c, conn, STALLED); > + conn_flag(c, conn, ACK_FROM_TAP_DUE); > + return 0; > + } > + > + /* Set up buffer descriptors we'll fill completely and partially. */ This seems tantalizingly close to allowing more code to be shared between the vu and other paths: 1. Common already_sent calculations 2. Backend specific code constructs an mh array 3. Common code does the recvmsg() and follow up event / flag handling 4. Backend specific code forwards to tap The need for vu_queue_rewind()s complicates things a bit, but I wonder if we can work around that. > + > + fillsize =3D wnd_scaled; > + > + if (peek_offset_cap) > + already_sent =3D 0; > + > + iov_vu[0].iov_base =3D tcp_buf_discard; > + iov_vu[0].iov_len =3D already_sent; > + fillsize -=3D already_sent; > + > + /* collect the buffers from vhost-user and fill them with the > + * data from the socket > + */ > + iov_cnt =3D tcp_vu_sock_recv(c, conn, v4, fillsize, &len); > + if (iov_cnt <=3D 0) > + return iov_cnt; > + > + len -=3D already_sent; > + if (len <=3D 0) { > + conn_flag(c, conn, STALLED); > + vu_queue_rewind(vq, iov_cnt); > + return 0; > + } > + > + conn_flag(c, conn, ~STALLED); > + > + /* Likely, some new data was acked too. 
*/ > + tcp_update_seqack_wnd(c, conn, 0, NULL); > + > + /* initialize headers */ > + l2_hdrlen =3D tcp_vu_l2_hdrlen(!v4); > + iov_used =3D 0; > + num_buffers =3D 0; > + check =3D NULL; > + frame_size =3D 0; > + > + /* iov_vu is an array of buffers and the buffer size can be > + * smaller than the frame size we want to use but with > + * num_buffer we can merge several virtio iov buffers in one packet > + * we need only to set the packet headers in the first iov and > + * num_buffer to the number of iov entries > + */ > + for (i =3D 0; i < iov_cnt && len; i++) { AFAICT, you always use (i + 1) never bare i in the body, so it would probably be simpler to change the loop to be from i=3D1 to i <=3D iov_cnt. > + > + if (frame_size =3D=3D 0) > + first =3D &iov_vu[i + 1]; > + > + if (iov_vu[i + 1].iov_len > (size_t)len) > + iov_vu[i + 1].iov_len =3D len; > + > + len -=3D iov_vu[i + 1].iov_len; > + iov_used++; > + > + frame_size +=3D iov_vu[i + 1].iov_len; > + num_buffers++; > + > + if (frame_size >=3D mss || len =3D=3D 0 || > + i + 1 =3D=3D iov_cnt || !vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXB= UF)) { > + if (i + 1 =3D=3D iov_cnt) > + check =3D NULL; > + > + /* restore first iovec base: point to vnet header */ > + vu_set_vnethdr(vdev, first, num_buffers, l2_hdrlen); > + > + tcp_vu_prepare(c, conn, first, frame_size, &check); > + if (*c->pcap) { > + tcp_vu_update_check(tapside, first, num_buffers); > + pcap_iov(first, num_buffers, > + sizeof(struct virtio_net_hdr_mrg_rxbuf)); > + } > + > + conn->seq_to_tap +=3D frame_size; > + > + frame_size =3D 0; > + num_buffers =3D 0; > + } > + } > + > + /* release unused buffers */ > + vu_queue_rewind(vq, iov_cnt - iov_used); > + > + /* send packets */ > + vu_flush(vdev, vq, elem, iov_used); > + > + conn_flag(c, conn, ACK_FROM_TAP_DUE); > + > + return 0; > +} > diff --git a/tcp_vu.h b/tcp_vu.h > new file mode 100644 > index 000000000000..6ab6057f352a > --- /dev/null > +++ b/tcp_vu.h > @@ -0,0 +1,12 @@ > +// SPDX-License-Identifier: 
GPL-2.0-or-later > +/* Copyright Red Hat > + * Author: Laurent Vivier > + */ > + > +#ifndef TCP_VU_H > +#define TCP_VU_H > + > +int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int= flags); > +int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn= ); > + > +#endif /*TCP_VU_H */ > diff --git a/udp.c b/udp.c > index 8fc5d8099310..1171d9d1a75b 100644 > --- a/udp.c > +++ b/udp.c > @@ -628,6 +628,11 @@ void udp_listen_sock_handler(const struct ctx *c, > union epoll_ref ref, uint32_t events, > const struct timespec *now) > { > + if (c->mode =3D=3D MODE_VU) { > + udp_vu_listen_sock_handler(c, ref, events, now); > + return; > + } > + > udp_buf_listen_sock_handler(c, ref, events, now); > } > =20 > @@ -697,6 +702,11 @@ static void udp_buf_reply_sock_handler(const struct = ctx *c, union epoll_ref ref, > void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > uint32_t events, const struct timespec *now) > { > + if (c->mode =3D=3D MODE_VU) { > + udp_vu_reply_sock_handler(c, ref, events, now); > + return; > + } > + > udp_buf_reply_sock_handler(c, ref, events, now); > } > =20 > diff --git a/udp_vu.c b/udp_vu.c > new file mode 100644 > index 000000000000..3cb76945c9c1 > --- /dev/null > +++ b/udp_vu.c > @@ -0,0 +1,336 @@ > +// SPDX-License-Identifier: GPL-2.0-or-later > +/* udp_vu.c - UDP L2 vhost-user management functions > + * > + * Copyright Red Hat > + * Author: Laurent Vivier > + */ > + > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > + > +#include "checksum.h" > +#include "util.h" > +#include "ip.h" > +#include "siphash.h" > +#include "inany.h" > +#include "passt.h" > +#include "pcap.h" > +#include "log.h" > +#include "vhost_user.h" > +#include "udp_internal.h" > +#include "flow.h" > +#include "flow_table.h" > +#include "udp_flow.h" > +#include "udp_vu.h" > +#include "vu_common.h" > + > +static struct iovec iov_vu [VIRTQUEUE_MAX_SIZE]; > 
+static struct vu_virtq_element elem [VIRTQUEUE_MAX_SIZE]; > + > +/** > + * udp_vu_l2_hdrlen() - return the size of the header in level 2 frame (= UDP) > + * @v6: Set for IPv6 packet > + * > + * Return: Return the size of the header > + */ > +static size_t udp_vu_l2_hdrlen(bool v6) > +{ > + size_t l2_hdrlen; > + > + l2_hdrlen =3D sizeof(struct ethhdr) + sizeof(struct udphdr); > + > + if (v6) > + l2_hdrlen +=3D sizeof(struct ipv6hdr); > + else > + l2_hdrlen +=3D sizeof(struct iphdr); > + > + return l2_hdrlen; > +} > + > +static int udp_vu_sock_init(int s, union sockaddr_inany *s_in) > +{ > + struct msghdr msg =3D { > + .msg_name =3D s_in, > + .msg_namelen =3D sizeof(union sockaddr_inany), > + }; > + > + return recvmsg(s, &msg, MSG_PEEK | MSG_DONTWAIT); > +} > + > +/** > + * udp_vu_sock_recv() - Receive datagrams from socket into vhost-user bu= ffers > + * @c: Execution context > + * @s: Socket to receive from > + * @events: epoll events bitmap > + * @v6: Set for IPv6 connections > + * @datalen: Size of received data (output) > + * > + * Return: Number of iov entries used to store the datagram > + */ > +static int udp_vu_sock_recv(const struct ctx *c, int s, uint32_t events, > + bool v6, ssize_t *dlen) > +{ > + struct vu_dev *vdev =3D c->vdev; > + struct vu_virtq *vq =3D &vdev->vq[VHOST_USER_RX_QUEUE]; > + int max_elem, iov_cnt, idx, iov_used; > + struct msghdr msg =3D { 0 }; > + size_t off, l2_hdrlen; > + > + ASSERT(!c->no_udp); > + > + if (!(events & EPOLLIN)) > + return 0; > + > + /* compute L2 header length */ > + > + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) > + max_elem =3D VIRTQUEUE_MAX_SIZE; > + else > + max_elem =3D 1; > + > + l2_hdrlen =3D udp_vu_l2_hdrlen(v6); > + > + vu_init_elem(elem, iov_vu, max_elem); > + > + iov_cnt =3D vu_collect_one_frame(vdev, vq, elem, max_elem, > + ETH_MAX_MTU - l2_hdrlen, > + l2_hdrlen, NULL); > + if (iov_cnt =3D=3D 0) > + return 0; > + > + msg.msg_iov =3D iov_vu; > + msg.msg_iovlen =3D iov_cnt; > + > + *dlen =3D 
recvmsg(s, &msg, 0); > + if (*dlen < 0) { > + vu_queue_rewind(vq, iov_cnt); > + return 0; > + } It'd be nice if we could use recvmmsg() here. I think it's not super hard, with a preliminary step a bit like the vu_collect() in the TCP code. That would gather a bunch of buffers from the queue into an IOV, then construct an array of msghdrs each with a slice of that IO vector large enough to hold a max size frame. It would be reasonable to postpone that after the initial version though. > + /* count the numbers of buffer filled by recvmsg() */ > + idx =3D iov_skip_bytes(iov_vu, iov_cnt, *dlen, &off); > + > + /* adjust last iov length */ > + if (idx < iov_cnt) > + iov_vu[idx].iov_len =3D off; > + iov_used =3D idx + !!off; > + > + /* we have at least the header */ > + if (iov_used =3D=3D 0) > + iov_used =3D 1; > + > + /* release unused buffers */ > + vu_queue_rewind(vq, iov_cnt - iov_used); > + > + vu_set_vnethdr(vdev, &iov_vu[0], iov_used, l2_hdrlen); > + > + return iov_used; > +} > + > +/** > + * udp_vu_prepare() - Prepare the packet header > + * @c: Execution context > + * @toside: Address information for one side of the flow > + * @datalen: Packet data length > + * > + * Return: Layer-4 length > + */ > +static size_t udp_vu_prepare(const struct ctx *c, > + const struct flowside *toside, ssize_t dlen) > +{ > + struct ethhdr *eh; > + size_t l4len; > + > + /* ethernet header */ > + eh =3D vu_eth(iov_vu[0].iov_base); > + > + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); > + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source)); > + > + /* initialize header */ > + if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) { > + struct iphdr *iph =3D vu_ip(iov_vu[0].iov_base); > + struct udp_payload_t *bp =3D vu_payloadv4(iov_vu[0].iov_base); > + > + eh->h_proto =3D htons(ETH_P_IP); > + > + *iph =3D (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_UDP); > + > + l4len =3D udp_update_hdr4(iph, bp, toside, dlen, true); > + } else { > + struct ipv6hdr *ip6h =3D 
vu_ip(iov_vu[0].iov_base); > + struct udp_payload_t *bp =3D vu_payloadv6(iov_vu[0].iov_base); > + > + eh->h_proto =3D htons(ETH_P_IPV6); > + > + *ip6h =3D (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_UDP); > + > + l4len =3D udp_update_hdr6(ip6h, bp, toside, dlen, true); > + } > + > + return l4len; > +} > + > +/** > + * udp_vu_csum() - Calculate and set checksum for a UDP packet > + * @toside: ddress information for one side of the flow > + * @l4len: IPv4 Payload length > + * @iov_used: Length of the array > + */ > +static void udp_vu_csum(const struct flowside *toside, int iov_used) > +{ > + const struct in_addr *src4 =3D inany_v4(&toside->oaddr); > + const struct in_addr *dst4 =3D inany_v4(&toside->eaddr); > + char *base =3D iov_vu[0].iov_base; > + struct udp_payload_t *bp; > + > + if (src4 && dst4) { > + bp =3D vu_payloadv4(base); > + csum_udp4(&bp->uh, *src4, *dst4, iov_vu, iov_used, > + (char *)&bp->data - base); > + } else { > + bp =3D vu_payloadv6(base); > + csum_udp6(&bp->uh, &toside->oaddr.a6, &toside->eaddr.a6, > + iov_vu, iov_used, (char *)&bp->data - base); > + } > +} > + > +/** > + * udp_vu_listen_sock_handler() - Handle new data from socket > + * @c: Execution context > + * @ref: epoll reference > + * @events: epoll events bitmap > + * @now: Current timestamp > + */ > +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now) > +{ > + struct vu_dev *vdev =3D c->vdev; > + struct vu_virtq *vq =3D &vdev->vq[VHOST_USER_RX_QUEUE]; > + int i; > + > + if (udp_sock_errs(c, ref.fd, events) < 0) { > + err("UDP: Unrecoverable error on listening socket:" > + " (%s port %hu)", pif_name(ref.udp.pif), ref.udp.port); > + return; > + } > + > + for (i =3D 0; i < UDP_MAX_FRAMES; i++) { > + const struct flowside *toside; > + union sockaddr_inany s_in; > + flow_sidx_t batchsidx; > + uint8_t batchpif; These names are a bit misleading in their new context. 
Since you're processing one datagram at a time, there are no "batches". > + ssize_t dlen; > + int iov_used; > + bool v6; > + > + if (udp_vu_sock_init(ref.fd, &s_in) < 0) > + break; > + > + batchsidx =3D udp_flow_from_sock(c, ref, &s_in, now); > + batchpif =3D pif_at_sidx(batchsidx); > + > + if (batchpif !=3D PIF_TAP) { > + if (flow_sidx_valid(batchsidx)) { > + flow_sidx_t fromsidx =3D flow_sidx_opposite(batchsidx); > + struct udp_flow *uflow =3D udp_at_sidx(batchsidx); > + > + flow_err(uflow, > + "No support for forwarding UDP from %s to %s", > + pif_name(pif_at_sidx(fromsidx)), > + pif_name(batchpif)); > + } else { > + debug("Discarding 1 datagram without flow"); > + } > + > + continue; > + } > + > + toside =3D flowside_at_sidx(batchsidx); > + > + v6 =3D !(inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)); > + > + iov_used =3D udp_vu_sock_recv(c, ref.fd, events, v6, &dlen); > + if (iov_used <=3D 0) > + break; > + > + udp_vu_prepare(c, toside, dlen); > + if (*c->pcap) { > + udp_vu_csum(toside, iov_used); > + pcap_iov(iov_vu, iov_used, > + sizeof(struct virtio_net_hdr_mrg_rxbuf)); > + } > + vu_flush(vdev, vq, elem, iov_used); You're flushing on every datagram. Is that intentional, rather than leaving it until you've processed the whole loop's worth? 
> + } > +} > + > +/** > + * udp_vu_reply_sock_handler() - Handle new data from flow specific sock= et > + * @c: Execution context > + * @ref: epoll reference > + * @events: epoll events bitmap > + * @now: Current timestamp > + */ > +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now) > +{ > + flow_sidx_t tosidx =3D flow_sidx_opposite(ref.flowside); > + const struct flowside *toside =3D flowside_at_sidx(tosidx); > + struct udp_flow *uflow =3D udp_at_sidx(ref.flowside); > + int from_s =3D uflow->s[ref.flowside.sidei]; > + struct vu_dev *vdev =3D c->vdev; > + struct vu_virtq *vq =3D &vdev->vq[VHOST_USER_RX_QUEUE]; > + int i; > + > + ASSERT(!c->no_udp); > + > + if (udp_sock_errs(c, from_s, events) < 0) { > + flow_err(uflow, "Unrecoverable error on reply socket"); > + flow_err_details(uflow); > + udp_flow_close(c, uflow); > + return; > + } > + > + for (i =3D 0; i < UDP_MAX_FRAMES; i++) { > + uint8_t topif =3D pif_at_sidx(tosidx); > + ssize_t dlen; > + int iov_used; > + bool v6; > + > + ASSERT(uflow); > + > + if (topif !=3D PIF_TAP) { > + uint8_t frompif =3D pif_at_sidx(ref.flowside); > + > + flow_err(uflow, > + "No support for forwarding UDP from %s to %s", > + pif_name(frompif), pif_name(topif)); > + continue; > + } > + > + v6 =3D !(inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)); > + > + iov_used =3D udp_vu_sock_recv(c, from_s, events, v6, &dlen); > + if (iov_used <=3D 0) > + break; > + flow_trace(uflow, "Received 1 datagram on reply socket"); > + uflow->ts =3D now->tv_sec; > + > + udp_vu_prepare(c, toside, dlen); > + if (*c->pcap) { > + udp_vu_csum(toside, iov_used); > + pcap_iov(iov_vu, iov_used, > + sizeof(struct virtio_net_hdr_mrg_rxbuf)); > + } > + vu_flush(vdev, vq, elem, iov_used); > + } > +} > diff --git a/udp_vu.h b/udp_vu.h > new file mode 100644 > index 000000000000..ba7018d3bf01 > --- /dev/null > +++ b/udp_vu.h > @@ -0,0 +1,13 @@ > +// SPDX-License-Identifier: GPL-2.0-or-later > 
+/* Copyright Red Hat > + * Author: Laurent Vivier > + */ > + > +#ifndef UDP_VU_H > +#define UDP_VU_H > + > +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now); > +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now); > +#endif /* UDP_VU_H */ > diff --git a/vhost_user.c b/vhost_user.c > index 1e302926b8fe..e905f3329f71 100644 > --- a/vhost_user.c > +++ b/vhost_user.c > @@ -48,12 +48,13 @@ > /* vhost-user version we are compatible with */ > #define VHOST_USER_VERSION 1 > =20 > +static struct vu_dev vdev_storage; > + > /** > * vu_print_capabilities() - print vhost-user capabilities > * this is part of the vhost-user backend > * convention. > */ > -/* cppcheck-suppress unusedFunction */ > void vu_print_capabilities(void) > { > info("{"); > @@ -163,9 +164,7 @@ static void vmsg_close_fds(const struct vhost_user_ms= g *vmsg) > */ > static void vu_remove_watch(const struct vu_dev *vdev, int fd) > { > - /* Placeholder to add passt related code */ > - (void)vdev; > - (void)fd; > + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_DEL, fd, NULL); > } > =20 > /** > @@ -487,6 +486,14 @@ static bool vu_set_mem_table_exec(struct vu_dev *vde= v, > } > } > =20 > + /* As vu_packet_check_range() has no access to the number of > + * memory regions, mark the end of the array with mmap_addr =3D 0 > + */ > + ASSERT(vdev->nregions < VHOST_USER_MAX_RAM_SLOTS - 1); > + vdev->regions[vdev->nregions].mmap_addr =3D 0; > + > + tap_sock_update_pool(vdev->regions, 0); > + > return false; > } > =20 > @@ -615,9 +622,12 @@ static bool vu_get_vring_base_exec(struct vu_dev *vd= ev, > */ > static void vu_set_watch(const struct vu_dev *vdev, int fd) > { > - /* Placeholder to add passt related code */ > - (void)vdev; > - (void)fd; > + union epoll_ref ref =3D { .type =3D EPOLL_TYPE_VHOST_KICK, .fd =3D fd }; > + struct epoll_event ev =3D { 0 }; > + > + ev.data.u64 =3D 
ref.u64; > + ev.events =3D EPOLLIN; > + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_ADD, fd, &ev); > } > =20 > /** > @@ -829,14 +839,14 @@ static bool vu_set_vring_enable_exec(struct vu_dev = *vdev, > * @c: execution context > * @vdev: vhost-user device > */ > -/* cppcheck-suppress unusedFunction */ > -void vu_init(struct ctx *c, struct vu_dev *vdev) > +void vu_init(struct ctx *c) > { > int i; > =20 > - vdev->context =3D c; > + c->vdev =3D &vdev_storage; > + c->vdev->context =3D c; > for (i =3D 0; i < VHOST_USER_MAX_QUEUES; i++) { > - vdev->vq[i] =3D (struct vu_virtq){ > + c->vdev->vq[i] =3D (struct vu_virtq){ > .call_fd =3D -1, > .kick_fd =3D -1, > .err_fd =3D -1, > @@ -849,7 +859,6 @@ void vu_init(struct ctx *c, struct vu_dev *vdev) > * vu_cleanup() - Reset vhost-user device > * @vdev: vhost-user device > */ > -/* cppcheck-suppress unusedFunction */ > void vu_cleanup(struct vu_dev *vdev) > { > unsigned int i; > @@ -896,8 +905,7 @@ void vu_cleanup(struct vu_dev *vdev) > */ > static void vu_sock_reset(struct vu_dev *vdev) > { > - /* Placeholder to add passt related code */ > - (void)vdev; > + tap_sock_reset(vdev->context); > } > =20 > static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev, > @@ -925,7 +933,6 @@ static bool (*vu_handle[VHOST_USER_MAX])(struct vu_de= v *vdev, > * @fd: vhost-user message socket > * @events: epoll events > */ > -/* cppcheck-suppress unusedFunction */ > void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events) > { > struct vhost_user_msg msg =3D { 0 }; > diff --git a/vhost_user.h b/vhost_user.h > index 5af349ba58b8..464ba21e962f 100644 > --- a/vhost_user.h > +++ b/vhost_user.h > @@ -183,7 +183,6 @@ struct vhost_user_msg { > * > * Return: true if the virqueue is enabled, false otherwise > */ > -/* cppcheck-suppress unusedFunction */ > static inline bool vu_queue_enabled(const struct vu_virtq *vq) > { > return vq->enable; > @@ -195,14 +194,13 @@ static inline bool vu_queue_enabled(const struct vu= _virtq *vq) > * > * 
Return: true if the virqueue is started, false otherwise > */ > -/* cppcheck-suppress unusedFunction */ > static inline bool vu_queue_started(const struct vu_virtq *vq) > { > return vq->started; > } > =20 > void vu_print_capabilities(void); > -void vu_init(struct ctx *c, struct vu_dev *vdev); > +void vu_init(struct ctx *c); > void vu_cleanup(struct vu_dev *vdev); > void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events); > #endif /* VHOST_USER_H */ > diff --git a/virtio.c b/virtio.c > index 380590afbca3..0598ff479858 100644 > --- a/virtio.c > +++ b/virtio.c > @@ -328,7 +328,6 @@ static bool vring_can_notify(const struct vu_dev *dev= , struct vu_virtq *vq) > * @dev: Vhost-user device > * @vq: Virtqueue > */ > -/* cppcheck-suppress unusedFunction */ > void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq) > { > if (!vq->vring.avail) > @@ -504,7 +503,6 @@ static int vu_queue_map_desc(struct vu_dev *dev, stru= ct vu_virtq *vq, unsigned i > * > * Return: -1 if there is an error, 0 otherwise > */ > -/* cppcheck-suppress unusedFunction */ > int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq, struct vu_virt= q_element *elem) > { > unsigned int head; > @@ -565,7 +563,6 @@ void vu_queue_unpop(struct vu_virtq *vq) > * @vq: Virtqueue > * @num: Number of element to unpop > */ > -/* cppcheck-suppress unusedFunction */ > bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num) > { > if (num > vq->inuse) > @@ -621,7 +618,6 @@ void vu_queue_fill_by_index(struct vu_virtq *vq, unsi= gned int index, > * @len: Size of the element > * @idx: Used ring entry index > */ > -/* cppcheck-suppress unusedFunction */ > void vu_queue_fill(struct vu_virtq *vq, const struct vu_virtq_element *e= lem, > unsigned int len, unsigned int idx) > { > @@ -645,7 +641,6 @@ static inline void vring_used_idx_set(struct vu_virtq= *vq, uint16_t val) > * @vq: Virtqueue > * @count: Number of entry to flush > */ > -/* cppcheck-suppress unusedFunction */ > void 
vu_queue_flush(struct vu_virtq *vq, unsigned int count) > { > uint16_t old, new; > diff --git a/vu_common.c b/vu_common.c > new file mode 100644 > index 000000000000..4977d6af0f92 > --- /dev/null > +++ b/vu_common.c > @@ -0,0 +1,385 @@ > +// SPDX-License-Identifier: GPL-2.0-or-later > +/* Copyright Red Hat > + * Author: Laurent Vivier > + * > + * common_vu.c - vhost-user common UDP and TCP functions > + */ > + > +#include > +#include > +#include > +#include > + > +#include "util.h" > +#include "passt.h" > +#include "tap.h" > +#include "vhost_user.h" > +#include "pcap.h" > +#include "vu_common.h" > + > +/** > + * vu_packet_check_range() - Check if a given memory zone is contained in > + * a mapped guest memory region > + * @buf: Array of the available memory regions > + * @offset: Offset of data range in packet descriptor > + * @size: Length of desired data range > + * @start: Start of the packet descriptor > + * > + * Return: 0 if the zone is in a mapped memory region, -1 otherwise > + */ > +int vu_packet_check_range(void *buf, size_t offset, size_t len, > + const char *start) > +{ > + struct vu_dev_region *dev_region; > + > + for (dev_region =3D buf; dev_region->mmap_addr; dev_region++) { > + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */ > + char *m =3D (char *)dev_region->mmap_addr; > + > + if (m <=3D start && > + start + offset + len <=3D m + dev_region->mmap_offset + > + dev_region->size) > + return 0; > + } > + > + return -1; > +} > + > +/** > + * vu_init_elem() - initialize an array of virtqueue element with 1 iov = in each > + * @elem: Array of virtqueue element to initialize > + * @iov: Array of iovec to assign to virtqueue element > + * @elem_cnt: Number of virtqueue element > + */ > +void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov, int = elem_cnt) > +{ > + int i; > + > + for (i =3D 0; i < elem_cnt; i++) { > + elem[i].out_num =3D 0; > + elem[i].out_sg =3D NULL; > + elem[i].in_num =3D 1; > + elem[i].in_sg =3D &iov[i]; > + } > +} > + > 
+/** > + * vu_collect_one_frame() - collect virtio buffers from a given virtqueu= e for > + * one frame > + * @vdev: vhost-user device > + * @vq: virtqueue to collect from > + * @elem: Array of virtqueue element > + * each element must be initialized with one iovec entry > + * in the in_sg array. > + * @max_elem: Number of virtqueue element in the array > + * @size: Maximum size of the data in the frame > + * @hdrlen: Size of the frame header No comment for @frame_size. > + */ > +int vu_collect_one_frame(struct vu_dev *vdev, struct vu_virtq *vq, > + struct vu_virtq_element *elem, int max_elem, > + size_t size, size_t hdrlen, size_t *frame_size) > +{ > + size_t current_size =3D 0; > + struct iovec *iov; > + int elem_cnt =3D 0; > + int ret; > + > + /* header is at least virtio_net_hdr_mrg_rxbuf */ > + hdrlen +=3D sizeof(struct virtio_net_hdr_mrg_rxbuf); > + > + /* collect first (or unique) element, it will contain header */ > + ret =3D vu_queue_pop(vdev, vq, &elem[0]); > + if (ret < 0) > + goto out; > + > + if (elem[0].in_num < 1) { > + warn("virtio-net receive queue contains no in buffers"); If this is always called for the receive queue, do you actually need the vq parameter? > + vu_queue_detach_element(vq); > + goto out; > + } > + > + iov =3D &elem[elem_cnt].in_sg[0]; > + > + ASSERT(iov->iov_len >=3D hdrlen); This could occur because of the guest/VMM doing something odd, not a passt internal bug, yes? In which case it should not be an ASSERT(), but a die() (or better yet, a less violent recovery). 
> + /* add space for header */ > + iov->iov_base =3D (char *)iov->iov_base + hdrlen; > + iov->iov_len -=3D hdrlen; > + > + if (iov->iov_len > size) > + iov->iov_len =3D size; > + > + elem_cnt++; > + current_size =3D iov->iov_len; > + > + if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) > + goto out; > + > + /* if possible, coalesce buffers to reach size */ > + while (current_size < size && elem_cnt < max_elem) { > + > + ret =3D vu_queue_pop(vdev, vq, &elem[elem_cnt]); > + if (ret < 0) > + break; > + > + if (elem[elem_cnt].in_num < 1) { > + warn("virtio-net receive queue contains no in buffers"); > + vu_queue_detach_element(vq); > + break; > + } > + > + iov =3D &elem[elem_cnt].in_sg[0]; > + > + if (iov->iov_len > size - current_size) > + iov->iov_len =3D size - current_size; > + > + current_size +=3D iov->iov_len; > + elem_cnt++; > + } > + > +out: > + if (frame_size) > + *frame_size =3D current_size; > + > + return elem_cnt; > +} > + > +/** > + * vu_collect() - collect virtio buffers from a given virtqueue > + * @vdev: vhost-user device > + * @vq: virtqueue to collect from > + * @elem: Array of virtqueue element > + * each element must be initialized with one iovec entry > + * in the in_sg array. 
> + * @max_elem: Number of virtqueue element in the array > + * @max_frame_size: Maximum size of the data in the frame > + * @hdrlen: Size of the frame header > + * @size: Total size of the buffers we need to collect > + * (if size > max_frame_size, we collect several frame) > + */ > +int vu_collect(struct vu_dev *vdev, struct vu_virtq *vq, > + struct vu_virtq_element *elem, int max_elem, > + size_t max_frame_size, size_t hdrlen, size_t size) > +{ > + int elem_cnt =3D 0; > + > + while (size > 0 && elem_cnt < max_elem) { > + size_t frame_size; > + int cnt; > + > + if (max_frame_size > size) > + max_frame_size =3D size; > + > + cnt =3D vu_collect_one_frame(vdev, vq, > + &elem[elem_cnt], max_elem - elem_cnt, > + max_frame_size, hdrlen, &frame_size); > + if (cnt =3D=3D 0) > + break; > + > + size -=3D frame_size; > + elem_cnt +=3D cnt; > + > + if (frame_size < max_frame_size) > + break; The reason for exiting the loop here is not clear to me. > + } > + > + return elem_cnt; > +} > + > +/** > + * vu_set_vnethdr() - set virtio-net headers in a given iovec > + * @vdev: vhost-user device > + * @iov: One iovec to initialize > + * @num_buffers: Number of guest buffers of the frame > + * @hdrlen: Size of the frame header > + */ > +void vu_set_vnethdr(const struct vu_dev *vdev, struct iovec *iov, > + int num_buffers, size_t hdrlen) > +{ > + struct virtio_net_hdr_mrg_rxbuf *vnethdr; > + > + /* header is at least virtio_net_hdr_mrg_rxbuf */ > + hdrlen +=3D sizeof(struct virtio_net_hdr_mrg_rxbuf); > + > + /* NOLINTNEXTLINE(clang-analyzer-core.UndefinedBinaryOperatorResult) */ > + iov->iov_base =3D (char *)iov->iov_base - hdrlen; Altering the IOV on the fly between the whole frames and just the data payload part I find kind of hard to track. If it's possible I think things would be clearer if we separately kept the "whole frame" and "just payload" IOVs. The former could reasonably remain embedded within the elem structures. 
As a side effect I think that would remove the need for the clang suppression. > + iov->iov_len +=3D hdrlen; > + > + vnethdr =3D iov->iov_base; > + vnethdr->hdr =3D VU_HEADER; > + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) > + vnethdr->num_buffers =3D htole16(num_buffers); > +} > + > +/** > + * vu_flush() - flush all the collected buffers to the vhost-user interf= ace > + * @vdev: vhost-user device > + * @vq: vhost-user virtqueue > + * @elem: virtqueue element array to send back to the virqueue > + * @iov_used: Length of the array > + */ > +void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq, > + struct vu_virtq_element *elem, int elem_cnt) > +{ > + int i; > + > + for (i =3D 0; i < elem_cnt; i++) > + vu_queue_fill(vq, &elem[i], elem[i].in_sg[0].iov_len, i); > + > + vu_queue_flush(vq, elem_cnt); > + vu_queue_notify(vdev, vq); > +} > + > +/** > + * vu_handle_tx() - Receive data from the TX virtqueue > + * @vdev: vhost-user device > + * @index: index of the virtqueue > + * @now: Current timestamp > + */ > +static void vu_handle_tx(struct vu_dev *vdev, int index, > + const struct timespec *now) > +{ > + struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE]; > + struct iovec out_sg[VIRTQUEUE_MAX_SIZE]; > + struct vu_virtq *vq =3D &vdev->vq[index]; > + int hdrlen =3D sizeof(struct virtio_net_hdr_mrg_rxbuf); > + int out_sg_count; > + int count; > + > + if (!VHOST_USER_IS_QUEUE_TX(index)) { > + debug("vhost-user: index %d is not a TX queue", index); > + return; > + } > + > + tap_flush_pools(); > + > + count =3D 0; > + out_sg_count =3D 0; > + while (count < VIRTQUEUE_MAX_SIZE) { > + int ret; > + > + elem[count].out_num =3D 1; > + elem[count].out_sg =3D &out_sg[out_sg_count]; > + elem[count].in_num =3D 0; > + elem[count].in_sg =3D NULL; > + ret =3D vu_queue_pop(vdev, vq, &elem[count]); > + if (ret < 0) > + break; > + out_sg_count +=3D elem[count].out_num; > + > + if (elem[count].out_num < 1) { > + debug("virtio-net header not in first element"); This error message 
doesn't seem right for the check. > + break; > + } > + ASSERT(elem[count].out_num =3D=3D 1); > + > + tap_add_packet(vdev->context, > + elem[count].out_sg[0].iov_len - hdrlen, > + (char *)elem[count].out_sg[0].iov_base + hdrlen); > + count++; > + } > + tap_handler(vdev->context, now); > + > + if (count) { > + int i; > + > + for (i =3D 0; i < count; i++) > + vu_queue_fill(vq, &elem[i], 0, i); > + vu_queue_flush(vq, count); > + vu_queue_notify(vdev, vq); For just recycling buffers, doing the flush/notify for every frame seems even more dubious. > + } > +} > + > +/** > + * vu_kick_cb() - Called on a kick event to start to receive data > + * @vdev: vhost-user device > + * @ref: epoll reference information > + * @now: Current timestamp > + */ > +void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref, > + const struct timespec *now) > +{ > + eventfd_t kick_data; > + ssize_t rc; > + int idx; > + > + for (idx =3D 0; idx < VHOST_USER_MAX_QUEUES; idx++) { > + if (vdev->vq[idx].kick_fd =3D=3D ref.fd) > + break; > + } We should be able to put the queue index into the epoll_ref, rather than doing a linear scan for the right queue. 
> +
> +	if (idx == VHOST_USER_MAX_QUEUES)
> +		return;
> +
> +	rc = eventfd_read(ref.fd, &kick_data);
> +	if (rc == -1)
> +		die_perror("vhost-user kick eventfd_read()");
> +
> +	debug("vhost-user: got kick_data: %016"PRIx64" idx:%d",
> +	      kick_data, idx);
> +	if (VHOST_USER_IS_QUEUE_TX(idx))
> +		vu_handle_tx(vdev, idx, now);
> +}
> +
> +/**
> + * vu_send_single() - Send a buffer to the front-end using the RX virtqueue
> + * @c:		execution context
> + * @buf:	address of the buffer
> + * @size:	size of the buffer
> + *
> + * Return: number of bytes sent, -1 if there is an error
> + */
> +int vu_send_single(const struct ctx *c, const void *buf, size_t size)
> +{
> +	struct vu_dev *vdev = c->vdev;
> +	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
> +	struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
> +	struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
> +	size_t total;
> +	int elem_cnt, max_elem;
> +	int i;
> +
> +	debug("vu_send_single size %zu", size);
> +
> +	if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) {
> +		err("Got packet, but no available descriptors on RX virtq.");

Error message doesn't seem quite right.
> +		return 0;
> +	}
> +
> +	if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
> +		max_elem = VIRTQUEUE_MAX_SIZE;
> +	else
> +		max_elem = 1;
> +
> +	vu_init_elem(elem, in_sg, max_elem);
> +
> +	elem_cnt = vu_collect_one_frame(vdev, vq, elem, max_elem, size,
> +					0, &total);
> +	if (total < size) {
> +		debug("vu_send_single: no space to send the data "
> +		      "elem_cnt %d size %zd", elem_cnt, total);
> +		goto err;
> +	}
> +
> +	vu_set_vnethdr(vdev, in_sg, elem_cnt, 0);
> +
> +	/* copy data from the buffer to the iovec */
> +	iov_from_buf(in_sg, elem_cnt, sizeof(struct virtio_net_hdr_mrg_rxbuf),
> +		     buf, size);
> +
> +	if (*c->pcap) {
> +		pcap_iov(in_sg, elem_cnt,
> +			 sizeof(struct virtio_net_hdr_mrg_rxbuf));
> +	}
> +
> +	vu_flush(vdev, vq, elem, elem_cnt);
> +
> +	debug("vhost-user sent %zu", total);
> +
> +	return total;
> +err:
> +	for (i = 0; i < elem_cnt; i++)
> +		vu_queue_detach_element(vq);
> +
> +	return 0;
> +}
> diff --git a/vu_common.h b/vu_common.h
> new file mode 100644
> index 000000000000..1d6048060059
> --- /dev/null
> +++ b/vu_common.h
> @@ -0,0 +1,47 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later
> + * Copyright Red Hat
> + * Author: Laurent Vivier <lvivier@redhat.com>
> + *
> + * vhost-user common UDP and TCP functions
> + */
> +
> +#ifndef VU_COMMON_H
> +#define VU_COMMON_H
> +#include <linux/virtio_net.h>
> +
> +static inline void *vu_eth(void *base)
> +{
> +	return ((char *)base + sizeof(struct virtio_net_hdr_mrg_rxbuf));
> +}
> +
> +static inline void *vu_ip(void *base)
> +{
> +	return (struct ethhdr *)vu_eth(base) + 1;
> +}
> +
> +static inline void *vu_payloadv4(void *base)
> +{
> +	return (struct iphdr *)vu_ip(base) + 1;
> +}
> +
> +static inline void *vu_payloadv6(void *base)
> +{
> +	return (struct ipv6hdr *)vu_ip(base) + 1;
> +}
> +
> +void vu_init_elem(struct vu_virtq_element *elem, struct iovec *iov,
> +		  int elem_cnt);
> +int vu_collect_one_frame(struct vu_dev *vdev, struct vu_virtq *vq,
> +			 struct vu_virtq_element *elem, int max_elem,
> +			 size_t size, size_t hdrlen,
> +			 size_t *frame_size);
> +int vu_collect(struct vu_dev *vdev, struct vu_virtq *vq,
> +	       struct vu_virtq_element *elem, int max_elem,
> +	       size_t max_frame_size, size_t hdrlen, size_t size);
> +void vu_set_vnethdr(const struct vu_dev *vdev, struct iovec *iov,
> +		    int num_buffers, size_t hdrlen);
> +void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
> +	      struct vu_virtq_element *elem, int elem_cnt);
> +void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
> +		const struct timespec *now);
> +int vu_send_single(const struct ctx *c, const void *buf, size_t size);
> +#endif /* VU_COMMON_H */

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson