Date: Fri, 23 Aug 2024 14:32:36 +0200
From: Stefano Brivio
To: Laurent Vivier
Cc: passt-dev@passt.top
Subject: Re: [PATCH v3 4/4] vhost-user: add vhost-user
Message-ID: <20240823143236.25ca0ac7@elisabeth>
In-Reply-To: <20240815155024.827956-5-lvivier@redhat.com>
References: <20240815155024.827956-1-lvivier@redhat.com> <20240815155024.827956-5-lvivier@redhat.com>

Second part of review:

On Thu, 15 Aug 2024 17:50:23 +0200
Laurent Vivier wrote:

> +int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)

Missing function comment. This one is especially relevant because I'm
not sure where the boundary is between this and tcp_vu_sock_recv().

> +{
> +	uint32_t wnd_scaled = conn->wnd_from_tap << conn->ws_from_tap;
> +	struct vu_dev *vdev = c->vdev;
> +	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
> +	const struct flowside *tapside = TAPFLOW(conn);
> +	uint16_t mss = MSS_GET(conn);
> +	size_t l2_hdrlen, fillsize;
> +	int i, iov_cnt, iov_used;
> +	int v4 = CONN_V4(conn);
> +	uint32_t already_sent;
> +	const uint16_t *check;
> +	struct iovec *first;
> +	int segment_size;
> +	int num_buffers;
> +	ssize_t len;
> +
> +	if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) {
> +		flow_err(conn,
> +			 "Got packet, but no available descriptors on RX virtq.");

That seems to rather describe the case where tcp_vu_sock_recv() gets
< 0 from vu_queue_pop(). Strictly speaking, here, there are no
descriptors available either, but the message looks misleading. What
about:

	flow_err(conn, "Got packet, but RX virtqueue not usable yet");

?

> +		return 0;
> +	}
> +
> +	already_sent = conn->seq_to_tap - conn->seq_ack_from_tap;
> +
> +	if (SEQ_LT(already_sent, 0)) {
> +		/* RFC 761, section 2.1. */
> +		flow_trace(conn, "ACK sequence gap: ACK for %u, sent: %u",
> +			   conn->seq_ack_from_tap, conn->seq_to_tap);
> +		conn->seq_to_tap = conn->seq_ack_from_tap;
> +		already_sent = 0;
> +	}
> +
> +	if (!wnd_scaled || already_sent >= wnd_scaled) {
> +		conn_flag(c, conn, STALLED);
> +		conn_flag(c, conn, ACK_FROM_TAP_DUE);
> +		return 0;
> +	}
> +
> +	/* Set up buffer descriptors we'll fill completely and partially. */
> +
> +	fillsize = wnd_scaled;
> +
> +	iov_vu[0].iov_base = tcp_buf_discard;

This should now be conditional to if (!peek_offset_cap), see the
(updated) tcp_buf_data_from_sock().
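That is, I guess something along these lines (untested sketch, and the
length accounting further down would need to match whatever
tcp_buf_data_from_sock() now does when peek_offset_cap is set):

	if (!peek_offset_cap) {
		iov_vu[0].iov_base = tcp_buf_discard;
		iov_vu[0].iov_len = already_sent;
		fillsize -= already_sent;
	}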
> + iov_vu[0].iov_len = already_sent; > + fillsize -= already_sent; > + > + iov_cnt = tcp_vu_sock_recv(c, conn, v4, fillsize, mss, &len); > + if (iov_cnt <= 0) > + return iov_cnt; > + > + len -= already_sent; > + if (len <= 0) { > + conn_flag(c, conn, STALLED); > + vu_queue_rewind(vq, iov_cnt); > + return 0; > + } > + > + conn_flag(c, conn, ~STALLED); > + > + /* Likely, some new data was acked too. */ > + tcp_update_seqack_wnd(c, conn, 0, NULL); > + > + /* initialize headers */ > + l2_hdrlen = tcp_vu_l2_hdrlen(vdev, !v4); > + iov_used = 0; > + num_buffers = 0; > + check = NULL; > + segment_size = 0; > + for (i = 0; i < iov_cnt && len; i++) { > + > + if (segment_size == 0) > + first = &iov_vu[i + 1]; I don't understand why we have this loop on top of the loop from tcp_vu_sock_recv(). I mean, it works, but this is a bit obscure to me. I didn't really manage to review this function. > + > + if (iov_vu[i + 1].iov_len > (size_t)len) > + iov_vu[i + 1].iov_len = len; > + > + len -= iov_vu[i + 1].iov_len; > + iov_used++; > + > + segment_size += iov_vu[i + 1].iov_len; > + num_buffers++; > + > + if (segment_size >= mss || len == 0 || > + i + 1 == iov_cnt || vdev->hdrlen != sizeof(struct virtio_net_hdr_mrg_rxbuf)) { > + struct virtio_net_hdr_mrg_rxbuf *vh; > + size_t l4len; > + > + if (i + 1 == iov_cnt) > + check = NULL; > + > + /* restore first iovec base: point to vnet header */ > + first->iov_base = (char *)first->iov_base - l2_hdrlen; > + first->iov_len = first->iov_len + l2_hdrlen; > + > + vh = first->iov_base; > + > + vh->hdr = vu_header; > + if (vdev->hdrlen == sizeof(struct virtio_net_hdr_mrg_rxbuf)) > + vh->num_buffers = htole16(num_buffers); > + > + l4len = tcp_vu_prepare(c, conn, first, segment_size, &check); > + > + tcp_vu_pcap(c, tapside, first, num_buffers, l4len); > + > + conn->seq_to_tap += segment_size; > + > + segment_size = 0; > + num_buffers = 0; > + } > + } > + > + /* release unused buffers */ > + vu_queue_rewind(vq, iov_cnt - iov_used); > + > + /* send packets */ > + vu_send_frame(vdev, vq, elem, &iov_vu[1], iov_used); > + > + conn_flag(c, conn, ACK_FROM_TAP_DUE); > + > + return 0; > +} > diff --git a/tcp_vu.h b/tcp_vu.h > new file mode 100644 > index 000000000000..99daba5b34ed > --- /dev/null > +++ b/tcp_vu.h > @@ -0,0 +1,12 @@ > +/* SPDX-License-Identifier: GPL-2.0-or-later > + * Copyright Red Hat > + * Author: Laurent Vivier > + */ > + > +#ifndef TCP_VU_H > +#define TCP_VU_H > + > +int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags); > +int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn); > + > +#endif /*TCP_VU_H */ > diff --git a/udp.c b/udp.c > index 7731257292e1..4d2afc62478a 100644 > --- a/udp.c > +++ b/udp.c > @@ -109,8 +109,7 @@ > #include "pcap.h" > #include "log.h" > #include "flow_table.h" > - > -#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */ > +#include "udp_internal.h" > > /* "Spliced" sockets indexed by bound port (host order) */ > static int udp_splice_ns [IP_VERSIONS][NUM_PORTS]; > @@ -118,20 +117,8 @@ static int udp_splice_init[IP_VERSIONS][NUM_PORTS]; > > /* Static buffers */ > > -/** > - * struct udp_payload_t - UDP header and data for inbound messages > - * @uh: UDP header > - * @data: UDP data > - */ > -static struct udp_payload_t { > - struct udphdr uh; > - char data[USHRT_MAX - sizeof(struct udphdr)]; > -#ifdef __AVX2__ > -} __attribute__ ((packed, aligned(32))) > -#else > -} __attribute__ ((packed, aligned(__alignof__(unsigned int)))) > -#endif > -udp_payload[UDP_MAX_FRAMES]; > +/* UDP header and 
data for inbound messages */ > +static struct udp_payload_t udp_payload[UDP_MAX_FRAMES]; > > /* Ethernet header for IPv4 frames */ > static struct ethhdr udp4_eth_hdr; > @@ -311,6 +298,7 @@ static void udp_splice_send(const struct ctx *c, size_t start, size_t n, > > /** > * udp_update_hdr4() - Update headers for one IPv4 datagram > + * @c: Execution context > * @ip4h: Pre-filled IPv4 header (except for tot_len and saddr) > * @bp: Pointer to udp_payload_t to update > * @toside: Flowside for destination side > @@ -318,8 +306,9 @@ static void udp_splice_send(const struct ctx *c, size_t start, size_t n, > * > * Return: size of IPv4 payload (UDP header + data) > */ > -static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, > - const struct flowside *toside, size_t dlen) > +size_t udp_update_hdr4(const struct ctx *c, > + struct iphdr *ip4h, struct udp_payload_t *bp, > + const struct flowside *toside, size_t dlen) > { > const struct in_addr *src = inany_v4(&toside->faddr); > const struct in_addr *dst = inany_v4(&toside->eaddr); > @@ -336,13 +325,17 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, > bp->uh.source = htons(toside->fport); > bp->uh.dest = htons(toside->eport); > bp->uh.len = htons(l4len); > - csum_udp4(&bp->uh, *src, *dst, bp->data, dlen); > + if (c->mode != MODE_VU) > + csum_udp4(&bp->uh, *src, *dst, bp->data, dlen); > + else > + bp->uh.check = 0; > > return l4len; > } > > /** > * udp_update_hdr6() - Update headers for one IPv6 datagram > + * @c: Execution context > * @ip6h: Pre-filled IPv6 header (except for payload_len and addresses) > * @bp: Pointer to udp_payload_t to update > * @toside: Flowside for destination side > @@ -350,8 +343,9 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, > * > * Return: size of IPv6 payload (UDP header + data) > */ > -static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, > - const struct flowside *toside, size_t dlen) > +size_t udp_update_hdr6(const struct ctx *c, > + struct ipv6hdr *ip6h, struct udp_payload_t *bp, > + const struct flowside *toside, size_t dlen) > { > uint16_t l4len = dlen + sizeof(bp->uh); > > @@ -365,19 +359,24 @@ static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, > bp->uh.source = htons(toside->fport); > bp->uh.dest = htons(toside->eport); > bp->uh.len = ip6h->payload_len; > - csum_udp6(&bp->uh, &toside->faddr.a6, &toside->eaddr.a6, bp->data, dlen); > + if (c->mode != MODE_VU) Curly brackets for multi-line statement. > + csum_udp6(&bp->uh, &toside->faddr.a6, &toside->eaddr.a6, > + bp->data, dlen); > + else > + bp->uh.check = 0xffff; /* zero checksum is invalid with IPv6 */ 0xffff is the value virtio requires in case we want to ignore the checksum, right? Or would any non-zero value work? The comment should say that. 
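Assuming the point is simply that a zero checksum is not allowed for
UDP over IPv6 (RFC 8200) and that the checksum isn't actually computed
on this path, maybe something like (adjust to whatever the real reason
is):

	bp->uh.check = 0xffff; /* UDP checksum can't be zero with IPv6
				* (RFC 8200), and we don't compute it
				* here, so use ~0 instead
				*/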
> > return l4len; > } > > /** > * udp_tap_prepare() - Convert one datagram into a tap frame > + * @c: Execution context > * @mmh: Receiving mmsghdr array > * @idx: Index of the datagram to prepare > * @toside: Flowside for destination side > */ > -static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx, > - const struct flowside *toside) > +static void udp_tap_prepare(const struct ctx *c, const struct mmsghdr *mmh, > + unsigned idx, const struct flowside *toside) > { > struct iovec (*tap_iov)[UDP_NUM_IOVS] = &udp_l2_iov[idx]; > struct udp_payload_t *bp = &udp_payload[idx]; > @@ -385,13 +384,15 @@ static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx, > size_t l4len; > > if (!inany_v4(&toside->eaddr) || !inany_v4(&toside->faddr)) { > - l4len = udp_update_hdr6(&bm->ip6h, bp, toside, mmh[idx].msg_len); > + l4len = udp_update_hdr6(c, &bm->ip6h, bp, toside, > + mmh[idx].msg_len); > tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip6h) + > sizeof(udp6_eth_hdr)); > (*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp6_eth_hdr); > (*tap_iov)[UDP_IOV_IP] = IOV_OF_LVALUE(bm->ip6h); > } else { > - l4len = udp_update_hdr4(&bm->ip4h, bp, toside, mmh[idx].msg_len); > + l4len = udp_update_hdr4(c, &bm->ip4h, bp, toside, > + mmh[idx].msg_len); > tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip4h) + > sizeof(udp4_eth_hdr)); > (*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp4_eth_hdr); > @@ -408,7 +409,7 @@ static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx, > * > * #syscalls recvmsg > */ > -static bool udp_sock_recverr(int s) > +bool udp_sock_recverr(int s) > { > const struct sock_extended_err *ee; > const struct cmsghdr *hdr; > @@ -495,7 +496,7 @@ static int udp_sock_recv(const struct ctx *c, int s, uint32_t events, > } > > /** > - * udp_listen_sock_handler() - Handle new data from socket > + * udp_buf_listen_sock_handler() - Handle new data from socket > * @c: Execution context > * @ref: epoll reference > * @events: epoll events bitmap > @@ -503,8 +504,8 @@ static int udp_sock_recv(const struct ctx *c, int s, uint32_t events, > * > * #syscalls recvmmsg > */ > -void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > - uint32_t events, const struct timespec *now) > +void udp_buf_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now) > { > struct mmsghdr *mmh_recv = ref.udp.v6 ? 
udp6_mh_recv : udp4_mh_recv; > int n, i; > @@ -527,7 +528,7 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > if (pif_is_socket(batchpif)) { > udp_splice_prepare(mmh_recv, i); > } else if (batchpif == PIF_TAP) { > - udp_tap_prepare(mmh_recv, i, > + udp_tap_prepare(c, mmh_recv, i, > flowside_at_sidx(batchsidx)); > } > > @@ -561,7 +562,7 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > } > > /** > - * udp_reply_sock_handler() - Handle new data from flow specific socket > + * udp_buf_reply_sock_handler() - Handle new data from flow specific socket > * @c: Execution context > * @ref: epoll reference > * @events: epoll events bitmap > @@ -569,8 +570,8 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > * > * #syscalls recvmmsg > */ > -void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > - uint32_t events, const struct timespec *now) > +void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now) > { > const struct flowside *fromside = flowside_at_sidx(ref.flowside); > flow_sidx_t tosidx = flow_sidx_opposite(ref.flowside); > @@ -594,7 +595,7 @@ void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > if (pif_is_socket(topif)) > udp_splice_prepare(mmh_recv, i); > else if (topif == PIF_TAP) > - udp_tap_prepare(mmh_recv, i, toside); > + udp_tap_prepare(c, mmh_recv, i, toside); > } > > if (pif_is_socket(topif)) { > diff --git a/udp.h b/udp.h > index fb42e1c50d70..77b29260e8d1 100644 > --- a/udp.h > +++ b/udp.h > @@ -9,10 +9,10 @@ > #define UDP_TIMER_INTERVAL 1000 /* ms */ > > void udp_portmap_clear(void); > -void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > - uint32_t events, const struct timespec *now); > -void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > - uint32_t events, const struct timespec *now); > +void udp_buf_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now); > +void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now); > int udp_tap_handler(const struct ctx *c, uint8_t pif, > sa_family_t af, const void *saddr, const void *daddr, > const struct pool *p, int idx, const struct timespec *now); > diff --git a/udp_internal.h b/udp_internal.h > new file mode 100644 > index 000000000000..7dd45753698f > --- /dev/null > +++ b/udp_internal.h > @@ -0,0 +1,34 @@ > +/* SPDX-License-Identifier: GPL-2.0-or-later // SPDX... 
> + * Copyright (c) 2021 Red Hat GmbH > + * Author: Stefano Brivio > + */ > + > +#ifndef UDP_INTERNAL_H > +#define UDP_INTERNAL_H > + > +#include "tap.h" /* needed by udp_meta_t */ > + > +#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */ > + > +/** > + * struct udp_payload_t - UDP header and data for inbound messages > + * @uh: UDP header > + * @data: UDP data > + */ > +struct udp_payload_t { > + struct udphdr uh; > + char data[USHRT_MAX - sizeof(struct udphdr)]; > +#ifdef __AVX2__ > +} __attribute__ ((packed, aligned(32))); > +#else > +} __attribute__ ((packed, aligned(__alignof__(unsigned int)))); > +#endif > + > +size_t udp_update_hdr4(const struct ctx *c, > + struct iphdr *ip4h, struct udp_payload_t *bp, > + const struct flowside *toside, size_t dlen); > +size_t udp_update_hdr6(const struct ctx *c, > + struct ipv6hdr *ip6h, struct udp_payload_t *bp, > + const struct flowside *toside, size_t dlen); > +bool udp_sock_recverr(int s); > +#endif /* UDP_INTERNAL_H */ > diff --git a/udp_vu.c b/udp_vu.c > new file mode 100644 > index 000000000000..f9e7afcf4ddb > --- /dev/null > +++ b/udp_vu.c > @@ -0,0 +1,338 @@ > +/* SPDX-License-Identifier: GPL-2.0-or-later // SPDX... > + * Copyright Red Hat > + * Author: Laurent Vivier > + * > + * udp_vu.c - UDP L2 vhost-user management functions > + */ > + > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > + > +#include "checksum.h" > +#include "util.h" > +#include "ip.h" > +#include "siphash.h" > +#include "inany.h" > +#include "passt.h" > +#include "pcap.h" > +#include "log.h" > +#include "vhost_user.h" > +#include "udp_internal.h" > +#include "flow.h" > +#include "flow_table.h" > +#include "udp_flow.h" > +#include "udp_vu.h" > +#include "vu_common.h" > + > +/* vhost-user */ > +static const struct virtio_net_hdr vu_header = { > + .flags = VIRTIO_NET_HDR_F_DATA_VALID, > + .gso_type = VIRTIO_NET_HDR_GSO_NONE, > +}; > + > +static struct iovec iov_vu [VIRTQUEUE_MAX_SIZE]; > +static struct vu_virtq_element elem [VIRTQUEUE_MAX_SIZE]; > +static struct iovec in_sg[VIRTQUEUE_MAX_SIZE]; > +static int in_sg_count; > + > +static size_t udp_vu_l2_hdrlen(const struct vu_dev *vdev, bool v6) Function comment missing. > +{ > + size_t l2_hdrlen; > + > + l2_hdrlen = vdev->hdrlen + sizeof(struct ethhdr) + > + sizeof(struct udphdr); > + > + if (v6) > + l2_hdrlen += sizeof(struct ipv6hdr); > + else > + l2_hdrlen += sizeof(struct iphdr); > + > + return l2_hdrlen; > +} > + > +static int udp_vu_sock_recv(const struct ctx *c, union sockaddr_inany *s_in, > + int s, uint32_t events, bool v6, ssize_t *data_len) Function comment missing. 
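Going by what the function does, maybe something like this (the
parameter descriptions are just my guess from the code below, please
double-check):

	/**
	 * udp_vu_sock_recv() - Receive one datagram into vhost-user buffers
	 * @c:		Execution context
	 * @s_in:	Source socket address, filled in by recvmsg()
	 * @s:		Socket to receive from
	 * @events:	epoll events bitmap
	 * @v6:		Set for IPv6 connections
	 * @data_len:	Size of received data, set on return
	 *
	 * Return: number of buffers used to store the datagram, 0 on failure
	 */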
> +{ > + struct vu_dev *vdev = c->vdev; > + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; > + int virtqueue_max, iov_cnt, idx, iov_used; > + size_t fillsize, size, off, l2_hdrlen; > + struct virtio_net_hdr_mrg_rxbuf *vh; > + struct msghdr msg = { 0 }; > + char *base; > + > + ASSERT(!c->no_udp); > + > + /* Clear any errors first */ > + if (events & EPOLLERR) { > + while (udp_sock_recverr(s)) > + ; > + } > + > + if (!(events & EPOLLIN)) > + return 0; > + > + /* compute L2 header length */ > + > + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) > + virtqueue_max = VIRTQUEUE_MAX_SIZE; > + else > + virtqueue_max = 1; > + > + l2_hdrlen = udp_vu_l2_hdrlen(vdev, v6); > + > + msg.msg_name = s_in; > + msg.msg_namelen = sizeof(union sockaddr_inany); > + > + fillsize = USHRT_MAX; > + iov_cnt = 0; > + in_sg_count = 0; > + while (fillsize && iov_cnt < virtqueue_max && > + in_sg_count < ARRAY_SIZE(in_sg)) { This is much easier to understand compared to the TCP version, by the way, but of course the TCP version needs to be a bit more complicated because of the segmentation. Both iov_cnt and in_sg_count are indices rather than... counts, right? What about iov_idx (or i) and in_sg_idx? > + int ret; > + > + elem[iov_cnt].out_num = 0; > + elem[iov_cnt].out_sg = NULL; > + elem[iov_cnt].in_num = ARRAY_SIZE(in_sg) - in_sg_count; > + elem[iov_cnt].in_sg = &in_sg[in_sg_count]; > + ret = vu_queue_pop(vdev, vq, &elem[iov_cnt]); > + if (ret < 0) > + break; > + in_sg_count += elem[iov_cnt].in_num; > + > + if (elem[iov_cnt].in_num < 1) { > + err("virtio-net receive queue contains no in buffers"); > + vu_queue_rewind(vq, iov_cnt); > + return 0; > + } > + ASSERT(elem[iov_cnt].in_num == 1); > + ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen); > + > + if (iov_cnt == 0) { > + base = elem[iov_cnt].in_sg[0].iov_base; > + size = elem[iov_cnt].in_sg[0].iov_len; > + > + /* keep space for the headers */ > + iov_vu[0].iov_base = base + l2_hdrlen; > + iov_vu[0].iov_len = size - l2_hdrlen; > + } else { > + iov_vu[iov_cnt].iov_base = elem[iov_cnt].in_sg[0].iov_base; > + iov_vu[iov_cnt].iov_len = elem[iov_cnt].in_sg[0].iov_len; > + } > + > + if (iov_vu[iov_cnt].iov_len > fillsize) > + iov_vu[iov_cnt].iov_len = fillsize; > + > + fillsize -= iov_vu[iov_cnt].iov_len; > + > + iov_cnt++; > + } > + if (iov_cnt == 0) > + return 0; > + > + msg.msg_iov = iov_vu; > + msg.msg_iovlen = iov_cnt; > + > + *data_len = recvmsg(s, &msg, 0); Is recvmsg() instead of recvmmsg() by choice/constraint, or just to keep the initial implementation simpler? > + if (*data_len < 0) { > + vu_queue_rewind(vq, iov_cnt); > + return 0; > + } > + > + /* restore original values */ > + iov_vu[0].iov_base = base; > + iov_vu[0].iov_len = size; > + > + /* count the numbers of buffer filled by recvmsg() */ > + idx = iov_skip_bytes(iov_vu, iov_cnt, l2_hdrlen + *data_len, > + &off); > + /* adjust last iov length */ > + if (idx < iov_cnt) > + iov_vu[idx].iov_len = off; > + iov_used = idx + !!off; > + > + /* release unused buffers */ > + vu_queue_rewind(vq, iov_cnt - iov_used); > + > + vh = (struct virtio_net_hdr_mrg_rxbuf *)base; > + vh->hdr = vu_header; > + if (vdev->hdrlen == sizeof(struct virtio_net_hdr_mrg_rxbuf)) > + vh->num_buffers = htole16(iov_used); > + > + return iov_used; > +} > + > +static size_t udp_vu_prepare(const struct ctx *c, > + const struct flowside *toside, ssize_t data_len) Function comment missing. 
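Same here, perhaps along these lines (again inferred from the code, so
take it with a grain of salt):

	/**
	 * udp_vu_prepare() - Prepare the packet header
	 * @c:		Execution context
	 * @toside:	Flowside for destination side
	 * @data_len:	Length of UDP payload data
	 *
	 * Return: size of IPv4/IPv6 payload (UDP header + data)
	 */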
> +{ > + const struct vu_dev *vdev = c->vdev; > + struct ethhdr *eh; > + size_t l4len; > + > + /* ethernet header */ > + eh = vu_eth(vdev, iov_vu[0].iov_base); > + > + memcpy(eh->h_dest, c->mac_guest, sizeof(eh->h_dest)); > + memcpy(eh->h_source, c->mac, sizeof(eh->h_source)); > + > + /* initialize header */ > + if (inany_v4(&toside->eaddr) && inany_v4(&toside->faddr)) { > + struct iphdr *iph = vu_ip(vdev, iov_vu[0].iov_base); > + struct udp_payload_t *bp = vu_payloadv4(vdev, > + iov_vu[0].iov_base); > + > + eh->h_proto = htons(ETH_P_IP); > + > + *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_UDP); > + > + l4len = udp_update_hdr4(c, iph, bp, toside, data_len); > + } else { > + struct ipv6hdr *ip6h = vu_ip(vdev, iov_vu[0].iov_base); > + struct udp_payload_t *bp = vu_payloadv6(vdev, > + iov_vu[0].iov_base); > + > + eh->h_proto = htons(ETH_P_IPV6); > + > + *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_UDP); > + > + l4len = udp_update_hdr6(c, ip6h, bp, toside, data_len); > + } > + > + return l4len; > +} > + > +static void udp_vu_pcap(const struct ctx *c, const struct flowside *toside, > + size_t l4len, int iov_used) Function comment missing. > +{ > + const struct in_addr *src = inany_v4(&toside->faddr); > + const struct in_addr *dst = inany_v4(&toside->eaddr); > + const struct vu_dev *vdev = c->vdev; > + char *base = iov_vu[0].iov_base; > + size_t size = iov_vu[0].iov_len; > + struct udp_payload_t *bp; > + uint32_t sum; > + > + if (!*c->pcap) > + return; > + > + if (src && dst) { If we call them src4 and dst4, this logic becomes easier to understand. > + bp = vu_payloadv4(vdev, base); > + sum = proto_ipv4_header_psum(l4len, IPPROTO_UDP, *src, *dst); > + } else { > + bp = vu_payloadv6(vdev, base); > + sum = proto_ipv6_header_psum(l4len, IPPROTO_UDP, > + &toside->faddr.a6, > + &toside->eaddr.a6); > + bp->uh.check = 0; /* by default, set to 0xffff */ > + } > + > + iov_vu[0].iov_base = &bp->uh; > + iov_vu[0].iov_len = size - ((char *)iov_vu[0].iov_base - base); > + > + bp->uh.check = csum_iov(iov_vu, iov_used, sum); > + > + /* set iov for pcap logging */ > + iov_vu[0].iov_base = base + vdev->hdrlen; > + iov_vu[0].iov_len = size - vdev->hdrlen; > + pcap_iov(iov_vu, iov_used); > + > + /* restore iov_vu[0] */ > + iov_vu[0].iov_base = base; > + iov_vu[0].iov_len = size; > +} > + > +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now) Function comment missing. 
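This one could presumably just mirror the comment of
udp_buf_listen_sock_handler(), something like:

	/**
	 * udp_vu_listen_sock_handler() - Handle new data from socket
	 * @c:		Execution context
	 * @ref:	epoll reference
	 * @events:	epoll events bitmap
	 * @now:	Current timestamp
	 */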
> +{ > + struct vu_dev *vdev = c->vdev; > + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; > + bool v6 = ref.udp.v6; > + int i; > + > + for (i = 0; i < UDP_MAX_FRAMES; i++) { > + union sockaddr_inany s_in; > + flow_sidx_t batchsidx; > + uint8_t batchpif; > + ssize_t data_len; > + int iov_used; > + > + iov_used = udp_vu_sock_recv(c, &s_in, ref.fd, > + events, v6, &data_len); > + if (iov_used <= 0) > + return; > + > + batchsidx = udp_flow_from_sock(c, ref, &s_in, now); > + batchpif = pif_at_sidx(batchsidx); > + > + if (batchpif == PIF_TAP) { > + size_t l4len; > + > + l4len = udp_vu_prepare(c, flowside_at_sidx(batchsidx), > + data_len); > + udp_vu_pcap(c, flowside_at_sidx(batchsidx), l4len, > + iov_used); > + vu_send_frame(vdev, vq, elem, iov_vu, iov_used); > + } else if (flow_sidx_valid(batchsidx)) { > + flow_sidx_t fromsidx = flow_sidx_opposite(batchsidx); > + struct udp_flow *uflow = udp_at_sidx(batchsidx); > + > + flow_err(uflow, > + "No support for forwarding UDP from %s to %s", > + pif_name(pif_at_sidx(fromsidx)), > + pif_name(batchpif)); > + } else { > + debug("Discarding 1 datagram without flow"); > + } > + } > +} > + > +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now) Function comment missing. > +{ > + flow_sidx_t tosidx = flow_sidx_opposite(ref.flowside); > + const struct flowside *toside = flowside_at_sidx(tosidx); > + struct vu_dev *vdev = c->vdev; > + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; > + struct udp_flow *uflow = udp_at_sidx(ref.flowside); > + uint8_t topif = pif_at_sidx(tosidx); > + bool v6 = ref.udp.v6; > + int i; > + > + ASSERT(!c->no_udp && uflow); Pre-existing (in udp_buf_reply_sock_handler()), but I'd rather keep this as two assertions, so that, should one ever trigger, we know which case it is. > + > + for (i = 0; i < UDP_MAX_FRAMES; i++) { > + union sockaddr_inany s_in; > + ssize_t data_len; > + int iov_used; > + > + iov_used = udp_vu_sock_recv(c, &s_in, ref.fd, > + events, v6, &data_len); > + if (iov_used <= 0) > + return; > + flow_trace(uflow, "Received 1 datagram on reply socket"); > + uflow->ts = now->tv_sec; > + > + if (topif == PIF_TAP) { > + size_t l4len; > + > + l4len = udp_vu_prepare(c, toside, data_len); > + udp_vu_pcap(c, toside, l4len, iov_used); > + vu_send_frame(vdev, vq, elem, iov_vu, iov_used); > + } else { > + uint8_t frompif = pif_at_sidx(ref.flowside); > + > + flow_err(uflow, > + "No support for forwarding UDP from %s to %s", > + pif_name(frompif), pif_name(topif)); > + } > + } > +} > diff --git a/udp_vu.h b/udp_vu.h > new file mode 100644 > index 000000000000..0db7558914d9 > --- /dev/null > +++ b/udp_vu.h > @@ -0,0 +1,13 @@ > +/* SPDX-License-Identifier: GPL-2.0-or-later > + * Copyright Red Hat > + * Author: Laurent Vivier > + */ > + > +#ifndef UDP_VU_H > +#define UDP_VU_H > + > +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now); > +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now); > +#endif /* UDP_VU_H */ > diff --git a/vhost_user.c b/vhost_user.c > index c4cd25fae84e..e65b550774b7 100644 > --- a/vhost_user.c > +++ b/vhost_user.c > @@ -52,7 +52,6 @@ > * this is part of the vhost-user backend > * convention. 
> */ > -/* cppcheck-suppress unusedFunction */ > void vu_print_capabilities(void) > { > info("{"); > @@ -163,8 +162,7 @@ static void vmsg_close_fds(const struct vhost_user_msg *vmsg) > */ > static void vu_remove_watch(const struct vu_dev *vdev, int fd) > { > - (void)vdev; > - (void)fd; > + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_DEL, fd, NULL); > } > > /** > @@ -426,7 +424,6 @@ static bool map_ring(struct vu_dev *vdev, struct vu_virtq *vq) > * > * Return: 0 if the zone in a mapped memory region, -1 otherwise > */ > -/* cppcheck-suppress unusedFunction */ > int vu_packet_check_range(void *buf, size_t offset, size_t len, > const char *start) > { > @@ -517,6 +514,14 @@ static bool vu_set_mem_table_exec(struct vu_dev *vdev, > } > } > > + /* As vu_packet_check_range() has no access to the number of > + * memory regions, mark the end of the array with mmap_addr = 0 > + */ > + ASSERT(vdev->nregions < VHOST_USER_MAX_RAM_SLOTS - 1); > + vdev->regions[vdev->nregions].mmap_addr = 0; > + > + tap_sock_update_buf(vdev->regions, 0); > + > return false; > } > > @@ -637,8 +642,12 @@ static bool vu_get_vring_base_exec(struct vu_dev *vdev, > */ > static void vu_set_watch(const struct vu_dev *vdev, int fd) > { > - (void)vdev; > - (void)fd; > + union epoll_ref ref = { .type = EPOLL_TYPE_VHOST_KICK, .fd = fd }; > + struct epoll_event ev = { 0 }; > + > + ev.data.u64 = ref.u64; > + ev.events = EPOLLIN; > + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_ADD, fd, &ev); > } > > /** > @@ -678,7 +687,6 @@ static int vu_wait_queue(const struct vu_virtq *vq) > * > * Return: number of bytes sent, -1 if there is an error > */ > -/* cppcheck-suppress unusedFunction */ > int vu_send(struct vu_dev *vdev, const void *buf, size_t size) > { > struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; > @@ -864,7 +872,6 @@ static void vu_handle_tx(struct vu_dev *vdev, int index, > * @vdev: vhost-user device > * @ref: epoll reference information > */ > -/* cppcheck-suppress unusedFunction */ > void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref, > const struct timespec *now) > { > @@ -1102,11 +1109,11 @@ static bool vu_set_vring_enable_exec(struct vu_dev *vdev, > * @c: execution context > * @vdev: vhost-user device > */ > -/* cppcheck-suppress unusedFunction */ > void vu_init(struct ctx *c, struct vu_dev *vdev) > { > int i; > > + c->vdev = vdev; > vdev->context = c; > vdev->hdrlen = 0; > for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) { > @@ -1170,7 +1177,7 @@ void vu_cleanup(struct vu_dev *vdev) > */ > static void vu_sock_reset(struct vu_dev *vdev) > { > - (void)vdev; > + tap_sock_reset(vdev->context); > } > > /** > @@ -1179,7 +1186,6 @@ static void vu_sock_reset(struct vu_dev *vdev) > * @fd: vhost-user message socket > * @events: epoll events > */ > -/* cppcheck-suppress unusedFunction */ > void tap_handler_vu(struct vu_dev *vdev, int fd, uint32_t events) > { > struct vhost_user_msg msg = { 0 }; > diff --git a/virtio.c b/virtio.c > index d02e6e04701d..55fc647842bb 100644 > --- a/virtio.c > +++ b/virtio.c > @@ -559,7 +559,6 @@ void vu_queue_unpop(struct vu_virtq *vq) > * @vq: Virtqueue > * @num: Number of element to unpop > */ > -/* cppcheck-suppress unusedFunction */ > bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num) > { > if (num > vq->inuse) > diff --git a/vu_common.c b/vu_common.c > new file mode 100644 > index 000000000000..611c44a39142 > --- /dev/null > +++ b/vu_common.c > @@ -0,0 +1,27 @@ > +/* SPDX-License-Identifier: GPL-2.0-or-later // SPDX... 
> + * Copyright Red Hat > + * Author: Laurent Vivier > + * > + * common_vu.c - vhost-user common UDP and TCP functions > + */ > + > +#include > +#include > + > +#include "util.h" > +#include "passt.h" > +#include "vhost_user.h" > +#include "vu_common.h" > + > +void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq, > + struct vu_virtq_element *elem, const struct iovec *iov_vu, > + int iov_used) Function comment missing. > +{ > + int i; > + > + for (i = 0; i < iov_used; i++) > + vu_queue_fill(vq, &elem[i], iov_vu[i].iov_len, i); > + > + vu_queue_flush(vq, iov_used); > + vu_queue_notify(vdev, vq); > +} > diff --git a/vu_common.h b/vu_common.h > new file mode 100644 > index 000000000000..d2ea46bd379b > --- /dev/null > +++ b/vu_common.h > @@ -0,0 +1,34 @@ > +/* SPDX-License-Identifier: GPL-2.0-or-later > + * Copyright Red Hat > + * Author: Laurent Vivier > + * > + * vhost-user common UDP and TCP functions > + */ > + > +#ifndef VU_COMMON_H > +#define VU_COMMON_H > + > +static inline void *vu_eth(const struct vu_dev *vdev, void *base) Function comments missing (but these are really obvious ones, so I don't actually care that much). > +{ > + return ((char *)base + vdev->hdrlen); > +} > + > +static inline void *vu_ip(const struct vu_dev *vdev, void *base) > +{ > + return (struct ethhdr *)vu_eth(vdev, base) + 1; > +} > + > +static inline void *vu_payloadv4(const struct vu_dev *vdev, void *base) > +{ > + return (struct iphdr *)vu_ip(vdev, base) + 1; > +} > + > +static inline void *vu_payloadv6(const struct vu_dev *vdev, void *base) > +{ > + return (struct ipv6hdr *)vu_ip(vdev, base) + 1; > +} > + > +void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq, > + struct vu_virtq_element *elem, const struct iovec *iov_vu, > + int iov_used); > +#endif /* VU_COMMON_H */ -- Stefano