Date: Wed, 10 Jul 2024 19:13:26 +0200
From: Stefano Brivio <sbrivio@redhat.com>
To: David Gibson
Cc: passt-dev@passt.top, jmaloy@redhat.com
Subject: Re: [PATCH v7 21/27] udp: Handle "spliced" datagrams with per-flow sockets
Message-ID: <20240710191316.53b7ac5d@elisabeth>
References: <20240705020724.3447719-1-david@gibson.dropbear.id.au> <20240705020724.3447719-22-david@gibson.dropbear.id.au> <20240710003233.1d9937fd@elisabeth>
Organization: Red Hat
List-Id: Development discussion and patches for passt
On Wed, 10 Jul 2024 10:23:14 +1000
David Gibson wrote:

> On Wed, Jul 10, 2024 at 12:32:33AM +0200, Stefano Brivio wrote:
> > On Fri, 5 Jul 2024 12:07:18 +1000
> > David Gibson wrote:
> > 
> > > When forwarding a datagram to a socket, we need to find a socket with a
> > > suitable local address to send it. Currently we keep track of such sockets
> > > in an array indexed by local port, but this can't properly handle cases
> > > where we have multiple local addresses in active use.
> > >
> > > For "spliced" (socket to socket) cases, improve this by instead opening
> > > a socket specifically for the target side of the flow. We connect() as
> > > well as bind()ing that socket, so that it will only receive the flow's
> > > reply packets, not anything else. We direct datagrams sent via that socket
> > > using the addresses from the flow table, effectively replacing bespoke
> > > addressing logic with the unified logic in fwd.c
> > >
> > > When we create the flow, we also take a duplicate of the originating
> > > socket, and use that to deliver reply datagrams back to the origin, again
> > > using addresses from the flow table entry.
> > >
> > > Signed-off-by: David Gibson
> > > ---
> > >  epoll_type.h |   2 +
> > >  flow.c       |  20 +++
> > >  flow.h       |   2 +
> > >  flow_table.h |  14 ++
> > >  passt.c      |   4 +
> > >  udp.c        | 403 ++++++++++++++++++---------------------------------
> > >  udp.h        |   6 +-
> > >  udp_flow.h   |   2 +
> > >  util.c       |   1 +
> > >  9 files changed, 194 insertions(+), 260 deletions(-)
> > >
> > > diff --git a/epoll_type.h b/epoll_type.h
> > > index b6c04199..7a752ed1 100644
> > > --- a/epoll_type.h
> > > +++ b/epoll_type.h
> > > @@ -22,6 +22,8 @@ enum epoll_type {
> > >  	EPOLL_TYPE_TCP_TIMER,
> > >  	/* UDP sockets */
> > >  	EPOLL_TYPE_UDP,
> > > +	/* UDP socket for replies on a specific flow */
> > > +	EPOLL_TYPE_UDP_REPLY,
> > >  	/* ICMP/ICMPv6 ping sockets */
> > >  	EPOLL_TYPE_PING,
> > >  	/* inotify fd watching for end of netns (pasta) */
> > > diff --git a/flow.c b/flow.c
> > > index 0cb9495b..2e100ddb 100644
> > > --- a/flow.c
> > > +++ b/flow.c
> > > @@ -236,6 +236,26 @@ int flowside_sock_l4(const struct ctx *c, enum epoll_type type, uint8_t pif,
> > >  	}
> > >  }
> > >
> > > +/** flowside_connect() - Connect a socket based on flowside
> > > + * @c:	Execution context
> > > + * @s:	Socket to connect
> > > + * @pif:	Target pif
> > > + * @tgt:	Target flowside
> > > + *
> > > + * Connect @s to the endpoint address and port from @tgt.
> > > + *
> > > + * Return: 0 on success, negative on error
> > > + */
> > > +int flowside_connect(const struct ctx *c, int s,
> > > +		     uint8_t pif, const struct flowside *tgt)
> > > +{
> > > +	union sockaddr_inany sa;
> > > +	socklen_t sl;
> > > +
> > > +	pif_sockaddr(c, &sa, &sl, pif, &tgt->eaddr, tgt->eport);
> > > +	return connect(s, &sa.sa, sl);
> > > +}
> > > +
> > >  /** flow_log_ - Log flow-related message
> > >   * @f:	flow the message is related to
> > >   * @pri:	Log priority
> > > diff --git a/flow.h b/flow.h
> > > index 3752e5ee..3f65ceb9 100644
> > > --- a/flow.h
> > > +++ b/flow.h
> > > @@ -168,6 +168,8 @@ static inline bool flowside_eq(const struct flowside *left,
> > >
> > >  int flowside_sock_l4(const struct ctx *c, enum epoll_type type, uint8_t pif,
> > >  		     const struct flowside *tgt, uint32_t data);
> > > +int flowside_connect(const struct ctx *c, int s,
> > > +		     uint8_t pif, const struct flowside *tgt);
> > >
> > >  /**
> > >   * struct flow_common - Common fields for packet flows
> > > diff --git a/flow_table.h b/flow_table.h
> > > index 3fbc7c8d..1faac4a7 100644
> > > --- a/flow_table.h
> > > +++ b/flow_table.h
> > > @@ -92,6 +92,20 @@ static inline flow_sidx_t flow_sidx_opposite(flow_sidx_t sidx)
> > >  	return (flow_sidx_t){.flow = sidx.flow, .side = !sidx.side};
> > >  }
> > >
> > > +/** pif_at_sidx - Interface for a given flow and side
> > 
> > pif_at_sidx()
> 
> Done.
> 
> > > + * @sidx:	Flow & side index
> > > + *
> > > + * Return: pif for the flow & side given by @sidx
> > > + */
> > > +static inline uint8_t pif_at_sidx(flow_sidx_t sidx)
> > > +{
> > > +	const union flow *flow = flow_at_sidx(sidx);
> > > +
> > > +	if (!flow)
> > > +		return PIF_NONE;
> > > +	return flow->f.pif[sidx.side];
> > > +}
> > > +
> > >  /** flow_sidx_t - Index of one side of a flow from common structure
> > >   * @f:	Common flow fields pointer
> > >   * @side:	Which side to refer to (0 or 1)
> > > diff --git a/passt.c b/passt.c
> > > index e4d45daa..f9405bee 100644
> > > --- a/passt.c
> > > +++ b/passt.c
> > > @@ -67,6 +67,7 @@ char *epoll_type_str[] = {
> > >  	[EPOLL_TYPE_TCP_LISTEN]		= "listening TCP socket",
> > >  	[EPOLL_TYPE_TCP_TIMER]		= "TCP timer",
> > >  	[EPOLL_TYPE_UDP]		= "UDP socket",
> > > +	[EPOLL_TYPE_UDP_REPLY]		= "UDP reply socket",
> > >  	[EPOLL_TYPE_PING]		= "ICMP/ICMPv6 ping socket",
> > >  	[EPOLL_TYPE_NSQUIT_INOTIFY]	= "namespace inotify watch",
> > >  	[EPOLL_TYPE_NSQUIT_TIMER]	= "namespace timer watch",
> > > @@ -349,6 +350,9 @@ loop:
> > >  	case EPOLL_TYPE_UDP:
> > >  		udp_buf_sock_handler(&c, ref, eventmask, &now);
> > >  		break;
> > > +	case EPOLL_TYPE_UDP_REPLY:
> > > +		udp_reply_sock_handler(&c, ref, eventmask, &now);
> > > +		break;
> > >  	case EPOLL_TYPE_PING:
> > >  		icmp_sock_handler(&c, ref);
> > >  		break;
> > > diff --git a/udp.c b/udp.c
> > > index daf4fe26..f4c696db 100644
> > > --- a/udp.c
> > > +++ b/udp.c
> > > @@ -35,7 +35,31 @@
> > >   * ===================
> > >   *
> > >   * UDP doesn't use listen(), but we consider long term sockets which are allowed
> > > - * to create new flows "listening" by analogy with TCP.
> > > + * to create new flows "listening" by analogy with TCP. This listening socket
> > > + * could receive packets from multiple flows, so we use a hash table match to
> > > + * find the specific flow for a datagram.
> > > + *
> > > + * When a UDP flow is initiated from a listening socket we take a duplicate of
> > 
> > These are always "inbound" flows, right?
> 
> No, they could come from a socket listening in the namespace (-U).
> 
> > > + * the socket and store it in uflow->s[INISIDE]. This will last for the
> > > + * lifetime of the flow, even if the original listening socket is closed due to
> > > + * port auto-probing. The duplicate is used to deliver replies back to the
> > > + * originating side.
> > > + *
> > > + * Reply sockets
> > > + * =============
> > > + *
> > > + * When a UDP flow targets a socket, we create a "reply" socket in
> > 
> > ...and these are outbound ones?
> 
> No. Again, we could be creating this socket in the namespace. A
> spliced flow will have both the dup() of a listening socket, and a
> reply socket, regardless of whether it's inbound (HOST -> SPLICE) or
> outbound (SPLICE -> HOST).
> 
> > > + * uflow->s[TGTSIDE] both to deliver datagrams to the target side and receive
> > > + * replies on the target side. This socket is both bound and connected and has
> > > + * EPOLL_TYPE_UDP_REPLY. The connect() means it will only receive datagrams
> > > + * associated with this flow, so the epoll reference directly points to the flow
> > > + * and we don't need a hash lookup.
> > > + *
> > > + * NOTE: it's possible that the reply socket could have a bound address
> > > + * overlapping with an unrelated listening socket. We assume datagrams for the
> > > + * flow will come to the reply socket in preference to a listening socket. The
> > > + * sample program contrib/udp-reuseaddr/reuseaddr-priority.c documents and tests
> > 
> > Now it's under doc/platform-requirements/
> 
> Thanks for the catch, corrected.
> 
> > > + * that assumption.
> > >   *
> > >   * Port tracking
> > >   * =============
> > > @@ -56,62 +80,6 @@
> > >   *
> > >   * Packets are forwarded back and forth, by prepending and stripping UDP headers
> > >   * in the obvious way, with no port translation.
> > > - *
> > > - * In PASTA mode, the L2-L4 translation is skipped for connections to ports
> > > - * bound between namespaces using the loopback interface, messages are directly
> > > - * transferred between L4 sockets instead. These are called spliced connections
> > > - * for consistency with the TCP implementation, but the splice() syscall isn't
> > > - * actually used as it wouldn't make sense for datagram-based connections: a
> > > - * pair of recvmmsg() and sendmmsg() deals with this case.
> > 
> > I think we should keep this paragraph... or did you move/add this
> > information somewhere else I missed? Given that the "Port tracking"
> > section is going to be away, maybe this could be moved at the end of
> > "UDP Flows".
> 
> Fair point. I've paraphrased and updated this a bit and added it as
> a new section about spliced flows.
> 
> > > - *
> > > - * The connection tracking for PASTA mode is slightly complicated by the absence
> > > - * of actual connections, see struct udp_splice_port, and these examples:
> > > - *
> > > - * - from init to namespace:
> > > - *
> > > - *   - forward direction: 127.0.0.1:5000 -> 127.0.0.1:80 in init from socket s,
> > > - *     with epoll reference: index = 80, splice = 1, orig = 1, ns = 0
> > > - *     - if udp_splice_ns[V4][5000].sock:
> > > - *       - send packet to udp_splice_ns[V4][5000].sock, with destination port
> > > - *         80
> > > - *     - otherwise:
> > > - *       - create new socket udp_splice_ns[V4][5000].sock
> > > - *       - bind in namespace to 127.0.0.1:5000
> > > - *       - add to epoll with reference: index = 5000, splice = 1, orig = 0,
> > > - *         ns = 1
> > > - *     - update udp_splice_init[V4][80].ts and udp_splice_ns[V4][5000].ts with
> > > - *       current time
> > > - *
> > > - *   - reverse direction: 127.0.0.1:80 -> 127.0.0.1:5000 in namespace socket s,
> > > - *     having epoll reference: index = 5000, splice = 1, orig = 0, ns = 1
> > > - *     - if udp_splice_init[V4][80].sock:
> > > - *       - send to udp_splice_init[V4][80].sock, with destination port 5000
> > > - *     - update udp_splice_init[V4][80].ts and udp_splice_ns[V4][5000].ts with
> > > - *       current time
> > > - *     - otherwise, discard
> > > - *
> > > - * - from namespace to init:
> > > - *
> > > - *   - forward direction: 127.0.0.1:2000 -> 127.0.0.1:22 in namespace from
> > > - *     socket s, with epoll reference: index = 22, splice = 1, orig = 1, ns = 1
> > > - *     - if udp4_splice_init[V4][2000].sock:
> > > - *       - send packet to udp_splice_init[V4][2000].sock, with destination
> > > - *         port 22
> > > - *     - otherwise:
> > > - *       - create new socket udp_splice_init[V4][2000].sock
> > > - *       - bind in init to 127.0.0.1:2000
> > > - *       - add to epoll with reference: index = 2000, splice = 1, orig = 0,
> > > - *         ns = 0
> > > - *     - update udp_splice_ns[V4][22].ts and udp_splice_init[V4][2000].ts with
> > > - *       current time
> > > - *
> > > - *   - reverse direction: 127.0.0.1:22 -> 127.0.0.1:2000 in init from socket s,
> > > - *     having epoll reference: index = 2000, splice = 1, orig = 0, ns = 0
> > > - *     - if udp_splice_ns[V4][22].sock:
> > > - *       - send to udp_splice_ns[V4][22].sock, with destination port 2000
> > > - *     - update udp_splice_ns[V4][22].ts and udp_splice_init[V4][2000].ts with
> > > - *       current time
> > > - *     - otherwise, discard
> > >   */
> > >
> > >  #include
> > > @@ -134,6 +102,7 @@
> > >  #include
> > >  #include
> > >  #include
> > > +#include
> > >
> > >  #include "checksum.h"
> > >  #include "util.h"
> > > @@ -223,7 +192,6 @@ static struct ethhdr udp6_eth_hdr;
> > >   * @ip4h:	Pre-filled IPv4 header (except for tot_len and saddr)
> > >   * @taph:	Tap backend specific header
> > >   * @s_in:	Source socket address, filled in by recvmmsg()
> > > - * @splicesrc:	Source port for splicing, or -1 if not spliceable
> > >   * @tosidx:	sidx for the destination side of this datagram's flow
> > >   */
> > >  static struct udp_meta_t {
> > > @@ -232,7 +200,6 @@ static struct udp_meta_t {
> > >  	struct tap_hdr taph;
> > >
> > >  	union sockaddr_inany s_in;
> > > -	int splicesrc;
> > >  	flow_sidx_t tosidx;
> > >  }
> > >  #ifdef __AVX2__
> > > @@ -270,7 +237,6 @@ static struct mmsghdr udp_mh_splice	[UDP_MAX_FRAMES];
> > >  /* IOVs for L2 frames */
> > >  static struct iovec	udp_l2_iov	[UDP_MAX_FRAMES][UDP_NUM_IOVS];
> > >
> > > -
> > >  /**
> > >   * udp_portmap_clear() - Clear UDP port map before configuration
> > >   */
> > > @@ -383,140 +349,6 @@ static void udp_iov_init(const struct ctx *c)
> > >  		udp_iov_init_one(c, i);
> > >  }
> > >
> > > -/**
> > > - * udp_splice_new() - Create and prepare socket for "spliced" binding
> > > - * @c:	Execution context
> > > - * @v6:	Set for IPv6 sockets
> > > - * @src:	Source port of original connection, host order
> > > - * @ns:	Does the splice originate in the ns or not
> > > - *
> > > - * Return: prepared socket, negative error code on failure
> > > - *
> > > - * #syscalls:pasta getsockname
> > > - */
> > > -int udp_splice_new(const struct ctx *c, int v6, in_port_t src, bool ns)
> > > -{
> > > -	struct epoll_event ev = { .events = EPOLLIN | EPOLLRDHUP | EPOLLHUP };
> > > -	union epoll_ref ref = { .type = EPOLL_TYPE_UDP,
> > > -				.udp = { .splice = true, .v6 = v6, .port = src }
> > > -			      };
> > > -	struct udp_splice_port *sp;
> > > -	int act, s;
> > > -
> > > -	if (ns) {
> > > -		ref.udp.pif = PIF_SPLICE;
> > > -		sp = &udp_splice_ns[v6 ? V6 : V4][src];
> > > -		act = UDP_ACT_SPLICE_NS;
> > > -	} else {
> > > -		ref.udp.pif = PIF_HOST;
> > > -		sp = &udp_splice_init[v6 ? V6 : V4][src];
> > > -		act = UDP_ACT_SPLICE_INIT;
> > > -	}
> > > -
> > > -	s = socket(v6 ? AF_INET6 : AF_INET, SOCK_DGRAM | SOCK_NONBLOCK,
> > > -		   IPPROTO_UDP);
> > > -
> > > -	if (s > FD_REF_MAX) {
> > > -		close(s);
> > > -		return -EIO;
> > > -	}
> > > -
> > > -	if (s < 0)
> > > -		return s;
> > > -
> > > -	ref.fd = s;
> > > -
> > > -	if (v6) {
> > > -		struct sockaddr_in6 addr6 = {
> > > -			.sin6_family = AF_INET6,
> > > -			.sin6_port = htons(src),
> > > -			.sin6_addr = IN6ADDR_LOOPBACK_INIT,
> > > -		};
> > > -		if (bind(s, (struct sockaddr *)&addr6, sizeof(addr6)))
> > > -			goto fail;
> > > -	} else {
> > > -		struct sockaddr_in addr4 = {
> > > -			.sin_family = AF_INET,
> > > -			.sin_port = htons(src),
> > > -			.sin_addr = IN4ADDR_LOOPBACK_INIT,
> > > -		};
> > > -		if (bind(s, (struct sockaddr *)&addr4, sizeof(addr4)))
> > > -			goto fail;
> > > -	}
> > > -
> > > -	sp->sock = s;
> > > -	bitmap_set(udp_act[v6 ? V6 : V4][act], src);
> > > -
> > > -	ev.data.u64 = ref.u64;
> > > -	epoll_ctl(c->epollfd, EPOLL_CTL_ADD, s, &ev);
> > > -	return s;
> > > -
> > > -fail:
> > > -	close(s);
> > > -	return -1;
> > > -}
> > > -
> > > -/**
> > > - * struct udp_splice_new_ns_arg - Arguments for udp_splice_new_ns()
> > > - * @c:	Execution context
> > > - * @v6:	Set for IPv6
> > > - * @src:	Source port of originating datagram, host order
> > > - * @dst:	Destination port of originating datagram, host order
> > > - * @s:	Newly created socket or negative error code
> > > - */
> > > -struct udp_splice_new_ns_arg {
> > > -	const struct ctx *c;
> > > -	int v6;
> > > -	in_port_t src;
> > > -	int s;
> > > -};
> > > -
> > > -/**
> > > - * udp_splice_new_ns() - Enter namespace and call udp_splice_new()
> > > - * @arg:	See struct udp_splice_new_ns_arg
> > > - *
> > > - * Return: 0
> > > - */
> > > -static int udp_splice_new_ns(void *arg)
> > > -{
> > > -	struct udp_splice_new_ns_arg *a;
> > > -
> > > -	a = (struct udp_splice_new_ns_arg *)arg;
> > > -
> > > -	ns_enter(a->c);
> > > -
> > > -	a->s = udp_splice_new(a->c, a->v6, a->src, true);
> > > -
> > > -	return 0;
> > > -}
> > > -
> > > -/**
> > > - * udp_mmh_splice_port() - Is source address of message suitable for splicing?
> > > - * @ref:	epoll reference for incoming message's origin socket
> > > - * @mmh:	mmsghdr of incoming message
> > > - *
> > > - * Return: if source address of message in @mmh refers to localhost (127.0.0.1
> > > - *         or ::1) its source port (host order), otherwise -1.
> > > - */
> > > -static int udp_mmh_splice_port(union epoll_ref ref, const struct mmsghdr *mmh)
> > > -{
> > > -	const struct sockaddr_in6 *sa6 = mmh->msg_hdr.msg_name;
> > > -	const struct sockaddr_in *sa4 = mmh->msg_hdr.msg_name;
> > > -
> > > -	ASSERT(ref.type == EPOLL_TYPE_UDP);
> > > -
> > > -	if (!ref.udp.splice)
> > > -		return -1;
> > > -
> > > -	if (ref.udp.v6 && IN6_IS_ADDR_LOOPBACK(&sa6->sin6_addr))
> > > -		return ntohs(sa6->sin6_port);
> > > -
> > > -	if (!ref.udp.v6 && IN4_IS_ADDR_LOOPBACK(&sa4->sin_addr))
> > > -		return ntohs(sa4->sin_port);
> > > -
> > > -	return -1;
> > > -}
> > > -
> > >  /**
> > >   * udp_at_sidx() - Get UDP specific flow at given sidx
> > >   * @sidx:	Flow and side to retrieve
> > > @@ -542,6 +374,16 @@ struct udp_flow *udp_at_sidx(flow_sidx_t sidx)
> > >   */
> > >  static void udp_flow_close(const struct ctx *c, const struct udp_flow *uflow)
> > >  {
> > > +	if (uflow->s[INISIDE] >= 0) {
> > > +		/* The listening socket needs to stay in epoll */
> > > +		close(uflow->s[INISIDE]);
> > > +	}
> > > +
> > > +	if (uflow->s[TGTSIDE] >= 0) {
> > > +		/* But the flow specific one needs to be removed */
> > > +		epoll_ctl(c->epollfd, EPOLL_CTL_DEL, uflow->s[TGTSIDE], NULL);
> > > +		close(uflow->s[TGTSIDE]);
> > 
> > Keeping numbers of closed sockets around, instead of setting them to -1
> > right away, makes me a bit nervous.
> > 
> > On the other hand it's obvious that udp_flow_new() sets them to -1
> > anyway, so there isn't much that can go wrong.
> 
> Right, we're about to destroy the flow entry entirely, so it is safe.
> But I agree that it's nervous-making.
> It's pretty cheap to set them
> to -1 explicitly, so I'm doing that now.
> 
> > > +	}
> > >  	flow_hash_remove(c, FLOW_SIDX(uflow, INISIDE));
> > >  }
> > >
> > > @@ -549,26 +391,80 @@ static void udp_flow_close(const struct ctx *c, const struct udp_flow *uflow)
> > >   * udp_flow_new() - Common setup for a new UDP flow
> > >   * @c:	Execution context
> > >   * @flow:	Initiated flow
> > > + * @s_ini:	Initiating socket (or -1)
> > >   * @now:	Timestamp
> > >   *
> > >   * Return: UDP specific flow, if successful, NULL on failure
> > >   */
> > >  static flow_sidx_t udp_flow_new(const struct ctx *c, union flow *flow,
> > > -				const struct timespec *now)
> > > +				int s_ini, const struct timespec *now)
> > >  {
> > >  	const struct flowside *ini = &flow->f.side[INISIDE];
> > >  	struct udp_flow *uflow = NULL;
> > > +	const struct flowside *tgt;
> > > +	uint8_t tgtpif;
> > >
> > >  	if (!inany_is_unicast(&ini->eaddr) || ini->eport == 0) {
> > >  		flow_dbg(flow, "Invalid endpoint to initiate UDP flow");
> > >  		goto cancel;
> > >  	}
> > >
> > > -	if (!flow_target(c, flow, IPPROTO_UDP))
> > > +	if (!(tgt = flow_target(c, flow, IPPROTO_UDP)))
> > >  		goto cancel;
> > > +	tgtpif = flow->f.pif[TGTSIDE];
> > >
> > >  	uflow = FLOW_SET_TYPE(flow, FLOW_UDP, udp);
> > >  	uflow->ts = now->tv_sec;
> > > +	uflow->s[INISIDE] = uflow->s[TGTSIDE] = -1;
> > > +
> > > +	if (s_ini >= 0) {
> > > +		/* When using auto port-scanning the listening port could go
> > > +		 * away, so we need to duplicate it */
> > 
> > For consistency: closing */ on a newline. I would also say "the socket"
> > instead of "it", otherwise it seems to be referring to the port at a
> > first glance.
> 
> Done.
> 
> > > +		uflow->s[INISIDE] = fcntl(s_ini, F_DUPFD_CLOEXEC, 0);
> > 
> > There's one aspect of this I don't understand: if s_ini is closed while
> > checking for bound ports (is it? I didn't really reach the end of this
> > series), aren't duplicates also closed?
> > 
> > That is, the documentation of dup2(2), which should be the same for
> > this purpose, states that the duplicate inherits "file status flags",
> > which I would assume also includes the fact that a socket is closed. I
> > didn't test that though.
> 
> I don't believe so. My understanding is that dup() (and the rest)
> make a new fd referencing the same underlying file object, yes. But
> AIUI, close() just closes one fd - the underlying object is only
> closed when all fds are gone.

Ah, probably, yes.

> > If duplicates are closed, I guess an alternative solution could be to
> > introduce some kind of reference counting for sockets... somewhere.
> 
> .. in other words, I believe the kernel does the reference counting.
> 
> I should verify this though, I'll try to come up with something new
> for doc/platform-requirements.

I didn't really find the time to sketch this but I guess the easiest way
to check this behaviour is to have a TCP connection between a socket
pair, with one socket having two descriptors, then closing one
descriptor and checking if the peer socket sees a closed connection
(recv() returning 0 or similar).

I was wondering whether it's worth using the vforked namespaced peer
trick I drafted here:

  https://archives.passt.top/passt-dev/20231206160808.3d312733@elisabeth/

just in case we want to use some of those test cases for actual tests,
where we don't want to bind an actual TCP port on the machine we're
running on. But if it adds complexity I'd say it's not worth it.
> > > +		if (uflow->s[INISIDE] < 0) {
> > > +			flow_err(uflow,
> > > +				 "Couldn't duplicate listening socket: %s",
> > > +				 strerror(errno));
> > > +			goto cancel;
> > > +		}
> > > +	}
> > > +
> > > +	if (pif_is_socket(tgtpif)) {
> > > +		union {
> > > +			flow_sidx_t sidx;
> > > +			uint32_t data;
> > > +		} fref = {
> > > +			.sidx = FLOW_SIDX(flow, TGTSIDE),
> > > +		};
> > > +
> > > +		uflow->s[TGTSIDE] = flowside_sock_l4(c, EPOLL_TYPE_UDP_REPLY,
> > > +						     tgtpif, tgt, fref.data);
> > > +		if (uflow->s[TGTSIDE] < 0) {
> > > +			flow_dbg(uflow,
> > > +				 "Couldn't open socket for spliced flow: %s",
> > > +				 strerror(errno));
> > > +			goto cancel;
> > > +		}
> > > +
> > > +		if (flowside_connect(c, uflow->s[TGTSIDE], tgtpif, tgt) < 0) {
> > > +			flow_dbg(uflow,
> > > +				 "Couldn't connect flow socket: %s",
> > > +				 strerror(errno));
> > > +			goto cancel;
> > > +		}
> > > +
> > > +		/* It's possible, if unlikely, that we could receive some
> > > +		 * unrelated packets in between the bind() and connect() of this
> > > +		 * socket. For now we just discard these. We could consider
> > > +		 * trying to re-direct these to an appropriate handler, if we
> > 
> > Simply "redirect"?
> 
> Done.
> 
> > > +		 * need to.
> > > +		 */
> > > +		/* cppcheck-suppress nullPointer */
> > > +		while (recv(uflow->s[TGTSIDE], NULL, 0, MSG_DONTWAIT) >= 0)
> > > +			;
> > 
> > Could a local attacker (another user) attempt to use this for denial of
> > service?
> 
> Ah, interesting question.
> 
> > Of course, somebody could flood us anyway and we would get and handle
> > all the events that that causes. But this case is different because we
> > could get stuck for an unlimited amount of time without serving other
> > sockets at all.
> 
> Right.
> 
> > If that's a possibility, perhaps a limit for this loop (a maximum
> > amount of recv() tries) would be a good idea. I'm not sure how we should
> > handle the case where we exceed the threshold.
> 
> We could fail to create the flow. That would limit the damage, but
> see below.
> 
> > Another one, which adds some complexity, but looks more correct to me,
> > would be to try a single recv() call, and if we get data from it, fail
> > creating the new flow entirely.
> 
> Right. I also considered a single recvmmsg(); the difficulty with
> that is making suitable arrays for it, since the normal ones may be in
> use at this point.

For UDP, we don't need to make the buffers large enough to fit packets
into them, see udp(7):

  All receive operations return only one packet. When the packet is
  smaller than the passed buffer, only that much data is returned; when
  it is bigger, the packet is truncated and the MSG_TRUNC flag is set.

so probably 1024 bytes (1 * UIO_MAXIOV) on the stack would be enough...
or maybe we can even pass 0 as size and NULL buffers? If recvmmsg()
returns UIO_MAXIOV, then something weird is going on and we can abort
the "connection" attempt.

> This removes the potential DoS of wedging passt completely, but raises
> the possibility of a different one. Could an attacker - not
> necessarily on the same host, but network-closer than the guest's
> intended endpoint - prevent the guest from connecting to a particular
> service by spamming fake reply packets here?

Oops, I didn't think about that, that's also concerning. With my
suggestion above, 1024 packets is all an attacker would need.

But maybe there's a safe way to deal with this that's still simpler than
a separate handler: using recvmmsg(), going through msg_name of the
(truncated) messages we received with it, and trying to draw some
conclusion about what kind of attack we're dealing with from there. I'm
not sure exactly which conclusion, though.
Linux does source port randomisation for UDP starting from 2.6.24, so that should be good enough, but given that we try to preserve the source port from the guest, we might make a guest that doesn't do port randomisation (albeit unlikely to be found) vulnerable to this. -- Stefano