From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 10 Jul 2024 00:32:33 +0200
From: Stefano Brivio <sbrivio@redhat.com>
To: David Gibson <david@gibson.dropbear.id.au>
CC: passt-dev@passt.top, jmaloy@redhat.com
Subject: Re: [PATCH v7 21/27] udp: Handle "spliced" datagrams with per-flow sockets
Message-ID: <20240710003233.1d9937fd@elisabeth>
In-Reply-To: <20240705020724.3447719-22-david@gibson.dropbear.id.au>
References: <20240705020724.3447719-1-david@gibson.dropbear.id.au>
 <20240705020724.3447719-22-david@gibson.dropbear.id.au>
Organization: Red Hat
X-Mailer: Claws Mail 4.2.0 (GTK 3.24.41; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
List-Id: Development discussion and patches for passt
On Fri, 5 Jul 2024 12:07:18 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:

> When forwarding a datagram to a socket, we need to find a socket with a
> suitable local address to send it. Currently we keep track of such sockets
> in an array indexed by local port, but this can't properly handle cases
> where we have multiple local addresses in active use.
>
> For "spliced" (socket to socket) cases, improve this by instead opening
> a socket specifically for the target side of the flow. We connect() as
> well as bind()ing that socket, so that it will only receive the flow's
> reply packets, not anything else. We direct datagrams sent via that socket
> using the addresses from the flow table, effectively replacing bespoke
> addressing logic with the unified logic in fwd.c
>
> When we create the flow, we also take a duplicate of the originating
> socket, and use that to deliver reply datagrams back to the origin, again
> using addresses from the flow table entry.
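For readers following along: the bind()-plus-connect() idea above can be
sketched as below. The names and the IPv4-only shape are illustrative, not
passt's actual code; the point is that connect() makes the kernel discard
datagrams from any other peer, so no hash lookup is needed on receive.

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open a UDP socket bound to a local flowside address and connected to
 * the flow's endpoint: it will only receive datagrams from that one
 * peer, everything else is dropped by the kernel. */
int reply_sock_open(const struct sockaddr_in *local,
		    const struct sockaddr_in *peer)
{
	int s = socket(AF_INET, SOCK_DGRAM | SOCK_NONBLOCK, IPPROTO_UDP);

	if (s < 0)
		return -1;

	if (bind(s, (const struct sockaddr *)local, sizeof(*local)) ||
	    connect(s, (const struct sockaddr *)peer, sizeof(*peer))) {
		close(s);
		return -1;
	}
	return s;
}
```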
> > Signed-off-by: David Gibson > --- > epoll_type.h | 2 + > flow.c | 20 +++ > flow.h | 2 + > flow_table.h | 14 ++ > passt.c | 4 + > udp.c | 403 ++++++++++++++++++--------------------------------- > udp.h | 6 +- > udp_flow.h | 2 + > util.c | 1 + > 9 files changed, 194 insertions(+), 260 deletions(-) > > diff --git a/epoll_type.h b/epoll_type.h > index b6c04199..7a752ed1 100644 > --- a/epoll_type.h > +++ b/epoll_type.h > @@ -22,6 +22,8 @@ enum epoll_type { > EPOLL_TYPE_TCP_TIMER, > /* UDP sockets */ > EPOLL_TYPE_UDP, > + /* UDP socket for replies on a specific flow */ > + EPOLL_TYPE_UDP_REPLY, > /* ICMP/ICMPv6 ping sockets */ > EPOLL_TYPE_PING, > /* inotify fd watching for end of netns (pasta) */ > diff --git a/flow.c b/flow.c > index 0cb9495b..2e100ddb 100644 > --- a/flow.c > +++ b/flow.c > @@ -236,6 +236,26 @@ int flowside_sock_l4(const struct ctx *c, enum epoll_type type, uint8_t pif, > } > } > > +/** flowside_connect() - Connect a socket based on flowside > + * @c: Execution context > + * @s: Socket to connect > + * @pif: Target pif > + * @tgt: Target flowside > + * > + * Connect @s to the endpoint address and port from @tgt. 
> + * > + * Return: 0 on success, negative on error > + */ > +int flowside_connect(const struct ctx *c, int s, > + uint8_t pif, const struct flowside *tgt) > +{ > + union sockaddr_inany sa; > + socklen_t sl; > + > + pif_sockaddr(c, &sa, &sl, pif, &tgt->eaddr, tgt->eport); > + return connect(s, &sa.sa, sl); > +} > + > /** flow_log_ - Log flow-related message > * @f: flow the message is related to > * @pri: Log priority > diff --git a/flow.h b/flow.h > index 3752e5ee..3f65ceb9 100644 > --- a/flow.h > +++ b/flow.h > @@ -168,6 +168,8 @@ static inline bool flowside_eq(const struct flowside *left, > > int flowside_sock_l4(const struct ctx *c, enum epoll_type type, uint8_t pif, > const struct flowside *tgt, uint32_t data); > +int flowside_connect(const struct ctx *c, int s, > + uint8_t pif, const struct flowside *tgt); > > /** > * struct flow_common - Common fields for packet flows > diff --git a/flow_table.h b/flow_table.h > index 3fbc7c8d..1faac4a7 100644 > --- a/flow_table.h > +++ b/flow_table.h > @@ -92,6 +92,20 @@ static inline flow_sidx_t flow_sidx_opposite(flow_sidx_t sidx) > return (flow_sidx_t){.flow = sidx.flow, .side = !sidx.side}; > } > > +/** pif_at_sidx - Interface for a given flow and side pif_at_sidx() > + * @sidx: Flow & side index > + * > + * Return: pif for the flow & side given by @sidx > + */ > +static inline uint8_t pif_at_sidx(flow_sidx_t sidx) > +{ > + const union flow *flow = flow_at_sidx(sidx); > + > + if (!flow) > + return PIF_NONE; > + return flow->f.pif[sidx.side]; > +} > + > /** flow_sidx_t - Index of one side of a flow from common structure > * @f: Common flow fields pointer > * @side: Which side to refer to (0 or 1) > diff --git a/passt.c b/passt.c > index e4d45daa..f9405bee 100644 > --- a/passt.c > +++ b/passt.c > @@ -67,6 +67,7 @@ char *epoll_type_str[] = { > [EPOLL_TYPE_TCP_LISTEN] = "listening TCP socket", > [EPOLL_TYPE_TCP_TIMER] = "TCP timer", > [EPOLL_TYPE_UDP] = "UDP socket", > + [EPOLL_TYPE_UDP_REPLY] = "UDP reply socket", > 
[EPOLL_TYPE_PING] = "ICMP/ICMPv6 ping socket", > [EPOLL_TYPE_NSQUIT_INOTIFY] = "namespace inotify watch", > [EPOLL_TYPE_NSQUIT_TIMER] = "namespace timer watch", > @@ -349,6 +350,9 @@ loop: > case EPOLL_TYPE_UDP: > udp_buf_sock_handler(&c, ref, eventmask, &now); > break; > + case EPOLL_TYPE_UDP_REPLY: > + udp_reply_sock_handler(&c, ref, eventmask, &now); > + break; > case EPOLL_TYPE_PING: > icmp_sock_handler(&c, ref); > break; > diff --git a/udp.c b/udp.c > index daf4fe26..f4c696db 100644 > --- a/udp.c > +++ b/udp.c > @@ -35,7 +35,31 @@ > * =================== > * > * UDP doesn't use listen(), but we consider long term sockets which are allowed > - * to create new flows "listening" by analogy with TCP. > + * to create new flows "listening" by analogy with TCP. This listening socket > + * could receive packets from multiple flows, so we use a hash table match to > + * find the specific flow for a datagram. > + * > + * When a UDP flow is initiated from a listening socket we take a duplicate of These are always "inbound" flows, right? > + * the socket and store it in uflow->s[INISIDE]. This will last for the > + * lifetime of the flow, even if the original listening socket is closed due to > + * port auto-probing. The duplicate is used to deliver replies back to the > + * originating side. > + * > + * Reply sockets > + * ============= > + * > + * When a UDP flow targets a socket, we create a "reply" socket in ...and these are outbound ones? > + * uflow->s[TGTSIDE] both to deliver datagrams to the target side and receive > + * replies on the target side. This socket is both bound and connected and has > + * EPOLL_TYPE_UDP_REPLY. The connect() means it will only receive datagrams > + * associated with this flow, so the epoll reference directly points to the flow > + * and we don't need a hash lookup. > + * > + * NOTE: it's possible that the reply socket could have a bound address > + * overlapping with an unrelated listening socket. 
We assume datagrams for the > + * flow will come to the reply socket in preference to a listening socket. The > + * sample program contrib/udp-reuseaddr/reuseaddr-priority.c documents and tests Now it's under doc/platform-requirements/ > + * that assumption. > * > * Port tracking > * ============= > @@ -56,62 +80,6 @@ > * > * Packets are forwarded back and forth, by prepending and stripping UDP headers > * in the obvious way, with no port translation. > - * > - * In PASTA mode, the L2-L4 translation is skipped for connections to ports > - * bound between namespaces using the loopback interface, messages are directly > - * transferred between L4 sockets instead. These are called spliced connections > - * for consistency with the TCP implementation, but the splice() syscall isn't > - * actually used as it wouldn't make sense for datagram-based connections: a > - * pair of recvmmsg() and sendmmsg() deals with this case. I think we should keep this paragraph... or did you move/add this information somewhere else I missed? Given that the "Port tracking" section is going to be away, maybe this could be moved at the end of "UDP Flows". 
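For reference, the datagram "splice" that paragraph describes boils down to
a recvmmsg()/sendmmsg() pair moving L4 payloads between sockets, with no L2
encapsulation. A minimal sketch, assuming both sockets are already connected
(names are made up, this isn't passt's actual implementation):

```c
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Forward a batch of up to @n datagrams from one connected UDP socket
 * to another, the essence of the "spliced" path.
 *
 * Return: number of datagrams forwarded, 0 or negative if nothing was
 * pending or on error
 */
int splice_batch(int from, int to, struct mmsghdr *mmh, unsigned int n)
{
	int got = recvmmsg(from, mmh, n, MSG_DONTWAIT, NULL);

	if (got <= 0)
		return got;

	/* recvmmsg() reports each datagram's length in msg_len: use that
	 * as the length to send */
	for (int i = 0; i < got; i++)
		mmh[i].msg_hdr.msg_iov[0].iov_len = mmh[i].msg_len;

	return sendmmsg(to, mmh, got, 0);
}
```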
> - * > - * The connection tracking for PASTA mode is slightly complicated by the absence > - * of actual connections, see struct udp_splice_port, and these examples: > - * > - * - from init to namespace: > - * > - * - forward direction: 127.0.0.1:5000 -> 127.0.0.1:80 in init from socket s, > - * with epoll reference: index = 80, splice = 1, orig = 1, ns = 0 > - * - if udp_splice_ns[V4][5000].sock: > - * - send packet to udp_splice_ns[V4][5000].sock, with destination port > - * 80 > - * - otherwise: > - * - create new socket udp_splice_ns[V4][5000].sock > - * - bind in namespace to 127.0.0.1:5000 > - * - add to epoll with reference: index = 5000, splice = 1, orig = 0, > - * ns = 1 > - * - update udp_splice_init[V4][80].ts and udp_splice_ns[V4][5000].ts with > - * current time > - * > - * - reverse direction: 127.0.0.1:80 -> 127.0.0.1:5000 in namespace socket s, > - * having epoll reference: index = 5000, splice = 1, orig = 0, ns = 1 > - * - if udp_splice_init[V4][80].sock: > - * - send to udp_splice_init[V4][80].sock, with destination port 5000 > - * - update udp_splice_init[V4][80].ts and udp_splice_ns[V4][5000].ts with > - * current time > - * - otherwise, discard > - * > - * - from namespace to init: > - * > - * - forward direction: 127.0.0.1:2000 -> 127.0.0.1:22 in namespace from > - * socket s, with epoll reference: index = 22, splice = 1, orig = 1, ns = 1 > - * - if udp4_splice_init[V4][2000].sock: > - * - send packet to udp_splice_init[V4][2000].sock, with destination > - * port 22 > - * - otherwise: > - * - create new socket udp_splice_init[V4][2000].sock > - * - bind in init to 127.0.0.1:2000 > - * - add to epoll with reference: index = 2000, splice = 1, orig = 0, > - * ns = 0 > - * - update udp_splice_ns[V4][22].ts and udp_splice_init[V4][2000].ts with > - * current time > - * > - * - reverse direction: 127.0.0.1:22 -> 127.0.0.1:2000 in init from socket s, > - * having epoll reference: index = 2000, splice = 1, orig = 0, ns = 0 > - * - if 
udp_splice_ns[V4][22].sock: > - * - send to udp_splice_ns[V4][22].sock, with destination port 2000 > - * - update udp_splice_ns[V4][22].ts and udp_splice_init[V4][2000].ts with > - * current time > - * - otherwise, discard > */ > > #include > @@ -134,6 +102,7 @@ > #include > #include > #include > +#include > > #include "checksum.h" > #include "util.h" > @@ -223,7 +192,6 @@ static struct ethhdr udp6_eth_hdr; > * @ip4h: Pre-filled IPv4 header (except for tot_len and saddr) > * @taph: Tap backend specific header > * @s_in: Source socket address, filled in by recvmmsg() > - * @splicesrc: Source port for splicing, or -1 if not spliceable > * @tosidx: sidx for the destination side of this datagram's flow > */ > static struct udp_meta_t { > @@ -232,7 +200,6 @@ static struct udp_meta_t { > struct tap_hdr taph; > > union sockaddr_inany s_in; > - int splicesrc; > flow_sidx_t tosidx; > } > #ifdef __AVX2__ > @@ -270,7 +237,6 @@ static struct mmsghdr udp_mh_splice [UDP_MAX_FRAMES]; > /* IOVs for L2 frames */ > static struct iovec udp_l2_iov [UDP_MAX_FRAMES][UDP_NUM_IOVS]; > > - > /** > * udp_portmap_clear() - Clear UDP port map before configuration > */ > @@ -383,140 +349,6 @@ static void udp_iov_init(const struct ctx *c) > udp_iov_init_one(c, i); > } > > -/** > - * udp_splice_new() - Create and prepare socket for "spliced" binding > - * @c: Execution context > - * @v6: Set for IPv6 sockets > - * @src: Source port of original connection, host order > - * @ns: Does the splice originate in the ns or not > - * > - * Return: prepared socket, negative error code on failure > - * > - * #syscalls:pasta getsockname > - */ > -int udp_splice_new(const struct ctx *c, int v6, in_port_t src, bool ns) > -{ > - struct epoll_event ev = { .events = EPOLLIN | EPOLLRDHUP | EPOLLHUP }; > - union epoll_ref ref = { .type = EPOLL_TYPE_UDP, > - .udp = { .splice = true, .v6 = v6, .port = src } > - }; > - struct udp_splice_port *sp; > - int act, s; > - > - if (ns) { > - ref.udp.pif = PIF_SPLICE; > - sp 
= &udp_splice_ns[v6 ? V6 : V4][src]; > - act = UDP_ACT_SPLICE_NS; > - } else { > - ref.udp.pif = PIF_HOST; > - sp = &udp_splice_init[v6 ? V6 : V4][src]; > - act = UDP_ACT_SPLICE_INIT; > - } > - > - s = socket(v6 ? AF_INET6 : AF_INET, SOCK_DGRAM | SOCK_NONBLOCK, > - IPPROTO_UDP); > - > - if (s > FD_REF_MAX) { > - close(s); > - return -EIO; > - } > - > - if (s < 0) > - return s; > - > - ref.fd = s; > - > - if (v6) { > - struct sockaddr_in6 addr6 = { > - .sin6_family = AF_INET6, > - .sin6_port = htons(src), > - .sin6_addr = IN6ADDR_LOOPBACK_INIT, > - }; > - if (bind(s, (struct sockaddr *)&addr6, sizeof(addr6))) > - goto fail; > - } else { > - struct sockaddr_in addr4 = { > - .sin_family = AF_INET, > - .sin_port = htons(src), > - .sin_addr = IN4ADDR_LOOPBACK_INIT, > - }; > - if (bind(s, (struct sockaddr *)&addr4, sizeof(addr4))) > - goto fail; > - } > - > - sp->sock = s; > - bitmap_set(udp_act[v6 ? V6 : V4][act], src); > - > - ev.data.u64 = ref.u64; > - epoll_ctl(c->epollfd, EPOLL_CTL_ADD, s, &ev); > - return s; > - > -fail: > - close(s); > - return -1; > -} > - > -/** > - * struct udp_splice_new_ns_arg - Arguments for udp_splice_new_ns() > - * @c: Execution context > - * @v6: Set for IPv6 > - * @src: Source port of originating datagram, host order > - * @dst: Destination port of originating datagram, host order > - * @s: Newly created socket or negative error code > - */ > -struct udp_splice_new_ns_arg { > - const struct ctx *c; > - int v6; > - in_port_t src; > - int s; > -}; > - > -/** > - * udp_splice_new_ns() - Enter namespace and call udp_splice_new() > - * @arg: See struct udp_splice_new_ns_arg > - * > - * Return: 0 > - */ > -static int udp_splice_new_ns(void *arg) > -{ > - struct udp_splice_new_ns_arg *a; > - > - a = (struct udp_splice_new_ns_arg *)arg; > - > - ns_enter(a->c); > - > - a->s = udp_splice_new(a->c, a->v6, a->src, true); > - > - return 0; > -} > - > -/** > - * udp_mmh_splice_port() - Is source address of message suitable for splicing? 
> - * @ref: epoll reference for incoming message's origin socket > - * @mmh: mmsghdr of incoming message > - * > - * Return: if source address of message in @mmh refers to localhost (127.0.0.1 > - * or ::1) its source port (host order), otherwise -1. > - */ > -static int udp_mmh_splice_port(union epoll_ref ref, const struct mmsghdr *mmh) > -{ > - const struct sockaddr_in6 *sa6 = mmh->msg_hdr.msg_name; > - const struct sockaddr_in *sa4 = mmh->msg_hdr.msg_name; > - > - ASSERT(ref.type == EPOLL_TYPE_UDP); > - > - if (!ref.udp.splice) > - return -1; > - > - if (ref.udp.v6 && IN6_IS_ADDR_LOOPBACK(&sa6->sin6_addr)) > - return ntohs(sa6->sin6_port); > - > - if (!ref.udp.v6 && IN4_IS_ADDR_LOOPBACK(&sa4->sin_addr)) > - return ntohs(sa4->sin_port); > - > - return -1; > -} > - > /** > * udp_at_sidx() - Get UDP specific flow at given sidx > * @sidx: Flow and side to retrieve > @@ -542,6 +374,16 @@ struct udp_flow *udp_at_sidx(flow_sidx_t sidx) > */ > static void udp_flow_close(const struct ctx *c, const struct udp_flow *uflow) > { > + if (uflow->s[INISIDE] >= 0) { > + /* The listening socket needs to stay in epoll */ > + close(uflow->s[INISIDE]); > + } > + > + if (uflow->s[TGTSIDE] >= 0) { > + /* But the flow specific one needs to be removed */ > + epoll_ctl(c->epollfd, EPOLL_CTL_DEL, uflow->s[TGTSIDE], NULL); > + close(uflow->s[TGTSIDE]); Keeping numbers of closed sockets around, instead of setting them to -1 right away, makes me a bit nervous. On the other hand it's obvious that udp_flow_new() sets them to -1 anyway, so there isn't much that can go wrong. 
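For what it's worth, the defensive idiom that would remove that nervousness
is to poison the field at close time. A hypothetical helper, not something
this patch introduces:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Close *fd if it's open and reset it to -1 in the same step, so the
 * stale descriptor number can't be double-closed or accidentally reused
 * after the kernel reassigns it to an unrelated file. */
static void close_and_reset(int *fd)
{
	if (*fd >= 0) {
		close(*fd);
		*fd = -1;
	}
}
```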
> + } > flow_hash_remove(c, FLOW_SIDX(uflow, INISIDE)); > } > > @@ -549,26 +391,80 @@ static void udp_flow_close(const struct ctx *c, const struct udp_flow *uflow) > * udp_flow_new() - Common setup for a new UDP flow > * @c: Execution context > * @flow: Initiated flow > + * @s_ini: Initiating socket (or -1) > * @now: Timestamp > * > * Return: UDP specific flow, if successful, NULL on failure > */ > static flow_sidx_t udp_flow_new(const struct ctx *c, union flow *flow, > - const struct timespec *now) > + int s_ini, const struct timespec *now) > { > const struct flowside *ini = &flow->f.side[INISIDE]; > struct udp_flow *uflow = NULL; > + const struct flowside *tgt; > + uint8_t tgtpif; > > if (!inany_is_unicast(&ini->eaddr) || ini->eport == 0) { > flow_dbg(flow, "Invalid endpoint to initiate UDP flow"); > goto cancel; > } > > - if (!flow_target(c, flow, IPPROTO_UDP)) > + if (!(tgt = flow_target(c, flow, IPPROTO_UDP))) > goto cancel; > + tgtpif = flow->f.pif[TGTSIDE]; > > uflow = FLOW_SET_TYPE(flow, FLOW_UDP, udp); > uflow->ts = now->tv_sec; > + uflow->s[INISIDE] = uflow->s[TGTSIDE] = -1; > + > + if (s_ini >= 0) { > + /* When using auto port-scanning the listening port could go > + * away, so we need to duplicate it */ For consistency: closing */ on a newline. I would also say "the socket" instead of "it", otherwise it seems to be referring to the port at a first glance. > + uflow->s[INISIDE] = fcntl(s_ini, F_DUPFD_CLOEXEC, 0); There's one aspect of this I don't understand: if s_ini is closed while checking for bound ports (is it? I didn't really reach the end of this series), aren't duplicates also closed? That is, the documentation of dup2(2), which should be the same for this purpose, states that the duplicate inherits "file status flags", which I would assume also includes the fact that a socket is closed. I didn't test that though. If duplicates are closed, I guess an alternative solution could be to introduce some kind of reference counting for sockets... 
somewhere. > + if (uflow->s[INISIDE] < 0) { > + flow_err(uflow, > + "Couldn't duplicate listening socket: %s", > + strerror(errno)); > + goto cancel; > + } > + } > + > + if (pif_is_socket(tgtpif)) { > + union { > + flow_sidx_t sidx; > + uint32_t data; > + } fref = { > + .sidx = FLOW_SIDX(flow, TGTSIDE), > + }; > + > + uflow->s[TGTSIDE] = flowside_sock_l4(c, EPOLL_TYPE_UDP_REPLY, > + tgtpif, tgt, fref.data); > + if (uflow->s[TGTSIDE] < 0) { > + flow_dbg(uflow, > + "Couldn't open socket for spliced flow: %s", > + strerror(errno)); > + goto cancel; > + } > + > + if (flowside_connect(c, uflow->s[TGTSIDE], tgtpif, tgt) < 0) { > + flow_dbg(uflow, > + "Couldn't connect flow socket: %s", > + strerror(errno)); > + goto cancel; > + } > + > + /* It's possible, if unlikely, that we could receive some > + * unrelated packets in between the bind() and connect() of this > + * socket. For now we just discard these. We could consider > + * trying to re-direct these to an appropriate handler, if we Simply "redirect"? > + * need to. > + */ > + /* cppcheck-suppress nullPointer */ > + while (recv(uflow->s[TGTSIDE], NULL, 0, MSG_DONTWAIT) >= 0) > + ; Could a local attacker (another user) attempt to use this for denial of service? Of course, somebody could flood us anyway and we would get and handle all the events that that causes. But this case is different because we could get stuck for an unlimited amount of time without serving other sockets at all. If that's a possibility, perhaps a limit for this loop (a maximum amount of recv()) tries would be a good idea. I'm not sure how we should handle the case where we exceed the threshold. Another one, which adds some complexity, but looks more correct to me, would be to try a single recv() call, and if we get data from it, fail creating the new flow entirely. I'm still reviewing the rest (of this patch and of this series). -- Stefano
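Back on the F_DUPFD_CLOEXEC question above: a duplicate shares the open
file description (hence file status flags such as O_NONBLOCK), but "closed"
isn't a status flag. The description stays alive until the last descriptor
referring to it is closed, so closing s_ini doesn't affect the duplicate. A
quick check of that claim, using a pipe for simplicity (the fd semantics
are the same as for a socket):

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Duplicate a descriptor with close-on-exec set, as the patch does for
 * the listening socket: the copy outlives the original. */
int dup_cloexec(int fd)
{
	return fcntl(fd, F_DUPFD_CLOEXEC, 0);
}
```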
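And a sketch of the bounded drain suggested at the end, purely to
illustrate the shape: UDP_DRAIN_MAX is an arbitrary cap I made up, not a
value from the patch.

```c
#include <arpa/inet.h>
#include <assert.h>
#include <errno.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#define UDP_DRAIN_MAX	32	/* arbitrary cap, for illustration only */

/* Discard stray datagrams queued on @s between bind() and connect(),
 * giving up after UDP_DRAIN_MAX of them so a flood can't stall us.
 *
 * Return: 0 once the queue is empty, -1 if the cap was hit (the caller
 * could then fail flow creation instead of looping forever)
 */
static int udp_sock_drain(int s)
{
	for (int i = 0; i < UDP_DRAIN_MAX; i++) {
		if (recv(s, NULL, 0, MSG_DONTWAIT) < 0)
			return (errno == EAGAIN ||
				errno == EWOULDBLOCK) ? 0 : -1;
	}
	return -1;
}
```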