On Fri, Apr 25, 2025 at 10:07:06AM +0200, Stefano Brivio wrote:
> On Fri, 25 Apr 2025 13:49:26 +0700
> David Gibson wrote:
> 
> > On Fri, Apr 25, 2025 at 08:27:00AM +0200, Stefano Brivio wrote:
> > > On Fri, 4 Apr 2025 21:15:30 +1100
> > > David Gibson wrote:
> > > 
> > > > As discussed, I've been working on using connect()ed sockets, rather
> > > > than dups of the listening sockets for handling traffic on the
> > > > initiating side of UDP flows.  This improves consistency, avoids some
> > > > problems (bug 103) and will allow for some useful future improvements.
> > > > 
> > > > It has the nice side effect of allowing some more code to be shared
> > > > between various paths, resulting in a pretty nice negative diffstat.
> > > > 
> > > > David Gibson (12):
> > > >   udp: Use connect()ed sockets for initiating side
> > > >   udp: Make udp_sock_recv() take max number of frames as a parameter
> > > >   udp: Polish udp_vu_sock_info() and remove from vu specific code
> > > >   udp: Don't bother to batch datagrams from "listening" socket
> > > >   udp: Parameterize number of datagrams handled by
> > > >     udp_*_reply_sock_data()
> > > >   udp: Split spliced forwarding path from udp_buf_reply_sock_data()
> > > >   udp: Merge vhost-user and "buf" listening socket paths
> > > >   udp: Move UDP_MAX_FRAMES to udp.c
> > > >   udp_flow: Take pif and port as explicit parameters to
> > > >     udp_flow_from_sock()
> > > >   udp: Rework udp_listen_sock_data() into udp_sock_fwd()
> > > >   udp: Fold udp_splice_prepare and udp_splice_send into udp_sock_to_sock
> > > >   udp_flow: Don't discard packets that arrive between bind() and
> > > >     connect()
> > > 
> > > Just for the record: it's likely that something here made
> > > https://github.com/containers/podman/issues/25959 more visible (or
> > > directly caused it). I couldn't rule out recent ICMP changes yet,
> > > but I'm fairly sure it's not those.
> > 
> > Drat.  I concur this series is the likely culprit.  First place to
> > check would be the error paths for a flow initiated from the host side
> > (there are new ones because this now involves opening a new socket).
> > Maybe we didn't clean something up in one of those cases, leaving a
> > bomb for a future allocation.
> 
> Right, either that, or perhaps the flow_defer_handler() loop setting
> free_head to NULL if the UDP flow is (!closed) regardless of what
> happened in the previous loop iterations... that looks a bit weird to
> me.

I'm pretty sure that's correct (and hasn't changed).  free_head points
to the first slot in the current "cluster" of free flow table slots.
If it's NULL, we're not in a cluster of free slots, which indeed we're
not if !closed - that indicates the current slot is (still) occupied.
Setting it to NULL means we'll, correctly, start a new free cluster
when we next encounter a free - or free-able - slot.

-- 
David Gibson (he or they)		| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au		| minimalist, thank you, not the other way
					| around.
http://www.ozlabs.org/~dgibson