On Tue, Sep 23, 2025 at 01:00:39PM +0200, Stefano Brivio wrote:
> On Tue, 23 Sep 2025 17:53:09 +1000
> David Gibson wrote:
> 
> > On Mon, Sep 22, 2025 at 10:03:30PM +0200, Stefano Brivio wrote:
> > > On Mon, 22 Sep 2025 15:17:12 +0800
> > > Yumei Huang wrote:
> > > > On Fri, Sep 19, 2025 at 9:38 AM David Gibson
> > > > wrote:
> > > > > On Thu, Sep 18, 2025 at 09:17:14AM +0200, Stefano Brivio wrote:
> > > > > > On Thu, 18 Sep 2025 14:28:37 +1000
> > > > > > David Gibson wrote:
> > [snip]
> > > > > > Does it work to cover situations where users might start passt a bit
> > > > > > before the guest connects, and try to connect to services right away?
> > > > > > 
> > > > > > I suggested using ssh which should have a quite long timeout and retry
> > > > > > connecting for a while. You mentioned you would assist Yumei in testing
> > > > > > this if needed.
> > > > > 
> > > > > Ah, yes, you're right and I'd forgotten that. Following up today.
> > > > I tried both 'ssh' and 'socat' (writing a big file) before a guest
> > > > connects, they get a 'Connection reset' after 10s, even if the guest
> > > > connects in ~2s.
> > > > It's because, when start ssh or socat, passt would try to finish the
> > > > tcp handshake with the guest. It sends SYN to the guest immediately
> > > > and waits for SYN-ACK. However, the SYN frame is dropped/lost due to
> > > > no guest connected. So though the guest connects in seconds, the tcp
> > > > handshake would timeout, and returns rst via tcp_rst().
> > > Ah, right. We won't try to resend the SYN, that's simply not
> > > implemented.
> > > 
> > > The timeout you see is SYN_TIMEOUT, timer set by tcp_timer_ctl() and
> > > handled by tcp_timer_handler().
> > > > Either with or without this patch, they got the same 'connection reset'.
> > > > Maybe it's something to fix?
> > > 
> > > First off, this shows that the current patch is harmless, so I would go
> > > ahead and apply it (but see 2. below).
> > > 
> > > Strictly speaking, I don't think we really *need* to fix anything, but
> > > for sure the behaviour isn't ideal. I see two alternatives:
> > > 
> > > 1. we implement a periodic retry for the SYN segment. This would *seem*
> > >    to give the best behaviour in this case, but:
> > > 
> > >    a. it's quite complicated (we need to calculate some delays for the
> > >       timers, etc.), and not really transparent (which is in general a
> > >       goal of passt)
> > 
> > I'm not really sure why you say it's not transparent, or at least what
> > other option you're comparing it to.  The peer has initiated a
> > connection to us in the normal way (which may include resending SYNs).
> > Now we're initiating a connection to the guest in the normal way
> > (which may include resending SYNs).
> 
> I was comparing this to b. or to doing nothing.
> 
> But, actually, you're right, the kernel wouldn't tell us about a
> repeated SYN, it would still be the same socket returned from accept(),
> so it's not necessarily less transparent.

Not only can't it tell us about a repeated SYN, but there won't *be* a
repeated SYN, unless the host kernel's SYN-ACK gets lost.

> I was thinking that we know when the guest connects, so we could just
> delay the SYN segment until then, by introducing a separate TAP_SYN_SENT
> event (right now it's implicit in SOCK_ACCEPTED). But when the guest
> connects, services are typically not up yet. You would typically get a
> RST while the guest is booting.

We could; I would have thought the timescale over which we expect a
guest to be attached would mean that resending SYNs on a timer would
achieve the same thing more simply (and handle other cases, too).

Getting an RST during boot would be typical, but as we've seen in
Volker's case, this may not be a newly booting guest, but a
reconnecting guest (or pasta is backing a NIC being hotplugged into
the guest).

> > >    b. if the guest never appears, we're just wasting client's time.
> > >       See
> > >       db2c91ae86c7 ("tcp: Set ACK flag on *all* RST segments, even for
> > >       client in SYN-SENT state") for an example where it's important to
> > >       fail fast
> > 
> > Sure.  I'd say RSTing here would be *less* transparent, but it might
> > still be worth it to make the peer fail fast.
> 
> But that's what happens naturally (with Linux) if nobody is listening,
> and in RFC 9293 terms, I'd say we should approximate a CLOSED state,
> 3.10.7.1:
> 
>   If the state is CLOSED (i.e., TCB does not exist), then [...] [a]n
>   incoming segment not containing a RST causes a RST to be sent in response.
> 
> rather than a LISTEN state (3.10.7.2). However, see below.

Well, it depends on what physical model we're trying to emulate here.
My assumption was that we were trying to make this look like the guest
was off, or had its network cable unplugged.  In which case we want to
just discard packets to the extent we can.

We could model an unconnected guest as a host with no listens, in
which case, yes, we should RST ASAP.  That seems less natural to me.

> > > 2. reset right away as I was suggesting in
> > >    https://archives.passt.top/passt-dev/20250915081319.00e72e53@elisabeth/:
> > > 
> > > > We could mitigate that by making the TCP handler aware of this, and by
> > > > resetting the connection if the guest isn't there. This would at least
> > > > be consistent with the case where the guest isn't listening on the port
> > > > (we accept(), fail to connect to it, eventually call tcp_rst()).
> > > 
> > >    and let the client retry as appropriate (if implemented). Those retries
> > >    can be quite fast, see this report (from IRC) for 722d347c1932 ("tcp:
> > >    Don't reset outbound connection on SYN retries"):
> > 
> > I don't see how that commit is relevant to this situation.  That's
> > talking about SYN retries.
> 
> That's just an example about how SYN segments are retried. It's not
> otherwise relevant for this situation.
Sure, but again, SYNs *won't* be retried in this case, because the
host completes the handshake.  Well... there could be retried SYNs,
but the host kernel either won't tell us about it (lost SYN-ACK) or
doesn't know itself (lost SYN).

> > We can see those in the case of outbound
> > connections but we'll never see them for the case of inbound
> > connections, because the host kernel has already completed the
> > handshake.  For inbound we essentially have two options:
> > 
> > a) Retry SYNs ourselves, emulating what the peer would do if it was
> >    talking directly to an absent guest.
> > b) Reject SYNs quickly, trusting that the guest will have some sort of
> >    application level retry.  That will depend on the client.  I guess
> >    my fear here is that a client seeing a completed handshake + RST
> >    might assume that the guest server is permanently broken, rather
> >    than just temporarily missing as it might if there's no response at
> >    all.

Fwiw, these two options basically come down to whether we're trying to
make a missing guest look like a machine that's off, or a machine
that's not listening.

> Oops, that's a detail I forgot: we complete the handshake and then
> reset... which brings us to https://bugs.passt.top/show_bug.cgi?id=131.
> 
> Once that's implemented, perhaps it will be low effort to not listen()
> at all in that case. Right now, I'm not sure anymore.

Delaying the listen() would make us more closely (pretty much
exactly?) resemble the case where we're pretending the guest is a
machine that's not listening.  It doesn't get us closer to treating
the guest as a machine that's off or physically disconnected.  It
would just mean we get SYN -> RST instead of
SYN -> SYN-ACK -> ACK -> RST.

> On the other hand, with just this patch, we will reset the connection
> after 10 seconds (no matter what happens), which is just like this, but
> delayed.
> > I suggested Yumei's approach here to aim for (a) on the basis of
> > transparency - it's as close as I think we can get to a bridged guest
> > that's just missing.  I'm not necessarily opposed to (b), but I think
> > it's less transparent, so we need an argument that it will lead to
> > better outcomes regardless.
> 
> Given the problem above, maybe we should really look into a) (but this
> patch doesn't do it).

It doesn't, because we don't retry SYNs (which I hadn't realised when
I suggested it).  Arguably we should do that anyway.  It would be
extremely rare, but it's not impossible for our SYN to be really truly
lost on its way to the guest due to, say, a full buffer in the tap
device, or in the guest itself.

> Well, let me merge this, and other than that I would suggest looking
> into a) if time allows.
> 
> b) looks still slightly better than the current situation, because right
> now we'll accept and RST after 10 seconds. So if time doesn't allow,
> let's settle for b) for the moment being?
> 
> > > 3.3223: pasta: epoll event on /dev/net/tun device 18 (events: 0x00000001)
> > > 3.3223: pasta: epoll event on /dev/net/tun device 18 (events: 0x00000001)
> > > 3.3224: tap: protocol 6, 192.168.122.14:55532 -> 192.0.0.1:80 (1 packet)
> > > 3.3224: Flow 0 (NEW): FREE -> NEW
> > > 3.3224: Flow 0 (INI): NEW -> INI
> > > 3.3224: Flow 0 (INI): TAP [192.168.122.14]:55532 -> [192.0.0.1]:80 => ?
> > > 3.3224: Flow 0 (TGT): INI -> TGT
> > > 3.3224: Flow 0 (TGT): TAP [192.168.122.14]:55532 -> [192.0.0.1]:80 => HOST [0.0.0.0]:0 -> [192.0.0.1]:80
> > > 3.3224: Flow 0 (TCP connection): TGT -> TYPED
> > > 3.3224: Flow 0 (TCP connection): TAP [192.168.122.14]:55532 -> [192.0.0.1]:80 => HOST [0.0.0.0]:0 -> [192.0.0.1]:80
> > > 3.3224: Flow 0 (TCP connection): event at tcp_conn_from_tap:1489
> > > 3.3224: Flow 0 (TCP connection): TAP_SYN_RCVD: CLOSED -> SYN_SENT
> > > 3.3224: Flow 0 (TCP connection): failed to set TCP_MAXSEG on socket 21
> > > 3.3224: Flow 0 (TCP connection): Side 0 hash table insert: bucket: 294539
> > > 3.3225: Flow 0 (TCP connection): TYPED -> ACTIVE
> > > 3.3225: Flow 0 (TCP connection): TAP [192.168.122.14]:55532 -> [192.0.0.1]:80 => HOST [0.0.0.0]:0 -> [192.0.0.1]:80
> > > 4.0027: pasta: epoll event on namespace timer watch 17 (events: 0x00000001)
> > > 4.3612: pasta: epoll event on /dev/net/tun device 18 (events: 0x00000001)
> > > 4.3613: tap: protocol 6, 192.168.122.14:55532 -> 192.0.0.1:80 (1 packet)
> > > 4.3613: Flow 0 (TCP connection): packet length 40 from tap
> > > 4.3613: Flow 0 (TCP connection): TCP reset at tcp_tap_handler:1989
> > > 4.3613: Flow 0 (TCP connection): flag at tcp_prepare_flags:1163
> > > 4.3613: Flow 0 (TCP connection): event at tcp_rst_do:1206
> > > 4.3613: Flow 0 (TCP connection): CLOSED: SYN_SENT -> CLOSED
> > > 4.3614: Flow 0 (TCP connection): Side 0 hash table remove: bucket: 294539
> > > 4.3614: Flow 0 (FREE): ACTIVE -> FREE
> > > 4.3614: Flow 0 (FREE): TAP [192.168.122.14]:55532 -> [192.0.0.1]:80 => HOST [0.0.0.0]:0 -> [192.0.0.1]:80
> > > 
> > > ...the retry happened within one second. This is a container, so Linux
> > > kernel, and the client was wget.
> > I'm not seeing a retry at all in this log, plus it's an outbound
> > connection, which is not the case we're dealing with here.
> 
> It's two SYN segments from a guest (yes, an outbound connection):
> 
> 3.3224: tap: protocol 6, 192.168.122.14:55532 -> 192.0.0.1:80 (1 packet)
> 
> 4.3613: tap: protocol 6, 192.168.122.14:55532 -> 192.0.0.1:80 (1 packet)
> 
> that's a retry and that's all I wanted to show: the typical timing you
> get from Linux.

Ah, right, I see.  But again, it's irrelevant to this case, since we
don't get / can't see the repeated SYNs.

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson