Date: Wed, 26 Feb 2025 19:51:11 +1100
From: David Gibson
To: Stefano Brivio
Cc: passt-dev@passt.top
Subject: Re: [PATCH v2 0/2] More graceful handling of migration without passt-repair
In-Reply-To: <20250226090948.3d1fff91@elisabeth>
References: <20250225055132.3677190-1-david@gibson.dropbear.id.au>
 <20250225184316.407247f4@elisabeth>
 <20250226090948.3d1fff91@elisabeth>

On Wed, Feb 26, 2025 at 09:09:48AM +0100, Stefano Brivio wrote:
> On Wed, 26 Feb 2025 11:27:32 +1100
> David Gibson wrote:
>
> > On Tue, Feb 25, 2025 at 06:43:16PM +0100, Stefano Brivio wrote:
> > > On Tue, 25 Feb 2025 16:51:30 +1100
> > > David Gibson wrote:
> > >
> > > > From Red Hat internal testing we've had some reports that, if
> > > > attempting to migrate without passt-repair, the failure mode is
> > > > uglier than we'd like.  The migration fails, which is somewhat
> > > > expected, but we don't correctly roll things back on the source,
> > > > so it breaks networking there as well.
> > > >
> > > > Handle this more gracefully, allowing the migration to proceed in
> > > > this case, but allowing TCP connections to break.
> > > >
> > > > I've now tested this reasonably:
> > > >  * I get a clean migration if there are no active flows
> > > >  * Migration completes, although connections are broken, if
> > > >    passt-repair isn't connected
> > > >  * Basic test suite (minus perf)
> > > >
> > > > I didn't manage to test with libvirt yet, but I'm pretty convinced
> > > > the behaviour should be better than it was.
> > >
> > > I did, and it is. The series looks good to me and I would apply it as
> > > it is, but I'm waiting a bit longer in case you want to try out some
> > > variations based on my tests as well. Here's what I did.
> >
> > [snip]
> >
> > Thanks for the detailed instructions. More complex than I might have
> > liked, but oh well.
> >
> > >   $ virsh migrate --verbose --p2p --live --unsafe alpine --tunneled qemu+ssh://88.198.0.161:10951/session
> > >   Migration: [97.59 %]error: End of file while reading data: : Input/output error
> > >
> > > ...despite --verbose the error doesn't tell much (perhaps I need
> > > LIBVIRT_DEBUG=1 instead?), but passt terminates at this point. With
> > > this series (I just used 'make install' from the local build), migration
> > > succeeds instead:
> > >
> > >   $ virsh migrate --verbose --p2p --live --unsafe alpine --tunneled qemu+ssh://88.198.0.161:10951/session
> > >   Migration: [100.00 %]
> > >
> > > Now, on the target, I still have to figure out how to tell libvirt
> > > to start QEMU and prepare for the migration (equivalent of '-incoming'
> > > as we use in our tests), instead of just starting a new instance like
> > > it does. Otherwise, I have no chance to start passt-repair there.
> > > Perhaps it has something to do with persistent mode described here:
> >
> > Ah.  So I'm pretty sure virsh migrate will automatically start qemu
> > with --incoming on the target.
>
> ("-incoming"), yes, see src/qemu/qemu_migration.c,
> qemuMigrationDstPrepare().
>
> > IIUC the problem here is more about timing: we want it to start it
> > early, so that we have a chance to start passt-repair and let it
> > connect before the migration actually happens.
>
> For the timing itself, we could actually wait for passt-repair to be
> there, with a timeout (say, 100ms).

I guess.  That still requires some way for KubeVirt (or whatever) to
know at least roughly when it needs to launch passt-repair, and I'm not
sure if that's something we can currently get from libvirt.

> We could also modify passt-repair to set up an inotify watcher if the
> socket isn't there yet.

Maybe, yes.  This kind of breaks our "passt starts first, passt-repair
connects to it" model though, and I wonder if we need to revisit the
security implications of that.

> > Crud...
> > I didn't think of this before.  I don't know that there's any
> > sensible way to do this without having libvirt managing passt-repair
> > as well.
>
> But we can't really use it as we're assuming that passt-repair will run
> with capabilities virtqemud doesn't want/need.

Oh.  True.

> > I mean it's not impossible there's some option to do this,
> > but I doubt there's been any reason before for something outside of
> > libvirt to control the timing of the target qemu's creation.  I think
> > we need to ask libvirt people about this.
>
> I'm looking into it (and perhaps virtiofsd had similar troubles?).

I'm guessing libvirt already knows how to start virtiofsd - just as it
already knows how to start passt, just not passt-repair.

> > > https://libvirt.org/migration.html#configuration-file-handling
> >
> > Yeah..  I don't think this is relevant.
> >
> > > and --listen-address, but I'm not quite sure yet.
> > >
> > > That is, I could only test different failures (early one on source, or
> > > later one on target) with this, not a complete successful migration.
> > >
> > > > There are more fragile cases that I'm looking to fix, particularly the
> > > > die()s in flow_migrate_source_rollback() and elsewhere, however I ran
> > > > into various complications that I didn't manage to sort out today.
> > > > I'll continue looking at those tomorrow.  I'm now pretty confident
> > > > that those additional fixes won't entirely supersede the changes in
> > > > this series, so it should be fine to apply these on their own.
> > >
> > > By the way, I think the somewhat less fragile/more obvious case where
> > > we fail clumsily is when the target doesn't have the same address as
> > > the source (among other possible addresses).  In that case, we fail
> > > (and terminate) with a rather awkward:
> >
> > Ah, yes, that is a higher priority fragile case.
> >
> > >   93.7217: ERROR:   Failed to bind socket for migrated flow: Cannot assign requested address
> > >   93.7218: ERROR:   Flow 0 (TCP connection): Can't set up socket: (null), drop
> > >   93.7331: ERROR:   Selecting TCP_SEND_QUEUE, socket 1: Socket operation on non-socket
> > >   93.7333: ERROR:   Unexpected reply from TCP_REPAIR helper: -100
> > >
> > > that's because, oops, I only took care of socket() failures in
> > > tcp_flow_repair_socket(), but not bind() failures (!). Sorry.
> >
> > No, you check for errors on both.
>
> Well, "check", yes, but I'm not even setting an error code. I haven't
> tried your 3/3 yet but look at "(null)" resulting from:
>
>   flow_err(flow, "Can't set up socket: %s, drop", strerror_(rc));
>
> ...rc is 0.

-1, not 0, otherwise we wouldn't enter that if clause at all.  But,
still, out of bounds for strerror().

I did spot that bug - tcp_flow_repair_socket() is directly passing on
the return code from bind(), whereas it should be returning -errno.
So, two bugs actually: 1) in the existing code we should return -errno,
not rc, if bind() fails; 2) in my 3/3 it should be calling strerror()
on -rc, not rc.

> > The problem is that in tcp_flow_migrate_target() we cancel the flow
> > allocation and carry on - but the source will still send information
> > for this flow, putting us out of sync with the stream.
>
> That, too, yes.
>
> > > Once that's fixed, flow_migrate_target() should also take care of
> > > decreasing 'count' accordingly. I just had a glimpse but didn't
> > > really try to sketch a fix.
> >
> > Adjusting count won't do the job.  Instead we'd need to keep the flow
> > around, but marked as "dead" somehow, so that we read but discard the
> > incoming information for it.  The MIGRATING state I added in one of my
> > drafts was supposed to help with this sort of thing.  But that's quite
> > a complex change.
>
> I think it's great that you could (practically) solve it with three
> lines...
Yeah, I sent that email at the beginning of my day; by the end I'd come
up with the simpler approach.

> > Hrm... at least in the near term, I think it might actually be easier
> > to set IP_FREEBIND when we create sockets for in-migrating flows.
> > That way we can process them normally, they just won't do much without
> > the address set.  It has the additional advantage that it should work
> > if the higher layers only move the IP just after the migration,
> > instead of in advance.
>
> Perhaps we want it anyway, but I wonder:

Right, I'm no longer considering this as a short term solution, since
checking for fd < 0 I think works better for the immediate problems.

> what happens if we turn repair mode off and we bound to a non-local
> address? I suppose we won't send out anything, but I'm not sure. If we
> send out the first keep-alive segment with a wrong address, we probably
> ruined the connection.

That's a good point.  More specifically, I think IP_FREEBIND is
generally used for listen()ing sockets; I'm guessing you'll get an
error if you try to connect() a socket that's bound to a non-local
address.  It's possible TCP_REPAIR would defer that until repair mode
is switched off, which wouldn't make a lot of difference to us.

It's also possible there could be a bug in repair mode that would let
you construct a non-locally bound, connected socket that way.  I'm not
entirely sure what the consequences would be.  I guess that might
already be possible in a different way: what happens if you have a
connect()ed socket, then the admin removes the address to which it is
bound?

> Once I find a solution for the target libvirt/passt-repair thing (and
> the remaining SELinux issues), I'll try to have a look at this too. I
> haven't tried yet a migration with a mismatching address on the target
> and passt-repair available.

Right, I was trying to set up a test case for this today.  I made some
progress but didn't really get it working.
I was using qemu directly with scripts to put the two ends into
different net namespaces, rather than libvirt on separate L1 VMs.
Working out how to get the two namespaces connected in a way I could do
the migration, while still being separate enough, was doing my head in
a bit.

In doing that, I also spotted another wrinkle.  I don't think this is
one we can reasonably fix - but we should be aware, since someone will
probably try it at some point: migration is not going to work if the
two hosts have their own connectivity provided by (separate instances
of) passt or pasta (or slirp for that matter).  The migrating VM can
have its TCP stream reconstructed perfectly, so the right L2 packets
come out of the host, but the host's own passt/pasta instance won't
know about the flows and so will just drop/reject the packets.

To make that work we'd basically have to migrate state for every
"ancestor" passt/pasta until we hit a common namespace.  That seems
pretty infeasible to me, since the pieces that know about the migration
probably don't own those layers of the network.

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson