On Wed, Mar 12, 2025 at 09:39:10PM +0100, Stefano Brivio wrote:
> On Wed, 12 Mar 2025 12:29:11 +1100
> David Gibson wrote:
> 
> > On Tue, Mar 11, 2025 at 10:55:32PM +0100, Stefano Brivio wrote:
> > > On Tue, 11 Mar 2025 12:13:46 +1100
> > > David Gibson wrote:

[snip]

> > > > Now, as it happens, the default downtime limit is 300ms, so an
> > > > additional 10ms is probably fine (though 100ms really wasn't).
> > > > Nonetheless the reasoning above isn't valid.
> > > 
> > > ~50 ms is actually quite easy to get with a few (8) gigabytes of
> > > memory,
> > 
> > 50ms as measured above? That's a bit surprising, because there's no
> > particular reason for it to depend on memory size. AFAICT
> > SET_DEVICE_STATE_FD is called close to immediately before actually
> > reading/writing the stream from the backend.
> 
> Oops, right, this figure I had in mind actually came from a rather
> different measurement, that is, checking when the guest appeared to
> resume from traffic captures with iperf3 running.

Ok. That is a reasonable measure of the downtime, at least as long as
the guest is continuously trying to send, which it will with iperf3.

Which means by adding a 100ms delay, you'd triple the downtime, which
isn't really ok.

With more RAM and/or smaller migration bandwidth this would increase
up to the 300ms limit. In that case 100ms would still be a 33%
(unaccounted for) increase, which still isn't really ok.

> I definitely can't see this difference if I repeat the same measurement
> as above.
> 
> > The memory size will of course affect the total migration time, and
> > maybe the downtime. As soon as qemu thinks it can transfer all
> > remaining RAM within its downtime limit, qemu will go to the stopped
> > phase. With a fast local to local connection, it's possible qemu
> > could enter that stopped phase almost immediately.
> > 
> > > that's why 100 ms also looked fine to me, but sure, 10 ms
> > > sounds more reasonable.
> 

-- 
David Gibson (he or they)       | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you, not the other way
                                | around.
http://www.ozlabs.org/~dgibson
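
As an aside, a rough sketch of the convergence check described above,
i.e. QEMU entering the stopped phase once it estimates the remaining
RAM fits within the downtime limit. This is only a simplified
illustration of that description, not QEMU's actual code, and all the
names in it are made up:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Simplified illustration (hypothetical names, not QEMU's actual
     * code): migration may enter the stopped phase once the estimated
     * time to transfer the remaining dirty RAM fits within the
     * downtime limit (300 ms by default, as discussed above).
     */
    bool migration_can_stop(uint64_t remaining_bytes,
                            uint64_t bandwidth_bytes_per_s,
                            uint64_t downtime_limit_ms)
    {
        uint64_t expected_downtime_ms;

        if (!bandwidth_bytes_per_s)
            return false;

        /* Estimated time, in ms, to send what is still dirty */
        expected_downtime_ms = remaining_bytes * 1000 / bandwidth_bytes_per_s;

        return expected_downtime_ms <= downtime_limit_ms;
    }

Any fixed delay added on the backend side (say, an extra 100 ms) comes
on top of that estimate, which is why it eats directly into the same
downtime budget.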