On Thu, Feb 13, 2025 at 11:16:50PM +0100, Stefano Brivio wrote:
> I added this a long long time ago because it dramatically improved
> throughput back then: with rmem_max and wmem_max >= 4 MiB, we would
> force send and receive buffer sizes for TCP sockets to the maximum
> allowed value.
>
> This effectively disables TCP auto-tuning, which would otherwise allow
> us to exceed those limits, as crazy as it might sound. But in any
> case, it made sense.
>
> Now that we have zero (internal) copies on every path, plus vhost-user
> support, it turns out that these settings are entirely obsolete. I get
> substantially the same throughput in every test we perform, even with
> very short durations (one second).
>
> The settings are not just useless: they actually cause us quite some
> trouble on guest state migration, because they lead to huge queues
> that need to be moved as well.
>
> Drop those settings.
>
> Signed-off-by: Stefano Brivio

Hooray!

Reviewed-by: David Gibson

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson
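
[Editor's illustration, not part of the original thread: a minimal sketch of
the kind of buffer forcing the patch drops, assuming a plain setsockopt()
approach. The function and constant names here are hypothetical, not the
actual passt code. Per socket(7)/tcp(7), explicitly setting SO_SNDBUF or
SO_RCVBUF on a TCP socket disables the kernel's per-socket auto-tuning for
that buffer.]

#include <sys/socket.h>

/* Hypothetical constant: 4 MiB, matching the rmem_max/wmem_max threshold
 * mentioned in the commit message.
 */
#define FORCED_BUF_SIZE	(4 * 1024 * 1024)

/* Hypothetical helper: pin send and receive buffers to a fixed size.
 * Once set explicitly, the kernel no longer auto-tunes these buffers,
 * which is the behaviour the patch removes.
 */
static void force_tcp_buffers(int s)
{
	int size = FORCED_BUF_SIZE;

	setsockopt(s, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
	setsockopt(s, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
}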