Date: Wed, 27 Sep 2023 19:06:16 +0200
From: Stefano Brivio <sbrivio@redhat.com>
To: David Gibson
CC: Matej Hrica, passt-dev@passt.top
Subject: Re: [PATCH RFT 5/5] passt.1: Add note about tuning rmem_max and wmem_max for throughput
Message-ID: <20230927190616.24821407@elisabeth>
References: <20230922220610.58767-1-sbrivio@redhat.com> <20230922220610.58767-6-sbrivio@redhat.com>
Organization: Red Hat

On Mon, 25 Sep 2023 14:57:40 +1000
David Gibson wrote:

> On Sat, Sep 23, 2023 at 12:06:10AM +0200, Stefano Brivio wrote:
> > Signed-off-by: Stefano Brivio
> > ---
> >  passt.1 | 33 +++++++++++++++++++++++++++++++++
> >  1 file changed, 33 insertions(+)
> > 
> > diff --git a/passt.1 b/passt.1
> > index 1ad4276..bcbe6fd 100644
> > --- a/passt.1
> > +++ b/passt.1
> > @@ -926,6 +926,39 @@ If the sending window cannot be queried, it will always be announced as the
> >  current sending buffer size to guest or target namespace. This might affect
> >  throughput of TCP connections.
> > 
> > +.SS Tuning for high throughput
> > +
> > +On Linux, by default, the maximum memory that can be set for receive and send
> > +socket buffers is 208 KiB. Those limits are set by the
> > +\fI/proc/sys/net/core/rmem_max\fR and \fI/proc/sys/net/core/wmem_max\fR files,
> > +see \fBsocket\fR(7).
> > +
> > +As of Linux 6.5, while the TCP implementation can dynamically shrink buffers
> > +depending on utilisation even above those limits, such a small limit will
> 
> "shrink buffers" and "even above those limits" don't seem to quite
> work together.

Oops. I guess I should simply s/shrink/grow/ here.

> > +reflect on the advertised TCP window at the beginning of a connection, and the
> 
> Hmmm.... while [rw]mem_max might limit that initial window size, I
> wouldn't expect increasing the limits alone to increase that initial
> window size: wouldn't that instead be affected by the TCP default
> buffer size i.e. the middle value in net.ipv4.tcp_rmem?

If we don't use SO_RCVBUF, yes... but we currently do, and with that,
we can get a much larger initial window (as we do now).

On the other hand, maybe, as mentioned in my follow-up about 3/5, we
should drop SO_RCVBUF for TCP sockets.

> > +buffer size of the UNIX domain socket buffer used by \fBpasst\fR cannot exceed
> > +these limits anyway.
> > +
> > +Further, as of Linux 6.5, using socket options \fBSO_RCVBUF\fR and
> > +\fBSO_SNDBUF\fR will prevent TCP buffers to expand above the \fIrmem_max\fR and
> > +\fIwmem_max\fR limits because the automatic adjustment provided by the TCP
> > +implementation is then disabled.
> > +
> > +As a consequence, \fBpasst\fR and \fBpasta\fR probe these limits at start-up and
> > +will not set TCP socket buffer sizes if they are lower than 2 MiB, because this
> > +would affect the maximum size of TCP buffers for the whole duration of a
> > +connection.
> > +
> > +Note that 208 KiB is, accounting for kernel overhead, enough to fit less than
> > +three TCP packets at the default MSS. In applications where high throughput is
> > +expected, it is therefore advisable to increase those limits to at least 2 MiB,
> > +or even 16 MiB:
> > +
> > +.nf
> > + sysctl -w net.core.rmem_max=$((16 << 20))
> > + sysctl -w net.core.wmem_max=$((16 << 20))
> > +.fi
> 
> As noted in a previous mail, empirically, this doesn't necessarily
> seem to work better for me. I'm wondering if we'd be better off never
> touching RCVBUF and SNDBUF for TCP sockets, and letting the kernel do
> its adaptive thing. We probably still want to expand the buffers as
> much as we can for the Unix socket, though. And we likely still want
> expanded limits for the tests so that iperf3 can use large buffers.

Right. Let's keep this patch for a later time then, and meanwhile check
if we should drop SO_RCVBUF, SO_SNDBUF, or both, for TCP sockets.

-- 
Stefano
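
[Editorial illustration, not part of the original thread: a minimal C
sketch of the start-up probing described in the quoted patch text. It
reads net.core.rmem_max and only sets SO_RCVBUF on a TCP socket when
the limit is at least 2 MiB, otherwise leaving the kernel's TCP
autotuning alone. The helper names (probe_rmem_max, maybe_set_rcvbuf)
are assumptions for illustration and are not passt's actual code; the
same reasoning would apply to SO_SNDBUF and wmem_max.]

/* Sketch only, not passt's implementation: probe net.core.rmem_max at
 * start-up and set SO_RCVBUF only if the limit is at least 2 MiB,
 * since setting it pins the buffer size for the whole connection.
 */
#include <stdio.h>
#include <sys/socket.h>

#define BUF_MIN		(2L * 1024 * 1024)	/* 2 MiB threshold from the patch */

static long probe_rmem_max(void)
{
	FILE *f = fopen("/proc/sys/net/core/rmem_max", "r");
	long v = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &v) != 1)
		v = -1;
	fclose(f);
	return v;
}

static void maybe_set_rcvbuf(int s, long rmem_max)
{
	int sz = (int)rmem_max;

	if (rmem_max < BUF_MIN)
		return;	/* small limit: leave TCP autotuning free to grow buffers */

	if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz)))
		perror("setsockopt(SO_RCVBUF)");
}

int main(void)
{
	int s = socket(AF_INET, SOCK_STREAM, 0);

	if (s < 0)
		return 1;
	maybe_set_rcvbuf(s, probe_rmem_max());
	return 0;
}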