Date: Tue, 10 Jun 2025 17:29:31 +0200
From: Stefano Brivio <sbrivio@redhat.com>
To: Eugenio Perez Martin
Cc: passt-dev@passt.top, Jason Wang, Jeff Nelson, Paul Holzinger
Subject: Re: vhost-kernel net on pasta: from 26 to 37Gbit/s
Message-ID: <20250610172931.4c730f04@elisabeth>
References: <20250521120855.5cdaeb04@elisabeth> <20250606183702.0ff9a3c7@elisabeth>
Organization: Red Hat
List-Id: Development discussion and patches for passt

[Adding Paul as Podman developer]

On Mon, 9 Jun 2025 11:59:21 +0200
Eugenio Perez Martin wrote:

> On Fri, Jun 6, 2025 at 6:37 PM Stefano Brivio wrote:
> >
> > On Fri, 6 Jun 2025 16:32:38 +0200
> > Eugenio Perez Martin wrote:
> >
> > > On Wed, May 21, 2025 at 12:35 PM Eugenio Perez Martin wrote:
> > > >
> > > > On Wed, May 21, 2025 at 12:09 PM Stefano Brivio wrote:
> > > > >
> > > > > On Tue, 20 May 2025 17:09:44 +0200
> > > > > Eugenio Perez Martin wrote:
> > > > >
> > > > > > [...]
> > > > > >
> > > > > > Now, if I isolate the vhost kernel thread [1], I get way more
> > > > > > performance, as expected:
> > > > > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > > > > [ ID] Interval           Transfer     Bitrate         Retr
> > > > > > [  5]   0.00-10.00  sec  43.1 GBytes  37.1 Gbits/sec    0            sender
> > > > > > [  5]   0.00-10.04  sec  43.1 GBytes  36.9 Gbits/sec                 receiver
> > > > > >
> > > > > > After analyzing the perf output, rep_movs_alternative is the most
> > > > > > called function in all three: iperf3 (~20% Self), passt.avx2
> > > > > > (~15% Self) and vhost (~15% Self)
> > > > >
> > > > > Interesting... s/most called function/function using the most cycles/, I
> > > > > suppose.
> > > > >
> > > >
> > > > Right!
> > > >
> > > > > So it looks somewhat similar to
> > > > >
> > > > >   https://archives.passt.top/passt-dev/20241017021027.2ac9ea53@elisabeth/
> > > > >
> > > > > now?
> > > > >
> > > >
> > > > Kind of. Below tcp_sendmsg_locked I don't see sk_page_frag_refill but
> > > > skb_do_copy_data_nocache. Not sure if that means something, as it
> > > > should not be affected by vhost.
> > > >
> > > > > > But I don't see any of them consuming 100% of CPU in
> > > > > > top: pasta consumes ~85% CPU, both iperf3 client and server consume
> > > > > > ~60%, and vhost consumes ~53%.
> > > > > >
> > > > > > So... I have mixed feelings about this :). By "default" it seems to
> > > > > > have less performance, but my test is maybe too synthetic.
> > > > >
> > > > > Well, surely we can't ask Podman users to pin specific stuff to given
> > > > > CPU threads. :)
> > > > >
> > > >
> > > > Yes, but maybe the result changes under the right schedule? I'm
> > > > isolating the CPUs entirely, which is not the usual case for pasta for
> > > > sure :).
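For reference (and not necessarily what [1] above refers to): the vhost
kernel thread is named after the owning process, so it shows up as
"vhost-<pid>" ("vhost-1805109" further down), and this kind of pinning
can be roughly reproduced with taskset. A sketch, with arbitrary CPU
numbers; isolating CPUs entirely would additionally need something like
isolcpus= on the kernel command line:

  $ sudo taskset -pc 2 "$(pgrep -o vhost)"    # pin the (oldest) vhost thread to CPU 2
  $ taskset -pc 0,1 "$(pgrep passt.avx2)"     # keep pasta on other CPUs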
> > > >
> > > > > > There is room for improvement with the mentioned optimizations so I'd
> > > > > > continue applying them, continuing with UDP and TCP zerocopy, and
> > > > > > developing zerocopy vhost rx.
> > > > >
> > > > > That definitely makes sense to me.
> > > > >
> > > >
> > > > Good!
> > > >
> > > > > > With these numbers I think the series should not be
> > > > > > merged at the moment. I could send it as RFC if you want but I've not
> > > > > > applied the comments the first one received, POC style :).
> > > > >
> > > > > I don't think it's really needed for you to spend time on
> > > > > semi-polishing something just to have an RFC if you're still working on
> > > > > it. I guess the implementation will change substantially anyway once
> > > > > you factor in further optimisations.
> > > > >
> > > >
> > > > Agree! I'll keep iterating on this then.
> > > >
> > >
> > > Actually, if I remove all the taskset etc. and trust the kernel
> > > scheduler, vanilla pasta gives me:
> > >
> > > [pasta@virtlab716 ~]$ /home/passt/pasta --config-net iperf3 -c 10.6.68.254 -w 8M
> > > Connecting to host 10.6.68.254, port 5201
> > > [  5] local 10.6.68.20 port 40408 connected to 10.6.68.254 port 5201
> > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > [  5]   0.00-1.00   sec  3.11 GBytes  26.7 Gbits/sec    0   25.4 MBytes
> > > [  5]   1.00-2.00   sec  3.11 GBytes  26.7 Gbits/sec    0   25.4 MBytes
> > > [  5]   2.00-3.00   sec  3.12 GBytes  26.8 Gbits/sec    0   25.4 MBytes
> > > [  5]   3.00-4.00   sec  3.11 GBytes  26.7 Gbits/sec    0   25.4 MBytes
> > > [  5]   4.00-5.00   sec  3.10 GBytes  26.6 Gbits/sec    0   25.4 MBytes
> > > [  5]   5.00-6.00   sec  3.11 GBytes  26.7 Gbits/sec    0   25.4 MBytes
> > > [  5]   6.00-7.00   sec  3.11 GBytes  26.7 Gbits/sec    0   25.4 MBytes
> > > [  5]   7.00-8.00   sec  3.09 GBytes  26.6 Gbits/sec    0   25.4 MBytes
> > > [  5]   8.00-9.00   sec  3.08 GBytes  26.5 Gbits/sec    0   25.4 MBytes
> > > [  5]   9.00-10.00  sec  3.10 GBytes  26.6 Gbits/sec    0   25.4 MBytes
> > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > [ ID] Interval           Transfer     Bitrate         Retr
> > > [  5]   0.00-10.00  sec  31.0 GBytes  26.7 Gbits/sec    0            sender
> > > [  5]   0.00-10.04  sec  31.0 GBytes  26.5 Gbits/sec                 receiver
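A note on reproducing the vhost-net runs below: pasta presumably needs
/dev/vhost-net (module vhost_net) to be accessible to the unprivileged
user. For a quick test, that could be arranged with something like:

  $ sudo modprobe vhost_net                     # provides /dev/vhost-net
  $ sudo setfacl -m u:pasta:rw /dev/vhost-net   # or a matching udev rule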
> > >
> > > And with vhost-net:
> > >
> > > [pasta@virtlab716 ~]$ /home/passt/pasta --config-net iperf3 -c 10.6.68.254 -w 8M
> > > ...
> > > Connecting to host 10.6.68.254, port 5201
> > > [  5] local 10.6.68.20 port 46720 connected to 10.6.68.254 port 5201
> > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > [  5]   0.00-1.00   sec  4.17 GBytes  35.8 Gbits/sec    0   11.9 MBytes
> > > [  5]   1.00-2.00   sec  4.17 GBytes  35.9 Gbits/sec    0   11.9 MBytes
> > > [  5]   2.00-3.00   sec  4.16 GBytes  35.7 Gbits/sec    0   11.9 MBytes
> > > [  5]   3.00-4.00   sec  4.14 GBytes  35.6 Gbits/sec    0   11.9 MBytes
> > > [  5]   4.00-5.00   sec  4.16 GBytes  35.7 Gbits/sec    0   11.9 MBytes
> > > [  5]   5.00-6.00   sec  4.16 GBytes  35.8 Gbits/sec    0   11.9 MBytes
> > > [  5]   6.00-7.00   sec  4.18 GBytes  35.9 Gbits/sec    0   11.9 MBytes
> > > [  5]   7.00-8.00   sec  4.19 GBytes  35.9 Gbits/sec    0   11.9 MBytes
> > > [  5]   8.00-9.00   sec  4.18 GBytes  35.9 Gbits/sec    0   11.9 MBytes
> > > [  5]   9.00-10.00  sec  4.18 GBytes  35.9 Gbits/sec    0   11.9 MBytes
> > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > [ ID] Interval           Transfer     Bitrate         Retr
> > > [  5]   0.00-10.00  sec  41.7 GBytes  35.8 Gbits/sec    0            sender
> > > [  5]   0.00-10.04  sec  41.7 GBytes  35.7 Gbits/sec                 receiver
> > >
> > > If I go the extra mile and disable notifications (it might be just
> > > noise, but...):
> > >
> > > [pasta@virtlab716 ~]$ /home/passt/pasta --config-net iperf3 -c 10.6.68.254 -w 8M
> > > ...
> > > Connecting to host 10.6.68.254, port 5201
> > > [  5] local 10.6.68.20 port 56590 connected to 10.6.68.254 port 5201
> > > [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> > > [  5]   0.00-1.00   sec  4.19 GBytes  36.0 Gbits/sec    0   12.4 MBytes
> > > [  5]   1.00-2.00   sec  4.18 GBytes  35.9 Gbits/sec    0   12.4 MBytes
> > > [  5]   2.00-3.00   sec  4.18 GBytes  35.9 Gbits/sec    0   12.4 MBytes
> > > [  5]   3.00-4.00   sec  4.20 GBytes  36.1 Gbits/sec    0   12.4 MBytes
> > > [  5]   4.00-5.00   sec  4.21 GBytes  36.2 Gbits/sec    0   12.4 MBytes
> > > [  5]   5.00-6.00   sec  4.21 GBytes  36.1 Gbits/sec    0   12.4 MBytes
> > > [  5]   6.00-7.00   sec  4.20 GBytes  36.1 Gbits/sec    0   12.4 MBytes
> > > [  5]   7.00-8.00   sec  4.23 GBytes  36.4 Gbits/sec    0   12.4 MBytes
> > > [  5]   8.00-9.00   sec  4.24 GBytes  36.4 Gbits/sec    0   12.4 MBytes
> > > [  5]   9.00-10.00  sec  4.21 GBytes  36.2 Gbits/sec    0   12.4 MBytes
> > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > [ ID] Interval           Transfer     Bitrate         Retr
> > > [  5]   0.00-10.00  sec  42.1 GBytes  36.1 Gbits/sec    0            sender
> > > [  5]   0.00-10.04  sec  42.1 GBytes  36.0 Gbits/sec                 receiver
> > >
> > > So I guess the best option is to actually run performance tests closer
> > > to a real-world workload against the new version and see if it works
> > > better?
> >
> > Well, that's certainly a possibility.
> >
> > I'd say the biggest value for vhost-net usage in pasta is reaching
> > throughput figures that are comparable with veth, with or without
> > multithreading (keeping an eye on bytes per cycle, of course), with or
> > without kernel changes, so that users won't need to choose between
> > rootless and performance anymore.
> >
> > It would also simplify things in Podman quite a lot (and to some extent
> > in rootlesskit / Docker as well). We're pretty much there with virtual
> > machines, just not quite with containers (which is somewhat ironic, but
> > of course there's a good reason for that).
> >
> > If we're clearly wasting cycles in vhost-net (because of the bounce
> > buffer, plus something else perhaps?) *and* there's a somewhat possible
> > solution for that in sight *and* the interface would change anyway,
> > running throughput tests and polishing up the current version with a
> > half-baked solution at the moment sounds a bit wasteful to me.
>
> My point is that I'm testing a very synthetic scenario. If everybody
> agrees this is close enough to real-world ones, I'm fine with
> continuing to improve the edges we see. If not, maybe we're picking
> the wrong fruit, even if it's low-hanging?
>
> Getting a table like [1] would shed light on this, especially if it
> is just a matter of running "make performance" or similar. Maybe we
> need to include longer queues? Focus on a given scenario? What if UDP
> gets better but TCP doesn't?

Well, it's a matter of running ./run under test/ (or 'make' there).
Have you tried that with your patch? It's kind of representative in
the sense that it uses several message sizes and different values for
the sending window.
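That is, roughly, a sketch assuming a fresh checkout (repository URL as
published on passt.top) with the usual build dependencies in place; the
runs above already use a prebuilt tree under /home/passt:

  $ git clone https://passt.top/passt && cd passt
  $ make                  # builds passt, pasta and the AVX2 variants
  $ cd test && ./run      # or 'make' there; includes throughput tests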
> Now more points about this scenario:
>
> 1) I don't see 100% CPU usage in any element:
>
>    CPU%
>    84.2  passt.avx2
>    57.9  iperf3
>    57.2  iperf3
>    50.7  vhost-1805109

Still, I bet we're using an awful amount of cycles compared to veth.

> 2) The most used (Self%) function in vhost is rep_movs_alternative,
> called from skb_copy_datagram_iter, so yes, zero-copy should help a
> lot here.
>
> Now, is "iperf3 -w 8M" representative? I'm sure ZC helps in this
> scenario, but does it make things worse if we have small packets? Do
> we care?

We don't care _a lot_ about small packets because we can typically use
large packets, inbound and outbound, at least for TCP (bulk) transfers.
But users are doing all sorts of things with containers, including bulk
transfers and VPN traffic over UDP, so we do, a bit.

Again, the main value of using vhost-net, I think, is making "rootful"
networking essentially unnecessary, or necessary just for niche use
cases (say, non-TCP, non-UDP traffic, or macvlan-like cases). If there
are relatively common use cases where pasta performs pretty badly
compared to veth, we'll still need rootful networking.

So, yes, it is representative, but not necessarily universal.

> I'm totally okay with continuing to try with ZC, I just want to make
> sure we're not missing anything :).

In any case, it looks like vhost-net zero-copy is a bigger task than we
thought, so even if we don't reach a universal solution that makes
rootful networking essentially unnecessary, having a big improvement
ready is of course still a lot of value.

Your call...
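As an aside on the zero-copy work: vhost-net already ships an
experimental, TX-only zero-copy path behind a module parameter
(read-only at runtime, and disabled by default on recent kernels, if I
recall correctly), which might be handy for quick comparisons without
kernel changes:

  $ cat /sys/module/vhost_net/parameters/experimental_zcopytx  # 0 = off
  $ sudo modprobe -r vhost_net      # fails if the device is still in use
  $ sudo modprobe vhost_net experimental_zcopytx=1

-- 
Stefano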