Date: Thu, 14 Mar 2024 17:26:17 +0100
From: Stefano Brivio
To: Laurent Vivier
Cc: passt-dev@passt.top
Subject: Re: [RFC] tcp: Replace TCP buffer structure by an iovec array
Message-ID: <20240314172617.22c28caa@elisabeth>
In-Reply-To: <893d5b17-cb92-49bf-8752-7ba1d798ceeb@redhat.com>
References: <20240311133356.1405001-1-lvivier@redhat.com>
 <20240313123725.7a37f311@elisabeth>
 <84cadd0b-4102-4bde-bad6-45705cca34ce@redhat.com>
 <20240314164707.75ee6501@elisabeth>
 <893d5b17-cb92-49bf-8752-7ba1d798ceeb@redhat.com>

On Thu, 14 Mar 2024 16:54:02 +0100
Laurent Vivier wrote:

> On 3/14/24 16:47, Stefano Brivio wrote:
> > On Thu, 14 Mar 2024 15:07:48 +0100
> > Laurent Vivier wrote:
> >
> >> On 3/13/24 12:37, Stefano Brivio wrote:
> >> ...
> >>>> @@ -390,6 +414,42 @@ static size_t tap_send_frames_passt(const struct ctx *c,
> >>>>  	return i;
> >>>>  }
> >>>>
> >>>> +/**
> >>>> + * tap_send_iov_passt() - Send out multiple prepared frames
> >>>
> >>> ...I would argue that this function prepares frames as well. Maybe:
> >>>
> >>>  * tap_send_iov_passt() - Prepare TCP_IOV_VNET parts and send multiple frames
> >>>
> >>>> + * @c:		Execution context
> >>>> + * @iov:	Array of frames, each frames is divided in an array of iovecs.
> >>>> + *		The first entry of the iovec is updated to point to an
> >>>> + *		uint32_t storing the frame length.
> >>>
> >>>  * @iov:	Array of frames, each one a vector of parts, TCP_IOV_VNET blank
> >>>
> >>>> + * @n:		Number of frames in @iov
> >>>> + *
> >>>> + * Return: number of frames actually sent
> >>>> + */
> >>>> +static size_t tap_send_iov_passt(const struct ctx *c,
> >>>> +				 struct iovec iov[][TCP_IOV_NUM],
> >>>> +				 size_t n)
> >>>> +{
> >>>> +	unsigned int i;
> >>>> +
> >>>> +	for (i = 0; i < n; i++) {
> >>>> +		uint32_t vnet_len;
> >>>> +		int j;
> >>>> +
> >>>> +		vnet_len = 0;
> >>>
> >>> This could be initialised in the declaration (yes, it's "reset" at
> >>> every loop iteration).
> >>>
> >>>> +		for (j = TCP_IOV_ETH; j < TCP_IOV_NUM; j++)
> >>>> +			vnet_len += iov[i][j].iov_len;
> >>>> +
> >>>> +		vnet_len = htonl(vnet_len);
> >>>> +		iov[i][TCP_IOV_VNET].iov_base = &vnet_len;
> >>>> +		iov[i][TCP_IOV_VNET].iov_len = sizeof(vnet_len);
> >>>> +
> >>>> +		if (!tap_send_frames_passt(c, iov[i], TCP_IOV_NUM))
> >>>
> >>> ...which would now send a single frame at a time, but actually it can
> >>> already send everything in one shot because it's using sendmsg(), if you
> >>> move it outside of the loop and do something like (untested):
> >>>
> >>>	return tap_send_frames_passt(c, iov, TCP_IOV_NUM * n);
> >>>
> >>>> +			break;
> >>>> +	}
> >>>> +
> >>>> +	return i;
> >>>> +
> >>>> +}
> >>>> +
> >>
> >> I tried to do something like that but I have a performance drop:
> >>
> >> static size_t tap_send_iov_passt(const struct ctx *c,
> >>				  struct iovec iov[][TCP_IOV_NUM],
> >>				  size_t n)
> >> {
> >>	unsigned int i;
> >>	uint32_t vnet_len[n];
> >>
> >>	for (i = 0; i < n; i++) {
> >>		int j;
> >>
> >>		vnet_len[i] = 0;
> >>		for (j = TCP_IOV_ETH; j < TCP_IOV_NUM; j++)
> >>			vnet_len[i] += iov[i][j].iov_len;
> >>
> >>		vnet_len[i] = htonl(vnet_len[i]);
> >>		iov[i][TCP_IOV_VNET].iov_base = &vnet_len[i];
> >>		iov[i][TCP_IOV_VNET].iov_len = sizeof(uint32_t);
> >>	}
> >>
> >>	return tap_send_frames_passt(c, &iov[0][0], TCP_IOV_NUM * n) / TCP_IOV_NUM;
> >> }
> >>
> >> iperf3 -c localhost -p 10001 -t 60 -4
> >>
> >> before:
> >> [ ID] Interval           Transfer     Bitrate         Retr
> >> [  5]   0.00-60.00 sec  33.0 GBytes  4.72 Gbits/sec    1             sender
> >> [  5]   0.00-60.06 sec  33.0 GBytes  4.72 Gbits/sec                  receiver
> >>
> >> after:
> >> [ ID] Interval           Transfer     Bitrate         Retr
> >> [  5]   0.00-60.00 sec  18.2 GBytes  2.60 Gbits/sec    0             sender
> >> [  5]   0.00-60.07 sec  18.2 GBytes  2.60 Gbits/sec                  receiver
> >
> > Weird, it looks like doing one sendmsg() per frame results in a higher
> > throughput than one sendmsg() per multiple frames, which sounds rather
> > absurd. Perhaps we should start looking into what perf(1) reports, in
> > terms of both syscall overhead and cache misses.
> >
> > I'll have a look later today or tomorrow -- unless you have other
> > ideas as to why this might happen...
>
> Perhaps in the first case we only update one vnet_len, while in the second
> case we have to update an array of vnet_len, so more cache lines are used?

Yes, I'm wondering if for example this:

	iov[i][TCP_IOV_VNET].iov_base = &vnet_len[i];

causes a prefetch of everything pointed to by iov[i][...], so we would
prefetch (and throw away) each buffer, one by one.

Another interesting experiment to verify if this is the case could be
to "flush" a few frames at a time (say, 4), with something like this
on top of your original change (completely untested):

	[...]

		if (!((i + 1) % 4) &&
		    !tap_send_frames_passt(c, iov[i / 4], TCP_IOV_NUM * 4))
			break;
	}

	if ((i + 1) % 4) {
		tap_send_frames_passt(c, iov[i / 4],
				      TCP_IOV_NUM * ((i + 1) % 4));
	}

Or maybe we could set vnet_len right after we receive data in the
buffers.

-- 
Stefano
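For reference, one way the whole function could look with the batched-flush
experiment folded in. This is a minimal, still completely untested sketch: it
keeps the vnet_len[] array from the second version quoted above (each frame in
a batch needs its own length word to point at), uses a batch of 4 frames as
suggested, and assumes tap_send_frames_passt() keeps the calling convention
used throughout this thread, i.e. a flat struct iovec pointer plus an entry
count, returning zero when nothing was sent:

static size_t tap_send_iov_passt(const struct ctx *c,
				 struct iovec iov[][TCP_IOV_NUM],
				 size_t n)
{
	uint32_t vnet_len[n];	/* one length word per frame, as above */
	unsigned int i;

	for (i = 0; i < n; i++) {
		int j;

		/* Fill in the TCP_IOV_VNET part: total frame length, big endian */
		vnet_len[i] = 0;
		for (j = TCP_IOV_ETH; j < TCP_IOV_NUM; j++)
			vnet_len[i] += iov[i][j].iov_len;

		vnet_len[i] = htonl(vnet_len[i]);
		iov[i][TCP_IOV_VNET].iov_base = &vnet_len[i];
		iov[i][TCP_IOV_VNET].iov_len = sizeof(uint32_t);

		/* Flush a batch of 4 frames once the 4th one is prepared */
		if (!((i + 1) % 4) &&
		    !tap_send_frames_passt(c, iov[i - 3], TCP_IOV_NUM * 4))
			return i - 3;	/* frames sent in earlier batches */
	}

	/* Flush whatever is left of the last, partial batch */
	if (i % 4)
		tap_send_frames_passt(c, iov[i - i % 4],
				      TCP_IOV_NUM * (i % 4));

	return i;
}

Compared with a single sendmsg() over all n frames, only four frames' worth of
iovec entries and length words are touched between syscalls, which should make
it easier to tell whether the slowdown above comes from cache behaviour rather
than from the syscall count.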