From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH v5 2/8] vu_common: Move vnethdr setup into vu_flush()
Date: Fri, 27 Mar 2026 18:58:28 +0100
Message-ID: <20260327175834.831995-3-lvivier@redhat.com>
In-Reply-To: <20260327175834.831995-1-lvivier@redhat.com>
References: <20260327175834.831995-1-lvivier@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit
List-Id: Development discussion and patches for passt

Every caller of vu_flush() was calling vu_set_vnethdr() beforehand with
the same pattern. Move it into vu_flush().

Remove vu_queue_notify() from vu_flush() and let callers invoke it
explicitly. This allows paths that perform multiple flushes, such as
tcp_vu_send_flag() and tcp_vu_data_from_sock(), to issue a single guest
notification at the end.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
 tcp_vu.c    | 19 ++++++++-----------
 udp_vu.c    |  3 +--
 vu_common.c |  9 +++++----
 vu_common.h |  1 -
 4 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/tcp_vu.c b/tcp_vu.c
index dc0e17c0f03f..0cd01190d612 100644
--- a/tcp_vu.c
+++ b/tcp_vu.c
@@ -82,7 +82,6 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	struct ethhdr *eh;
 	uint32_t seq;
 	int elem_cnt;
-	int nb_ack;
 	int ret;
 
 	hdrlen = tcp_vu_hdrlen(CONN_V6(conn));
@@ -97,8 +96,6 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	assert(flags_elem[0].in_sg[0].iov_len >=
 	       MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN));
 
-	vu_set_vnethdr(flags_elem[0].in_sg[0].iov_base, 1);
-
 	eh = vu_eth(flags_elem[0].in_sg[0].iov_base);
 
 	memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest));
@@ -143,9 +140,10 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	l2len = optlen + hdrlen - VNET_HLEN;
 	vu_pad(&flags_elem[0].in_sg[0], l2len);
 
+	vu_flush(vdev, vq, flags_elem, 1);
+
 	if (*c->pcap)
 		pcap_iov(&flags_elem[0].in_sg[0], 1, VNET_HLEN);
-	nb_ack = 1;
 
 	if (flags & DUP_ACK) {
 		elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
@@ -157,14 +155,14 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 			memcpy(flags_elem[1].in_sg[0].iov_base,
 			       flags_elem[0].in_sg[0].iov_base,
 			       flags_elem[0].in_sg[0].iov_len);
-			nb_ack++;
+
+			vu_flush(vdev, vq, &flags_elem[1], 1);
 
 			if (*c->pcap)
 				pcap_iov(&flags_elem[1].in_sg[0], 1, VNET_HLEN);
 		}
 	}
-
-	vu_flush(vdev, vq, flags_elem, nb_ack);
+	vu_queue_notify(vdev, vq);
 
 	return 0;
 }
@@ -451,7 +449,6 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
 		assert(frame_size >= hdrlen);
 
 		dlen = frame_size - hdrlen;
-		vu_set_vnethdr(iov->iov_base, buf_cnt);
 
 		/* The IPv4 header checksum varies only with dlen */
 		if (previous_dlen != dlen)
@@ -464,14 +461,14 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
 		l2len = dlen + hdrlen - VNET_HLEN;
 		vu_pad(iov, l2len);
 
+		vu_flush(vdev, vq, &elem[head[i]], buf_cnt);
+
 		if (*c->pcap)
 			pcap_iov(iov, buf_cnt, VNET_HLEN);
 
 		conn->seq_to_tap += dlen;
 	}
-
-	/* send packets */
-	vu_flush(vdev, vq, elem, iov_cnt);
+	vu_queue_notify(vdev, vq);
 
 	conn_flag(c, conn, ACK_FROM_TAP_DUE);
 
diff --git a/udp_vu.c b/udp_vu.c
index cc69654398f0..f8629af58ab5 100644
--- a/udp_vu.c
+++ b/udp_vu.c
@@ -124,8 +124,6 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
 	l2len = *dlen + hdrlen - VNET_HLEN;
 	vu_pad(&iov_vu[0], l2len);
 
-	vu_set_vnethdr(iov_vu[0].iov_base, elem_used);
-
 	/* release unused buffers */
 	vu_queue_rewind(vq, elem_cnt - elem_used);
 
@@ -230,6 +228,7 @@ void udp_vu_sock_to_tap(const struct ctx *c, int s, int n, flow_sidx_t tosidx)
 				pcap_iov(iov_vu, iov_used, VNET_HLEN);
 			}
 			vu_flush(vdev, vq, elem, iov_used);
+			vu_queue_notify(vdev, vq);
 		}
 	}
 }
diff --git a/vu_common.c b/vu_common.c
index 92381cd33c85..7627fad5976b 100644
--- a/vu_common.c
+++ b/vu_common.c
@@ -118,7 +118,8 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
  * @vnethdr:	Address of the header to set
  * @num_buffers:	Number of guest buffers of the frame
  */
-void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr, int num_buffers)
+static void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr,
+			   int num_buffers)
 {
 	vnethdr->hdr = VU_HEADER;
 	/* Note: if VIRTIO_NET_F_MRG_RXBUF is not negotiated,
@@ -139,6 +140,8 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
 {
 	int i;
 
+	vu_set_vnethdr(elem[0].in_sg[0].iov_base, elem_cnt);
+
 	for (i = 0; i < elem_cnt; i++) {
 		size_t elem_size = iov_size(elem[i].in_sg, elem[i].in_num);
 
@@ -146,7 +149,6 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
 	}
 
 	vu_queue_flush(vdev, vq, elem_cnt);
-	vu_queue_notify(vdev, vq);
 }
 
 /**
@@ -260,8 +262,6 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
 		goto err;
 	}
 
-	vu_set_vnethdr(in_sg[0].iov_base, elem_cnt);
-
 	total -= VNET_HLEN;
 
 	/* copy data from the buffer to the iovec */
@@ -271,6 +271,7 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
 		pcap_iov(in_sg, in_total, VNET_HLEN);
 
 	vu_flush(vdev, vq, elem, elem_cnt);
+	vu_queue_notify(vdev, vq);
 
 	trace("vhost-user sent %zu", total);
 
diff --git a/vu_common.h b/vu_common.h
index 7b060eb6184f..4037ab765b7d 100644
--- a/vu_common.h
+++ b/vu_common.h
@@ -39,7 +39,6 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
 	       struct vu_virtq_element *elem, int max_elem,
 	       struct iovec *in_sg, size_t max_in_sg, size_t *in_total,
 	       size_t size, size_t *collected);
-void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr, int num_buffers);
 void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
 	      struct vu_virtq_element *elem, int elem_cnt);
 void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
-- 
2.53.0