From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laurent Vivier <lvivier@redhat.com>
To: passt-dev@passt.top
Cc: Laurent Vivier <lvivier@redhat.com>
Subject: [PATCH v3 5/6] tap: Convert packet pools to per-queue-pair arrays for multiqueue
Date: Wed, 3 Dec 2025 19:54:33 +0100
Message-ID: <20251203185435.582096-6-lvivier@redhat.com>
In-Reply-To: <20251203185435.582096-1-lvivier@redhat.com>
References: <20251203185435.582096-1-lvivier@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit

Convert the global pool_tap4 and pool_tap6 packet pools from single
pools to arrays of pools, one for each queue pair.

This change is necessary to support multiqueue operation in vhost-user
mode, where multiple queue pairs may be processing packets concurrently.

The pool storage structures (pool_tap4_storage and pool_tap6_storage)
are now arrays of VHOST_USER_MAX_VQS / 2 elements, with corresponding
pointer arrays (pool_tap4 and pool_tap6) for accessing them.

Update tap_flush_pools() and tap_handler() to take a qpair parameter
that selects which pool to operate on. Add bounds-checking assertions
to ensure that qpair is within the valid range.

In passt and pasta modes, all operations use queue pair 0 (hardcoded in
tap_passt_input() and tap_pasta_input()). In vhost-user mode, the queue
pair is derived from the virtqueue index (index / 2, as TX and RX
queues come in pairs).

All pools within the array share the same buffer pointer:
- In vhost-user mode: points to the vhost-user memory structure, which
  is safe as packet data remains in guest memory and the pools only
  track iovecs
- In passt/pasta mode: points to pkt_buf, which is safe as only queue
  pair 0 is used

Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
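As a minimal illustration of the per-queue-pair indexing described
above, here is a standalone sketch (not passt code): the demo_* names
and the MAX_VQS value of 4 are assumptions for the example only, not
the real pool types or the real VHOST_USER_MAX_VQS.

/* Standalone sketch: one packet pool per queue pair, selected by qpair. */
#include <assert.h>
#include <stdio.h>

#define MAX_VQS   4               /* assumed stand-in for VHOST_USER_MAX_VQS */
#define VQ_PAIRS  (MAX_VQS / 2)   /* one pool per TX/RX queue pair */

struct demo_pool {
	unsigned int count;       /* packets currently batched in this pool */
};

/* One IPv4 and one IPv6 batch pool per queue pair, mirroring the patch */
static struct demo_pool demo_tap4[VQ_PAIRS];
static struct demo_pool demo_tap6[VQ_PAIRS];

/* Rough analogue of tap_flush_pools(qpair): reset both pools of one pair */
static void demo_flush_pools(unsigned int qpair)
{
	assert(qpair < VQ_PAIRS);
	demo_tap4[qpair].count = 0;
	demo_tap6[qpair].count = 0;
}

int main(void)
{
	unsigned int vq_index;

	/* passt/pasta path: everything goes through queue pair 0 */
	demo_flush_pools(0);

	/* vhost-user path: virtqueues come in TX/RX pairs, so the queue
	 * pair owning virtqueue 'vq_index' is simply vq_index / 2
	 */
	for (vq_index = 0; vq_index < MAX_VQS; vq_index++) {
		unsigned int qpair = vq_index / 2;

		demo_flush_pools(qpair);
		printf("virtqueue %u -> queue pair %u\n", vq_index, qpair);
	}
	return 0;
}

The sketch only shows the indexing; as the commit message notes, the
real pools can all share one buffer pointer because they only track
iovecs into guest memory (vhost-user) or pkt_buf (passt/pasta).
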
 tap.c       | 77 ++++++++++++++++++++++++++++++-----------------------
 tap.h       |  5 ++--
 vu_common.c |  6 ++---
 3 files changed, 50 insertions(+), 38 deletions(-)

diff --git a/tap.c b/tap.c
index 0d1f05865d60..c56afb73fd7e 100644
--- a/tap.c
+++ b/tap.c
@@ -94,9 +94,13 @@ CHECK_FRAME_LEN(L2_MAX_LEN_VU);
 	DIV_ROUND_UP(sizeof(pkt_buf),					\
 		     ETH_HLEN + sizeof(struct ipv6hdr) + sizeof(struct udphdr))
 
-/* IPv4 (plus ARP) and IPv6 message batches from tap/guest to IP handlers */
-static PACKET_POOL_NOINIT(pool_tap4, TAP_MSGS_IP4);
-static PACKET_POOL_NOINIT(pool_tap6, TAP_MSGS_IP6);
+/* IPv4 (plus ARP) and IPv6 message batches from tap/guest to IP handlers
+ * One pool per queue pair for multiqueue support
+ */
+static PACKET_POOL_DECL(pool_tap4, TAP_MSGS_IP4) pool_tap4_storage[VHOST_USER_MAX_VQS / 2];
+static struct pool *pool_tap4[VHOST_USER_MAX_VQS / 2];
+static PACKET_POOL_DECL(pool_tap6, TAP_MSGS_IP6) pool_tap6_storage[VHOST_USER_MAX_VQS / 2];
+static struct pool *pool_tap6[VHOST_USER_MAX_VQS / 2];
 
 #define TAP_SEQS		128 /* Different L4 tuples in one batch */
 #define FRAGMENT_MSG_RATE	10  /* # seconds between fragment warnings */
@@ -714,12 +718,12 @@ static int tap4_handler(struct ctx *c, unsigned int qpair,
 	unsigned int i, j, seq_count;
 	struct tap4_l4_t *seq;
 
-	if (!c->ifi4 || !pool_tap4->count)
-		return pool_tap4->count;
+	if (!c->ifi4 || !pool_tap4[qpair]->count)
+		return pool_tap4[qpair]->count;
 
 	i = 0;
resume:
-	for (seq_count = 0, seq = NULL; i < pool_tap4->count; i++) {
+	for (seq_count = 0, seq = NULL; i < pool_tap4[qpair]->count; i++) {
 		size_t l3len, hlen, l4len;
 		struct ethhdr eh_storage;
 		struct iphdr iph_storage;
@@ -729,7 +733,7 @@ resume:
 		struct iov_tail data;
 		struct iphdr *iph;
 
-		if (!packet_get(pool_tap4, i, &data))
+		if (!packet_get(pool_tap4[qpair], i, &data))
 			continue;
 
 		eh = IOV_PEEK_HEADER(&data, eh_storage);
@@ -796,7 +800,7 @@ resume:
 		if (iph->protocol == IPPROTO_UDP) {
 			struct iov_tail eh_data;
 
-			packet_get(pool_tap4, i, &eh_data);
+			packet_get(pool_tap4[qpair], i, &eh_data);
 			if (dhcp(c, qpair, &eh_data))
 				continue;
 		}
@@ -827,7 +831,7 @@ resume:
 			goto append;
 
 		if (seq_count == TAP_SEQS)
-			break;	/* Resume after flushing if i < pool_tap4->count */
+			break;	/* Resume after flushing if i < pool_tap4[qpair]->count */
 
 		for (seq = tap4_l4 + seq_count - 1; seq >= tap4_l4; seq--) {
 			if (L4_MATCH(iph, uh, seq)) {
@@ -873,10 +877,10 @@ append:
 		}
 	}
 
-	if (i < pool_tap4->count)
+	if (i < pool_tap4[qpair]->count)
 		goto resume;
 
-	return pool_tap4->count;
+	return pool_tap4[qpair]->count;
 }
 
 /**
@@ -892,12 +896,12 @@ static int tap6_handler(struct ctx *c, unsigned int qpair, const struct timespec
 	unsigned int i, j, seq_count = 0;
 	struct tap6_l4_t *seq;
 
-	if (!c->ifi6 || !pool_tap6->count)
-		return pool_tap6->count;
+	if (!c->ifi6 || !pool_tap6[qpair]->count)
+		return pool_tap6[qpair]->count;
 
 	i = 0;
resume:
-	for (seq_count = 0, seq = NULL; i < pool_tap6->count; i++) {
+	for (seq_count = 0, seq = NULL; i < pool_tap6[qpair]->count; i++) {
 		size_t l4len, plen, check;
 		struct in6_addr *saddr, *daddr;
 		struct ipv6hdr ip6h_storage;
@@ -909,7 +913,7 @@ resume:
 		struct ipv6hdr *ip6h;
 		uint8_t proto;
 
-		if (!packet_get(pool_tap6, i, &data))
+		if (!packet_get(pool_tap6[qpair], i, &data))
 			return -1;
 
 		eh = IOV_REMOVE_HEADER(&data, eh_storage);
@@ -1017,7 +1021,7 @@ resume:
 			goto append;
 
 		if (seq_count == TAP_SEQS)
-			break;	/* Resume after flushing if i < pool_tap6->count */
+			break;	/* Resume after flushing if i < pool_tap6[qpair]->count */
 
 		for (seq = tap6_l4 + seq_count - 1; seq >= tap6_l4; seq--) {
 			if (L4_MATCH(ip6h, proto, uh, seq)) {
@@ -1064,19 +1068,19 @@ append:
 		}
 	}
 
-	if (i < pool_tap6->count)
+	if (i < pool_tap6[qpair]->count)
 		goto resume;
 
-	return pool_tap6->count;
+	return pool_tap6[qpair]->count;
 }
 
 /**
- * tap_flush_pools() - Flush both IPv4 and IPv6 packet pools
+ * tap_flush_pools() - Flush both IPv4 and IPv6 packet pools for a given qpair
  */
-void tap_flush_pools(void)
+void tap_flush_pools(unsigned int qpair)
 {
-	pool_flush(pool_tap4);
-	pool_flush(pool_tap6);
+	pool_flush(pool_tap4[qpair]);
+	pool_flush(pool_tap6[qpair]);
 }
 
 /**
@@ -1087,6 +1091,7 @@ void tap_flush_pools(void)
  */
 void tap_handler(struct ctx *c, unsigned int qpair, const struct timespec *now)
 {
+	ASSERT(qpair < VHOST_USER_MAX_VQS / 2);
 	tap4_handler(c, qpair, now);
 	tap6_handler(c, qpair, now);
 }
@@ -1119,21 +1124,23 @@ void tap_add_packet(struct ctx *c, unsigned int qpair, struct iov_tail *data,
 		proto_update_l2_buf(c->guest_mac);
 	}
+	ASSERT(qpair < VHOST_USER_MAX_VQS / 2);
+
 	switch (ntohs(eh->h_proto)) {
 	case ETH_P_ARP:
 	case ETH_P_IP:
-		if (!pool_can_fit(pool_tap4, data)) {
+		if (!pool_can_fit(pool_tap4[qpair], data)) {
 			tap4_handler(c, qpair, now);
-			pool_flush(pool_tap4);
+			pool_flush(pool_tap4[qpair]);
 		}
-		packet_add(pool_tap4, data);
+		packet_add(pool_tap4[qpair], data);
 		break;
 	case ETH_P_IPV6:
-		if (!pool_can_fit(pool_tap6, data)) {
+		if (!pool_can_fit(pool_tap6[qpair], data)) {
 			tap6_handler(c, qpair, now);
-			pool_flush(pool_tap6);
+			pool_flush(pool_tap6[qpair]);
 		}
-		packet_add(pool_tap6, data);
+		packet_add(pool_tap6[qpair], data);
 		break;
 	default:
 		break;
 	}
@@ -1173,7 +1180,7 @@ static void tap_passt_input(struct ctx *c, const struct timespec *now)
 	ssize_t n;
 	char *p;
 
-	tap_flush_pools();
+	tap_flush_pools(0);
 
 	if (partial_len) {
 		/* We have a partial frame from an earlier pass. Move it to the
@@ -1256,7 +1263,7 @@ static void tap_pasta_input(struct ctx *c, const struct timespec *now)
 {
 	ssize_t n, len;
 
-	tap_flush_pools();
+	tap_flush_pools(0);
 
 	for (n = 0;
 	     n <= (ssize_t)(sizeof(pkt_buf) - L2_MAX_LEN_PASTA);
@@ -1512,10 +1519,14 @@ static void tap_sock_tun_init(struct ctx *c)
  */
 static void tap_sock_update_pool(void *base, size_t size)
 {
-	int i;
+	unsigned int i;
 
-	pool_tap4_storage = PACKET_INIT(pool_tap4, TAP_MSGS_IP4, base, size);
-	pool_tap6_storage = PACKET_INIT(pool_tap6, TAP_MSGS_IP6, base, size);
+	for (i = 0; i < VHOST_USER_MAX_VQS / 2; i++) {
+		pool_tap4_storage[i] = PACKET_INIT(pool_tap4, TAP_MSGS_IP4, base, size);
+		pool_tap4[i] = (struct pool *)&pool_tap4_storage[i];
+		pool_tap6_storage[i] = PACKET_INIT(pool_tap6, TAP_MSGS_IP6, base, size);
+		pool_tap6[i] = (struct pool *)&pool_tap6_storage[i];
+	}
 
 	for (i = 0; i < TAP_SEQS; i++) {
 		tap4_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, base, size);
diff --git a/tap.h b/tap.h
index d3ac0cb6a233..6d4f8bd156fb 100644
--- a/tap.h
+++ b/tap.h
@@ -119,8 +119,9 @@ void tap_handler_passt(struct ctx *c, uint32_t events,
 int tap_sock_unix_open(char *sock_path);
 void tap_sock_reset(struct ctx *c);
 void tap_backend_init(struct ctx *c);
-void tap_flush_pools(void);
-void tap_handler(struct ctx *c, unsigned int qpair, const struct timespec *now);
+void tap_flush_pools(unsigned int qpair);
+void tap_handler(struct ctx *c, unsigned int qpair,
+		 const struct timespec *now);
 void tap_add_packet(struct ctx *c, unsigned int qpair, struct iov_tail *data,
 		    const struct timespec *now);
 #endif /* TAP_H */
diff --git a/vu_common.c b/vu_common.c
index 80d9a30f6f71..8f0fa1180c78 100644
--- a/vu_common.c
+++ b/vu_common.c
@@ -170,7 +170,7 @@ static void vu_handle_tx(struct vu_dev *vdev, int index,
 
 	ASSERT(VHOST_USER_IS_QUEUE_TX(index));
 
-	tap_flush_pools();
+	tap_flush_pools(index / 2);
 
 	count = 0;
 	out_sg_count = 0;
@@ -196,11 +196,11 @@ static void vu_handle_tx(struct vu_dev *vdev, int index,
 		data = IOV_TAIL(elem[count].out_sg, elem[count].out_num, 0);
 		if (IOV_DROP_HEADER(&data, struct virtio_net_hdr_mrg_rxbuf))
-			tap_add_packet(vdev->context, 0, &data, now);
+			tap_add_packet(vdev->context, index / 2, &data, now);
 
 		count++;
 	}
 
-	tap_handler(vdev->context, 0, now);
+	tap_handler(vdev->context, index / 2, now);
 
 	if (count) {
 		int i;
-- 
2.51.1