public inbox for passt-dev@passt.top
From: David Gibson <david@gibson.dropbear.id.au>
To: Laurent Vivier <lvivier@redhat.com>
Cc: passt-dev@passt.top
Subject: Re: [PATCH v2 02/13] vhost-user: Centralise 802.3 frame padding in vu_collect() and vu_flush()
Date: Thu, 12 Mar 2026 13:05:13 +1100
Message-ID: <abIfWT2I3rykm7wa@zatzit>
In-Reply-To: <20260309094744.1907754-3-lvivier@redhat.com>


On Mon, Mar 09, 2026 at 10:47:33AM +0100, Laurent Vivier wrote:
> The per-protocol padding done by vu_pad() in tcp_vu.c and udp_vu.c was
> only correct for single-buffer frames, and assumed the padding area always
> fell within the first iov.  It also relied on each caller computing the
> right MAX(..., ETH_ZLEN + VNET_HLEN) size for vu_collect() and calling
> vu_pad() at the right point.
> 
> Centralise padding logic into the two shared vhost-user helpers instead:
> 
>  - vu_collect() now ensures at least ETH_ZLEN + VNET_HLEN bytes of buffer
>    space are collected, so there is always room for a minimum-sized frame.
> 
>  - vu_flush() computes the actual frame length (accounting for
>    VIRTIO_NET_F_MRG_RXBUF multi-buffer frames) and passes the padded
>    length to vu_queue_fill().
> 
> A new iov_memset() helper in iov.c zero-fills the padding area in each
> buffer before iov_truncate() sets the logical frame size.  The callers in
> tcp_vu.c, udp_vu.c and vu_send_single() use iov_memset() directly,
> replacing the now-removed vu_pad() helper and the MAX(..., ETH_ZLEN +
> VNET_HLEN) size calculations passed to vu_collect().
> 
> Centralising padding here will also ease the move to multi-iovec per
> element support, since there will be a single place to update.
> 
> In vu_send_single(), fix padding, truncation and data copy to use the
> requested frame size rather than the total available buffer space from
> vu_collect(), which could be larger.  Also add matching padding, truncation
> and explicit size to vu_collect() for the DUP_ACK path in
> tcp_vu_send_flag().
> 
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>

AFAICT this is correct, but some notes for polish below.

> ---
>  iov.c       | 24 +++++++++++++++++++
>  iov.h       |  2 ++
>  tcp_vu.c    | 35 +++++++++++++++++----------
>  udp_vu.c    | 12 ++++++----
>  vu_common.c | 69 +++++++++++++++++++++++++++++++----------------------
>  vu_common.h |  1 -
>  6 files changed, 96 insertions(+), 47 deletions(-)
> 
> diff --git a/iov.c b/iov.c
> index 31a3f5bc29e5..cd48667226f3 100644
> --- a/iov.c
> +++ b/iov.c
> @@ -169,6 +169,30 @@ size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size)
>  	return i;
>  }
>  
> +/**
> + * iov_memset() - Set bytes of an IO vector to a given value
> + * @iov:	IO vector
> + * @iov_cnt:	Number of elements in @iov
> + * @offset:	Byte offset in the iovec at which to start
> + * @c:		Byte value to fill with
> + * @length:	Number of bytes to set
> + */

Nit: This will write fewer than @length bytes if it runs out of space
in the iov.  I think that's the correct choice, but it might be worth
noting that explicitly in the description.  Not worth a respin on its
own, obviously.

> +void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset, int c,
> +		size_t length)
> +{
> +	size_t i;
> +
> +	i = iov_skip_bytes(iov, iov_cnt, offset, &offset);
> +
> +	for ( ; i < iov_cnt; i++) {
> +		size_t n = MIN(iov[i].iov_len - offset, length);
> +
> +		memset((char *)iov[i].iov_base + offset, c, n);
> +		offset = 0;
> +		length -= n;
> +	}
> +}
> +
>  /**
>   * iov_tail_prune() - Remove any unneeded buffers from an IOV tail
>   * @tail:	IO vector tail (modified)
> diff --git a/iov.h b/iov.h
> index b4e50b0fca5a..d295d05b3bab 100644
> --- a/iov.h
> +++ b/iov.h
> @@ -30,6 +30,8 @@ size_t iov_to_buf(const struct iovec *iov, size_t iov_cnt,
>  		  size_t offset, void *buf, size_t bytes);
>  size_t iov_size(const struct iovec *iov, size_t iov_cnt);
>  size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size);
> +void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset, int c,
> +		size_t length);
>  
>  /*
>   * DOC: Theory of Operation, struct iov_tail
> diff --git a/tcp_vu.c b/tcp_vu.c
> index fd734e857b3b..3adead5f33fa 100644
> --- a/tcp_vu.c
> +++ b/tcp_vu.c
> @@ -72,12 +72,12 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	struct vu_dev *vdev = c->vdev;
>  	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
>  	struct vu_virtq_element flags_elem[2];
> -	size_t optlen, hdrlen, l2len;
>  	struct ipv6hdr *ip6h = NULL;
>  	struct iphdr *ip4h = NULL;
>  	struct iovec flags_iov[2];
>  	struct tcp_syn_opts *opts;
>  	struct iov_tail payload;
> +	size_t optlen, hdrlen;
>  	struct tcphdr *th;
>  	struct ethhdr *eh;
>  	uint32_t seq;
> @@ -90,7 +90,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	vu_set_element(&flags_elem[0], NULL, &flags_iov[0]);
>  
>  	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
> -			      MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
> +			      hdrlen + sizeof(*opts), NULL);
>  	if (elem_cnt != 1)
>  		return -1;
>  
> @@ -131,6 +131,11 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  		return ret;
>  	}
>  
> +	/* Pad short frames to ETH_ZLEN */
> +	if (ETH_ZLEN + VNET_HLEN > hdrlen + optlen) {
> +		iov_memset(&flags_iov[0], 1, hdrlen + optlen, 0,
> +			   ETH_ZLEN + VNET_HLEN - (hdrlen + optlen));
> +	}

Nit: this is a mildly bulky construction for a conceptually simple
operation that you need to repeat several times.  I wonder if it
might be worth having an iov_memset() variant that takes an end point
instead of a length (and safely no-ops if end < start).

>  	iov_truncate(&flags_iov[0], 1, hdrlen + optlen);
>  	payload = IOV_TAIL(flags_elem[0].in_sg, 1, hdrlen);
>  
> @@ -140,9 +145,6 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  	tcp_fill_headers(c, conn, eh, ip4h, ip6h, th, &payload,
>  			 NULL, seq, !*c->pcap);
>  
> -	l2len = optlen + hdrlen - VNET_HLEN;
> -	vu_pad(&flags_elem[0].in_sg[0], l2len);
> -
>  	if (*c->pcap)
>  		pcap_iov(&flags_elem[0].in_sg[0], 1, VNET_HLEN);
>  	nb_ack = 1;
> @@ -151,10 +153,17 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
>  		vu_set_element(&flags_elem[1], NULL, &flags_iov[1]);
>  
>  		elem_cnt = vu_collect(vdev, vq, &flags_elem[1], 1,
> -				      flags_elem[0].in_sg[0].iov_len, NULL);
> +				      hdrlen + optlen, NULL);
>  		if (elem_cnt == 1 &&
>  		    flags_elem[1].in_sg[0].iov_len >=
>  		    flags_elem[0].in_sg[0].iov_len) {
> +			/* Pad short frames to ETH_ZLEN */
> +			if (ETH_ZLEN + VNET_HLEN > hdrlen + optlen) {
> +				iov_memset(&flags_iov[1], 1, hdrlen + optlen, 0,
> +					   ETH_ZLEN + VNET_HLEN -
> +					   (hdrlen + optlen));
> +			}
> +			iov_truncate(&flags_iov[1], 1, hdrlen + optlen);
>  			memcpy(flags_elem[1].in_sg[0].iov_base,
>  			       flags_elem[0].in_sg[0].iov_base,
>  			       flags_elem[0].in_sg[0].iov_len);
> @@ -212,8 +221,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>  
>  		cnt = vu_collect(vdev, vq, &elem[elem_cnt],
>  				 VIRTQUEUE_MAX_SIZE - elem_cnt,
> -				 MAX(MIN(mss, fillsize) + hdrlen, ETH_ZLEN + VNET_HLEN),
> -				 &frame_size);
> +				 MIN(mss, fillsize) + hdrlen, &frame_size);
>  		if (cnt == 0)
>  			break;
>  
> @@ -222,6 +230,7 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>  		/* reserve space for headers in iov */
>  		iov = &elem[elem_cnt].in_sg[0];
>  		ASSERT(iov->iov_len >= hdrlen);
> +
>  		iov->iov_base = (char *)iov->iov_base + hdrlen;
>  		iov->iov_len -= hdrlen;
>  		head[(*head_cnt)++] = elem_cnt;
> @@ -246,6 +255,11 @@ static ssize_t tcp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq,
>  	if (!peek_offset_cap)
>  		ret -= already_sent;
>  
> +	/* Pad short frames to ETH_ZLEN */
> +	if (ETH_ZLEN + VNET_HLEN > (size_t)ret + hdrlen) {
> +		iov_memset(&iov_vu[DISCARD_IOV_NUM], elem_cnt, ret, 0,
> +			   ETH_ZLEN + VNET_HLEN - (ret + hdrlen));
> +	}
>  	/* adjust iov number and length of the last iov */
>  	i = iov_truncate(&iov_vu[DISCARD_IOV_NUM], elem_cnt, ret);
>  
> @@ -443,7 +457,6 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
>  		size_t frame_size = iov_size(iov, buf_cnt);
>  		bool push = i == head_cnt - 1;
>  		ssize_t dlen;
> -		size_t l2len;
>  
>  		ASSERT(frame_size >= hdrlen);
>  
> @@ -457,10 +470,6 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
>  
>  		tcp_vu_prepare(c, conn, iov, buf_cnt, &check, !*c->pcap, push);
>  
> -		/* Pad first/single buffer only, it's at least ETH_ZLEN long */
> -		l2len = dlen + hdrlen - VNET_HLEN;
> -		vu_pad(iov, l2len);
> -
>  		if (*c->pcap)
>  			pcap_iov(iov, buf_cnt, VNET_HLEN);
>  
> diff --git a/udp_vu.c b/udp_vu.c
> index 5effca777e0a..ef9d26118eaf 100644
> --- a/udp_vu.c
> +++ b/udp_vu.c
> @@ -73,7 +73,7 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
>  	const struct vu_dev *vdev = c->vdev;
>  	struct msghdr msg  = { 0 };
>  	int iov_cnt, iov_used;
> -	size_t hdrlen, l2len;
> +	size_t hdrlen;
>  
>  	ASSERT(!c->no_udp);
>  
> @@ -98,6 +98,7 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
>  
>  	/* reserve space for the headers */
>  	ASSERT(iov_vu[0].iov_len >= MAX(hdrlen, ETH_ZLEN + VNET_HLEN));
> +
>  	iov_vu[0].iov_base = (char *)iov_vu[0].iov_base + hdrlen;
>  	iov_vu[0].iov_len -= hdrlen;
>  
> @@ -115,12 +116,13 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
>  	iov_vu[0].iov_base = (char *)iov_vu[0].iov_base - hdrlen;
>  	iov_vu[0].iov_len += hdrlen;
>  
> +	/* Pad short frames to ETH_ZLEN */
> +	if (ETH_ZLEN + VNET_HLEN > *dlen + hdrlen) {
> +		iov_memset(iov_vu, iov_cnt, *dlen + hdrlen, 0,
> +			   ETH_ZLEN + VNET_HLEN - (*dlen + hdrlen));
> +	}
>  	iov_used = iov_truncate(iov_vu, iov_cnt, *dlen + hdrlen);
>  
> -	/* pad frame to 60 bytes: first buffer is at least ETH_ZLEN long */
> -	l2len = *dlen + hdrlen - VNET_HLEN;
> -	vu_pad(&iov_vu[0], l2len);
> -
>  	vu_set_vnethdr(iov_vu[0].iov_base, iov_used);
>  
>  	/* release unused buffers */
> diff --git a/vu_common.c b/vu_common.c
> index 5f2ce18e5b71..8ea05dd30890 100644
> --- a/vu_common.c
> +++ b/vu_common.c
> @@ -87,8 +87,8 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>  	size_t current_size = 0;
>  	int elem_cnt = 0;
>  
> +	size = MAX(size, ETH_ZLEN + VNET_HLEN); /* 802.3 minimum size */

Nit: I usually prefer "Ethernet" to "802.3", since in practice most
frames we actually use are in Ethernet-II format (ethertype field),
rather than 802.3 format (length field).

>  	while (current_size < size && elem_cnt < max_elem) {
> -		struct iovec *iov;
>  		int ret;
>  
>  		ret = vu_queue_pop(vdev, vq, &elem[elem_cnt]);
> @@ -101,12 +101,12 @@ int vu_collect(const struct vu_dev *vdev, struct vu_virtq *vq,
>  			break;
>  		}
>  
> -		iov = &elem[elem_cnt].in_sg[0];
> +		elem[elem_cnt].in_num = iov_truncate(elem[elem_cnt].in_sg,
> +						     elem[elem_cnt].in_num,
> +						     size - current_size);
> -		if (iov->iov_len > size - current_size)
> -			iov->iov_len = size - current_size;
> -
> -		current_size += iov->iov_len;
> +		current_size += iov_size(elem[elem_cnt].in_sg,
> +					 elem[elem_cnt].in_num);

Double scanning the iovs of the element (once for iov_truncate(), once
for iov_size()) is a pity.  I guess it's cache hot, so it's probably
not a big deal.  Could be avoided by adding a "truncated length"
return parameter to iov_truncate(), but not sure it's worth the uglier
interface.

>  		elem_cnt++;
>  
>  		if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
> @@ -143,10 +143,30 @@ void vu_set_vnethdr(struct virtio_net_hdr_mrg_rxbuf *vnethdr, int num_buffers)
>  void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
>  	      struct vu_virtq_element *elem, int elem_cnt)
>  {
> -	int i;
> -
> -	for (i = 0; i < elem_cnt; i++)
> -		vu_queue_fill(vdev, vq, &elem[i], elem[i].in_sg[0].iov_len, i);
> +	int i, j, num_buffers;
> +
> +	for (i = 0; i < elem_cnt; i += num_buffers) {

The name "num_buffers" is slightly confusing.  AFAICT this is the
number of elements in the... group (is there a proper term?).  Each
element in that group could have multiple buffers in its in_sg list.

> +		const struct virtio_net_hdr_mrg_rxbuf *vnethdr;
> +		size_t len, padding, elem_size;
> +
> +		vnethdr = elem[i].in_sg[0].iov_base;

This assumes that the vnethdr itself fits in the first element.  I'm
guessing that really is a constraint of the vhost protocol, though?

> +		num_buffers = le16toh(vnethdr->num_buffers);
> +
> +		len = 0;
> +		for (j = 0; j < num_buffers - 1; j++) {
> +			elem_size = iov_size(elem[i + j].in_sg,
> +					     elem[i + j].in_num);
> +			vu_queue_fill(vdev, vq, &elem[i + j],
> +				      elem_size, i + j);
> +			len += elem_size;
> +		}
> +		/* pad the last element to have an 802.3 minimum frame size */
> +		elem_size = iov_size(elem[i + j].in_sg, elem[i + j].in_num);

elem_size should already have this value from the inner loop, no?

> +		padding = MAX(0, (ssize_t)(ETH_ZLEN + VNET_HLEN) -
> +			         (ssize_t)(len + elem_size));

I tend to prefer an x > y test followed by unsigned subtraction,
rather than signed subtraction followed by checking for negative,
because it avoids thinking about whether each of the signed/unsigned
casts is strictly safe.

> +		vu_queue_fill(vdev, vq, &elem[i + j], elem_size + padding,
> +			      i + j);
> +	}

I'm not entirely clear on what makes using the padded size here safe.

>  
>  	vu_queue_flush(vdev, vq, elem_cnt);
>  	vu_queue_notify(vdev, vq);
> @@ -268,38 +288,31 @@ int vu_send_single(const struct ctx *c, const void *buf, size_t size)
>  		goto err;
>  	}
>  
> +	/* Pad short frames to ETH_ZLEN */
> +	if (size < ETH_ZLEN + VNET_HLEN) {
> +		iov_memset(in_sg, elem_cnt, size, 0,
> +			   ETH_ZLEN + VNET_HLEN - size);
> +	}
> +	elem_cnt = iov_truncate(in_sg, elem_cnt, size);

Truncating to the unpadded size here seems odd.

>  	vu_set_vnethdr(in_sg[0].iov_base, elem_cnt);
>  
> -	total -= VNET_HLEN;
> +	size -= VNET_HLEN;
>  
>  	/* copy data from the buffer to the iovec */
> -	iov_from_buf(in_sg, elem_cnt, VNET_HLEN, buf, total);
> +	iov_from_buf(in_sg, elem_cnt, VNET_HLEN, buf, size);
>  
>  	if (*c->pcap)
>  		pcap_iov(in_sg, elem_cnt, VNET_HLEN);
>  
>  	vu_flush(vdev, vq, elem, elem_cnt);
>  
> -	trace("vhost-user sent %zu", total);
> +	trace("vhost-user sent %zu", size);
>  
> -	return total;
> +	return size;
>  err:
>  	for (i = 0; i < elem_cnt; i++)
>  		vu_queue_detach_element(vq);
>  
>  	return -1;
>  }
> -
> -/**
> - * vu_pad() - Pad 802.3 frame to minimum length (60 bytes) if needed
> - * @iov:	Buffer in iovec array where end of 802.3 frame is stored
> - * @l2len:	Layer-2 length already filled in frame
> - */
> -void vu_pad(struct iovec *iov, size_t l2len)
> -{
> -	if (l2len >= ETH_ZLEN)
> -		return;
> -
> -	memset((char *)iov->iov_base + iov->iov_len, 0, ETH_ZLEN - l2len);
> -	iov->iov_len += ETH_ZLEN - l2len;
> -}
> diff --git a/vu_common.h b/vu_common.h
> index 865d9771fa89..5de0c987b936 100644
> --- a/vu_common.h
> +++ b/vu_common.h
> @@ -61,6 +61,5 @@ void vu_flush(const struct vu_dev *vdev, struct vu_virtq *vq,
>  void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
>  		const struct timespec *now);
>  int vu_send_single(const struct ctx *c, const void *buf, size_t size);
> -void vu_pad(struct iovec *iov, size_t l2len);
>  
>  #endif /* VU_COMMON_H */
> -- 
> 2.53.0
> 

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson


