Date: Wed, 27 Nov 2024 11:03:55 +0100
From: Stefano Brivio
To: Laurent Vivier
CC: passt-dev@passt.top
Subject: Re: [PATCH v14 7/9] vhost-user: add vhost-user
Message-ID: <20241127110355.402b1dbe@elisabeth>
In-Reply-To: <83566556-2d9b-42ae-8876-588fe6b02b17@redhat.com>
References: <20241122164337.3377854-1-lvivier@redhat.com>
 <20241122164337.3377854-8-lvivier@redhat.com>
 <20241127054749.7f1cfb25@elisabeth>
 <20241127104514.5a09c0d0@elisabeth>
 <83566556-2d9b-42ae-8876-588fe6b02b17@redhat.com>
Organization: Red Hat

On Wed, 27 Nov 2024 10:48:41 +0100
Laurent Vivier wrote:

> On 27/11/2024 10:45, Stefano Brivio wrote:
> > On Wed, 27 Nov 2024 10:09:53 +0100
> > Laurent Vivier wrote:
> >
> >> On 27/11/2024 05:47, Stefano Brivio wrote:
> >>> On Fri, 22 Nov 2024 17:43:34 +0100
> >>> Laurent Vivier wrote:
> >>>
> >>>> +/**
> >>>> + * tcp_vu_send_flag() - Send segment with flags to vhost-user (no payload)
> >>>> + * @c:		Execution context
> >>>> + * @conn:	Connection pointer
> >>>> + * @flags:	TCP flags: if not set, send segment only if ACK is due
> >>>> + *
> >>>> + * Return: negative error code on connection reset, 0 otherwise
> >>>> + */
> >>>> +int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
> >>>> +{
> >>>> +	struct vu_dev *vdev = c->vdev;
> >>>> +	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
> >>>> +	const struct flowside *tapside = TAPFLOW(conn);
> >>>> +	size_t l2len, l4len, optlen, hdrlen;
> >>>> +	struct vu_virtq_element flags_elem[2];
> >>>> +	struct tcp_payload_t *payload;
> >>>> +	struct ipv6hdr *ip6h = NULL;
> >>>> +	struct iovec flags_iov[2];
> >>>> +	struct iphdr *iph = NULL;
> >>>> +	struct ethhdr *eh;
> >>>> +	uint32_t seq;
> >>>> +	int elem_cnt;
> >>>> +	int nb_ack;
> >>>> +	int ret;
> >>>> +
> >>>> +	hdrlen = tcp_vu_hdrlen(CONN_V6(conn));
> >>>> +
> >>>> +	vu_set_element(&flags_elem[0], NULL, &flags_iov[0]);
> >>>> +
> >>>> +	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
> >>>> +			      hdrlen + sizeof(struct tcp_syn_opts), NULL);
> >>>
> >>> Oops, I made this crash, by starting a number of iperf3 client threads
> >>> on the host:
> >>>
> >>>   $ iperf3 -c localhost -p 6001 -Z -l 500 -w 256M -t 600 -P20
> >>>
> >>> with matching server in the guest, then terminating QEMU while the test
> >>> is running.
> >>>
> >>> Details (I saw it first, then I reproduced it under gdb):
> >>>
> >>>   accepted connection from PID 3115463
> >>>   NDP: received RS, sending RA
> >>>   DHCP: offer to discover
> >>>       from 52:54:00:12:34:56
> >>>   DHCP: ack to request
> >>>       from 52:54:00:12:34:56
> >>>   NDP: sending unsolicited RA, next in 212s
> >>>   Client connection closed
> >>>
> >>>   Program received signal SIGSEGV, Segmentation fault.
> >>>   0x00005555555884f5 in vring_avail_idx (vq=0x555559343f10 ) at virtio.c:138
> >>>   138		vq->shadow_avail_idx = le16toh(vq->vring.avail->idx);
> >>>   (gdb) list
> >>>   133	 *
> >>>   134	 * Return: the available ring index of the given virtqueue
> >>>   135	 */
> >>>   136	static inline uint16_t vring_avail_idx(struct vu_virtq *vq)
> >>>   137	{
> >>>   138		vq->shadow_avail_idx = le16toh(vq->vring.avail->idx);
> >>>   139	
> >>>   140		return vq->shadow_avail_idx;
> >>>   141	}
> >>>   142	
> >>>   (gdb) bt
> >>>   #0  0x00005555555884f5 in vring_avail_idx (vq=0x555559343f10 ) at virtio.c:138
> >>>   #1  vu_queue_empty (vq=vq@entry=0x555559343f10 ) at virtio.c:290
> >>>   #2  vu_queue_pop (dev=dev@entry=0x555559343a00 , vq=vq@entry=0x555559343f10 , elem=elem@entry=0x7ffffff6f510) at virtio.c:505
> >>>   #3  0x0000555555588c8c in vu_collect (vdev=vdev@entry=0x555559343a00 , vq=vq@entry=0x555559343f10 , elem=elem@entry=0x7ffffff6f510, max_elem=max_elem@entry=1,
> >>>       size=size@entry=74, frame_size=frame_size@entry=0x0) at vu_common.c:86
> >>>   #4  0x000055555557e00e in tcp_vu_send_flag (c=0x7ffffff6f7a0, conn=0x5555555bd2d0 , flags=4) at tcp_vu.c:116
> >>>   #5  0x0000555555578125 in tcp_send_flag (flags=4, conn=0x5555555bd2d0 , c=0x7ffffff6f7a0) at tcp.c:1278
> >>>   #6  tcp_rst_do (conn=, c=) at tcp.c:1293
> >>>   #7  tcp_timer_handler (c=c@entry=0x7ffffff6f7a0, ref=..., ref@entry=...) at tcp.c:2266
> >>>   #8  0x0000555555558f26 in main (argc=, argv=) at passt.c:342
> >>>   (gdb) p *vq
> >>>   $1 = {vring = {num = 256, desc = 0x0, avail = 0x0, used = 0x0, log_guest_addr = 4338774592, flags = 0}, last_avail_idx = 35133, shadow_avail_idx = 35133, used_idx = 35133, signalled_used = 0,
> >>>     signalled_used_valid = false, notification = true, inuse = 0, call_fd = -1, kick_fd = -1, err_fd = -1, enable = 1, started = false, vra = {index = 0, flags = 0, desc_user_addr = 139660501995520,
> >>>     used_user_addr = 139660502000192, avail_user_addr = 139660501999616, log_guest_addr = 4338774592}}
> >>>   (gdb) p *vq->vring.avail
> >>>   Cannot access memory at address 0x0
> >>>
> >>> ...so we're sending a RST segment to the guest, but the ring doesn't
> >>> exist anymore.
> >>>
> >>> By the way, I still have the gdb session running, if you need something
> >>> else out of it.
> >>>
> >>> Now, I guess we should eventually introduce a more comprehensive
> >>> handling of the case where the guest suddenly terminates (not specific
> >>> to vhost-user), but given that we have vu_cleanup() working as expected
> >>> in this case, I wonder if we shouldn't simply avoid calling
> >>> vring_avail_idx() (it has a single caller) by checking for !vring.avail
> >>> in the caller, or something like that.
> >>>
> >>
> >> Yes, I think it's the lines I removed during the reviews:
> >>
> >> 	if (!vq->vring.avail)
> >> 		return true;
> >
> > Ah, right:
> >
> > https://archives.passt.top/passt-dev/20241114163859.7eeafa38@elisabeth/
> >
> > ...so, at least in our case, it's more than "sanity checks" after all.
> > :) Well, I guess it depends on the definition.
> >
> >> Could you try to checkout virtio.c from v11?
> >
> > That would take a rather lengthy rebase, but I tried to reintroduce all
> > the checks you had:
> >
> > --
> > diff --git a/virtio.c b/virtio.c
> > index 6a97435..0598ff4 100644
> > --- a/virtio.c
> > +++ b/virtio.c
> > @@ -284,6 +284,9 @@ static int virtqueue_read_next_desc(const struct vring_desc *desc,
> >   */
> >  bool vu_queue_empty(struct vu_virtq *vq)
> >  {
> > +	if (!vq->vring.avail)
> > +		return true;
> > +
> >  	if (vq->shadow_avail_idx != vq->last_avail_idx)
> >  		return false;
> >  
> > @@ -327,6 +330,9 @@ static bool vring_can_notify(const struct vu_dev *dev, struct vu_virtq *vq)
> >   */
> >  void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq)
> >  {
> > +	if (!vq->vring.avail)
> > +		return;
> > +
> >  	if (!vring_can_notify(dev, vq)) {
> >  		debug("vhost-user: virtqueue can skip notify...");
> >  		return;
> > @@ -502,6 +508,9 @@ int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq, struct vu_virtq_elemen
> >  	unsigned int head;
> >  	int ret;
> >  
> > +	if (!vq->vring.avail)
> > +		return -1;
> > +
> >  	if (vu_queue_empty(vq))
> >  		return -1;
> >  
> > @@ -591,6 +600,9 @@ void vu_queue_fill_by_index(struct vu_virtq *vq, unsigned int index,
> >  {
> >  	struct vring_used_elem uelem;
> >  
> > +	if (!vq->vring.avail)
> > +		return;
> > +
> >  	idx = (idx + vq->used_idx) % vq->vring.num;
> >  
> >  	uelem.id = htole32(index);
> > @@ -633,6 +645,9 @@ void vu_queue_flush(struct vu_virtq *vq, unsigned int count)
> >  {
> >  	uint16_t old, new;
> >  
> > +	if (!vq->vring.avail)
> > +		return;
> > +
> >  	/* Make sure buffer is written before we update index. */
> >  	smp_wmb();
> >  
> > --
> >
> > and it's all fine with those, I tried doing a few nasty things and
> > didn't observe any issue.
> >
> > Any check I missed? Do you want to submit it as follow-up patch? I can
> > also do that. I'd rather (still) avoid a re-post of v14 if possible.
>
> As you prefer. Let me know.

It would save me some time if you could... it should be based on v14 as
it is. I didn't have time to take care of gcc warnings on 32-bit and of
the build failure on musl, yet.

-- 
Stefano
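
[For readers outside the passt tree, here is a minimal, self-contained
sketch of the guard pattern discussed above. The *_min structure and
function names are invented for this example and only mimic the shape of
the passt virtio.c code; the point is simply that a queue whose guest
memory has been unmapped (vring.avail == NULL, as in the gdb dump above)
should read as empty instead of being dereferenced.]

--
#include <endian.h>
#include <stdbool.h>
#include <stdint.h>

/* Trimmed-down stand-ins for the real structures (illustrative only) */
struct vring_avail_min {
	uint16_t flags;
	uint16_t idx;
};

struct vu_virtq_min {
	struct {
		/* NULL once the guest memory is unmapped */
		struct vring_avail_min *avail;
	} vring;
	uint16_t shadow_avail_idx;
	uint16_t last_avail_idx;
};

/* Refresh the cached available index; only safe while the ring is mapped */
static uint16_t vring_avail_idx_min(struct vu_virtq_min *vq)
{
	vq->shadow_avail_idx = le16toh(vq->vring.avail->idx);

	return vq->shadow_avail_idx;
}

/* Report the queue as empty if the ring was torn down, instead of
 * dereferencing a NULL avail pointer as in the backtrace above
 */
static bool vu_queue_empty_min(struct vu_virtq_min *vq)
{
	if (!vq->vring.avail)
		return true;

	if (vq->shadow_avail_idx != vq->last_avail_idx)
		return false;

	return vring_avail_idx_min(vq) == vq->last_avail_idx;
}

int main(void)
{
	struct vu_virtq_min vq = { .vring.avail = NULL };

	/* With avail unmapped, this returns true instead of crashing */
	return vu_queue_empty_min(&vq) ? 0 : 1;
}
--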