Date: Wed, 27 Nov 2024 10:45:13 +0100
From: Stefano Brivio <sbrivio@redhat.com>
To: Laurent Vivier
Cc: passt-dev@passt.top
Subject: Re: [PATCH v14 7/9] vhost-user: add vhost-user
Message-ID: <20241127104514.5a09c0d0@elisabeth>
References: <20241122164337.3377854-1-lvivier@redhat.com>
 <20241122164337.3377854-8-lvivier@redhat.com>
 <20241127054749.7f1cfb25@elisabeth>
Organization: Red Hat

On Wed, 27 Nov 2024 10:09:53 +0100
Laurent Vivier wrote:

> On 27/11/2024 05:47, Stefano Brivio wrote:
> > On Fri, 22 Nov 2024 17:43:34 +0100
> > Laurent Vivier wrote:
> > 
> >> +/**
> >> + * tcp_vu_send_flag() - Send segment with flags to vhost-user (no payload)
> >> + * @c:		Execution context
> >> + * @conn:	Connection pointer
> >> + * @flags:	TCP flags: if not set, send segment only if ACK is due
> >> + *
> >> + * Return: negative error code on connection reset, 0 otherwise
> >> + */
> >> +int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
> >> +{
> >> +	struct vu_dev *vdev = c->vdev;
> >> +	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
> >> +	const struct flowside *tapside = TAPFLOW(conn);
> >> +	size_t l2len, l4len, optlen, hdrlen;
> >> +	struct vu_virtq_element flags_elem[2];
> >> +	struct tcp_payload_t *payload;
> >> +	struct ipv6hdr *ip6h = NULL;
> >> +	struct iovec flags_iov[2];
> >> +	struct iphdr *iph = NULL;
> >> +	struct ethhdr *eh;
> >> +	uint32_t seq;
> >> +	int elem_cnt;
> >> +	int nb_ack;
> >> +	int ret;
> >> +
> >> +	hdrlen = tcp_vu_hdrlen(CONN_V6(conn));
> >> +
> >> +	vu_set_element(&flags_elem[0], NULL, &flags_iov[0]);
> >> +
> >> +	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1,
> >> +			      hdrlen + sizeof(struct tcp_syn_opts), NULL);
> > 
> > Oops, I made this crash by starting a number of iperf3 client threads
> > on the host:
> > 
> >   $ iperf3 -c localhost -p 6001 -Z -l 500 -w 256M -t 600 -P20
> > 
> > with a matching server in the guest, then terminating QEMU while the
> > test is running.
> > 
> > Details (I saw it first, then I reproduced it under gdb):
> > 
> >   accepted connection from PID 3115463
> >   NDP: received RS, sending RA
> >   DHCP: offer to discover
> >     from 52:54:00:12:34:56
> >   DHCP: ack to request
> >     from 52:54:00:12:34:56
> >   NDP: sending unsolicited RA, next in 212s
> >   Client connection closed
> > 
> >   Program received signal SIGSEGV, Segmentation fault.
> >   0x00005555555884f5 in vring_avail_idx (vq=0x555559343f10 ) at virtio.c:138
> >   138             vq->shadow_avail_idx = le16toh(vq->vring.avail->idx);
> >   (gdb) list
> >   133      *
> >   134      * Return: the available ring index of the given virtqueue
> >   135      */
> >   136     static inline uint16_t vring_avail_idx(struct vu_virtq *vq)
> >   137     {
> >   138             vq->shadow_avail_idx = le16toh(vq->vring.avail->idx);
> >   139
> >   140             return vq->shadow_avail_idx;
> >   141     }
> >   142
> >   (gdb) bt
> >   #0  0x00005555555884f5 in vring_avail_idx (vq=0x555559343f10 ) at virtio.c:138
> >   #1  vu_queue_empty (vq=vq@entry=0x555559343f10 ) at virtio.c:290
> >   #2  vu_queue_pop (dev=dev@entry=0x555559343a00 , vq=vq@entry=0x555559343f10 , elem=elem@entry=0x7ffffff6f510) at virtio.c:505
> >   #3  0x0000555555588c8c in vu_collect (vdev=vdev@entry=0x555559343a00 , vq=vq@entry=0x555559343f10 , elem=elem@entry=0x7ffffff6f510, max_elem=max_elem@entry=1,
> >       size=size@entry=74, frame_size=frame_size@entry=0x0) at vu_common.c:86
> >   #4  0x000055555557e00e in tcp_vu_send_flag (c=0x7ffffff6f7a0, conn=0x5555555bd2d0 , flags=4) at tcp_vu.c:116
> >   #5  0x0000555555578125 in tcp_send_flag (flags=4, conn=0x5555555bd2d0 , c=0x7ffffff6f7a0) at tcp.c:1278
> >   #6  tcp_rst_do (conn=, c=) at tcp.c:1293
> >   #7  tcp_timer_handler (c=c@entry=0x7ffffff6f7a0, ref=..., ref@entry=...) at tcp.c:2266
> >   #8  0x0000555555558f26 in main (argc=, argv=) at passt.c:342
> >   (gdb) p *vq
> >   $1 = {vring = {num = 256, desc = 0x0, avail = 0x0, used = 0x0, log_guest_addr = 4338774592, flags = 0}, last_avail_idx = 35133, shadow_avail_idx = 35133, used_idx = 35133, signalled_used = 0,
> >     signalled_used_valid = false, notification = true, inuse = 0, call_fd = -1, kick_fd = -1, err_fd = -1, enable = 1, started = false, vra = {index = 0, flags = 0, desc_user_addr = 139660501995520,
> >     used_user_addr = 139660502000192, avail_user_addr = 139660501999616, log_guest_addr = 4338774592}}
> >   (gdb) p *vq->vring.avail
> >   Cannot access memory at address 0x0
> > 
> > ...so we're sending a RST segment to the guest, but the ring doesn't
> > exist anymore.
> > 
> > By the way, I still have the gdb session running, if you need something
> > else out of it.
> > 
> > Now, I guess we should eventually introduce a more comprehensive
> > handling of the case where the guest suddenly terminates (not specific
> > to vhost-user), but given that we have vu_cleanup() working as expected
> > in this case, I wonder if we shouldn't simply avoid calling
> > vring_avail_idx() (it has a single caller) by checking for !vring.avail
> > in the caller, or something like that.
> 
> Yes, I think it's the lines I removed during the reviews:
> 
>         if (!vq->vring.avail)
>                 return true;

Ah, right:

https://archives.passt.top/passt-dev/20241114163859.7eeafa38@elisabeth/

...so, at least in our case, it's more than "sanity checks" after all. :)
Well, I guess it depends on the definition.

> Could you try to checkout virtio.c from v11?
That would take a rather lengthy rebase, but I tried to reintroduce all
the checks you had:

--
diff --git a/virtio.c b/virtio.c
index 6a97435..0598ff4 100644
--- a/virtio.c
+++ b/virtio.c
@@ -284,6 +284,9 @@ static int virtqueue_read_next_desc(const struct vring_desc *desc,
  */
 bool vu_queue_empty(struct vu_virtq *vq)
 {
+	if (!vq->vring.avail)
+		return true;
+
 	if (vq->shadow_avail_idx != vq->last_avail_idx)
 		return false;
 
@@ -327,6 +330,9 @@ static bool vring_can_notify(const struct vu_dev *dev, struct vu_virtq *vq)
  */
 void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq)
 {
+	if (!vq->vring.avail)
+		return;
+
 	if (!vring_can_notify(dev, vq)) {
 		debug("vhost-user: virtqueue can skip notify...");
 		return;
@@ -502,6 +508,9 @@ int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq, struct vu_virtq_elemen
 	unsigned int head;
 	int ret;
 
+	if (!vq->vring.avail)
+		return -1;
+
 	if (vu_queue_empty(vq))
 		return -1;
 
@@ -591,6 +600,9 @@ void vu_queue_fill_by_index(struct vu_virtq *vq, unsigned int index,
 {
 	struct vring_used_elem uelem;
 
+	if (!vq->vring.avail)
+		return;
+
 	idx = (idx + vq->used_idx) % vq->vring.num;
 
 	uelem.id = htole32(index);
@@ -633,6 +645,9 @@ void vu_queue_flush(struct vu_virtq *vq, unsigned int count)
 {
 	uint16_t old, new;
 
+	if (!vq->vring.avail)
+		return;
+
 	/* Make sure buffer is written before we update index. */
 	smp_wmb();
--

and it's all fine with those: I tried doing a few nasty things and
didn't observe any issue. Any check I missed?

Do you want to submit it as a follow-up patch? I can also do that. I'd
rather (still) avoid a re-post of v14 if possible.

-- 
Stefano
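
[For readers following the archive without the passt tree at hand: the
sketch below is a minimal, self-contained model of the failure mode and
of the guard discussed in this thread. All names (demo_vq, demo_avail,
demo_queue_empty, and so on) are simplified stand-ins, not the actual
passt definitions, and the struct layout only loosely mirrors
struct vu_virtq; it is an illustration of the pattern, not the real
implementation.]

--
/* Model of the crash above: once the rings are unmapped, the avail
 * pointer is NULL, and an unguarded read of the available index faults,
 * while the guarded "queue empty" check simply reports the queue as
 * empty.  All demo_* names are hypothetical stand-ins.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct demo_avail {
	uint16_t flags;
	uint16_t idx;
};

struct demo_vq {
	struct demo_avail *avail;	/* NULL once the guest has gone away */
	uint16_t shadow_avail_idx;
	uint16_t last_avail_idx;
};

/* Unguarded read of the available index: faults if avail is NULL,
 * like the dereference in the backtrace
 */
static uint16_t demo_avail_idx(struct demo_vq *vq)
{
	vq->shadow_avail_idx = vq->avail->idx;

	return vq->shadow_avail_idx;
}

/* Guarded caller: a torn-down queue is reported as empty, which is the
 * same early return the diff reintroduces in vu_queue_empty()
 */
static bool demo_queue_empty(struct demo_vq *vq)
{
	if (!vq->avail)
		return true;

	if (vq->shadow_avail_idx != vq->last_avail_idx)
		return false;

	return demo_avail_idx(vq) == vq->last_avail_idx;
}

int main(void)
{
	struct demo_avail avail = { .flags = 0, .idx = 3 };
	struct demo_vq vq = { .avail = &avail };

	printf("guest present, queue empty: %d\n", demo_queue_empty(&vq));

	vq.avail = NULL;	/* simulate the rings being unmapped on disconnect */
	printf("guest gone, queue empty:    %d\n", demo_queue_empty(&vq));

	return EXIT_SUCCESS;
}
--

[The early return in the "queue empty" check covers the
tcp_vu_send_flag() path shown in the backtrace; the remaining hunks in
the diff guard the other entry points that touch the rings directly.]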