Date: Fri, 25 Jul 2025 10:21:12 +0200
From: Stefano Brivio <sbrivio@redhat.com>
To: David Gibson
Subject: Re: [PATCH v3] treewide: By default, don't quit source after migration, keep sockets open
Message-ID: <20250725102112.55910998@elisabeth>
References: <20250724172858.1189615-1-sbrivio@redhat.com> <20250725071058.0842f7a2@elisabeth>
Organization: Red Hat
CC: passt-dev@passt.top, Nir Dothan
List-Id: Development discussion and patches for passt

On Fri, 25 Jul 2025 16:50:23 +1000
David Gibson wrote:

> On Fri, Jul 25, 2025 at 07:10:58AM +0200, Stefano Brivio wrote:
> > On Fri, 25 Jul 2025 14:04:17 +1000
> > David Gibson wrote:
> > 
> > > On Thu, Jul 24, 2025 at 07:28:58PM +0200, Stefano Brivio wrote:
> > > > We are hitting an issue in the KubeVirt integration where some data
> > > > is still sent to the source instance even after migration is
> > > > complete. As we exit, the kernel closes our sockets and resets
> > > > connections. The resulting RST segments are sent to peers,
> > > > effectively terminating connections that were meanwhile migrated.
> > > > 
> > > > At the moment, this is not done intentionally, but in the future
> > > > KubeVirt might enable OVN-Kubernetes features where source and
> > > > destination nodes explicitly receive mirrored traffic for a while,
> > > > in order to decrease migration downtime.
> > > > 
> > > > By default, don't quit after migration is completed on the source:
> > > > the previous behaviour can be enabled with the new, but deprecated,
> > > > --migrate-exit option. After migration (as source), the -1 /
> > > > --one-off option has no effect.
> > > > 
> > > > Also, by default, keep migrated TCP sockets open (in repair mode)
> > > > as long as we're running, and ignore events on any epoll descriptor
> > > > representing data channels. The previous behaviour can be enabled
> > > > with the new, equally deprecated, --migrate-no-linger option.
> > > > 
> > > > By keeping sockets open, and not exiting, we prevent the kernel
> > > > running on the source node from sending out RST segments if further
> > > > data reaches us.
> > > > 
> > > > Reported-by: Nir Dothan
> > > > Signed-off-by: Stefano Brivio
> > > > ---
> > > > v2:
> > > > - assorted changes in commit message
> > > > - context variable ignore_linger becomes ignore_no_linger
> > > > - new options are deprecated
> > > > - don't ignore events on some descriptors, drop them from epoll
> > > > 
> > > > v3:
> > > > - Nir reported occasional failures (connections being reset)
> > > >   with both v1 and v2, because, in KubeVirt's usage, we quit as
> > > >   QEMU exits. Disable --one-off after migration as source, and
> > > >   document this exception
> > > 
> > > This seems like an awful, awful hack.
> > 
> > Well, of course, it is, and long term it should be fixed in
> > either KubeVirt or libvirt (even though I'm not sure how, see below)
> > instead.
> 
> But this hack means that even when it's fixed we'll still have this
> wildly counterintuitive behaviour that every future user will have to
> work around.

No, why? We can change that as well. We changed the semantics of
options in the past, and I don't see an issue doing that as long as we
coordinate things to a reasonable extent (like we do with Podman and
rootlesskit, and with distributions and LSMs...).

This is just to get things working properly in KubeVirt 1.6 as far as
I'm concerned. Otherwise they might as well drop the whole feature (at
least, that would be my recommendation).

> There's no sensible internal reason for out-migration to
> affect lifetime, it's a workaround for problems that are quite
> specific to this stack of layers above.
> 
> > > We're abandoning consistent
> > > semantics on a wild guess as to what the layers above us need.
> > 
> > No, not really, we tested this and tested the alternative.
> 
> With just one use case.

...better than zero?

> Creating semantics to work with exactly how
> something is used now, without thought to whether they make sense in
> general, is the definition of fragile software.

...better than useless?

> > > Specifically, --once-off used to mean that the layer above us didn't
> > 
> > --one-off
> > 
> > > need to manage passt's lifetime; it was tied to qemu's. Now it still
> > > needs to manually manage passt's lifetime, so what's the point? So,
> > > if it needs passt to outlive qemu it should actually manage that and
> > > not use --once-off.
> > 
> > The main point is that it does *not* manually manage passt's lifetime
> > if there's no migration (which is the general case for libvirt and all
> > other users).
> 
> That's exactly my point. With this hack it's neither one model nor
> the other, so you have to be aware of both.

Current users except for KubeVirt use --one-off with that model, and we
surely want and need to keep that. Now it turns out that there's an
issue with KubeVirt and that (obvious) model, so here's a workaround
for the only documented user of the migration feature, because it
*currently* *needs* the other (obviously wrong) model.

> > We don't have any other user with an implementation of the migration
> > workflow anyway (libvirt itself doesn't do that, yet). It's otherwise
> > unusable for KubeVirt. So I'd say let's fix it for the only user we
> > have.
> 
> Please not at the expense of forcing every future user to deal with
> this suckage.

That's not the case. We can (and really should) fix this in passt
later. We need to rework a fair amount of code here anyway because, for
example, as you mentioned, listening sockets are still there.
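
[Illustration: a minimal sketch of the repair-mode "linger" described
in the commit message above, assuming plain Linux socket and epoll
APIs; linger_migrated() is a hypothetical helper for this email, not
passt's actual code:]

    /* After a connection's state has been handed to the destination:
     * stop watching the data channel and hold the socket open in
     * repair mode.  In repair mode the kernel neither transmits nor
     * receives on the socket, so no RST can escape while we keep it
     * around.  TCP_REPAIR requires CAP_NET_ADMIN. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>

    static int linger_migrated(int epollfd, int s)
    {
            int one = 1;

            /* Drop the descriptor from epoll: ignore further events */
            if (epoll_ctl(epollfd, EPOLL_CTL_DEL, s, NULL))
                    return -1;

            /* Keep the socket open, quiesced, instead of closing it */
            return setsockopt(s, IPPROTO_TCP, TCP_REPAIR,
                              &one, sizeof(one));
    }
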
> > > Requiring passt to outlive qemu already seems pretty dubious to me:
> > > having the source still connected when passt was quitting is one
> > > thing - indeed it's arguably hard to avoid. Having it still
> > > connected when *qemu* quits is much less defensible.
> > 
> > The fundamental problem here is that there's an issue in KubeVirt
> > (and working around it is the whole point of this patch) which implies
> > that packets are sent to the source pod *for a while* after migration.
> > 
> > We found out that the guest is generally suspended during that while,
> > but sometimes it might even have already exited. The pod remains,
> > though, as long as it's needed. That's the only certainty we have.
> 
> Keeping the pod around is fine. What needs to change is that the
> guest's IP(s) need to be removed from the source host before qemu
> (and therefore passt) is terminated. The pod must have at least one
> other IP, or it would be impossible to perform the migration in the
> first place.

Maybe, yes. I'm not sure if it's doable.

> This essentially matches the situation for bridged networking: with
> the source guest suspended, the source host will no longer respond to
> the guest IP.
> 
> > So, do we want to drop --one-off from the libvirt integration, and have
> > libvirt manage passt's lifecycle entirely (note that all users outside
> > KubeVirt don't use migration, so we would make the general case vastly
> > more complicated for the sake of correctness in a single usage...)?
> 
> Hmm... if I understand correctly, the network swizzling is handled by
> KubeVirt, not libvirt.

That's OVN-Kubernetes in KubeVirt's case.

> I'm hoping that means there's a suitable point
> at which it can remove the IP without having to alter libvirt.

I hope so too, eventually. Or we could make sure that QEMU stays alive
as long as needed; this is probably easier to ensure from
virt-launcher. I haven't looked at the details yet, but in passt it's
one line and we can drop it later as needed, while in KubeVirt it's
probably much more complicated than that.

> > Well, we can try to do that. Except that libvirt doesn't know either
> > for how long this traffic will reach the source pod (that's a KubeVirt
> > concept). So it would need to implement the same hack: let passt
> > outlive QEMU on migration... as long as we have that issue in KubeVirt.
> > 
> > But I asked KubeVirt people, and it turns out that it's extremely
> > complicated to fix this in KubeVirt. So, actually, I don't see another
> > way to fix this in the short term. And without KubeVirt using this we
> > could also drop the whole feature...

-- 
Stefano
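
[Illustration of the alternative David suggests above: removing the
guest's address from the source host/pod before qemu (and passt) goes
away, so stray traffic is no longer answered. A minimal IPv4-only
sketch using the classic SIOCDIFADDR ioctl, the C equivalent of
"ip addr del"; drop_guest_addr() is a hypothetical helper, not part of
passt, libvirt, or KubeVirt:]

    #include <arpa/inet.h>
    #include <net/if.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int drop_guest_addr(const char *ifname, const char *addr)
    {
            struct sockaddr_in sin = { .sin_family = AF_INET };
            struct ifreq ifr = { 0 };
            int s, rc;

            if (inet_pton(AF_INET, addr, &sin.sin_addr) != 1)
                    return -1;

            strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
            memcpy(&ifr.ifr_addr, &sin, sizeof(sin));

            /* Any AF_INET socket works as a handle for the ioctl */
            if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
                    return -1;

            /* Delete the address: the source stops responding to
             * traffic still directed at the migrated guest */
            rc = ioctl(s, SIOCDIFADDR, &ifr);
            close(s);
            return rc;
    }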