Date: Thu, 20 Feb 2025 11:38:00 +0100
From: Stefano Brivio
To: David Gibson
Cc: passt-dev@passt.top
Subject: Re: [PATCH 2/2] migrate, flow: Don't attempt to migrate TCP flows without passt-repair
Message-ID: <20250220113800.05be8f5f@elisabeth>
References: <20250220060318.1796504-1-david@gibson.dropbear.id.au>
 <20250220060318.1796504-3-david@gibson.dropbear.id.au>
 <20250220090726.43432475@elisabeth>
Organization: Red Hat

On Thu, 20 Feb 2025 21:18:06 +1100
David Gibson wrote:

> On Thu, Feb 20, 2025 at 09:07:26AM +0100, Stefano Brivio wrote:
> > On Thu, 20 Feb 2025 17:03:18 +1100
> > David Gibson wrote:
> > 
> > > Migrating TCP flows requires passt-repair in order to use TCP_REPAIR. If
> > > passt-repair is not started, our failure mode is pretty ugly though: we'll
> > > attempt the migration, hitting various problems when we can't enter repair
> > > mode. In some cases we may not roll back these changes properly, meaning
> > > we break network connections on the source.
> > > 
> > > Our general approach is not to completely block migration if there are
> > > problems, but simply to break any flows we can't migrate. So, if we have
> > > no connection from passt-repair, carry on with the migration, but don't
> > > attempt to migrate any TCP connections.
> > > 
> > > Signed-off-by: David Gibson
> > > ---
> > >  flow.c | 11 +++++++++--
> > >  1 file changed, 9 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/flow.c b/flow.c
> > > index 6cf96c26..749c4984 100644
> > > --- a/flow.c
> > > +++ b/flow.c
> > > @@ -923,6 +923,10 @@ static int flow_migrate_repair_all(struct ctx *c, bool enable)
> > >  	union flow *flow;
> > >  	int rc;
> > >  
> > > +	/* If we don't have a repair helper, there's nothing we can do */
> > > +	if (c->fd_repair < 0)
> > > +		return 0;
> > > +
> > 
> > This doesn't fix the behaviour in a relatively likely failure mode:
> > passt-repair is there, but we can't communicate to it (LSM policy
> > issues or similar).
> 
> Ah... true. Although it shouldn't make it any worse for that case,
> right, so that could be a separate fix.

Sure.
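(For context, "entering repair mode" is a single setsockopt() per
socket, but it requires CAP_NET_ADMIN, which is the whole reason the
privileged passt-repair helper exists. A minimal sketch of the
operation itself, with a hypothetical helper name -- this is not
passt's repair_set(), which only queues descriptors for the helper:)

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* Switch a connected TCP socket in or out of Linux repair mode.
   * Fails with EPERM unless the caller has CAP_NET_ADMIN, so an
   * unprivileged process needs a helper to do it on its behalf. */
  static int tcp_repair_toggle(int s, int on)
  {
          return setsockopt(s, SOL_TCP, TCP_REPAIR, &on, sizeof(on));
  }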
> > In that case, unconditionally terminating on failure in the rollback
> > function:
> > 
> >   if (tcp_flow_repair_off(c, &flow->tcp))
> >           die("Failed to roll back TCP_REPAIR mode");
> > 
> >   if (repair_flush(c))
> >           die("Failed to roll back TCP_REPAIR mode");
> > 
> > isn't a very productive thing to do: we go from an uneventful failure
> > where flows were not affected at all to a guest left without
> > connectivity.
> 
> So, the issue is that leaving sockets in repair mode after we leave
> the migration path would be very bad.

Why? I really can't see anything catastrophic happening as a result of
that (hence my v12 version of this). Surely not as bad as the guest
losing connectivity without any possible recovery.

> We can't easily close
> sockets/flows for which that's the case, because the batching means if
> there's a failure we don't actually know which sockets are in which
> mode, hence the die().

They can be closed (via tcp_rst()) anyway. If they're in repair mode,
no RST will reach the peer, and if they aren't, it will.

> > That starts looking less robust than the alternative (e.g. what I
> > implemented in v12: silently fail and continue), at least without
> > https://patchew.org/QEMU/20250217092550.1172055-1-lvivier@redhat.com/,
> > in a general case as well: if we continue, we'll have hanging flows
> > that will expire on timeout, but if we don't, again, we'll have a
> > guest without connectivity.
> > 
> > I understand that leaving flows around for that long might present a
> > relevant inconsistency, though.
> > 
> > So I'm wondering about some alternatives: actually, the rollback
> > function shouldn't be called at all in this case. Or it could just
> > (indirectly) call tcp_rst() on all the flows that were possibly
> > affected.
> 
> Making it be a safe no-op if we never managed to turn repair on for
> anything would make sense to me. Unfortunately, in this situation we
> won't see an error until we do a repair_flush(), which means we now
> don't know the state of any sockets we already passed to
> repair_set().
> 
> We could, I suppose, close all flows that we passed to repair_set() by
> the time we see the error. If we have less than one batch's worth of
> connections, that will kill connectivity almost as much as die()ing,
> though. I guess it will come back without needing qemu to restart us,
> though, so that's something.

Right, yes, that's what I'm suggesting.

> This sort of thing is, incidentally, why I did, way back, suggest the
> possibility of passt-repair reporting failures per-fd, rather than
> just per-batch.

Sorry, I somehow missed that proposal, and I can't find any trace of
it. But anyway, the problem is that if we fail to read a batch for any
reason (invalid ancillary data... maybe always implying a kernel
issue, but I'm not sure), you can't _reliably_ report per-fd failures.
*Usually*, you can. Worth it?

In any case, if it's simple, we can still do it, because passt and
passt-repair are distributed together. You can't pass back the file
descriptors via SCM_RIGHTS, though, because we want to close() them
before we reply.

Another alternative could be that passt-repair reverts the state of
the file descriptors that were already switched, on failure.

-- 
Stefano
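(A minimal sketch of that last alternative, with hypothetical names
and batch layout, not the actual passt-repair code: on a mid-batch
setsockopt() failure, the helper would flip back the descriptors it
already switched before closing them and reporting the usual
per-batch error, so each batch becomes observably all-or-nothing:)

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* fds[] are the sockets received from passt over the Unix domain
   * socket via SCM_RIGHTS; 'on' is the TCP_REPAIR state requested
   * for the whole batch. */
  static int batch_repair(const int *fds, int n, int on)
  {
          int i;

          for (i = 0; i < n; i++) {
                  if (setsockopt(fds[i], SOL_TCP, TCP_REPAIR,
                                 &on, sizeof(on))) {
                          int off = !on; /* opposite of requested state */

                          /* Revert the descriptors already switched,
                           * leaving the failed one untouched */
                          while (i-- > 0)
                                  setsockopt(fds[i], SOL_TCP, TCP_REPAIR,
                                             &off, sizeof(off));
                          return -1; /* per-batch failure, as today */
                  }
          }
          return 0;
  }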