From mboxrd@z Thu Jan 1 00:00:00 1970 Authentication-Results: passt.top; dmarc=none (p=none dis=none) header.from=gibson.dropbear.id.au Authentication-Results: passt.top; dkim=pass (2048-bit key; secure) header.d=gibson.dropbear.id.au header.i=@gibson.dropbear.id.au header.a=rsa-sha256 header.s=202502 header.b=ICdOy45/; dkim-atps=neutral Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) by passt.top (Postfix) with ESMTPS id F32995A0639 for ; Fri, 14 Feb 2025 14:08:55 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gibson.dropbear.id.au; s=202502; t=1739538527; bh=glbmdIPn6XO+0VXYAopKtI9TJX9mUN7ejnjbxYNJxjQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ICdOy45/Qbb5ILvQav7YMzcn4rcT+4onMJWRyeP1hDQUkD9a/cllUAAjLzhS/rIET vLw7Yl3yhalWnd2qrr+yp5K7x/ZtqVxSFhKC/hygnZ/OOi5qIMCTmzxb2WGFTG+9Ho vhdeCvhSQbNRIRkAYoPhLDoZq5+OrCdQ+xGkp17bm0zc5yiO+CnX2FMLQfd9iG7W0q 4Oqn5g8+VQKeuCcVmDTAaXF7qWedmyDRwSLsQTPJflyd31BaVcAFUdGbYjrOxwrcCJ sgUxiTlgfKVubhPDeuQGRhqxqRJRaR2m73orVTlJO279Z2lpINwTR6DoHCyZWfK88N 8j3nBFtfLXvHA== Received: by gandalf.ozlabs.org (Postfix, from userid 1007) id 4YvXRz72Fhz4x5K; Sat, 15 Feb 2025 00:08:47 +1100 (AEDT) From: David Gibson To: Stefano Brivio , passt-dev@passt.top Subject: [PATCH v24 4/5] migrate: Migrate TCP flows Date: Sat, 15 Feb 2025 00:08:44 +1100 Message-ID: <20250214130845.3475757-5-david@gibson.dropbear.id.au> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250214130845.3475757-1-david@gibson.dropbear.id.au> References: <20250214130845.3475757-1-david@gibson.dropbear.id.au> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Message-ID-Hash: 4TRSJWZPJER3XTSNCOLQEGZPYPGVZXO6 X-Message-ID-Hash: 4TRSJWZPJER3XTSNCOLQEGZPYPGVZXO6 X-MailFrom: dgibson@gandalf.ozlabs.org X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; emergency; loop; banned-address; member-moderation; nonmember-moderation; administrivia; implicit-dest; max-recipients; max-size; news-moderation; no-subject; digests; 
suspicious-header CC: David Gibson X-Mailman-Version: 3.3.8 Precedence: list List-Id: Development discussion and patches for passt Archived-At: Archived-At: List-Archive: List-Archive: List-Help: List-Owner: List-Post: List-Subscribe: List-Unsubscribe: From: Stefano Brivio This implements flow preparation on the source, transfer of data with a format roughly inspired by struct tcp_tap_conn, and flow insertion on the target, with all the appropriate window options, window scaling, MSS, etc. The target side is rather convoluted because we first need to create sockets and switch them to repair mode, before we can apply options that are *not* stored in the flow table. However, we don't want to request repair mode for sockets one by one. So we need to do this in several steps. [dwg: Assorted cleanups] Signed-off-by: Stefano Brivio Signed-off-by: David Gibson --- contrib/selinux/passt.te | 4 +- flow.c | 181 +++++++- flow.h | 8 + flow_table.h | 1 + migrate.c | 22 + passt.c | 6 +- repair.c | 1 - tcp.c | 950 +++++++++++++++++++++++++++++++++++++++ tcp_conn.h | 99 ++++ 9 files changed, 1243 insertions(+), 29 deletions(-) diff --git a/contrib/selinux/passt.te b/contrib/selinux/passt.te index c6cea34f..3eb11e68 100644 --- a/contrib/selinux/passt.te +++ b/contrib/selinux/passt.te @@ -38,7 +38,7 @@ require { type net_conf_t; type proc_net_t; type node_t; - class tcp_socket { create accept listen name_bind name_connect }; + class tcp_socket { create accept listen name_bind name_connect getattr }; class udp_socket { create accept listen }; class icmp_socket { bind create name_bind node_bind setopt read write }; class sock_file { create unlink write }; @@ -119,7 +119,7 @@ corenet_udp_sendrecv_all_ports(passt_t) allow passt_t node_t:icmp_socket { name_bind node_bind }; allow passt_t port_t:icmp_socket name_bind; -allow passt_t self:tcp_socket { create getopt setopt connect bind listen accept shutdown read write }; +allow passt_t self:tcp_socket { create getopt setopt connect bind 
listen accept shutdown read write getattr }; allow passt_t self:udp_socket { create getopt setopt connect bind read write }; allow passt_t self:icmp_socket { bind create setopt read write }; diff --git a/flow.c b/flow.c index 90bff884..96aef29d 100644 --- a/flow.c +++ b/flow.c @@ -19,6 +19,7 @@ #include "inany.h" #include "flow.h" #include "flow_table.h" +#include "repair.h" const char *flow_state_str[] = { [FLOW_STATE_FREE] = "FREE", @@ -874,6 +875,23 @@ void flow_defer_handler(const struct ctx *c, const struct timespec *now) *last_next = FLOW_MAX; } +/** + * flow_alloc_migrate() - Allocate a new flow to be migrated in + * + * Return: pointer to an unused flow entry, or NULL if the table is full + */ +union flow *flow_alloc_migrate(void) +{ + union flow *flow = flow_alloc(); + + if (!flow) + return NULL; + + flow_set_state(&flow->f, FLOW_STATE_MIGRATING); + flow_new_entry = NULL; + return flow; +} + /** * flow_freeze() - Select and prepare flows for migration * @c: Execution context @@ -887,19 +905,21 @@ void flow_defer_handler(const struct ctx *c, const struct timespec *now) int flow_freeze(struct ctx *c, const struct migrate_stage *stage, int fd) { union flow *flow; + int rc; (void)stage; (void)fd; - (void)c; flow_foreach(flow) { /* rc == 0 : not a migration candidate * rc > 0 : migration candidate * rc < 0 : error, fail migration */ - int rc; switch (flow->f.type) { + case FLOW_TCP: + rc = tcp_freeze(c, &flow->tcp); + break; default: /* Otherwise assume it doesn't migrate */ rc = 0; @@ -913,6 +933,13 @@ int flow_freeze(struct ctx *c, const struct migrate_stage *stage, int fd) flow_set_state(&flow->f, FLOW_STATE_MIGRATING); } + if ((rc = repair_flush(c))) { + debug("Can't enable repair mode: %s", strerror_(-rc)); + if (flow_thaw(c, stage, fd)) + die("Unable to roll back migration"); + return rc; + } + return 0; } @@ -931,15 +958,13 @@ int flow_thaw(struct ctx *c, const struct migrate_stage *stage, int fd) struct flow_free_cluster *free_head = NULL; unsigned 
*last_next = &flow_first_free; union flow *flow; + int rc; (void)stage; (void)fd; - (void)c; /* FIXME: Share logic with flow_defer_handler to rebuild free list */ flow_foreach_slot(flow) { - int rc; - if (flow->f.state == FLOW_STATE_FREE) { unsigned skip = flow->free.n; @@ -969,23 +994,31 @@ int flow_thaw(struct ctx *c, const struct migrate_stage *stage, int fd) ASSERT(flow->f.state == FLOW_STATE_MIGRATING); - rc = 0; + /* rc > 0 : migration completed successfully + * rc == 0 : migration failed, clear flow + * rc < 0 : unrecoverable error, fail migration + */ switch (flow->f.type) { + case FLOW_TCP: + rc = tcp_thaw(c, &flow->tcp); + break; default: /* Bug. We marked a flow as migrating, but we don't * know how to resume it */ ASSERT(0); } - if (rc == 0) { + if (rc < 0) + die("Unrecoverable migration error"); + + if (rc > 0) { /* Successfully resumed flow */ flow_set_state(&flow->f, FLOW_STATE_ACTIVE); free_head = NULL; continue; } - flow_err(flow, "Failed to unfreeze resume flow: %s", - strerror_(-rc)); + flow_err(flow, "Failed to thaw flow"); flow_set_state(&flow->f, FLOW_STATE_FREE); memset(flow, 0, sizeof(*flow)); @@ -1006,44 +1039,146 @@ int flow_thaw(struct ctx *c, const struct migrate_stage *stage, int fd) } *last_next = FLOW_MAX; + + if ((rc = repair_flush(c))) { + debug("Can't disable repair mode: %s", strerror_(-rc)); + return rc; + } return 0; } /** - * flow_migrate_source() - Transfer migrating flows to device state stream - * @c: Execution context - * @stage: Migration stage information, unused + * flow_migrate_source() - Dump all the remaining information and send data + * @c: Execution context (unused) + * @stage: Migration stage information (unused) * @fd: Migration file descriptor * * Return: 0 on success, positive error code on failure */ -int flow_migrate_source(struct ctx *c, const struct migrate_stage *stage, int fd) +int flow_migrate_source(struct ctx *c, const struct migrate_stage *stage, + int fd) { + /* Set once we can no longer rollback on 
the source */ + bool noreturn = false; + uint32_t count = 0; + union flow *flow; + int rc; + + (void)c; (void)stage; - (void)fd; - /* FIXME: todo */ - return ENOTSUP; + flow_foreach_migrating(flow) + count++; + + count = htonl(count); + if (write_all_buf(fd, &count, sizeof(count))) { + rc = errno; + err_perror("Can't send flow count (%u)", ntohl(count)); + goto fail; + } + + debug("Sending %u flows", ntohl(count)); + + /* Dump and send information that can be stored in the flow table */ + flow_foreach_migrating(flow) { + switch (flow->f.type) { + case FLOW_TCP: + if ((rc = tcp_flow_migrate_source(fd, &flow->tcp))) { + flow_err(flow, "Can't send data: %s", strerror_(-rc)); + rc = -rc; + goto fail; + } + + /* We've closed sockets now, no going back */ + /* FIXME: move this later, if we can */ + noreturn = true; + break; + default: + /* Bug. Flow marked for migration, but we don't know how */ + ASSERT(0); + } + } + + /* And then "extended" data (including window data we saved previously): + * the target needs to set repair mode on sockets before it can set + * this stuff, but it needs sockets (and flows) for that. + * + * This also closes sockets so that the target can start connecting + * theirs: you can't sendmsg() to queues (using the socket) if the + * socket is not connected (EPIPE), not even in repair mode. And the + * target needs to restore queues now because we're sending the data. + * + * So, no rollback here, just try as hard as we can. + */ + flow_foreach_migrating(flow) { + switch (flow->f.type) { + case FLOW_TCP: + if ((rc = tcp_flow_migrate_source_ext(fd, &flow->tcp))) { + flow_err(flow, "Can't send extended data: %s", strerror_(-rc)); + rc = -rc; goto fail; + } + break; + default: + /* Bug. 
Flow marked for migration, but we don't know how */ + ASSERT(0); + } + } + + return 0; + +fail: + if (noreturn) + die("Unable to roll back migration"); + + return rc; } /** - * flow_migrate_target() - Build flows from device state stream + * flow_migrate_target() - Receive flows and insert in flow table * @c: Execution context - * @stage: Migration stage information, unused + * @stage: Migration stage information (unused) * @fd: Migration file descriptor * * Return: 0 on success, positive error code on failure */ -int flow_migrate_target(struct ctx *c, const struct migrate_stage *stage, int fd) +int flow_migrate_target(struct ctx *c, const struct migrate_stage *stage, + int fd) { - (void)c; + uint32_t count; + unsigned i; + int rc; + (void)stage; - (void)fd; - /* FIXME: todo */ - return ENOTSUP; + if (read_all_buf(fd, &count, sizeof(count))) + return errno; + + count = ntohl(count); + debug("Receiving %u flows", count); + + /* TODO: flow header with type, instead? */ + for (i = 0; i < count; i++) { + rc = tcp_flow_migrate_target(c, fd); + if (rc) { + debug("Bad target data for flow %u: %s, abort", + i, strerror_(-rc)); + return -rc; + } + } + + repair_flush(c); + + for (i = 0; i < count; i++) { + rc = tcp_flow_migrate_target_ext(c, flowtab + i, fd); + if (rc) { + debug("Bad target extended data for flow %u: %s, abort", + i, strerror_(-rc)); + return -rc; + } + } + + return 0; } /** diff --git a/flow.h b/flow.h index deb70eb1..eb81bbc7 100644 --- a/flow.h +++ b/flow.h @@ -254,6 +254,14 @@ union flow; void flow_init(void); void flow_defer_handler(const struct ctx *c, const struct timespec *now); +int flow_migrate_source_early(struct ctx *c, const struct migrate_stage *stage, + int fd); +int flow_migrate_source_pre(struct ctx *c, const struct migrate_stage *stage, + int fd); +int flow_migrate_source(struct ctx *c, const struct migrate_stage *stage, + int fd); +int flow_migrate_target(struct ctx *c, const struct migrate_stage *stage, + int fd); void flow_log_(const 
struct flow_common *f, int pri, const char *fmt, ...) __attribute__((format(printf, 3, 4))); diff --git a/flow_table.h b/flow_table.h index 5ccf6644..9ac3ce64 100644 --- a/flow_table.h +++ b/flow_table.h @@ -226,6 +226,7 @@ void flow_activate(struct flow_common *f); #define FLOW_ACTIVATE(flow_) \ (flow_activate(&(flow_)->f)) +union flow *flow_alloc_migrate(void); int flow_freeze(struct ctx *c, const struct migrate_stage *stage, int fd); int flow_thaw(struct ctx *c, const struct migrate_stage *stage, int fd); int flow_migrate_source(struct ctx *c, const struct migrate_stage *stage, int fd); diff --git a/migrate.c b/migrate.c index d802e2f9..0a90fba3 100644 --- a/migrate.c +++ b/migrate.c @@ -96,6 +96,17 @@ static int seen_addrs_target_v1(struct ctx *c, return 0; } +/** no_rollback() - dummy callback indicating a stage can't be rolled back + * Return: ENXIO, unconditionally + */ +static int no_rollback(struct ctx *c, const struct migrate_stage *stage, int fd) +{ + (void)c; + (void)stage; + (void)fd; + return ENXIO; +} + /* Stages for version 1 */ static const struct migrate_stage stages_v1[] = { { @@ -104,6 +115,15 @@ static const struct migrate_stage stages_v1[] = { .rollback = flow_thaw, .target = NULL, }, + /* FIXME: With this step, close() in tcp_flow_migrate_source_ext() + * *sometimes* closes the connection for real. 
+ */ +/* { + .name = "shrink TCP windows", + .source = flow_migrate_source_early, + .target = NULL, + }, +*/ { .name = "observed addresses", .source = seen_addrs_source_v1, @@ -112,6 +132,8 @@ static const struct migrate_stage stages_v1[] = { { .name = "transfer flows", .source = flow_migrate_source, + /* This closes sockets, so can't be rolled back */ + .rollback = no_rollback, .target = flow_migrate_target, }, { diff --git a/passt.c b/passt.c index 6f9fb4d9..68d1a283 100644 --- a/passt.c +++ b/passt.c @@ -223,9 +223,6 @@ int main(int argc, char **argv) if (sigaction(SIGCHLD, &sa, NULL)) die_perror("Couldn't install signal handlers"); - if (signal(SIGPIPE, SIG_IGN) == SIG_ERR) - die_perror("Couldn't set disposition for SIGPIPE"); - c.mode = MODE_PASTA; } else if (strstr(name, "passt")) { c.mode = MODE_PASST; @@ -233,6 +230,9 @@ int main(int argc, char **argv) _exit(EXIT_FAILURE); } + if (signal(SIGPIPE, SIG_IGN) == SIG_ERR) + die_perror("Couldn't set disposition for SIGPIPE"); + madvise(pkt_buf, TAP_BUF_BYTES, MADV_HUGEPAGE); c.epollfd = epoll_create1(EPOLL_CLOEXEC); diff --git a/repair.c b/repair.c index d2886173..c2e04501 100644 --- a/repair.c +++ b/repair.c @@ -197,7 +197,6 @@ int repair_flush(struct ctx *c) * * Return: 0 on success, negative error code on failure */ -/* cppcheck-suppress unusedFunction */ int repair_set(struct ctx *c, int s, int cmd) { int rc; diff --git a/tcp.c b/tcp.c index b978b30d..20bfa496 100644 --- a/tcp.c +++ b/tcp.c @@ -280,6 +280,7 @@ #include #include #include +#include #include #include #include @@ -287,6 +288,8 @@ #include #include +#include + #include "checksum.h" #include "util.h" #include "iov.h" @@ -299,6 +302,7 @@ #include "log.h" #include "inany.h" #include "flow.h" +#include "repair.h" #include "linux_dep.h" #include "flow_table.h" @@ -326,6 +330,19 @@ ((conn)->events & (SOCK_FIN_RCVD | TAP_FIN_RCVD))) #define CONN_HAS(conn, set) (((conn)->events & (set)) == (set)) +/* Buffers to migrate pending data from send and receive 
queues. No, they don't + * use memory if we don't use them. And we're going away after this, so splurge. + */ +#define TCP_MIGRATE_SND_QUEUE_MAX (64 << 20) +#define TCP_MIGRATE_RCV_QUEUE_MAX (64 << 20) +uint8_t tcp_migrate_snd_queue [TCP_MIGRATE_SND_QUEUE_MAX]; +uint8_t tcp_migrate_rcv_queue [TCP_MIGRATE_RCV_QUEUE_MAX]; + +#define TCP_MIGRATE_RESTORE_CHUNK_MIN 1024 /* Try smaller when above this */ + +/* "Extended" data (not stored in the flow table) for TCP flow migration */ +static struct tcp_tap_transfer_ext migrate_ext[FLOW_MAX]; + static const char *tcp_event_str[] __attribute((__unused__)) = { "SOCK_ACCEPTED", "TAP_SYN_RCVD", "ESTABLISHED", "TAP_SYN_ACK_SENT", @@ -1468,6 +1485,7 @@ static void tcp_conn_from_tap(const struct ctx *c, sa_family_t af, conn->sock = s; conn->timer = -1; + conn->listening_sock = -1; conn_event(c, conn, TAP_SYN_RCVD); conn->wnd_to_tap = WINDOW_DEFAULT; @@ -1968,10 +1986,27 @@ int tcp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af, ack_due = 1; if ((conn->events & TAP_FIN_RCVD) && !(conn->events & SOCK_FIN_SENT)) { + socklen_t sl; + struct tcp_info tinfo; + shutdown(conn->sock, SHUT_WR); conn_event(c, conn, SOCK_FIN_SENT); tcp_send_flag(c, conn, ACK); ack_due = 0; + + /* If we received a FIN, but the socket is in TCP_ESTABLISHED + * state, it must be a migrated socket. The kernel saw the FIN + * on the source socket, but not on the target socket. + * + * Approximate the effect of that FIN: as we're sending a FIN + * out ourselves, the socket is now in a state equivalent to + * LAST_ACK. Now that we sent the FIN out, close it with a RST. 
+ */ + sl = sizeof(tinfo); + if (!getsockopt(conn->sock, SOL_TCP, TCP_INFO, &tinfo, &sl) && + tinfo.tcpi_state == TCP_ESTABLISHED && + conn->events & SOCK_FIN_RCVD) + goto reset; } if (ack_due) @@ -2054,6 +2089,7 @@ static void tcp_tap_conn_from_sock(const struct ctx *c, union flow *flow, void tcp_listen_handler(const struct ctx *c, union epoll_ref ref, const struct timespec *now) { + struct tcp_tap_conn *conn; union sockaddr_inany sa; socklen_t sl = sizeof(sa); struct flowside *ini; @@ -2069,6 +2105,9 @@ void tcp_listen_handler(const struct ctx *c, union epoll_ref ref, if (s < 0) goto cancel; + conn = (struct tcp_tap_conn *)flow; + conn->listening_sock = ref.fd; + tcp_sock_set_nodelay(s); /* FIXME: If useful: when the listening port has a specific bound @@ -2634,3 +2673,914 @@ void tcp_timer(struct ctx *c, const struct timespec *now) if (c->mode == MODE_PASTA) tcp_splice_refill(c); } + +/** + * tcp_freeze() - Prepare TCP flow for migration + * @c: Execution context + * @conn: Pointer to the TCP connection structure + * + * Return: 1 if migratable, 0 if not migratable, negative error code on failure + */ +int tcp_freeze(struct ctx *c, const struct tcp_tap_conn *conn) +{ + int rc = 0; + + if (!(conn->events & ESTABLISHED)) + return 0; + + /* Disable SO_PEEK_OFF, we don't want it for repair mode */ + if (tcp_set_peek_offset(conn->sock, -1)) + return -errno; + + if ((rc = repair_set(c, conn->sock, TCP_REPAIR_ON))) { + err("Failed to set TCP_REPAIR"); + return rc; + } + + return 1; +} + +/** + * tcp_thaw() - Final resume of flow after migration + * @c: Execution context + * @conn: Pointer to the TCP connection structure + * + * Return: 1 if thawed, 0 if not thawed, negative error code on unrecoverable + * failure + */ +int tcp_thaw(struct ctx *c, struct tcp_tap_conn *conn) +{ + uint32_t peek_offset = conn->seq_to_tap - conn->seq_ack_from_tap; + int rc = 0; + + /* Might already be done, but that's OK, it's idempotent */ + if ((rc = repair_set(c, conn->sock, 
TCP_REPAIR_OFF))) { + err("Failed to clear TCP_REPAIR"); + return rc; + } + + /* Re-enable SO_PEEK_OFF, when available */ + if (tcp_set_peek_offset(conn->sock, peek_offset)) + goto reset; + + tcp_send_flag(c, conn, ACK); + tcp_data_from_sock(c, conn); + + return 1; + +reset: + tcp_rst(c, conn); + return 0; +} + +/** + * tcp_flow_dump_tinfo() - Dump window scale, tcpi_state, tcpi_options + * @s: Socket + * @t: Extended migration data + * + * Return: 0 on success, negative error code on failure + */ +static int tcp_flow_dump_tinfo(int s, struct tcp_tap_transfer_ext *t) +{ + struct tcp_info tinfo; + socklen_t sl; + + sl = sizeof(tinfo); + if (getsockopt(s, SOL_TCP, TCP_INFO, &tinfo, &sl)) { + int rc = -errno; + err_perror("Querying TCP_INFO, socket %i", s); + return rc; + } + + t->snd_ws = tinfo.tcpi_snd_wscale; + t->rcv_ws = tinfo.tcpi_rcv_wscale; + t->tcpi_state = tinfo.tcpi_state; + t->tcpi_options = tinfo.tcpi_options; + + return 0; +} + +/** + * tcp_flow_dump_mss() - Dump MSS clamp (not current MSS) via TCP_MAXSEG + * @s: Socket + * @t: Extended migration data + * + * Return: 0 on success, negative error code on failure + */ +static int tcp_flow_dump_mss(int s, struct tcp_tap_transfer_ext *t) +{ + socklen_t sl = sizeof(t->mss); + + if (getsockopt(s, SOL_TCP, TCP_MAXSEG, &t->mss, &sl)) { + int rc = -errno; + err_perror("Getting MSS, socket %i", s); + return rc; + } + + return 0; +} + +/** + * tcp_flow_dump_wnd() - Dump current tcp_repair_window parameters + * @s: Socket + * @t: Extended migration data + * + * Return: 0 on success, negative error code on failure + */ +static int tcp_flow_dump_wnd(int s, struct tcp_tap_transfer_ext *t) +{ + struct tcp_repair_window wnd; + socklen_t sl = sizeof(wnd); + + if (getsockopt(s, IPPROTO_TCP, TCP_REPAIR_WINDOW, &wnd, &sl)) { + int rc = -errno; + err_perror("Getting window repair data, socket %i", s); + return rc; + } + + t->snd_wl1 = wnd.snd_wl1; + t->snd_wnd = wnd.snd_wnd; + t->max_window = 
wnd.max_window; + t->rcv_wnd = wnd.rcv_wnd; + t->rcv_wup = wnd.rcv_wup; + + /* If we received a FIN, we also need to adjust window parameters. + * + * This must be called after tcp_flow_dump_tinfo(), for t->tcpi_state. + */ + if (t->tcpi_state == TCP_CLOSE_WAIT || t->tcpi_state == TCP_LAST_ACK) { + t->rcv_wup--; + t->rcv_wnd++; + } + + return 0; +} + +/** + * tcp_flow_repair_wnd() - Restore window parameters from extended data + * @s: Socket + * @t: Extended migration data + * + * Return: 0 on success, negative error code on failure + */ +static int tcp_flow_repair_wnd(int s, const struct tcp_tap_transfer_ext *t) +{ + struct tcp_repair_window wnd; + + wnd.snd_wl1 = t->snd_wl1; + wnd.snd_wnd = t->snd_wnd; + wnd.max_window = t->max_window; + wnd.rcv_wnd = t->rcv_wnd; + wnd.rcv_wup = t->rcv_wup; + + if (setsockopt(s, IPPROTO_TCP, TCP_REPAIR_WINDOW, &wnd, sizeof(wnd))) { + int rc = -errno; + err_perror("Setting window data, socket %i", s); + return rc; + } + + return 0; +} + +/** + * tcp_flow_select_queue() - Select queue (receive or send) for next operation + * @s: Socket + * @queue: TCP_RECV_QUEUE or TCP_SEND_QUEUE + * + * Return: 0 on success, negative error code on failure + */ +static int tcp_flow_select_queue(int s, int queue) +{ + if (setsockopt(s, SOL_TCP, TCP_REPAIR_QUEUE, &queue, sizeof(queue))) { + int rc = -errno; + err_perror("Selecting TCP queue %i, socket %i", queue, s); + return rc; + } + + return 0; +} + +/** + * tcp_flow_dump_sndqueue() - Dump send queue, length of sent and not sent data + * @s: Socket + * @t: Extended migration data + * + * Return: 0 on success, negative error code on failure + * + * #syscalls:vu ioctl + */ +static int tcp_flow_dump_sndqueue(int s, struct tcp_tap_transfer_ext *t) +{ + ssize_t rc; + + if (ioctl(s, SIOCOUTQ, &t->sndq) < 0) { + rc = -errno; + err_perror("Getting send queue size, socket %i", s); + return rc; + } + + if (ioctl(s, SIOCOUTQNSD, &t->notsent) < 0) { + rc = -errno; + err_perror("Getting not sent count, 
socket %i", s); + return rc; + } + + /* If we sent a FIN, SIOCOUTQ and SIOCOUTQNSD are one greater than the + * actual pending queue length, because they are based on the sequence + * numbers, not directly on the buffer contents. + * + * This must be called after tcp_flow_dump_tinfo(), for t->tcpi_state. + */ + if (t->tcpi_state == TCP_FIN_WAIT1 || t->tcpi_state == TCP_FIN_WAIT2 || + t->tcpi_state == TCP_LAST_ACK || t->tcpi_state == TCP_CLOSING) { + if (t->sndq) + t->sndq--; + if (t->notsent) + t->notsent--; + } + + if (t->notsent > t->sndq) { + err("Invalid notsent count socket %i, send: %u, not sent: %u", + s, t->sndq, t->notsent); + return -EINVAL; + } + + if (t->sndq > TCP_MIGRATE_SND_QUEUE_MAX) { + err("Send queue too large to migrate socket %i: %u bytes", + s, t->sndq); + return -ENOBUFS; + } + + rc = recv(s, tcp_migrate_snd_queue, + MIN(t->sndq, TCP_MIGRATE_SND_QUEUE_MAX), MSG_PEEK); + if (rc < 0) { + if (errno == EAGAIN) { /* EAGAIN means empty */ + rc = 0; + } else { + rc = -errno; + err_perror("Can't read send queue, socket %i", s); + return rc; + } + } + + if (rc < t->sndq) { + err("Short read migrating send queue"); + return -ENXIO; + } + + t->notsent = MIN(t->notsent, t->sndq); + + return 0; +} + +/** + * tcp_flow_repair_queue() - Restore contents of a given (pre-selected) queue + * @s: Socket + * @len: Length of data to be restored + * @buf: Buffer with content of pending data queue + * + * Return: 0 on success, negative error code on failure + */ +static int tcp_flow_repair_queue(int s, size_t len, uint8_t *buf) +{ + size_t chunk = len; + uint8_t *p = buf; + + while (len > 0) { + ssize_t rc = send(s, p, MIN(len, chunk), 0); + + if (rc < 0) { + if ((errno == ENOBUFS || errno == ENOMEM) && + chunk >= TCP_MIGRATE_RESTORE_CHUNK_MIN) { + chunk /= 2; + continue; + } + + rc = -errno; + err_perror("Can't write queue, socket %i", s); + return rc; + } + + len -= rc; + p += rc; + } + + return 0; +} + +/** + * tcp_flow_dump_seq() - Dump current sequence of 
pre-selected queue + * @s: Socket + * @v: Sequence value, set on return + * + * Return: 0 on success, negative error code on failure + */ +static int tcp_flow_dump_seq(int s, uint32_t *v) +{ + socklen_t sl = sizeof(*v); + + if (getsockopt(s, SOL_TCP, TCP_QUEUE_SEQ, v, &sl)) { + int rc = -errno; + err_perror("Dumping sequence, socket %i", s); + return rc; + } + + return 0; +} + +/** + * tcp_flow_repair_seq() - Restore sequence for pre-selected queue + * @s: Socket + * @v: Sequence value to be set + * + * Return: 0 on success, negative error code on failure + */ +static int tcp_flow_repair_seq(int s, const uint32_t *v) +{ + if (setsockopt(s, SOL_TCP, TCP_QUEUE_SEQ, v, sizeof(*v))) { + int rc = -errno; + err_perror("Setting sequence, socket %i", s); + return rc; + } + + return 0; +} + +/** + * tcp_flow_dump_rcvqueue() - Dump receive queue and its length, seal/block it + * @s: Socket + * @t: Extended migration data + * @filled: Bytes we injected in the queue to block it, set on return + * + * Return: 0 on success, negative error code on failure + * + * #syscalls:vu ioctl + */ +static int tcp_flow_dump_rcvqueue(int s, struct tcp_tap_transfer_ext *t, + size_t *filled) +{ + ssize_t rc, n; + + if (ioctl(s, SIOCINQ, &t->rcvq) < 0) { + rc = -errno; + err_perror("Get receive queue size, socket %i", s); + return rc; + } + + /* Observed race, seemingly hard to reproduce: we dump queue content and + * receive sequence, but more data comes and is acknowledged meanwhile, + * so we lose it. Make sure the queue is full before we dump it, so that + * nothing can be appended. + * + * Note that these send() calls are not atomic, so this is again + * theoretically racy, but apparently not in practice. TODO: Fix this in + * the kernel. 
+ */ + do { + n = send(s, tcp_migrate_rcv_queue, TCP_MIGRATE_RCV_QUEUE_MAX, + 0); + if (n > 0) + *filled += n; + } while (n > 0); + debug("Filled up receive queue with %zi bytes", *filled); + + /* If we received a FIN, SIOCINQ is one greater than the actual number + * of bytes on the queue, because it's based on the sequence number + * rather than directly on the buffer contents. + * + * This must be called after tcp_flow_dump_tinfo(), for t->tcpi_state. + */ + if (t->tcpi_state == TCP_CLOSE_WAIT || t->tcpi_state == TCP_LAST_ACK) + t->rcvq--; + + if (t->rcvq > TCP_MIGRATE_RCV_QUEUE_MAX) { + err("Receive queue too large to migrate socket %i: %u bytes", + s, t->rcvq); + return -ENOBUFS; + } + + rc = recv(s, tcp_migrate_rcv_queue, t->rcvq, MSG_PEEK); + if (rc < 0) { + if (errno == EAGAIN) { /* EAGAIN means empty */ + rc = 0; + } else { + rc = -errno; + err_perror("Can't read receive queue for socket %i", s); + return rc; + } + } + + if (rc < t->rcvq) { + err("Short read migrating receive queue"); + return -ENXIO; + } + + return 0; +} + +/** + * tcp_flow_repair_opt() - Set repair "options" (MSS, scale, SACK, timestamps) + * @s: Socket + * @t: Extended migration data + * + * Return: 0 on success, negative error code on failure + */ +int tcp_flow_repair_opt(int s, const struct tcp_tap_transfer_ext *t) +{ + const struct tcp_repair_opt opts[] = { + { TCPOPT_WINDOW, t->snd_ws + (t->rcv_ws << 16) }, + { TCPOPT_MAXSEG, t->mss }, + { TCPOPT_SACK_PERMITTED, 0 }, + { TCPOPT_TIMESTAMP, 0 }, + }; + socklen_t sl; + + sl = sizeof(opts[0]) * (2 + + !!(t->tcpi_options & TCPI_OPT_SACK) + + !!(t->tcpi_options & TCPI_OPT_TIMESTAMPS)); + + if (setsockopt(s, SOL_TCP, TCP_REPAIR_OPTIONS, opts, sl)) { + int rc = -errno; + err_perror("Setting repair options, socket %i", s); + return rc; + } + + return 0; +} + +#if 0 +/** + * tcp_flow_migrate_shrink_window() - Dump window data, decrease socket window + * @flow: Flow to shrink window for + * @conn: Pointer to the TCP connection structure + * + 
* Return: 0 on success, negative error code on failure + */ +int tcp_flow_migrate_shrink_window(const union flow *flow, + const struct tcp_tap_conn *conn) +{ + struct tcp_tap_transfer_ext *t = &migrate_ext[FLOW_IDX(flow)]; + struct tcp_repair_window wnd; + socklen_t sl = sizeof(wnd); + int s = conn->sock; + + if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &((int){ 0 }), sizeof(int))) + debug("TCP: failed to set SO_RCVBUF to minimum value"); + + /* Dump window data as it is for the target, before touching stuff */ + tcp_flow_dump_wnd(s, t); + + wnd.rcv_wnd = 0; + + if (setsockopt(s, IPPROTO_TCP, TCP_REPAIR_WINDOW, &wnd, sl)) + debug_perror("Setting window repair data, socket %i", s); + + return 0; +} +#endif + +/** + * tcp_flow_migrate_source() - Send data (flow table) for flow, close listening + * @fd: Descriptor for state migration + * @conn: Pointer to the TCP connection structure + * + * Return: 0 on success, negative error code on failure + */ +int tcp_flow_migrate_source(int fd, struct tcp_tap_conn *conn) +{ + struct tcp_tap_transfer t = { + .retrans = conn->retrans, + .ws_from_tap = conn->ws_from_tap, + .ws_to_tap = conn->ws_to_tap, + .events = conn->events, + + .tap_mss = htonl(MSS_GET(conn)), + + .sndbuf = htonl(conn->sndbuf), + + .flags = conn->flags, + .seq_dup_ack_approx = conn->seq_dup_ack_approx, + + .wnd_from_tap = htons(conn->wnd_from_tap), + .wnd_to_tap = htons(conn->wnd_to_tap), + + .seq_to_tap = htonl(conn->seq_to_tap), + .seq_ack_from_tap = htonl(conn->seq_ack_from_tap), + .seq_from_tap = htonl(conn->seq_from_tap), + .seq_ack_to_tap = htonl(conn->seq_ack_to_tap), + .seq_init_from_tap = htonl(conn->seq_init_from_tap), + }; + + memcpy(&t.pif, conn->f.pif, sizeof(t.pif)); + memcpy(&t.side, conn->f.side, sizeof(t.side)); + + if (write_all_buf(fd, &t, sizeof(t))) { + int rc = -errno; + err_perror("Can't write migration data, socket %i", conn->sock); + return rc; + } + + if (conn->listening_sock != -1 && !fcntl(conn->listening_sock, F_GETFD)) + 
close(conn->listening_sock); + + return 0; +} + +/** + * tcp_flow_migrate_source_ext() - Dump queues, close sockets, send final data + * @fd: Descriptor for state migration + * @conn: Pointer to the TCP connection structure + * + * Return: 0 on success, negative error code on failure + */ +int tcp_flow_migrate_source_ext(int fd, const struct tcp_tap_conn *conn) +{ + struct tcp_tap_transfer_ext *t = &migrate_ext[FLOW_IDX(conn)]; + size_t seq_rcv_rewind = 0; + int s = conn->sock; + int rc; + + if ((rc = tcp_flow_dump_tinfo(s, t))) + goto dumpfail; + + if ((rc = tcp_flow_dump_mss(s, t))) + goto dumpfail; + + if ((rc = tcp_flow_dump_wnd(s, t))) + goto dumpfail; + + if ((rc = tcp_flow_select_queue(s, TCP_SEND_QUEUE))) + goto dumpfail; + + if ((rc = tcp_flow_dump_sndqueue(s, t))) + goto dumpfail; + + if ((rc = tcp_flow_dump_seq(s, &t->seq_snd))) + goto dumpfail; + + if ((rc = tcp_flow_select_queue(s, TCP_RECV_QUEUE))) + goto dumpfail; + + if ((rc = tcp_flow_dump_rcvqueue(s, t, &seq_rcv_rewind))) + goto dumpfail; + + if ((rc = tcp_flow_dump_seq(s, &t->seq_rcv))) + goto dumpfail; + + close(s); + + /* Adjustments unrelated to FIN segments: sequence numbers we dumped are + * based on the end of the queues. 
+ */ + t->seq_rcv -= t->rcvq + seq_rcv_rewind; + t->seq_snd -= t->sndq; + + debug("Extended migration data, socket %i sequences send %u receive %u", + s, t->seq_snd, t->seq_rcv); + debug(" pending queues: send %u not sent %u receive %u", + t->sndq, t->notsent, t->rcvq); + debug(" window: snd_wl1 %u snd_wnd %u max %u rcv_wnd %u rcv_wup %u", + t->snd_wl1, t->snd_wnd, t->max_window, t->rcv_wnd, t->rcv_wup); + + /* Endianness fix-ups */ + t->seq_snd = htonl(t->seq_snd); + t->seq_rcv = htonl(t->seq_rcv); + t->sndq = htonl(t->sndq); + t->notsent = htonl(t->notsent); + t->rcvq = htonl(t->rcvq); + + t->snd_wl1 = htonl(t->snd_wl1); + t->snd_wnd = htonl(t->snd_wnd); + t->max_window = htonl(t->max_window); + t->rcv_wnd = htonl(t->rcv_wnd); + t->rcv_wup = htonl(t->rcv_wup); + + if (write_all_buf(fd, t, sizeof(*t))) { + rc = -errno; + err_perror("Failed to write extended data, socket %i", s); + return rc; + } + + if (write_all_buf(fd, tcp_migrate_snd_queue, ntohl(t->sndq))) { + rc = -errno; + err_perror("Failed to write send queue data, socket %i", s); + return rc; + } + + if (write_all_buf(fd, tcp_migrate_rcv_queue, ntohl(t->rcvq))) { + rc = -errno; + err_perror("Failed to write receive queue data, socket %i", s); + return rc; + } + + return 0; + +dumpfail: + /* For any type of failure dumping data, write an invalid extended data + * descriptor that allows us to keep the stream in sync, but tells the + * target to skip the flow. If we fail to transfer data, that's fatal: + * return -EIO in that case (and only in that case). 
+	 */
+	flow_err(conn, "Unable to dump migration data");
+	t->tcpi_state = 0;	/* Not defined: tell the target to skip this flow */
+
+	if (write_all_buf(fd, t, sizeof(*t))) {
+		err_perror("Failed to write extended data, socket %i", s);
+		return -EIO;	/* Can't keep the stream in sync: fatal */
+	}
+
+	return 0;
+}
+
+/**
+ * tcp_flow_repair_socket() - Open and bind socket, request repair mode
+ * @c:		Execution context
+ * @conn:	Pointer to the TCP connection structure
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int tcp_flow_repair_socket(struct ctx *c, struct tcp_tap_conn *conn)
+{
+	sa_family_t af = CONN_V4(conn) ? AF_INET : AF_INET6;
+	const struct flowside *sockside = HOSTFLOW(conn);
+	union sockaddr_inany a;
+	socklen_t sl;
+	int s, rc;
+
+	pif_sockaddr(c, &a, &sl, PIF_HOST, &sockside->oaddr, sockside->oport);
+
+	if ((conn->sock = socket(af, SOCK_STREAM | SOCK_NONBLOCK | SOCK_CLOEXEC,
+				 IPPROTO_TCP)) < 0) {
+		rc = -errno;
+		err_perror("Failed to create socket for migrated flow");
+		return rc;
+	}
+	s = conn->sock;
+
+	if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &(int){ 1 }, sizeof(int)))
+		debug_perror("Setting SO_REUSEADDR on socket %i", s);
+
+	tcp_sock_set_nodelay(s);
+
+	if (bind(s, &a.sa, sizeof(a))) {
+		rc = -errno;
+		err_perror("Failed to bind socket for migrated flow");
+		goto err;
+	}
+
+	if ((rc = repair_set(c, conn->sock, TCP_REPAIR_ON))) {
+		err("Failed to set TCP_REPAIR on new socket");
+		goto err;
+	}
+
+	return 0;
+
+err:
+	close(s);
+	conn->sock = -1;
+	return rc;
+}
+
+/**
+ * tcp_flow_repair_connect() - Connect socket in repair mode, then turn it off
+ * @c:		Execution context
+ * @conn:	Pointer to the TCP connection structure
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+static int tcp_flow_repair_connect(const struct ctx *c,
+				   struct tcp_tap_conn *conn)
+{
+	const struct flowside *tgt = HOSTFLOW(conn);
+	int rc;
+
+	rc = flowside_connect(c, conn->sock, PIF_HOST, tgt);
+	if (rc) {
+		rc = -errno;
+		err_perror("Failed to connect migrated socket %i",
+			   conn->sock);
+		return rc;
+	}
+
+	conn->in_epoll = 0;
+	conn->timer = -1;
+
+	return 0;
+}
+
+/**
+ * tcp_flow_migrate_target() - Receive data (flow table part) for flow, insert
+ * @c:		Execution context
+ * @fd:		Descriptor for state migration
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int tcp_flow_migrate_target(struct ctx *c, int fd)
+{
+	struct tcp_tap_transfer t;
+	struct tcp_tap_conn *conn;
+	union flow *flow;
+	int rc;
+
+	if (!(flow = flow_alloc_migrate())) {
+		err("Flow table full on migration target");
+		return -ENOMEM;
+	}
+
+	if (read_all_buf(fd, &t, sizeof(t))) {
+		rc = -errno;
+		err_perror("Failed to receive migration data");
+		return rc;
+	}
+
+	memcpy(&flow->f.pif, &t.pif, sizeof(flow->f.pif));
+	memcpy(&flow->f.side, &t.side, sizeof(flow->f.side));
+
+	flow->f.type = FLOW_TCP;
+	conn = &flow->tcp;
+
+	conn->retrans = t.retrans;
+	conn->ws_from_tap = t.ws_from_tap;
+	conn->ws_to_tap = t.ws_to_tap;
+	conn->events = t.events;
+
+	conn->sndbuf = ntohl(t.sndbuf);
+
+	conn->flags = t.flags;
+	conn->seq_dup_ack_approx = t.seq_dup_ack_approx;
+
+	MSS_SET(conn, ntohl(t.tap_mss));
+
+	conn->wnd_from_tap = ntohs(t.wnd_from_tap);
+	conn->wnd_to_tap = ntohs(t.wnd_to_tap);
+
+	conn->seq_to_tap = ntohl(t.seq_to_tap);
+	conn->seq_ack_from_tap = ntohl(t.seq_ack_from_tap);
+	conn->seq_from_tap = ntohl(t.seq_from_tap);
+	conn->seq_ack_to_tap = ntohl(t.seq_ack_to_tap);
+	conn->seq_init_from_tap = ntohl(t.seq_init_from_tap);
+
+	if ((rc = tcp_flow_repair_socket(c, conn)))
+		return rc;
+
+	flow_hash_insert(c, TAP_SIDX(conn));
+
+	return 0;
+}
+
+/**
+ * tcp_flow_migrate_target_ext() - Receive extended data for flow, set, connect
+ * @c:		Execution context
+ * @flow:	Existing flow for this connection data
+ * @fd:		Descriptor for state migration
+ *
+ * Return: 0 on success, negative code on failure, but 0 on connection reset
+ */
+int tcp_flow_migrate_target_ext(struct ctx *c, union flow *flow, int fd)
+{
+	struct tcp_tap_conn *conn = &flow->tcp;
+	uint32_t
peek_offset = conn->seq_to_tap - conn->seq_ack_from_tap; + struct tcp_tap_transfer_ext t; + int s = conn->sock, rc; + + if (read_all_buf(fd, &t, sizeof(t))) { + rc = -errno; + err_perror("Failed to read extended data for socket %i", s); + return rc; + } + + if (!t.tcpi_state) { /* Source wants us to skip this flow */ + flow_err(flow, "Bad migration data, dropping"); + + if ((rc = repair_set(c, conn->sock, TCP_REPAIR_OFF))) + return rc; + if ((rc = repair_flush(c))) + return rc; + + tcp_rst(c, conn); + return 0; + } + + /* Endianness fix-ups */ + t.seq_snd = ntohl(t.seq_snd); + t.seq_rcv = ntohl(t.seq_rcv); + t.sndq = ntohl(t.sndq); + t.notsent = ntohl(t.notsent); + t.rcvq = ntohl(t.rcvq); + + t.snd_wl1 = ntohl(t.snd_wl1); + t.snd_wnd = ntohl(t.snd_wnd); + t.max_window = ntohl(t.max_window); + t.rcv_wnd = ntohl(t.rcv_wnd); + t.rcv_wup = ntohl(t.rcv_wup); + + debug("Extended migration data, socket %i sequences send %u receive %u", + s, t.seq_snd, t.seq_rcv); + debug(" pending queues: send %u not sent %u receive %u", + t.sndq, t.notsent, t.rcvq); + debug(" window: snd_wl1 %u snd_wnd %u max %u rcv_wnd %u rcv_wup %u", + t.snd_wl1, t.snd_wnd, t.max_window, t.rcv_wnd, t.rcv_wup); + debug(" SO_PEEK_OFF %s offset=%"PRIu32, + peek_offset_cap ? 
"enabled" : "disabled", peek_offset);
+
+	if (t.sndq > TCP_MIGRATE_SND_QUEUE_MAX || t.notsent > t.sndq ||
+	    t.rcvq > TCP_MIGRATE_RCV_QUEUE_MAX) {
+		err("Bad queues socket %i, send: %u, not sent: %u, receive: %u",
+		    s, t.sndq, t.notsent, t.rcvq);
+		return -EINVAL;
+	}
+
+	if (read_all_buf(fd, tcp_migrate_snd_queue, t.sndq)) {
+		rc = -errno;
+		err_perror("Failed to read send queue data, socket %i", s);
+		return rc;
+	}
+
+	if (read_all_buf(fd, tcp_migrate_rcv_queue, t.rcvq)) {
+		rc = -errno;
+		err_perror("Failed to read receive queue data, socket %i", s);
+		return rc;
+	}
+
+	if ((rc = tcp_flow_select_queue(s, TCP_SEND_QUEUE)))
+		return rc;
+
+	if ((rc = tcp_flow_repair_seq(s, &t.seq_snd)))
+		return rc;
+
+	if ((rc = tcp_flow_select_queue(s, TCP_RECV_QUEUE)))
+		return rc;
+
+	if ((rc = tcp_flow_repair_seq(s, &t.seq_rcv)))
+		return rc;
+
+	if ((rc = tcp_flow_repair_connect(c, conn)))
+		return rc;
+
+	if ((rc = tcp_flow_repair_queue(s, t.rcvq, tcp_migrate_rcv_queue)))
+		return rc;
+
+	if ((rc = tcp_flow_select_queue(s, TCP_SEND_QUEUE)))
+		return rc;
+
+	if ((rc = tcp_flow_repair_queue(s, t.sndq - t.notsent,
+					tcp_migrate_snd_queue)))
+		return rc;
+
+	if ((rc = tcp_flow_repair_opt(s, &t)))
+		return rc;
+
+	/* If we sent a FIN and it was acknowledged (TCP_FIN_WAIT2), don't
+	 * send it out, because we already sent it for sure.
+	 *
+	 * Call shutdown(x, SHUT_WR) in repair mode, so that we move to
+	 * FIN_WAIT_1 (tcp_shutdown()) without sending anything
+	 * (goto in tcp_write_xmit()).
+ */ + if (t.tcpi_state == TCP_FIN_WAIT2) { + int v; + + v = TCP_SEND_QUEUE; + if (setsockopt(s, SOL_TCP, TCP_REPAIR_QUEUE, &v, sizeof(v))) + debug_perror("Selecting repair queue, socket %i", s); + else + shutdown(s, SHUT_WR); + } + + if ((rc = tcp_flow_repair_wnd(s, &t))) + return rc; + + if ((rc = repair_set(c, conn->sock, TCP_REPAIR_OFF))) + return rc; + if ((rc = repair_flush(c))) + return rc; + + if (t.notsent) { + err("socket %i, t.sndq=%u t.notsent=%u", + s, t.sndq, t.notsent); + + if (tcp_flow_repair_queue(s, t.notsent, + tcp_migrate_snd_queue + + (t.sndq - t.notsent))) { + /* This sometimes seems to fail for unclear reasons. + * Don't fail the whole migration, just reset the flow + * and carry on to the next one. + */ + tcp_rst(c, conn); + return 0; + } + } + + /* If we sent a FIN but it wasn't acknowledged yet (TCP_FIN_WAIT1), send + * it out, because we don't know if we already sent it. + * + * Call shutdown(x, SHUT_WR) *not* in repair mode, which moves us to + * TCP_FIN_WAIT1. 
+ */ + if (t.tcpi_state == TCP_FIN_WAIT1) + shutdown(s, SHUT_WR); + + if ((rc = tcp_epoll_ctl(c, conn))) { + debug("Failed to subscribe to epoll for migrated socket %i: %s", + conn->sock, strerror_(-rc)); + } + + return 0; +} diff --git a/tcp_conn.h b/tcp_conn.h index 8c20805e..1b203f27 100644 --- a/tcp_conn.h +++ b/tcp_conn.h @@ -19,6 +19,7 @@ * @tap_mss: MSS advertised by tap/guest, rounded to 2 ^ TCP_MSS_BITS * @sock: Socket descriptor number * @events: Connection events, implying connection states + * @listening_sock: Listening socket this socket was accept()ed from, or -1 * @timer: timerfd descriptor for timeout events * @flags: Connection flags representing internal attributes * @sndbuf: Sending buffer in kernel, rounded to 2 ^ SNDBUF_BITS @@ -68,6 +69,7 @@ struct tcp_tap_conn { #define CONN_STATE_BITS /* Setting these clears other flags */ \ (SOCK_ACCEPTED | TAP_SYN_RCVD | ESTABLISHED) + int listening_sock; int timer :FD_REF_BITS; @@ -96,6 +98,93 @@ struct tcp_tap_conn { uint32_t seq_init_from_tap; }; +/** + * struct tcp_tap_transfer - Migrated TCP data, flow table part, network order + * @pif: Interfaces for each side of the flow + * @side: Addresses and ports for each side of the flow + * @retrans: Number of retransmissions occurred due to ACK_TIMEOUT + * @ws_from_tap: Window scaling factor advertised from tap/guest + * @ws_to_tap: Window scaling factor advertised to tap/guest + * @events: Connection events, implying connection states + * @tap_mss: MSS advertised by tap/guest, rounded to 2 ^ TCP_MSS_BITS + * @sndbuf: Sending buffer in kernel, rounded to 2 ^ SNDBUF_BITS + * @flags: Connection flags representing internal attributes + * @seq_dup_ack_approx: Last duplicate ACK number sent to tap + * @wnd_from_tap: Last window size from tap, unscaled (as received) + * @wnd_to_tap: Sending window advertised to tap, unscaled (as sent) + * @seq_to_tap: Next sequence for packets to tap + * @seq_ack_from_tap: Last ACK number received from tap + * @seq_from_tap: Next 
sequence for packets from tap (not actually sent)
+ * @seq_ack_to_tap:	Last ACK number sent to tap
+ * @seq_init_from_tap:	Initial sequence number from tap
+ */
+struct tcp_tap_transfer {
+	uint8_t		pif[SIDES];
+	struct flowside	side[SIDES];
+
+	uint8_t		retrans;
+	uint8_t		ws_from_tap;
+	uint8_t		ws_to_tap;
+	uint8_t		events;
+
+	uint32_t	tap_mss;
+
+	uint32_t	sndbuf;
+
+	uint8_t		flags;
+	uint8_t		seq_dup_ack_approx;
+
+	uint16_t	wnd_from_tap;
+	uint16_t	wnd_to_tap;
+
+	uint32_t	seq_to_tap;
+	uint32_t	seq_ack_from_tap;
+	uint32_t	seq_from_tap;
+	uint32_t	seq_ack_to_tap;
+	uint32_t	seq_init_from_tap;
+} __attribute__((packed, aligned(__alignof__(uint32_t))));
+
+/**
+ * struct tcp_tap_transfer_ext - Migrated TCP data, outside flow, network order
+ * @seq_snd:		Socket-side send sequence
+ * @seq_rcv:		Socket-side receive sequence
+ * @sndq:		Length of pending send queue (unacknowledged / not sent)
+ * @notsent:		Part of pending send queue that wasn't sent out yet
+ * @rcvq:		Length of pending receive queue
+ * @mss:		Socket-side MSS clamp
+ * @snd_wl1:		Next sequence used in window probe (next sequence - 1)
+ * @snd_wnd:		Socket-side sending window
+ * @max_window:		Window clamp
+ * @rcv_wnd:		Socket-side receive window
+ * @rcv_wup:		rcv_nxt on last window update sent
+ * @snd_ws:		Window scaling factor, send
+ * @rcv_ws:		Window scaling factor, receive
+ * @tcpi_state:		Connection state in TCP_INFO style (enum, tcp_states.h)
+ * @tcpi_options:	TCPI_OPT_* constants (timestamps, selective ACK)
+ */
+struct tcp_tap_transfer_ext {
+	uint32_t	seq_snd;
+	uint32_t	seq_rcv;
+
+	uint32_t	sndq;
+	uint32_t	notsent;
+	uint32_t	rcvq;
+
+	uint32_t	mss;
+
+	/* We can't just use struct tcp_repair_window: we need network order */
+	uint32_t	snd_wl1;
+	uint32_t	snd_wnd;
+	uint32_t	max_window;
+	uint32_t	rcv_wnd;
+	uint32_t	rcv_wup;
+
+	uint8_t		snd_ws;
+	uint8_t		rcv_ws;
+	uint8_t		tcpi_state;
+	uint8_t		tcpi_options;
+} __attribute__((packed, aligned(__alignof__(uint32_t))));
+
 /**
  * struct
tcp_splice_conn - Descriptor for a spliced TCP connection * @f: Generic flow information @@ -140,6 +229,16 @@ extern int init_sock_pool4 [TCP_SOCK_POOL_SIZE]; extern int init_sock_pool6 [TCP_SOCK_POOL_SIZE]; bool tcp_flow_defer(const struct tcp_tap_conn *conn); + +int tcp_freeze(struct ctx *c, const struct tcp_tap_conn *conn); +int tcp_thaw(struct ctx *c, struct tcp_tap_conn *conn); + +int tcp_flow_migrate_source(int fd, struct tcp_tap_conn *conn); +int tcp_flow_migrate_source_ext(int fd, const struct tcp_tap_conn *conn); + +int tcp_flow_migrate_target(struct ctx *c, int fd); +int tcp_flow_migrate_target_ext(struct ctx *c, union flow *flow, int fd); + bool tcp_splice_flow_defer(struct tcp_splice_conn *conn); void tcp_splice_timer(const struct ctx *c, struct tcp_splice_conn *conn); int tcp_conn_pool_sock(int pool[]); -- 2.48.1