* Re: Connecting back to the host through a dummy veth interface
[not found] <176606116131.2775.3279769610610037541@maja>
@ 2025-12-20 14:12 ` Stefano Brivio
2025-12-20 14:28 ` Felix Rubio
0 siblings, 1 reply; 10+ messages in thread
From: Stefano Brivio @ 2025-12-20 14:12 UTC (permalink / raw)
To: Felix Rubio; +Cc: passt-user
Hi Felix,
On Thu, 18 Dec 2025 13:32:36 +0100
Felix Rubio <felix@kngnt.org> wrote:
> Hi everybody,
>
> I am trying to run a number of rootless podman pods and containers by different
> users, while still being able to talk to each other. To this end I am creating
> a dummy veth interface and publishing all the exposed ports there (this works:
> I can communicate from other host services with those containers), and I am
> also trying to set that dummy veth interface as the default gateway for the
> pods/containers (with the expectation that then they will be able to reach
> each other). However, this is not working... and I am pretty lost.
>
> For example, I am running the following command, trying to connect a ldap
> client container to a ldap server container, unsuccessfully.
>
podman run --rm --dns=10.255.255.1 --network=pasta:--outbound-if4=cluster_dns0,--gateway=10.255.255.1 --add-host=ldap.host.internal:host-gateway sh -c "ip add && ip route && ldapwhoami -H ldaps://ldap.host.internal:1636"
>
> Is this something impossible to do, or am I doing something wrong?
Sorry, I'm a bit swamped at the moment, and I plan to get back to you
in a bit, but meanwhile, I think the dummy veth trick is unnecessarily
complicated.
I think you could simply connect "to the host" and redirect from there
to the containers by means of mapped ports. See:
https://blog.podman.io/2024/10/podman-5-3-changes-for-improved-networking-experience-with-pasta/
for a couple of details. But I'll try to come up with a full example
next.
--
Stefano
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Connecting back to the host through a dummy veth interface
2025-12-20 14:12 ` Connecting back to the host through a dummy veth interface Stefano Brivio
@ 2025-12-20 14:28 ` Felix Rubio
2025-12-21 10:47 ` Stefano Brivio
0 siblings, 1 reply; 10+ messages in thread
From: Felix Rubio @ 2025-12-20 14:28 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-user
Hey Stefano,
Thank you for your answer! I know I can run rootful containers, and that then
I can access the host's network ns. However, this exposes a number of
potential issues:
* If an attacker manages to break out of the container, they get root.
* It enables connecting back to the host loopback, so from that
container any service listening on the loopback can be reached as well.
The reason for looking for a way to bind those services to 10.255.255.1 (so
that only exposed services are on that interface) while running fully
rootless is that, if it works, it provides a more secure system... in general.
About the mapped ports, I am a bit lost: from what I have tested, running
rootless disables the possibility of connecting back to the host, right?
Regards, and thank you!
Felix
On Saturday, 20 December 2025 15:12:24 Central European Standard Time Stefano
Brivio wrote:
> Hi Felix,
>
> On Thu, 18 Dec 2025 13:32:36 +0100
>
> Felix Rubio <felix@kngnt.org> wrote:
> > Hi everybody,
> >
> > I am trying to run a number of rootless podman pods and containers by different
> > users, while still being able to talk to each other. To this end I am creating
> > a dummy veth interface and publishing all the exposed ports there (this works:
> > I can communicate from other host services with those containers), and I am
> > also trying to set that dummy veth interface as the default gateway for the
> > pods/containers (with the expectation that then they will be able to reach
> > each other). However, this is not working... and I am pretty lost.
> >
> > For example, I am running the following command, trying to connect a ldap
> > client container to a ldap server container, unsuccessfully.
> >
> > podman run --rm --dns=10.255.255.1 --network=pasta:--outbound-if4=cluster_dns0,--gateway=10.255.255.1 --add-host=ldap.host.internal:host-gateway sh -c "ip add && ip route && ldapwhoami -H ldaps://ldap.host.internal:1636"
> >
> > Is this something impossible to do, or am I doing something wrong?
>
> Sorry, I'm a bit swamped at the moment, and I plan to get back to you
> in a bit, but meanwhile, I think the dummy veth trick is unnecessarily
> complicated.
>
> I think you could simply connect "to the host" and redirect from there
> to the containers by means of mapped ports. See:
>
> https://blog.podman.io/2024/10/podman-5-3-changes-for-improved-networking-experience-with-pasta/
>
> for a couple of details. But I'll try to come up with a full example
> next.
--
Felix Rubio
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Connecting back to the host through a dummy veth interface
2025-12-20 14:28 ` Felix Rubio
@ 2025-12-21 10:47 ` Stefano Brivio
2025-12-21 15:32 ` Felix Rubio
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Stefano Brivio @ 2025-12-21 10:47 UTC (permalink / raw)
To: Felix Rubio; +Cc: passt-user
On Sat, 20 Dec 2025 15:28:43 +0100
Felix Rubio <felix@kngnt.org> wrote:
> Hey Stefano,
>
> Thank you for your answer! I know I can run rootful containers, and that then
> I can access the host's network ns. However, this exposes a number of
> potential issues:
> * In case the an attacker manages to break out of the container, gets root
> * That enables connecting back to the host loopback, but then from that
> container any service listening to the loopback can be reached as well.
Sure. That's the whole point behind pasta(1) and rootless containers
with Podman / rootlesskit. I certainly won't be the one suggesting that
you'd run anything as root. :)
> The reason for looking for a way of binding those services to 10.255.255.1 (so
> that only exposed services will be in that interface) and running fully
> rootless, if works, provides a more secure system... in general.
Indeed.
> About the mapped ports, I am a bit lost: for what I have tested, running
> rootless disables the possibility to connect back to the host, right?
Hah, I see now. No, that's not the case. You can run rootless
containers and connect to the host from them, in two ways:
1. disabled by default in Podman's pasta integration, not what you want:
via the loopback interface, see -U / -T in 'man pasta' and
--host-lo-to-ns-lo for the other way around.
In that case, packets appear to be local (source address is
loopback) in the other namespace ("host" or initial namespace for
packets from a container, and container for packets from host).
This gives you better throughput but making connections appear as if
they were local is risky (cf. CVE-2021-20199), so it's disabled by
default, and not what I'm suggesting (at least in general).
2. what you get as default in Podman: using pasta's --map-guest-addr.
The current description of this option in pasta(1) isn't great, hence
https://bugs.passt.top/show_bug.cgi?id=132, but the idea is that you
will reach the host from the container with a non-loopback address,
as if the connection was coming from another host (which should
represent the expected container usage).
So here's an example:
$ podman run --rm -ti -p 8089:80 traefik/whoami
2025/12/21 10:42:16 Starting up on port 80
[in another terminal]
$ podman run --rm -ti fedora curl host.containers.internal:8089
Hostname: ab94f49b5042
IP: 127.0.0.1
IP: ::1
IP: **.***.*.***
IP: ****:***:***:***::*
IP: ****::****:****:****:****
RemoteAddr: 169.254.1.2:46592
GET / HTTP/1.1
Host: host.containers.internal:8089
User-Agent: curl/8.15.0
Accept: */*
...doesn't that work for you? Note that you'll need somewhat recent
versions of pasta (>= 2024_08_21.1d6142f) and Podman (>= 5.3).
--
Stefano
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Connecting back to the host through a dummy veth interface
2025-12-21 10:47 ` Stefano Brivio
@ 2025-12-21 15:32 ` Felix Rubio
2025-12-22 22:51 ` Stefano Brivio
2025-12-22 12:48 ` Felix Rubio
[not found] ` <3627291.QJadu78ljV@altair>
2 siblings, 1 reply; 10+ messages in thread
From: Felix Rubio @ 2025-12-21 15:32 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-user
Something more: I see that pasta is binding to 0.0.0.0. This means that, while
it allows other pods to connect to the published port of a container through
169.254.1.2, it also makes that port reachable from the network.
Is there any way to prevent that?
Regards!
Felix
On Sunday, 21 December 2025 11:47:22 Central European Standard Time Stefano
Brivio wrote:
> On Sat, 20 Dec 2025 15:28:43 +0100
>
> Felix Rubio <felix@kngnt.org> wrote:
> > Hey Stefano,
> >
> > Thank you for your answer! I know I can run rootful containers, and that then
> > I can access the host's network ns. However, this exposes a number of
> > potential issues:
> > * In case the an attacker manages to break out of the container, gets root
> > * That enables connecting back to the host loopback, but then from that
> > container any service listening to the loopback can be reached as well.
>
> Sure. That's the whole point behind pasta(1) and rootless containers
> with Podman / rootlesskit. I certainly won't be the one suggesting that
> you'd run anything as root. :)
>
> > The reason for looking for a way of binding those services to 10.255.255.1 (so
> > that only exposed services will be in that interface) and running fully
> > rootless, if works, provides a more secure system... in general.
>
> Indeed.
>
> > About the mapped ports, I am a bit lost: for what I have tested, running
> > rootless disables the possibility to connect back to the host, right?
>
> Hah, I see now. No, that's not the case. You can run rootless
> containers and connect to the host from them, in two ways:
>
> 1. disabled by default in Podman's pasta integration, not what you want:
> via the loopback interface, see -U / -T in 'man pasta' and
> --host-lo-to-ns-lo for the other way around.
>
> In that case, packets appear to be local (source address is
> loopback) in the other namespace ("host" or initial namespace for
> packets from a container, and container for packets from host).
>
> This gives you better throughput but making connections appear as if
> they were local is risky (cf. CVE-2021-20199), so it's disabled by
> default, and not what I'm suggesting (at least in general)
>
> 2. what you get as default in Podman: using pasta's --map-guest-addr.
>
> The current description of this option in pasta(1) isn't great, hence
> https://bugs.passt.top/show_bug.cgi?id=132, but the idea is that you
> will reach the host from the container with a non-loopback address,
> as if the connection was coming from another host (which should
> represent the expected container usage).
>
> So here's an example:
>
> $ podman run --rm -ti -p 8089:80 traefik/whoami
> 2025/12/21 10:42:16 Starting up on port 80
>
> [in another terminal]
> $ podman run --rm -ti fedora curl host.containers.internal:8089
> Hostname: ab94f49b5042
> IP: 127.0.0.1
> IP: ::1
> IP: **.***.*.***
> IP: ****:***:***:***::*
> IP: ****::****:****:****:****
> RemoteAddr: 169.254.1.2:46592
> GET / HTTP/1.1
> Host: host.containers.internal:8089
> User-Agent: curl/8.15.0
> Accept: */*
>
> ...doesn't that work for you? Note that you'll need somewhat recent
> versions of pasta (>= 2024_08_21.1d6142f) and Podman (>= 5.3).
--
Felix Rubio
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Connecting back to the host through a dummy veth interface
2025-12-21 10:47 ` Stefano Brivio
2025-12-21 15:32 ` Felix Rubio
@ 2025-12-22 12:48 ` Felix Rubio
2025-12-22 22:51 ` Stefano Brivio
[not found] ` <3627291.QJadu78ljV@altair>
2 siblings, 1 reply; 10+ messages in thread
From: Felix Rubio @ 2025-12-22 12:48 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-user
Ok, things are starting to get clear. The problem was, I think, between the
desk and the keyboard.
* I have everything on a VM that I configure with Ansible. I have just taken
everything down and started from scratch
* I still have my containers without any ad-hoc network. They are binding only
to network interface 10.255.255.1, which is a dummy ethernet.
* My error was that I am running an LDAP server in one of these containers,
and I was checking whether it was working with ldapwhoami. The client was
replying that it could not reach the server, which triggered all subsequent
investigation, but the real cause was that the certificate offered by the server
was not trusted by the client, and the latter broke the connection (without
giving a more descriptive message - facepalm).
Once the problem with the certificates was fixed, everything seems to work. This
means that:
* I have a dns server in 10.255.255.1 that resolves ldap.host.internal to
10.255.255.1
* ldap server rootless container is listening to 10.255.255.1:1636
* ldap client is in another rootless container, and can reach directly
ldap.host.internal:1636.
... Is this last point expected? The ldap server is started through podman as
a regular user, without any network options... nothing fancy.
The reason for me asking is that all I have read points in the direction that
from a rootless container I should not be able to loopback to the host... but
maybe this dummy interface is not identified as "the host" and therefore I can
connect to services bound to it? On the LDAP side, the logs show that these
connections are coming from the same 10.255.255.1. That would be actually
convenient, because then I can put firewall rules in place that prevent
connecting from that dummy ethernet back to the host at all.
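For context, the dummy interface described in this thread can be set up roughly like this (a sketch with assumed commands; the interface name cluster_dns0 comes from the podman invocation earlier in the thread):

```shell
# Create a dummy interface holding 10.255.255.1; host services (DNS,
# LDAP) and published container ports then bind to this address.
# Commands are an assumed sketch, not quoted from this thread.
ip link add cluster_dns0 type dummy
ip addr add 10.255.255.1/32 dev cluster_dns0
ip link set cluster_dns0 up
```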
Thank you very much, and sorry for the initial confusing messages.
Felix
On Sunday, 21 December 2025 11:47:22 Central European Standard Time Stefano
Brivio wrote:
> On Sat, 20 Dec 2025 15:28:43 +0100
>
> Felix Rubio <felix@kngnt.org> wrote:
> > Hey Stefano,
> >
> > Thank you for your answer! I know I can run rootful containers, and that then
> > I can access the host's network ns. However, this exposes a number of
> > potential issues:
> > * In case the an attacker manages to break out of the container, gets root
> > * That enables connecting back to the host loopback, but then from that
> > container any service listening to the loopback can be reached as well.
>
> Sure. That's the whole point behind pasta(1) and rootless containers
> with Podman / rootlesskit. I certainly won't be the one suggesting that
> you'd run anything as root. :)
>
> > The reason for looking for a way of binding those services to 10.255.255.1 (so
> > that only exposed services will be in that interface) and running fully
> > rootless, if works, provides a more secure system... in general.
>
> Indeed.
>
> > About the mapped ports, I am a bit lost: for what I have tested, running
> > rootless disables the possibility to connect back to the host, right?
>
> Hah, I see now. No, that's not the case. You can run rootless
> containers and connect to the host from them, in two ways:
>
> 1. disabled by default in Podman's pasta integration, not what you want:
> via the loopback interface, see -U / -T in 'man pasta' and
> --host-lo-to-ns-lo for the other way around.
>
> In that case, packets appear to be local (source address is
> loopback) in the other namespace ("host" or initial namespace for
> packets from a container, and container for packets from host).
>
> This gives you better throughput but making connections appear as if
> they were local is risky (cf. CVE-2021-20199), so it's disabled by
> default, and not what I'm suggesting (at least in general)
>
> 2. what you get as default in Podman: using pasta's --map-guest-addr.
>
> The current description of this option in pasta(1) isn't great, hence
> https://bugs.passt.top/show_bug.cgi?id=132, but the idea is that you
> will reach the host from the container with a non-loopback address,
> as if the connection was coming from another host (which should
> represent the expected container usage).
>
> So here's an example:
>
> $ podman run --rm -ti -p 8089:80 traefik/whoami
> 2025/12/21 10:42:16 Starting up on port 80
>
> [in another terminal]
> $ podman run --rm -ti fedora curl host.containers.internal:8089
> Hostname: ab94f49b5042
> IP: 127.0.0.1
> IP: ::1
> IP: **.***.*.***
> IP: ****:***:***:***::*
> IP: ****::****:****:****:****
> RemoteAddr: 169.254.1.2:46592
> GET / HTTP/1.1
> Host: host.containers.internal:8089
> User-Agent: curl/8.15.0
> Accept: */*
>
> ...doesn't that work for you? Note that you'll need somewhat recent
> versions of pasta (>= 2024_08_21.1d6142f) and Podman (>= 5.3).
--
Felix Rubio
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Connecting back to the host through a dummy veth interface
[not found] ` <3627291.QJadu78ljV@altair>
@ 2025-12-22 22:51 ` Stefano Brivio
0 siblings, 0 replies; 10+ messages in thread
From: Stefano Brivio @ 2025-12-22 22:51 UTC (permalink / raw)
To: Felix Rubio; +Cc: passt-user
Let me answer your latest three emails separately, because actually
there are valid open questions in all of them (and yes, we need
https://bugs.passt.top/show_bug.cgi?id=144 and some "howto" section
beyond man pages and Podman documentation, but it won't be for this year
either...)
On Sun, 21 Dec 2025 16:17:37 +0100
Felix Rubio <felix@kngnt.org> wrote:
> Ciao, Stefano
>
> I have just discovered how little I know about rootless networking in containers: I thought
> that when using host.containers.internal I was really connecting back to the loopback
> interface (127.0.0.1).
>
> Indeed, this works
> - Terminal 1, user 1: podman run --rm -ti -p 8089:80 traefik/whoami
> - Terminal 2, user 2: podman run --rm -ti alpine /bin/sh -c "apk add curl; curl
> host.containers.internal:8089"
>
> As I have a smtp server listening on that interface, port 25, I have run this experiment,
> which does not work:
> podman run --rm -ti alpine /bin/sh -c "apk add busybox-extras; telnet
> host.containers.internal 25"
> telnet: can't connect to remote host (169.254.1.2): Connection refused
Because it's probably binding to localhost (something in 127.0.0.1/8 or
::1 or both), but the destination of this connection attempt is not a
loopback address.
> I only seem to be able to connect, using rootless pasta, to ports that are published by
> other containers. In case any container gets compromised, connections from that
> container could only be established to services run by other containers, then?
...or other hosts. But there's a way to override that. From pasta(1),
emphasis mine:
--map-host-loopback addr
Translate addr to refer to the host. Packets from the guest to
addr will be redirected to the host. ** On the host such packets
will appear to have both source and destination of 127.0.0.1 or
::1. **
...and yes, I guess we should rephrase this as well, but with this
option you would be able to connect to services that bind to loopback
addresses (too). Podman doesn't enable this by default (it would be a
bad default for security) so you would need to issue something like
'podman run --net=pasta:--map-host-loopback,169.254.1.2 ...'.
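Building on that command, a hedged end-to-end sketch (the SMTP test mirrors the earlier experiment in this thread; the image and package names are assumptions):

```shell
# With --map-host-loopback, connections from the container to
# 169.254.1.2 are redirected to the host and appear there with
# source/destination 127.0.0.1, so a service bound to loopback only
# (like the SMTP server on port 25 mentioned earlier) becomes reachable.
podman run --rm -ti \
  --net=pasta:--map-host-loopback,169.254.1.2 \
  alpine /bin/sh -c \
  "apk add busybox-extras && telnet 169.254.1.2 25"
```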
> Similarly...
> Could I create another "network of pods" by using map-guest-addr with another ip (say
> 169.254.1.3) and the pods in 169.254.1.2 and 169.254.1.3 would not be able to talk to
> each other?
It all depends on what ports are exposed and what interface and
address they are bound to, on the host. But yes, you could do something
like that.
Eventually, *after* https://bugs.passt.top/show_bug.cgi?id=140 is done,
we might consider implementing proper inter-container communication with
a single instance of pasta. That would make things easier... but we're
not quite there yet.
> So the solution for my use case is then to bind e.g., port 1636 to both 10.255.255.1 and to
> 169.254.1.2, so that external connections to it can get through, but also connections from
> other rootless pods?
You could do that, yes.
--
Stefano
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Connecting back to the host through a dummy veth interface
2025-12-21 15:32 ` Felix Rubio
@ 2025-12-22 22:51 ` Stefano Brivio
0 siblings, 0 replies; 10+ messages in thread
From: Stefano Brivio @ 2025-12-22 22:51 UTC (permalink / raw)
To: Felix Rubio; +Cc: passt-user
On Sun, 21 Dec 2025 16:32:23 +0100
Felix Rubio <felix@kngnt.org> wrote:
> Something more: I see that pasta is binding to 0.0.0.0. This means that, while
> it allows other pods to connect to the published port of a container through
> 169.254.1.2, it also makes that port reachable from the network.
>
> Is there any way to prevent that?
Yes, you can bind to specific addresses or interfaces; relevant
examples from pasta(1):
-t 192.0.2.1/22
Forward local port 22, bound to 192.0.2.1, to port 22 on the guest
-t 192.0.2.1%eth0/22
Forward local port 22, bound to 192.0.2.1 and interface eth0, to port 22
-t %eth0/22
Forward local port 22, bound to any address on interface eth0, to port 22
Podman supports part of that as well, see podman-run(1) (--publish) or:
https://github.com/containers/podman/blob/2fbecb48e166ed79662ea5e45f2d56081ad08d3b/test/system/505-networking-pasta.bats#L186
for a summary.
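For instance, a sketch reusing the whoami image from earlier in the thread, with the dummy interface address from this discussion as the bind address:

```shell
# Publish the container's port 80 only on 10.255.255.1:8089, instead of
# the 0.0.0.0 default, so the port is not reachable from the outside
# network.
podman run --rm -ti -p 10.255.255.1:8089:80 traefik/whoami
```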
--
Stefano
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Connecting back to the host through a dummy veth interface
2025-12-22 12:48 ` Felix Rubio
@ 2025-12-22 22:51 ` Stefano Brivio
2025-12-23 7:34 ` Felix Rubio
0 siblings, 1 reply; 10+ messages in thread
From: Stefano Brivio @ 2025-12-22 22:51 UTC (permalink / raw)
To: Felix Rubio; +Cc: passt-user
On Mon, 22 Dec 2025 13:48:03 +0100
Felix Rubio <felix@kngnt.org> wrote:
> Ok, things are starting to get clear. The problem was, I think, between the
> desk and the keyboard.
The chair! I think it was the chair. :)
> * I have everything on a VM that I configure with Ansible. I have just taken
> everything down and started from scratch
>
> * I still have my containers without any ad-hoc network. They are binding only
> to network interface 10.255.255.1, which is a dummy ethernet.
>
> * My error was that I am running an LDAP server in one of these containers,
> and I was checking if it was working with a ldapwhoami. The client was
> replying that could not reach the server, which triggered all subsequent
> investigation, but the real cause was that the certificate offered by the server
> was not trusted by the client, and the latter broke the connection (without
> giving a more proper message - facepalm).
>
> Once fixed the problem with the certificates, everything seems to work. This
> means that:
> * I have a dns server in 10.255.255.1 that resolves ldap.host.internal to
> 10.255.255.1
> * ldap server rootless container is listening to 10.255.255.1:1636
> * ldap client is in another rootless container, and can reach directly
> ldap.host.internal:1636.
>
> ... Is this last point expected? the ldap server is started through podman as
> a regular user, without any network options... nothing fancy.
Yes, it's expected, because 10.255.255.1 is not a loopback address.
> The reason for me asking is that all I have read points in the direction that
> from a rootless container I should not be able to loopback to the host... but
> maybe this dummy interface is not identified as "the host" and therefore I can
It's rather not identified as "loopback".
> connect to services bound to it? On the LDAP side, the logs show that these
> connections are coming from the same 10.255.255.1. That would be actually
> convenient, because then I can put firewall rules in place that prevent
> connecting from that dummy ethernet back to the host at all.
You don't need a whole new interface for that, by the way. You could
just add that address to an existing interface, assuming that the LDAP
server lets you bind to a specific address and not just a specific
interface.
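That alternative could look like this (a sketch; eth0 and the /32 prefix are assumptions):

```shell
# Add the service address to an existing interface instead of creating
# a dedicated dummy interface; services still bind to 10.255.255.1.
ip addr add 10.255.255.1/32 dev eth0
```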
--
Stefano
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Connecting back to the host through a dummy veth interface
2025-12-22 22:51 ` Stefano Brivio
@ 2025-12-23 7:34 ` Felix Rubio
0 siblings, 0 replies; 10+ messages in thread
From: Felix Rubio @ 2025-12-23 7:34 UTC (permalink / raw)
To: Stefano Brivio; +Cc: passt-user
Damn... I knew I had to get rid of that chair... xD
My setup is a bit complex: I am running a k3s cluster with some services
outside it, but running on the same host. The purpose is to have some central
services common to all the applications I am running (e.g., authentication)
running on these rootless containers. This way I can take down the whole
cluster without losing services that are required by other parties... at the
expense of having to protect them properly.
The reason for using a dummy interface is because then I can implement simple,
wide rules, stating that this interface can only receive connections from the
k3s cluster or to specific ports, and that connections from that interface can
only be established to the cluster or to specific ports. I am doing this
because, should a malicious actor manage to run code on those services or
break out of the container, they would otherwise be able to establish
connections anywhere.
I know I can use an existing interface for all this, but then I would have to
be way more careful about how these firewall rules are implemented... whereas
using this dummy interface I can deny by default and only allow as required.
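As an illustration only (the table name, rules, and port numbers are assumptions based on this thread, not something quoted from it), the deny-by-default policy could be sketched with nftables like this:

```shell
# Hedged sketch: default-deny for traffic addressed to the dummy
# interface address, allowing only the ports exposed there (DNS on 53,
# LDAPS on 1636, as examples from this thread).
nft -f - <<'EOF'
table inet dummy_guard {
  chain input {
    type filter hook input priority 0; policy accept;
    ip daddr 10.255.255.1 tcp dport { 53, 1636 } accept
    ip daddr 10.255.255.1 udp dport 53 accept
    ip daddr 10.255.255.1 drop
  }
}
EOF
```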
Stefano, thank you very much for your answers. I really appreciate the time
you took writing them.
Regards!
Felix
On Monday, 22 December 2025 23:51:17 Central European Standard Time Stefano
Brivio wrote:
> On Mon, 22 Dec 2025 13:48:03 +0100
>
> Felix Rubio <felix@kngnt.org> wrote:
> > Ok, things are starting to get clear. The problem was, I think, between the
> > desk and the keyboard.
>
> The chair! I think it was the chair. :)
>
> > * I have everything on a VM that I configure with Ansible. I have just taken
> > everything down and started from scratch
> >
> > * I still have my containers without any ad-hoc network. They are binding only
> > to network interface 10.255.255.1, which is a dummy ethernet.
> >
> > * My error was that I am running an LDAP server in one of these containers,
> > and I was checking if it was working with a ldapwhoami. The client was
> > replying that could not reach the server, which triggered all subsequent
> > investigation, but the real cause was that the certificate offered by the server
> > was not trusted by the client, and the latter broke the connection (without
> > giving a more proper message - facepalm).
> >
> > Once fixed the problem with the certificates, everything seems to work. This
> > means that:
> > * I have a dns server in 10.255.255.1 that resolves ldap.host.internal to
> > 10.255.255.1
> > * ldap server rootless container is listening to 10.255.255.1:1636
> > * ldap client is in another rootless container, and can reach directly
> > ldap.host.internal:1636.
> >
> > ... Is this last point expected? the ldap server is started through podman as
> > a regular user, without any network options... nothing fancy.
>
> Yes, it's expected, because 10.255.255.1 is not a loopback address.
>
> > The reason for me asking is that all I have read points in the direction that
> > from a rootless container I should not be able to loopback to the host... but
> > maybe this dummy interface is not identified as "the host" and therefore I can
>
> It's rather not identified as "loopback".
>
> > connect to services bound to it? On the LDAP side, the logs show that these
> > connections are coming from the same 10.255.255.1. That would be actually
> > convenient, because then I can put firewall rules in place that prevent
> > connecting from that dummy ethernet back to the host at all.
>
> You don't need a whole new interface for that, by the way. You could
> just add that address to an existing interface, assuming that the LDAP
> server lets you bind to a specific address and not just a specific
> interface.
--
Felix Rubio
^ permalink raw reply [flat|nested] 10+ messages in thread
* Connecting back to the host through a dummy veth interface
@ 2025-12-18 12:32 Felix Rubio
0 siblings, 0 replies; 10+ messages in thread
From: Felix Rubio @ 2025-12-18 12:32 UTC (permalink / raw)
To: passt-user
Hi everybody,
I am trying to run a number of rootless podman pods and containers by different
users, while still being able to talk to each other. To this end I am creating
a dummy veth interface and publishing all the exposed ports there (this works:
I can communicate from other host services with those containers), and I am
also trying to set that dummy veth interface as the default gateway for the
pods/containers (with the expectation that then they will be able to reach
each other). However, this is not working... and I am pretty lost.
For example, I am running the following command, trying to connect a ldap
client container to a ldap server container, unsuccessfully.
podman run --rm --dns=10.255.255.1 --network=pasta:--outbound-if4=cluster_dns0,--gateway=10.255.255.1 --add-host=ldap.host.internal:host-gateway sh -c "ip add && ip route && ldapwhoami -H ldaps://ldap.host.internal:1636"
Is this something impossible to do, or am I doing something wrong?
Thank you very much for any help you can provide!
--
Felix
Felix Rubio
^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2025-12-23 7:34 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <176606116131.2775.3279769610610037541@maja>
2025-12-20 14:12 ` Connecting back to the host through a dummy veth interface Stefano Brivio
2025-12-20 14:28 ` Felix Rubio
2025-12-21 10:47 ` Stefano Brivio
2025-12-21 15:32 ` Felix Rubio
2025-12-22 22:51 ` Stefano Brivio
2025-12-22 12:48 ` Felix Rubio
2025-12-22 22:51 ` Stefano Brivio
2025-12-23 7:34 ` Felix Rubio
[not found] ` <3627291.QJadu78ljV@altair>
2025-12-22 22:51 ` Stefano Brivio
2025-12-18 12:32 Felix Rubio
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for IMAP folder(s).