
Isolating containers within a single box

This is the kind of post I write so that I don't forget how I did something tricky, so don't expect the best prose out there.

So the hardware I'm hosting the box on is beefy enough to be partitioned into multiple containers through LXC. This might not be the best security, but it's still better than nothing, and everybody who has a login there is a trusted person anyway.

I also have a /64 prefix from TunnelBroker for IPv6, but I don't usually have IPv6 at home or at work. So what am I using it for? First of all, my idea was to use it for inter-container traffic: IPv6 addresses are globally unique, which makes it easier to write configuration that refers to them.
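As a side note, pinning a container to a fixed address out of the prefix happens in its LXC configuration; it would look something along these lines (old-style lxc.network.* keys, with a made-up container name and address, so treat it as a sketch rather than my actual setup):

# /var/lib/lxc/tinderbox/config — network section (address is illustrative)
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv6 = 2001:470:d:525:2::10/80
lxc.network.ipv6.gateway = 2001:470:d:525::1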

But there is another interesting use for this: with an old trick I can set up my OpenSSH config so that I can jump straight into a container:

# Gateway entry: a restricted scponly login on the box itself.
Host excelsior-gw.flameeyes.eu
HostName excelsior.flameeyes.eu
User scponly
ControlMaster no
ForwardAgent no
ForwardX11 no

# Containers: proxy through the gateway with ssh -W (no netcat needed).
Host *.excelsior.flameeyes.eu
ProxyCommand ssh excelsior-gw.flameeyes.eu -W %h:%p
Compression no

This defines a gateway that uses scponly, then sets up the system to jump to the other hosts by using their public hostnames, whose AAAA records carry the containers' actual IPv6 addresses.
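With that in place, reaching a container is a single command (the container name here is made up): ssh resolves the AAAA record and tunnels through the gateway transparently.

$ ssh amd64-tinderbox.excelsior.flameeyes.eu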

But… I was hitting some more issues: while I let connections to port 22 (SSH) go through to and from all the containers over IPv6, I wanted to stop the tinderbox instances from connecting to the outside. That's important to catch software that tries to connect to external hosts during the ebuild phases (I found some more live ebuilds in ~arch; don't get me started!). I had some trouble setting it up, but in the end this is what works:

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target       prot   opt in    out   source                 destination
 6573 6035K ACCEPT       all        *     *     ::/0                   ::/0                      state RELATED,ESTABLISHED
   27  5128 ACCEPT       tcp        !br0  *     ::/0                   ::/0                      tcp dpt:22
   34  2408 ACCEPT       icmpv6     *     *     ::/0                   ::/0
  720 65542 TINDERBOXES  all        *     *     2001:470:d:525:2::/80  ::/0

Chain TINDERBOXES (1 references)
 pkts bytes target       prot   opt in    out   source                 destination
    0     0 ACCEPT       tcp        *     *     ::/0                   2001:470:d:525::1/128     tcp dpt:3128
   95  9470 ACCEPT       udp        *     *     ::/0                   2001:4860:4860::8888/128  udp dpt:53
   16  1600 ACCEPT       udp        *     *     ::/0                   2001:4860:4860::8844/128  udp dpt:53
    0     0 ACCEPT       tcp        *     *     ::/0                   2001:4860:4860::8888/128  tcp dpt:53
    0     0 ACCEPT       tcp        *     *     ::/0                   2001:4860:4860::8844/128  tcp dpt:53
    2   160 ACCEPT       tcp        *     *     ::/0                   ::/0                      tcp dpt:873
    4   320 ACCEPT       tcp        *     *     ::/0                   2001:470:d:525:1::1/128   tcp dpt:28011
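For reference, the commands to build a ruleset like that would look roughly like this. It's a sketch reconstructed from the -L -v listing above: the interface, addresses and ports come straight from it (3128 is Squid's default port so that host is presumably a proxy, 873 is rsync, 8888/8844 are Google Public DNS), everything beyond that is an assumption:

ip6tables -N TINDERBOXES
# accept return traffic for established flows (this includes DNS responses)
ip6tables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# SSH into the containers, for packets not arriving on the br0 bridge
ip6tables -A FORWARD ! -i br0 -p tcp --dport 22 -j ACCEPT
# ICMPv6, needed among other things for neighbour discovery
ip6tables -A FORWARD -p icmpv6 -j ACCEPT
# everything originating from the tinderbox /80 goes through a whitelist
ip6tables -A FORWARD -s 2001:470:d:525:2::/80 -j TINDERBOXES
ip6tables -P FORWARD DROP

ip6tables -A TINDERBOXES -d 2001:470:d:525::1 -p tcp --dport 3128 -j ACCEPT
ip6tables -A TINDERBOXES -d 2001:4860:4860::8888 -p udp --dport 53 -j ACCEPT
ip6tables -A TINDERBOXES -d 2001:4860:4860::8844 -p udp --dport 53 -j ACCEPT
ip6tables -A TINDERBOXES -d 2001:4860:4860::8888 -p tcp --dport 53 -j ACCEPT
ip6tables -A TINDERBOXES -d 2001:4860:4860::8844 -p tcp --dport 53 -j ACCEPT
ip6tables -A TINDERBOXES -p tcp --dport 873 -j ACCEPT
ip6tables -A TINDERBOXES -d 2001:470:d:525:1::1 -p tcp --dport 28011 -j ACCEPT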

On the FORWARD chain, we first accept all the related/established connections – this is required, as otherwise you won't get the DNS responses back – then we let all connections to port 22 pass through… and, critically, we let ICMPv6 pass.

This is what actually kicked me in the nuts today: first, I forgot that ip6tables expects a different protocol identifier for ICMP on IPv6 (as you see, it's icmpv6, not icmp). Second, when the target hostname is in the same subnet/prefix, the kernel uses NDP neighbour solicitation (message type 135, from the logs) to find out its link-local address.
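If letting the whole of ICMPv6 through feels too broad, ip6tables can match individual message types, so in theory the rule could be narrowed to just the neighbour-discovery messages (type names spelled as ip6tables expects them):

# accept only neighbour discovery instead of all of ICMPv6
ip6tables -A FORWARD -p icmpv6 --icmpv6-type neighbour-solicitation -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type neighbour-advertisement -j ACCEPT

Keep in mind, though, that dropping the rest of ICMPv6 – packet-too-big in particular – breaks path-MTU discovery, which on a tunneled connection like TunnelBroker's is asking for trouble, so the blanket accept is the safer choice here.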
