Does your webapp really need network access?

One of the interesting things I noticed after Shellshock was the number of probes for vulnerabilities that counted on webapp users having direct network access. Not just pings to known addresses to verify the vulnerability, or wget or curl requests with unique IDs, but even very rough nc or /dev/tcp connections meant to open remote shells. The fact that the probes are out there makes it logical to expect that, for at least some of the systems, they actually worked.

The reason this piqued my interest is that I realized most people don't take the one obvious step to mitigate this kind of problem: removing (or at least limiting) their web apps' access to the network. So I decided it might be worth spending a moment to describe why you should think about that. This is in part because I found out last year at LISA that not all sysadmins have enough development background to immediately pick up how these things work, and in part because I know that even if you're a programmer it might be counterintuitive to think that web apps should not have access, well, to the web.

Indeed, if you think of your app in the abstract, it has to have access to the network to serve responses to users, right? But what generally happens is that there is some division between the web server and the app itself. People who looked into Java in the early noughties have probably heard the term Application Server, usually in the form of Apache Tomcat or IBM WebSphere, but essentially the same "actor" exists for Rails apps in the form of Passenger, or for PHP with the php-fpm service. These "servers" are effectively self-contained environments for your app, which talk with the web server to receive user requests and serve back responses. This essentially means that in the basic web interaction, no network access is needed by the application service.
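To make that concrete, here is a minimal sketch of what this looks like with php-fpm and Apache 2.4; the pool and handler below are illustrative (names, users and paths are made up), but the point is that the whole request/response cycle goes over a local Unix socket and never touches a network interface:

    ; /etc/php-fpm.d/blog.conf: a hypothetical pool for the blog
    [blog]
    user = blog
    group = blog
    listen = /run/php-fpm/blog.sock
    listen.owner = apache
    listen.group = apache
    pm = ondemand
    pm.max_children = 5

    # matching Apache 2.4 snippet, handing PHP requests to that socket
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php-fpm/blog.sock|fcgi://localhost"
    </FilesMatch>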

Things get a bit more complicated in the Web 2.0 era, though: OAuth2 requires your web app to talk, from the backend, with the authentication or data providers. Similarly, even my blog needs to talk with some services, either to ping them to announce that a new post is out, or to check with Akismet whether blog comments are spam. WordPress plugins that create thumbnails are known to exist (and to have a bad security history), and they fetch external content, such as videos from YouTube and Vimeo, or images from Flickr and other hosting websites, to process. So a fair amount of network connectivity is needed by web apps too. Which means that rather than just isolating apps from the network, what you need to implement is some sort of filter.

Now, there are plenty of ways to remove network access from your webapp: SELinux, grsecurity's RBAC, AppArmor, … but if you don't want to set up a complex security system, you can do the trick with the bare minimum of the Linux kernel, iptables and CONFIG_NETFILTER_XT_MATCH_OWNER. Essentially, this allows you to match (and thus filter) connections based on the originating (or destination) user. Of course this only works if you isolate each webapp under a separate user, which is definitely what you should do, but not necessarily what people are doing. Especially with things like mod_perl or mod_php, separating webapps into users is difficult – they run in-process with the webserver, and negate the split with the application server – but at least php-fpm and Passenger allow for it quite easily. Running as separate users, by the way, has many more advantages than just network filtering, so start doing that now, no matter what.
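Before rolling out REJECT rules it is worth checking that the owner match is actually available, and seeing what a given webapp user tries to connect to; a small sketch (the user name is an example, and /proc/config.gz is only present if your kernel exposes its configuration):

    # check that the owner match was built into the running kernel
    zgrep CONFIG_NETFILTER_XT_MATCH_OWNER /proc/config.gz

    # log, rather than reject, whatever the webapp user tries to reach,
    # so you know what needs whitelisting before enabling the real policy
    iptables -A OUTPUT -o eth0 -m owner --uid-owner blog \
        -j LOG --log-prefix "blog-egress: "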

Now, depending on what webapp you have in front of you, you have different ways to achieve a near-perfect setup. In my case I have a few different applications running across my servers: my blog, a customer's WordPress blog, phpMyAdmin for that blog's database, and finally a webapp for an old customer which is essentially an ERP. These have different requirements, so I'll start from the one with the fewest.

The ERP app was designed to be as simple as possible: it's a basic Rails app that uses PostgreSQL to store data. Authentication is done by Apache via HTTP Basic Auth over HTTPS (no plaintext), so there is no OAuth2 or other backend interaction. The only expected connection is to the PostgreSQL server. The requirements for phpMyAdmin are pretty similar: it only has to interface with Apache and with the MySQL service it administers, and authentication is also done on the HTTP side (also encrypted). For both these apps the network policy is quite obvious: deny any outside connectivity. This becomes a matter of iptables -A OUTPUT -o eth0 -m owner --uid-owner phpmyadmin -j REJECT — and the same for the other user.
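If the database does not live on the same host, you need a single ACCEPT before the blanket REJECT. A sketch, assuming the ERP runs under a dedicated erp user and that its PostgreSQL server sits at a made-up address on the LAN:

    # allow the ERP user to reach its database, and nothing else
    iptables -A OUTPUT -o eth0 -m owner --uid-owner erp \
        -p tcp -d 192.0.2.10 --dport 5432 -j ACCEPT
    iptables -A OUTPUT -o eth0 -m owner --uid-owner erp -j REJECT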

The situation for the other two apps is a bit more complex: my blog wants to at least announce new blog posts, and it needs to reach Akismet; both actions use HTTP and HTTPS. WordPress is a bit more complex because I don't have much control over it (it has a dedicated server, so I don't have to care), but I assume it is mostly HTTP and HTTPS as well. The obvious idea would be to allow ports 80, 443 and 53 (for resolution), but you can do better: put a proxy on localhost and force the webapp to go through it, either as a transparent proxy or by using the http_proxy environment variable to convince the webapp to never connect directly to the web. Unfortunately that is not straightforward to implement, as neither Passenger nor php-fpm has a clean way to pass environment variables per user.
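On the iptables side this is a small variation of the rules above; a sketch, again with an example user name, assuming Squid listens on 127.0.0.1:3128:

    # the blog user may talk to local services (which includes the Squid
    # proxy on the loopback interface), but never directly to the outside
    iptables -A OUTPUT -o lo -m owner --uid-owner blog -j ACCEPT
    iptables -A OUTPUT -o eth0 -m owner --uid-owner blog -j REJECT

Squid then runs under its own user, which can be the only one allowed to talk to ports 80 and 443, and it gives you a single place to log, and if needed whitelist, what the webapp actually fetches.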

What I've done for now is to hack the environment.rb file to set ENV['http_proxy'] = 'http://127.0.0.1:3128/' so that Ruby will at least respect it. I'm still looking for a solution for PHP, unfortunately. In the case of Typo, this actually showed me two things I did not know: when looking at the admin dashboard, it makes two main HTTP calls: one to Google Blog Search – which was shut down back in May – and one to Typo's version file — which is now a 404 page since the move to the Publify name. I'll soon be shutting both calls off, since I really don't need them. Indeed, Publify development still seems to go in the direction of "let's add all possible new features that other blogging sites have" without considering the actual scalability of the platform, so I don't expect to go back to it any time soon.
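Coming back to the environment.rb hack, it is literally a single line appended to the file; a sketch of the change, with the proxy address matching the local Squid instance assumed above:

    # Ruby's Net::HTTP (and the gems built on it that honour the variable)
    # will pick the proxy up from the environment
    ENV['http_proxy'] = 'http://127.0.0.1:3128/'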

Linux Containers and Networking

So, at the moment I start writing this (which is unlikely to be the time I actually post it, given that I can see it could use some drawings) it's early in the morning in Italy and I haven't slept yet – a normal condition for me, especially lately – but I have spent a bit of time bouncing ideas around with ferringb, Ramereth and Craig about Linux Containers (LXC). Given that, I'd like to point out a couple of things regarding networking and LXC that might not have been extremely obvious before.

First of all, of the four networking types supported by LXC I could only try two, for obvious reasons: phys is used to assign a particular physical device to the container, and only works if you have enough physical devices to go around, while vlan requires a device able to do VLAN tagging. This leaves us with veth (virtual Ethernet) and macvlan (MAC-address-based virtual LAN tagging). The former is the simplest setup, and the one I've been using; it creates a pair of devices, one of which is assigned to the container, while the other stays on the host; you can then manage that device exactly like any other device on your system, and in my case that means it's added to the br0 bridge that the KVM instances are also joined to. LXC allows you to define the bridge to join directly in the configuration file.

Linux Containers with Virtual Ethernet
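For reference, this is roughly what the veth setup looks like in the container configuration, using the classic lxc.conf keys; the hardware address is an arbitrary example, and note that it only applies to the container-side interface:

    lxc.network.type = veth
    # join the same bridge the KVM guests are attached to
    lxc.network.link = br0
    lxc.network.flags = up
    # name of the interface as seen from inside the container
    lxc.network.name = eth0
    # fixed hardware address for the container-side device
    lxc.network.hwaddr = 00:16:3e:00:00:01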

The macvlan mode is supposed to have a smaller overhead, because the kernel knows beforehand the MAC address assigned to each interface; on the other hand, setting it up is slightly harder. In particular, there is one further mode parameter that can be set to either vepa (Virtual Ethernet Port Aggregator) or bridge. The former isolates the containers, as if they were a number of different hosts connected to the same network segment, but disallows the containers from talking with one another; the latter instead creates a special bridge (not to be confused with the Linux bridge used above with virtual Ethernet devices) that allows all the containers to talk with one another… but isolates them from the host system.

Linux Containers with MACVLAN VEPA-mode

You end up having to choose between the performance of network-to-container traffic and that of host-to-container traffic: in the first case you choose macvlan, reducing the work the kernel has to do but requiring an outside router to get your own traffic to the container; in the second you use veth and make the kernel handle the bridge itself. In my case, since the containers are mostly used for local testing, and the workstation will be using the in-kernel bridge anyway, the obvious choice is veth.

Linux Containers with MACVLAN Bridge-mode
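The macvlan variant is again a handful of configuration lines; a sketch, with the mode parameter switching between the two behaviours described above:

    lxc.network.type = macvlan
    # "vepa" keeps the containers isolated from one another,
    # "bridge" lets them talk to each other but not to the host
    lxc.network.macvlan.mode = vepa
    # the physical device the macvlan interfaces hang off
    lxc.network.link = eth0
    lxc.network.flags = up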

Now, when I decided to seal the tinderbox I wondered about one thing that LXC cannot do, and that I would like to find the time to send upstream. As it is, I want to disallow any access from the tinderbox to the outside, except for the RSync service and the non-caching Squid proxy. To achieve that I dropped IPv4 connectivity (so I don't run any DHCP client at all) and limited myself to autoconfigured IPv6 addresses; I then set the static address for yamato.home.flameeyes.eu in /etc/hosts, and used that hostname for the two services. Using iptables to firewall access to everything else had unfortunate results before (the kernel locked up without any actual panic); while I have to investigate that again, I don't think much has changed in that regard. There is no access to or from the outside network, since the main firewall refuses to talk with the tinderbox at all, but that's not generally a good thing (I would like, at some point in the future, to allow other developers access to the tinderbox), and it does not ensure isolation between the tinderbox and the other boxes on the network, which is a security risk (remember: the tinderbox builds and executes a lot of code that, to me, is untrusted).
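For the record, the two pieces of that setup inside the container look more or less like this; the address is a placeholder, and the rsync module name and proxy port are assumptions on my part:

    # /etc/hosts inside the tinderbox
    2001:db8::1    yamato.home.flameeyes.eu

    # /etc/portage/make.conf: both services are reached through the host
    SYNC="rsync://yamato.home.flameeyes.eu/gentoo-portage"
    http_proxy="http://yamato.home.flameeyes.eu:3128/"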

Now, assuming that the iptables kernel problem happens only with the bridge enabled (I would be surprised if it failed that badly on a pure virtual Ethernet device!), my solution was actually kind of easy: I would just have used the link-local IPv6 address, and relied on Yamato as a jump host to connect to the tinderbox. Unfortunately, while LXC allows you to set a fixed hardware address for the interface created inside the container, it provides no way to do the same for the host-side interface (which also gets a random name such as veth8pUZr), so you cannot simply use iptables to enforce the policy as easily.

But up to this point it's just a matter of missing configuration interfaces, so it shouldn't be much of a problem, no? Brian pointed out a chance of a safety issue there though, and I went on to check it out. Since, when you use virtual Ethernet devices, it is the kernel's bridge that decides where to send the packets, based on the MAC addresses it learns on each port, there is no enforcement of the hardware address used by the container; just like the IP settings you have in there, any root user inside the container can add and remove IP addresses and change the MAC address altogether. D'uh!
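It really is as simple as it sounds; from a root shell inside the container nothing stops you from doing something like this (addresses made up):

    # change the MAC address of the container's interface...
    ip link set dev eth0 down
    ip link set dev eth0 address 00:16:3e:de:ad:01
    ip link set dev eth0 up
    # ...and add whatever IPv6 address you like on top of it
    ip -6 addr add 2001:db8::bad/64 dev eth0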

I'm not sure whether this would work better with macvlan, but as it is, there is enough work to be done on the configuration interface, and – over a year after the tinderbox started running inside LXC – it's still not ready for production use, or at least not for the kind of production use where you actually allow third parties to access a "virtualised" root.

For those interested, the original SVG file of the (bad) network diagrams used in this article is here; it was drawn with Inkscape.