Mail, SSL and Postfix

In my previous post, I delineated a few reasons why I care about SSL for my blog and the xine bugzilla. What I did not talk about was the email infrastructure for both. The reason is that, according to the very same threat model I delineated in that post, it’s not as important for me to secure that part of the service.

That does not mean, though, that I never considered it, just that I had not considered it important enough yet. After that post I realized it was time to fix that hole, and I’ve now started working on securing the email coming out of my servers as well as that coming through the xine server. But before going into details, let’s see why I was not as eager to secure the mail servers compared to the low-hanging fruit of my blog.

As I said in the previous post, you have to identify two details: what information is important to defend, and who the attackers would be. In the case of the blog, the information was the email addresses of the commenters, and the attackers were the other users of the open, unencrypted wifi networks in use. In the case of email, the attackers in particular change drastically: the only people in a position to get access to the connections’ streams are the people at the hosting and datacenter companies, and, if they make mistakes, your neighbours in the same datacenter. So it’s certainly not the very easy prey of the couple sitting at the Starbucks next to you.

The content is a bit more interesting. We all know that there is no real way to make email completely opaque to service providers short of end-to-end encryption such as GnuPG, so if you really don’t want your server admin to ever be able to tell what’s in your email, that’s what you should use. But even then, something is transmitted in cleartext (minus protocol-level encryption): the headers, the so-called metadata that stirred the press so much last year. So once again it’s the addresses of the people you contact that could easily leak, even with everything else encrypted. In the case of xine, the mail server mostly handles Bugzilla messaging, and it may well send out, without encryption, the comments on security bugs, so reducing the risk of that information leaking is still a good idea.

Caveat emptor in all of this post, though! The xine mail server handles both inbound and outbound messages, but it never lets users access a mailbox; the server itself is a mail router rather than a full mail service. This is important, because otherwise I wouldn’t be able to justify my sloppiness on covering SSL support for mail at all! If your server hosts mailboxes or allows direct mail submission (relay), you most definitely need to support SSL, as then it’s a client-server connection which is attackable exactly like the Starbucks example above.

So what needs to be done to implement this? Well, first you need to remember that a mail router like the one I described requires SSL in two directions: when it receives a message it should be able to offer SSL to the connecting client, and when it sends a message it has to request SSL from the remote server too. In a perfect set up, the client also offers a certificate to prove who it is. This means that you need a certificate that works both as a server and as a client certificate; thankfully StartSSL supports that for Class 2 certificates: even though they are named for web servers, they work just fine for mail servers too.
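An easy way to double-check that a given certificate is actually good for both directions is to ask OpenSSL which purposes it satisfies; the output should list both SSL client and SSL server as supported (the path matches my configuration below):

openssl x509 -in /etc/ssl/postfix/server.crt -noout -purpose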

Unfortunately, the same caveats that apply to HTTP certificates apply to mail servers: cipher and protocol version combinations. And while Qualys has SSL Labs to qualify the SSL support of your website, I know of no similar service for mail routers. Coming up with one is not trivial either: you would want to make sure not to become a relay-spammer by mistake, and the only way to judge the sending side of a server is to trick it into sending a message back to your own service, which should not be possible on a properly configured, non-open relay.
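What you can do by hand, at least on the receiving side, is have OpenSSL speak STARTTLS to the server and report the negotiated protocol, cipher and certificate chain (hostname hypothetical, of course):

openssl s_client -starttls smtp -connect mail.example.org:25

It’s nowhere near the full report that SSL Labs gives you, but it at least tells you whether a modern client can negotiate something sane with your server.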

So the good news is that all of the xine developers with an alias on the domain have a secure server routing mail to them, so the work I’ve done is not for nothing. The other note is that a good chunk of the other users in Bugzilla use GMail or similar big hosting providers. And unlike others I actually find this a good thing, as it’s much more likely that the lonely admin of a personal mail server (like me for xine) would screw up encryption, compared to my colleagues over at GMail. But I digress.

The bad news is that not only is there no way to assess the quality of a mail server’s configuration, but at least in the case of Postfix the basic setting for TLS is ternary: yes always, yes if the other side requests it (or offers it, when submitting mail), or not at all. With the security-level settings alone there is no way to set up a policy so that e.g. GMail’s servers don’t get spoofed and tricked into sending messages over a cleartext connection. A second problem is that I have not been able to get Postfix to validate the certificates either as server or as client, likely because of the use of opportunistic TLS rather than mandatory TLS. And the last problem is that servers connecting to submit mail will not fall back to cleartext if TLS can’t be negotiated (either because of ciphers or protocols), and will instead keep retrying the TLS negotiation the same way.
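For completeness: the Postfix documentation does describe smtp_tls_policy_maps for per-destination policies, which on paper covers exactly the GMail case above, though I haven’t managed to fit it into my opportunistic setup yet. A minimal sketch, assuming a recent enough Postfix:

smtp_tls_policy_maps = hash:/etc/postfix/tls_policy

And in /etc/postfix/tls_policy (remember to run postmap /etc/postfix/tls_policy afterwards):

gmail.com    encrypt

This should force TLS for mail destined to gmail.com, deferring delivery rather than downgrading to cleartext when negotiation fails.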

Anyway, my current configuration for this is:

# inbound: TLS offered to clients connecting to us
smtpd_tls_cert_file = /etc/ssl/postfix/server.crt
smtpd_tls_key_file = /etc/ssl/postfix/server.key
smtpd_tls_received_header = yes
smtpd_tls_loglevel = 1
smtpd_tls_security_level = may
smtpd_tls_ask_ccert = yes
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3

# outbound: TLS requested from the servers we deliver to
smtp_tls_cert_file = /etc/ssl/postfix/server.crt
smtp_tls_key_file = /etc/ssl/postfix/server.key
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtp_tls_protocols = !SSLv2, !SSLv3
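After changing these, postconf can confirm which settings the instance will actually pick up, and a reload applies them:

postconf -n | grep tls
postfix reload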

If you have any suggestions on how to make this more reliable and secure, please do!

Apple’s Biggest Screwup — My favourite contender

Many people have written about Apple’s screwups in the past — recently, the iPad Mini seems to be the major focus for just about anybody, and I can see why. While I won’t claim to know their worst one, I’m definitely going to show you a contender that could legitimately be called Apple’s Biggest Screwup: OS X Server.

Okay, those who read my blog daily already know this, because I blogged it last night:

OS X Server combines UNIX’s friendliness, with Windows’s remote management capabilities, Solaris’s hardware support, and AIX software availability.

So let’s try to dissect this.

UNIX friendliness. This can be argued both positively and negatively — we all know that UNIX in general is not very friendly (I’m using the trademarked name because OS X, being based on FreeBSD, actually is UNIX), but it’s friendlier to have a UNIX server than a Windows one. If you want to argue it negatively, you don’t have all the Windows-style point-and-click tools for every possible service. If you want to argue it positively, you’re still running solid (up to a point) software such as Apache, BIND, and so on.

Windows’s remote management capabilities. This is an extremely interesting point. While, as I just said, OS X Server provides you with BIND as its DNS server, you’re not supposed to edit the files by hand but to leave them to Apple’s answer to the Microsoft Management Console — ServerAdmin. Unfortunately, doing so remotely is hard.

Yes, because even though it’s supposed to be usable from a remote host, it requires the same version on both sides, which is impractical if your server is running 10.6 and your only client at hand is updated to 10.8. So this option has to be dropped entirely in most cases — you don’t want to keep updating your server to the latest OS, but you do for your client, especially if you’re doing development on said client. Whoops.

So can you launch it through an SSH session? Of course not. And that’s despite the fact that, for all the complaining people do about X11, the X protocol and SSH X11 forwarding are a godsend for remote management: if you have things like very old versions of libvirt and friends, or some other tool that can only be run in a graphical environment, all you need is another X server and an SSH client and you’re done.
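To show how little it takes on the Linux side (hostname obviously made up):

ssh -X admin@server.example.com virt-manager

One flag, one command, and the tool’s window shows up on your local display while running on the remote box.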

Okay, so what can you do? Well, the first option would be to do it locally on the box, but that’s not possible, so the second best would be to use one of the many remote desktop techniques — OS X Server comes with Apple’s Remote Desktop server by default. While this listens on the VNC standard 5900 port… it seems like it does not work with a standard VNC client such as KRDC. You really need Apple’s Remote Desktop client, which is a paid-for proprietary app. Of course you can set up one of many third-party apps to connect to it, but if you didn’t think about that when installing the server, you’re basically stuck.
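For the record, there is supposed to be a way to enable a password-authenticated legacy VNC mode, which standard clients can then talk to, through Apple’s kickstart tool; I haven’t verified this on the versions I’m dealing with, so take it as a sketch, and note that you still need shell access to the box to run it:

sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -activate -configure -clientopts -setvnclegacy -vnclegacy yes -setvncpw -vncpw yourpassword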

And I’m pretty sure this is not limited to the DNS server: Apache and the other services will probably have the same issues.

Solaris’s hardware support. This one should be easy: if you’ve ever tried to run Solaris on real hardware, rather than just virtualized – and even then … – you know that it’s extremely picky. Last time I tried, it wouldn’t run on a system with SATA drives, to give you an idea.

What hardware can OS X Server run on? Obviously, only Apple hardware. Since we’re talking about a server, you have to remove all their laptops from the equation. For a local server you could use an iMac, but my problem is that the machine is not local, it’s at a co-location. The Xserve, which was the original home of OS X Server, is now gone forever, and that leaves only two choices: Mac Pro and Mac Mini. Which are the only ones sold with that version of OS X anyway.

The former hasn’t been updated in quite a long time, and it’s quite bulky to put at a co-location, even though I’ve seen enough messy racks to know that somebody could actually think about bringing one there. The latter just recently got an update that makes it sort of interesting, by giving you a two-HDD option…

But you still get a system with 2.5”, 5400 RPM disks at most, no RAID, and documentation telling you to use external storage if you need anything different. And since this is a server edition, it comes with no mouse or keyboard; adding those means another $120. Tell me again why anybody in their sane mind would use one of those for a server? And no, don’t remind me that I could have an answer on the tip of my tongue.

For those who might object that you can fit two Mac Minis in 1U – you really can’t: you need a tray, and you end up using 2U most of the time anyway – you can easily use something like SuperMicro’s Twin, which fits two completely independent nodes in a single 1U chassis. And the price is not really that different.

The model I linked is quoted, googling around, at about eighteen hundred dollars ($1800); add $400 for four 1TB hard disks (WD Caviar Black; that’s their going price: since last April I’ve already ordered eight of them, four for Excelsior, four for work) and you get to $2200. Two Apple Mac Minis? $2234, with the mouse and keyboard that you need (the Twin has IPMI support and remote KVM, so there you don’t need them).

AIX’s software availability. Yes, you can have MacPorts, or Gentoo Prefix, or Fink, or probably a number of other similar projects. The same is probably true for AIX. But how much software is actually tested on OS X Server? Probably not much. While Gentoo Prefix and MacPorts cover most of the basic utilities you’d use on a UNIX workstation, I doubt you’ll find the complete software coverage that you currently find for Linux, and that’s often enough a dealbreaker.

For example, I happen to have these two Apple servers (don’t ask!). How do I monitor them? Neither Munin nor NRPE is easy to set up on OS X, so the boxes are still unmonitored, and I’m not sure I’ll ever actually monitor them. I’d honestly replace them just for the sake of not having to deal with OS X Server anymore, but it’s not my call.

I think Apple pulled off quite a feat, making me think that our crappy HP servers are not the worst thing out there…

Revenge of the HP Updates

Just shy of three months ago I was fighting with updating the iLO firmware (IPMI and extras) and, as I recounted, even when you select downloads for RHEL (which is a supported operating system on those boxes), you’re given a Windows executable file, which you have to extract. But at least you can use the extracted file to update the IPMI firmware remotely.
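In case it’s useful to anybody else: in my experience these self-extracting Windows packages can usually be unpacked on Linux with 7-Zip or even plain unzip, no Windows required (the filename here is made up, yours will differ):

7z x CP012345.exe

Your mileage may vary, depending on how the specific package was put together.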

Well, if it weren’t for the small issue that the fans get stuck at 14 kRPM until you also update the BIOS. It wasn’t obvious how much of a problem that was until we got to the co-location last week and… “What on Earth is this noise?” “I think it’s our servers!” screamed from the back side of the cabinet.

Since one of the servers also had some other hardware issues (one of the loops keeping the chipset’s heatsink in place gave way — I glued it back and applied a new layer of thermal paste after scraping off the old one), we ended up bringing it back to the office, where today, after repairing it and booting, it became obvious that we couldn’t leave it running with that kind of noise. So it was time to update the BIOS. Which is easier said than done.

Step one is finding the correct download — the first one I found turned out to be wrong, but it took me some time to understand that, because the BIOS update has to be done from DOS. And that brought me back to a very old post of mine (well, not that old, just a year and a half, now that I look), and its follow-up, which came with a downloadable 383KB compressed, 2GB uncompressed bootable FreeDOS image, since getting sysrescuecd’s FreeDOS to do anything other than boot and play its own demos had proved impossible.
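For reference, getting such an image onto a USB stick is the usual dd routine: the device name below is a placeholder, triple-check it before you overwrite the wrong disk.

dd if=freedos.img of=/dev/sdX bs=4M
sync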

So when I actually got to run the executable in the FreeDOS image… what I found is an extremely stupid tool (that, I remind you, will not work on Windows XP, Vista or 7) to create a USB drive to update the BIOS… lol, whut?

The correct download is, once again, for Windows even when you select RHEL4, and it auto-extracts into a multitude of files that include the BIOS itself some four different times; it provides some sort of network update, as well as “flat files” (which you can use with FreeDOS), a Windows updater, an ISO file, and a utility to build a USB stick that updates the BIOS.

Considering that this is for a server running Linux, you’ve now involved two more operating systems. And on the next trip to the co-lo we’ve got our work cut out for us, updating the BIOS and the IPMI firmware server by server (hoping that the new firmware actually has a reliable SOL connection, among other things).

But to avoid being all too negative toward HP: it’s still better than trying to do standard sysadmin work on an Apple OS X Server install on a Mac Mini. OS X Server combines UNIX’s friendliness, with Windows’s remote management capabilities, Solaris’s hardware support, and AIX software availability. But that’s a topic for another post.