New devbox running

I announced in February that Excelsior, which ran the Tinderbox, was no longer at Hurricane Electric. I also said I would start working on a new-generation Tinderbox, and to do that I need a new devbox, as the only three Gentoo systems I have at home are the laptops and my HTPC, not exactly hardware to run compilation on all the freaking time.

So after thinking over the options, I decided that it was much cheaper to just rent a single dedicated server rather than a full cabinet, and after asking around I settled on Online.net, because of the price and recommendations from friends. Unfortunately they do not support Gentoo as an operating system, which makes a few things a bit more complicated. They do provide you with a rescue system, based on Ubuntu, which is enough to do the install, but not everything is easy that way either.

Luckily, most of the configuration (but not all of it) was stored in Puppet — so I only had to rename the hosts there, change the MAC addresses for the LAN and WAN interfaces (I use static naming of the interfaces as lan0 and wan0, which makes many other pieces of configuration much easier to deal with), change the IP addresses, and so on. Unfortunately, since I didn’t set up the previous machine through Puppet from the start, the configuration did not carry all the information needed to replicate the system, so it took some iteration and fixing. This also means that the next move is going to be easier.
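
For reference, the static naming itself is nothing Puppet-specific; on Gentoo it usually comes down to a udev rule keyed on the card’s MAC address. A minimal sketch, with a placeholder file name and MAC addresses rather than my real ones:

# Hypothetical example: pin interface names to the NICs' MAC addresses via udev
cat > /etc/udev/rules.d/70-net-names.rules <<'EOF'
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="lan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="66:77:88:99:aa:bb", NAME="wan0"
EOF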

The biggest problem was setting up the MDRAID partitions correctly, because of GRUB2: if you didn’t know, grub2 has an automagic dependency on mdadm — if you don’t install it, grub2 won’t be able to install itself on a RAID device, even though it can detect it; the maintainer refused to add a USE flag for it, so you just have to know about it.
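
In practice the workaround is simply to make sure mdadm is on the system before installing the bootloader; roughly like this (the target disk is just an example):

# grub2-install can detect the RAID device, but without mdadm installed it
# refuses to install itself on it; there is no USE flag to pull it in.
emerge --ask sys-fs/mdadm
grub2-install /dev/sda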

Given what the kernel can and cannot autodetect, I fought a little more than usual, then just gave up and rebuilt the two arrays (/boot and / — yes, laugh at me, but when I installed Excelsior it was the only way to get GRUB2 not to throw up) with metadata 0.90. The remaining problem was being able to tell what the boot-up errors were, as of course I have no physical access to the machine.
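
For the record, the only difference is the metadata format passed at creation time; recreating the /boot array looks roughly like this (device names are placeholders, and recreating an array of course wipes it, so don’t copy this blindly):

# RAID1 with the old 0.90 superblock, which the kernel can autodetect at boot
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1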

The Online.net server I rented is a Dell machine that comes with iDRAC for remote management (essentially Dell’s own name for IPMI), and Online.net lets you connect to it through your browser, which is pretty neat — they use a pool of temporary IP addresses and only authorize your own IP address to connect to them. On the other hand, they do not change the default certificates, which means you end up with the same untrustable Dell certificate every time.
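
Since iDRAC is IPMI underneath, a Serial-over-LAN session via ipmitool is another way to watch boot messages without physical access — not what I ended up using, and it requires console redirection to be enabled on the Dell side, but worth noting. Address and credentials below are placeholders:

# Attach to the server's serial console over the network through IPMI SOL
ipmitool -I lanplus -H 192.0.2.10 -U root -P changeme sol activate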

From the iDRAC console you can’t do much, but you can start up the remote, JavaWS-based console, which reminded me of something. Unfortunately the JNLP file that you can download from iDRAC did not work with the Sun, Oracle or IcedTea JREs, segfaulting (no kidding) with an X.509 error log as its last output — I seriously thought the problem was with the certificates, until I decided to dig deeper and found this set of entries in the JNLP file:

 <resources os="Windows" arch="x86">
   <nativelib href="https://idracip/software/avctKVMIOWin32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLWin32.jar" download="eager"/>
 </resources>
 <resources os="Windows" arch="amd64">
   <nativelib href="https://idracip/software/avctKVMIOWin64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLWin64.jar" download="eager"/>
 </resources>
 <resources os="Windows" arch="x86_64">
   <nativelib href="https://idracip/software/avctKVMIOWin64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLWin64.jar" download="eager"/>
 </resources>
  <resources os="Linux" arch="x86">
    <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
  </resources>
  <resources os="Linux" arch="i386">
    <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
  </resources>
  <resources os="Linux" arch="i586">
    <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
  </resources>
  <resources os="Linux" arch="i686">
    <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
  </resources>
  <resources os="Linux" arch="amd64">
    <nativelib href="https://idracip/software/avctKVMIOLinux64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux64.jar" download="eager"/>
  </resources>
  <resources os="Linux" arch="x86_64">
    <nativelib href="https://idracip/software/avctKVMIOLinux64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux64.jar" download="eager"/>
  </resources>
  <resources os="Mac OS X" arch="x86_64">
    <nativelib href="https://idracip/software/avctKVMIOMac64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLMac64.jar" download="eager"/>
  </resources>

Turns out that if you remove everything but the Linux/x86_64 entries, it does fetch the right jars and execute the right code without segfaulting. Mysteries of Java Web Start, I guess.

So after finally getting the system to boot, the next step was setting up networking — as I said, I used Puppet to set up the addresses and everything, so I had working IPv4 at boot, but I had to fight a little longer to get IPv6 working. Indeed, IPv6 configuration on servers, virtual and dedicated alike, is very much an unsolved problem. Not because there is no solution, but mostly because there are too many solutions — essentially every single hosting provider I have ever used had a different way to set up IPv6 (including none at all in one case, so the only option was a tunnel), so it takes some fiddling to get it right.

To be honest, Online.net has a better setup than OVH or Hetzner, the latter being very flaky, and a more self-service one than Hurricane, which was very flexible and easy to set up, but required me to just mail them whenever I wanted to make changes. Their documentation covers dibbler, as they rely on DHCPv6 with a DUID for delegation — they give you a single /56 v6 net that you can then split up into subnets and delegate independently.
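
Purely as a sketch, and not Online.net’s exact recipe — the interface name is my own convention and the DUID handling their documentation requires is left out — a dibbler client configuration requesting a delegated prefix looks more or less like this:

# Minimal dibbler client configuration asking for IPv6 prefix delegation
cat > /etc/dibbler/client.conf <<'EOF'
iface "wan0" {
    pd
}
EOF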

What DHCPv6 in this configuration does not give you is routing — which kinda makes sense, as you can use RA (Router Advertisement) for that. Unfortunately, at first I could not get it to work. Turns out that, since I use subnets for the containerized network, I had enabled IPv6 forwarding, through Puppet of course. And it turns out that Linux will ignore Router Advertisement packets when forwarding IPv6 unless you ask it nicely to — by setting accept_ra=2 as well. Yey!

Again, this is the kind of problem where finding the information took much longer than it should have; Linux does not really tell you that it’s ignoring RA packets, and it is far from obvious that setting one sysctl will effectively disable another — unless you go and look for it.
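
For anyone else hitting this, the actual fix boils down to a pair of sysctls (wan0 being my interface naming, as noted above), which in my case are of course pushed out through Puppet rather than set by hand:

# With forwarding enabled, the default accept_ra=1 is ignored; only accept_ra=2
# keeps the kernel processing Router Advertisements on a forwarding interface.
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.wan0.accept_ra=2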

Luckily this was the last problem I had; after that the server was set up fine and I just had to finish configuring the domain’s zone file, the reverse DNS and the SPF records… yes, this is all the kind of trouble you go through when you run your own infrastructure rather than going fully cloud — which is why I don’t consider self-hosting a general solution.

What remained was just bits and pieces. The first was realizing that Puppet does not remove entries from /etc/fstab by default, which is how I noticed that the default Gentoo /etc/fstab file still contains the entries for CD-ROM drives as well as /dev/fd0. I don’t remember the last computer I used with a floppy disk drive, let alone owned.
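
Puppet’s mount type only manages the entries you declare, so stale lines stay put unless you explicitly mark them absent. A quick, hedged example of cleaning those two out (the mount points are just the Gentoo defaults from memory):

# One-off cleanup; the same resources can live in a manifest instead
puppet apply -e 'mount { "/mnt/cdrom": ensure => absent }'
puppet apply -e 'mount { "/mnt/floppy": ensure => absent }'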

The other fun bit has been setting up the containers themselves — like the server itself, they are set up with Puppet. Since the old server ran a tinderbox, it also hosted a proper rsync mirror, as it was just easier, but I didn’t want to repeat that here, and since I was unable to find a good mirror through mirrorselect (longer story), I configured Puppet to just provide all the containers with distfiles.gentoo.org as their sync server, which did not work. Turns out that our default mirror address does not have any IPv6 hosts behind it; when I asked Robin about it, it seems we just don’t have any IPv6-capable mirror that can handle that traffic, which is sad.
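
If you want to check this kind of thing yourself, it is just a matter of looking the host up for AAAA records, for instance:

# No AAAA records in the answer means no IPv6 way to reach the mirror
host -t AAAA distfiles.gentoo.org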

So anyway, I now have a new devbox and I’m trying to set up the rest of my repositories and access (I have not set up access to Gentoo’s repositories yet, which is kind of the point here). Hopefully this will also lead to more technical blogging in the next few weeks, as I’m cutting down on the overwork to relax a bit.

What does #shellshock mean for Gentoo?

Gentoo Penguins with chicks at Jougla Point, Antarctica (photo credit: Liam Quinn)

This is going to be interesting as Planet Gentoo is currently unavailable as I write this. I’ll try to send this out further so that people know about it.

By now we have all been doing our best to update our laptops and servers to the new bash version so that we are safe from the big scare of the quarter, shellshock. I say laptops because the way the vulnerability can be exploited limits the impact considerably if you have a desktop, or otherwise connect only to trusted networks.
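
If you want to double-check a machine, the widely circulated one-liner is enough to tell whether the installed bash still executes code smuggled in through an exported function definition:

# A patched bash prints only "this is a test"; a vulnerable one prints "vulnerable" first
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'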

What remains to be done is to figure out how to keep this from repeating. And that’s a difficult topic, because a 25-year-old bug is not easy to avoid, especially since there are probably plenty of siblings of it around that we have not found yet, just like this past week. But there are things that we can do as a whole environment to reduce the chances of problems like this happening, or at least keep them from escalating so quickly.

In this post I want to look into some things that Gentoo and its developers can do to make things better.

The first obvious thing is to figure out why /bin/sh on Gentoo is not dash or any other very limited shell such as BusyBox. The main answer lies in the init scripts that still use bashisms; this is not news: I pushed for this four years ago, and Roy insisted on it even before that. Interestingly enough, though, this excuse is getting less and less relevant thanks to systemd. This is indeed, among all the arguments, one I find genuinely good in Lennart’s design: we want declarative init systems, not imperative ones. Unfortunately, even systemd is not as declarative as it was originally supposed to be, so the init script problem is only half solved — on the other hand, it does make things much easier, as you have to start afresh anyway.

If all your init scripts are free of bashisms, or you’re using systemd (like me on the laptops), then it’s mostly safe to switch to dash as the provider for /bin/sh:

# emerge eselect-sh
# eselect sh set dash

That will change your /bin/sh and make it much less likely that you’d be vulnerable to this particular problem. Unfortunately, as I said, it’s only mostly safe. I even found that some of the init scripts I wrote, which I had checked with checkbashisms, did not work as intended with dash; fixes are on their way. I also found that the lsb_release command, while not requiring bash itself, uses non-POSIX features, resulting in garbage on the output — this breaks facter-2 but not facter-1, as I found out when it broke my Puppet setup.

Interestingly, it would have been simpler for me to use zsh, as then both the init scripts and lsb_release would have worked. Unfortunately, when I tried doing that, Emacs tramp-mode froze when trying to open files, both with the sshx and sudo modes. The same was true for BusyBox, so I decided to just install dash everywhere and use that.

Unfortunately this does not mean you’ll be perfectly safe, or that you can remove bash from your system. Especially in Gentoo, we have too many dependencies on it, the first being Portage of course, but eselect also qualifies. Of the two I’m actually more concerned about eselect: I have been saying this from the start, but designing such a major piece of software – one that does not change that often – in bash sounds like insanity. I still think that is the case.

I think this is the main problem: in Gentoo especially, bash has always been treated as a programming language. That’s bad. Not only because it has just one reference implementation, but because it also seems to convince other people, new to coding, that this is good engineering practice. It is not. If you need to build something like eselect, you do it in Python, or Perl, or C, but not bash!

Gentoo is currently stagnating, and that’s hard to deny. I’ve stopped being active since I finally accepted stable employment – I’m almost thirty, it was time to stop playing around, I needed to make a living, even if I don’t really make a life – and QA has obviously taken a step back (I still have a non-working dev-python/imaging on my laptop). So trying to push for getting rid of bash in Gentoo altogether is not a realistic proposition. On the other hand, even though it’s probably going to be too late to be relevant, I’ll push for having a Summer of Code project next year to convert eselect to Python or something along those lines.

Myself, I decided that the current bashisms in the init scripts I rely upon on my servers are simple enough that dash will work, so I pushed that change through Puppet to all my servers. It should be enough for the moment. I expect more scrutiny to be directed at dash, zsh, ksh and the other shells in the next few months as people migrate around, or decide that a 25-year-old bug is enough to think twice about all of them, so I’ll keep my options open.

This is actually why I like software biodiversity: it lets you select a different option when one component fails, and that is what worries me the most about systemd right now. I also hope that showing how badly bash has fared all this time with its closed development will make it possible to have a better syntax-compatible shell with a proper parser, or even better, a properly librarised implementation. But that’s probably hoping for too much.

Why Puppet?

Seems like the only thing everybody had to comment on my previous post was to ask why I haven’t used $this, $that and ${the kitchen sink}. Or, to be precise, they asked about cfengine, chef and bcfg2. I have to say I don’t really like being forced into justifying myself, but at this point I couldn’t just avoid answering, or I would keep getting the same requests over and over again.

So first of all, why a configuration management system? I have three production vservers at IOS (one is this one, another is xine’s, and the third is a customer’s of mine). I have a standby backup server at OVH. And then there’s Excelsior, which has four “spacedocks” (containers that I use for building binpkgs for the IOS servers), three tinderboxes (though usually only two are running), and a couple of “testing” containers (for x32 and musl), besides the actual container I use on it to maintain stuff.

That’s a lot of systems, and while they are very similar to each other, they are not identical. To begin with, they are in three different countries. And they use three different CPUs. And this is without counting the Raspberry Pi I set up with the weather station for a friend of mine. The result is that trying to maintain all those systems manually is folly, even though I have already reduced the number of hosts, since the print shop customer – the one I wrote about so often – moved on and found someone else to pick up their sysadmin tasks (luckily for both of us, since it was a huge time sink).

But the reason why I focused almost exclusively on Puppet is easy to understand: people I know have been using it for a while. Even though this might sound stupid, I do follow my crowd of friends when I have to figure out what to use. This is because the moment I have no idea how to do something, it’s easier to ask a friend than to go through the support chain at the upstream project. Gentoo infra people are using and working on Puppet, so that’s a heavy factor for me. I don’t know why they chose Puppet, but at this point I really don’t care.

But there is another thing, a lesson I learned with Munin: I need to judge the implementation language. The reason is simple: I’ll find bugs, for sure. I have this knack for finding bugs in stuff I use… which is the main reason why I got interested in open source development: I could then fix the bugs I found! But to do so I have to understand what’s written. And even though learning Perl was easy, understanding Munin’s code… was, and is, tricky. I was still able to get some amount of work done. Puppet being written in Ruby is a positive note.

I know, chef is also written in Ruby. But I do have a reason for not wanting to deal with chef: its maintainer in Gentoo. Half the bugs I find have to do with the way things are packaged, which is the reason why I became a developer in the first place. This means, though, that I have to be able to collaborate with the other developers involved, and sometimes that’s just not possible. Sometimes it’s due to upstream developers, but in the case of chef the problem is the Gentoo developer, who is definitely not somebody I want to work with, since he’s been “fiddling” with the Ruby ebuilds for chef, messing up a lot of the work that the Ruby team, me included, kept pouring into improving the quality of the Ruby packages.

So basically these are the reasons why I decided to start using Puppet and writing Puppet modules.

Managing configuration

So I’ve finally bitten the bullet and decided to look into installing and setting up Puppet to manage the configuration of my servers. The reason is my transfer to Dublin: I expect I won’t have the same amount of time I had before, and that means that any streamlining in the administration of my servers is a net improvement.

In particular, just the other day I spent a lot of time fighting to set up SSL properly on the servers, and I kept scp’ing files around — it was obvious I wasn’t doing it right.

It goes deeper than this, though; since Puppet is obviously trying to get me to standardize the configurations between different servers, I’ve ended up uncovering a number of situations where the configuration of different servers was, well, different. Most of the time without a real reason. For instance, the configured Munin plugins didn’t match, even those that are not specific to a service — of the three vservers, one uses PostgreSQL, another uses MySQL, and the third, being the standby backup for the other two, has both.

Certainly there’s a conflict between the average Gentoo Linux way of doing things and the way Puppet expects things to be done. Indeed, the latter requires you to make configurations very similar, while the former tends to make you install each system as its own snowflake — but if you are even partially sane, you know that to manage more than one Gentoo system you’ll have to standardize at least some configurations.

The other big problem with using Puppet on Gentoo is a near-showstopper lack of modules that support our systems. While Theo and Adrien are maintaining a very nice Portage module, there is nothing that lets us set up the OpenRC oldnet-style network configuration, for instance. For other services, the support is often written with only CentOS or Debian in mind, and the only way to get them to work on Gentoo is to fix the module.

To solve this problem, I started submitting pull requests to modules such as timezone and ntp so that they work on Gentoo. It’s usually relatively easy to do, but it can get tricky when the CentOS and Gentoo ways of setting something up are radically different. By the way, the ntp module is sweet because I can finally no longer forget that we have two places to set the NTP server pools.
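
As a hedged example of what I mean — assuming the puppetlabs ntp module is installed, with parameters from its documented interface and the Gentoo pool hostnames — the whole configuration reduces to declaring the class once:

# Declaring the class once keeps the pool configuration in a single place
puppet apply -e "class { 'ntp': servers => ['0.gentoo.pool.ntp.org', '1.gentoo.pool.ntp.org'] }"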

I also decided to create a module to hold whatever is Gentoo-specific enough, although this is not yet the kind of thing you want to rely upon forever — doing it properly would require a real parsed file. On the other hand, it allows me to set up all my servers’ networks, so it should be okay. And another module allows me to set environment variables on different systems.

You can probably expect me to publish a few more Puppet modules – and edit even more – in the next few weeks, while I transition as much configuration as I can from custom files to Puppet. In particular, though that’s worthy of a separate blog post, I’ll have to work hard to get a nice, easy, and dependable Munin module.