I have written a bit about Amazon’s Elastic Compute Cloud before, mostly in a bad light, even though some things are interesting and at least somewhat cool. Well, here is another post on the subject, which I’m using to clear up my own ideas about how to write a proper, objective and respectful guide to Gentoo on EC2. Besides, some of the problems I faced with EC2 are giving me insights on how to improve Gentoo as a whole.
Metadata fetching: to pass details such as the hostname to use for the system, or the SSH keys to authenticate against, Amazon provides the metadata through a webserver listening on an address in the APIPA range. At first I thought this was cumbersome, because I expected the server to be unreachable from outside that range, and there is currently no way in Gentoo to have APIPA addresses assigned to the interfaces; luckily it works just as well from the internal IPs that are assigned by the DHCP server, which means I can easily fetch the data after the interface is given a real address.
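To give an idea of what this looks like in practice, these are a couple of fetches against the metadata service; the 169.254.169.254 address and the paths below are the documented ones, but double-check against Amazon’s documentation:

```sh
# the metadata service answers on 169.254.169.254 even when the
# query comes from the DHCP-assigned internal address
wget -q -O - http://169.254.169.254/latest/meta-data/local-hostname
wget -q -O - http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
```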
To handle metadata fetching I have an init script ready that just has to be executed; it takes care of changing /etc/conf.d/hostname to set the hostname properly, and of creating an authorized_keys file for root to allow logging into the system. It sounds easy, but the problem here is that it has to run before hostname, yet after network is brought up. It turns out this is not possible with OpenRC 0.6.3 because of dependencies between the sysctl and network services, and hostname itself. This is fixed in the OpenRC Git repository and has been released as part of 0.6.4.
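For reference, this is roughly the shape of such a script; a minimal sketch rather than the exact script I use, but the depend() block is the part that needs OpenRC 0.6.4:

```sh
#!/sbin/runscript
# /etc/init.d/ec2-metadata — a minimal sketch, not the actual script

depend() {
    # this ordering is what fails with OpenRC 0.6.3
    need net
    before hostname sshd
}

start() {
    ebegin "Fetching EC2 instance metadata"
    local md="http://169.254.169.254/latest/meta-data"

    # let the hostname service pick the right name up at the next step
    echo "hostname=\"$(wget -q -O - ${md}/local-hostname)\"" \
        > /etc/conf.d/hostname

    # allow root logins with the keypair chosen at instance launch
    mkdir -p /root/.ssh
    wget -q -O /root/.ssh/authorized_keys \
        "${md}/public-keys/0/openssh-key"
    chmod 600 /root/.ssh/authorized_keys
    eend $?
}
```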
Modules, keyboards, consoles: unfortunately that’s far from the only problem you face on EC2. Since the hardware on EC2 is pretty much frozen, and thus the options you need in the kernel are quite limited, in Gentoo there is no real reason to use modules and ramdisks (the fact that Amazon relies a lot on ramdisks likely has something to do with most other distributions relying on ramdisks to load the correct modules for the hardware at boot, since they have to provide generic kernels). This would suggest you don’t need the module-init-tools package, but that’s not the case, since it’s a dependency of OpenRC, as a number of init scripts and network modules use modprobe themselves.
This actually made me think about our current situation with OpenRC: the openrc package itself provides init scripts for a few services, such as keyboard settings, timezone, “oldnet” support for things like ATM connections, and so on and so forth. Why shouldn’t we move these scripts to be installed by the individual packages instead, so that the modules init script would be installed by module-init-tools, and keymaps by kbd? This way, if I don’t use those, I can simply not install the package on the target system. Unfortunately, this doesn’t seem to be in Mike’s plans, and I’m not sure I can convince Jory to do this kind of work (since it would require changing a huge number of packages, especially packages that are mostly managed by base-system).
Architectures: maybe this should have gone first, but I think it has lower importance on the matter. You might know that the performance of straight x86 on Xen is abysmally low; the reason is that there is a conflict between the way the Xen hypervisor hides itself and the way glibc implements Thread-Local Storage (TLS), which means that accesses that are supposed to be direct references are instead emulated, and thus slow the heck out of anything that uses TLS. That happens to be a lot of stuff nowadays, since errno is implemented as a TLS variable, and almost any function call in C uses errno to report errors back to the program.
A totally different thing is true for x86-64, though; so even if you were not to try a hardened userland (which I think you should, but that’s a different problem, and one where x86-64 really makes a difference), you might want to consider using a 64-bit kernel even with a 32-bit userland, since that works without having to rebuild everything with strange CFLAGS. Unfortunately, to do so, up to a few months ago you had to use the m2.large instances, which are quite expensive to keep running.
Thankfully, the new t1.micro instances that, as I said, they started providing a few months ago allow both 32- and 64-bit guests. But they also have a catch: they only work with EBS-backed images, not with instance-store ones, and most distributions simply provide instance-store AMIs. That’s not excessive trouble, though, unless you factor in a few more problems related to EBS.
The volumes trouble: EBS is supposed to stand for Elastic Block Storage, but there really is nothing elastic about it: once you give a volume a size, you cannot expand it (you have to create a new, bigger one starting from a snapshot). Also, it’s not elastic enough to be used directly as a root file system. Sure, Amazon talks of EBS-backed instances because what they run on is a full-fledged volume, but to prepare them you have to use a snapshot, not a volume. I’m sure there are a number of good reasons why they decided to do it this way; on the other hand, it is quite upsetting, because if you want to be able to terminate the instance and statefully restart it from time to time, you have to take a snapshot of the current volume before terminating it, and then re-create an AMI from it for the next time you start it up. Stopping a reserved instance is fine though, and requires no further work, even though I guess it might be a tad more expensive.
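With the EC2 API tools, that dance looks roughly like this; the volume and snapshot IDs are made up, and I’m going from memory on the exact flags, so check ec2-register --help before trusting me:

```sh
# snapshot the root volume before terminating the instance
ec2-create-snapshot vol-12345678

# ... terminate, then register a new AMI from the snapshot
ec2-register --snapshot snap-87654321 \
    --name "gentoo-$(date +%Y%m%d)" \
    --architecture x86_64 \
    --root-device-name /dev/sda1
```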
You can have stateful EBS volumes without having to use snapshots, but those cannot be connected right at instance creation; they need to be reattached after each creation. So you can choose between creating the instance and then starting it, or simply waiting until the instance is started, then attaching the disks and starting up the services that use those volumes, with Rudy or with custom udev rules.
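With the udev route, after attaching the volume to the running instance (ec2-attach-volume vol-12345678 -i i-87654321 -d /dev/sdf), a rule like the following can bring the services up; the device name and the service are of course made up for the example:

```sh
# /etc/udev/rules.d/90-ebs-data.rules — hypothetical rule: when the
# attached EBS volume shows up as sdf, start the service living on it
KERNEL=="sdf", ACTION=="add", RUN+="/etc/init.d/postgresql start"
```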
Custom kernels: it might be of interest that Amazon finally allows custom kernels, which means we can use a non-Jurassic kernel with modern udev support (which is a godsend, or rather a rootsend, for what concerns attaching EBS volumes). This still requires a domU-capable kernel, so in Gentoo you have to use xen-sources, not gentoo-sources, to build your kernel; but still.
To implement this, they use a hacked-up version of GRUB (PVGRUB) that looks for /boot/grub/menu.lst on either /dev/sda or /dev/sda1 (the “kernel” image to use changes between the two options, and between regions and bitness; don’t ask me why the bitness of the “kernel” differs, considering that we’re talking about GRUB).
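For the sake of example, a menu.lst for PVGRUB looks more or less like this; a minimal sketch, where the kernel file name is whatever you installed, and the root line depends on whether PVGRUB is given the whole disk or the first partition:

```sh
# /boot/grub/menu.lst — minimal PVGRUB sketch
default 0
timeout 0

title Gentoo on EC2
root (hd0)
kernel /boot/kernel-2.6.32-xen root=/dev/sda1 ro
```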
Unfortunately, since the filesystems supported by this GRUB are quite limited, you have to jump through a few hoops to get this working properly with filesystems like XFS and JFS, which seem to have the best performance on EBS; and even more so if you want to use LVM. I’m still trying to overcome the problem of properly growing the partitions’ size when the original root EBS volume is not big enough. Maybe I should try with two volumes, even though that wastes quite a bit of space on the /boot partition/volume.
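The growing itself is not the hard part, for what it’s worth: you can create a bigger volume from the snapshot and then grow the filesystem onto the extra space; something along these lines, with made-up IDs, and xfs_growfs standing in for whatever your filesystem uses:

```sh
# create a larger volume from the root snapshot (IDs are made up)
ec2-create-volume --snapshot snap-87654321 --size 20 \
    --availability-zone us-east-1a

# once it is attached and any partition extended, grow the filesystem;
# xfs_growfs works on the mounted filesystem
xfs_growfs /
```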
Do note! If you are using a separate partition for /boot, for whatever reason you need the symlink /boot/boot -> . that the grub ebuild installs (without having to merge the ebuild itself); so just run ln -s . /boot/boot and be safe. Otherwise PVGRUB fails to find menu.lst and won’t let you start your instance.
How about gentoo-sources with
* CONFIG_PARAVIRT_GUEST=y
* CONFIG_XEN=y
* CONFIG_PARAVIRT=y
* etc …
Isn’t this supported by EC2?
marios: I spent quite a bit of time trying to get gentoo-sources working using an approach similar to what you suggest. I’m not an expert with Xen, but I think Diego summarized the problem well. xen-sources isn’t quite as up-to-date as gentoo-sources, but for the intended purpose it works great and it supports the latest and greatest udev etc. I blogged some of my notes at: http://rich0gentoo.wordpres… This lists a few AMIs you’re welcome to try out and replicate, and it gives instructions suitable for re-creating them from scratch. My kernels all have /proc/config.gz enabled, I think, so it should be very easy to re-create them.
The only reason I am asking is because with 2.6.26+ (if I am not mistaken) it is possible to run a domU with the mainline kernel. I have never used EC2, but I use Linode (http://library.linode.com/a…) and I have been running a system on gentoo-sources for a year now and it works very nicely.
Hi Marios, I’ll back up what rich0 and Diego have said. I have also spent considerable time experimenting with the various kernels on Gentoo domUs (ideally I would like to run hardened-sources on my production boxes) on the EC2 platform. The only success I have had so far is with xen-sources. I must admit I haven’t gone to the trouble of trying to patch hardened-sources with the Xen patchset(s), or vice-versa. I’ll admit this is right at the edge of my comfort zone, so I don’t expect much success without a helping hand from ‘upstream’, whichever stream that may be! If I make any headway I’ll be sure to write it up.