Ah, LXC! Isn’t that good now?

If you’re using LXC, you might have noticed that there was a 0.8.0 release lately, finally, after two release candidates, one of which was never really released. Would you expect everything to go well with it? Hah!

Well, you might remember that over time I found that the way you’re supposed to mount directories in the configuration files changed: first the destination was relative to the path you use as root, then to the default root path used by LXC. And every time that happened, no error message was issued to tell you that you were now mounting directories outside of the tree the containers run in.

Last time, the problem I hit was that if you use an LVM volume instead of a path, LXC expected the mount destinations to be based on the volume’s full realpath, which in the case of LVM volumes is quite hard to know by heart. Yes, you can look it up, but it’s a bit of a bother. So I ended up patching LXC to allow using the old format, based on the path LXC mounts the root to (/usr/lib/lxc/rootfs). With the new release this changed again, and the path you’re supposed to use is /var/lib/lxc/${container}/rootfs — again a change in a micro bump (rc2 to final) which is not documented… sigh. This would be enough to irk me, but there is more.
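
To make this concrete, here’s a minimal sketch of a bind-mount entry under the new destination scheme (the container name and paths are made up for the example):

lxc.rootfs = /dev/vg0/mycontainer
# the mount destination now has to be spelled out in full
lxc.mount.entry = /srv/portage /var/lib/lxc/mycontainer/rootfs/usr/portage none bind 0 0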

The new version also seems to have a bad interaction with the kernel when stopping a container — the virtual ethernet device (veth pair) is not cleaned up properly, and it causes the process to stall, with something insisting on calling into the kernel and failing. The result is a not happy Diego.

Even without adding in the fact that the interactions between LXC and systemd are not clear yet – the maintainers of the two projects are trying to sort out the differences between them, but at least I don’t have to care about that anytime soon – this should be enough to make it explicit that LXC is not ready for prime time, so please don’t ask.

On a different, interesting note: the vulnerability publicized today that can bypass KERNEXEC? Well, unless you drop the net_admin capability in your containers (which also means you can’t set network parameters, or use iptables), a root user within a container could leverage that particular vulnerability… which is one extremely good reason not to let untrusted users have root on your containers.
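
Dropping the capability is a single line in the container’s configuration file; a sketch, with the caveats on networking I just mentioned:

lxc.cap.drop = net_admin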

Oh well, time to wait for the next release and see if they can fix a few more issues.

Linux Resource Containers and Gentoo, another step

I’m currently tackling a new job, the details of which I’m not going to disclose, but one of the steps of my “pre-work setup” task was to create a new LXC-based guest; the reason is quite easy to guess: I wanted a clean slate to work on, rather than my messed-up live workstation.

I could have used Fedora or Ubuntu in KVM, or even a Gentoo install in KVM, but I still like the idea of LXC a lot: it’s a very nice “virtualisation” technology that does not consume a huge amount of resources, nor does it require special kernel support (it works mostly fine with vanilla kernels). And as I’m one of its maintainers in Gentoo, I also wanted to check whether I could improve the situation.

Thanks to Adrian Nord, I’ve been able to learn quite a bit about LXC even without having to follow the upstream mailing lists. Unfortunately, the documentation about using LXC is scattered and sometimes messed up, so it took me a while to get this to work as intended. So here are a few notes that might come in useful to other people wanting to use it:

I currently haven’t packaged lxc-0.6.5, which was released earlier this week; my reason for avoiding it is that I cannot get it to work, at least I couldn’t when it was released; once I find what the heck is wrong with it, I’ll make sure to add it. Unfortunately I cannot look into it as long as I’m using the LXC guests to do real work. On a similar, and maybe related, note: I couldn’t get 0.6.4 to work on the 2.6.33 release candidates either, so it might be a version-specific problem.

You cannot rely on udev, neither to have /dev filled in, nor to have working hardware detection; I guess that might be obvious given that we’re talking about virtual environments, but there is a catch: by default, if you just bind-mount /dev (like I was used to doing with chroots), you end up with the guest having full access to the devices on the host. For that to be safe you need to set up an access control list restricting which device nodes are allowed to be accessed.

The easiest way to handle the problem with /dev, as Adrian pointed out to me, is to use a static /dev; this does not mean using the static-dev package, as that installs a huge amount of extra stuff we will probably never need, while at the same time missing some other devices, such as random on my system. My solution? This tarball – not even compressed, as it’s just metadata! – creates the subset of devices that actually seem to be used.
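
If you’d rather create the nodes by hand than grab the tarball, here’s a sketch of the same subset, run from the guest’s root directory, with the standard major/minor numbers:

mknod -m 666 dev/null c 1 3
mknod -m 666 dev/zero c 1 5
mknod -m 666 dev/random c 1 8
mknod -m 666 dev/urandom c 1 9
mknod -m 666 dev/tty c 5 0
mknod -m 600 dev/console c 5 1
mknod -m 666 dev/ptmx c 5 2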

You don’t want to bind-mount /dev or /dev/pts; you want instead for LXC to take care of the latter for you: not only will it mount a new instance of the PTS filesystem, it’ll also bind the tty* devices to pseudo-terminals, allowing you to access the various virtual consoles through the lxc-console command. To do that you need a bit of configuration in the file:

lxc.tty = 12
lxc.pts = 128

Note the numbers there: the second one, lxc.pts, is, as of 0.6.4, unused, and just needs to be non-zero so that the new /dev/pts instance is created properly. The former is important instead: that’s the number of TTYs that LXC will wrap around to pseudo-terminals. You want that to be at least 12 for Gentoo-based guests. The reason is that the consolefont and keymaps init scripts will access tty1 through tty12 during start-up, and will mess up your system badly if those are not wrapped. There are two extra catches: the first is that if the devices don’t exist, the init scripts will create regular files with those names (which might look quite strange when you go debugging your problems); the second is that you need to have the device files for the given number of ttys around: LXC will silently fail if it cannot find the file to bind at the device path. Which also takes a while to debug.
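
Since the device files have to exist for the binding to work, it’s easiest to create all twelve in one go; again a sketch, run from the guest’s root (the real TTYs are character devices on major 4):

for i in $(seq 1 12); do mknod -m 620 dev/tty${i} c 4 ${i}; done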

I still haven’t finished making sure that the cgroup device access functions work as intended, so for now I won’t post anything about those just yet. But you might want to look into lxc.cgroup.devices.allow to whitelist the device nodes that I have in the tarball, with the exclusion of the 4:* range (which are the real TTYs).
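
For the record, I’d expect the whitelist to come out along these lines; a sketch, untested as I said, denying everything and then allowing the nodes from the tarball:

# deny everything, then allow null, zero, random, urandom,
# tty, console, ptmx and the pseudo-terminal range
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 136:* rwm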

Now, maybe I’ll be able to add an init script with the next release of LXC I’m going to package!

I know you missed them: virtualisation ranting goes on!

While writing init scripts for qemu, I was prodded again by Luca to look at libvirt instead of reinventing the wheel. You probably remember me ranting about the whole libvirt and virt-manager suite quite some time ago, as it really wasn’t my cup of tea. But then I gave it another try.

*On a very puny note, what’s up with the lib- prefix? libvirt, libguestfs… they don’t look even remotely like libraries to me. Sure, there is a libvirt library in libvirt, but then shouldn’t the daemon be called simply virtd?*

The first problem I found is that the ebuild still tries to force dnsmasq and iptables on me if I have the network USE flag enabled; it turns out that neither is mandatory, so I have to ask Doug to either drop them or add another USE flag for them, since I’m sure they are a pain in the ass for other people besides me. I know quite a few people ranted about dnsmasq in particular.

Having sidestepped that problem, I first tried, again, to use the virt-manager graphical interface to build a new VM. My target was to try re-installing OpenSUSE, this time using the virtio disk interface.

A word of note about qemu vs. qemu-kvm: at first I was definitely upset by the fact that the two cannot be present on the same system; this is particularly nasty considering that it takes a little while for the qemu-kvm code to be bumped when a new qemu is released. On the other hand, after finding out that, yes, qemu allows you to use virtio for disk devices but no, it doesn’t allow you to boot from them, I decided that upstream is simply going crazy. Reimar, maybe you should send your patches directly to qemu-kvm; they would probably be considered, I guess.

The result of the wizard was definitely not good; the main problem was that the selection of the already-present hard disk image silently failed, and I had to input the LVM path myself, which at the time felt like a minor problem (although, another strange thing, it could see just one of the two volume groups I have in the system); but the result was… definitely not what I was looking for.

The first problem was that the selection dialog I thought was not working was working all right… just on the wrong field, so it replaced the path to the ISO image to use for installing with that of the disk again (which, as you might guess, does not work that well). The second problem was that even though I set explicitly that I wanted a Linux version with support for virtio devices, it didn’t configure it to use virtio at all.

Okay, time to edit the configuration file by hand; I could certainly use virt-manager to replace vinagre for accessing the VNC connections (over a Unix path instead of TCP/IP to localhost), so that would be enough for me. Unfortunately, the configuration file declares itself to be XML; if you know me, you know I’m not one of those guys who go away screaming as soon as XML is involved, and even though I dislike it as a configuration format, it probably makes quite a bit of sense in this case: I found out for myself, trying to make the init script above usable, that the configuration for qemu is quite complex. The big, bad turn-down for me is that *it’s not XML, it’s aXML (almost XML)!*

With the name aXML I refer to all those uses of XML that barely use the syntax and none of the features. In this particular case, the whole configuration file, while documented for humans, lacks an XML declaration as well as any kind of doctype or namespace that would tell software like, say, nxml what the heck it is dealing with. And more to the point, I could find no Relax-NG or other kind of schema for the configuration file; with one of those, I could turn Emacs into a powerful configuration file editor: it would know how to validate the syntax and allow completion of elements. Lacking that, it’s quite a task for a human to deal with.

Just to make things harder, the configuration file, which, I understand, has to represent the very complex parameters that the qemu command line accepts, is not really simplified at all. For instance, if you configure a disk, you have to choose its type, between block and file (which is normal operation even for things like iSCSI); unfortunately, to configure the path where the device or file is found, you don’t simply have a <source>somepath</source> element: you need to provide <source dev="/path" /> or <source file="/path" /> — yes, you have to change the attribute name depending on the type you have chosen! And no, virsh does not help you by telling you that you had an invalid attribute or left one empty; you have to guess by looking at the logs. It doesn’t even tell you that the path to the ISO image you gave is wrong.
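
To show what I mean, here’s a sketch of the two disk stanzas side by side, with made-up paths; note the attribute changing with the type:

<disk type='block' device='disk'>
  <source dev='/dev/vg0/opensuse'/>
  <target dev='vda' bus='virtio'/>
</disk>

<disk type='file' device='cdrom'>
  <source file='/srv/iso/opensuse-dvd.iso'/>
  <target dev='hdc' bus='ide'/>
</disk>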

But okay, after fixing the XML file so that the path is correct, so that network card and disks use virtio and all that stuff, as soon as you start it you can see a nice -no-kvm in the qemu command line. What’s that? Simple: virt-manager didn’t notice that my qemu is really qemu-kvm. Change the configuration to use kvm and surprise, surprise: libvirtd crashes! Okay, to be fair, it’s qemu that crashes first and libvirtd follows, but the whole point is that if qemu is the hypervisor, libvirtd should be the supervisor, and not crash if the hypervisor it launched doesn’t seem to work.
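
(For reference, the hypervisor selection is a single attribute on the root element of the domain file, at least as far as I understand it:)

<domain type='qemu'>  <!-- plain qemu; this is what gets you -no-kvm -->
<domain type='kvm'>   <!-- what it should have picked on its own -->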

And it gets even funnier: if I launch the same qemu command as root, it starts up properly; without network, but properly. Way to go, libvirt; way to go. Sigh.

Virtualisation woes, again

I know this is starting to get old, what with my ranting about virtualisation software, but since I’m trying my best to put Yamato’s power to use for software testing, I’m still working on getting virtualised systems to work properly for me.

In a long series of blog posts ranting about VirtualBox, QEmu, KVM and so on, there was exactly one system that had worked quite fine up to now: Windows XP (SP3) under VirtualBox. With the latest release, though, this broke too: networking started up, then came crashing down, with a striking resemblance to an old Solaris problem.

Since I needed my Windows XP virtual machine working for a job, I tried porting it to Parallels on my iMac, using the Parallels demo (since my license was only valid for the 3.x series). After waiting for the 64GB image file to convert, it turned out there was no hope of getting it to start: the VirtualBox additions drivers crash with a blue screen of death at boot when they are executed outside of a VirtualBox instance; the Windows Recovery Console does not allow disabling the drivers from loading, and deleting the drivers to keep them from loading was not an option either, since they get installed in the program files directory (which the Recovery Console cannot access).

In the end, given the absolute unreliability of VirtualBox on every operating system at this point, I simply gave up and paid to upgrade my license to Parallels 4, which now provides my only Windows XP instance (which I’m still, unfortunately, tied to for work), and deleted VirtualBox from my system. Why, you’d ask, since a broken network is far from the biggest problem out there? Well, the biggest problem, and the final straw that broke the camel’s back, was that while I was trying to figure out why Samba was not working, VirtualBox’s network filter module crashed the kernel. So what? Well, VirtualBox decided that rather than using the quite well-tested mixed kernel/userland TUN/TAP networking system, or the userland virtual network (with tap interfacing it to the rest) provided by VDE, they had to provide a kernel module instead. For performance reasons or, most likely, so that they can have the same interface to the network internals across different operating systems. Do I have to make it explicit how this is a problem?

Interestingly, while writing this I noticed that there are problems downloading VirtualBox, which also reminded me of how many times they messed up the ebuilds by changing the tarballs…

But it doesn’t stop here. Remember the NetBSD trouble with networking I reported about a month ago? Well, I wanted to see if anything changed with the new NetBSD 5.0 release (I actually wanted to make sure that feng detected the newly-added POSIX Message Queue support properly), but still no luck: I don’t see any network card with whatever model I provide to KVM, including the e1000 that I’d expect NetBSD to support at least.

On the other hand, I was at least able to get Ubuntu (9.04) working on KVM; the next step is Fedora 11, so I can actually test feng on other distributions as well as Gentoo.

More virtually real troubles

So, after fighting with QEmu and surrendering to KVM, I finally got a FreeBSD 7.1 vanilla instance and an OpenSolaris instance running; I made sure that feng builds on both, and since I was there I also fixed up the SCTP autoconf check on both, so that feng can ideally speak SCTP with either of them.

A note here for those interested: SCTP (Stream Control Transmission Protocol) is a protocol, alternative to TCP and UDP, that is designed to work well for streaming applications; the fact that feng supports it is more a proof of concept than an actually useful feature, and I’m sincerely not sure how well it works nowadays, but since I already had to fight to get it to build correctly on Linux, I just wanted to fix it up for the FreeBSD and Solaris implementations as well. I assumed that Apple had its own implementation too, but even though there are APPLE defines in the FreeBSD implementation, at least OS X 10.5 lacks any SCTP support that I can see.
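
For the curious, the kind of check involved looks roughly like this; a sketch, not feng’s actual macro. The header wants its prerequisite includes on the BSDs, and the function lives in libsctp on Linux but in libc on the BSDs and Solaris, which is what AC_SEARCH_LIBS is for:

dnl not feng's actual macro, just the shape of the check
AC_CHECK_HEADERS([netinet/sctp.h], [], [],
  [#include <sys/types.h>
   #include <sys/socket.h>])
AC_SEARCH_LIBS([sctp_sendmsg], [sctp])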

I have already reserved a logical volume for Gentoo/FreeBSD 7.1, which I’m hopefully going to test today, but in the mean time I wanted to fix up NetBSD too, since I have seen that it also has an SCTP stack, and since none of the three stacks we now support is identical to the others, it seemed worth looking into. Unfortunately, NetBSD is proving to have no network to offer me: while I set up the KVM instance just like any other, no matter which model I use, I can see no device in the ifconfig -a output of NetBSD. I chose the full installation, but it still doesn’t seem to have much; the documentation doesn’t seem to help either.

I guess NetBSD will keep waiting in line for now, unless somebody has a suggestion on how to deal with it.

Virtual machine, real problems

Since I bought Yamato I have been trying my best to make use of the AMD-V support in the Opterons; this included continuing the fight with VirtualBox to get it to work properly with Solaris and Fedora, then trying RedHat’s virt-manager and now, after the failure the other day, QEmu 0.10 (at Luca’s insistence).

The summary of my current opinions is something like the following:

  • VirtualBox is a nice concept, but the limitations in the “Open Source Edition” are quite nasty, plus it has huge problems (at least in the OSE version) with networking under Solaris (which is mind-boggling to me, since both products are developed by Sun), making it unusable for almost anything in my book; replacing the previously-used Linux tun/tap code with its own network modules wasn’t very nice either, because it reminded me of VMware, and it didn’t solve much in my case;
  • RedHat’s virt-manager is a nice idea, but it has limitations that are (quite understandably, from one point of view) tied to the hardcoding of RedHat-style systems; I say quite understandably because I’m not even dreaming of asking RedHat to support other operating systems before they feel their code is absolutely prime-time ready; on the other hand, it would be nice if there were better support for Gentoo, maybe with an active branch for it;
  • I still don’t like the kqemu approach, so I’m hoping for QEmu support for Linux’s own KVM interface with the next kernel release (2.6.29); it should be feasible; on the other hand, setting up QEmu (or kvm, manually) is quite a task the first time.

So while I’m using VirtualBox to virtualise a Windows XP install (which, alas, I have to use for some work tasks and to offer Windows support to my “customers”), I decided to try QEmu for a FreeBSD (vanilla) virtual system; I needed a vanilla FreeBSD to try a couple of things out, so that was a good place to start. I was actually impressed by the sheer speed of the FreeBSD install in the virtual system, even without kqemu or KVM; it indeed took less time than on my old test systems. I don’t know if the I/O difference between QEmu and VirtualBox is because VirtualBox uses more complex virtual disk images (with recovery data, I expect), or because I set QEmu to work straight on a block device (an LVM logical volume); I had, though, to curse a bit to get networking working.
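
For reference, running straight off a logical volume doesn’t take much; something along these lines, with made-up volume and socket paths, is how I invoke it (the -net vde option assumes qemu was built with VDE support):

qemu -hda /dev/vg0/freebsd -m 512 \
    -net nic,model=e1000 -net vde,sock=/var/run/vde.ctl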

An aside on networking: since I wanted to be able to interface the virtual machines with the rest of my network transparently, I decided to give net-misc/vde a try; unfortunately, getting the thing working has been more troublesome than expected. For one, if you don’t set up the TAP device explicitly with OpenRC, vde will try to do so for you, but on my system it put udev in a state where it continuously took the interface up and down; quite unusable. Secondly, I had a problem with dhcpd: even if I set the DHCPD_IFACE variable in /etc/conf.d/dhcpd, the init script does not produce the proper service dependencies; I have to set RC_NEED explicitly. In both cases the answer would be “dynamic” dependencies for the scripts, calculating the needed network services based on the settings in the conf.d files. I guess I should open bugs for those.
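
In the meantime, this is roughly the setup that works for me; a sketch assuming tap0 as the interface name: let OpenRC create the TAP device, attach vde_switch to it, and spell out the dhcpd dependency by hand:

# /etc/conf.d/net: let OpenRC own the device
tuntap_tap0="tap"
config_tap0="192.168.2.1/24"

# then attach the switch to the existing device
vde_switch --tap tap0 --daemon

# /etc/conf.d/dhcpd: explicit dependency, since it's not computed
DHCPD_IFACE="tap0"
RC_NEED="net.tap0"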

Once I finally had networking working properly, I set up SSH, connected, and started the usual task of basic customisation. The first step for me is always to get zsh as my shell; not bash, because I don’t like bash as a project, and I find zsh more useful anyway. But once it started building m4, and in particular testing strstr() for time linearity, the virtual machine froze solid; qemu started taking 100% CPU constantly, and even after half an hour it never moved from there. I aborted the VM and tried again, hoping it was just a glitch, but it’s perfectly reproducible. I don’t know what the problem is with that.

So I decided to give installing Solaris a try: I created a new logical volume, started up qemu again and… it froze solid during boot from the 2008.11 DVD.

In truth, I’m disappointed, because the FreeBSD install really looked promising: fast, nice, not loading more than a single core (I have eight, I can spare one or two for constantly-running VMs), and it also worked fine when running as an unprivileged user (my own) after giving it access to the kqemu device and the block device with the virtual disk. It didn’t work as nicely with the tun/tap support in qemu itself in this setup, since that required root to access the tap device, but at least with vde it reduced the amount of code running privileged.

On the other hand, since the KVM and QEmu configurations are basically identical (besides the fact that they emulate different network cards), I just tried kvm again, using the manual configuration I used for QEmu, and vde for networking (networking configuration was what made me hope to use virt-manager last time, but now I know I don’t need it); it does seem faster, and it also passed the strstr() test from before. So I guess the winner this round is KVM, and I’ll wait for the next Linux release to test the QEmu+Linux KVM support.

Post Scriptum: even KVM is unable to start the OpenSolaris LiveDVD, though, so I wonder if it’s a problem with Solaris itself; I’d like to try the free-as-in-soda version of Solaris 10, but the “Sun Download Manager” does not seem to work with IcedTea6, and downloading that 4GB image with Firefox is masochistic.

Free virtualisation – not working yet

If you’ve been following my blog for a while, you probably remember how much I fought with VirtualBox, once it was released, to get it to work so that I could use OpenSolaris. Nowadays, even with some quirks, VirtualBox Open Source Edition is fairly usable, and I’m using it not only for OpenSolaris but also for a Fedora system (which I use for comparing issues with Gentoo), a Windows install (that I use for my job), and a Slackware install that I use for kernel hacking.

Obviously, the problem is that the free version of VirtualBox comes with some disadvantages, like not being able to forward USB devices, having a limited set of hardware to virtualise, and so on. This is not much of a problem for my use, but of course it would have been nicer if they had just open-sourced the whole lot. I guess the most obnoxious problem with VirtualBox (OSE at least; I’m not sure about the proprietary version) is the inability to use a true block device as a virtual disk; you have to deal with the custom image format instead, which is really slow at times and needs to pass through the VFS.

For these reasons Luca has suggested many times that I try out kvm instead, but I have to say one nice thing about VirtualBox: it has quite an easy-to-use interface, which allows me to set up new virtual machines in just a few clicks. And since nowadays it also supports VT-x and similar, it’s not so bad at all.

But anyway, I wanted to try kvm, and tonight I finally decided to install it, together with the virt-manager frontend; although there are lots of hopes for this, it’s not yet good enough, and it really isn’t usable for me at all. I guess I might actually get to hack at it, but maybe it’s a bit too soon for that yet.

One thing I really dislike about the newer versions of VirtualBox is that they went the VMware route and decided to create their own kernel module to handle networking. They say it’s for performance reasons, but I’d expect the main reason is that it allows them to have a single interface between Linux, Windows, Solaris and so on. The TUN/TAP interface is Linux-specific, so supporting that together with whatever they have been doing on the other operating systems is likely a cost for them. Although I can understand their reasoning, it’s not that I like it at all: more kernel code means more code that can crash your system, especially when not using the Open Source Edition.

Interestingly enough, RedHat’s Virtual Machine Manager is instead doing its best to avoid creating new network management software, and uses very common pieces of software: dnsmasq as the DHCP server, the Linux kernel’s bridging support, and so on. This is very good, but it also poses a few problems: first of all, my system already runs a DHCP server; why do I have to install another? And it’s not just that: instead of creating userspace network bridging software, like VirtualBox does, it uses what the kernel already provides, in the form of the bridging support and iptables, to forward requests and create NAT zones. This is all fine and dandy, as it reduces feature duplication in software, but it obviously requires more options to be enabled in kernels that might otherwise not have them enabled at all.

As it happens, my system does have bridging support enabled, but neither iptables nor the masquerade target and similar. Up to now I never used them, so they had no reason to be there. I also sincerely can’t understand why it needs them if I don’t want a NAT, but it doesn’t seem to allow me to set up the network myself; that would mean being able to just tell it to create a new interface and add it to a bridge I manage myself, leaving all the details like DHCP to me, and thus not requiring iptables at all.
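
What I have in mind is nothing more than the usual bridge-utils routine; a sketch, with made-up interface and user names, of what I’d happily keep doing by hand:

# create the bridge and enslave the real interface
brctl addbr br0
brctl addif br0 eth0
# hypothetical user name; a persistent tap owned by my user
tunctl -t tap0 -u flame
brctl addif br0 tap0
ifconfig br0 192.168.0.2 netmask 255.255.255.0 up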

Also, even though there is a way to configure an LVM-based storage pool, it does not seem to allow me to directly choose one of the volumes of that pool when it asks me what to use as a virtual disk. Which is kinda silly, to me.

But these are just minor annoyances; there are much bigger problems: if the modules for kvm (kvm and kvm-amd on my system) are not loaded when virt-manager starts, it obviously lacks a way to choose KVM as the hypervisor. That is understandable, but it also fails to see that there is no qemu in the system, and tries to use a default path that is not only specific to RedHat but also very much non-existent here (/usr/lib64/xen/bin/qemu-dm, on x86-64, just checking the uname!), returning an error that that particular command couldn’t be found. At the very least it should probably have said that it couldn’t find qemu, rather than naming the particular path. It would also have been nice to still allow choosing kvm, and then report that the device was missing (and suggest loading the modules; VirtualBox does something like that already).

To that you have to add that I haven’t been able to finish the wizard to create a new virtual machine yet. That’s because it does not check for permission to create the virtual machine before proposing that you create one, so it lets you spend time filling in the settings in the wizard, and then fails, telling you that you don’t have permission to access the read/write socket, which by default is accessible only by root.

Even though it’s obvious from the 0-prefixed version that the software is still not production-level, there are also a few notes on the way the interface is designed. While it’s a good idea to use Python to write the interface, since it allows much faster testing of the code and so on, and speed is not a critical issue here, as the work is carried out by C code in the background, every error is reported with a Python traceback, one of the most obnoxious things to show users. In particular, for the permission problem I just listed, the message is a generic error: “Unable to complete install: ‘unable to connect to ‘/var/run/libvirt/libvirt-sock’: Permission denied’”. What is the user supposed to make of that? Of course virtualisation is not something a first-time Linux user is going to play with, but still, even a power user might be scared by errors that appear this way, with a traceback attached (which most users would probably rationally associate with “the program crashed”, like bug-buddy messages, rather than with “something wasn’t as the software expected”, which is what happened).

On a very puny level, instead, I have to note that just pressing Enter to proceed to the next page of the wizard fails, and so does using Esc to close the windows. Which is very unlike any other commonly-used software.

So, to cut the post short: what is the outcome of my test? Well, I didn’t get to test the actual running performance of KVM against VirtualBox, so I cannot say much about that technology. I can say that there is a long road ahead for the Virtual Machine Manager software to become a frontend half as good as VirtualBox’s or VMware’s. This does not mean the software is badly written, at all; by design it’s not bad at all. Of course it’s very focused on RedHat and less on other distributions, which explains a lot of the problems I’ve noticed, but in general I think it’s going the right way. A more advanced setup for advanced users would be welcomed by me, as would ISO image management like the one VirtualBox has (even better if you could assign an ISO image to a specific operating system, so that when I choose “Linux”, “Fedora”, it would list me the various ISOs for Fedora 8, 9, 10, and then have an “Other” button to take a different ISO to install, if desired).

I’ll keep waiting to see how it evolves.

Fedora 9 and VirtualBox

For a job I have been hired to do, I need to keep a virtual machine with the development environment. The reason for this is that there are a few quirks in that environment which have caused me headaches before.

As it’s not like the other virtual machine (on the laptop), which requires perfect Windows support, I decided to go with VirtualBox again. Which, by the way, forces me to keep GCC 4.2 around. But oh well, that’s not important, is it?

The choice of distribution inside wasn’t difficult, I thought. I didn’t want to rebuild the whole system set in a vbox, so no Gentoo. I avoided Debian and Debian-based distributions: after the debacle about OpenSSL I won’t trust them easily. Yes, I am picking on them because of that; it was a huge problem. And while I found openSUSE nice to install on a computer last month, I thought it didn’t suit my needs well, so I decided to go with Fedora 9.

I had used Fedora a couple of times before; it’s a nice development environment when I need something up quickly and cleanly. Unfortunately I found Fedora 9 a bit rough, more than I remembered.

I wasn’t planning on giving it Internet access at first, because of my strange network setup (I will draw up a diagram, as talking with Joshua made me think it’s pretty uncommon); but then the package manager refused to install the kernel-devel package from the DVD; it had to use the network. So I tried to configure the network with a different IP and netmask, but this didn’t work cleanly either: the network settings interface seemed to clash with NetworkManager. I admit I didn’t want to spend too much time looking for documentation, so I just created a “VBOX” entry in NetworkManager which I select at each boot.

After this was done, I updated all the packages, as the update manager asked me to, and tried to install the new kernel-devel. This was needed by the VirtualBox guest tools, which I wanted to install to get the “magic” mouse grab. But VirtualBox refused to do so, because Fedora 9 ships with a pre-release Xorg 1.5 that they don’t intend to support. Sigh.

I’m not blaming Fedora here. Lots of people blamed them for breaking compatibility with nVidia’s drivers, but they did give good enough reasons for using that particular Xorg version (I know I read a blog post about this, but I don’t remember which Planet it was on, nor the title). What I’m surprised about is that VirtualBox, despite being largely open-sourced, seems to keep the guest additions closed-source, which in turn causes a few problems.

Different Linux distributions have different setups: some might use different Xorg versions, others different kernel building methods, and I sincerely think the answer is not the LSB. Interestingly, you can get the VMware mouse and video drivers directly from Xorg nowadays (although I admit I haven’t checked how well they work), but you cannot get the VirtualBox equivalents.

If any Sun (ex-Innotek) employee is reading me now, please consider freeing your guest additions! Your problems with supporting different Linux distributions would then be very much slimmed down: we could all package the drivers, so that instead of having to connect the ISO image of the additions, mount it, install the kernel and Xorg development files, and compile modules and drivers, the only required step would be for the distribution to identify VirtualBox like it was any other “standard” piece of real hardware.

I hope the serial device forwarding works properly, as I’ll need that too, and it has thrown me a couple of errors since I started installing Fedora… I haven’t tried it yet, though. I hope there are picocom packages for Fedora.

A “Thank You” to VMware folks

I really want to thank the VMware guys for their vmware-server… today beta2 was released, and since tonight I was having a few issues with vmware-server itself, I decided to update it and see…

I was about to curse for a while, as it was really giving me a hard time with slowness, but then I saw that it had somehow started another virtual machine I didn’t need. I started the Gentoo/FreeBSD 6.0 VM and… magic! The “negative runtime” warnings that were coming up during the boot phase are now gone! Great! Thanks, VMware.

Anyway, the stage is now in my home directory on dev.gentoo.org, waiting for one of the mirrors’ admins to pick it up and put it on the mirrors.

Right now I’m testing Emanuele’s patch to build freebsd-lib with newer binutils; I hope vapier will merge my patch into binutils by tomorrow, that would really help.

Anyway, it’s now 2:27 and I really need to sleep, as last night I didn’t sleep at all (I slept only during the afternoon, 11–16).

It’s fortunate that I can reconcile the work on Gentoo/FreeBSD with my actual job; if I had a much more demanding job, I would have to drop most of the G/FBSD maintenance, and that would be bad at its current status :(