If you’re using LXC, you might have noticed that there was a 0.8.0 release lately, finally, after two release candidates, one of which was never really released. Did you expect everything to go well with it? Hah!
Well, you might remember that over time I found that the way you’re supposed to mount directories in the configuration files changed, from using the path of the container’s root, to the default root path used by LXC; and every time that happened, no error message warned you that you were now mounting directories outside of the tree the container runs in.
Last time, the problem I hit was that if you try to mount a volume instead of a path, LXC expected you to use the full realpath as the base path, which in the case of LVM volumes is quite hard to know by heart. Yes, you can look it up, but it’s a bit of a bother. So I ended up patching LXC to allow using the old format, based on the path LXC mounts it to (/usr/lib/lxc/rootfs). With the new release this changed again, and the path you’re supposed to use is /var/lib/lxc/${container}/rootfs; once more, a change in a micro bump (rc2 to final) which is not documented. Sigh. This would be enough to irk me, but there is more.
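To make that first gripe concrete: under the final release’s convention, a bind mount entry in the container’s configuration would look something like this (the container name and the mounted directory are made up for the example, obviously):

    # mount targets are now rooted at /var/lib/lxc/${container}/rootfs
    # rather than /usr/lib/lxc/rootfs; "tinderbox" is a hypothetical container
    lxc.mount.entry = /var/portage /var/lib/lxc/tinderbox/rootfs/var/portage none bind 0 0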
The new version also seems to have a bad interaction with the kernel when stopping a container: the virtual ethernet device (the veth pair) is not cleaned up properly, and that causes the process to stall, with something insisting on calling into the kernel and failing. The result is a not-so-happy Diego.
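If you hit the same stall, tearing down the leftover device by hand might help, something along these lines (the device name here is made up; check the output of ip link for the actual stale entry):

    # delete the leftover veth device manually; the name is hypothetical
    ip link delete veth1a2b3c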
Even without adding the fact that the interactions between LXC and systemd are not clear yet (the maintainers of the two projects are trying to sort out the differences between them; at least I don’t have to care about that anytime soon), this should be enough to make it explicit that LXC is not ready for prime time, so please don’t ask.
On a different, interesting note: the vulnerability publicized today that can bypass KERNEXEC? Well, unless you disable the net_admin capability in your containers (which also means you can’t set the network parameters, or use iptables, from inside them), a root user within a container could leverage that particular vulnerability… which is one extremely good reason not to let untrusted users have root on your containers.
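Dropping the capability amounts to a single line in the container’s configuration, with the side effects I just noted:

    # root inside the container loses net_admin: no interface configuration,
    # no iptables, but also no access to net_admin-based exploits
    lxc.cap.drop = net_admin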
Oh well, time to wait for the next release and see if they can fix a few more issues.
Sounds like a typical case of developers who totally ignore their user base. They blindly assume that everyone out there is familiar with their source code and closely follows their git history. Of course, in such a closed-minded universe, documentation and release notes are total overkill.
Meh, I’ve been rather unsatisfied with LXC. The backend is not perfect either, but it’s OK. LXC has been difficult to use and understand for most people, however, so it gets classified as “crap, insecure, won’t use”, which is too bad, because the vserver-like technology behind it is good.
I’m still using it, and I foresee I’ll keep using it for a long time. The speed is extremely important to me for the Tinderbox, so LXC is the only realistic solution. On the other hand, I know it’s not a final solution, as you noted. Hopefully there will come a time when I can say “Finally, LXC is ready for prime time!” … although I’m not sure when that might happen.
This reminded me that Kir Kolyshkin from OpenVZ recently announced (http://openvz.livejournal.c…) that vzctl can now work (with limited functionality) with the vanilla Linux kernel. I have not tried it myself, however: I’m still using the 2.6.32-openvz kernel on my home server.
The same here: still using the OpenVZ-patched kernel, and I even recently upgraded from the four-year-old 2.6.18 to 2.6.32, and after 8 years I’m still impressed by how flawlessly this works. Actually… I was looking for a link with some of the latest news I read about the OpenVZ situation and… found that it’s the same link as above: http://openvz.livejournal.c… 🙂
I am using LXC and have done so at work for years. Before that I used OpenVZ for years, but their insistence on supporting only RHEL kernels drove me away, since I wanted new features like ext4 and then btrfs in my storage fileservers. On one server I upgraded to the 0.8.0 release, but then I quickly needed to go back to the rc, because all the support tools like lxc-ps claimed the cgroup filesystem was not mounted, when it was and the containers were all running. I reverted to 0.8.0_rc2 and all is well, as it has been for quite some time.
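For the record, you can see for yourself whether a cgroup filesystem is actually mounted, regardless of what the tools claim:

    # list the mounted cgroup filesystems, if any
    grep cgroup /proc/mounts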
Forgot to mention I made the same OpenVZ-to-LXC transition at home as well. That happened earlier than at work, mainly because my main Gentoo box at home runs MythTV, and thus I needed updated hardware drivers for my tuners, which were not very compatible with the RHEL-based kernels.
The 2.6.32-openvz kernel has ext4, which I’m using without problems for root and /boot. Everything else, except for swap and grub2, is on ZFS, which can be kept current separately. I thought about btrfs and am testing it on other machines. After following the btrfs and zfsonlinux mailing lists for some time, I do not yet want btrfs on my server.
vzctl (OpenVZ) now works on the upstream kernel; maybe it is time to switch?
I am a long-time OpenVZ user and I can’t wait to get rid of the crappy kernel I have to use (2.6.32 w/ RHEL config). Not to mention that newer udev releases require at least 2.6.34, even though they seem to work “fine” with 2.6.32 as well in my case.
Maybe it’s time to switch over to linux-vserver. It’s really a robust and mature project, although it’s driven essentially by a single developer.