This Time Self-Hosted

Why there is no third party access to the tinderbox

So we all seem to agree that the tinderbox approach (or at least one of a few tinderbox approaches) is the only feasible way to keep the tree in a sane state without requiring each developer to test everything alone. So why is my tinderbox (and almost every other tinderbox) not accessible to third parties? And, as many have asked before, why isn’t a tinderbox such as the one I’ve been running managed by infra themselves, rather than by me alone?

The main problem is how privileges are handled; infra policies are quite strict, especially where root access is concerned. Right now the tinderbox is implemented with LXC, which cannot be depended upon for proper privilege separation, either locally or on the network: even if you firewall the container so that its MAC address only has access to limited services, root inside the container can simply change that MAC address. This approach is not feasible for infra-managed boxes, and it can cause quite a bit of trouble on private networks as well (such as mine).
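
To make the MAC-spoofing concern concrete, here is a minimal sketch of the kind of LXC network configuration involved; the file path and addresses are illustrative, not the tinderbox’s actual setup:

    # /etc/lxc/tinderbox.conf -- illustrative container network setup
    lxc.network.type = veth                  # virtual ethernet pair to the host
    lxc.network.link = br0                   # host bridge the guest attaches to
    lxc.network.hwaddr = 00:16:3e:12:34:56   # the MAC your firewall rules key on

    # ...but root inside the container can override it at any time:
    ip link set eth0 down
    ip link set eth0 address 00:16:3e:65:43:21
    ip link set eth0 up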

The obvious alternative to this is KVM; while KVM provides much better security encapsulation, it is not a perfect fit either. In the earliest iterations of the tinderbox I did consider using KVM, but even with hardware support it was too slow. The reason is that most of the work done by the tinderbox is I/O bound, to the point that even on the bare OS, with a RAID0, performance leaves a lot to be desired. While virtio-block improves the performance of virtualised systems considerably, it’s still nowhere near what you can get out of LXC, and as I said, even that is slow.
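
For reference, this is roughly the kind of setup I mean; a sketch with made-up paths and sizes, not my actual command line:

    # hypothetical qemu-kvm invocation with the guest disk on virtio
    qemu-kvm -m 4096 -smp 4 \
        -drive file=/srv/kvm/tinderbox.img,if=virtio,cache=none \
        -net nic,model=virtio -net user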

This brings up two possible solutions. One is to use tmpfs for the build, which is something I actually do on my normal systems but not on the tinderbox, for a number of reasons I’ll explain in a moment. The other is the recently-implemented Plan 9-compatible virtualisation of filesystems (in contrast to block devices upon which you build filesystems), which is supposed to be faster than NFS, as well as less tricky. I haven’t had the chance to try it out yet, and while it sounds interesting as an approach, I’m not entirely sure it’s reliable enough for what the tinderbox needs.
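
For those who want to experiment with it, the filesystem passthrough looks roughly like this with a recent QEMU/KVM; the mount tag, path and security model here are assumptions for illustration:

    # host: export a directory to the guest over virtio-9p
    qemu-kvm ... \
        -virtfs local,path=/srv/tinderbox,mount_tag=tinderbox,security_model=passthrough

    # guest: mount the shared tree via the 9p protocol
    mount -t 9p -o trans=virtio,version=9p2000.L tinderbox /var/tmp/portage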

There is another problem with the KVM approach, and it relates once again to infra policies: KVM takes a very heavy performance hit when run on hardened kernels, especially with the grsecurity features enabled. This performance hit makes it very difficult to rely on KVM on infra boxes; until somebody finds a way to work around that problem, there is very little chance of getting it working there.

Of course the question then becomes “why do you need third parties to access the tinderbox as root?”; restrictions are much easier to enforce for standard users than for root, even though you still have to deal with privilege-escalation bugs. Well, it’s complicated. While a lot of the obvious bugs are easily dealt with by looking at a build log and/or the emerge --info output, a lot require more information, such as the versions of packages installed on the system, the config.log file generated by autoconf, or a number of other files for other build systems. For packages that include computer-generated source or data files, relying on other software altogether, you might also need those sources to ensure that the generated code corresponds to what is expected.
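
As a rough illustration, this is the sort of data a triager ends up collecting by hand for a single failure; paths follow Portage’s defaults, and the package name is made up:

    # environment and build log for a hypothetical failing package
    emerge --info '=dev-libs/foo-1.2.3' > emerge-info.txt
    cat /var/tmp/portage/dev-libs/foo-1.2.3/temp/build.log

    # autoconf's record of each configure check, kept in the work directory
    find /var/tmp/portage/dev-libs/foo-1.2.3/work -name config.log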

All of this requires you to access the system first-hand, and by default it also requires root access, as the Portage build directories are not world-readable. I have thought of two ways to avoid the need to access the system, but neither is especially fun to deal with. The first is to gather the required logs and files and produce a one-stop log with all that data; it makes the log tremendously long and complex, and it requires the ebuilds to declare which files should be gathered out of a build directory to report the bugs. The other is to simply save a tarball with the complete build directory, and access that as needed.
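
A minimal sketch of what the tarball method could look like, using Portage’s /etc/portage/bashrc hook; the phase chosen and the destination directory are assumptions, not what my tinderbox actually runs:

    # /etc/portage/bashrc -- archive the build directory after each merge
    if [[ ${EBUILD_PHASE} == postinst ]]; then
        mkdir -p /var/log/tinderbox/builddirs
        tar -cjf "/var/log/tinderbox/builddirs/${CATEGORY}-${PF}.tar.bz2" \
            -C "${PORTAGE_BUILDDIR}" .
    fi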

I originally tried to implement this second method; storing the data that way is also helpful because you can then easily clean up the build directory, which is a prerequisite for building in tmpfs. Unfortunately, I soon discovered that the builds are just too big. I’m not referring only to builds such as OpenOffice, but also to a lot of the scientific software, as well as a number of unexpected packages, especially when you run their tests, as often enough tests involve generating output and comparing it to the expected results. On the pure performance scale, tmpfs combined with this method ended up taking more time than simply building on the hard drives. I’m not sure how it would scale within KVM.
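
For completeness, building in tmpfs itself only takes a mount, assuming the default Portage temporary directory and enough RAM; the size below is made up:

    # /etc/fstab -- keep Portage's build area in RAM; size is illustrative
    tmpfs   /var/tmp/portage   tmpfs   size=16G,uid=portage,gid=portage,mode=775,noatime   0 0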

With all these limitations, I hope it is clear why the tinderbox has been, up to now, a one-man project, at least for me. I’d sincerely be happy if infra could pick up the project and manage the running bits of it, but until somebody finds a way to deal with all this that doesn’t require an order of magnitude more work than what I’ve been doing up to now, I’ll keep running my own little tinderbox, and hope that others will run theirs as well.

Comments
  1. Two little questions: 1) why did you rule out linux-vserver.org completely? It’s rather fast and permits bind mounts. 2) Is there a repository for the custom scripts you use for the tinderbox?

  2. For my own system I want to keep using a vanilla kernel, and infra uses hardened kernels that are incompatible with linux-vserver. So there you go on why that (and OpenVZ, fwiw) are both ignored. All the scripts are public and available “on our GIT”: http://git.overlays.gentoo….

  3. You’ve obviously written extensively about your tinderbox project. However, right now most of what is in writing has been about the evolutionary process in getting to where you are now. How practical is it for devs to manage their own tinderboxes (either as virtual machines or whatever)? How much CPU time are we talking about to evaluate some moderate-impact change? How much setup effort to get ready for a run and generally update the tinderbox to stable or ~arch status before making a change/etc? How often do things break and need care to fix? If devs running their own tinderboxes is something that is remotely in the realms of reality then perhaps a howto or something might help.

  4. What % of slowdown are you seeing with virtio-block vs LXC (or native)? If it’s more than a few % then you may have a configuration problem …

  5. Would it be difficult to adapt ‘tinderbox’ to smaller subsets of packages? Say one that infra could run for ‘system’, and say the gnome team for ‘gnome’ or the kde team for ‘kde’.

  6. The main problem with the tinderbox is that you want ALL the available cpu cycles spent on it; even 1% more overhead is too much. =|

