Sealed tinderbox

I’ve been pushing the tinderbox one notch stricter from time to time; a few weeks ago I set it up so that any network access besides the basic protocols (HTTP, HTTPS, FTP and rsync) was denied. The idea is that if the ebuilds try to access the network by themselves, something is wrong: once the files are fetched, that should be enough. Incidentally, this is why live ebuilds should not be in the tree.

Now, since I’ve received a request regarding the actual network traffic generated by the tinderbox, I decided to go one step further still, and make sure that, apart from the tasks that do require network access, the tinderbox does not connect to anything outside the local network. To do so, I set up a local rsync mirror, then added a Squid pass-through proxy that does not cache anything; at that point, rather than allowing some protocols through the router for the tinderbox, I simply reject any traffic from the tinderbox towards the Internet. All the outgoing connections originating from the tinderbox go through Yamato, so I have something like this in my make.conf:

FETCHCOMMAND="/usr/bin/curl --location --proxy yamato.local:3128 --output \"${DISTDIR}/${FILE}\" \"${URI}\""
RESUMECOMMAND="/usr/bin/curl --location --proxy yamato.local:3128 --continue-at - --output \"${DISTDIR}/${FILE}\" \"${URI}\""
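
The sealing itself can be sketched roughly as follows; the host address, the interface of the rule, and the Squid directives are all made up for illustration, the real setup will of course differ:

```shell
# Hypothetical sketch of the sealing; addresses and names are invented.

# On the router: reject anything the tinderbox sends towards the Internet,
# so only the local network (and thus Yamato) stays reachable.
iptables -A FORWARD -s 192.168.0.20 -j REJECT

# On Yamato, a minimal non-caching Squid pass-through (squid.conf excerpt):
#   http_port 3128
#   cache deny all
#   acl tinderbox src 192.168.0.20
#   http_access allow tinderbox
#   http_access deny all
```

With this in place, the only way out of the tinderbox is through the proxy on Yamato, and only for the commands that are explicitly told to use it.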

Note: while googling how to set these two variables in Gentoo to use curl, I did find some descriptions on the Gentoo Forums that provide most of this; unfortunately, all of those I found omit the --location option, which makes the fetch fail on the SourceForge mirrors and on any other mirroring system that answers with 302 redirect responses.

I also modified the bti-calling script so that the dents are sent properly through the proxy. I didn’t set the http_proxy variable globally, because that would have made the sealing moot. Instead, by setting the proxy explicitly for fetching and denting, any testsuite that tries to fetch something by itself, even via HTTP, will be denied.
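
A minimal sketch of such a wrapper, assuming bti (through libcurl) honours the http_proxy variable when it is present; the function name is my own invention, not part of bti:

```shell
#!/bin/sh
# Hypothetical wrapper around bti: the proxy is exported only for this one
# invocation, so the rest of the system keeps running without http_proxy
# set and stays sealed.
send_dent() {
    http_proxy="http://yamato.local:3128" bti "$@"
}
```

The important detail is that the variable assignment on the same line as the command scopes the proxy to that single invocation only.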

But… why should it be a problem if testsuites were to access services on the network? Well, the answer is actually easy once you understand two rules of Gentoo: what is not in package.mask is supposed to work, and any bug found needs to be fixable; testsuite results also need to be reproducible, to make sure that the package works. When you rely on external infrastructure like Git repositories, you have no way to make sure that a problem, once found, can be fixed; and when your testsuite relies on remote network services, it might fail because of connection problems, and it will fail for good if the remote service is shut down entirely.

I’ve also been tempted to remove IPv4 connectivity from the tinderbox entirely; IPv6 should be quite enough, given that it only needs to connect to Yamato, and it would be behind NAT anyway.

Too many alternatives are not always so good

I can be quite difficult when it comes to alternative approaches to the same problem: while I find software diversity an integral part of the Free Software ideal, and very helpful for finding the best approach to various situations, I’m not keen on maintaining the same code many times over because of it, and I’d rather have projects share the same code for the same task. This is why I think using FFmpeg in almost all the multimedia projects of the Free Software world is a perfectly good thing.

Yesterday, while trying to debug (with the irreplaceable help of Jürgen) a problem I was having with Gwibber (which turned out to be an out-of-date ca-certificates tree), I noticed one strange thing with PyCURL, related to this, which proves my point up to a point.

CURL can make use of SSL/TLS encryption through one of three possible libraries: OpenSSL, GnuTLS and Mozilla NSS. The first option is usually avoided by binary distributions because it is incompatible with some licensing terms; the third is required, for instance, by the Thunderbird binary package in Gentoo as it is. By default, Gentoo uses OpenSSL, whether you like it or not.

When CURL is built against OpenSSL (USE="ssl -gnutls -nss"), PyCURL links to libcrypto; given that my system is built with forced --as-needed, that also means it actually uses it. I found this quite strange, so I went to look at it: if you rebuild CURL (and then PyCURL) with GnuTLS (USE="ssl gnutls -nss"), you’ll see that it only links to libgnutls, but if you look closer, it’s using at least one libgcrypt symbol. Finally, if you build it with Mozilla NSS (USE="ssl -gnutls nss"), it will warn that it couldn’t detect the SSL library used.
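
A quick way to see which backend your libcurl was built against is its version banner; the little helper below is my own sketch, not part of curl itself:

```shell
# classify_ssl: guess the SSL backend from a libcurl version banner.
# This function is my own helper for illustration, not a curl tool.
classify_ssl() {
    case "$1" in
        *OpenSSL*) echo openssl ;;
        *GnuTLS*)  echo gnutls  ;;
        *NSS*)     echo nss     ;;
        *)         echo unknown ;;
    esac
}

# Usage, on a system with curl installed:
#   classify_ssl "$(curl --version | head -n 1)"
```

The banner printed by `curl --version` names the SSL library (e.g. `OpenSSL/0.9.8g` or `GnuTLS/2.4.1`) right next to the libcurl version, which is what the helper matches on.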

The problem here is that CURL does not provide a complete abstraction of the SSL implementation it uses, and for proper threading support, PyCURL needs to run special code for the underlying crypto library (libcrypto for OpenSSL, libgcrypt for GnuTLS). I’m sincerely not sure how big the problem would be if you mixed and matched the CURL and PyCURL backends, and I have no idea what would happen if you were to use CURL with NSS together with a PyCURL that provides no locking for crypto at all. What I can tell you is that if you change the SSL provider in CURL, you’d better rebuild PyCURL, to be on the safe side. And there is currently no way to have Portage do that automatically for you.
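
On Gentoo, the safe-side rebuild boils down to something like the following; the USE-flag change is just an example of switching backends, adapt it to your own choice:

```shell
# Example: move curl from OpenSSL to GnuTLS, then rebuild PyCURL in the
# same go so that its crypto-locking code matches the new backend.
echo 'net-misc/curl ssl gnutls -nss' >> /etc/portage/package.use
emerge --oneshot net-misc/curl dev-python/pycurl
```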

And if you are using CURL with NSS and you see Portage asking you to disable it in favour of GnuTLS or OpenSSL, you’ll know why: PyCURL is likely to be the reason. At least once the bug is addressed.

Distributions and interactions

In the article about distribution-friendly projects that I wrote for LWN (part 1 and part 2; part 3 is still to be published at the time I write this), I tried to list some of the problems that distributions face when working with upstream projects.

One interesting thing that I did overlook, and which I think is worth an article of its own, or even a book, given the time, is how to handle interaction between projects. All power users expect that developers don’t try to reinvent the wheel every time, or that if they do, they do it for a bloody good reason. Unfortunately this is far from true; and besides, when code is shared between projects, it’s quite common for mistakes to propagate over quite a large area.

A very good example of this is the current state of Gentoo’s ~arch tree. At the moment there are quite a few things that might lead to build failures because different projects are intertwined:

  • GCC 4.3 changes will cause a lot of build failures in C++ software because the header dependencies were cleaned up; similar changes happen with every GCC minor release;
  • autoconf 2.62 started catching bad errors in configure scripts, especially related to variable names; similar changes happen with every autoconf release;
  • libtool 2.2 stopped running the C++ and Fortran compiler checks by default, so packages have to take care of checking for the compilers they actually use;
  • curl 7.18 is now stricter in the way options are set through its easy interface, requiring boolean options to be provided explicitly;
  • Qt 4.4 is being split into multiple packages, and dependencies thus have to be fixed.

There are probably more problems than these, but these are the main ones. Unfortunately, the solution a few projects adopt to problems like these is not to start a better symbiotic relationship with the other projects, but to require the user to use a given version of their dependencies… which might be different from the version the user is told to use by another project… or, even worse, to import a copy of the library they use, and use that.

Interestingly enough, expat seems to be a really good example of a library imported all over the place, which is less understandable than with zlib (which is quite small on its own, although it has its share of security issues built in). I had found a few copies before, and some of them are now fixed in the tree, but in the last two days I found at least four more. Two are in Python itself, back from the dead (yes, two of them: one in celementtree and one in pyexpat); one is – probably, though I’m not sure – in Firefox, and thus in other Mozilla products, I suppose; and the last one is in xmlrpc-c, which has one of the worst build systems I have ever seen, and thus makes it quite hard to fix the issue entirely.
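
For the curious, a rough way to spot further bundled copies is to look for expat’s characteristic source files in an unpacked source tree; the helper below is my own sketch, and the path in the usage comment is just an example:

```shell
# looks_like_bundled_expat: flag a source directory that ships expat's
# core files (xmlparse.c and xmltok.c are the names of expat's own sources).
looks_like_bundled_expat() {
    [ -e "$1/xmlparse.c" ] && [ -e "$1/xmltok.c" ]
}

# Example: scan every subdirectory of an unpacked source tree:
#   find /var/tmp/portage -type d | while read -r d; do
#       looks_like_bundled_expat "$d" && echo "possible bundled expat: $d"
#   done
```

It’s a heuristic, of course: a project may have renamed the files, or only bundle part of the library, so a hit is a hint to go look, not proof.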

Maybe one day we’ll have just one copy of expat on any system, shared by everybody… maybe.