Even though I’ve heard Patrick complaining about this many times, I would never have been able to assess how much of the tree goes untested if I hadn’t started my own little tinderbox. Now, I’m probably hitting more problems than Patrick because I’m running ~arch (or mostly ~arch) with --as-needed enabled, but it still shows that there is a huge amount of stuff that needs to be fixed, or dropped.
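For reference, this is more or less the kind of setup I’m talking about; a minimal make.conf sketch, with the keyword line assuming an amd64 box like mine:

```bash
# make.conf (sketch): run the unstable tree with --as-needed enabled.
# --as-needed tells the linker to drop libraries whose symbols are not
# actually used, which exposes packages with broken link lines.
ACCEPT_KEYWORDS="~amd64"
LDFLAGS="-Wl,--as-needed"
```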
Up to now I’ve been using GCC 4.1, and I still hit build failures with it; now I’ve switched to GCC 4.3, even though the tracker already showed a bad situation; and of course there are packages that didn’t have bugs opened just yet, because nobody had built them recently.
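Switching compilers on the tinderbox is the usual Gentoo dance; a sketch of the procedure, where the exact profile name is just an example and depends on the installed GCC version:

```bash
gcc-config -l                          # list installed compiler profiles
gcc-config x86_64-pc-linux-gnu-4.3.2   # select the GCC 4.3 profile (example name)
source /etc/profile                    # pick up the new toolchain paths
emerge --oneshot sys-devel/libtool     # rebuild libtool against the new GCC
```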
Still, honestly, supporting the new compilers is not my main concern; there are packages that won’t build with GCC 4.3 just yet, like VirtualBox, just as there are packages that still don’t compile with GCC 4.0. What concerns me is that there is stuff that hasn’t been tested at all. For instance, sys-fs/diskdev_cmds, which was marked ~amd64, was totally broken, with fsck.hfs causing a segmentation fault as soon as it was executed (the version that is now available works; the old one has been taken out of the keyworded tree).
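A crash like that doesn’t even need a real test suite to be caught; a trivial smoke test along these lines would have flagged it, since a process killed by a signal exits with a status of 128 plus the signal number:

```bash
# Run the tool once and check whether it died on a signal;
# SIGSEGV is signal 11, so a segfault shows up as exit status 139.
fsck.hfs >/dev/null 2>&1
ret=$?
if [ "${ret}" -ge 128 ]; then
    echo "fsck.hfs died with signal $(( ret - 128 ))" >&2
fi
```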
Since even upstream sometimes fails, one should also take the packages’ tests into consideration, possibly ensuring their failures are meaningful, and not just “ah, this would never work anyway”. If you check dev-ruby/ruby-prof, the test suite is executed, but a specific test, which is known to fail, is taken out of it first. This is actually pretty important because it saves me from using RESTRICT to disable the whole testsuite, and executing the remaining tests helped me when new code was added to support rdtsc on AMD64, which was broken. The broken code never entered the tree, by the way.
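The technique itself is simple; a hypothetical sketch of what such a src_test may look like (the file name and runner script here are placeholders for illustration, not copied from the actual ruby-prof ebuild):

```bash
src_test() {
    # Drop the single known-failing test instead of RESTRICTing the
    # whole suite; "test_known_bad.rb" is a made-up placeholder name.
    rm "${S}/test/test_known_bad.rb" || die "cannot remove known-bad test"

    # Run what remains of the suite.
    ruby -Ilib:test test/test_suite.rb || die "tests failed"
}
```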
Unfortunately, doing a run with FEATURES=test enabled is probably going to waste my time, since I expect a good part of the tree to fail with that; with some luck, if Zac implements a soft-fail mode for tests, I’ll be able to do such a run in the next months. I wonder if the run this time will be faster: I’ve moved my chroots partition from XFS to ext4(dev), and it seems to be seriously faster. I guess once 2.6.28 is released I’ll move the rest of my XFS filesystems to ext4 (not my home directory yet though, which is ext3, nor the multimedia stuff that is going to remain HFS+ for now).
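Until something like that exists, tests have to be enabled selectively rather than tree-wide; for example, one package at a time:

```bash
# Run the test suite for a single package without enabling
# FEATURES=test globally in make.conf.
FEATURES="test" emerge --oneshot dev-ruby/ruby-prof
```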
My build run also has some extra bashrc tests, besides the ones I already wrote about, that is, the checks for misplaced documentation and man pages. One checks for functions from the most common libraries (libz, libbz2, libexpat, libavcodec, libpng, libjpeg) that get bundled in, to identify possibly bundled-in copies of those; another checks for the functions that are considered insecure and dangerous by libc itself (those for which the linker will emit a warning). It is interesting to see their output, although a bit scary.
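To give an idea of how such a check can work, here is a minimal sketch of the approach, assuming binutils’ nm and a post-install bashrc hook where ${D} points at the image directory; the symbol lists are illustrative, not the ones my bashrc actually uses:

```bash
# Symbols that, if *defined* inside a package's own ELF objects,
# hint at a bundled copy of a common library (illustrative list).
bundled_syms="adler32 BZ2_bzCompressInit XML_ParserCreate png_create_read_struct jpeg_CreateDecompress"
# Functions libc itself flags as dangerous; linking against them
# makes the linker emit a warning (illustrative list).
insecure_syms="gets mktemp tmpnam tempnam"

find "${D}" -type f | while read -r elf; do
    # Cheap ELF check so we do not run nm on scripts and data files.
    file "${elf}" | grep -q 'ELF' || continue

    for sym in ${bundled_syms}; do
        # A defined copy of a library symbol suggests a bundled library.
        nm -D --defined-only "${elf}" 2>/dev/null | grep -q " ${sym}$" \
            && ewarn "${elf}: defines ${sym} (possible bundled library)"
    done

    for sym in ${insecure_syms}; do
        # An undefined reference means the program calls the unsafe function.
        nm -D --undefined-only "${elf}" 2>/dev/null | grep -qw "${sym}" \
            && ewarn "${elf}: references insecure function ${sym}"
    done
done
```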
Hopefully, the data that I gather and submit on Bugzilla for these builds will allow us to have a better, more stable, and more secure Portage tree as time goes by. And hopefully ext4 won’t fry my drives.