Do we need the older versions of automake?

One interesting problem related to autotools is the presence in portage of many different versions of automake in multiple slots, one per minor version. Historically the reason for this is that different software requires different autotools versions in their bootstrap scripts, and thus we needed to have them all around. While this is mostly still the case, autotools have improved a lot lately, and in particular software designed to work with automake 1.8 mostly works out of the box with version 1.10, with some exceptions.

By today’s standards for autotools in portage (starting with the use of the autotools eclass), the only reasons to keep older automake versions around are the rare cases where the build system does not work with modern automake, and avoiding a full eautoreconf when just a Makefile.am file changed. But even that does not work so well, because sometimes you prefer to rebuild everything anyway for the sake of it.
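For reference, a minimal sketch of what using the autotools eclass looks like in an ebuild; the package name, patch name, and phase function here are made up for illustration, and the exact phases depend on the EAPI in use:

```shell
# foo-1.0.ebuild -- hypothetical ebuild, shown only to illustrate the
# autotools eclass; inheriting it provides eautoreconf and friends.
inherit autotools

src_unpack() {
    unpack ${A}
    cd "${S}"
    # A patch touching configure.ac or Makefile.am...
    epatch "${FILESDIR}/${P}-build.patch"
    # ...followed by regenerating the whole build system with the
    # portage-provided autotools, instead of trusting the shipped one.
    eautoreconf
}
```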

I guess one thing we could be doing is trying to migrate as many packages as possible to automake 1.10 and make sure that versions older than 1.8 are well masked (those would be ancient enough). My reason for keeping the newer ones around is to account for possible stricter checks in 1.10 that might not work for some less-than-recent software, and also because a lot of KDE-related ebuilds use the admin/ directory from the KDE 3 build system (which is an aberration built on top of autotools, rather than plain autotools), fail to work with newer automake versions, and are not trivial to port over.

Since the older autotools versions may be useful for other things, like working on very old projects that really cannot be ported, I guess removing them from portage is not really an option, but they should come with huge “warning” messages, and developers should really think hard before deciding to restrict the automake version to anything that is not “latest”.
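When a package genuinely cannot cope with the latest automake, the eclass already provides a way to pin it; a sketch, assuming the usual WANT_AUTOMAKE variable honoured by autotools.eclass (the package below is hypothetical):

```shell
# bar-0.1.ebuild -- hypothetical ebuild restricting automake.
# WANT_AUTOMAKE must be set before inheriting the eclass, since the
# eclass reads it to pull in the matching slotted automake.
WANT_AUTOMAKE="1.8"
inherit autotools
```

This is exactly the kind of restriction that should be the exception, documented in the ebuild, rather than the rule.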

Now, the “latest” option is certainly not entirely safe either: when 1.11 is added to portage, “latest” will pick it up, and that might cause quite a stir since it could break older software, which would then need to be either restricted to 1.10 or fixed up (the latter being preferable). But there is a solution for that too. Given that all the software that actually uses autotools depends on them, it’s not so complex to just set up a chroot tinderbox like mine and build all the software that uses automake, to ensure that it builds with the new version before unleashing it on ~arch.

Unfortunately there are quite a few problems with that. The first is that we consider all dependencies on system packages implicit, which means we cannot simply produce a list of the software that uses a particular package in the system set, like autotools, gettext or flex. This is something I really have a beef with, since it makes it non-trivial to test just the software that would be affected by a particular tool change without rebuilding the whole tree.
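Lacking explicit dependencies, the closest approximation I can think of is grepping the installed-package database; a rough sketch (the helper name is mine, and matching the DEPEND strings recorded under /var/db/pkg only catches packages that declare the dependency explicitly, which is exactly the problem described above):

```shell
# find_automake_users: print category/package-version for every
# installed package whose recorded DEPEND mentions automake.
# Takes the VDB root as an optional argument (default /var/db/pkg),
# so it can be pointed at a chroot or a test tree.
find_automake_users() {
    vdb="${1:-/var/db/pkg}"
    for dep in "$vdb"/*/*/DEPEND; do
        [ -f "$dep" ] || continue
        grep -q 'sys-devel/automake' "$dep" &&
            dirname "$dep" | sed "s|^$vdb/||"
    done
}

# Usage (on a real Gentoo system):
#   find_automake_users | sort
```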

The second is that not all developers think it’s worth masking, even for just a day or a week, tools that break backwards compatibility, with the result that for about a week the tree is broken (at least for the most common packages; I still find failures related to gcc 4.1 in the tree, and I found more about 4.3 after I unleashed my log data miner). I’m afraid the problem is that the compartmentalised work in Gentoo calls for tighter coordination between teams, if we don’t want to break stuff for too many users at once.

Another issue is that there are lots of packages in the tree that are almost never used, and they thus suffer from a clinical case of bitrot. While it’s not nice to just kill them all once they develop problems, it’s also impossible to say that a package can enter the tree, be unmasked, or be marked stable only once all other packages are fixed. Just look at the GCC 4.3 tracker to see how many packages still don’t build with GCC 4.3; it would be unfeasible to expect all of them to be fixed before it goes stable, considering how much time it’s taking even for bugs that already have patches attached.

I should find time to commit those patches, but either I do that or I work on refining the log mining script so that I can file more precise bugs while trying to avoid duplicates. The boring bit is that I cannot rely on just checking the open bugs, since some bugs, like the pre-stripped files I found, got fixed without a revision bump, and thus the packages never hit my tinderbox again. And just so you know, ccache lived up to its myths: it works nicely for a workload like a tinderbox, since the same package is built multiple times (when it fails and it’s a dep of something else) and the built code can be reused each time. But even with this setup, which is the perfect setup for ccache, it hasn’t gone over a 2:1 hit ratio in three and a half tinderbox runs. Just saying.