As I wrote, I’m improving the autotools usage of some 0pointer projects. Among these there is, of course, PulseAudio (for which, for a while, I had access to the upstream Subversion repository). The nice thing about git is that I can easily make a mess, take back a few commits, fix them, and push it all through once I’m sure it’s fine.
At any rate, what I want to write about today is the importance of caching values. As I wrote before, there is a lot of anger toward autotools because running the ./configure script over and over is slow. This is true, for a few very good reasons:
- lots of configure scripts check for things they don’t use; this is not always the fault of whoever wrote them, because, for instance, using libtool caused checks for C++ and Fortran compilers to be emitted even if those languages were not used at all; this has been fixed in libtool 2.2 and is no longer the case in modern packages;
- lots of configure scripts don’t cache their results; rebuilding the same software over and over can benefit from caching tests, especially the ones that involve linking; while confcache is broken by design, since different packages use the same cache value names with different meanings, the cache is very useful during development, applied to the same package;
- some packages require linking tests to identify their dependencies rather than relying on pkg-config, for instance, or the authors of the configure.ac scripts don’t rely on the pkg-config tool even when it is available; this is often the case for X11 stuff: modular Xorg supports pkg-config for the discovery of packages and libraries, but very few configure scripts check for that, partly because they were designed before modular Xorg, and partly for compatibility with older versions and other vendors.
There are actually good solutions to tackle all of these problems; the issue is that they are not widely known.
The first point can be solved by making sure that the configure phase checks for what is needed and nothing more. This is done by using new enough tools (newer libtool versions dropped checks for many things that are not required, like C++ and Fortran compilers), and by making sure that the content of the configure.ac file is not just boilerplate.
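As a sketch of what “not just boilerplate” means, a minimal configure.ac for a plain C project only needs something like this (the project name and version here are made up for the example):

```m4
dnl Hypothetical minimal configure.ac for a C-only project.
AC_INIT([example], [1.0])
AM_INIT_AUTOMAKE([foreign])

dnl Check for the C compiler only; no AC_PROG_CXX, no libtool,
dnl unless the project actually needs them.
AC_PROG_CC

AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```

Every check the script doesn’t run is time saved on each ./configure execution.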
A common issue is checking for the presence of 20 different headers that the project then includes unconditionally, never testing the HAVE_FOOBAR_H macros those checks define; the same goes for checks on system-dependent functions.
In a similar fashion, checking for functions and exiting with an error if they are not present is not a huge improvement over failing at build time. Okay, it might make it faster to notice that the software will not build, but it’s not much of an improvement. Checks for functions and headers are usually better when you have an alternative to fall back on. For instance, if you want to find which header to include for standard integer types like uint8_t (among stdint.h, inttypes.h and sys/types.h), knowing that some systems have more than one of them and that this is the priority order, it’s better to stop at the first one found, using a construct like:
AC_CHECK_HEADERS([stdint.h inttypes.h sys/types.h], [break;])
so that, if the system supports C99’s stdint.h, the other two are not tested for at all. Something similar works for strcasecmp:
AC_CHECK_FUNCS([strcasecmp stricmp], [break;])
will stop at the first working alternative.
For what concerns caching, one of the best things you can do is make sure that the tests are factored out into their own macros, so that you can easily reuse them between projects. If there were a better understanding of this need, confcache could have worked. Unfortunately it’s nearly impossible to handle this correctly right now.
Once they are in their own macros, use AC_CACHE_CHECK rather than calling AC_MSG_CHECKING and AC_MSG_RESULT manually; this way the result is saved in the cache file, making it available to further configure executions.
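As a sketch, a factored-out macro using AC_CACHE_CHECK could look like this; the macro name and cache variable are hypothetical, but note that the variable name must contain _cv_ for autoconf to save it in the cache file:

```m4
dnl Hypothetical reusable macro: does $CC accept -Werror?
AC_DEFUN([MY_CHECK_CC_WERROR],
  [AC_CACHE_CHECK([whether $CC accepts -Werror], [my_cv_cc_werror],
    [save_CFLAGS="$CFLAGS"
     CFLAGS="$CFLAGS -Werror"
     AC_COMPILE_IFELSE([AC_LANG_PROGRAM([], [])],
       [my_cv_cc_werror=yes], [my_cv_cc_werror=no])
     CFLAGS="$save_CFLAGS"])])
```

On a second run with caching enabled (configure -C), the result is read back from config.cache instead of compiling anything.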
The third problem is quite tricky, as it depends on the various needs of projects. In general, it’s possible to use pkg-config to find something and fall back to a build test if that fails; this is what xine-lib already does to discover the X libraries: on a modern system using modular Xorg, configure takes much less time because it uses pkg-config discovery. If KDE did that to discover Qt and its other dependencies, it would probably take much less time too.
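A sketch of that pattern for libX11 looks like this; the x11 pkg-config module is the real one shipped by modular Xorg, while the have_x11 wiring is just illustrative:

```m4
dnl Prefer pkg-config discovery; fall back to a link test
dnl for systems without the x11 .pc file.
PKG_CHECK_MODULES([X11], [x11],
  [have_x11=yes],
  [AC_CHECK_LIB([X11], [XOpenDisplay],
    [have_x11=yes
     X11_LIBS="-lX11"],
    [have_x11=no])])
```

On modular Xorg the first branch answers in one shot; on older vendors the AC_CHECK_LIB link test still does the job, just more slowly.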
There’s also the problem that, to be “consistent”, KDE uses the same build system both for complicated software making use of extended features and for simple data files like icons. The same seems to apply now that cmake is used for everything. The nice thing about autotools is that you can slim them down when you don’t need a whole lot of features: if you look at GNOME’s icon themes, for instance, they don’t go checking for gtk+ features that may or may not be enabled, while KDE’s icon themes, using the KDE build system, will do that anyway. Of course this makes it much easier to integrate the icon themes with the rest of the code, but it would be much nicer if the icon theme were either available standalone, or if it were possible to install it with a simple script, bypassing most of the complexity of the build system.
Seems like confcache would work fine if someone made it package-specific, so that the cache is only reused for that exact package. That has just as much or more value than ccache, which relies on unchanged files.
It would probably work up to a point, but might experience some random failures (and worse). But I suppose it’s a feasible option, yes 🙂
@donnie: the problem is, doing the cache per package really isn’t all that beneficial performance-wise unless you’re repeatedly building the same package over and over. Keep in mind there is a bit of a validation cost to using the cache with confcache (checksums, among other things).