For A Parallel World. Theory lesson n.1: avoiding pointless autoconf checks

In my “For A Parallel World” series I have, up to now, only dealt with practical problems related to actual parallel builds. Today I wish to try something different, writing about some good practices to follow when working with autotools.

Once again I want to repeat that no, I don’t think autotools are completely flawed, or that they are way too complex for normal people to use; I certainly don’t agree that CMake is “just superior”, as some good programmer said recently (although I admit that after some stuff I’ve seen I’d gladly take CMake over some custom-built makefiles). I do think that there are too many bad examples of autotools usage, I do think that autotools could very well use better documentation and better tutorials, and I do think that a lot of people have misused autotools to the point that the tool gets blamed for the way it was used.

You certainly know what the problem I’m referring to right now is, when it comes to parallel builds and autoconf: the lengthy, serialised execution of autoconf checks. It’s a very boring part of a build, but it’s usually a necessary one. What is the problem there, though? The problem is that a lot of packages execute more tests than are actually needed, which is going to bore you to death, since you’re waiting on checks whose results are never actually used.

The causes of this are quite varied: legacy systems being supported, developers being overzealous about missing headers or functions, autoscan-like tools, boilerplate checks coming from a bastardised build system (like the one forced upon each KDE 3 package), and of course poor knowledge of autoconf. In addition to this, libtool 1.5 was very bad and checked for C++ and Fortran compilers, features, linkers and so on even when they were not going to be used; luckily 2.2 fixes that, and upstream projects are slowly migrating to the new version, which takes much less time to run.

I’m currently looking for a way to scan a source tree to identify possibly overzealous checks in configure, so that I can help reduce the pointless checks myself, but in the meantime I want to give some indications that might help people write better autotools-based build systems for their projects, at least.

My first suggestion is to require a standard base: I’m not referring to stuff like the Linux Standard Base, I’m referring to requiring a standard base for the language. In the case of C, which is mostly what autotools are used for (okay, there’s also C++, but that’s not a topic I want to deal with right now), you can ensure that some stuff is present by requiring a C99-compliant compiler. You’re going to cut out some compilers, but C99 nowadays is pretty well supported under any operating system I ever dealt with, and even under Linux you can choose between three C99 compilers: GCC, Sun Studio Express and the Intel C Compiler. As you can guess, as long as you use C99 features, the compatibility between these three compilers is also almost perfect (there are some implementation-dependent things that vary, but if you avoid those it’s quite good).
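To give an idea, here’s a minimal sketch of how to require this in configure.ac (assuming autoconf 2.60 or later, which provides the AC_PROG_CC_C99 macro):

dnl in configure.ac: ask for a C99 compiler, abort if none is available
AC_PROG_CC_C99
AS_IF([test "x$ac_cv_prog_cc_c99" = "xno"],
      [AC_MSG_ERROR([a C99-compliant compiler is required])])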

But the important part here is that, by requiring C99 explicitly, you’re also requiring the standard headers that come with it, which you then don’t need to check for: stuff like locale.h, stdio.h, stdbool.h, stdlib.h, …; they have to be present for C99 to be supported. And even better, you can require POSIX.1-2001 and _XOPEN_SOURCE 600, so that you have a wide enough feature set to rely upon. It’s very easy: -D_POSIX_C_SOURCE=200112L and -D_XOPEN_SOURCE=600, together with a C99-compatible compiler (like gcc -std=c99 or sunc99), and you can rely upon the presence of functions like nanosleep() or gethostname(); you won’t have to check for them.
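As a concrete sketch, this is a complete C99 source file that just assumes those feature test macros are passed on the command line, with no configure checks at all (the file name is made up):

/* sleep_demo.c: build with
 *   c99 -D_POSIX_C_SOURCE=200112L -D_XOPEN_SOURCE=600 sleep_demo.c
 * POSIX.1-2001 guarantees these headers and functions. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    char hostname[256];
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 500000000L };

    if (gethostname(hostname, sizeof(hostname)) == 0)
        printf("running on %s\n", hostname);

    nanosleep(&ts, NULL); /* no HAVE_NANOSLEEP check needed */
    return 0;
}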

Now, of course, to support legacy systems you cannot rely on these standards, which are pretty new and not well supported, if at all, by the older versions of compilers and operating systems you might be interested in. Well, guess what? A good way to deal with this, rather than checking everything with autotools and then dealing with all the issues one by one, is to assume things are available, and give legacy operating systems a chance to run the software by having a library supply the missing parts. Such a library can implement replacements or stubs for the missing functions and headers; the users of the legacy operating systems can then simply provide the library alongside the project itself.
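As a minimal sketch of what such a replacement can look like, take strndup(), a function older systems commonly lack (the file layout is hypothetical):

/* compat/strndup.c: replacement for systems missing strndup();
   modern systems simply never build or link this file. */
#include <stdlib.h>
#include <string.h>

char *strndup(const char *s, size_t n)
{
    size_t len = 0;
    char *copy;

    while (len < n && s[len] != '\0')
        len++;

    copy = malloc(len + 1);
    if (copy == NULL)
        return NULL;

    memcpy(copy, s, len);
    copy[len] = '\0';
    return copy;
}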

If you don’t like this approach, which in my opinion is quite nice and clean, you can instead rely fully on an external library, such as glib, to provide some platform-independent support (like named integer types, byteswap macros, string functions). Again, this reduces to requiring things to be available, rather than adapting (maybe too much) the software to the platforms it supports.
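For instance, here’s a minimal sketch relying on glib for exactly those pieces, with no platform checks of its own:

/* build with the flags from: pkg-config --cflags --libs glib-2.0
   glib supplies named integer types, byteswap macros and string
   helpers on every platform it supports. */
#include <glib.h>

int main(void)
{
    guint32 value = 0x12345678;
    gchar buffer[64];

    g_print("big-endian form: %08x\n", GUINT32_TO_BE(value));
    g_strlcpy(buffer, "no strlcpy() check needed", sizeof(buffer));
    g_print("%s\n", buffer);
    return 0;
}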

Sometimes these approaches can be a bit of an overkill, though, since you might not need full C99 and might accept C89 just fine, with a few touches. For this reason you might just assume that your functions are present, while accepting that they might not use the exact name you expect (for instance, a few functions change name between POSIX and Windows), or you might want to look for a known replacement of the function in an extension library (which might as well be glib itself!).
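As a sketch of the renaming case: the POSIX strcasecmp() is spelled _stricmp() in the Microsoft C runtime, so a compile-time mapping avoids any configure check at all:

/* map the POSIX name onto the Windows runtime's spelling */
#if defined(_WIN32)
# define strcasecmp _stricmp
#endif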

In these cases you can well rely on the checks that come from autotools, but even then, I’d suggest being careful with what you write. One common problem, for instance, is overchecking headers. Let’s say you have to find at least one header that declares standard integer types (int64_t and similar); in general you can expect that one of inttypes.h, stdint.h or sys/types.h will be present and will have the definitions you need. The simplest test to find them is to use something like this:

dnl in configure.ac
AC_CHECK_HEADERS([inttypes.h stdint.h sys/types.h])

/* in a common header file */
#if defined(HAVE_INTTYPES_H)
# include <inttypes.h>
#elif defined(HAVE_STDINT_H)
# include <stdint.h>
#elif defined(HAVE_SYS_TYPES_H)
# include <sys/types.h>
#else
# error "Missing standard integer types"
#endif

While the code in the header is quite good, since only the first header found is actually used, the example code in configure.ac is overchecking: it checks all three of them, even though just the first hit is used. Indeed, if you check the output of a configure script using that, you’ll get this:

..snip..
checking for inttypes.h... (cached) yes
checking for stdint.h... (cached) yes
checking for sys/types.h... (cached) yes
..snip..

(The fact that the tests are cached is because autoconf already checks for these headers on its own; that’s overchecking within autoconf itself, and should probably be fixed.)

Instead of the former code, a slightly different variant can be used:

AC_CHECK_HEADERS([inttypes.h stdint.h sys/types.h], [break])

With this, the checks will stop at the first header that is found. It might not sound like much of a difference, but if you pile these up, well, it’s the usual point: little drops make a sea. This gets particularly useful when packages rename their include files, or decide to move from just the basename to the packagename/basename approach (think FFmpeg); you can test for the new one and, if that doesn’t hit, check the old one, but avoid checking twice if the new one already hit.
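As a sketch of the rename case, using FFmpeg’s header move as the example:

dnl in configure.ac: prefer the new libavcodec/avcodec.h location,
dnl fall back to the old flat ffmpeg/avcodec.h, stop at the first hit
AC_CHECK_HEADERS([libavcodec/avcodec.h ffmpeg/avcodec.h], [break])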

The same approach can be used with AC_CHECK_FUNCS, so that you only check for the first function in a series of possible replacements, and go with that one.
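A sketch of that, assuming glib is already in LIBS by the time the check runs, so that its replacement function can be found:

dnl take the native strlcpy() if present, otherwise settle for
dnl g_strlcpy() from glib; stop at the first function found
AC_CHECK_FUNCS([strlcpy g_strlcpy], [break])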

But the most important thing is to make sure that all the checks you run during configure are actually useful and used. It’s not uncommon for software to check for a series of headers or functions to define the HAVE_ macros, but then never use the macros themselves. It’s tremendously unfortunate and should be avoided at all costs; you should always check your software for that, especially when you make changes that might render checks pointless.
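A rough sketch of how to spot them, assuming the usual config.h.in template and the sources living under src/ (adjust both names to the project’s layout):

# flag every HAVE_ macro defined by configure checks that is never
# referenced anywhere in the sources
for macro in $(grep -o 'HAVE_[A-Z0-9_]*' config.h.in | sort -u); do
    grep -qr "$macro" src/ || echo "possibly unused: $macro"
done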

Do you maintain autotools-based software? Then please take a look at your configure.ac and make sure you’re not running pointless checks; all your users will be happy if you can reduce their build time!

Supporting more than one compiler

As I’ve written before, I’ve been working on FFmpeg to make it build with the Sun Studio Express compiler, under Linux and then under Solaris. Most sincerely, while supporting multiple (free) operating systems, even niche Unixes (as Lennart likes to call them), is one of the things I spend a lot of time on, I have little reason to support multiple compilers. FFmpeg, on the other hand, tends to support compilers like the Intel C Compiler (probably because it sometimes produces better code than the GNU compiler, especially when it comes to MMX/SSE code, even though it lacks some basic optimisations), so I decided to make sure I don’t create regressions when I do my magic.

Right now I have five different compile trees for FFmpeg: three for Linux (GCC 4.3, ICC, Sun Studio Express) and two for Solaris (GCC 4.2 and Sun Studio Express). Unfortunately, the only two trees that build entirely correctly are GCC and ICC under Linux. GCC under Solaris still needs fixes that are not available upstream yet, while Sun Studio Express has some problems with libdl under Linux (and I think the same applies to Solaris), and explodes entirely under Solaris.

While ICC still gives me some problems, Sun Studio is giving me the worst headache since I started this task.

While Sun seems to strive for GCC compatibility, there are quite a few bugs in their compiler, like -shared not really being the same as -G (although the help output states so). Up to now the funniest bug (or at least the most absurdly idiotic behaviour) has been the way the compiler handles libdl under Linux. If a program uses the dlopen() function, sunc99 decides it’s better to silently link it to libdl, so that the build succeeds (while both icc and gcc fail, since there is an undefined symbol); but if you’re building a shared object (a library) that also uses the function, that is not linked against libdl. It reminded me of FreeBSD’s handling of -pthread (it links the threading library into executables but not into shared objects), and I guess it is done for the same reason (multiple implementations, maybe, in the past). Unfortunately, since it’s done this way, configure will detect that dlopen() requires no extra library, but then later on libavformat will fail to build (if vhook or any of the external-library-loading codecs are enabled).
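To illustrate why this defeats the usual detection, here is the common autoconf idiom (a sketch; FFmpeg’s configure is actually hand-written shell): the macro first tries dlopen() with no extra library and only falls back to -ldl if that fails, so with sunc99 silently linking test executables against libdl, the first probe succeeds and the needed -ldl is never recorded:

dnl in configure.ac: find out which library, if any, provides dlopen()
AC_SEARCH_LIBS([dlopen], [dl])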

I thus reported those two problems to Sun, although there are a few more that, touching some grey areas (in particular C99 inline functions), I’m not sure whether to treat as Sun bugs or not. This includes, for instance, the fact that static (C99) inline functions are emitted in object files even when not used (with their undefined symbols following them, causing quite a bit of a problem for linking).
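A minimal sketch of the pattern that triggers it (all names made up for illustration):

/* helpers.h: a static inline wrapper around an external function */
int log_message(const char *msg); /* defined in some other library */

static inline void log_startup(void)
{
    log_message("starting up");
}

/* A translation unit that includes this header but never calls
   log_startup() should drop the function entirely; a compiler that
   emits it anyway drags the undefined log_message symbol into every
   such object file. */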

The only thing I find non-GCC compilers really useful for is taking a look at their warnings. While GCC is getting better at them, there are quite a few that are missing; both Sun Studio and ICC are much more strict in what they accept, and raise lots of warnings for things that GCC simply ignores (at least by default). For instance, ICC throws a lot of warnings about mixing enumerated types (enums) with other types (enumerated or integer), which gets quite interesting in some cases: in theory, I think the compiler should be able to optimise such variables if it knows they can only assume a reduced range of values. Also, Sun Studio, ICC, and the Borland and Microsoft compilers all warn when there is unreachable code in the sources; recently I discovered that GCC, while supporting that warning, disables it by default with both -Wall and -Wextra, to avoid false positives with debug code.
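As a sketch of the enum mixing that ICC complains about (made-up types for illustration):

/* mixing two unrelated enumerated types: ICC warns here, while
   GCC accepts it silently by default */
enum colour { RED, GREEN, BLUE };
enum fruit  { APPLE, ORANGE };

enum colour pick(void)
{
    return ORANGE; /* wrong enum, still valid C, but worth a warning */
}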

Unfortunately, not even with the three of them combined do I get the warnings I was used to from Borland’s compiler. It would be very nice if CodeGear decided to release a Unix-style compiler for Linux (their command-line bcc for Windows has a syntax that autotools don’t accept; one would have to write a wrapper to get it to work). They already released free-as-in-soda compilers for Windows; it would be a nice addition to have a compiler based upon Borland’s experience under Linux, even if it were proprietary.

On the other hand, I wonder if Sun will ever open the sources of Sun Studio; they have been opening so many things that it wouldn’t be so impossible for them to open their compiler too. Even if they decided to go with the CDDL (which would make it incompatible with GCC’s license), it could be a good way to learn more about the way they build their code (and it might be especially useful for UltraSPARC). I guess we’ll have to wait and see about that.

It’s also quite sad that there isn’t any alternative open source compiler focusing, for instance, on issuing warnings rather than on optimising stuff away (although it’s true that most warnings do come out of optimisation passes).