This Time Self-Hosted

Autotools Mythbuster: autoscan? autof—up

Besides the lack of up-to-date and sane documentation about autotools (for which I started my guide, which, you should remember, is still only extended in my free time), there is a second huge user experience problem: the skeleton build system produced by the autoscan script.

Now, I do understand why they created it; the problem is that, as it stands, it mostly creates fubar'd configure.ac skeletons that confuse newcomers and cause a lot of grief to packagers and to users of source-based distributions (and to those few who still build software manually rather than getting it from their distribution).

The problem with autoscan is that it embodies, once again, the "GNU spirit", or rather the GNU spirit of the early days, back when GNU tried to support any operating system out there, to "give freedom" to users forced to use those OSes rather than Linux itself. Given that nowadays the FSF seems to busy itself mostly with discouraging anybody from using non-free operating systems (or even non-GNU-based operating systems), sometimes failing at that and actually discouraging companies from using Free Software altogether, it seems like they had a change of heart somewhere along the way. But that's a topic for another post.

Anyway, assuming that you'll have to make your software work on any operating system out there is something you're very unlikely to need. First of all, a number of projects nowadays, for good or bad, target Linux only, sometimes even just GNU/Linux (that is, they don't support running on other C libraries), because they require specific features from the kernel, specific drivers, and so on. Secondly, you can easily require your users to have a sane environment to begin with; unless you really have to run on a 15-year-old operating system, you can assume at least some basic standard support. I have already written about pointless autotools checks but I guess I didn't make the point clear enough yet.

But it's not just the idea of dropping support for anything that does not follow a given standard, whatever that might be (C99, POSIX.1-2008, and so on); it's more that the configure.ac generated by autoscan is not going to make your software magically work on a number of operating systems it didn't support before. What it does, for the most part, is add a number of AC_CHECK_HEADERS and AC_CHECK_FUNCS calls, which will verify the presence of the various functions and headers that your software is using… but it won't change the software to provide alternatives; heck, there might not be alternatives.
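For reference, a freshly generated skeleton looks roughly like this; the exact lists depend on what autoscan finds in your sources, so the header and function names below are just typical picks:

dnl configure.scan, as autoscan emits it (trimmed)
AC_PREREQ([2.65])
AC_INIT([FULL-PACKAGE-NAME], [VERSION], [BUG-REPORT-ADDRESS])
AC_CONFIG_SRCDIR([src/main.c])
AC_CONFIG_HEADERS([config.h])

dnl Checks for header files.
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h strings.h unistd.h])

dnl Checks for library functions.
AC_CHECK_FUNCS([memset strdup strtol])

AC_OUTPUT

Note how every check just records a result in config.h; nothing consumes it.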

So if your software keeps on using strings.h (which is POSIX) and you check for it at configure time, you're just making the configure phase longer without gaining anything, because you're not making use of the results of the check. Again, this often translates to things like the following:

#ifdef HAVE_STRINGS_H
#include <strings.h>
#endif

Okay, so what is the problem with this idiom? Well, to begin with, I have seen it so many times in code whose authors had no idea of why it was there! A number of people assume that since autoscan added the check, and they thus have the definition, they have to use it. But if you use a function that is declared in that header, and the header is not there, what are you going to do? Not including it is not going to make your software any more portable; if anything, you're going to get an implicit declaration of the function, and probably a failure later at runtime. So, if it's not an optional header or function, just running the check and using the definition is not enough.
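What is enough, when a function really is replaceable, is pairing the check with a fallback. A minimal sketch, using strlcpy() as a stand-in; the compat file layout is just illustrative:

dnl in configure.ac: check only for what you can actually replace
AC_CHECK_FUNCS([strlcpy])

/* in compat.c: fallback compiled in only when the system lacks strlcpy() */
#include "config.h"

#ifndef HAVE_STRLCPY
#include <string.h>

size_t strlcpy(char *dst, const char *src, size_t size)
{
  size_t len = strlen(src);
  if (size != 0) {
    /* copy at most size-1 bytes, always NUL-terminate */
    size_t n = (len >= size) ? size - 1 : len;
    memcpy(dst, src, n);
    dst[n] = '\0';
  }
  return len; /* length of the source, per the BSD interface */
}
#endif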

A common alternative is to fail the configure step if the header or function is not found. While that makes a bit more sense, I still dislike the option. Sure, you might be able to tell the user why the function is needed and whether they have to install something else or upgrade their system; but in truth that made much more sense when there was next to no common ground between operating systems, and when it was common for end users to run the ./configure script themselves. Nowadays, that task is mostly left to packagers, who know their systems much better. The alternative to failing in configure is failing during build, and that's generally not too bad, especially since you'll be failing the build anyway for any condition you didn't know about beforehand.
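If you do take that route, at least make the failure speak for itself. A sketch along these lines, with zlib as a stand-in requirement:

dnl in configure.ac: fail early, and say why
AC_CHECK_HEADER([zlib.h], [],
  [AC_MSG_ERROR([zlib development headers not found, please install zlib])])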

I have another reason to offer for why you shouldn't be running all those tests for things you provide no fallback for: autoconf already gives you the means to pass external libraries and include directives to the compiler. Since having each package carry its own replacement for common functions is going to cause a tremendous amount of code duplication (which in turn means a lot of work for packagers if one of those replacements is broken; dtoa(), anybody remembers that?), I'm always surprised that there aren't many more libraries providing compatibility replacements for the functions missing from the system C library (gnulib does not count, as it solves the problem with code replication, if not duplication). Rather than failing, or trying to guess whether you can build or not depending on the OS used, just assume the presence of what you can't go without, and leave it to the developers running that system to come up with a fix, which might involve additional tests, or might not.
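This is what autoconf's precious variables are for: whoever is porting to the odd system can inject a compatibility library without a single extra test in configure.ac. The paths and library name below are made up for the sake of the example:

# at configure time, on the system that needs the shims:
./configure CPPFLAGS="-I/opt/compat/include" \
            LDFLAGS="-L/opt/compat/lib" \
            LIBS="-lcompat"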

My suggestion here is thus to first consider the operating systems you're targeting directly, and find out what actually changes between them. In most cases you might still have pieces of very old systems around; take the include of malloc.h, which is only useful if you want to call functions such as memalign(), but has not been needed for malloc() since, well, ever (stdlib.h is enough for that), and which will cause errors on both FreeBSD and OS X if included. Once you find that a header is not present on some of your desired operating systems, look up what replaces it, then make sure to check for it properly; that means using something like this:

dnl in configure.ac
AC_CHECK_HEADERS([stdint.h inttypes.h], [break;])

/* in your C code */
#if HAVE_STDINT_H
#  include <stdint.h>
#elif HAVE_INTTYPES_H
#  include <inttypes.h>
#else
#  error "neither stdint.h nor inttypes.h is available"
#endif

This way you won't be running checks for the whole list of alternative headers on every system: the break makes the loop stop at the first header found, and most modern C99-compatible system libraries will have stdint.h available, even though a few older systems will need inttypes.h to be discovered instead. This might sound like a cheap gain, since it's just two headers; but when you're looking for the correct location of a library header, you might end up choosing among three or four alternatives, and once you add a bunch of such lists, the configure time piles up. The same trick can be used for functions; the description is also in my guide, and I'll soon expand it to cover functions as well.
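As a sketch of the same short-circuit applied to functions, assuming your code can use either of two interfaces (the function pair here is just an example):

dnl in configure.ac: stop at the first function found
AC_CHECK_FUNCS([arc4random random], [break;])

/* in your C code */
#include "config.h"
#include <stdlib.h>

static long get_random(void)
{
#if HAVE_ARC4RANDOM
  return (long)arc4random();
#elif HAVE_RANDOM
  return random();
#else
#  error "no usable random interface found"
#endif
}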

It shouldn't "sound wrong" to have a configure.ac with next to no AC_CHECK_* calls at all! Autoconf will run most of the important tests for you, and you have the ability to add more, but there is no need to strain to use them where they are unneeded. Take the feng configure.ac as an example: it has a few checks, of course, but they are limited to a few optional features that we work around, if missing, in the code itself. And some I would probably just remove (like making IPv6 support optional… I'd sincerely just make it work if it's found on the system, as you still need to enable it in the configuration file to use it anyway).
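That's the shape the few remaining checks should have: guards around genuinely optional code paths. A minimal sketch, with posix_fadvise() as a stand-in for a nice-to-have feature:

dnl in configure.ac: checked only because there is a code path without it
AC_CHECK_FUNCS([posix_fadvise])

/* in your C code: use the hint if available, carry on if not */
#include "config.h"
#include <fcntl.h>

static void hint_sequential(int fd)
{
#ifdef HAVE_POSIX_FADVISE
  posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
#else
  (void)fd; /* no-op where the call is missing */
#endif
}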

And please, please, just don’t use autoscan, from now on!

Comments 9
  1. I know you already confirmed all this to me by e-mail but thanks for taking the time to write about it fully. I had drawn the same conclusions myself but wanted to know that I wasn’t alone.

  2. I've long thought that there's a subtle lack of focus to autoconf that creates a lot of problems. Nobody ever says for sure what configure's job is:
     * to handle build-time configuration (include/exclude features, set paths and default values, etc.);
     * to check for build requirements and fail if they are not met; or
     * to check for build requirements and work around them.
     This lack of focus is embodied in the available autoconf macros themselves:
     * some will helpfully provide workarounds for missing functionality (e.g., AC_FUNC_OBSTACK or AC_TYPE_INT8_T);
     * some have action-if-found and action-if-not-found, allowing you to provide your own workaround, disable a feature, or simply fail (e.g., AC_CHECK_FUNC); and
     * some simply perform a test and give you no way to follow up on the test results (e.g., AC_SYS_LONG_FILENAMES).
     The autoconf macro archive isn't any more consistent, either. Which is partly my fault…

  3. @Dustin: IMO "to check for build requirements and fail if they are not met" is a silly goal. It's just not possible. How do you test that the compiler won't miscompile? How do you test that it can compile all your inline ASM? In other words, if you wanted to check that properly in configure, configure would have to run the build and the tests.
     The only case where it makes sense is when there's a really common failure and you want to save the users the time of starting a build and figuring out what the build error means (mostly because nobody writes Makefiles that try to help diagnose the issue).
     I also think it is questionable for configure to work around missing things itself; it often needlessly entangles your configure script with your main code, and dependencies are hardly ever a good thing, particularly when, as with auto*, a lot of unbelievably messy auto-generated code is involved (autotools are a very good example of why you really should invest some effort in pretty-printing even generated code).

  4. Another interesting observation from my work on Amanda (an app that dates back to 1992) is that it's very difficult to justify removing a test from configure, or even modifying one. For example, the check in http://github.com/djmitche/… is probably superfluous at this point, as are a lot of the function checks in configure.in. However, imagine you are reviewing my proposal to delete these checks:
     * the checks don't hurt anything as they stand (aside from killing CPU cycles);
     * you do not have access to many of the platforms on which the checks might matter (HP/UX at least, but maybe others?);
     * someone, at some point, added this check – why?
     In many cases, a detailed analysis can prove that a test isn't necessary. In an easy case, the generated #define's are never used, but not all cases are easy. With limited developer resources, it's not worth the time or risk to clean out configure tests.
     The result is, of course, that the tests build up, and configure takes longer and longer to run. About two years ago, I revamped Amanda's configure.in and removed a lot of cruft at that time. As things have been rewritten in Perl, we've removed many uses of non-portable functions and headers, so I should probably take another look through the codebase to find other unused symbols and functions. Thanks, as always, for your insightful thoughts on the autotools!

  5. Your point in these two posts is well-taken. As someone who does a fair bit of embedded work (mostly via Gentoo, albeit somewhat bastardized Gentoo), it's not even a little reasonable from my standpoint to assume I'm using glibc; most of the time, I'm using either uClibc or eglibc. (Yes, I have a *very* rough ebuild for eglibc; it's not ready for prime-time yet though.)
     Even knowing which runtime I'm using won't help you, though, since the reason I use one of those is that I can disable the chunks I don't need! In the case of uClibc, I ended up needing quite a number of patches to make everything build for my router, because autotools was doing a lot of checks it didn't handle, and not bothering to check other things at all.

  6. @Reimar: You make a good point about failing because build requirements are not present. Quite often, the errors that appear when requirements are missing are very unclear and turn into FAQs, so there is some benefit to checking in configure. More to the point, I was not arguing that this was a sensible goal; rather, I was pointing out that it describes the operation of lots of macros out there, both in autoconf and in the autoconf macro archive.

  7. For the case of removing functionality from the C library (uClibc), most of the time you *cannot* simply assume that every upstream will take care of building on a crippled system, so it's either "re-enable whatever you disabled that the package needs" or "give upstream enough motivation to make the usage of whatever() optional". In either case, autotools can help, but _needs_ to be used properly.
     And Dustin hit the nail on the head regarding the disabling of tests… tracking down whether something is actually needed or not is one of the most obnoxious situations… repository logs only let you find that out if whoever committed the test actually provided a clue about what it was for… and that's rare.

  8. Indeed, since I'm coding for GNU/Linux only… it's pointless to have a massively complex build software stack like the autotools. I even started to code a small user-space lib which provides only what's needed to perform Linux syscalls. There is some arch-dependent user-space code. The target is to have a 100%-C, zero-autotools usable system (I realize that it's just a few hundred software packages).

  9. Even if your target is Linux only, autotools can still be of some use. During a parallel programming class I took last semester, I wrote and tested most of the class assignments on my Gentoo box running mpich. The supercomputer we ran the assignments on used LAM/MPI as its default MPI implementation. I used autoconf and automake to handle the differences in compiler flags. The code was not very pretty or well designed, but it did what I needed it to do. Even if you are developing for just Linux systems, autoconf and friends can be useful to handle cross-distro difficulties.
