Sometimes automake is no good… Mono projects

You know I’m an autotools-lover, and that I think they can easily be used for almost all projects. Well, sometimes they can’t; one of those cases is Mono projects. Indeed, you may remember that I’m working more closely with Mono lately for my job, and that has made me notice that there really aren’t many great ways to build Mono projects.

Most of the libraries out there are written with Visual Studio in mind, and that’s not excessively bad if they provide solution files for Visual Studio 2008, since MonoDevelop’s mdtool can deal with them just fine (unfortunately it doesn’t deal with VS2k5 solutions, which is what stopped me from adding FlickrNet in source form to Portage, limiting myself to the binary version).

There are a few custom Makefiles out there, and some mostly work, but there are also projects like f-spot that provide an autotools-based build system… with a lot of custom code in it that, in my view, makes the whole thing difficult to manage: it ends up being a mix of automake and hand-written make rules that doesn’t really float my boat.

It’s not just that I don’t like the way the Makefiles are written; factor in that automake does not support C#/Mono natively and you get to the point where:

  • the support for dependencies is just not there;
  • automake is designed to support languages where each source file is translated into an object file and then linked; C# does not work that way, since all the source files are passed in a single invocation to build an assembly (see the sketch after this list);
  • the support for the various flags variables is just pointless with the way the compiler works.
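
To see why the object-file model doesn’t fit, this is roughly what a hand-written rule for a Mono assembly looks like: the whole source list goes into a single compiler invocation. A minimal sketch, with purely illustrative names (Foo.dll, the src/ directory) and assuming the Mono compiler is installed as mcs:

CSC ?= mcs
CS_SOURCES := $(wildcard src/*.cs)

Foo.dll: $(CS_SOURCES)
        $(CSC) -target:library -out:$@ $(CS_SOURCES)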

I guess there are mostly two reasons why autotools are still used by C#/Mono-based projects: the first is that they integrate well when you also have some native extensions, like F-Spot has; the second is that they provide at least some level of boilerplate code.

I guess one interesting project would be to replace Makefile.am with some kind of Makefile.mb (Mono-Build, or something along those lines for a name) that could generate different Makefile.in files, without all the pointless code coming from automake that C#/Mono builds never use, but still interface-compatible with automake so that commands like make, make clean and make dist work as intended.

autoconf-2.64: the phantom macro menace

While I should write this down in Autotools Mythbuster in a more generic fashion, since I found this today I wanted to write down these notes for other developers. With autoconf-2.64 there can be a problem with “phantom macros”, as in macros that are called but seem not to produce any code.

In particular, I noticed a configure failure in recode today. The reported error output is the following:

checking for flex... flex
checking lex output file root... lex.yy
checking lex library... -lfl
checking whether yytext is a pointer... yes
checking for flex... (cached) flex
./configure: line 10866: syntax error near unexpected token `fi'
./configure: line 10866: `fi'

Looking at the actual configure code, you can easily see what the problem is around line 10866:

if test "$LEX" = missing; then
  LEX="$(top_srcdir)/$ac_aux_dir/missing flex"
  LEX_OUTPUT_ROOT=lex.yy
  else


fi

In sh, as you probably know already, an “else” immediately followed by “fi” (an empty else branch) is invalid syntax; but what is the code that produces this? Well, looking at configure.in is not enough, you also need to check an m4 file shipped with the package:

# in configure.in:
ad_AC_PROG_FLEX

# in m4/flex.m4
## Replacement for AC_PROG_LEX and AC_DECL_YYTEXT
## by Alexandre Oliva 
## Modified by Akim Demaille so that only flex is legal

# serial 2

dnl ad_AC_PROG_FLEX
dnl Look for flex or missing, then run AC_PROG_LEX and AC_DECL_YYTEXT
AC_DEFUN(ad_AC_PROG_FLEX,
[AC_CHECK_PROGS(LEX, flex, missing)
if test "$LEX" = missing; then
  LEX="$(top_srcdir)/$ac_aux_dir/missing flex"
  LEX_OUTPUT_ROOT=lex.yy
  AC_SUBST(LEX_OUTPUT_ROOT)dnl
else
  AC_PROG_LEX
  AC_DECL_YYTEXT
fi])

There are calls to the AC_PROG_LEX and AC_DECL_YYTEXT macros, so there should be code in that branch. What’s happening? Well, maybe you remember a previous post where I listed some user advantages in autoconf-2.64:

Another interesting change in the 2.64 release, which makes it particularly sweet to autotools fanatics like me, is the change in AC_DEFUN_ONCE semantics that makes it possible to define macros that are executed exactly once. The usefulness of this is that often you get people writing bad autoconf code that, instead of using AC_REQUIRE to make sure a particular macro has been expanded (which is usually the case for macros using $host and thus needing AC_CANONICAL_HOST), simply calls it, which means the same check is repeated over and over (with an obvious waste of time and increase in the size of the generated configure file).

Thanks to the AC_DEFUN_ONCE macro, not only is it finally possible to define macros that never get executed more than once, but most of the default macros that are supposed to work that way, like AC_CANONICAL_HOST and its siblings, are now defined with it, which means that hopefully even untouched configure files will be slimmed down.

Of course, this also means there are more catches with it, so I’ll have to write about them in the future. Sigh, I wish I could find more time to write on the blog, since there are so many important things I have to write about, but I don’t have enough time to expand them to a proper size since I’m currently working all day long.

Indeed, the two macros above are both once-expanded macros, which means that autoconf expands them before the rest of the macro being defined. Now, the solution for this is using M4sh properly (because autoconf scripts are not pure sh, they are M4sh, a language halfway between sh and m4): instead of using if/then/else, you should use AS_IF. Indeed, changing the above macro to this:

AC_DEFUN(ad_AC_PROG_FLEX,
[AC_CHECK_PROGS(LEX, flex, missing)
AS_IF([test "$LEX" = missing], [
  LEX="$(top_srcdir)/$ac_aux_dir/missing flex"
  LEX_OUTPUT_ROOT=lex.yy
  AC_SUBST(LEX_OUTPUT_ROOT)dnl
], [
  AC_PROG_LEX
  AC_DECL_YYTEXT
])])

allows autoconf to follow the flow of the code and produce proper sh code in the final configure file:

checking for flex... flex
checking for flex... (cached) flex
checking lex output file root... lex.yy
checking lex library... -lfl
checking whether yytext is a pointer... yes

(see how the two checks for flex both moved up to the top of the list of checks?).

Unfortunately there are more problems with recode, but at least this documents the first problem, which I’m afraid is going to be a common one.

For A Parallel World. Theory lesson n.3: directory dependencies

Since this is not extremely common knowledge, I wanted to write down some more notes regarding the parallel make install problem that Daniel Robbins reported in Ruby 1.9.

This is actually a variant of a generic parallel install failure: lots of packages in the past assumed that make install is executed on a live filesystem and didn’t create the directories they copy the files into. This of course fails for all staging-tree installs (DESTDIR-based installs), which are used by all distributions to build packages, and by Gentoo to merge from ebuilds. With time, and with distributions taking a major role, most projects updated their build systems so that they do create their directories before installing (although there are quite a few still failing at this; just look for dodir calls in the ebuilds).

The problem we have here instead is slightly different: if you have a single install target that depends at the same time on the rules that create the directories and on those that install the files, and these don’t specify interdependencies between them:

install: install-dirs install-bins

install-dirs:
        mkdir -p /usr/bin

install-bins: mybin
        install mybin /usr/bin/mybin

(Read it as if it used DESTDIR properly; a DESTDIR version is sketched right after this paragraph.) When using serial make, the order in which the rules appear in the dependency list is respected, and thus the directories are created before the binaries are installed: no problem. When using parallel make instead, the two rules are executed in parallel, and the install command may run before mkdir, which makes the build fail.
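
For reference, “using DESTDIR properly” for the snippet above amounts to the same rules with the staging prefix added to the install paths, roughly:

install: install-dirs install-bins

install-dirs:
        mkdir -p $(DESTDIR)/usr/bin

install-bins: mybin
        install mybin $(DESTDIR)/usr/bin/mybin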

The “quick” solution that many come to is to depend on the directory:

install: /usr/bin/mybin

/usr/bin:
        mkdir -p /usr/bin

/usr/bin/mybin: mybin /usr/bin
        install mybin /usr/bin/mybin

This is the same solution that Daniel came to; unfortunately it does not work properly. The problem is that this dependency does not just ensure that the directory exists, it also adds a condition on the modification timestamp (mtime) of the directory itself. And since the directory’s mtime is updated whenever an entry is added to it (or removed from it), this can become a problem:

flame@yamato foo % mkdir foo   
flame@yamato foo % stat -c '%Y' foo
1249082013
flame@yamato foo % touch foo/bar
flame@yamato foo % stat -c '%Y' foo
1249082018

This does seem to work in most cases, and indeed a similar patch was already added to Ruby 1.9 in Portage (and I’m going to remove it as soon as I have time). Unfortunately, if there are multiple files that get installed in a similar way, it’s possible to induce a loop inside make (installing the later binaries will update the mtime of the directory, which will then have a higher mtime than the first binary installed).

There are two ways to solve this problem; neither looks extremely clean, and neither is perfectly optimal, but they do work. The first is to always call mkdir before installing the file; this might sound like overkill, but with mkdir -p the overhead is really small compared to calling it just once.

install: /usr/bin/mybin

/usr/bin/mybin: mybin
        mkdir -p $(dir $@)
        install mybin /usr/bin/mybin

The second is to depend on a special time-stamped rule that creates the directories:

install: /usr/bin/mybin

usr-bin-ts:
        mkdir -p /usr/bin
        touch $@

/usr/bin/mybin: mybin usr-bin-ts
        install mybin /usr/bin/mybin

Now, for Ruby I’d sincerely go with the former option rather than the latter, because the latter adds a lot more complexity for quite little advantage (it adds a serialisation point, while the mkdir -p calls execute in parallel). Does this help you?

Having fun with autoconf 2.64

With the tinderbox now running in a container I’ve been running tests with autoconf 2.64 as well, which is currently masked (and for very good reason I’d say!). This is, after all, the original point of the tinderbox: rebuilding everything with the latest version of the tools, so that if a bug is found users won’t be hitting it before we know about it.

As far as autoconf 2.64 is concerned, I’ve been tracking it since last February, when I discovered one interesting change that could have a big effect on packages (changing the way they compile, and thus introducing runtime bugs). I also wrote some notes for users and described the incompatible change so that everybody could get ready to fix the upcoming issues.

Well, the new autoconf is now here and the run has started; I haven’t even started looking (again) at the present-but-not-compilable warnings, and I have already started filing apocalyptic bugs. Indeed, a few packages stopped working with autoconf 2.64, as usual, but there is one that is particularly interesting.

You might remember that I criticised KDE for not using autotools, but rather a crappy buildsystem vaguely based on autotools. Well, it seems like my point was proven again, and not for the first time I have to say. Indeed, the admin/ directory from 3.5.10 fails with autoconf 2.64, and why is that? Because it used internal macros!

Way to go, KDE buildsystem developers: autotools are shit anyway, so why bother sticking to the stuff that is actually documented?

*Okay, this might just be seen as a sterile comment on something that happened already; on the other hand I’d like to use it to point out that much of the FUD about autotools floating around is often due to what KDE decided, and people should know that KDE didn’t use autotools properly in the first place!*

Also, take this as a warning against autoconf 2.64 unless you really want to start debugging and fixing the bugs.

The long-awaited build systems post (part 1, maybe)

I call this “long-awaited” because I promised Donnie I’d write about this a few months ago, but I haven’t had time to work on it in quite a while. Since today is supposed to be a holiday, yet I’m still working on a few things, I’m going to try filling the gap.

I have been criticising CMake quite a bit on my blog, and I have been trying with all my strength to show that autotools aren’t as bad as they are made out to be, since I think they are still the best build system framework we have available at the moment. But in all this, I’m afraid the wrong message was somehow sent, so I want to clear it up a bit.

First of all, CMake isn’t as bad as, say, scons or imake; it’s also not as bad as qmake under certain circumstances. I don’t think that CMake is bad in absolute terms; I just think it’s bad as a “universal” solution. Which, I have to admit, autotools are bad for too, to an extent. So let me explain what my problem is.

For Free Software, the autotools framework was for a long time almost the de-facto standard for lots of packages; switching away from it to CMake just because “it’s cool”, or because it seems easier (it does seem easier, mostly because the autotools documentation and examples are full of particularly bad code), is not the right thing to do, since it increases the work for packagers, especially because for some things CMake isn’t yet particularly polished.

I blame KDE for the CMake tide not so much because I think it was the wrong choice for them, but rather because they seem to pin the reasons for the change on autotools’ defects when they are, actually, making a pragmatic choice for a completely different reason: supporting the Microsoft Visual C++ compiler. As I have already said more than a couple of times, the autotools-based buildsystem in KDE 3 is a real mess, a bastardised version of autotools. Blaming autotools in general for that mess is uncalled for.

It’s also not only about Windows support: you can build software under Windows with just autotools, if you use cygwin or msys with GCC; what you cannot do is build with Microsoft’s compiler. Since GCC objectively still lacks some features needed or highly desired under Windows, I can understand that some projects do need Microsoft’s compiler to work. I’m not sure how true that is for KDE, but wanting Microsoft’s compiler is their choice. And CMake allows them to do that. More power to them.

But in general, I’m very happy if a project whose build system is custom-made, or based on scons or imake, gets ported to CMake, even if not to autotools, since that means having a somewhat standard build system, which is still better than the other options. And I’m totally fine if a project like libarchive gets a dual build system to build on Unix and Windows with the “best” build system framework available on each of those systems.

Still, I think CMake has a few weak spots that should be taken care of sooner rather than later, and which are shared with autotools (which is what I usually point out when people say that it’s always and only better than autotools, when it’s actually making similar mistakes).

The first is the fact that they seem to have moved (or people claim they moved) from an actual build system solution to a “framework to build build systems”, which is more or less what autoconf basically is and what scons has always been. This is particularly bad because it ensures there is no standard way to build a package without actually checking the definition files for that particular release: scons provides no standard options for flags handling, feature switching and the like; autotools can be messed up since different packages will use the same variable names with different meanings. If CMake were to provide just a framework, it would have the same exact problem. From what I read when I tried to learn CMake, I think this was supposed to be limited somewhat, but the results now don’t seem to be as good.

The second problem is slightly tied to the one above, and relates to the “macro hell”. One of the worst issues with autoconf is that, besides the set of macros provided with autotools itself, there are basically no standard macros. Sure, there is the Autoconf Macro Archive, but even I fail at using it (I had some problems before with the license handling, and should probably try to use it again), and the result is that you end up copying, forking and modifying the same macros over a series of projects. Some of the macros I wrote for xine are now used in lscube and also in PulseAudio.

CMake provides a set of modules to identify the presence of various libraries and other software packages; but instead of treating it as an official repository for these macros, I’ve been told they are “just examples” of how to write them. And some of them are very bad examples. I already ranted about the way the FindRuby module was broken, and the issue was ignored until a KDE developer submitted his own version. Unfortunately there are still modules that are just as broken. The CMake developers should really look into avoiding the “macro hell” problem of autotools by integrating the idea of a macro archive with CMake itself, maybe having an official “CMake modules” package installed alongside to provide the package-search macros, which could be updated independently of the actual CMake package.

I have lots of reservations about CMake, but I still think it’s a better alternative than many others. I also have quite a few problems with autotools, and I know they are far from perfect. Both build systems have their areas of usefulness; I don’t think either can be an absolute replacement for the other, and that is probably my only problem with all this fuss over CMake. Also, I do have an idea of what kind of buildsystem framework could hopefully replace both of them, and many others, but I still haven’t found anything that comes near it; I’ll leave that description for part two of this post, if I can find the time.

MySQL build system

I have to introduce this post by stating that I’m not a MySQL user; this site and the xine bugzilla both run on PostgreSQL, and similarly PostgreSQL is what holds the symbol collision database on my box. So if I sound critical of MySQL, well, I am. I’m not really much interested in improving it either, but I don’t want to just shitspread them, so my assessment will be as objective as I can make it.

I’ve been looking at the buildsystem of MySQL to help Jorge with the infamous Amarok 2 problem which is making it unavailable on AMD64 and other architectures. I have to say that I do blame the Amarok guys a bit (as much as I love them), because they should probably have pushed the MySQL devs to handle the issue before it hit stable; but well, what is done is done, and it’s time to handle it.

The problem is that Amarok tries to use MySQL Embedded in a shared library; shared libraries on AMD64 need to be built with PIC code, and of course MySQL Embedded is not built that way, as it only provides static archives to link against. So what is needed is for MySQL to build a shared library copy of MySQL Embedded, so that it can be linked into the Amarok engine library.

This is not something tremendously exotic to do; especially in Sound and Video we’ve been converting packages to install shared libraries for quite a while, and not just in Gentoo but in most distributions out there. When libtool is present, this usually gets quite easy, since you just have to change some names around to make sure that, instead of just building the archives, libtool is used to build the whole library, and there you go (see the sketch below). For MySQL it’s not even vaguely that easy.
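
To give an idea of what “changing some names around” means in automake terms, here is a minimal sketch with made-up target names (libfoo, foo.c); it is not MySQL’s actual buildsystem, just the usual static-archive-to-libtool conversion:

# before: only a static archive is built and installed
lib_LIBRARIES = libfoo.a
libfoo_a_SOURCES = foo.c

# after: libtool builds both the static and the shared flavour
lib_LTLIBRARIES = libfoo.la
libfoo_la_SOURCES = foo.c
libfoo_la_LDFLAGS = -version-info 0:0:0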

The problem is that the MySQL Embedded code seems like it was put there on a customer’s request, and as long as it worked for them, the whole thing was fine as it was; it shares part of the code with MySQL proper, but not all of it, and the shared sub-modules (which are installed as separate static archives by MySQL right now) are not self-sufficient, as they require cyclic linking of dependencies.

The result is that to actually get the thing working, there will likely have to be changes both in the buildsystem’s interface and in the number and type of installed files; while this would likely also solve part of the code duplication problem that I found some time ago, it’s very unlikely that upstream would accept that for their main sources right away.

While MySQL uses autoconf and automake as the base of its build system, it not only uses lots of custom rules in the makefiles, it also uses some “strange” methods to add source files to a target in particular cases (rather than using AM_CONDITIONAL, as sketched below), and it relies on knowledge rather than discovery, which makes it a bit more difficult to find out what is asking for what.
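
For reference, this is the idiom AM_CONDITIONAL provides for adding sources to a target only in particular cases; the option and target names here are made up for illustration, not taken from MySQL:

# in configure.ac
AC_ARG_ENABLE([embedded],
  AS_HELP_STRING([--enable-embedded], [build the embedded server library]))
AM_CONDITIONAL([BUILD_EMBEDDED], [test "x$enable_embedded" = "xyes"])

# in Makefile.am
lib_LTLIBRARIES = libembedded.la
libembedded_la_SOURCES = common.c
if BUILD_EMBEDDED
libembedded_la_SOURCES += embedded.c
endif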

So where is this post going? Well, it’s just a way to ask users to understand why we haven’t fixed the problem yet; as for why we can’t just hack around it like other distributions seem to have done, the answer is that we prefer to avoid those hacks since they are, well, nasty. And our users expect better from us, even when that means it takes months to get a package on an architecture because the only way to build it there is to use a nasty hack.

Discovering the environment versus knowledge repository

As far as users are concerned, the main problem with build systems based on autotools is the long delay caused by executing ./configure scripts. It is, indeed, a bit of a problem from time to time, and I have already expressed my resentment regarding superfluous checks. One of the other solutions proposed to mitigate the problem in Gentoo is running the scripts through a faster, more basic shell rather than the heavy and slow bash, but this has a few other side issues that I’m not going to discuss today.

As a “solution” (to me, a workaround) to this problem, a lot of build systems prefer using a “knowledge repository” that records how to deal with various compiler and linker flags, and with various operating systems. While this certainly gives better results for smaller, standards-based software (as I wrote in my other post, one quite easy way out of a long series of tests is just to require C99), it does not work quite as well for larger software, and tends to create fairly problematic situations.

One example of such a buildsystem is qmake as used by Trolltech, sorry, Qt Software. I had to fight with it quite a bit in the past when I was working on Gentoo/FreeBSD, since the spec files used under FreeBSD assumed that the whole environment was what ports provided, and of course Gentoo/FreeBSD had a different environment. But even without going full-blown towards build systems entirely based on knowledge, there can easily be similar problems with autotools-based software, as well as cmake-based software. Sometimes it’s just a matter of not knowing well enough what the future will look like; sometimes these repositories are simply broken. Sometimes, the code is simply wrong.

Let’s take for instance the problem I had to fix today (in a hurry) on PulseAudio: a patch to make PulseAudio work under Solaris went looking for the build-time linker (ld), and if it was the GNU version it used the -version-script option that it provides. On paper it looks correct, but it didn’t work and messed up the test4 release a lot. In this case the cause of the problem is that the macro used had been obsoleted and thus it never detected the linker as GNU, but it was, nonetheless, a bad way to deal with the problem.

Instead of knowing that GNU ld supports that option, and just that, the solution I implemented (which works) is to check whether the linker accepts the flag we need, and if it does, to provide a variable that can be used to deal with it. This is actually quite useful, since as soon as I let Yamato take a break from the tinderbox I can get the thing to work with the Sun linker too. But it’s not just that: nobody can tell me whether in the future a new linker will support the same options as GNU ld (who knows, maybe gold).
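
A minimal sketch of that kind of discovery check (not the actual macro I committed to PulseAudio, and with made-up cache variable and file names) could look like this in configure.ac:

dnl sketch: try the flag on a test link and export a variable if it works
AC_CACHE_CHECK([whether the linker accepts -Wl,--version-script],
  [my_cv_ld_version_script],
  [save_LDFLAGS="$LDFLAGS"
   echo '{ global: main; local: *; };' > conftest.ver
   LDFLAGS="$LDFLAGS -Wl,--version-script=conftest.ver"
   AC_LINK_IFELSE([AC_LANG_PROGRAM([], [])],
     [my_cv_ld_version_script=yes],
     [my_cv_ld_version_script=no])
   LDFLAGS="$save_LDFLAGS"
   rm -f conftest.ver])
AS_IF([test "x$my_cv_ld_version_script" = "xyes"],
  [VERSIONING_LDFLAGS='-Wl,--version-script=$(srcdir)/symbols.ver'])
AC_SUBST([VERSIONING_LDFLAGS])

The point is that whichever linker accepts the flag gets it, without the check having to know which linker it is talking to.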

A similar issue applies to Intel’s ICC compiler, which goes as far as passing itself off as GCC (defining the same internal preprocessor macros) so that software will use the GCC extensions that ICC implements. If everybody used discovery instead of a knowledge repository, this would not have been needed (and you wouldn’t have to work around the cases where ICC does not provide the same features as GCC — I had to do that for FFmpeg some time ago).
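
For the compiler case the discovery approach works the same way: instead of assuming that __GNUC__ implies a given extension, test the extension itself. A sketch with made-up cache and define names, checking one GCC extension as an example:

AC_CACHE_CHECK([whether the compiler supports __attribute__((constructor))],
  [my_cv_attribute_constructor],
  [AC_COMPILE_IFELSE(
     [AC_LANG_PROGRAM([static void __attribute__((constructor)) my_init(void) { }], [])],
     [my_cv_attribute_constructor=yes],
     [my_cv_attribute_constructor=no])])
AS_IF([test "x$my_cv_attribute_constructor" = "xyes"],
  [AC_DEFINE([HAVE_ATTRIBUTE_CONSTRUCTOR], [1],
     [Define if the compiler supports the constructor attribute])])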

Sure, a knowledge repository is faster, but is it just as good? I don’t think so.

Autotools Mythbuster! A guide!

I’ve been thinking about this for a while, and now I think it’s time to implement it and make it public: I’d like to write some complete and clean documentation about autotools: autoconf, automake, libtool and all the others. Something that shows more practical information about autotools than the references shipping with them, and a way to collect the good information out of my blog.

This kind of task, though, is quite complex and time-consuming, and I just can’t afford to do it in my spare time as it is: I have little spare time, and what I have I’d rather not spend entirely on Free Software-related tasks, or my health would likely get bad again. I already devote a lot of my spare time to Gentoo, and at least a bit of it has to stay for myself. But, since I have been asked about this many times, I decided to take a stab at it.

Although I certainly would have loved to see it become a book, especially since that would have helped me pay for bills, hardware and everything else related to my work on Free Software and beyond, I’m afraid that is unlikely to ever happen. O’Reilly make it explicit in their guidelines that only native English speakers are welcome to submit proposals. Alas, I’m not a native speaker (and if you’re one of the many still wondering whether I’m Spanish or whatever else, I’m Italian; yes, I know the name Diego sounds Spanish).

So my idea is to work on it on a donation basis: if people are interested in getting it written down, I’ll work on it; otherwise I’ll just add some stuff now and then and keep writing the interesting bits on my blog. My idea is to collect donations and, starting from €50, dedicate a given amount of time per week to writing (Edit: I didn’t make it clear before: the €50 need not come from a single person at a single time, it’s just the point where I start to write; donations stack up over different people and times). The documentation will still be available to everybody for free, under a CreativeCommons BY-NC-SA license (but still open to relicensing for other uses if you ask me — I know there was a site that had a way to declare that explicitly, but I forgot its name; I remember it having to do with pineapples, though).

But since I already have a lot of sparse documentation about autotools in this blog, to the point that I often use it as a reference for patches I submit around and bugs I open, why would you care if I were to turn it into comprehensive documentation? Well, as it is, the documentation is very sparse, which means that you have to search around the blog to find it. Even though I do embed a Google Custom Search widget on the pages, it’s not really easy to find what you need most of the time.

Also, the blog posts suffer from their nature: I don’t go around editing them; if I have made mistakes, I usually correct them by posting something new. I also tend to write opinionated entries, so you can find me writing snarky remarks about KDE and CMake in a post that is supposed to provide information on porting to autoconf 2.64, without feeling bad at all: it’s my blog, after all. But this also means that it’s not “professional” to link to such an entry as a reference article. At the same time, I don’t think this is material for articles, because those also suffer from being mostly “set in stone”, while autotools are not set in stone, and new useful tricks can be added easily.

I’m basically not asking anybody to pay me to tell you guys new useful tricks for autotools, or how to avoid mistakes, or how to fix bugs. I’m going to continue doing that; I’m going to continue posting on the blog. What I’m actually asking to be paid for is, well, the editing into a form that can be easily searched and referenced. I’m also still going to answer enquiries and help requests, so don’t worry about that, and I’m also going to pour the same amount of effort into what I do, like I do every day.

So where is the beef? For now it’s really just one section, ported from the post I already linked above, together with an explanation of how the thing is (hopefully) going to work, and you can find it on my site. I’m going to add at least two more sections this week, compatibly with the time I’ve got; for anything more, feel free to chip in.

And before you tell me: yes, I know that it’s a bit evil to also add Google AdSense to the page. On the other hand, if you use the AdBlock Plus extension for Firefox, or anything similar, you won’t notice it at all, since the container is set to disappear in that case. Please don’t think I make much money with that, but every bit helps, considering that out of the 14 hours a day I spend in front of the computer, in the past month I probably averaged 11 spent on Gentoo and other Free Software work, not paid for, if not for the few guys who thought of me (thank you! once again!).

Post Scriptum: the sources aren’t available yet, since I have to check one thing out first; they’ll go online later this week anyway, as there are some useful pieces of reusable DocBook trickery in there. And I already fixed a bug in app-text/docbook-xsl-ns-stylesheets while preparing it. Basically, they’ll get released together with a decent style (thanks Gilles for it).

For A Parallel World. Theory lesson n.3: on recursive make

There is one particular topic that was in my TODO list of things to write about in the “For A Parallel World” series, and that topic is recursive make, the most common form of build system in use in this world.

The problems with recursive make have been known since at least 1997, when the almost-famous Recursive Make Considered Harmful paper exposed them. I suggest reading the paper to everybody who’s interested in the problems of parallelising build systems.

As it turns out, automake supports non-recursive setups quite well, and indeed I use one on at least one project of mine. Lennart also uses it in PulseAudio, where all the modules are built from the same Makefile.am file even though their sources (even the generated ones) are split among different directories.
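
To show what that looks like, here is a minimal sketch of a non-recursive automake setup with hypothetical file names, relying on the subdir-objects option so that object files end up next to their sources:

# the only Makefile.am in the project
AUTOMAKE_OPTIONS = subdir-objects

lib_LTLIBRARIES = libfoo.la
libfoo_la_SOURCES = \
        src/core/engine.c \
        src/modules/module-a.c \
        src/modules/module-b.c

bin_PROGRAMS = foo
foo_SOURCES = src/tools/foo.c
foo_LDADD = libfoo.la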

Unfortunately, the paper has to be taken with a grain of salt. The reason I’m saying that is that, if you read it, there are at least a couple of places where the author seems not to know his make rules too well and defines a rule with two output files, and one where he ignores the naming problems of temporary files.

There are, of course, other solutions to this problem; for instance, I remember FreeBSD’s make being able to recurse into directories in parallel just fine, but I sincerely think we have to stop one particular problem here first. I don’t have too many problems with recursive make used for different binaries or final libraries, or for libraries that are shared among targets. Yes, it tends to put more serialisation into the process, but it’s not tremendously bad, especially not in the non-parallel case, where the problem does not seem to appear for most users at all.

What I think is a problem, and I seriously detest it, is when the only reason to use recursive make is to keep the layout of the built object files the same as the source files, creating sub-directories for logical parts of the same target. With automake this tends to require the creation of convenience noinst libraries that are built and linked against, but never installed. While this works, it tends to increase tremendously the complexity of the whole build, and the time required to run it, since sometimes these libraries get compiled not only into archive files (.a static libraries) but also into final ELF shared objects, depending on their use. Since we know that linking is slow, we should try to avoid doing it for no good reason, don’t you think?

In general, the presence of noinst_LTLIBRARIES means that either you’re grouping sources that will be used just once in a recursive make system (the sketch below shows the pattern), or you’re creating internal convenience libraries, which might be even more evil than that, since it can create two huge binaries, as in the case of Inkscape.
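
This is the kind of layout I’m talking about, with hypothetical names: each sub-directory builds a convenience library that only exists to be linked into the real target one level up:

# src/modules/Makefile.am
noinst_LTLIBRARIES = libmodules.la
libmodules_la_SOURCES = module-a.c module-b.c

# src/Makefile.am
SUBDIRS = modules
lib_LTLIBRARIES = libfoo.la
libfoo_la_SOURCES = engine.c
libfoo_la_LIBADD = modules/libmodules.la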

Once again, if you want your build system reviewed, feel free to drop me an email; depending on how much time I’ve got, I’ll do my best to point out possible fixes, or actually fix it.

Putting a user perspective on the upcoming changes in autoconf 2.64

Although I’ve written about a possible cataclysm related to autoconf-2.64 to warn about the next release, it’s easy for users not to understand why the fuss about it. In particular, Gentoo users have come to know that changes, especially in autotools, tend to be quite disruptive, and might actually start asking themselves why people go through this at all, considering that each time a new autoconf, automake, libtool or whatever is released, we have to go round and round to fix the issues.

For this reason, I’ve decided to put the upcoming changes in the perspective of users, to let them understand that all the work going on for this is going to be quite useful to them, in the longish run. I say longish because the change I’ve blogged about, the handling of present-but-not-compilable headers, is something that has been in the making since at least 2.59, which was already out when I joined Gentoo the first time; just to give a timeframe, that was about three years ago, if not more (a quick check on the ChangeLog file dates the start of the transition to 2001-08-17, almost eight years ago!).

The change was made for quite a good technical reason: just checking whether a header is present is of little help. Even though it’s not just a stat() call like CMake does, but goes through the preprocessor (which in turn makes it possible to consider a header that is found but not usable as not present, like malloc.h on Mac OS X), the developers most likely want to know whether the header can be used in their project, which means it has to work with the compiler they are using and with the options they are enabling.

Since changing the behaviour from one version to the next wouldn’t have given people enough time to convert their code to check properly for header usability, for a while autoconf-generated configure files checked both that the header was present (through the preprocessor) and that it was usable (through the compiler). This, though, creates long, boring and slow configure files, because it checks for more stuff than needed: for each header file in an AC_CHECK_HEADERS macro, two processes are spawned, preprocessor and compiler. As you might guess, this gets tremendously boring on projects that check just shy of a hundred header files.
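
The way to keep a header check working under the compiler-based test is to pass the prerequisite includes as the fourth argument of AC_CHECK_HEADERS; a small sketch along the lines of the example in the autoconf manual (sys/mount.h needs sys/param.h on some BSDs):

dnl without the prerequisites, sys/mount.h can be "present but not compilable"
AC_CHECK_HEADERS([sys/param.h])
AC_CHECK_HEADERS([sys/mount.h], [], [],
  [#include <sys/types.h>
   #ifdef HAVE_SYS_PARAM_H
   # include <sys/param.h>
   #endif
  ])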

While the 2.64 version still checks with both preprocessor and compiler, and warns when the compiler rejects a header that the preprocessor accepted or vice-versa (the compiler always wins now), hopefully we won’t have to wait till 2017 to have just one test per header in the configure output, which will finally mean shorter, slimmer, faster configure scripts.

Another interesting change in the 2.64 release, which makes it particularly sweet to autotools fanatics like me, is the change in AC_DEFUN_ONCE semantics that makes it possible to define macros that are executed exactly once. The usefulness of this is that often you get people writing bad autoconf code that, instead of using AC_REQUIRE to make sure a particular macro has been expanded (which is usually the case for macros using $host and thus needing AC_CANONICAL_HOST), simply calls it, which means the same check is repeated over and over (with an obvious waste of time and increase in the size of the generated configure file).

Thanks to the AC_DEFUN_ONCE macro, not only is it finally possible to define macros that never get executed more than once, but most of the default macros that are supposed to work that way, like AC_CANONICAL_HOST and its siblings, are now defined with it, which means that hopefully even untouched configure files will be slimmed down.
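
To make the difference concrete, here is roughly the pattern involved, with a made-up macro name; the first form repeats the canonicalisation checks in every caller, the second expands them once:

dnl Bad: calling the dependency directly repeats its checks per caller.
AC_DEFUN([my_CHECK_SOMETHING],
  [AC_CANONICAL_HOST
   AS_CASE([$host_os],
     [linux*], [my_something=yes],
     [my_something=no])])

dnl Better: AC_REQUIRE expands AC_CANONICAL_HOST once, before the body.
AC_DEFUN([my_CHECK_SOMETHING],
  [AC_REQUIRE([AC_CANONICAL_HOST])dnl
   AS_CASE([$host_os],
     [linux*], [my_something=yes],
     [my_something=no])])

dnl And with 2.64 a macro can itself be declared once-only:
AC_DEFUN_ONCE([my_GLOBAL_SETUP],
  [AC_MSG_NOTICE([running global setup just once])])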

Of course, this also means there are more catches with it, so I’ll have to write about them in the future. Sigh, I wish I could find more time to write on the blog, since there are so many important things I have to write about, but I don’t have enough time to expand them to a proper size since I’m currently working all day long.