Yes, we still need autotools

One of the most common refrains that I hear lately, particularly when people discover Autotools Mythbuster, is that we don’t need autotools anymore.

The argument goes as such: since Autotools were designed for portability on ancient systems that nobody really uses anymore, and since most modern operating systems have a common interface, whether that is POSIX or C99, the reasons to keep Autotools around are minimal.

This could be true… if your software does nothing that is ever platform specific. Which is indeed possible, but quite rare. unpaper, for instance, has a fairly limited amount of code in its configure.ac, as the lowest-level thing it does is reading and writing files; I could have easily used anything else for its build system.

But on the other hand, if you’re doing anything more specific, which usually includes network I/O, you end up with a bit more of a script. Furthermore, if you don’t want to pull a systemd and decide that the latest Linux version is all you want to support, you end up having to figure out alternatives, or at least conditionals for what you can and cannot use. You may not want to do like VLC, which supports anything between OS/2 and the latest Apple TV, but there is space between those extremes.

If you’re a library, this is even more important. Because while it might be that you’re not interested in any peculiar systems, it might very well be that one of your consumers is. Going back to the VLC example, I have spent quite a bit of time in the past weekends of this year helping the VLC project by fixing (or helping to fix) the build system of new libraries that are made a dependency of VLC for Android.

So while we have indeed overcome the difficulties of porting across many different UNIX flavours, we still have portability concerns. It is probably true that we should reconsider what Autoconf tests for by default, and in particular there are some tests that are no longer appropriate for modern systems (for instance the endianness tests were an obvious failure when MacIntel arrived, as the code would then be built for both big endian (PPC) and little endian (Intel); on the other hand, even these concerns are not important anymore, as universal binaries are already out of style).

So yes, I do think we still need portability, and I still think that not requiring a tool that depends on XML RPC libraries is a good side of autotools…

Autotools Mythbuster: so why do we have three projects?

As much as I’ve become an expert on the topic, there is one question I still have no idea how to answer, and that is why on earth we have three separate projects (autoconf, automake, libtool) instead of a single Autotools project. Things get even more interesting when you think that there is the Autoconf Archive – which, by the way, references Autotools Mythbuster as best practices – and then projects such as dolt that are developed by completely separate organisations.

I do think that this is a quite big drawback of autotools compared to things like CMake: you now have to allow for combinations of different tools written in different languages (autoconf is almost entirely shell and M4, automake uses lots of Perl, libtool is shell as well), with their own deprecation timelines, and with different distributions providing different sets of them.

My guess is that many problems lie in the different sets of developers for each project. I know for instance that Stefano at least was planning to have a separate Automake-NG implementation that did not rely on Perl at all, but used GNU make features, including make macros. I generally like this idea, because similarly to dolt it removes overhead for the most common case (any Linux distribution will use GNU make by default), while not removing the option where this is indeed needed (any BSD system.) On the other hand it adds one more dimension to the already multi-dimensional compatibility problem.

Having a single “autotools” package, while making things a bit more complicated on the organizational level, could make a few things fit better. For instance if you accepted Perl as a dependency of the package – since automake needs it; but remember this is not a dependency for the projects using autotools! – you could simplify the libtoolize script which is currently written in shell.

And it would probably be interesting if you could just declare in your configure.ac file whether you want a fully portable build system, or you’re okay with telling people that they need a more modern system, and drop some of the checks/compatibility quirks straight at make dist time. I’m sure that util-linux does not care about building dynamic libraries on Windows, and that PulseAudio does not really care for building on non-GNU make implementations.

Of course these musings are only personal, and there is nothing to substantiate them regarding how things would turn out; I have not done any experiments with actually merging the packages into a single releasable unit, but I do have some experience with split-but-not-really software, and in this case I can’t see many advantages in the split of autotools, at least from the point of view of the average project that is using the full set of them. There are certainly reasons why people would prefer them to stay split, especially if they have been using only autoconf and snubbing automake all this time, but… I’m not sure I agree with those reasons to begin with.

Autotools Mythbuster: who’s afraid of autotools?

I’ve been asked over on Twitter if I had any easy, one-stop-shop tutorial for Autotools newbies… the answer was no, but I will try to make up for it by writing this post.

First of all, under the name autotools we include quite a few different tools. If you have a very simple program (not hellow-simple, but still simple), you definitely want to use at the very least two: autoconf and automake. While you could use the former without the latter, you really don’t want to. This means that you need two files: configure.ac and Makefile.am.

The first of the two files (configure.ac) is processed to produce a configure script which the user will be executing at build time. It is also the bane of most people because, if you look at one for a complex project, you’ll see lots of content (and logic) and next to no comments on what things do. Lots of it is cargo-culting and I’m afraid I cannot help but just show you a possible basic configure.ac file:

AC_INIT([myproject], [123], [flameeyes@flameeyes.eu], [https://flameeyes.blog/tag/autotools-mythbuster/])
AM_INIT_AUTOMAKE([foreign no-dist-gz dist-xz])

AC_PROG_CC

AC_OUTPUT([Makefile])

Let me explain. The first two lines are used to initialize autoconf and automake respectively. The former is being told the name and version of the project, the place to report bugs, and a URL for the package to use in documentation. The latter is told that we’re not a GNU project (seriously, this is important — you wouldn’t believe how many tarballs I find with 0-sized files just because they are mandatory in the default GNU layout; though I did find at least one crazy package lately that wanted to have a 0-sized NEWS file), and that we want a .tar.xz tarball and not a .tar.gz one (which is the default).

After initializing the tools, you need to, at the very least, ask for a C compiler. You could have asked for a C++ compiler as well, but I’ll leave that as an exercise to the reader. Finally, you have to tell it to output Makefile (it’ll use Makefile.in, but we’ll create Makefile.am instead soon).

To build a program, you need then to create a Makefile.am similar to this:

bin_PROGRAMS = hellow

dist_doc_DATA = README

Here we’re telling automake that we have a program called hellow (whose sources are, by default, hellow.c) which has to be installed in the binary directory, and a README file that has to be distributed in the tarball and installed as a documentation piece. Yes, this is really enough as a very basic Makefile.am.

If you were to have two programs, hellow and hellou, and a convenience library between the two you could do it this way:

bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c
hellow_LDADD = libhello.a

hellou_SOURCES = src/hellou.c
hellou_LDADD = libhello.a

noinst_LIBRARIES = libhello.a
libhello_a_SOURCES = lib/libhello.c lib/libhello.h

dist_doc_DATA = README

But then you’d have to add AC_PROG_RANLIB to the configure.ac calls. My suggestion is that if you want to link things statically and it’s just one or two files, just go for building them twice… it can actually make the build faster (one less serialization step), and with the new LTO options it may very well improve the optimization as well.
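For comparison, here is a minimal sketch of that “build it twice” approach, reusing the same hypothetical file names as above; each program simply lists the shared sources directly, so no convenience library and no AC_PROG_RANLIB are needed (depending on your automake version you may also want the subdir-objects option):

bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c lib/libhello.c lib/libhello.h
hellou_SOURCES = src/hellou.c lib/libhello.c lib/libhello.h

dist_doc_DATA = README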

As you can see, this is really easy at the basic level… I’ll keep writing a few more posts with easy solutions, and probably next week I’ll integrate all of this in Autotools Mythbuster and update the ebook with an “easy how to” as an appendix.

Autotools Mythbuster: automagically disabled dependencies

One question that I’ve been asked before, and to which I didn’t really have a good answer up to now is: should configure scripts fail, when a dependency is enabled explicitly, but it can’t be found? This is the automagic dependency problem, but on the other branch.

With proper automatic dependencies, if the user does not explicitly request whether to enable something or not, it’s customary that the dependency is checked for and, if found, the feature it’s connected to is enabled. When the user has no way to opt out of it (which is bad), we call it an automagic dependency. But what happens if the user has requested it and the feature is not available?

Unfortunately, there is no standard for this, and I have myself used both the “fail if asked and not found” and the “warn if asked and not found” approaches. But the recent trouble between ncurses and freetype made me think that it’s important to actually make the point that there is a correct way to deal with this.

Indeed, what happens is that right now I have no way to tell you all that the tinderbox has found every single failure caused by sys-libs/ncurses[tinfo], even after the whole build completed: it might well be that a particular package, unable to link to ncurses, decided to disable it altogether. The same goes for freetype. Checking for all of that would be nice, but I honestly have no way to do it.
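For reference, the automagic anti-pattern that makes this silent disabling possible usually looks like this minimal sketch: the dependency is probed unconditionally, the user has no switch at all, and a missing library merely produces a warning (or nothing at all) while the feature is dropped. The foobar names are the same hypothetical ones used in the correct version below:

dnl automagic: no AC_ARG_WITH at all, so the user cannot opt in or out
PKG_CHECK_MODULES([FOOBAR], [foobar >= 123], [
  AC_DEFINE([HAVE_FOOBAR], [1], [Found foobar library])
], [
  AC_MSG_WARN([foobar not found, building without it])
])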

So to make sure that the user really gets what they want, please always verify that you’re proceeding the way the user asked. This also guarantees that, even in packaging, there won’t be any difference when a dependency is updated or changed. In particular, with pkg-config, the kind of setup you should have is the following:

AC_ARG_WITH([foobar],
  AS_HELP_STRING([--with-foobar], [Enable the features needing foobar library]))

AS_IF([test "x$with_foobar" != "xno"], [
  PKG_CHECK_MODULES([FOOBAR], [foobar >= 123], [
    AC_DEFINE([HAVE_FOOBAR], [1], [Found foobar library])
  ], [
    AS_IF([test "x$with_foobar" = "xyes"], [
      AC_MSG_ERROR([$FOOBAR_PKG_ERRORS])
    ])
  ])
])
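For completeness, this is roughly how the result would then be consumed; the conditional and the Makefile.am fragment are only a sketch, and the program name is hypothetical:

dnl still in configure.ac: let automake know about the result as well
AM_CONDITIONAL([HAVE_FOOBAR], [test -n "$FOOBAR_LIBS"])

# in Makefile.am
bin_PROGRAMS = myprogram
myprogram_SOURCES = main.c
myprogram_CFLAGS = $(FOOBAR_CFLAGS)
myprogram_LDADD = $(FOOBAR_LIBS)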

I’ll be discussing this and proposing this solution in the next update to Autotools Mythbuster (which is due anytime soon, including the usual eBook update for Kindle users). This would hopefully make sure that in the future, most configure scripts will follow this approach.

Don’t try autoconf 2.66 at home just yet!

I have to thank Arfrever for making me notice this, with the bug about Ruby 1.9 he reported.

The GNU project released autoconf 2.66 two days ago. Very few notable changes are present in it, just as few were listed beforehand, so I didn’t go out of my way to test it ahead of time. My bad! Indeed, there is one big nasty change in it, for which I’d tell all of you to put off the update until I say otherwise. Hopefully it won’t get unmasked in Gentoo for a while either.

There are two main problems with this release; the first is due to the implementation of a stricter macro to ensure that the parameters given to it are not variable across executions:

**** The macro AS_LITERAL_IF is slightly more conservative; text containing shell quotes are no longer treated as literals. Furthermore, a new macro, AS_LITERAL_WORD_IF, adds an additional level of checking that no whitespace occurs in literals.

Well, whatever the idea behind this was, it seems to have broken the AC_CHECK_SIZEOF macro: if you pass it [void*] as a parameter, it’ll report it as not being a literal (while it is), causing the following error:

flame@yamato test % cat configure.ac
AC_INIT([foo], [0])

AC_CHECK_SIZEOF([void*])

AC_OUTPUT

flame@yamato test % autoconf
configure.ac:3: error: AC_CHECK_SIZEOF: requires literal arguments
../../lib/autoconf/types.m4:765: AC_CHECK_SIZEOF is expanded from...
configure.ac:3: the top level
autom4te-2.66: /usr/bin/m4 failed with exit status: 1

This would be bad enough. But I got a nastier surprise when running autoreconf over the feng sources, whose build system I wrote myself and which, if I may say so, is very well engineered:

flame@yamato feng % autoreconf -fis
configure:6275: error: possibly undefined macro: AS_MESSAGE_LOG_FDdnl
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.
autoreconf-2.66: /usr/bin/autoconf-2.66 failed with exit status: 1

The problem here is almost obvious, and it’s related to the dnl appended to the end of the macro name; the dnl keyword is used as an (advanced) comment delimiter in autoconf scripts, meaning “Discard up to New Line”, and is often used to write over multiple lines commands that should be kept together in the output, much like line continuation in many other languages. A quick look at the generated configure file turns up this:

        as_fn_error $? "Package requirements (glib-2.0 >= 2.16 gthread-2.0) were not met:

$GLIB_PKG_ERRORS

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables GLIB_CFLAGS
and GLIB_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details." "$LINENO" AS_MESSAGE_LOG_FDdnl

You can easily see that the problem here is with the pkg-config macros (pkg.m4). Funnily enough, there is no change related to error reporting listed in the autoconf news file, so I wasn’t expecting this. The problem lies further down in the pkg-config macro file, but it’s not important to fully debug it right now; it’s actually quite easy to fix in pkg-config itself, but here’s the catch.

Since the pkg.m4 macro file is way too often bundled with the upstream packaging, and its presence overrides the copy from the system, even fixing pkg-config will not fix all the software that carries outdated copies of the macro file.

This is almost the same problem as with libtool 1 vs libtool 2 macro files, with the difference that this one is going to be much, much more common. If you’re a package maintainer, you can do something already, before this even hits the users: remove the pkg.m4 file during the src_prepare() phase; you’re already depending on pkg-config in the ebuild for it to work at build time, and since we don’t split the macro file from the command itself, you can simply rely on its presence on the system.
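As a sketch of what I mean (the path of the bundled macro varies from package to package, m4/pkg.m4 is just the most common layout), the ebuild side would be something like:

src_prepare() {
    # drop the bundled, possibly outdated pkg.m4; the system copy will be
    # picked up instead when the autotools are regenerated
    rm -f m4/pkg.m4 || die "unable to remove bundled pkg.m4"
}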

In the mean time, I’m not sure if I want to start testing with it just yet or if we should be waiting for 2.67…

Autotools Mythbuster: Why autoconf is not getting any friendlier

You know that I’m an “autotools lover”, in the sense that I prefer having to deal with an autotools-based build system than with most custom build systems, which are usually as broken as they can possibly be. Or with CMake.

But I’m definitely not saying that autotools are perfect, far from it. I’m pretty sure I have said before that there are so many non-obvious tricks around that they confuse most people and cause lots of confusion out there. The presence of these non-obvious tricks is due partly to the way autoconf started in the first place, partly to the fact that it has to generate standard sh scripts for compatibility with older systems, and partly to the fact that upstream is not really helping the whole stack to mature.

One of the most non-obvious, yet common, macros out there is definitely AC_ARG_ENABLE (and its cousin AC_ARG_WITH, obviously), as I’ve blogged and documented before. The main issue with those macros is that most people expect them to be boolean (--enable and --disable; --with and --without) even though they are quite freeform (--with-something=pfx or --enable-errors=critical). But also, the sheer fact that the second parameter needs to be produced by yet another macro (AS_HELP_STRING) is quite silly if you think about it. And, if you look at it, the (likely more recent) AC_ARG_VAR macro does not require a further macro call for the help string.
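To make the non-boolean nature concrete, here is a minimal sketch of how the macro is actually meant to be used (the option name and values are hypothetical); note the separate AS_HELP_STRING call needed just to produce the help text:

AC_ARG_ENABLE([errors],
  AS_HELP_STRING([--enable-errors=LEVEL], [Turn warnings of at least LEVEL into errors]),
  [errors_level=$enableval],
  [errors_level=none])

dnl $errors_level can now be "yes", "no", "critical", or anything else the user typed
AC_MSG_NOTICE([error level selected: $errors_level])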

It gets even wilder: a lot of the AC_ARG_WITH calls are, further down, wrappers around calls to PKG_CHECK_MODULES (a pkg-config support macro). How is this worse? Well, you end up repeating the same code between different projects to produce, more or less, the same output. To at least reduce the incidence of this, Luca wrote a set of macros that wrap around and perform these common tasks. The result is that you have to provide a lot less details; you lose some flexibility of course, but it produces a much neater code in your configure.ac.

Now, as I said, autoconf upstream is partly responsible for this: while the new versions have been trying to reduce the possible misuse of macros, they are bound by a vast amount of compatibility requirements. You don’t see the kind of changes that happened with the 2.1x → 2.5x version jump nowadays. The result of this is that you cannot easily provide new macros, as they wouldn’t be compatible with older versions of autoconf, and that would mostly be taken as a bad thing.

And the reason for that can be found in why libvirt’s configure is still stuck with compatibility with autoconf 2.59. While autotools are designed not to require their presence on either the host or build system (unlike CMake, imake, qmake, scons, …), developing for, and checking compatibility with, older systems requires having the tools at hand to rebuild configure and company. I solve this myself by just using NFS to pass the code around the various (real and virtual) machines, after regenerating it on my Gentoo box, but I admit it’s definitely not sleek enough. In this case, libvirt is also developed against RHEL-5, and on that system the only autoconf available is 2.59.

One way to solve this kind of problem would be to have a single, main repository of macros that everybody could use to perform the same tasks. Of course this would also mean standardising on some sort of interface for macros, and for the build systems themselves, and this is easier said than done. While we’re definitely not at the point Ruby is, we still aren’t really that standard. Many packages handled by more or less the same people tend to re-use the same code over and over, but that only builds up a number of similar, but still half-custom, build systems.

There are only two efforts, as far as I know, that ever tried extending autoconf in a “massive” way: KDE (and the results are, well, let’s just say that I can’t blame anyone for trying to forget about KDE3’s build system), and GNOME. The latter is still alive, although it isn’t concerned with giving more generic macro interfaces to common tasks.

There is the autoconf macro archive, but there is one catch with it: some of the macros submitted there have GPL or GPL-affine licenses (which is fine as long as they are compatible); that means that you might not be able to use them in MIT-licensed projects. While the archive does say that the FSF suggests using more liberal licenses for macro files, it does not require submissions to use them. Which can get quite messy in the long term, I’m afraid, for those projects.

At any rate, this post should show that I don’t really think that autotools are error-proof, but at the same time, I don’t think that creating a totally new language to express these things (like CMake does) is the solution. If only Rake were parallel-capable (which is unlikely to happen as long as Ruby cannot seriously multithread), it would probably be, in my opinion, a better replacement for autotools than what we have now. That is, if the presence of Ruby is not a problem.

Why autoconf updates are never really clean

I’m sure a lot of users have noticed that each time autoconf is updated, a helluva lot of packages fail to build for a while. This is probably one of the reasons why lots of people dislike autotools in the first place. I would like to let people know that it’s not entirely autoconf’s fault if that happens, and actually, it’s often not autoconf’s fault at all!

I have already written one post about phantom macros due to recent changes, but that wasn’t really anybody’s fault, in the sense that the semantics of the two macros changed, with warning, between autoconf versions. On the other hand, I have also ranted, mostly on identi.ca, about the way KDE 3, in its latest version, is badly broken by the update. Since the problem with KDE 3 is far from isolated, I’d like to rant a bit more about it here and try to explain why it’s a bad thing.

First of all, let me describe the problem with KDE3 so that you understand I’m not coming up with stuff just to badmouth them. I have already written, and ranted, in the past about the fact that KDE3’s build system was not autotools, but rather autotools-based. Indeed, the admin/ subdirectory that is used by almost all the KDE3 packages is a KDE invention; the configure.in.in files as well. Unfortunately it doesn’t look like the KDE developers learnt anything from it, and they seem to be doing something very similar with CMake as well. I feel sorry for our KDE team now.

Now, of course there are reasons why KDE created such a build system, which reminds me of Frankenstein: on one side, they needed to wire up support for Qt’s uic and moc tools; on the other, they wanted the sub-module monolithic setup that is sought after by a limited number of binary distributions and hated to the guts by almost all the source-based distributions.

I started hating this idea for two separate reasons: the first is that we couldn’t update automake: only 1.9 works, and we’re now at 1.11; the newer versions changed enough behaviour that there is no chance the custom code works. The second reason is that the generated configure files were overly long, checking the same things over and over, and in very slow ways (compiling test programs rather than using pkg-config). One of the tests I always found braindead was the check, done by every KDE3-based package, on whether libqt-mt required libjpeg at link time: a workaround for a known broken libqt-mt library!

Now, with autoconf 2.64, the whole build system broke down. Why’s that? Very simple: if you try to rebuild the autotools for kdelibs, like Gentoo does, you end up with a failure because the macro AH_CHECK_HEADERS is not found. That macro has been renamed in 2.64 to _AH_CHECK_HEADERS since it’s an internal macro, not something that configure scripts should be using directly. Indeed, this macro call is in the KDE-custom KDE_CHECK_HEADERS, which seems to deal with C and C++ language differences in the checks for headers (funnily, this wasn’t enough to avoid language mistakes). This wouldn’t be bad; extending macros is what autoconf is all about. But using internal macros to do that is really a mistake.
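For contrast, a header check that needs the C++ compiler can be written using only public, documented interfaces, along these lines (the macro name is hypothetical, and this is only a sketch of the idea, not a drop-in replacement for KDE_CHECK_HEADERS):

dnl check the given headers with the C++ compiler instead of the C one
AC_DEFUN([MY_CHECK_CXX_HEADERS], [
  AC_LANG_PUSH([C++])
  AC_CHECK_HEADERS([$1])
  AC_LANG_POP([C++])
])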

Now, if it was just KDE making this mistake, the problem would be solved already: KDE 4 migrated to CMake, so in the worst case it would fail when the next CMake change is done that breaks their CMake-based build system, and KDE 3 is going away, which will solve the autoconf 2.64 problem altogether. Unfortunately, KDE was not the only project making this mistake; even worse, projects with exactly one package made this mistake, and that’s quite a bit of a problem.

When you have to maintain a build system for a whole set of packages, like KDE has to, mistakes like using internal macros are somewhat to be expected and shouldn’t be considered strange or out of place. When you do that for a single package, then you really should stop writing build systems, since you’re definitely overcomplicating things without any good reason.

Some of the signs that your build system is overcomplicated:

  • it still looks like it was generated by autoscan; you have not removed any of the checks it added, nor have you added conditionals in the code to act upon those checks;
  • you’re doing lots of special conditioning over the host and target definitions; you don’t even try to find the stuff in the system, but decide whether it’s there or not by checking the host; that’s not the autoconf way;
  • you replace all the standard autoconf macros with your own, having NIH_PROG_CC for instance; you are trying to be smarter than autoconf, but you most likely are not.

autoconf-2.64: the phantom macro menace

While I should write this down in Autotools Mythbuster in a more generic fashion, since I found this today I wanted to write down these notes for other developers. With autoconf-2.64 there can be a problem with “phantom macros”, as in macros that are called but seem not to produce any code.

In particular, I noticed a configure failure in recode today. The reported error output is the following:

checking for flex... flex
checking lex output file root... lex.yy
checking lex library... -lfl
checking whether yytext is a pointer... yes
checking for flex... (cached) flex
./configure: line 10866: syntax error near unexpected token `fi'
./configure: line 10866: `fi'

Looking at the actual configure code, you can easily see what the problem is around line 10866:

if test "$LEX" = missing; then
  LEX="$(top_srcdir)/$ac_aux_dir/missing flex"
  LEX_OUTPUT_ROOT=lex.yy
  else


fi

In sh, as you probably know already, “else fi” is invalid syntax; but what is the code that produces this? Well, looking at configure.in is not enough; you also need to check an m4 file shipped by the package:

# in configure.in:
ad_AC_PROG_FLEX

# in m4/flex.m4
## Replacement for AC_PROG_LEX and AC_DECL_YYTEXT
## by Alexandre Oliva 
## Modified by Akim Demaille so that only flex is legal

# serial 2

dnl ad_AC_PROG_FLEX
dnl Look for flex or missing, then run AC_PROG_LEX and AC_DECL_YYTEXT
AC_DEFUN(ad_AC_PROG_FLEX,
[AC_CHECK_PROGS(LEX, flex, missing)
if test "$LEX" = missing; then
  LEX="$(top_srcdir)/$ac_aux_dir/missing flex"
  LEX_OUTPUT_ROOT=lex.yy
  AC_SUBST(LEX_OUTPUT_ROOT)dnl
else
  AC_PROG_LEX
  AC_DECL_YYTEXT
fi])

So there are calls to the AC_PROG_LEX and AC_DECL_YYTEXT macros, which means there should be code in there. What’s happening? Well, maybe you remember a previous post where I listed some user advantages in autoconf-2.64:

Another interesting change in the 2.64 release, which makes it particularly sweet to autotools fanatics like me, is the change in AC_DEFUN_ONCE semantics, which makes it possible to define macros that are executed exactly once. The usefulness of this is that often you see people write bad autoconf code that, instead of using AC_REQUIRE to make sure a particular macro has been expanded (which is usually the case for macros using $host and thus needing AC_CANONICAL_HOST), simply calls it, which means the same check is repeated over and over (with an obvious waste of time and increase in size of the generated configure file).

Thanks to the AC_DEFUN_ONCE macro, not only is it possible to finally define macros that never get executed more than once, but also most of the default macros that are supposed to work that way, like AC_CANONICAL_HOST and its siblings, are now defined with it, which means that hopefully even untouched configure files will be slimmed down.

Of course, this also means there are more catches with it, so I’ll have to write about them in the future. Sigh I wish I could find more time to write on the blog since there are so many important things I have to write about, but I have not enough time to expand them to a proper size since I’m currently working all day long.

Indeed, the two macros above are both once-expanded macros, which means that autoconf expands them before the rest of the newly defined macro. Now, the solution for this is using M4sh properly (because autoconf scripts are not pure sh, they are M4sh, which is a language halfway between sh and m4). Instead of using if/then/else, you should use AS_IF; indeed, changing the above macro to this:

AC_DEFUN(ad_AC_PROG_FLEX,
[AC_CHECK_PROGS(LEX, flex, missing)
AS_IF([test "$LEX" = missing], [
  LEX="$(top_srcdir)/$ac_aux_dir/missing flex"
  LEX_OUTPUT_ROOT=lex.yy
  AC_SUBST(LEX_OUTPUT_ROOT)dnl
], [
  AC_PROG_LEX
  AC_DECL_YYTEXT
])])

allows autoconf to understand the flow of the code and produce the proper sh code in the final configure file; running the resulting configure now gives:

checking for flex... flex
checking for flex... (cached) flex
checking lex output file root... lex.yy
checking lex library... -lfl
checking whether yytext is a pointer... yes

(see how the two checks for flex are both at the top of the list of checks?).

Unfortunately there are more problems with recode, but at least this documents the first problem, which I’m afraid is going to be a common one.

Having fun with autoconf 2.64

With the tinderbox now running in a container I’ve been running tests with autoconf 2.64 as well, which is currently masked (and for very good reason I’d say!). This is, after all, the original point of the tinderbox: rebuilding everything with the latest version of the tools, so that if a bug is found users won’t be hitting it before we know about it.

As far as autoconf 2.64 is concerned, I’ve been tracking the issue since last February, when I discovered one interesting change that could have had a big effect on packages (it changed the way they compiled, and thus could introduce runtime bugs). I also wrote some notes for users and described the incompatible change so that everybody could get ready to fix the future issues.

Well, the new autoconf is now here and the run has started; I haven’t even started looking (again) at the present-but-not-compilable warnings, and I’ve already started filing apocalyptic bugs. Indeed, a few packages stopped working with autoconf 2.64, as usual, but there is one that is particularly interesting.

You might remember that I criticised KDE for not using autotools, but rather a crappy build system vaguely based on autotools. Well, it seems like my point was proven again, not for the first time I have to say. Indeed, the admin/ directory from 3.5.10 fails with autoconf 2.64, and why is that? Because they used internal macros!

Way to go, KDE build system developers: autotools are shit anyway, so why bother sticking to the stuff that is actually documented?

*Okay, this might just be seen as a sterile comment on something that happened already; on the other hand, I’d like to use this to point out that a lot of the FUD about autotools floating around is due to what KDE decided, and people should know that KDE didn’t use autotools properly in the first place!*

Also, take this as a warning against autoconf 2.64 unless you really want to start debugging and fixing the bugs.

How _not_ to fix glibc 2.10 function collisions

Following my previous “how not to fix” post, I’d like to also explain how not to fix the other kind of glibc 2.10 failure: function collisions.

With the new version of glibc, support for the latest POSIX C library specification is added; this means that, among other things, a few functions that previously were only available as GNU extensions are now standardised and, thus, visible by default unless requesting a strictly older POSIX version.

The functions that were, before, available as GNU extensions were usually hidden unless the _GNU_SOURCE feature selection macro was defined; in autotools, that meant using the AC_USE_SYSTEM_EXTENSIONS macro to request them explicitly. Now these are visible by default; this wouldn’t be a problem if some packages didn’t decide to either reimplement them, or name their own, unrelated functions the same way.

Most commonly the colliding function is getline(); the reason is that the name is really too generic, and I would probably curse the POSIX committees for accepting it with that name into the C library; I already cursed GNU for adding it with that name to glibc. Under the name getline() there are tons of functions out there, with the most different interfaces, that try to get lines from any kind of media. The solution for these is to rename them to some different name so that the collision is avoided.
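To be explicit about what that renaming fix looks like, here is a minimal sketch with hypothetical names: the project-local function, whose interface has nothing to do with the POSIX one, simply gets a project prefix so the collision can never happen.

#include <stdio.h>

/* before: the name collides with the POSIX getline() that glibc 2.10
 * now exposes by default, even though the interface is different */
int getline(FILE *stream, char *buffer, size_t bufsize);

/* after: a project-specific prefix avoids any possible collision;
 * all the callers are updated to use the new name */
int myproject_getline(FILE *stream, char *buffer, size_t bufsize);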

More interesting is instead the software that, wanting to use something akin to strndup(), decides to create its own version, because some systems do lack that function. In this case, renaming the function, like I’ve seen one user propose today, is crazy. The system already provides the function; use that!

This can be done quite easily with autotools-based packages (and can be applied to other build systems, like cmake, that work on the basis of understanding what the system provides):

# in configure.ac
AC_USE_SYSTEM_EXTENSIONS
AC_CHECK_FUNCS([strndup])

/* in a private header */
#include "config.h"

#ifndef HAVE_STRNDUP
char *strndup(const char *str, size_t len);
#endif

/* in an implementation file */
#ifndef HAVE_STRNDUP
char *strndup(const char *str, size_t len)
{
  [...]
}
#endif

When building on any glibc (2.7+ at least, I’d say), this code will use the system-provided function, without adding further duplicate, useless code; when building on systems where the function is not (yet) available, like FreeBSD 7, the custom function will be used.

Of course it takes slightly more time than renaming the function, but we’re here to fix stuff in the right way, aren’t we?