Yes, we still need autotools

One of the most common refrains that I hear lately, particularly when people discover Autotools Mythbuster, is that we don’t need autotools anymore.

The argument goes like this: since Autotools were designed for portability to ancient systems that nobody really uses anymore, and most modern operating systems share a common interface, whether that is POSIX or C99, the reasons to keep Autotools around are minimal.

This could be true… if your software does nothing that is ever platform specific. Which is indeed possible, but quite rare. unpaper, for instance, has a fairly limited amount of code in its configure.ac, since the lowest-level thing it does is read and write files; I could have easily used anything else for its build system.

But if you’re doing anything more specific, which usually includes network I/O, you end up with a bit more of a script. Furthermore, if you don’t want to pull a systemd and decide that the latest Linux version is all you want to support, you end up having to figure out alternatives, or at least conditionals around what you can and cannot use. You may not want to go as far as VLC, which supports anything between OS/2 and the latest Apple TV, but there is plenty of space between those extremes.

If you’re a library, this is even more important. While it might be that you’re not interested in any peculiar systems, it might very well be that one of your consumers is. Going back to the VLC example, I have spent quite a bit of time over the past weekends of this year helping the VLC project by fixing (or helping to fix) the build systems of new libraries that are being made dependencies of VLC for Android.

So while we have indeed overcome the difficulties of porting across many different UNIX flavours, we still have portability concerns. It is probably true that we should reconsider what Autoconf tests for by default, and in particular some tests no longer fit modern systems well: the endianness tests, for instance, were an obvious failure when MacIntel arrived, since a universal binary builds the same code for both big endian (PPC) and little endian (Intel) — though even that concern matters less now, as universal binaries are already out of style.
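
To give a hedged sketch of what I mean (assuming a reasonably recent Autoconf), the endianness check can at least admit nowadays when there is no single answer at configure time:

dnl Recent Autoconf defines WORDS_BIGENDIAN for single-architecture builds
dnl and AC_APPLE_UNIVERSAL_BUILD for universal ones, so the C code can make
dnl the final decision at compile time rather than at configure time.
AC_C_BIGENDIAN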

So yes, I do think we still need portability, and I still think that not requiring a tool that depends on XML-RPC libraries is one of autotools’ good sides…

Code memes, an unsolved problem

I’ll start the post by pointing out that my use of the word meme will follow relatively closely the original definition provided by Dawkins (hate him, love him, or find him a prat who sometimes has good ideas) in The Selfish Gene, rather than the more modern usage of “standard image template with similar text on it.”

The reason is that I really need that definition to describe what I see happening often in code: the copy-pasting of snippets, or concepts, across projects, and projects, and projects, mutating slightly in some cases because of coding style rules and preferences.

This is particularly true when you’re dealing with frameworks, such as Rails and Autotools; the obvious reason for that is that most people will strive for consistency with someone else — if they try themselves, they might make a mistake, but someone else already did the work for them, so why not use the same code? Or a very slightly different one just to suit their tastes.

Generally speaking, consistency is a good thing. For instance, if I can be guaranteed that a given piece of code will always follow the same structure throughout a codebase, it becomes easier to change that codebase when, for example, a function call loses one of its parameters. But when you’re maintaining a (semi-)public framework, you no longer have control over the whole codebase, and that’s where the trouble starts.

As you no longer have control over your users, bad code memes are going to ruin your day for years: the moment one influential user finds a way to work around a bug or implement a nice trick, their meme will live on for years, and breaking it is going to be just painful. This is why Autotools-based build systems suck in many cases: they all copied old bad memes from another build system, and those stuck around. Okay, there is a separate issue of people deciding to break all memes and creating something that barely works and will break at the first change in autoconf or automake, but that’s beside my current point.

So when people started adding AC_CANONICAL_TARGET, the result was an uphill battle to get them to drop it. It’s not like it’s a big problem for it to be there, it just makes the build system bloated, and it’s one of the thousand cuts that make Autotools so despised. I’m using this as an example, but there are plenty of other memes in autotools that are worse, breaking compatibility, or cross-compilation, or who knows what else.
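
To make the example concrete, here is a minimal sketch (the package name is a placeholder) of what most projects actually need instead:

AC_INIT([mypackage], [1.0])
dnl AC_CANONICAL_HOST answers "what system will this run on?", which is all
dnl most projects care about; AC_CANONICAL_TARGET only makes sense for
dnl compiler-like tools that generate code for yet another system.
AC_CANONICAL_HOST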

This is not an easy corner to get out of. Adding warnings about the use of deprecated features can help, but sometimes it’s not that simple, because the problem is not a feature being used but the overall structure, which you can’t easily (or at all) warn about. So what do you do?

If your framework is internal to an organisation, a company or a project, your best option is to make sure that there are no pieces of code hanging around that use the wrong paradigm. It’s easy to say “here is the best-practices piece of code, follow that, not the bad examples” — but people don’t work that way; they will be looking on a search engine (or grep) for what they need done, and find the myriad bad examples to follow instead.

When your framework is open to the public and used by people all around the world, well, there isn’t much you can do about it, besides being proactive, pointing out the bad examples, and providing solutions that people can reference. This was the reason why I started Autotools Mythbuster, especially as a series of blog posts.

You could start breaking the bad code, but it would probably be a bad move for PR, given that people will complain loudly that your software is broken (see the multiple API breakages in libav/ffmpeg). Even if you were able to provide patches to all the broken software out there, it’s extremely unlikely that it’ll be seen as a good move, and it might make things worse if there is no clear backward compatibility with the new code, as then you’ll end up with the bad code and the good code wrapped in compatibility checks.

I don’t have a clean solution, unfortunately. My approach is fix and document, but it’s not always possible and it takes much more time than most people have to spare. It’s sad, but it’s the nature of software source code.

The future of Autotools Mythbuster

You might have noticed, after yesterday’s post, that I have made a lot of visual changes to Autotools Mythbuster over the weekend. The new style is just a bunch of changes over the previous one (even though I also made use of sass to make the stylesheet smaller), and for the most part it’s there to give the guide a recognizable look.

I need to spend at least another day or two working on the content itself, as the automake 1.13 porting notes are still not correct, due to further changes on the Automake side (more on this in a future post, as it’s a topic of its own). I’m also thinking about taking a few days off Gentoo Linux maintenance, Munin development, and other tasks, and just working on the content in all my non-work time, as it could use some documentation of the install and uninstall procedures, for instance.

But leaving the content side alone, let me address a different point first. More and more people lately have been asking for a way to have the guide available offline, either as an ebook (ePub or PDF) or packaged. Indeed, I was asked by somebody if I could drop the NonCommercial part of the license so that it could be packaged in Debian (at some point I was actually asked why I’m not contributing this to the main manuals; the reason is that I really don’t like the GFDL, and furthermore I’m not contributing to automake proper because copyright assignment is becoming a burden in my view).

There’s an important note here: while you can easily see that I’m not pouring into it the amount of time needed to bring this to book quality, it does take a lot of time to work on it. It’s not just a matter of gluing together the posts that talk about autotools from my blog; it’s a whole lot of editing, which is a whole lot of work. While I do hope that the guide is helpful, as I wrote before, it’s far more work than I can pour into it in my free time, especially in between jobs like now (and no, I don’t need to find a job — I’m waiting to hear from one, and have a few others lined up if it falls through). While Flattr helps, it seems to be drying up, at least for what concerns my content; even Socialvest is giving me some grief, probably because I’m no longer connecting from the US. Besides that, the only “monetization” (I hate that word) strategy I have for the guide is AdSense – which, I remind you, kicked my blog out for naming an adult website in a post – and making the content available offline would defeat even the very small returns of that.

At this point, I’m really not sure what to do; on one side I’m happy to receive more coverage, just because it makes my life easier to have fewer broken build systems around. On the other hand, while not expecting to get rich off it, I would like to know that the time I spend on it is at least partly compensated – token gestures are better than nothing as well – and that precludes simply making the content available offline, which is what people are clamoring for at this point.

So let’s look into the issues more deeply: why the NC clause on the guide? Mostly, I want to have a way to stop somebody else exploiting my work for gain. If I drop the NC clause, nothing can stop an asshole from picking up the guide, making it available on Amazon, and getting the money for it. Is it likely? Maybe not, but it’s something that can happen. Given the kind of sharks that infest Amazon’s self-publishing business, I wouldn’t be surprised. On the other hand, dropping it would probably make it easier for me to accept non-minor contributions and still be able to publish the guide at some point, maybe even in real paper, so it is not something I’m excluding altogether at this point.

Getting the guide packaged by distributions is also not entirely impossible right now: Gentoo generally doesn’t have the same kind of issues as Debian regarding NC clauses, and since I’m already using Gentoo to build and publish it, making an ebuild for it is tremendously simple. Since the content is also available on Git – right now on Gitorious, but read on – it would be trivial to do. But again, this would be cannibalizing the only compensation I get for the time spent on the guide. Which makes me very doubtful about what to do.

About the sources, there is another issue: while at the time I started all this Gitorious was handier than GitHub, over time Gitorious’s interface didn’t improve, while GitHub’s improved a lot, to the point that right now it would be my choice for hosting the guide: easier pull requests, and easier coverage. On the other hand, I’m not sure whether the extra coverage is a good thing, as stated above. Yes, it is already available offline through Gitorious, but GitHub would make it effectively easier to get the guide offline than to consult it online. Is that what I want to do? Again, I don’t know.

You probably also remember an older post of mine from one and a half years ago where I discussed the reasons why I hadn’t published Autotools Mythbuster, at least through Amazon; the main reason was that, at the time, Amazon had no easy way to update the book for buyers without having them buy a new copy. Luckily, this has changed recently, so that obstacle has actually fallen. With this in mind, I’m considering making it available as a Kindle book for those of you who are interested. To do so I first have to create it as an ePub, though — which would also settle the question I’ve been asked about eBook availability… but at the same time we’re back to the compensation issue.

Indeed, if I decide to set up ePub generation and start selling it on the Kindle store, I’d be publishing the same routines in the Git repository, making it available to everybody else as well. Are people going to buy the eBook, even if I priced it at $0.99? I’d suppose not. Which leaves me unsure what to aim for on the Kindle store: price it low, so that the convenience of just buying it from Amazon outweighs the work of rolling your own ePub or googling for a copy – considering that just one person rolling the ePub can easily make it available to everybody else – or price it at a higher point, say $5, hoping that a few interested users would fund the improvements? Either bet sounds bad to me honestly, even considering that Calcote’s book is priced at $27 at Amazon (hardcopy) and $35 at O’Reilly (eBook) — obviously, his book is more complete, although it is not a “living” edition like Autotools Mythbuster is.

Basically, I’m not sure what to do at all. And I’m pretty sure that some people (who will comment) will feel disgusted that I’m trying to make money out of this. On the whole, I guess one way to solve the issue is to drop the NC clause, stick it into a Git repository somewhere, maybe keep it running on my website, maybe not, and not waste energy on it anymore… the fact that, with its much more focused topic, it has just 65 flattrs is probably an indication that there is no need for it — which explains why I couldn’t find any publisher interested in having me write a book on the topic before. Too bad.

Autotools Mythbuster: being a foreigner is not a bad thing

This was a leftover post in my drafts list. I just decided to post it as it is, even though a few things in it are slightly out of date. Please bear with me.

Have you ever noticed that many projects ship in their tarball or, even worse, in their source repositories, files that are either empty or simply placeholders saying “look at this other file”? Most of the time these files are NEWS, ChangeLog, COPYING and INSTALL. In some corner cases, I have even found packages with files called INSTALL.real.

So what’s going on with this? Well, the problem comes from automake, and its ties to the GNU project it belongs to. The idea is that automake’s default settings have to fit GNU’s own projects. And GNU projects have a long list of coding style rules, best practices, and policies that might sound silly (and some are) but are consistently followed by official projects.

These policies not only mandate the presence of a standard set of files (including those noted above, and a couple more), but also that portability warnings are enabled, as the resulting Makefiles are supposed to be usable with non-GNU make implementations. So by default automake will mandate the presence of those files and the activation of some warning classes, and that’s the reason why people create those files even if they are not going to be used (either they are left zero-sized or, worse, they get a single line referring to another file — I say worse because zero-sized files can be kept from being installed with simple checks, while single-line references require human intervention).

So how do you fix this? Well, it’s actually easy: you just have to pass the foreign option to the AM_INIT_AUTOMAKE macro — this way you’re telling automake that your project does not have to follow the GNU rules, which means that the files no longer have to be there and that, if you want portability warnings, you have to enable them explicitly. Which is very likely what you want.
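
A minimal sketch of what that looks like (name and version are placeholders):

AC_INIT([mypackage], [1.0])
dnl "foreign" drops the GNU file requirements; if you still care about
dnl portable Makefiles, request the warnings explicitly.
AM_INIT_AUTOMAKE([1.11 foreign -Wportability])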

Do note that the fact that the files are no longer mandatory does not mean that you can no longer use them. You’re actually encouraged to keep most of them in your project, and to install them properly. But trust me, you want to be a foreigner in GNU land.

For details on AM_INIT_AUTOMAKE and the various automake flavours, you can see my guide, which I also have to expand a little over the weekend.

Cross-compilation and pkg-config

As it happens – and as I noted yesterday – one of my current gigs involves cross-compilation of Gentoo packages; this should also be obvious to those monitoring my commits. Sometimes this simply involves fixing the ebuilds; other times I need to move upstream and fix the build system of the original project. Lately, I have hit a number of the latter cases.

In the previous post, I noted that I fixed upower’s build system to not choose the backend based on the presence of files on the build machine. Somehow, checking for the presence of files on the system to decide whether to enable or install something seems to be the thing to do for build systems; a similar situation happened with SystemTap, which tries to identify the prefix of the NSS and Avahi headers by checking for the /usr/include/nss and /usr/include/avahi-common directories, among others. Or, I should say, tried, since luckily my two patches to solve those issues were also merged today.

Another package I had to fix for cross-compilation recently was ecryptfs-utils, and again NSS is involved. This time, rather than testing for the presence of files, the build system did almost the right thing and used the nss-config script that ships with the package to identify which flags and libraries to use. While that is a step in the right direction compared to SystemTap’s approach, it is still not quite the right thing to do.
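
The right thing, in my book, is to ask pkg-config for the flags instead; a minimal sketch of the configure.ac side, assuming the NSS .pc file is simply called nss:

dnl PKG_CHECK_MODULES sets and substitutes NSS_CFLAGS and NSS_LIBS, which the
dnl Makefile.am can then use directly, no nss-config involved.
PKG_CHECK_MODULES([NSS], [nss])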

What is the problem? When you’re cross-compiling, simply calling nss-config will not do: the build machine’s script would be used, which will report the library paths of that system rather than those of your target, which is what you really want. How do you solve this then? Simple: you use the freedesktop-developed pkg-config utility, which can be rigged to handle cross-compilation nicely, even though it is not as immediate as you’d like — if you ever tried using it in a cross-compilation environment, you probably remember the problems.
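
The gist of the set-up, as a sketch with a made-up sysroot and triplet, is to point pkg-config at the target’s .pc files rather than the build machine’s:

# Only look inside the target root for .pc files (paths are examples).
export PKG_CONFIG_SYSROOT_DIR=/usr/armv7a-unknown-linux-gnueabi
export PKG_CONFIG_LIBDIR=${PKG_CONFIG_SYSROOT_DIR}/usr/lib/pkgconfig
./configure --host=armv7a-unknown-linux-gnueabi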

It is for this reason that I started writing cross-compilation documentation for pkg-config in my Autotools Mythbuster quasi-book. Speaking of which, I’d like to find some time to spend focusing on writing more sections of it. To do so, though, my only chance is likely going to be taking some vacation and booking a hotel room while I work on it: the former because right now my gigs tend to occupy my whole day, leaving me with scarce-to-no free time, and the latter because at home I have too many things to take care of to actually focus on writing when I’m not working.

At any rate, please people, make sure you support pkg-config rather than relying on custom scripts that are not really cross-compilable… it is the best thing you can do, trust me on this please.

EAPI=4, missing tests, and emake diagnostics

Some time ago, well, quite a bit of time ago, I added the following snippet to the tinderbox’s bashrc file:

# Override make so that stray direct calls in ebuilds get flagged: einstall
# is expected to call make itself, so that case is forwarded quietly (and
# serially); anything else raises a QA notice before being passed to emake.
make() {
    if [[ "${FUNCNAME[1]}" == "einstall" ]] ; then
        emake -j1 "$@"
    else
        eqawarn "Tinderbox QA Notice: 'make' called by ${FUNCNAME[1]}"
        emake "$@"
    fi
}

I’m afraid I had forgotten whose suggestion that was, beyond it being a user’s. Edit: thanks to the anonymous contributor who remembered the original author of the code. And sorry Kevin, my memory is starting to feel like old flash memory.

Kevin Pyle suggested this code to me, to find packages that ignore parallel builds by simply using a straight make invocation instead of the correct emake one. It has served me well, and it has shown me a relatively long list of packages that should have been fixed, only a handful of which actually failed with parallel make. Most of the issues were due to simple omission, rather than wilful disregard for parallel compilation.

One of the problems this has shown is that, by default, none of the make calls within the default src_test() functions run in parallel. I guess the problem is that most people assume tests are too fragile to run in parallel, but at least with (proper) autotools this is a mistake: the check_PROGRAMS and other check-related targets are rebuilt before running the tests, and that can be done with parallel make just fine. Only starting with automake 1.11 do you have the option to execute the tests themselves simultaneously; otherwise the generated Makefile will still call them one by one, no matter how many make jobs you requested. This is why most of my ebuilds rewrite the src_test() function simply to call emake check.
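
For reference, the override is as trivial as it sounds — a sketch of what that rewrite looks like:

src_test() {
    # Run the automake-generated check target with the user's MAKEOPTS;
    # check_PROGRAMS and friends rebuild in parallel just fine.
    emake check
}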

This is the setup. Now to get to the topic of the post… I’ve been seeing some strange output in the tinderbox, when looking at the compilation output, that I couldn’t find in the logs either. Said output simply told me that emake failed, but there was no obvious failure output from it, and the package merged fine. What was the problem? Well, it is a bit complex.

The default src_test() function contains a bit of logic to identify what the Makefile’s test target is, so that it can be used both with autotools (the default target to run tests in an automake-generated Makefile is check) and with otherwise-provided Makefiles that likely use test as their testing target. This already has unfortunate implications, especially for packages whose maintainers didn’t test with FEATURES=test enabled:

  • if the package has a check target that doesn’t run tests, but rather checks whether the installed copy is in working state, then you have a broken src_test(); qmail-related packages are known for this, and almost none of their ebuilds ever restricted or blocked tests;
  • if the package does not have any specific test or check target, but relies on make’s default rules, it could be trying to build a test command out of a non-existing test.c source file;
  • in any case, it requires make to parse and load the Makefile, which could be a bit taxing, depending on the package’s build system.

To this you can now add a further issue, joining the above-quoted snippet from the tinderbox’s bashrc with the new EAPI=4 semantics, under which ebuild helpers die by default on failure: if there is no Makefile at all in $S then you get an emake failure, which of course isn’t saved in the logs but is still repeated at the end of the merge just for the sake of completeness. Okay, it’s just a nuisance and not a real issue, but it still baffled me.

So what is my suggestion here? Well, it’s something I’ve been applying myself, independently of all this: declare your src_test explicitly. This means not only adding it with the expected interface when tests are present, but actually nulling it out when there are none: this way no overhead is added when building with FEATURES=test (which is what I do in the tinderbox). To do so, rather than using RESTRICT=test (which at least I read as a “tests are supposed to work but they don’t”), do something like:

src_test() { :; } # no tests present

I usually do the same for each function in virtual packages.

I really wish more of my fellow developers started doing the same; the tinderbox would probably have a much lower overhead when there are no tests to run, which seems to be an unfortunately common scenario, as most automake-based packages will try a pointless, empty make check each time they are merged, even if no test at all is present.

I’ll follow up with more test-related issues for the tinderbox in the next weeks, if time permits. Remember that the tinderbox is on flattr if you wish to show your appreciation… and I definitely could use some help with the hardware maintenance (I just had to replace the system’s mirrored disks, as one WD RE3 died on me).

Why Autotools Mythbuster is not a published ebook

I have already mentioned that my time lately is so limited that there is no hope for me to catch a breath (today I’m doing triple shifts to find the time to read Ghost Story, the latest in The Dresden Files series, which was released today — oh boy, do I love eBooks). So you can probably understand why even Autotools Mythbuster hasn’t seen much improvement over the past month and then some.

But I have considered its future during this time. My original idea of writing this down for a real publisher was shot down: the only publisher who expressed anything more than “no interest at all” was the one that already had a book on the topic in the queue. The second option was to work on it during my spare time, with donations covering the time spent on the task. This didn’t fly much either, if at all.

One suggestion I was given was to make the content available in print – for instance through lulu – or as a more convenient offline read, as a properly formatted ebook. Unfortunately, this seems to be overly complex for very little gain. First of all, the main point of doing this would have been to give it enough visibility and get back some money for the time spent writing it, so simply adding PDF and ePub generation rules to the guide wouldn’t be much of an improvement.

The two obvious solutions were, as noted, lulu on one hand and Amazon’s Kindle Store on the other. The former, though, is a bit problematic because any print edition would just be a snapshot of the guide at some point in time: incomplete, and effectively just an early release. While it would probably still get me something, I don’t think it would be “right” for me to propose such an option. I originally hoped for the Kindle Store to be more profitable and still ethical, but read on.

While there are some technical issues with producing a decent ePub out of a DocBook “book” – even O’Reilly isn’t getting something perfect out of their ePubs, whether read on the Sony Reader or with the iPad’s iBooks – that isn’t the main issue with the plan. The problem is that Amazon seems to make Kindle e-books much more similar to print books than we’d like.

While as an author you can update the content of your book, replacing it with an updated version with more, corrected content, the only way for the buyer to get the new content is to pay again in full for the item. I can guess that this was likely done on purpose, and almost as likely at least partly with the intent of protecting the consumer from a producer who might replace the content of any writing without the former’s consent, but it is causing major pain in my planning, which in turn makes this method not viable either.

What I am planning on adding is simply a PDF version, with a proper cover (but I need a graphic design for it) and a Flattr QR code, that can then be read offline. But that’s not going to make the guide any more profitable, which means it won’t get any extra time…

Reduced system set and dependency issues

I first proposed reducing the size of the system set – the minimal set of packages that form a Gentoo installation – back in 2008, with the explicit idea of making the dependency tree more accurate by adding the packages to the dependency lists of those using them, rather than forcing them all in. While it is not going as far as I’d have liked at the time, Gentoo is finally moving in that direction and, a couple of weeks ago, a bunch of build-time packages were removed from the system set, including autotools themselves.

It is interesting to note that a few people complained that this was a moot point, since these are among the first packages you get to merge anyway, being build-time dependencies of probably half the tree, so you can’t have a Gentoo system without them… but reality is a bit happier when you look at it. On a “normal” Gentoo system, indeed, you can’t get rid of autotools easily: since build-time dependencies are preserved on the system even after the build has completed, by default they wouldn’t even be removed — on the other hand, they wouldn’t get upgraded until a package that uses them needs to be built, which is still a nice side effect. Where this matters most is on systems built from binary packages, which is what I do with my home router and the two vservers that host this blog and xine’s bugzilla, so I can make use of Yamato’s higher-end hardware.

By removing autotools from the system set and instead expecting them to be listed as dependencies (as happens when using autotools.eclass), binpkg-based systems can be set up without installing autotools at all, in most cases. Unfortunately, that is not always the case. The problem lies with the libltdl library, libtool’s wrapper around the dlopen() interface — the so-called dynamic runtime linking, which is implemented in most modern operating systems, but with various, incompatible interfaces. This library is provided by the sys-devel/libtool package and is used at runtime by some packages such as PulseAudio, OpenSC and ImageMagick, making libtool a runtime dependency of those packages, and not simply a build-time one. Unfortunately, since libtool itself relies on autoconf and automake, this also drags those packages into the runtime dependency tree, causing the system to be “dirty” again.
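
On the ebuild side, expressing this is just a matter of listing libtool among the runtime dependencies as well — a sketch:

# The package links against libltdl, so sys-devel/libtool is needed at
# runtime, not just at build time.
DEPEND="sys-devel/libtool"
RDEPEND="${DEPEND}"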

Luckily, it appears that libltdl is falling out of favour, and is used by a very limited set of packages nowadays. The most obnoxious one to avoid is ImageMagick, especially on a server. I don’t remember whether GraphicsMagick allows you to forgo libltdl altogether when not using dynamic plug-ins; I think it should, but I wouldn’t bet on it.

More obnoxious are probably the few failures caused by not depending on less commonly used tools such as flex and bison (or their grandparents lex and yacc). While I did some work, at the time I proposed the system set reduction, to identify packages that lacked a dependency on flex, new packages get added, old packages get reworked, and we probably have a number of packages that lack such dependencies. It is not only bothersome for users, who might hit a failure because a package wasn’t installed when it should have been; it is also very annoying for me when running the tinderbox, because I can’t get the complete list of reverse dependencies to test a new version of a package (it has happened before that I needed to test the reverse dependencies of flex, and it wasn’t nice).

This begs a question: why isn’t my tinderbox catching these problems? The answer has actually been out there since the end of that same year: my tinderbox is not focusing on minimal dependencies. That is, it runs with all the packages installed at once, which means it checks for collisions and can identify (some) automagic dependencies, but it can rarely tell whether a dependency is missing from the list. Patrick used to run such a tinderbox setup, but I don’t know if he’s still doing so. It sure would be handy to see what broke with the recent changes.

Parameters to ./configure can deceive

As much as I’d like this not to be the case, one of the most obnoxious problems with autotools is that not only is there little consistency between packages in the use of ./configure parameters, but even when there is, the parameters themselves can deceive, either because of their name alone or because one package differs in its use of them.

This is the reason why I dislike the idea, in autotools-utils.eclass, of wiring the debug USE flag to --enable-debug: that option is often implemented badly, if at all, and is not used consistently across projects: it might enable debug-specific code, it might be used to turn off assertions (even though AC_HEADER_ASSERT already provides a --disable-assert option), it might add debug symbol information (-g and -ggdb) or, very bad!, it might fiddle with optimizations.
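
For what it’s worth, when I add such an option myself I keep it limited to toggling debug code paths — a hedged sketch with illustrative names, which leaves CFLAGS and assertions alone:

AC_ARG_ENABLE([debug],
  [AS_HELP_STRING([--enable-debug], [enable extra debug code paths])],
  [], [enable_debug=no])
AS_IF([test "x$enable_debug" = "xyes"],
  [AC_DEFINE([ENABLE_DEBUG], [1], [Define to enable debug code paths])])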

One of the most commonly deceiving options is --enable-static, which is provided by libtool and thus available in almost any autotools-based build system. What it does is tell libtool to build static archives (static libraries) for the declared libtool targets, but enough people, me included when I started, assumed it was going to tell the build system to build static binaries, and wired it to the static USE flag, which was simply wrong. Indeed, most static USE flags are wired to append -static to LDFLAGS instead, while --enable-static is tied to static-libs, when that makes sense at all — in a lot of cases, building the static libraries doesn’t make sense; for details see my other post where I make some notes about internal convenience libraries.
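
To make the distinction explicit, this is the switch we’re talking about, declared by libtool itself in configure.ac:

dnl LT_INIT adds --enable-static/--disable-static, which only control whether
dnl .a archives are built alongside the shared libraries; it says nothing
dnl about linking the final executables statically.
LT_INIT

On the ebuild side this is what the static-libs USE flag is usually wired to, typically through econf $(use_enable static-libs static).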

Unfortunately, while libtool-based packages are forced into consistency on this parameter, that doesn’t stop it from being inconsistent with other, non-libtool-based projects (which intend it to mean “make the final executable static”) and, ironically, with libtool itself. Indeed, if you pass the --disable-static option to ./configure for libtool itself, what you’re actually asking is for the generated libtool script to be unable to build static libraries altogether. This is not happening out of the blue, though; in itself it is actually fairly consistent with the rest of libtool. But to understand this you need to take a step back.

Autoconf, automake, and libtool are all designed not to add extra dependencies for building a package, unless you change the autotools sources themselves. The fact that, for a distribution such as Gentoo, this is so common that building a system without autotools is near impossible is a different story altogether. Following this design, when a package uses autotools and libtool, it cannot rely on the presence of /usr/bin/libtool; what it does instead is create its own libtool script, using the same macros used by the libtool package. Incidentally, this is the reason why we can’t just patch the libtool package to implement features and work around bugs, but usually have to add a call to elibtoolize in the ebuild.

So, from a behaviour point of view, the --disable-static option does nothing more than generate a script that is unable to build static libraries, and from there you get packages without static libraries, as long as they don’t use the system libtool script.

On the other hand, the system libtool script is still used by some packages, one of which turns out to be lua. The build system used by lua is quite messed up, and it relies on the creation of a static liblua archive, from which just a subset of the sources is used to generate the lua compiler. To do so, it uses the system libtool script. If your sys-devel/libtool package is built with --disable-static, though, you get failures that took me a while to debug when they hit me on the chroot I use to build packages for my vservers (it is related to ModSecurity 2.6 having an automagic dependency on lua that I haven’t had time to work on yet).

What’s the moral of the story for this post? Do not assume that two packages share the same meaning for a given option unless those packages come from the same developer, I guess. And even then, make sure to check it out. And don’t provide random parameters to packages in eclasses or generic settings files just because it seems like a good idea; you might end up debugging some random failure.

P.S.: I’ll try to find time to write something about libtool’s system script issue on Autotools Mythbuster over the weekend; last week I added some notes about pkg-config and cross-compiling due to work I’ve been doing for one of my contract jobs.

There’s a new sudo in town

It was last month that I noticed the release of the new series of Sudo (1.8), which brought a number of changes to the whole architecture of the package, starting with a plugin-based architecture for the authentication systems. Unfortunately, when I went to bump it, I decided to simply let users update to 1.7.5 and keep 1.8.0 in package.mask, because I didn’t have time to solve the (many) quality issues the new version presented, starting with LDFLAGS handling.

This morning, during my usual work tour de force, I noticed the first release candidate of the 1.8.1 series, which is due to come out at the end of this week, so I decided to finally get to work on the ebuild for the 1.8 series so that it would work correctly. I didn’t expect it would take me until past 6 in the morning to get it working to an extent (and even then it wasn’t completely right!).

While looking into the 1.8 series I also ended up getting the 1.7.6 ebuild updated to a more… acceptable state:

  • the infamous S/Key patch that I’ve been carrying along since I took over the package from Tavis didn’t apply to 1.8, mostly just because of source reorganisation; I ended up converting it into a proper autoconf-based check, so that it could be submitted upstream;
  • the recursive make process wasn’t right in the 1.8 releases, so even if the package failed to build, there was no indication of it to the caller, and thus the ebuild didn’t fail – this is probably something I could write about in the future, as I’ve seen it happen before as well – I was able to fix that and send the fix upstream, but it also showed me that I had been ignoring one of the regression tests failing altogether, which is worrisome to say the least;
  • since I had to re-run autotools, I ended up hitting a nasty bug with packages using autoconf but not automake: we didn’t inject the search path for extra m4 files, even though the eclass was designed with AT_M4DIR in mind for that; it is now fixed, and the various autoconf/autoheader steps provide the proper path conditioning, so that even sudo can have its autotools build system updated;
  • while looking at the ebuilds, I decided to change both of them from EAPI=0 to EAPI=4; it didn’t make much of a difference by itself, but I was able to use REQUIRED_USE to express the requirement of not having both PAM and S/Key enabled at the same time (see the sketch after this list) — even though I didn’t get this right the first time around; thanks Zeev!
  • another piece of almost “cargo code” had been left in the ebuild since I took it over, and it was in charge of adding a few variables to the list of variables to blacklist; since this was done through sed, and it only stopped working with the 1.8 series, I never noticed that it had become meaningless: all those variables were already in the upstream sudo sources, so I was simply able to drop the whole logic — note to self: make sure never to rely so much on silently-ignored sed statements!
  • today, while at it (I added rc1 this morning, and moved to rc2 today after Todd pointed out to me that the LDFLAGS issue was fixed there), I reviewed the secure path calculation, which now treats the gnat compiler paths just like those for gcc itself;
  • the sudo plugins are no longer installed in /usr/libexec but rather in the so-called pkglibdir (/usr/lib/sudo); this also required patching the build system.
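
The REQUIRED_USE bit mentioned in the list boils down to a single line in the ebuild; the exact expression in the real ebuild may differ, but the EAPI=4 syntax is along these lines:

# If pam is enabled, skey must not be — in other words, not both at once.
REQUIRED_USE="pam? ( !skey )"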

I hope to be able to unmask sudo 1.8.1 to ~arch when it comes out. With a bit of luck, by that time, there won’t be any patch applied at all, since I sent all of them to Todd. And that would be the first version of sudo in a very long time built from the vanilla, unpatched, unmodified sources. And if you know me, I like situations like those!

Unfortunately, I have never used nor tested the LDAP support for sudo, so I’m pretty sure it doesn’t work with the sudo 1.8 series in Gentoo. Pretty please, if somebody has used that, let me know what needs to be fixed! Thanks.