Cross-compilation and pkg-config

As it happens – and as I noted yesterday – one of my current gigs involves cross-compilation of Gentoo packages; this should also be obvious to those monitoring my commits. Sometimes this means simply fixing the ebuilds; other times it means moving upstream and fixing the build system of the original project. Lately, I have hit a number of the latter cases.

In the previous post, I noted that I fixed upower’s build system so that it no longer chooses the backend based on the presence of files on the build machine. Somehow, checking for the presence of files on the system to decide whether to enable or install things seems to be a common approach for build systems; a similar situation happened with SystemTap, which tries to identify the prefix of the NSS and Avahi headers by checking for the /usr/include/nss and /usr/include/avahi-common directories, among others. Or rather, tried, since luckily my two patches to solve those issues have also been merged today.

Another package I had to fix for cross-compilation recently was ecryptfs-utils, and again NSS is involved. This time, rather than testing the presence of files, the build system did almost the right thing, and used the nss-config script that ships with the package to identify which flags and libraries to use. While that is a step in the right direction compared to SystemTap’s approach, it is still not quite the right thing to do.

What is the problem? When you’re cross-compiling, simply calling nss-config will not do: the build machine’s script would be used, which reports the library paths of that system rather than those of your target, which is what you really want. How do you solve this then? Simple: you use the freedesktop-developed pkg-config utility, which can be rigged to handle cross-compilation nicely, even though it is not as immediate as you’d like; if you ever tried using it in a cross-compilation environment, you probably remember the problems.
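
To give an idea of what that rigging looks like, here is a minimal sketch of a cross-aware pkg-config wrapper; the sysroot path and target tuple are hypothetical, and the exact layout depends on your toolchain:

#!/bin/sh
# hypothetical sysroot of the target system; adjust to your toolchain
SYSROOT=/usr/armv7a-unknown-linux-gnueabi

# only look for .pc files inside the sysroot, never on the build machine
export PKG_CONFIG_LIBDIR="${SYSROOT}/usr/lib/pkgconfig:${SYSROOT}/usr/share/pkgconfig"
export PKG_CONFIG_PATH=

# prepend the sysroot to the -I and -L paths reported by the .pc files
export PKG_CONFIG_SYSROOT_DIR="${SYSROOT}"

exec pkg-config "$@"

Saved under a name like ${CHOST}-pkg-config, or pointed to through the PKG_CONFIG variable that the pkg.m4 macros honour, this keeps the build machine’s libraries out of the picture entirely.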

It is for this reason that I started writing cross-compilation documentation for pkg-config in my Autotools Mythbuster quasi-book. Speaking of which, I’d like to find some time to focus on writing more sections of it. To do so, though, my only chance is likely going to be if I get to take some vacation and book a hotel room while I work on it: the former is needed because right now my gigs tend to occupy my whole day, leaving me with scarce to no free time, and the latter because at home I have too many things to take care of to actually focus on writing when I’m not working.

At any rate, please people, make sure you support pkg-config rather than relying on custom scripts that are not really cross-compilable… it is the best thing you can do, trust me on this.

Why Autotools Mythbuster is not a published ebook

I have already mentioned that my time lately is so limited that there is no hope for me to catch a breath (today I’m doing triple shifts to be able to find the time to read Ghost Story, the latest in The Dresden Files series, which was released today – oh boy, do I love eBooks). So you can probably understand why even Autotools Mythbuster hasn’t seen much improvement over the past month or so.

But I have considered its future during this time. My original idea of writing it for a real publisher was shot down: the only publisher who expressed something more than “no interest at all” was the one that already had a book on the topic in its queue. The second option was to work on it in my spare time, funding that time through donations covering the work spent on the task; this didn’t fly much either, if at all.

One suggestion I’ve been given was to make the content available in print – for instance through lulu – or as a more convenient offline read, as a properly formatted ebook. Unfortunately, this seems to be overly complex for very little gain. First of all, the main point of doing this would have been to give it enough visibility and get back some money for the time spent writing it, so simply adding PDF and ePub generation rules to the guide wouldn’t be much of an improvement.

The two obvious solutions were, as noted, lulu on one hand and Amazon’s Kindle Store on the other. The former, though, is a bit awkward, because any print edition would just be a snapshot of the guide at some point in time – incomplete, and effectively an early release no matter when it is taken. While it would probably still get me something, I don’t think it is “right” for me to propose such an option. I originally hoped for the Kindle Store to be more profitable while still ethical, but read on.

While there are some technical issues with producing a decent ePub out of a DocBook “book” – even O’Reilly isn’t getting something perfect out of their ePubs, whether read on the Sony Reader or with iPad’s iBooks – that isn’t the main issue with the plan. The problem is that Amazon seems to make Kindle e-books much more similar to print books than we’d like.

While as an author you can update the content of your book, replacing it with an updated version with more, corrected content, the only way for the buyer to get the new content is to pay again in full for the item. I can guess that this was likely done on purpose, at least partially with the intent of protecting the consumer from a producer who might replace the content of any writing without the buyer’s knowledge; but it causes major pain in my planning, which in turn makes this method not viable either.

What I am planning to add is simply a PDF version, with a proper cover (though I need a graphic design for it) and a Flattr QR code, which can then be read offline. But that’s not going to make the guide any more profitable, which means it won’t get any extra time…

Where did I disappear? A collection of random status reports

Before people end up thinking I’ve been hiding, or that I was eaten by the grues while thoroughly labelling my cables – by the way, a huge thanks goes to kondor6c! By running this cleanup operation I finally understood why Yamato and Raven had problems when power was cut: by mistake, the AP bridge connecting me to the rest of the network was not plugged in behind the UPS – I would just like to quickly point out that the reason I seem less active than usual is that I’m unfortunately forced to spend a lot of time on a day job right now, namely writing a final report on LScube which will serve as a basis for the future documentation and, hopefully, a new website.

Speaking about documentation, I’ve been receiving a number of requests to make Autotools Mythbuster available as a single download — I plan on doing so as soon as I can find enough free time to work on that in a relaxing environment.

You might remember, though, that I originally intended to work on the guide on something nearing full time, under compensation; this was not a matter of simple greed, but of the fact that keeping it relevant and up to date requires more time than I can afford to give it without something paying my bills. Given that, I plan on adding an offline Flattr barcode on the cover when I make it available as a PDF.

If you wish to see this happen sooner rather than later, you can always re-sort my priorities by showing your appreciation for the guide — it really doesn’t matter how much your flattr is worth; any bit helps, and knowing you want to see the guide extended and available for download is important to me.

Next in line on the list of things I have to let you know about is that in my Portage overlay you can now find a patched, hacked version of pcsc-lite that makes the libusb device discovery feign being libhal. The reason is a bit far-fetched, but if you remember my pcsc-lite changes, Gentoo is dropping HAL support wherever possible; unfortunately pcsc-lite considers HAL the current discovery method and libusb the “legacy” one, with the future moving to libudev as soon as Ludovic has a plan for it. In the interim, libusb should work fine for most situations… fortunately (or not, depending on the point of view) I hit one of those situations where libhal was the only way to get it working properly.

In particular, after a security device firmware update (which has to happen strictly under Windows, dang!), my laptop – of which I have to revise my opinion; the only things not working properly right now are the touchpad, for which work is underway, and the fingerprint reader, which seems to be a lost cause – now exposes the RFID contactless smartcard reader as a CCID device. For those not speaking cryptogeek, this means that the interface is the standard one and pcsc-lite can use it through the usual app-crypt/ccid driver it already uses to access the “usual” chipcards. Unfortunately, since the device is compound and exposes the two CCID interfaces on a single address, it has been working correctly only with libhal, as that was the only way to tell the CCID driver to look for a given interface. My patch works around this, and compound devices are now seen correctly; I’m waiting to hear from Ludovic whether it would be safe to apply it in Gentoo for all other users.

So I’m not gone, just very very very busy. But I’m around by mail mostly.

Autotools Mythbuster: A practical case of TMT (Too Many Tests)

I have written already about the fact that using autoscan is simply a bad idea not too long ago. On the other hand, today I found one very practical case where using (an old version of) autoscan have create a very practical problem. While this post is not going to talk about the pambase problems there is a connection to my previous posts: it is related to the cross-compilation of Linux-PAM.

It is an unfortunately well-known problem that the Linux-PAM ebuild fails cross-compilation — and there are a number of workarounds that have never been applied in the ebuilds. The reason is relatively simple: I have insisted that someone else who had the issue at hand send them upstream. Finally somebody did, and Thorsten fixed the issues with the famous latest release — or so goes the theory. A quick check shows me that the latest PAM version is still not working as intended.

Looking at the build log, it seems the problem is that the configure script for Linux-PAM, using the original AC_PROG_LEX macro, is unable to identify the correct (f)lex library to link against. Again, the problem is obvious: the cross-building wrappers that we provide in the crossdev package cause dependencies present in DEPEND but not in RDEPEND to be merged just on the root file system. Portage allows us to set the behaviour to merge them on the cross-root instead; but even that is wrong, since we need flex to be present both as a (CBUILD) tool and as a (CHOST) library. I’ve asked Zac to provide a “both” solution; we’ll see where we go from there.

Unfortunately a further problem happens when you try to cross-compile flex: it fails with undefined references to the rpl_malloc symbol. You can look it up and you’ll see that it’s definitely not a problem limited to flex. I know what I’m dealing with when finding these mistakes, but I guess it doesn’t hurt to explain them a bit further.

The rpl_malloc and rpl_realloc are two replacement symbols, #define’d during the configure phase by the AC_FUNC_MALLOC autoconf macro. They are used to replace the standard functions with two wrappers that provide the expected behaviour; the fact that they are left dangling is, as I’ll show in a moment, pretty much a sign of overtesting.

Rather than simply checking for the presence of the malloc() function (can you really expect the function to be missing on any non-embedded system at all?), the AC_FUNC_MALLOC macro (together with its sibling AC_FUNC_REALLOC) checks for the presence of a glibc-compatible malloc() function. That “glibc-compatible” note simply means a function that will not return NULL when passed a length argument of 0 (which is the behaviour found in glibc and a number of other systems). It is a corner-case condition and most of the software I know does not rely on it at all, but it has sometimes been useful to test for, otherwise the macro wouldn’t be there.
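
For reference, when the macro decides the system malloc() is not glibc-compatible (or when cross-compiling, since the test cannot be run), it #defines malloc to rpl_malloc and expects the project to ship a replacement along these lines; a minimal sketch of the usual pattern, which, as the undefined references show, flex does not ship at all:

/* malloc.c: the replacement that AC_FUNC_MALLOC expects the project to provide */
#include <config.h>

/* undo the "#define malloc rpl_malloc" from config.h before calling the real allocator */
#undef malloc

#include <stdlib.h>

void *
rpl_malloc(size_t n)
{
  /* provide the glibc behaviour: never return NULL for a zero-sized request */
  if (n == 0)
    n = 1;
  return malloc(n);
}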

Of course the sheer fact that nobody implemented the compatibility replacement functions in the flex source tree should make it safe to assume that there is no need for the behaviour to be present.

Looking at the original configure code really tells more of a story:

# Checks for typedefs, structures, and compiler characteristics.

AC_HEADER_STDBOOL
AC_C_CONST
AC_TYPE_SIZE_T

# Checks for library functions.

AC_FUNC_FORK
AC_FUNC_MALLOC
AC_FUNC_REALLOC
AC_CHECK_FUNCS([dup2 isascii memset pow regcomp setlocale strchr strtol])

The two comments are the usual ones that you’d find in a configure.in script created by autoscan; it’s also one of the few cases where you actually find a check for size_t, as most software assumes its presence anyway. More importantly, if you look at the long list of arguments to the AC_CHECK_FUNCS macro call, and then compare it with the actual source code of the project, you can see that the boilerplate checks are there, but their results are never used. That’s true for all the functions in there.

What do we make of it? Well, first of all, it shows that at least part of the autotools-based build system in flex is not really well thought out (and you can add to that some quite idiotic stuff in the Makefile.am to express object file dependencies, which then required a patch that produced half-broken results). But it also shows that, in all likelihood, the check for malloc() is there just because, and not because there is any need for the malloc(0) behaviour.

A quick fix, consisting of removing the useless checks and regenerating the build system with a more modern autotools version, and you have a perfectly cross-compilable, workaround-free flex ebuild. Voilà!

So why am I blogging about this at all? Well, I’ll be updating the Guide later today; in the meantime I wanted to let you know about this trickery because I sincerely have a problem with having to add tons of workarounds for cross-building: it’s time to tackle them, and quite a lot of them are related to this stupid macro usage, so if you find one, please remember this post and fix it properly!

Autotools Mythbuster: autoscan? autof—up

Besides the lack of up-to-date and sane documentation about autotools (for which I started my guide, which you should remember is still only extended in my free time), there is a second huge user experience problem: the skeleton build system produced by the autoscan script.

Now, I do understand why they created it; the problem is that, as it stands, it mostly creates fubar’d configure.ac skeletons that confuse newcomers and cause a lot of grief to packagers and to users of source-based distributions (and to those few who still think of building software manually without getting it from their distribution).

The problem with autoscan is that it embodies, again, the “GNU spirit”, or rather the GNU spirit of the original days, back when GNU tried to support any operating system, to “give freedom” to users forced to use those OSes rather than Linux itself. Given that nowadays the FSF seems to interest itself mostly in discouraging anybody from using non-free operating systems (or even non-GNU-based operating systems) – sometimes failing, and actually discouraging companies from using Free Software altogether – it seems like they had a change of ideas somewhere along the way. But that’s something for another post.

Anyway, assuming that you’ll have to make your software work on any operating system out there is something you are quite unlikely to need. First of all, a number of projects nowadays, for good or bad, target Linux only; sometimes even just GNU/Linux (that is, they don’t support running on other C libraries) because they require specific features from the kernel, specific drivers, and other requirements like that. Secondly, you can easily require your users to have a sane environment to begin with; unless you really have to run on a 15-year-old operating system, you can assume at least some basic standard support. I have already written about pointless autotools checks, but I guess I didn’t make it clear enough yet.

But it’s not just the idea of dropping support for anything that does not support a given standard, whatever that might be (C99, POSIX.1-2008, whatever); it’s more that the configure.ac generated by autoscan is not going to make your software magically work on a number of operating systems it didn’t support before. What it does, for the most part, is add a number of AC_CHECK_HEADERS and AC_CHECK_FUNCS calls, which will verify the presence of various functions and headers that your software is using… but it won’t change the software to provide alternatives; heck, there might not be alternatives.

So if your software keeps on using strings.h (which is POSIX) and you check for it at configure time, you’re just making the configure phase longer without gaining anything, because you’re not making use of the check’s results. Again, this often translates to things like the following:

#ifdef HAVE_STRINGS_H
#include <strings.h>
#endif

Okay, so what is the problem with this idea? Well, to begin with, I have seen it so many times without any idea of why it is there! A number of people expect that since autoscan added the check, and thus they have the definition, they have to use it. But if you use a function that is defined in that header, and the header is not there, what are you going to do? Not including it is not going to make your software any more portable; if anything, you’re going to get an implicit declaration of the function, and probably fail later at runtime. So, if it’s not an optional header or function, just running the check and using the definition is not enough.

A common alternative is to fail the configure step if the header or function is not found. While that makes a bit more sense, I still dislike the option. Sure, you might be able to tell the user why the function is needed and whether they have to install something else or upgrade their system, but in truth that made much more sense when there was next to no common ground between operating systems, and when ordinary users were the ones running the ./configure script. Nowadays, that’s a task that is often left to packagers, who know their systems much better. The alternative to failing in configure is failing during the build, and it’s generally not too bad, especially since you’ll be failing the build anyway for any condition you didn’t know about beforehand.
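
For completeness, the fail-in-configure approach usually looks something like this; a hedged sketch, with the header and function names standing in as placeholders:

dnl in configure.ac: abort early if a hard requirement is missing
AC_CHECK_HEADER([strings.h], [],
  [AC_MSG_ERROR([strings.h is required, please check your C library])])
AC_CHECK_FUNC([regcomp], [],
  [AC_MSG_ERROR([a POSIX regcomp() implementation is required])])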

I have another reason to offer for why you shouldn’t be running all those tests for things you don’t support a fallback for: autoconf provides means to pass external libraries and include directives to the compiler. Since having each package provide its own replacement for common functions is going to cause a tremendous amount of code duplication (which in turn may cause a lot of work for packagers if one of them is broken – dtoa(), anybody remember that?), I’m always surprised that there aren’t many more libraries providing compatibility replacements for the functions missing in the system C library (gnulib does not count, as it solves the problem with code replication, if not duplication). Rather than failing, or trying to guess whether you can build or not depending on the OS used, just assume the functions are present if you can’t go without them, and leave it to the developers running that system to come up with a fix, which might involve additional tests, or might not.

My suggestion here is thus to start by considering the operating systems you’re targeting directly, and to find what actually changes between them. In most cases, for instance, you might still have pieces of very old systems around, like including malloc.h, which is only useful if you want to call functions such as memalign(), has not been needed for malloc() since, well, ever (stdlib.h is enough for that), and will cause errors on both FreeBSD and OS X if included. Once you find that a header is not present on one of your desired operating systems, look up what replaces it, and then make sure to check for it properly; that means using something like this:

dnl in configure.ac
AC_CHECK_HEADERS([stdint.h inttypes.h], [break;])

/* in your C code */
#if HAVE_STDINT_H
#  include <stdint.h>
#elif HAVE_INTTYPES_H
#  include <inttypes.h>
#endif

This way you won’t be running checks for a number of alternative headers on all systems: most modern C99-compatible system libraries will have stdint.h available, even though a few older systems will need inttypes.h to be discovered instead. This might sound cheap, since it’s just two headers, but especially when you’re looking for the correct place of a library header you might end up with an alternative among three or four headers; add a bunch of cases like this, and you’re going to have problems. The same trick can be used for functions; the description is also in my guide, and I’ll soon expand it to cover functions as well.
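
The function variant of the same trick would look roughly like this; the function names are just hypothetical examples, the point being that the [break;] action stops the loop at the first alternative found:

dnl in configure.ac: stop checking as soon as one alternative is found
AC_CHECK_FUNCS([arc4random random rand], [break;])

/* in your C code */
#if HAVE_ARC4RANDOM
#  define my_random() arc4random()
#elif HAVE_RANDOM
#  define my_random() random()
#else
#  define my_random() rand()
#endif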

It shouldn’t “sound wrong” to have a configure.ac with next to no AC_CHECK_* calls at all! Most of the tests autoconf will do for you, and you have the ability to add more, but there is no need to strain to use them when they are unneeded. Take as an example the feng configure.ac — it has a few checks, of course, but they are limited to a few optional features that we work around, in the code itself, if missing. And some I would probably just remove (like making IPv6 support optional… I’d sincerely just make it work if it’s found on the system, as you still need to enable it in the configuration file to use it anyway).

And please, please, just don’t use autoscan, from now on!

Autotools Mythbuster: Indexed!

Since there has been talk about Autotools today, and my Autotools Guide got linked at least in the Reddit comments, I decided to take a few minutes of my time and extend the guide a bit further. I was already doing so to document the automake options (I was actually aiming at documenting the flavors, and in particular the foreign mode, so I would stop finding 0-sized NEWS files around), but this time I tried to make it a bit more searchable…

So right now there is a new page with the terms index, and I shortened the table of contents so that it flows more easily in the browser. The titles should make it quite clear where you’ll end up. Right now I only added a single index for terms, even though I considered splitting them per macro or variable, similarly to how it’s done in the official documentation; for now this should do. I did add a “common errors” primary term though, as that should make it easier to find the common errors, reported by the various tools, which I have covered.

Now, those were the good news; here come the bad news, though. Quite a while after first publication, the guide is still lacking a lot, and my style hasn’t particularly improved. I’m not sure how good it can become at this pace. On the other hand, I’m still open to receiving requests and answering them there (thanks to Fabio for asking about it, there’s now a whole section about pkg-config, although it does not cover the -uninstalled variant that I use(d) so much on lscube).

Contributions in the form of corrections, general improvements or even just ideas are very welcome; so are donations or, more interestingly nowadays, flattr clicks (thanks to Sebastian for giving me an invite!). There is a flattr button at the bottom of the Autotools Mythbuster pages… if the guide is of help to you, a flattr, little as it can be, is going to show your appreciation in a way that reminds me why I started working on it.

There are going to be more news related to the guide in the future anyway, and a few more related to autotools in general, sweet news for some of you, slightly less sweet for me… so keep yourself seated, the journey is still on!

Autotools Mythbuster Mailbox: optional coloured tests

Didier sent me an email last week (which unfortunately took me a while to get to), asking a simple question:

I also would like to add colour to test results (color-tests option), but it seems to be only supported in automake >= 1.11 according to http://www.mail-archive.com/automake@gnu.org/msg14979.html .

Do you know a way – such as the one for the silent rules – to enable this option in a compatible way for both 1.10 and 1.11 versions ?

He’s referring to the way I documented using optional silent rules, so that you can use silent rules with automake 1.11 without breaking when an older automake version, such as 1.10, is in use. This is one of the most linked snippets of my guide, because many people are interested in having silent (cleaner) rules by default, but they can’t afford to require their users to update automake just because of that (heck, there are a number of projects stuck on automake versions even older than 1.10, for compatibility with older distributions such as RHEL 5). Having optional, auto-discovered silent rules fits most uses without trouble.
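
For context, the snippet in question boils down to something like the following: the macro is invoked only if the installed automake actually defines it, so older versions simply ignore it (this is a from-memory sketch of what the guide documents, not a verbatim quote):

dnl in configure.ac, after AM_INIT_AUTOMAKE
m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])])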

Now, Didier is asking for a similar conditional to make use of coloured test output, which should make the output easier to interpret (at least interactively over a terminal; coloured output in log files becomes harder to read, and I have had bad experiences on the tinderbox with packages that always use colours).

Anyway, since this is a similar issue to the silent rules, it’s definitely not worth overdoing things by requiring automake 1.11 just for this feature. Unfortunately, it’s not possible to have a clean solution to this. You’d expect that the automake developers, knowing all too well that most projects don’t want to force newer automake version requirements, would have added a definition, or a macro, to test for the automake version in use, allowing for conditional code depending on the version. Unfortunately, that’s not the case; the only macro that might seem to do that is AM_AUTOMAKE_VERSION, but it’s not what you’d think it is:

# (This private macro should not be called outside this file.)
..
dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to
dnl require some minimum version.  Point them to the right macro.

Sigh.

Anyway, since you cannot use a conditional on the version, you have to look for an alternative approach. In the case of silent rules, my solution was to check for the definition of the AM_SILENT_RULES macro itself: besides the option to AM_INIT_AUTOMAKE, silent rules give you the ability to enable them by default or not, and to do that you’re given the eponymous macro. While I would have thought the same type of conditional would be feasible for the coloured tests, it turns out that’s not the case.

While the two clean options (checking for a minimum automake version, or checking explicitly for the availability of the color-tests feature) are not available, we are not entirely stuck. We can use a dirty hack: since we know that the coloured test output and the silent rules were implemented at the same time, we can simply check whether silent rules are available and, if they are, enable coloured tests as well. But you cannot expect automake to make it easy to implement this check either! Indeed, because of how aclocal is implemented, m4_ifdef is going to succeed only if the macro ends up being called, even if just conditionally. This works out properly for the conditional silent rules, since your target is to call that macro in the end; if your target here were just to set color-tests, it would definitely fail. You have to get both features at once for it to work:

m4_ifdef([AM_SILENT_RULES],
  [m4_define([myproj_color_tests], [color-tests])],
  [m4_define([myproj_color_tests], [])])

AM_INIT_AUTOMAKE([foreign ]myproj_color_tests)

m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES])

Yes, this is definitely nasty, tricky, dirty, and unreliable, since we can’t know whether in the future one of the two features will go away, or maybe even become the default. Without a proper version identifier or an official way to enable features only if available, we’re basically stuck. We should probably complain to automake upstream until they actually provide us with the features needed to test for this. Not that I’m positive that’ll happen anytime soon.

So while I could tell you how to do this, I’m going to positively recommend against this, and I won’t document it on Autotools Mythbuster proper.

Autotools Mythbuster: Why autoconf is not getting any friendlier

You know that I’m an “autotools lover”, in the sense that I prefer dealing with an autotools-based build system over most custom build systems, which are as broken as they can possibly be. Or over CMake.

But I’m definitely not saying that autotools are perfect, far from it. I’m pretty sure I have said before that there are so many non-obvious tricks around that they confuse most people and cause lots of confusion out there. The presence of these non-obvious tricks is due partly to the way autoconf started in the first place, partly to the fact that it has to generate standard sh scripts for compatibility with older systems, and partly to the fact that upstream is not really helping the whole stack to mature.

One of the most non-obvious, yet common, macros out there is definitely AC_ARG_ENABLE (and its cousin AC_ARG_WITH, obviously), as I’ve blogged and documented before. The main issue with those macros is that most people expect them to be boolean (--enable and --disable; --with and --without) even though they are quite freeform (--with-something=pfx or --enable-errors=critical). But also, the sheer fact that the second parameter needs to be produced by another macro (AS_HELP_STRING) is quite silly if you think about it. And, if you look at it, the (likely more recent) AC_ARG_VAR macro does not require a further macro call for the help string.
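
To make that concrete, here is a minimal sketch of the boolean pattern most people actually want; the option name and the preprocessor symbol are hypothetical, and note how the yes/no semantics have to be enforced by hand, since the macro itself imposes none:

AC_ARG_ENABLE([debug],
  [AS_HELP_STRING([--enable-debug], [enable verbose debugging output])],
  [enable_debug=$enableval],
  [enable_debug=no])

dnl nothing stops a user from passing --enable-debug=banana, so normalise the value
AS_IF([test "x$enable_debug" = "xyes"],
  [AC_DEFINE([ENABLE_DEBUG], [1], [Define to 1 to enable debugging output])])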

It gets even wilder: a lot of the AC_ARG_WITH calls are, further down, wrappers around calls to PKG_CHECK_MODULES (a pkg-config support macro). How is this worse? Well, you end up repeating the same code between different projects to produce, more or less, the same output. To at least reduce the incidence of this, Luca wrote a set of macros that wrap around and perform these common tasks. The result is that you have to provide a lot fewer details; you lose some flexibility of course, but it produces much neater code in your configure.ac.
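
The boilerplate being wrapped is more or less the following non-automagic pattern, repeated over and over with different names; this is a generic sketch with a hypothetical library, not Luca’s actual macros:

AC_ARG_WITH([foo],
  [AS_HELP_STRING([--with-foo], [build with foo support @<:@default=check@:>@])],
  [], [with_foo=check])

have_foo=no
AS_IF([test "x$with_foo" != "xno"],
  [PKG_CHECK_MODULES([FOO], [foo >= 1.0],
     [have_foo=yes],
     [AS_IF([test "x$with_foo" = "xyes"],
        [AC_MSG_ERROR([foo support requested but the foo library was not found])])])])

AM_CONDITIONAL([HAVE_FOO], [test "x$have_foo" = "xyes"])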

Now, as I said, autoconf upstream is partly responsible for this: while the new versions have been trying to reduce the possible misuse of macros, they are bound to maintain a vast amount of compatibility. You don’t see the kind of changes that happened with the 2.1x → 2.5x version jump nowadays. The result is that you cannot easily provide new macros, as they wouldn’t be compatible with older versions of autoconf, and that would mostly be taken as a bad thing.

And the reason for that can be found in why libvirt’s configure is still stuck with compatibility towards autoconf 2.59. While autotools are designed not to require their presence on either the host or build system (unlike CMake, imake, qmake, scons, …), developing for, and checking compatibility with, older systems requires having the tools at hand to rebuild configure and company. I solve this myself by just using NFS to pass the code around the various (real and virtual) machines, after regenerating it on my Gentoo box, but I admit it’s definitely not sleek. In this case, libvirt is also developed against RHEL 5, and on that system the only autoconf available is 2.59.

One way to solve this kind of problem would be to have a single, main repository of macros that everybody could use to perform the same tasks. Of course this would also mean standardising on some sort of interface for macros, and for the build systems themselves, and this is easier said than done. While we’re definitely not at the point Ruby is at, we still aren’t really that standardised. Many packages handled by more or less the same people tend to re-use the same code over and over, but that only produces a multitude of similar, yet still half-custom, build systems.

There are only two efforts, as far as I know, that ever tried extending autoconf in a “massive” way: KDE (and the results are, well, let’s just say that I can’t blame them for trying to forget about KDE 3’s build system), and GNOME. The latter is still alive, although it isn’t concerned with giving more generic macro interfaces to common tasks.

There is the autoconf macro archive, but there is one catch with it: some of the macros submitted there have GPL or GPL-affine licenses (compatible ones, of course); that means you might not be able to use them in MIT-licensed projects. While the archive even says that the FSF suggests using more permissive licenses for macro files, it does not require submissions to be so licensed. Which can get quite messy in the long term, I’m afraid, for those projects.

At any rate, this post should show that I don’t really think that autotools are error-safe, but at the same time, I don’t think that creating a totally new language to express these things (like CMake does) is the solution. If only Rake were parallel-capable (which is unlikely to happen as long as Ruby cannot seriously multithread), it would probably be, in my opinion, a better replacement for autotools than what we have now. That is, if the presence of Ruby is not a problem.

Autotools Mythbuster: updated guide

While preparing for my first vacation ever next week, I’ve been trying to write up more content for my guide, so that at least my fellow developers in LScube have a reference for what I’ve been doing, and Gentoo developers as well, as lately I’ve been asked quite a few interesting questions (and not just by them).

So, first of all, thanks to David (user99) who cleaned up the introduction chapter, and to Gilles (eva) who gave me the new stylesheet (so that it doesn’t look as rough as it did before; it also narrows the lines so that it reads better). It might not be the final style, but it really is an improvement.

As for my changes, I’ve been trying to slightly change the whole approach of the guide, writing complete working examples for readers to use, which are listed on the main page. At the same time, I’m trying to cover the most important, or least known, topics, with particular attention to what people have asked me, or to what I’ve been using on projects that is not very well known. The list of topics added includes:

  • using AC_CHECK_HEADERS to get one out of a prioritised list of headers;
  • using AC_ARG_* macros (enable, with and environment variables);
  • using AS_HELP_STRING to document the arguments;
  • using AC_ARG_WITH to set up automatic but not automagic dependencies;
  • using automake with non-recursive makefiles (including some of the catches);
  • improved automake silent-rules documentation;
  • using external macro files with autoconf (work in progress, for now only autoconf with no extras is documented).

I’m considering the idea of merging in some of the For A Parallel World articles, at least those dealing with automake, to complete the documentation. The alternative would be to document all these kinds of problems and write something along the lines of “A Distributor’s Bible”… the problem with that idea is that somebody will almost surely complain if I use the name “Bible” (warning: I’m not a Catholic, I’m an atheist!)… and if I am to call it “The Sacred Book of Distributors” I’ll just have to dig up all the possible mocks and puns over various religions, ‘cause I’ll almost surely be writing the ten commandments for upstream projects (“Thou shall not ignore flags”, “Thou shall version your packages”), and that will also run into political-correctness problems.

Oh well, remember that I do accept gifts (and I tried not to put on the list the stuff that I’ll be buying next week… I already told my friends not to let me enter too many shops, but I’ll be following them around and that’s not going to be a totally safe tour anyway…).

Movin!!

For a while I had been quoting songs, anime and other media when choosing post titles; then I stopped. Today, it seems perfectly fine to quote the title of one of the Bleach anime endings, by Takacha, since it suits what my post is about… just so you know.

So, since my blog experienced technical difficulties last week, as you might know, I want to move out of the current vserver (midas) – which is thankfully sponsored by IOS for the xine project – to a different server that I can manage just for the blog and a couple more things. I’m now waiting for a few answers (from IOS, to start with) to see where this blog is going to be deployed next (I’m looking for Gentoo Linux vservers again).

The main problem is that the big, expensive factor in all this is traffic; midas is currently serving lots of it: this blog alone averages over 300 MB/day, which adds up to about 10 GB of traffic a month. But the big hits come from the git repositories, which means that a relatively easy way to cut down the traffic expense of the server is to move the repositories out.

For this reason I’ve migrated my overlay back over Gentoo hardware (layman included), while Ruby-Elf is the first of my projects to be hosted at gitorious (I’m going to add Autotools Mythbuster soon too).

As for why I decided to go with gitorious over GitHub, it’s both a technical and a political choice for me. Technical, because I like the interface better; political, both because of the AGPL3 license used by gitorious and because it does not put front and centre the “fork it” workflow that GitHub seems to have based itself on. On the other hand, I actually had difficulties finding where to clone the unofficial PulseAudio repository from to prepare my local copy, and the project interface does show the “Merge Requests” counter pretty well.

At any rate, there will be some of my stuff available at GitHub at the end of the day, mostly the things that started or are now maintained within GitHub, like Typo itself (for which I have quite a few local changes, both bug fixes and behaviour changes, that I’d like to get merged upstream soonish).

This starts to look like a good training for when I’ll actually move out of home too.

Update (2017-04-22): as you may know, Gitorious was acquired by GitLab in 2015 and turned down the service. Which not only means this post is now completely useless, but I gave up and joined the GitHub crowd, since that service “won the war”. Unfortunately some of my content from Gitorious has been lost because I wasn’t good at keeping backups.