Why is `git send-email` so awful in 2017?

I set out to send my first Linux kernel patch using my new Dell XPS13, after someone contacted me asking for help supporting a new it87 chip in the gpio-it87 driver I originally contributed.

Writing the (trivial) patch was easy, since they had some access to the datasheet, but then came the problem of figuring out how to send it over to the right mailing list. That took me significantly more time than it should have, and significantly more time than writing the patch, too.

So why is it that git send-email is still so awful, in 2017?

So the first problem is that the only way you can send these emails is either through a sendmail-compatible interface, which is literally an interface older than me (by two years), or through SMTP directly (which is even older, as RFC 821 dates to 1982, though being a protocol, that I do expect). The SMTP client at least supports TLS, provided you have the right Perl modules installed, and authentication, though it does not support more complex forms of authentication such as Gmail’s XOAUTH2 protocol (ignore the fact it says IMAP; it is meant to apply to both IMAP and SMTP).

Instead, the documented (in the man page) approach for users with Gmail and 2FA enabled – which should be anybody who wants to contribute to the Linux kernel! – is to request an app-specific password and save it through the credential store mechanism. Unfortunately the default credential store just keeps it as unencrypted plaintext. There are, however, a number of credential helpers you can use instead, relying on Gnome Keyring, libsecret, and so on.
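
For reference, the setup the man page suggests boils down to something like this in ~/.gitconfig (the server and user values are placeholders, and the libsecret helper has to be built and available on your PATH, for instance from git’s contrib/ directory, otherwise the full path to it goes in its place):

```
[sendemail]
    smtpServer = smtp.gmail.com
    smtpServerPort = 587
    smtpEncryption = tls
    smtpUser = someone@gmail.com
[credential]
    helper = libsecret
```

With that in place, git send-email should only ask for the app-specific password once, handing it over to the helper for storage.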

Microsoft maintains and releases its own Credential Manager which is designed to support multi-factor login to a number of separate services, including GitHub and BitBucket. Thank you, Microsoft, although it appears to only be available for Windows, sigh!

Unfortunately it does not appear there is a good credential helper for either KWallet or LastPass which would have been interesting — to a point of course. I would probably never give LastPass an app-specific password to my Google account, as it would defeat my point of not keeping that particular password in a password manager.

So I started looking around and found that there is a tool called keyring which supposedly has KWallet support, though on Arch Linux that does not appear to be working (the KWallet support, that is, not the tool, which appears to work fine with Gnome Keyring). So I checked out the issues: the defaulting to gnome-keyring is known, and there is a feature request for a LastPass backend. That sounds promising, right? Except that the author suggests building it as a separate library, which makes sense to a point. Unfortunately the implicit reference to their keyrings.alt (which does not appear to support KDE/Plasma) drove me away from the whole thing. Why?

License is indicated in the project metadata (typically one or more of the Trove classifiers). For more details, see this explanation.

And the explanation then says:

I acknowledge that there might be some subtle legal ramifications with not having the license text with the source code. I’ll happily revisit this issue at such a point that legal disputes become a real issue.

Which effectively reads to me as “I know what the right thing to do is, but it cramps my style and I don’t want to do it.” The fact that people have already pointed out the problem, and that multiple issues have been reported and then marked as duplicates of this master issue, should speak clearly enough.

In particular, if I wanted to contribute anything to these repositories I would have no hope of doing so except in my free time, and only if I decided to apply for a personal project request, as these projects are likely considered “No License” given the sheer lack of copyright information or licenses.

Now, I know I have not been the best person about this either. But at least for glucometerutils I have made sure that each file lists its license clearly, and the license is spelt out in the README file too. And I will be correcting some of my past mistakes at some point soon, together with certain other mistakes.

But okay, so this is not a viable option. What else is there to use? Well, it turns out that there is an actual FreeDesktop.org specification, or at least a draft, which appears to have been last touched seven years ago, for a common API shared between GNOME and KWallet, and for which there are a few other implementations already out there… but the current KWallet does not support it, and the replacement (KSecretService) appears to be stalled/gone/deprecated. Which effectively means that you can’t use that either.

Now, on Gentoo I know I can use msmtp integrated with KWallet through the sendmail interface, but I’m not sure whether that would work correctly on Arch Linux. After all, I even found out that I needed to install a number of Perl modules manually, because they are not listed in the dependencies, and I don’t think I want to screw with PKGBUILD files if I can avoid it.
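
For what it’s worth, the msmtp route does not even require a real system sendmail: if sendemail.smtpServer points at an executable, git send-email treats it as a sendmail-compatible program rather than an SMTP host (the path here is just illustrative):

```
git config --global sendemail.smtpServer /usr/bin/msmtp
```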

So at the end of the day, why is git send-email so awful? I guess the answer is that in so many years we still don’t have a half-decent, secure replacement for sending email. We need what they would now call “disruptive technology”, akin to how SSH killed Telnet, to bring up a decent way to send email, or at least submit Git patches to the Linux kernel. Sigh.

Finding a better blog workflow

I have been ranting about editors for the past few months, a year after considering shutting the blog down. After some more thinking and fighting, I now have a better plan, and the blog is not going away.

First of all, I decided to switch my editing to Draft and started paying for a subscription at $3.99/month. It’s an as-simple-as-it-can-be editor, with no pretence. It provides the kind of “spaced out” editing that is so trendy nowadays, and a so-called “Hemingway” mode that does not allow you to delete. I don’t really care for the latter, but it’s not so bad.

More importantly, it gets the saving right: if the same content is being edited in two different browsers, one gets locked (so I can’t overwrite the content), and a big red message telling me that it can’t save appears the moment I try to edit something while the Internet connection is gone or I have been logged out. It has no fancy HTML editor, and is instead designed around Markdown, which is what I’m using nowadays to post on my blog as well. It supports C-i and C-b just fine.

As for the blog engine, I decided not to change it. Yet. But I also decided that upgrading it to Publify is not an option. Among other things, as I went digging to fix a few of the problems I’ve been having, I discovered just how much spaghetti code it was to begin with, and I lost any trust in the developers. Continuing to build upon Typo without taking the time to rewrite it from scratch is, in my opinion, time wasted. Upstream’s direction has been to build more and more features to support Heroku, CDNs, and so on and so forth; my target is to make it slimmer, so I started deleting good chunks of code.

The results have been positive, and after some database cleanup, and after removing support for structures that were never implemented to begin with (like primary and hierarchical categories), browsing the blog should be much faster and less of a pain. Among the features I dropped altogether is theming, as the code is now very specific to my setup; that in turn allowed me to use the Rails asset pipeline to compile the stylesheets and javascripts, which should lead to faster load times for everybody (even though it also caused a global cache invalidation, sorry about that!).

My current plan is to not spend too much time on the blog engine in the next few weeks, as it has reached a point where it’s stable enough, but rather to fix a few things in the UI itself, such as the Amazon ads, whose loading currently causes some things to jump across the page a little too much. I also need to find a new, better way to deal with image lightboxes — I don’t have many in use, but right now they are implemented with a mixture of Typo magic and JavaScript — ideally I’d like the JavaScript to take care of everything, attaching itself to data-fullsize-url attributes or something like that. But I have not looked into replacements explicitly yet; suggestions welcome. Similarly, if anybody knows a good JavaScript syntax highlighter to replace coderay, I’m all ears.

Ideally, I’ll be able to move to Rails 4 (and thus Passenger 4) pretty soon, although I’m not sure how well that works with PostgreSQL. Manually adding some indexes to the tables, and especially making sure that the diamond-tables for tags and categories did not include NULL entries and had a proper primary key spanning the full row, made quite the difference in the development environment (less so in production, as more data is cached there, but it should still be noticeable if you’re jumping around my old blog posts!).

Coincidentally, among the features I dropped from the codebase are the update checks and the inbound links (which used the Google Blog Search service that no longer exists), making the webapp network-free — Akismet stopped working some time ago, and it is actually one of the things I want to re-introduce, but then again I need to make sure that the connection can be filtered correctly.

By the way, for those who are curious why I spend so much time on this blog: I have been able to preserve all the content I could, from my first post on Planet Gentoo in April 2005, on b2evolution. Just a few months short of ten years now. I also was able to recover some posts from my previous KDEDevelopers blog from February of that year, and a few (older) posts in Italian that I originally sent to the Venice Free Software User Group in 2004. Which essentially means, for me, over ten years of memories and words. It is dear to me, and most of you won’t have any idea how much — it probably also says something about priorities in my life, but who cares.

I’m only bothered that I can’t remember where I put the backup from blogspot I made of what I was writing when I was in high school. Sure it’s not exactly the most pleasant writing (and it was all in Italian), but I really would like for it to be part of this single base. Oh and this is also the reason why you won’t see me write more on G+ or Facebook — those two and Twitter are essentially just a rant platform to me, but this blog is part of my life.

I’ll stick with Thunderbird still

Even though it hasn’t been a year yet since I moved to KDE, after spending a long time with GNOME 2, XFCE and then Cinnamon, over the past month or so I’ve been looking at how much non-KDE software I could ditch this time around.

The first software I ditched was Pidgin — while the default use of GnuTLS caused some trouble, KTP works quite decently. Okay, some features are not fully implemented, but the basic chat works, and that’s enough for me — it’s not like I used much more than that in Pidgin either.

Unfortunately, when yesterday I decided to check whether it was possible to ditch Thunderbird for KMail, things didn’t turn out as nice. Yes, the client has improved a truckload since what we had at KDE 3 times — but no, it hasn’t improved enough to make it usable for me.

The obvious zeroth problem is the dependencies: to install KMail you need to build (but don’t need to enable) the “semantic desktop” — that is, Nepomuk and the content indexing. In particular it brings in Soprano and Virtuoso, which were among the least usable components when KDE4 was launched (at least Strigi is gone with 4.10; we’ll see what the future brings us). So after a night spent rebuilding part of the system to make sure that the flags were enabled and the packages in place, today I could try KMail.

First problem — at the first run it suggested importing data from Thunderbird — unfortunately it got completely stuck there, and after over half an hour it had gone nowhere. No logs, no diagnostics, just stuck. I decided to ignore it and create the account manually. While KMail tried to find out automatically which mail servers to use, it failed badly – I guess it tried to look for some _imap._srv.flameeyes.eu or something, which does not exist – even though Thunderbird can correctly guess that my mail servers are Google’s.

Second problem — the wizard does not make it easy to set up a new identity, which makes it tempting to add the accounts manually; but since there are three different entries you have to add (Identity, Sending account, Receiving account), adding them in the wrong order means revisiting the settings quite a few times. For the curious, the order is sending, identity, receiving.

Third problem — KMail does not implement the Special-Use folder extension defined in RFC 6154, which GMail makes good use of (it actually implements it both through the standard extension and through its own). This means that KMail will store all messages locally (drafts, sent, trash, …) unless you set the folders up manually. Contrary to what somebody told me, this means that the extension is completely unimplemented, not implemented only partially. I’m not surprised that it’s not implemented, by the way, due to the fact that the folders are declared in two different settings (the identity and the account).
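
For the record, this is roughly what the extension looks like on the wire against GMail — an illustrative exchange, as the exact folder names depend on the account:

```
C: a1 LIST (SPECIAL-USE) "" "*"
S: * LIST (\Drafts) "/" "[Gmail]/Drafts"
S: * LIST (\Sent) "/" "[Gmail]/Sent Mail"
S: * LIST (\Trash) "/" "[Gmail]/Trash"
S: * LIST (\All) "/" "[Gmail]/All Mail"
S: a1 OK Success
```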

Fourth problem — speaking of GMail, there is no direct way to handle the “archive” action, which is almost a necessity if you want to use it. While this started with GMail, and was almost exclusive to that particular service, nowadays many other services, including standalone software such as Kerio, provide the same workflow; the folder used for archiving is, once again, advertised through the special-use attributes discussed earlier. Even though the developers do not use GMail themselves, it feels wrong that it’s not implemented.

Fifth problem — while at it, let’s talk a moment about the implementation of the IDLE command (one of the extensions needed for Push IMAP). As Wikipedia says, KMail implements support for it since version 4.7 — unfortunately, it does not use it in every case, but only if you disable the “check every X minutes” option; if that is enabled, the IDLE command is not used. Don’t tell me it’s obvious, because even though it makes sense from some points of view, I wasn’t the only one who was tricked by that. Especially since I first read that setting as “disable if you only want manual checks for new mail” — Thunderbird indeed uses IDLE even if you set the scheduled check every few minutes.
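
For those unfamiliar with it, the command itself is trivial: the client parks a connection in idle state and the server pushes notifications until the client sends DONE (again an illustrative exchange):

```
C: a2 IDLE
S: + idling
S: * 231 EXISTS
C: DONE
S: a2 OK IDLE terminated
```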

Sixth problem — there is no whitelist for remote content in HTML emails. GMail, both on the web and in the clients, Android and iOS, supports a complete whitelist, separate from everything else. Thunderbird supports a whitelist by adding the sender to the contacts list (which is honestly bothersome when adding mailing lists, like in my case). As far as I could tell, there is no way to have such a whitelist in KMail. You either have the protection enabled, or you have it disabled.

The last problem is the trickiest, and it’s hard to tell if it’s a problem at all. When I went to configure the OpenPGP key to use, it wouldn’t show me anything to select at all. I spent the better part of an hour trying to get it to select my key, and it failed badly. When I installed Kleopatra it worked just fine; on the other hand, Pesa and other devs pointed out that it works for them just fine without Kleopatra installed.

So, what is the resolution at this point, for me? Well, I guess I’ll have to open a few bugs and feature requests on KDE’s Bugzilla, if I feel like it, and then I can hope for version 4.11 or 4.12 to have something that is more usable than Thunderbird. As it is, that’s not the case.

There are a bunch of minor nuisances and other things that simply require me to get used to them, such as the (in my view too big) folder icons (even if you change the size of the font, the size of the icons does not change), and the placement of the buttons, which required me to get used to it on Thunderbird as well. But these are only minor annoyances.

What I really need for KMail to become my client is a tighter integration with GMail. It might not suit the ideals as much as one might prefer, but it is one of the most used email providers in the world nowadays, and it would go a long way for user friendliness to work nicely with it instead of making it difficult.

My life with KDE4

It was late July when I went back to KDE — I left it just after version 4.0 was released, with the whole confusion that followed. I now have a generally good impression of it: it works much better than it used to in the original release, and it has proved itself much more stable than GNOME 3 or Cinnamon, as it never crashed on me altogether, like both of them did so many times. Multi-monitor, which is what I complained about regarding Cinnamon, is not perfect here either, but that’s a completely different story at this point I guess.

While I’m still not liking the way the whole KDE 4.0 release has been handled – “everybody should know that a dot-zero is not ready for daily use” is just a sorry excuse for a mess up – the situation has improved and the results are good. Of course, like everybody already told me, I steered clear of semantic desktop, which means I’m not using KMail or anything like that (even though Tomáš pointed out that the problem is that it needs to have the code available, but not enabled at runtime). On one laptop I’ve been using Thunderbird — on the other I’m using only GMail and it seems enough for most cases, right now.

There are a few things that I still don’t really find straightforward, like the Plasma widget sharing (what the heck is it used for?) and the fact that you can have both widgets and notification icons in the lower panel for things like battery and network — and of the two, the nicer ones are the notification icons, which have also been, to me, the most difficult to identify. I also haven’t grasped the idea of activities, or to be precise their difference from desktops, besides looking, to me, like a three-dimensional desktop wall. I know that for it to work properly you need support from the apps as well, and they change both the apps and the available widgets… still, it doesn’t look like I can care for it right now.

KRunner (the thingy that comes up with M-F2) is actually quite nice, but there are a few rough edges on it as it is right now, I think. One of them was that with the latest update (4.10) I lost the icons on it… until the next reboot, when they appeared again. Maybe it had something to do with the further updates done on the ebuilds after the first unmask. Also, at least in 4.9, sometimes if you’re too quick to type and press enter, it executes the result of the previous search rather than the current one… but okay, that’s my fault I guess.

After quite a long time I also decided to give up on Pidgin (on the Dell where I had been using it; the Zenbook right now does not have it at all, since when I’m using it, like right now, I mostly care about being left alone — I bought it as my “time to write” laptop), in favour of the Telepathy integration provided by KDE itself – you probably noticed after my previous rant about SSL implementations – which actually seems to work pretty well. Only issue? When you first try to add an account, and you don’t have the backends installed yet, it tells you to install both telepathy-gabble and telepathy-haze: the former implements XMPP and thus allows both GTalk and Facebook accounts, the latter implements almost everything else on top of Pidgin’s libpurple. You don’t need the latter for either GTalk or Facebook, which happen to be the only accounts I care about, so in the end I was able to get rid of both telepathy-haze and Pidgin itself.

I like digiKam for photo handling, although the Flickr upload feature (at least in the previous version) was lacking, and the Facebook one absolutely unreliable… I know that version 3 has just been released and I’m looking forward to seeing if the new version solves my problems, which would make me very happy. I’m also hoping I’ll be able to get a new hard drive just for the photos; the one I’m using now is shared with Windows, and thus is formatted as NTFS — access speed, even over USB3, is abysmal.

So, all in all, going back to KDE was really a very good idea. Although it took me a while to get used to it, and while it includes a number of features that I don’t think I’ll ever use (activities and widget sharing as noted above), it does not get in my way, which is the main reason why I was so bothered by GNOME 3, and it’s much more stable than Cinnamon.

Back to … KDE‽

Now this is one of the things that you wouldn’t expect — years after I left KDE behind me, today I’m back to using it… and honestly I feel at home again. I guess the whole backend changes that went through KDE4 were a bit too harsh for me at the time, and I could feel the rough edges. But after staying with GNOME2, XFCE, then Cinnamon… this seems to be the best fit for me at this point.

Why did I decide to try this again? Well, even if it doesn’t happen that often, Cinnamon tends to crash from time to time, and that’s obnoxious when you’re working on something. Then there is another matter, which is that I’m using a second external monitor together with my laptop, as I’m doing network diagrams and other things at my day job, and for whatever reason the lockscreen provided by gnome-screensaver no longer works reliably: sometimes the second display is kept running, other times it doesn’t resume at all and I have to type blindly (with the risk that I’m actually typing my password into a different application), and so on and so forth.

Then, since I wanted to have a decent photo application, and Shotwell didn’t prove itself good enough for what I want to do (digiKam is much better), I decided to give it another try…

So first of all I owe lots of congratulations to all the people working on KDE: you did a tremendously good job. There are, though, a few things that I’d like to point out. The first is that I won’t be using the term “KDE SC” anytime soon; that still sounds dumb to me, sorry. The second is that I wonder if some guys made it a point to exaggerate in the graphical department, only to have to roll back afterwards. I remember how much hype was created around Oxygen, and now it feels that, just like Keramik in KDE 3 times, they had to scale it back.

Another sore spot is the Italian translation — it’s not just awkward but in some places it’s definitely wrong! They translated “Plastique” (the theme name) and “Plastik” (the reference to the KDE 3 theme) as if they were “Plastic” — this is not the case! Keep the names the same as the original please. There are also a few more problems, including the fact that they did translate terms like “Serif”, “Sans Serif” and “Monospace” which … well they don’t really sound that good in Italian. At the end I simply changed the system language back to English.

There are still quite a few things that I have to take care of to set this up properly; right now it’s just a messy first setup, and there are a few bugs that I have to report (including the fact that, to upload files at their original size to Flickr, KIPI actually copies the original file to /tmp, which makes it very unpleasant, especially when you have that directory in tmpfs).

At any rate, you’ll probably read some more comments on KDE4 from me in the next few weeks, so be ready for them.

Libtool archives and their pointless points

Since the discussion about libtool files has resumed, and we’re back to deciding whether to kill them now or “plan” for the next five to ten years, I guess I’d better summarise all the information regarding them once more, trying to extend what I have already written about a number of times, and provide all the reasoning to abandon them as soon as possible.

I’m writing it here since this is what I use as my main reference; I’ll see about sending this to gentoo-dev tomorrow if I have time and feel motivated enough, but if somebody wants to beat me to it, I’ll just be happy, since it’ll mean less work for me.

About the chance of just removing all of them unconditionally: this is one thing I sincerely don’t think is possible, even though some trials have been done toward that target, for instance with the Portage-Multilib branch. The reasons are multiple; the most obvious one is that matching them by the *.la name is just not enough; there is at least one package in the tree (in the subset of packages the tinderbox is able to merge, more to the point) that installs files with the .la suffix which simply are not libtool archives at all. So removing all of the files based on file name and path alone is a bad idea.
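
If one really wanted to prune them mechanically, the guard would have to look at the content rather than just the name; a minimal sketch (not what Portage does), keying off the header comment that libtool writes into every archive it generates, with ${D} standing for the image directory in an ebuild context:

```sh
# delete .la files only when they really are libtool archives,
# identified by the "libtool library file" header line inside them
find "${D}" -name '*.la' \
    -exec grep -q 'libtool library file' {} \; \
    -delete
```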

About fixing this at the core, libtool: that is a feasible, albeit very long term, solution, were it not that I find it unlikely that upstream will accept it. Besides the fact that, rather than coupling autotools more tightly to reduce duplication issues with version compatibility, they seem to be splitting it further and requiring users to use more code that might not be useful to them at all (for instance by deprecating the mktime() checks in favour of using gnulib, as they did with autoconf 2.68), the libtool developers seem to have indicated that they don’t intend to concede to modern systems’ use cases if those might hinder support of much older systems.

While it is feasible to patch libtool not to emit the .la files, and quite a number of projects would rejoice at that, it cannot be done unconditionally, as I’ll explain further along the post. So this would require either making it conditional on the -shared flag, or adding a new flag parameter to use. But even assuming that upstream were to merge the flag, fixing all of the packages upstream not to emit the libtool archive files is a plan that takes way too many years, with user inconvenience not stopping until all of it is done. So as it is, it’s something that should be sought out, but it won’t help in the short to mid term.

About the usefulness of libtool archives with plugins, which is something I wrote about in detail over a year ago: I’m not sure if I can say “each and every software building plugins with automake uses libtool”, as I don’t know all of them (yet), but I can safely assume that most of the automake-based build systems out there can only make use of libtool to build plugins, shared objects that are dynamically loaded into processes to provide expanded features. This means that most of them will emit and install, by default, a number of libtool archive files.

But even if you build the plugins with libtool, it does not mean you’re loading them with it as well; you could be using the libltdl library that libtool provides, but sincerely, most of the time just learning about it is a waste of time. The most important feature it provides is a common interface over the various dynamic linker interfaces; indeed there are different dynamic linker interfaces on Linux, Windows and Mac OS X (if I recall correctly, at least). But in this usage mode there is no use for the archive files either; they are not really needed: the interface allows you to load any object file, as long as you use the full basename, including the operating system-specific extension (.so, .dll, .dylib…).
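
As an illustration of that mode of use, here is a minimal sketch (not taken from any real project; the module and symbol names are hypothetical) that loads a plugin through libltdl by its full file name, with no .la file involved:

```cpp
// Load a plugin through libltdl by its full file name, OS-specific
// extension included; "myplugin.so" and "plugin_init" are made-up names.
// Build with something like: g++ example.cpp -lltdl
#include <ltdl.h>
#include <cstdio>

int main() {
    if (lt_dlinit() != 0)
        return 1;

    lt_dlhandle handle = lt_dlopen("myplugin.so");
    if (handle == NULL) {
        std::fprintf(stderr, "load failed: %s\n", lt_dlerror());
        lt_dlexit();
        return 1;
    }

    // look up the plugin's entry point and call it if present
    void (*init)() = reinterpret_cast<void (*)()>(lt_dlsym(handle, "plugin_init"));
    if (init != NULL)
        init();

    lt_dlclose(handle);
    lt_dlexit();
    return 0;
}
```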

Where the libtool archive files are actually used is dynamic linker emulation, which is another mode of operation of libltdl; instead of accessing the standard dynamic linker (loader) of the system, the host software relies only on static linking, and the libtool archives are used to know what to look for. In this mode, the archives are consulted even when the emulation itself is not in use, for they describe whether the plugin is to be loaded from a shared object at all. In this case you cannot remove the libtool archive without changing the underlying host software architecture considerably.

The result is that for most usage of libltdl, you can remove the .la files without thinking twice once you know that the software uses dlopen() to load the plugins (such is the case of xine-lib, which also uses a nasty Makefile.am hack to remove them altogether). You might require a bit more analysis for the software that does use libltdl.

About using libtool archive files for standard libraries: this is definitely a different, complex topic that I’ll probably have to write a lot about.

The first problem: static archives (and thus static linking) for libraries hosting plugins (like xine-lib, for instance). Most of these host programs do not support static linking at all, so for instance xine-lib never provided a static archive to begin with. This does not mean that there is no way to deal with static-linked plugins (the so-called built ins that I talked a lot about; heck Apache pulls it off pretty nicely as well). But it means that if the plugin links to the host library (and most do, to make sure proper linking is applied), then you cannot have a statically-linked copy of it.

Anything that uses dlopen(), anything that uses features such as PAM or NSS, will then not have a chance to work correctly with static linking (I admit, I’m oversimplifying here a bit, but please bear with me, getting into the proper details of that will require a different blog post altogether and I’m too tired to start drawing diagrams). After pushing this to an accepted state, we can now look at what libtool archive files provide for the standard libraries.

The libtool archive files were mainly created to overcome the limitations of the original library implementations of GNU and other operating systems, which did not provide ABI version information or dependency data. On some operating systems (but please don’t ask me to track down which), neither shared objects nor static archives provide this information; on most modern operating systems, Unix, Unix-like, or not, at least shared objects are advanced enough not to require support files for their metadata: ELF (used by Linux, BSD and Solaris) provides it in the form of sonames and NEEDED entries; Mach-O (used by OSX) and PE (used by Windows and .NET) have their own ways as well. Static libraries are a different matter.
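
To make that concrete, on an ELF system the metadata that the .la files were meant to carry for shared objects is visible directly in the dynamic section; library names and output here are purely illustrative:

```
$ readelf -d libfoo.so.1 | grep -E 'SONAME|NEEDED'
 0x000000000000000e (SONAME)             Library soname: [libfoo.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
```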

For most Unix and Unix-like systems, static libraries are rather called “static archives”, because that’s what they are: archive files created with ar, to which a number of object files are added. For these, the extra information about static linking is somewhat valuable. It is not, though, as valuable as you might think. While it does provide dependency information, there is no guarantee that the dependencies used to create the shared object and those needed to link the static archive are the same; transitive dependencies cover part of the issue, but they don’t let you know which builtins you’d have to link statically against. Also, they can only be used by software that in turn uses libtool to link.

With the current tendency to abandon autotools (for good or bad, we have to accept that this is the current trend), this means that the .la files are getting even more useless, especially because the projects that do build libraries with libtool cannot simply rely on their users using libtool on their own, which means they have to provide options to link statically (if that is supported at all) without using libtool. This usually boils down to using pkg-config to do the deed; this also has the positive effect of working for non-autotools and/or non-libtool based projects.
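
In practice that boils down to something like the following, for a hypothetical module name foo; the --static switch is what makes pkg-config pull in the private dependencies needed for static linking:

```sh
# shared linking
pkg-config --cflags --libs foo
# static linking: also pulls in Libs.private / Requires.private
pkg-config --static --cflags --libs foo
```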

Sincerely, the only relatively big subsystem that relied heavily on libtool archive files was KDE 3; since they switched away from autotools (and thus libtool) altogether, the rest of the software stack I know of only considers libtool archives a side effect, most of the time not thinking twice about their presence. A few projects actively try to avoid the presence of such files, for instance by removing them through an install hook (which is what xine-lib has been doing for years), but this also has a drawback: make uninstall does not work if you do that, because it relies on the presence of the .la files (we don’t need those in ebuilds since we don’t use the uninstall target at all).

Taking all of this into consideration, I reiterate my suggestion that we start removing .la files on an ebuild basis (or on a subsystem basis, as is the case for X11 and GNOME, since they can vouch for their packages not doing idiotic things); doing so in one big swoop will cause more trouble (as a number of packages that do need them will require the change to be reverted, and a switch to be added); and simply adding a f..ine load of USE flags to enable/disable their installation is just going to be distracting, adding to the maintenance burden, and in general not helpful even for the packages that are not in the tree at all.

Repeat after me: nobody sane would rely on libtool so much nowadays.

An appeal to upstream KDE developers

While I stopped using KDE almost two years ago, I don’t hate KDE per se… I have trouble with some of the directions KDE has been running in – and I fear the same will happen to GNOME, but we’ll see that in a few months – but I still find they have very nice ideas in many fields.

Unfortunately, there is still one huge technical problem with KDE, and KDE-based third party software (well, actually, with any Qt-based software): they are still mostly written in C++.

Now, it’s no mystery that I moved from having a liking for that language to hating it outright, preferring things like C# to it (I still have to find the time to learn Objective-C, to be honest). One of the problems I have with it is its totally unstable nature, which is well shown by the number of problems we have to fix each time a new GCC minor version is released. I blogged last week about GCC 4.5, and what actually blew my mind is what Harald said about the Foo::Foo() syntax:

The code was valid C++98, and it wasn’t a constructor call. In C++98, in every class C, the name C exists as both a constructor and a type name, but the constructor could only be used in certain contexts such as a function declaration. In other contexts, C::C simply means C. C::C::C::C::C was also a perfectly valid way to refer to class C. Only later did there turn out to be a need to refer to the constructor in more contexts, and at that point C::C was changed from a typename to a constructor. GCC 4.4 and earlier follow the original C++ standard in this respect, GCC 4.5 follows the updated standard.

Now, I thank Harald for clearing this up, but even so I think this aliasing in the original standard was one of the most braindamaged things I have ever read… it actually made much more sense to me when I thought it was an oversight! Anyway, enough with the C++ bashing for its own sake.
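
To make Harald’s point concrete, here is a minimal sketch of the kind of construct in question (my own example, not taken from any of the affected KDE code):

```cpp
// Under the original C++98 rules the injected-class-name means that C::C
// refers to the class C itself, so the declaration below is valid and
// GCC 4.4 accepts it; under the updated rules C::C names the constructor,
// and GCC 4.5 rejects the same line.
class C {};

C::C obj;  // fine with GCC 4.4 and earlier, an error with GCC 4.5
```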

What I would ask the upstream KDE developers is to please set up some way to test your code on the newest, unstable, unusable-for-day-to-day-coding upcoming GCC branches. This way we would avoid having the latest, very recent, release of a KDE-related tool fail the day the new GCC exits beta status and gets released properly. I’m not expecting you to fiddle around making sure that there are no ICEs (Internal Compiler Errors) caused by your code (although finding them will probably help GCC upstream a lot), but at least make sure that syntax changes won’t break it as soon as a new GCC is released, pretty please.

Of course, this kind of continuous integration, even extended to all of the KDE software hosted by KDE.org, is not going to be enough, as there are huge numbers of packages being developed outside of that… but it would at least make the point that there is a need for better CI in general in FLOSS!

I don’t like what people tell me is good for me

— That drink was individually tailored to meet your nutritional requirements and pleasure.
— So I’m a masochist on a diet!

Arthur Dent and the Nutri-Matic machine; The Hitchhiker’s Guide To the Galaxy — Secondary Phase

One of the reasons given for Free Software’s popularity among geeks and other technical people is that it is, for many, a simple way to scratch their own itches; it’s probably the same reason why I keep using Gentoo: it allows me to scratch my own itches pretty easily. Since people scratch their own itches, they do things the way they like best, and that turns out to be successful because both great minds and lots of geeks think alike.

At the same time, there is a very strong drive to bring Free Software to the masses… this drive is ethical for some, commercial for others, but the bottom line can generally be summarised as “Free Software needs to be done the right way”. This covers many aspects of Free Software: from code quality, to maintainability, to usability of the interfaces. And once again, to be able to have results you have to accept that there are going to be rules, standards and common practices to follow. The problem is: how do you forge them? And how much should they distance themselves from the “older” ways?

Now, for once don’t let me get into the technicalities of code practices, QA and so on and so forth… I’ll focus on something that I have to admit I have next to no working knowledge of: interface usability. I’m a developer, and like many developers, I suck at designing interfaces that are not programming interfaces: websites, GUIs, CLIs… you name it, I suck at it. That’s why I find it very helpful that there are usability experts out there who work hard to make software interfaces better to use for the average user and (possibly) for me as well.

— The ventilation system; you had a go at me yesterday.
— Yes, because you keep filling the air with cheap perfume.
— You like scented air, it’s fresh and invigorating.

Arthur Dent and the Heart of Gold ventilation system; The Hitchhiker’s Guide To the Galaxy — Secondary Phase

Unfortunately, I’m afraid stuff like that soon goes overboard, because people start to take a liking to dictating how other people should use their computers. This is, among other things, one of the most common criticisms directed toward Apple, as they tend to allow you only a certain degree of use of both their hardware and their software; and the obvious challenge is to get their hardware (at least) to do something it wasn’t designed for (a second hard drive in MacBooks, XBMC on AppleTV, iPhone jailbreak…).

Now, sometimes the diktats on how to do something turn out for the best, and people are hooked on the new interfaces and paradigms (take as an example the original iMac’s lack of a floppy disk drive; I wouldn’t be surprised if Apple were at some point to drop optical drives on their whole line of computers and then ship OSX on read-only USB media). This might create a trend that is followed by other developers, or manufacturers, as well. Without getting into the merits of the iPhone versus Android phones, just think of when Apple pushed iTunes with their iPods: the average Windows user used WinAMP before, and iTunes has a completely different interface; on Linux, XMMS first and Audacious later were the norm, both using the same interface as WinAMP. After iTunes, and for Linux especially after Amarok, around version 1.3, we have a number of playlist-centric players instead.

Now, once upon a time, KDE users and developers laughed at GNOME’s purported usability studies, which hid all the settings and caused Nautilus to become “spatial” (I remember one commenter on the issue supporting the then-new spatial Nautilus by saying that tabbed browsing wasn’t usable because it would have been the same as glueing together newspapers to read them… now that was a silly thing to say, especially in that context). With time the situation reversed, for a while at least, with KDE deciding to “move for usability” and “new concepts” with KDE 4… and breaking the shit out of it all, for many people, me included. I think a very iconic point here would be some of the complaints I heard about the latest Amarok development in #gentoo-it, about how the application is supposedly “more usable” while changing so many things around that even long-time users can’t feel at home any longer.

While Amarok always had this edgy feeling that it could screw up your mechanics by simply deciding that something is better done in the opposite way from before, it worked out because the ideas caught on pretty quickly: people moaned and ranted, but after a month or two nearly everybody was enthusiastic, and wondered why the other players didn’t do the same. This trend seems to have changed with Amarok 2, as I have heard almost only rants, and very few enthusiasts outside of the core developers. And I’m not speaking about the technical side of things here (like the usage of MySQL Embedded — which in my opinion has been a very bad move… mostly because MySQLe was definitely not ready at the time, as Jorge might tell you).

But my safe haven of GNOME is starting to feel disturbed; while I’ve read good things about the “Usability Hackfest” that happened a couple of weeks ago in London, sponsored among others by Canonical if I recall correctly, some of the posts coming from there looked positively worrisome. In particular, Seth Nickell’s posts about “Task Pooper” (maybe I’m biased, but projects choosing such names feel like a very bad start to me) reminded me a lot of Seigo’s posts about Plasma, and while I hear most people are happy with it as currently implemented, I also remember the huge rants in the first iterations, where the whole interaction was designed out of thin air… I’ll quote the Ars Technica article (whose title is, in my opinion, a bit too forceful):

Despite his protest that the new design isn’t “handwavy,” I had a hard time seeing how all the pieces fit together after reading the initial document. [snip]

Actually, I think Nickell went on to say that his design is not exactly what he made it out to be, as it stands now. Going all the way to declare the New Majestic Paradigm Of Desktops is the first bad move if you want something good, I think. Not only will it add a lot of expectation to a project that is for now just designed out of thin air, but it also makes him sound way too convinced about his stuff. I like it much better when designers are not convinced about their stuff, as that means they’ll think about it a lot more… it’s a challenge of second-guessing oneself and improving step by step. If you think you have reached the top already, you’re going to stop thinking about it.

At any rate, the point I wanted to make was simply that people need to complain and need to rant about things, if you want them to be good. So please don’t take my rants always as negative, I do rant, and sometimes I rant a lot but I usually do that because I want to improve the situation.

P.S.: if GNOME 3 turns out to break as many things as KDE 4.0 did, I might consider trying the latest version of KDE at that time. Unfortunately I have heard too many bad things about KMail eating email… so I’m still a bit wary. I really like the idea of GNOME developers working on 3.0 already, even though 2.30 is still to be released… branching is good!

Tree cleanup

You might not have noticed it, but the tree is slimming down, and not just because Samuli and I are removing broken packages (it has actually been a while since I sent my last QA last rites), but also because I started removing duplicate files and moving some of them to the mirrors. Indeed, it seems like the tree has been accumulating layers of crap: unused patches, duplicated files, huge files, and huge duplicated files, which are the worst.

Now, while a lot of big files are just there because they are legacy stuff, there are also files that have been added recently, and sometimes they are not even borderline with the warning (repoman warns for files bigger than 20K; if the size – not occupied space – of the files/ directory is over 20K, though, we’re already in trouble); I found files added recently that were well over 40K! This mostly means that my fellow developers systematically ignore repoman’s warnings about file size, and that is something that really upsets me.

But that’s not the only problem; another common one is that quite a few ebuilds are committed to the tree without even being tried, or without even thinking twice about it. And this is not something limited to a bunch of developers, but starts to feel like something quite widespread. If you use ~arch you probably noticed the bump of sys-libs/db to version 4.8, which left stuff like Python, OpenOffice and APR failing to build; you might also remember that db 4.7 was kept in package.mask for quite some time to give the reverse dependencies time to be fixed; that’s what is called QA, and that seems to be definitely lacking.

Sometimes, problems are due to ebuilds written or heavily modified by users and committed straight away, without thinking. For instance, I had to heavily edit the iscan ebuild for QA because it was committed more or less straight from a bug, and had the usual round of problems with autotools (configure executed three times — don’t say that autotools are slow, if packagers don’t even notice that stuff), libtool 2 failures, and overcomplex handling of USE flags (we have package.use.mask for that, stop using arch-conditionals!).

In general, I have to say that Gentoo’s QA is on a slippery slope, sliding further down from time to time, even though Mark, Samuli, others and I are putting in a great deal of effort to get it right. Sigh.

I really need to find a way to get the tinderbox’s logs cleaned up, by the way, making sure that only the last merge log (not unmerge!) of a given package/slot combination is kept, ignoring previous merges (which might have been fixed already) and versions older than those now available; if I could also note the revision of the individual ebuild, it would be even better, but that’d be quite too much I’m afraid, and it wouldn’t work quite as well.

Why autoconf updates are never really clean

I’m sure a lot of users have noticed that each time autoconf is updated, a helluva lot of packages fail to build for a while. This is probably one of the reasons why lots of people dislike autotools in the first place. I would like to let people know that it’s not entirely autoconf’s fault if that happens, and actually, it’s often not autoconf’s fault at all!

I have already written one post about phantom macros due to recent changes, but that wasn’t really anybody’s fault, in the sense that the semantics of the two macros changed, with warning, between autoconf versions. On the other hand, I have also ranted, mostly on identi.ca, about the way KDE 3, in its latest version, is badly broken by the update. Since the problem with KDE 3 is far from isolated, I’ll rant a bit more about it here and try to explain why it’s a bad thing.

First of all, let me describe the problem with KDE 3 so that you understand I’m not coming up with stuff just to badmouth them. I have already written in the past, ranted and so on, about the fact that KDE 3’s build system was not autotools, but rather autotools-based. Indeed, the admin/ subdirectory that is used by almost all the KDE 3 packages is a KDE invention; the configure.in.in files as well. Unfortunately it doesn’t look like the KDE developers learnt anything from it, and they seem to be doing something very similar with CMake as well. I feel sorry for our KDE team now.

Now, of course there were reasons why KDE created such a build system, one that reminds me of Frankenstein: on one side, they needed to wire up support for Qt’s uic and moc tools; on the other, they wanted the sub-module monolithic setup that is sought after by a limited number of binary distributions and hated to the guts by almost all source-based distributions.

I started hating this idea for two separate reasons: the first is that we couldn’t update automake: only 1.9 works, and we’re now at 1.11; the newer ones changed enough behaviour that there is no chance the custom code works. The second reason is that the generated configure files were overly long, checking the same things over and over, and in very slow ways (compiling rather than using pkg-config). One of the tests I always found braindead was the check, done by every KDE3-based package, on whether libqt-mt required libjpeg at link time: a workaround for a known broken libqt-mt library!

Now, with autoconf 2.64, the whole build system broke down. Why’s that? Very simple: if you try to rebuild the autotools for kdelibs, like Gentoo does, you end up with a failure because the macro AH_CHECK_HEADERS is not found. That macro has been renamed in 2.64 to _AH_CHECK_HEADERS, since it’s an internal macro, not something that configure scripts should be using directly. Indeed, this macro call is in the KDE-custom KDE_CHECK_HEADERS, which seems to deal with C and C++ language differences in the checks for headers (funnily, this wasn’t enough to avoid language mistakes). This wouldn’t be bad, extending macros is what autoconf is all about; but using internal macros to do that is really a mistake.
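
For comparison, the supported way to get a C++-language header check, without ever touching an internal macro, is roughly the following sketch (the macro name is made up):

```m4
dnl Extend AC_CHECK_HEADERS for C++ through the public AC_LANG_PUSH /
dnl AC_LANG_POP interface instead of calling internals such as
dnl _AH_CHECK_HEADERS directly; MY_CHECK_CXX_HEADERS is a made-up name.
AC_DEFUN([MY_CHECK_CXX_HEADERS], [
  AC_LANG_PUSH([C++])
  AC_CHECK_HEADERS([$1])
  AC_LANG_POP([C++])
])
```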

Now, if it was just KDE making this mistake, the problem would be solved already: KDE 4 migrated to CMake, so in the worst case it would fail when the next CMake change is done that breaks their CMake-based build system, and KDE 3 is going away, which will solve the autoconf 2.64 problem altogether. Unfortunately, KDE was not the only project making this mistake; even worse, projects with exactly one package made this mistake, and that’s quite a bit of a problem.

When you have to maintain a build system for a whole set of packages, like KDE has to, mistakes like using internal macros are somewhat to be expected and shouldn’t be considered strange or out of place. When you do that for a single package, then you really should stop writing build systems, since you’re definitely overcomplicating things without any good reason.

Some of the signs that your build system is overcomplicated:

  • it still looks like it was generated by autoscan; you have not removed any of the checks added by that, nor have you added conditionals in the code to act upon those checks (see the sketch after this list);
  • you’re doing lots of special-casing on the host and target definitions; you don’t even try to find the stuff on the system, but decide whether it’s there or not by checking the host; that’s not the autoconf way;
  • you replace all the standard autoconf macros with your own, having NIH_PROG_CC for instance; you are trying to be smarter than autoconf, but you most likely are not.
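
On the first point, a minimal, hypothetical sketch of what “acting on a check” means: a check is only worth keeping if the code consults its result, here through the HAVE_ macro that ends up in config.h; a check whose result is never looked at is just wasted configure time.

```m4
dnl AC_CHECK_HEADERS defines HAVE_SYS_EPOLL_H in config.h, which the
dnl sources can then test with #ifdef HAVE_SYS_EPOLL_H; the header name
dnl is only an example.
AC_CHECK_HEADERS([sys/epoll.h])
```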