Why is `git send-email` so awful in 2017?

I set out to send my first Linux kernel patch from my new Dell XPS13, after someone contacted me asking for help supporting a new it87 chip in the gpio-it87 driver I originally contributed.

Writing the (trivial) patch was easy, since they had some access to the datasheet, but then the problem became figuring out how to send it to the right mailing list. That took significantly more time than it should have, and significantly more time than writing the patch, too.

So why is it that git send-email is still so awful, in 2017?

So the first problem is that the only ways you can send these emails are either through a sendmail-compatible interface, which is literally an interface older than me (by two years), or through SMTP directly (this is even older, as RFC 821 is from 1982 — but being a protocol, that much I do expect). The SMTP client at least has support for TLS, provided you have the right Perl modules installed, and for authentication, though it does not support more complex forms of authentication such as Gmail's XOAUTH2 protocol (ignore the fact it says IMAP, it is meant to apply to both IMAP and SMTP).
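To be fair, the mechanics themselves boil down to a few configuration keys, all documented in git-send-email(1); something along these lines (addresses here are placeholders):

```
git config --global sendemail.smtpServer smtp.gmail.com
git config --global sendemail.smtpServerPort 587
git config --global sendemail.smtpEncryption tls
git config --global sendemail.smtpUser user@example.com

# then sending the patch is a single command:
git send-email --to=linux-gpio@vger.kernel.org 0001-example.patch
```

The problem is not the mechanics, it's the authentication.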

Instead, the documented (in the man page) approach for users with Gmail and 2FA enabled – which should be anybody who wants to contribute to the Linux kernel! – is to request an app-specific password and save it through the credential store mechanism. Unfortunately the default credential store just keeps it as unencrypted plaintext. There are, however, a number of credential helpers you can use instead, based on Gnome Keyring, libsecret, and so on.
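The difference is a single configuration key; note that the libsecret helper ships only as source in git's contrib/ directory on some distributions, so the path below is an assumption that varies per system:

```
# the default plaintext store, to be avoided:
git config --global credential.helper store

# a libsecret-backed helper instead:
git config --global credential.helper \
    /usr/lib/git-core/git-credential-libsecret
```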

Microsoft maintains and releases its own Credential Manager which is designed to support multi-factor login to a number of separate services, including GitHub and BitBucket. Thank you, Microsoft, although it appears to only be available for Windows, sigh!

Unfortunately it does not appear there is a good credential helper for either KWallet or LastPass, which would have been interesting — to a point, of course. I would probably never give LastPass an app-specific password to my Google account, as it would defeat the point of not keeping that particular password in a password manager.

So I started looking around and found that there is a tool called keyring which supposedly has KWallet support, though on Arch Linux that does not appear to be working (the KWallet support, that is, not the tool, which appears to work fine with Gnome Keyring). So I checked out the issues: the defaulting to gnome-keyring is known, and there is a feature request for a LastPass backend. That sounds promising, right? Except that the author suggests building it as a separate library, which makes sense to a point. Unfortunately the implicit reference to their keyrings.alt (which does not appear to support KDE/Plasma) drove me away from the whole thing. Why?

License is indicated in the project metadata (typically one or more of the Trove classifiers). For more details, see this explanation.

And the explanation then says:

I acknowledge that there might be some subtle legal ramifications with not having the license text with the source code. I’ll happily revisit this issue at such a point that legal disputes become a real issue.

Which effectively reads to me as "I know what the right thing to do is, but it cramps my style and I don't want to do it." The fact that people have already pointed out the problem, and the fact that multiple issues have been reported and then marked as duplicates of this master issue, should speak clearly enough.

In particular, if I wanted to contribute anything to these repositories, I would have no hope of doing so except in my free time, and only if I decided to apply for a personal project request, as these projects are likely considered "No License" due to the sheer lack of copyright information or license texts.

Now, I know I have not been the best person on this front either. But at least for glucometerutils I have made sure that each file lists its license clearly, and the license is spelt out in the README file too. And I will be correcting some of my past mistakes at some point soon, together with certain other mistakes.

But okay, so this is not a viable option. What else remains to use? Well, turns out that there is an actual FreeDesktop.org specification, or at least a draft, which appears to have been last touched seven years ago, for a common API to share between GNOME and KWallet, and for which there are a few other implementations already out there… but the current KWallet does not support it, and the replacement (KSecretService) appears to be stalled/gone/deprecated. And that effectively means that you can’t use that either.

Now, on Gentoo I know I can use msmtp integrated with KWallet and the sendmail interface, but I’m not sure if in Arch Linux it would work correctly. After all I even found out that I needed to install a number of Perl modules manually, because they are not listed in the dependencies and I don’t think I want to screw with PKGBUILD files if I can avoid it.
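For the record, this is roughly the setup I mean. The msmtp options are standard (passwordeval in particular), while the keyring call below is an assumption, using the command-line client the Python keyring tool installs, standing in for whatever secret store actually works on your system:

```
# point git's sendmail-compatible interface at msmtp:
git config --global sendemail.smtpServer /usr/bin/msmtp

# minimal ~/.msmtprc:
cat > ~/.msmtprc <<'EOF'
defaults
auth on
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt

account gmail
host smtp.gmail.com
port 587
from user@example.com
user user@example.com
passwordeval "keyring get smtp.gmail.com user@example.com"

account default : gmail
EOF
chmod 600 ~/.msmtprc
```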

So at the end of the day, why is git send-email so awful? I guess the answer is that in so many years we still don’t have a half-decent, secure replacement for sending email. We need what they would now call “disruptive technology”, akin to how SSH killed Telnet, to bring up a decent way to send email, or at least submit Git patches to the Linux kernel. Sigh.

Fake “Free” Software hurts us all

Brian Krebs, the famous information security reporter, posted today (well, at the time of writing) an article on security issues with the gSOAP library. I found this particularly interesting because I remembered seeing the name before. Indeed, Krebs links to the project's SourceForge page, which is something to see. It has a multi-screen-long list of "features", including a significant number of variants and options for the protocol, which ends in the following:

Licenses: GPLv2, gSOAP public license (for engine and plugins), commercial non-GPL license available upon request (software is 100% in-house developed, no third-party GPL contributions included)

Ah, there we go. “Free” Software.

You may remember my post from just a few days ago about the fact that, for Free Software to actually make a difference in society the way Stallman prophesizes, it needs a mediating agency, and at least right now that agency is companies and the free market. I argued that making your software usable by companies that provide services or devices is good, as it makes for healthy, usable and used projects, increases competition, and reduces the cost of developing new solutions. So is gSOAP the counterexample? I really don't think so. gSOAP is for me the perfect example matching my previous rant about startups and fake communities.

The project at first looks like the poster child for FSF-compatible software, since it's licensed under GPL-2, and it clearly has no CLA (Contributor License Agreement), though the company provides a "way out" from GPL-2 obligations by buying a commercial license. This is not, generally speaking, a bad thing. I have seen many examples, including in the open multimedia cliques, of using this trick to foster the development of open solutions while making money to pay salaries or build new Free Software companies.

But generally, those projects allow third-party contributions under a CLA or similar agreement that allows the "main" copyright holders to still issue proprietary licenses, or enforce their own rights. You can for instance see what Google's open source projects do about it. Among other things, this contribution method also disallows re-licensing the software, as that requires agreement from all copyright holders. In the case of gSOAP, none of this applies: as they say, their software is «100% in-house developed».

They are clearly proud of this situation, because it gives them all the power: if you want to use gSOAP without paying, you're tied to the GPL, which may or may not become a compliance problem. And if you happen to violate the license, they hold all the copyright needed to sue you or just ask you to settle. It's a perfect situation for copyright trolls.

But, because of this, even though the software is on paper "Free Software" according to the FSF, it is effectively a piece of proprietary software. Sure, you can fork the library and maintain your own GPL-2 version, as you have the freedom to fork, but that does not make it a community, or a real Free Software project. And it also means you can't contribute patches to it to make it more secure, safer, or better for society. You could report bugs, including security bugs, but how likely is it that you would actually care to do so, given that one of the first things they make clear on their "project" page is that they are not interested in your contributions? And we can clearly see that this particular library could have used some care from the community, given its widespread adoption.

What this means to me is that gSOAP is a clear example that just releasing something under GPL-2 is not enough to make it Free Software, and that even "Free" Software released under GPL-2 can be detrimental to society. It also touches on the other topic I brought up recently: that you need to strike a balance between making code usable by companies (because they will use it, and thus very likely help you extend or support your project) and keeping it a real community or a real project. Clearly in this case the balance was totally off. If gSOAP were available under a more liberal license, even LGPL-2, they would probably lose a lot in license fees, as for most companies just using it as a shared library would be enough. But it would then allow developers, both hobbyists and those working for companies, to contribute fixes so that they trickle down onto everybody's devices.

Since I do not know the terms of the proprietary license that the company behind gSOAP requires their customers to agree to, I cannot say whether there is any incentive for said companies to provide fixes back to the project, but I assume that if they were to do so, they wouldn't be contributing them under GPL-2, clearly. What I can say is that for the companies I worked for in the past, actually receiving the library under GPL-2 and being able to contribute the fixes back would have been a better deal. The main reason is that, as much as a library like this can be critical to connected devices, it does not by itself contain any of the business logic. And there are ways around linking GPL-2 code into the business logic application, usually involving some kind of RPC between the logic and the frontend. Being able to contribute the changes back to the open source project would allow them not to have to maintain a long set of patches to sync release after release — I had the unfortunate experience of having to deal with something in that space before.

My advice is, once again, to try to figure out what you want to achieve by releasing a piece of software. Are you trying to make a statement, sticking it to the man, by forcing companies not to use your software? Are you trying to make money by forcing companies interested in using your work to buy a special license from you? Or are you contributing to Free Software because you want to improve society? In the latter case, I would suggest you consider a liberal license, to make sure that your code (which can be better than proprietary, closed-source software!) is actually available to those mediating agencies that transform geeky code into usable gadgets and services.

I know, it would be oh so much nicer if, by just releasing something awesome under GPL-2, you could force every company to release all their firmware as Free Software as well, but that's not going to happen. Instead, if they feel they have to, they will look for worse alternatives, or build their own (even worse) alternatives, and keep them to themselves, and we'll all be the poorer for it. So if in doubt, consider the MIT or Apache 2 licenses. The latter in particular appears to be gaining more and more traction, as both Google and Microsoft appear to be fond of it, and Facebook's alternative is tricky.

Some of you may consider me a bit of a hypocrite, since I have released a number of projects under more restrictive Free licenses (including AGPL-3!) before I came to the conclusion that's actually bad for society. Or at least before I came back to that idea, as I was convinced of it back in 2004, when I wrote (in Italian) about why I think MySQL is bad for Free Software (I should probably translate that post, just for reference). But what I decided is that I'll now try to do my best to re-license the projects for which I own the full copyright under the Apache 2 license. This may take a little while until I figure out all the right steps, but I feel it is the right thing to do.

Technology and society, the cellphone example

After many months without blogging, you may notice I'm blogging a bit more about my own opinions than before. Part of it is because these are things I can write about without risking conflicts of interest with work, which makes them easier to write, and part of it is because my opinions differ from what I perceive as the majority of Free Software advocates'. My hope is that providing my opinions openly may, if not sway the opinion of others, at least let me find out that there are other people who share them. To make them easier to filter out, I'll be tagging them as Opinions, so you can just ignore them if you use anything like NewsBlur and its Intelligence Trainer (I love that feature.)

Note: I had to implement this in Hugo as this was not available when I went to check if the Intelligence Trainer would have worked. Heh.

Okay, back on topic. You know how technologists, particularly around the Free Software movement, complain about the lack of openness in cellphones and smartphones? Or the lack of encryption, or trustworthy software? Sometimes together, sometimes one more important than the other? It's very hard to disagree with the objective: if you care about Free Software you want more open platforms, and everybody should (to a point) care about safety and security. What I disagree with is the execution, for the most part.

The big problem I see with this is the lack of one big attribute in their ideal system: affordability. And that does not strictly mean being cheap, it also means being something people can afford to use — Linux desktops are cheap, if you look only at the bottom line of an invoice, but at least when I last had customers as a sysadmin for hire (pardon, Managed Services Provider), none of them could afford Linux desktops: they all had to deal either with proprietary software as part of their main enterprise, or with documents that required Microsoft Office or similar.

If you look at the smartphone field, there have been multiple generations of open source or free software projects trying to get something really open out, and yet what most people are using now is either Android (which is partly but not fully open, and clearly not an open source community) or iOS (which is completely closed, and good luck with it.) These experiments were usually bloody expensive high-end devices (mostly with the excuse of being development platforms), or tried to get the blessing of "pure free software" by hiding the binary blobs in non-writeable flash memory, so that they could be shipped with the hardware but not with the operating system.

There is, quite obviously, the argument that of course the early adopters end up paying the higher price for technology: when something is experimental it costs more, and can only become cheaper with volume. But on the other hand, in my opinion way too many of those choices were made just for the sake of showing off: cases like Nokia's N900 and the Blackphone, for instance.

Nowadays, one of the most common answers when talking about the lack of openness and updates in Android is still CyanogenMod, despite some of the political/corporate shenanigans happening in the backstory of that project. Indeed, as an aftermarket solution, CyanogenMod provides a long list of devices with a significantly more up-to-date (and thus more secure) Android version. It's a great project, and the volunteers (who have been doing the bulk of the reverse engineering and setup for the builds) have done a great job all these years. But it comes with a bit of a selection bias. It's very easy to find builds for a newer flagship Android phone, even in different flavours (I see six separate builds for the Samsung Galaxy S4, since each US provider has different hardware), but it's very hard to find up-to-date builds for cheaper phones, like the Huawei Y360 that Three UK offers (or used to offer) for £45 a few months back.

I can hear people saying "Well, of course you check before you buy whether you can put a free ROM on it!" Which kind of makes sense if what constrains your choice is the openness, but expecting the majority of people to care about that first and foremost is significantly naïve. Give me a chance to explain my argument for why we should spend a significant amount of time working on the lower end of the scale rather than the upper.

I have a Huawei Y360 because I needed a 3G-compatible phone to connect my (UK) SIM card while in the UK. This is clearly a first world problem: I travel enough that I have separate SIM cards for different countries, and my UK card is handy for more than a few countries (including the US.) On the other hand, since I really just needed a phone for a few days (and going into why is a separate issue) I literally went to the store and asked them “What’s the cheapest compatible phone you sell?” and the Y360 was the answer.

This device is what many people would define as craptastic: it's slow, it has a bad touchscreen, very little memory for apps, and so on. It comes with a non-stock Android firmware by Huawei, based on Android 4.4. The only positive sides of the device are that it's cheap, its battery actually tends to last, and for whatever reason it allows you to select GPS as the time source, which is something I have not seen any other phone do in a while. It's also not fancy-looking, just a quite boring plastic shell, but fairly sturdy if it falls. It's actually fairly well targeted, if what you have is not a lot of money.

The firmware is clearly a problem in more than one way. This being not just a firmware modified by Huawei, but one customized for the provider, means that updates are more than just unlikely: any modification would have to be re-applied by Three UK, and given the likely null margin they make on these phones, I doubt they would bother. And that is a security risk. At the same time, the modifications made by Huawei to the operating system seem to go very far on the cosmetic side, which makes you wonder how much of the base components were modified as well. Your trust in Huawei, Chinese companies, or companies of any other country is your own opinion, but the fact that it's very hard to tell whether this behaves like any other phone out there is clearly not up for debate.

This phone model also appears to be very common in South America, for whatever reason, which is why googling for it might find you a few threads on Spanish-language forums where people either wondered if custom ROMs are available, or might have been able to get something to run on it. Unfortunately my Spanish is not functional so I have no idea what the status of it is, at this point. But this factoid is useful to make my point.

Indeed, my point is that this phone model is likely very common among groups of people who don't have much to spend on "good hardware" for phones, and yet may need a smartphone that does Internet decently enough to be usable for email and similar services. These people are also the people who need their phones to last as long as possible, because they can't afford to upgrade every few years, so being able to replace the firmware with something more modern and forward-looking, or with a slimmed-down version given the lack of power of the hardware, would clearly be very effective. And yet you can't find a CyanogenMod build for it.

Before going down a bit of a road about the actual technicalities of why these ROMs may be missing, let me write down some effectively strawman answers to two complaints that I have heard before, and that I may have given myself when I was young and stupid (now I'm just stupid.)

If they need long-lasting phones, why not spend more upfront and get a future-proof device? It is very true that if you can afford a higher upfront investment, lots of devices become cheaper in the long term. This is not just the case for personal electronics like phones (and cameras, etc.) but also for home hardware such as dishwashers and so on. When my mother's dishwasher died some eight or so years ago, we were mostly strapped for cash (but we were, at the time, still a family of four, so the dishwasher was handy for the time saving), so we ended up buying a €300 dishwasher on heavy discount when a new hardware store had just opened. Over the next four years, we had to have it repaired at least three times, which brought its TCO (without accounting for soap and supplies) to at least €650.

The fourth time it broke, I was just back from my experience in Los Angeles, and thus I had the cash to buy a good dishwasher, for €700. Four years later that dishwasher is working fine, no repairs needed. It needs less soap, too, and it has a significantly higher energy rating than the one we had before. Win! But I was lucky I could afford it at the time.

There are ways around this: paying by instalments is one of them, but not everybody is eligible for that either. In my case, at the time I was freelancing, which meant that nobody would really give me a loan for it. The best I could have done would have been using my revolving credit card to pay for it, but let me just tell you that interest compounds much faster on that than on a normal loan. Flexibility costs.

This, by the way, relates to the same toilet paper study I referenced yesterday.

Why do you need such a special device? There are cheaper smartphones out there, change provider! This is a variation of the argument above. Three UK, like most of their Three counterparts across Europe, is a bit peculiar, because you cannot use plain GSM phones with them: you need at least UMTS. For this reason you need more expensive phones than your average SIM-free Nokia. So arguing for a different provider may be warranted if all you care about is calls and texts, but nowadays that is not really the case.

I'm now failing to find a source link for it, but I read not too long ago (likely in the Wall Street Journal or the New York Times, as those are the usual newspapers I read when I'm at a hotel) about how significant Internet-connected mobile phones are for migrants. The article listed a number of good reasons, among which I remember being able to access the Internet to figure out what kind of documents and information they need, being able to browse available job openings, and of course being able to stay in touch with family and friends who may well be in different countries.

Even without going to the full extreme of migrants who just arrived in a country, there are a number of "unskilled" job positions that are effectively "on call" — this is nothing new; the whole area of Dublin where I live now, one of the most expensive in the city, used to be a dormitory for dock workers, who needed to be as close as possible to the docks themselves so that they could get there quickly in the morning to find work. "Thanks" to technology, physical home proximity has been replaced with reachability. While GSM and SMS are actually fairly reliable, having the ability to use WiFi hotspots to receive messages (which a smartphone allows, but a dumbphone doesn't) is a significant advantage.

An aside on the term "unskilled" — I really hate it. I have been told that delivering and assembling furniture is an unskilled job; I would challenge my peers to bring as many boxes inside an apartment as quickly as the folks who delivered my sofa and the rest of my furniture a few months ago, without damaging either the contents of the boxes or the apartment (except I don't want to ruin my apartment). It's all a set of different skills.

Once you factor this in, the "need" for a smartphone clearly outweighs the cheapness of a SIM-free phone. And once you are in for a smartphone, having a provider that does not nickel-and-dime your allowances is a plus.

Hopefully now this is enough social philosophy for the post — it’s not really my field and I can only trust my experience and my instincts for most of it.

So why are there not more ROMs for these devices? Well, the first problem is that the people who would need those ROMs and the people who can make them have, for the most part, completely different sets of skills. Your average geek who has access to the knowledge and tools to figure out how the device works and either extract or build the needed drivers is very unlikely to do that for a cheap, underpowered phone, because they would not be using one themselves.

But this is just the tip of the iceberg, as that could be fixed by just convincing a handful of people who know their stuff to maintain the ROMs for these. The other problem with cheap devices, maybe less so with Huawei than others for various reasons, is that the manufacturer is hard to reach, in case the drivers could be made available but nobody has asked. In Italy there is a "brand" of smartphones that prides itself, in its advertisement material, on being the only manufacturer in Italy — it turns out the firmware, and thus most likely the boards too, mostly come from random devshops in mainland China, and can be found in fake Samsung phones in that country. Going through the Italian "manufacturer" would lead to nothing if you need specs or source code. After all, I've seen that for myself with a different company before.

A possible answer to this would be to mandate better support for firmware over time, fining the manufacturers that refuse to comply with the policy. I have heard this proposed a couple of times, particularly because of the recent wave of IoT-based DDoS attacks that made the news so easily. I don't really favour this approach, because policies are terrible to enforce, as should be clear by now to most technologists who have dealt with leaks and unhashed passwords. Or with certificate authorities. It also has the negative side effect of possibly increasing costs, as the smaller players might actually have a hard time complying with these requirements, and thus end up paying the highest price or being kicked out of the market.

What I think we should be doing is change our point of view on the Free Software world and really become, as the organization calls itself, "software in the public interest". And public interest does not mean limiting ourselves to what the geeks think the public interest should be (and that does, by the way, include me.) Enforcing the strict GPL has become a burden to so many companies by now that most corporate-sponsored open source software nowadays is released under the Apache 2 license. While I would love an ideal world in which all free software out there is always GPL and everybody just contributes back at every chance, I don't think that is quite so likely, so let's accept that and be realistic.

Instead of making it harder for manufacturers to build solutions based on free and open source software, make it easier. That is not just a matter of licensing, though that comes into play; it's a matter of building communities with the intent of supporting enterprises that build upon them. With all the problems it shows, I think at least the Linux Foundation is trying this road already. But there are things that we can all do. My hope is that we stop the talks and accusations for and against the "purity" of free software solutions. That we accept when a given proposal (proprietary, or coming out of a proprietary shop) is a good idea, rather than ignore it because we think they are just trying to do vendor lock-in. Sometimes they are and sometimes they aren't; judge ideas, formats, and protocols on their merits, not on who proposes them.

Be pragmatic: support partially closed-source solutions if they can be supported by, or supportive of, Free Software. Don't buy into the slippery-slope argument. But strive to always build better open-source tools whenever there is a chance.

I’ll try to write down some of my preferences of what we should be doing, in the space of interaction between open- and closed-source environments, to make sure that the users are safe, and the software is as free as possible. For the moment, I’ll leave you with a good talk by Harald Welte from 32C3; in particular at the end of the talk there is an important answer from Harald about using technologies that already exist rather than trying to invent new ones that would not scale easily.

GNU is actually a totalitarian regime

You probably remember that I'm not one to fall in line with the Free Software Foundation — a different story goes for the FSFe, which I support, and I look forward to the moment when I can come back as a supporter; for now I'm afraid I have to contribute only as a developer.

Well, it seems like more people are joining the club. After Werner complained about the handling of GNU copyright assignments – not long after my coverage of Gentoo's assignments, which should probably make those suggesting a GNUish approach to said copyright assignment think a lot – Nikos of GnuTLS decided to split off from the GNU project.

Why did Nikos decide this? Well, it seems like the problem is that both Werner and Nikos are tired of the secrecy in the GNU project and even more of the inability to discuss, even in a private setting, some topics because they are deemed taboo by the FSF, in the person of Richard Stallman.

So, Nikos decided to move the lists, source code and website onto his own hosting, and then declared GnuTLS no longer part of the GNU project. Do you think that this would have put the FSF in a "what are we doing wrong?" mood? Hah, naïve you are! Indeed, the response from the FSF (in the person of Richard Stallman, see a pattern?) was to tell Nikos (who wrote the project, contributed it to GNU, and maintained it) that he can't take his own project out of GNU, and that if he wants, he can resign from the maintainer's post.

Well, now it seems like we might end up with a "libreTLS" package, as Nikos is open to renaming the project… it's going to be quite a bit of a problem I'd say, if anything because I want to track Nikos's development rather than GNU's, and thus I would hope for the "reverse fork" from the GNU project to just die off, especially considering I also had to sign the assignment paperwork (and in the time that said paperwork was being handled, I lost the time and motivation for the contributions I had in mind; lovely, isn't it?).

Well, what this makes very clear to me is that I still don’t like the way the GNU project, and the FSF are managed, and that my respect for Stallman’s behaviour is, once again, zero.

Bloody upstream

Please note, this post is likely to be interpreted as a rant. From one point of view, it is. It's mostly a general rant geared toward those upstreams that are generally impossible to talk into helping us distributions out.

The first one is the IEEE — you might remember that back in April I was troubled by their refusal to apply a permissive license to their OUI database; they actually denied that they allow redistribution of said database at all. A few weeks ago I had to bite the bullet and add both the OUI and the IAB databases to the hwids package that we're using in Gentoo, so that we can use them in different software packages, including bluez and udev.

I'm trying not to bump the package as often as before, simply because the two new files quadruple the size of the package, but I am updating the repository more often, so that I can see if something changes that would make it useful to bump sooner. And what I noticed is that the two files are managed very badly by the IEEE.

At some point, while adding one entry to the OUI list, the charset of the file was screwed up, replacing the UTF-8 with mojibake; then somebody fixed it; then somebody decided that UTF-8 was too good for them and went back to pure ASCII, doing some near-equivalent replacements – although whoever changed ß to b probably got to learn some German – then somebody fixed it up again… then somebody broke it again while adding an entry, another guy tried to go back to ASCII, and someone else fixed it up again.

How much noise is this in the history of the file? Lots. I really wish they actually wrote a decent app to manage those databases so they don’t break them every other time they have to add something to the list.
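It would not take much, either; even a trivial check in whatever they use to accept changes would catch the charset flip-flopping. A sketch:

```
# iconv exits non-zero if the file is not valid UTF-8:
iconv -f UTF-8 -t UTF-8 oui.txt > /dev/null ||
    echo "oui.txt: invalid UTF-8, refusing the change"
```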

The other upstream is Blender. You probably remember I was complaining about their multi-level bundling and the fact that license information is missing for at least one of the bundled libraries. Well, we now have another problem. I was working on the bump to 2.65, but now either I go back to bundling Bullet, or I have to patch it, because they added new APIs to the library.

So right now we have in tree a package that:

  • we need to patch to be able to build against a modern version of libav;
  • we need to patch to make sure it doesn’t crash;
  • we need to patch to make it use over half a dozen system libraries that it otherwise bundles;
  • we need to patch to avoid it becoming a security nightmare for users by auto-executing scripts in downloaded files;
  • bundles libraries with unclear licensing terms;
  • has two build systems, with different features available, neither of which is really suitable for a distribution.

Honestly, I have reached a point where I'm considering p.masking the package for removal and dealing with those consequences, rather than dealing with Blender. I know it has quite a few users, especially in Gentoo, but if upstream is unwilling to work with us to make it fit properly, I'd like users to speak to them so that they get their act together at this point. Debian is also suffering from issues related to the libav updates and the like. Without even going into the license issues.

So if you have contacts with Blender developers, please ask them to actually start reducing the number of bundled libraries, decide which of the two build systems we should be using, and possibly start clearing up the licensing terms of the package as a whole (including the libraries!). Unfortunately, I'd expect them not to listen — until maybe distributions, as a whole, decide to drop Blender for those same reasons, making them question the sanity of their development model.

Inside the life of an embedded developer

Now that the new ultrabook is working almost fine (I'm still tweaking the kernel to make sure it works as intended, especially where brightness is concerned, and tweaking the power usage, for once, to give me some more battery life), I think it's time for me to get used to the new keyboard, which is definitely different from the one I'm used to on the Dell, and more similar to the one I've used on the Mac. To do so, I think I'll resume writing a lot about different things that are not strictly related to what I'm doing at the moment, but rather about random topics I know about. Here's the first of this series.

So, you probably all know me as a Gentoo Linux developer; some of you will know me as a multimedia developer thanks to my work on xine, libav, and a few other projects; a few would know me as a Ruby and Rails developer, though that's not something I usually boast about or am happy about. In general, I think I'm doing my best not to work on Rails if I can avoid it. One of the things I've done over the years has been working on a few embedded projects, a couple of which were Linux-related. Despite what some people have thought of me, I'm not an electrical engineer, so I'm afraid I haven't had the hands-on experience that many other people I know have had, which, I have to say, sometimes bothers me a bit.

Anyway, the projects I worked on over time have never been huge, very well funded, or well managed, which meant that, despite Greg's hope as expressed on the gentoo-dev mailing list, I never had a company train me in the legal landscape of Free Software licenses. Actually, in most cases I ended up being the one who had to explain to my manager how these licenses interact. And while I take licenses very seriously and do my best to make sure they are respected, sometimes that task falls on the shoulders of someone else, and that someone might not actually do everything by the book.

So, while I'm not a lawyer and I really don't want to go near that option, I always take the chance to understand licenses correctly and, when my ass is on the line, I make sure to verify the licenses of what I'm using for my job. One such example is the project I've been working on since last March, for which I've prepared the firmware, based on Gentoo Linux. To make sure that all the licenses were respected properly, I had to come up with the list of packages the firmware is composed of, and then verify the license of each. Doing this I ended up finding a problem with PulseAudio due to it linking in the (GPL-only) GDBM library.

What happens is that if the company you're working for has no experience with licenses, and/or does not want to pay lawyers to review what is being done, it's extremely easy for mistakes to happen unless you are very careful. And in many cases, if you, as a developer, pay more attention to the licenses than your manager does, it's also seen as a negative, as that's not how they'd like you to employ your time. Of course you can say that you shouldn't be working for that company then, but sometimes it's not like you have tons of options.

But this is by far not the only problem. Sometimes, what happens is a classic 801 — that is, instead of writing custom code for the embedded platform you're developing for, the company wants you to re-use previous code, which has a high likelihood of being written in a language that is completely unsuitable for the embedded world: C++, Java, COBOL, PHP…

Speaking of which, here's an anecdote from my previous experiences in the field: at one point I was working on the firmware for an ARM7 device that had to run an application written in Java. Said application was originally written to use PostgreSQL and Tomcat, with a full-fledged browser, but had to run on a tiny resistive display with SQLite. But since at the time IcedTea was nowhere to be found, and the device wouldn't have had enough memory for it anyway, the original implementation used a slightly patched GCJ to build the application to ELF, with JNI hooks to link to SQLite. The time (and money, when my time was involved) spent making sure the system wouldn't run out of memory would probably have sufficed to rewrite the whole thing in C. And before you ask, the code bases of the Tomcat and GCJ versions started drifting apart almost immediately, so code sharing was not a good option anyway.

Now, to finish this mostly pointless, anecdotal post of mine, I'd like to write a few words of commentary about embedded systems, systemd, and openrc. Whenever I hear one side or the other saying that embedded people love their solution, I think they don't know how different embedded development can be from one company to the next; even between good companies there are such big variations that they stand lightyears apart from bad companies like some of the ones I described above. Both sides have good points for the embedded world; what you choose depends vastly on what you're doing.

If memory and timing are your tightest constraints, then it's very likely that you're looking into systemd. If you don't have those kinds of constraints, but you're building a re-usable or highly customizable platform, it's likely you're not going to choose it. The reason? While cross-compilation shouldn't be a problem if you're half-decent at your development, the reality is that in many places it is. What happens then is that when you want to be able to make last-minute changes, especially in the boot process, for debugging purposes, using shell scripts is vastly easier; and for some people, doing it more easily is the target, rather than doing it right (and this is far from saying that I find the whole set of systemd ideas and designs "right").

But this is a discussion that is more of a flamebait than anything else, and if I wanted to go into details I should probably spend more time on it than I'm comfortable doing now. In general, the only thing I'm afraid of is that too many people make assumptions about how people do things, or take for granted that companies, big and small, care about doing things "right" — in my experience, that's not really the case, that often.

A matter of copyrights

One of the issues that came through with the recent drama about the n-th udev fork is the matter of assigning copyright to the Gentoo Foundation. This topic is not often explored, mostly because it really is a minefield, and – be ready to be surprised – I think the last person who actually said something sane on the topic was Ciaran.

Let’s see a moment what’s going on: all ebuilds and eclasses in the main tree, and in most of the overlays, report “Gentoo Foundation” as the holder of copyright. This is so much a requirement that we’re not committing to the tree anything that reports anyone else’s copyright, and we refuse the contribution in that case for the most part. While it’s cargo-culted at this point, it is also an extremely irresponsible thing to do.

First of all, nobody ever signed a copyright assignment form for the Gentoo Foundation, as far as I can tell. I certainly didn't. And this matters especially as we get more and more proxied maintainers, who almost always are not Gentoo Foundation members (Foundation membership comes after a year as a developer, if I'm not mistaken — or something along those lines; I honestly forgot, because I'm not following the Foundation's doings at all).

Edit: Robin pointed out to me that a number of people did sign a copyright assignment, first to Gentoo Technologies, which was then re-assigned to the Foundation. I didn't know that — and I would be surprised if a majority of the currently active developers knew about it either. As far as I can tell, copyright assignment was no longer part of the standard recruitment procedure when I joined, since, as I said, I didn't sign one. Even assuming I was the first guy who didn't sign it, 44% of the total active developers wouldn't have signed it, and that's 78% of the currently active developers (give or take). Make up your mind on these numbers.

But even if we had all signed said copyright assignment, it would be for a vast part invalid. The problem with copyright assignments is that they are just that, copyright assignments… which means they only work where the legal regime concerning authors' works is that of copyright. For most (all?) of Europe, the regime is actually that of authors' rights, and as VideoLAN shows it's a bit more complex, since the authors have no real way to "assign" those rights.

Edit²: Robin also pointed out that the FSFe, Google (and I'd add Sun, at the very least) have a legal document, usually called a Contributor License Agreement (when it's basically replacing a full-blown assignment) or a Fiduciary Licence Agreement (the more "free software friendly" version). This only solves half the problem, as the Foundation would still not own the copyright, which means you still have to come up with a different way to identify the contributors, as they retain their rights even though they leave any decision regarding their contributions to the entity they sign the CLA/FLA with.

So the whole thing stinks of half-understood problem.

This has actually gotten more complex recently, because the sci team borrowed an eclass (or the logic for an eclass) from Exherbo — which actually handles the individuals' copyrights. This is a much more sensible approach on the legal side, although I find the idea of having to list, let's say, 20 contributors at the top of every 15-line ebuild a bit of overkill.

My proposal would then be to have a COPYRIGHTS.gentoo file in every package directory, where we list the contributors to the ebuild. This way even proxied maintainers, and one-time contributors, get their credit. The ebuild can then refer to “see the file” for the actual authors. A similar problem also applies to files that are added to the package, including, but not limited to, the init scripts, and making the file formatted, instead of freeform, would probably allow crediting those as well.
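Just to give an idea of the shape such a file could take (the format and names here are entirely made up, meant only to illustrate the point):

```
# COPYRIGHTS.gentoo: hypothetical format, for illustration only
ebuild foo-1.2.3.ebuild
    2011-2012 John Doe <john@example.com>
    2012 Jane Proxied <jane@example.com>

file files/foo.initd
    2012 Jane Proxied <jane@example.com>
```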

Now, this is just a sketch of an idea — unlike Fabio, whose design methodology I do understand and respect, I prefer posting as soon as I have something in mind, to see if somebody can easily shoot it down or if it has wings to fly, and also in the vain hope that if I don’t have the time, somebody else would pick up my plan — but if you have comments on it, I’d be happy to hear them. Maybe after a round of comments, and another round of thinking about it, I’ll propose it as a real GLEP.

Multi-level bundling, with a twist

I spent half my Saturday afternoon working on Blender, to get the new version (2.64a) into Portage. This is never an easy task, but in this case it was a bit more tedious because, thanks to the new release of libav (version 9), I had to make a few more modifications… while making sure it would still work with the old libav (0.8).

FFmpeg support is not guaranteed — if you care, you can submit a patch. I'm one of the libav developers, thus that's what I work, and test, with.

Funnily enough, while I was doing that work, a new bug for Blender was reported in Gentoo, so I looked into it and found out that it was actually caused by one of the bundled dependencies — luckily, one that was already available as its own ebuild, so I just decided to get rid of the bundled copy. The interesting part was that it wasn't listed in the "still bundled libraries" list that the ebuild's own diagnostics print… since it was actually a bundled library of the bundled libmv!

So you reach the point where you get one package (Blender) bundling a library (libmv) bundling a bunch of libraries, multi-level.

Looking into it I found out that not only was the dependency causing the bug bundled (ldl), but there were at least two more that, I knew for sure, were available in Gentoo (glog and gflags). This meant I could shave some more code out of the package by adding a few more dependencies… which is always a good thing in my book (and I know that my book is not the same as many others').

While looking for other libraries to unbundle, I found another one, mostly because its name (eltopo) was funny — it has a website, and from there you can find the sources — neither of which is linked from the Blender package. When I looked at the sources, I was dismayed to see that there was no real build system, just a half-broken Makefile building two completely different PIC-enabled static archives, for debug and release. Not really something that distributions could have much interest in packaging.

So I set out to build my usual autotools-based build system (which, no matter what people say, is extremely fast if you know how to do it), fix the package to build correctly with gcc 4.7 (how did it work for Blender? I assume they patched it somehow, but they don't write down what they do!), and… uh, where's the license file?

Turns out that while the homepage says the library is "public domain", there is no license statement anywhere in the source code, making it to all effects the exact opposite: proprietary software. I've opened an issue for it, and hopefully upstream will fix that up so I can send him my fixes and package it in Gentoo.
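As an aside, the "usual autotools-based build system" really is tiny for a library like this; a sketch along these lines, with the project version and file list obviously assumed:

```
# configure.ac
AC_INIT([eltopo], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CXX
LT_INIT
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am
lib_LTLIBRARIES = libeltopo.la
libeltopo_la_SOURCES = eltopo.cpp eltopo.h
```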

Interestingly enough, the libmv software that Blender bundles is much better in its way of bundling libraries. While they don't seem to give you an easy way to disable the bundled copies (which might or might not be the fault of Blender's build system), they make it clear where each library comes from, and they have scripts to "re-bundle" said libraries. When they make changes, they also keep a log of them, so that you can identify what changed and either ignore it, patch it, or send it upstream. If all projects bundling stuff did it that way, unbundling would be a much easier job…

In the meantime, if you have some free time and feel like doing something to improve the bundled-libraries situation in Gentoo Linux, or you care about Blender and would like a better Gentoo experience with it, we could use some ebuilds for ceres-solver and SSBA, as well as fast-C (this last one has no build system at all, unfortunately), all used by libmv; or maybe carve, libredcode (for which I don't even have a URL at hand), and recastnavigation (which has no releases), which are instead used directly by Blender.

P.S.: don’t expect to see me around this Sunday, I’m actually going to see the Shuttle, and so I won’t be back till late, most likely, or at least I hope so. You’ll probably see a photo set on Monday on my Flickr page if you want to have a treat.

Who said that IDs wouldn’t have license issues?

When I posted about the hwids data I was not expecting to find what I just found today, and honestly, I’m wondering if I’m always looking for stuff under rocks.

My reason for splitting the ID databases off their respective utilities (pciutils and usbutils) was simple enough: I didn't need to deal with the two utilities, both of which are GPL-2, when the databases themselves are available under the BSD 3-clause license; it was just a matter of removing code, and avoiding auditing projects that we don't need to rely upon.

The fact that bundling them, rather than having an extra package take care of them, was still a pet peeve of mine was just an added bonus.

So after creating a silly placeholder which is fine for our needs here, with the blessing of Greg I created a new package, sys-apps/hwids (I wanted to call it hwdata, but we have both gentoo-hwdata and redhat-hwdata, which install very different stuff), which has its own repository and a live ebuild that simply fetches the files off the respective websites. I'm going to update the package weekly, if there are changes, so that we always have a reasonably up-to-date version of it, and we won't be needing the network cron scripts at all either.
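The ebuild itself is almost trivial; this is not the actual one, just a sketch of its shape:

```
EAPI=4

DESCRIPTION="Hardware (PCI, USB) IDs databases"
SRC_URI="https://pci-ids.ucw.cz/v2.2/pci.ids -> pci.ids-${PV}
	http://www.linux-usb.org/usb.ids -> usb.ids-${PV}"
LICENSE="|| ( GPL-2 BSD )"
SLOT="0"
KEYWORDS="~amd64 ~x86"

S=${WORKDIR}

# the ID files are plain text, nothing to actually unpack
src_unpack() { :; }

src_install() {
	insinto /usr/share/misc
	newins "${DISTDIR}"/pci.ids-${PV} pci.ids
	newins "${DISTDIR}"/usb.ids-${PV} usb.ids
}
```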

I’ve also updated lshw to support the split package, so that it doesn’t install its own ids files anymore… of course that is only half the work done there, since the lshw package has two more datafiles: oui.txt and manuf.txt. The latter comes out of Wireshark, while the former is downloaded straight from IEEE’s Public OUI registry and that’s where the trouble starts.

The problem is that while you're free to download the oui.txt file, you won't find any kind of license on the file itself. I've sent a request to the IEEE for some clarification on the matter, and their answer is a clear "you cannot redistribute that file" (even though Ulrich, while not a lawyer, pointed out that it's factual information which might not be copyrightable at all — see Threshold of originality for further details).

So why would I care about that single file given that lshw is a minor package by itself, and shouldn’t matter that much? Well, the answer is actually easy to give: bluez also contains a copy of it. And we’re redistributing that for sure, at least in source form. Sabayon is actually distributing binaries of it.

Interestingly enough, neither Debian's lshw package nor their bluez one installs the oui.txt file, and I wouldn't be surprised if their source archives have been neutered (pardon, made Free) by removing the distributed copy of the file.

What should we do about this? Unfortunately, that's one question I don't have an answer for yet, but I guess it should be clear enough that you can't always assume that what upstream declares to be the case… actually is the case, especially where licensing is concerned. And this is the reason why, even though we don't have any problem with releasing the sources of all the GPL'd packages we ship, we'd like to reduce as much as possible the number of licenses we have to go through.

I’m a Geek!

The title, by itself, is pretty obvious; I'm a geek, otherwise I wouldn't be doing what I'm doing, even with the bile refluxes I end up having, just for the gratitude of a few dozen users (whom I'll thank once again from my heart; you make at least part of the insults I receive bearable). But it's more a reminder for those who have been following me for a long time, and who might remember that I started this blog over four years ago, using Rails 1.1 and a Gentoo/FreeBSD install.

Well, at the time my domain wasn't "flameeyes.eu" – which I only bought two years ago – but rather the more tongue-in-cheek Farragut.Flameeyes.Is-A-Geek.org, where Farragut was the name of the box (which was contributed by Christoph Brill, and served Gentoo/FreeBSD as the main testing and stagebuilding box until its PSU and motherboard gave up).

At any rate, I've been keeping the hostname, hoping one day to be able to totally phase it out and get rid of it; this because, while at the start it was easy to keep it updated, DynDNS has been pressing more and more for free users to register for the "pro" version. Up to now, I've just been refreshing it whenever it was on the verge of expiring, but… since the latest changes will not allow me to properly re-register the same hostname if I let it lapse, and a lot of websites still link to my old addresses, I decided to avoid problems and scams, and registered the hostname with DynDNS Pro, for two years, which means it won't risk expiration.

Given that situation, I decided to change the Apache (and AWStats) configuration so that the old URLs for the blog and the site won't redirect straight to the new sites, but rather accept the request and show the page normally. Obviously, I'd still prefer if the new canonical name were used. Hopefully, at some point in time, browsers and other software will support the extended metadata provided by OpenGraph, which not only breaks down the title into site and page title (rather than the current mess of different separators between the two in the <title> element!), but also provides a "canonical URL" value that can solve the problem of multiple hostnames as well (yes, that means that if you were to link one post of mine on Facebook with the old URLs, it would be automatically translated to the new, canonical URL).
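For reference, the OpenGraph metadata in question is just a handful of <meta> elements in the page head; something like this, with the values here purely illustrative:

```
<meta property="og:site_name" content="Flameeyes's Weblog" />
<meta property="og:title" content="I'm a Geek!" />
<meta property="og:url" content="https://blog.flameeyes.eu/an-example-post" />
```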

But it doesn't all stop here; for the spirit of old times, I also ended up looking at some of the articles I wrote around that time, or actually before that time, for NewsForge/Linux.com (as noted in my previous post). At the time, I wasn't even paid for them; the only requirement was a one-year exclusive. The last one was from December 2005, so the exclusive definitely expired a long time ago. So, since their own website (now only Linux.com, and under a different owner as well) is degrading (broken links, comments with different text-formatting functions, spam, …), I decided to re-publish them on my own website in the newly refurbished articles section and, wait for it…

I decided to re-license all three of the articles I wrote in 2005 under CreativeCommons Attribution-ShareAlike license.

Update (2017-04-21): as it happens my articles section is now gone and instead the articles are available as part of the blog itself.

Okay, nothing exceptional I guess, but given there were some doubts about my choices of licenses, this actually makes a good chunk of my work available under a totally free license. I could probably ask Jonathan whether I could do something like that with the articles I wrote for LWN, but since that site is still well maintained I see no urgency.

I should also convert the rest of the articles on that index page from PDF/TeX to HTML, but they are not high on my priority list either.

Finally, I'm still waiting on the FSF to give me an answer regarding the FSWS licensing — Matija helped me adapt the Autoconf exception into something usable for the web… unfortunately the license of the exception itself is quite strict, so I had to ask the FSF for permission to use it… the request has been logged into their RT, and I hope it'll get me an answer soon… who knows, FSWS might be my last "gift" before I throw in the towel.