We need Free Software Co-operatives, but we probably won’t get any

The recent GitHub craze got a number of Free Software fundamentalists to hurry away from GitHub towards other hosting solutions.

Whether it was GitLab (a fairly natural choice given the nature of the two services), BitBucket, or SourceForge (which is trying to rebuild a reputation as a Free Software friendly hosting company), there are a number of SaaS providers to choose from.

At the same time, a number of projects have been boasting (maybe a bit too smugly, in my opinion) that they self-host their own GitLab or similar software, and have suggested that other projects do the same to be “really free”.

A lot of the discourse appears to miss the nuance of the compromises involved in using SaaS hosting providers, self-hosting for communities, and self-hosting for single projects, so I thought I would gather my thoughts on this in a single post.

First of all, you probably remember my thoughts on self-hosting in general. Any solution that involves self-hosting will require a significant amount of ongoing work. You need to make sure your services keep working, and keep them safe and secure. Particularly for FLOSS source code hosting, it’s of primary importance that the integrity and safety of the source code is maintained.

As I already said in the previous post, this style of hosting works well for projects that have a community, in which one or more dedicated people can look after the services. And in particular for bigger communities, such as KDE, GNOME, FreeDesktop, and so on, this is a very effective way to keep stewardship of code and community.

But for one-person projects, such as unpaper or glucometerutils, self-hosting would be quite bad. Even for xine, with a single person maintaining just the site and Bugzilla, it got fairly bad. I’m trying to convince the remaining active maintainers to migrate it to VideoLAN, which is now probably the biggest Free Software multimedia project and community.

This is not a new problem. Indeed, before people rushed to GitHub (or Gitorious), they rushed to other services that provided similar integrated environments. When I became a FLOSS developer, the biggest of them was SourceForge — which, as I noted earlier, was recently bought by a company trying to rebuild its reputation after a significant loss of trust. These environments don’t only include SCM services, but also issue (bug) trackers, contact email, and so on and so forth.

Using one of these services is always a compromise: not only do they require an account on each service to be able to interact with them, they also bring a level of lock-in, simply because of the nature of URLs. Indeed, as I wrote last year, going through my old blog posts to identify those referencing dead links reminded me of just how many project hosting services shut down, sometimes winding down slowly (Berlios) and sometimes abruptly (RubyForge).

This is a problem that does not only involve services provided by for-profit companies. Sunsite, RubyForge and Berlios didn’t really have companies behind them, and the last one is probably one of the closest things to a Free Software co-operative that I’ve seen outside of FSF and friends.

There is of course Savannah, FSF’s own forge-lookalike system. Unfortunately, for one reason or another, it has always lagged behind the featureset (particularly around security) of other project management SaaS offerings. My personal guess is that this is due to the political nature of hosting any project on FSF’s infrastructure, even outside of the GNU project.

So what we need is a politically-neutral, project-agnostic hosting platform that is a co-operative effort. Unfortunately, I don’t see that happening any time soon. The main problem is that project hosting is expensive, whether you use dedicated servers or cloud providers. And it takes full-time people working as system administrators to keep it running smoothly and securely. You need professionals, too — or you may end up like lkml.org, down when its one maintainer goes on vacation and something happens.

While there are projects that receive enough donations to sustain these costs (see KDE, GNOME, VideoLAN), I’d be skeptical that an unfocused co-operative would be able to take care of this. Particularly if it does not restrict the creation of new projects and repositories, as that requires particular attention to abuse, and good guidelines on which content is welcome and which isn’t.

If you think that’s an easy task, consider that even SourceForge, whose review process used to take a significant amount of time, managed to let joke projects use their service and trade on their credibility.

A few years ago, I would have said that SFLC, SFC and SPI would be the right actors to set up something like this. Nowadays? Given their infighting, I don’t expect them to be of any use.

Can you run a brick and mortar store on Free Software?

I have written before about the CRM I wrote for a pizzeria, and I am happy to see that even FSFE has started looking into Free Software for SMEs. I also noted the need for teams to develop healthy projects. Today I want to give an example of why I think these things are not as easy as most people expect them to be, and how many different moving parts have to align to make Free Software for SMEs work.

As I’m no longer self-employed, and I have no intention of going back to being an MSP in my lifetime, what I’m writing here is more of a set of “homework pointers”, in case a community of SME-targeted Free Software projects were to form.

I decided to focus my thoughts on the needs of a brick and mortar store (or high street store, if you prefer), mostly because it has a subset of the requirements I could think of, compared to a restaurant like the pizza place I actually worked with.

These notes are also probably a lot more scattered and incomplete than I would like, because I have only worked retail for a short while, between high school and my two miserable weeks of university, nearly fifteen years ago — in a bookstore, to be precise.

For most people who have not worked retail, it might seem like the most important piece of software/hardware in a store is the till, because that is what they interact with most of the time. While till systems (also called POS) are fairly important, as they are in direct contact with the customer, they are only the tip of the iceberg.

But let’s start with the POS: whether you plan on integrating it directly with a credit card terminal or not, right now there are a number of integrated hardware/software solutions for this, which include a touchscreen to input the receipt’s line items and a (usually thermal) printer for the receipts, while sometimes allowing the receipt to be emailed to the customer instead. As far as I know, there’s no Free Software system for this. I do see an increasing number of Clover tills in Europe, and Square in the United States (but these are not the only ones).

The till software is more complicated than one would think, because in addition to the effects the customers can see (selecting line items, printing the receipt, eventually taking payment), it has to be able to keep track of the cash flow, whether in the form of actual cash or of card payments. Knowing the cash flow is a prerequisite for any business, as without that information you cannot plan your budgets.
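To make that cash-flow point a bit more concrete, here is a minimal sketch in Python — with entirely hypothetical names and structures, not taken from any real till software — of how line items and payment methods could feed into a daily takings summary:

```python
from collections import defaultdict
from dataclasses import dataclass, field
from decimal import Decimal


@dataclass
class LineItem:
    description: str
    quantity: int
    unit_price: Decimal  # Decimal, never float, when dealing with money


@dataclass
class Receipt:
    items: list = field(default_factory=list)
    payment_method: str = "cash"  # or "card", "voucher", ...

    def total(self) -> Decimal:
        return sum((i.unit_price * i.quantity for i in self.items), Decimal("0"))


def daily_cash_flow(receipts):
    """Aggregate takings by payment method, so the cash in the drawer and the
    card settlements can be reconciled separately at the end of the day."""
    totals = defaultdict(lambda: Decimal("0"))
    for receipt in receipts:
        totals[receipt.payment_method] += receipt.total()
    return dict(totals)


receipts = [
    Receipt([LineItem("Paperback", 1, Decimal("9.90"))], "cash"),
    Receipt([LineItem("Notebook", 2, Decimal("4.50"))], "card"),
]
print(daily_cash_flow(receipts))  # {'cash': Decimal('9.90'), 'card': Decimal('9.00')}
```

A real till obviously needs much more than this (VAT rates, refunds, drawer counts, fiscal printers), but even the toy version shows why the payment method has to be recorded for every receipt, not just the total.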

In bigger operations, this would feed into a dedicated ERP system, which would often include inventory management software — because you need to know how much stock you have and how fast it is moving, to know when to order new stock.
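The “when to order new stock” part is the classic reorder-point rule; a minimal sketch of it (again with hypothetical names, purely illustrative) looks like this:

```python
def should_reorder(on_hand: int, avg_daily_sales: float,
                   lead_time_days: int, safety_stock: int = 0) -> bool:
    """Classic reorder-point check: reorder when the stock on hand would run
    out before a new delivery can arrive, plus an optional safety margin."""
    reorder_point = avg_daily_sales * lead_time_days + safety_stock
    return on_hand <= reorder_point


# e.g. 12 units left, selling about 3 a day, supplier takes 5 days to deliver
print(should_reorder(on_hand=12, avg_daily_sales=3.0, lead_time_days=5))  # True
```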

There is also the need to handle invoices, which usually don’t get printed by the till (you don’t want an invoice printed on thermal paper, particularly in countries like Italy, where you’re meant to keep the original of an invoice for over ten years).

And then there is the filing of payable invoices and, well, their payment. This is part of the accounting procedures, and I know of very few systems that allow integration with a bank to the point of automating this part. PSD2 is meant to require financial institutions to provide APIs to make this possible, at least in Europe, but it has barely been implemented yet, and we’ll have to see what the solutions will look like.

Different industries have different expected standards, too. When I worked in the bookstore, there was a standard piece of software used to consult the online stock of books at various depots, which was required to handle orders for people looking for something that was not in the store. While Amazon and other online services have for the most part removed the need to custom order books in a store, I still know a few people who do so, simply to make sure the bookstore stays in business. And I assume that very similar, yet different, software and systems exist for most other fields of endeavour, such as computer components, watches, and shoes.

Depending on the size of the store, the number of employees, and in general the hours of operation, there may also be a need for roster management software, so that the different workers get fair (and legal) shifts, while still being able to manage days off. I don’t know how well solutions like Workday work for small businesses, but in general I feel this is likely going to be one area in which Free Software won’t make an easy dent: following all the possible legal frameworks to actually be compliant with the law is the kind of work that requires a full-time staff of people, and unless something changes drastically, I don’t expect any FLOSS project to keep up with that.

You can say that this post is not giving any answers and is just adding more questions. And that’s the case, actually. I don’t have the time or energy to work on this myself, and my job does not involve working with retailers, or even developing user-focused software. I wanted to write this as a starting point, in case someone is interested in taking on such a project.

In particular, I think this would be prime territory for a multi-disciplinary university project, starting by asking store owners about their needs and understanding the whole user journey. That seems to be something FSFE is now looking into fostering, which I’m very happy about.

Please, help the answer to the question “Can you run a brick and mortar store on Free Software?” be Yes!

Two words about my personal policy on GitHub

I was not planning on posting on the blog until next week, trying to stick to a weekly schedule, but today’s announcement of Microsoft acquiring GitHub is forcing my hand a bit.

So, Microsoft is acquiring GitHub, and a number of Open Source developers are losing their minds, in all possible ways. A significant proportion of the comments on this that I have seen on my social media sound like doomsday prophecies, as if this spells the end of GitHub, because Microsoft is going to ruin it all for them.

Myself, I think that if it spells the end of anything, it is the end of the one-stop shop for working on any project out there, not because of anything Microsoft did or is going to do, but because a number of developers are now leaving the platform in protest (protest of what? One company buying another?).

Most likely, it’ll be the fundamentalists who will move their projects away from GitHub. And depending on what they decide to do with their projects, it might not even show up on anybody’s radar. A lot of people are pushing for GitLab, which is both an open-core self-hosted platform and a PaaS offering.

That is not bad. Self-hosted GitLab instances already exist for VideoLAN and GNOME. Big, strong communities are, in my opinion, in the perfect position to dedicate people to supporting core infrastructure that makes open source software development easier. In particular because it’s easier for a community of dozens, if not hundreds, of people to find dedicated people to work on it. For one-person projects, that’s overhead, distracting, and destructive as well, as fragmenting into micro-instances makes it painful to fork projects — and at the same time, allowing any user who just registered to fork the code on any instance is prone to abuse and a recipe for disaster…

But this is all going to be a topic for another time. Let me try to go back to my personal opinions on the matter (to be perfectly clear, these are not the opinions of my employer and yadda yadda).

As of today, what we know is that Microsoft acquired GitHub, and they are putting Nat Friedman of Xamarin fame (the company that stood behind the Mono project after Novell) in charge of it. This choice makes me particularly optimistic about the future, because Nat’s a good guy and I have the utmost respect for him.

This means I have no intention of moving any of my public repositories away from GitHub, unless doing so would bring a substantial advantage. For instance, if there were a strong community built around medical device software, I would consider moving glucometerutils. But this is not the case right now.

And because I still root most of my projects at my own domain, if I did move them, the canonical URL would still be valid. This is a scheme I devised after getting tired of fixing up links to wherever unieject ended up.

Microsoft has not done anything wrong with GitHub yet. I will give them the benefit of the doubt, and not rush out of the door. It would and will be different if they were to change their policies.

Rob’s point is valid, and it would be a disgrace if various governments were to push Microsoft into a corner, requiring it to purge content that the smaller, independent GitHub would have left alone. But unless that happens, we’re debating hypotheticals at the same level as “If I were elected supreme leader of Italy”.

So, as of today, 2018-06-04, I have no intention of moving any of my repositories to other services. I’ll also reply with a link to this blog, and no accompanying comment, to anyone who suggests I should do so without any benefit for my projects.

The importance of teams, and teamwork

Today, on Twitter, I received a reply with a phrase that, taken on its own and without connecting back to the original topic of the thread, I found representative of the dread I feel with working as a developer, particularly in many open source communities nowadays.

Most things don’t work the way I think they work. That’s why I’m a programmer, so I can make them work the way I think they should work.

I’m not going to link back to the tweet, or name the author of the phrase. This is not about them in particular, and more about the feeling expressed in this phrase, which I would have agreed with many years ago, but which now feels so off key.

What I feel now is that programmers don’t make things work the way they think they should. And this is not intended as a nod to the various jokes about how bad programming actually is, given APIs and constraints. This is about something that becomes clear when you spend your time trying to change the world, or make a living alone (by running your own company): everybody needs help, in the form of a team.

A lone programmer may be able to write a whole operating system (cough Emacs), but that does not make it a success in and of itself. If you plan on changing the world, and possibly changing it for the better, you need a team that includes not only programmers, but experts in quite a lot of different things.

Whether it is a Free Software project or a commercial product, if you want to have users, you need to know what they want — and a programmer is not always the most suitable person to go through user stories. Hands up all of us who have, at one point or another, facepalmed at an acquaintance taking a screenshot of a web page to paste it into Word, and tried to teach them how to print the page to PDF. While changing workflows so that they make sense may sound like the easiest solution to most tech people, that’s not what people who are just trying to do their job care about. Particularly not if you’re trying to sell them (literally or figuratively) a new product.

And similarly to knowing what users want to do, you need to know what users need to do. While effectively all Free Software comes with no warranty attached, even for it (and most definitely for commercial products) it’s important to consider the legal framework the software will be used within. Except for the more anarchist developers out there, I don’t think anyone would be particularly interested in breaching laws for the sake of breaching them, for instance by providing a ledger product that allows “black book accounting” in an encrypted parallel file. Or, to reprise my recent example, by providing a software solution that does not comply with the GDPR.

This is not just about pure software products. You may remember, from last year, the teardown of Juicero. In this case the problems appeared to stem from the lack of control over the BOM. While electronics is by far not my speciality, I have heard more expert friends and colleagues cringe at seeing the specs of projects that tried to go mainstream with a BOM easily twice as expensive as the minimum.

An aside here, before someone starts shouting about that: minimising the BOM for an electronics project may not always be the main target. If it’s a DIY project, making it easier to assemble could be an objective, so choosing bulkier, more expensive parts might be warranted. Similarly, if it’s being done for prototyping, using more expensive but widely available components is generally a win too. I have worked on devices that used multi-GB SSDs for a firmware smaller than 64MB — but asking for on-board flash for the firmware would have cost more than the extremely overprovisioned SSDs.

And in my opinion, if you want to have your own company, and are in it for the long run (i.e. not with the startup mentality of getting VC capital and being acquired before even shipping), you definitely need someone to follow up on the business plan and the accounting.

So no, I don’t think that any one programmer, or a group of programmers alone, can change the world. There’s a lot more to building software than writing code. And a lot more to changing society than building software.

Consider this the reason why I will plonk-file any recruitment email that is looking for “rockstars” or “ninjas”. Not that I’m looking for a new gig as I type this, but I would at least give it some thought if someone were looking for a software mechanic (h/t @sysadmin1138).

Diabetes Software: the importance of documented protocols

You may remember that just last week I was excited to announce that I had more work planned and lined up for my glucometer utilities, part of which was supporting the OneTouch Verio IQ, a slightly older meter that is still sold and in use in many countries, but for which no protocol has been released.

In the issue I linked above you can find an interesting problem: LifeScan discontinued their Diabetes Management Software, and removed it from their website. Instead, they suggest you get one of their Bluetooth meters and use that with their software. While in general the idea of upgrading a meter is sane, the fact that they decided to discontinue the old software without providing protocols is at the very least annoying.

This shows the importance of having open source tools that can be kept alive as long as needed, because there will be people out there who still rely on their OneTouch Verio IQ, or even on the OneTouch Ultra Easy, which was served by the same software and is still being sold in the US. Luckily, they at least used to publish the Ultra Easy protocol specs, and those are still available on the Internet at large if you search for them (and I do have a copy, and I can rephrase it into a protocol specification if that turns out to be needed).

On the bright side, the Tidepool project (of which I wrote before) has a driver for the Verio IQ. It’s not a particularly good driver, as I found out (I’ll get to that later), but it’s a starting point. It made me notice that the protocol was almost an in-between of the Ultra Easy and the Verio 2015, which I already reverse engineered before.

Of course I also managed to find a copy of the LifeScan software on a mostly shady website and a copy of the “cable drivers” package from the Middle East and Africa website of LifeScan, which still has the website design from five years ago. This is good because the latter package is the one that installs kernel drivers on Windows, while the former only contains userland software, which I can trust a little more.

Comparing the USB trace I got from the software with the commands implemented in the Tidepool driver showed me a few interesting bits of information. The first is that the first byte of commands on the Verio devices is not actually fixed, but can be chosen from a few values, as the Windows software and the Tidepool driver used different bytes (and with this I managed to simplify one corner case in the Verio 2015 driver!). The second is that the Tidepool driver does not extract all the information it should! In particular, the device allows before/after-meal marking, but they discard that byte before getting to it. Of course they don’t seem to expose that data even in the Ultra 2 driver, so it may be intentional.

A bit more concerning is that they don’t verify that the command returned a success status, but rather discard the first two bytes every time. Thankfully it’s very easy for me to check that.
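To give an idea of what that check could look like, here is a minimal sketch using Construct, the library I rewrote the drivers with — the field names, the layout and the “ACK” value below are purely illustrative assumptions on my part, not the actual LifeScan protocol:

```python
# Illustrative only: the packet layout and values below are assumptions for
# the sake of the example, not the actual LifeScan protocol specification.
from construct import Byte, GreedyBytes, Struct

RESPONSE = Struct(
    "link_control" / Byte,   # kept to 0 on the Verio models
    "status" / Byte,         # the byte that gets discarded unchecked
    "payload" / GreedyBytes,
)


def parse_response(raw: bytes) -> bytes:
    pkt = RESPONSE.parse(raw)
    # Instead of blindly dropping the first two bytes, fail loudly when the
    # meter reports an error for the command we just sent.
    if pkt.status != 0x06:   # hypothetical "success" value
        raise ValueError(f"meter returned error status {pkt.status:#x}")
    return pkt.payload
```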

On the other hand, reading through the Tidepool driver (which I have to assume was developed with access to the LifeScan specifications, under NDA) I could identify two flaws in my own code. The first was not realizing that the packet format between the UltraEasy and the Verio 2015 was not subtly different as I thought, but almost identical, except that the link-control byte in both Verio models is not used, and is kept at 0. The second was that I’m not currently correctly dropping control solution readings from the Verio 2015! I should find a way to get hold of the control solution for my models at the pharmacy and make sure I get this tested out.

Oh yeah, and the Tidepool driver does not do anything to get or set the date and time; thankfully the commands are literally the same as in the Verio 2015, so that part was an actual copy-paste of code. I should probably tidy it up a bit, but now I have a two-tier protocol system: the base packet structure is shared between the UltraEasy, Verio IQ and Verio 2015; some of the commands are shared between the UltraEasy and the Verio IQ, and more of them are shared between the Verio IQ and the Verio 2015.
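To illustrate what I mean by a two-tier system, here is a rough sketch of how the sharing could be structured — hypothetical class and method names, not the actual glucometerutils code:

```python
# Hypothetical sketch: the low-level framing is shared by all three meters,
# while command sets are shared only between specific pairs of devices.
class LifeScanFramedDevice:
    """Base packet structure shared by UltraEasy, Verio IQ and Verio 2015."""

    def send_frame(self, payload: bytes) -> bytes:
        raise NotImplementedError  # checksumming, link control, retries, ...


class SerialCommandsMixin:
    """Commands shared between the UltraEasy and the Verio IQ."""

    def read_serial_number(self) -> str: ...


class DateTimeCommandsMixin:
    """Commands shared between the Verio IQ and the Verio 2015."""

    def get_datetime(self): ...
    def set_datetime(self, value): ...


class UltraEasy(LifeScanFramedDevice, SerialCommandsMixin): ...
class VerioIQ(LifeScanFramedDevice, SerialCommandsMixin, DateTimeCommandsMixin): ...
class Verio2015(LifeScanFramedDevice, DateTimeCommandsMixin): ...
```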

You can see now why I’ve been disheartened to hear that the development of drivers, even for open source software, is done through closed protocol specifications that cannot be published (nor the drivers thoroughly commented). Since Tidepool is not actually using all of the information, there is no way to tell what certain bytes of the responses represent. And unless I get access to all the possible variants of the information, I can’t tell what some bytes that look constant to me actually represent. Indeed, since the Verio 2015 does not have meal information, I assumed that the values were 32-bit, until I got a report of invalid data on another model which shares the same protocol and driver. This is why I am tempted to build “virtual” fakes of these devices with Facedancer, to feed variants of the data to the original software and see how they are represented there.

On the bright side, I feel proud of myself (maybe a little too much) for having spent the time to rewrite those two drivers with Construct while at 34C3 and afterwards. If I hadn’t refactored the code before looking at the Verio IQ, I wouldn’t have noticed the similarities so clearly, and likely wouldn’t have come to the conclusion that it’s a shared, similar protocol. And there’s no way I could have copy-pasted between the drivers as easily as I did.

Public Money, Public Code

Imagine that all publicly funded software were under a free license: Everybody would be able to use, study, share and improve it.

I have been waiting for the Free Software Foundation Europe to launch the Public Money, Public Code campaign for almost a year now, since Matthias first told me it was in the works. I have been arguing the same point, although in a less organized fashion, since back in 2009, when I complained about how the administration of Venice commissioned a GIS application from a company they directly own.

For those who have not seen the campaign yet, the idea is simple: software built with public money (that is, commissioned and paid for by public agencies) should be licensed under a FLOSS license, to make it public code. I like this idea and will support it fully. I even rejoined the Fellowship!

The timing of this campaign ended up resonating with a post on infrastructure projects and their costs, which I find particularly interesting and useful to point out. Unlike the article that is deep-linked there, which lamented the costs associated with the project, this post focuses on pointing out how that money actually needs to be spent, because for the most part off-the-shelf Free Software is not really up to the task of complex infrastructure projects.

You may think the post I linked is overly critical of Free Software, and that Free Software is just a little rough around the edges and everything is okay once you spend some time on it. But that’s exactly what the article is saying! Free Software is a great baseline to build complex infrastructure on top of. This is what all the Cloud companies do, this is what even Microsoft has been doing in the past few years, and it is reasonable to expect most for-profit projects to do the same, for a simple reason: you don’t want to spend money reinventing the wheel when you can charge for designing an innovative engine — which is a quite simplistic view of course, as sometimes you can indeed invent a more efficient wheel, but that’s a different topic.

Why am I bringing this topic up together with the FSFE campaign? Because I think this is exactly what we should be asking from our governments and public agencies, and the article I linked shows exactly why!

You can’t take off-the-shelf FLOSS packages and have them run a whole infrastructure, because they are usually unpolished, might not scale, or might require significant work to bring them up to what the project requires. You will have to spend money to do that, and maybe in some cases it will be cheaper not to use existing FLOSS projects at all, and build your own new, innovative wheel. So publicly funded projects need money to produce results; we should not complain about the cost1, but rather demand that the money spent actually produces something that will serve the public in all possible ways, not only through the objective of the project, but also through any byproduct of it, which includes the source code.

Most of the products funded with public money are not particularly useful for individuals, or for most for-profit enterprises, but byproducts and improvements may very well be. For example, in the (Italian) post I wrote in 2009, I was complaining about a GIS application that was designed to report potholes and other roadwork problems. In the abstract, this is a way to collect and query points of interest (POI), which is the basis of many other current services, from review sites to applications such as Field Trip.

But do we actually care? Sure, by making the code of public projects available, you may now actually be indirectly funding private companies that can reuse that code, and thus be jumpstarted into having applications that would otherwise cost time or money to build from scratch. On the other hand, this is what Free Software has already been about: indeed, Linux, the GNU libraries and tools, Python, Ruby, and all those tools out there are nothing less than a full kit for quickly starting projects that a long time ago would have taken a lot of money or a lot of time to start.

You could actually consider the software byproducts of these projects similarly to the public infrastructure that we probably all take for granted: roads, power distribution, communication, and so on. Businesses couldn’t exist without all of this infrastructure, and while it is possible for a private enterprise to set out and build all the infrastructure themselves (roads, power lines, fiber), we don’t expect them to do so. Instead we accept that we want more enterprises, because they bring more jobs and more value, and the public investment is part of that.

I actually fear that the reason a number of people may disagree with this campaign is rooted in localism — as I said before, I’m a globalist. Having met many people with such ideas, I can hear them in my mind complaining that, to take again the example of the IRIS system in Venice, the Venetians shouldn’t have to pay for something and then give it away for free to Palermo. It’s a strawman, but only because I replaced the city they complained about when I talked about my idea those eight years ago.

This argument may make sense if you really care about local money being spent locally, without counting on any higher-order funding. But personally I think that public money is public, and I don’t really care if the money from Venice is spent to help report potholes in Civitella del Tronto. Actually, I think that cities where the median disposable income is higher have a duty to help provide infrastructure for the smaller, poorer cities, at the very least in their immediate vicinity, but overall too.

Unfortunately, “public money” may not always be so, even if it appears like that. So I’m not sure whether, even if a regulation were passed for publicly funded software development to be released as FLOSS, we’d get a lot in the form of public transport infrastructure being open sourced. I would love for it to be, though: we’d more easily get federated infrastructure if they shared the same backend, and if you knew how the system worked you could actually build tools around it, for instance integrating OpenStreetMap directly with the transport system itself. But I fear this is all wishful thinking and it won’t happen in my lifetime.

There is also another interesting point to make here, which I think I may expand upon, for other contexts, later on. As I said above, I’m all for requiring software developed with public money to be released to the public under a FLOSS-compatible license. Particularly one that allows using other FLOSS components, and the re-use of even parts of the released code in bigger projects. This does not mean that everybody should have a say in what’s going on with that code.

While it makes perfect sense to be able to fix bugs and incompatibilities with websites you need to use as part of your life as a citizen (in the case of the Venetian GIS I would probably have liked to fix the way they identified the IP address the request came from), adding new features may actually not be in line with the roadmap of the project itself. Particularly if the public money is tight rather than lavish, I would surely prefer that they focused on delivering what the project needs and just dropped the sources out under compatible licenses, without trying to create a community around them. While the latter would be nice to have, it should not steal focus from the important part: a lot of this code is currently one-off, and is not engineered to be re-used or extensible.

Of course, in the long run, if you do have public software already available as open source, there will be more and more situations where solving the same problem again becomes easier, particularly if an option is added there, or a constant string becomes a configurable value, or translations become possible at all. And in that case, why not have them as features of a single repository, rather than a lot of separate forks?

But all of this should really be secondary, in my opinion. Let’s focus on getting those sources: they are important, they matter and they can make a difference. Building communities around this will take time. And to be honest, even making these projects secure will take time. I’m fairly sure that in many cases right now, if you take a look at the software that runs public services, you can find backdoors, intentional or not, and even very simple security issues. While the “many eyes” idea is easily disproved, it’s also true that for the most part those projects cut corners, and are very difficult to make secure to begin with.

I want to believe we can do at least this bit.


  1. Okay, so there are cases of artificially inflated costs due to friends-of-friends. Those are complicated issues, and I’ll leave them to the experts. We should still not be complaining that these projects don’t appear for free.

The breadwinner product

This may feel like a bit of a random post, as Business and Economics are not my areas of expertise and I usually do my best not to talk about stuff I don’t know, but I have seen complete disregard for this concept lately, and I thought it would be a good starting point to define here, before I talk about it, what a “breadwinner product” is, from my point of view.

The term breadwinner is generally used to refer to the primary income-earner in a household. While I have not often seen it extended to products and services in companies, I think it should be fairly obvious how the extension would work.

In a world of startups there are still plenty of companies that have a real “breadwinner product”, even when acting as startups. This is the case, for instance, for the company I used to contract out for in Los Angeles: they had been in business for a number of years with a different, barely related product, and I was contracting out for their new project.

I think it’s important to keep this term in mind, because without this concept it’s hard to understand a lot of business decisions of many companies, why startups such as Revolut are “sweeping up the market”, and so on.

This is something that came up on Twitter a time or two: a significant number of geeks appear to wilfully ignore the needs of a business, treat marketing concepts as words of the devil, and refuse to consider whether decisions make business sense; instead they will either judge decisions purely on technical merits, or even just on their own direct interests. Now it is true that technical merits can make good business sense, but sometimes there are very good long-term vision reasons that people don’t appreciate from a purely technical point of view.

In particular, sometimes it’s hard to understand why a service by a company that may appear to be a startup is based on “old” technology, but it may just be the case that it is actually a “traditional” company trying to pivot into a different market, or a different brand or level of service. And when that happens, there’s at least some gravitational pull to keep the stack in line with the previous technology. Particularly if the new service can piggyback on the old one for a while, in terms of revenue, technology and staff.

So in the case of the company I referred to above, when I started contracting out they were already providing a separate service built on genuinely legacy technology, running on a stack of bare metal servers with RedHat 5. Since the new service had two components, one of them ended up being based on the same stack, and the other one (which I was setting up) ended up based on Gentoo Linux with containers instead — the same way the Tinderbox used to be run. If you wonder why one would run two stacks this separate, the answer is that messing with the breadwinner product is, most of the time, a risky endeavour, and unless you have a very good reason to do so, you don’t.

So even though I was effectively building a new architecture from scratch, and was setting up new servers with more proper monitoring (based on Munin and Icinga) and Puppet for configuration management, I was not allowed to touch the old service. And rightly so, as it was definitely brittle and touching it could have led to actually losing money: that service was running in production, while the new one was not ready yet, and its few users could be told about maintenance windows in advance.

There is often a tipping point, though, when the cost of running a legacy service is higher than the revenue the service is bringing in. For that company, it happened right as I was leaving to start working at my current place of work. The owner, though, was more business savvy than many other people I have met before and since, and was actually already planning how to cut some expenses. Indeed, the last thing I helped that company with was setting up a single1 bare-metal server with multiple containers to virtualise their formerly fully bare-metal hardware, and bring it physically to a new location (Fremont, CA) to cut hosting costs.

The more money the breadwinner service makes, and the less the company experiments with alternative approaches to cut costs in the future, build up new services or open new market opportunities, the harder working for those companies becomes. Of all the possible things I could complain about regarding my boss at the time, the ability to deal with business details was not one of them. Actually, I think that despite leaving me in quite the bad predicament afterwards, he did end up teaching me quite a bit of the nitty-gritty details of doing business, particularly US-style — and I may not entirely like it either.

But all in all, I think this is something lots more people in tech should learn about. Because I still maintain that Free Software can only be marketed by businesses, and to have your project cater to business users without selling its soul, you need to be able to tell what they need and how they need it provided.


  1. Okay, actually a bit more than one: a single machine ran the production environment for the legacy service, and acted as warm-backup for the new service; another machine ran the production environment for the new service, and acted as warm-backup for the legacy service. A pair of the older bare-metal servers acted as database backends for both systems.

Fake “Free” Software hurts us all

Brian Krebs, the famous information security reporter, posted today (well, at the time of writing) an article on security issues with the gSOAP library. I found it particularly interesting because I remembered seeing the name before. Indeed, Krebs links to the project’s SourceForge page, which is something to see. It has a multi-screen-long list of “features”, including a significant number of variants and options for the protocol, which ends with the following:

Licenses: GPLv2, gSOAP public license (for engine and plugins), commercial non-GPL license available upon request (software is 100% in-house developed, no third-party GPL contributions included)

Ah, there we go. “Free” Software.

You may remember my post from just a few days ago about the fact that, for Free Software to actually make a difference in society the way Stallman prophesies, it needs a mediating agency, and at least right now that agency is companies and the free market. I argued that making your software usable by companies that provide services or devices is good, as it makes for healthy, usable and used projects, and it increases competition and reduces the cost of developing new solutions. So is gSOAP the counterexample? I really don’t think so. gSOAP is for me the perfect example matching my previous rant about startups and fake communities.

At first look, the project appears to be a poster child for FSF-compatible software, since it’s licensed under GPL-2 and it clearly has no CLA (Contributor License Agreement), though the company provides a “way out” from GPL-2 obligations by buying a commercial license. This is not, generally speaking, a bad thing. I have seen many examples, including in the open multimedia cliques, of using this trick to foster the development of open solutions while making money to pay salaries or build new Free Software companies.

But generally, those projects allow third party contributions under a CLA or similar agreement that allows the “main” copyright holders to still issue proprietary licenses, or enforce their own rights. You can for instance see what Google’s open source projects do about it. Among other things, accepting outside contributions also constrains re-licensing the software, as that requires agreement from all copyright holders. In the case of gSOAP, there are no outside contributions at all: as they say, their software is «100% in-house developed».

They are very proud of this situation, because it gives them all the power: if you want to use gSOAP without paying, you’re tied to the GPL, which may or may not become a compliance problem. And if you happen to violate the license, they hold all the copyright needed to sue you, or just ask you to settle. It’s a perfect situation for copyright trolls.

But because of this, even though the software is on paper “Free Software” according to the FSF, it is in effect a piece of proprietary software. Sure, you can fork the library and build your own GPL-2 version instead, as you have the freedom to fork, but that does not make it a community, or a real Free Software project. And it also means you can’t contribute patches to it to make it more secure, safer, or better for society. You could report bugs, including security bugs, but what’s the likelihood that you would actually care to do so, given that one of the first things they make clear on their “project” page is that they are not interested in your contributions? And we can clearly see that this particular library could have used some care from the community, given its widespread adoption.

What this means to me is that gSOAP is a clear example that just releasing something under GPL-2 is not enough to make it Free Software, and that even “Free” Software released under GPL-2 can be detrimental to society. It also touches on the other topic I brought up recently, namely that you need to strike a balance between making code usable by companies (because they will use it, and thus very likely help you extend or support your project) and keeping it a real community or a real project. Clearly in this case the balance was totally off. If gSOAP were available under a more liberal license, even LGPL-2, they would probably lose a lot in license fees, as for most companies just using it as a shared library would be enough. But it would then allow developers, both hobbyists and those working for companies, to contribute fixes so that they trickle down onto everybody’s device.

Since I do not know the terms of the proprietary license that the company behind gSOAP requires their customers to agree to, I cannot say whether there is any incentive for said companies to provide fixes back to the project, but I assume that if they were to do so, they wouldn’t be contributing them under GPL-2, clearly. What I can say is that for the companies I worked for in the past, actually receiving the library under GPL-2 and being able to contribute the fixes back would have been a better deal. The main reason is that, as much as a library like this can be critical to connected devices, it does not by itself contain any of the business logic. And there are ways around linking GPL-2 code into the business logic application, which usually involve some kind of RPC between the logic and the frontend. Being able to contribute the changes back to the open source project would allow them not to have to maintain a long set of patches to sync release after release — I had the unfortunate experience of having to deal with something in that space before.
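Purely as an illustration (hypothetical names throughout, and certainly not legal advice on what does or does not count as a derivative work), the kind of separation I’m referring to keeps the GPL-2 code in its own process, with the business logic only talking to it over a pipe or socket:

```python
# Hypothetical illustration of process-level separation: the proprietary
# business logic never links against the GPL-2 library; it only talks to a
# small, separately-distributed GPL-2 helper program over a pipe, exchanging
# JSON messages.
import json
import subprocess


def call_soap_helper(request: dict) -> dict:
    # "soap-helper" is a made-up name for a standalone, GPL-2 licensed
    # program that wraps the library and speaks JSON on stdin/stdout.
    proc = subprocess.run(
        ["soap-helper"],
        input=json.dumps(request).encode(),
        capture_output=True,
        check=True,
    )
    return json.loads(proc.stdout)


# Example usage (would only work where the helper binary actually exists):
# response = call_soap_helper({"method": "GetDeviceStatus", "params": {}})
```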

My advice is, once again, to try to figure out what you want to achieve by releasing a piece of software. Are you trying to make a statement, sticking it to the man, by forcing companies not to use your software? Are you trying to make money by forcing companies interested in using your work to buy a special license from you? Or are you contributing to Free Software because you want to improve society? In the latter case, I would suggest you consider a liberal license, to make sure that your code (which can be better than proprietary, closed-source software!) is actually available to those mediating agencies that transform geeky code into usable gadgets and services.

I know, it would be oh so much nicer if, by just releasing something awesome under GPL-2, you could force every company to release all their firmware as Free Software as well, but that’s not going to happen. Instead, if they feel they have to, they will look for worse alternatives, or build their own (even worse) alternatives, and keep them to themselves, and we’ll all be the poorer for it. So if in doubt, consider the MIT or Apache 2 licenses. The latter in particular appears to be gaining more and more traction, as both Google and Microsoft appear to be fond of it, and Facebook’s alternative is tricky.

Some of you may consider me a bit of a hypocrite, since I released a number of projects under more restrictive Free licenses (including AGPL-3!) before I came to the conclusion that that’s actually bad for society. Or at least before I came back to that idea, as I was convinced of it back in 2004, when I wrote (in Italian) about why I think MySQL is bad for Free Software (I should probably translate that post, just for reference). But what I have decided is that I’ll now try to do my best to re-license the projects for which I own the full copyright under the Apache 2 license. It may take a little while until I figure out all the right steps, but I feel it is the right thing to do.

Who reaps the benefits of Free Software?

I feel silly having to start this post by boasting about my own achievements, but my previous post has stirred up a lot of comments (outside my blog’s own, that is), and a number of those could be summed up as “I don’t know this guy, he’s obviously new, has no idea what he’s talking about”. So let’s start with the fact that I’ve been involved in Free Software for just about half of my life at this point.

And while I have not tied my name to any particular project, I have contributed to a significant number of them by now. I’m not an “ideas man”, so you can count on me to help figure out problems and fix bugs, but I would probably start hiding in a corner if I built up a cult of personality; unfortunately that appears to be what many other contributors to Free Software have done over the years, and it is what brings weight to their names. I don’t have such weight, but you’re probably better off googling my name around before deciding I have no stake in the fire.

Introduction over, let’s get to the meat of this post.

If you’re reading this post, it is likely that you’re a Free Software supporter, creator, user, or a combination of these. Even those people I know who fiercely criticized Free Software can’t say that they don’t use any nowadays: at the very least the two major mobile operating systems have a number of Free Software components they rely upon, which means they are all users of Free Software, whether they want it or not. Which means I have no reason to try to convince you that Free Software is good. But good for whom?

RMS notoriously titled his essay anthology Free Software, Free Society, which implies that Free Software is good for everybody — I can agree up to a point, but at the same time I don’t think it’s as clear cut as he wants to make it sound.

I wrote about this before, but I want to write it down again. For the majority of people, Free Software is not important. You can argue that we should make it clear to them that it is important, but that sounds like a dogma, and I really don’t like dogmas.

Free Software supporters and users, for the most part, are geeks who are able to make use of the available source code of software to do… something with it. Sometimes it’s not even a matter of making changes to the code (sometimes making changes is not enough); for instance, you can reverse engineer an undocumented protocol in such a way that it can be reimplemented properly.

But what about my brother-in-law? He can’t code to save his life, and he has no interest in reverse engineering protocols. How is Free Software important to him and benefiting him? Truthfully, the answer is “not at all, directly”. He’s a Free Software user, because he has an Android phone and an iPad, but neither is entirely Free Software, and not even the more “libre” projects are making his life any easier. I’ll go back to that later.

In the post I already referenced, I pointed out how the availability of open-source online diabetes management software would make it possible for doctors to set up their own instances of these apps to give their patients access. But that only works in theory — in practice, no doctor would be able to set this up by themselves safely, and data protection laws would likely require them to hire an external contractor to set it up and maintain it. And in that case, what would be the difference between that and hiring a company that developed their own closed-source application, maybe provided as a cloud service?

Here’s the somewhat harsh truth: for most people who are not into IT and geekery, there is no direct advantage to Free Software, but there are multiple indirect ones, almost all of which revolve around one “simple” concept: Free Software makes a Free Market. And oh my, is this term loaded, ready to ignite a flamewar just by my using it. Particularly with a significant number of Free Software activists nowadays being pretty angry with capitalism as a concept, and that’s without going into the geek supremacists ready to poison the well for their own ego.

When Free Software is released, it empowers companies, freelancers, and individuals alike to re-use and improve it – I will not go into the discussion of which license allows what; I’ll just hand-wave this problem for now – which increases competition, which is generally good. Re-using the example of online diabetes management software, thanks to open source it’s no longer only the handful of big companies that have spent decades working on diabetes that can provide software to doctors, but any other company that wants to… that is, if they have the means to comply with data protection laws and similar hurdles.

Home routers are another area in which Free Software has clearly left a mark. From the WRT54G, which was effectively the most easily hackable router of its time, we have made significant progress with both OpenWRT and LEDE, to the point that using a “pure” (to a point) Free Software home router is not only possible but feasible, and there are even routers you can buy with Free Software firmware installed by default. But even here you can notice an important distinction: you have to buy the router with the firmware you want. You can’t just build it yourself, for obvious reasons.

And this is the important part for me. There is this geek myth that everyone should be a “maker” — I don’t agree with this, because not everyone wants to be one, so they should not be required to become one. I am totally sold on the idea that everybody should have the chance, and access to the information, to become one if they so want, and that’s why I also advocate reverse engineering and documenting whatever is not already freely accessible.

But for Free Software to be consumable by users, to improve society, you need a mediating agency, and that agency lies in the companies that provide valuable services to users. And by “valuable services” I do not mean solely services aimed at the elites, or even just at the part of the population living in big metropolises like SF or NYC. Not all the “Uber of X” companies that try to provide services you interact with online or through apps are soulless entities. Not all the people wanting to use these services are privileged (though I am).

And let me be clear: I don’t mean that Free Software has to be subject to companies and startups to make sense. I have indeed already complained about startups camouflaged as communities. In a healthy environment, the fact that some Free Software project is suitable to make a startup thrive is not the same as the startup needing Free Software contributions to stay alive. The distinction is hard to put down on paper, but I think most of you have seen how that turns out for projects like ownCloud.

Since I already complained about anti-corporatism in Free Software years ago, well before joining the current “big corporation” I work for, why am I going back to the topic, particularly as I can be seen as a controversial character because of that? Well, there are a few things that made me think. Only partially does this relate to the fact that I’ve been attacked a time or two for working for said corporation, and some of it is because I stopped contributing to some projects — although in all but one of those cases, the reason was simply my own energy to keep contributing, rather than a work-specific problem.

I have seen other colleagues still maintaining enough energy to keep contributing to open source while working at the same office; I have seen some dedicating enough of their work time to that as well. While of course this kind of job does limit the amount of time you can put into Free Software, I also think that a number of contributors who end up burning themselves out due to the hardships of paying the bills would probably be glad to exchange full-time Free Software work for part-time work if they were so lucky. So in this, I still count myself particularly privileged, but I embrace it, because if I can contribute less time but for a longer time, I think it’s worth it.

But while I do my best to keep improving Free Software, and contributing to the public good, including by documenting glucometer protocols, I hear people criticizing how the only open-source GSM stack is funded, even though Harald Welte is dedicating a lot of his personal time, and doing a lot of thankless work, while certain “freedom fighters” decide to cut corners and break licenses.

At the same time, despite GitHub not being my personal favourite company, particularly after the most recent allegations about its conduct, its Open Source Friday is a neat idea for convincing companies that rely on Free Software to do something — sometimes the something may just as well be writing documentation for software, because that’s possibly more important than coding! Given that one of the reasons I’ve read for attacking them is that they are not “pure enough”, because they do not open their core business application, I feel it’s a bit of a cheap shot, given that they are probably the company that has most empowered Free Software since the original SourceForge.

So what is it that I am suggesting (given people appear to expect me to have answers in addition to opinions)? If I have to give a suggestion to all Free Software contributors out there, it is to always consider what they can do to make sure that their contributions can be consumed at all. That includes, for instance, not using joke licenses and not discriminating against requests from companies, because those companies might have the resources to make your software successful too.

Which does not mean that you should just bend to whatever the first company passing by requests, nor that you should provide them with free labour. Again, it’s a game of balance: you can’t have a successful project that nobody uses, but you’re not there to work for free either. The right way is to provide something that is useful and used. And to make this compromise work, one of the best suggestions I can give to Free Software developers is to learn a bit about the theory of business.

Unfortunately, I have also seen way too many Free Software supporters (luckily, fewer contributors) who keep believing that words like “business” and “marketing” are the devil’s own, and do not even stop to think about what they actually mean — and that is a bad thing. Even when you don’t like some philosophy, or even more so when you don’t like some philosophy, the best way to fight it is to know it. So if you really think marketing is that evil, you may want to go and read a book about marketing: you’ll understand how it works, and how to defend yourself from its tactics.

Linux desktop and geek supremacists

I wrote about those whom I refer to as geek supremacists a few months ago, discussing the dangerous prank at FOSDEM — as it turns out, they overlap with the “Free Software Fundamentalists” I wrote about eight years ago. I found another one of them at the conference I was attending on the day this draft was written. I’m not going to name the conference, because it does not deserve to be associated with my negative sentiment here.

The geek supremacist in this case was the speaker of one of the talks. I did not sit through the whole talk (which also ran over its allotted time), because after the basic introduction so many alarm bells had gone off that I just had to leave and find something more interesting and useful to do. The final straw for me was when the speaker insisted that “Western values didn’t apply to [them]” and that they thus felt they could “liberate” hardware by mixing leaked sources of the proprietary OS with the pure, Free (if obsolete) OS for it. Not only is this clearly illegal (as they know and admitted), it’s also unethical (free software relies on licenses that are based on copyright law!) and toxic to the community.

But that’s not what I want to complain about here. The problem came a bit earlier than that. The speaker defined themselves as a “freedom fighter” (their words, not mine!), and insisted they couldn’t see why people are still using Windows and macOS when Linux and FreeBSD are perfectly good options. I take big issue with this.

Now, having spent about half my life using, writing and contributing to FLOSS, you can’t possibly expect me to just say that Linux on the desktop is a waste of time. But at the same time I’m not delusional, and I know there are plenty of reasons to not use Linux on the desktop.

While there have been huge improvements in the past fifteen years, and SuSE or Ubuntu are somewhat usable as desktop systems, there is still no comparison with macOS or Windows, particularly in terms of applications working out of the box and support from third parties. There are plenty of applications that don’t work on Linux, and even if you can replace them, sometimes that is not acceptable, because you depend on some external ecosystem.

For instance, when I was working as a sysadmin for hire, none of my customers could possibly have used a pure-Linux environment. Most of them were Windows-only companies, but even the one that ran a mixed environment (the print shop I wrote about before) could not do without macOS and Windows. For one, the macOS environment was their primary workspace: Adobe software is not available for Linux, nor is QuarkXPress, nor the Xerox print queue software (ironic, since it interfaces with a Linux system on board the printer, of course). The accounting software, which handled everything from ordering to invoicing to tax reporting, was developed by a local company that had no intention of building a version for Linux, and because tax regulations in Italy are… peculiar, no off-the-shelf open source software is available for that. As it happens, they also needed a Mandriva workstation (no other distribution would do), because the software for their large-format inkjet printer was only available for either that or PPC Mac OS X, and getting it running on a modern PC with the former is significantly less expensive than trying to recover the latter.

(To make my life more complicated, the software they used for that printer was developed by Caldera. No, not the company acquired by SCO, but Caldera Graphics, a French company completely unrelated to the other tree of companies, which was recently acquired again. It was very confusing when the people at the shop told me that they had a “Caldera box running Linux”.)

Of course, there are people who can run a Linux-only shop, or who can run only Linux on their systems, personal or not, because they do not need to depend on external ecosystems. More power to them, and thanks for their support in improving desktop features (because they are helping, right?). But they are clearly not the majority of the population, as shown by the fact that people are indeed vastly using Windows, macOS, Android and iOS.

Now, this does not mean that Linux on the desktop is dead, or will never happen. It just means that it’ll take quite a while longer, and in the meantime, all the work on Linux on the desktop is likely to benefit other endeavours too. LibreOffice and KDE are clearly examples of “Linux on the desktop”, but at the same time they give Free Software visibility (and energy, to a point) even when used by people on Windows. The same goes for VLC, Firefox, Chrome, and a long list of other FLOSS software that many people rely upon, sometimes without even realising it is Free Software. But even that is not why I’m particularly angry after encountering this geek supremacist.

The problem is that, again in the introduction to the talk (which was about mobile phones), they said they didn’t expect things to have changed significantly in proprietary phones over the past ten years. Ten years is forever in computing, let alone mobile! Ten years ago, the iPhone had just launched, and it still did not have an SDK or apps! Ten years ago the state of the art in smartphones you could develop apps for was Symbian! And this is not the first time I hear something like this.

A lot of people in the FLOSS community appear to have closed their eyes to what the proprietary software environment has been doing, in every area. Because «Free Software works for me, so it has to be working for everyone!» And that is dangerous from multiple points of view. Not only is this shortsightedness what, in my opinion, is making distributions irrelevant, but it’s also making Linux on the desktop worse than Windows, and it is why I don’t expect the FSF will come up with a usable mobile phone any time soon.

Free desktop environments (KDE and GNOME, effectively) have spent a lot of time in the past ten (and more) years first trying to catch up to Windows, then to Mac, then trying to build new paradigms, with mixed results. Some people loved them, some people hated them, but at least they tried. And, ignoring most of the breakages, or the fact that they still insist on semantics nobody really cares about (like KDE’s “Activities”, or the fact that KDE-the-desktop is no more and KDE is now a community that includes projects with nothing to do with desktops, or barely even with Linux, but let’s not go there), a modern KDE system is fairly close in usability to Windows… 7. There is still a lot of catching up to do, particularly around security, but I would at least say that, for the most part, the direction is still valid.

But to keep going, to catch up, and if possible to go beyond those limits, you also need to accept that there are reasons why people are using proprietary software, and it’s not just a matter of lock-in, or the (disappointingly horrible) idea that people using Windows are “sheeple” and that you hold the universal truth. Which is what pissed me off during that talk.

I could also add another note here about the idea that non-smart phones are a perfectly valid option nowadays. As I wrote already, there are plenty of reasons why a smartphone should not be considered a luxury. For many people, a smartphone is the only access they have to email and the Internet at large. Or the only safe way to access their bank account, or other fundamental services they rely upon. Being able to use a different device for those services, and getting by with a ten-year-old dumbphone, is a privilege, not a demonstration that there is no need for smartphones.

Also, I sometimes really wonder if these people have any friends at all. I don’t have many friends myself, but if I was stuck on a dumbphone, only able to receive calls and SMS, I would probably have lost those few I have as well. Because even with European, non-silly tariffs on SMS, sending SMS is still inconvenient, and most people will use WhatsApp, Messenger, Telegram or Viber to communicate with their friends (and most of these applications are also more secure than SMS). That may be perfectly fine; if you don’t want to be easily reachable, that is a very easy way to achieve it. But it’s once again a privilege, because it means either that you don’t have people who would want to contact you in other ways, or that you can afford to limit your social contacts to people who accept your quirk; and once again, a freelancer could never do that.