Diabetes Software: the importance of documented protocols

You may remember that just last week I was excited to announce that I had more work planned and lined up for my glucometer utilities, one of which was supporting the OneTouch Verio IQ, a slightly older meter that is still sold and used in many countries, but for which no protocol specification has been released.

In the issue I linked above you can find an interesting problem: LifeScan discontinued their Diabetes Management Software, and removed it from their website. Instead, they suggest you get one of their Bluetooth meters and use that with their software. While in general the idea of upgrading a meter is sane, the fact that they decided to discontinue the old software without providing protocols is at the very least annoying.

This shows the importance of having open source tools that can be kept alive as long as needed, because there will be people out there who still rely on their OneTouch Verio IQ, or even on the OneTouch Ultra Easy, which was served by the same software and is still being sold in the US. Luckily, they at least used to publish the Ultra Easy protocol specs, and those are still available on the Internet at large if you search for them (I do have a copy, and I can rephrase it into a protocol specification if I find that’s needed).

On the bright side, the Tidepool project (of which I wrote before) has a driver for the Verio IQ. It’s not a particularly good driver, as I found out (I’ll get to that later), but it’s a starting point. It made me notice that the protocol is almost an in-between of the Ultra Easy and the Verio 2015, which I had already reverse engineered.

Of course I also managed to find a copy of the LifeScan software on a mostly shady website and a copy of the “cable drivers” package from the Middle East and Africa website of LifeScan, which still has the website design from five years ago. This is good because the latter package is the one that installs kernel drivers on Windows, while the former only contains userland software, which I can trust a little more.

Comparing the USB trace I got from the software with the commands implemented in the Tidepool driver showed me a few interesting bits of information. The first is that the first byte of commands on the Verio devices is not actually fixed, but can be chosen among a few values, as the Windows software and the Tidepool driver use different bytes (and with this I managed to simplify one corner case in the Verio 2015!). The second is that the Tidepool driver does not extract all the information it should! In particular, the device allows before/after-meal marking, but they discard the byte before getting to it. Then again, they don’t seem to expose that data even in the Ultra 2 driver, so it may be intentional.
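
To make the meal-marking point concrete, here is a minimal sketch using Construct (the library I rewrote these drivers with, as mentioned below) of how such a record could be parsed without dropping that byte. The field layout, names and sizes are my assumptions for illustration, not the documented Verio IQ record format.

```python
# Hedged sketch: field layout, names and sizes are assumptions for
# illustration, not the actual Verio IQ record format.
from construct import Byte, Enum, Int16ul, Int32ul, Struct

_READING = Struct(
    'timestamp' / Int32ul,   # device-epoch seconds (assumed)
    'meal_flag' / Enum(      # the byte the Tidepool driver skips over
        Byte, no_mark=0, before_meal=1, after_meal=2),
    'value' / Int16ul,       # glucose value, mg/dL (assumed)
)

# Example record: timestamp 0, flagged "before meal", value 90 mg/dL.
reading = _READING.parse(b'\x00\x00\x00\x00\x01\x5a\x00')
print(reading.meal_flag, reading.value)
```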

A bit more concerning is that the driver doesn’t verify that a command returned a success status, but rather discards the first two bytes of every response. Thankfully it’s very easy for me to check that.
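
For illustration, this is roughly the kind of check I mean; a sketch assuming (not from any spec) that the first byte carries an ACK-style success value and the second is reserved:

```python
# Sketch only: the 0x06 "success" value and the reserved second byte are
# assumptions for illustration, not the documented status format.
def strip_status(response: bytes) -> bytes:
    if len(response) < 2:
        raise ValueError('response too short to carry a status')
    status = response[0]
    if status != 0x06:  # hypothetical success/ACK value
        raise ValueError(f'command failed with status {status:#04x}')
    return response[2:]  # the payload follows the two status bytes
```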

On the other hand, reading through the Tidepool driver (which I have to assume was developed with access to the LifeScan specifications, under NDA) I could identify two flaws in my own code. The first was not realizing that the packet format of the UltraEasy and the Verio 2015 is not subtly different, as I had thought, but almost identical, except that the link-control byte in both Verio models is unused and kept at 0. The second was that I’m currently not correctly dropping control-solution readings from the Verio 2015! I should find a way to get hold of the control solution for my models at the pharmacy and make sure this gets tested.
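
In Construct terms, the shared framing could then be expressed once for all three devices. Again, a sketch under stated assumptions: the STX/ETX markers, the length accounting and the 16-bit checksum are illustrative, not the confirmed wire format.

```python
# Hedged sketch of a shared link-layer frame; marker bytes, length
# accounting and checksum width are illustrative assumptions.
from construct import Byte, Bytes, Const, Int16ul, Struct

_FRAME = Struct(
    'stx' / Const(0x02, Byte),
    'length' / Byte,        # full frame length, assumed to include framing
    'link_control' / Byte,  # meaningful on the UltraEasy; fixed 0 on Verio
    'payload' / Bytes(lambda this: this.length - 6),
    'etx' / Const(0x03, Byte),
    'checksum' / Int16ul,
)

# Example frame: 2-byte payload 0xAA 0xBB, checksum 0x1234.
frame = _FRAME.parse(bytes([0x02, 0x08, 0x00, 0xAA, 0xBB, 0x03, 0x34, 0x12]))
print(frame.payload)
```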

Oh yeah, and the Tidepool driver does not do anything to get or set the date and time; thankfully the commands are literally the same as in the Verio 2015, so that part was an actual copy-paste of code. I should probably tidy it up a bit, but I now have a two-tier protocol system: the base packet structure is shared between the UltraEasy, Verio IQ and Verio 2015; some of the commands are shared between the UltraEasy and Verio IQ, and more of them between the Verio IQ and the Verio 2015.
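
In code, I would imagine that tiering looking something like the following sketch; the class names and the exact split are illustrative of the idea, not the actual driver layout.

```python
# Hypothetical layout of the two-tier sharing; names are illustrative.
class LinkLayer:
    """Base packet framing shared by UltraEasy, Verio IQ and Verio 2015."""

class UltraEasySharedCommands:
    """Commands shared between the UltraEasy and the Verio IQ."""

class VerioSharedCommands:
    """Commands (incl. get/set time) shared by Verio IQ and Verio 2015."""

class UltraEasy(LinkLayer, UltraEasySharedCommands): ...
class VerioIQ(LinkLayer, UltraEasySharedCommands, VerioSharedCommands): ...
class Verio2015(LinkLayer, VerioSharedCommands): ...
```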

You can see now why I’ve been disheartened to hear that the development of drivers, even for open source software, is done through closed protocol specifications that cannot be published (nor can the drivers be thoroughly commented). Since Tidepool is not actually using all of the information, there is no way to tell what certain bytes of the responses represent. And unless I get access to all the possible variants of the information, I can’t tell what some bytes that look constant to me actually represent. Indeed, since the Verio 2015 does not have meal information, I assumed that the values were 32-bit, until I got a report of invalid data on another model which shares the same protocol and driver. This is why I am tempted to build “virtual” fakes of these devices with Facedancer, to feed variants of the data to the original software and see how they are represented there.

On the bright side, I feel proud of myself (maybe a little too much) for having spent the time to rewrite those two drivers with Construct while at 34C3 and afterwards. If I hadn’t refactored the code before looking at the Verio IQ, I wouldn’t have noticed the similarities so clearly, and likely wouldn’t have come to the conclusion that they share a similar protocol. And there is no way I could have copy-pasted between the drivers as easily as I did.

Public Money, Public Code

Imagine that all publicly funded software were under a free license: Everybody would be able to use, study, share and improve it.

I have been waiting for Free Software Foundation Europe to launch the Public Money, Public Code campaign for almost a year now, since Matthias first told me it was in the works. I have been arguing the same point, although not quite as organized, since back in 2009, when I complained about how the administration of Venice commissioned a GIS application from a company they directly own.

For those who have not seen the campaign yet, the idea is simple: software built with public money (that is, commissioned and paid for by public agencies) should be licensed under a FLOSS license, to make it public code. I like this idea and will support it fully. I even rejoined the Fellowship!

The timing of this campaign ended up resonating with a post on infrastructure projects and their costs, which I find particularly interesting and useful to point out. Unlike the article that is deep-linked there, which lamented the costs associated with one such project, this post focuses on pointing out how that money actually needs to be spent, because for the most part off-the-shelf Free Software is not really up to the task of complex infrastructure projects.

You may think the post I linked is overly critical of Free Software, and that it’s just a little rough around the edges and everything is okay once you spend some time on it. But that’s exactly what the article is saying! Free Software is a great baseline to build complex infrastructure on top of. This is what all the Cloud companies do, this is what even Microsoft has been doing in the past few years, and it is reasonable to expect most for-profit projects would do the same, for a simple reason: you don’t want to spend money reinventing the wheel when you can charge for designing an innovative engine — which is quite a simplistic view of course, as sometimes you can indeed invent a more efficient wheel, but that’s a different topic.

Why am I bringing this topic up together with the FSFE campaign? Because I think this is exactly what we should be asking from our governments and public agencies, and the article I linked shows exactly why!

You can’t take off-the-shelf FLOSS packages and have them run a whole infrastructure, because they are usually unpolished, might not scale, or might require significant work to bring them up to what the project requires. You will have to spend money to do that, and maybe in some cases it will be cheaper not to use existing FLOSS projects at all, and to build your own new, innovative wheel. So publicly funded projects need money to produce results; we should not complain about the cost1, but rather demand that the money spent actually produces something that will serve the public in all possible ways: not only through the objective of the project, but also through any byproduct of it, which includes the source code.

Most of the products funded with public money are not particularly useful for individuals, or for most for-profit enterprises, but their byproducts and improvements may very well be. For example, in the (Italian) post I wrote in 2009, I was complaining about a GIS application that was designed to report potholes and other roadwork problems. In the abstract, this is a way to collect and query points of interest (POI), which is the base of many other current services, from review sites to applications such as Field Trip.

But do we actually care? Sure, by making the code of public projects available, you may now be indirectly funding private companies that can reuse that code, and thus be jumpstarted into having applications that would otherwise cost time or money to build from scratch. On the other hand, this is what Free Software has always been about: indeed, Linux, the GNU libraries and tools, Python, Ruby, and all those tools out there are nothing less than a full kit to quickly start projects that a long time ago would have taken a lot of money or a lot of time to get going.

You could actually consider the software byproducts of these projects similarly to the public infrastructure that we probably all take for granted: roads, power distribution, communication, and so on. Businesses couldn’t exist without all of this infrastructure, and while it is possible for a private enterprise to set out and build all the infrastructure themselves (roads, power lines, fiber), we don’t expect them to do so. Instead we accept that we want more enterprises, because they bring more jobs and more value, and the public investment is part of that.

I actually fear that the reason a number of people may disagree with this campaign is rooted in localism — as I said before, I’m a globalist. Having met many people with such ideas, I can hear them in my mind complaining that, to take again the example of the IRIS system in Venice, the Venetians shouldn’t have to pay for something and then give it away for free to Palermo. It’s a strawman, but only because I replaced the city that they complained about when I talked about my idea those eight years ago.

This argument may make sense if you really care about local money being spent locally, and don’t count on any higher-order funding. But I think that public money is public, and I don’t really care if money from Venice is spent to help report potholes in Civitella del Tronto. Actually, I think that cities where the median disposable income is higher have a duty to help provide infrastructure for the smaller, poorer cities, at the very least in their immediate vicinity, but beyond it too.

Unfortunately “public money” may not always be so, even if it appears that way. So I’m not sure that, even if a regulation were passed for publicly funded software development to be released as FLOSS, we’d get a lot in the form of public transport infrastructure being open sourced. I would love for it to happen though: we’d more easily get federated infrastructure if systems shared the same backend, and if you knew how the system worked you could actually build tools around it, for instance integrating Open Street Map directly with the transport system itself. But I fear this is all wishful thinking, and it won’t happen in my lifetime.

There is also another interesting point to make here, which I may expand upon, for other contexts, later on. As I said above, I’m all for requiring software developed with public money to be released to the public under a FLOSS-compatible license. Particularly one that allows using other FLOSS components, and re-using even parts of the released code in bigger projects. This does not mean that everybody should have a say in what’s going on with that code.

While it makes perfect sense to be able to fix bugs and incompatibilities in websites you need to use as part of your life as a citizen (in the case of the Venetian GIS, I would probably have liked to fix the way they identified the IP address a request came from), adding new features may actually not be in line with the roadmap of the project itself. Particularly if the public money is tight rather than lavish, I would surely prefer that they focused on delivering what the project needs and just dropped the sources out under compatible licenses, without trying to create a community around them. While the latter would be nice to have, it should not steal focus from the important part: a lot of this code is currently one-off, and is not engineered to be re-used or extended.

Of course, in the long run, if public software is already available as open source, there will be more and more situations where solving the same problem again becomes easier, particularly if an option is added here, a constant string becomes a configurable value there, or translations become possible at all. And in that case, why not have these as features of a single repository, rather than a lot of separate forks?

But all of this should really be secondary, in my opinion. Let’s focus on getting those sources: they are important, they matter, and they can make a difference. Building communities around this will take time. And to be honest, even making these projects secure will take time. I’m fairly sure that in many cases right now, if you take a look at the software running public services, you can find backdoors, intentional or not, and even very simple security issues. While the “many eyes” idea is easily disproved, it’s also true that for the most part those projects cut corners, and are very difficult to make secure to begin with.

I want to believe we can do at least this bit.


  1. Okay, so there are cases of artificially inflated costs due to friends-of-friends. Those are complicated issues, and I’ll leave them to the experts. We should still not be complaining that these projects don’t appear for free.

The breadwinner product

This may feel like a bit of a random post, as business and economics are not my areas of expertise and I usually do my best not to talk about stuff I don’t know. But I have seen complete disregard for this concept lately, and I thought it would be good, before I talk about it further, to define here what a “breadwinner product” is, from my point of view.

The term breadwinner is generally used to refer to the primary income-earner in a household. While I have not often seen it extended to products and services in companies, I think it should be fairly obvious how the extension would work.

In a world of startups there are still plenty of companies that have a real “breadwinner product”, even when acting as startups. This was the case, for instance, for the company I used to contract for in Los Angeles: they had been in business for a number of years with a different, barely related product, and I was contracting on their new project.

I think it’s important to keep this term in mind, because without this concept it’s hard to understand a lot of business decisions of many companies, why startups such as Revolut are “sweeping up the market”, and so on.

This is something that came up on Twitter a time or two: a significant number of geeks appear to wilfully ignore the needs of a business, treat marketing concepts as words of the devil, and refuse to consider whether decisions made business sense; instead they judge decisions either purely on technical merits, or even just on their own direct interests. Now, it is true that technical merits can make good business sense, but sometimes there are very good long-term-vision reasons that people don’t appreciate from a purely technical point of view.

In particular, sometimes it’s hard to understand why a service by a company that may appear to be a startup is based on “old” technology; it may just be that it is actually a “traditional” company trying to pivot into a different market, or a different brand or level of service. And when that happens, there’s at least some gravitational pull to keep the stack in line with the previous technology. Particularly if the new service can piggyback on the old one for a while, in terms of revenue, technology and staff.

So in the case of the company I referred to above, when I started contracting they were already providing a separate service built on truly legacy technology, running on a stack of bare-metal servers with RedHat 5. Since the new service had two components, one of them ended up being based on the same stack, and the other one (which I was setting up) ended up based on Gentoo Linux with containers instead, the same way the Tinderbox used to be run. If you wonder why one would run two stacks this separate, the answer is that messing with the breadwinner product is, most of the time, a risky endeavour, and unless you have a very good reason to do so, you don’t.

So even though I was effectively building a new architecture from scratch, setting up new servers with proper monitoring (based on Munin and Icinga) and Puppet for configuration management, I was not allowed to touch the old service. And rightly so: it was definitely brittle, and touching it could actually have lost money, as that service was running in production, while the new one was not ready yet, and its few users could be told about maintenance windows in advance.

There is often a tipping point though, when the cost of running a legacy service is higher than the revenue it brings in. For that company, it happened right as I was leaving to start working at my current place of work. The owner, though, was more business-savvy than many other people I have met before and since, and was already planning how to cut some expenses. Indeed, the last thing I helped that company with was setting up a single1 bare-metal server with multiple containers to virtualise their formerly fully bare-metal hardware, and bringing it physically to a new location (Fremont, CA) to cut hosting costs.

The more money the breadwinner service makes, and the less the company experiments with alternative approaches to cut future costs, build up new services, or open new market opportunities, the harder working for such a company becomes. Of all the things I could complain about regarding my boss at the time, the ability to deal with business details was not one of them. Actually, I think that despite leaving me in quite a bad predicament afterwards, he did end up teaching me quite a bit of the nitty-gritty details of doing business, particularly US-style — and I may not entirely like those either.

But all in all, I think this is something lots more people in tech should learn about. Because I still maintain that Free Software can only be marketed by businesses, and to have your project cater to business users without selling its soul, you need to be able to tell what they need and how they need it provided.


  1. Okay, actually a bit more than one: a single machine ran the production environment for the legacy service, and acted as warm backup for the new service; another machine ran the production environment for the new service, and acted as warm backup for the legacy one. A pair of the older bare-metal servers acted as database backends for both systems.

Fake “Free” Software hurts us all

Brian Krebs, the famous information security reporter, posted today (well, at the time of writing) an article on security issues with the gSOAP library. I found this particularly interesting because I remembered seeing the name before. Indeed, Krebs links to the project’s SourceForge page, which is something to see. It has a multi-screen-long list of “features”, including a significant number of variants and options for the protocol, which ends with the following:

Licenses: GPLv2, gSOAP public license (for engine and plugins), commercial non-GPL license available upon request (software is 100% in-house developed, no third-party GPL contributions included)

Ah, there we go. “Free” Software.

You may remember my post from just a few days ago about the fact that Free Software, to actually make a difference in society the way Stallman prophesies, needs a mediating agency, and at least right now that agency is companies and the free market. I argued that making your software usable by companies that provide services or devices is good, as it makes for healthy, usable and used projects, increases competition and reduces the costs of developing new solutions. So is gSOAP the counterexample? I really don’t think so. gSOAP is, for me, a perfect match for my previous rant about startups and fake communities.

The project at first looks like a poster child for FSF-compatible software, since it’s licensed under GPL-2, and it clearly has no CLA (Contributor License Agreement), though the company provides a “way out” of the GPL-2 obligations by selling a commercial license. This is not, generally speaking, a bad thing. I have seen many examples, including in the open multimedia cliques, of using this trick to foster the development of open solutions while making money to pay salaries or build new Free Software companies.

But generally, those projects allow third-party contributions under a CLA or similar agreement that allows the “main” copyright holders to still issue proprietary licenses, or enforce their own rights. You can for instance see what Google’s open source projects do about it. Among other things, this contribution method also disallows re-licensing the software, as that requires agreement from all copyright holders. In the case of gSOAP, none of this applies: as they say, their software is «100% in-house developed».

They are clearly proud of this situation, because it gives them all the power: if you want to use gSOAP without paying, you’re tied to the GPL, which may or may not become a compliance problem. And if you happen to violate the license, they hold all the copyright needed to sue you, or to just ask you to settle. It’s a perfect situation for copyright trolls.

But, because of this, even though the software is on paper “Free Software” according to the FSF, it behaves like a piece of proprietary software. Sure, you can fork the library and build your own GPL-2 version instead, as you have the freedom to fork, but that does not make it a community, or a real Free Software project. And it also means you can’t contribute patches to it to make it more secure, safer, or better for society. You could report bugs, including security bugs, but what’s the likelihood that you would actually care to do so, given that one of the first things they make clear on their “project” page is that they are not interested in your contributions? And we can clearly see that this particular library could have used some care from the community, given its widespread adoption.

What this means to me is that gSOAP is a clear example that just releasing something under GPL-2 is not enough to make it Free Software, and that even “Free” Software released under GPL-2 can be detrimental to society. It also touches on the other topic I brought up recently: that you need to strike a balance between making code usable to companies (because they will use it, and thus very likely help you extend or support your project) and keeping it a real community, a real project. Clearly in this case the balance was totally off. If gSOAP were available under a more liberal license, even LGPL-2, they would probably lose a lot in license fees, as for most companies just using it as a shared library would be enough. But it would then allow developers, both hobbyists and those working for companies, to contribute fixes so that they trickle down to everybody’s devices.

Since I do not know the terms of the proprietary license the company behind gSOAP requires its customers to agree to, I cannot say whether those companies have any incentive to provide fixes back to the project; but if they did, they clearly wouldn’t be contributing them under GPL-2. What I can say is that for the companies I worked for in the past, receiving the library under GPL-2 and being able to contribute fixes back would have been a better deal. The main reason is that, as much as a library like this can be critical to connected devices, it does not by itself contain any of the business logic. And there are ways around linking GPL-2 code into the business logic application, usually involving some kind of RPC between the logic and the frontend. And being able to contribute the changes back to the open source project would spare them from maintaining a long set of patches to sync release after release — I had the unfortunate experience of having to deal with something in that space before.
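
To illustrate the kind of separation I mean, here is a minimal sketch: the GPL-2 code lives in its own helper executable, and the proprietary business logic talks to it over a pipe instead of linking against it. The helper name and the JSON message shape are assumptions made up for the example, not an actual gSOAP integration.

```python
# Sketch only: "gsoap_helper" is a hypothetical GPL-2 executable wrapping
# the library; the JSON protocol over stdin/stdout is invented for
# illustration.
import json
import subprocess

def remote_call(method: str, params: dict) -> dict:
    """Run one request through the separately-licensed helper process."""
    result = subprocess.run(
        ['./gsoap_helper'],
        input=json.dumps({'method': method, 'params': params}),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)
```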

My advice is, once again, to try figuring out what you want to achieve by releasing a piece of software. Are you trying to make a statement, sticking it to the man, by forcing companies not to use your software? Are you trying to make money by forcing companies interested in using your work to buy a special license from you? Or are you contributing to Free Software because you want to improve society? In the latter case, I would suggest you consider a liberal license, to make sure that your code (which can be better than proprietary, closed-source software!) is actually available to those mediating agencies that transform geeky code into usable gadgets and services.

I know, it would be oh so nice if, just by releasing something awesome under GPL-2, you could force every company to release all their firmware as Free Software as well, but that’s not going to happen. Instead, if they feel they have to, they will look for worse alternatives, or build their own (even worse) alternatives, and keep them to themselves, and we’ll all be the poorer for it. So if in doubt, consider the MIT or Apache 2 licenses. The latter in particular appears to be gaining more and more traction, as both Google and Microsoft seem fond of it, and Facebook’s alternative is tricky.

Some of you may consider me a bit of a hypocrite, since I released a number of projects under more restrictive Free licenses (including AGPL-3!) before I came to the conclusion that that’s actually bad for society. Or rather, before I came back to that idea, as I was convinced of it back in 2004, when I wrote (in Italian) about why I think MySQL is bad for Free Software (I should probably translate that post, just for reference). But what I decided is that I’ll now do my best to re-license the projects for which I own the full copyright under the Apache 2 license. This may take a little while until I figure out all the right steps, but I feel it is the right thing to do.

Who reaps the benefits of Free Software?

I feel silly having to start this post by boasting about my own achievements, but my previous post has stirred up a lot of comments (outside my blog’s own, that is), and a number of those could be summed up with “I don’t know this guy, he’s of course new, has no idea what he’s talking about”. So let’s start with the fact that I’ve been involved in Free Software for just about half of my life at this point.

And while I have not tied my name to any particular project, I have contributed to a significant number of them by now. I’m not an “ideas man”, so you can count on me to help figure out problems and fix bugs, but I would probably start hiding in a corner if I built up a cult of personality; unfortunately, that appears to be what many other contributors to Free Software have done over the years, and it is what gives weight to their names. I don’t have such weight, but you’re probably better off googling my name before deciding I have no stake in the fire.

Introduction over, let’s get to the meat of this post.

If you’re reading this post, it is likely that you’re a Free Software supporter, creator, user, or a combination of these. Even those people who I know fiercely criticized Free Software can’t say that they don’t use any nowadays: at the very least, the two major mobile operating systems have a number of Free Software components they rely upon, which makes all their users Free Software users, whether they want it or not. Which means I have no reason to try to convince you that Free Software is good. But good for whom?

RMS notoriously titled his essay anthology Free Software, Free Society, which implies that Free Software is good for everybody — I can agree up to a point, but at the same time I don’t think it’s as clear-cut as he wants to make it sound.

I wrote about this before, but I want to write it down again. For the majority of people, Free Software is not important. You can argue that we should make it clear to them that it is important, but that sounds like a dogma, and I really don’t like dogmas.

Free Software supporters and users, for the most part, are geeks who are able to make use of available source code to do… something with it. Sometimes it’s not even a matter of making changes to the code; for instance, you can reverse engineer an undocumented protocol so that it can be reimplemented properly.

But what about my brother-in-law? He can’t code to save his life, and he has no interest in reverse engineering protocols. How is Free Software important to him and benefiting him? Truthfully, the answer is “not at all, directly”. He’s a Free Software user, because he has an Android phone and an iPad, but neither is entirely Free Software, and not even the more “libre” projects are making his life any easier. I’ll come back to that later.

In the post I already referenced, I pointed out how the availability of open-source online diabetes management software would make it possible for doctors to set up their own instances of these apps to give their patients access. But that only works in theory — in practice, no doctor would be able to set this up safely by themselves, and data protection laws would likely require them to hire an external contractor to set it up and maintain it. And in that case, what would be the difference between that and hiring a company that developed its own closed-source application, maybe provided as a cloud service?

Here’s the somewhat harsh truth: for most people who are not into IT and geekery, there is no direct advantage to Free Software, but there are multiple indirect ones, almost all of which revolve around one “simple” concept: Free Software makes a Free Market. And oh my, is this term loaded, ready to ignite a flame just by my using it. Particularly with a significant number of Free Software activists nowadays being pretty angry at capitalism as a concept, and that’s without going into the geek supremacists ready to poison the well for their own ego.

When Free Software is released, it empowers companies, freelancers, and individuals alike to re-use and improve it – I will not go into the discussion of which license allows what; I’ll just hand-wave that problem for now – which increases competition, which is generally good. Re-using the example of online diabetes management software: thanks to open source, it’s no longer only the handful of big companies that spent decades working on diabetes that can provide software to doctors, but any other company that wants to… that is, if they have the means to comply with data protection laws and similar hurdles.

Home routers are another area in which Free Software has clearly left a mark. From the WRT54G, which was effectively the most easily hackable router of its time, we made significant progress with both OpenWRT and LEDE, to the point that using a “pure” (to a point) Free Software home router is not only possible but feasible, and there are even routers you can buy with Free Software firmware installed by default. But even here you can notice an important distinction: you have to buy the router with the firmware you want. You can’t just build it yourself, for obvious reasons.

And this is the important part for me. There is this geek myth that everyone should be a “maker” — I don’t agree with this, because not everyone wants to be one, so no one should be required to become one. I am totally sold on everybody having the chance, and the access to information, to become one if they so want, and that’s why I also advocate reverse engineering and documenting whatever is not already freely accessible.

But for Free Software to be consumable by users, to improve society, you need a mediating agency, and that agency lies in the companies that provide valuable services to users. And by “valuable services” I do not mean solely services aimed at the elites, or even just at that part of the population living in big metropolises like SF or NYC. Not all the “ubers of” companies that try to provide services you can interact with online or through apps are soulless entities. Not all the people wanting to use these services are privileged (though I am).

And let me be clear that I don’t mean Free Software has to be subject to companies and startups to make sense. I have indeed already complained about startups camouflaged as communities. In a healthy environment, the fact that some Free Software project is suitable for a startup to thrive on is not the same as the startup needing Free Software contributions to stay alive. The distinction is hard to put down on paper, but I think most of you have seen how that turned out for projects like ownCloud.

Since I already complained about anti-corporatism in Free Software years ago, well before joining the current “big corporation” I work for, why am I going back to the topic, particularly as I can be seen as a controversial character because of it? Well, a few things made me think. Only partially does this relate to the fact that I’ve been attacked a time or two for working for said corporation; some of it is because I stopped contributing to some projects — although in all but one of those cases, the reason was simply my own energy for contributing, rather than any work-specific problem.

I have seen colleagues maintain enough energy to keep contributing to open source while working at the same office, and some dedicating part of their work time to it as well. While this kind of job does limit the amount of time you can put into Free Software, I also think that a number of contributors who end up burning out under the hardship of paying the bills would gladly exchange full-time Free Software work for part-time work, if they were so lucky. So in this I count myself particularly privileged, and I embrace it: if I can contribute less time, but for a longer time, I think it’s worth it.

But while I do my best to keep improving Free Software, and to contribute to the public good, including by documenting glucometer protocols, I hear people criticizing how the only open-source GSM stack is funded, even though Harald Welte dedicates a lot of his personal time to it, doing a lot of thankless work, while certain “freedom fighters” decide to cut corners and break licenses.

At the same time, despite GitHub not being my personal favourite company, particularly after the most recent allegations about its conduct, its Open Source Friday is a neat idea to convince companies that rely on Free Software to do something — sometimes the something may just as well be writing documentation, which is possibly more important than coding! Given that one of the reasons I’ve read for attacking them is that they are not “pure enough”, because they do not open their core business application, I feel it’s a bit of a cheap shot, seeing as they are probably the company that has most empowered Free Software since the original SourceForge.

So what is it that I am suggesting (given people appear to expect me to have answers in addition to opinions)? If I have to give one suggestion to all Free Software contributors out there, it is to always consider what they can do to make sure that their contributions can be consumed at all. That includes, for instance, not using joke licenses and not discriminating against requests from companies, because those companies might have the resources to make your software successful too.

Which does not mean that you should just bend to whatever the first company passing by requests of you, nor that you should provide them with free labour. Again, it’s a game of balance: you can’t have a successful project that nobody uses, but you’re not there to work for free either. The right way is to provide something that is both useful and used. And to make this compromise work, one of the best suggestions I can give Free Software developers is to learn a bit about the theory of business.

Unfortunately, I have also seen way too many Free Software supporters (luckily, fewer contributors) who keep believing that words like “business” and “marketing” are the devil’s own, and who never stop to think about what they actually mean — and that is a bad thing. Even when you don’t like some philosophy, or even more so when you don’t like it, the best way to fight it is to know it. So if you really think marketing is that evil, you may want to go and read a book about marketing: you’ll understand how it works, and how to defend yourself from its tactics.

Linux desktop and geek supremacists

I wrote about those I refer to as geek supremacists a few months ago, discussing the dangerous prank at FOSDEM — as it turns out, they overlap with the “Free Software Fundamentalists” I wrote about eight years ago. I found another one of them at the conference I was attending on the day this draft was written. I’m not going to name the conference, because it does not deserve to be associated with my negative sentiment here.

The geek supremacist in this case was the speaker of one of the talks. I did not sit through the whole thing (which also ran over its allotted time), because after the basic introduction I was so ticked off by so many alarm bells that I just had to leave and find something more interesting and useful to do. The final straw for me was when the speaker insisted that “Western values didn’t apply to [them]”, and that they thus felt free to “liberate” hardware by mixing leaked sources of the proprietary OS with the pure Free (and obsolete) OS for it. Not only is this clearly illegal (as they knew and admitted), it is also unethical (free software relies on licenses that are based on copyright law!) and toxic to the community.

But that’s not what I want to complain about here. The problem came a bit earlier than that. The speaker defined themselves as a “freedom fighter” (their words, not mine!), and insisted they couldn’t see why people still use Windows and macOS when Linux and FreeBSD are perfectly good options. I take big issue with this.

Now, having spent about half my life using, writing and contributing to FLOSS, you can’t possibly expect me to just say that Linux on the desktop is a waste of time. But at the same time, I’m not delusional, and I know there are plenty of reasons not to use Linux on the desktop.

While there have been huge improvements in the past fifteen years, and SuSE or Ubuntu are somewhat usable as desktop environments, there is still no comparison with macOS or Windows, particularly in terms of applications working out of the box and support from third parties. There are plenty of applications that don’t work on Linux, and even if you can replace them, sometimes that is not acceptable, because you depend on some external ecosystem.

For instance, when I was working as a sysadmin for hire, none of my customers could possibly have used a pure-Linux environment. Most of them were Windows-only companies, but even the one that ran a mixed environment (the print shop I wrote about before) could not do without macOS and Windows. For one thing, the macOS environment was their primary workspace: Adobe software is not available for Linux, nor is QuarkXPress, nor the Xerox print queue software (ironic, since it interfaces with a Linux system on board the printer, of course). The accounting software, which handled everything from ordering to invoicing to tax reporting, was developed by a local company – and they had no intention of building a version for Linux – and because tax regulations in Italy are… peculiar, no off-the-shelf open source software is available for that. As it happens, they also needed a Mandriva workstation – no other distribution would do – because the software for their large-format inkjet printer was only available for either that or PPC Mac OS X, and getting it running on a modern PC with the former was significantly less expensive than trying to recover the latter.

(To make my life more complicated, the software they used for that printer was developed by Caldera. No, not the company acquired by SCO, but Caldera Graphics, a French company completely unrelated to that other tangle of companies, and which was recently acquired again. It was very confusing when the people at the shop told me they had a “Caldera box running Linux”.)

Of course, there are people who can run a Linux-only shop, or run only Linux on their systems, personal or not, because they do not need to depend on external ecosystems. More power to them, and thank you for your support in improving desktop features (because they are helping, right?). But they are clearly not the majority of the population, as is made obvious by the fact that people are indeed overwhelmingly using Windows, macOS, Android and iOS.

Now, this does not mean that Linux on the desktop is dead, or will never happen. It just means that it’ll take quite a while longer, and in the meantime all the work on Linux on the desktop is likely to benefit other endeavours too. LibreOffice and KDE are clearly examples of “Linux on the desktop”, but at the same time they give Free Software visibility (and energy, to a point) even when used by people on Windows. The same goes for VLC, Firefox, Chrome, and a long list of other FLOSS software that many people rely upon, sometimes without realising it is Free Software. But even that is not why I’m particularly angry after encountering this geek supremacist.

The problem is that, again in the introduction to the talk, which was about mobile phones, they said they didn’t expect things to have changed significantly in proprietary phones over the past ten years. Ten years is forever in computing, let alone in mobile! Ten years ago the iPhone had just launched, and it still had no SDK and no apps! Ten years ago the state of the art in smartphones you could develop apps for was Symbian! And this is not the first time I have heard something like this.

A lot of people in the FLOSS community appear to have closed their eyes to what the proprietary software environment has been doing, in every area. Because «Free Software works for me, so it has to work for everyone!» And that is dangerous from multiple points of view. Not only is this shortsightedness what, in my opinion, is making distributions irrelevant, but it is also keeping Linux on the desktop behind Windows, and it is why I don’t expect the FSF will come up with a usable mobile phone any time soon.

Free desktop environments (KDE and GNOME, effectively) have spent a lot of the past ten (and more) years first trying to catch up with Windows, then with Mac, then trying to build new paradigms, with mixed results. Some people loved them, some people hated them, but at least they tried; and, ignoring most of the breakages, or the fact that they still push semantics nobody really cares about (like KDE’s “Activities” — or the fact that KDE-the-desktop is no more, and KDE is now a community that includes stuff that has nothing to do with desktops, or even barely with Linux, but let’s not go there), a modern KDE system is fairly close in usability to Windows… 7. There is still a lot of catching up to do, particularly around security, but I would at least say that, for the most part, the direction is still valid.

But to keep going, to catch up, and if possible to go beyond those limits, you also need to accept that there are reasons why people use proprietary software, and it’s not just a matter of lock-in, or the (disappointingly horrible) idea that people using Windows are “sheeple” and that you hold the universal truth. Which is what pissed me off during that talk.

I could also add another note here about the idea that non-smart phones are a perfectly valid option nowadays. As I wrote already, there are plenty of reasons why a smartphone should not be considered a luxury. For many people, a smartphone is their only access to email and the Internet at large. Or the only safe way to access their bank account, or other fundamental services they rely upon. Being able to use a different device for those services, and only carrying a ten-year-old dumbphone, is a privilege, not a demonstration that there is no need for smartphones.

Also, I sometimes really wonder if these people have any friends at all. I don’t have many friends myself, but if I were stuck on a dumbphone, only able to receive calls and SMS, I would probably have lost the few I have as well. Because even with European, non-silly tariffs on SMS, sending SMS is still inconvenient, and most people will use WhatsApp, Messenger, Telegram or Viber to communicate with their friends (and most of these applications are also more secure than SMS). That may be perfectly fine: if you don’t want to be easily reachable, it is a very easy way to achieve that. But it’s once again a privilege, because it means either that you don’t have people who want to contact you in different ways, or that you can afford to limit your social contacts to people who accept your quirk — and once again, a freelancer could never do that.

Distributions are becoming irrelevant: difference was our strength and our liability

For someone who has spent the past thirteen years defining himself as a developer of a Linux distribution (whether I really am still a Gentoo Linux developer or not is up for debate, I’m sure), having to write a title like this is obviously hard. But from the day I started working on open source software to now, I have grown a lot, and I have realized I was wrong about many things in the past.

One thing I realized recently is that, nowadays, distributions have lost the war. As the title of this post says, difference is our strength, but at the same time it is also the seed of our ruin. Take distributions: Gentoo, Fedora, Debian, SuSE, Archlinux, Ubuntu. They all look and act differently, focusing on different target users, and because of this they differ significantly in which software they make available, which versions they make available, and how much effort is spent on testing, both of the packages themselves and of the system integration.

Described this way, there is nothing that screams «Conflict!», except that at this point we all know that they do conflict, and the solution from many different communities has been to just ignore distributions: developers of libraries for high-level languages built their own packaging (Ruby Gems, PyPI, let’s not even talk about Go), business application developers started with containers and ended up with Docker, and user application developers are now converging on Flatpak.

Why the conflicts? A lot of the time the answer is to be found in bickering among developers of different distributions and the «We are better than them!» attitude, which often turns into «We don’t need your help!». Sometimes this goes all the way into the negative, to the point of «Oh, it’s a Gentoo [or other] developer complaining; it’s all their fault and their problem, ignore them.» And let’s not forget the enmity between forks (like Gentoo, Funtoo and Exherbo), where each side tries to prove it is better than the other. A lot of conflict all over the place.

There were of course at least two main attempts to standardise parts of how a distribution works: the Linux Standard Base and FreeDesktop.org. The former is effectively a disaster; the latter is more or less accepted, but the problem lies right there, in the “more or less”. Let’s look at the two separately.

The LSB was effectively a commercial effort, aimed at pleasing (effectively) only the distributors of binary packages. It really didn’t give much assurance about the environment you could build things in, and it never invited non-commercial entities to discuss the reasoning behind the standard. In an environment like open source, the fact that the LSB became an ISO standard is not a badge of honour, but rather a worrying sign that it is over-specified and over-complicated. Which I think most people agree it is. There is also quite an overreach in specifying the presence of binary libraries, rather than providing a set of guidelines for distributions to follow.

And yes, although technically the LSB is still out there, the last release described on Wikipedia that I could find is from 2015, and I couldn’t even find, at a first search, whether they ever certified any distribution version. Also, because of the nature of certifications, it’s impossible to certify a rolling-release distribution, and those, as it happens, have become much more widespread than they used to be.

I think that one of the problems of the LSB, from both the adoption and the usefulness points of view, is that it focused entirely too much on providing a base platform for binary and commercial applications. Back when it was developed, it seemed like the future of Linux (particularly on the desktop) relied entirely on the ability to develop proprietary software applications that could run on it, the way they do on Windows and OS X. Since many of the distributions didn’t really aim to support this particular environment, convincing them to support the LSB was clearly pointless.

FreeDesktop.org is in a much better state in this regard. They point out that what they write are not standards, but de-facto specifications. Because of this de-facto character, they started by effectively writing down whatever GNOME and RedHat were doing, but then grew to be significantly more cross-desktop, thanks to KDE and other communities. Because of the nature of the open source community, FD.o specifications are much more widely adopted than the “standards”.

Again, if you compare with what I said above, FD.o provides specifications that make it easier to write, rather than run, applications. It gives you guarantees about where you should look for your files, which icons will be rendered, and which interfaces are exposed. Instead of trying to provide an environment where an in-house application will keep running for the next twenty years (which, admittedly, Windows has provided for a very long time), it provides you with building-block interfaces, so that you can create whatever the heck you want and integrate it with the rest of the desktop environments.

As it happens, Lennart and his systemd ended up standardizing distributions a lot more than the LSB or FD.o ever did, if nothing else by taking over one of the biggest customization points of them all: the init system. Now, I have complained before that this probably could have been a good topic for a standard even before systemd, and independently of it, one that developers should have been following; but that’s another problem. At the end of the day, there is at least some cross-distribution way to provide init system support, and developers know that if they build their daemon in a certain way, they can ship the init system integration themselves, rather than relying on the packagers.
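
As a concrete (and deliberately minimal) illustration of what “shipping the integration yourself” looks like: a daemon can install a single unit file upstream, instead of every distribution writing its own init script. The daemon name and path below are made up for the example.

```ini
# example.service — a hypothetical unit a daemon could ship upstream
[Unit]
Description=Example daemon shipping its own systemd integration

[Service]
Type=simple
ExecStart=/usr/bin/exampled --foreground

[Install]
WantedBy=multi-user.target
```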

I feel that we should have had much more of that. When I worked on ruby-ng.eclass and fakegem.eclass, I tried getting the Debian Ruby team, who had voiced similar complaints before, to join me on a mailing list so that we could discuss a common interface between Gems developers and Linux distributions, but once again, that did not happen. My afterthought is that we should have had a similar discussion for CPAN, CRAN, PyPI, Cargo and so on… and that would probably have spared us the mess that is Go packaging.

The problem is not only getting the distributions to overcome their differences, both in technical direction and in marketing; it is that doing so requires sitting at a table with the people who built and who use those systems, and actually figuring out what they are trying to achieve. Because, particularly in the case of Gems and the other packaging systems, whom you should talk with is not only your distribution’s users but, most importantly, the library authors (whose main interest is shipping stuff so that people can use it) and the developers who use those libraries (whose main interest is being able to fetch and use a library without waiting for months). The distribution users are, for most of the biggest projects, sysadmins.

This means you have a multi-faceted problem to solve, with different roles of people, and different needs for each. Finding a solution that does not compromise, covers 100% of the needs of all the roles involved, and requires no workflow change on anyone’s part is effectively impossible. What you should be doing is focus on choosing the features most important to the roles critical to the environment (in the example above, the developers of the libraries, and the developers of the apps using those libraries), requiring the minimum amount of changes to their workflow (but convincing them to change the workflow where it really is needed, as long as it’s not more cumbersome than before for no advantage), and figuring out what can be done to satisfy or change the requirements of the “less important” roles (distribution maintainers usually being that role).

Again going back to the example of Gems: it is clear by now that most of the developers never cared about getting their libraries carried by distributions. They cared about being able to push new releases of their code fast and seamlessly, without having to learn about distributions at all. The consumers of these libraries don’t, and should not, care about how to package them for their distributions, or how they even interact with them; they just want to be able to deploy their application with the library versions they tested. And setting aside their trust in distributions, sysadmins only care about sane handling of dependencies, and about being able to tell which version of which library is running in production, so they can upgrade it in case of a security issue. Now, the distribution maintainers could become the nexus for all these problems, and solve them once and for all… but they would have to be the ones making the biggest changes in their workflow – which is what we did with ruby-ng – otherwise they will just become irrelevant.

Indeed, Ruby Gems and Bundler, PyPI and VirtualEnv, and now Docker itself, are expressions of that: distributions themselves became a major risk and cost point, by being too different from each other and by not providing an easy way to just publish one working library, and to use one working library. Those two roles are critical to the environment: if nobody publishes libraries, consumers have no libraries to use; if nobody consumes libraries, there is no point in publishing them. If nobody packages libraries, but there are ways to publish and consume them, the environment still stands.

What would I do if I could go back in time, be significantly more charismatic, and change the state of things? (And I’m saying this for future reference, because if it ever becomes relevant to my life again, I’ll do exactly that.)

  • I would try to convince people that even with divergent technical directions, discussing and collaborating is a good thing to do. No idea is stupid, idiotic, or whatever other negative word you can think of. The whole point is that you need to make sure that even if you don’t agree on a given direction, you can agree on others; it’s not a zero-sum game!
  • Speaking of, “overly complicated” is a valid reason to not accept one direction and take another; “we always did it this way” is not a good reason. You can keep doing it that way, but then you’ll end up like Solaris: a very stagnant project.
  • Talk with the stakeholders of the projects that are bypassing distributions, and figure out why they are doing that. Provide “standard” tooling, or at least a proposal for how to do things in a way that keeps the distributions happy without causing undue burden.
  • Most importantly, talk. Whether it is by organizing mailing lists, IRC channels, birds-of-a-feather sessions at conferences, or whatever else. People need to discuss the issues at hand in the open, in front of the people building the tooling and making the decisions.

I have not done any of that in the past. If I ever find myself in front of something like this again, I’ll do my best to. Unfortunately, this is a position that, in the universe we’re talking about, would have required more privilege than I had at the time. Not only the personal training and experience to understand what should have been done, but also the means: it requires actually meeting with people and organizing real-life summits. And while nowadays I have become a globetrotter, I could never have afforded that before.

The dot-io boom, or how open source projects are breeding startups

You may remember that some months ago I stopped updating the blog. Part of it was the technical problem of having the content on Typo (for more than a few reasons), part of it was disappointment in the current free software and open source scene. I vented this disappointment to the people over at FSFE, and they suggested I should have gone to the yearly meeting in Berlin to talk about it, but I was otherwise engaged, and I really felt I needed to withdraw from the scene for a while to think things over.

Open source and free software have, over the years, attracted people for widely different reasons. In the beginning it was probably mostly the ethics, though of course it also attracted tinkerers and hackers. I was attracted to it as a user because I liked tinkering, but as a developer because I was hoping it would lead me to a good job. You can judge me if you want, but growing up in a blue-collar family, finding a good job was something I was taught to always be mindful of. And it’s not fair, but I had the skills, the time, and the lack of extreme pressure to find said job, so I managed to spend a significant amount of time on free software development.

I went to a technical school, and the default career out of it when I joined was working for a fairly big insurance company, whose headquarters were (and as far as I know still are) not far from where I grew up. Ending up at the Italian telco (Telecom Italia) was considered a huge score — this was a time before university was considered mandatory for going anywhere at all (not that I think that’s the case now).

My hopes were to find something better: originally, that meant hoping to move to a slightly bigger city (Padua) and work at Sun Microsystems, which happened to have a local branch. And it seemed like open source would get you noticed. Of course in the end I landed slightly further north than planned – in Ireland – and Sun is gone, replaced by a much bigger corporation that, well, let’s just say does not interest me at all.

Was open source needed for me to get where I am? Probably yes, if nothing else because it made me grow more than anything else I could have done otherwise. I see many of my classmates who, even after going to university, ended up at the same insurance company, at the local telco, or at one of the many big consultancy firms. The latter is the group that tends to be the most miserable. On the other hand, I also have colleagues at my current company who came from the same year of the same high school as me — except they went on to university, while I didn’t. I’m not sure who got the better deal, but we’re all happy, mostly.

What I see right now, though, and it worries me a bit, is that many people see open source as a way to jump-start a startup. That is perfectly okay if that’s how you present yourself, but it is disingenuous when you effectively hide behind open-source projects to either avoid scrutiny or make yourself more appealing; what you end up with is a fake open source project, which is probably not free software in license, and quite possibly not even in spirit.

I have complained before about a certain project that decided to refuse my offer to work on a shared format specification because they thought a single contributor outside their core team could leave the scene at any moment. Even leaving aside the fact that I have probably maintained open-source code for longer than they have been active on said scene, this is quite the stance. Indeed, when I suggested this format was needed, their answer was that they were already working on one behind closed doors, together with vendors. Given how they rebranded themselves from a diabetes management software to a quantified-self tool, I assume their talks with vendors went just about as everybody expected.

But that was just the final straw; the project itself was supposedly open-source, yet besides the obvious build or typo fixes, their open source repository only had internal contributions. I’m not sure whether it was because they didn’t want to accept external contributions, or because the possible contributors were put off by CLA requirements or something like that; it’s not something I cared enough to look into. In addition to the open source repository, though, most of the service was closed-source, making it nearly impossible to leverage in a free way.

My impression at this point is that whereas before your average “hacker” would mostly be looking to publish whatever they were working on as an open source project, possibly to use it as a ticket to a job (either thanks to the demonstrated skill, or by being paid to maintain the software), nowadays any side project is a chance at a startup… whether a business plan is available for it or not.

And because, as a startup, you want to make money at some point, you need to have at least some plan B, some hold on the code, that makes the startup itself valuable, at least up to a point. That usually makes for poor open source contributions, as noted above, and as, with impeccable timing, CyanogenMod just turned out to be. Similar things have happened and keep happening with OpenWRT too, although that one has probably already gone through its startup phase into a more community-driven project, if clearly not a mature enough one.

So here it is: I’m grumpy and old at this point, but I think a lot of us in the free software and open source world should do better at considering the importance of maintaining community projects as such, and at watching out for hidden “startupization”, rather than just caring about firmware lock-in and “tivoization.” I wish I had a solution to this, but I really don’t; I can only rant, at least for now.

Last words on diabetes and software

This started as a rant on G+, then became too long and more suited for a blog.

I do not understand why we can easily get people together around something like VideoLAN, but the moment health is involved, the results are just horrible.

Projects either end up as “startuppy” ones, which want to keep things for themselves and by themselves, or we end up fragmented into tiny one-person projects, because every single glucometer is a different beast and nobody wants to talk with anybody else.

Tonight I ended up in a half-fight with a project I approached saying “I’ve started drafting an exchange format, because nobody has written one down, and the format I’ve seen you use is just terrible — and when I told you so, you didn’t reply”; the answer was “we’re working on something we want to standardize by talking with manufacturers.”
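
For reference, this is the kind of thing I mean by an exchange format: a minimal, hypothetical sketch of a single reading, with illustrative field names that are not taken from my actual draft.

    # One glucometer reading, as a hypothetical interchange record.
    reading = {
      timestamp: '2017-04-12T08:30:00Z',  # ISO 8601, so time zones are unambiguous
      value: 5.6,
      unit: 'mmol/L',                     # or 'mg/dL'; the unit must always be explicit
      meal: 'before',                     # before/after-meal marking, where the meter supports it
      control_solution: false             # control tests must be flagged, not mixed into statistics
    }

Nothing exotic: just enough structure that two tools can exchange data without guessing at units, or silently counting control-solution tests as real readings.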

Their “we talk with these” projects are also something insane — one seems to be more about building a new device from scratch (a great long-term solution, of terrible usefulness to people right now), and the other is yet another build-your-own-cloud kind of solution that tells you to get Heroku or Azure with MongoDB to store your data. It also tells you to use a non-manufacturer-approved scanner for the sensors, which the comments point out can fry those sensors to begin with. (I should check whether that’s actually within the Play Store ToS.)

So you know what? I’m losing hope in FLOSS once again. Maybe I should just stop caring, give up this laptop for a new Microsoft Surface Pro, and keep my head away from FLOSS until I am ready for retirement, at which point I can probably just go and keep up with the reading.

I have tried reaching out to the people who have written other tools, as I posted before, but it looks like people are just not interested in discussing this — I did talk with a few people over email about some of the glucometers I dealt with, but that amounted to one person creating yet another project wanting to become a business, and two others figuring out which original proprietary tools to use, because those do actually work.

So I guess you won’t be reading much about diabetes on my blog in the future, because I don’t particularly enjoy writing for my sole use, and clearly that’s the only kind of usage these projects will ever get. Sharing seems to be considered deprecated.

Travel cards collection

As some of you might have noticed, for example by following me on Twitter, I have been traveling a significant amount over the past four years. Part of it has been for work, part for my involvement with VideoLAN, and part again for personal reasons (i.e. vacation).

When I travel, I don’t rent a car. The main reason is that I (still) don’t have a driving license, so, particularly when I travel for leisure, I tend to go where there is at least some form of public transport — even better if it’s a good one. This matched perfectly with my hopes of visiting Japan (which I did last year), and it usually works relatively well with conference venues, so I have not had much trouble with it in the past few years.

One thing that is getting a bit out of hand for me, though, is the number of travel cards I have by now. With the exception of Japan, just about every city has its own travel card — and while London appears to have solved that, at least for tourists and casual passengers, by accepting contactless bank cards as if they were its local travel card (Oyster), nobody else seems to have followed suit, as far as I can see.

Indeed, at this point I have at home:

  • Clipper for San Francisco and Bay Area; prepaid, I actually have not used it in a while so I have some money “stuck” on it.
  • SmarTrip for Washington DC; also prepaid, but at least I managed to only keep very little on it.
  • Metro dayLink for Belfast; prepaid by tickets.
  • Ridacard for Edinburgh and the Lothian region; this one has my photo on it, and I paid for a weekly ticket when I used it.
  • imob.venezia, now discontinued, which I used when I lived in Venice; it’s just terrible.
  • Suica, for Japan, which is a stored-value card that can be used for payments as well as travel, so it comes the closest to London’s use of contactless.
  • Leap, the local Dublin transport card, also prepaid.
  • Navigo for Paris, but I only used it once because you can only store Monday-to-Sunday tickets on it.

I might add a few more this year, as I’m hitting a few new places. On the other hand, while in London yesterday, I realized how nice and handy it is to just use my bank card for popping in and out of the Tube. And I’ve been wondering how we got to this system of incompatible cards.

In the list above, most of the cities are one per state or country, which might suggest cards work better within a country, but that’s definitely not the case. I have been told that Nottingham recently moved to a consolidated travel card, which is not compatible with Oyster either, and both cities are in England.

Suica is the exception. The IC system used in Japan is a stored-value system that can be used both for travel and for general payments, in stores and cafes and so on. This is not “limited” to Tokyo (though limited might be the wrong word there), but rather works in most of the cities I’ve visited — one exception being buses in Hiroshima, while it worked fine for trams and trains. It is essentially an upside-down version of what happens in London: as if, instead of using your payment card to travel, you used your travel card for in-store purchases.

The convenience of using a payment card, by the way, lies for me mostly in being able to pay from (one of) my bank accounts without having to “earmark” money the way I did for Clipper, which will now only be used the next time I actually take public transport in SF — and I’m not sure when that will be!

At the same time, I can think of two big obstacles to replacing travel cards with contactless payments: contracts and incentives. On the first, I’m sure there is some weight that TfL (Transport for London) can pull that your average small town can’t. On the second, it’s a matter for finance experts, and I can only guess: there is value for the travel companies in receiving money before you travel — Clipper has had my money in its coffers since I topped it up, even though I have not used it.

While customers’ topped-up credit is technically a liability for the companies, it also increases their liquidity. So there is little incentive for them to change, particularly for the smaller ones. Indeed, moving to a payment system in which the companies get their money from banks rather than through up-front cash is likely to be a problem for them. And we’re back to the first matter: contracts. I’m sure TfL can get better deals from banks and credit card companies than most.

There is also the matter of the tech behind all of this. TfL has definitely done a good job of keeping its systems compatible — the Oyster I got in 2009, the first time I boarded a plane, still works. Over the same seven years, Venice changed its system twice: once keeping the same name/brand but with a different protocol on the card (making it compatible with more NFC systems), and once by replacing the brand entirely — I assume they kept some compatibility between the cards, but since I no longer live there I have not investigated.

I’m definitely not one of those people who insist that opensource is the solution to everything, and that just by being open, things become better for society. On the other hand, I do wonder if it would make sense for the opensource community to engage with public services like this, to provide a solution that can be more easily adopted by smaller towns, which would not otherwise be able to afford such a system themselves.

Of course, this would most likely require compromises: the contracts with service providers would likely include a number of NDA-like provisions, and the hardware would not be available off-the-shelf.

This post does not provide much useful information, I’m afraid; it’s just a piece of a bigger opinion I have about opensource nowadays, and particularly about how so many people limit their idea of “public interest” to “privacy” and cryptography.