More on IPv6 feasibility

I didn’t think I would have to go back to the topic of IPv6, particularly after my last rant on the topic. But of course it’s the kind of topic that lends itself to harsh discussions over Twitter, so here I am again (sigh).

As a heads-up, and to make sure people understand where I’m coming from: it is true that I do not have a network background, and I do not know all the details of IPv6 and its related protocols, but I do know quite a bit about it and have been using it for years, so my opinion is not that of a lazy sysadmin who sees a task to be done and wants to say there’s no point. Among other things, because I do not like that class of sysadmins any more (I used to be one). I also seem to have given some people the impression that I am a hater of IPv6. That’s not me, that’s Todd. I have been using IPv6 for a long time, I have IPv6 at home, I set up IPv6 tunnels back in the days when I had my own office and contracted out, and I run a number of IPv6-only services (including the Tinderbox).

So with all this on the table, why am I complaining about IPv6 so much? The main reason is that, like I said in the other post, geeks all over the place appear to think that IPv6 is great right now and you can throw away everything else and be done with it right this moment. And I disagree. I think there’s a long way to go, and I also think that this particular attitude will make the way even longer.

I have already covered, in the linked post, the particular issue of IPv6 having originally been designed around globally identifiable, static addresses, and the fact that there have been at least two major RFCs to work around this particular problem. If you have missed that, please go back and read it there, because I won’t repeat myself here.

I want instead to focus on why I think IPv6 only is currently infeasible for your average end user, and why NAT (including carrier-grade NAT) is not going away any year now.

First of all, let’s define what an average end user is, because that is often lost on geeks. An average end user does not care what a router does; they barely care what a router is, and a good chunk of them probably still just call it a “modem”, as their only interest is referring to “the device the ISP gives you to connect to the Internet”. An average user does not care what an IP address is, nor how DNS resolution happens. And the reason for all of this is that the end user cares about what they are trying to do. And what they are trying to do is browse the Internet, whether that is the Web as a whole, Facebook, Twitter, YouTube or whatever else. They read and write their mail, they watch Netflix, NowTV and HBO GO. They play games they buy on Steam or EA Origin. They may or may not have a job, and if they do, they may or may not care to use a VPN to connect to whatever internal resources they need.

I won’t make any stupid or sexist stereotype example for this, because I have combined my whole family and some of my friends in that definition, and they are all different. None of them care about IPv6, IPv4, DSL technologies and the like. They just want an Internet connection, one that works and is fast. And by “that works” I mean “where they can reach the services they need to complete their task, whichever that task is”.

Right now that is not an IPv6-only network. It may be in the future, but for a number of reasons I won’t hold my breath expecting it to happen in the next ten years, despite the increasing pressure and the growing deployment of IPv6 to end users.

The reason why I say this is that right now there are plenty of services that can only be reached over IPv4, some of which are “critical” (for some definition of critical, of course) to end users, such as Steam. Since the Steam servers are not available over IPv6, the only way you can reach them is either through IPv4 (which will involve some NAT) or NAT64. While the speed of the latter, at least on closed-source proprietary hardware solutions, is getting good enough to be usable, I don’t expect it to be widely deployed any time soon, as it has the drawback of not working with IP literals. We all hate IP literals, but if you think no company ever issues their VPN instructions with an IP literal in them, you are probably going to be disappointed once you ask around.
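To make the IP-literal problem concrete, here is a minimal sketch (Python standard library only) of what DNS64 does when it synthesizes an IPv6 address for a v4-only service, using the well-known prefix from RFC 6052. The function name and the example addresses are mine, purely illustrative:

```python
import ipaddress

def synthesize_nat64(v4_literal, prefix="64:ff9b::"):
    """Map an IPv4 address into a NAT64 /96 prefix (RFC 6052 well-known
    prefix by default).  DNS64 performs this mapping for names resolved
    via DNS; an IPv4 literal hard-coded in an application never goes
    through DNS, so it never gets mapped."""
    v4 = ipaddress.IPv4Address(v4_literal)
    net = ipaddress.IPv6Network(prefix + "/96")
    # Embed the 32 v4 bits in the low bits of the /96 prefix.
    return ipaddress.IPv6Address(int(net.network_address) + int(v4))

print(synthesize_nat64("192.0.2.10"))  # 64:ff9b::c000:20a
```

An application that resolves a hostname receives the synthesized address and works transparently through the NAT64 gateway; an application carrying a hard-coded IPv4 literal never triggers this synthesis, and on a v6-only network its packets have nowhere to go.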

There could be an interesting post about this level of “security by obscurity”, but I’ll leave that for later.

No ISP wants to receive calls from their customers that access to a given service is not working for them, particularly when you’re talking about end users who do not want to care about tcpdump and traceroute, and customer support staff who wouldn’t know where to start debugging the fact that NowTV sends a request to an IPv4 address (a literal) before opening the stream, and then continues the streaming over IPv4. Or that Netflix refuses to play any stream if the DNS resolution happens over IPv4 and the stream tries to connect over IPv6.

Which I thought Netflix finally fixed until…

Now, to be fair, it is true that if you’re using an IPv6 tunnel you are indeed proxying. Before I had DS-Lite at home I was using TunnelBroker and it appeared like I was connecting from Scotland rather than Ireland, and so for a while I unintentionally (but gladly) sidestepped country restrictions. But this also happened a few times on DS-Lite, simply because the GeoIP and WhoIs records didn’t match between the CGNAT and the IPv6 blocks. I can tell you it’s not fun to debug.

The end result is that most consumer ISPs will choose to provide a service in such a way that their users feel the least possible inconvenience. Right now that means DS-Lite, which involves a carrier-grade NAT. That is not great: it is not cheap to run, and it can still cause problems, particularly when users use BitTorrent or other P2P protocols heavily, in which case they can very quickly exhaust the 200-port forwarding blocks that are allocated for CGNAT. Of course DS-Lite also takes away your public IPv4, which is why I have heard a number of geeks complaining loudly about DS-Lite as a deployment option.
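As a back-of-the-envelope sketch of why those blocks run out, assuming the 200-port allocation mentioned above and the usable range above the well-known ports (real deployments vary in block size and in which ranges they carve up):

```python
# Illustrative arithmetic only; actual CGNAT configurations differ.
usable_ports = 65536 - 1024        # ports above the well-known range
block_size = 200                   # per-subscriber block, as in the text
subscribers_per_ip = usable_ports // block_size
print(subscribers_per_ip)          # 322 subscribers sharing one public IPv4
```

A single busy BitTorrent client can hold hundreds of simultaneous connections, so one subscriber can exhaust their entire block on their own.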

Now there is another particular end user, in addition to geeks, that may care about IP addresses: gamers. In particular online gamers (rather than, say, Fallout 4 or Skyrim fans like me). The reason is that most online games use some level of P2P traffic, and so require you to have a way to receive inbound packets. While it is technically possible to set up IGD-based redirection all the way from the CGNAT address to your PC or console, I do not know how many ISPs implement that correctly. Also, NAT in general introduces latency risks, and requires more resources on the routers in the path, and that is indeed a topic close to gamers’ hearts. Of course, gamers are not your average end user.

An aside: back in the early days of ADSL in Italy, it was a gaming website, building its own ISP infrastructure, that first introduced Fastpath to the country. Other ISPs did indeed follow, but NGI (the ISP noted above) remained for a long while a niche ISP focused on the needs of gamers over other concerns, including price.

There is one caveat that I have not described yet, but I really should, because it’s one of the first objections I receive every time I speak about the infeasibility of IPv6-only end user connections: the mobile world. T-Mobile in the US, in particular, is known for having deployed IPv6-only 3G/4G mobile networks. There is a big asterisk to put here, though. In the US, in Italy, and in a number of other countries to my knowledge, mobile networks have classically been CGNAT before being v6-only, with a large amount of filtering on what you can actually connect to, even without considering tethering – this is not always the case for specialised contracts that allow tethering, or for “mobile DSL” as they marketed it in Italy back in the days. As such, most of the problems you could face with VPNs, v4 literals and other similar limitations of v6-only with NAT64 (or proxies) already applied.

Up to now I have described a number of complexities related to how end users (generalising) don’t care about IPv6. But ISPs do, or they wouldn’t be deploying DS-Lite either. And so do a number of other “actors” in the world. As Thomas pointed out over Twitter, not having to bother with TCP keepalive for making sure a certain connection is being tracked by a router makes mobile devices faster and use less power, as they don’t have to wake up for no reason. Certain ISPs are also facing problems with the scarcity of IPv4 blocks, particularly as they grow. And of course everybody involved in the industry hates pushing around the technical debt of the legacy IPv4 year after year.

So why are we not there yet? In my opinion and experience, it is because the technical debt, albeit massive, is spread around too much: ISPs, application developers, server/service developers, hosting companies, network operators, etc. Very few of them feel enough pain from v4 being around that they want to push hard for IPv6.

A group of companies that did feel a significant amount of that pain organized the World IPv6 Day. In 2011. That’s six years ago. Why was this even needed? The answer is that there were too many unknowns. Because of the way IPv6 is deployed in dual-stack configurations, and the fact that a lot of systems have to deal with addresses, it seems obvious that there was a need to try things out. And while opt-ins are useful, they clearly didn’t stress-test enough of end users’ usage surface. Indeed, I stumbled across one such problem back then: when my hosting provider, OVH (which was boasting IPv6 readiness), sent me to their bank infrastructure to make a payment, the IP addresses of the two requests didn’t match, and the payment session failed. Interactions are always hard.

A year after the test day, the “Launch” happened, normalizing the idea that services should be available over IPv6. Even though that was the case, it took quite a while longer for many services to be routinely available over IPv6; I think Microsoft, despite being one of the biggest proponents and pushers of IPv6, only started providing v6 support on its update servers by default in the last year or so. Things improved significantly over the past five years, and thanks to the forced push of mobile providers such as T-Mobile, it’s now a minority of my mobile phone’s connections that still go out to the v4 world, though there are still enough of them that they can’t be ignored.

What are the excuses for those? Once upon a time, the answer was “nobody is using IPv6, so we’re not bothering to support it”. This is getting less and less valid. You can see from the Google IPv6 statistics that there is an exponential growth of connections coming from IPv6 addresses. My gut feeling is that the wider acceptance of DS-Lite as a bridge solution is driving that – full disclosure: I work for Google, but I have no visibility into that information, so I’m only guessing this from personal experience and experience gathered before I joined the company – and it’s making that particular excuse pointless.

Unfortunately, there are still “good” excuses. Or at least reasons that are hard to argue with. Sometimes you cannot enable IPv6 for your web service, even though you have done all your work, because of dependencies that you do not control, for instance external support services such as the payment system in the OVH example above. Sometimes the problem is to be found in another piece of infrastructure that your service shares with others and that needs to be adapted, as it may have code expecting a valid IPv4 address in some particular place, and an IPv6 address would make it choke, say in some log analysis pipeline. Or you may rely on hardware for the network layer that still does not understand IPv6, and you don’t want to upgrade because you still have not found enough of an upside to make the switch.
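A hypothetical example of the kind of breakage I mean: a log pipeline that validates client addresses with an IPv4-only pattern, next to a version-agnostic fix using the Python standard library. The function names are mine, for illustration only:

```python
import ipaddress
import re

# A legacy pipeline "validating" client addresses with an IPv4-only
# regex -- exactly the kind of check that chokes when v6 traffic appears.
LEGACY_V4_RE = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$")

def parse_client(addr):
    if not LEGACY_V4_RE.match(addr):
        raise ValueError(f"not an IP address: {addr}")
    return addr

def parse_client_fixed(addr):
    # Version-agnostic: let the standard library decide v4 vs v6.
    return ipaddress.ip_address(addr)

print(parse_client_fixed("2001:db8::1").version)  # 6
```

The fix is a one-liner, but somebody has to find every place the old assumption is baked in first, which is why these pipelines linger.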

Or you may be using a hosting provider that insists that giving you a single routable IPv6 address means giving you a “/64” (it isn’t: they are giving you a single address out of a /64 they control). Any reference to a certain German ISP I had to deal with in the past is not coincidental at all.
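The difference is not subtle; a quick check with Python’s `ipaddress` module (using a documentation prefix, not the ISP’s actual one) shows the scale of what is being withheld:

```python
import ipaddress

one_addr = ipaddress.IPv6Network("2001:db8::1/128")   # what you actually get
full_prefix = ipaddress.IPv6Network("2001:db8::/64")  # what a /64 would be

print(one_addr.num_addresses)     # 1
print(full_prefix.num_addresses)  # 18446744073709551616, i.e. 2**64
```

A real /64 delegation would let you number every device on your network with SLAAC; a single address does not.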

And here is why I think that the debt is currently too spread around. Yes, it is true that mobile phone batteries can be improved thanks to IPv6. But that’s not something your ISP or the random website owner cares about – I mean, there are websites so bad that they take megabytes to load a page; fixing that would be even better! – and of course a pure IPv6 network without CGNAT is a dream of ISPs all over the world, but it is very unlikely that Steam would care about them.

If we all acted “for the greater good”, we’d all be investing more energy to make sure that v6-only becomes a feasible reality. But in truth, outside of controlled environments, I don’t see that happening any year now, as I said. Controlled environments in this case can refer not only to your own personal network, but to situations like T-Mobile’s mobile data network, or an office’s network — after all, it’s unlikely that an office, outside of Sky’s own, would care whether they can connect to NowTV or Steam. Right now, I feel v6-only networks (even without NAT64) are the realm of backend networks. You do not need v4 for connecting between backends you control, such as your database or API provider, and if you push your software images over the same backend network, there is no reason why you would even have to hit the public Internet.

I’m not asking that we give a pass to anyone who’s not implementing v6 access now, but as I said when commenting on the FOSDEM network, it is not by bothering the end users that you’ll get better v6 support; it’s by asking for the services to be reachable.

To finish off, here’s a few personal musings on the topic, that did not quite fit into the discourse of the post:

  • Some ISPs appear to not have as much IPv4 pressure as others; Telecom Italia still appears to not have reassigned or rerouted the /29 network I used to have routed to my home in Mestre. Indeed, whois information for those IPs still has my old physical address as well as an old (now disconnected) phone number.
  • A number of protocols that provide out-of-band signalling, such as RTSP and RTMP, required changes to be used in IPv6 environments. This means that just rebuilding the applications using them against a C library capable of understanding IPv6 would not be enough.
  • I have read at least one Debian developer in the past giving up on running IPv6 on their web server, because their hosting provider was sometimes unreliable and they had no way to make sure the site was actually correctly reachable at all times; this may sound like a minimal problem, but there is a long tail of websites that are not actually hosted on big service providers.
  • Possibility is not feasibility. Things may be possible, but not really feasible. It’s a subtle distinction but an important one.

Linux desktop and geek supremacists

I wrote about those who I refer to as geek supremacists a few months ago, discussing the dangerous prank at FOSDEM — as it turns out, they overlap with the “Free Software Fundamentalists” I wrote about eight years ago. I have found another one of them at the conference I attended the day this draft was written. I’m not going to name the conference, because it does not deserve to be associated with my negative sentiment here.

The geek supremacist in this case was the speaker of one of the talks. I did not sit through the whole talk (which also ran over its allotted time), because after the basic introduction, so many alarm bells had gone off that I just had to leave and find something more interesting and useful to do. The final straw for me was when the speaker insisted that “Western values didn’t apply to [them]” and thus they felt they could “liberate” hardware by mixing leaked sources of the proprietary OS with the pure Free (obsolete) OS for it. Not only is this clearly illegal (as they know and admitted), but it’s unethical (free software relies on licenses that are based on copyright law!) and toxic to the community.

But that’s not what I want to complain about here. The problem came a bit earlier than that. The speaker defined themselves as a “freedom fighter” (their words, not mine!), and insisted they couldn’t see why people are still using Windows and macOS despite Linux and FreeBSD being perfectly good options. I take big issue with this.

Now, having spent about half my life using, writing and contributing to FLOSS, you can’t possibly expect me to just say that Linux on the desktop is a waste of time. But at the same time I’m not delusional, and I know there are plenty of reasons to not use Linux on the desktop.

While there have been huge improvements in the past fifteen years, and SuSE or Ubuntu are somewhat usable as desktop environments, there is still no comparison with using macOS or Windows, particularly in terms of applications working out of the box, and support from third parties. There are plenty of applications that don’t work on Linux, and even if you can replace them, sometimes that is not acceptable, because you depend on some external ecosystem.

For instance, when I was working as a sysadmin for hire, none of my customers could possibly have used a pure-Linux environment. Most of them were Windows-only companies, but even the one that was a mixed environment (the print shop I wrote about before) could not do without macOS and Windows. For one thing, the macOS environment was their primary workspace: Adobe software is not available for Linux, nor is QuarkXPress, nor the Xerox print queue software (ironic, since it interfaces with a Linux system on board the printer, of course). The accounting software, which handled everything from ordering to invoicing to tax reporting, was developed by a local company – which had no intention of building a version for Linux – and because tax regulations in Italy are… peculiar, no off-the-shelf open source software is available for that. As it happens, they also needed a Mandriva workstation – no other distribution would do – because the software for their large-format inkjet printer was only available for either that or PPC Mac OS X, and getting it running on a modern PC with the former is significantly less expensive than trying to recover the latter.

(To make my life more complicated, the software they used for that printer was developed by Caldera. No, not the company acquired by SCO, but Caldera Graphics, a French company completely unrelated to the other tree of companies, which was recently acquired again. It was very confusing when the people at the shop told me that they had a “Caldera box running Linux”.)

Of course, there are people who can run a Linux-only shop, or can run only Linux on their systems, personal or not, because they do not need to depend on external ecosystems. More power to them, and thanks for their support in improving desktop features (because they are helping, right?). But they are clearly not part of the majority of the population, as is made clear by the fact that people are indeed vastly using Windows, macOS, Android and iOS.

Now, this does not mean that Linux on the desktop is dead, or will never happen. It just means that it’ll take quite a while longer, and in the meantime, all the work on Linux on the desktop is likely going to benefit other endeavours too. LibreOffice and KDE are clearly examples of “Linux on the desktop”, but at the same time they provide Free Software with visibility (and energy, to a point) even when being used by people on Windows. The same goes for VLC, Firefox, Chrome, and a long list of other FLOSS software that many people rely upon, sometimes without realising it is Free Software. But even that is not why I’m particularly angry after encountering this geek supremacist.

The problem is that, again in the introduction to the talk, which was about mobile phones, they said they didn’t expect things to have changed significantly in proprietary phones over the past ten years. Ten years is forever in computing, let alone mobile! Ten years ago, the iPhone had just launched, and it still did not have an SDK or apps! Ten years ago the state of the art in smartphones you could develop apps for was Symbian! And this is not the first time I hear something like this.

A lot of people in the FLOSS community appear to have closed their eyes to what the proprietary software environment has been doing, in any area. Because «Free Software works for me, so it has to be working for everyone!» And that is dangerous from multiple points of view. Not only is this shortsightedness what is, in my opinion, making distributions irrelevant, but it’s also making Linux on the desktop worse than Windows, and it’s why I don’t expect the FSF will come up with a usable mobile phone any time soon.

Free desktop environments (KDE and GNOME, effectively) have spent a lot of time in the past ten (and more) years, first trying to catch up to Windows, then to Mac, then trying to build new paradigms, with mixed results. Some people loved them, some people hated them, but at least they tried and, ignoring most of the breakages, or the fact that they still try to have semantics nobody really cares about (like KDE’s “Activities” — or the fact that KDE-the-desktop is no more, and KDE is a community that includes stuff that has nothing to do with desktops or even barely Linux, but let’s not go there), a modern KDE system is fairly close in usability to Windows… 7. There is still a lot of catch up to do, particularly around security, but I would at least say that for the most part, the direction is still valid.

But to keep going, and to catch up, and if possible to go beyond those limits, you also need to accept that there are reasons why people are using proprietary software, and it’s not just a matter of lock-in, or the (disappointingly horrible) idea that people using Windows are “sheeple” and you hold the universal truth. Which is what pissed me off during that talk.

I could also add another note here about the idea that non-smart phones are a perfectly valid option nowadays. As I wrote already, there are plenty of reasons why a smartphone should not be considered a luxury. For many people, a smartphone is the only access they have to email, and to the Internet at large. Or the only safe way to access their bank account, or other fundamental services that they rely upon. Being able to use a different device for those services, and having only a ten-year-old dumbphone, is a privilege, not a demonstration that there is no need for smartphones.

Also, I sometimes really wonder if these people have any friends at all. I don’t have many friends myself, but if I were stuck on a dumbphone only able to receive calls and SMS, I would probably have lost those few I have as well. Because even with European, non-silly tariffs on SMS, sending SMS is still inconvenient, and most people will use WhatsApp, Messenger, Telegram or Viber to communicate with their friends (and most of these applications are also more secure than SMS). That may be perfectly fine – I mean, if you don’t want to be easily reachable by people, that is a very easy way to achieve it – but it’s once again a privilege, because it means you either don’t have people who would want to contact you in different ways, or you can afford to limit your social contacts to people who accept your quirk — and once again, a freelancer could never do that.

Distributions are becoming irrelevant: difference was our strength and our liability

For someone who has spent the past thirteen years defining himself as a developer of a Linux distribution (whether I really am still a Gentoo Linux developer or not is up for debate, I’m sure), having to write a title like this is obviously hard. But from the day I started working on open source software to now, I have grown a lot, and I have realized I have been wrong about many things in the past.

One thing that I realized recently is that nowadays, distributions have lost the war. As the title of this post says, difference is our strength, but at the same time, it is also the seed of our ruin. Take distributions: Gentoo, Fedora, Debian, SuSE, Archlinux, Ubuntu. They all look and act differently, focusing on different target users, and because of this they differ significantly in which software they make available, which versions are made available, and how much effort is spent on testing, both of the packages themselves and of the system integration.

Described this way, there is nothing that screams «Conflict!», except at this point we all know that they do conflict, and the solution from many different communities has been to just ignore distributions: developers of libraries for high-level languages built their own packaging (Ruby Gems, PyPI, let’s not even talk about Go), business application developers started by using containers and ended up with Docker, and user application developers have now started converging on Flatpak.

Why the conflicts? A lot of the time the answer is to be found in bickering among developers of different distributions and the «We are better than them!» attitude, which often turned into «We don’t need your help!». Sometimes this went so far into the negative as «Oh, it’s a Gentoo [or other] developer complaining; it’s all their fault and their problem, ignore them.» And let’s not forget the enmity between forks (like Gentoo, Funtoo and Exherbo), with each side trying to prove itself better than the other. A lot of conflict all over the place.

There were of course at least two main attempts to standardise parts of how a distribution works: the Linux Standard Base and The former is effectively a disaster; the latter is more or less accepted, but the problem lies right there, in the “more or less”. Let’s look at the two separately.

The LSB was effectively a commercial effort, aimed at pleasing (effectively) only the distributors of binary packages. It really didn’t provide much assurance about the environment you could build things in, and it never invited non-commercial entities to discuss the reasoning behind the standard. In an environment like open source, the fact that the LSB became an ISO standard is not a badge of honour, but rather a worry that it’s over-specified and over-complicated. Which I think most people agree it is. There is also quite an overreach in specifying the presence of particular binary libraries, rather than being a set of guidelines for distributions to follow.

And yes, although technically the LSB is still out there, the last release I could find described on Wikipedia is from 2015, and I couldn’t even find, at a first search, whether they certified any distribution version. Also, because of the nature of certifications, it’s impossible to certify a rolling-release distribution, and those, as it happens, are becoming much more widespread than they used to be.

I think that one of the problems of the LSB, from both the adoption and the usefulness points of view, is that it focused entirely too much on providing a base platform for binary and commercial applications. Back when it was developed, it seemed like the future of Linux (particularly on the desktop) relied entirely on the ability for proprietary software applications to be developed that could run on it, the way they do on Windows and OS X. Since many of the distributions didn’t really aim to support this particular environment, convincing them to support the LSB was clearly pointless. is in a much better state in this regard. They point out that what they write are not standards, but de-facto specifications. Because of this de-facto character, they started by effectively writing down whatever GNOME and Red Hat were doing, but then grew to be significantly more cross-desktop, thanks to KDE and other communities. Because of the nature of the open source community, FD.o specifications are much more widely adopted than the LSB “standards”.

Again, if you compare with what I said above, FD.o provides specifications that make it easier to write, rather than run, applications. It gives you guarantees of where you should look for your files, which icons should be rendered, and which interfaces are exposed. Instead of trying to provide an environment where an in-house application will keep running for the next twenty years (which, admittedly, Windows has provided for a very long time), it provides building-block interfaces so that you can create whatever the heck you want and integrate with the rest of the desktop environments.
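As an example of the kind of building block FD.o specifies, here is a sketch of the data-directory lookup from the XDG Base Directory specification (simplified: the real spec also covers config and cache directories and relative-path edge cases):

```python
import os

def xdg_data_dirs():
    """Return the ordered list of directories to search for shared data
    files, per the freedesktop.org Base Directory specification (sketch)."""
    # User-specific directory, with the spec's default fallback.
    home = os.environ.get("XDG_DATA_HOME") or os.path.expanduser("~/.local/share")
    # System-wide search path, colon-separated, with the spec's default.
    system = os.environ.get("XDG_DATA_DIRS") or "/usr/local/share/:/usr/share/"
    return [home] + system.split(":")
```

An application following this lookup finds its icons and data files on any compliant desktop, without caring which distribution it runs on.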

As it happens, Lennart and his systemd ended up standardizing distributions a lot more than the LSB or FD.o ever did, if nothing else by taking over one of the biggest customization points of them all: the init system. Now, I have complained before that this probably would have been a good topic for a standard even before systemd, and independently from it, one that developers should have been following; but that’s another problem. At the end of the day, there is at least some cross-distribution way to provide init system support, and developers know that if they build their daemon in a certain way, they can provide the init system integration themselves, rather than relying on the packagers.
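For instance, a daemon author today can ship a unit file like this sketch alongside their code and have it work on any systemd-based distribution (all names here are illustrative, not from a real project):

```ini
[Unit]
Description=Example daemon (illustrative)
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The same file works unchanged on Fedora, Debian, Arch or a systemd-based Gentoo profile, which is something no distribution-specific init-script convention ever achieved.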

I feel that we should have had much more of that. When I worked on ruby-ng.eclass and fakegem.eclass, I tried getting the Debian Ruby team, who had voiced similar complaints before, to join me on a mailing list so that we could discuss a common interface between Gems developers and Linux distributions, but once again, that did not actually happen. My afterthought is that we should have had a similar discussion for CPAN, CRAN, PyPI, Cargo and so on… and that would probably have spared us the mess that is Go packaging.

The problem is not only getting the distributions to overcome their differences, both in technical direction and in marketing, but that it requires sitting at a table with the people who built and who use those systems, and actually figuring out what they are trying to achieve. Because, particularly in the case of Gems and the other packaging systems, the people you should talk with are not only your distribution’s users, but most importantly the library authors (whose main interest is shipping stuff so that people can use it) and the developers who use them (whose main interest is being able to fetch and use a library without waiting for months). The distribution users are, for most of the biggest projects, sysadmins.

This means you have a multi-faceted problem to solve, with different roles of people, each with different needs. Finding a solution that does not compromise, covers 100% of the needs of all the roles involved, and requires no workflow change on anyone’s part is effectively impossible. What you should be doing is focusing on choosing the most important features for the roles critical to the environment (in the example above, the developers of the libraries, and the developers of the apps using those libraries), requiring the minimum amount of change to their workflow (but convincing them to change the workflow where it really is needed, as long as it’s not more cumbersome than before for no advantage), and figuring out what can be done to satisfy or change the requirements of the “less important” roles (distribution maintainers usually being that role).

Again going back to the example of Gems: it is clear by now that most of the developers never cared about getting their libraries carried into distributions. They cared about the ability to push new releases of their code fast, seamlessly, and without having to learn about distributions at all. The consumers of these libraries don’t, and should not, care about how to package them for their distributions or how they even interact with them; they just want to be able to deploy their application with the library versions they tested. And setting aside their trust in distributions, sysadmins only care about having sane handling of dependencies and being able to tell which version of which library is running in production, so they can upgrade in case of a security issue. Now, the distribution maintainers can become the nexus for all these problems, and solve them once and for all… but they will have to be the ones making the biggest changes in their workflow – which is what we did with ruby-ng – otherwise they will just become irrelevant.

Indeed, Ruby Gems and Bundler, PyPI and VirtualEnv, and now Docker itself, are expressions of that: distributions themselves became a major risk and cost point, by being too different from each other and not providing an easy way to just publish one working library, and use one working library. These roles are critical to the environment: if nobody publishes libraries, consumers have no library to use; if nobody consumes libraries, there is no point in publishing them. If nobody packages libraries, but there are ways to publish and consume them, the environment still stands.

What would I do if I could go back in time, be significantly more charismatic, and change the state of things? (And I’m saying this for future reference, because if it ever becomes relevant to my life again, I’ll do exactly that.)

  • I would try to convince people that even with divergent technical directions, discussing and collaborating is a good thing to do. No idea deserves to be dismissed as stupid, idiotic, or with any other random set of negative words. The whole point is that even if you don’t agree on a given direction, you can agree on others; it’s not a zero-sum game!
  • Speaking of which, “overly complicated” is a valid reason to not accept one direction and take another; “we always did it this way” is not a good reason. You can keep doing it your way, but then you’ll end up like Solaris: a very stagnant project.
  • Talk with the stakeholders of the projects that are bypassing distributions, and figure out why they are doing that. Provide “standard” tooling, or at least a proposal on how to do things in such a way that the distributions are still happy, without causing undue burden.
  • Most importantly, talk. Whether it is by organizing mailing lists, IRC channels, birds-of-a-feather sessions at conferences, or whatever else. People need to talk and discuss the issues at hand in the open, in front of the people building the tooling and making the decisions.

I have not done any of that in the past. If I ever get in front of something like this again, I’ll do my best to. Unfortunately, this is a position that, in the universe we’re talking about, would have required more privilege than I had at the time. Not only in terms of the personal training and experience needed to understand what should have been done, but because it requires actually meeting with people and organizing real-life summits. And while nowadays I have become a globetrotter, I could never have afforded that before.

I’m moving… to London

There is something about me and writing blog posts while sitting in airport lounges around the globe, or at the very least in New York and Paris…

Four years and change after I announced my first move I’m moving again, though this time the jump is significantly shorter, and even the downsizing is downsized, particularly as the plug I’d be using is the same. You could say this is a bold move, particularly with the ghost of Brexit floating above us as a semi-transparent sword of Damocles, but it’s a decision I made not overnight, but through many months of deliberation and a good number of arguments back and forth. And I decided this before the hilarious hung parliament results.

So, why am I doing this move, and what does it actually include? I’m staying at the same company, in the same role and in a team not really far away (on a management, if not service, chain) from the one I was at before. What really matters in all of this is the place I live in, rather than the work I’m doing. Work provided an excuse and a spark for me to make the jump, simply because my previous team has been folded into another.

When I flew into Dublin, for the second time after the interview, four years ago, I knew next to nothing about Ireland, except for the stereotypes and the worrying stories of the Troubles. I effectively knew only one person in the country, and I was fairly scared about the whole process, which went significantly smoother than I would have expected. As a friend of mine told me before I left, Dublin is the capital that still feels like a little town, and I can definitely see that.

I grew up in a relatively small town, with next to no public transport, but a significant number of malls and stores, and a stupid but functional airport nearby. Dublin beat that significantly, except for the lack of malls, and the fact that unless you have a car to go outside the city centre, it is likely that whatever you need you’ll order on Amazon anyway. Except there is no Amazon Ireland, so you’ll be looking at either Amazon UK, Italy, Germany or (if you can manage) France. The prices will be different all over, and some of the Amazon stores will refuse to send things to you in Ireland, so you have to use one of the many services that provide you with a UK address and then reship the parcel to you (most of which use addresses in Northern Ireland, because it’s close enough, and counts as UK for most of those limitations).

But it also has the limitations that a small provincial town has, and I’m not talking about the song from Beauty and the Beast, though it does come close. At least for the native Irish, the stereotype is that they all know each other. It is not true of course, as it’s a stereotype, but it comes awfully close. In particular, according to a former manager, Irish people make friends in two phases in life: during school and during their kids’ school years. If you happen to get to Ireland when you’re clearly too old to be a student, but single, with no kids (and no intention to have any), making friends is not easy.

I have tried, maybe not with as much energy as others have, but still tried. Unfortunately, almost all social gatherings in Ireland involve pubs. I don’t drink alcohol; alcohol could kill me, and it definitely makes me mean. But I could go to a pub and not drink, of course. I did that in Italy, where the pub favoured by my group of friends offered as many pages of menu for coffee, chocolate, tea and infusion drinks as for beers, plus a good selection of other desserts. That is not the case in Ireland. Effectively, the only non-alcoholic drinks you find at the pub are there so the designated drivers don’t go thirsty, and they are bloody expensive. €5 for a 150ml bottle of Diet Coke! — I’m told that Coca-Cola by the pint is cheaper; unfortunately, because of diabetes, that would be almost as bad as alcohol for me. I know at least one pub in which I can order a pot of tea and not get the stink eye from the waiters, but that’s about it.

For the first couple of years I didn’t really feel this was a big problem: for the first few months, Enrico (from #gentoo-it) was around as well, before moving on with his life in a different country; I made a number of friends at the office, and a few people who I knew before, such as Stefano from automake, came to Dublin. Then something happened that made me question the idea of mixing colleagues and friends. Nothing Earth-shattering, but enough of a demonstration that I started isolating myself from most of the people at the office: rejecting invitations to social events, mass-unfriending people on Facebook, and so on. You may think this sounds a bit extreme, but I have my reasons.

I turned again to trying to find interesting groups of people in Ireland, and that’s where the introverted and awkward part of me just gave up and filled me with dread: I’m not particularly good at meeting people to begin with, and it just felt like too much work to join an established group of people who all know each other without even one person to introduce you. I had better luck with that meeting people at Eastercon and Nine Worlds, thanks to Gianni. And all the groups I could find in Dublin that didn’t have to do with “going out for pints” were effectively groups of people who have known each other for years. They may be friendly, but it’s not easy to make friends with them, at least not for me.

Of course there is a group of people that I knew already, and I considered them: the people who left the office! Unfortunately, almost all the people I knew who stayed in Ireland did so because they have a family there. They are in a different “time of life” than me, and that makes for an awkward time — most wouldn’t be interested in meeting up to go to a geek store on the weekend, or a Comicon or similar venue. You can go and see them at home (depending on how friendly the terms you’re on are), and see the kids. Heh, no thanks.

So after four years, with a lot of the people I knew and was/am friends with leaving Ireland for other countries, offices or jobs, I found myself lonely, and needing to move somewhere else. And just to be clear, loneliness is not just about being single. That is obviously a part of it, but it’s in general the inability to say to any group of friends “Let’s meet tonight for movies, board games or a match of Street Fighter” (or whatever beat ‘em up the cool kids play nowadays). I could do that in Italy — I even still managed it last time I flew in, with some of my best friends now living together and expecting a baby. But I can’t do that in Dublin, because there are very few people with whom this proposition is not totally awkward.

Heck, I can do that in Paris! I have friends in France to whom I just had to say “There is this concert on this date, want to go?” and six months later I’m flying a DUB-CDG/CDG-YUL/YUL-DUB ticket to see Joe Hisaishi at Le Palais des Congrès. (Montréal was for a conference the week after, so it was easy to go from one to the other without passing through Dublin.)

But most of this awkwardness is on me, so why do I expect things to be any different in London? Well, to begin with, I know more people there (outside of work) than I did in Dublin when I moved, or even than I do now. A friend of mine from junior high moved to London last year with her boyfriend, so at least I know them, and we’re close enough that a similar proposition of spending an evening in playing Scrabble is totally feasible (actually, we did just that when I went to see them last year). And these are people who can make the general awkwardness of entering a new group of people easier for me.

Most importantly, though, I think London has a different structure that should make it possible for me to end up meeting people even randomly. Because though I am clearly awkward and scared of joining big groups that already know each other, I don’t mind the random words with a stranger in the queue for a flight or while waiting for coffee. After all that’s give or take how I end up meeting people at conferences. In London there are just too many people for them to all know each other. And in particular, there are the places where these things can happen.

Part of the reason why almost all the social gatherings in Dublin revolve around pubs is that they are the only venue you can go to spend some time on “neutral ground” (i.e. not in someone’s home, not at an office). I don’t know of a coffee shop in Dublin that is open after 1900 on a weekday, except for the Starbucks in front of St Stephen’s Green and a few Insomnias that are embedded within 24hr convenience stores. London has plenty; although they are not all in the most convenient of places, you can find a Nero open until 2200 on almost every other corner. No, Nero does not stay open that late in Dublin either.

There is more. If you’re a geek you have probably at least heard of Forbidden Planet. It’s a megastore of comics, manga, books, and effectively anything geek. It’s very commercial and mainstream, but it’s also big, and they run book signings there, which makes it an interesting place to go just to see if something may be going on. It’s effectively in the centre of London life, in SoHo, and if you meet there with friends you can geek out over the latest purchases in one of the many cafes around the area. Or browse, buy something and go to dinner.

There is a Forbidden Planet in Dublin too; it is not as big, but it still has a decent selection. Unfortunately all the venues around it are effectively pubs, as you’re on the verge of Temple Bar, which appears to be the Irish national treasure. Unless you manage to get out well before 5pm and get coffee at the shop just behind it, which of course closes at 5pm all week long. And even being fair and considering Sub City Comics, which in my opinion is usually better stocked, it’s all pubs around it except for Bewley’s, and Krüst down the street in front of it — which requires you to cross one of the most trafficked streets in Dublin city centre.

I’m not saying that there is an absolute certainty that I will meet people in London; I’m just saying that there are a few things that are not stacked against me there, like they are in Dublin. And I really need some change, so that I don’t feel full of dread every time I come back to my apartment after weeks of travelling, like I did coming back from my recent trip across Asia, when none of the four people I hang out with at all were in town.

Nothing else really changes, but you may notice that the targets of my rants will shift from Ireland to the UK, and possibly to the process of actually moving, filing taxes and so on. Actually, I know you’ll get at least another post on payment cards, because I started looking into them and it looks like foreign transaction fees in the UK are really bad.

Let’s have a talk about TOTP

You probably noticed that I’m a firm believer in 2FA, and in particular on the usefulness of U2F against phishing. Unfortunately not all the services out there have upgraded to U2F to protect their users, though at least some of them managed to bring a modicum of extra safety against other threats by implementing TOTP 2FA/2SV (2-steps verification). Which is good, at least.

Unfortunately this can become a pain. In a previous post I complained about the problems of SMS-based 2FA/2SV. In this post I’m complaining about the other kind: TOTP. Unlike SMS, I find this to be an implementation issue rather than a systemic one, but let’s see what’s going on.

I changed my mobile phone this week, as I upgraded my work phone to one whose battery lasts more than half a day, which is important since my job requires me to be on call and available during shifts. And as usual when this happens, I needed to transfer my authenticator apps to the new phone.

Some of the apps are actually very friendly to this: Facebook and Twitter use a shared secret that, once login is confirmed, is passed onto their main app, which means you just need any logged in app to log in again and get a new copy. Neat.

Blizzard was even better. I have effectively copied my whole device config and data across with the Android transfer tool. The authenticator copied over the shared key and didn’t require me to do anything at all to keep working. I like things that are magic!

The rest of the apps were not as nice, though.

Amazon and one other service I use allow you to add a new app at any time using the same shared key. The latter has an explicit option to re-key the 2FA to disable all older apps. Amazon does not tell you anything about it, and does not let you re-key explicitly — my guess is that it re-keys if you disable authentication apps altogether and re-enable them.

WordPress, EA/Origin, Patreon and TunnelBroker don’t allow you to change the app, or to retrieve the previously shared key. Instead you have to disable 2FA, then re-enable it, leaving you “vulnerable” for a (hopefully) limited time. Of these, EA allows you to receive the 2SV code by email, so I decided I’ll take that over having to remember to port this authenticator over every time.

If you remember, in the previous post I complained about the separate authenticator apps that kept piling up for me. I realized that I don’t need as many: the login-approval feature, which Microsoft and LastPass effectively provide, is neat and handy, but it’s not worth having two more separate apps for it, so I downgraded them to just use normal TOTP in the Google Authenticator app, which gets me Android Wear support to see the code straight on my watch. I can’t even remember when I last logged into a Microsoft product, except to set up the security parameters.

Steam, on the other hand, was probably the most complete disaster to try to migrate. Their app, similarly to the one above, is just a specially formatted TOTP with a shared key you’re not meant to see. Unfortunately, to be able to move the authenticator to a new phone, you need to disable it first — and you disable it from the logged-in app that has it enabled. Then you can re-enable it on the new phone. I assume there is some specific recovery path if that does not work, too. But I don’t count on it.
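
Underneath, all of these authenticators compute the same thing: RFC 6238 TOTP is just an HMAC over a time counter, keyed with the shared secret, which is why every migration story above is really a question of who holds the key. A minimal sketch, using the RFC’s own published test key rather than any real account:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_key_b32, timestep=30, digits=6, at=None):
    """RFC 6238 TOTP: HMAC-SHA1 of the number of timesteps since the Unix
    epoch, dynamically truncated to a short decimal code (RFC 4226)."""
    key = base64.b32decode(shared_key_b32.upper().replace(" ", ""))
    counter = int((time.time() if at is None else at) // timestep)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T = 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # 287082
```

Anybody holding that base32 key can generate valid codes forever, which is exactly why re-keying — and not merely hiding the key inside an app — is what actually de-authenticates an old device.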

What does this all mean? TOTP is difficult, it’s hard on users, and it’s risky. Not having an obvious way to de-authenticate an app is bad. If you were at ENIGMA, you could have listened to a talk that was not recorded, on the grounds of the risky situations described there. The talk title was «Privacy and Security Practices of Individuals Coping with Intimate Partner Abuse». Among the various topics that the talk touched upon, there was an important note on the power of 2FA/2SV for people being spied upon: it gives them certainty that somebody else is not logging in on their account. Not being able to de-authenticate TOTP apps goes against this certainty. Having to disable your 2FA to be able to change it to a new device makes it risky.

Then there are the features that become a compromise between usability and paranoia. As I said, I love the Android Wear integration of the Authenticator app. But since the watch is not lockable, anybody who can get at my watch while I’m not paying attention has access to my TOTP codes. That’s okay for my use case, but it may not be for all. The Google Authenticator app also does not allow you to set a passcode and encrypt the shared keys, which means that if you have enabled developer mode, run your phone unencrypted, or have a phone with known vulnerabilities, your TOTP shared keys can be exposed.

What does this all mean? That there is no such thing as a perfect, 100% solution that covers you against all possible threat models out there; some solutions are better than others (U2F), and the remaining compromises depend on what you’re defending against: a relative, or nation-states? If you remember, I already went over this four years ago, and again the following year, and the one after that, talking about threat models. It’s a topic I’m interested in, if that was not clear.

And a very similar concept was expressed by Zeynep Tufekci when talking about “low-profile activists”, wanting to protect their personal life, rather than the activism. This talk was recorded and is worth watching.

The geeks’ wet dreams

As a follow up to my previous rant about FOSDEM I thought I would talk to what I define the “geeks’ wet dream”: IPv6 and public IPv4.

During the whole Twitter storm I had regarding IPv6 and FOSDEM, I said out loud that I think users should not care about IPv6 to begin with, and that IPv4 is not a legacy protocol but a current one. I stand by my description here: you can live your life on the Internet without access to IPv6, but you can’t do that without access to IPv4; at the very least you need a proxy or NAT64 to reach a wide portion of the network.

Having IPv6 everywhere is, for geeks, something important. But at the same time, it’s not really something consumers care about, because of what I just said. Be it for my mother, my brother-in-law or my doctor, having IPv6 access does not give them any new feature they would otherwise not have access to. So while I can also wish for IPv6 to be more readily available, I don’t really have any better excuse for it than making my personal geek life easier by allowing me to SSH to my hosts directly, or being able to access my Transmission UI from somewhere else.

Yes, there is a theoretical advantage in speed and performance, because NAT is not involved, but to be honest that is not what most people care about: a connection of a few tens of Mbit is plenty fast even when doing double-NAT, and even Ultra HD (4K, HDR) streaming is well served by it. You could argue that an even lower-latency network may enable more technologies to become available, but that not only sounds to me like a bit of a stretch, it also misses the point once again.

I already linked to Todd’s post, and I don’t need to repeat it here, but if it’s about the technologies that can be enabled, it should be something the service providers care about. Which, by the way, is what is happening already: indeed the IPv6 forerunners are the big players that are effectively looking for a way to enable better technology. But at the same time, a number of other plans were executed so that better performance could be gained without gating it on the usage of a new protocol that, as we said, really brings nothing to the table for the end user.

Indeed, if you look at protocols such as QUIC or HTTP/2, you can notice how they reduce the number of connections (and thus the ports that need to be kept open), and that has a lot to do with the double-NAT scenario that is more and more common in homes. Right now I’m technically writing under three layers of NAT: the carrier-grade NAT used by my ISP to deploy DS-Lite, the NAT of my provider-supplied router, and the last NAT set up by my own router, running LEDE. I don’t even have working IPv6 right now, for various reasons, and you know what? The bottleneck is not the NATs but rather the WiFi.
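
Those NAT layers can actually be told apart by the address a device sees on its upstream side: RFC 1918 private space marks an ordinary home NAT, while RFC 6598 reserved 100.64.0.0/10 precisely for the carrier-grade NAT of DS-Lite deployments like the one described above. A small sketch using Python’s standard library (the addresses below are illustrative):

```python
import ipaddress

def nat_layer_hint(addr):
    """Classify an upstream address: shared CGN space, ordinary private
    space, or an actual publicly routable address."""
    ip = ipaddress.ip_address(addr)
    # Check the RFC 6598 shared space first: it is neither private nor global.
    if ip in ipaddress.ip_network("100.64.0.0/10"):
        return "carrier-grade NAT (RFC 6598 shared space)"
    if ip.is_private:
        return "home NAT (RFC 1918 private space)"
    return "publicly routable"

print(nat_layer_hint("100.64.12.34"))   # carrier-grade NAT (RFC 6598 shared space)
print(nat_layer_hint("192.168.1.254"))  # home NAT (RFC 1918 private space)
print(nat_layer_hint("8.8.8.8"))        # publicly routable
```

If your router’s WAN address falls in the 100.64.0.0/10 range, there is at least one more NAT between you and the Internet, no matter what the router’s own UI claims.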

As I said before, I’m not doing a full Todd and thinking that ignoring IPv6 is a good idea, or that we should not spend any time fixing the things that break with it. I just think that we should get a sense of proportion and figure out the relative importance of IPv6 in this world. As I said in the other post, there are plenty of newcomers who do not need to be told to screw themselves if their systems don’t support IPv6.

And honestly, the most likely scenario to test for is a dual-stack network in which some of the applications or services don’t work correctly because they misunderstand the system. Like OVH did. So I would have kept the default network dual-stack, and provided a single-stack, NAT64 network as a “perk” for those who actually do care to test and improve apps and services — and possibly also have a clue not to go and ping a years-old bug that was fixed but not fully released.

But there are more reasons why I call these dreams. A lot of the reasoning behind IPv6 appears to be grounded in the idea that geeks want something, and that it has to be good for everybody even when they don’t know about it: IPv6, publicly routed IPv4, static addresses and unfiltered network access. But that is not always the case.

Indeed, if you look even just at IPv6 addressing, and in particular at how stateless addressing works, you can see that there have been at least three different choices at different times for generating addresses:

  • Modified EUI-64 was the original addressing option, and for a while the only one supported; it uses the MAC address of the network card the IPv6 address is assigned to, and that is quite the privacy risk, as it means you can extract a unique identifier of a given machine and identify every single request coming from said machine, even as it moves across different IPv6 prefixes.
  • RFC4941 privacy extensions were introduced to address that point. These are usually enabled at some point, but they are not stable: Wikipedia correctly calls them temporary addresses, and they are usually fine to provide unique identifiers when connecting to an external service. This makes passive detection of the same machine across networks impossible — actually, it makes it impossible to detect a given machine even within the same network, because the address changes over time. This is good on one side, but it means that you do need session cookies to keep login sessions active, as you can’t (and shouldn’t) rely on the persistence of an IPv6 address. It also still allows active detection, at least of the presence of a given host within a network, as it does not by default disable the EUI-64 address; it just doesn’t use it to connect to services.
  • RFC7217 adds another alternative for address selection: it provides a stable-within-a-network address, making it possible to keep long-running connections alive, while at the same time ensuring that at least simple active probing does not give up the presence of a known machine in the network. For more details, refer to Lubomir Rintel’s post, as he went into more detail than I do on the topic.
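
To make the tracking problem concrete, here is a sketch of both ends of that evolution: the Modified EUI-64 interface identifier (RFC 4291), which embeds the MAC address verbatim and thus follows the machine across prefixes, versus a stable-but-network-specific identifier in the spirit of RFC 7217 (an illustration of the idea, not the RFC’s exact algorithm):

```python
import hashlib

def eui64_interface_id(mac):
    """Modified EUI-64: insert ff:fe in the middle of the MAC and flip the
    universal/local bit. The MAC survives verbatim in the address, so the
    host is recognizable under any prefix."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2))

def stable_private_interface_id(prefix, ifname, secret):
    """RFC 7217-style idea: hash the network prefix, the interface name and
    a host-local secret, so the identifier is stable within one network but
    unlinkable across different networks."""
    digest = hashlib.sha256(prefix.encode() + ifname.encode() + secret).digest()
    return ":".join(f"{digest[i] << 8 | digest[i + 1]:04x}" for i in range(0, 8, 2))

print(eui64_interface_id("52:54:00:12:34:56"))  # 5054:00ff:fe12:3456
```

The first function produces the same identifier everywhere the laptop goes; the second changes as soon as the prefix does, which is exactly the property the bullet points above are arguing about.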

Those of you fastest on the uptake will probably notice the connection between all these problems: it all starts from designing the addressing on the assumption that the most important addressing option is stable and predictable. Which makes perfect sense for servers, and for the geeks who want to run their own home server. But for the average person, these are all additional risks that do not provide any desired additional feature!

There is one more extension of this: static IPv4 addresses suffer from the same problem. If your connection always comes from the same IPv4 address, it does not matter how private your browser may be: your connections will be very easy to track across servers — passively, even. What is the remaining advantage of a static IP address? Certainly not authentication, as in 2017 you can’t claim ignorance of source address spoofing.

And by the way, this is the same reason why providers started issuing dynamic IPv6 prefixes: you don’t want a household (if not strictly a person) to be tied to the same address forever, otherwise passive tracking becomes effectively effortless. And yes, this is a pain for the geek in me, but it makes sense.

Static, publicly-routable IP addresses make accessing services running at home much easier, but at the same time they put you at risk. We have all been making fun of the “Internet of Things”, but at the same time it appears everybody wants to be able to make their own devices accessible from the outside, somehow. Even when that is likely the most obvious way for external attackers to access one’s unprotected internal network.

There are of course ways around this that do not require such a publicly routable address, and they are usually more secure. On the other hand, they are not quite a panacea, of course. They effectively require a central call-back server to exist and be accessible, and usually that is tied to a single company, with custom protocols. As far as I know, no such open-source call-back system appears to exist, and that still surprises me.

Conclusions? IPv6, just like static and publicly routable IP addresses, is an interesting tool that is very useful to technologists, and it is true that if you consider the original intentions behind the Internet these are pretty basic necessities. But if you think that the world, and thus the requirements and relative importance of features, has not changed, then I’m afraid you may be out of touch.

How the European Roaming directive makes my life worse

One of the travel blogs I follow covered today the new EU directive on roaming charges. I complained quickly on Twitter, but as you can imagine, 140 characters are not enough to explain why I’m not actually happy about this particular change.

You can read the press release as reported by LoyaltyLobby, but here is where things get murky for me:

The draft rules will enable all European travellers using a SIM card of a Member State in which they reside or with which they have “stable links” to use their mobile device in any other EU country, just as they would at home.

Emphasis mine, of course.

This effectively undermines the European common market for services: if you do not use a local provider, you’re now stuck paying roaming just as before, or likely even more. Of course this makes perfect sense for those people who barely travel and so only have their local carrier, or those who never left a country and so never had to get a number in a different country while keeping the old one around. But for me, this sucks in two big and somewhat separate ways.

The first problem is one of convenience, admittedly making use of a bit of a loophole. As I write this post I’m in the USA, but my phone is running a British SIM card, by Three UK. The reason is simple: with £10 I can get 1GB of mobile data (and an amount of call minutes and SMS, which I never use), and with £20 I can get 12GB. This works both in the UK and (as long as I visited it in the previous 3 months) in a number of other countries, including Ireland, Italy, the USA, and Germany. So I use it when I’m in the USA, and I used it when I went to 33c3 in Hamburg.

But I’m not a resident of the UK, and even though I do visit fairly often, I don’t really have “stable links” there.

It’s of course possible that Three UK will not stop their free roaming due to this. After all, they include countries like the USA and Hong Kong (not a country) in their free-roaming areas, and those are not in Europe at all. Plus the UK may not be part of the EU that much longer anyway. But it also gives them leverage to raise prices for non-residents.

The other use case I have is my Italian mobile phone number, which has been the same for about ten years or so, across three separate mobile providers – quite ironically, I changed from 3 ITA to Wind to get better USA roaming, and now 3 ITA has bought up Wind, heh – but I keep the number as it is associated with a number of services, including my Italian bank.

Under the new rules I may be able to pull off a “stable links” indication thanks to being Italian, but that might require me to do paperwork in Italy, where I don’t go very often. If I don’t do that, I expect the roaming to become even more expensive than it is now.

Finally, there is another interesting part to this. In addition to UK, Irish and Italian numbers, I have a billpay subscription with a French provider — the reason is that I visit France fairly often, and it’s handy to have a local phone number when I do. I have no roaming enabled on that contract though, so the directive has no effect on it anyway. That’s okay.

What is not okay, in my opinion, is that the directive says nothing about maintaining quality of service when roaming; it only imposes price caps. And indeed my French provider sent an update this past July saying that, due to a similar directive within France, their in-country roaming will have reduced speeds:

As a result, the theoretical maximum speeds reachable per subscriber on the
partner operator’s network under 2G/3G roaming will be 5 Mbit/s (downlink)
and 448 kbit/s (uplink) from 1 September 2016 until 31 December 2016. In
2017 and 2018, these speeds will be 1 Mbit/s (downlink) and 448 kbit/s
(uplink). They will then be 768 kbit/s (downlink) and 384 kbit/s (uplink)
for 2019, and 384 kbit/s (downlink) and 384 kbit/s (uplink) for 2020.

So sure, you’ll get free roaming, but it’ll have a speed that will make it completely useless.

My opinion of this directive is that it targets a particular set of complaints by a vocal part of the population that got screwed sideways by the horrible roaming tariffs of many European providers while on vacation, and at the same time provides a feel-good response for those consumers that do not actually care, as they barely, if ever, leave their country.

Indeed if you travel, say, a week a year in the summer outside of the border, probably these fixed limits are pretty good: you do not have to figure out which is the most advantageous provider for roaming in your country (which may not be advantageous in other circumstances) and you do not risk ending up with multiple hundreds of euros of bill from your vacation.

On the other hand, if you, like me, travel a lot, effectively spend a significant part of the year outside of your residence country, and even live outside of your native country, well, you’re now very likely worse off. Indeed, with the various 3 companies and their special roaming plans I was very happy not to have to keep a bunch of separate SIM cards: in Germany, the USA and Austria I just used my usual SIM cards. In the UK, France and Italy I had both a free-roaming card and a local one. Before that, instead, I ended up having Vodafone SIM cards for the Netherlands, Czech Republic, Portugal and very nearly Spain (in that case I used Wind’s roaming package instead).

Don't get me wrong: I'm not complaining about European meddling with mobile providers. I used to have a tagline for my blog, "Proud to be European", and I still am. I'm not happy about Brexit, because it actually put a stop to my plans of eventually moving to London. But at the same time I think this regulation is a gut reaction rather than a proper solution.

If I were asked what the solution should be, my suggestion would be to allow companies such as 3 and Vodafone to provide a European number option. Get a new international prefix for the EU, and allow the companies with wide enough reach (3 and Vodafone clearly already have one) to set up their own local agreements where they do not have a network themselves, by providing a framework that caps the costs charged between providers. Then give me a SIM that just travels Europe with no additional costs, and with a phone number that can be called at local rates everywhere (you can do that by ensuring that the international prefix maps to a local one in every country). Even if such a contract were more expensive than a normal one, frequent travellers would be pretty happy not to have to switch SIM cards and phone numbers, or to have unknown costs appear out of nowhere.

Emailing receipts

Before starting: I usually avoid taking political stances outside of Italy, since that is the only country I can vote in. But I think it's clear to most people over here that, despite posting vastly about first-world problems, I am not thrilled by the current political climate overall. So while you hear me complaining about petty things, don't assume I don't have just as many worries that are actually relevant to society as a whole. I just don't have solutions, and tend to stick to talking about what I know.

I'm visiting the US, maybe for the last time in a while, given the current news. It reminds me of Japan and China, in the sense that it's a country that mixes extremely high-tech and vintage solutions in the same space. The country had to be brought kicking and screaming into the 20th century some years ago to start issuing chip cards, but on the other hand, thanks to Square, LevelUp and all kinds of other similar mobile payment platforms, email receipts are becoming more and more common.

I find this interesting. I wrote some time ago about my preference for electronic bills, but I did not go into the details of simpler receipts. I touched on the topic when talking about expenses, but did not go into precise details there either. So I thought that maybe it's time to write something, if nothing else because this way I can share my opinion on them.

For those who may not have been to the States, or at least not to California or Manhattan, here is the deal I'm talking about: when you pay with a given credit card through Square for the first time (the same applies to other platforms), it asks you whether you want to receive a receipt confirmation via email. I usually say yes, because I prefer that to paper (more on that later). Afterwards, any payment made with the same card also gets emailed. You can unsubscribe if you want, but the very important part is that you can't refuse the receipt at payment time. Which is fun, because after going to a bibimbap restaurant near the office last week, while on business travel, and taking a picture of the printed receipt for the expense report, I got an email with the full receipt, including tip, straight into my work inbox (I paid with the company card, and I explicitly make the two go to different email addresses). The restaurant didn't even have to ask.

As it happens, Square and the mobile payment platforms are not the only ones doing this. Macy's, a fairly big department store in North America, also allows you to register a card, although as far as I remember it still lets you opt to get only the paper receipt. This difference in options is interesting, and it kind of makes sense in the context of spending patterns: if you're going to Macy's to buy a gift for your significant other, you may well not want to send them a copy of the receipt for the gift. On the other hand, I would not share my email password with an SO — maybe that's why I'm single. Apple Stores also connect a card with an email address, but I remember the email receipt being opt-in there as well, which is not terribly good.

Why do I think it is important that the service lets you opt in to receipts, but not opt out of a single transaction? It's a very good safeguard against fraud. If a criminal were to skim your card and use it through one of those establishments that send you email receipts, they would definitely opt out of the email receipt, so as not to alert you. This is not theoretical, by the way, as it happened to me earlier this month. My primary card got skimmed – I have a feeling this happened in December, at the MIT Coop store in Cambridge, MA, but that's not important now – and used, twice, at one or two Apple Stores in Manhattan, buying the same item for something above €800, during what for me was a Saturday evening. I honestly don't remember whether I had used that card at an Apple Store before, but assuming I had, and had the receipts not been opt-in, I would have known to call my card company right away, rather than having to wait for them to call me on Monday morning.

While real-time alerts are something a few banks do provide, no bank in Ireland does, to my knowledge, and even in Italy the banks that do make you pay extra for the service, which is kind of ludicrous, particularly for credit cards, where the money at stake is usually the bank's. And since the accounting of foreign transactions can easily take days, while the receipts are nearly instantaneous by design, this is very helpful to protect customers. I wish more companies started doing it.

An aside here about Apple: by complete coincidence, a colleague of mine had a different kind of encounter with criminals who tried to buy Apple devices with his card the week before me. In his case, the criminals got hold of the card information to use it online, and set up a new Apple ID to buy something. But he did have the card attached to his real Apple ID account and had made online purchases with it not long before, so when they tried, the risk engine on Apple's side triggered, and Apple contacted him to verify whether the order was genuine. So in this case neither Apple nor the bank lost money, as the transaction was cancelled in time. He still had to cancel the card, though.

But there is more. Most people treat receipts, and even more so credit card slips, as trash, and just throw them away at the first chance they have. For most people, in most cases, this is perfectly okay; sometimes it is not. Check out this lecture by James Mickens — whom I had the pleasure of listening to in person at LISA 2015… unfortunately not of meeting, because I went into shock during his talk: exactly at that time the Bataclan attacks happened in Paris, and I was distraught trying to reach all my Parisian friends.

If you have watched the full video, you now know that the last four digits of a credit card number are powerful. If you like fantasy novels, such as the Dresden Files, you have probably read that «true names have power» — well, as it happens, a credit card number possibly has more power in the real world. And the last four digits of a credit card can be found on most credit card slips, together with a full or partial name, as written on the card. So while it's probably okay to leave the credit card slip on the table at a random restaurant in the middle of the desert, if you're the only patron inside… it might not be quite the same if you're a famous person, or a person at risk of harassment. And let's be honest, everybody is at risk nowadays.
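To put a rough number on that power, here is my own back-of-the-envelope illustration, not something from the talk, using a well-known test card number. On a 16-digit card the first six digits are the issuer prefix, which is easy to guess for a common bank; so with the last four digits from a slip, only six digits are unknown, and the Luhn checksum that all card numbers satisfy eliminates nine candidates out of ten:

```python
from itertools import product

def luhn_ok(number: str) -> bool:
    # Standard Luhn checksum: double every second digit from the right,
    # subtracting 9 from two-digit results; valid if the total ends in 0.
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Hypothetical scenario: the 6-digit issuer prefix is guessed, the last
# four digits come from a discarded slip, six middle digits are unknown.
bin_prefix, last_four = "424242", "4242"
candidates = [
    bin_prefix + "".join(mid) + last_four
    for mid in product("0123456789", repeat=6)
    if luhn_ok(bin_prefix + "".join(mid) + last_four)
]
print(len(candidates))  # 100000: exactly one in ten middles passes
```

A hundred thousand candidates is nothing. And this is only one angle: the talk's point is more about companies accepting those four digits as proof of identity, but either way the number space left protecting you is tiny.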

While it is true that credit card slips and receipts are often separate, particularly with chip cards, as the POS terminal and the register are usually completely separate systems, this is not always the case, and almost never for big stores, both in the United States and abroad. Square cash registers, like those of a number of similar providers that graduated from mobile-only payments to full-blown one-stop shops for payment processing, tend to print out a single slip of paper (if you have not registered for email receipts). This at least reduces the chance that you throw the receipt away immediately, as you probably want to bring it home with you for warranty purposes.

And then there is the remaining problem: when you throw paper receipts directly into the trash, dumpster diving makes it possible to find out a lot about your habits, and in particular it makes it significantly easier to target you, even just opportunistically, with the previously mentioned power of the last four digits of your card plus a name.

Now, it is true that we end up with two different security problems: the payment processing companies can now connect a credit card number with an email address. I would hope that PCI-DSS stops them from storing the payment information in cleartext, and that they only store a one-way hash of the card number to connect it to the email address. It is still tricky: even with hashed card numbers, a leak of that database would make the above attacks easier, as you could find out the email address, and from that easily the accounts, of a credit card's owner, and take control way too easily.
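A sketch of why a bare hash helps less than it sounds (the function names and key below are mine, purely illustrative; PCI-DSS does not literally mandate this construction): the space of valid card numbers is small enough to enumerate offline, so an unsalted hash can be reversed by brute force, while a keyed HMAC is useless to whoever steals the table but not the key.

```python
import hashlib
import hmac

def naive_token(pan: str) -> str:
    # Unsalted SHA-256 of the card number. It looks one-way, but valid
    # card numbers are so few that the hash can be brute-forced offline.
    return hashlib.sha256(pan.encode()).hexdigest()

def keyed_token(pan: str, secret: bytes) -> str:
    # HMAC-SHA256 with a server-side secret: a leaked table of these
    # tokens is useless without the key, which never leaves the server.
    return hmac.new(secret, pan.encode(), hashlib.sha256).hexdigest()

# Hypothetical values: a well-known test card number and a made-up key.
pan = "4242424242424242"
secret = b"server-side-secret"

# The naive token can be recomputed by anyone holding a dumped table...
assert naive_token(pan) == hashlib.sha256(pan.encode()).hexdigest()
# ...while the keyed token depends on a secret the attacker lacks.
assert keyed_token(pan, secret) != keyed_token(pan, b"some-other-key")
```

Proper tokenization (a random token plus a vaulted mapping) is stronger still, but keyed hashing is the minimum you would hope a processor uses for the card-to-email link.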

There is also the risk of opening up more details of your personal life to whoever has access to your email account — say, your employer, if you're not properly siloing your email accounts. This is a real problem, but only made slightly worse by email receipts for in-store purchases. Indeed, for stores like CVS you most likely get an order history straight from the website, which you can most likely already access if you have access to the email account — which, by the way, is why you should ask for 2FA! As I said above, I only have receipts sent to my work account when they are undoubtedly work-only: anything I buy with the work credit card is clearly work-related, but taxi receipts, flights or hotel bookings may be personal, so those accounts are set to mail my personal address only — when needed I forward the messages over, but usually I just need the receipts for expensing.

And hey, even the EFF, whose support I renewed today, uses Square to take donations, so why not?

33c3 impressions: ethical development and Dieselgate

Dieselgate is, in my opinion, an interesting case study for the ethics of software development and hacking in general. Particularly because it's most definitely not a straightforward bad guys vs good guys case, and because there is no clear-cut solution to the problem. There was a talk about it (again) at the Chaos Computer Congress this year. It's a nice talk to watch:

I was actually physically in the room only for the closing notes, but they got me thinking, which is why this was the second talk I watched once back in Dublin, after the Congress was over.

[Regarding the responsibility for Bosch as the supplier to Volkswagen]

If you build software that you know is used to be illegally, it should it must be your responsibility to not do that and I’m not sure if this is something that is legally enforceable but it should be something that’s enforceable ethically or for all us programmers that we don’t build software that is designed to break the law.

The rambling style of the quote is clearly understandable as it was a question asked on the spot to the speaker.

Let me be clear: these cheating "devices" (the device itself is not the cheating part, of course, but rather the algorithms and curves programmed onto it) are a horrible thing, not only from the legal point of view but because we only have one planet. Although, in the grand scheme of things, they appear to have forced the hand of various manufacturers, pushing them towards electric cars more quickly. But let's put that aside for a moment.

Felix suggests that developers (programmers) should refuse to build software to break the law. Which law are we talking about here? Any law? All the laws? Just the laws we like?

Let's take content piracy. File sharing software is clearly widely used to break the law, by allowing software, movies, TV series, books and music to be shared without permission. DRM-defeating software is also meant to break (some, many) laws. Are the people working on the BitTorrent ecosystem considering that the software they are writing is used to break the law? Should it be ethically enforceable that they not do so?

Maybe content piracy is not that high on the ethical list of hackers. After all, we have all heard «Information wants to be free», particularly used as an excuse for all kinds of piracy. I would argue that I'm significantly more open to accepting de-DRM software as ethical, if not legal, because DRM is (vastly) unethical. In particular, as I already linked above, defeating DRM is needed to fix the content you buy and to maintain access to the content you already bought.

Let's try to find a more suitable example… oh, I know! "Anonymous" transaction systems like Bitcoin, Litecoin and the newest product of that particular subculture, Zcash, which just happens to have been presented at 33C3 as well. These are clearly used to break the law in more ways than one: they allow transactions outside of the normal, regulated banking system, they make it possible to work around laws that apply to cash transactions, and they keep being engineered to make transactions untraceable. After all, Silk Road would probably have seen a significant reduction in business if people had actually had to use traceable money transfers.

Again, should we argue that anybody working on this ecosystem should be ethically enforced out? Should they be forced to acknowledge that the software they work on is used for illegal activities? Well, maybe not; after all, these systems just make things easier, they do not enable anything that was not possible before. They are not critical to the illegal activity, so surely it's not the developers' fault, up to this point.

Let's take yet another example, even though I feel like I'm beating a dead horse. The most loved revolutionary tool: Tor. If you ask the privacy advocates, this is the most important tool for refugees, activists, dissidents, all the good people in the world. And arguably, in this day and age, there are indeed many activists and dissidents that a lot of us want to protect, as even the US starts feeling like a regime to some. When asked about abuse, these advocates will argue that supermarket loyalty cards are abused more than Tor.

But if you take off your starry-eyed glasses, you'll see that Tor has been used not only by Silk Road and its ilk, but also for significantly nastier endeavours than buying and selling drugs. Another talk from 33c3 criticised law enforcement's use of "hacking techniques", particularly when dealing with child pornography websites, and for once it did so not simply to attack law enforcement's intention of breaking Tor, but rather the fact that they have caused lots of evidence to be thrown out as fruit of the poisonous tree.

I'm not arguing that Felix is wrong in saying that Bosch very likely knew what the software they developed, to their customer's spec, would cause. And I'm not sure I can actually assess the ethical difference between providing platforms for people to hire killers and poisoning our cities and planet. I'm not a philosopher, and I don't particularly care to think through trolley problems, whether for real or for trolling. (Yes, we have all seen the memes.)

But at the same time, I find it ironic that roughly the same group of people who cheered his takedown of Bosch would also cheer Tor and the other projects… You could say that what Felix referred to was more about immorality than illegality, but not all morals are absolute, and laws are not universal. And saying that you want to enforce whatever laws you think should exist, but not others, is not something I agree with.

Leaving this last consideration aside – and it's funny that just his last few phrases are what brought me to post all this! – I really liked Felix's talk, and I'm happy that he's taking a grown-up approach rather than the anarchist hacker's. Among other things, he admits that it might not be feasible to ask, or force, the car companies to open-source their whole ECU code, but that it might be feasible to approach it like Microsoft's Shared Source license.

Finally, I would point out that this is possibly a proper example of where giving users the ability to upload their own code is not something I'd look forward to. The software at the centre of the storm is not designed to defeat regulation for the sole reason that corporations are evil and want to poison our planet, even though I'm sure that would be an easy narrative, particularly with the Congress crowd. Instead, these curves and decisions are meant to tip the scales towards the interests of the user rather than those of the collectivity.

You just need to watch the talk to hear how these curves actually increase the time-to-service of the cars, both by reducing the amount of dirt caught by the EGR and by reducing the amount of AdBlue used. It also stands to reason that the aim not only of the curves, but of the whole design of these cars, is to ignore regulations wherever possible (since the ECU helps cheating during the required tests) in order to improve performance. Do we really expect that, if people were allowed to push their own code to the ECU, they would consider the collective good rather than their own personal gain? Or would you rather expect them to disable the AdBlue tank altogether, so they have to spend even less time worrying about servicing their cars?

Let me be clear, there is no easy solution to this set of problems. And I think Felix’s work and these talks are actually a very good starting point to the solution. So maybe there is hope, after all.

Growing up, or why I don’t really feel part of the community

I have said before that I've been wrong multiple times in the past. Part of it has been buying too much into the BOFH myth. By this I mean that, between reading UserFriendly and the Italian-language Storie dalla Sala Macchine, I bought into the story that system administrators (whom you may now want to call "ops people") are the only smart set of eyes in an organization, and that most other people have absolutely no idea what they are doing.

But over time I have worn many hats: I left high school with some basic understanding of system administration, then went on to work as a translator and as a developer of autonomous apps in an embedded environment, taught courses on debugging and development under Linux, and worked on even more embedded systems. While I was self-employed as a sysadmin for hire (or, to use a more buzzword-compliant term, as an MSP, a Managed Service Provider), I ended up working on media streaming software, as well as media players and entire services. I worked as a web developer, even in PHP for a very brief time, and wrote software for proprietary environments as well as for Linux and other open source systems. In addition to my random musings on this blog, I wrote for more reputable publications. And I'm currently a so-called Site Reliability Engineer, and "look after" (but not really) highly distributed systems.

This possibly abnormal list of tasks, if not really occupations, has a clear upside: I can pass for a member of different groups relatively easily. For months I had people thinking I was an EE student; at work a bunch of people thought I had previous experience with distributed systems; and of course at LISA I can talk my way around as if I were still a system administrator. Somehow I also manage to pass for a security person, because I have a personal interest in the topic and have learnt a bunch about it, even though I have never worked on or researched it officially.

On the other hand, this gives me a downside that, personally, weighs much more: impostor syndrome. In all those crowds, while I can probably hide for a while, I feel out of place. I have not used a soldering iron (or a breadboard) for years now (although I'm working to fix that), and I have not really worked on small discrete electronics for years — the last time I worked as a firmware engineer, it was for a system that had 8GB of RAM and an i7 Xeon. I don't have the calculus skills to be an actual multimedia developer: I know my way around container formats, but I need someone else to write the codecs, as I have no clue how to decode even the simplest encoding, well, maybe except Huffman codes at this point.

So while I can camouflage myself in these groups, I can't really feel like a true member of them.

You could say that the free software community should be the one I'm most at ease with, given I've been a developer for over fifteen years at this point, but there are two big problems with it: the first is that I'm not a purist, and I use proprietary software any time I feel it's the best tool for the task; the second is that free software and privacy advocacy mix too much nowadays, and my concept of privacy does not match the activists'.

While at 33C3 I realized I don't really match this crowd either, and not just on the privacy topic. I somehow have more respect for the rules than most of the people I saw around, though I still enjoy the hacking and the breaking; so when people on Twitter start complaining that Nintendo and PS4 exploits are not being released, I find withholding them a perfectly reasonable approach. After all, behind the outrage over blocking Linux on the PS3 hid the intention to pirate games, and that's not something I'm happy to condone.

I hung around a few of my old acquaintances, and friends of theirs, while they were working on the CTF — and that was kind of cool, but it's also not something I'm very interested in: while I can work my way around security problems, and know what to look out for, I don't really like that kind of puzzle. Just like I don't enjoy logic puzzles, or sudoku. I much enjoy Scrabble, though.

The evenings were the least interesting to me, too. Most of them revolved around parties with alcohol, and you know I don't care for it. Given that this is C3, I'm sure there were a number of other drugs involved too — I'm not an expert, but I can by now tell the smell of weed quite clearly, and the conference centre smelled of it more than Seattle does. So effectively the only night I left after 10pm was the first, and only because Hector was talking at 11pm. (On the bright side, Hamburg makes procuring sugar-free fritz-kola very easy.)

To stray away from technology for a bit, I should add that even at non-software conventions, such as EasterCon and Nine Worlds, I feel like an outsider too. Much as I'm a fan of sci-fi and fantasy, and a would-be avid reader, I don't have the time to read as much as I would like, and I'm clearly not cut out to be a cosplayer or a fanzine writer. And most of these events also involve a disproportionate amount of alcohol.

So why did I title this post "Growing up"? Because acceptance comes with growing up. I sometimes find myself in these subcultures and groups, but I know I won't truly belong. For some of them it's because I don't have the time to invest to join them properly: for instance, while I would love to actually be an EE, I never really went to university (two weeks don't count), and going now is not an option, as I'm too old for this. For others, it's because I would not be ready to compromise my ethics with regard to piracy or legality.

And, much as I understand that people do enjoy those things "responsibly", I don't really think that weed or any other drug is something I care to use. I know how I feel when I'm not in control, and though that may "relax" me enough not to be afraid of every single social interaction, it is not a pretty feeling afterwards. Even if that means there is a chance I'll always feel isolated without one of those "social lubricants".

Unfortunately this does mean that, for many things, I'll always be an outsider looking in rather than an insider, which makes it difficult to drive change, for instance. But again, accepting that is part of growing up. So be it.