Ten Years of IPv6, Maybe?

It seems like it’s now a tradition that, once a year, I will be ranting about IPv6. This usually happens because either I’m trying to do something involving IPv6 and I get stumped, or someone finds one of my old blog posts and complains about it. This time is different, to a point. Yes, I sometimes throw some of my older posts out there, and they receive criticism in the form of “it’s from 2015” – which people think is relevant, but isn’t, since nothing really changed – but the occasion this year is celebrating the ten-year anniversary of World IPv6 Day, the so-called 24-hour test of IPv6 by the big players of network services (including, but not limited to, my current and past employer).

For those who weren’t around or aware of what was going on at the time, this was a one-time event in which a number of companies and sites organized to start publishing AAAA (IPv6) records for their main hostnames for a day. Previously, a number of test hostnames existed, such as ipv6.google.com, so if you wanted to work on IPv6 tech you could, but you had to go and look for it. The whole point of the single-day test was to make sure that users wouldn’t notice if they started using the v6 version of the websites — though as history now tells us, a day was definitely not enough to find many of the issues around it.

For most of these companies it wasn’t until the following year, on 2012-06-06, that IPv6 “stayed on” on their main domains and hostnames, which should have given enough time to address whatever might have come out of the one day test. For a few, such as OVH, the test looked good enough to keep IPv6 deployed afterwards, and that gave a few of us a preview of the years to come.

I took part in the test day (as well as the launch) — at the time I was exploring options for getting IPv6 working in Italy through tunnels, and I tried a number of different options: Teredo, 6to4, and eventually Hurricane Electric. If you’ve been around those circles long enough you may be confused by the absence of SixXS as an option — I had encountered their BOFH side of things, and got my account closed for signing up with my Gmail address (that was before I started using Gmail For Business on my own domain). Even when I was told that if I signed up with my Gentoo address I would have gotten extra credits, I didn’t want to deal with that behaviour, so I just skipped that option.

So ten years on, what lessons did I learn about IPv6?

It’s A Full Stack World

I’ve had a number of encounters with self-defined Network Engineers, who think that IPv6 just needs to be supported at the network level. If your ISP supports IPv6, you’re good to go. This is just wrong, and shouldn’t even need to be debated, but here we are.

Not only does supporting IPv6 require using slightly different network primitives at times – after all, Gentoo has had an ipv6 USE flag for years – but you also need to make sure anything that consumes IP addresses throughout your application knows how to deal with IPv6. For an example, take my old post about Telegram’s IPv6 failures.

As far as I know their issue is now solved, but it’s far from uncommon — after all, it’s an obvious trick to feed legacy applications a fake IPv4 address if you can’t adapt them to IPv6 quickly enough. If they’re not actually using it to initiate a connection, but only using it for (short-term) session retrieval or logging, you can get away with this until you replace or lift the backend of a network application. Unfortunately that doesn’t work well when the address is shown back to the user — and the same is true when the IP needs to be logged for auditing or security purposes: you cannot map arbitrary IPv6 addresses into a 32-bit address space, so while you may be able to provide a temporary session identifier, you would need something that maps the session time and the 32-bit identifier together, to match the original source of the request.
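To make that last point concrete, here is a minimal sketch of what such a mapping could look like (the function and table names are hypothetical): the legacy code keeps seeing a value that fits in 32 bits, while the real source address, v4 or v6, is kept on the side together with a timestamp.

```python
import ipaddress
import secrets
import time

# Hypothetical mapping table: opaque 32-bit token -> (timestamp, real source address).
_sessions: dict[int, tuple[float, str]] = {}

def fake_v4_for(source: str) -> str:
    """Hand a legacy, v4-only backend something that fits in its 32-bit field."""
    token = secrets.randbits(32)
    _sessions[token] = (time.time(), source)   # the real source, IPv4 or IPv6
    return str(ipaddress.IPv4Address(token))   # what the legacy code sees and logs

def real_source(fake_v4: str) -> tuple[float, str]:
    """Given the logged 32-bit value, recover the time and the actual source."""
    return _sessions[int(ipaddress.IPv4Address(fake_v4))]

print(real_source(fake_v4_for("2001:db8::1")))
```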

Another example of where the difference between IPv4 and IPv6 might cause hard-to-spot issues is anonymisation. Now, I’m not a privacy engineer and I won’t suggest that I’ve got a lot of experience in the field, but I have seen attempts at “anonymising” user IPv4s by storing (or showing) only the first three octets. Besides the fact that this doesn’t work if you are trying to match up people within a small pool (getting to the ISP would be plenty enough in some of those cases), this does not work with IPv6 at all — you can have 120 of the 128 bits of it and still pretty much be able to identify a single individual.
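To see the difference in practice, here is a quick sketch using Python’s ipaddress module (the addresses are from the documentation ranges): dropping the last octet of an IPv4 address still leaves a pool of at most 256 hosts, while dropping the last 8 bits of an IPv6 address leaves the /64, and usually the interface identifier, perfectly recognisable.

```python
import ipaddress

def naive_anonymise(addr: str, keep_bits: int) -> str:
    """Zero out everything past keep_bits: the naive "truncation" approach."""
    net = ipaddress.ip_network(f"{addr}/{keep_bits}", strict=False)
    return str(net.network_address)

# IPv4: "first three octets" means keeping 24 of 32 bits
print(naive_anonymise("203.0.113.42", 24))           # 203.0.113.0, at most 256 candidates
# IPv6: keeping 120 of 128 bits hides almost nothing
print(naive_anonymise("2001:db8:1:2:3:4:5:6", 120))  # 2001:db8:1:2:3:4:5:0, still one household
```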

You’re Only As Prepared As Your Dependencies

This is pretty much a truism in software engineering in general, but it might surprise people that this applies to IPv6 even outside of the “dependencies” you see as part of your code. Many network applications are frontends or backends to something else, and in today’s world, with most things being web applications, this is true for cross-company services too.

When you’re providing a service to one user, but rely on a third party to provide you a service related to that user, it may very well be the case that IPv6 will get in your way. Don’t believe me? Go back and read my story about OVH. What happened there is actually a lot more common than you would think: whether it is payment processors, advertisers, analytics, or other third party backends, it’s not uncommon to assume that you can match the session by source address and time (although that is always very sketchy, as you may be using Tor, or any other setup where requests to different hosts are routed differently).

Things get even more complicated as time goes by. Let’s take another look at the example of OVH (knowing full well that it was 10 years ago): the problem there was not that the processor didn’t support IPv6 – though it didn’t – the problem was that the communication between OVH (v6) and the payment processor (v4) broke down. It’s perfectly reasonable for the payment processor to request, through a back-channel, information about the customer that the vendor is sending through: if the vendor is convinced they’re serving a user in Canada, but the processor is receiving a credit card from Czechia, something smells fishy — and payments are all about risk management after all.

Breaking when using Tor is just as likely, but that can also be perceived as a feature, from the point of view of risk. But when the payment processor cannot understand what the vendor is talking about – because the vendor was talking to you over v6, and passed that information to a processor expecting v4 – you just get a headache, not risk management.

How did this become even more complicated? Well, at least in Europe a lot of payment processors had to implement additional verification through systems such as 3DSecure, Verified By Visa, and whatever Amex calls it. It’s often referred to as Strong Customer Authentication (SCA), and it’s a requirement of the 2nd Payment Service Directive (PSD2), but it has existed for a long while and I remember using it back before World IPv6 Day as well.

With SCA-based systems, a payment processor has pretty much no control over what their full set of dependencies is: each bank provides their own SCA backend, and to the best of my understanding (with the full disclosure that I never worked on payment processing systems), they all just talk to Visa and MasterCard, who then have a registry of which bank’s system to hit to provide further authentication — different banks do this differently, with some risk-management engine behind it that either approves straight away, or challenges the customer somehow. American Express, as you can imagine, simplifies their own life by being both the network and the issuer.

The Cloud is Vastly V4

This is probably the one place where I’m just as confused as some of the IPv6 enthusiasts. To the best of my knowledge, neither AWS nor Google Cloud provides IPv6 as an option for virtual machines — why not?

If you use “cloud native” solutions, at least on Google Cloud, you do get IPv6, so there’s that. And honestly if you’re going all the way to the cloud, it’s a good design to leverage the provided architecture. But there’s plenty of cases in which you can’t use, say, AppEngine to provide a non-HTTP transport, and having IPv6 available would increase the usability of the protocol.

Now this is interesting because other providers go different ways. Scaleway does indeed provide IPv6 by default (though, not in the best of ways in my experience). It’s actually cheaper to run on IPv6 only — and I guess that if you do use a CDN, you could ask them to provide you a dual-stack frontend while talking to them with an IPv6-only backend, which is very similar to some of the backend networks I have designed in the past, where containers (well before Docker) didn’t actually have IPv4 connectivity out, and they relied on a proxy to provide them with connections to the wide world.

Speaking of CDNs – which are probably not often considered part of Cloud Computing, but I will bring them in anyway – I have mused before that it’s funny how a number of websites that use Akamai and other CDNs appear not to support IPv6, despite the fact that the CDNs themselves do provide IPv6-frontend services. I don’t know for sure that this is not related to something “silly” such as pricing, but certainly there are more concerns to supporting IPv6 than just “flipping a switch” in the CDN configuration: as I wrote above, there are definitely full-stack concerns with receiving inbound connections coming via IPv6 — even if the service does not need full auditing logs of who’s doing what.

Privacy, Security, and IPv6

If I were to say “IPv6 is a security nightmare”, I’d probably get a divided room — I think there’s a lot of nuance needed to discuss privacy and security around IPv6.

Privacy

First of all, it’s obvious that IPv6 was designed and thought out at a different time than the present, and as such it brought with it some design choices that, looking at them nowadays, look wrong or even laughable. I don’t laugh at them, but I do point out that they were indeed made with a different idea of the world in mind, one that I don’t think is reasonable to keep pining for.

The idea that you can tie an end-user IPv6 address to the physical (MAC) address of an adapter is not something that you would come up with in 2021 — and indeed, IPv6 was retrofitted with at least two proposals for “privacy-preserving” address generation options. After all, the very idea of “fixed” MAC addresses appears to be on the way out — mobile devices started using random MAC addresses tied to specific WiFi networks, to reduce the likelihood of people being tracked between different networks (and thus different locations).

Given that IPv6 is being corrected, you may then expect that the privacy issue is now closed, but I don’t really believe that. The first problem is that, from the point of view of the network administrator, there’s no real way to enforce what the equipment inside the network will do. Let me try to build a strawman for this — but one that I think is fairly reasonable, as a threat model.

While not every small-run manufacturer would go out of their way to be assigned an OUI to give their devices a “branded” MAC address – many of the big ones don’t even do that and leave the MAC provided by the chipset vendor – there are a few who do. I know that we did that at one of my previous customers, where we decided not only to get an OUI to use for setting the MAC addresses of our devices — but we also used it as a serial number for the device itself. And I know we’re not alone.

If some small-run IoT device is shipped with a manufacturer-provided MAC address with their own OUI, it’s likely that the addresses themselves are predictable. They may not quite be sequential, and they probably won’t start from 00:00:01 (they didn’t in our case), but it might well be possible to figure out at least a partial set of addresses that the devices might use.

At that point, if these devices don’t use a privacy-preserving ephemeral IPv6 address, it shouldn’t be too hard to “scan” a network for them, by taking the /64 observed in a user request and calculating the effective IPv6 addresses the devices would take on it. This is simplified by the fact that, most of the time, ICMP6 is allowed through firewalls — because some of it is needed for operating IPv6 altogether, and way too often even I have left stuff more open than I would have liked to. A smart gateway would be able to notice this kind of scan, but… I’m not sure how most routers do with things like this, still. (As it turns out, the default UniFi setup at least seems to configure this correctly.)
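For illustration, this is roughly how the classic EUI-64/SLAAC interface identifier is derived from a MAC address; the OUI and the serial range below are made up, but if the serials are predictable, so are the candidate addresses on a known /64.

```python
import ipaddress

def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Compute the SLAAC (EUI-64) address a device with this MAC would take on a /64."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    # flip the universal/local bit and insert ff:fe in the middle
    eui64 = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:6]
    return ipaddress.IPv6Network(prefix).network_address + int.from_bytes(eui64, "big")

# hypothetical vendor OUI (aa:bb:cc) with near-sequential serial numbers
for serial in range(0x000100, 0x000104):
    mac = "aa:bb:cc:%02x:%02x:%02x" % (serial >> 16 & 0xFF, serial >> 8 & 0xFF, serial & 0xFF)
    print(mac, "->", eui64_address("2001:db8:1234:5678::/64", mac))
```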

There’s another issue — even with privacy extensions, IPv6 addresses are often provided by ISPs in the form of /64 networks. These networks are usually static per subscriber, as if they were static public IPv4 addresses, which again is something a number of geeks would like to have… but it also has the side effect of making it possible to track a whole household in many cases.

This is possibly controversial with folks, because the move from static addresses to dynamic dialup addresses marks the advent of what some annoying greybeards refer to as Eternal September (if you use that term in the comments, be ready to be moderated away, by the way). But with dynamic addresses came some level of privacy. Of course the ISP could always figure out who was using a certain IP address at a given time, but websites wouldn’t be able to keep users tracked day after day.

Note that dynamic addresses were not meant to address the need for privacy; that was just incidental. And the fact that you would change addresses often enough that websites couldn’t track you was also not by design — it was just expensive to stay connected for long periods of time, and even on flat rates you may have needed the phone line to make calls, or you may just have lost the connection because of… something. The game (eventually) changed with DSL (and cable and other similar systems), as they didn’t keep the phone line busy and were much more stable, and eventually the usage of always-on routers instead of “modems” connected to a single PC made cycling to a new address a rare occurrence.

Funnily enough, we once again gained some tidbit of privacy in this space with the advent of carrier-grade NAT (CGNAT), which again was not designed for this at all. But since it concentrates multiple subscribers (sometimes entire neighbourhoods or towns!) behind a single outbound IP address, it makes it harder to tell which person (or even which household) is accessing a certain website — unless you are the ISP, clearly. This, by the way, is kind of the same principle that certain VPN providers use nowadays to sell their product as a privacy feature; don’t believe them.

This is not really something that the Internet was designed for — protection against tracking didn’t appear to be one of the worries of an academic network that assumed each workstation would be directly accessible to others, and shared among different users. The world we live in, with user devices that are increasingly single-tenant, and that are not meant to be accessible by anyone else on the network, is different from what the original design of this network envisioned. And IPv6 carried on with that design to begin with.

Security

Now on the other hand there’s the problem of actual endpoint security. One of the big issues with network protocols is that firewalls are often designed with one, and only one, protocol in mind. Instead of a strawman, this time let me talk about an episode from my past.

Back when I was in high school, our systems lab was the only laboratory that had a mixed IP (v4, obviously, it was over 16 years ago!) and IPX network, since one of the topics they were meant to teach us was how to set up a NetWare network (it was a technical high school). All of the computers were set up with Windows 95, with what little security features were available, and included a software firewall (I think it was ZoneAlarm, but my memory is fuzzy on this point). While I’m sure that trying to disable it or work around it would have worked just as well, most of us decided not to even try: Unreal Tournament and ZSNES worked over IPX just as well, and ZoneAlarm had no idea what we were doing.

Now, you may want to point out that obviously you should make sure your systems are secured for both IP generations anyway. And you’d be generally right. But given that systems have sometimes been lifted and shifted many times, it might very well be that there’s no certainty that a legacy system (OS image, configuration, practice, network, whatever it is) can be safely deployed in a v6 world. If you’re forced to, you’ll probably invest money and time to make sure that it is the case — if you don’t have an absolute need besides “it’s the Right Thing To Do”, you most likely will try to live as long as you can without it.

This is why I’m not surprised to hear that for many sysadmins out there, disabling IPv6 is part of the standard operating procedure of setting up a system, whether it is a workstation or a server. This is not helped by the fact that on Linux it’s way too easy to forget that ip6tables is separate from iptables (and yes, I know that this is hopefully changing soon).

Software and Hardware Support

This is probably the aspect of the IPv6 fan base where I feel most at home. For operating systems and hardware (particularly network hardware) not to support IPv6 in 2021 feels like we’re being cheated. IPv6 is a great tool for backend networks, as you can quickly avoid a lot of legacy IPv4 features, and use cascading delegation of prefixes in place of NAT (I’ve done this, multiple times) — so not supporting it at the very basic level is damaging.

Microsoft used to push for IPv6 with a lot of force: among other things, they included Miredo/Teredo in Windows XP (add that to the list of reasons why some professionals still look at IPv6 suspiciously). Unfortunately WSL2 (and, as far as I understand, Hyper-V) does not allow using IPv6 for the guests on a Windows 10 workstation. This has gotten in my way a couple of times, because I am otherwise used to just jumping around my network with IPv6 addresses (at least before Tailscale).

Similarly, while UniFi works… acceptably well with IPv6, it still treats it as an afterthought, and it is not exactly your average home broadband router either. When even semi-professional network equipment manufacturers can’t get you a good experience out of the box, you do need to start asking yourself some questions.

Indeed, if I could have a v6-only network with NAT64, I might do that. I still believe it is useless and unrealistic, but since I actually do develop software in my spare time, I would like to have a way to test it. It’s the same reason why I own a number of IDN domains. But despite having a lot of options for 802.1x, VLAN segregation, and custom guest hotspots, there’s no trace of NAT64 or other goodies like that.

Indeed, the management software pretty much only shows you IPv4 addresses for most things, and you need to dig deep to find the correct settings to even allow IPv6 on a network and set it up correctly. Part of the reason is likely that clients have a lot more weight when it comes to address selection than in v4: while DHCPv6 is a thing, it’s not well supported (still not supported at all over WiFi on Android as far as I know — all thanks to another IPv6 “purist”), and the router advertisement and neighbour discovery protocols allow for passive autoconfiguration that, on paper, is so much nicer than the “central authority” of DHCP — but makes it harder to administer “centrally”.

iOS, and Apple products in general, appear to be fond of IPv6. More than Android, for sure. But most of my IoT devices are still unable to work on an IPv6-only network. Even ESPHome, which is otherwise an astounding piece of work, does not appear to provide IPv6 endpoints — and I don’t know how much of that is because the hardware acceleration is limited to v4 structures, and how much of it is just because it would consume more memory on such a small embedded device. The same goes for CircuitPython when using the AirLift FeatherWing.

The folks who gave us the Internet of Things name sold the idea of every device in the world being connected to the Internet through a unique IPv6 address. This is now a nightmare for many security professionals, a wet dream for certain geeks, but most of all an unrealistic situation that I don’t expect will become reality in my lifetime.

Big Names, Small Sites

As I said at the beginning explaining some of the thinking behind World IPv6 Day and World IPv6 Launch, a number of big names, including Facebook and Google, have put their weight behind IPv6 from early on. Indeed, Google keeps statistics of IPv6 usage with per-country split. Obviously these companies, as well as most of the CDNs, and a number of other big players such as Apple and Netflix, have had time, budget, and engineers to be able to deploy IPv6 far and wide.

But as I have ventured before, I don’t think that they are enough to make a compelling argument for IPv6-only networks. Even when the adoption of IPv6 in addition to IPv4 might make things more convenient for ISPs, the likelihood of being able to drop IPv4 compatibility tout-court is approximately zero, because the sites people actually need are not going to be available over v6 any time soon.

I’m aware of trackers (for example this one, but I remember seeing more of them) that tend to track the IPv6 deployment of “Alexa Top 500” (and similar league table) websites. But most of the services that average people care about don’t usually seem to be covered by these.

The argument I made in the linked post boils down to this: the day-to-day of an average person is split between a handful of big-name websites (YouTube, Facebook, Twitter), and a plethora of websites that are anything but global. Irish household providers are never going to make any of the Alexa league tables — and the same is likely true for most other countries that are not China, India, or the United States.

Websites league tables are not usually tracking national services such as ISPs, energy suppliers, mobile providers, banks and other financial institutions, grocery stores, and transport companies. There are other lists that may be more representative here, such as Nielsen Website Ratings, but even those are targeted at selling ad space — and suppliers and banks are not usually interested in that at all.

So instead, I’ve built my own. It’s small, and it mostly only cares about the countries I have experienced directly; it’s IPv6 in Real Life. I’ve tried listing a number of services I’m aware of, and it should give a better idea of why I think the average person is still not using IPv6 at all, except for the big names listed above.

There’s another problem with measuring this by resolving hosts (or even connecting to them — which I’m not actually doing in my case). While this easily covers the “storefront” of each service, many use separate additional hosts for accessing logged-in information, such as account data. I’m covering this by providing a list of “additional hosts” for each main service. But while I can notice where the browser is redirected, I would have to go through the whole network traffic to find all the indirect hosts that each site connects to.
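For the curious, the kind of check involved is tiny; a minimal sketch (the hostnames are just examples, and a real survey would also have to look at the additional hosts mentioned above):

```python
import socket

def has_ipv6(hostname: str) -> bool:
    """Consider a service "v6-ready" if its hostname resolves to at least one AAAA record."""
    try:
        return bool(socket.getaddrinfo(hostname, 443, family=socket.AF_INET6))
    except socket.gaierror:
        return False

for host in ("www.google.com", "www.example.com"):
    print(host, has_ipv6(host))
```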

Most services, including the big tech companies, often have separate hosts that they use to process login requests and similar high-stakes forms, rather than using their main domain. Or they may use a different domain for serving static content, maybe even from a CDN. It’s part and parcel of the fact that, for the longest time, we considered hostnames to be a security perimeter. It’s also a side effect of wanting to make it easier to run multiple applications written in widely different technologies — one of my past customers did exactly this using two TLDs: the marketing pages were on a dot-com domain, while the login to the actual application was on the dot-net one.

Because of this “duality”, and the fact that I’m not really a customer of most of the services I’m tracking, I decided to just look at the “main” domain for them. I guess I could try to aim higher and collect a number of “service domains”, but that would be a point of diminishing returns. I’m going to assume that if the main website (which is likely simpler, or at least with fewer dependencies) does not support IPv6, their service domains don’t, either.

You may have noticed that in some cases, smaller companies and groups appear to have better IPv6 deployments. This is not surprising: not only can you audit smaller codebases much faster than the extensive web of dependencies of big companies’ applications — but the reality of many small businesses is also that the system and network administrators get a bit more time to learn and apply skills, rather than having to follow through a stream of tickets from everyone in the organization who is trying to deploy something, or has a flaky VPN connection.

It also makes it easier for smaller groups to philosophically think of “what’s the Right Thing To Do”, versus the medium-to-big company reality of “What does the business get out of spending time and energy on deploying this?” To be fair, it looks like Apple’s IPv6 requirements might have pushed some of the right buttons for that — except for the part where they are not really requiring the services used by the app to be available over IPv6: it’s acceptable for the app to connect through NAT64 and similar gateways.

Conclusions

I know people paint me as a naysayer — and sometimes I do feel like one. I fear that IPv6 is not going to become the norm during my lifetime, and definitely not during my career. It is the norm to me, because working for big companies you do end up working with IPv6 anyway. But most users won’t have to care about it for a long while yet.

What I want to point out to the IPv6 enthusiasts out there is that the road to adoption is harsh, and it won’t get better any time soon. Unless some killer application of IPv6 comes out, where supporting v4 is no longer an option, most smaller players won’t bother. It’s a cost to them, not an advantage.

The performance concerns of YouTube, Netflix, or Facebook will not apply to your average European bank. The annoyance of going through CGNAT that Tailscale experiences is not going to be a problem for your smart lightbulb manufacturer who just uses MQTT.

Just saying “It’s the Right Thing To Do” is not going to make it happen. While I applaud those who are actually taking the time to build IPv6-compatible software and hardware, and I think that we actually need more of them taking the pragmatic view of “if not now, when?”, this is going to be a cost. And one that for the most part is not going to benefit the average person in the medium term.

I would like to be wrong, honestly. I wish that next year I’ll magically get firmware updates for everything I have at home and it will all work with IPv6 — but I don’t think I will. And I don’t think I’ll replace everything I own just because we ran out of address space in IPv4. It would be a horrible waste, to begin with, in the literal sense. The last thing we want is to tell people to throw away anything that does not speak IPv6, as it would just pile up as e-waste.

Instead I wish that more IPv6 enthusiasts would get to carry the torch of IPv6 while understanding that we’ll live with IPv4 for probably the rest of our lives.

My Password Manager Needs a Virtual USB Keyboard

You may remember that a couple of years ago, Tavis convinced me to write down an idea of what my ideal password manager would look like. Later the same year I also migrated from LastPass to 1Password, because it made family sharing easier, but like before this was a set of different compromises.

More recently, I came to realise that there’s one component that my password manager needs, and I really wish I could convince the folks at 1Password to implement it: a virtual USB keyboard on a stick. Let me try to explain, because this already generated negative reactions on Twitter, including from people who didn’t wait to understand how it all fits together first.

Let me start with a note: I have been mulling over an idea like this for a long while, but I have not been able to figure out how to make it safe and secure, which means I don’t recommend just running with my rough idea as-is. I then found out that someone else already came up with pretty much the same idea and did some of the legwork to get it to work, back in 2014… but nothing came of it.

What this suggests is some kind of USB hardware token that can be paired with a phone, say over Bluetooth, and be instructed to “type out” text via USB HID. Basically a “remote keyboard” controlled from the phone. Why? Let’s take a step back and see.

Among security professionals, there’s a general agreement that the best way to have safe passwords is to use unique, generated passwords and have them saved somewhere. There are differences in the grade of security you need — so while I do use and recommend a password manager, practically speaking, having a “passwords notebook” in a safe place is pretty much as good, particularly for less technophile people. You may disagree on this, but if so please move on, as this whole post is predicated on wanting to use password managers.

Sometimes, though, you may need a password for something that cannot use a password manager. The example that comes to my mind is trying to log in to PlayStation Network on my PS3/PS4, but there are a number of other cases like that in addition to gaming consoles, such as printers/scanners, cameras (my Sony A7 needs to log in to the Sony Online account to update the onboard apps, no kidding!), and computers that are just being reinstalled.

In these cases, you end up making a choice: for something you may have to personally type out more often than not, it’s probably easier to use a so-called “memorable password”, which is also commonly (but not quite correctly) called a Diceware password. Or, alternatively, an xkcd-936 password. You may remember that I prefer a different form of memorable password for passwords you need to type out yourself very often (such as the manager’s master password, or a work access password), but for passwords that you can generate, store in a manager, and only seldom type out, 936-style passwords are definitely the way to go in my view.

In certain cases, though, you can’t easily do this either. If I remember correctly, Sony enforced passwords to have digits and symbols, and not to repeat the same digit more than a certain number of times, which makes diceware passwords not really usable there either. So instead you get a generated password you need to spend a lot of time reading and typing — and in many cases, having to do that with on-screen keyboards that are hard to use. I often time out on my 1Password screen while doing so, and need to re-login, which is a very frustrating experience in and of itself.

But it’s not the only case where this is a problem. When you set up a computer for the first time, no matter the operating system, you’ll most likely find yourself having to set up your password manager. In the case of 1Password, to do so you need the secret key that is stored… in 1Password itself (or, as in my case, you may have printed it out and put it in the safe). But typing that secret key is frustrating — being able to just “send” it to the computer would make it a significantly easier task.

And speaking again of reinstalling computers, Windows BitLocker users will likely have their backup key connected to their Microsoft account so that they can quickly recover the key if something goes wrong. Nothing of course stops you from saving the same key in 1Password, but… wouldn’t it be nice to be able to just ask 1Password to type it for you on the computer you just finished reinstalling?

There’s one final case for which this is useful, and that’s going to be a bit controversial: using the password on a shared PC where you don’t want to log in with your password manager. I can already hear the complaints that you should never log in from a shared, untrusted PC and that it’s a recipe for disaster. And I would agree, except that sometimes you just have to do it. A long time ago, I found myself using a shared computer in a hotel to download and print a ticket, because… well, it was a whole lot of multiple failures why I had to do it, but it was still required. Of course I went on and changed the password right after, but it also made me think.

When using shared computers, either in a lounge, hotel, Internet cafe (are they still a thing?), or anything like that, you need to see the password, which makes it susceptible to shoulder surfing. Again, it would be nice to have the ability to type the password in with a simpler device.

Now, the biggest complaint I have received about this suggestion is that it is complex, increases the attack surface by making the dongle a target, and that instead the devices should be properly fixed so as not to need any of this. All of that is correct, but it’s also trying to fight reality. Sony is not going to go and fix the PlayStation 3, particularly not now that the PS5 has been announced and revealed. And some of these cases cannot be fixed: you don’t really have much of an option for the BitLocker key, aside from reading it off your Microsoft account page and typing it on a keyboard.

I agree that device login should be improved. Facebook Portal uses a device code that you need to type in on a computer or phone that is already logged in to your account. I find this particular login system much easier than typing the password with a gamepad that Sony insists on, and I’m not saying that because Facebook is my employer, but because it just makes sense.

Of course to make this option viable, you do need quite a few critical bits to be done right:

  • The dongle needs to be passive: the user needs to explicitly request a password to be typed out. No touch-sensitive area on the dongle to type out in the style of a YubiKey. This is extremely important, as a compromise of the device should not allow any password to be compromised.
  • The user should be explicit on requesting the “type out”. On a manager like 1Password, an explicit refresh of the biometric login is likely warranted. It would be way too easy to exfiltrate a lot of passwords in a short time otherwise!
  • The password should not be sent in (an equivalent of) cleartext between the phone and the device. I honestly don’t remember what the current state of the art of Bluetooth encryption is, but it might not be enough to use the BT encryption itself.
  • There needs to be defense against tampering, which means not letting the dongle’s firmware be rewritten directly over the same HID connection that is used for typing out. Since the whole point is to make it safe to use a manager-stored password on an untrusted device, having firmware-flashing access would make it too easy to tamper with.
    • While I’m not a cryptography or integrity expert, my first thought would be to make sure that a shared key is negotiated between the dongle and the phone, and that on the dongle side this is tied to some measurement registers, similar to how a TPM works. This would mean needing to re-pair the dongle when updating its firmware, which… would definitely be a good idea.

I already asked 1Password if they would consider implementing this… but I somewhat expect this is unlikely to happen until someone makes a good proof of concept of it. So if you’re better than me at modern encryption, this might be an interesting project to finish up and get to work. I even have a vague idea of a non-integrated version of this that might be useful to have: instead of being integrated with the manager, having the dongle connect to a phone app that just has a textbox and a “Type!” button would make it less secure but easier to implement today: you’d copy the password from the manager, paste it into the app, and ask it to type it out through the dongle. It would be at least a starting point.

Now if you got to this point (or you follow foone on Twitter), you may be guessing what the other problem is: USB HID doesn’t send characters but keycodes. And keycodes are dependent on the keyboard layout. That’s one of the issues that YubiKeys and similar solutions have: you either need to restrict yourself to a safe set of characters, or you end up on the server/parser side having to accept equivalence of different strings. Since this is intended to be used with devices and services that are not designed for it, neither option is really feasible — in particular, the option of just allowing a safe subset simply doesn’t work: it would reduce the options in the alphabet due to qwerty/qwertz/azerty differences, but it would also not allow some of the symbol classes that a number of services require you to use. So the only option there would be for the phone app to do the conversion between characters and keycodes based on the configured layout, and to let users change it.
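A minimal sketch of that conversion, assuming a US QWERTY host layout; the table is heavily truncated, and the usage IDs are the standard ones from the USB HID keyboard usage page:

```python
# Map characters to (HID usage ID, shift) pairs for a US QWERTY layout.
# A real implementation needs a full table per supported layout,
# selectable by the user to match whatever the host is configured for.
US_QWERTY = {
    "a": (0x04, False), "A": (0x04, True),
    "q": (0x14, False), "Q": (0x14, True),
    "1": (0x1E, False), "!": (0x1E, True),
    "-": (0x2D, False), "_": (0x2D, True),
}

def to_hid_reports(password: str, layout: dict = US_QWERTY) -> list:
    """Translate a password into the key events the dongle should send,
    refusing characters the selected layout cannot produce."""
    reports = []
    for char in password:
        if char not in layout:
            raise ValueError(f"{char!r} cannot be typed on this layout")
        reports.append(layout[char])
    return reports

print(to_hid_reports("Aq1!"))
```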

Updating email addresses, GDPR style

After scrambling to find a bandaid solution for the upcoming domainpocalypse caused by EURid, I set out to make sure that all my accounts everywhere use a more stable domain. Some of you might have noticed, because it was very visible in me submitting .mailmap files to a number of my projects to tie old and new addresses together.

Unfortunately, as I noted in the previous post, not all the services out there allow you to change your email address from their website, and of those, very few allow you to delete the account altogether (I have decided that, in some cases, keeping an account open for a service I stopped using is significantly more annoying than just removing it). But as Daniel reminded me in the comments, the right to rectification (or right to correction) allows me to leverage the GDPR for this process.

I have thus started sending email to the provided Data Protection contact for various sites lacking an email editing feature:

Hello,

I’m writing to request that my personal data is amended, under my right to correction (Regulation (EU) 2016/679 (General Data Protection Regulation), Article 16), by updating my email address on file as [omissis — new email] (replacing the previous [omissis — old email] — which this email is being sent from, and to which you can send a request to confirm identity).

I take the occasion to remind you that you have one month to respond to this request free of charge per Art. 12(3), that according to the UK Information Commissioner’s Office interpretation (https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-of-access/) you must comply to this request however you receive it, and that it applies to the data as it exists at the time you receive this.

The responses to this have been of all sorts: humans amused at the formality of the request, execution of the change as requested, and a couple of pushbacks, which appear to stem from services that not only don’t have a self-service way to change the email address, but also seem to lack the technical means to change it at all.

The first case of this is myGatwick — the Gatwick airport flyer portal. When I contacted the Data Protection Officer to change my email address, the first answer was that, at best, they could close the account for the old email address and open a new one. I pointed out that’s not what I asked them to do and not what the GDPR requires them to do, and they tried to argue that email addresses are not personal data.

The other interesting case is Tile, the beacon startup, which will probably be the topic of a separate blog post, because their response to my GDPR request is a long list of problems.

What this suggests to me is that my first guess (someone used email addresses as primary keys) is not as common as I feared — although that appears to be the problem for myGatwick, given their lack of technical means. Instead, the databases appear to be designed correctly, but the self-service feature of changing the email address is just not implemented.

While I’m not privy to product decisions for the involved services, I can imagine that one of the reasons why it was done that way is that implementing proper access controls, to avoid users locking themselves out, or to limit the risk of account takeover, is too expensive in terms of engineering.

But as my ex-colleague Lea Kissner points out on Twitter, computers would be better at not introducing human errors in the process to begin with.

Of all the requests I sent that were actioned, there were only two cases in which I was asked to verify anything about either the account or the email address. In both cases my resorting to GDPR requests was not because the website didn’t have the feature, but rather because it failed: British Airways and Nectar (UK). Both actioned the request straight from Twitter, and asked security questions (not particularly secure, but still good enough compared to the rest).

Everyone else has, at best, sent an email to the old address to inform me of the change, in reply to my request. This is the extent of the verification most of the DPOs appear to have put on GDPR requests. None of the services is particularly critical: takeaway food, table bookings, good tea. But if it were not me sending these requests, I would probably be having a bad half hour the next time I tried using them.

Among the requests I sent yesterday there was one to delete my account with Detectify — I had used it when it was a free trial, found it not particularly interesting to me, and moved on. While I had expressed on Twitter my intention to close my account, the email I sent was actioned, deleting my account (or at least it’s expected to have been deleted by now), without a confirmation request of any kind, or any verification that I did indeed have access to the account.

Maybe they checked the email headers to figure out that I was really sending from the right email address, instead of just assuming so because it looked that way. I can only imagine that they would have done more due diligence if I were a paying customer, if nothing else to keep getting my money. I just find it interesting that a security-oriented company didn’t realise that it’s much more secure to provide self-service interfaces rather than letting a human decide.

dot-EU Kerfuffle: what’s in an email anyway?

You may remember that last year I complained about what I defined the dot-EU kerfuffle, related to the news that EURid had been instructed to cancel the domain registrations of UK entities after Brexit. I thought the problem was past when they agreed to consider European citizens as eligible holders of dot-EU domains, with an agreement reached last December, and due to enter into effect in… 2022.

Update 2020-12-13: as this is now a hot topic again, let me just point out that the changes, allowing EU citizens to maintain control of dot-eu domains even when residing outside of the Union, have been enacted in 2019. This means that for individuals who are still EU citizens, even living in the UK, the situation is clarified, and my urgency to update my email has, as such, reduced. British citizens and organizations are still affected, though.

You would think that, knowing that a new regulation is due to enter into effect, EURid would put their plan of removing access to those domains for UK residents on hold for the time being, but it’s not so. Indeed, they instead sent a notice that effectively suggests that old and new domains alike will be taken off the zone, by being marked as WITHDRAWN first, and REVOKED second.

This means that on 2021-01-01 (originally 2020-03-30), a lot of previously-assigned domains will be available to scammers, phishers, and identity thieves, unless they are transferred before this coming May!

You can get a more user-focused read of this in this article by The Register, which does good justice to the situation, despite the author seemingly being a leaver, judging by the ending of a previous article linked there. One of the useful parts of that article is knowing that there are over 45 thousand domain names assigned to individuals residing in the UK — and probably a good chunk of those belong to either Europhile Brits, or citizens of other EU countries residing in the UK (like me).

Why should we worry about this, given the amount of other pressing problems that Brexit is likely to cause? Well, there is a certain issue of people being identified by email addresses that contain those domain names. What neither EURid nor The Register appear to have at hand (and I even less so) is a way to figure out how many of those domains are actually used as logins, or receive sensitive communications such as GP contacts from the NHS, or from financial companies.

Because if someone can take over a domain, they can take over the email address, and from there you can very quickly ruin the life of, or at least heavily bother, any person that might be using a dot-EU domain. The risks of scams, identity theft and the like are being ignored once again by EURid to try to make a political move, at a time when nobody gives a damn about what EURid is doing.

As I said in the previous post, I have been using flameeyes[dot]eu as my primary domain for the past ten or eleven years. The blog was moved on its own domain. My primary website is still there but will be moved shortly. My primary email address is changed. You’ll see me using a dot-com email address more often.

I’m now going through the whole set of my accounts to change the email they have on file for me to a new one on a dot-com domain. This is significantly helped by having all of them in 1Password, but that’s not enough — it only tells you which services use the email as the username. It says nothing about (say) the banks that use a customer number, but still have your email on file.

And then there are the bigger problems.

Sometimes the email address is immutable.

You’d be surprised by how many websites have no way at all to change an email address. My best guess is that whoever designed the database schema thought that just using the email address as a primary key was a good idea. This is clearly not the case, and it has never been. I’d be surprised if anyone who got their first email address from an ISP would make that mistake, but in the era of GMail, it seems this is often forgotten.
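For illustration only (table and column names made up), the difference between the two schema designs is small, but it is what makes a rectification request trivial to honour:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Anti-pattern: the email *is* the identity, so every reference to the user
# has to be rewritten if the address ever changes.
db.execute("CREATE TABLE users_bad (email TEXT PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE orders_bad (id INTEGER PRIMARY KEY, email TEXT REFERENCES users_bad(email))")

# Saner: a surrogate key identifies the account; the email is just a unique,
# mutable attribute that can be updated in exactly one place.
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE NOT NULL, name TEXT)")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER REFERENCES users(id))")

db.execute("INSERT INTO users (email, name) VALUES (?, ?)", ("old@example.eu", "Flameeyes"))
db.execute("UPDATE users SET email = ? WHERE email = ?", ("new@example.com", "old@example.eu"))
```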

I now have a tag in 1Password to show me which accounts I can’t change the email address of. Some of them are really minimal services, which you probably wouldn’t be surprised to see just store an email address as the identifier, such as the Fallout 4 Map website. Some appear to have bugs with changing email addresses (British Airways). Some … surprised me entirely: Tarsnap does not appear to have a way to change the email address either.

While for some of these services being unable to receive email is not a particularly bad problem, for most of them it would be. Particularly when it comes to plane tickets. Let alone the risk that any one of those services would store passwords in plain text, and send them back to you if you forgot them. Combine that with people who reuse the same password everywhere, and you can start seeing a problem again.

OAuth2 is hard, let’s identify by email.

There is another problem if you log into services with OAuth2-based authentication providers such as Facebook or (to a lesser extent) Google. Quite a few of those services will create an account for you at first login, and use the email address that they are given by the identity provider. And then they just match the email address the next time you log in.
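A sketch of the difference (the in-memory storage is made up; "iss" and "sub" are the standard OpenID Connect claims): keying the account on the provider’s stable subject identifier survives an email change, keying it on the email address does not.

```python
# Hypothetical in-memory "database": accounts keyed by (issuer, subject).
accounts: dict[tuple[str, str], dict] = {}

def find_or_create_account(claims: dict) -> dict:
    # Looking the account up by claims["email"] instead would create a duplicate
    # account as soon as the user changes their address at the provider.
    subject = (claims["iss"], claims["sub"])    # stable identifiers from the provider
    account = accounts.get(subject)
    if account is None:
        account = accounts[subject] = {"email": claims["email"]}
    else:
        account["email"] = claims["email"]      # pick up address changes transparently
    return account

# Same person, new email address: still the same account.
print(find_or_create_account({"iss": "https://accounts.example", "sub": "12345", "email": "old@example.eu"}))
print(find_or_create_account({"iss": "https://accounts.example", "sub": "12345", "email": "new@example.com"}))
```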

While changing Google’s email address is a bit harder (but not impossible if, like me, you’re using GSuite), changing the address you register on Facebook with is usually easy (exceptions exist). So if you signed up for a service through Facebook, and then changed your Facebook address, you may not be able to sign in again — or you may end up signing up for the service again when you try.

In my case, I changed the domain associated with my Google account, since it’s a GSuite (business) account. That made things even more fun, because even if services may remember that Facebook allows you to change your email address, many might have forgotten that technically Google allows you to do that too. While Android and ChromeOS appear to work fine (which honestly surprised me, sorry colleagues!), Pokémon Go got significantly messed up when I did that — luckily I had Facebook connected to it as well, so a login later, and a disconnect/reconnect of the Google account, was enough for it to work.

Some things work slightly better than others. Pocket, which allows you to sign in with either a Firefox account, a Google account, or an email/password pair, appears to only care about the email address of the Google account. So when I logged in, I ended up with a new account and no access to the old content. The part that works well is that you can delete the new account, and immediately after log in to the old one and replace the primary email address.

End result? I’m going through nearly every one of my nearly 600 accounts, a few at a time, trying to change my email address, and tagging those where I can’t. I’m considering writing a standard template email to send to any support address for those that do not support changing email address. But I doubt they would be fixed in time before Brexit. Just one more absolute mess caused by Cameron, May, and their friends.

Why do we still use Ghostscript?

Late last year, I had a bit of a Twitter discussion about the fact that I can’t think of a good reason why Ghostscript is still a standard part of the Free Software desktop environment. The comments started from a security issue related to file access from within a PostScript program (i.e. a .ps file), and at the time I actually started drafting some of the content that is becoming this post now. I then shelved most of it because I’ve been busy and it was not topical.

Then Tavis had to bring this back to the attention of the public, and so I’m back writing this.

To be able to answer the question I pose in the title, we first have to define what Ghostscript is — and the short answer is, a PostScript renderer. Of course it’s a lot more than just that, but for the most part, that’s what it is. It deals with PostScript programs (or documents, if you prefer), and renders them into different formats. PostScript is rarely, if at all, used in modern desktops — not just because it’s overly complicated, but because it’s just not that useful in a world that has mostly settled on PDF, which is essentially “compiled PostScript”.

Okay not quite. There are plenty of qualifications that go around that whole paragraph, but I think it matches the practicalities of the case fairly well.

PostScript has found a number of interesting niche uses though, a lot of which revolve around printing, because PostScript is the language that older (early?) printers used. I have not seen any modern printer speak PostScript though, at least after my Kyocera FS-1020, and even those that do tend to support alternative “languages” and raster formats. On the other hand, because PostScript was a “lingua franca” for printers, CUPS and other printer-related tooling still use PostScript as an intermediate language.

In a similar fashion, quite a bit of the software that deals with faxes (yes, faxes) tends to make use of Ghostscript itself. I would know, because I wrote one such program, under contract, a long time ago. The reason is frankly pragmatic: on the client side, you want Windows to “print to fax”, and having a virtual PostScript printer is very easy — at that point you want to convert the document into something that can be easily shoved down the fax software’s throat, which ends up being TIFF (because TIFF is, as I understand it, the closest encoding to the physical faxes). And Ghostscript is very good at doing that.

Indeed, I have used (and seen used) Ghostscript in many cases to basically combine a bunch of images into a single document, usually in TIFF or PDF format. It’s very good at doing that, if you know how to use it, or you copy-paste from other people’s implementation.

Often, this is done through the command line too, and the reason for that is to be found in the licenses used by various Ghostscript implementations and versions over time. Indeed, while many people think of Ghostscript as an open source Swiss Army Knife of document processing, it is actually dual-licensed. The Wikipedia page for the project shows eight variants, with at least four different licenses over time. The current options are AGPLv3 or the commercial paid-for license — and I can tell you that a lot of people (including the folks I worked under contract for) don’t really want to pay for that license, preferring instead the “arm’s length” aggregation of calling the binary rather than linking it in. Indeed, I wrote a .NET library to do just that. It’s optimized for (you guessed it right) TIFF files, because it was a component of an Internet fax implementation.
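That “arm’s length” pattern is trivial in any language; here is a minimal sketch in Python (the file names are made up, while the switches are standard, documented Ghostscript options):

```python
import subprocess

def ps_to_fax_tiff(ps_file: str, tiff_file: str) -> None:
    """Render a PostScript print job into a Group 4 fax TIFF by calling the gs
    binary as a separate process, instead of linking the AGPL library in."""
    subprocess.run(
        ["gs", "-dBATCH", "-dNOPAUSE", "-dSAFER",
         "-sDEVICE=tiffg4",            # CCITT Group 4, the classic fax encoding
         "-r204x196",                  # "fine" fax resolution
         f"-sOutputFile={tiff_file}",
         ps_file],
        check=True,
    )

ps_to_fax_tiff("print-job.ps", "outgoing-fax.tif")
```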

So where does this leave us?

Back ten years ago or so, when effectively every Free Software desktop PDF viewer was forking the XPDF source code to adapt it to whatever rendering engine it needed, it took a significant list of vulnerabilities, which had to be fixed time and time again in each fork, for the Poppler project to take off and create One PDF Rendering To Rule Them All. I think we need the same for Ghostscript. With a few differences.

The first difference is that I think we need to take a good look at what Ghostscript, and PostScript, are useful for in today’s desktops. Combining multiple images into a single document should _not_ require processing all the way through PostScript. There’s no reason to! Particularly not when the images are just JPEG files, and PDF can embed them directly. Having a tool that is good at combining multiple images into a PDF, with decent options for page size and alignment, would probably replace many of the usages of Ghostscript that I had in my own tools and scripts over the past few years.
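For example, something like the img2pdf Python package already covers the JPEG case; to the best of my knowledge this is its documented basic usage (the file names are made up):

```python
import img2pdf

# Embed the JPEG streams directly into PDF page objects: no PostScript,
# no re-encoding, no Ghostscript anywhere in the pipeline.
with open("scans.pdf", "wb") as output:
    output.write(img2pdf.convert(["page-1.jpg", "page-2.jpg", "page-3.jpg"]))
```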

And while rendering PostScript for display and rendering it for print are similar enough tasks, I have some doubt the same code would work right for both. PostScript and Ghostscript are often used in networked printing as well, in which case there’s a lot of processing of untrusted input — both for display and printing. Sandboxing – and possibly writing this in a language better suited than C to dealing with untrusted input – would go a long way towards preventing problems there.

But there are a few other interesting topics that I want to point out on this. I can’t think of any good reason for desktops to support PostScript out of the box in 2019. While I can still think of a lot of tools, particularly from the old-timers, that use PostScript as an intermediate format, most people _in the world_ would use PDF nowadays to share documents, not PostScript. It’s kind of like sharing DVI files — which I have done before, but I now wonder why. While both formats might have advantages over PDF, in 2019 they definitely lost the format war. macOS might still support both (I don’t know), but Windows and Android definitely don’t, which makes them pretty useless for sharing knowledge with the world.

What I mean by that is that it’s probably due time that PostScript becomes an optional component of the Free Software Desktop, one that users need to enable explicitly if they ever need it, just to limit the risks of accepting, displaying and thumbnailing full, Turing-complete programs masquerading as documents. Even Microsoft stopped running macros in Office documents by default, when they realized the type of footgun they had become.

Of course talk is cheap, and I should probably try to help directly myself. Unfortunately I don’t have much experience with graphics formats, besides maintaining unpaper, and that is not a particularly good result either: I tried using libav’s image loading, and it turns out it’s actually a mess. So I guess I should either invest my time in learning enough about building libraries for image processing, or poke around to see if someone wrote a good multi-format image processing library in, say, Rust.

Alternatively, if someone starts to work on this and want to have some help with either reviewing the code, or with integrating the final output in places where Ghostscript is used, I’m happy to volunteer my time. I’m fairly sure I can convince my manager to let me do some of that work as a 20% project.

Passwords, password managers, and family life

Somehow, I always end up spending time writing about passwords whenever I even broach the subject on Twitter.

In this case, I’ve been asking around about password managers, as after many years with LastPass I want to reconsider if there is a better alternative, particularly as my needs have changed (or rather, are going to, in the not too distant future).

One of the things I’m looking for is a password manager that can generate diceware/xkcd-style passwords: a set of words in a certain language that are easy to say over (say) the phone, and to type on systems where there is no password manager app. The reason for this is that there are a few places where I need to be able to give the password to someone else who might not otherwise be trusted with the full password list. For instance the WiFi password for my apartment, or my mother’s house.
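
To make the requirement concrete, this is the kind of generation I have in mind — a sketch using only Python’s standard library, not how any particular password manager implements it:

```python
import secrets

def passphrase(wordlist: list[str], words: int = 3, sep: str = "-") -> str:
    """Pick words uniformly at random with a CSPRNG. Entropy is
    words * log2(len(wordlist)) bits: ~38.8 bits for 3 words out of a
    7776-word diceware list, in whatever language the list is in."""
    return sep.join(secrets.choice(wordlist) for _ in range(words))
```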

But it’s a bit more complicated than that. There are a number of situations where an account is not just one user. Or rather, you may want to allow multiple users (people) to access the same account. Take for instance my energy provider’s dashboard. Or the phone provider. Or the online grocery shopping…

All of these things expect a single (billing) account, but they may well be shared with a household rather than a single individual. Some services do have a concept of a shared account, but they are very few, and that makes less and less sense as the world progresses to such an everything-connected level.

I think it might be easy to figure this out from the way I’ve been expressing it just above, but just to make sure not to leave “clues” rather than clear information that can obviously be taken as public knowledge: I got to thinking about this because I have (finally, someone might say) found a soulmate. And while we don’t yet live together, I’m starting to see the rough corners of this. We have not gotten to “What’s the Netflix password, again?” but I did end up changing the password to the account for the Los Angeles transport card, to give her access, after first setting it with LastPass (we were visiting, and I added both of our TAP cards to the same account).

As I made clear earlier, part of this was a (minor) problem with my mother, too. But significantly less so: she never cared to have access to the power provider, the phone company, and so on — just as long as she got a copy of the invoices from time to time (which I solved by having a mailing list, which only the two of us are subscribed to, as the contact address for all the services I use or used for the household in Italy).

Service providers take note: integrating with Google Drive or Dropbox so that invoices get automatically added to a shared folder would be a lovely feature to have. And not just for households: I would love it if it were easier to just have a copy of my invoices automatically added to, and indexed by, Google Drive.

But now, with a partner, it’s different. As the word implies, it’s a partnership, an equal standing. Once we move in together, we’ll share the expenses, and that means sharing access to the accounts. Which means I don’t want to be the only one holding the passwords. So I need a password manager that not only allows me to share the passwords easily, but also allows her to use the passwords easily — which likely translates into being able to read them off the phone, and to type them into a work computer’s incognito window (because she likely won’t be allowed to install the password manager on a work computer).

Which is why I’m looking for a new password manager: LastPass is actually fairly great when it comes to sharing passwords with other accounts, but it’s effectively useless when it comes to “typeable” passwords. Their “Make pronounceable” option is okay for making a password easier to spell out, but I don’t want to have to use an eight-letter password just to be able to type it easily, when I could just as easily use a three-word combination that is significantly stronger.

And while I could just use xkcdpass on my laptop to generate those shared passwords (which is what I did with my mother’s router), that does not really scale (it still keeps me as the gatekeeper), and it does not make this security usable for my SO. And it wouldn’t be fair to keep good password hygiene to myself only.

Similarly, any solution that involves running personal infrastructure (servers, cron, git, whatever) is not an option: not only am I increasingly not relying on it myself (I even gave up on running my own blog’s webapp!), but most of my family is not even slightly interested in figuring out how to do that. And I don’t blame them in the least; they have enough of their own things to care about.

If you have any suggestions for a new password manager, please do let me know. I think I may try 1Password next, if nothing else because I think Troy Hunt’s opinion is worth something, and if he backed 1Password, there has to be a reason.

The dot-EU kerfuffle — or how EURid is messing with their own best supporters

TL;DR summary: be very careful if you use a .eu domain as your point of contact for anything. If you’re thinking of registering a .eu domain to use as your primary domain, just don’t.


I had forecasted a rant when I pointed out that I changed domains with my move to WordPress.

I registered flameeyes.eu nearly ten years ago, partly because flameeyes.com was (at the time) parked by a domain squatter, and partly because I have been a strong supporter of the European Union.

In those ten years I started using the domain not just for my website, but as my primary contact email. It’s listed as my contact address everywhere, and I have all kinds of financial, commercial and personal services attached to that email. It’s effectively impossible for me to ever detangle from it, even if I spent the next four weeks doing nothing but amending registrations — some services just don’t allow you to ever change email address; many require you to contact support and spend time talking to a person to get the email updated on the account.

And now, because I moved to the United Kingdom, which decided to leave the Union, the Commission threatens to prevent me from keeping my domain. It may sound obvious, since EURid says

A website with a .eu or .ею domain name extension tells your customers that you are a legal entity based in the EU, Iceland, Liechtenstein or Norway and are therefore, subject to EU law and other relevant trading standards.

But at the same time it now exposes a terrible collapse of two worlds: technical and political. The idea that any entity in control of a .eu domain is by requirement operating under EU law sounds good on paper… until you come to this corner case where a country leaves the Union — and now either you water down that promise, eroding trust in the domain by not upholding it, or you end up with domain takeovers, eroding trust in the domain on technical merit.

Most of the important details for this are already explained in a seemingly unrelated blog post by Hanno Böck: Abandoned Domain Takeover as a Web Security Risk. If EURid forbids renewal of .eu domains for entities that are no longer considered part of the EU, a whole lot of domains will effectively be “up for grabs”. Some may currently be used as CDN aliases, and be used to load resources on other websites; those would be the worst, as they would allow whoever controls the domains to inject content into other sites that should otherwise be secure.

And it’s even more important for companies that used their .eu domain as their primary point of contact: think of any PO, or invoice, or request for information, that would be sent to a company email address — and now think of a malicious actor getting access to those communications! This is not just a risk for me (and any other European supporter who happens to live in the UK; I’m sure I’m not alone) as a single individual — it’s a possibly unlimited amount of scams that people would be subjected to, as it would be trivial to pass as a company once its domain is taken over!

As you can see from the title, I think this particular move is also going to hit the European supporters the most. Not just the individuals (like me!) who wanted to signal how they feel part of something bigger than their country of birth, but also the number of UK companies that I expect used .eu domains specifically to declare themselves open to European customers — as otherwise, between pricing in Sterling and a .co.uk domain, it would always feel like buying “foreign goods”. Now those companies, that believed in Europe, find themselves in the weakest of positions.

Speaking of individuals, when I read the news I had a double-take, and had to check the rules for .eu domains again. At first I assumed that something was clearly wrong: I’m a European Union citizen, surely I will be able to keep my domain, no matter where I live! Unfortunately, that’s not the case:

In this first step the Registrant must verify whether it meets the General
Eligibility Criteria, whereby it must be:
(i) an undertaking having its registered office, central administration or
principal place of business within the European Union, Norway, Iceland
or Liechtenstein, or
(ii) an organisation established within the European Union, Norway, Iceland
or Liechtenstein without prejudice to the application of national law, or
(iii) a natural person resident within the European Union, Norway, Iceland or
Liechtenstein.

If you are a European Union citizen, but you don’t want your digital life to ever be held hostage by the Commission or your country’s government playing games with it, do not use a .eu domain. Simple as that. EURid does not care about the well-being of their registrants.

If you’re a European company, do think twice about whether you want to risk that a change of government in the country you’re registered in would expose you, your suppliers and your customers to a wild west of overtaken domains.

Effectively, what EURid has signalled with this is that they care so little about the technical hurdles of their customers that I would suggest against anyone relying on a .eu domain at all. Register it as a defence against scammers, but don’t do business on it, as it’s less stable than certain microstate domains, or even the more trendy and modern gTLDs.

I’ll call this an own goal. I still trust the European Union, and the Commission, to have the interests of the many in mind. But the way they tried to tie a legislative domain to the .eu TLD was brittle at best to begin with, and now there’s no way out of this that does not ruin someone’s day, and erode the trust in that very same domain.

It’s also important to note that most of the bigger companies, those that I hear a lot of European politicians complain about, would have no problem with something like this: just create a fully-owned subsidiary somewhere in Europe, say in Slovakia, and have it hold onto the domain. And have it just forward onto a gTLD to do business on, so you don’t even give the impression of counting on that layer of legislative trust.

Given the scary damage that would be caused by losing control over my email address of ten years, I’m honestly considering looking for a similar loophole. The cost of establishing an LLC in another country, firmly within EU boundaries, is not pocket money, but it’s still chump change compared to the amount of damage (financial, reputational, relational, etc.) that losing the domain would cause — which would make it a good investment.

Designing My Password Manager

So while at the (in my opinion excellent) Enigma, Tavis decided to nerd-snipe me into thinking about how I would design a password manager. This is because he knows (and disagrees with) my present choice of LastPass as my password manager of choice, despite the existence of a range of opensource password managers.

The main reason why I have discarded the available password managers is that, in my opinion, they all miss the main point of what a password manager should be: convenient. Password reuse is a problem because it’s more convenient than using dozens, maybe hundreds of different passwords for all the different services. An easy-to-use password manager is even more convenient than reusing passwords.

The first problem I found is that effectively all of the opensource password managers I know of just use a single file blob, and leave the problem of syncing it to you. The best I have seen was one that provided integration with Dropbox, Google Drive and ownCloud to sync the blob around, just as long as you don’t make conflicting changes on different devices. To me, the ability to make independent changes to different services is actually a big requirement. This means I would be using a file format that allows encrypting “per row”, both the keys and the values (because you don’t want to leak which accounts the user registered on, if the file is leaked). I would probably gravitate towards something like the Bigtable format.
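
As a sketch of what I mean by per-row encryption — purely an illustration using the cryptography package’s Fernet, not a concrete design; a real vault would derive the key from the master passphrase with a proper KDF rather than generating it on the fly:

```python
from cryptography.fernet import Fernet

# Each row carries an independently encrypted key and value, so rows can be
# synced and merged one at a time, and a leaked blob doesn't even reveal
# which services have an entry.
master = Fernet(Fernet.generate_key())  # stand-in for a passphrase-derived key

def encrypt_row(service: str, username: str, password: str) -> tuple[bytes, bytes]:
    row_key = master.encrypt(service.encode())
    row_value = master.encrypt(f"{username}\n{password}".encode())
    return row_key, row_value
```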

Another problem, which is present in Chrome SmartLock too, is the lack of support for what LastPass calls “equivalent domains”. For many silly reasons, particularly if you travel a lot or have lived in different countries, you end up collecting a long list of websites using either separate domain names, or at least different TLDs. An example of this is Amazon, which uses a constellation of different domains that all share the same account management (except for Amazon Japan). A sillier example is Yelp and TripAdvisor, which decide to change TLD depending on the IP address you’re coming from, despite being exactly the kind of services you would use outside your usual country.

Admittedly, as Tavis suggested, this should be solved by the services themselves, using a single host for login/identity management. I do not expect this to happen any time soon. My previous proposal for this was defining equivalence as part of a well-known configuration file (together with other improvements to password management). I now have some introspection questions about this, because I wonder if there is a privacy risk in sending requests to other domains to validate the reciprocal equivalence configurations. So I think I’ll leave this up for debate for later.
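
In the meantime, the equivalence could live entirely client-side; something along these lines (the domain groups are just examples, not any real well-known format):

```python
# Illustrative local equivalence table.
EQUIVALENT_DOMAINS = [
    {"amazon.com", "amazon.co.uk", "amazon.de", "amazon.it"},  # not amazon.co.jp
    {"yelp.com", "yelp.co.uk", "yelp.ie"},
]

def canonical_domain(domain: str) -> str:
    """Map a domain to one canonical member of its equivalence class,
    so credentials saved on one TLD are offered on the others."""
    for group in EQUIVALENT_DOMAINS:
        if domain in group:
            return min(group)  # deterministic representative
    return domain
```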

The next design bit is figuring out how the password generator should behave. We already have a number of good password generators of different types, including software implementations of diceware passwords (despite the site repeatedly telling you not to use computer random generators — to add to the inconvenience), and xkcdpass, which generate passwords that are easier to remember or at least to type. I think that a good password manager should allow for more than just the random-bunch-of-characters passwords that LastPass uses.

In particular, I have a few websites for which I use passwords generated by xkcdpass, because I need to actually type in the password, rather than use the password manager’s autofill capabilities. This is the case for Sony and Nintendo accounts, which need to be typed in on consoles, and for Telegram, as I need to type the password on my watch to receive messages there. Unfortunately implementing this is probably going to be a UX nightmare — one of the reasons being the ability to select different wordlists: non-English speakers are likely interested in using their own language for it, or even English speakers who are not afraid of other languages, and may decide to throw off a possible attacker anyway.

Ideally, the password generation settings would be stored on a domain-by-domain basis, so that if a certain website only allows numbers in its passcode, or has a specific character limit, the same settings are used to generate a new password if it’s ever breached. This may sound minor, but to me it would be so much of a time (and frustration) saver that it would easily become a killer feature.
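
Something like this is what I have in mind — a hypothetical per-domain record, with an example policy for a site that only accepts a numeric passcode:

```python
import secrets
import string
from dataclasses import dataclass

@dataclass
class GenerationPolicy:
    """Hypothetical per-domain record: remember the site's constraints so a
    password re-generated after a breach still fits them."""
    length: int = 20
    alphabet: str = string.ascii_letters + string.digits + string.punctuation

    def generate(self) -> str:
        return "".join(secrets.choice(self.alphabet) for _ in range(self.length))

# e.g. a bank that only accepts a 6-digit numeric passcode
POLICIES = {"example-bank.com": GenerationPolicy(length=6, alphabet=string.digits)}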

But all of these ideas come to nothing without good, convenient, and trustworthy client implementations. Right now one of the many weak spots of LastPass is its Chrome extension (and the Firefox one too). A convenient password manager, though, ought to integrate with the browser and, since it’s 2018 after all, with your phone. Unfortunately, here is where any opensource offering can’t really help as much as we would all like: it still relies (hugely) on trust. As far as I can tell, there is no way to make absolutely certain that the code of a Chrome extension on the Web Store, or of an Android app on either the Play Store or F-Droid, corresponds exactly to a certain source distribution.

Don’t get me wrong, this is a real problem right now with closed source extensions too. You need to trust that the extension is not injecting malicious content into your webpages, or exfiltrating data out of your browser session. Earlier this year a widely used Chrome extension was reported as malicious, and it wasn’t removed from Chrome’s repository until it was identified as such. At least I can have a (maybe undeserved) trust in LogMeIn not to intentionally ruin their reputation by pushing actively malicious code to the Store. Would I say the same for a random single developer maintaining their widely used extension?

What this means to me is that building a good replacement for LastPass is not just a technical problem that can be solved by actively syncing with cloud storage services… it’s a problem of social trust, and that requires quite a bit of thought from many actors in the market: browser vendors, device manufacturers, and developers — which I’m not sure is currently happening. So I don’t hold my breath, and keep making compromises. I made mine with my threat model in mind; you should make yours with what concerns you the most.

SMS for two-factors authentication

Having spent a few days now at the 34C3 (I’ll leave comments on that for a discussion over a very warm coffee with a nice croissant on the side), I have heard presenters reference the SS7 hack a few times already as an everyday security threat. You probably are not surprised that I don’t agree with it being an everyday security issue, while still thinking it is a real security issue.

Indeed, while some websites refer to the SS7 hack as the reason not to use SMS for auth, at least you can find more reasonable articles talking about the updated NIST recommendation. Myself, I prefer TOTP (as used in authenticator apps) because I don’t need to be registered on a mobile network, and I can use it to log in while on flights.

I want to share some more thoughts that add to the list of reasons not to use SMS as the second authentication factor. Because while I do not believe the SS7 attack is something that all users should be accounting for in their day-to-day life, it is one more entry in the list of reasons why SMS auth is not a good idea at all, and I don’t want people to think that, since they are unlikely to be attacked by someone leveraging SS7, they are okay using SMS authentication instead.

The obvious first problem is reliability: as I said in the post linked above, SMS-based authentication requires you to have access to the phone with the SIM, and for it to be connected to the network and able to receive the SMS. This is fine if you never travel and you have good reception where you need to use it. But I myself have had multiple situations in which I was unable to receive SMS (on flights as I said already, or while visiting China the first time, with 3 Ireland not giving pay-as-you-go customers access to any roaming partners, or even at my own desk at my office with 3 UK after I moved to London).

This lack of reliability is unfortunate, but not by itself a security issue: it prevents you from accessing the account that the 2FA is set on, and that would sound like it’s doing what it’s designed to do, failing closed. At the same time, this increases the friction of using 2FA, reducing usability, and pushing users, bit by bit, to stop using 2FA altogether. Which is a security problem.

The other problem is the ability to intercept or hijack those messages. As you can guess from what I wrote above, I’m not referring to SS7 hacks or equivalents. It’s all significantly simpler.

The first way to intercept the SMS auth messages is having access to the phone itself. Most phones, iOS and Android alike, are configured to show new text messages on the lock screen. In some cases, only part of the message is visible, and to see the rest you’d have to unlock the phone – assuming the phone is locked at all – but 2FA messages tend to be very short and to the point, showing the code in the preview for ease of access. On Android, such a message can also be swiped away without unlocking the phone. A user who fell victim to this type of interception might have a very hard time noticing it, as nowadays it’s likely the SMS app is not opened very often, and a swiped-off notification would take time to be noticed1.

The hijacking I have in mind is a bit more complicated and (probably) more noticeable. Instead of using the SS7 hack, you can just take over a phone number by leveraging the phone providers. And this can be done in (at least) two ways: you can convince the victim’s phone provider to reprovision the number to a new SIM card within the same operator, or you can port the number to a new operator. The complexity or easiness of these two processes varies a lot between countries; some countries are safer than others.

For instance, the UK system for number portability is the most secure (if not the most user-friendly) I have seen. The first step is to get a Porting Authorisation Code (PAC) for the number you want to port. You do that by calling the current provider. None of the three providers I have had so far in the UK had any way to get this code online, which is a tad safer, as a misplaced password cannot give full access to the line. And while the code could be “intercepted” the same way as I pointed out above for authentication codes, the (new) operator does get in touch with you to remind you when the port will take place, giving you a chance to contact them if it doesn’t sound right. In the case of Vodafone, they also send you an email when you request the PAC, meaning just swiping away the notification is not enough to hide the fact that it was requested in the first place.

In Ireland, a portability request completes in less than an hour, and only requires you to have (brief) access to the line you want to take over, as the new operator will send you a code to confirm the request. Which means the process, while significantly easier for customers, is also extremely insecure. In Italy, I actually went to the store with the SIM for the line I wanted to port, and I don’t remember if they asked for anything but my ID to open the new line. No authentication code is involved at all, so if you can fake enough documents, you can likely take over any line. I do not remember if they notified my old SIM card before the move. I have not tried number portability in France, but it appears you can get the RIO (the equivalent transfer code) from Free’s online system at the very least.

The good thing about all the portability processes I’ve seen up to now is that at least they do not drop a new number on the old SIM card. I was told that this is (or at least was) actually common for US providers, where porting a number out just assigns a new number to the old SIM. In that case, it would probably take a while for a victim to notice their account had been taken over. And that would not be a surprise.

If you’re curious, you can probably try this yourself. Call your phone provider from a number other than your own, and see how many and which security questions they ask to verify that it is actually their customer calling, rather than a random stranger. I think the funniest I’ve had was Three Ireland, which asked me for a number I “recently” sent a text message to, or a recent call made or received — you can imagine that it’s extremely easy to get someone to send you a text message, or to call you, if you’re close enough to them, or even just to have them pick up the phone and then report to the provider that their last call came from the number you used.

And then there is the other interesting point of SMS-based authentication: the codes last longer. A TOTP has a short lifetime by design, as it’s time-based. Add some fuzzing: most of the implementations I’ve seen allow a single code to be valid for twice the time it is displayed on the authenticator, by accepting a code from the past, or from the future, generally at half the expected duration. With a code lasting for 60 seconds, they would accept it 30 seconds before it’s supposed to be used, and 30 seconds after. But text messages can (and do) take much longer than that.
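
For comparison, this is how the TOTP acceptance window typically looks with a library like pyotp (exact intervals and windows vary by implementation, as noted above):

```python
import pyotp

totp = pyotp.TOTP(pyotp.random_base32(), interval=30)

# valid_window=1 also accepts the previous and the next time-step, so a code
# stays usable for roughly three intervals in total; an SMS code with a
# multi-minute expiry gives an attacker a far wider window than this.
code = totp.now()
assert totp.verify(code, valid_window=1)
```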

And this is particularly useful for attackers of systems that do not implement 2FA correctly. Entering a wrong OTP usually does not invalidate a login attempt, at least not on the first mistake, because users can mistype, or they can miss the window to send an OTP in. But sometimes there are egregious errors, such as the one made by N26, where they neither ratelimited nor invalidated the OTP requests, allowing simple bruteforcing of a valid OTP. Since TOTP codes change without requesting a new one, bruteforcing them gives you a particularly short window of viability… SMS codes, on the other hand, open up a much larger window of opportunity.

Oh and remember the Barclays single-factor-authentication? How long do you think it would take to spoof the outgoing number of the SMS that needs to be sent (with the text “Y”) to authorize the transaction, even without having access to the text message that was sent?


  1. This is an argument for using another of your messaging apps to receive text messages, whether it is Facebook Messenger, Hangouts or Signal. Assuming you use any of those on a day-to-day basis, you would then have an easy way to notice if you received messages you have not seen before.

Barclays and the single factor authentication

In my previous post on the topic I barely touched on one of the important reasons why I did not like Barclays at all. The reason is that I still had money in my account with them, and I wanted to make sure that was taken care of before lamenting further about the state of their security. As I have now managed to close my account, I can go on and discuss this further, even though I have already touched upon the major points.

Barclays online banking system relies heavily on what I would define as “single factor authentication”.

Usually, you define authentication factors as things you have or things you know. In the case of Barclays, the only thing they effectively rely upon is access to the debit card. Okay, technically you could say that by itself it’s a two-factor system, as it requires access to the debit card and to its PIN. And since the EMV-CAP protocol they use for this factor executes directly on the chipcard, it is not susceptible to the usual PIN-stripping attacks that most card fraud with chip-and-PIN cards relies on.

But this does not count for much when the PIN of the card they issued me was 7766 — and lamenting about that is why I waited until I closed the account and gave them back the card. It seems like there’s a pattern of banks issuing “easy to remember” 4-digit PINs: XYYX, XXYY, etc. One of my previous (again, cancelled) cards had a PIN terribly easy to remember for a computerist, though not for the average person: 0016.

Side note: I have read someone suggesting to badly scribble a wrong PIN on the back of a card as a theft deterrent. Though I like the idea, I’m afraid the banks won’t like it anyway. It would also take some work to make the scribble easily misread as different digits, so that a thief burns the three attempts needed to block the card.

You access the Barclays online banking account through the Identify method provided by CAP, which means you put the card into the reader, provide the PIN, and get an 8-digit identifier that can be used to log in on the website. Since I’m no expert on how CAP works internally, I will only venture a guess that this is similar to a counter-based OTP, as the card has no access to a real-time clock, and no challenge is provided for this operation.
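
For reference, a counter-based OTP (HOTP, RFC 4226) looks like this with pyotp — purely to illustrate what I’m guessing at; the actual EMV-CAP computation is different and runs on the card’s secure element:

```python
import pyotp

# Counter-based OTP: no clock needed, just a shared secret and a counter
# that advances on each use.
hotp = pyotp.HOTP(pyotp.random_base32())
print(hotp.at(0))  # code for counter value 0
print(hotp.at(1))  # next code, after the counter has advanced
```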

This account access sounds secure, but it’s really not any more secure than a username and password, at least when it comes to dealing with phishing. You may think that producing a façade that shows the full Barclays login and proxies the responses in real time is a lot of work, but phishing tools are known for being flexible, and they don’t really need to reproduce the whole website, just the parts they care about getting data from. The rest can easily be proxied as-is without any further change, of course.

So what can you do once you fool someone into logging in to the bank? Well, you can’t really do much, as most of the actions require further CAP confirmation: wires, new standing orders, and so on and so forth. You can, though, get a lot of information about the victim, including enough proofs of address or identity that you can really mess with their life. It also makes it possible to cancel things like standing orders to pay rent, which would be quite messy to deal with for most people — although most phishing is not done for the purpose of messing with people, and more to get their money.

As I said, for sending money you need to have access to the CAP codes. That includes having access not only to the device itself, but also the card and the PIN. To execute those transactions, Barclays will ask you to sign a transaction by providing the CAP device with the account number and the amount to wire. This is good and it’s pretty hard to tamper with, hopefully (I do not make any guarantee on the implementation of CAP), so even if you’re acting through a proxy-phishing site, your wires are probably safe.

I say probably, because of the way the challenge-response is implemented: only the 8-digit account number is used in the signature. If the phishers are attacking a victim they have studied for long enough, which may be the case when attacking businesses, they could know which account the victim pays manually every month, and set up an account with the same number at a different bank (different sort code). The signature would be valid for both.
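
As a loose illustration of the problem (emphatically not the actual CAP computation): if the MAC only covers the account number, the sort code is simply not bound by the signature, so the same “signature” verifies for either destination bank.

```python
import hashlib
import hmac

KEY = b"card-secret"  # illustrative stand-in for the secret held on the chip

def cap_like_signature(account_number: str) -> str:
    """Toy MAC over the account number only; the sort code never enters it."""
    return hmac.new(KEY, account_number.encode(), hashlib.sha256).hexdigest()

# The victim signs a payment to 12345678 at their supplier's bank...
sig = cap_like_signature("12345678")
# ...and the identical value verifies for 12345678 at the attacker's bank,
# because nothing in the computation distinguishes the two sort codes.
assert sig == cap_like_signature("12345678")
```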

To be fair to Barclays, implementing CAP fully, the way they did here, is actually more secure than what Ulster Bank (and I assume the rest of the RBS Group) does, with an opaque “challenge” token. While that may encode more information, the fact that it’s opaque means there is no way for the user to know whether what they are signing is indeed what they meant to sign.

Now, these mitigations are actually good. They require continuous access to the card on request, and that makes it very hard for phishing to just keep using the site in the background after the user has logged in. But they still rely on effectively a single factor. If someone gets hold of the card and the PIN (and we know at least some people will write the real one on the back of the card), then it’s game over. It’s like the locks on my flat’s door: two independent locks… except they use the same key. Sure, it’s a waste of time to pick both, so it increases the chances a neighbour would walk in on wannabe burglars trying to open the apartment door. But there’s a single key; I can’t just use two separate keychains to make sure a thief would only grab one of the two, and if anyone gets it from me, well, it’s game over.

Of course Barclays knows that this is not enough, so they include a risk engine. If something in a transaction doesn’t fit their profile of your activity, it’s considered risky and they require an additional verification. This verification happens to come in the form of text messages. I will not suggest that the problem with these is GSM-layer attacks, as those are still not (yet) in the hands of the type of criminals aiming at personal bank accounts, but there is at the very least the risk that a thief would get hold of my bag with both my card and my phone in it, so the only “factors” that are still in my head, rather than tied to physical objects, are the (provided) PIN of the card, and the PIN of the phone.

This profile fitting is actually the main reason why I got frustrated with Barclays: since I had just opened the account, most of the transactions were “exceptional”, and that is extremely annoying. This was compounded by the fact that my phone provider didn’t even let me receive SMS at the office, due to lack of coverage (now fixed), and the fact that, at least for wires, the Barclays UI does not warn you to check your phone!

There is also the problem with the way Barclays handles these “exceptional transactions”: debit card transactions are rejected outright. The Verified by Visa screen tells you to check your phone, but the phone will only ask you whether it was your transaction or not, and after you confirm it was, it’ll ask you to “retry in a couple of minutes” — retrying too quickly will lead to the transaction being blocked by the processor directly, with a temporary card lock. The wire transfer one will unblock the execution of the wire, which is good, but it can also push the wire past the cut-off time for non-“Faster Payments” wires.

Update (2017-12-30): since I did not make this very clear, I have added a note about this at the bottom of my new post, about the fact that confirming these transactions only needs you to spoof the sender, since the content and destination of the text message to send are known (it only has to say “Y”, and it’s always to the same service number). So this validation should not really count as a second authentication factor against a skilled attacker.

These are all the reasons for which I abandoned Barclays as fast as I could. Some of those are actually decent mitigation strategies, but the fact that they do not really increase security, while increasing inconvenience, makes me doubt the validity of their concerns and threat models.