We Need More Practical, Business-Oriented Open Source — Case Study: The Pizzeria CRM

I’m an open source developer, because I think that open source makes for safer, better software for the whole community of users. I also think that, by making more software available to a wider audience, we improve the quality, safety and security of every user out there, and as such I will always push for more, and more open, software. This is why I support the Public Money, Public Code campaign by the FSFE for opening up the software developed explicitly for public administrations.

But there is one space that I have found quite lacking when it comes to open source: business-oriented software. The first obvious thing is the lack of good accounting software, as Jonathan has written extensively about, but there is more. When I was consulting as a roaming sysadmin (or, with a more buzzwordy, marketing-friendly term, a Managed Services Provider — MSP), a number of my customers relied heavily on nearly off-the-shelf software to actually run their business. And in at least a couple of cases, they commissioned custom-tailored software from me for that.

In a lot of cases, there isn’t really a good reason not to open-source this software: while it is required to run certain businesses, it is clearly not enough to run them. And yet there are very few examples of such software in the open, and that includes from me: my customers didn’t really like the idea of releasing the software to others, even after I offered a discount on the development price.

I want to go into the details of one example of such custom software, something that, to give it a name, would be a CRM (Customer Relationship Manager), which I built for a pizzeria in Italy. I won’t be opening the source code for it (though I wish I could), and I won’t be showing screenshots or providing the name of the actual place, instead referring to it as Pizza Planet.

This CRM (although the name sounds more professional than what it really was) was custom-designed to suit the work environment of the pizzeria — that is to say, I did whatever they asked, even when it disagreed with my sense of aesthetics and engineering. The basic idea was very simple: when a customer called, they wanted to know who the customer was even before picking up the phone — effectively inspecting the caller ID and connecting it to the easiest database-editing facility I could write, so that they could give each number a name and a freeform text box to write down addresses, notes, and preferences.

The reason they called me to write this is that they had originally bought a hardware PBX (for a single-room pizzeria!) just so that a laptop could connect to it and use the vendor’s Address Book functionality. Except this functionality kept crashing, and after many weeks of back-and-forth with the headquarters in Japan, the integrator could not figure out how to get it to work.

As the pizzeria was wired with ISDN (legacy technology, heh) to be able to take at least two calls at the same time, the solution I came up with was building a simple “industrial” PC with an ISDN line card and Asterisk, getting them a standard SIP phone, and writing the “CRM” so that it would initiate a SIP connection to the same Asterisk server (but never answer it). Once an inbound call arrived, it would look up whether there was an entry for the phone number in a simple storage layer, and display it in very large fonts, to be easily readable from around the kitchen.
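For the curious, the lookup half of this is very little code. What follows is not the code I actually wrote (that one relied on the dummy SIP registration described above); it’s a minimal sketch of the same idea using Asterisk’s AMI interface instead, and the hostname, credentials and database schema are all made up for illustration.

```python
import socket
import sqlite3

# Hypothetical local table: customers(phone, name, notes).
db = sqlite3.connect("customers.db")

def lookup(number):
    row = db.execute("SELECT name, notes FROM customers WHERE phone = ?",
                     (number,)).fetchone()
    return f"{row[0]}: {row[1]}" if row else f"Unknown caller: {number}"

# Asterisk's Manager Interface (AMI) listens on TCP 5038 by default.
with socket.create_connection(("asterisk.example", 5038)) as ami:
    ami.sendall(b"Action: Login\r\nUsername: crm\r\nSecret: hunter2\r\n\r\n")
    buffer = b""
    while True:
        chunk = ami.recv(4096)
        if not chunk:
            break
        buffer += chunk
        # AMI events arrive as blank-line separated blocks of "Key: Value" lines.
        while b"\r\n\r\n" in buffer:
            block, buffer = buffer.split(b"\r\n\r\n", 1)
            event = dict(line.split(": ", 1)
                         for line in block.decode(errors="replace").splitlines()
                         if ": " in line)
            if event.get("Event") == "Newchannel" and event.get("CallerIDNum"):
                print(lookup(event["CallerIDNum"]))  # the real thing rendered this in huge fonts
```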

As things moved and changed, a second pizzeria was opened and it required a similar setup. Except that, as ISDN is legacy technology, the provider was going to charge through the nose for connecting a new line. We decided to set up a VoIP account instead, and rather than running on a PC on premises, Asterisk ran on a server (in close proximity to the VoIP provider). And since at that point the ISDN limit on concurrent calls no longer applied, the scope of the project expanded.

First of all, up to four calls could be queued, “your call is very important to us”-style. We briefly discussed allowing callers to reserve a spot and be called back, but at the time calls to mobile phones were still expensive enough that they wanted to avoid that. Instead the callers would get a simple message telling them to wait in line to contact the pizzeria. The CRM started showing the length of the queue (in a very clunky way), although it never showed the “next call” like the customer wanted (the relationship between the customer and the VoIP provider went south, and we all ended up withdrawing from the engagement).

Another feature we ended up implementing was opening hours: when a call arrived outside the advertised opening hours, an announcement would play (recorded by a friend I paid, who used to act in theatre and thus had good diction).

I’m fairly sure that none of this would actually comply with the new GDPR requirements. At the very least, the customers should be advised that their data (phone number, address) will be saved.

But why am I talking about this in the context of Open Source software? Well, while a lot of the components used in this setup were open source, or even Free Software, it still required a lot of integration to become usable. There’s no “turnkey pizzeria setup” — you can build up the system from components, but you need not just an integrator, you need a full developer (or development team) to make sure all the components fit together.

I honestly wish I had open-sourced more of this. If I were to design this again right now, I would probably make sure that there was a direct, real-time API between Asterisk and a Web-based CRM. It would definitely make it easier to secure the data for GDPR compliance. But there is more than just that: having an actual integrated, isolated system where you can make configuration changes gives the user (customer) the ability to set things up without having to know how the configuration files are structured.

Setting up Asterisk took me a week or two of reading through documentation and books on the topic, plus a significant amount of experimentation with a VoIP number and a battery of testing SIM cards at home. To make the recordings work I had to convert the files to G.729 beforehand, or playback would use a significant amount of CPU.

But these are not unknown needs. There are plenty of restaurants (they don’t have to be pizza places) out there that probably need something like this. And indeed services such as Deliveroo appear to now provide a similar all-in-one solution… which is good for restaurants in cities big enough to sustain Deliveroo, but probably not great for the smaller restaurants in smaller cities, which would not have much of a chance of hiring developers to build such a system themselves.

So, rambling aside, I really wish we had more ready-to-install Open Source solutions for businesses (restaurants, hotels, … — I would like to add banks to that but I know regulatory compliance is hard). I think these would have a very good social impact on all those towns and cities that don’t have the critical mass of tech influence needed to, for instance, come up with their own collection of mobile apps.

If you’re the kind of person who complains that startups only appear to want to solve problems in San Francisco, maybe think of what problems you can solve in and around your town or city.

Some of my thoughts on comments in general

One of the points that is the hardest for me to make when I talk to people about my blog is how important comments are for me. I don’t mean comments in source code as documentation, but comments on the posts themselves.

You may remember that one of the less appealing compromises I made when I moved to Hugo was accepting to host the comments on Disqus. A few people complained when I did that because Disqus is a vendor lock-in. That’s true in more ways than one may imagine.

It’s not just that you are tied into a platform that is difficult to move out of — it’s that, as things stand, there is no way to move out of it at all. Disqus does provide the ability to download a copy of all the comments from your site, but they don’t guarantee that this will be available: if you have too many, they may just refuse to let you download them.

And even if you manage to download the comments, you’ll have a fun time trying to do anything useful with them: Disqus does not let you re-import them, say into a different account, as they explicitly don’t allow that format to be imported. Nor does WordPress: when I moved my blog I had to hack up a script that took the Disqus export, a WXR dump of the blog (which is just a beefed-up RSS feed), and produced a third file, attaching the Disqus comments to the WXR as WordPress would have exported them. This was tricky, but it resolved the problem, and now all the comments are on the WordPress platform, allowing me to move them as needed.
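To give an idea of the shape of that script, here is a rough sketch of the merge step. It’s not the actual code, and the element and namespace names are recalled from memory of the two formats, so treat them as assumptions to verify against real exports.

```python
import itertools
import xml.etree.ElementTree as ET

# Namespaces as I remember them; double-check against actual export files.
DSQ = "{http://disqus.com}"
DSQ_INTERNALS = "{http://disqus.com/disqus-internals}"
WP = "http://wordpress.org/export/1.2/"
ET.register_namespace("wp", WP)

disqus = ET.parse("disqus-export.xml").getroot()
wxr = ET.parse("hugo-export.wxr")
comment_ids = itertools.count(1)

# Index Disqus threads by the URL they were created for.
threads = {t.findtext(f"{DSQ}link"): t.get(f"{DSQ_INTERNALS}id")
           for t in disqus.findall(f"{DSQ}thread")}

for item in wxr.getroot().iter("item"):
    thread_id = threads.get(item.findtext("link"))
    if thread_id is None:
        continue
    for post in disqus.findall(f"{DSQ}post"):
        if post.find(f"{DSQ}thread").get(f"{DSQ_INTERNALS}id") != thread_id:
            continue
        # Re-shape each Disqus post into the wp:comment structure WordPress expects.
        comment = ET.SubElement(item, f"{{{WP}}}comment")
        ET.SubElement(comment, f"{{{WP}}}comment_id").text = str(next(comment_ids))
        ET.SubElement(comment, f"{{{WP}}}comment_author").text = post.findtext(f"{DSQ}author/{DSQ}name")
        ET.SubElement(comment, f"{{{WP}}}comment_date").text = post.findtext(f"{DSQ}createdAt")
        ET.SubElement(comment, f"{{{WP}}}comment_content").text = post.findtext(f"{DSQ}message")
        ET.SubElement(comment, f"{{{WP}}}comment_approved").text = "1"

wxr.write("wordpress-import.wxr", xml_declaration=True, encoding="utf-8")
```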

Many people pointed out that there are at least a couple of open-source replacements for Disqus — but when I looked into them I was seriously afraid they wouldn’t really scale that well for my blog. Even WordPress itself sometimes appears not to know how to deal with a blog of more than 2,400 entries. The WXR file is, by itself, bigger than the maximum accepted by the native WordPress import tool — luckily the Automattic-hosted service has higher limits.

One of the other advantages of having moved away from Disqus is that the comments now render without needing any JavaScript or a third-party service, which makes them searchable by search engines and, most importantly, preserves them in the Internet Archive!

But Disqus is not the only thing that disappoints me. I have a personal dislike for the design, and business model, of Hacker News and Reddit. It may be a bit of a situation of “old man yells at cloud”, but I find that these two websites, much more than Facebook, LinkedIn and other social media, are designed to take the conversation away from the authors.

Let me explain with an example. When I posted about Telegram and IPv6 last year, the post was submitted to Reddit, which I found out because I have a self-stalking recipe on IFTTT that informs me if any link to my sites gets posted there. And people commented on that — some missing the point and some providing useful information.

But if you read my blog post you won’t know about that at all, because the comments are locked into Reddit, and if Reddit were to disappear the day after tomorrow there would be no history of those comments at all. And this is without going into the issue of the “karma” going to the reposter (whom I know in this case), rather than the author — who is actually discouraged in most communities from submitting their own writings!

This applies in the same or similar fashion to other websites, such as Hacker News, Slashdot, and… is Digg still around? I lost track.

I also find that moving the comments off-post makes people nastier: instead of asking questions and being ready to understand and talk things through with the author, they assume the post exists in isolation, and that the author knows nothing of what they are talking about. And I’m sure that at least a good chunk of that is because they don’t expect the author to be reading them — they know full well they are “talking behind their back”.

I have had the pleasure to meet a lot of people on the Internet over time, mostly through comments on my or other blogs. I have learnt new things and been given suggestions, solutions, or simply new ideas of what to poke at. I treasure the comments and the conversation they foster. I hope that we’ll have more rather than fewer of them in the future.

Mobile Web, Internet of Things, and the Geeks

I’m a geek. It’s not just the domain I used to use; it’s a truth at the core of who I am. I’m also a gadgeteer: if there’s a new gadget that may do something I’m interested in, and I can afford it, I’ll have it (sometimes even if I can barely afford it). I love “toys” and novelties, and I don’t mind if they are a bit on the rough side, “some assembly required”.

All of this, though, is sometimes hard to reconcile with the absolute vitriol I see online, among the communities that include geeks, free software activists, privacy activists and so on.

I sometimes still hear a lot of people complaining about websites optimizing for mobile, sometimes to the disadvantage of 32″ 4K HiDPI monitors — despite the fact that the latter are definitely in a minority of use cases, while the former is the new reality of web access. I do understand that sometimes it’s bothersome just how messy some websites become when they decide to focus primarily on mobile, but there are plenty of cases in which a “mobile-first” point of view is just what people are more likely to need, and ignoring this can be actively harmful.

Let me try to build up an example, which may sound a bit contrived but I would expect to be very realistic.

As you know by now, I live in London, and things here are different than in Ireland. In particular, I can no longer just drop by the pharmacy every other week and go “Just refill me on this stuff, please”. Instead I need to order the prescription online, through the portal of the service provider my surgery contracted, and fill in a form. Then I need to note down when the items will be available and go pick them up.

The service provider that my surgery is using did not do a particularly good job on the UI/UX of their product. The website is definitely not mobile-optimised, it does not integrate with anything, and it does not send email reminders for anything, let alone ICS attachments. And when I spoke about that with friends and colleagues, reactions were split between «Why would they spend time on mobile? It’s just fancy stuff» and «Only geeks would care about receiving ICS attachments».

I disagree, because the thing is, I can definitely see myself taking the last pills from the blister while on vacation and remembering I need to order more — but I probably don’t have my computer at hand. Being able to just go on the mobile website (or app) and order them on the fly can easily be a lifesaver, particularly for people who don’t usually travel with their laptop at all.

And similarly, if I were to ask people about ICS attachments themselves they would probably wonder what the heck I am talking about, but ask people whether they’d appreciate their calendar showing when they are meant to pick up their prescription, or when they have an appointment with their GP, and they would probably go “Yes, please!”
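For reference, an ICS attachment is nothing exotic; a minimal sketch of the kind of event I have in mind, built with just the standard library, looks like this (the times and pharmacy name are obviously made up):

```python
from datetime import datetime

def prescription_event(pickup_from: datetime, pickup_to: datetime, pharmacy: str) -> str:
    """Build a minimal iCalendar (ICS) event for a prescription pickup window."""
    fmt = "%Y%m%dT%H%M%S"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example-surgery//prescriptions//EN",
        "BEGIN:VEVENT",
        f"UID:{pickup_from:{fmt}}-prescription@example-surgery.invalid",
        f"DTSTART:{pickup_from:{fmt}}",
        f"DTEND:{pickup_to:{fmt}}",
        f"SUMMARY:Prescription ready for collection at {pharmacy}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

# Attach this string to the confirmation email and most calendar apps pick it up.
print(prescription_event(datetime(2018, 6, 4, 14, 0), datetime(2018, 6, 4, 18, 0),
                         "the pharmacy next door"))
```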

Let me take another example: the Internet of Things. Of course it’s a buzzword nowadays, but it does not come out of nowhere. The concept of home automation (which in Italian has gone by the name “domotica” for well over 20 years) is not new, and it’s not just a matter of being the trend of the year.

While there are indeed a number of “connected things” ideas that make me raise an eyebrow or frown with a “what the heck were they thinking?”, dissing the ideas tout court just because they are, well, “connected things” is, in my opinion, short-sighted.

I don’t remember if it was Samsung, LG, or someone else who first brought to market a fridge with an Internet-connected webcam, so that you can check what you have inside. I heard people complain that it’s just a gimmick and for the lazy — but I could definitely see myself using it. See something on sale at the supermarket that you didn’t put on the list? Do you remember whether you have enough space to put it in the fridge, or whether it would be wasted?

Plenty of the solutions that revolve around the Internet of Things are indeed easy to dismiss as “lazy” – I would love to have a washing machine that could be started while I’m on the bus because I forgot to do so before leaving the apartment – but at the same time, they are very valuable for people who do have real problems with remembering things later on. It does not strictly have to be available from the phone in the middle of London — if my phone could, once I get home, remind me “The dishwasher is done. The washing machine is done. You’re out of milk. You need to take out the trash”, that would make my day.

But instead of saying “Hey folks, we need better, safer products!”, I see lots of geeks just going “That’s the Internet of Shit for you, why would you want your coffee machine connected to the Internet?” — like this was never dreamed of by geeks. Or insisting that, since “Internet of Things” is a marketing term, it is cursed and everything that relates to it is “ungeek”.

From my point of view, a lot of these people are the same ones who now look down on iPhone users, but who were sending email instead of text messages back when you had to use WAP to access anything mobile.

Stop blaming the users. Accept that you may not like or have a need for something but someone else might want it anyway. And if you really want to help, start figuring out how we can make things more secure by default instead of making fun of those that get burnt by the latest vulnerability.

Updates on Silicon Labs CP2110

One month ago I started the yak shave of supporting the Silicon Labs CP2110 with a fully open-source stack that I can even re-use for glucometerutils.

The first step was deciding how to implement this. While the device itself supports quite a wide range of interfaces, including a GPIO one, I decided that since I’m only going to be able to test and use practically the serial interface, I would at least start with just that. So you’ll probably see the first output as a module for pyserial that implements access to CP2110 devices.

The second step was to find an easy way to test this in a more generic way. Thankfully, Martin Holzhauer, who commented on the original post, linked to an adapter by MakerSpot that uses that chip (the link to the product was lost in the migration to WordPress, sigh), which I then ordered and received a number of weeks later, since it had to come from the US and clear customs through Amazon.

All of this was the easy part; the next part was actually implementing enough of the protocol described in the specification that I could send and receive data — and that also made it clear that, despite the protocol being documented, it’s not as obvious as it might sound. For instance, the specification says that reports 0x01 to 0x3F are used to send and receive data, but it does not say why there are so many reports… it turns out they are actually used to specify the length of the buffer: if you send two bytes, you’ll have to use the 0x02 report, for ten bytes 0x0A, and so on, up to the maximum of 63 bytes as 0x3F. This became very clear when I tried sending a long string and the output was impossible to decode.
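To make the report-number trick concrete, here is a rough sketch of how sending data could look from Python with the hidapi bindings; the vendor and product IDs are the ones I believe the CP2110 enumerates with, and the rest is an assumption rather than a finished driver:

```python
import hid  # cython-hidapi bindings

CP2110_VID, CP2110_PID = 0x10C4, 0xEA80  # believed defaults, verify on your device
MAX_CHUNK = 63  # report 0x3F carries the largest payload

def send_serial_data(dev, payload: bytes) -> None:
    """Split the payload into chunks; the report ID doubles as the chunk length."""
    for offset in range(0, len(payload), MAX_CHUNK):
        chunk = payload[offset:offset + MAX_CHUNK]
        dev.write([len(chunk)] + list(chunk))  # report 0x01..0x3F == payload length

dev = hid.device()
dev.open(CP2110_VID, CP2110_PID)
send_serial_data(dev, b"hello over the CP2110 UART")
```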

Speaking of decoding, my original intention was to just loop together the CP2110 device with a CH341 I bought a few years ago, and have them pass data to each other to validate that they work. Somehow this plan failed: I can get data from the CH341 into the CP2110 and it decodes fine (using picocom for the CH341, and Silicon Labs’ own binary for the CP2110), but I can’t seem to get the CH341 to pick up the data sent through the CP2110. I thought it was a bad adapter, but then I connected the output to my Saleae Logic16 and it showed the data fine, so… no idea.

The current status is:

  • I know the CH341 sends out a good signal;
  • I know the CP2110 can receive a good signal from the CH341, with the Silicon Labs software;
  • I know the CP2110 can send a good signal to the Saleae Logic16, both with the Silicon Labs software and my tiny script;
  • I can’t get the CH341 to receive data from the CP2110.

Right now the state is still very much up in the air, and since I’ll be travelling quite a bit without a chance to bring the devices with me, there probably won’t be any news about this for another month or two.

Oh, and before I forget: Rich Felker gave me another interesting idea. CUSE (Character Devices in User Space) is a kernel-supported way to “emulate”, in user space, devices that would usually be implemented in the kernel. And that would be another perfect application for this: if you just need to use a CP2110 as an adapter for something that needs to speak to a serial port, then you can just have a userspace daemon that implements CUSE and provides a ttyUSB-compatible device, without having to short-circuit the HID and USB-serial subsystems.

A review of the Curve debit card

Somehow, I end up spending a significant amount of my time thinking about, testing, and playing with financial services, both old-school banks and fintech startups.

One of the most recent ones I have been playing with is Curve. The premise of the service is to allow you to use a single card for all transactions, with the ability to then charge whichever underlying card is convenient. This was a curious enough idea, so I asked the friend who was telling me about it to give me his referral code to sign up. If you want to sign up, my code is BG2G3.

Signing up and getting the card is quite easy, even though they have (or had, when I signed up) a “waitlist” — so after you sign up it takes a few days before you can actually order the card and get it in your hands. They suggest you get other people to sign up as well to shorten your time on the waitlist, but that didn’t seem to be a requirement for me. The card arrived, to my recollection, no more than two days after they said they shipped it, probably because it was coming from London itself, and that’s all it takes to receive it.

So how does this all work? You need to connect your existing cards to Curve, and verify them to be able to charge them — verification can be either through a 3Dsecure/Verified by Visa login, or through the usual charge-code-reverse dance that Google, PayPal and the others all use. Once you connect the cards, and select the card to be charged, you can start using the Curve card to pay, and it acts as a “proxy” for the other card, charging it for the same amount, with some caveats.

The main advantage that my friend suggested for this setup is that if you have a corporate card (I do), you can add that one to Curve too, and rely on it to avoid the payback process at work if you make a mistake paying for something. As this has happened to me a few times before, mostly from selecting the wrong payment profile in apps such as Uber or Hailo, or going over the daily allowance for meals as I changed plans, it sounded interesting enough. This can work either by making sure to select the corporate card before making the purchase (for instance by defaulting to it during a work trip), or by “turning back time” on an already charged transaction. Cool.

I had also hoped that the card could be attached to Google Pay, but that’s not the case. Nor do they implement their own NFC payment application, which is a bit disappointing.

Besides the “turn back time” feature, the app also has some additional features, such as integration with the accounting software Xero, including the ability to attach a receipt image to an expense (if this were available for Concur, I’d be a real believer, but until then it’s not really that useful to me), and the ability to receive email “receipts” (more like credit card slips) for purchases made with a given card (not sure why that’s not just a global setting, but meh).

Not much else is available in the app to make it particularly useful or interesting to me, honestly. There’s some category system for expenses, very similar to the one for Revolut, but that’s about it.

On the more practical side of things, Curve does not apply any surcharge as long as the transaction is in the same currency as the card, and that includes the transactions on which you turned back time. This is handy if you don’t know which currency you’ll be charged in, though that does not really happen often.

What I found particularly useful is that the card itself looks like a proper “British” card — with my apartment as the verified address on it. But then I can charge one of my cards in Euro, US Dollars, or Revolut itself… although I could just charge Revolut directly in those cases. The main difference between the two approaches is that I can put the Euro purchases onto a Euro credit card, instead of a debit one… except that the only Euro credit card I have left also has my apartment as its verifiable billing address, so… I’d say I’m not the target audience for this feature.

For “foreign transactions” (that is, where the charged currency and the card currency disagree), Curve charges a 1% foreign transaction fee. This is pointless for me thanks to Revolut, but it still is convenient if you only have accounts with traditional banks, particularly in the UK where most banks apply a 3% foreign transaction fee instead.

In addition to the free card, they also offer a £50 (a year, I guess — it’s not clear!) “black” card that offers 1% cashback at selected retailers. You can actually get 90 days of cashback at three retailers of your choice on the free card as well, but to be honest, American Express is much more widely accepted in chains, and its rewards are better. I ended up choosing to do the cashback with TfL, Costa (because they don’t take Amex contactless anyway), and Sainsbury’s, just to pick something I use.

In all of this, I have to say I fail to see where the business makes money. Okay, financial services are not my area of expertise, but if you’re just proxying payments, without even taking deposits (the way Revolut does), and not charging additional foreign transaction fees, and even giving cashback… where’s the profit?

I guess there is some money to be made by profiling users and selling the data to advertisers — but between GDPR and the fact that most consumers don’t like the idea of being made into products with no “kick back”, I doubt there is much of a future in that. I guess if it were me I would be charging 1% on the “turn back time” feature, but that might make the whole point of the service moot. I don’t know.

At the end of the day, I’m also not sure how useful this card is going to be for me, on the day to day. The ability to have a single entry in those systems that are used “promiscuously” for business and personal usage sounds good, but even in that case, it means ignoring the advantages of having a credit card, and particularly a rewards card like my Amex. So, yeah, don’t really see much use for it myself.

It also complicates things when it comes to risk engines for fraud protection: your actual bank will see all the transactions as coming from a single vendor, with the minimum amount of information attached to them. This will likely defeat all the fraud checks by the bank, leaving you to rely on Curve’s implementation of fraud checks — and I have no idea how those work, since I have not yet managed to trip them.

Also, as far as I can tell, Curve (like Revolut) does not implement 3DSecure (the “second factor” authentication used by a number of merchants to validate e-commerce transactions), making it less secure than any of the other cards I have — a cloned/stolen card can only be disabled after the fact, and replaced. Revolut at least allows me to separate the physical card from my e-commerce transactions, which is neat (and it now even supports one-time credit card numbers).

There is also another thing worth considering, which shows the different points of view (and threat models) of the two services: Curve becomes your single card (single point of failure, too) for all your activities: it makes your life easy by making sure you only ever need to use one card, even if you have multiple bank accounts in multiple countries, and you can switch between them at the tap of a finger. Revolut, on the other hand, allows you to give each merchant its own credit card number (Premium accounts get unlimited virtual cards) — or even have “burner” cards that change numbers after use.

All in all, I guess it depends on what you want to achieve. Between the two, I think I’ll mostly stick to Revolut, and my usage of Curve will taper off once the 90-day cashback offer is over — although it’s still nice to have for the few websites that gave me trouble with Revolut, as long as I’m charging my Revolut account anyway, and the charge is in Sterling.

If you do want to sign up, feel free to use BG2G3 as the referral code, it would give a £5 credit for both you and me, under circumstances that are not quite clear to me, but who knows.

The dot-EU kerfuffle — or how EURid is messing with their own best supporters

TL;DR summary: be very careful if you use a .eu domain as your point of contact for anything. If you’re thinking of registering a .eu domain to use as your primary domain, just don’t.


I forecasted a rant when I pointed out that I had changed domains with my move to WordPress.

I registered flameeyes.eu nearly ten years ago, partly because flameeyes.com was (at the time) parked by a domain squatter, and partly because I have been a strong supporter of the European Union.

In those ten years I started using the domain not just for my website, but as my primary contact email. It’s listed as my contact address everywhere, and I have all kinds of financial, commercial and personal services attached to that email. It’s effectively impossible for me to ever untangle myself from it, even if I spent the next four weeks doing nothing but amending registrations — some services just don’t allow you to ever change email address; many require you to contact support and spend time talking to a person to get the email updated on the account.

And now, because I moved to the United Kingdom, which decided to leave the Union, the Commission threatens to prevent me from keeping my domain. It may sound obvious, since EURid says:

A website with a .eu or .ею domain name extension tells your customers that you are a legal entity based in the EU, Iceland, Liechtenstein or Norway and are therefore, subject to EU law and other relevant trading standards.

But at the same time it now produces a terrible collision of two worlds: the technical and the political. The idea that any entity in control of a .eu domain is, by requirement, operating under EU law sounds good on paper… until you hit this corner case where a country leaves the Union — and now either you water down that promise, eroding trust in the domain by not upholding it, or you end up with domain takeovers, eroding trust in the domain on technical merit.

Most of the important details of this are already explained in a seemingly unrelated blog post by Hanno Böck: Abandoned Domain Takeover as a Web Security Risk. If EURid forbids renewal of .eu domains for entities that are no longer considered part of the EU, a whole lot of domains will effectively be “up for grabs”. Some may currently be used as CDN aliases, and be used to load resources on other websites; those would be the worst cases, as they would allow whoever takes control of the domains to inject content into sites that should otherwise be secure.

But it is even more important for companies that used their .eu domain as their primary point of contact: think of any PO, or invoice, or request for information, that would be sent to a company email address — and now think of a malicious actor getting access to those communications! This is not just a risk for me (and any other European supporter who happens to live in the UK — I’m sure I’m not alone) as a single individual; it’s a possibly unlimited amount of scams that people would be subjected to, as it would be trivial to pass for a company once their domain is taken over!

As you can see from the title, I think this particular move is also going to hit European supporters the most. Not just the individuals (like me!) who wanted to signal that they feel part of something bigger than their country of birth, but also the UK companies that I expect used a .eu domain specifically to declare themselves open to European customers — as otherwise, between pricing in Sterling and a .co.uk domain, it would always feel like buying “foreign goods”. Now those companies, which believed in Europe, find themselves in the weakest of positions.

Speaking of individuals, when I read the news I did a double-take, and had to check the rules for .eu domains again. At first I assumed that something was clearly wrong: I’m a European Union citizen, surely I will be able to keep my domain no matter where I live! Unfortunately, that’s not the case:

In this first step the Registrant must verify whether it meets the General
Eligibility Criteria, whereby it must be:
(i) an undertaking having its registered office, central administration or
principal place of business within the European Union, Norway, Iceland
or Liechtenstein, or
(ii) an organisation established within the European Union, Norway, Iceland
or Liechtenstein without prejudice to the application of national law, or
(iii) a natural person resident within the European Union, Norway, Iceland or
Liechtenstein.

If you are a European Union citizen, but you don’t want your digital life to ever be held hostage by the Commission or your country’s government playing games with it, do not use a .eu domain. Simple as that. EURid does not care about the well-being of their registrants.

If you’re a European company, do think twice about whether you want to risk that a change of government in the country you’re registered in would leave you, your suppliers, and your customers open to a wild west of taken-over domains.

Effectively, what EURid has signalled with this is that they care so little about the technical hurdles of their customers that I would advise against anyone at all ever relying on a .eu domain. Register it as a defence against scammers, but don’t do business on it, as it’s less stable than certain microstate domains, or even the more trendy and modern gTLDs.

I’ll call this an own goal. I still trust the European Union, and the Commission, to have the interests of the many in mind. But the way they tried to tie a legislative requirement to the .eu TLD was brittle at best to begin with, and now there’s no way out of it that does not ruin someone’s day and erode the trust in that very same domain.

It’s also important to note that most of the bigger companies, those that I hear a lot of European politicians complain about, would have no problem with something like this: just create a fully-owned subsidiary somewhere in Europe, say Slovakia, and have it hold onto the domain. And have it just forward to a gTLD to do business on, so you don’t even give the impression of counting on that layer of legislative trust.

Given the scary damage that would be caused by losing control over my email address of ten years, I’m honestly considering looking for a similar loophole. The cost of establishing an LLC in another country, firmly within EU boundaries, is not pocket money, but it’s still chump change compared to the amount of damage (financial, reputational, relational, etc.) that losing the domain would cause, so it would be a good investment.

WordPress, really?

If you’re reading this blog post, particularly directly on my website, you probably noticed that it’s running on WordPress, and that it’s on a new domain, no longer referencing my pride in Europe, after ten years of using the old one. Wow, that’s a long time!

I had two reasons for the domain change: the first is that I didn’t want to keep the full chain of redirects from extremely old links onto whichever new blogging platform I selected. The second is that it made it significantly easier to set up a WordPress.com copy of the blog while I tweaked it, rather than messing with the domain all at once. The second reason will come with a separate rant very soon, as it’s related to the worrying statement from the European Commission regarding the usage of dot-EU domains in the future. But as I said, that’s a separate rant.

I have had a few people surprised when I was talking over Twitter about the issues I faced on the migration. I want to give some more context on why I went this way.

As you remember, last year I complained about Hugo – to the point that a lot of the referrers to this blog still come from the Hacker News thread about that – and I started looking for alternatives. And when I looked at WordPress I found that setting it up properly would take me forever, so I kept my mouth shut and doubled down on Hugo.

Except that, because of the way it was set up, it meant not having an easy way to write, or correct, blog posts from a computer that is not my normal Linux laptop with the SSH token and everything else. Which was too much of a pain to keep working with. While Hector and others suggested flows that involve Git-based web editors, it all felt too Rube Goldberg to me… and since moving to London my time is significantly more limited than before, so I could either spend time setting everything up, or work on writing more content, which can hopefully be more useful.

I ended up deciding to pay for the personal tier of WordPress.com, since I don’t care about monetising this content, and even the few affiliate links I’ve been using with Amazon are not really that useful at the end of the day, so I gave up on setting up OneLink and the like here. It also turned out that Amazon’s image-and-text links (which use JavaScript and iframes) are not supported by WordPress.com even on the higher tiers, so those were deleted too.

Nobody seems to have published an easy migration guide from Hugo to WordPress, as most of my search queries produced results for the other way around. I will spend some time later trying to refine the janky template I used and possibly release it. I also want to release the tool I wrote to “annotate” the generated WXR file with the Disqus archive… oh yes, the new blog has all the comments of the old one, and does not rely on Disqus, as I promised.

On the other hand, there are a few things that did get lost in the transition: while the JetPack plugin gives you the ability to write posts in Markdown (otherwise I wouldn’t have even considered WordPress), it doesn’t seem like the importer knows how to import Markdown content at all. So all the old posts have been pre-rendered — a shame, but honestly it’s not very often that I need to go back and touch old posts. Particularly now that I have merged the content from all my older blogs, first into Hugo, and now into this one massive blog.

Hopefully expect more posts from me very soon now, and not just rants (although probably just mostly rants).

And as a closing aside, if you’re curious about the picture in the header, I have once again used one of my own. This one was taken at the maat in Lisbon. The white balance on this shot was totally off, but I liked the result. And if you’re visiting Lisbon and you’re an electronics or industrial geek you definitely have to visit the maat!

Reverse Engineering and Serial Adapter Protocols

In the comments to my latest post on the Silicon Labs CP2110, the first comment got me more than a bit upset, because it was effectively trying to mansplain to me how a serial adapter (or more properly a USB-to-UART adapter) works. Then I realised there’s one thing I can do better than complain, and that is providing even more information on this for the next person who might need it. Because I wish I knew half of what I know now back when I tried to write the driver for the CH341.

So first of all, what are we talking about? UART is a very wide definition for any interface that implements serial communication between a host and a device. The words “serial port” probably bring different ideas to mind depending on a person’s background, whether it is mice and modems connected to PCs, or servers’ serial terminals, or programming interfaces for microcontrollers. For the most part, people in the “consumer world” think of serial as RS-232, while people who have experience with complex automation systems, whether home, industrial, or vehicle automation, have RS-485 as their main reference. None of that actually matters here, since these standards mostly deal with electrical and mechanical characteristics.

As physical serial ports stopped appearing on computers many years ago, most users have moved to USB adapters. These adapters are all different from each other, and that’s why there are around 40KSLOC of serial adapter drivers in the Linux kernel (according to David’s SLOCCount). And that’s without counting the remaining 1.5KSLOC implementing CDC ACM, which is the supposedly-standard approach to serial adapters.

Usually the adapters are placed either directly on the “gadget” that needs to be connected, exposing a USB connector, or on a cable used to connect to it, in which case the device usually has a TRS or similar connector. The TRS-based serial cables appear to have become more and more popular thanks to osmocom, as they are relatively inexpensive to build, both as cables and as connectors onto custom boards.

Serial interface endpoints in operating systems (/dev/tty{S,USB,ACM}* on Linux, COM* on Windows, and so on) do not only transfer data between host and device; they also provide configuration of parameters such as transmission rate and “symbol shape” — you may or may not have heard references to something like “9600n8”, which is a common way to express the transmission protocol of a serial interface: 9600 symbols per second (“baud rate”), no parity, 8 bits per symbol. You can call these “out of band” parameters, as they are transmitted to the UART interface, but not to the device itself, and they are the crux of the matter when interacting with these USB-to-UART adapters.
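As a concrete reference, this is what “9600n8” translates to when opening a port with pyserial; a minimal sketch, assuming an adapter that already has a kernel driver and shows up as /dev/ttyUSB0:

```python
import serial

port = serial.Serial(
    "/dev/ttyUSB0",              # device node created by the kernel driver
    baudrate=9600,               # 9600 symbols per second
    bytesize=serial.EIGHTBITS,   # 8 bits per symbol
    parity=serial.PARITY_NONE,   # the "n" in 9600n8
    stopbits=serial.STOPBITS_ONE,
    timeout=1,
)
# The settings above are the out-of-band parameters: they are consumed by the
# adapter, while only the payload bytes below ever reach the device.
port.write(b"\x04")
reply = port.read(64)
```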

I already wrote notes about USB sniffing, so I won’t go too much into detail there, but most of the time when you’re trying to figure out what the control software sends to a device, you start by taking a USB trace, which gives you a list of USB Request Blocks (effectively, transmission packets), and you get to figure out what’s going on there.

For those devices that use USB-to-UART adapters and actually use the OS-provided serial interface (that is, COM* under Windows, where most of the control software has to run), you could use specialised software to intercept just the communication on that interface… but I don’t know of any modern software doing that, while there are at least a few well-defined interfaces to intercept USB communication. And that would not work for software that accesses the USB adapter directly from userspace, which is always the case for the Silicon Labs CP2110, but is also the case for some of the FTDI devices.

To be fair, for those devices that use TRS, I have actually considered just intercepting the serial protocol using the Saleae Logic Pro, but besides being overkill, only a tiny fraction of devices can be intercepted that way — the more modern ones just include the USB-to-UART chip straight on the device, which is also the case for the meter using the CP2110 I referenced earlier.

Within the request blocks you’ll have not just the serial communication, but also all the related out-of-band information, which is usually terminated at the adapter/controller rather than being forwarded on to the device. The amount of information changes widely between adapters. Out of those I have had direct experience with, I found one (TI3420) that requires a full firmware upload before it starts working, which means recording everything from the moment you plug in the device produces a lot more noise than you would expect. But most of those I dealt with had very simple interfaces, using Control transfers for out-of-band configuration, and Bulk or Interrupt1 transfers for transmitting the actual serial data.

With these simpler interfaces, my “analysis” scripts (if you allow me the term, I don’t think they are that complicated) can produce a “chatter” file quite easily by ignoring the whole out of band configuration. Then I can analyse those chatter files to figure out the device’s actual protocol, and for the most part it’s a matter of trying between one and five combinations of transmission protocol to figure out the right one to speak to the device — in glucometerutils I have two drivers using 9600n8 and two drivers using 38400n8. In some cases, such as the TI3420 one, I actually had to figure out the configuration packet (thanks to the Linux kernel driver and the datasheet) to figure out that it was using 19200n8 instead.

But again, for those, the “decoding” is just a matter of filtering away part of the transmission to keep the useful parts. For others it’s not as easy.

0029 <<<< 00000000: 30 12                                             0.
0031 <<<< 00000000: 05 00                                             ..
0033 <<<< 00000000: 2A 03                                             *.
0035 <<<< 00000000: 42 00                                             B.
0037 <<<< 00000000: 61 00                                             a.
0039 <<<< 00000000: 79 00                                             y.
0041 <<<< 00000000: 65 00                                             e.
0043 <<<< 00000000: 72 00                                             r.

This is an excerpt from the chatter file of a session with my Contour glucometer. What happens here is that instead of buffering the transmission and sending a single request block with the whole string, the adapter (FTDI FT232RL) sends short bursts, probably to reduce latency and keep more accurate serial timing (which is important for devices that need accurate timing, for instance some in-chip programming interfaces). This would also be easy to recompose, except it also comes with

0927 <<<< 00000000: 01 60                                             .`
0929 <<<< 00000000: 01 60                                             .`
0931 <<<< 00000000: 01 60                                             .`

which I’m somewhat sceptical actually comes from the device itself. I have not paid enough attention yet to figure out from the kernel driver whether this data is marked as coming from the device, or whether it is some kind of keepalive or synchronisation primitive of the adapter.

In the case of the CP2110, the first session I captured starts with:

0003 <<<< 00000000: 46 0A 02                                          F..
0004 >>>> 00000000: 41 01                                             A.
0006 >>>> 00000000: 50 00 00 4B 00 00 00 03  00                       P..K.....
0008 >>>> 00000000: 01 51                                             .Q
0010 >>>> 00000000: 01 22                                             ."
0012 >>>> 00000000: 01 00                                             ..
0014 >>>> 00000000: 01 00                                             ..
0016 >>>> 00000000: 01 00                                             ..
0018 >>>> 00000000: 01 00                                             ..

and I can definitely tell you that the first three URBs are not sent to the device at all. That’s because HID (the higher-level protocol that the CP2110 uses on top of USB) uses the first byte of the block to identify the “report” it sends or receives. Checking these against AN434 gives me a hint of what’s going on:

  • report 0x46 is “Get Version Information” — the CP2110 always returns 0x0A as the first byte, followed by a device version, which is unspecified; probably only used to confirm that the device is right, and possibly for debugging purposes;
  • report 0x41 is “Get/Set UART Enabled” — 0x01 just means “turn on the UART”;
  • report 0x50 is “Get/Set UART Config” — and this is a bit more complex to parse: the first four bytes (0x00004b00) define the baud rate, which is 19200 symbols per second; then follows one byte for parity (0x00, no parity), one for flow control (0x00, no flow control), one for the number of data bits (0x03, 8-bit per symbol), and finally one for the stop bit (0x00, short stop bit); that’s a long way to say that this is configured as 19200n8.
  • report 0x01 is the actual data transfer, which means the transmission to the device starts with 0x51 0x22 0x00 0x00 0x00 0x00.

This means that I need a smarter analysis script that understands this protocol (which may be as simple as just ignoring anything that does not use report 0x01) to figure out what the control software is sending.
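A hedged sketch of what that smarter analysis script could look like, assuming the chatter has already been parsed into (direction, bytes) tuples — the structure is my own invention for illustration, not anything from the Silicon Labs tooling:

```python
def analyse(chatter):
    """chatter: list of (direction, block) tuples, where block includes the report byte."""
    data_to_device = bytearray()
    for direction, block in chatter:
        report = block[0]
        if report == 0x50 and len(block) >= 9:
            # Get/Set UART Config: 4 bytes of baud rate, then parity, flow,
            # data bits and stop bit, as described above.
            baud = int.from_bytes(block[1:5], "big")   # 0x00004B00 -> 19200
            parity, flow, bits, stop = block[5:9]
            print(f"UART config: {baud} baud, parity={parity}, "
                  f"flow={flow}, data bits code={bits}, stop={stop}")
        elif 0x01 <= report <= 0x3F and direction == ">>>>":
            # Data reports: the report ID is also the payload length.
            data_to_device += block[1:1 + report]
    return bytes(data_to_device)

# e.g. analyse([(">>>>", bytes([0x01, 0x51])), (">>>>", bytes([0x01, 0x22]))])
# returns b"\x51\x22", matching the start of the session above.
```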

And at the same time, it needs code that knows how to “talk serial” to this device. Usually the out-of-band configuration is done by a kernel driver: you ioctl() the serial device to the transmission protocol you need, and the driver sends the right request block to the USB endpoint. But in the case of the CP2110, there’s no kernel driver implementing this, at least per Silicon Labs’ design: since HID devices are usually exposed to userland, and in particular to non-privileged applications, sending and receiving the reports can be done directly from the apps. So indeed there is no COM* device exposed on Windows, even with the drivers installed.

Could someone (me?) write a Linux kernel driver that exposes the CP2110 as a serial, rather than HID, device? Sure. It would require fiddling around with the HID subsystem a bit to have it ignore the original device, and that means it would probably break any application built with Silicon Labs’ own development kit, unless someone has a suggestion on how to have both interfaces available at the same time; on the other hand it would allow accessing those devices without special userland code. But I think I’ll stick with the idea of providing a Free and Open Source implementation of the protocol, for Python. And maybe add support for it to pyserial to make it easier for me to use it.


  1. All these terms make more sense if you have at least a bit of knowledge of how USB works behind the scenes, but I don’t want to delve too much into that.

Yak Shaving: Silicon Labs CP2110 and Linux

One of my favourite pastimes in the past few years has been reverse engineering glucometers for the sake of writing a utility package to export their data. Sometimes, in the quest of just getting data out of a meter, I end up embarking on yak shaves that are particularly bothersome, as they are useful only to me and no one else.

One of these yak shaves might be more useful to others, but that remains to be seen. I got my hands on a new meter, which I will review later on. This meter has Windows software to download the readings, so it’s a good target for reverse engineering. What surprised me, though, was that when I first connected the device to my Linux laptop, it came up as an HID device, described as a “USB HID to UART adapter”: the device uses a CP2110 adapter chip by Silicon Labs, and it’s the first time I have seen this particular chip (or even class of chip) in my life.

Effectively, this device piggybacks on the HID interface, which allows vendor-specified protocols to be implemented in user space without needing in-kernel drivers. I’m not sure whether I should be impressed by the cleverness or disgusted by the workaround. In either case, it means that you end up with a stacked protocol design: the glucometer protocol itself is serial-based, implemented on top of a serial-like software interface, which converts it to the CP2110 protocol, which is encapsulated into HID packets, which are then sent over USB…

The good thing is that, as the datasheet reports, the protocol is available: “Open access to interface specification”. And indeed, on the download page for the device, there’s a big archive of just-about-everything, including a number of precompiled binary libraries and a bunch of documents, among which figures AN434, which describes the full interface of the device. Source code is also available, but having spot-checked it, it appears to have no license specification, and as such is to be considered proprietary, and possibly virulent.

So now I’m warming up to the idea of doing a bit more yak shaving and, for once, trying not to just help myself. I need to understand this protocol for two purposes: one is obviously having the ability to communicate with the meter that uses that chip; the other is being able to understand what the software is telling the device and vice versa.

This means I need to have generators for the host side, but parsers for both sides. Luckily, construct should make that part relatively painless, and make it very easy to write (if not maintain, given the amount of API breakage) such a parser/generator library. And of course this has to be in Python, because that’s the language my utility is written in.
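As a first, hedged stab at what those construct definitions might look like, here is the Get/Set UART Config report described in AN434 modelled as a Struct; the field names are my own, not Silicon Labs’:

```python
from construct import Byte, Int32ub, Struct

UART_CONFIG = Struct(
    "baudrate" / Int32ub,      # 0x00004B00 == 19200 symbols per second
    "parity" / Byte,           # 0x00 == no parity
    "flow_control" / Byte,     # 0x00 == no flow control
    "data_bits" / Byte,        # 0x03 == 8 bits per symbol
    "stop_bits" / Byte,        # 0x00 == short stop bit
)

# The same definition works as a parser (for captured traffic) and as a
# generator (for the host side of a driver):
config = UART_CONFIG.parse(bytes.fromhex("00004b0000000300"))
packet = UART_CONFIG.build(dict(baudrate=19200, parity=0,
                                flow_control=0, data_bits=3, stop_bits=0))
```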

The other thing that I realized as I was toying with the idea of writing this is that, done right, it can be used together with facedancer, to implement the gadget side purely in Python. Which sounds like a fun project for those of us into that kind of thing.

But since this time it’s going to be something more widely useful, and not restricted to my glucometer work, I’m now looking to release it using a different process, as that would allow me to respond to issues and code reviews from my office as well as during the (relatively little) spare time I have at home. So expect this to take quite a bit longer to be released.

At the end of the day, what I hope to have is an Apache-2-licensed Python library that can parse both host-to-controller and controller-to-host packets, and also implement the client side well enough (based on the hidapi library, likely) that I can just import the module and use it for a new driver. Bonus points if I can use this to implement a test-fake framework for the glucometer tests.

In all of this, I want to make sure to thank Silicon Labs for releasing the specification of the protocol. It’s not often that you can just google the device name and find the relevant protocol documentation, and even when you can, it’s hard to figure out whether it’s enough to implement a driver. The fact that this is possible pleasantly surprised me. On the other hand, I wish they had released their code with a license attached, possibly a widely-usable one such as MIT or Apache 2, to allow users to use the code directly. But I can see why that wouldn’t be particularly high on their list of requirements.

Let’s just hope this time around I can do something for even more people.

Fantasyland: in the world of IPv6 only networks

It seems to be the time of the year when geeks think that IPv6 is perfect, ready to be used, and the best thing after sliced bread (or canned energy drinks). Over on Twitter, someone pointed out to me that FontAwesome (which is used by the Hugo theme I’m using) is not accessible over an IPv6-only network, and as such the design of the site is broken. I’ll leave aside my comments on FontAwesome because they are not relevant to the rant at hand.

You may remember that I called IPv6-only networks unrealistic two years ago, and called IPv6 itself a geeks’ wet dream last year. You should then not be surprised to find me calling this Fantasyland a year later.

First of all, I want to make perfectly clear that I’m not advocating that IPv6 deployment should stop or slow down. I really wish it were actually faster, for purely selfish reasons I’ll get to later. Unfortunately I had to take a step back when I moved to London, as Hyperoptic does not have IPv6 deployed yet, at least in my building. But they provide a great service for a reasonable price, so I have no intention of switching to something like A&A just to get good IPv6 right now.

$ host hyperoptic.com
hyperoptic.com has address 52.210.255.19
hyperoptic.com has address 52.213.148.25
hyperoptic.com mail is handled by 0 hyperoptic-com.mail.eo.outlook.com.

$ host www.hyperoptic.com
www.hyperoptic.com has address 52.210.255.19
www.hyperoptic.com has address 52.213.148.25

$ host www.virginmedia.com
www.virginmedia.com has address 213.105.9.24

$ host www.bt.co.uk
www.bt.co.uk is an alias for www.bt.com.
www.bt.com has address 193.113.9.162
Host www.bt.com not found: 2(SERVFAIL)

$ host www.sky.com
www.sky.com is an alias for www.sky.com.edgekey.net.
www.sky.com.edgekey.net is an alias for e1264.g.akamaiedge.net.
e1264.g.akamaiedge.net has address 23.214.120.203

$ host www.aaisp.net.uk
www.aaisp.net.uk is an alias for www.aa.net.uk.
www.aa.net.uk has address 81.187.30.68
www.aa.net.uk has address 81.187.30.65
www.aa.net.uk has IPv6 address 2001:8b0:0:30::65
www.aa.net.uk has IPv6 address 2001:8b0:0:30::68

I’ll get back to this later.

IPv6 is great for complex backend systems: each host gets its own uniquely-addressable IP, so you don’t have to bother with jumphosts, proxy commands, and so on and so forth. Depending on the complexity of your backend, you can containerise single applications and then have a single address per application. It’s a gorgeous thing. But as you move towards user-facing frontends, things get less interesting. You cannot get rid of IPv4 on the serving side of any service, because most of your visitors are likely reaching you over IPv4, and that’s unlikely to change for quite a while longer.

Of course IPv4 address exhaustion is a real problem, and it's hitting ISPs all over the world right now. Mobile providers have already started deploying networks that only give users IPv6 addresses, and then use NAT64 to let them reach the rest of the world. This is not particularly different from using an old-school IPv4 carrier-grade NAT (CGN), which is what DS-Lite requires, but I'm told it can perform better and cost less to maintain. It also has the advantage of reducing the number of different network stacks that need to be involved.
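
To make that a bit more concrete, NAT64 works by embedding the IPv4 destination into a synthesized IPv6 address, which the gateway then translates back. A tiny sketch using the standard ipaddress module and the well-known 64:ff9b::/96 prefix from RFC 6052 (the address being mapped is just an example):

import ipaddress

NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")  # RFC 6052 well-known prefix

def synthesize(v4):
    # This is what a DNS64 resolver hands to an IPv6-only client when the
    # destination only has an A record; the NAT64 gateway reverses it.
    return ipaddress.IPv6Address(
        int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(v4)))

print(synthesize("151.101.0.67"))  # 64:ff9b::9765:43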

And in general, having to deal with CGN and NAT64 adds extra work, latency, and overall worse performance to a network, which is why gamers, as an example, tend to prefer a single-stack network, one way or the other.

$ host store.steampowered.com
store.steampowered.com has address 23.214.51.115

$ host www.gog.com
www.gog.com is an alias for gog.com.edgekey.net.
gog.com.edgekey.net is an alias for e11072.g.akamaiedge.net.
e11072.g.akamaiedge.net has address 2.19.61.131

$ host my.playstation.com
my.playstation.com is an alias for my.playstation.com.edgekey.net.
my.playstation.com.edgekey.net is an alias for e14413.g.akamaiedge.net.
e14413.g.akamaiedge.net has address 23.214.116.40

$ host www.xbox.com
www.xbox.com is an alias for www.xbox.com.akadns.net.
www.xbox.com.akadns.net is an alias for wildcard.xbox.com.edgekey.net.
wildcard.xbox.com.edgekey.net is an alias for e1822.dspb.akamaiedge.net.
e1822.dspb.akamaiedge.net has address 184.28.57.89
e1822.dspb.akamaiedge.net has IPv6 address 2a02:26f0:a1:29e::71e
e1822.dspb.akamaiedge.net has IPv6 address 2a02:26f0:a1:280::71e

$ host www.origin.com
www.origin.com is an alias for ea7.com.edgekey.net.
ea7.com.edgekey.net is an alias for e4894.e12.akamaiedge.net.
e4894.e12.akamaiedge.net has address 2.16.57.118

But multiple other options have sprung up to tackle the address exhaustion problem, faster than the deployment of IPv6 is happening. As I already noted above, backend systems, where the end-to-end path is under the control of a single entity, are perfect soil for IPv6: there's no need to allocate real IPv4 addresses to these, even when they have to talk over the proper Internet (with proper encryption and access control, it goes without saying). So we won't see more allocations like Xerox's or Ford's of a whole /8 for backend systems.

$ host www.xerox.com
www.xerox.com is an alias for www.xerox.com.edgekey.net.
www.xerox.com.edgekey.net is an alias for e1142.b.akamaiedge.net.
e1142.b.akamaiedge.net has address 23.214.97.123

$ host www.ford.com
www.ford.com is an alias for www.ford.com.edgekey.net.
www.ford.com.edgekey.net is an alias for e4213.x.akamaiedge.net.
e4213.x.akamaiedge.net has address 104.123.94.235

$ host www.xkcd.com
www.xkcd.com is an alias for xkcd.com.
xkcd.com has address 151.101.0.67
xkcd.com has address 151.101.64.67
xkcd.com has address 151.101.128.67
xkcd.com has address 151.101.192.67
xkcd.com has IPv6 address 2a04:4e42::67
xkcd.com has IPv6 address 2a04:4e42:200::67
xkcd.com has IPv6 address 2a04:4e42:400::67
xkcd.com has IPv6 address 2a04:4e42:600::67
xkcd.com mail is handled by 10 ASPMX.L.GOOGLE.com.
xkcd.com mail is handled by 20 ALT2.ASPMX.L.GOOGLE.com.
xkcd.com mail is handled by 30 ASPMX3.GOOGLEMAIL.com.
xkcd.com mail is handled by 30 ASPMX5.GOOGLEMAIL.com.
xkcd.com mail is handled by 30 ASPMX4.GOOGLEMAIL.com.
xkcd.com mail is handled by 30 ASPMX2.GOOGLEMAIL.com.
xkcd.com mail is handled by 20 ALT1.ASPMX.L.GOOGLE.com.

Another technique that slowed down the exhaustion is SNI. This TLS extension allows multiple certificates to be served from the same socket. Similarly to HTTP virtual hosts, which are now what just about everyone uses, SNI allows the same HTTP server instance to deliver secure connections for multiple websites that do not share a certificate. This may sound totally unrelated to IPv6, but before SNI became widely usable (it's still not supported by very old Android devices and Windows XP, but both of those are widely considered irrelevant in 2018), if you needed to serve different certificates, you needed different sockets, and thus different IP addresses. It was not uncommon for a company to lease a /28 and point it all at the same frontend system just to deliver per-host certificates; one of my old customers did exactly that, until they declared Windows XP too old to support and migrated all their webapps behind a single IP address with SNI.
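
To show what SNI means in practice on the server side, here is a minimal Python sketch (the hostnames and certificate paths are made up) where a single listening socket hands out different certificates depending on the name the client asked for:

import socket
import ssl

# One SSLContext per certificate, all of them served from the same IP and port.
contexts = {}
for hostname in ("www.example.com", "shop.example.com"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("/etc/tls/%s.pem" % hostname)
    contexts[hostname] = ctx

def pick_certificate(ssl_sock, server_name, default_context):
    # Called during the TLS handshake with the SNI hostname sent by the client.
    if server_name in contexts:
        ssl_sock.context = contexts[server_name]

default = contexts["www.example.com"]
default.set_servername_callback(pick_certificate)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 443))
listener.listen()
conn, _ = listener.accept()
tls_conn = default.wrap_socket(conn, server_side=True)  # handshake picks the cert

Before SNI, each of those certificates would have needed its own IP address.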

Does this mean we should stop caring about the exhaustion? Of course not! But if you are a small(ish) company and you need to focus your efforts on modernizing infrastructure, I would not expect you to focus on IPv6 deployment for the frontends. I would rather hope you'd prioritize deploying TLS (HTTPS) instead, since I would rather not have malware (including but not limited to "coin" miners) executed on my computer while I read the news! And that is not simple either.

$ host www.bbc.co.uk
www.bbc.co.uk is an alias for www.bbc.net.uk.
www.bbc.net.uk has address 212.58.246.94
www.bbc.net.uk has address 212.58.244.70

$ host www.theguardian.com  
www.theguardian.com is an alias for guardian.map.fastly.net.
guardian.map.fastly.net has address 151.101.1.111
guardian.map.fastly.net has address 151.101.65.111
guardian.map.fastly.net has address 151.101.129.111
guardian.map.fastly.net has address 151.101.193.111

$ host www.independent.ie
www.independent.ie has address 54.230.14.45
www.independent.ie has address 54.230.14.191
www.independent.ie has address 54.230.14.196
www.independent.ie has address 54.230.14.112
www.independent.ie has address 54.230.14.173
www.independent.ie has address 54.230.14.224
www.independent.ie has address 54.230.14.242
www.independent.ie has address 54.230.14.38

Okay, I know these snippets are getting old and I'm probably beating a dead horse. But what I'm trying to bring home here is that there is very little to gain from supporting IPv6 on frontends today, unless you are an enthusiast or a technology company yourself. I work for a company that believes in it and provides tools, data, and its own services over IPv6. But it's one company. And, as full disclosure, I have no involvement in this particular field whatsoever.

In all of the examples above, which are of course not complete and not statistically meaningful, you can see a few interesting exceptions. In the gaming world, Xbox appears to have IPv6-enabled frontends, which is not surprising when you remember that Microsoft even developed one of the first tunnelling protocols (Teredo) to kickstart IPv6 adoption. And of course XKCD, being run by a technologist and technology enthusiast, couldn't possibly ignore IPv6; but that's not what the average user needs from their Internet connection.

Of course, your average user spends a lot of time on platforms created and maintained by technology companies, and Facebook is another big player in the IPv6 landscape, so it has been reachable over IPv6 for a long while (though that's not the case for Twitter). But at the same time, users need their connection to access their bank…

$ host www.chase.com
www.chase.com is an alias for wwwbcchase.gslb.bankone.com.
wwwbcchase.gslb.bankone.com has address 159.53.42.11

$ host www.ulsterbankanytimebanking.ie
www.ulsterbankanytimebanking.ie has address 155.136.22.57

$ host www.barclays.co.uk
www.barclays.co.uk has address 157.83.96.72

$ host www.tescobank.com
www.tescobank.com has address 107.162.133.159

$ host www.metrobank.co.uk
www.metrobank.co.uk has address 94.136.40.82

$ host www.finecobank.com
www.finecobank.com has address 193.193.183.189

$ host www.unicredit.it
www.unicredit.it is an alias for www.unicredit.it-new.gtm.unicreditgroup.eu.
www.unicredit.it-new.gtm.unicreditgroup.eu has address 213.134.65.14

$ host www.aib.ie
www.aib.ie has address 194.69.198.194

to pay their bills…

$ host www.mybills.ie
www.mybills.ie has address 194.125.152.178

$ host www.airtricity.ie
www.airtricity.ie has address 89.185.129.219

$ host www.bordgaisenergy.ie
www.bordgaisenergy.ie has address 212.78.236.235

$ host www.thameswater.co.uk
www.thameswater.co.uk is an alias for aerotwprd.trafficmanager.net.
aerotwprd.trafficmanager.net is an alias for twsecondary.westeurope.cloudapp.azure.com.
twsecondary.westeurope.cloudapp.azure.com has address 52.174.108.182

$ host www.edfenergy.com
www.edfenergy.com has address 162.13.111.217

$ host www.veritasenergia.it
www.veritasenergia.it is an alias for veritasenergia.it.
veritasenergia.it has address 80.86.159.101
veritasenergia.it mail is handled by 10 mail.ascopiave.it.
veritasenergia.it mail is handled by 30 mail3.ascotlc.it.

$ host www.enel.it
www.enel.it is an alias for bdzkx.x.incapdns.net.
bdzkx.x.incapdns.net has address 149.126.74.63

to do shopping…

$ host www.paypal.com
www.paypal.com is an alias for geo.paypal.com.akadns.net.
geo.paypal.com.akadns.net is an alias for hotspot-www.paypal.com.akadns.net.
hotspot-www.paypal.com.akadns.net is an alias for wlb.paypal.com.akadns.net.
wlb.paypal.com.akadns.net is an alias for www.paypal.com.edgekey.net.
www.paypal.com.edgekey.net is an alias for e3694.a.akamaiedge.net.
e3694.a.akamaiedge.net has address 2.19.62.129

$ host www.amazon.com
www.amazon.com is an alias for www.cdn.amazon.com.
www.cdn.amazon.com is an alias for d3ag4hukkh62yn.cloudfront.net.
d3ag4hukkh62yn.cloudfront.net has address 54.230.93.25

$ host www.ebay.com 
www.ebay.com is an alias for slot9428.ebay.com.edgekey.net.
slot9428.ebay.com.edgekey.net is an alias for e9428.b.akamaiedge.net.
e9428.b.akamaiedge.net has address 23.195.141.13

$ host www.marksandspencer.com
www.marksandspencer.com is an alias for prod.mands.com.edgekey.net.
prod.mands.com.edgekey.net is an alias for e2341.x.akamaiedge.net.
e2341.x.akamaiedge.net has address 23.43.77.99

$ host www.tesco.com
www.tesco.com is an alias for www.tesco.com.edgekey.net.
www.tesco.com.edgekey.net is an alias for e2008.x.akamaiedge.net.
e2008.x.akamaiedge.net has address 104.123.91.150

to organize fun with friends…

$ host www.opentable.com
www.opentable.com is an alias for ev-www.opentable.com.edgekey.net.
ev-www.opentable.com.edgekey.net is an alias for e9171.x.akamaiedge.net.
e9171.x.akamaiedge.net has address 84.53.157.26

$ host www.just-eat.co.uk
www.just-eat.co.uk is an alias for 72urm.x.incapdns.net.
72urm.x.incapdns.net has address 149.126.74.216

$ host www.airbnb.com
www.airbnb.com is an alias for cdx.muscache.com.
cdx.muscache.com is an alias for 2-01-57ab-0001.cdx.cedexis.net.
2-01-57ab-0001.cdx.cedexis.net is an alias for evsan.airbnb.com.edgekey.net.
evsan.airbnb.com.edgekey.net is an alias for e864.b.akamaiedge.net.
e864.b.akamaiedge.net has address 173.222.129.25

$ host www.odeon.co.uk
www.odeon.co.uk has address 194.77.82.23

and so on and so forth.

This means that for an average user, an IPv6-only network is not feasible at all, and I think the idea of treating it as a concept already worth validating on real users is dangerous.

What it does not mean is that we should just ignore IPv6 altogether. Instead, we should prioritize it accordingly. We're in a 2018 in which IoT devices are vastly insecure, so the idea of having a publicly-addressable IP for each device in your home is not just uninteresting, but actively frightening to me. And for the companies that need the adoption, I would hope the priority right now is proper security, rather than adding an extra layer that creates more unknowns in their stack (because, and it's worth noting again, as I've had discussions about this too, it's not just the network that needs to support IPv6, it's the full application!). And if that means that non-performance-critical backends are not going to be available over IPv6 this century, so be it.

One remark I'm sure will arrive from at least some of the readers of this is that a significant share of the examples above appear to be hosted on Akamai's content delivery network, which, as we can tell from Xbox's website, supports IPv6 frontends. "It's just a button to press and you get IPv6, it's not difficult, they are slackers!" is the follow-up I expect. For anyone who has worked in the field long enough, this calls for a facepalm.

The fact that your frontend can receive IPv6 connections does not mean that your backends can cope with it. Whether it is for session validation, for fraud detection, or just market analysis, lots of systems need to be able to tell what IP address a connection was coming from. If your backend can’t cope with IPv6 addresses being used, your experience may vary between being unable to buy services and receiving useless security alerts. It’s a full stack world.
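
A trivial example of what "coping" means here: any backend that stores client addresses as a 32-bit integer, or validates them against an IPv4-only pattern, silently breaks the moment the frontend starts forwarding IPv6 clients. A small sketch of the difference, using only the standard ipaddress module:

import ipaddress

def client_key(address):
    # Normalize the client address for session or fraud-detection lookups.
    # Works for both families; a fixed 32-bit field or a CHAR(15) database
    # column would only ever fit the IPv4 case.
    return ipaddress.ip_address(address).packed

print(len(client_key("212.58.246.94")))   # 4 bytes
print(len(client_key("2a04:4e42::67")))   # 16 bytes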