Unnecessary, but required

In the past year, I’ve had to learn quite a few different lessons, some harder than others, some more gratifying than others. One of the main (but far from the only) sources of these lessons was learning to live with someone else — save for my mother, and a few months with Luca, I had never really shared an apartment, a flat, or a house with someone else for more than a few days. But now that I’m happily married, there’s no going back to solitude. And it’s a feeling I’m really happy about, despite the occasional challenges it has brought to both of us.

One of the differences we realised early on is that we have different tolerances for chaos and trinkets. I’m not particularly organised when it comes to sorting out my stuff, though I’m not a total slob either — I don’t mind having items spread across three rooms, and I was never particularly well known for ironed t-shirts. My wife’s much less… chaotic, but at the same time has fairly little patience for technology for the sake of technology.

This puts a dent in the number of random gadgets I end up buying just for the sake of trying them out, because they might end up unused, or even unwelcome if they somehow get in the way. I think my most impressive achievement has been getting her to accept that we have an electric cheese grater. I’m still trying to convince her it’s a good idea for me to disassemble the battery charger to replace the current plug-in adapter with a micro-USB port. Which is honestly not necessary at all: the plug is an AC-DC adapter, a europlug with one of those europlug-to-British screw-in adapters, which means that if we decide to leave London for the Continent, we won’t need to replace it — it would only become an issue if we moved to a different part of the world, and we can address it then.

But at the same time, this is the type of modification that in my eyes is… well, required. Why would I not turn my electric cheese grater into a USB-powered electric cheese grater?

This reminded me of what Adam Savage (of MythBusters fame) says in his book Every Tool’s A Hammer (which, incidentally, is an awesome read that I would recommend to everyone who has even a passing interest in creating stuff):

I often describe myself as a serial skill collector. I’ve had so many different jobs over my lifetime […] that my virtual tool chest is overflowing. Still I love learning new ways of thinking and organizing, new techniques, new ways of solving old problems. […] The skills I have, all of them, are simply arrows in my mental quiver, tools in my problem-solving tool chest, to achieve that thing. […] And I learned each of them specifically for that reason. […] Eventually, […] I came to realise this was the ONLY way I could successfully learn a skill—by doing something with it, by applying it in my real world.

Adam Savage, Every Tool’s A Hammer

This is pretty much my life. I have pretty clearly failed at learning things “academically”, lasting only a few weeks at the University of Venice, and instead built up my knowledge by working on different projects, both opensource and for customers, and by trying things out for myself. This has been a blessing and a curse at the same time: while it meant that I have been collecting a bunch of skills, just like Adam says above, for the most part they are superficial skills: I’ve only rarely had to deep-dive into a technology or a problem in my day job, and the amount of time I have to spend on side projects has been fairly low, and shrinking.

Long gone are the days when I could sit down to write a stupid IRC bot in Qt, just because I could — and not just for lack of time. It’s also because, for the most part, I keep telling myself it’s a bad idea to work on something low level when someone else already did it better than I could possibly do — which is likely true, but it fails to meet my requirement of adding the skill to my repertoire. And that’s by itself a career-limiting move, comparable to the bubble problem.

With these issues in mind, I’m definitely glad my wife is understanding of why I sometimes spend money, time, effort (or most likely, all three) just to get something done because I want to, and not because there’s much need for it. It’s unnecessary, but required for me to keep my skills up to scratch. And being able to do that, without upsetting my partner despite the chaos it creates, is a significant privilege.

Just as much of a privilege is being able to afford the time, space, and money for all these projects. I think this is, for the most part, something that is not quite clear out there yet: being able to contribute to opensource, to write up tips and tricks, to document how to do things — these are privileges. And I think it’s important to share this privilege, even in the form of tips, tricks, videos, and blogs — which is why this blog still exists, and why, even with ever-shrinking spare time, I try to write updates.

Whether it is Bigclive on YouTube, with his sometimes off-colour comments that make me uncomfortable, or Adam Savage’s own Tested, which can rely on a real, professional shop, or Micah’s most awesome electronics reverse engineering channel, or Foone’s Twitter feed, I am very glad for those who do their best to share knowledge — and I don’t really need to know why they are doing it. Even when it doesn’t really help me directly (because I can’t learn something if I don’t try it myself), I know it can help someone else. Or inspire someone else (or in some cases, me) to go and try something that will make them learn more.

Abbott, the Libre 2, and the takedown

A few people today messaged and mentioned me on Twitter regarding the news that Abbott has requested the takedown of something related to their Libre 2. I gave a quick hot take on this on Twitter, but I guess it’s worth having something in long form to be referenced, since I’m sure this will be talked about a lot more, not least because of the ominous permalink chosen by Boing Boing (“they-literally-own-you”) and the fact that, game-of-telephone style, the news went from the original takedown to Reddit phrasing it as “Abbott asserts copyright on your data”, which is both silly and untrue.

So let’s start with a bit of background that most of the re-posters of this story probably don’t know much about. The Libre 2 is an upgrade of the FreeStyle Libre system that I wrote a lot about and that I use daily. It comes with both a reader device and support in the LibreLink app for Android and (on more recent iPhones) iOS. The main difference from the original Libre system is that the sensors provide both NFC and BLE capabilities, with the ability to proactively notify of high or low blood sugar conditions — something the old NFC-only sensors cannot do, and which makes them more similar to CGM solutions like Dexcom‘s.

In both the Libre and Libre 2 systems, the sensors don’t report blood sugar values, unlike most classic glucometers. Instead, they report a number of “raw” values, including readings from a number of temperature sensors. There’s a great explanation of these from Pierre Vandevenne, here and here. To get a real blood sugar measurement, you need to apply an algorithm, which Abbott keeps refining. That algorithm is what I usually refer to as the “secret sauce”, and it is implemented in both the reader’s firmware and the LibreLink app itself.

Above I used the word “something” to refer to what was taken down. The reason I say that is that Boing Boing in its title straight up calls this a “tool” — but when you read the linked post from the affected person, it is described as “details of how to patch the LibreLink app”. Since I have not seen what the repository was before it was taken down, I have no idea which one to believe exactly. In either case, it looks like Abbott does not like someone effectively leveraging their “secret sauce” in a different application — but in particular, it does not look like we’re talking about something like glucometerutils, which implemented the protocol “clean”, without deriving from the original software.

Indeed, Boing Boing seems to make the case that this is the equivalent of implementing a file format: «[…] just because Apple’s Pages can read Word docs, it doesn’t mean that Pages is a derivative of MS Office.» Except that it’s not as clear cut. If you implemented support for a format by copying the implementation code into your software, that actually would make it a derivative work, quite obviously. In this case, if I am to believe the original report instead, the taken-down content was instructions to modify Abbott’s app — and not a redistribution of it. Since I’m not a lawyer, I have no idea where that stands, but it’s clearly not as black-and-white as Boing Boing appears to make it.

As I said on Twitter, this does not affect either of my projects, since neither relies on the original software; they are instead descriptions of the protocols. They also don’t include any information or support for the Libre 2, since the protocol appears to have changed. There’s an open issue with discussion, but it also appears that this time Abbott is using some encryption on the protocol. And that might be an interesting problem, as someone might have to get up close and personal with the code to figure that part out — but if that’s the case, we’re back to needing a clean-room design to implement it.

I also want to quote Pierre explicitly from the posts I linked above:

[…] in the Libre FRAM, what we are seeing is a real “raw” signal. While the measure of the glucose signal itself is fairly reliable, it is heavily post-processed by the Libre firmware. Specifically – and in no particular order – temperature compensation, delay compensation, de-noising… all play a role. That understanding and, to some extent, my MD training, led me to extreme caution and prevented me from releasing my “solution”, which I knew to be both incomplete and unable to handle some error conditions.

The main driver behind my decision was the well known “first do no harm” (primum non nocere) motto, an essential part of the Hippocratic Oath which I symbolically took. I still stick by it today. […]

[…]

Today, there are a lot of add-on devices that aim to transform the Libre into a full CGM. To be honest, in general, I do not like either the results they provide or their (in)convenience. None of those I have tried delivered results that would lead to an approval by a regulatory agency, none of them were stable for long periods of time. But, apparently, patients still feel they are helpful and there is now a thriving community that aims at improving them.

Pierre Vandevenne

While I have not sworn a Hippocratic Oath myself, I share Pierre’s concerns: I have explicitly avoided documenting the sensors’ protocol, and I won’t be merging code that tries to read them directly, even if provided.

And when it comes to copyright issues, I do weigh them fairly heavily: respecting licenses is the fundamental way Free Software even works. So I would prefer someone provide me with a description of Abbott’s encryption protocol, rather than an implementation of it, where I might have to worry about a “poisonous tree.”

Working in a bubble, contributing outside of it

The holiday season is usually a great time for personal projects, particularly for people like me who don’t go back “home” with “the family” — quotes needed, since for me home is where I am (London) and where my family is (me and my wife.) Work tends to be more relaxed – even with the added pressure of completing the OKRs for the quarter, and defining those for the next – and given that there is no public transport running, the time saved commuting also adds up, making this an ideal time to work on hobbies.

Unfortunately, this year I’m feeling pretty useless on this front, and I thought this feeling of uselessness is at least something I can talk about for the dozen-or-so remaining readers of this blog, in an era of social media and YouTube videos. If this sounds very dismissive, it’s probably because that is the feeling of irrelevance that has taken over me, and something I should probably aim to overcome in 2020, one way or another.

If you are reading this post, it’s likely that you noticed my FLOSS contributions waning and pretty much disappearing over the past few years, except for my work around glucometerutils and the usbmon-tools package (which kind of derives from it.) I have contributed the odd patch to the Linux kernel, and more recently to some of the Python typing tooling, but those are really drive-by contributions, made as I found time for them.

Given some of the more recent Twitter threads on Google’s policies around open source contributions, you may wonder if it is related to that, and the answer is “not really”. Early on, I was granted IARC approval to keep working on unpaper (which turned out to be possibly overkill), on the aforementioned glucometerutils, and on some code I wrote while reverse engineering my gaming mouse. More recently, I’ve leveraged the simplified patching policy, and been granted approval to release both usbmon-tools and tanuga (although the latter is only released as a skeleton right now.)

So I have all the options, and all the opportunities, to contribute to FLOSS projects while employed by a big multinational Internet company. Why don’t I do it more, then? I think the answer is that I work in a bubble for most of the day, and when I try to contribute something in my spare time, I find myself missing the support structure that the bubble gives me.

I want to make clear here that I’m not saying that everything is better in the bubble. Just that the bubble is soft and warm enough to make the world outside of it scary, sometimes annoying, but definitely more vast. And despite a number of sensible tools being available out there (and in many cases, better tools), it takes a significant investment to research the right way to do something, to the point that I suffer from CBA syndrome.

The basic concepts are not generally new: people have talked out loud at conferences about the monorepo; my friend Dinah McNutt spoke and wrote at length about Rapid, the release system we use internally, which drives the automatic releases; and so on. If you’re even more interested in the topic, this March the book Software Engineering at Google will be released by O’Reilly. I have not read it myself, but I have interacted on and off with two of the curators and I’m sure it’s going to be worth its weight in gold.

Some of the tools are also being released, even if sometimes in modified form. But even when they are, the amount of integration you have internally is lost when trying to use them outside. I have considered using Bazel for glucometerutils in the past — but in addition to being a fairly heavy dependency, there’s no easy way to reference most of the libraries that glucometerutils needs. At the end of the day, it was not worth using, even though it would have made my life easier by reducing the cognitive load of working on opensource projects in my personal time.

Possibly the main “support beam” of the bubble, though, is the opinionated platform, which can be seen from the outside in the form of the style guides, but extends further. To keep the examples related to glucometerutils: while its tests do use absl‘s parameterized class, they are written in a completely different style than I would use at work, and they feel wrong when it comes to importing the local copy of the module to test it. When I looked around to figure out the best practice for writing tests in Python, I could find literally dozens of blog posts, StackOverflow answers, and documentation for testing frameworks, all giving slightly different answers. In the bubble you have (pretty much) one way to write a basic test — and while people can be creative even within those guidelines, creativity is usually frowned upon.
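To give a concrete idea, this is a minimal sketch of the style in question; the names here are illustrative, not lifted from the actual glucometerutils test suite:

```python
from absl.testing import absltest, parameterized


def parse_reading(raw: str) -> float:
    """Stand-in for whatever function would be under test."""
    return float(raw)


class ParseReadingTest(parameterized.TestCase):

    # Each tuple becomes a separate test case, without writing
    # one test method per input.
    @parameterized.parameters(
        ("5.5", 5.5),
        ("10", 10.0),
    )
    def test_parse(self, raw, expected):
        self.assertEqual(parse_reading(raw), expected)


if __name__ == "__main__":
    absltest.main()
```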

The same is true for release engineering. As I noted and linked above, all of the release grunt work in the bubble is done by the Rapid tool — and for the most part it’s automated. While there’s definitely more than one way to configure the tool, at least you know which tool to use. And while different teams often have differing opinions on those configurations, you can at least find the opinion of your team, or of the closest team to you with an Opinion (with a capital O), and follow that — it might not be perfect for your use, but if it’s allowed it usually means it was reviewed and vouched for (or copy-pasted from something else that was.)

An inside joke in the Google bubble is that the documentation is always out of date and never to be trusted. Besides the unfairness of the joke to the great tech writers I have had the pleasure to work with, who are more than happy to make sure the documentation is not out of date (but need to know that’s the case, and most of them don’t find out until it’s too late), the truth is that at least we do have documentation for most processes and tools. The outside world has tons of documentation too, some of it out of date, and it’s very hard to tell whether any given piece is still correct and valid.

Trying to figure out how to configure a CI/CD tool for a Python project on GitHub (or worse, trying to figure out how to make it release valid packages on PyPI!) still feels like going by the early-2000s HOWTOs, where you hope that the three-year-old description of the XFree86 configuration file still matches the implementation (hint: it never did.) Lots of the tools are not easy to integrate, and opting into them takes energy (and sometimes money) — the end result being that despite me releasing usbmon-tools nearly a year ago, you still need an unreleased dependency, as the fix I needed for it is not present in any released version, and I haven’t dared to bother the author for a new release yet.

It’s very possible that if I was not working in a bubble, all of these issues wouldn’t be big unknowns — probably if I spent a couple of weeks reviewing the various options for CI/CD I could come up with a good answer for setting up automated releases, and then I could go to the dependency’s author and say “Hey, can I set this up for you?” and that would solve my problem. But that is time I don’t really have, when we’re talking about hobby projects. So I end up opening the editor in the Git repository I want to work on, adding a dozen or so lines of code towards something I want to do, figuring out that I’m missing the tool, library, interface, opinion, document, or procedure that I need, feeling drained, and closing the editor without having committed – let alone pushed – anything.

Stop slagging off IoT users if you care about them

It’s the season for gifts (or, as some would say, consumerism), and as is way too often the case, it starts a holy war between those who enjoy gadgets, new technology, and Internet-connected appliances, and those who define themselves as security conscious and tell people that they wouldn’t connect a computer to the Internet if they didn’t have to.

Those who follow me on Twitter probably already know which side of this divide I find myself on: I do have a few IoT devices at home, and I’m “IoT-positive”. I even got into a long Twitter discussion years ago about the fact that IoT is no longer just a random marketing buzzword, but has come to refer to a class of devices that the public at large can identify, the same way “white goods” does in the British Isles.

I have a very hard time giggling at Twitter posts from geek supremacists making fun of Internet-connected ovens, when the very same geeks insist they would never possibly buy something like that — despite the excited reactions of the Linux, BSD and FLOSS communities nearly fifteen years ago at the release of a NetBSD-operated toaster.

This does not mean that I’m okay with all the random stuff that’s being proposed as an Internet-enabled device. I have looked briefly at Bluetooth toothbrushes and I’m still lost on what their value proposition is. And even last year when I got a smart plug, it took me a lot of thought to figure out what it would be used for, and I decided that, for 11 months of the year, the plug will stay in a box, and it will come out at the same time as the Christmas Tree.

Today’s musing comes from finding a “Smart Essential Oil Diffuser”, which was funny because I was looking for something completely different (a kitchen oil bottle, it’s a long story), but I actually clicked on it out of curiosity. I looked into this type of device last year, while I was writing my post about smart plugs: they sounded like an interesting way to make sure they are on for a few minutes before we arrive home, just to give the flat a good smell without having to keep a more standard Ambipur on all the time.

Indeed, I have considered converting our Muji diffuser into a “Smart” one with an Adafruit Featherwing, but it works too well to open it up right now, and nearly everything I can see in stores like TkMaxx appears to be fairly low quality, with power supplies rated too low to be true. But the device I found over there also appears to be a fairly bad one, so I think our old-school Muji diffuser will stay around instead.

The thing is, whether you like it or not, the public at large, not just the geeks, is the driving force of manufacturers. And you won’t win anyone over by being smug and pointing out how good you are at not buying Internet-enabled stuff because you don’t trust it. The public will buy it anyway. So instead of throwing all IoT options under the bus and making fun of their users, I prefer Matthew’s approach of actually looking into the various lightbulbs and documenting which ones are, indeed, terrible.

Indeed, if you think that Internet-enabled aroma diffusers are pointless, useless, and nobody will want one… you’ll find out that someone will make one, people will buy one, and most likely some random Chinese factory will start making a generic enough model that other companies can rebrand, providing the least secure option out there.

I think this is also a valid metaphor for politics nowadays. It doesn’t matter that you are sure you have the right answer — if you demonize the public at large, telling them they are stupid, or that they are at fault for things, they’re not likely to take your advice for long.

So if you care about the people around you, instead of telling them that IoT is terrible and that you should never connect anything to a computer in a million years, try finding what is not terrible, while still providing them with the convenience they desire. Whether it is a smart lightbulb, a smart thermostat, or an app-enabled doorbell. And if you can’t find anything, and you still think you’re smarter than others, make it. Clearly there’s a desire for these tools — can you make one that’s safe and secure?

MSI X299 SLI PLUS problems and solutions

Last year, I posted about an issue with missing BitLocker and PIN authentication with my replacement Gamestation build. While it does not look like a particularly popular post, I did confirm that at least a couple of people managed to get good use out of it.

As usual, my Twitter feed contains spoilers for this blog post, as I have ranted, complained, and asked questions (mostly to Jo) trying to figure out my Windows problems. The reason I’m writing this down is, as usual, as a reference for myself, so I don’t repeat the same mistakes over and over again, and as a reference for others, since searching for one of the error codes I’m going to talk about turns up almost exclusively scammy “PC fixing” websites. And yes, I know I keep using the word BIOS below even though this is clearly a UEFI board, but MSI calls it that, and honestly, for most non-technical folks the difference between the two terms doesn’t exist.

All long help threads should have a sticky globally-editable post at the top saying ‘DEAR PEOPLE FROM THE FUTURE: Here’s what we’ve figured out so far …’

First of all, as noted in the previous post, it looks like nearly all of the settings in the BIOS are lost on any firmware upgrade. This is particularly annoying given that a lot of the updates appear to be early-boot microcode updates to cover the increasing complexity of mitigating Spectre-style vulnerabilities, which reasonably shouldn’t need to change the semantics or format of settings such as Secure Boot, TPM settings, or smart fan configuration.

So make sure to take good screenshots of all your settings before updating your firmware, as otherwise you’ll fight for hours trying to reconfigure everything the way you had it before.

Your computer is not resuming from sleep when you press the power button. This appears to be common; I’ve found a bunch of forum posts by people complaining about this behaviour on a number of MSI motherboards. Most of them appear to end DenverCoder9-style, although with a little more detail: people claiming they solved the issue by either downgrading or upgrading the motherboard’s BIOS. Not wanting to downgrade my BIOS, and having just upgraded it, I wanted to find a better answer, and it turns out I probably did. Here’s the solution: disable the GO2BIOS feature.

Some more details, which can be useful for others in the future if they encounter similar issues and the solution I’m providing does not help them. The GO2BIOS feature by MSI is a shortcut to enter the BIOS configuration screen without using the keyboard, and it’s particularly handy once you enable all the fast-boot options, as the keyboard might not respond at all. To force entry into the BIOS configuration, you just need to keep the power button pressed for four seconds when you turn on the computer. That’s what clued me in to the connection between the setting and the failure to resume, as they both relate to the power button.

The reason why downgrading or upgrading the BIOS appeared to solve the issue is the one I noted above: all firmware updates on these boards appear to completely reset the settings to defaults, and the GO2BIOS feature is not enabled by default (and probably few people would think to re-enable it in a hurry.)

Windows 10 bluescreens with WHEA_UNCORRECTABLE_ERROR. This is trickier, mostly because all of the search hits for this particular code appear to point at very dodgy websites, and the only hit I could find on the Microsoft website was a forum post suggesting that the particular code I was seeing was related to AMD CPUs. Since my machine is an i7, that made no sense whatsoever.

The WHEA in the name stands for Windows Hardware Error Architecture, which suggested that the bluescreen was caused by something like a Machine-Check Exception. This was particularly scary because it started happening right after I installed a new NVMe SSD, which appeared to get very warm, leading me to first install two more fans, and then replace the original fans with PWM ones.

During this “ordeal” I had also been installing and updating quite a few pieces of software related to the CPU, the motherboard, the Kraken cooler, and so on. And since I had just updated the BIOS, I had also been tweaking a lot of parameters, including trying to re-enable the auto-overclock feature that, as I discussed previously, appears to be implemented mostly in firmware.

Eventually, I solved the problem by uninstalling MSI’s Control Center software. I had already disabled the OC assistant, but even with that I kept receiving random blue screens when browsing websites, or just opening Lightroom. Since I uninstalled the Control Center software I have not experienced a single one for a few days. And that includes a “torture test” with Prime95 that brought the CPU to 100°C and into thermal throttling.

I’m not sure what the root cause is. I can only imagine that there’s some strange interaction between the firmware and the software that was not quite well tested. Or maybe a new Windows 10 update caused Control Center to fight for resources. But whatever the reason, it seems the right thing to do was to remove MSI’s software, which doesn’t really do anything you can’t do in the BIOS configuration screen anyway.

I hope this post can find its way to those looking for answers for these (or similar enough) issues. And if you find that there are other possible causes for this, feel free to leave a comment on the post.

Planets, Clouds, Python

Half a year ago, I wrote some thoughts about writing a cloud-native feed aggregator. I have since started sketching out some ideas of how I would design it myself, and I even went through the (limited) trouble of having it approved for release. But I have not actually released any code — or, to be honest, written any code either. The repository has been sitting idle.

Now, with Python 2’s demise coming soon, and me not being interested in keeping a server around almost exclusively to run Planet Multimedia, I started looking into this again. The first thing I realized is that I both want to reuse as much existing code as I can, and want to integrate with “modern” professional technologies such as OpenTelemetry, which I appreciate from work, even if it sounds like overkill.

But that’s where things get complicated: while going full “left-pad”, with a module for literally everything, is not something you’ll find me happy about, a quick look at feedparser, probably the most common module for reading feeds in Python, shows just how much code is spent trying to cover for old Python versions (before 2.7, even), or to implement minimal viable interfaces to avoid mandatory dependencies altogether.

Thankfully, as Samuel from NewsBlur pointed out, it’s relatively trivial to just fetch the feed with requests, and then pass it down to feedparser. And since there are integration points for OpenTelemetry and requests, having an instrumented feed fetcher shouldn’t be too hard. That’s probably going to be my first focus when writing Tanuga, next weekend.
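To sketch the idea (the fetch_feed name is mine, and I’m assuming the OpenTelemetry requests instrumentation package, so treat the details as provisional, not as released Tanuga code):

```python
import feedparser
import requests
from opentelemetry.instrumentation.requests import RequestsInstrumentor

# Hook OpenTelemetry into requests once, at startup: every HTTP call
# made through requests then produces a trace span.
RequestsInstrumentor().instrument()


def fetch_feed(url: str) -> feedparser.FeedParserDict:
    # requests does the fetching (and gets traced); feedparser only
    # parses the already-downloaded body.
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return feedparser.parse(response.text)
```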

Speaking of NewsBlur, the chat with Samuel also made me realize how much of it is still tied to Python 2. Since I’ve gathered quite a bit of experience porting to Python 3 at work, I’m trying to find some personal time to contribute smaller fixes to get it running on Python 3. The biggest hurdle right now is setting it up on a VM so that I can start it up in Python 2 to begin with.

Why am I back looking at this pseudo-actively? Well, the main reason is that rawdog is still using Python 2, and that is going to be a major pain, security-wise, next year. But it’s also the last non-static website that I run on my own infrastructure, and I really would love to get rid of it entirely. Once I do that, I can at least stop running my own (dedicated or virtual) servers. And that’s going to save me time (and money, but time is the more important one here too.)

My hope is that once I find a good solution to migrate Planet Multimedia to a Cloud solution, I can move the remaining static websites to other solutions, likely Netlify, as I did for my photography page. And after that, I can stop the last remaining server, and be done with sysadmin work outside of my flat. Because honestly, it’s not worth my time to run all of these.

I can already hear a few folks complaining with the usual remarks of “it’s someone else’s computer!” — but the answer is that yes, it’s someone else’s computer, but the computer of someone who’s paid to do a good job with it. This is possibly the only way for me to manage to carve out some time to work on more Open Source software.

PSD2 Made Me Do It

The European “Revised Directive on Payment Services” (usually just called PSD2) has recently entered into legislation in many countries, including the UK — despite the current political turmoil. In addition to requirements around data access and APIs, and additional limitations for financial service providers, it includes the requirement for financial institutions to provide what is called “Strong Customer Authentication”.

The idea is to provide a stronger guarantee that it is indeed the customer accessing their balance or executing a financial operation. None of this should feel particularly sophisticated, given that banks had provided multi-factor authentication options for many years before this. But if you have read my blog before, you probably know my opinion of banks’ security theatre features.

Indeed, UK – and Irish – banks still appear to believe that asking for only a subset of the characters of a password, or of the digits of a PIN, is good security practice, despite this having been easily debunked by any web engineer with a bit of sense.
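If it’s not obvious why that practice is a red flag: a service that stores passwords the sensible way, as salted one-way hashes, cannot answer a question about individual characters at all. A quick sketch:

```python
import hashlib
import os

password = "correct horse battery staple"

# What a sensible service stores: a salted, one-way hash.
salt = os.urandom(16)
stored = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

# The full password can be verified by recomputing the hash...
assert hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1) == stored

# ...but there is no way to check just characters 2, 5 and 9 against
# `stored`. To answer that question, the bank has to keep the password
# in plaintext or under reversible encryption, both weaker options.
```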

My job has nothing to do with financial services or PSD2, which means I have a very basic understanding of its intricacies. On the other hand, I’m able to observe how various companies are receiving the directive and implementing it for their customers. Take for example American Express, who sent reminders to their customers to keep their Android app up to date, as they are preparing to send SafeKey notifications – their “2FA” authentication, similar to Verified by Visa and MasterCard SecureCode – directly to the customers’ phones. Similarly, Santander recently sent me a contract update that, among other things, gives them permission to send notifications via app or email, rather than just SMS. Pretty much the same story applies to the Italian UniCredit, which also replaced their physical password cards (yes, they still had some) and RSA tokens with app notifications.

This is not rocket science or anything particularly new. Even my American bank, Chase, sends similar notifications to either SMS or email, whether while logging in or when executing a transaction — and American banks are not particularly well known for their innovative ideas. Indeed, Chase has been doing this for the past three years, without any directive requiring it, and with a fairly low bullshit level. It even supports OAuth2 delegation for transfers, which TransferWise uses. I guess we’re now seeing European banks catching up to a fairly low bar.

At the other end of this we have Fineco, now no longer part of UniCredit. Their “strong customer authentication” appears to be an additional 7-digit PIN called a “mobile code.” How and where this is going to be used is not particularly clear — the announcement says it’ll be used to hide your balance, but that does not appear to be the case right now. You need to set it in the mobile app, and once done, you’re prompted to link it to your fingerprint. The interesting part is that you already need an additional code to execute operations, and have needed it for the past two years. You also have a separate “client services” PIN, and both of those are 8 digits. And the “web password” is itself only 8 characters. You would think that instead of four “memorables”, having one that can be longer than 20 characters would work better.

Setting banks and financial institutions aside, I think nothing can top the original email sent by John Lewis, the British department store (which also operates the Waitrose supermarkets). On September 2nd, they sent an email titled Important information about payment changes, which effectively introduced PSD2 and SCA to their customers. In the email, there was this gem:

SHOPPING IN STORE
You’ll notice changes when making contactless payments in our shops, including when using Apple Pay, Samsung Pay and payments via wearable technology such as smart watches. You may be asked to insert your card and key in your PIN. Chip and PIN payments will continue to work as normal.

WHAT YOU NEED TO DO
As the checks are random, you won’t know in advance whether validation is required, and neither will our Partners. So if you plan to use contactless payment, make sure you have the relevant card with you, or an alternative method to use, so you can continue with your purchase.

my John Lewis email, 2019-09-02

I took to Twitter then to rant about the insanity of suggesting that customers insert a card when using a mobile-based payment system. Not just because there may not be a card to insert (Revolut allows connecting a virtual card to Google Pay, so there’s no matching physical card for it), not just because there shouldn’t be a way for the merchant to link the Google Pay/Apple Pay token back to the original card you connected, but most importantly because the authentication provided by an unlocked phone is stronger than that of a Chip’n’Pin card.

But it got even worse with “What you need to do”, because they explicitly said they were introducing random checks, not the risk-based checks that PSD2 and SCA usually suggest. And let’s again ignore the mention of a “relevant” card that may not exist. It makes it a lottery to figure out whether you can pay for the groceries you’re buying, and honestly I don’t want an awkward moment when their till system decides to quiz me on a card I might not have to begin with.

I don’t know if anyone at the store chain noticed my tweet rant, but two days later, they sent another email, titled An update on Strong Customer Authentication.

At John Lewis & Partners, we are committed to ensuring you have a safe and secure experience when shopping with us. On Monday 2 September we sent you an email about Strong Customer Authentication (SCA) and the importance of your card issuer having your most up-to-date contact information.
We incorrectly suggested that you may be asked to insert your card and key in your PIN when using Apple Pay and Samsung Pay. We are pleased to tell you that you are not required to present your card or enter your PIN when using these payment methods, and you can continue to use Apple Pay and Samsung Pay as normal.

my John Lewis email, 2019-09-04

I don’t know if this was a change of plan, where someone pointed out that implementing it that way was silly, or just a communication error in the first place. But it definitely shows how careless John Lewis’s communication around this was. I somewhat expect other companies to be in the same boat, and I just haven’t noticed because I’m not their customer.

Speaking of Twitter, I saw at least two people recently complaining that their banks refuse connections from IP addresses in countries outside their area of operation. While this does not seem to be announced as part of SCA, I have a certain feeling that it is becoming more popular because of it. It’s the same kind of risk analysis that forces me to use TunnelBear to connect to my GP’s online services to order my medical supplies when I’m travelling, as their app rejects any request coming from a non-UK address.

I’m afraid that, as usual with bank security, we’re not talking about rational solutions. We’re instead looking at solutions that consultants can sell to banks, and that bank management can feel confident enough to defend in court. And maybe they can convince their customers that, while their lives are being made miserable, it’s all done for security.

It effectively reminded me of Andrea’s work on chip-and-pin implementations, now nearly eight years ago:

Andrea Barisani and Daniele Bianco talking about Chip&PIN.

Honestly, I wish banks took their ideas from TransferWise, which, among all of my bank accounts, is the only one implementing 2FA as push notifications through the app on my phone.

Beurer GL50, Linux and Debug Interfaces

In the previous post, where I reviewed the Beurer GL50, I said that on Windows it appears as a CD-ROM containing the installer and a portable version of the software used to download the data off it. This is actually quite handy for users, but it leaves Linux and macOS users behind — unless, of course, you want to use the Bluetooth interface.

I did note that on Linux the device does not work correctly at all. Indeed, when you connect it to a modern Linux kernel, it fails to mount entirely. Worse, because of the way udev senses a new CD-ROM being inserted, it also causes an infinite loop in userspace, making udev use most of a single core for hours and hours, trying to process CD-in, CD-out events.

When I noticed this I thought it would be a problem in the USB Mass Storage implementation, but at the end of the day the problem turned out to be one layer below that, in the SCSI command implementation instead. Because yes, of course, USB Mass Storage virtual CD-ROM devices still mostly point at SCSI implementations below.

To provide enough context, and to remind myself how I went about this if I ever forget: the Beurer device appears to implement its virtual CD-ROM interface on a chip developed by either Cygnal or Silicon Labs (the latter bought the former in 2003). I only know the Product ID of the device, 0x85ED; I failed at tracking down the Silicon Labs model to figure out the why and how.

To find my way around the Linux kernel, and try to get the device to connect at all, I ended up taking a page out of marcan’s book, and used qemu’s ability to launch a Linux kernel directly, with a minimal initramfs containing only the bare minimum of files. In my case, I used the busybox-static binary that came with openSUSE as the base, since I didn’t need any particular reproduction case besides trying to mount the device.

The next problem was figuring out how to get the right debug information. At first I needed to inspect at least four separate parts of the kernel: USB Mass Storage, the Uniform (sic) CD-ROM driver, the SCSI layer, and the ISO9660 filesystem support — none of them seemed a clear culprit at the very beginning, so debugging time it was. Each of them appears to have its own idea of how to do debugging, at least up to version 5.3, which is the one I’ve been hacking on.

The USB Mass Storage layer has its own configuration option (CONFIG_USB_STORAGE_DEBUG), and once it’s enabled in the kernel config, a ton of information on the USB Mass Storage operations is output to the kernel console. SCSI comes with its own logging support (CONFIG_SCSI_LOGGING), but as I found out a few days of hacking later, you also need to enable it through /proc/sys/dev/scsi/logging_level, and to do so you need to calculate an annoying bitmask — thankfully there’s a tool in sg3_utils called scsi_logging_level… though the fact that it’s needed at all says a lot, in my opinion. The block layer in turn has its own CONFIG_BLK_DEBUG_FS option, but I didn’t even get around to looking at how that’s configured.
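For reference, the bitmask packs a 3-bit level for each SCSI logging facility into a single integer; this is roughly what scsi_logging_level computes. A sketch, with shift values taken from my reading of the kernel’s scsi_logging.h at the time, so treat them as an assumption rather than a stable ABI:

```python
# 3-bit level (0-7) per facility, packed into one integer that is
# then written to /proc/sys/dev/scsi/logging_level.
SHIFTS = {
    "error": 0, "timeout": 3, "scan": 6,
    "mlqueue": 9, "mlcomplete": 12,
    "llqueue": 15, "llcomplete": 18,
    "hlqueue": 21, "hlcomplete": 24,
    "ioctl": 27,
}


def scsi_logging_mask(**levels: int) -> int:
    """Packs per-facility log levels into the sysctl bitmask."""
    mask = 0
    for facility, level in levels.items():
        mask |= (level & 0x7) << SHIFTS[facility]
    return mask


# For example, error logging at level 3 and mid-layer queueing at 1:
print(scsi_logging_mask(error=3, mlqueue=1))
```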

The SCSI CD driver (sr) has a few debug outputs that need to be enabled by removing manual #if conditions in the code, while the cdrom driver comes with its own log-level configuration, a module parameter to enable the logging, and overall a complicated set of debug knobs. And just enabling them is not enough — at some point the debug output in the cdrom driver was migrated to the modern dynamic debug support, which means you need to enable the debugging specifically for the driver, and then enable dynamic debug on top. I sent a patch to just remove the driver-specific knobs.
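With the driver-specific knobs gone, turning on the cdrom driver’s output boils down to the standard dynamic debug interface, something like this, assuming debugfs is mounted in the usual place:

```python
# Enable the pr_debug() call sites of the cdrom module via the
# dynamic debug control file (requires CONFIG_DYNAMIC_DEBUG and root).
with open("/sys/kernel/debug/dynamic_debug/control", "w") as control:
    control.write("module cdrom +p")
```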

Funnily enough, when I sent the first version of the patch, I was told about the ftrace interface, which turned out to be perfect for continuing to sort out the calls I needed to tweak. This turned into another patch, which removes all the debug output that is redundant with ftrace.

So after all of this, what was the problem? Well, there’s a patch for that, too. The chip used by this meter does not actually implement all the MMC commands, or all of the audio CD commands. Some of those missing features are okay: an error returned from the device is properly ignored. Others cause further SCSI commands to fail, and that’s why I ended up having to implement vendor-specific support to mask away quite a few features — and to gate their usage in a few functions. It appears to me that as CD-ROM, CD-RW, and DVD drives became more standard, the driver stopped properly gating feature usage.

Well, I don’t have more details to share about what I did, beyond what is already in the patches. But I think if there’s a lesson here, it’s that if you want to sink your teeth into the Linux kernel’s code, you can definitely take a peek at a random old driver, and figure out whether it was over-engineered in a past that did not come with nice trimmings such as ftrace, dynamic debug support, or, generally, the idea that the kernel is one big common project.

Glucometer Review: beurer GL50 evo

I was looking for a new puzzle to solve after I finally finished with the GlucoRx Nexus (aka TaiDoc TD-4277), so I decided to check what Boots, being one of the biggest pharmacies in the country, would show on their website under “glucometer”. The answer was the Beurer GL-50, which surprised me because I didn’t know Beurer did glucometers at all. It was also extremely overpriced at £55, but thankfully I found it for £20 at Argos/eBay, so I decided to give it a try.

The reason I was happy to get one was that the device itself looked interesting, and reminded me of the Accu-Chek Mobile, with its all-in-one design. While the website calls it a 3-in-1, there are only two components to the device: the meter itself and the lancing device. The “third” device is the USB connector that appears when you disconnect the other two. I have to say that this is a very interesting approach, as it makes it much easier to connect to a computer — if it weren’t for the fact that the size of the meter makes it very hard to plug in.

On my laptop, I can only use it in the USB port on the right, because on the left it would cover the USB-C port I use for charging. It’s also fairly tall, which makes it hard to use on chargers such as my trusted Anker 5-port USB-C (of which I have five, spread across rooms.) In the end, I had to remove two cables from one of them to be able to charge the meter, which is required for it to be usable at all when it arrives.

To be honest, I’m not sure if the battery being discharged was normal, or due to the fact that the device appears to have been left on the shelf for a while: the five sample strips included to test the device expire in less than two months. I guess it’s not the kind of device that flies off the shelves.

FreeStyle Libre, gl50 evo, GlucoRx Nexus

So how does the device fare compared to other meters? Size-wise, it’s much nicer to handle than the GlucoRx, although it looks bigger than the FreeStyle Libre reader. Part of the reason is that the device, in its default configuration, includes the lancing device, unlike both of the meters I’m comparing it with above. If you don’t plan to use the included lancing device, for instance because, like me, you have a favourite one (I’m partial to the OneTouch Delica), you can remove it and hide the USB plug with the alternative cap provided. The meter then takes on a much smaller profile than the Libre too. I actually like the compact size better than the spread-out one of the FreeStyle Precision Neo.

FreeStyle Libre, gl50 evo (without lancing device), GlucoRx Nexus

Interface-wise, the gl50 is confusingly different from anything I have seen before. It comes with a flush on/off switch on the side, which would be frustrating for most people with short nails, or for people with impaired motor control. Practically, I think this and the “Nexus” are at opposite ends of the scale — the TD-4277 has a big, blocky display that can be read without glasses and a single, big button, which makes it a perfect meter for the elderly. The gl50 is frustrating even for me, in my thirties.

The flush switch is not the only problem. After you turn it on, the only control you have is a wheel, which can be clicked. So you navigate menus by going up, down, and clicking. Not very obvious, but feasible. And since the wheel can easily be pressed accidentally in a purse, that’s presumably why you get the flush switch. The UI is pretty much barebones, but it includes settings for enabling Bluetooth (with a matching Android app, which I have not checked out for this review yet) and NFC (not sure what for). Worthy of note is that the UI defaults to German, without asking you, and you need to find your way to the settings in that language to switch to English, Italian, French, or Spanish.

Once you plug it into a computer with Windows, the device appears as a standard CD-ROM UMS device that includes an auto-started “portable” version of the download software, which is a very nice addition, again reminiscent of the Accu-Chek Mobile. It also comes with an installer for the onboard software. As a preview of the technical information post on this meter: it looks like, similarly to the OneTouch Verio, the readings are downloaded through UMS/SCSI packets.

I called out Windows above because I have not checked how this even presents on macOS, and on Linux… it doesn’t. It looks like I may have to take some time to debug the kernel, because what I get on Linux is infinite dmesg spam. I fear the UMS implementation on the meter is missing something, and Linux sends a command that the meter does not recognize.

The software itself is pretty bland, and there’s not much to say about it. It does not appear to have a way to set, or even get, the time on the device, which in my case is still stuck in 2015, because I haven’t yet bothered to roll the wheel all the way to today’s date.

Overall, I wouldn’t recommend this meter over any of the other meters I own or have used. But if beurer stays in the glucometer market (assuming they make it themselves, rather than rebranding someone else’s, as GlucoRx and Menarini appear to do), it might be an interesting start of further competition in Europe, which I would actually appreciate.

Glucometer notes: GlucoRx Nexus

This is a bit of a strange post, because it would be a glucometer review, except that I bought this glucometer a year and a half ago, teased a review, and don’t actually remember if I ever wrote any notes for it. While I may be able to get a new feel for the device to write a review, I don’t even know if the meter is still being distributed, and a few of the things I’m going to write here suggest to me that it might not be, but who knows.

I found the Nexus as an over-the-counter boxed meter at my local pharmacy in London. It appears to me that the device was explicitly designed to be used by the elderly, not just because of the large screen and numbers, but also because it comes with a fairly big lever to push out the test strip, something I had previously only seen in the Sannuo meter.

This is also the first meter I’ve seen with an always-on display — although it seems that the backlight turns on only when the device is woken up, and otherwise the display is pretty much unreadable. I guess they can afford this type of display given that the meter is powered by two AAA batteries, rather than a CR2032 like others.

As you may have guessed by now from the link at the top about the teased review, this is the device that uses a Silicon Labs CP2110 HID-to-UART adapter, for which I ended up writing a pyserial driver earlier this year. The software to download the data seems to be available from the GlucoRx website for Windows and Mac — confusingly, the website you actually download the file from is not GlucoRx’s but TaiDoc’s. TaiDoc Technology Corporation is named on the label under the device, together with MedNet GmbH. A quick look around suggests TaiDoc is a Taiwanese company, and now I’m wondering whether I’m missing some cultural significance around the test strips, or blood, and the push-out lever.

I want to spend a couple of notes on the Windows software, which is the main reason why I don’t know if the device is still being distributed. The download I was provided today was for version 5.04.20181206 – which suggests the software was still being developed as of December last year – but it does not seem to have been properly tested on Windows 10.

The first problem is that the Windows Defender malware detection tool actually considers the installer itself malware. I’m not sure why, and honestly I don’t care: I’m only using this on a 90-day-expiring Windows 10 virtual machine that barely has access to the network. The other problem is that when you try to run the setup script (yes, it’s a script, it even opens a command prompt), it tries to install the redistributables for .NET 3.5 and Crystal Reports, fails, and errors out. If you try to run the setup for the software itself explicitly, you’re told you need to install .NET 3.5, which is fair, but then it opens a link to Microsoft’s website that is no longer found, giving you a 404. Oops.

Setting aside these two annoying, but not insurmountable, problems, what remains is to figure out the protocol behind the scenes. I wrote a tool that reads a pcapng file and outputs the “chatter”, and you can find it in the usbmon-tools repository. It’s far from perfect, and among other things it still does not dissect the actual CP2110 protocol — only the obvious packets that I know include data traffic to the device itself.

This was enough to figure out that the serial protocol is one of the “simplest” I have seen. Not in the sense of being easy to reverse, but rather in terms of the complexity of the messages: it’s a ping-pong protocol with fixed-length 8-byte messages, of which the last byte is a simple checksum (a sum truncated to 8 bits), the first is a fixed start byte of 0x51, and a fixed end byte carries a bit selecting between host-to-device and device-to-host. Add to that the first nibble of the command always having the same value (2), and the amount of information actually carried by each message comes down to 34 bits. Which is a pretty low amount, even for data as simple as glucose readings.
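To make the framing concrete, here’s a rough sketch of what building and validating one of these messages could look like; the direction byte values are placeholders of mine, not the real device constants, which will be in the protocol documentation once I post it:

```python
START_BYTE = 0x51

# Placeholder direction markers: per the description above, a bit in
# the trailing byte distinguishes the two directions. These exact
# values are illustrative, not taken from the device.
DIRECTION_HOST = 0xA3
DIRECTION_DEVICE = 0xA5


def checksum(frame: bytes) -> int:
    """Sum of the first seven bytes, truncated to 8 bits."""
    return sum(frame[:7]) & 0xFF


def build_message(command: int, data: bytes, to_device: bool = True) -> bytes:
    """Builds one fixed-length 8-byte frame:
    start, command, 4 data bytes, direction, checksum."""
    assert len(data) == 4
    frame = bytes([START_BYTE, command]) + data + bytes(
        [DIRECTION_HOST if to_device else DIRECTION_DEVICE]
    )
    return frame + bytes([checksum(frame)])


def validate_message(frame: bytes) -> bool:
    """Checks length, start byte, and checksum of a received frame."""
    return (
        len(frame) == 8
        and frame[0] == START_BYTE
        and frame[7] == checksum(frame)
    )
```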

At any rate, I think I already have a bit of the protocol figured out. I’ll probably finish it over the next few days and the weekend, and then I’ll post the protocol in the usual repository. Hopefully, if there are other users of this device, they will be well served by someone writing a tool to download the data that is not as painful to set up as the original software.