Opinion: FinTech vs High Street

If you’re a regular reader of this blog, you may have noticed that I have strong opinions about consumer financial services, particularly when it comes to Revolut, which I have written about quite a bit by now.

I didn’t start writing about these services because of a professional interest, but rather because when I moved from Italy to Dublin (via Los Angeles), I felt like I had stepped back ten or more years in terms of banking. And while this improved significantly when I moved to London, there are still a few things that baffle me from time to time.

But as I discussed in one of my recent Revolut-bashing posts, compared to Ireland the high street banking options in London are so much more interesting that I’ve effectively ditched Revolut for day-to-day payments. So why would anyone care about FinTech products?

I have been thinking about this for a while, not just as a customer, but with an awareness that, if I ever decided to change my perspective in life and trade my rather cushy professional position for a riskier one, FinTech appears to be the place to be right now. Particularly given the unfortunate experience I have gained in this field by now.

One of the issues appears to be branding, and trust. Quite a few people appear to dislike high street banks because of their association with past scandals and bad news. And that’s what makes it funny to see how high street banks appear to want to enter this market under entirely new brands.

Another thing that Monzo appears to capitalize on, in their tube advertisements, is the ability to receive instant notifications of the money you spend. And that’s something I can definitely relate to. It is particularly important when you deal with shadier shops, or coffee shops with untrained staff, who may claim that a transaction didn’t really go through and suggest you pay cash instead, charging you twice.

Indeed, this was one of the biggest advantages of using Revolut for me in Ireland. The “famous” Tesco Bank credit card didn’t even have an online banking platform, and the only way for me to confirm whether a transaction went through was by looking at my Tesco points statements. But this is not something revolutionary: I had notifications of all online transactions, and card-present transactions over €50, on my Italian pre-paid card in 2006 (via SMS, not via app at the time, of course.)

While I feel Monzo is right to take a swing at most high street banks for not implementing these notifications, even in 2019 London it’s not true that you need to “go FinTech” to get this level of support. My American Express does the same, and you cannot say that AmEx is a new player on the market!

And it doesn’t stop at just sending me notifications for the charges: American Express goes one step further, and integrates with Google Pay so that you get the notifications even without having the American Express application installed.

Indeed, I have a feeling that, for the most part, customers would be happy if the level of support in high street banking was on par with American Express:

  • Their website lets you log in with a simple username/password combination, rather than the silly security theatre of “Give me the 1st, 2nd, 12th character of your password, and the 1st, 5th and 6th digit of your PIN” (seriously, setting aside the random index selection, why on Earth do you need two equivalent factors?)
  • New charges on the card are notified immediately, either through app or through Google Pay (I don’t know about Apple Pay but I assume that’s the case there as well).
  • You can get your card’s PIN online, which is usually verified by a text message OTP.
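To expand on that first point: a partial-character check cannot be implemented against a properly hashed password, which means the bank must keep the secret in some recoverable form. A minimal sketch of the difference (the function names and PBKDF2 parameters here are illustrative, not any bank's actual scheme):

```python
import hashlib
import os
from typing import List

# With a salted hash, verification is all-or-nothing: you can only
# check the *entire* password, never individual characters.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("correct horse", salt)

# Full-password check: possible with the hash alone.
assert hash_password("correct horse", salt) == stored

# Partial check ("give me the 1st and 5th characters"): impossible
# against a hash. The bank must keep something it can read back,
# which is a worse trade-off than one strong, properly hashed factor.
def check_positions(plaintext_on_file: str, positions: List[int], chars: str) -> bool:
    return all(plaintext_on_file[p - 1] == c for p, c in zip(positions, chars))

assert check_positions("correct horse", [1, 5], "ce")
```

In other words, the “1st, 5th and 6th digit” theatre only works if the password or PIN is stored recoverably, which is exactly what you don’t want.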

One of the things that AmEx does not do, that I think all of the FinTech players appear to do, is freezing/unfreezing the card on the fly. A feature that Barclays has been advertising all over as if they had invented it.

It is quite possible, even certain, that some UK high street banks have already started providing all of these options, maybe in different combinations. As I said, Barclays does appear to have the ability to freeze/unfreeze the card. Fineco does not mail out the PIN, but rather has you request it online and delivers it by text message. And as I pointed out before, Santander has a credit card with no foreign transaction fees.

Many of the articles I have read about the importance of FinTech startups imply that the main reason why big banks can’t be this flexible or “innovative” is that they have old, heavy and difficult-to-manage backends. From second-hand discussions, I can believe that the backends are indeed as heavy and clunky as they are purported to be, but it does seem to me that many of the features involved can’t be that tied to the backends, given that most of the banks can provide those features already.

One feature that I see being deployed across different banks is the ability to “budget” expenses. While it sounds particularly interesting, it appears to be mostly a “frontend” feature. Santander has it, but somehow they decided to implement it in a separate Android app only, which I gave up on. Indeed, it does not allow you to correct their classification of expenses, which makes it pretty much useless, not just because some vendors are classified completely wrong, but also because sometimes the same vendor might be used for different reasons (Boots, CVS, Walgreens, and similar all sell both medicines and groceries; how you categorize their spend depends on what you bought!)

While Santander have already won me over as a bank customer, I do feel that they would win over more of my credit card spend from American Express if they implemented “this one weird trick” of informing me of charges as they happen. Small things like that are one of the reasons I use my AmEx quite a lot in the UK, even after reaching the spend needed to upgrade my Marriott membership to Gold.

So yeah, my hope is that high street banks will finally see the competition from FinTech as a list of features that they should, opportunistically, implement, rather than an excuse for the branding and marketing departments to come up with new ideas to be “hip”.

“Planets” in the World of Cloud

As I have written recently, I’m trying to reduce the number of servers I directly manage, as it’s getting annoying and, honestly, out of touch with what my peers are doing right now. I have already hired another company to run the blog for me, although I do keep access to all its information at hand, and can migrate it where needed. I’m also giving Firebase Hosting a try for my tiny photography page, to see whether it would be feasible to replace my homepage with it.

But one of the things I still definitely need a server for is keeping Planet Multimedia running, despite its tiny userbase and dwindling content (if you work in FLOSS multimedia, and you want to be added to the Planet, drop me an email!)

Right now, the Planet is maintained through rawdog, a Python script that works locally with no database. This is great to run on a vserver, but in a world where most of the investment and improvement goes into Cloud services, that’s not really viable as an option. And to be honest, the fact that this is still using Python 2 worries me not a little, particularly when the author insists that Python 3 is a different language (it isn’t).

So, I’m now in the market to replace the Planet Multimedia backend with something that is “Cloud native” — that is, designed to be run on some cloud, and possibly lightweight. I don’t really want to start dealing with Kubernetes, running my own PostgreSQL instances, or setting up Apache. I really would like something that looks more like the redirector I blogged about before, or like the stuff I deal with for a living at work. Because it is 2019.

So, sketching this “on paper” very roughly, I expect such a piece of software to be along the lines of a single binary with a configuration file, that outputs static files to be served by the web server. Kind of like rawdog, but long-running. Changing the configuration would require restarting the binary, but that’s acceptable. No database access is really needed, as caching can be maintained at process level — although that would mean that permanent redirects couldn’t be rewritten in the configuration. So maybe some configuration database would help, but it seems most clouds support some simple unstructured data storage that would solve that particular problem.

From experience at work, I would expect the long-running binary to itself be a webapp, so that you can either inspect (read-only) what’s going on, or make changes to the configuration database with it. And it should probably have independent parallel execution of fetchers for the various feeds, which then store the received content into a shared (in-memory only) structure, used by the generation routine to produce the output files. It may sound like over-engineering the problem, but that’s a bit of a given for me, nowadays.
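The architecture above could be sketched roughly like this: parallel fetchers writing into a locked, in-memory store, and a generation routine rendering output from a snapshot of it. This is just an illustration of the shape of the thing; the feed URLs are placeholders, and a real version would need an actual HTTP client and feed parser:

```python
import threading
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FeedStore:
    """In-memory only: entries are re-fetched on restart, no database."""
    _lock: threading.Lock = field(default_factory=threading.Lock)
    _entries: Dict[str, List[str]] = field(default_factory=dict)

    def update(self, feed: str, entries: List[str]) -> None:
        with self._lock:
            self._entries[feed] = entries

    def snapshot(self) -> Dict[str, List[str]]:
        # Copy under the lock so generation never races with fetchers.
        with self._lock:
            return {k: list(v) for k, v in self._entries.items()}

def fetcher(feed: str, store: FeedStore) -> None:
    # Placeholder for a real HTTP fetch and feed parse.
    store.update(feed, [f"entry from {feed}"])

def render(store: FeedStore) -> str:
    # Generation routine: turn the shared snapshot into static output.
    lines: List[str] = []
    for feed, entries in sorted(store.snapshot().items()):
        lines.extend(entries)
    return "\n".join(lines)

store = FeedStore()
threads = [threading.Thread(target=fetcher, args=(url, store))
           for url in ("https://example.com/a.xml", "https://example.com/b.xml")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(render(store))
```

The webapp and cloud-storage parts would sit on top of this core, but the fetch/store/render split is the essential bit.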

To be fair, the part that makes me most uneasy is authentication, but Identity-Aware Proxy might be a good solution for it. I have not looked into it, but I have used something similar at work.

I’m explicitly ignoring the serving-side problem: serving static files is a problem that has mostly been solved, and I think all cloud providers have some service that allows you to do that.

I’m not sure if I will be able to work more on this, rather than just providing a sketched-out idea. If anyone knows of something like this already, or feels like giving building it a try, I’d be happy to help (employer permitting, of course). Otherwise, if I find some time to build something like this, I’ll try to get it released as open source, to build upon.

Introducing usbmon-tools

A couple of weeks ago I wrote some notes about my work in progress to implement usbmon captures handling code, and pre-announced I was going to publish more of my extraction/inspection scripts.

The good news is that the project is now released, and you can find it on GitHub as usbmon-tools with an Apache 2.0 license, and open to contributions (with a CLA, sorry about that part). This is the first open source project I release using my employer’s releasing process (for other projects, I used the IARC process instead), and I have to say I’m fairly pleased with the results.

This blog post is meant mostly as a way to explain what’s going on in my head regarding this project, with the hope that contributors can help make it a reality. Or that they can contribute other ideas to it, even when those are not part of my particular plans.

I want to start with a consideration on the choice of language. usbmon-tools is written in Python 3. And in particular it is restricted to Python 3.7, because I wanted to have access to type annotations, which I found extremely addictive at work. I even set up Travis CI to run mypy as part of the integration tests for the repository.
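For those who have not used them, here is a tiny illustration of what type annotations buy you (a made-up function, not anything from usbmon-tools). Running mypy over code like this statically catches the case where the possibly-None result is used as a plain int:

```python
from typing import Optional

def parse_port(value: str) -> Optional[int]:
    """Return the port number, or None if the string is not numeric."""
    return int(value) if value.isdigit() else None

# mypy would reject `parse_port("8080") + 1` outright, because the
# Optional result might be None. Narrowing with an explicit check
# satisfies the type checker:
port = parse_port("8080")
if port is not None:
    print(port + 1)  # fine: `port` has been narrowed to int here
```

It is exactly this class of bug, which annotations catch before the code ever runs, that makes them so addictive.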

For other projects I tend to be more conservative, and wait for Debian stable to have a certain version before requiring it as a minimum, but as this is primarily a toolset for developers, I’m going to expect its audience to be able to deal with Python 3.7 as the requirement. This version was released nearly a year ago, and that should be plenty of time for people to have it at hand.

What the project should achieve, in my view, is an easy way for developers to dissect a USB snooping trace. I started by building a simplistic tool that recreates a text-format trace from the pcapng file, based on the official documentation of usbmon in the kernel (I have some patches to improve on that too, but those will probably become a post by themselves next week). It’s missing isochronous support, and it’s not fully tested, but it at least gave me a few important insights into the format itself, including the big caveat that the “id” (or tag) of the URBs is not unique.

Indeed, I think that alone is one of the most important pieces of the puzzle in the library: in addition to parsing the pcapng file itself, the library can re-tag the events so that they get a real unique identifier (UUID), making it significantly easier to analyze the traces.
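The idea, reduced to a sketch (this is not the actual usbmon-tools API, just an illustration of the re-tagging logic), is to pair each submission event with its completion and give the pair a fresh UUID, so that a reused kernel tag no longer conflates two unrelated URBs:

```python
import uuid
from typing import Dict, List, Tuple

def retag(events: List[Tuple[str, str]]) -> List[Tuple[str, str, str]]:
    """Assign a unique UUID per URB lifetime.

    Each event is a (kernel_urb_id, event_type) pair, where event_type
    is 'S' (submission) or 'C' (completion), as in usbmon traces.
    """
    in_flight: Dict[str, str] = {}  # kernel URB id -> assigned UUID
    tagged = []
    for urb_id, etype in events:
        if etype == "S":
            in_flight[urb_id] = str(uuid.uuid4())
        new_id = in_flight.get(urb_id, str(uuid.uuid4()))
        tagged.append((new_id, urb_id, etype))
        if etype == "C":
            # After completion the kernel is free to reuse this id.
            in_flight.pop(urb_id, None)
    return tagged

# The same kernel id appearing for two different URBs gets two UUIDs:
events = [("0xffff8800", "S"), ("0xffff8800", "C"),
          ("0xffff8800", "S"), ("0xffff8800", "C")]
tagged = retag(events)
assert tagged[0][0] == tagged[1][0]  # submit/complete pair share a UUID
assert tagged[0][0] != tagged[2][0]  # reused kernel id gets a new UUID
```

Once every event carries a truly unique identifier, grouping and filtering a trace becomes a straightforward dictionary lookup rather than a guessing game.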

My next steps on the project are to write a more generic tool to convert a USB capture into what I call my “chatter format” (similar to the one I used to discuss serial protocols), and a more specific one that converts HID traces (because HID is a better-defined protocol, and we can go a level deeper in exposing it in a human-readable form). I’m also considering whether it would be within reach to provide the tool with a HID descriptor blob, parse it, and use it to decode the HID traffic. That would make some debugging much easier, for instance the work I did when fixing the ELECOM DEFT trackball.

I would also love to be able to play with a trace in a more interactive manner, for instance by loading it into a Jupyter notebook, so that I could try parsing the blobs interactively, but unless someone with more experience with those contributes the code, I don’t expect I’ll have much time for it.

Pull requests are more than welcome!

Updating email addresses, GDPR style

After scrambling to find a bandaid solution for the upcoming domainpocalypse caused by EURid, I set out to make sure that all my accounts everywhere use a more stable domain. Some of you might have noticed, because it was very visible in my submitting .mailmap files to a number of my projects, bundling together old and new addresses alike.

Unfortunately, as I noted in the previous post, not all the services out there allow you to change your email address from their website, and of those, very few allow you to delete the account altogether (I have decided that, in some cases, keeping an account open for a service I stopped using is significantly more annoying than just removing it). But as Daniel reminded me in the comments, the Right to rectification (or Right to correction) allows me to leverage the GDPR for this process.

I have thus started sending email to the provided Data Protection contact for various sites lacking an email editing feature:

Hello,

I’m writing to request that my personal data is amended, under my right to rectification (Regulation (EU) 2016/679 (General Data Protection Regulation), Article 16), by updating my email address on file to [omissis — new email] (replacing the previous [omissis — old email] — which this email is being sent from, and to which you can send a request to confirm identity).

I take the occasion to remind you that you have one month to respond to this request free of charge per Art. 12(3), that according to the UK Information Commissioner’s Office interpretation (https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-of-access/) you must comply with this request however you receive it, and that it applies to the data as it exists at the time you receive this.

The responses to this have been of all sorts. Humans being amused at the formality of the requests, execution of the change as requested, and a couple of push backs, which appear to stem from services that not only don’t have a self-service way to change the email address, but also seem to lack technical means to change it.

The first case of this is myGatwick — the Gatwick airport flyer portal. When I contacted the Data Protection Officer to change my email address, the first answer was that at best they could close the account for the old email address and open a new one. I pointed out that’s not what I asked them to do and not what the GDPR requires them to do, and they tried to argue that email addresses are not personal data.

The other interesting case is Tile, the beacon startup, which will probably be the topic of a separate blog post, because their response to my GDPR request is a long list of problems.

What this suggests to me is that my first guess (someone used email addresses as primary keys) is not as common as I feared — although that does appear to be the problem for myGatwick, given their lack of technical means. Instead, the databases appear to be designed correctly, but the self-service feature of changing the email address is just not implemented.

While I’m not privy to product decisions for the involved services, I can imagine that one of the reasons why it was done that way is that implementing proper access controls, to avoid users locking themselves out, or to limit the risk of account takeover, is too expensive in terms of engineering.

But as my ex-colleague Lea Kissner points out on Twitter, computers would be better at not introducing human errors in the process to begin with.

Of all the requests I sent that were actioned, there were only two cases in which I was asked to verify anything about either the account or the email address. In both cases my resorting to GDPR requests was not because the website didn’t have the feature, but rather because it failed: British Airways and Nectar (UK). Both actioned the request straight from Twitter, and asked security questions (not particularly secure, but still good compared to the rest).

Everyone else has at best sent an email to the old address to inform me of the change, in reply to my request. That is the extent of the verification most of the DPOs appear to put on GDPR requests. None of the services were particularly critical: takeaway food, table bookings, good tea. But if it had not been me sending these requests, I would probably be having a bad half hour the next time I tried using them.

Among the requests I sent yesterday there was one to delete my account with Detectify — I used it when it was a free trial, found it not particularly interesting to me, and moved on. While I had expressed my intention to disable my account on Twitter, the email I sent was actioned, deleting my account (or at least it’s expected to have been deleted by now), without a confirmation request of any kind, or any verification that I did indeed have access to the account.

Maybe they checked the email headers to figure out that I was really sending from the right email address, instead of just assuming so because it looked that way. I can only imagine that they would have done more due diligence if I were a paying customer, if nothing else to keep getting my money. I just find it interesting that a security-oriented company didn’t realise that it’s much more secure to provide self-service interfaces than to let a human decide.

Dexcom G6: new phone and new sensor

In the previous posts on the Dexcom G6, I talked about the setup flow, and then the review after a week, before the first sensor expired. This was intentional, because I wanted to talk about the sensor replacement flow separately. Turns out this post will also have a second topic, which came about by chance: how do you reconfigure the app when you change phone or, as happened to me this time, when you are forced to do a factory reset.

I won’t go into the details of why I had to do a factory reset. I’ll just say that the previous point about email and identities was involved.

So what happens when the Dexcom app is installed on a new phone, or when you have to reinstall it? The first thing it will ask you to do is log in again, which is the easy part. After that, though, it will ask you to scan the sensor code. Which made no sense to me! I said “No Code”, and then it asked me to scan the transmitter code. At that point it managed to pair with the transmitter, and it showed me the blood sugar readings for the past three hours. I can assume this is the amount of caching the transmitter can do. If the data is uploaded to Dexcom’s systems at all, it is not shown back to the user beyond those three hours.

It’s important to note here that unless you are at home (and you kept the box the transmitter came with), or you have written down the transmitter serial number somewhere, you won’t be able to reconnect. You need the transmitter serial number for the two of them to pair. To compare this again to the LibreLink app: that one only requires you to log in with your account, and the current sensor can just be scanned normally. Calibration info is kept online and transmitted back as needed.

A few hours later, the first sensor (not the transmitter) finally expired and I prepared myself to set the new one up. The first thing you see when you open the app after the sensor expired is a “Start new Sensor” button. If you click that, you are asked for the code of the sensor, with a drawing of the applicator that has the code printed on the cover of the glue pad. If you type in the code, the app will think that you already set up the whole sensor and it’s ready to start, and will initiate the countdown of the warm-up. At no point does the app direct you to apply the new sensor. It gives you the impression that you need to first scan the code and then apply the sensor, which is wrong!

Luckily, despite this mistake, I was able to tell the app to stop the sensor by telling it I’d be replacing the transmitter, and then re-enrolling the transmitter already present. This is all completely messed up in the flow, particularly because when you do the transmitter re-enrolment, the steps are in the correct order: it has you scan the sensor code, then tells you to put the transmitter in, and then has you scan the transmitter serial number (again, remember to keep the box). It even optionally shows you the explanation video again — once again, totally unlike just starting a new sensor.

To say that this is badly thought out is an understatement to me. I’ll compare this again with the LibreLink app that, once the sensor terminates, actually shows you the steps to put on a new sensor (you can ignore them and go straight to scanning the sensor if you know what you’re doing).

On the more practical side, the skin adhesive that I talked about last week actually seems to work fine at keeping the sensor in place, and it makes dealing with my hairy belly simpler by bunching up the hair and keeping it attached to the skin, rather than having it act as fur against the sensor’s glue. It would probably be quite a bit simpler to apply if they provided a simple guide to the size of the sensor, though: showing it in the video is not particularly helpful.

The sensor still needed calibration: the readings were off by more than 20% at first, although they are now back on track. This either means the calibration is off in general, or somehow there’s a significant variation between the value read by the Dexcom sensor and the actual blood sugar. I don’t have enough of a medical background to be able to tell this, so I leave that to the professionals.

At this point, my impression of the Dexcom G6 system is that it’s a fairly decent technical implementation of the hardware, but a complete mess on the software and human side. The former, I’m told, can be obviated by using a third-party app (by the folks who are not waiting), which I will eventually try, for the sake of reviewing it. The latter would probably require them to pay more attention to their competitors.

Abbott seems to have the upper hand with user-friendly apps and reports, even though there are bugs and their updates are few and far between. They also don’t do alerts, and despite a few third-party “adapters” that transform the Libre “flash” system into a more proper CGM, I don’t think there will be much in the way of reliable alerts until Abbott changes direction.

dot-EU Kerfuffle: what’s in an email anyway?

You may remember that last year I complained about what I called the dot-EU kerfuffle, related to the news that EURid had been instructed to cancel the domain registrations of UK entities after Brexit. I thought the problem had passed when they agreed to consider European citizens as eligible holders of dot-EU domains, with an agreement reached last December, and due to enter into effect in… 2022.

You would think that, knowing a new regulation is due to enter into effect, EURid would put their plan of removing UK residents’ access to those domains on hold for the time being, but it is not so. Indeed, they instead sent a notice effectively stating that any such domain, old or new, will be taken off the zone, by being marked as WITHDRAWN first, and REVOKED second.

This means that on 2020-03-30, a lot of previously-assigned domains will be available for scammers, phishers, and identity thieves, unless they are transferred before this coming May!

You can get a more user-focused read of this in this article by The Register, which does justice to the situation, despite the author seemingly being a leaver, judging from the ending of a previous article linked there. One of the useful parts of that article is learning that there are over 45 thousand domain names assigned to individuals residing in the UK — and probably a good chunk of those belong to either Europhile Brits, or citizens of other EU countries residing in the UK (like me).

Why should we worry about this, given the amount of other pressing problems that Brexit is likely to cause? Well, there is a certain issue of people being identified by email addresses that contain domain names. What neither EURid nor The Register appears to have at hand (and I even less) is a way to figure out how many of those domains are actually used as logins, or receive sensitive communications such as GP contacts from the NHS, or from financial companies.

Because if someone can take over a domain, they can take over the email address, and from there you can very quickly ruin the life of, or at least heavily bother, any person who might be using a dot-EU domain. The risks of scams, identity theft and the like are being ignored once again by EURid in an attempt at a political move, at a time when nobody gives a damn about what EURid is doing.

As I said in the previous post, I have been using flameeyes[dot]eu as my primary domain for the past ten or eleven years. The blog has been moved to its own domain. My primary website is still there, but will be moved shortly. My primary email address has changed. You’ll see me using a dot-com email address more often.

I’m now going through the whole set of my accounts to change the email they have on file for me to a new one on a dot-com domain. This is significantly helped by having all of them in 1Password, but that’s not enough: it only tells you which services use the email address as username. It says nothing about (say) the banks that use a customer number, but still have your email on file.

And then there are the bigger problems.

Sometimes the email address is immutable.

You’d be surprised at how many websites have no way at all to change an email address. My best guess is that whoever designed the database schema thought that just using the email address as a primary key was a good idea. This is clearly not the case, and it has never been. I’d be surprised if anyone who got their first email address from an ISP would make that mistake, but in the era of GMail, it seems this is often forgotten.

I now have a tag in 1Password to show me which accounts I can’t change the email address of. Some of them are really minimal services that you wouldn’t be surprised just store an email address as identifier, such as the Fallout 4 Map website. Some appear to have bugs with changing email addresses (British Airways). Some… surprised me entirely: Tarsnap does not appear to have a way to change the email address either.

While for some of these services being unable to receive email is not a particularly bad problem, for most of them it would be. Particularly when it comes to plane tickets. Let alone the risk that any one of those services would store passwords in plain text, and send them back to you if you forgot them. Combine that with people who reuse the same password everywhere, and you can start seeing a problem again.

OAuth2 is hard, let’s identify by email.

There is another problem if you log into services with OAuth2-based authentication providers such as Facebook or (to a lesser extent) Google. Quite a few of those services create an account for you at first login, using the email address they are given by the identity provider. And then they just match on the email address the next time you log in.

While changing Google’s email address is a bit harder (but not impossible if, like me, you’re using GSuite), changing the address you register on Facebook with is usually easy (exceptions exist). So if you signed up for a service through Facebook, and then changed your Facebook address, you may not be able to sign in again — or you may end up signing up for the service again when you try.
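The robust way to do this, for what it’s worth, is to key accounts on the provider’s stable subject identifier rather than on the email address. A sketch of the difference (the claim names follow OpenID Connect; the account “database” here is a stand-in dict, not any real service’s schema):

```python
from typing import Dict

accounts: Dict[str, str] = {}  # lookup key -> account name

def login_by_email(claims: Dict[str, str]) -> str:
    # Fragile: the email address can change at the provider's side.
    key = claims["email"]
    return accounts.setdefault(key, f"account for {key}")

def login_by_subject(claims: Dict[str, str]) -> str:
    # Robust: `sub` is stable per provider (`iss`) across email changes.
    key = f"{claims['iss']}:{claims['sub']}"
    return accounts.setdefault(key, f"account for {claims['email']}")

before = {"iss": "https://accounts.example", "sub": "1234", "email": "me@old.eu"}
after = {"iss": "https://accounts.example", "sub": "1234", "email": "me@new.com"}

# Email matching: the address change silently creates a second account.
a1 = login_by_email(before)
a2 = login_by_email(after)
assert a1 != a2

# Subject matching: same person, same account, regardless of email.
accounts.clear()
b1 = login_by_subject(before)
b2 = login_by_subject(after)
assert b1 == b2
```

Services that match on `sub` survive an email change transparently; services that match on `email` do exactly what I describe below.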

In my case, I changed the domain associated with my Google account, since it’s a GSuite (business) account. That made things even more fun, because even if services remember that Facebook allows you to change your email address, many might have forgotten that, technically, Google allows you to do that too. While Android and ChromeOS appear to work fine (which honestly surprised me, sorry colleagues!), Pokémon Go got significantly messed up when I did that — luckily I had Facebook connected to it as well, so a login, followed by a disconnect/reconnect of the Google account, was enough for it to work again.

Some things work slightly better than others. Pocket, which allows you to sign in with either a Firefox account, a Google account, or an email/password pair, appears to only care about the email address of the Google account. So when I logged in, I ended up with a new account and no access to my old content. The part that works well is that you can delete the new account, log in to the old one immediately afterwards, and replace its primary email address.

End result? I’m going through nearly every one of my nearly 600 accounts, a few at a time, trying to change my email address, and tagging those where I can’t. I’m considering writing a standard template email to send to any support address for those that do not support changing email address. But I doubt they would be fixed in time before Brexit. Just one more absolute mess caused by Cameron, May, and their friends.

Dexcom G6: week 1 review

Content warning, of sorts: I’m going to talk about my experience with the continuous glucose monitor I’m trying out. This will include some PG-rated body part descriptions, so if that makes you uncomfortable, consider skipping this post.

It has now been a week since I started testing out the Dexcom G6 CGM. And I have a number of opinions, some of which echo what I heard from another friend using the Dexcom before, and some that confirmed the suggestion of another friend a few years back. So let me share some of it.

The first thing we should talk about is the sensor, positioning and stickiness. As I said in the previous post, their provided options for the sensor positioning are not particularly friendly. I ended up inserting it on my left side, just below the belly button, away from where I usually would inject insulin. It did not hurt at all, and it’s not particularly in the way.

Unfortunately, I’m fairly hairy, and that means the sensor has trouble sticking by itself. Because of that, it becomes a problem when taking showers, as the top side of the adhesive strip tends to detach, and I have had to stick it down with bandage tape. This is not a particular problem with the Libre, because the back of my upper arm is much less hairy, and even though it can hurt a bit to take it off, it does not hurt that much.

As of today, the sensor is still in, on its seventh day out of ten, although it feels very precarious right now. In one of the many videos provided during the original setup, they suggest that, to make it stick more stably, I should be using skin adhesive. I had no idea what that was, and it was only illustrated as a drawing of a bottle. I asked my local pharmacy, and they were just as confused. Looking through their supplier’s catalogue, they found something they could special-order, which I picked up today. It turns out to be a German skin adhesive for £15, designed for urinary sheaths. Be careful if you want to open the page: it has some very graphic imagery. As far as I can tell, it should be safe for this use case, but you would expect Dexcom to at least provide some better adhesive themselves, or at least a sample in their introductory kit.

I will also point out that the bulge caused by the sensor is significantly more noticeable than the Libre’s, particularly if you wear tight-fitting shirts, as I often do in the summer. I’m glad I listened to the colleague who, a few years back, thought it would look strange on me.

Let’s now talk about the app, which, as I already said before, was a mess to find on the store. The app itself looks bare bones, not just for the choice of a few light colours (compare LibreLink’s vivid colours), but also for the lack of content altogether: you get a dial that shows the current reading, plus the direction of the trend between “up fast” and “down fast”, then a yellow-grey-red graph of the last three hours. You can rotate the phone (or expect the app to read it as a rotation despite you keeping your phone upright) to see the last 24 hours. I have not found any way to display anything but that.

The app does have support for “sharing/following”, and it does ask whether you consent to data sharing. Supposedly there’s an online diabetes management site, but I have not found any link to it from the app. I’ll probably look that up for another post.

You’ll probably be wondering why I’m not including screenshots like I did when I reviewed the Contour Next One. The answer is that the app prevents screenshots, which means you either share your data via their own apps, or not at all. Or you end up taking a picture of one phone with another, which I could have done, but I seriously couldn’t be bothered.

The Settings menu is the only part of the app you can actually spend time in. It’s an extremely rudimentary page, effectively a list of name-value pairs. Nothing tells you which rows are clickable and which aren’t. There’s a second page for Alerts, and then a few of the alerts have their own settings pages.

Before I move on to talking (ranting?) about alerts, let me take a moment to talk about the sensor’s lifetime display. The LibreLink app has one of the easiest-on-the-eyes implementations of a lifetime countdown. It shows a progress bar of days once you start the sensor, and once you reach the last day, it switches to a progress bar of hours. This is very well implemented and deals well with both timezone changes (I still travel quite a bit) and daylight saving time. The Dexcom G6 app just shows you the time the sensor will end, with no indication of which timezone that refers to.
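For what it’s worth, a countdown that survives timezone changes is not hard to get right if the arithmetic is done on timezone-aware instants. A minimal sketch of the Libre-style days-then-hours display (my own illustration, not either vendor’s code):

```python
from datetime import datetime, timedelta, timezone

SENSOR_LIFETIME = timedelta(days=10)  # the G6 sensor session length

def lifetime_display(started_at, now):
    """Render a Libre-style countdown from timezone-aware datetimes.

    Doing the arithmetic on UTC instants means travel and DST changes
    cannot skew the countdown; only the remaining duration matters.
    """
    remaining = (started_at + SENSOR_LIFETIME) - now
    if remaining <= timedelta(0):
        return "sensor ended"
    if remaining >= timedelta(days=1):
        return f"{remaining.days} days left"
    return f"{remaining.seconds // 3600} hours left"

start = datetime(2019, 1, 20, 9, 0, tzinfo=timezone.utc)
print(lifetime_display(start, start + timedelta(days=7)))            # → 3 days left
print(lifetime_display(start, start + timedelta(days=9, hours=20)))  # → 4 hours left
```

The key design choice is that no local timezone ever enters the subtraction, which is exactly what the “time the sensor will end” display gets wrong.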

The main feature of a CGM like this, which pushes data rather than being polled like the Libre, is the ability to warn you of dangerous conditions, like highs and lows. This is particularly useful if you have a history of lows and have become desensitised to them. That’s not usually my problem, but I have been surprised by a low a few times because I was too focused on a task, so I was actually hoping it would help me there. But it might not quite be there yet.

First of all, you only get three thresholds: Urgent Low, Low and High. The first one cannot be changed at all:

The Urgent Low Alarm notification level and repeat setting cannot be changed or turned off. Only the sound setting can be changed.

The settings are locked at 3.1 mmol/L and a 30-minute repeat, which would be fairly acceptable. Except it’s more like 10 minutes instead of 30, which is extremely annoying when you actually do get an urgent low and you’re trying to deal with it. Particularly in the middle of the night. My best guess for why the repeat is not working is that any reading that goes up or stays stable resets the repeat timer, so a (3.1, 3.2, 3.1) timeseries would cause two alerts 10 minutes apart.
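That hypothesis can be sketched in a few lines of Python. To be clear, Dexcom’s actual implementation is not public, so this is purely a toy reconstruction of the behaviour I’m observing:

```python
URGENT_LOW = 3.1    # mmol/L, the fixed Urgent Low threshold
REPEAT = 30 * 60    # seconds, the documented repeat interval

def urgent_low_alerts(readings, interval=5 * 60):
    """Return the times (in seconds) at which an alert would fire.

    `readings` is one mmol/L value per five-minute sensor period.
    """
    alerts = []
    last_alert = None
    previous = None
    for index, value in enumerate(readings):
        timestamp = index * interval
        if previous is not None and value >= previous:
            # My guess: a stable or rising reading clears the alert
            # state, so the next low reading fires immediately again.
            last_alert = None
        if value <= URGENT_LOW:
            if last_alert is None or timestamp - last_alert >= REPEAT:
                alerts.append(timestamp)
                last_alert = timestamp
        previous = value
    return alerts

# The (3.1, 3.2, 3.1) timeseries: two alerts, only ten minutes apart.
print(urgent_low_alerts([3.1, 3.2, 3.1]))  # → [0, 600]
```

With a strictly falling series the 30-minute repeat would be honoured; it’s the bounce-around-the-threshold case that defeats it.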

The Low/High thresholds are used both for the graph and for the alerts. If you can’t see anything wrong with this, you have never had a doctor tell you to stay a little higher rather than a little lower on your blood glucose. I know I’m not alone in this, though. In my “usual” configuration, I would consider anything below 5 as “out of range”, because I shouldn’t linger at that value for too long. But I don’t want a “low” alert at that value; I would rather have an alert if I stayed there for over 20 minutes.

I ended up disabling the High alert, because it was too noisy even with my usual value of 12, particularly for the same reason noted above about the timeseries problem: even when I take some fast insulin to bring the value down, there will be another alert in ten minutes because the value is volatile enough. This might sound perfectly reasonable to anyone who has not worked with monitoring and alerting for years, but to me, it sounds like a pretty bad monitoring system.

You can tweak the alerts a little for overnight, but you can’t turn them off entirely. Urgent Low stays on, and it has woken me up a few nights already. It turns out I have had multiple cases of overnight mild lows (around 3.2 mmol/L) that recover by themselves without me waking up. Is this good? Bad? I’m not entirely sure. I remember they used to be more pronounced years ago, and that’s why my doctor suggested I run a little higher. The problem with those lows is that if you try too hard to recover from them quickly, you end up with scary highs (20 mmol/L and more!) in the morning. And since there’s no “I know, I just had food” or “I know, I just took insulin” option to shut the alerts up for half an hour or an hour, you end up very frustrated by the end of the day.

There is a setting that turns on a feature called “Quick Glance”: a persistent notification showing the current glucose level and one (or two) arrows for the trend. It also comes with a Dexcom icon, maybe out of necessity (Android apps are not my speciality), which is fairly confusing because the Dexcom logo is the same as the dial that shows the trend in the app, even though in the notification it does not move. And, most importantly, the icon stays green even when the reading is out of range. This is extremely annoying, as a quick glance at the colour, while you’re half asleep, gives you entirely the wrong impression. On the bright side, the notification also has an expanded view showing the same three-hour graph as the app itself, so you rarely, if ever, need to open the app.

Finally, speaking of the app, let me bring up the fact that it appears to use an outrageous amount of memory. Since I started using the Dexcom, I end up restarting Pokémon Go every time I switch between it, WhatsApp and Viber, on a Samsung S8 that should have enough RAM to keep all of this in the background. This is fairly annoying, although not a deal breaker for me. But I wouldn’t be surprised if someone on a lower-end phone had trouble using this, and had to pay the extra £290 (excluding VAT) for the receiver (by comparison, the Libre reader, which doubles as a standard glucometer, including support for β-ketone sticks, costs £58 including VAT).

Since I just had to look up the price of the reader, I also paid a little more attention to the brochure they sent me when I signed up to be contacted. One of the things it says is:

Customize alerts to the way you live your life (day vs night, week vs weekend).

The “customization” is a single schedule option, which I set up for night, as otherwise I would rarely be able to sleep without it waking me up every other night. That means you definitely cannot customize the alerts to the way you live your life. For instance, there’s nothing to help you use this meter while going to the movies: there’s no way to silence the alerts for any amount of time (some alerts are explicitly written so that Android’s Do Not Disturb does not block them!), and there’s no silent-warning option, which would have been awesome together with the watch support (feel the buzz, check the watch; on a low, drink the soda; on a high, get the insulin or tablet).

A final word I will spend on calibration. I was aware that the previous generation of the Dexcom (G5) required calibration during setup. As noted last week, this version (G6) does not. On the other hand, you can type in a calibration value, which I ended up doing for this particular sensor, as I was worried about the >20 mmol/L readings it was showing me. It turns out they were not completely outlandish, but they were over 20% off. A fingerstick later, and a bit of calibration, seems to have been enough for it to report more in-line values.

Will I stick with the Dexcom G6 over the Libre? By now, I seriously doubt it. It does not appear to match my usage patterns, it seems to be built for a different target audience, and it lacks the useful information and graphs that the LibreLink app provides. It is also more expensive and less comfortable to wear. Expect at least one more rant if I can figure out how to access my own readings on their webapp.

Working with usbmon captures

Two years ago I posted some notes on how I do USB sniffing. I have not really changed much since then, although admittedly I have not spent much time reversing glucometers in that time. But I’m finally biting the bullet and building myself a better setup.

The reasons I’m looking for a new setup are multiple: first of all, I now have a laptop that is fast enough to run a Windows 10 VM (with Microsoft’s 90-day evaluation version). Second, the proprietary software I used for USB sniffing has not been updated since 2016, and they still have not published any information about their CBCF format, their stated reason being:

Unfortunately, there is no such documentation and I’m almost sure will
never be. The reason is straightforward – every documented thing
should stay the same indefinitely. That is very restrictive.

At this point, keeping my old Dell Vostro 3750 around as a sacrificial machine just for reverse engineering is not worth it anymore. Particularly when you consider that it has started to become obsolete in both software (Windows 10 appears to have lost the ability to easily map network shares, and thus provide local-network backups) and hardware (the Western Digital SSD I installed in it can’t be updated: their update package only works on UEFI boot systems, and while that machine is technically UEFI, it only supports CSM boot).

When looking at a new setup, I also want to be able to publish more of my scripts and tooling, if nothing else because I would feel more accomplished knowing that even the side effects of working on these projects can be reused. So this time around I want to focus on all open source tooling, and build as much of it as possible to be suitable for release as part of my employer’s open source program, which basically means not including any device-specific information in the tooling.

I started looking at Wireshark and its support for protocol dissectors. Unfortunately, USB payloads are a bit more complicated, and dissector support for them is not great. So once again I’ll be writing a bunch of Python scripts to convert the captured data into “chatter” files that are at least suitable for human consumption. I also started taking a closer look at the usbmon documentation (the last time I looked at it was over ten years ago), to see if I can process that data directly.

To be fair, Wireshark does make it much nicer to get the captures out, since usbmon’s text format is not particularly easy to parse back into something you can code against, and it is “lossy” compared with the binary structures. With that in mind, the first thing to focus on is supporting the capture format Wireshark generates, which is pcapng with one particular (out of many) USB capture packet structure. I decided to start my work from that.
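The pcapng container itself is pleasantly regular: every block starts with a type and a total length, and repeats the length at the end so the file can also be walked backwards. A minimal walker, assuming a single little-endian section (which is what Wireshark writes on x86), with constants taken from the pcapng specification:

```python
import struct

SHB_TYPE = 0x0A0D0D0A        # Section Header Block
BYTE_ORDER_MAGIC = 0x1A2B3C4D

def iter_blocks(data):
    """Yield (block_type, body) tuples from an in-memory pcapng capture.

    Minimal sketch: assumes a single little-endian section and no
    truncated blocks.
    """
    offset = 0
    while offset < len(data):
        block_type, total_length = struct.unpack_from("<II", data, offset)
        body = data[offset + 8:offset + total_length - 4]
        # The total length is repeated after the body; check that the
        # two copies agree before trusting the framing.
        (trailer,) = struct.unpack_from("<I", data, offset + total_length - 4)
        assert trailer == total_length, "corrupt block framing"
        yield block_type, body
        offset += total_length

# A minimal Section Header Block: byte-order magic, version 1.0, and
# an "unknown" (-1) section length.
shb_body = struct.pack("<IHHq", BYTE_ORDER_MAGIC, 1, 0, -1)
shb = (struct.pack("<II", SHB_TYPE, 12 + len(shb_body))
       + shb_body
       + struct.pack("<I", 12 + len(shb_body)))
for block_type, body in iter_blocks(shb):
    print(hex(block_type), len(body))  # → 0xa0d0d0a 16
```

The interesting (and USB-specific) part lives inside the Enhanced Packet Block bodies, which is where the usbmon structures below come in.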

What I have right now is an (incomplete) library that can parse a pcapng capture into objects that are easier to play with in Python. Right now it loads the whole content into memory, which may or may not turn out to be a bad limitation, but for now it will do. I guess it would also be nice to integrate this with Colaboratory, a tool I only have a vague acquaintance with, but which would probably be great for this kind of reverse engineering, as it looks a lot like the kind of work I’ve been doing by hand. That will probably be left for the future.

The primary target right now is to be able to reconstruct the text format of usbmon from the pcapng capture. This would at least tell me that my objects are not losing details in construction. Unfortunately this is proving harder than expected, because the documentation of usbmon is not particularly clear, starting from the definition of the structure, which mixes sized (u32) and unsized (unsigned int) types. I hope I’ll be able to figure this out, and hopefully even send changes to improve the documentation.
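For reference, this is my reading of the 64-byte usbmon_packet header from the kernel documentation, expressed as a struct format string. Treat the exact layout as tentative until verified against a real capture; the field names follow the docs, but the padding behaviour is exactly the kind of thing the mixed sized/unsized types leave ambiguous:

```python
import struct
from collections import namedtuple

# With an explicit "<" byte order, struct inserts no padding; the
# fields happen to be naturally aligned anyway, so on little-endian
# machines this should match the kernel's native layout (64 bytes).
USBMON_FORMAT = "<QBBBBHccqiiII8siiII"
UsbmonPacket = namedtuple(
    "UsbmonPacket",
    "id type xfer_type epnum devnum busnum flag_setup flag_data "
    "ts_sec ts_usec status length len_cap setup "
    "interval start_frame xfer_flags ndesc",
)

def parse_usbmon_packet(raw):
    """Parse the 64-byte usbmon binary header into named fields."""
    return UsbmonPacket(*struct.unpack(USBMON_FORMAT, raw[:64]))

# A synthetic 'S'ubmission event: control transfer, EP0 IN, device 4.
raw = struct.pack(
    USBMON_FORMAT,
    0x1F00000001, ord("S"), 2, 0x80, 4, 1,   # id, type, xfer_type, epnum, devnum, busnum
    b"\x00", b"<",                           # flag_setup, flag_data
    1548979200, 123456, 0, 8, 8,             # timestamp, status, length, len_cap
    bytes(8), 0, 0, 0, 0,                    # setup packet, rest zeroed
)
packet = parse_usbmon_packet(raw)
print(chr(packet.type), packet.devnum, packet.len_cap)  # → S 4 8
```

Round-tripping synthetic records like this is also a cheap way to check that nothing is lost before trying it on real captures.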

As you might have noticed from my Twitter rants, I maintain that the documentation needs an overhaul. From calling things “easy”, to the fact that the currently suggested format (the binary structures) is defined in terms of the text format’s fields, even though the text format is deprecated and the kernel actually appears to produce the text format based on the binary structures. There are also quite a few things that are not obviously documented in the kernel docs, so you need to read the source code to figure out what they mean. I’ll try rewriting sections of the documentation.

Keep reading the blog for updates if you’re interested in this.

Testing the Dexcom G6 CGM: Setup

I have written many times before about how I have been using the FreeStyle Libre “flash” glucose monitor, and how happy I have been with it. Unfortunately, in the last year or so, Abbott has had trouble with manufacturing capacity for the sensors, and it has become annoying to procure them. Once already they delayed my order to the point that I spent a week back on finger-pricking meters, and it looked like I might have to repeat that when, earlier in January, they notified me that my order would be delayed.

This time, I decided to at least look into the alternatives, and as you can guess from the title, I have ordered a Dexcom G6 system, which is an actual continuous monitor, rather than a flash system like the Libre. For those who have not looked into this before (or who, lucky them, don’t suffer from diabetes and thus don’t spend time looking into this), the main difference between the two is that the Libre needs to be scanned manually, while the G6 continuously sends data from the transmitter to a receiver of some kind.

I say “of some kind” because, like the Libre, and unlike the generation I looked at before, the G6 can be connected to a compatible smartphone instead of a dedicated receiver. Indeed, the receiver is a costly optional extra here, considering that the starter kit alone is £159 (plus VAT, which I’m exempt from because I’m diabetic).

Speaking of costs, Dexcom takes a different approach to ordering than Abbott: it’s overly expensive if you “pay as you go” the way Abbott does it. If you don’t want to be charged through the nose, you instead need to accept a one-year contract at £159/month. It’s an okay price, barely more expensive than the equivalent Abbott sensors, but it’s definitely a more “scary” option, particularly if you’re not yet sure about the comfort of the sensor, for instance.

I’m typing this post as I open the boxes that arrived with the sensor, transmitter and instructions. And the first thing I will complain about is that the instructions tell me to “Set Up App”, and give me the name of the app and its icon, but provide no QR code or short link to it. I looked at their own FAQ, and it too only provides the name of the app:

The Dexcom G6 app has to be downloaded and is different from the Dexcom G5 Mobile app. (Please note: The G6 system will not work with the G5 Mobile app.) It is available for free from the Apple App or Google Play stores. The app is named “Dexcom G6”

Once I actually find the app, which is reported as being developed by Dexcom, it turns out to be called Dexcom G6 mmol/L DXCM1. What on Earth, folks? Yes, of course, the mmol/L is there because it’s the UK edition (the Italian edition would be mg/dL), and DXCM1 is probably… something. But this is one of the worst ways of dealing with region-restricted apps.
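As an aside for anyone confused by the two units: they measure the same thing and differ only by glucose’s molar mass, with the conventional factor being 18.02 mg/dL per mmol/L (often rounded to 18). A trivial conversion helper:

```python
MGDL_PER_MMOLL = 18.02  # conventional conversion factor for glucose

def mmoll_to_mgdl(value):
    """Convert a blood glucose reading from mmol/L (UK) to mg/dL (Italy)."""
    return round(value * MGDL_PER_MMOLL)

def mgdl_to_mmoll(value):
    """Convert a blood glucose reading from mg/dL to mmol/L."""
    return round(value / MGDL_PER_MMOLL, 1)

print(mmoll_to_mgdl(5.5))  # → 99
print(mgdl_to_mmoll(99))   # → 5.5
```

Which makes it all the stranger that the unit has to be baked into the app’s name rather than being a setting.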

Second problem: the login flow uses an in-app browser, as is clear from the cookie popup (which is annoying on their normal website too). Worse, it does not work with 1Password auto-fill! Luckily, at least they don’t disable paste.

After logging in, the app forces you to watch a series of introductory videos, otherwise you don’t get to continue the setup at all. I would hope this is only a requirement the first time you use the app, but I somewhat doubt it. The videos are a bit repetitive, but I suppose they are designed to help people who are not used to this type of technology. It’s worth noting that some of the videos are vertical, while others are horizontal, forcing you to rotate your phone quite a few times.

I find it ironic that the videos suggest you keep using a fingerstick meter to make treatment decisions. The Libre reader device doubles as a fingerstick meter, while Dexcom does not appear to even market one to begin with.

I have to say I’m not particularly impressed by the process, let alone the placement options. The videos effectively tell you that you shouldn’t be doing anything at all with your body: the sensor needs to go on your belly, but away from injection sites, from where a seatbelt might sit, or from where you may roll over while asleep. But I’ll go with it for now. Also, unlike the Libre, the sensors don’t come with the usual alcohol wipes, despite the instructions suggesting you use one and have it ready.

As I type this, I just finished the (mostly painless, in the sense of physical pain) process to install the sensor and transmitter. The app is now supposedly connecting with the (BLE) transmitter. The screen tells me:

Keep smart device within 6 meters of transmitter. Pairing may take up to 30 minutes.

It took a good five minutes to pair. And only after pairing can the sensor be started, which takes two hours (compared to the Libre’s one hour). Funnily enough, Android Smart Lock asked if I wanted to use it to keep my phone unlocked, too.

Before I end this first post, I should mention that there is also a Wear OS companion app, which my smartwatch offered to install after I installed the phone app. I would love to say this is great, but it’s implemented as a watch face! That makes it very annoying if you actually like your watch face and would rather just have an app that lets you check your blood sugar without taking out your phone during a meeting, or a date.

Anyhoo, I’ll post more about my experience as I get further into using this. The starter kit is a 30-day kit, so I’ll probably be blogging more during February while this is in, and then finally decide what to do later in the year. I now have over three months of Libre supplies, so if I do switch, that’ll probably happen some time in June.

CP2110 Update for 2019

The last time I wrote about the CP2110 adapter was nearly a year ago, and because I have had a lot to keep me busy since, I have not made much progress. But today I had some spare cycles and decided to take a deeper look, starting from scratch again.

What I should have done since then is procure myself a new serial dongle, as I was not (and still am not) entirely convinced about the quality of the CH341 adapter I’m using. I think I have used that serial adapter successfully before, but maybe I didn’t, and I’ve been fighting with ghosts ever since. This counts double because, silly me, I didn’t re-read my own post when I resumed working on this, and I’ve been scratching my head at nearly exactly the same problems as last time.

I have some updates, though. The first is that I have some rough-edged code out on this GitHub branch. It does not have all the features it should, but it at least let me test the basic implementation. It also does not let you select which device to open: it looks for the device with the same USB IDs as mine, which might not work at all for you. I’ll be happy to accept pull requests to fix more of the details, if anyone happens to need something like this too; once it’s in a state where it can be merged, I’ll squash the commits and send a pull request upstream with the final working code.
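For the device-selection gap, my plan is to filter the enumeration listing rather than hardcode the first match. A sketch of the pure filtering half, assuming the dictionary keys returned by the hidapi Python binding’s hid.enumerate() and the Silicon Labs default IDs for the CP2110 (10C4:EA80); your adapter may well be re-flashed with different ones:

```python
CP2110_VID = 0x10C4  # Silicon Labs default vendor ID (assumption: unchanged)
CP2110_PID = 0xEA80  # CP2110 default product ID

def select_device(devices, vid=CP2110_VID, pid=CP2110_PID, serial=None):
    """Pick one HID device out of an enumeration listing.

    `devices` is a list of dicts shaped like hid.enumerate() output.
    Filtering on the serial number is what would let you open a
    specific adapter instead of "the first one with my USB IDs".
    """
    matches = [
        d for d in devices
        if d["vendor_id"] == vid and d["product_id"] == pid
        and (serial is None or d["serial_number"] == serial)
    ]
    if not matches:
        raise LookupError("no matching CP2110 adapter found")
    if len(matches) > 1 and serial is None:
        raise LookupError("multiple adapters found; pass a serial number")
    return matches[0]

# A fake listing: one CP2110 and one CH341-style serial adapter.
devices = [
    {"vendor_id": 0x10C4, "product_id": 0xEA80, "serial_number": "0001"},
    {"vendor_id": 0x1A86, "product_id": 0x7523, "serial_number": "none"},
]
print(select_device(devices)["serial_number"])  # → 0001
```

Keeping this half pure also makes it trivially testable without any hardware plugged in.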

The second is that while fighting with this, and venting on Twitter, Saleae themselves put me on the right path: when I said that Logic failed to decode the CP2110→CH341 conversation at 5V but worked when they were set at 3.3V, they pointed me at the documentation of threshold voltage, which turned out to be a very good lead.

Indeed, when connecting the CP2110 at 5V alone, Logic reports a high of 5.121V and a low of ~-0.12V. When I connected it to the CH341 through the breadboard full of connections, Logic reported a low of nearly 3V! And as far as I can tell, the grounds are correctly wired together between the two serial adapters; they are even connected to the same USB hub. I also don’t think the problem is with the wiring of the breadboard, because the behaviour is identical when wiring the two adapters directly together.

So my next step was setting up the BeagleBone Black I bought a couple of years ago and shelved in a box. I should have done that last year; I would probably have been very close to having this working in the first place. After setting it up (which is much easier than it sounds), and figuring out the pinout of its debug serial port from the BeagleBoard wiki (plus a bit of guesswork on the voltage), I could confirm the data was being sent to the CP2110 correctly, but it got all mangled on print.

The answer is that HID buffered reads are… complicated. So instead of deriving most of the structure from the POSIX serial implementation, I lifted it from the RFC2217 driver, which uses a background thread to loop the reads. This finally allowed me to use pySerial’s miniterm tool to log into the BBB over the CP2110 adapter, and even run dmesg(!), which I consider a win.
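The shape of that background-thread read loop can be sketched like this, with a stand-in for the actual HID read call (the real CP2110 code reads one interrupt report at a time, with up to 63 payload bytes). This is only a sketch of the structure, not the code I’m sending upstream:

```python
import queue
import threading

class BufferedReader:
    """RFC2217-driver-style read loop, adapted for report-based HID reads.

    A background thread keeps draining report-sized reads into a byte
    queue, and read() hands out exactly the bytes asked for, which is
    what serial consumers like miniterm expect.  `hid_read` stands in
    for the real HID call: it returns one report at a time, or None
    when the device goes away.
    """

    def __init__(self, hid_read):
        self._hid_read = hid_read
        self._buffer = queue.Queue()
        self._alive = True
        self._thread = threading.Thread(target=self._read_loop, daemon=True)
        self._thread.start()

    def _read_loop(self):
        while self._alive:
            report = self._hid_read()
            if report is None:
                break  # device went away
            for byte in report:
                self._buffer.put(byte)

    def read(self, size=1):
        # Blocks until `size` bytes have arrived, re-framing the
        # report-sized chunks into an arbitrary-length byte stream.
        return bytes(self._buffer.get() for _ in range(size))

    def close(self):
        self._alive = False

# Fake device feeding two reports, to show the re-framing.
reports = iter([b"dmesg", b" output", None])
reader = BufferedReader(lambda: next(reports))
print(reader.read(12))  # → b'dmesg output'
```

The point of the queue is that the consumer never needs to know where one HID report ends and the next begins, which is exactly what the POSIX-derived structure got wrong.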

Tomorrow I’ll try polishing the implementation to the point where I can send a pull request. And then I can actually get back to looking at the glucometer that uses this adapter. Because I had an actual target when I started working on this; I was not just trying to get it to work for the sake of it.