Updating email addresses, GDPR style

After scrambling to find a bandaid solution for the upcoming domainpocalypse caused by EURid, I set out to make sure that all my accounts everywhere use a more stable domain. Some of you might have noticed, because it was very visible in my submitting .mailmap files to a number of my projects, to tie old and new addresses together.

Unfortunately, as I noted in the previous post, not all the services out there allow you to change your email address from their website, and of those, very few allow you to delete the account altogether (I have decided that, in some cases, keeping an account open for a service I stopped using is significantly more annoying than just removing it). But as Daniel reminded me in the comments, the right to rectification (or right to correction) allows me to leverage the GDPR for this process.

I have thus started sending email to the provided Data Protection contact for various sites lacking an email editing feature:

Hello,

I’m writing to request that my personal data be amended, under my right to correction (Regulation (EU) 2016/679 (General Data Protection Regulation), Article 16), by updating my email address on file as [omissis — new email] (replacing the previous [omissis — old email] — which this email is being sent from, and to which you can send a request to confirm identity).

I take the occasion to remind you that you have one month to respond to this request free of charge per Art. 12(3), that according to the UK Information Commissioner’s Office interpretation (https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-of-access/) you must comply with this request however you receive it, and that it applies to the data as it exists at the time you receive this.

The responses to this have been of all sorts: humans amused at the formality of the requests, execution of the change as requested, and a couple of pushbacks, which appear to stem from services that not only lack a self-service way to change the email address, but also seem to lack the technical means to change it at all.

The first case of this is myGatwick — the Gatwick airport flyer portal. When I contacted the Data Protection Officer to change my email address, the first answer was that, at best, they could close the account for the old email address and open a new one. I pointed out that’s not what I asked for, nor what the GDPR requires them to do, and they tried to argue that email addresses are not personal data.

The other interesting case is Tile, the beacon startup, which will probably be the topic of a separate blog post, because their response to my GDPR request is a long list of problems.

What this suggests to me is that my first guess (someone used email addresses as primary keys) is not as common as I feared — although that appears to be the problem for myGatwick, given their lack of technical means. Instead, the databases appear to be designed correctly, but the self-service feature to change the email address is simply not implemented.

While I’m not privy to product decisions for the involved services, I can imagine that one of the reasons it was done this way is that implementing proper access controls, to avoid users locking themselves out or to limit the risk of account takeover, is too expensive in terms of engineering.

But as my ex-colleague Lea Kissner points out on Twitter, computers would be better at not introducing human errors in the process to begin with.

Of all the requests I sent that were actioned, there were only two cases in which I was asked to verify anything about either the account or the email address. In both cases my resorting to GDPR requests was not because the website didn’t have the feature, but rather because it failed: British Airways and Nectar (UK). Both actioned the request straight from Twitter, and asked security questions (not particularly secure, but still good enough compared to the rest).

Everyone else has, at best, sent an email to the old address to inform me of the change, in reply to my request. This is the extent of the verification most of the DPOs appear to apply to GDPR requests. None of the services were particularly critical: takeaway food, table bookings, good tea. But had it not been me sending these requests, I would probably have had a bad half hour the next time I tried using them.

Among the requests I sent yesterday there was one to delete my account with Detectify — I used it when it had a free trial, found it not particularly interesting to me, and moved on. While I had expressed my intention to disable my account on Twitter, the email I sent was actioned, deleting my account (or at least, it is expected to be deleted by now), without a confirmation request of any kind, or any verification that I did indeed have access to the account.

Maybe they checked the email headers to figure out that I was really sending from the right email address, instead of just assuming so because it looked that way. I can only imagine they would have applied more due diligence if I were a paying customer, if nothing else to keep getting money. I just find it interesting that a security-oriented company didn’t realise that it’s much more secure to provide self-service interfaces than to let a human decide.
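For what it’s worth, checking that a message really originated from the address it claims is not rocket science. A minimal sketch of what a DPO’s tooling could do, assuming the dkimpy package and a hypothetical request.eml file containing the raw message:

    # Sketch: verify the DKIM signature of a GDPR request before actioning it.
    # Assumes the dkimpy package; request.eml is a hypothetical raw message file.
    import dkim

    with open("request.eml", "rb") as f:
        message = f.read()

    # dkim.verify() checks the DKIM-Signature header against the public key
    # published in the sending domain's DNS; it returns True when valid.
    if dkim.verify(message):
        print("The sending domain vouches for this message.")
    else:
        print("No valid signature: confirm via the address on file instead.")

It wouldn’t prove account ownership on its own, but it would be strictly better than taking the From header at face value.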

Dexcom G6: new phone and new sensor

In the previous posts on the Dexcom G6, I talked about the setup flow and the review after a week, before the first sensor expired. This was intentional, because I wanted to talk about the sensor replacement flow separately. Turns out this post will also have a second topic, which came about by chance: how do you reconfigure the app when you change phone or, as happened to me this time, when you are forced to do a factory reset.

I won’t go into the details of why I had to do a factory reset. I’ll just say that the previous point about email and identities was involved.

So what happens when the Dexcom app is installed on a new phone, or when you have to reinstall it? The first thing it will ask you is to log in again, which is the easy part. After that, though, it will ask you to scan the sensor code. Which made no sense to me! I said “No Code”, and then it asked me to scan the transmitter code. At that point it managed to pair with the transmitter, and it showed me the blood sugar readings for the past three hours. I can assume this is the amount of caching the transmitter can do. If the data is uploaded to Dexcom’s systems at all, it is not shown back to the user beyond those three hours.

It’s important to note here that unless you are at home (and you kept the box the transmitter came with), or you have written down the transmitter serial number somewhere, you won’t be able to reconnect: you need the transmitter serial number for the phone and transmitter to pair. To compare this again to the LibreLink app: that one only requires you to log in with your account, and the current sensor can just be scanned normally. Calibration info is kept online and transmitted back as needed.

A few hours later, the first sensor (not the transmitter) finally expired and I prepared myself to set the new one up. The first thing you see when you open the app after the sensor expired is a “Start new Sensor” button. If you click that, you are asked for the code of the sensor, with a drawing of the applicator that has the code printed on the cover of the glue pad. If you type in the code, the app will think that you have already set up the whole sensor and it’s ready to start, and will initiate the warm-up countdown. At no point does the app direct you to apply the new sensor. It gives you the impression that you need to first scan the code and then apply the sensor, which is wrong!

Luckily, despite this mistake, I was able to tell the app to stop the sensor by telling it I’d be replacing the transmitter, and then re-enrolling the transmitter already present. This is all completely messed up in the flow, particularly because when you do the transmitter re-enrolment, the steps are in the correct order: scan the sensor code, get told to put the transmitter in, and then scan the transmitter serial number (again, remember to keep the box). It even optionally shows you the explanation video again — once again, totally unlike just starting a new sensor.

To me, saying that this is badly thought out is an understatement. I’ll compare this again with the LibreLink app, which, once the sensor terminates, actually shows you the steps to put on a new sensor (you can ignore them and go straight to scanning the sensor if you know what you’re doing).

On the more practical side, the skin adhesive that I talked about last week actually seems to work, keeping the sensor in place better, and it makes dealing with my hairy belly simpler by bunching up the hair and keeping it attached to the skin, rather than having it act as fur against the sensor’s glue. It would probably be quite a bit simpler to put on if they provided a proper guide to the size of the sensor, though: showing it in the video is not particularly helpful.

The sensor still needed calibration: the readings were off by more than 20% at first, although they are now back on track. This either means the calibration is off in general, or somehow there’s a significant variation between the value read by the Dexcom sensor and the actual blood sugar. I don’t have enough of a medical background to tell which, so I leave that to the professionals.

At this point, my impression of the Dexcom G6 system is that it’s a fairly decent technical implementation of the hardware, but a complete mess on the software and human side. The former, I’m told, can be worked around by using a third-party app (by the folks who are not waiting), which I will eventually try, for the sake of reviewing it. The latter would probably require them to pay more attention to their competitors.

Abbott seems to have the upper hand with its user-friendly apps and reports, even though there are bugs and their updates are few and far between. They also don’t do alerts, and despite a few third-party “adapters” to transform the Libre “flash” system into a proper CGM, I don’t think there will be much in the way of reliable alerts until Abbott changes direction.

dot-EU Kerfuffle: what’s in an email anyway?

You may remember that last year I complained about what I called the dot-EU kerfuffle, related to the news that EURid had been instructed to cancel the domain registrations of UK entities after Brexit. I thought the problem was past when they agreed to consider European citizens as eligible holders of dot-EU domains, with an agreement reached last December, and due to enter into effect in… 2022.

You would think that, knowing a new regulation is due to enter into effect, EURid would put their plan of removing UK residents’ access to those domains on hold for the time being, but that is not so. Instead, they sent a notice effectively stating that old and new domains alike will be taken off the zone, marked as WITHDRAWN first, and REVOKED second.

This means that on 2020-03-30, a lot of previously-assigned domains will be available for scammers, phishers, and identity thieves, unless they are transferred before this coming May!

You can get a more user-focused read of this in this article by The Register, which does the situation good justice, despite the author seemingly being a leaver, judging from the ending of a previous article linked there. One of the useful parts of that article is learning that there are over 45 thousand domain names assigned to individuals residing in the UK — and probably a good chunk of those belong either to Europhile Brits, or to citizens of other EU countries residing in the UK (like me).

Why should we worry about this, given the amount of other pressing problems that Brexit is likely to cause? Well, there is a certain issue of people being identified by email addresses that contain domain names. What neither EURid nor The Register appear to have at hand (and I even less so) is a way to figure out how many of those domains are actually used as logins, or receive sensitive communications such as GP contacts from the NHS, or from financial companies.

Because if someone can take over a domain, they can take over the email address, and from there they can very quickly ruin the life of, or at least heavily bother, any person that might be using a dot-EU domain. The risks of scams, identity theft and the like are once again being ignored by EURid in trying to make a political move, at a time when nobody gives a damn about what EURid is doing.

As I said in the previous post, I have been using flameeyes[dot]eu as my primary domain for the past ten or eleven years. The blog was moved to its own domain. My primary website is still there, but will be moved shortly. My primary email address has changed. You’ll see me using a dot-com email address more often.

I’m now going through the whole set of my accounts to replace the email they have on file for me with a new one on a dot-com domain. This is significantly helped by having all of them in 1Password, but that’s not enough — it only tells you which services use the email address as the username. It says nothing about (say) the banks that use a customer number, but still have your email on file.
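To give an idea of the triage involved, here is a minimal sketch, assuming a hypothetical CSV export of the vault with title and username columns (the real export format may well differ):

    # Sketch: list accounts whose username is an email on the old domain.
    # Assumes a hypothetical export.csv with "title" and "username" columns.
    import csv

    OLD_DOMAIN = "@example.eu"  # stand-in for the actual dot-EU domain

    with open("export.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row.get("username", "").endswith(OLD_DOMAIN):
                print(row["title"])

Which, of course, has exactly the blind spot I described: accounts keyed by a customer number never show up.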

And then there are the bigger problems.

Sometimes the email address is immutable.

You’d be surprised by how many websites have no way at all to change an email address. My best guess is that whoever designed the database schema thought that just using the email address as a primary key was a good idea. It is not, and it never has been. I’d be surprised if anyone who got their first email address from an ISP would make that mistake, but in the era of GMail, it seems this is often forgotten.
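To show what I mean, here is a minimal sketch (SQLite via Python; the table and column names are made up): with a surrogate key, the address becomes just another mutable attribute.

    # Sketch: a surrogate integer key makes the email a mutable attribute
    # rather than the row's identity. Schema names are hypothetical.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE users (
            user_id INTEGER PRIMARY KEY,  -- stable identity
            email   TEXT NOT NULL UNIQUE  -- mutable contact detail
        )
    """)
    db.execute("INSERT INTO users (email) VALUES ('me@example.eu')")

    # Changing the address is a one-line update; foreign keys elsewhere
    # keep pointing at user_id, so nothing else has to change.
    db.execute("UPDATE users SET email = 'me@example.com' WHERE user_id = 1")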

I now have a tag in 1Password to show me which accounts I can’t change the email address of. Some of them are really minimal services, which you probably wouldn’t be surprised store just an email address as the identifier, such as the Fallout 4 Map website. Some appear to have bugs with changing email addresses (British Airways). Some … surprised me entirely: Tarsnap does not appear to have a way to change the email address either.

While for some of these services being unable to receive email is not a particularly bad problem, for most of them it would be. Particularly when it comes to plane tickets. Let alone the risk that any one of those services would store passwords in plain text, and send them back to you if you forgot them. Combine that with people who reuse the same password everywhere, and you can start seeing a problem again.

OAuth2 is hard, let’s identify by email.

There is another problem if you log into services with OAuth2-based authentication providers such as Facebook or (to a lesser extent) Google. Quite a few of those services create an account for you at first login, using the email address they are given by the identity provider. And then they just match the email address the next time you log in.

While changing your Google account’s email address is a bit harder (but not impossible if, like me, you’re using GSuite), changing the address you registered on Facebook with is usually easy (exceptions exist). So if you signed up for a service through Facebook, and then changed your Facebook address, you may not be able to sign in again — or you may end up signing up for the service again when you try.
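The robust approach, as far as I understand the OpenID Connect spec, is to key the account on the provider’s stable subject identifier rather than on the email address. A sketch of the lookup logic, with all names hypothetical:

    # Sketch: match a returning OAuth2/OIDC user on the stable "sub" claim,
    # not the mutable email. The db interface here is entirely hypothetical.
    def find_or_create_account(claims, db):
        provider = claims["iss"]  # e.g. https://accounts.google.com
        subject = claims["sub"]   # stable, per-provider user identifier

        account = db.lookup(provider=provider, subject=subject)
        if account is None:
            # First login: store the email only as a contact detail,
            # free to change later without breaking the login.
            account = db.create(provider=provider, subject=subject,
                                email=claims.get("email"))
        return account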

In my case, I changed the domain associated with my Google account, since it’s a GSuite (business) account. That made things even more fun, because even if services may remember that Facebook allows you to change your email address, many seem to have forgotten that, technically, Google allows you to do that too. While Android and ChromeOS appear to work fine (which honestly surprised me, sorry colleagues!), Pokémon Go got significantly messed up when I did that — luckily I had Facebook connected to it as well, so a login later, and a disconnect/reconnect of the Google account, was enough for it to work.

Some things work slightly better than others. Pocket, which allows you to sign in with either a Firefox account, a Google account, or an email/password pair, appears to only care about the email address of the Google account. So when I logged in, I ended up with a new account and no access to the old content. The part that works well is that you can delete the new account, and immediately afterwards log in to the old one and replace the primary email address.

End result? I’m going through nearly every one of my almost 600 accounts, a few at a time, trying to change my email address, and tagging those where I can’t. I’m considering writing a standard template email to send to the support address of those that do not support changing the email address. But I doubt they would be fixed in time before Brexit. Just one more absolute mess caused by Cameron, May, and their friends.

Dexcom G6: week 1 review

Content warning, of sorts. I’m going to talk about my experience with the continuous glucose monitor I’m trying out. This will include some PG-rated body part descriptions, so if that makes you uncomfortable, consider skipping this post.

It has now been a week since I started testing out the Dexcom G6 CGM, and I have a number of opinions, some of which echo what I heard from a friend who used the Dexcom before, and some that confirmed the suggestion of another friend a few years back. So let me share some of them.

The first thing we should talk about is the sensor: positioning and stickiness. As I said in the previous post, the provided options for sensor positioning are not particularly friendly. I ended up inserting it on my left side, just below the belly button, away from where I usually inject insulin. It did not hurt at all, and it’s not particularly in the way.

Unfortunately, I’m fairly hairy, and that means the sensor has trouble sticking by itself. Because of that, it becomes a problem when taking showers, as the top side of the adhesive strip tends to detach, and I have had to stick it back down with bandage tape. This is not a particular problem with the Libre, because the back of my upper arm is much less hairy, and even though it can hurt a bit to take the sensor off, it does not hurt that much.

As of today, the sensor is still in, on its seventh day out of ten, although it feels very precarious right now. In one of the many videos provided during the original setup, they suggest that, to make it stick more stably, I should use skin adhesive. I had no idea what that was, and it was only illustrated as a drawing of a bottle. I asked my local pharmacy, and they were just as confused. Looking through their supplier’s catalogue, they found something they could special-order, which I picked up today. It turns out to be a German skin adhesive for £15, designed for urinary sheaths. Be careful if you want to open the page: it has some very graphic imagery. As far as I can tell, it should be safe for this use case, but you would expect Dexcom to at least provide some better adhesive themselves, or a sample in their introductory kit.

I will also point out that the bulge caused by the sensor is significantly more noticeable than the Libre’s, particularly if you wear tight-fitting shirts, like I often do in the summer. Glad I listened to the colleague who thought it would look strange on me, back a few years ago.

Let’s now talk about the app, which I already said was a mess to find on the store. The app itself looks bare-bones — not just for the choice of a few light colours (compare to the vivid colours of LibreLink), but also due to the lack of content altogether: you get a dial that is meant to show you the current reading, as well as the direction of the reading between “up fast” and “down fast”, then a yellow-grey-red graph of the last three hours. You can rotate the phone (or expect the app to read it as a rotation despite you keeping your phone upright) to see the last 24 hours. I have not found any way to show anything but that.

The app does have support for “sharing/following”, and it does ask you whether you consent to data sharing. Supposedly there’s an online diabetes management site — but I have not found any link to it from the app. I’ll probably look that up for another post.

You’ll probably be wondering why I’m not including screenshots like I did when I reviewed the Contour Next One. The answer is that the app prevents screenshots, which means you either share your data via their own apps, or you don’t at all. Or you end up taking a picture of one phone with another, which I could have done, but I seriously couldn’t be bothered.

The Settings menu is the only part of the app you can actually spend time interacting with. It’s an extremely rudimentary page with a list of items, effectively name-value pairs. Nothing tells you which rows are clickable and which ones aren’t. There’s a second page for Alerts, and then a few of the Alerts have their own settings page.

Before I move on to talking (ranting?) about alerts, let me take a moment to talk about the sensor’s lifetime display. The LibreLink app has one of the easiest-on-the-eyes implementations of the lifetime countdown: it shows a progress bar of days once you start the sensor, and once you reach the last day, it switches to a progress bar of hours. This is very well implemented and deals well with both timezone changes (I still travel quite a bit) and daylight saving time. The Dexcom G6 app shows you the time the sensor will end, with no indication of which timezone it refers to.

The main feature of a CGM like this, which pushes data rather than being polled like the Libre, is the ability to warn you of conditions that would be dangerous, such as highs and lows. This is very useful, particularly if you have a history of lows and have become desensitised to them. That’s not usually my problem, but I have had a few times where I got surprised by a low because I was too focused on a task, so I was actually hoping it would help me. But it might not quite be there.

First of all, you only get three thresholds: Urgent Low, Low and High. The first one cannot be changed at all:

The Urgent Low Alarm notification level and repeat setting cannot be changed or turned off. Only the sound setting can be changed.

The settings are locked at 3.1mmol/L and a 30-minute repeat, which would be fairly acceptable. Except it’s more like 10 minutes instead of 30, which is extremely annoying when you actually do get an urgent low, and you’re trying to deal with it. Particularly in the middle of the night. My best guess as to why the repeat is not working is that any reading that goes up or stays stable resets the alert timer, so a (3.1, 3.2, 3.1) timeseries would cause two alerts 10 minutes apart.
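To make the guess concrete, here is a tiny simulation of the logic I suspect is in play (pure speculation on my part, obviously; readings are five minutes apart):

    # Sketch of my guess at the repeat logic: any non-falling reading
    # resets the suppression, so an oscillating low re-alerts early.
    URGENT_LOW = 3.1  # mmol/L, the locked threshold

    def urgent_low_alerts(readings, interval=5):
        fired, suppressed, previous = [], False, None
        for i, value in enumerate(readings):
            if previous is not None and value >= previous:
                suppressed = False  # the reset I suspect happens here
            if value <= URGENT_LOW and not suppressed:
                fired.append(i * interval)  # minutes since first reading
                suppressed = True
            previous = value
        return fired

    # (3.1, 3.2, 3.1) fires at minutes 0 and 10: two alerts ten minutes
    # apart, rather than a single repeat after the advertised 30 minutes.
    print(urgent_low_alerts([3.1, 3.2, 3.1]))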

The Low/High thresholds are used both for the graph and for the alerts. If you can’t see anything wrong with this, you have never had a doctor tell you to stay a little higher rather than a little lower on your blood glucose. I know, though, that I’m not alone in this. In my “usual” configuration, I would consider anything below 5 as “out of range”, because I shouldn’t linger at that value too long. But I don’t want a “low” alert at that value; I would rather have an alert if I stayed there for over 20 minutes.

I ended up disabling the High alert, because it was too noisy even with my usual value of 12 — particularly for the same reason noted above about the timeseries problem: even when I take some fast insulin to bring the value down, there will be another alert in ten minutes, because the value is volatile enough. It might sound perfectly reasonable to anyone who has not been working with monitoring and alerting for years, but to me, that sounds like a pretty bad monitoring system.

You can tweak the alerts a little for overnight, but you can’t turn them off entirely. Urgent Low will stay on, and it has woken me up a few nights already. Turns out I have had multiple cases of overnight mild lows (around 3.2 mmol/L) that recover by themselves without me waking up. Is this good? Bad? I’m not entirely sure. I remember they used to be more pronounced years ago, and that’s why my doctor suggested I run a little higher. The problem with those lows is that if you try too hard to recover from them quickly, you end up with scary highs (20mmol/L and more!) in the morning. And since there’s no “I know, I just had food” or “I know, I just took insulin” to shut up the alerts for an hour or half an hour, you end up very frustrated by the end of the day.

There is a setting that turns on a feature called “Quick Glance”: a persistent notification showing the current glucose level, and one (or two) arrows indicating the trend. It also comes with a Dexcom icon, maybe out of necessity (Android apps are not my speciality), which is fairly confusing, because the Dexcom logo is the same as the dial that shows the trend in the app, even though in this notification it does not move. And, most importantly, it stays logo-green even when the reading is out of range. This is extremely annoying, as a quick glance at the colour, while you’re half asleep, gives you totally the wrong impression. On the bright side, the notification also has an expanded view that shows the same 3-hour graph as the app itself, so you rarely if ever need the app.

Finally, speaking of the app, let me bring up the fact that it appears to use an outrageous amount of memory. Since I started using the Dexcom, I end up restarting Pokémon Go every time I switch between it and WhatsApp and Viber, on a Samsung S8 phone that should have enough RAM to run all of this in the background. This is fairly annoying, although not a deal breaker for me. But I wouldn’t be surprised if someone using a lower-end phone had trouble trying to use this, and had to pay the extra £290 (excluding VAT) for the receiver (by comparison, the Libre reader, which doubles as a standard glucometer – including support for β-ketone sticks – costs £58 including VAT).

Since I just had to look up the price of the reader, I also paid a little more attention to the brochure they sent me when I signed up to be contacted. One of the things it says is:

Customize alerts to the way you live your life (day vs night, week vs weekend).

The “customization” is a single schedule option, which I set up for night, as otherwise I would rarely be able to sleep without it waking me up every other night. That means you definitely cannot customize the alerts to the way you live your life. For instance, there’s nothing to help you use this meter while going to the movies: there’s no way to silence the alerts for any amount of time (some alerts are explicitly written so that Android’s Do Not Disturb does not block them!), and there’s no silent-warning option, which would have been awesome together with the watch support (feel the buzz, check the watch, see a low—drink the soda, see a high—get the insulin/tablet).

A final word I will spend on calibration. I was aware that the previous-generation Dexcom (G5) required calibration during setup. As noted last week, this version (G6) does not require that. On the other hand, you can type in a calibration value, which I ended up doing for this particular sensor, as I was worried about the >20mmol/L readings it was showing me. Turns out they were not completely outlandish, but they were over 20% off. A fingerstick later, and a bit of calibration, seemed to be enough for it to report a more in-line value.

Will I stick to the Dexcom G6 over the Libre? I seriously doubt it by now. It does not appear to match my usage patterns, it seems to be built for a different target audience, and it lacks the useful information and graphs that the LibreLink app provides. It is also more expensive and less comfortable to wear. Expect at least one more rant if I can figure out how to access my own readings on their webapp.

Working with usbmon captures

Two years ago I posted some notes on how I do USB sniffing. I have not really changed much since then, although admittedly I have not spent much time reversing glucometers in that time. But I’m finally biting the bullet and building myself a better setup.

The reasons why I’m looking for a new setup are multiple: first of all, I now have a laptop that is fast enough to run a Windows 10 VM (with Microsoft’s 90-day evaluation version). Second, the proprietary software I used for USB sniffing has not been updated since 2016 — and they still have not published any information about their CBCF format, their stated reason being:

Unfortunately, there is no such documentation and I’m almost sure will
never be. The reason is straightforward – every documented thing
should stay the same indefinitely. That is very restrictive.

At this point, keeping my old Dell Vostro 3750 as a sacrificial machine just for reverse engineering is not worth it anymore. Particularly when you consider that it is being obsoleted by both software (Windows 10 appears to have lost the ability to map network shares easily, and thus provide local-network backups) and hardware (the Western Digital SSD that I installed in it can’t be updated — their update package only works on UEFI boot systems, and while technically that machine is UEFI, it only supports CSM boot).

When looking at a new option for my setup, I also want to be able to publish more of my scripts and tooling, if nothing else because I would feel more accomplished knowing that even the side effects of working on these projects can be reused. So this time around I want to focus on all open source tooling, and build as much of it as possible in a way that is suitable for release as part of my employer’s open source program, which basically means not including any device-specific information within the tooling.

I started looking at Wireshark and its support for protocol dissectors. Unfortunately, it looks like USB payloads are a bit more complicated, and dissector support is not great. So once again I’ll be writing a bunch of Python scripts to convert the captured data into some “chatter” files that are at least suitable for human consumption. I then started taking a closer look at the usbmon documentation (the last time I looked at this was over ten years ago), to see if I can process that data directly.

To be fair, Wireshark does make it much nicer to get the captures out, since usbmon’s text format is not particularly easy to parse back into something you can code with — and it is “lossy” compared with the binary structures. With that in mind, the first thing to focus on is supporting the capture format Wireshark generates, which is pcapng, with one particular (out of many) USB capture packet structure. I decided to start my work from that.

What I have right now is an (incomplete) library that can parse a pcapng capture into objects that are easier to play with in Python. Right now it loads the whole content into memory, which might or might not be a bad limitation, but for now it will do. I guess it would also be nice if I could find a way to integrate this with Colaboratory, which is a tool I only have a vague acquaintance with, but which would probably be great for this kind of reverse engineering, as it looks a lot like the kind of stuff I’ve been doing by hand. That will probably be left for the future.
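The core of the parsing is not much more than the following sketch, which assumes the python-pcapng package and my reading of the 64-byte usbmon_packet layout from the kernel documentation (and a little-endian capture host):

    # Sketch: iterate USB packets out of a Wireshark pcapng capture.
    # Assumes the python-pcapng package; the pseudo-header layout follows
    # my reading of struct usbmon_packet in the kernel's usbmon docs.
    import struct
    from pcapng import FileScanner
    from pcapng.blocks import EnhancedPacket

    USBMON = struct.Struct("<QBBBBHccqiiII8siiII")  # 64 bytes, little-endian

    with open("capture.pcapng", "rb") as pcap:
        for block in FileScanner(pcap):
            if not isinstance(block, EnhancedPacket):
                continue
            data = block.packet_data
            (urb_id, event_type, xfer_type, epnum, devnum, busnum,
             flag_setup, flag_data, ts_sec, ts_usec, status, length,
             len_cap, setup, interval, start_frame, xfer_flags,
             ndesc) = USBMON.unpack_from(data)
            payload = data[USBMON.size:]
            direction = "IN" if epnum & 0x80 else "OUT"
            print(f"bus {busnum} dev {devnum} ep {epnum & 0x7f} "
                  f"{direction} {len(payload)} bytes")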

The primary target right now is for me to be able to reconstruct the text format of usbmon given the pcapng capture. This would at least tell me that my objects are not losing details in the conversion. Unfortunately this is proving harder than expected, because the documentation of usbmon is not particularly clear, starting from the definition of the structure, which mixes sized (u32) and unsized (unsigned int) types. I hope I’ll be able to figure this out, and hopefully even send changes to improve the documentation.

As you might have noticed from my Twitter rants, I maintain that the documentation needs an overhaul. From mention of “easy” things, to the fact that the current suggested format (the binary structures) is defined in terms of the text format fields — except the text format is deprecated, and the kernel actually appears to produce the text format based on the binary structures. There are also quite a few things that are not obviously documented in the kernel docs, so you need to read the source code to figure out what they mean. I’ll try rewriting sections of the documentation.

Keep reading the blog for updates if you have an interest in this.

Testing the Dexcom G6 CGM: Setup

I have written many times before about how I have been using the FreeStyle Libre “flash” glucose monitor, and how I have been vastly happy with it. Unfortunately, in the last year or so, Abbott has had trouble with manufacturing capacity for the sensors, and it’s becoming annoying to procure them. They have already delayed my order once, to the point that I spent a week going back to finger-pricking meters, and it looked like I might have to repeat that when, earlier in January, they notified me that my order would be delayed.

This time, I decided to at least look into the alternatives — and as you can guess from the title, I have ordered a Dexcom G6 system, which is an actual continuous monitor, rather than a flash system like the Libre. For those who have not looked into this before (or who, lucky them, don’t suffer from diabetes and thus don’t spend time looking into this), the main difference between the two is that the Libre needs to be scanned regularly, while the G6 sends the data continuously from the transmitter to a receiver of some kind.

I say “of some kind” because, like the Libre, and unlike the generation I looked at before, the G6 can be connected to a compatible smartphone instead of a dedicated receiver. Indeed, the receiver is a costly optional extra here, considering that the starter kit alone is £159 (plus VAT, which I’m exempt from because I’m diabetic).

Speaking of costs, Dexcom takes a different approach to ordering than the Libre: it’s overly expensive if you “pay as you go”, the way Abbott does it. Instead, if you don’t want to be charged through the nose, you need to accept a one-year contract, at £159/month. It’s an okay price, barely more expensive than the equivalent Abbott sensors, but it’s definitely a bit more “scary” as an option, particularly if you don’t feel sure about the comfort of the sensor, for instance.

I’m typing this post as I open the boxes that arrived with the sensor, transmitter and instructions. And the first thing I will complain about is that the instructions tell me to “Set Up App”, and give me the name of the app and its icon, but provide no QR code or short link to it. So I looked at their own FAQ; they too only provide the name of the app:

The Dexcom G6 app has to be downloaded and is different from the Dexcom G5 Mobile app. (Please note: The G6 system will not work with the G5 Mobile app.) It is available for free from the Apple App or Google Play stores. The app is named “Dexcom G6”

Once I actually look for the app, which is reported as being developed by Dexcom, what I find is Dexcom G6 mmol/L DXCM1. What on Earth, folks? Yes, of course, the mmol/L is there because it’s the UK edition (the Italian edition would be mg/dL), and DXCM1 is probably… something. But this is one of the worst ways of dealing with region-restricted apps.

Second problem: the login flow uses an in-app browser, as is clear from the cookie popup (which is annoying on their normal website too). Worse, it does not work with 1Password auto-fill! Luckily, they at least don’t disable pasting.

After logging in, the app forces you to watch a series of introductory videos, otherwise you don’t get to continue the setup at all. I would hope that this is only a requirement for the first time you use the app, but I don’t quite expect it to be that considerate. The videos are a bit repetitive, but I suppose they are designed to help people who are not used to this type of technology. It’s worth noting that some of the videos are vertical, while others are horizontal, forcing you to rotate your phone quite a few times.

I find it ironic that the videos suggest you keep using a fingerstick meter to make treatment decisions. The Libre reader device doubles as a fingerstick meter, while Dexcom does not appear to even market one to begin with.

I have to say I’m not particularly impressed by the process, let alone the opportunities. The video effectively tells you that you shouldn’t be doing anything at all with your body, as you need to place the sensor on your belly and nowhere else, away from injection sites, from where a seatbelt might sit, and from where you may roll over while asleep. But I’ll go with it for now. Also, unlike the Libre, the sensors don’t come with the usual alcohol wipes, despite the instructions suggesting you use one and have it ready.

As I type this, I just finished the (mostly painless, in the sense of physical pain) process to install the sensor and transmitter. The app is now supposedly connecting with the (BLE) transmitter. The screen tells me:

Keep smart device within 6 meters of transmitter. Pairing may take up to 30 minutes.

It took a good five minutes to pair. And only after it paired could the sensor be started, which takes two hours (compare to the one hour of the Libre). Funnily enough, Android SmartLock asked if I wanted to use it to keep my phone unlocked, too.

Before I end this first post, I should mention that there is also a WearOS companion app — which my smartwatch asked if I wanted to install after I installed the phone app. I would love to say that this is great, but it’s implemented as a watch face! Which makes it very annoying if you actually like your watch face, and would rather just have an app that lets you check your blood sugar without taking out your phone during a meeting, or a date.

Anyhoo, I’ll post more about my experience as I get further into using this. The starter kit is a 30-day kit, so I’ll probably be blogging more during February while this is in, and then finally decide what to do later in the year. I now have supplies for the Libre for over three months, so if I switch, that’ll probably happen some time in June.

CP2110 Update for 2019

The last time I wrote about the CP2110 adapter was nearly a year ago, and because I have had a lot to keep me busy since, I have not made much progress. But today I had some spare cycles and decided to take a deeper look, starting from scratch again.

What I should have done properly since then was procure myself a new serial dongle, as I was not (and still am not) entirely convinced about the quality of the CH341 adapter I’m using. I think I used that serial adapter successfully before, but maybe I didn’t, and I’ve been fighting with ghosts ever since. This counts double because, silly me, I didn’t re-read my own post when I resumed working on this, and I have been scratching my head at nearly exactly the same problems as last time.

I have some updates first. The first of which is that I have some rough-edged code out there on this GitHub branch. It does not really have all the features it should, but it at least let me test the basic implementation. It also does not actually let you select which device to open — it looks for the device with the same USB IDs as mine, and that might not work at all for you. I’ll be happy to accept pull requests to fix more of the details, if anyone happens to need something like this too — once it’s actually in a state where it can be merged, I’ll be doing a squash commit and sending a pull request upstream with the final working code.

The second is that while fighting with this, and venting on Twitter, Saleae themselves put me on the right path: when I said that Logic failed to decode the CP2110→CH341 conversation at 5V but worked when they were set at 3.3V, they pointed me at the documentation of threshold voltage, which turned out to be a very good lead.

Indeed, when connecting the CP2110 at 5V alone, Logic reports a high of 5.121V, and a low of ~-0.12V. When I tried to connect it with the CH341 through the breadboard full of connections, Logic reports a low of nearly 3V! And as far as I can tell, the ground is correctly wired together between the two serial adapters — they are even connected to the same USB HUB. I also don’t think the problem is with the wiring of the breadboard, because the behaviour is identical when just wiring the two adapters together.

So my next step was setting up the BeagleBone Black I bought a couple of years ago and shelved in a box. I should have done that last year, and I would probably have been very close to having this working in the first place. After setting it up (which is much easier than it sounds), and figuring out the pinout of its debug serial port from the BeagleBoard Wiki (plus a bit of guesswork on the voltage), I could confirm the data was being sent to the CP2110 correctly — but it got all mangled when printed.

The answer was that the HID buffered reads are… complicated. So instead of deriving most of the structure from the POSIX serial implementation, I lifted it from the RFC2217 driver, which uses a background thread to loop the reads. This finally allowed me to use the pySerial miniterm tool to log in and even run dmesg(!) on the BBB over the CP2110 adapter, which I consider a win.
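The shape of it is roughly the following (a simplified sketch rather than the actual code, assuming the hidapi Python bindings, with most of the CP2110 report handling elided):

    # Sketch: a background thread draining HID reports into a queue,
    # RFC2217-driver style. Assumes the hidapi ("hid") Python package;
    # 0x10c4:0xea80 is the default CP2110 VID/PID.
    import queue
    import threading
    import hid

    class HidSerialReader:
        def __init__(self, vid=0x10c4, pid=0xea80):
            self._dev = hid.device()
            self._dev.open(vid, pid)
            self._buffer = queue.Queue()
            threading.Thread(target=self._read_loop, daemon=True).start()

        def _read_loop(self):
            while True:
                report = self._dev.read(64, 100)  # up to 64 bytes, 100ms timeout
                # CP2110 data reports use the report ID (0x01-0x3F) as the
                # count of payload bytes that follow it.
                if report and 0x01 <= report[0] <= 0x3F:
                    for byte in report[1:1 + report[0]]:
                        self._buffer.put(byte)

        def read(self, size):
            # Blocks until `size` bytes have been received.
            return bytes(self._buffer.get() for _ in range(size))

With the reads decoupled like this, the blocking semantics the POSIX serial code expects can be layered on top of the queue, instead of fighting the HID report boundaries directly.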

Tomorrow I’ll try polishing the implementation to the point where I can send a pull request. And then I can actually get back to looking into the glucometer using it. Because I had an actual target when I started working on this, and was not just trying to get it to work for the sake of it.

Why do we still use Ghostscript?

Late last year, I had a bit of a Twitter discussion about the fact that I can’t think of a good reason why Ghostscript is still a standard part of the Free Software desktop environment. The comments started from a security issue related to file access from within a PostScript program (i.e. a .ps file), and at the time I actually started drafting some of the content that is now becoming this post. I then shelved most of it, because I was busy and it was no longer topical.

Then Tavis had to bring this back to the attention of the public, and so I’m back writing this.

To be able to answer the question I pose in the title, we first have to define what Ghostscript is — and the short answer is, a PostScript renderer. Of course it’s a lot more than just that, but for the most part, that’s what it is. It deals with PostScript programs (or documents, if you prefer), and renders them into different formats. PostScript is rarely if at all used on modern desktops — not just because it’s overly complicated, but because it’s just not that useful in a world that has mostly settled on PDF, which is essentially “compiled PostScript”.

Okay not quite. There are plenty of qualifications that go around that whole paragraph, but I think it matches the practicalities of the case fairly well.

PostScript has found a number of interesting niche uses though, a lot of which focus on printing, because PostScript is the language that older (early?) printers used. I have not seen any modern printers speak PostScript though, at least after my Kyocera FS-1020, and even those that do tend to support alternative “languages” and raster formats. On the other hand, because PostScript was a “lingua franca” for printers, CUPS and other printer-related tooling still use PostScript as an intermediate language.

In a similar fashion, quite a lot of software that deals with faxes (yes, faxes) tends to make use of Ghostscript itself. I would know, because I wrote one, under contract, a long time ago. The reason is frankly pragmatic: if you’re on the client side, you want Windows to “print to fax”, and having a virtual PostScript printer is very easy — at that point you want to convert the document into something that can be easily shoved down the fax software’s throat, which ends up being TIFF (because TIFF is, as I understand it, the closest encoding to what physical faxes transmit). And Ghostscript is very good at doing that.
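That conversion is usually nothing more than a single command line invocation; a sketch of it in Python, with real gs flags but made-up file names:

    # Sketch: render a print-to-fax PostScript job into a Group 4 TIFF at
    # "fine" fax resolution, calling the gs binary at arm's length.
    import subprocess

    subprocess.run([
        "gs",
        "-dSAFER", "-dBATCH", "-dNOPAUSE",  # non-interactive, restricted file access
        "-sDEVICE=tiffg4",                  # CCITT Group 4, the classic fax encoding
        "-r204x196",                        # standard "fine" fax resolution
        "-sOutputFile=outgoing-fax.tiff",
        "incoming-job.ps",
    ], check=True)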

Indeed, I have used (and seen used) Ghostscript in many cases to basically combine a bunch of images into a single document, usually in TIFF or PDF format. It’s very good at doing that, if you know how to use it, or if you copy-paste from other people’s implementations.

Often, this is done through the command line, too, the reason for which is to be found in the licenses used by various Ghostscript implementations and versions over time. Indeed, while many people think of Ghostscript as an open source Swiss Army knife of document processing, it is actually dual-licensed. The Wikipedia page for the project shows eight variants, with at least four different licenses over time. The current options are AGPLv3 or the commercial paid-for license — and I can tell you that a lot of people (including the folks I worked under contract for) don’t really want to pay for that license, preferring instead the “arm’s length” aggregation of calling the binary rather than linking it in. Indeed, I wrote a .NET library to do just that. It’s optimized for (you guessed it) TIFF files, because it was a component of an Internet fax implementation.

So where does this leave us?

Back ten years ago or so, when effectively every Free Software desktop PDF viewer was forking the XPDF source code to adapt it to whatever rendering engine it needed, it took a significant list of vulnerabilities, which needed to be fixed time and time again, for the Poppler project to take off and create One PDF Rendering To Rule Them All. I think we need the same for Ghostscript. With a few differences.

The first difference is that I think we need to take a good look at what Ghostscript, and PostScript, are useful for in today’s desktops. Combining multiple images in a single document should _not_ require processing all the way to PostScript. There’s no reason to! Particularly not when the images are just JPEG files, and PDF can embed them directly. Having a tool that is good at combining multiple images into a PDF, with decent options for page size and alignment, would probably replace many of the usages of Ghostscript that I had in my own tools and scripts over the past few years.
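For what it’s worth, such a tool mostly exists already: the img2pdf package embeds JPEGs into a PDF container without re-encoding them, and without any PostScript in the pipeline. A minimal sketch, with made-up file names:

    # Sketch: combine JPEGs into one PDF, losslessly, with no PostScript
    # involved anywhere. Assumes the img2pdf package; file names are made up.
    import img2pdf

    with open("combined.pdf", "wb") as output:
        output.write(img2pdf.convert(["scan-1.jpg", "scan-2.jpg", "scan-3.jpg"]))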

And while rendering PostScript for display and for print are similar enough tasks, I have some doubts the same code would work right for both. PostScript and Ghostscript are often used in _networked_ printing as well, in which case there’s a lot of processing of untrusted input — both for display and for printing. Sandboxing – and possibly writing this in a language better suited to dealing with untrusted input than C is – would go a long way towards preventing problems there.

But there are a few other interesting topics that I want to point out here. I can’t think of any good reason for _desktops_ to support PostScript out of the box in 2019. While I can still think of a lot of tools, particularly from the old-timers, that use PostScript as an intermediate format, most people _in the world_ would use PDF nowadays to share documents, not PostScript. It’s kind of like sharing DVI files — which I have done before, but I now wonder why. While both formats might have advantages over PDF, in 2019 they have definitely lost the format war. macOS might still support both (I don’t know), but Windows and Android definitely don’t, which makes them pretty useless for sharing knowledge with the world.

What I mean by that is that it’s probably about time PostScript became an _optional_ component of the Free Software Desktop, one that users need to enable explicitly _if they ever need it_, just to limit the risks posed by accepting, displaying and thumbnailing full, Turing-complete programs masquerading as documents. Even Microsoft stopped running macros in Office documents by default, when they realised what type of footgun they had become.

Of course talk is cheap, and I should probably try to help directly myself. Unfortunately, I don’t have much experience with graphics formats, besides maintaining unpaper, and that is not a particularly good result either: I tried using libav’s image loading, and it turns out it’s actually a mess. So I guess I should either invest my time in learning enough about building libraries for image processing, or poke around to see if someone has written a good multi-format image processing library in, say, Rust.

Alternatively, if someone starts working on this and wants some help with either reviewing the code, or with integrating the final output in places where Ghostscript is used, I’m happy to volunteer my time. I’m fairly sure I can convince my manager to let me do some of that work as a 20% project.

Interns in SRE and FLOSS

In addition to the usual disclaimer, that what I’m posting here is my opinions and my opinions only, not those of my employers, teammates, or anyone else, I want to start with an additional disclaimer: I’m not an intern, a hiring manager, or a business owner. This means that I’m talking from my limited personal experience, which might not match someone else’s. I have no definite answers; I just happen to have opinions.

Also, the important acknowledgement: this post comes from a short chat on Twitter with Micah. If you don’t know her, and you’re reading my blog, what are you doing? Go and watch her videos!

You might remember that a long time ago I wrote (complaining) about how people were viewing Google Summer of Code as a way to get cash rather than a way to find and nurture new contributors for a project. As hindsight is 2020 (or at least 2019, soon), I can definitely see how my complaint sounded not just negative, but outright insulting to many. I would probably be more mellow about it nowadays, but from the point of view of an organisation, I stand by my original idea.

If anything, I have solidified my idea further over the past five and a half years working for a big company with interns around me almost all the time. I even hosted two trainees for the Summer Trainee Engineering Program a few years ago, and I was genuinely impressed with their skill — which admittedly is something they shared with nearly all the interns I’ve ever interacted with.

I have not hosted interns since, but not because of bad experiences. It had more to do with me changing teams much more often than the average Google engineer — not always at my request. That’s a topic for another day. Most of the teams I have been in, including now, had at least an intern working for them. For some teams, I’ve been involved in brainstorming to find ideas for interns to work on the next year.

Due to my “team migration”, and the fact that I insist on not moving to the USA, I often end up in those brainstorms with new intern hosts. And because of that I have over time noticed a few trends and patterns.

The one that luckily appears to be actively suppressed by managers and previous hosts is that of thinking of interns as the go-to option for tasks that we would define as “grungy” — that’s a terrible experience for interns, and it shouldn’t ever be encouraged. Indeed, my first manager made it clear that if you come up with a grungy task to be worked on, what you want is a new hire, not an intern.

Why? There are multiple reasons for that. Start with the limited time an intern has to complete a project: even if the grungy task is useful for learning how a certain system works, does an intern really need to get comfortable with it that way? For a new hire, instead, time is much less limited, so giving them somewhat boring tasks while they go through whatever other training they need is fine.

But that’s only part of the reason. The _much more important_ part is understanding where the value of an intern is for the organisation. And that is _not_ in their output!

As I said at the start, I’m not a hiring manager and I’m not a business person, but I used to have my own company, and I have been working in a big org for long enough that I can tell a few patterns here and there. So for a start, it becomes obvious that an intern’s output (as in the code they write, the services they implement, the designs they draft) is not their strongest value proposition from the organisation’s point of view: while usually interns are paid less than the full-time engineers, hosting an intern takes _a lot_ of time away from the intern host, which means the _cost_ of the intern is not just how much they get paid, but also a part of what the host gets paid (it’s not by chance that Google Summer of Code reimburses the hosting project and not just the student).

Also, given interns need to be trained, and they will likely have less experience in the environment they would be working, it’s usually the case that letting a full-time engineer provide the same output would take significantly less time (and thus, less money).

So no, the output is not the value of an intern. Instead, an internship is an opportunity both for the organisation and for the interns themselves. For the organisation, it’s almost like an extended interview: they get to gauge the interns’ abilities over a period of time, and not just with nearly-trick questions that can be learnt by heart — it covers a lot more than their coding skills, including their “culture fit” (I don’t like this concept) and their ability to work in a team — and I can tell you that, at the age of most of the interns I worked with, I myself would have been a _terrible_ team player!

And let’s not forget that if the intern is hired afterwards, it’s a streamlined training schedule, since they already know their way around the company.

For the intern, it’s the experience of working in a team, and figuring out if it’s what they want to do. I know of one brilliant intern (who I still miss having around, because they were quite the friendly company to sit behind, as well as a skilled engineer) who decided that Dublin was not for them, after all.

This has another side effect for the hosting teams that I think really needs to be considered. An internship is a teaching opportunity, so whatever project is provided to an intern should be _meaningful_ to them. It should be realistic; it shouldn’t be just a toy idea. At the same time, there’s usually the intention to have the intern work on something of value for the team. This is great in the general sense, but it leads to two further problems.

The first is that if you _really_ need something, assigning it as a task to an intern is a big risk: they may not deliver, or underdeliver. If you _need_ something, you should really assign it to an engineer; as I said it would also be cheaper.

The second is that the intern is usually still learning. Their code quality is likely to not be at the level you want your production code to be. And that’s _okay_. Any improvement in the code quality of the intern over their internship is of value for them, so helping them to improve is good… but it might not be the primary target.

Because of that, my usual statement during the brainstorms is “Do you have two weeks to put the finishing polish on your intern’s work after they are gone?” — because if not, the code is unlikely to make it into production. There are plenty of things that need to be done after a project is “complete” to make it long-lasting, whether that’s integration testing and releasing, or “dotting the i’s and crossing the t’s” on the code.

And when you don’t do those things, you end up with “mostly done” code that feels unowned (because the original author left by that point), and that can’t be easily integrated into production. I have deleted those kinds of projects from codebases (not just at Google) too many times already.

So yes, please, if you have a chance, take interns. Mentor them, teach them, show them around on what their opportunities could be. Make sure that they find a connection with the people as well as the code. Make sure that they learn things like “Asking your colleagues when you’re not sure is okay”. But don’t expect that getting an intern to work on something means that they’ll finish off a polished product or service that can be used without a further investment of time. And the same applies to GSoC students.

On Android Launchers

Usual disclaimer, that what I’m writing about is my own opinions, and not those of my employer, and so on.

I have a relationship that is probably best described as love/hate/hate with Android launchers, going back to the first Android phone I used — the Motorola Milestone, the European version of the Droid. I have been migrating to new launcher apps every year or two, sometimes because I got a new launcher with the firmware (I installed an unofficial CyanogenMod port on the Milestone at some point), or with a new phone (the HTC Desire HD at some point, which also got flashed with CyanogenMod), or simply because I got annoyed with one and tried a different one.

I remember that for a while I was actually very happy with HTC’s “skin”, whose launcher came with beautiful alpha-blended widgets (a novelty at the time), but I replaced it with, I think, ADW Launcher (the version from the Android Market – what is now the Play Store – not what was on CyanogenMod at that point). I think this was back when system apps could not be upgraded via the Store/Market distribution. To make the transition smoother I even ended up looking for widget apps, including a couple of “pro” versions, but at the end of the day I grew tired of those as well.

At some point, I think upon suggestion from a colleague, I jumped onto the Aviate launcher, which was unfortunately later bought by Yahoo!. As you can imagine, Yahoo!’s touch was not going to improve the launcher at all, to the point that one day I got annoyed enough I started looking into something else.

Of all the launchers, Aviate is probably the one that looked the most advanced, and I think it’s still one of the most interesting ideas: it had “contextual” pages, with configurable shortcuts and widgets, that could be triggered by time-of-day, or by location. This included the ability, for instance, to identify when you were in a restaurant and show FourSquare and TripAdvisor as the shortcuts.

I would love to have that feature again. Probably even more so now, as the apps I use are even more modal: some of them I only use at home (such as, well, Google Home, the Kodi remote, or Netflix), some of them nearly only on the go (Caffe Nero, Costa, Google Pay, …). Or maybe what I want is Google Now, which does not exist anymore, but let’s ignore that for now.

The other feature that I really liked about Aviate was that it introduced me to the feature I’ll call jump-to-letter: the Aviate “app drawer” kept apps organised by letter, in separate groups, which meant you could just tap on the right edge of the screen and jump to the right letter. And having the ability to just go to N to open Netflix is pretty handy, particularly when icons are all mostly the same except for maybe colour.

So when I migrated away from Aviate, I looked for another launcher with a similar jump-to-letter feature, and I ended up finding Action Launcher 3. This is probably the launcher I used the longest; I bought the yearly supporter IAP multiple times because I thought it deserved it.

I liked the idea of backporting the features of what was originally the Google Now Launcher – nowadays known as the Pixel Launcher – allowing the new features announced by Google for their own phones to be used on other phones already on the market. At some point, though, it started pushing the idea of sideloading an APK so that the launcher could also backport the actual Google Now page — that made me very wary, and I never installed it, as it would have needed too many permissions. But it became too pushy when it started updating every week, replacing my default home page with its own widgets. That was too much.

At that point I looked around and found Microsoft Launcher, which was (and is) actually pretty good. While it includes integration with Microsoft services such as Cortana, they kept all the integration optional, so I set it up with all those features disabled and kept the stylish launcher instead, with jump-to-letter and Bing’s lovely daily wallpapers, which are terrific, particularly when they are topical.

It was fairly lightweight while having useful features, such as the ability to hide apps from the drawer, whether because they can’t be uninstalled from the phone, or because they have an app icon for no reason, like SwiftKey and Gboard, or the many “Pro” license-key apps that only launch the primary app.

Unfortunately, last month something started going wrong, either because of a beta release or something else, and the Launcher started annoying me. Sometimes I would tap the Home button, and the Launcher would show up with no icons and no dock; the only thing I could do was go to the Apps settings and force-stop it. It also started failing to draw the AIX Weather Widget, which is the only widget I usually have on my personal phone (the work phone has the Calendar on it). I gave up, despite one of the Microsoft folks contacting me on Twitter asking for further details so that they could track down the issues.

I decided to reconsider the previous launchers I used, but I skipped over both Action Launcher (too soon to reconsider, I guess) and Aviate (given the current news between Flickr and Tumblr, I’m not sure I trust them — and I didn’t even check to make sure it is still maintained). Instead I went for Nova Launcher, which I used before. It seems to be fairly straightforward, although it lacks the jump-to-letter feature. It worked well enough when I installed it, and it’s very responsive. So I went for that for now. I might reconsider more of them later.

One thing I noticed, which all three of Action Launcher, Microsoft Launcher, and Nova Launcher do, is allow you to back up your launcher configuration. But none of them do it through the normal Android backup system, like WhatsApp or Viber do. Instead, they let you export a configuration file you can reload. I guess it might be so you can copy your home screen from one phone to another, but… I don’t know, I find it strange.

In any case, if you have suggestions for the best Android launcher, I’m happy to hear them. I’m not set in my ways with Nova Launcher, and I’m happy to pay a reasonable amount (up to £10, I would say) for a “Pro” launcher, because I know it’s not cheap to build them. And if any of you know of a “modal” launcher that would allow me to change the primary home screen depending on whether I’m at home or not (I don’t particularly need the detail that Aviate used to provide), I would be particularly happy.