Testing the Dexcom G6 CGM: Setup

I have written many times before about how I have been using the FreeStyle Libre “flash” glucose monitor, and how happy I have been with it. Unfortunately in the last year or so, Abbott has had trouble with manufacturing capacity for the sensors, and it’s becoming annoying to procure them. Once already they delayed my order to the point that I spent a week going back to finger-pricking meters, and it looked like I might have to repeat that when, earlier in January, they notified me that my order would be delayed.

This time, I decided to at least look into the alternatives — and as you can guess from the title, I have ordered a Dexcom G6 system, which is an actual continuous monitor, rather than a flash system like the Libre. For those who have not looked into this before (or who, lucky them, don’t suffer from diabetes and thus don’t spend time looking into this), the main difference between the two is that the Libre needs to be scanned regularly, while the G6 sends the data continuously from the transmitter to a receiver of some kind.

I say “of some kind” because, like the Libre, and unlike the generation I looked at before, the G6 can be connected to a compatible smartphone instead of a dedicated receiver. Indeed, the receiver is a costly optional extra here, considering that the starter kit alone is £159 (plus VAT, which I’m exempt from because I’m diabetic).

Speaking of costs, Dexcom takes a different approach to ordering than the Libre: it’s overly expensive if you “pay as you go”, the way Abbott does it. Instead, if you don’t want to be charged through the nose, you need to accept a one-year contract, at £159/month. It’s an okay price, barely more expensive than the equivalent Abbott sensors, but it’s definitely a bit more “scary” as an option, in particular if you don’t feel sure about the comfort of the sensor.

I’m typing this post as I open the boxes that arrived with the sensor, transmitter and instructions. And the first thing I will complain about is that the instructions tell me to “Set Up App”, and give me the name of the app and its icon, but provide no QR code or short link to it. So I looked at their own FAQ, which also only provides the name of the app:

The Dexcom G6 app has to be downloaded and is different from the Dexcom G5 Mobile app. (Please note: The G6 system will not work with the G5 Mobile app.) It is available for free from the Apple App or Google Play stores. The app is named “Dexcom G6”

Once I actually find the app, which is reported as being developed by Dexcom, it turns out to be called Dexcom G6 mmol/L DXCM1. What on Earth, folks? Yes, of course the mmol/l is there because it’s the UK edition (the Italian edition would be mg/dl), and DXCM1 is probably… something. But this is one of the worst ways of dealing with region-restricted apps.

Second problem: the login flow uses an in-app browser, as is clear from the cookie popup (which is annoying on their normal website too). Worse, it does not work with 1Password auto-fill! Luckily they don’t disable paste, at least.

After logging in, the app forces you to watch a series of introductory videos, otherwise you don’t get to continue the setup at all. I would hope that this is only a requirement the first time you use the app, but I somewhat doubt it. The videos are a bit repetitive, but I suppose they are designed to help people who are not used to this type of technology. I think it’s of note that some of the videos are vertical, while others are horizontal, forcing you to move your phone quite a few times.

I find it ironic that the videos suggest you keep using a fingerstick meter to make treatment decisions. The Libre reader device doubles as a fingerstick meter, while Dexcom does not appear to even market one to begin with.

I have to say I’m not particularly impressed by the process, let alone the opportunities. The videos effectively tell you that you shouldn’t be doing anything at all with your body, as you need to place the sensor on your belly, but away from injection sites, away from where a seatbelt could rest, and away from where you may roll over while asleep. But I’ll go with it for now. Also, unlike the Libre, the sensors don’t come with the usual alcohol wipes, despite the instructions suggesting you use one and have it ready.

As I type this, I just finished the (mostly painless, in the sense of physical pain) process of installing the sensor and transmitter. The app is now supposedly connecting with the (BLE) transmitter. The screen tells me:

Keep smart device within 6 meters of transmitter. Pairing may take up to 30 minutes.

It took a good five minutes to pair. And only after it paired could the sensor be started, which takes two hours (compared to the Libre’s one hour). Funnily enough, Android SmartLock asked if I wanted to use it to keep my phone unlocked, too.

Before I end this first post, I should mention that there is also a WearOS companion app — which my smartwatch asked if I wanted to install after I installed the phone app. I would love to say that this is great, but it’s implemented as a watch face! Which makes it very annoying if you actually like your watch face and would rather just have an app that lets you check your blood sugar without taking out your phone during a meeting, or a date.

Anyhoo, I’ll post more about my experience as I get further into using this. The starter kit is a 30-day kit, so I’ll probably be blogging more during February while this is in, and then finally decide what to do later in the year. I now have supplies for the Libre for over three months, so if I switch, that’ll probably happen some time in June.

CP2110 Update for 2019

The last time I wrote about the CP2110 adapter was nearly a year ago, and because I have had a lot to keep me busy since, I have not been making much progress. But today I had some spare cycles and decided to take a deeper look starting from scratch again.

What I should have done properly since then would have been procuring myself a new serial dongle, as I was not (and still am not) entirely convinced about the quality of the CH341 adapter I’m using. I think I used that serial adapter successfully before, but maybe I didn’t, and I’ve been fighting with ghosts ever since. This counts double as, silly me, I didn’t re-read my own post when I resumed working on this, and have been scratching my head at nearly exactly the same problems as last time.

I have some updates first. The first of which is that I have some rough-edged code out there on this GitHub branch. It does not really have all the features it should, but it at least lets me test the basic implementation. It also does not actually let you select which device to open — it looks for the device with the same USB IDs as mine, and that might not work at all for you. I’ll be happy to accept pull requests to fix more of the details, if anyone happens to need something like this too — once it’s actually in a state where it can be merged, I’ll be doing a squash commit and sending a pull request upstream with the final working code.
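To give an idea of what I mean (this is just a sketch of the kind of lookup the branch does, not the actual code), the device is found through the hidapi Python bindings, filtering by what I believe are the default Silicon Labs IDs for the CP2110; a rebadged adapter may well report something different:

```python
# Sketch only: find and open a CP2110 by its (assumed) default USB IDs
# through the hidapi Python bindings. 0x10C4:0xEA80 are the Silicon Labs
# defaults; adjust them if your adapter reports something different.
import hid

CP2110_VID = 0x10C4
CP2110_PID = 0xEA80

def open_first_cp2110():
    devices = hid.enumerate(CP2110_VID, CP2110_PID)
    if not devices:
        raise IOError('no CP2110 adapter found')
    dev = hid.device()
    dev.open_path(devices[0]['path'])
    return dev
```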

The second is that while fighting with this, and venting on Twitter, Saleae themselves put me on the right path: when I said that Logic failed to decode the CP2110→CH341 conversation at 5V but worked when they were set at 3.3V, they pointed me at the documentation of threshold voltage, which turned out to be a very good lead.

Indeed, when connecting the CP2110 at 5V alone, Logic reports a high of 5.121V, and a low of ~-0.12V. When I tried to connect it with the CH341 through the breadboard full of connections, Logic reports a low of nearly 3V! And as far as I can tell, the ground is correctly wired together between the two serial adapters — they are even connected to the same USB HUB. I also don’t think the problem is with the wiring of the breadboard, because the behaviour is identical when just wiring the two adapters together.

So my next step has been setting up the BeagleBone Black I bought a couple of years ago and shelved in a box. I should have done that last year, and I would probably have been very close to having this working in the first place. After setting this up (which is much easier than it sounds), and figuring out from the BeagleBoard Wiki the pinout (and a bit of guesswork on the voltage) of its debug serial port, I could confirm the data was being sent to the CP2110 correctly — but it got all mangled on print.

The answer was that HID buffered reads are… complicated. So instead of deriving most of the structure from the POSIX serial implementation, I lifted it from the RFC2217 driver, which uses a background thread to loop the reads. This finally allowed me to use the pySerial miniterm tool to log in and even run dmesg(!) on the BBB over the CP2110 adapter, which I consider a win.
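For those curious, the pattern looks roughly like the sketch below: a background thread keeps draining the HID reports into a buffer, so that read() can hand back however many bytes are available. This is my own illustration of the idea, not the code from the branch, and the report handling is simplified (on the CP2110, the report ID of a data report actually encodes the payload length):

```python
# Illustrative sketch of the background-read pattern borrowed from the
# RFC2217 driver: a thread keeps draining HID reports into a queue, so
# read() can return whatever bytes have arrived instead of blocking on
# fixed-size reports.
import queue
import threading

class HidReadLoop:
    def __init__(self, hid_device):
        self._dev = hid_device          # an already-open hidapi device
        self._buffer = queue.Queue()
        self._alive = True
        self._thread = threading.Thread(target=self._reader, daemon=True)
        self._thread.start()

    def _reader(self):
        while self._alive:
            report = self._dev.read(64, timeout_ms=100)
            if report:
                # Skip the leading report ID byte; a real driver would use
                # it to know how many payload bytes follow.
                for byte in report[1:]:
                    self._buffer.put(byte)

    def read(self, size=1):
        data = bytearray()
        while len(data) < size:
            data.append(self._buffer.get())
        return bytes(data)

    def close(self):
        self._alive = False
        self._thread.join()
```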

Tomorrow I’ll try polishing the implementation to the point where I can send a pull request. And then I can actually get back to looking into the glucometer that uses this chip. Because I had an actual target when I started working on this, and was not just trying to get it to work for the sake of it.

Why do we still use Ghostscript?

Late last year, I had a bit of a Twitter discussion on the fact that I can’t think of a good reason why Ghostscript is still a standard part of the Free Software desktop environment. The comments started from a security issue related to file access from within a PostScript program (i.e. a .ps file), and at the time I actually started drafting some of the content that is becoming this post now. I then shelved most of it because I was busy and it was not topical.

Then Tavis had to bring this back to the attention of the public, and so I’m back writing this.

To be able to answer the question I pose in the title, we have to first define what Ghostscript is — and the short answer is, a PostScript renderer. Of course it’s a lot more than just that, but for the most part, that’s what it is. It deals with PostScript programs (or documents, if you prefer), and renders them into different formats. PostScript is rarely, if ever, used on modern desktops — not just because it’s overly complicated, but because it’s just not that useful in a world that mostly settled on PDF, which is essentially “compiled PostScript”.

Okay not quite. There are plenty of qualifications that go around that whole paragraph, but I think it matches the practicalities of the case fairly well.

PostScript has found a number of interesting niche uses though, a lot of which focus around printing, because PostScript is the language that older (early?) printers used. I have not seen any modern printers speak PostScript though, at least after my Kyocera FS-1020, and even those that do tend to support alternative “languages” and raster formats. On the other hand, because PostScript was a “lingua franca” for printers, CUPS and other printer-related tooling still use PostScript as an intermediate language.

In a similar fashion, quite a bit of software that deals with faxes (yes, faxes) tends to make use of Ghostscript itself. I would know, because I wrote one, under contract, a long time ago. The reason is frankly pragmatic: if you’re on the client side, you want Windows to “print to fax”, and having a virtual PostScript printer is very easy — at that point you want to convert the document into something that can be easily shoved down the fax software’s throat, which ends up being TIFF (because TIFF is, as I understand it, the closest encoding to the physical faxes). And Ghostscript is very good at doing that.

Indeed, I have used (and seen used) Ghostscript in many cases to basically combine a bunch of images into a single document, usually in TIFF or PDF format. It’s very good at doing that, if you know how to use it, or you copy-paste from other people’s implementations.

Often, this is done through the command line, too, and the reason for that is to be found in the licenses used by various Ghostscript implementations and versions over time. Indeed, while many people think of Ghostscript as an open source Swiss Army Knife of document processing, it is actually dual-licensed. The Wikipedia page for the project shows eight variants, with at least four different licenses over time. The current options are AGPLv3 or the commercial paid-for license — and I can tell you that a lot of people (including the folks I worked under contract for) don’t really want to pay for that license, preferring instead the “arm’s length” aggregation of calling the binary rather than linking it in. Indeed, I wrote a .NET library to do just that. It’s optimized for (you guessed it right) TIFF files, because it was a component of an Internet Fax implementation.
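For reference, the “arm’s length” call is not particularly involved; a minimal Python sketch of the fax-style conversion would look something like this (the Ghostscript flags are the standard ones for its Group 4 TIFF device, the resolution is the usual fine-mode fax setting, and the paths are obviously made up):

```python
# Minimal sketch of calling the Ghostscript binary at arm's length,
# rendering a PostScript print job into a Group 4 (fax-style) TIFF.
import subprocess

def ps_to_fax_tiff(ps_path, tiff_path):
    subprocess.run([
        'gs',
        '-dSAFER', '-dBATCH', '-dNOPAUSE', '-dQUIET',
        '-sDEVICE=tiffg4',         # bilevel TIFF with CCITT Group 4 compression
        '-r204x196',               # fine-mode fax resolution
        f'-sOutputFile={tiff_path}',
        ps_path,
    ], check=True)

ps_to_fax_tiff('printjob.ps', 'outgoing-fax.tif')
```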

So where does this leave us?

Ten years ago or so, when effectively every Free Software desktop PDF viewer was forking the XPDF source code to adapt it to whatever rendering engine it needed, it took a significant list of vulnerabilities that had to be fixed time and time again for the Poppler project to take off, and create One PDF Rendering To Rule Them All. I think we need the same for Ghostscript. With a few differences.

The first difference is that I think we need to take a good look at what Ghostscript, and PostScript, are useful for in today’s desktops. Combining multiple images into a single document should _not_ require processing all the way to PostScript. There’s no reason to! Particularly not when the images are just JPEG files, and PDF can embed them directly. Having a tool that is good at combining multiple images into a PDF, with decent options for page size and alignment, would probably replace many of the usages of Ghostscript that I had in my own tools and scripts over the past few years.
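Just to show how little PostScript is needed for the simple case, here is a sketch using Pillow’s PDF writer. Note that Pillow re-encodes the images rather than embedding the JPEG streams untouched, which a proper dedicated tool would avoid, and the page size and alignment options would still need to be built on top:

```python
# Sketch: combine a set of scans into a single PDF without going through
# PostScript at all, using Pillow's PDF writer.
from PIL import Image

def images_to_pdf(image_paths, pdf_path):
    pages = [Image.open(path).convert('RGB') for path in image_paths]
    pages[0].save(pdf_path, save_all=True, append_images=pages[1:])

images_to_pdf(['scan1.jpg', 'scan2.jpg'], 'combined.pdf')
```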

And while rendering PostScript for either display or print are similar enough tasks, I have some doubts the same code would work right for both. PostScript and Ghostscript are often used in _networked_ printing as well. In which case there’s a lot of processing of untrusted input — both for display and printing. Sandboxing – and possibly writing this in a language better suited than C to dealing with untrusted input – would go a long way towards preventing problems there.

But there are a few other interesting topics that I want to point out. I can’t think of any good reason for _desktops_ to support PostScript out of the box in 2019. While I can still think of a lot of tools, particularly from the old timers, that use PostScript as an intermediate format, most people _in the world_ would use PDF nowadays to share documents, not PostScript. It’s kind of like sharing DVI files — which I have done before, but I now wonder why. While both formats might have advantages over PDF, in 2019 they definitely lost the format war. macOS might still support both (I don’t know), but Windows and Android definitely don’t, which makes them pretty useless for sharing documents with the world.

What I mean by that is that it’s probably about time for PostScript to become an _optional_ component of the Free Software Desktop, one that users need to enable explicitly _if they ever need it_, just to limit the risks of accepting, displaying and thumbnailing full, Turing-complete programs masquerading as documents. Even Microsoft stopped running macros in Office documents by default, when they realized the type of footgun it had become.

Of course talk is cheap, and I should probably try to help directly myself. Unfortunately I don’t have much experience with graphics formats, besides maintaining unpaper, and that is not a particularly good result either: I tried using libav’s image loading, and it turns out it’s actually a mess. So I guess I should either invest my time in learning enough about building libraries for image processing, or poke around to see if someone wrote a good multi-format image processing library in, say, Rust.

Alternatively, if someone starts to work on this and want to have some help with either reviewing the code, or with integrating the final output in places where Ghostscript is used, I’m happy to volunteer my time. I’m fairly sure I can convince my manager to let me do some of that work as a 20% project.

Interns in SRE and FLOSS

In addition to the usual disclaimer, that what I’m posting here is my opinions and my opinions only, not those of my employers, teammates, or anyone else, I want to start with an additional disclaimer: I’m not an intern, a hiring manager, or a business owner. This means that I’m talking from my limited personal experience, which might not match someone else’s. I have no definite answers, I just happen to have opinions.

Also, the important acknowledgement: this post comes from a short chat on Twitter with Micah. If you don’t know her, and you’re reading my blog, what are you doing? Go and watch her videos!

You might remember that a long time ago I wrote (complaining) about how people were viewing Google Summer of Code as a way to get cash rather than a way to find and nurture new contributors for a project. As hindsight is 2020 (or at least 2019 soon), I can definitely see how my complaint sounded not just negative, but outright insulting to many. I would probably be more mellow about it nowadays, but from the point of view of an organisation I stand by my original idea.

If anything, I have solidified my idea further over the past five and a half years working for a big company with interns around me almost all the time. I even hosted two trainees for the Summer Trainee Engineering Program a few years ago, and I was thoroughly impressed with their skill — which admittedly is something they shared with nearly all the interns I’ve ever interacted with.

I have not hosted interns since, but not because of bad experiences. It had more to do with me changing teams much more often than the average Google engineer — not always at my request. That’s a topic for another day. Most of the teams I have been in, including now, had at least an intern working for them. For some teams, I’ve been involved in brainstorming to find ideas for interns to work on the next year.

Due to my “team migration”, and the fact that I insist on not moving to the USA, I often end up in those brainstorms with new intern hosts. And because of that I have over time noticed a few trends and patterns.

The one that luckily appears to be actively suppressed by managers and previous hosts is that of thinking of interns as the go-to option to work on tasks that we would define as “grungy” — that’s a terrible experience for interns, and it shouldn’t ever be encouraged. Indeed, my first manager made it clear that if you come up with a grungy task to be worked on, what you want is a new hire, not an intern.

Why? There are multiple reasons for that. Start with the limited time an intern has to complete a project: even if the grungy task is useful to learn how a certain system works, does an intern really need to get comfortable with it that way? For a new hire, instead, time is much less limited, so giving them somewhat more boring tasks while they go through whatever other training they need is fine.

But that’s only part of the reason. The _much more important_ part is understanding where the value of an intern is for the organisation. And that is _not_ in their output!

As I said at the start, I’m not a hiring manager and I’m not a business person, but I used to have my own company, and have been working in a big org for long enough that I can tell a few patterns here and there. So for a start, it becomes obvious that an intern’s output (as in the code they write, the services they implement, the designs they draft) is not their strongest value proposition from the organisation’s point of view: while usually interns are paid less than the full-time engineers, hosting an intern takes _a lot_ of time away from the intern host, which means the _cost_ of the intern is not just how much they get paid, but also a part of what the host gets paid (it’s not by chance that Google Summer of Code reimburses the hosting project and not just the student).

Also, given interns need to be trained, and they will likely have less experience in the environment they would be working, it’s usually the case that letting a full-time engineer provide the same output would take significantly less time (and thus, less money).

So no, the output is not the value of an intern. Instead an internship is an opportunity both for the organisation and for the interns themselves. For the organisation, it’s almost like an extended interview: they get to gauge the interns’ abilities over a period of time, and not just with nearly-trick questions that can be learnt by heart — it includes a lot more than just their coding skills, but also their “culture fit” (I don’t like this concept), and their ability to work in a team — and I can tell you that myself, at the age of most of the interns I worked with, I would have been a _terrible_ team player!

And let’s not forget that if the intern is hired afterwards, it’s a streamlined training schedule, since they already know their way around the company.

For the intern, it’s the experience of working in a team, and figuring out if it’s what they want to do. I know of one brilliant intern (who I still miss having around, because they were quite the friendly company to sit behind, as well as a skilled engineer) who decided that Dublin was not for them, after all.

This has another side effect for the hosting teams, one that I think really needs to be considered. An internship is a teaching opportunity, so whatever project is provided to an intern should be _meaningful_ to them. It should be realistic, it shouldn’t be just a toy idea. At the same time, there’s usually the intention to have an intern work on something of value for the team. This is great in the general sense, but it runs into two further problems.

The first is that if you _really_ need something, assigning it as a task to an intern is a big risk: they may not deliver, or underdeliver. If you _need_ something, you should really assign it to an engineer; as I said it would also be cheaper.

The second is that the intern is usually still learning. Their code quality is likely to not be at the level you want your production code to be. And that’s _okay_. Any improvement in the code quality of the intern over their internship is of value for them, so helping them to improve is good… but it might not be the primary target.

Because of that, my usual statement during the brainstorms is “Do you have two weeks to put the finishing polish on your intern’s work, after they are gone?” — because if not, the code is unlikely to make it into production. There are plenty of things that need to be done after a project is “complete” to make it long-lasting, whether they are integration testing and releasing, or “dotting the i’s and crossing the t’s” on the code.

And when you don’t do those things, you end up with “mostly done” code that feels unowned (because the original author left by that point), and that can’t be easily integrated into production. I have deleted those kinds of projects from codebases (not just at Google) too many times already.

So yes, please, if you have a chance, take interns. Mentor them, teach them, show them around on what their opportunities could be. Make sure that they find a connection with the people as well as the code. Make sure that they learn things like “Asking your colleagues when you’re not sure is okay”. But don’t expect that getting an intern to work on something means that they’ll finish off a polished product or service that can be used without a further investment of time. And the same applies to GSoC students.

On Android Launchers

Usual disclaimer, that what I’m writing about is my own opinions, and not those of my employer, and so on.

I have a relationship that is probably best described as love/hate/hate with Android launchers, ever since the first Android phone I used — the Motorola Milestone, the European version of the Droid. I have been migrating to new launcher apps every year or two, sometimes because I got a new launcher with the firmware (I installed an unofficial CyanogenMod port on the Milestone at some point), or with a new phone (the HTC Desire HD at some point, which also got flashed with CyanogenMod), or simply because I got annoyed with one and tried a different one.

I remember for a while I was actually very happy with HTC’s “skin”, which included the launcher, which came with beautiful alpha-blended widgets (a novelty at the time), but I replaced it with, I think, ADW Launcher (the version from the Android Market – what is now the Play Store – not what was on CyanogenMod at that point). I think this was the time when the system apps could not be upgraded via the Store/Market distribution. To make the transition smoother I even ended up looking for widget apps, including a couple of “pro” versions, but at the end of the day grew tired of those as well.

At some point, I think upon suggestion from a colleague, I jumped onto the Aviate launcher, which was unfortunately later bought by Yahoo!. As you can imagine, Yahoo!’s touch was not going to improve the launcher at all, to the point that one day I got annoyed enough I started looking into something else.

Of all the launchers, Aviate is probably the one that looked the most advanced, and I think it’s still one of the most interesting ideas: it had “contextual” pages, with configurable shortcuts and widgets, that could be triggered by time-of-day, or by location. This included the ability, for instance, to identify when you were in a restaurant and show FourSquare and TripAdvisor as the shortcuts.

I would love to have that feature again. Probably even more so now, as the apps I use are even more modal: some of them I only use at home (such as, well, Google Home, the Kodi remote, or Netflix), some of them nearly only on the go (Caffe Nero, Costa, Google Pay, …). Or maybe what I want is Google Now, which does not exist anymore, but let’s ignore that for now.

The other feature that I really liked about Aviate was that it introduced me to the feature that I’ll call jump-to-letter: the Aviate “app drawer” kept apps organised by letter, separated. Which meant you could just tap on the right border of your phone, and you would jump to the right letter. And having the ability to just go to N to open Netflix is pretty handy. Particularly when icons are all mostly the same except for maybe colour.

So when I migrated away from Aviate, I looked for another launcher with a similar jump-to-letter feature, and I ended up finding Action Launcher 3. This is probably the launcher I used the longest; I bought the yearly supporter IAP multiple times because I thought it deserved it.

I liked the idea of it backporting features of what was originally the Google Now Launcher – nowadays known as the Pixel Launcher – which meant you could use the new features Google announced for their own phones on other phones already on the market. At some point, though, it started pushing the idea of sideloading an APK so that the launcher could also backport the actual Google Now page — that made me very wary and I never installed it, as it would have needed too many permissions. But it became too pushy when it started updating every week, replacing my default home page with its own widgets. That was too much.

At that point I looked around and found Microsoft Launcher, which was (and is) actually pretty good. While it includes integration for Microsoft services such as Cortana, they kept all the integration optional, so I did set it up with all the features disabled, and kept the stylish launcher instead. With jump-to-letter, and Bing’s lovely daily wallpapers, which are terrific, particularly when they are topical.

It was fairly lightweight, while having useful features, including the ability to hide apps from the drawer: those that can’t be uninstalled from the phone, those that have an app icon for no reason, such as SwiftKey and Gboard, or the many “Pro” license key apps that only launch the primary app.

Unfortunately last month something started going wrong, either because of a beta release or something else, and the Launcher started annoying me. Sometimes I would tap the Home button, and the Launcher would show up with no icons and no dock; the only thing I could do was go to the Apps settings and force stop it. It also started failing to draw the AIX Weather Widget, which is the only widget I usually have on my personal phone (the work phone has the Calendar on it). I gave up, despite one of the Microsoft folks contacting me on Twitter asking for further details so that they could track down the issue.

I decided to reconsider the previous launchers I used, but I skipped over both Action Launcher (too soon to reconsider I guess) and Aviate (given the current news between Flickr and Tumblr, I’m not sure I trust them — and I didn’t even check to make sure it still is maintained). Instead I went for Nova Launcher, which I used before. It seems to be fairly straightforward, although it lacks the jump-to-letter feature. It worked well enough when I installed it, and it’s very responsive. So I went for that for now. I might reconsider more of them later.

One thing that I noticed, that all three of Action Launcher, Microsoft Launcher, and Nova Launcher do, is to allow you to back up your launcher configuration. But none of them do it through the normal Android backup system, like WhatsApp or Viber. Instead they let you export a configuration file you can reload. I guess it might be so you can copy your home screen from one phone to the other, but… I don’t know, I find it strange.

In any case, if you have suggestions for the best Android launcher, I’m happy to hear them. I’m not set in my ways with Nova Launcher, and I’m happy to pay a reasonable amount (up to £10 I would say) for a “Pro” launcher, because I know it’s not cheap to build them. And if any of you know of a “modal” launcher that would allow me to change the primary home screen depending on whether I’m home or not (I don’t particularly need the detail that Aviate used to provide), I would be particularly happy.

Musings after buying a smart plug

I know that people will go and start ranting on using terms like “Internet of Shit” just for the title I’m using here. Despite being as wary and cynical about the subject of connected appliances as the next security-aware engineer, I want to point out that those reactions are blind and lacking empathy. So if your answer is to think that you’re smarter than the plug and me combined, there’s maybe no reason for you to stay around to read the post.

I also need to put the usual disclaimer forward: I work for Google, a company that produces “smart” appliances. I don’t have anything to do with the hardware products, have no special insight into them, and I am here talking about things as myself alone. I’m also not really talking about Google hardware, besides a few references to the Assistant here and there, and that’s simply because I happen to be using Google Home as my hub.

As I said I’m fairly cynical about smart appliances. It took quite a bit for me to even buy a single one, but I’m now a very happy user of a LIFX Mini Colour smart bulb. It was probably this year’s best gadget buy for me, and it is not just about the ability to control the light with an app on my phone — or with the Assistant. The bulb can dim, change colours, and can be set onto a dynamic schedule. It’s extremely convenient, and an improvement in my quality of life, particularly by setting it to red as I go to sleep, instead of keeping it bright white.

Of course, like always when buying a device that relies on external services to work (the infamous “cloud”), I am still worried about the risk of the company going under, or dropping support for my specific device, and letting me deal with the broken pieces. But quite honestly, if you tried to avoid all the cloud-based services and hardware nowadays, you would end up a luddite. And maybe you want that. Besides IKEA, which requires their full bridge, I don’t know of any other smart home brand that provides local-only controls — and local-only means no talking to the Assistant to turn on the light as part of the morning routine.

I’m happy enough that my LIFX can be controlled without an active Internet connection (this happened before). Maybe I’ll follow Matthew Garrett’s example and start reverse engineering it into a Python script for the rainy days.

But I have digressed enough. What I wanted to talk about was rather smart plugs. Because that’s a device category I’m not entirely sold on: I started the original draft of this post because I thought they were completely useless. I changed my own mind as I was writing it, and that’s why I actually wanted to post this.

So why did I buy a smart plug if I am not sold on the idea? Well, since this is our first Christmas together, my girlfriend wants to have a proper Christmas tree at home. And since I would like to see the tree while I approach the apartment on the bus or on foot (hey, I have not had a Christmas Tree for more than a decade, I can have some fun!), I would like to have IFTTT turn it on for me.1

I ended up buying a TP-Link Smart Plug (UK version), which comes with their own app, and integration with the various services including IFTTT and Google Assistant. Which means we’ll be able to say “Hey Google, turn on the Christmas Tree!”

There are differences between a smart bulb and a plug, though. The former adds a significant amount of value, with things like dimming, different colours, and so on. A smart plug is still only a binary switch: it’s either on, or off. You cannot do any fine-grained control over that; you can only turn things on or off.

So after thinking about this, I realized there are a few requirements for something to make sense to have connected to a smart plug:

It needs to be something that cannot stay on standby the whole day. Because if it can, there’s no real advantage in having a smart plug for it: keeping the device on standby is easier, and can easily be cheaper, as the standby consumption of a plug connected to WiFi might be higher than that of the device itself.

It needs to be something that can be at least “readied” unattended. Turning on the plug for a hairdryer is not going to be very useful, if you’re not there to use it. Also if readying something unattended is too risky, it’s a bad idea to use a smart plug. This is the case for clothes irons for instance; I wouldn’t want to turn mine on if I’m not there to make sure that it’s not on top of something it shouldn’t be.

If it’s something that comes with consumables, it needs to have big enough reserves, or a way to feed itself. Going back to the clothes iron, the one I have does not have enough of a water tank. If I was to turn it on too soon, it would just waste all of it and I would go and find it empty, which is just as bad.

Given these considerations, one of the common suggestions I hear is coffee makers. At first I thought this was distinctly American, as indeed a percolator-style coffee maker can be filled in the evening, and then be set to turn on in the morning and make coffee for you to drink. When I spent extensive time in Los Angeles, I used the timer on a percolator to make sure I would have hot “coffee” ready immediately after waking up. But then I realized that this is very similar for Italian-style espresso machines, too: they have an internal boiler that takes a while to get to temperature and be usable, they usually have a tank big enough for a full day (or in some cases they may be connected to the water mains), and they consume enough power in standby that you wouldn’t want to keep them turned on overnight. For those who don’t drink coffee, the same can be true of automatic tea makers — I had one from Twinings back in Italy.

Another appliance that fits the bill fairly well is the electric bathroom heater, or towel rack. Heating in general is likely better served by a smarter “whitebox” approach — indeed I have booked an appointment to install a Nest thermostat at my apartment, after getting my landlord’s permission, because I want to be able to automate hot water availability and easily tweak the temperature over the day. But in some cases, you have additional bathroom heating that has less control: I have on/off towel racks in my bathrooms in London, and my mother uses a small electric heater in Italy, after we messed up the house’s heating plan by replacing a bulky and leaky boiler with a more modern and efficient one.

Now for both of these examples, smart plugs are not the only obvious solution. Indeed, percolators, tea makers, and espresso machines, as well as many small electric heaters, often come with their own timer. This works great for people who have a clear schedule and a fixed routine. In my case that’s rarely the case: I wake up at a different time depending on what my day looks like, sometimes I oversleep because I had a bad night, sometimes I’m up earlier than average because my girlfriend is staying over and she has to go to work. Something similar holds for my mother, due to different requirements: she lives alone and really doesn’t have any reason to get up at a fixed time unless she’s waiting for deliveries, services, or stuff like that. And since the house is on two floors, and she has knee pain, being able to turn on the heating, get the bathroom ready, or make sure that the coffee machine is warmed up without having to get downstairs immediately, would be a very nice feature.

I can definitely see myself appreciating the idea of saying “Hey Google, Good Morning”, and know that by the time I finished listening to the BBC News headlines, the coffee is ready and still hot for me, while the bathroom is warm enough to take a shower in. Doesn’t really work for me here, because I make pour-over coffee, and the towel rack is not controlled by a normal plug, but I can dream can’t I?

By the way, Google Assistant can do that, although it’s a bit hidden: from the [Home](https://play.google.com/store/apps/details?id=com.google.android.apps.chromecast.app) app, go into the Account tab (the last one on the right), click Settings, go to the Assistant tab, and then select Routines. From there you can set up the actions you want taken when you give it a specific hotphrase.

For most other appliances, I would probably need more whitebox smartness. I already rely on the timer for my washing machine, but it would be nice to just put it into “standby”, loaded and locked, and not start it until I wake up, or until I’m actually leaving the apartment (I don’t get woken up by the noise of the one I have here in London, but I would have been by the one in Dublin). And something that can remind me as I get home (“Hey Google, I’m home”) that I need to unload the dishwasher.

One of the things that I actually nearly considered giving a smart plug to was the Air Wick freshener. I would love a fine-grained intensity control that keeps a background fragrance during the day, but raises it just as I’m ready to get home, to make me feel good; even just the ability to turn it off the moment I leave and on again when I come back home would be a very nice thing to have. On the other hand, it turns out that the plug-in device consumes significantly less power than the smart plug in standby, so it makes no sense as it is.

I guess using more sophisticated fragrance delivery devices, such as Yankee Candle’s Scenterpiece (which my mother has at home), would make more sense. Alternatively, Muji has very nice oil burners, though they have a small tank for water, and candle warmers are getting more common (these are probably better than the Scenterpiece in my experience). Unfortunately these are usually table-top devices, rather than plug-in, and I don’t have the space where I would want to use one. So if someone from Air Wick or Ambi Pur is reading, consider that I would pay just as much as a smart plug for a smart plug-in freshener that can be set to adjust the intensity over the day!

So to close it up, I’m somewhat skeptical about getting more smart plugs for myself, but I can definitely see a number of useful cases for them, as well as for smarter “whitebox” appliances. Indeed, if my mother had a decent Internet connection in 2018, I would probably set her up with quite a few of those, to make her life easier. Call them accessibility helpers, maybe.


  1. You may remember that I have some particular attachment to Christmas lights Rube Goldberg machinery. The idea of having my own IFTTT-compatible smart Christmas light tube did pass through my head. 

Ads, spying, and my personal opinion

In the past year or so, I have seen multiple articles, even by authors who I thought would have more rational sense to them, about the impression that people get of being spied upon by technology and technology companies. I never got particularly bothered to talk about them, among other things because the company I work for (Google) is often at the receiving end of those articles, and it would be disingenuous for me to “defend” it, even though I work in Site Reliability, which gives me much less insight into how tracking is done than, say, my friends who work in media at other companies.

But something that happened a few weeks ago gave me an insight into one of the possible reasons why people think this, and I thought I would share my opinion. Before I start, let me make clear that what I’m going to write about is pieced together from public information only. As you’ll see soon, the commentary doesn’t even involve my company’s products, and because of that I had access to no private information whatsoever.

As I said in other previous posts, I have had one huge change in my personal life over the past few months: I’m in a committed relationship. This means that there’s one other person beside me that spends time in the apartment, using the same WiFi. This is going to be an important consideration as we move on later.

Some weeks ago, my girlfriend commented on a recent tourism advertisement campaign by Lithuania (her country) on Facebook. A few hours later, I received that very advertisement on my stream. Was Facebook spying on us? Did they figure out that we have been talking a lot more together and thus thought that I should visit her country?

I didn’t overthink it too much because I know it can be an absolute coincidence.

Then a few weeks later, we were sitting on the sofa watching Hanayamata on Crunchyroll. I took a bathroom break between episodes (because Crunchyroll’s binge mode doesn’t work on Chromecast), and as I came back she showed me that Instagram had started showing her Crunchyroll ads — “Why?!” We were using my phone to watch the anime, as I have the account. She’s not particularly into anime; this was almost a first, as this material interested her. So why the ads?

I had to think a moment to give her an answer. I had to make a hypothesis because obviously I don’t have access to either Crunchyroll or Instagram ads tracking, but I think I’m likely to have hit close to the bullseye and when I realized what I was thinking of, I considered the implications with the previous Facebook ads, and the whole lot of articles about spying.

One more important aspect that I have not revealed yet, is that I requested my ISP to give me a static, public IPv4 address instead of the default CGNAT one. I fell for the wet dream, despite not really having used the feature since. It’s handy, don’t get me wrong, if I was to use it. But the truth is that I probably could have not done so and I wouldn’t have noticed a difference.

Except for the ads of course. Because here’s how I can imagine these two cases to have happened.

My girlfriend reads Lithuanian news from her phone, which is connected to my WiFi when she’s here. And we both use Facebook on the same network. It’s not terribly far-fetched to expect that some of the trackers on the Lithuanian news sites she visits are causing the apartment’s stable, static, public IP address to be added to a list of people possibly interested in the country.

Similarly, when we were watching Crunchyroll, we were doing so from the same IP address she was checking Instagram from. Connect the two dots and now you have the reason why Instagram thought she’d be a good candidate for seeing an advert for Crunchyroll. Which honestly would make more sense if they intended to exclude those who do have an account, in which case I would not have them trying to convince me to… give them the money I already give them.

Why do I expect this to be IP tracking? Because it’s the only thing that makes sense. We haven’t used Facebook or Messenger to chat in months, so they can’t get signal from that. She does not have the Assistant turned on on her phone, and while I do, I’m reasonably sure that even if it was used for advertisement (and as far as I know, it isn’t), it would not be for Facebook and Instagram.

IP-based tracking is the oldest trick in the book. I would argue that it’s the first tracking that was done, and probably one of the least effective. But at the same time it’s mostly a passive tracking system, which means it’s much easier to accomplish under the current limits and regulations, including but not limited to GDPR.

This obviously has side effects that are even more annoying. If advertisers start to target IP addresses indiscriminately, it would be impossible for me or my girlfriend to search for surprises for each other. Just to be on the safe side, I ordered flowers for our half-year anniversary from the office, on the off-chance that the site would put me on a targeting list for flower ads and she could guess about it.

This is probably a lot less effective for people who have not set up static IP addresses, since there should be a daily or so rotation of IP addresses that confuses the tracking enough. But I can definitely see how this can also go very wrong when a household’s dynamics are pathological, or if the previous holder of the address managed to get the IP onto targeting lists for unexpected adverts.

I have to say that in these cases I do prefer when ads are at least correctly targeted. You can check your Ads preferences for Google and Facebook if you want to actually figure out if they know anything about you that you don’t want them to. I have yet to find out how to stop the dozens of “{Buzzword} {Category} Crowdfunding Videos” pages that keep spamming me on Facebook though.

Updated “Social” contacts

Given the announcement of the Google+ shutdown (for consumer accounts, which mine actually was not), I decided to take some time to clean up my own house, and thought it would be good to provide an update on where you can find me, and why.

First of all, you won’t find me on Google+ even during the next few months of transition: I fully deleted the account after using the Takeout interface that Google provides. I have not been using it except for a random rant here and there, or to reach some of my colleagues from the Dublin office.

If you want to follow my daily rants and figure out what I actually complain the most loudly about, you’re welcome to follow me on Twitter. Be warned that a good chunk of it might just be first-world London problems.

The Twitter feed also gets the auto-share of whatever I share on NewsBlur, which is, by the way, what I point everyone to when they keep complaining about Google Reader. Everybody: stop complaining and just see how much more polished Samuel’s work is.

I have a Facebook account, but I have (particularly in the past couple of years), restricted it to the people I actually interact with heavily, so unless we know each other (online or in person) well enough, it’s unlikely I would accept a friend request. It’s not a matter of privacy, given that I have written about my “privacy policy”, it’s more about wanting to have a safe space I can talk with my family and friends without discussions veering towards nerd-rage.

Also, a few years ago I decided that most of my colleagues, awesome as they are, should rather stay at arm’s length. So with the exception of a handful of people who I do go out with outside the office, I do not add colleagues to Facebook. Former colleagues are more likely.

If you like receiving your news through Facebook (a negative idea for most of the tech people I know, but something that the non-tech folks still widely prefer, it seems), you can “like” my page, which is just a way for WordPress to be able to share the posts to Facebook (it can share to pages, but not to personal accounts, following what I already complained about before regarding photos). The page also gets the same NewsBlur shared links as Twitter.

Talking about photos, when Facebook removed the APIs, I started focusing on posting only on Flickr. This turned out to be a bit annoying for a few of my friends, so I also set up a page for it. You’re welcome to follow it if you want to have random pictures from my trips, or squirrels, or bees.

One place where you won’t see me is Mastodon or other “distributed social networks” — the main reason is that I already got burnt by Identi.ca back in the day, and I’m not looking forward to a repeat of the absolute filter bubble there, or the fact that, a few years later, all those “dents” got lost. As much as people complain about how ephemeral Twitter is, I can still find my first tweet, while identi.ca just disappeared, as I see it, into nowhere.

And please, stop even considering following me on Keybase.

Yesterday’s Disruptors, Today’s Incumbents

You know, I always found it annoying how online stores such as Amazon, or even IKEA, have been called “disruptors” all these years. But nowadays I can mostly see how they changed the rules of the game, particularly in favour of the customers themselves, against their own workers, and suppliers. And so I can accept that they have been called that way for a reason.

Of course that’s not to say that I agree with them still being called that way.

Since I moved to London last year, I have been using both Amazon and IKEA shipping quite a bit, whether it is for random bits and bobs (Amazon) or full blown household furniture (IKEA). It’s kind of needed sometimes, or at least very convenient, because you know there’s selection and (usually) good customer support.

But at the same time, things are no longer smooth as they used to be. Or maybe they are just as smooth, but we (I) got to expect better from them.

Let’s take IKEA: I wanted to order a number of items from them just last week: a garbage bin, a bedding set and some extra towels, as well as some spice jars. I put everything in my “bag”, and tried checking out. Somehow the PayPal integration failed, the loading page got stuck, and I tried restarting… and the site decided to lock my bag “for up to 45 minutes” because of the incomplete checkout.

I’m not sure how the locking is done or timed out, because an hour later it still didn’t let me order, despite logging out and back in. So I ended up going to Marks and Spencer’s website and ordering a (more expensive) bedding set and towels from there. Alas, their shipping appears to have a significantly worse track record (the order got split into three deliveries, and only one made it to my office’s mailroom by the expected date, but it was not urgent at all). But the checkout worked perfectly fine.

Unfortunately M&S didn’t have a bin, so I looked for one on Amazon and found something I liked for £25, so on Friday I ordered it with a “nominated day delivery” of Tuesday. That should be enough lead time, no? I also ordered a smaller trash container for the bathroom, to throw away things like the non-sharps waste from injections.

Fast forward to Tuesday, when I took a day off work (because I needed to relax anyway), which I spent assembling the daybed I got from IKEA… a year ago (oops!) By 2pm I saw that the smaller of the two bins was “Out for delivery”, but the bigger one (the one I really needed!) was not, although it still showed an expected delivery of the same day, between 7am and 10pm. I immediately contacted Amazon on Twitter, pointing out the low likelihood of them delivering on the day, but they insisted that it was still going to be delivered.

Cue 4pm when I get an email (but obviously enough no Android push notification) that tells me that they are sorry, but a delay caused the delivery to be skipped on the day and that it would happen in a one-week window following it.

You read that right. They suggested that, for an item that was meant to be delivered on October 2nd, and missed delivery, the new delivery window would be October 3rd to 9th. You can imagine just how happy, as a customer, I would be about that. So I called Amazon up, and asked them to cancel the delivery, because I had already skipped a day of work (sure, I was going to take the day off anyway, but I could have gone out to Kew Gardens instead of staying in to wait for them), and I wouldn’t want to spend an unbounded number of days home in the hope that they would be able to deliver a garbage bin. They confirmed it would be done and an email sent to me “within 24-48 hours”, and I thanked them.

Then I ordered a (different) bin on Argos. They actually had the same bin, but at £32. I didn’t need anything as fancy, and their lower end was actually much better looking than Amazon’s, so I settled for a £10 model. And for £3.95, they allow you to select a three-hour delivery window — if I had done that right when I realized the delivery was going to be missed, Argos would have delivered the same day; instead I had to settle for the following day, Wednesday, between 7am and 10am. Indeed, the day after, at 7.20am, I was the happy owner of a cheap, simple garbage bin.

This is not the first time that, on Amazon’s failure, I redirected to Argos. And after this adventure, I think they’ll just be my first and default destination for anything that I want delivered at home (which is usually bulky stuff too uncomfortable to bring across London on the Piccadilly). The last time, it was a clothes iron and board, which somehow Amazon refused to do any nominated day delivery for. Argos was happy to deliver them on a Saturday morning instead. And practically speaking, a 7am-10am weekday delivery window means I can receive a delivery on any day, before heading to the office.

I wish that it all ended there, though.

On the same Wednesday that I received the Argos delivery, while at work, the Amazon app on my phone decided to notify me that the bin (the one that I asked to cancel the delivery of) was going to be delivered that day. I once again turned to Twitter, where Amazon informed me that the request for cancellation might not have been reflected yet, and that they would not deliver if it had been requested not to.

Except that at around 6pm, while I was commuting home, I also received another notification telling me that the package was delivered. Checking it, it reported the package was delivered “to the resident” — except that my building requires a fob to access, and I was nowhere near home to let them in. So either they left it in the corridor (assuming someone else opened the main door for them) or they left it outside altogether (in which case, it would be unlikely to stay around until I made it home).

Since the Amazon Android app allows you to contact them via chat, I did so, selecting the order with the bins, explaining the situation, and explicitly mentioning the nominated day delivery failure. At which point they confirmed they would prepare a return request, and that they would organize a pick up. I also noted with them that it’s a 40-litre bin, which makes the box very big and not something I’d bring to the post office myself. I also made sure to point out that, as I had no idea where they managed to leave the box without me, I would just leave it there, and let them pick it up the same way they left it. They confirmed all of this was okay, and after greetings disconnected the chat.

A few minutes later I get an email confirming the return request for… an unrelated set of bamboo spoons that arrived the same day. Not the one I was talking about, which would have been clear from both the bulk of the object we have been talking about, the delivery type, and the delivery address. And of course the price of the spoons was significantly lower than the bin. Sigh.

Another round of chat with Amazon, and they issued the return for the right item. They also told me not to worry about the pick up, and that I could keep the bin… which I don’t need anymore and would take a lot of space. I asked explicitly for a pick up anyway, and they agreed to organize it with Hermes. It was not until I got home and checked the email they sent me that I found out they expected me to print the return label — but I have no printer at home.

Expecting Hermes to at least contact me, if anything to complain that they couldn’t access the building, I left the box in the hallway where they had left it, for the day after. Two days, no pick up, no note, and no call later, I checked the status of the return to find out that they had marked it as “completed”. While leaving the box with me. And I now have a fancy bin in the master bathroom, which is open to a good home in West London if someone were to want to deal with it (but probably not worth doing).

I’ll add a few more words about this later on, as Amazon in particular seems to be going the wrong way, for me at least.

Software systems and institutional xenophobia

I don’t usually write about politics, because there are people out there with more sophisticated opinions and knowledge than someone like me, who is playing at the easiest level, to quote John Scalzi, and rarely has to fear for his future (except for when it comes to health problems). But today I need to point out something that worries me a lot.

We live in a society that, for good or bad (and I think it’s mostly for good), is more and more tied to computer systems. This makes it very easy for computer experts of one kind or another (like me!) to find a job, particularly a good paying job. But at the same time it should give us responsibilities for what we do with our jobs.

I complained on Twitter about how most of the credit card application forms here in the UK are effectively saying “F**k you, immigrant scum” by not allowing you to complete the application process if you have less than three years of addresses in the UK. In the case of a form I tried today, even though the form allows you to specify an “Overseas address” as the previous address, and lets you select Ireland as the country, it still validates the provided post code against UK standards, and refuses to let you continue the process without it.
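To be clear about what I mean, here is an illustrative reconstruction of the bug (not the actual form’s code, and the regular expression is only a rough stand-in for UK postcode rules): the postcode check is applied no matter which country you selected for the previous address, so a perfectly valid Irish Eircode gets rejected.

```python
# Illustrative reconstruction of the kind of validation bug described
# above; the pattern is a rough stand-in for UK postcode rules.
import re

UK_POSTCODE = re.compile(r'^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$', re.IGNORECASE)

def validate_previous_address(country, postcode):
    # Buggy: the UK-only pattern is enforced even for overseas addresses.
    if not UK_POSTCODE.match(postcode):
        raise ValueError('Invalid postcode')

validate_previous_address('Ireland', 'D02 AF30')  # raises, locking the applicant out
```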

This is not the first such form. Indeed, I ended up getting an American Express credit card because they were the only financial institution that could be convinced to take me on as a customer, with just two months living in this country, and a full history of addresses for the previous five years and more. And even for them, it was a bit of an issue to find an online form that did indeed allow me to type that in.

Yet another of the credit card companies rejected my request because “[my] file is too thin” — despite my being able to prove to them that I’m currently employed full time by a very well paying company, and not expecting that to change any time soon. This is nearly as bad as the NatWest employee who wanted my employer’s HR representative to tell them how long they expected me to live in the UK.

But it’s not just financial institutions; it’s any place where you provide information, and where limitations that are obviously fine for your own information might not be fine for someone else’s. Sign-up forms where putting a space in a name or surname field is an error. Data processing that expects all names to fit in 7-bit ASCII. Electoral registers where names are read either as Latin-1 or Latin-2.

All of these might be considered minor data issues caused by nearsighted developers, but they also show how easily these can turn into real discrimination.

When systems that have no reason to discard your request on the basis of the previous address have a mistake that causes the postcode validation to trigger on the wrong format, you’re causing a disservice and possible harm to someone who might really just need a credit card to be able to travel safely.

When you force people to discard part of their name, you’re going to cause them disservice and harm when they need a full history of what they did — I had that problem in Ireland, applying for a driving learner permit, not realising that the bills for Bord Gáis Energy wrote my name down wrong (using Elio as my surname).

The fact that my council appears to think they need to use Latin-2 to encode names suggests they may expect that their residents are all either English or Eastern European, which in turn leads to the idea of some level of segregation of them away from Italian, French or Irish residents, whose names are covered by Latin-1 instead.
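The Latin-1/Latin-2 mix-up is not hypothetical damage either; the same byte simply means a different letter in each encoding, so a name stored one way and read the other is silently mangled. A tiny illustration (the name is just an example):

```python
# The same byte maps to different letters in Latin-1 and Latin-2, so a
# name written in one and read in the other gets silently mangled.
name = 'Niccolò'
stored = name.encode('latin-1')        # 'ò' becomes byte 0xF2
mangled = stored.decode('iso8859-2')   # 0xF2 is 'ň' in Latin-2
print(mangled)                         # prints 'Niccolň'
```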

The funniest one in Ireland was a certain bank allowing you to sign up online with no problems… as long as you had a PPS (tax ID) number issued before 2013 — after that year, a new format for the number came into use, and their website didn’t consider it valid. Of course, it’s effectively only immigrants who, in 2014, would be trying to open a bank account with such a number.

Could all of these situations be considered problems of incompetence? Possibly yes. Lots of people in our field are incompetent. But it also means that there was no coverage for these not-so-corner cases in the validation. So it’s not just an incompetent programmer, it’s an incompetent programmer paired with an incompetent QA engineer. And an incompetent product manager. And an incompetent UX designer… that’s a lot of incompetence put together for one product.

Or the alternative is that there is a level of institutional xenophobia in software development. In the UK, just as in Ireland, Italy, and the United States. The idea that the only inputs worth testing are those explicitly known to the person doing the development is so minimalist as to be useless. You may as well not validate anything.

Not having anyone from the stakeholders to the developers and testers consider “Should a person from a different culture with different naming, addressing, or {whatever else} norms be able to use this?” (or worse, consider it and answering themselves “no”), is something I consider xenophobia1.

I keep hearing calls to pledge ethics in the field of machine learning (“AI”) and data collection. But I have a feeling that those fields have much less impact on the “median” part of the population. Which is not to say you shouldn’t have ethical considerations in them at all. But rather that we should start by teaching ethics in everyday data processing too.

And if you’re looking for some harsh laugh after this mood-killing post, I recommend this article from The Register.


  1. Yes, I’m explicitly not using the word “racism” here, because then people would focus on that, rather than the problem. A form does not look at the colour of your skin, but it does look at whether you comply with its creators’ idea of what’s “right”.