Oh Gosh, Trying to Find a New Email Provider in 2020

In the year 2020, I decided to move out of my GSuite account (née Google Apps for Business), which let me use Gmail with my personal domain, and which I have used for the past ten years or so. It’s not that I have a problem with Gmail (I’ve worked at Google for nearly seven years now, why would it be a problem?) or that I think the service is not up to scratch (as this experience is proving to me, I’d argue that it’s still the best service you can rely upon for small and medium businesses — which is the area I focused on when I ran my own company). It’s just that I’m not a business, and the features that GSuite provides over the free Gmail no longer make up for the services I’m missing.

But I still wanted to be able to use my own domain for my mail, rather than going back to the standard Gmail domain. So I decided to look around, and migrate my mail to another paid, reliable, and possibly better solution. Alas, the results after a week of looking and playing around are not particularly impressive to me.

First of all I discarded, without even looking at it, the option of self-hosting my mail. I don’t have the time, nor the experience, nor the will to deal with my own email hosting. It’s a minefield of issues and risks, and I don’t intend to accept them. So if you’re about to suggest this, feel free to not comment. I’m not going to entertain those suggestions anyway.

I looked at what people had been suggesting on Twitter a few times and evaluated two options: ProtonMail and FastMail. I ended up finding both lacking. And I think I’m a bit more upset with the former than the latter, for reasons I’ll get to in this (much longer than usual) blog post.

My requirements for a replacement solution were a reliable webmail interface with desktop notifications, a working Android app, and security at login. I was not particularly interested in ProtonMail’s encrypt-and-sign-everything approach, but I could live with it. What I did want was something that wouldn’t risk letting everyone in with just a password, so 2FA was a must for me. I was also hoping to find something that would make it easy to deal with git send-email, but I accepted right away that nothing would be anywhere close to the solution we found with Gmail and GSuite (more on that later).

Bad 2FA Options For All

So I started by looking at the second-factor authentication options for the two providers. Google being the earliest adopter of the U2F standard means, of course, that this is what I’ve been using, and what I would love to keep using once I replace it. But of the two providers I was considering, only FastMail explicitly stated that it supports U2F. I was told that ProtonMail expects to add support for it this year, but I couldn’t even tell that from their website.

So I tried FastMail first, which has a 30-day free trial. To set up the U2F device, you need to provide a phone number as a recovery option — which gets used for SMS OTP. I don’t like SMS OTP, because it’s not really secure (in some countries taking over a phone number is easier than taking over an email address), and because it’s not reliable the moment you don’t have mobile network service. It’s easy to mistake “no access to the mobile network” for “no access to the Internet” and say that it doesn’t really matter, but there are plenty of places where I would be able to reach the Internet and not receive SMS: planes, tube platforms, the office when I arrived in London, …

But surely U2F is enough, so why am I even bothering to complain about SMS OTP, given that you can disable it once the U2F security key is added? Well, it turns out that when I tried to log in on the Android app, I was just sent an SMS with the OTP to log myself in. Indeed, after I removed the phone number backup option, the Android app threw me a lovely error of «U2F is your only two-step verification method, but this is not supported here.» On Android, which can act as a U2F token.

As I found out afterwards, you can add a TOTP app as well, which solves the issue of logging in on Android without mobile network service, but by that point I had already started looking at ProtonMail, because it was not the best first impression to start with.

ProtonMail and the Bridge of Destiny

ProtonMail does not provide standard IMAP/SMTP access, because encryption (that’s the best reason I can get from the documentation; I’m not sure at all what this was all about, but honestly, that’s as far as I care to look into it). If you want to use a “normal” mail agent like Thunderbird, you need to use a piece of software, available to paying customers only, that acts as a “bridge”. As far as I can tell after using it, it appears to be mostly a way to handle the authentication rather than the encryption per se. Indeed, you log into the Bridge software with username, password and OTP, and then it provides localhost-only endpoints for IMAP4 and SMTP, with a generated local password. Neat.

Except it’s only available in Beta for Linux, so instead I ended up running it on Windows at first.

This is an interesting approach. Gmail implemented, many years ago, a new extension to IMAP (and SMTP) that allows using OAuth 2 for IMAP logins. This effectively delegates the login action to a browser, rather than executing it inline in the protocol, and as such it makes it possible to request OTPs, or even support U2F. Thunderbird on Windows works very well with this and even supports U2F out of the box.
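
For reference, this is roughly what that mechanism (SASL XOAUTH2) looks like at the IMAP level. It’s a hedged sketch rather than production code: you still need to obtain a valid OAuth 2 access token out of band, which is exactly the browser-mediated step that makes OTPs and U2F possible, and the token below is just a placeholder.

    import imaplib

    USER = "you@example.com"                 # hypothetical account
    ACCESS_TOKEN = "ya29.placeholder-token"  # obtained via an OAuth 2 flow, not shown here

    def xoauth2_string(user: str, token: str) -> bytes:
        # The XOAUTH2 initial client response is a single SASL string:
        # "user=<address>\x01auth=Bearer <token>\x01\x01"
        return f"user={user}\x01auth=Bearer {token}\x01\x01".encode()

    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    # imaplib base64-encodes whatever the callback returns, as SASL requires.
    imap.authenticate("XOAUTH2", lambda challenge: xoauth2_string(USER, ACCESS_TOKEN))
    print(imap.select("INBOX"))
    imap.logout()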

Sidenote: Thunderbird seems to have something silly going on. When you add a new account to it, it has a drop-down box to let you select the authentication method (or go for “Autodetect”). Unfortunately, the drop-down does not have the OAuth2 option at all. Even if you select imap.gmail.com as the server — I know hardcoding is bad, but not allowing it at all sounds worse. But if you cheat and give it 12345 as password, and select password authentication just to go through with adding the account, then you can select OAuth 2 as authentication type and it all works out.

Anyway, neither ProtonMail nor FastMail appear to have implemented this authentication method, despite the fact that, if I understand correctly, it’s supported out of the box by Thunderbird, Apple’s Mail, and a bunch of other mail clients. Indeed, if you want to use IMAP/SMTP with FastMail, they only appear to give you the option of application-specific passwords, which is a shame.

So why did I need IMAP access to begin with? Well, I wanted to import all my mail from Gmail into ProtonMail, and I thought the easiest way to do so would be through Thunderbird, manually copying the folders I needed. That turned out to be a mistake: Thunderbird crashed while trying to copy some of the content over, and I effectively spent more time waiting for it to index things than instructing it on what to do.

Luckily there’s alternative options for this.

Important Importing Tooling

ProtonMail provides another piece of software, in addition to the Bridge, to paying customers: an Import Tool. This allows you to login to another IMAP server, and copy over the content. I decided to use that to copy over my Gmail content to ProtonMail.

First of all, the tool does not support OAuth2 authentication. To be able to access Gmail or GSuite mailboxes, it needs to use an Application-Specific Password. Annoying, but not a dealbreaker for me, since I’m not enrolled in the Advanced Protection Program, which among other things disables “less-secure apps” (i.e. those apps using Application-Specific Passwords). I generated one, logged in, and selected the labels I wanted to copy over, then went to bed a little (but not much) concerned over the 52-and-counting messages it said it was failing to import.

I woke up to the tool reporting only 32% of around fifty thousand messages imported. I paused, then resumed, the import, hoping to get it unstuck, and left to play Pokémon with my wife, coming back to a computer stuck at exactly the same point. I tried stopping and closing the Import Tool, but that didn’t work either; it just hung. I tried rebooting Windows and it refused to, because my C: drive was full. Huh?

When I went to look into it, I found a 436GB text file, which was the log from the software. Since the file was too big to open with nearly anything on my computer, I used good old type, and besides the initial part possibly containing useful information, most of the file repeated the same error message about not being able to parse a MIME type, with no message ID or subject attached. Not useful. I had to delete the file, since my system was rejecting writes because of the drive being full, but it also does not bode well for the way the importer is written: clearly there’s no retry limit on some action, no log coalescing, and no safety check to go “Hang on, am I DoSing the local system?”

I went looking for tools I could use to sync IMAP servers manually. I found isync/mbsync, which, as a slight annoyance, is written in C and needs to be built, so it’s not easy to run on Windows, where I do have the ProtonMail Bridge, but not something I couldn’t overcome. When I was looking at the website, it said to check the README for workarounds needed with certain servers. Unfortunately, at the time of writing, the document’s Compatibility section refers to “M$ Exchange” — which in 2020 is a very silly, juvenile, and annoying way to refer to what is possibly still the largest enterprise mail server out there. Yes, I am judging a project by its README the way you judge a book by its cover, but I would expect that a project unable to call Microsoft by its name in this day and age is unlikely to have added support for OAuth2 authentication or any of the many extensions that Gmail provides for efficient listing of messages.

I turned to FastMail to see how they implement this: importing Gmail or GSuite content can be done directly on their side. They require you to grant OAuth2 access to all your email (but then again, if you’re planning to use them as your service provider, you kind of are already doing that). It does not allow you to choose which labels you want to import: it’ll clone everything, even your trash/bin folder. So at the time of writing it’s importing 180k messages. It’s taking a while, and at one point it showed the funny result of saying «175,784 of 172,368 messages imported.» Bonus points to FastMail for actually sending the completion notice as an email, so that it gets fetched along with everything else.

A side effect of FastMail doing the imports server side is that there’s no way for you to transfer ProtonMail boxes to FastMail, or any other equivalent server with server-side import: the Bridge needs to run on your local system for you to authenticate. It’s effectively an additional lock-in.

Instead of insisting on self-hosting options, I honestly feel that the FLOSS activists should maybe invest a little more thought and time on providing ways for average users with average needs to migrate their content, avoiding the lock-in. Because even if the perfect self-hosting email solution is out there, right now trying to migrate to it would be an absolute nightmare and nobody will bother, preferring to stick to their perfectly-working locked-in cloud provider.

Missing Features Mayhem

At that point I was a bit annoyed, but I had no urgency to move the old email away, for now at least. So instead I went on to check how ProtonMail worked as my primary mail interface. I changed the MX records, set up the various verification methods, and waited. One of the nice things about migrating mail providers is that you end up realizing just how many mailing lists and other subscriptions you keep receiving, which you previously just filed away with filters.

I removed a bunch of subscriptions to open source mailing lists for projects I am no longer directly involved in, and unlikely to go back to, and then I started looking at other newsletters and promotions. For at least one of them, I thought I would probably be better served by NewsBlur‘s newsletter-to-RSS interface. As documented in the service itself, the recommended way to use this is to create a filter that takes the incoming newsletter and forwards it to your NewsBlur alias.

And here’s the first ProtonMail feature that I’m missing: there’s no way to set up forwarding filters. This is more than a bit annoying: there was mail coming to my address that I used to forward to my mother (mostly bills related to her house, before I set up a separate domain with multiple aliases that point at our two addresses), and there still are a few messages that come to me only, that I forward to my wife, where using our other alias addresses is not feasible for various reasons.

But it’s not just a matter of forwards that is missing. When I looked into the filter system of ProtonMail I found it very lacking. You can’t filter based on an arbitrary header. You cannot filter based on a list-id! Despite the webmail being able to tell that an email came through from a mailing list, and providing an explicit Unsubscribe button, based on the headers, it neither has a “Filter messages like these” like Gmail has, nor a way to select this manually. And that is a lot more annoying.

FastMail, by comparison, provides much more detailed rules support, including the ability to write them directly in the Sieve language, and it allows forward-and-delete of email as well, which is exactly what the NewsBlur integration needs (a sketch of such a rule follows below) — although, to note, while you can see the interface for doing that, trial accounts can’t set up forwarding rules! And yes, the “Add Rule from Message” flow defaults to the list identifier of the message. Also, to one-up even Gmail on this, you can set those rules from the mobile app as well — and if you think this is not that big of a deal, just think of how much more likely you are to have spare time for this kind of boring task while waiting for your train (if you commute by train, that is).
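
For illustration, this is roughly the kind of Sieve rule I mean, with a made-up List-Id and a hypothetical NewsBlur alias address (the real alias comes from NewsBlur’s newsletter settings):

    # Hypothetical List-Id and alias; substitute your own values.
    if header :contains "list-id" "interesting-newsletter.example.com" {
        redirect "your-newsblur-alias@newsblur.com";
        discard;   # drop the local copy: forward-and-delete
        stop;
    }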

In terms of features, it seems like FastMail has the clear upper hand. Even ignoring the calendar provided, it supports the modern “Snooze” concept, letting mail show up later in the day or the week (which is great when, say, you don’t want unread email about your job interviews showing up in your inbox at the office), and it even has the ability to permanently delete messages in certain folders after a certain number of days — just like gmaillabelpurge! I think this last feature is the one that made me realize I really just need to use FastMail.

Sending It All Out

As I said earlier, even before trying to decide which one of the two providers to try, I gave up on the idea of being able to use either of them with git send-email to send kernel patches and similar. Neither of them supports OAuth2 authentication, and I was told there’s no way to set up a “send-only” environment.

My solution to this was to bite the bullet and deal with a real(ish) sendmail implementation again, by using a script that would connect over SSH to one of my servers, and use the postfix instance there (noting that I’m trying to cut down on having to run my own servers). I briefly considered using my HTPC for that, but then I realized that it would require me to put my home IP addresses in the SPF records for my domain, and I didn’t really want to publicise those as much.

But it turned out the information I found was incorrect. FastMail does support SMTP-only Application-Specific Passwords! This is an awesomely secure feature that not even Gmail has right now, and it makes it a breeze to configure Git (a sketch of the configuration is below): the worst that can happen if that password leaks is that someone can spoof your email address, until you figure it out. That does not mean it’s safe to share the password around, but it does make it much less risky to keep it on, say, your laptop.
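
For the curious, the git send-email side of it looks more or less like the sketch below. I’m assuming FastMail’s documented SMTP endpoint (smtp.fastmail.com with implicit TLS on port 465) still applies; double-check their current settings page, and use the SMTP-only app-specific password when git prompts for it.

    # ~/.gitconfig sketch for sending patches through FastMail
    [sendemail]
        smtpServer = smtp.fastmail.com
        smtpServerPort = 465
        smtpEncryption = ssl
        smtpUser = you@yourdomain.example
    # No smtpPass here on purpose: git send-email will prompt for the
    # SMTP-only app-specific password, or you can use a credential helper.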

I would even venture that this is safer than the sendgmail approach that I linked above, as that one requires full mail access with the token, which can easily be abused by an attacker.

Conclusion

So at the end of this whole odyssey, I decided to stick with FastMail.

ProtonMail sounds good on paper, but it gives me the impression of being overengineered in implementation, and not thought out enough in feature design. I cannot see how so many basic features (forwarding filters, send-only protocol support — C-S-c to add a CC line) would otherwise be missing. And I’m very surprised about the security angle of the whole service.

FastMail does have some rough edges, particularly in their webapp. Small things, like being able to right-click to get a context menu, would be nice. U2F support is clearly lacking: having it work in their Android app would be a huge step forward for me. And I should point out that FastMail has a much friendlier way to test its service, as the 30-day free trial includes nearly all of the functionality and enough space to test an import of ten years’ worth of Gmail data.

LibreView online reporting service

You may remember I complained about cloud-based solutions before. I have had harsh words about what are to me irresponsible self-hosting suggestions, and I’m not particularly impressed by how every other glucometer manufacturer appears to want their tools to be used, uploading to their cloud solutions what I would expect is a trove of real-world blood sugar reports from diabetics.

But as it happens, since I’m using the FreeStyle LibreLink app on my phone, I get the data uploaded to Abbott’s LibreView anyway. The LibreView service is geo-restricted, so it might not be available in all the countries where FreeStyle Libre is present, which probably is why the standalone Windows app still exists, and why the Libre 2 does not appear to be supported by it.

I haven’t used the service at all until this past month, when I visited the diabetic nurse at the local hospital (I had some blood sugar control issues), and she asked me to connect with their clinic. Turns out that (with the correct authorization), the staff at the clinic can access the real-time monitoring that I get from the phone. Given that this is useful to me, I find this neat, rather than creepy. Also it seems to require authorization on both sides, and it includes an email notification, so possibly they didn’t do that bad of a job with it.

The site is also a good replacement for the desktop app when you use the phone app rather than the reader. It provides the same level of detail in the reports, including the “pattern insights”, and a full view of the day-to-day aligned by week. Generally, those reports are very useful. And they are available on the site to you as well, not just to the clinics, which is nice.

It also turns out that the app tracks how many phones you’ve used to scan the sensor — in my case, six, even though it’s been over 1⅓ years since I last used a different one. I couldn’t see a way to remove the old phones, but at the same time, they are not reporting anything in, and there doesn’t seem to be a limit on how many you can have.

Overall it’s effectively just a web app version of the information that was already available on the phone (but hard to extract and share) or on the reader (if you are still using that). I like the interface and it seems fairly friendly.

Also, you may remember (or notice, if you read the links above) that I had taken an aside to point out how Diabetes Ireland misunderstood the graphs shown in the report when the Libre reached Ireland. I guess they were not alone, because in this version of the report Abbott explicitly labels the 10th–90th percentile highlight, the 25th–75th percentile highlight, and the median line. Of course this assumes that whoever is reading the graph knows what “percentile” and “median” stand for — but that’s at least a huge step in the right direction.

More British cashback: Airtime Rewards

You could say that one of this blog’s “side hustles” is talking about cashback offers. I think it all started with the idea of describing how privacy compromises work, but more recently focused on just one fintech’s free money distribution. This time I have something to talk about that again touches on both points.

Airtime Rewards is a cashback program that was advertised to me by my mobile phone provider, and it is indeed a bit different from other programs I’ve seen, because the cashback can only be redeemed as credit to your (or someone else’s) mobile phone bill — thus the Airtime part of the name I guess.

The way it works is pretty much reminiscent of the US-only “Rewards Club Dining”: you sign up for an account, and you give them your payment cards’ details — PAN (the 16-digit number), CVV and expiration date. They don’t charge you, instead they set up a “trap” on those cards, so that they are notified when you spend on them.

And this is honestly borderline, even for me, as a privacy invasion. I haven’t worked deep enough in payment systems to actually know how those traps are implemented, but it sounds like companies like Airtime Rewards are getting pretty much a full feed of your spending, not just the part related to the vendors they offer cashback for. But don’t quote me on that, because I don’t actually know how they implemented it; it might be totally benign.

Now let’s preface this with one important bit of information: only Visa and MasterCard cards are usable for either of these two options. American Express is, once again, a walled garden — they are effectively the iPhone of payment cards, for good and bad (including costs). Just like they don’t allow vendors to scam people via DCC, they don’t seem to allow traps of cards for payment.

So how does this program fare, and why do I even bother talking about it? Well, first of all, if you’re the type of person who doesn’t like leaving money on the table at any chance, it’s actually pretty good, as long as you visit the stores involved. I signed up for this just before the Christmas shopping, because I knew we were going to spend a bit of money at Debenhams, and they had a nice 5% cashback; but even just the couple of orders from Waitrose and the usual stock-up at Boots were enough to get back ~£30 in a couple of months. It’s not going to pay the phone bill constantly, but at the end of the year it does pay for a few vanity domain names.

The cashback offers from various retailers are there to make customers choose them over alternatives. This has worked a tiny little bit with Airtime Rewards, in the sense that I factored the cashback offer in when choosing between ordering from Morrisons, Waitrose or Ocado — because sometimes the cheaper option actually wins out thanks to cashback offers, either Airtime Rewards or Santander. For the most part, a number of our usual destinations are part of the program, so Boots, Pizza Express, Ryman, or (more recently) Uniqlo are nice to see. For a while Morrisons was also part of the program, but that is no longer the case — there appears to be some variability in which retailers sign up, which suggests there may be a middleman company handling the retailer connections.

Speaking of Santander cashback, because Airtime Rewards attaches to the card number itself, it is possible to combine the two offers, making it closer to (sometimes) 10% cashback than 5%. The same is true with Curve, although note that you can’t stack Curve and banks’ offers, as the latter only see the charge coming directly from Curve.

One very annoying thing with the way Airtime Rewards works, compared to offers from banks, Curve, and American Express, is that they only hand you the cashback credit after a “confirmation period” set by the retailer — namely, it seems to match the various retailers’ return policies. Which means it’s taking 90 days to confirm a Morrisons transaction, despite Morrisons not being in the program anymore. It feels very strange for restaurants (like Pizza Express) to need 35 days to confirm a transaction, though — I don’t think I would be able to return my lunch there.

One important thing to note is that the offers are also not quite uniform: while most of the offers are valid for both MasterCard and Visa, some are only available on one or the other circuit. It’s not a big deal for me, as I always preferred having one card on each, but it’s something to keep in mind. Not all the offers are available online, either – again the Morrisons offer I referred to above was only available in store, which excluded home delivery – and then there’s the catch with Google Pay and Apple Pay.

You see, when you pay with Google Pay or Apple Pay, you’re using a “virtual card” — the PAN reported to the merchant does not match the one printed on (or embedded in) the card that you connected. This has been described as a privacy-preserving feature by many, although I can’t find any obvious official documentation of it. The idea being that if you pay alternately with your phone (or different phones) and your physical card, the merchants shouldn’t be able to tell you’re the same customer (but the bank, obviously, can). Turns out this is also not true, because you can indeed attach the PAN of a physical card to Airtime Rewards, and you get your cashback when you pay with Google Pay (connected to that particular card) at some of the retailers.

I say some, not just because Airtime Rewards explicitly only marks some offers as compatible with Google Pay, but also because experimentally I can tell you that even some of the offers that are marked as Google Pay compatible don’t actually work when paying with Google Pay. That was the case, for instance, of Carluccio’s: the only time the cashback got registered was the one time I paid explicitly with my physical card.

What this does mean, though, is that there’s a way for third parties (beside you, your bank, and Google/Apple) to connect payments by virtual cards with the corresponding physical card. And honestly, that’s the scariest part of this whole program.

So, at the end of the day, what if you’re interested in signing up for this? You can sign up here, and use code P7YR6TPE to get a £1.50 bonus for you (and a matching one for me). Or maybe check with your mobile provider, which might honestly have an even better sign-up offer.

Windows Backup Solutions: trying out Acronis True Image Backup 2020

One of my computers is my Gamestation, which to be honest has not run a game in many months now. It runs Windows out of necessity, but also because honestly sometimes I just need something that works out of the box. The main usage of that computer nowadays is Lightroom and Photoshop for my photography hobby.

Because of the photography usage, backups are a huge concern to me (particularly after movers stole my previous gamestation), and so I have been using a Windows tool called FastGlacier to store a copy of most of the important stuff on Amazon’s Glacier service, in addition to letting Windows 10 do its FileHistory magic on an external hard drive. Not a cheap option, but (I thought) a safe and stable one. Unfortunately the software appears not to be developed anymore, and with one of the more recent Windows 10 updates it stopped working (and since I had set it up as a scheduled operation, it failed silently, which is the worst thing that can happen!)

My original plan for last week (at the time of writing) was to work on pictures, as I have shots from a trip over three years ago that I still have not gone through, rather than working on reverse engineering. But when I noticed the missing backups, I decided to put that on hold until the backup problem was solved. The first problem was finding a backup solution that would actually work, and that wouldn’t cost an arm and a leg. The second problem was that of course most of the people I know are tinkerers who like Rube Goldberg solutions such as using rclone on Windows with the task scheduler (no thanks, that’s how I failed the Glacier backups).

I didn’t have particularly high requirements: I wanted a backup solution that would do both local and cloud backups — because Microsoft has been reducing the featureset of their FileHistory solution, and so relying on it feels a bit flaky. And the ability to store more than a couple of terabytes on the cloud solution (I have over 1TB of RAW shots!), even at a premium. I was not too picky on price, as I know features and storage are expensive. And I wanted something that would just work out of the box. A few review reads later, I found myself trying Acronis True Image Backup. A week later, I regret it.

I guess the best lesson I learnt from this is that Daniel is right, and it’s not just about VPNs: most review sites seem to give higher scores to the software they get more money from via affiliate links (you’ll notice that in this blog post there won’t be any!) So while a number of sites had great words for Acronis’s software, I found it sufficiently lacking that I’m ranting about it here.

So what’s going on with the Acronis software? First of all, while it does support both “full image” and “selected folders” modes, you need to be definitely aware that the backup is not usable as-is: you need the software to recover the data. Which is why it comes with bootable media, “survival kits”, and similar amenities. This is not a huge deal to me, but it’s still a bit annoying, when FileHistory used to allow direct access to the files. It also locks you in in accessing the backup with the software, although Acronis makes the restore option available even after you let your subscription expire, which is at least honest.

Then the next thing that became clear to me was that the speed of the cloud backup is not Acronis’s strongest suit. The original estimate for the 2.2TB of data I expected to back up was on the mark at nearly six days. To be fair to Acronis, the process went extremely smoothly: it never got stuck, looped, crashed, or slowed down. The estimate was very accurate, and indeed, running it for about 144 hours was enough to have all the data backed up. Their backup status also shows the average speed of the process, which matched my estimate while the backup was running: 50Mbps.

The speed is the first focus of my regret. 50Mbps is not terribly slow, and for most people this might be enough to saturate their Internet uplink. But not for me. At home, my line is provided by Hyperoptic, with a 1Gbps line that can sustain at least 900Mbps upload. So seeing the backup bottlenecked by this was more than a bit annoying. And as far as I can tell, there’s no documentation of this limit on the Acronis website at the time of writing.

When I complained on Twitter about this, it was mostly out of frustration at having to wait; I considered the 50Mbps speed at least reasonable (although I would have considered paying a premium for faster uploads!). The replies I got from support have gotten me more upset than before. Their Twitter support people insisted that the problem was with my ISP and sent me to their knowledgebase article on using the “Acronis Cloud Connection Verification Tool” — except that following the instructions showed I was supposed to be using their “EU4” datacenter, for which there is no tool. I was then advised to file a ticket about it. Since then, I appear to have moved back to “EU3” — maybe EU4 was not ready yet.

The reply to the ticket was even more of an absurdist mess. Besides a lot of words to explain “speed is not our fault, your ISP may be limiting your upload” (fair, but I had already noted to them that I knew that was not the case), one of the steps they ask you to follow is to go to one of their speedtest apps — which returns a 504 error from nginx, oops! Oh yeah, and you need to upload the logs via FTP. In 2020. Maybe I should call up Foone to help. (Windows 10, as it happens, still supports FTP write access via File Explorer, but it’s not very discoverable.)

Both support people also kept reminding me that the backup is incremental, so after the first cloud backup, everything else should be a relatively small amount of data to copy. Except I’m not sold on that either: 128GB of data (which is the amount of pictures I came back from Budapest with) would still take nearly six hours to back up.
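(Quick back-of-the-envelope check: 128GB is roughly 1,024 gigabits, which at 50Mbps works out to about 20,500 seconds, or five hours and forty-odd minutes.)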

When I finally managed to get a reply that was not directly from a support script, they told me to run the speedtest on a different datacenter, EU2. As it turns out, this is their “Germany” datacenter. This was very clear by tracerouting the IP addresses for the two hosts: EU3 is connected directly to LINX, EU2 goes back to AMS, then FRA (Frankfurt). The speedtest came out fairly reasonable (around 250Mbps download, 220Mbps upload), so I shared the data they requested in the ticket… and then wondered.

Since you can’t change the datacenter you backup to once you started a backup, I tried something different: I used their “Archive” feature, and tried to archive a multi-gigabyte file, but to their Germany datacenter, rather than the United Kingdom one (against their recommendation of «select the country that is nearest to your current location»). Instead of a 50Mbps peak, I got a 90Mbps peak, with a sustained of 67Mbps. Now this is still not particularly impressive, but it would have cut down the six days to three, and the five hours to around two. And clearly it sounds like their EU3 datacenter is… not good.

Anyway, let’s move on and look at local backups, which Acronis is supposed to take care of by itself. For this one at first I wanted to use the full image backup, rather than selecting folders like I did for the cloud copy, since it would be much cheaper, and I have a 9T external harddrive anyway… and when you do that, Acronis also suggests you to create what they call the “Acronis Survival Kit” — which basically means turning the external hard drive bootable, so that you can start up and restore the image straight from it.

The first time I tried setting it up that way, it formatted the drive, but it didn’t even manage to get Windows to mount the new filesystem. I got an error message linking me to a knowledgebase article that… did not exist. This was more than a bit annoying, but I decided to run a full SMART check on the drive to be safe (no errors to be found), and then try again after a reboot. Then it finally seemed to work, but here’s where things got even more hairy.

You see, I’ve been wanting to use my 9TB external drive for the backup. A full image of my system was estimated at 2.6TB. But after the Acronis Survival Kit got created, the amount of space available for the backup on that disk was… 2TB. Why? It turned out that the Kit creation caused the disk to be repartitioned as MBR, rather than the more modern GPT. And in MBR you can’t have a (boot) partition bigger than 2TB. Which means that the creation of the Survival Kit silently decreased my available space to nearly 1/5th!

The reply from Acronis on Twitter? According to them my Windows 10 was started in “BIOS mode”. Except it wasn’t: it’s set up with UEFI and Secure Boot. And unfortunately there doesn’t seem to be an easy way to figure out why the Acronis software thinks otherwise. Worse than that, the knowledgebase article says that I should have gotten a warning, which I never did.

So what is it going to be at the end of the day? I tested the restore from Acronis Cloud, and it works fine. Acronis has been in business for many years, so I don’t expect them to disappear next year, and the likelihood of me losing access to these backups is fairly low. I think I may just stick with them for the time being, and hope that someone in the Acronis engineering or product management teams reads this feedback, thinks about that speed issue, and maybe starts considering asking support people to refrain from engaging with other engineers on Twitter with fairly ridiculous scripts.

But to paraphrase a recent video by Techmoan, these are the type of imperfections (particularly the mis-detected “BIOS booting” and the phantom warning) that I could excuse in a £50 software package, but that are much harder to excuse in a £150/yr subscription!

Any suggestions for good alternatives to this would be welcome, particularly before next year, when I might reconsider if this was good enough for me, or a new service is needed. Suggestions that involve scripts, NAS, rclone, task scheduling, self-hosted software will be marked as spam.

Where did the discussion move to?

The oldest post you’ll find on this blog is from nearly sixteen years ago, although it’s technically a “recovered” post that came from a very old Blogspot account I used when I was in high school. The actual blog that people started following is probably fourteen years old, when Planet Gentoo started and I started writing about my development there. While this is nowhere as impressive as Scalzi’s, it’s still quite an achievement in 2020, when a lot of people appear to have moved to Medium posts or Twitter threads.

Sixteen years is an eternity in Internet terms, and that means the blog has gone through a number of different trends, from the silly quizzes to the first copy-and-paste list memes, from trackbacks to the anti-spam fights. But the one trend that has been steady over the past six years (or so) is the mistreatment of comments. I guess this went together with the whole trend of increasingly toxic comments, and the (not wrong) adage of “don’t read the comments”, but it’s something that saddened me before, and that saddens me today.

First of all, the lack of comments feels, to me, like a lack of engagement. While I don’t quite write with the intention of pleasing others, I used to have meaningful conversations with readers of the blog in the past — whether it was correcting my misunderstanding of things I have no experience with, or asking follow-up questions that could become more blog posts for others to find.

Right now, while I know there are a few readers of the blog out there, it feels very impersonal. A few people might reply to the tweet that linked to the new post, and maybe one or two might leave a comment on LinkedIn, but that’s usually where the engagement ends for me, most of the time. Exceptions happen, including my more recent post on zero-waste, but even those are few and far between nowadays. And not completely unexpectedly, I don’t think anyone is paying attention to the blog’s Facebook page.

It’s not just the big social media aggregators, such as Reddit and Hacker News, that cause me these annoyances. Websites like Boing Boing, which Wikipedia still calls a “group blog”, or Bored Panda, and all their ilks, appear to mostly be gathering posts from other people and “resharing” them, nowadays. On the bright side of the spectrum, some of these sites at least appear to add their own commentary on the original content, but in many other cases I have seen them reposting the “eye catchy” part of the original content (photo, diagram, infographic, video) without the detailed explanations, and sometimes making it hard to even find the original credit.

You can imagine that it is not a complete coincidence that I’m complaining about this after having had to write a full-on commentary due to Boing Boing using extremely alarmist tones around a piece of news that, in my view, barely should have been notable. Somehow it seems news around diabetes and glucometers has this effect on people — you may remember I was already annoyed when Hackaday was tipped about my project and decided to bundle it with an (unsafe!) do-it-yourself glucometer project that got most of the comments on their own post.

I guess this ends up sounding a lot like an old man shouting at clouds — but I still think that discussing ideas, posts, and opinions with their creators is worth doing, particularly if the creators have the open mind to listen to critique of their mistakes — and, most importantly, the “capacitance” to send abuse away quickly. Because yeah, comments became toxic a long time ago, and I can’t blame those who prefer not to even bother with comments in the first place, despite disliking it myself.

To conclude, if you have anything to discuss or suggest to me, please do get in touch. It’s actually a good feeling to know that people care.

Unnecessary, but required

In the past year, I’ve hard to learn quite a few different lessons, some harder than others, some more gratifying than others. One of the main (but far from the only) source of these lessons was learning to live with someone else — save for my mother, and a few months with Luca, I have never really shared an apartment, a flat, or a house with someone else for more than a few days. But now that I’m happily married, there’s no going back to solitude. And it’s a feeling I’m really happy about, despite the eventual challenges that this has brought to both of us.

One of the differences we realised early on is that we have different tolerances for chaos and trinkets. I’m not particularly organised when it comes to sorting out my stuff, but I’m also not a total slob — though I don’t mind having items spread across three rooms, and I was never particularly well known for ironed t-shirts. My wife is much less… chaotic, but at the same time has fairly short patience for technology for the sake of technology.

This pretty much makes a dent in the amount of random gadgets I end up buying for the sake of trying them out, because they might just end up not being used, or even not being welcome if they somehow get in the way. I think my most impressive achievement has been making her accept that we have an electric cheese grater. I’m still trying to convince her it’s a good idea for me to disassemble the battery charger to replace the current plug-in adapter with a micro-USB port. Which is honestly not necessary at all: the plug is an AC-DC adapter, a europlug with one of those europlug-to-British screw-in adapters, which means that if we decide to leave London for the Continent, we won’t need to replace it — it would only become an issue if we moved to a different part of the world, and we can address it then.

But at the same time, this is the type of modification that in my eyes is… well, required. Why would I not make my electric cheese grater into a USB-powered electric cheese grater?

This reminded me of what Adam Savage (of MythBusters fame) says in his biography Every Tool’s A Hammer (which, incidentally, is an awesome read that I would recommend to everyone who has even a passing interest in creating stuff):

I often describe myself as a serial skill collector. I’ve had so many different jobs over my lifetime […] that my virtual tool chest is overflowing. Still I love learning new ways of thinking and organizing, new techniques, new ways of solving old problems. […] The skills I have, all of them, are simply arrows in my mental quiver, tools in my problem-solving tool chest, to achieve that thing. […] And I learned each of them specifically for that reason. […] Eventually, […] I came to realise this was the ONLY way I could successfully learn a skill—by doing something with it, by applying it in my real world.

Adam Savage, Every Tool’s A Hammer

This is pretty much my life. I have pretty clearly failed at learning things “academically”, lasting only a few weeks at the University of Venice, and have instead built up my knowledge by working on different projects, both open source and for customers, and by trying things out for myself. This has been a blessing and a curse at the same time: while it meant that I have been collecting a bunch of skills, just like Adam says above, for the most part they are superficial skills: I’ve only rarely had to deep-dive into a technology or a problem in my dayjob, and the amount of time I have for side projects has been fairly low, and shrinking.

Long gone are the days when I could sit down and write a stupid IRC bot in Qt just because I could, and not just for the lack of time. It’s also because, for the most part, I keep telling myself it’s a bad idea to work on something low level when someone else already did it better than I possibly could — which is likely true, but it fails to meet my requirement of adding the skill to my repertoire. And that’s by itself a career-limiting move, comparable to the bubble problem.

With these issues in mind, I’m definitely glad my wife is understanding of why I sometimes spend money, time, effort (or most likely, all three) just to get something done because I want to, and not because there’s much need for it. It’s unnecessary, but required for me to keep up to scratch. And being able to do that, without upsetting my partner despite the chaos it creates, is a significant privilege.

Also a privilege is being able to afford the time, space, and money for all these projects. I think this is, for the most part, something that is not quite clear out there yet: being able to contribute to open source, to write up tips and tricks, to document how to do things: these are privileges. And I think it’s important to share this privilege, even in the form of tips, tricks, videos, and blogs — which is why this blog still exists, and why, even with ever-shrinking spare time, I try to write updates.

Whether it is Bigclive on YouTube, with sometimes off-colour comments that make me uncomfortable, or Adam Savage’s own Tested, that can rely on a real, professional shop, or Micah’s most awesome electronics reverse engineering channel, or Foone’s Twitter feed, I am very glad for those who do their best to share knowledge — and I don’t really need to know why they are doing it. Even when it doesn’t really help me directly (because I can’t learn something if I don’t try myself), I know it can help someone else. Or inspire someone else (or in some cases, me) to go and try something, that will make them learn more.

Abbott, the Libre 2, and the takedown

A few people today messaged and mentioned me on Twitter regarding the news that Abbott has requested the takedown of something related to their Libre 2. I gave a quick hot take on this on Twitter, but I guess it’s worth having something in long form to be referenced, since I’m sure this will be talked about a lot more, not least because of the ominous permalink chosen by Boing Boing (“they-literally-own-you”) and the fact that, game-of-telephone style, the news went from the original takedown to Reddit phrasing it as “Abbott asserts copyright on your data”, which is both silly and untrue.

So let’s start with a bit of background, that most of the re-posters of this story probably don’t know much about. The Libre 2 is an upgrade on the FreeStyle Libre system that I wrote a lot about and that I use daily. It comes with both a reader device and with support in the LibreLink app for both Android and (on more recent iPhones) iOS. The main difference with the Libre system is that the sensors provide both NFC and BLE capabilities, with the ability to proactively notify of high- or low-blood sugar conditions, that the old NFC-only sensors cannot provide, which is more similar to CGM solutions like Dexcom‘s.

In both the Libre and Libre 2 systems, the sensors don’t report blood sugar values like most classic glucometers do. Instead they report a number of “raw” values, including readings from a number of temperature sensors. There’s a great explanation of these from Pierre Vandevenne, here and here. To get a real blood sugar measurement, you need to apply an algorithm that Abbott keeps refining. That algorithm is what I usually refer to as the “secret sauce”, and it is implemented in both the reader’s firmware and the LibreLink app itself.

Above I used the word “something” to refer to what was taken down. The reason I say that is that Boing Boing in the title straight up calls this a “tool” — but when you read the linked post from the affected person, it is described as “details of how to patch the LibreLink app”. Since I have not seen what the repository was before it was taken down, I have no idea which one to believe. In either case, it looks like Abbott does not like someone effectively leveraging their “secret sauce” for use in a different application; in particular, it does not look like we’re talking about something like glucometerutils, which implemented the protocol “clean”, without deriving from the original software.

Indeed, Boing Boing seems to make the case that this is the equivalent of implementing a file format: «[…] just because Apple’s Pages can read Word docs, it doesn’t mean that Pages is a derivative of MS Office.» Except that it’s not as clear cut. If you implemented support for a format by copying the implementation code into your software, that actually would make it a derivative work, quite obviously. In this case, if I am to believe the original report instead, the taken-down content was instructions to modify Abbott’s app — and not a redistribution of it. Since I’m not a lawyer, I have no idea where that stands, but it’s clearly not as black-and-white as Boing Boing appears to make it.

As I said on Twitter, this does not affect either of my projects, since neither relies on the original software; they are rather descriptions of the protocols. They also don’t include any information or support for the Libre 2, since the protocol appears to have changed. There’s an open issue with discussion, but it also appears that this time Abbott is using some encryption on the protocol. And that might be an interesting problem, as someone might have to get up close and personal with the code to figure that part out — but if that’s the case, we’re back to needing a clean-room design for implementing it.

I also want to quote Pierre explicitly from the posts I linked above:

[…] in the Libre FRAM, what we are seeing is a real “raw” signal. While the measure of the glucose signal itself is fairly reliable, it is heavily post-processed by the Libre firmware. Specifically – and in no particular order – temperature compensation, delay compensation, de-noising… all play a role. That understanding and, to some extent, my MD training, led me to extreme caution and prevented me from releasing my “solution”, which I knew to be both incomplete and unable to handle some error conditions.

The main driver behind my decision was the well known “first do no harm” (primum non nocere) motto, an essential part of the Hippocratic Oath which I symbolically took. I still stick by it today. […]

[…]

Today, there are a lot of add-on devices that aim to transform the Libre into a full CGM. To be honest, in general, I do not like either the results they provide or their (in)convenience. None of those I have tried delivered results that would lead to an approval by a regulatory agency, none of them were stable for long periods of time. But, apparently, patients still feel they are helpful and there is now a thriving community that aims at improving them.

Pierre Vandevenne

While I have not sworn a Hippocratic Oath myself, I have similar concerns to Pierre, and I have explicitly avoided documenting the sensors’ protocol, and I won’t be merging code that tries to read them directly, even if provided.

And when it comes to copyright issues, I do weigh them fairly heavily: respecting licenses is the fundamental way that Free Software even works. So I would prefer someone provide me with a description of Abbott’s encryption protocol, rather than an implementation of it where I might be afraid of a “poisonous tree.”

Environment and Software Freedom — Elitists Don't Get It

I have previously complained loudly about “geek supremacists” and the overall elitist stance I have seen in Free Software, Open Source, and general tech circles. This shows up not just in a huge amount of “groupthink” that Free Software is always better, but also in jokes that may sound funny at first but are actually meant to exclude people (e.g. the whole “Unix chooses its friends” line).

There’s a similar attitude that I see around environmentalism today, and it makes me uneasy, particularly when it comes to “fight for the planet” as some people would put it. It’s not just me, I’ve seen plenty of acquaintances on Twitter, Facebook, and elsewhere reporting similar concerns. One obvious case is the lack of thought given to inclusion and accessibility: whether it is a thorough attack of pre-peeled oranges with no consideration to those who are not able to hold a knife, or waste-shaming with the infamous waste jars (as an acquaintance reported, and I can confirm the same is true for me, would fill up in a fraction of the expected time just from medicine blisters).

Now the problem is that, while I have expressed my opinions about Free Software and its activists a number of times in the past, I have no experience or expert opinion with which to write a good critique of environmentalist groups, which means I can only express my discomfort and leave the critique to someone else, although I have written about this before.

What I can provide some critique of, though, is an aspect that I recently noticed in my daily life, and which I can report on directly, at least for a little bit. And it goes back to the zero-waste topic I mentioned in passing above. I already said that the waste produced just by the daily pills I take (plus the insulin and my FreeStyle Libre sensors) goes beyond what some of the more active environmentalists consider appropriate. Medicine blisters, insulin pens, and the sensors’ applicators are all non-recyclable waste. This means that most of the encouragement to limit waste is unattainable for most people on medication.

The next thing I’m going to say is that waste reduction is expensive, and not inclusive of most people who don’t have a lot of spare disposable cash.

Want a quick example? Take hand wash refills. Most of the people I know use liquid soap, and they buy a new bottle, with a new pump, each time it runs out. Despite ceramic soap bottles being sold in most homeware stores, I don’t remember the last time I saw anyone I know using one. And even when my family used those for a little while, they almost always used a normal soap bottle with the pump. That’s clearly wasteful, so it’s not surprising that, particularly nowadays, there are a lot of manufacturers providing refills — pouches, usually made of thinner, softer plastic, holding a larger amount of soap, that you can use either to refill the original bottles, or with one of those “posh” ceramic bottles. Some of the copy on those pouches explicitly states «These refill pouches use 75% less plastic per ml of product than a [brand] liquid handwash pump (300 ml), to help respect the environment.»

The problem with these refills, at least here in London, is that they are hard to come by, and only a few, expensive brands appear to provide them. For instance you can get refills for L’Occitane hand wash, but despite liking some of their products, at home we are not fond of their hand wash, particularly not at £36 a litre (okay, £32.4 with the recycling discount). Instead we ended up settling on Dove’s hand wash, which you can buy in most stores for £1 for the 250ml bottle (£4/litre). Dove does make refills and sell them, and at least in Germany, Amazon sells them for a lower per-litre price than the bottle. But those refills are not sold in the UK, and if you wanted to order them from overseas they would be more expensive (and definitely not particularly environmentally friendly).

If the refills are really making as much of a difference as the manufacturers insist they do, they should be made significantly more affordable. Indeed, in my opinion you shouldn’t be able to get the filled bottles alone at all; they should rather be sold bundled with the refills themselves, at a higher per-litre price.

But price is clearly not the only problem — hand wash is something that is subject to a lot of personal taste, since our hands are with us all day long. People prefer no fragrance, or different fragrances. The fact that I can find a whopping total of two handwash refills in my usual local stores that don’t cost more than the filled bottle is not particularly encouraging.

Soap is not the only thing for which the “environmentally conscious” option is far from affordable. Recently, we stumbled across a store in Chiswick that sells spices, ingredients and household items plastic-free, mostly without containers (bring your own, or buy one from them), and we decided to try it, easy enough since I’ve been saving up the glass jars from Nutella and jams, and we had two clean ones at home for this.

This needs a bit more context: both my wife and I love spicy food in general, and in particular love mixing up a lot of different spices when making sauces or marinades, which means we have a fairly well-stocked spice cupboard. And since we consume a lot of them, we have been restocking with bags of spices rather than with new bottles (which is why we started cleaning and setting aside the glass jars), so the idea of finding a place where you can fill your own jar was fairly appealing to me. And while we did expect a bit of a price premium given the location (we were in Chiswick after all), it was worth a try.

Another caveat on all of this: the quality, choice and taste of ingredients are not objective measures. They are, by definition, up to personal taste. Which means that a direct price-by-price comparison is not always possible. But at the same time, we do tend to like the quality of the spices we find, so I think we’ve been fair when we boggled at the prices, and in particular at the price fluctuations between different ingredients. So I ended up making a quick comparison table, based on the prices on their website and the websites of Morrisons and Waitrose (because, let’s be honest, those are probably the closest price comparisons you want to make, as both options are clearly middle-to-upper class).

Price comparison between Source, Morrisons, Waitrose, and the Schwartz brand spices; a more accessible version is on Google Drive.
For each entry I took the cheapest option I could find, preferring bigger sizes.

If you look at the prices, you can see that, compared with the bottled spices, they are actually fairly competitive! Cumin, for instance, costs over four times as much if you buy it bottled at Waitrose, so getting it cheaper is definitely a steal… until you notice that Morrisons stocks a brand (Rajah) that is half the price. Indeed, Rajah appears to sell spices in big bags (100g or 400g) at a significantly lower price than most of the other options. And as far as personal taste goes, we love them.

A few exceptions do come to mind: sumac is not easy to find, and it’s actually cheaper at Source. Cayenne pepper is (unsurprisingly) cheaper than at Waitrose, and not stocked at Morrisons at all, so we’ll probably pop by again to fill a large jar of it. Coarse salt is cheaper too, even cheaper than the one I bought on Amazon, but I bought 3kg of that two years ago and we still have an unopened bag.

The one part of the picture that the prices don’t tell, of course, is quality and taste. I’ll be very honest and say that I personally dislike the Waitrose extra virgin olive oil whose price I used (although it’s a decent oil); for Morrisons I didn’t pick the cheapest, because that one tasted nasty when I tried it, so I went with the one we actually usually buy. Since we ran out of oil at home, and we needed to buy some anyway, we are now using Source’s and, well, I actually like it better than the Morrisons one, so we’ll probably stick to buying it, despite it being more expensive — it’s still within the realm of reasonable prices for a good extra virgin olive oil. And they sell it in a refillable bottle, so next time we’ll bring that one back.

Another thing that is very clear from the prices is just how much the “organic” label appears to weigh on the cost of food. I don’t think it’s reasonable to pay four times the price for sunflower oil — and while it is true that I’m comparing the price of a huge family-sized bottle with that of a fill-your-own-bottle shop, which means you can get less of it at a time and you pay for that convenience, it’s also one of the more easily stored groceries, so I think the comparison is fair enough.

And by the way, if you followed my Twitter rant, I have good news. Also in Chiswick there’s a Borough Kitchen store, good old brick-and-mortar, and they had a 1L bottle for an acceptable £5.

So where does this whole rant get us? I think the environment needs activists to push for affordable options. It’s not useful if the zero-waste options are only available to the top 5%. I do expect that some of the better, environmentally aware options will cost more. But that should not mean paying £5 for a litre of sunflower oil! If we think the world is worth saving, we should also make sure we can keep feeding the people in it, at a reasonable cost.

Before closing let me just point out the obvious: Source appears to have their heart in the right place with this effort. Having run my own business, I’m sure the prices reflect the realities of renting a space just off Chiswick High Road, paying for the staff, the required services, and the suppliers, plus the hidden cost of families with children entering the store and letting their kids nibble on the candies and nuts straight out of the boxes (I saw at least one do so while we were inside!) without paying or buying anything else.

What I fear we really need is for this type of service to scale to the level of the big high street grocery stores. Maybe with trade-in containers in place of bring-your-own for deliveries (which I would argue can be more environmentally friendly than people having to take a car to go grocery shopping). But that’s something I can only hope for.

Working in a bubble, contributing outside of it

The holiday season is usually a great time for personal projects, particularly for people like me who don’t go back “home” with “the family” — quotes needed, since for me home is where I am (London) and where my family is (me and my wife.) Work tends to be more relaxed – even with the added pressure of completing the OKRs for the quarter and defining those for the next – and given that there’s barely any public transport running, the time saved on commuting also adds up to make it an ideal time to work on hobbies.

Unfortunately, this year I’m feeling pretty useless on this front, and I thought this feeling of uselessness is at least something I can talk about for the dozen-or-so remaining readers of this blog, in an era of social media and YouTube videos. If this sounds very dismissive, it’s probably because that’s the feeling of irrelevance that has taken over me, and something I should probably aim to overcome in 2020, one way or another.

If you are reading this post, it’s likely that you noticed my FLOSS contributions waning and pretty much disappearing over the past few years, except for my work around glucometerutils, and the usbmon-tools package (which kind of derives from it.) I have contributed the odd patch to the Linux kernel, and more recently to some of the Python typing tooling, but those are really drive-by contributions I made as I found the time.

Given some of the more recent Twitter threads on Google’s policies around open source contributions, you may wonder if it is related to that, and the answer is “not really”. Early on, I was granted IARC approval to keep working on unpaper (which turned out to be possibly overkill), for the aforementioned glucometerutils, and for some code I wrote while reverse engineering my gaming mouse. More recently, I’ve leveraged the simplified patching policy, and was granted approval for releasing both usbmon-tools and tanuga (although the latter is only released as a skeleton right now.)

So I have all the options, and all the opportunities, to contribute to FLOSS projects while employed by a big multinational Internet company. Why don’t I do that more, then? I think the answer is that I work in a bubble for most of the day, and when I try to contribute something in my spare time, I find myself missing the support structure that the bubble gives me.

I want to make clear here that I’m not saying that everything is better in the bubble. Just that the bubble is soft and warm enough to make the world outside of it look scary, sometimes annoying, but definitely more vast. And despite a number of sensible tools being available out there (and in many cases, better tools), it takes a significant investment to research the right way to do something, to the point that I suffer from CBA (“can’t be arsed”) syndrome.

The basic concepts are not generally new: people have talked at conferences about the monorepo; my friend Dinah McNutt spoke and wrote at length about Rapid, the release system we use internally and that drives the automated releases; and so on. If you’re even more interested in the topic, this March the book Software Engineering at Google will be released by O’Reilly. I have not read it myself, but I have interacted on and off with two of the curators and I’m sure it’s going to be worth its weight in gold.

Some of the tools are also being released, even if sometimes in modified form. But even when they are, the amount of integration you have internally is lost when trying to use them outside. I have considered using Bazel for glucometerutils in the past — but in addition to being a fairly heavy dependency, there’s no easy way to reference most of the libraries that glucometerutils needs. At the end of the day, it was not worth trying to use it, even though it would have made my life easier by reducing the cognitive load of working on open source projects in my personal time.

Possibly the main “support beam” of the bubble, though, is the opinionated platform, which can be seen from the outside in the form of the style guides, but extends much further. To keep the examples related to glucometerutils: while its tests do use absl‘s parameterized class, they are written in a completely different style than I would use at work, and they feel wrong when it comes to importing the local copy of the module under test. When I looked around to figure out what the best practice is for writing tests in Python, I could find literally dozens of blog posts, StackOverflow answers, and testing-framework documentation pages, all giving slightly different answers. In the bubble you have (pretty much) one way to write a basic test — and while people can be creative even within those guidelines, creativity is usually frowned upon.
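To give an idea of what I mean, here is a minimal sketch of the kind of parameterized test I’m talking about. It is not lifted from glucometerutils: the parse_reading() function and the sample values are made up for illustration, but the absl.testing imports and decorators are the real ones.

    # A minimal sketch of an absl-style parameterized test. The function
    # under test (parse_reading) and the sample values are made up for
    # illustration; they are not part of glucometerutils.
    from absl.testing import absltest
    from absl.testing import parameterized


    def parse_reading(raw):
        # Hypothetical function under test: parse a raw string into mg/dL.
        return int(raw.strip())


    class ParseReadingTest(parameterized.TestCase):

        @parameterized.parameters(
            ("120", 120),
            (" 98 ", 98),
            ("250", 250),
        )
        def test_parse_reading(self, raw, expected):
            self.assertEqual(parse_reading(raw), expected)


    if __name__ == "__main__":
        absltest.main()

Even for something this tiny, how you run it (absltest, pytest, plain unittest discovery) and how you make the local module importable in the first place are exactly the kind of decisions the bubble would have already made for me.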

The same is true for release engineering. As I noted and linked above, all of the release grunt work in the bubble is done by the Rapid tool — and for the most part it’s automated. While there’s definitely more than one way to configure the tool, at least you know which tool to use. And while different teams often have differing opinions on those configurations, you can at least find the opinion of your team, or of the closest team to you with an Opinion (with a capital O), and follow that — it might not be perfect for your use, but if it’s allowed it usually means it was reviewed and vouched for (or copy-pasted from something else that was.)

An inside joke from the Google bubble is that the documentation is always out of date and never to be trusted. Beyond the unfairness of the joke to the great tech writers I’ve had the pleasure to work with, who are more than happy to make sure the documentation is not out of date (but need to know that’s the case, and most of them don’t find out until it’s too late), the truth is that at least we do have documentation for most processes and tools. The outside world has tons of documentation, some of it out of date, and it’s very hard to tell whether any given piece is still correct and valid.

Trying to figure out how to configure a CI/CD tool for a Python project on GitHub (or worse, trying to figure out how to make it release valid packages on PyPI!) still feels like going by the early 2000s HOWTOs, where you hope that the three-year-old description of the XFree86 configuration file still matches the implementation (hint: it never did.) Lots of the tools are not easy to integrate, and opting into them takes energy (and sometimes money) — the end result of which is that despite me releasing usbmon-tools nearly a year ago, you still need an unreleased dependency, as the fix I needed is not present in any released version, and I haven’t yet dared bother the author to ask for a new release.
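Even the packaging side, which should be the best understood part, is a good example. Here is a minimal setup.py sketch, with placeholder metadata rather than the real usbmon-tools configuration; and even for something this small, the dozens of guides out there disagree on whether it should be a setup.py, a setup.cfg, or a pyproject.toml in the first place.

    # Minimal packaging sketch with placeholder metadata; not the real
    # usbmon-tools configuration. Building an sdist and a wheel from this
    # is what you would then upload to PyPI (for example with twine).
    from setuptools import find_packages, setup

    setup(
        name="example-tool",  # placeholder project name
        version="0.1.0",
        description="Example of a minimal PyPI-ready package",
        packages=find_packages(exclude=["tests"]),
        python_requires=">=3.7",
        install_requires=[
            "construct>=2.9",  # hypothetical runtime dependency
        ],
        entry_points={
            "console_scripts": [
                # hypothetical command-line entry point
                "example-tool=example_tool.main:main",
            ],
        },
    )

From there the usual dance is building an sdist and a wheel and uploading them with twine, and that is before you get to wiring any of it into whichever CI service you picked.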

It’s very possible that if I was not working in a bubble all of these issues wouldn’t be big unknowns — probably, if I spent a couple of weeks reviewing the various options for CI/CD, I could come up with a good answer for setting up automated releases, and then I could go to the dependency’s author and say “Hey, can I set this up for you?” and that would solve my problem. But that is time I don’t really have, when we’re talking about hobby projects. So I end up opening the editor in the Git repository I want to work on, add a dozen lines or so of code towards something I want to do, figure out that I’m missing the tool, library, interface, opinion, document, or procedure that I need, feel drained, and close the editor without having committed – let alone pushed – anything.

Stop slagging off IoT users if you care about them

It’s the season for gifts (or, as some would say, consumerism), and as is way too often the case, it starts a holy war between those who enjoy gadgets, new technology, and Internet-connected appliances, and those who define themselves as security conscious and tell people that they wouldn’t connect a computer to the Internet if they didn’t have to.

Those who follow me on Twitter probably already know which side of this divide I find myself on: I do have a few IoT devices at home, and I’m “IoT-positive”. I even got into a long Twitter discussion years ago about the fact that IoT is no longer just a random marketing buzzword, but has come to refer to a class of devices that the public at large can identify, the same way “white goods” does in the British Isles.

I have a very hard time giggling at Twitter posts from geek supremacists making fun of Internet-connected ovens, when the very same geeks insist they would never possibly buy something like that — despite the excited reactions of the Linux, BSD and FLOSS communities nearly fifteen years ago at the release of a NetBSD-operated toaster.

This does not mean that I’m okay with all the random stuff that’s being proposed as an Internet-enabled device. I have looked briefly at Bluetooth toothbrushes and I’m still lost on what their value proposition is. And even last year, when I got a smart plug, it took me a lot of thought to figure out what it would be used for; I decided that, for 11 months of the year, the plug stays in a box, and it comes out at the same time as the Christmas tree.

Today’s musing comes from finding a “Smart Essential Oil Diffuser”, which was funny because I was looking for something completely different (a kitchen oil bottle, it’s a long story), but I actually clicked on it out of curiosity. I had looked into this type of device last year, while I was writing my post about smart plugs: they sounded like an interesting way to have the diffuser on for a few minutes before we arrive home, to give the flat a good smell without having to keep a more standard Ambipur on all the time.

Indeed, I have considered converting our Muji diffuser into a “Smart” one with an Adafruit Featherwing, but it works too well to open it up right now, and nearly everything I can see in stores like TkMaxx appears to be fairly low quality, with power supply ratings that look too low to be true. But the device I found today also appears to be a fairly bad one, so I think our old-school Muji diffuser will stay around instead.
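For the record, what I had in mind for the conversion was nothing fancier than a small CircuitPython loop driving the diffuser’s power line through a relay or MOSFET; the pin and the timings below are made up for illustration, and the actual wiring into the Muji diffuser is the part I never got to.

    # Rough CircuitPython sketch for switching a diffuser on a schedule;
    # the pin (D5) and the timings are placeholders, not a tested design.
    import time

    import board
    import digitalio

    # Hypothetical relay/MOSFET controlling the diffuser's power line.
    diffuser = digitalio.DigitalInOut(board.D5)
    diffuser.direction = digitalio.Direction.OUTPUT

    while True:
        diffuser.value = True   # run the diffuser...
        time.sleep(10 * 60)     # ...for ten minutes
        diffuser.value = False  # then rest for an hour
        time.sleep(60 * 60)

The actual “smart” part, having it turn on just before we get home, would need a WiFi-capable board and some glue to whatever home automation service we trust, which is exactly where the hard questions start.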

The thing is, whether you like it or not, the public at large, not just the geeks, is the driving force for manufacturers. And you won’t win anyone over by being smug and pointing out how good you are at not buying Internet-enabled stuff because you don’t trust it; the public will keep buying it. So instead of throwing all IoT options under a bus and making fun of their users, I prefer Matthew’s approach of actually looking into the various lightbulbs and documenting which ones are, indeed, terrible.

Indeed, if you think that Internet-enabled aroma diffusers are pointless and useless, and that nobody will want one… you’ll find out that someone will make one anyway, people will buy it, and most likely some random Chinese factory will start making a generic enough model that other companies can rebrand, providing the least secure option out there.

I think this is also a valid metaphor for politics nowadays. It doesn’t matter that you are sure you have the right answer — if you demonize the public at large, telling them they are stupid or that they are at fault for things, they are not likely to take your advice for long.

So if you care about the people around you, instead of telling them that IoT is terrible and that you shouldn’t connect anything to a computer ever in a million years, try finding what is not terrible while still providing the convenience they desire. Whether it is a smart lightbulb, a smart thermostat, or an app-enabled doorbell. And if you can’t find anything, and you still think you’re smarter than others, make it. Clearly there’s a desire for these tools: can you make one that is secure and safe?